Virtualization benefits for manufacturers

Virtualization growth in manufacturing is continuing as more end users take advantage of the benefits it offers, such as increased efficiency, reduced costs, and better security.

By Gregory Hale, ISSSource August 18, 2016

A manufacturing executive looking to increase profits while reducing costs should be taking steps to move to a virtual environment. Virtualization growth in manufacturing is continuing as more end users take advantage of the cost benefits it offers.

In a tight market with a reduced workforce, it is imperative to keep a weather eye on the network to make sure it is running properly and that there are no outsiders peering in, trying to gather, or steal, vital information.

The goal is to keep the network up and running by eliminating unplanned downtime, and that is where network monitoring comes into play as a strong tool to alert end users and keep them aware of nuances and changes going on in a network. In short, it increases network visibility.

Virtualization is growing in all industries, especially manufacturing, and the numbers seem to back that up: One industry snapshot says the global storage virtualization market will enjoy a compound annual growth rate of more than 24% from 2015 to 2019, according to Technavio's market research.

From a hardware perspective, virtualization makes it possible to run more applications on the same hardware, which translates into cost savings. If fewer servers are purchased, there will be lower capital expenditures and maintenance costs.

Virtual machines can be centrally managed and monitored, which allows a manufacturer to more easily achieve greater process consistency across the enterprise. Benefits include ease of continuous process improvement, greater agility, and less training burden as employees transfer, leave the company, get promoted, or retire.

Manufacturing software such as manufacturing execution systems (MES) has a long lifecycle, given the upfront time and cost to implement as well as the ongoing training needed. Oftentimes, manufacturers avoid making changes to their software configurations as a way to reduce risk. By separating software from hardware updates, a virtual IT environment might make this lifecycle of software and operating system (OS) updates easier to manage. Hardware purchases can also occur on a regular or scheduled basis, resulting in greater consistency in system specifications.

Virtualization provides greater cost savings 

As mentioned, virtualization is growing on the industrial, or operational technology (OT), side. Automation's gains over the past decade have come from the ability to connect business systems to the plant floor, drive factories based on orders received, and collect data out of the plant to analyze and improve performance. Knowing and understanding all that, end users are deriving great cost savings by virtualizing PCs onto fewer physical servers.

When doing that, the top benefit is cost savings; another is that manufacturers protect themselves against hardware failure. Downtime is lower because users can build clusters of PCs that add redundancy that would not be there if each application or server were running on its own dedicated piece of hardware.

Along those lines, as manufacturers increase their network dependency, there are more network-connected devices, all of which need a monitoring tool.

In a complex, cloud-style cluster of virtual machine servers, there is a need for monitoring and configuration tools.

CapEx-OpEx spending down

Virtualizing servers in a manufacturing environment returns considerable cost savings in terms of the amount of equipment the user needs to purchase. Capital expenditures (CapEx) go down when moving to a virtual environment, and operational expenditures (OpEx) also fall over time, because less CapEx means less equipment to replace in three years when the warranty runs out.

Going the virtualization route gives the manufacturer the ability to buy two physical servers and run five or six virtual servers on each. If one physical server fails, the five or six virtual servers on the other machine can continue to run, which reduces downtime from a failure for a business with mission-critical applications. That reduced downtime can lead to an increase in productivity, which leads to making more product, which could mean higher profitability.
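
As a simple illustration of that redundancy, the sketch below assigns a handful of hypothetical virtual servers to two physical hosts and shows what keeps running when one host fails; the host and virtual machine names are made up.

```python
# Minimal sketch of the redundancy idea: two physical hosts, each running
# several virtual servers. If one host fails, the virtual servers on the
# surviving host keep running. Host and VM names are illustrative.
hosts = {
    "host-A": ["erp-vm", "mes-vm", "historian-vm", "scada-vm", "file-vm"],
    "host-B": ["hmi-vm", "db-vm", "web-vm", "print-vm", "backup-vm"],
}

failed_host = "host-A"
still_running = [vm for host, vms in hosts.items()
                 if host != failed_host for vm in vms]
print(f"{failed_host} failed; still running: {still_running}")
```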

Another aspect, in an industry always looking to squeeze more and more out of its processes, is that virtualization can also reduce staffing costs. Traditionally, a manufacturer may have internal IT staff, but less staff is needed to manage a platform that has been virtualized. In addition, a staff that works on a virtualized platform has a cross-pollination of skill sets: rather than one person focused on the network and another on the servers, there are simply people skilled on the virtualized platform.

One often-overlooked aspect is that virtualization allows a manufacturer to scale up or down according to market demands.

Monitoring more interfaces

Along the lines of scalability, there are tools that can monitor up to 600,000 interfaces from a physical server, and some right now are monitoring up to 900,000 interfaces from a virtual server.

While that may sound like a lot, it is possible to treat each interface separately. That allows the user to pull metrics independently of the device and run reports showing how much traffic is occurring on an interface, how many errors are occurring, and whether the interface has a connection. Another benefit of treating each interface separately is that, when troubleshooting a network, it is easier to find exactly which port or physical connection to go to in order to find a faulty cable or a device with a problem.
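
As a rough illustration of what treating each interface separately can look like, the following Python sketch models per-interface traffic, error, and link-state metrics; the device names, field names, and sample values are illustrative, not taken from any particular monitoring tool.

```python
# Minimal sketch: each interface is tracked as its own monitored object,
# so metrics and reports can be pulled independently of the device.
from dataclasses import dataclass

@dataclass
class InterfaceMetrics:
    device: str          # e.g., "router1"
    interface: str       # e.g., "Gi0/2"
    traffic_mbps: float  # current utilization
    errors_per_min: float
    link_up: bool

def report(interfaces):
    """Per-interface report: traffic, errors, and connection state."""
    for i in interfaces:
        state = "up" if i.link_up else "DOWN"
        print(f"{i.device}/{i.interface}: {i.traffic_mbps:.1f} Mb/s, "
              f"{i.errors_per_min:.0f} errors/min, link {state}")

# Illustrative sample: two interfaces on one device, tracked separately.
sample = [
    InterfaceMetrics("router1", "Gi0/1", 28.4, 0, True),
    InterfaceMetrics("router1", "Gi0/2", 3.1, 42, True),  # errors suggest a cable fault
]
report(sample)
```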

Network monitoring is a solid tool, but its mission is to report on what it sees from the interface's perspective. The interface will say whether it is suffering from a physical fault, potentially a cable fault or a misconfiguration, or whether there is too much traffic going across the interface at the time, which could be an indication that someone is using the network inappropriately.

It also allows the user to create alerts across any metrics. As an example, it would be possible to write a custom alert that goes to the administrator if there is a potential cable fault. The alert could say there is a potential cable fault on router one, interface two; the tool can be smart enough to give that kind of feedback. It is also possible to bring together multiple metrics from an interface, looking at bandwidth and errors at the same time, and send them in one email. The administrator could then look at it and say it does not look like a bandwidth issue but rather a fault with the device, because packets are being dropped on the interface.
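
A minimal sketch of such a combined alert rule follows; the thresholds, device names, and message wording are assumptions chosen for illustration, not the behavior of any specific product.

```python
# Minimal sketch: a custom alert that combines bandwidth and error metrics
# from one interface into a single message. Thresholds are illustrative.
def interface_alert(device, interface, traffic_mbps, errors_per_min,
                    bandwidth_limit_mbps=80.0, error_limit_per_min=10.0):
    """Return one alert message covering both metrics, or None if healthy."""
    lines = []
    if errors_per_min > error_limit_per_min:
        lines.append(f"Potential cable fault on {device}, {interface}: "
                     f"{errors_per_min:.0f} errors/min.")
    if traffic_mbps > bandwidth_limit_mbps:
        lines.append(f"High utilization on {device}, {interface}: "
                     f"{traffic_mbps:.1f} Mb/s.")
    return "\n".join(lines) if lines else None

# Errors are high while bandwidth is normal, so the administrator can rule
# out a bandwidth issue and suspect the device or cabling instead.
msg = interface_alert("router1", "interface2", traffic_mbps=12.0, errors_per_min=55)
if msg:
    print(msg)  # in practice, sent as a single email to the administrator
```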

Setting a baseline

The issue is all about knowing the network and understanding what is going on. That can happen once the user develops a baseline of what the network should look like. Then discrepancies can be found and investigated.

By running the network monitoring tool over time, it is possible to determine the average usage. If something were happening on the network, the administrator would get a report saying this interface normally sees 30 megabits per second (Mb/s) of traffic. Knowing that baseline, it is then possible to set an alert that sends a notification if the traffic reaches 50 or 60 Mb/s.
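
A minimal sketch of that baseline-and-threshold logic follows, using the 30 Mb/s and 50 Mb/s figures from the example; the historical samples themselves are made up.

```python
# Minimal sketch: derive a baseline from historical samples and alert when
# current traffic exceeds a threshold set above that baseline.
from statistics import mean

history_mbps = [29, 31, 28, 32, 30, 29, 31]   # past utilization samples
baseline = mean(history_mbps)                 # roughly 30 Mb/s average usage

alert_threshold = 50.0                        # chosen well above the baseline
current_mbps = 57.0

if current_mbps >= alert_threshold:
    print(f"Alert: traffic is {current_mbps:.0f} Mb/s "
          f"(baseline ~{baseline:.0f} Mb/s, threshold {alert_threshold:.0f} Mb/s)")
```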

History can be kept for as long as the user wants. Knowing the history of that baseline, which could go back years, can also help in designing the network of the future. A manufacturer could say, "Last year we were using 30 Mb/s, but in the last three months we have been at 50 Mb/s." That may not trigger any alerts, but it allows the manufacturer to plan capacity and say, "If we keep growing at this rate, we are going to have to replace our equipment to handle the new bandwidth requirement in this area of the network."
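
A simple projection along those lines might look like the following sketch; the 100 Mb/s link capacity, the nine-month interval, and the assumption of linear growth are illustrative, not from the article's example.

```python
# Minimal sketch: capacity planning from the stored baseline history.
# Figures follow the 30 Mb/s vs. 50 Mb/s example; capacity and growth
# model are assumptions for illustration.
last_year_mbps = 30.0
recent_mbps = 50.0
months_elapsed = 9                  # assumed gap between the two averages
link_capacity_mbps = 100.0          # assumed capacity of this link

growth_per_month = (recent_mbps - last_year_mbps) / months_elapsed
months_to_capacity = (link_capacity_mbps - recent_mbps) / growth_per_month
print(f"At ~{growth_per_month:.1f} Mb/s of growth per month, this link "
      f"reaches capacity in about {months_to_capacity:.0f} months.")
```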

Defense-in-depth tool

Whether it is a virtual environment or a regular physical network, in today's Internet-connected manufacturing environment, network monitoring becomes another strong tool in a solid defense-in-depth program.

From a security perspective, the goal is to ensure the network stays up as much as possible, which minimizes downtime and maximizes operational return.

The biggest risk on a network is something that impacts the business, such as a distributed denial-of-service (DDoS) attack, or someone trying to find an exploit by running scanners across the network. By monitoring NetFlow traffic, it is possible to report on unusual activity on unknown ports and provide that information in real time as it is happening. NetFlow is a feature on Cisco routers that provides the ability to collect IP network traffic as it enters or exits an interface.
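
As a rough sketch of how flow data can expose scanning activity, the following example counts distinct destination ports per source address; the simplified record format and the 100-port threshold are assumptions for illustration, not part of NetFlow itself.

```python
# Minimal sketch: flag possible port scanning from flow records.
# A scanner typically touches many distinct destination ports with tiny flows.
from collections import defaultdict

flows = [
    # (source IP, destination IP, destination port, bytes)
    ("10.0.0.5", "10.0.1.20", 80, 120_000),
    ("10.0.0.99", "10.0.1.20", 22, 60),
    ("10.0.0.99", "10.0.1.20", 23, 60),
    ("10.0.0.99", "10.0.1.20", 3306, 60),
    # ...a real capture from a scanner would contain many more such flows
]

ports_per_source = defaultdict(set)
for src, dst, dport, nbytes in flows:
    ports_per_source[src].add(dport)

SCAN_PORT_THRESHOLD = 100   # assumed cutoff for "unusually many ports"
for src, ports in ports_per_source.items():
    if len(ports) > SCAN_PORT_THRESHOLD:
        print(f"Possible scan: {src} touched {len(ports)} distinct ports")
```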

Because it is possible to report on devices connecting to or disconnecting from the network, if a mission-critical connection goes down, the network monitoring tool can alert the user the second it receives an outage notice from a device. The tool can inform the administrator of a fault before they realize it has happened on the network.

For example, if the administrator is not aware of what is occurring on the network, he or she can go back, look at the history of the network, and understand the baseline. The same goes for unusual traffic.

It is possible to classify traffic based on defined rules: ports 80 and 443 are normal protocols on the network and are classified as web traffic; port 3389, used for remote desktop connections, goes in a different category as a normal business application; and anything outside of that is considered unknown. An alert can then say that if more than 20% of network traffic is unknown at any time, the administrator should know about it so they can start to investigate. That type of alert tells them about things they were not prepared for, or even looking for, because such traffic ends up classified in the unknown category.
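
A minimal sketch of those classification rules and the 20% unknown-traffic alert follows; the byte counts and the extra port are made up for illustration.

```python
# Minimal sketch of the rules described above: 80/443 as web, 3389 as a
# known business application, everything else unknown, with an alert when
# unknown traffic exceeds 20% of the total.
KNOWN_PORTS = {80: "web", 443: "web", 3389: "remote desktop"}

flows = [
    # (destination port, bytes transferred)
    (443, 700_000),
    (80, 150_000),
    (3389, 50_000),
    (6667, 300_000),   # not covered by the rules -> unknown
]

totals = {"web": 0, "remote desktop": 0, "unknown": 0}
for port, nbytes in flows:
    totals[KNOWN_PORTS.get(port, "unknown")] += nbytes

unknown_share = totals["unknown"] / sum(totals.values())
print(totals, f"unknown share: {unknown_share:.0%}")

if unknown_share > 0.20:
    print("Alert: more than 20% of traffic is unclassified; investigate.")
```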

OT growth coming

Network monitoring has been a staple in the IT industry for a decade or so, and the manufacturing automation industry is just now starting to pick up on the benefits that visibility brings.

OT networks tend to be a lot smaller than IT networks, but that is in the process of changing, especially with the looming shift to the Industrial Internet of Things (IIoT). When that happens, and there are industry pundits saying it will happen en masse sooner rather than later, the number of network-connected devices is going to mushroom. Most likely within two years, network monitoring will be as important in this industry as it already is in IT.

A boost in visibility will allow everyone from engineers to C-level leaders to make informed decisions in the new era of manufacturing.

Gregory Hale is the editor and founder of Industrial Safety and Security Source (ISSSource.com), a news and information website covering safety and security issues in the manufacturing automation sector. This content originally appeared on ISSSource.com. Edited by Chris Vavra, production editor, CFE Media, Control Engineering, cvavra@cfemedia.com.

Original content can be found at www.isssource.com.