Performing device integration testing for Foundation fieldbus devices
While fieldbus registration guarantees a basic level of interoperability, you may find some unpleasant surprises as you try to integrate devices with your host system. Here are some ways to prevent that.
Dozens of Foundation fieldbus device manufacturers promote their products as interoperable and claim unique features that make them a perfect fit for your automation project. They point to the Foundation’s registration check mark as one form of proof. But is registration adequate to verify the supplier’s claims and determine how well the devices work with your host system?
Although registration is important for verifying device interoperability, it alone isn't sufficient as an arbiter for device selection or a guarantor of a smooth plug-and-play start-up. Even with registration, unfortunate surprises may lurk and be discovered at a bad time: during factory acceptance testing, or worse, during construction and commissioning.
For example, you may learn that the registered valve positioners you’ve selected need more execution time than expected, requiring additional segments to distribute the segment loading and maintain the required amount of free time on the segments. Having to add segments results in additional engineering and installation costs. You may also learn that some devices require more lift-off or turn-on voltage than expected, which in turn may restrict the total quantity of devices on the segment or limit the total segment length due to allowable voltage drop.
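The voltage-drop constraint described above can be sketched numerically. The following is a minimal, hedged illustration, not a vendor calculation: the supply voltage, minimum device voltage, and cable resistance figures are assumptions chosen for the example, and it conservatively lumps all devices at the far end of the segment.

```python
# Hedged illustration: check whether the voltage at the far end of a
# fieldbus segment stays above the devices' minimum operating voltage.
# All figures below are illustrative assumptions, not vendor data.

SUPPLY_V = 24.0            # assumed power-conditioner output voltage
MIN_DEVICE_V = 9.0         # fieldbus devices need roughly 9 V minimum
CABLE_R_OHM_PER_KM = 44.0  # assumed loop resistance of the trunk cable

def end_of_segment_voltage(n_devices, ma_per_device, length_km):
    """Worst-case voltage at the segment end, conservatively assuming
    all devices cluster at the far end of the trunk."""
    total_amps = n_devices * ma_per_device / 1000.0
    drop_v = total_amps * CABLE_R_OHM_PER_KM * length_km
    return SUPPLY_V - drop_v

# 12 devices drawing 20 mA each on a 1.0 km trunk:
v = end_of_segment_voltage(12, 20.0, 1.0)
print(round(v, 2))          # 13.44 V, above the assumed 9 V minimum
print(v >= MIN_DEVICE_V)    # True
```

A device with a higher turn-on voltage or current draw shrinks this margin, which is exactly why those two line items belong in the side-by-side comparison.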
Fortunately, you can avoid surprises, delays, and cost overruns by performing device integration testing to verify that devices operate as advertised and that the selected device functionality matches your process requirements. Observing, documenting, and recording device functionality and operation during device integration testing can provide a certainty of outcome at the factory acceptance test. The results also give host system designers a basis for operating and maintenance procedures, as well as hands-on training for construction, start-up, and maintenance personnel. You'll then be ready for smooth commissioning and start-up.
A basic first step for Foundation fieldbus device selection is choosing devices registered as interoperable. The Fieldbus Foundation registered product catalog lists all of the devices carrying the official Foundation check mark. It identifies products by manufacturer and device type, and describes features of the devices such as function blocks available, profile and device class, revision status, stack and physical layer information, as well as a link for downloading the DD (device description). The Foundation’s registration check mark certifies that products have successfully completed testing per the ITK (interoperability test kit). This validates that the communications are per Foundation specifications including the requirement that all mandatory parameters are present and functional. Registration ensures a basic interoperability, meaning the device communicates with host systems and other devices per the Foundation’s specifications.
Interoperability is only a starting point for selection and preparation prior to using fieldbus devices. Consider, for example, that there are 106 companies shown on the Registered Products page of the Fieldbus Foundation’s website as of July 2011.
Consider these counts of registered devices:
- Pressure: 46 devices from 16 manufacturers
- Flow: 52 devices from 22 manufacturers
- Level: 57 devices from 20 manufacturers, and
- Final control elements: 47 devices from 25 manufacturers.
Are they all equal? While registration verifies interoperability, it does not mean the product will perform as advertised or as needed by your process. The Foundation is not responsible for individual manufacturers' designs. Will the device work as expected? What features are available, and which do you need? Are the displays and interfaces easy to see and use? What are the quirks and potential pitfalls? How well does the device work with your host system?
Compare brands side-by-side
A simple but important first step is creating a side-by-side technical evaluation of candidate field devices, comparing them on at least the following:
- What area classifications it is certified for
- What other certifications it has (ATEX, IEC, etc.)
- Current consumption
- Linkmaster capable (if needed)
- Resource and transducer blocks
- Function blocks available (vs. what is needed)
- Function block execution time
- What FF communication stack is used
- What ITK version was used for certification (latest?)
- Whether the device uses enhanced EDDL (electronic device description language)
- Whether the device uses FDT/DTM (field device tool/device type manager) technology
- What other hardware and software is required to use it
- Whether methods are available for calibration and routine maintenance tasks, and
- Whether the device is firmware-download capable.
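A checklist like the one above lends itself to a simple side-by-side matrix that can be reviewed or scripted. The sketch below is purely illustrative: the device names, figures, and criteria keys are hypothetical placeholders, not data from any registered product.

```python
# Minimal sketch of a side-by-side evaluation matrix.
# All device names and figures are hypothetical placeholders.

criteria = ["current_ma", "ao_exec_ms", "pid_exec_ms", "link_master", "itk_version"]

devices = {
    "Positioner A": {"current_ma": 17, "ao_exec_ms": 25, "pid_exec_ms": 30,
                     "link_master": True,  "itk_version": "5.x"},
    "Positioner C": {"current_ma": 26, "ao_exec_ms": 95, "pid_exec_ms": 120,
                     "link_master": False, "itk_version": "4.x"},
}

# Print a simple comparison table, one row per criterion.
print(f"{'criterion':<14}" + "".join(f"{name:>16}" for name in devices))
for c in criteria:
    print(f"{c:<14}" + "".join(f"{str(d[c]):>16}" for d in devices.values()))
```

Even a plain spreadsheet serves the same purpose; the point is that every candidate device gets scored against the same criteria before any is connected to the host.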
Evaluate transmitters, final control elements, and all other field devices to ensure the functionality needed for your process is available. As an example, questions about valve positioners should include:
- What diagnostics are available
- What is needed to view/run the diagnostics
- What reports are available
- Whether reports can be converted to MS Word or must be native format
- Whether the history is stored in the device, and
- Whether the history is retained when power is turned off.
Create a short list of criteria that can make a significant difference for your application and compare the devices side-by-side, as illustrated in the sample positioner comparison in the table. For example, the list includes current draw and function block execution time. Why should we be concerned with these? Because devices with higher current draw and longer execution times tend to limit the total number of devices on a segment for a given design.
For example, consider the impact of positioner selection on your segment design. Basic requirements according to Foundation Fieldbus AG-181 System Engineering Guidelines rev 3.1 are as follows:
- The maximum recommended number of devices for a segment with a one-second macrocycle is 12, with a maximum of three final control elements.
- The guidelines recommend at least 30% free time.
- For most host systems, the recommended maximum scheduled time is 50% of the macrocycle, or 500 ms.
- Therefore, assume that the maximum allowed scheduled time for a segment is 500 ms.
If control in the field is used, as is recommended for most applications, then the positioner choice directly affects the segment time budget:
- Required execution time for one control valve (PID and AO function blocks) using positioner C on the chart would be 215 ms (120 + 95).
- Each of the control valves would require an execution time of 215 ms.
- Transmitters (depending on the manufacturer and type) can have execution times from 20 to 100 ms. (For this illustration, assume an execution time of 45 ms.)
- A control loop would then require 45 + 120 + 95 = 260 ms.
- If a natural schedule were used, this design would be limited to one control loop plus possibly two or three additional transmitters on the segment, leaving no room for expansion, since the recommended free macrocycle time for acyclic communication is already consumed.
As you can see, positioner choice can severely limit segment design, possibly necessitating an increase in the number of segments, thereby increasing the overall cost. By contrast, if you selected positioner A, the total time for a control loop would be 100 ms (30 + 25 + 45), which is 38% of the time required for one control loop using positioner C.
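The macrocycle arithmetic above can be captured in a few lines. This is a sketch of the article's own illustrative figures (45 ms transmitter, 120 + 95 ms for positioner C, 30 + 25 ms for positioner A) under a serial natural schedule; real hosts may schedule blocks differently.

```python
# Sketch of the macrocycle budget arithmetic for control in the field,
# using the article's illustrative execution times and a serial
# (natural) schedule. A 1 s macrocycle with >= 50% free time leaves
# at most 500 ms of scheduled block execution.

MACROCYCLE_MS = 1000
MAX_SCHEDULED_MS = MACROCYCLE_MS * 0.5   # 500 ms budget

def loop_time(xmtr_ms, pid_ms, ao_ms):
    """Scheduled time for one control loop: AI + PID + AO in sequence."""
    return xmtr_ms + pid_ms + ao_ms

loop_c = loop_time(45, 120, 95)   # positioner C
loop_a = loop_time(45, 30, 25)    # positioner A

print(loop_c, loop_a)                # 260 100
print(round(100 * loop_a / loop_c))  # positioner A uses ~38% of C's time
print(MAX_SCHEDULED_MS // loop_a)    # loops that fit in the 500 ms budget
```

Running the same numbers for each candidate positioner makes the segment-count consequences of a device choice visible before any hardware is ordered.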
Fortunately, natural schedules are rarely used in today’s host systems. It is important to note that most host systems can optimize the macrocycle to be more efficient, such as executing devices in parallel or using subcycles so that execution time becomes much less of a problem. The designer must know and understand the capabilities of the host system being used on the project in order to confirm segment design.
Once this side-by-side technical evaluation is done, other factors need to be evaluated for a given field device. Manufacturers are usually quite willing to participate in demonstrating that their product is truly interoperable.
Integration testing: How and why
In addition to verifying that devices are interoperable with the chosen host, convenience factors such as speed and ease of use are very important. Two or more device brands may appear nearly equal in a side-by-side comparison of published specifications; however, one may be much easier to work with and more compatible with your plant's operating procedures than the others. It is important to understand and evaluate devices fully through thorough device integration testing while connected to your chosen host system in a laboratory environment. In this way you will be able to set up procedures and methods to configure and apply what may be hundreds or even thousands of devices efficiently and confidently in your plant.
Device integration testing should be done as part of your project after host selection and prior to customer acceptance testing, commissioning, and start-up. Your engineering team may define and run the tests, or you may arrange to have them done by the host supplier or another consultant. The knowledge and data gained from the testing ensure that devices are configured properly and function as you expect as part of the plant's Foundation fieldbus networks.
When connected, the host HMI (human machine interface) presents all device configuration data, including parameters and other details. The device integration test setup provides an opportunity to review and adjust device configuration parameters to ensure the specific functionality needed for the process. You'll find that not all devices are created equal in the parameter information they provide or in ease of access. Parameter data is even displayed differently by various host suppliers.
It is important to understand the format of the information presented and to make it as easy as possible for the engineering team to know where data resides so they can go directly to key information. Some suppliers employ IEC standard enhanced EDDL to create displays that are meaningful, quick, and easy to work with. Others use FDT/DTM technology, and still others use standard EDDL only.
A device may contain hundreds of embedded data fields (parameters), but careful analysis will show that the vast majority of them are information only and do not require configuration. Typically, only 5% to 10% of a device's parameters require configuration. The most widely used parameters that need to be accessible are shown in the diagram.
It is also important that device data be presented consistently and that methods make device configuration as simple as possible. In these ways, device integration testing lets engineering know how devices look, act, and feel, so they can design and implement the best and most efficient practices.
A device’s operating manual should show (unfortunately some do not) the as-shipped configuration so engineering can determine if it is directly useable in the process or needs to be changed. As an example, the data in the diagram shows that the device’s as-shipped configuration has the WRITE_LOCK parameter set as “unlocked.” It is important to check that a device’s features provide the configuration options needed for your process. Consider multibit alarm support as an example. This feature allows multiple alarms to be annunciated from a device, even if they have not yet been acknowledged or cleared. While this is very useful in the right context, not all host systems support such functionality yet. Consequently, even if a specific feature is available in the device, if the host does not support it, it is not relevant.
Further examples of important data learned from device integration testing efforts include:
- Mode handling—If the resource block is placed out-of-service (OOS), are the other associated function blocks also placed OOS automatically? When the resource block is returned to automatic, do the associated blocks return to their previous state, or do they need to be changed manually? The Fieldbus Foundation does not specify a requirement for this behavior, so your plant's work processes need to accommodate the device supplier's approach.
- Communications—If the field device is link master capable, is it set to basic or link master by the supplier at the factory? Since a segment can have only one link active scheduler (LAS), contention problems can result if a device is set to link master when the designer does not intend it to be, especially when a segment is initially powered. To avoid issues, learn the easiest way to reset the device to basic mode.
- Addresses—What node address is set into the device at the supplier's factory? Ideally it will be a temporary setting that you can easily reset. However, if a device arrives set to address 18, for example, there could be a problem, since this address is reserved for the host.
- Switches—Are field device switch settings correct as received from the supplier? Most often they will be, but it’s important to check. For example, the switches on a multi-input temperature transmitter shown in the photo are recommended to be set to “off” initially.
Any host suppliers you deal with should be willing to discuss areas of concern for field devices. These may include features available in devices that are not supported in the host, issues with specific function blocks, or the device object dictionary. Knowing these in advance enables your device integration testing to move more quickly since you can access and account for known issues.
Another outcome of device integration testing is uncovering any upload or download errors with resource and function blocks or devices so they can be analyzed and corrected. Host-driven displays typically present error listings.
Performance in external noise environments can also be tested to provide confidence in system operation. For example, test results showed that at 6,000 mV of injected random noise all devices continued communicating, while at 7,500 mV some field devices dropped off the segment and no longer communicated accurately.
Newer host displays add features to simplify and streamline device usage as contrasted with traditional displays that present all information in text listings. Your device integration testing process enables the project team members to familiarize themselves with the full family of displays. This provides an important review to make sure the host supplier has consistency across the range of devices. For example, all displays should be scaled to show a complete view without requiring the engineer or maintenance tech to scroll up or down for access to some of the information. If there is a problem or inconsistency, both the host and device suppliers should be informed for a timely resolution.
Certainty of outcome
As described, the rigor of checks for interoperability, device selection, and device integration testing should not be daunting; rather, it is intended to enable a certainty of outcome for success in engineering, particularly during the high-pressure commissioning and start-up stages that follow. Time spent up front, before factory acceptance testing, will pay off in cost and schedule savings. It allows your personnel to find out how the combination of host and device works before the time crunch of start-up activities, enables any minor problems to be rectified in a timely manner, and provides a validated basis for operating and maintenance procedures.
David S. Lancaster, PE, is a certified Foundation fieldbus instructor at Trine University, Angola, Ind.