Temperature calibration basics

The types of calibration applicable to a manufacturing plant include temperature, pressure, flow, and electrical parameters such as power, voltage, current, and resistance. This article primarily addresses temperature calibration, but much of the information applies to the calibration of all instruments.

By Jack Smith, Senior Editor, Plant Engineering Magazine September 10, 2003

Key Concepts

Necessity

Traceability

Outsourcing

Glossary

For an electronic instrument to be used for measurement, it must be connected to some type of sensor. In some cases, the sensor itself must be calibrated or certified, or its range of error must be documented and compensated for within the systems where it is used. Temperature sensors include thermocouples, resistance temperature detectors (RTDs), thermistors, filled bulbs, and bimetal thermostats. Because of their widespread use, only thermocouples and RTDs are discussed in this article.

Why is it necessary to calibrate instruments?

Inaccurate instruments controlling your process may affect the quality of your products, whether those products can be made at all, or the cost of the energy required to manufacture them. Instrument and control accuracy may make the difference between manufacturing widgets and manufacturing scrap.

Also, the government or some other entity regulating your industry may require you to calibrate your instruments and controls, the test equipment used to troubleshoot and repair your equipment, and the calibration sources or standards used to calibrate these instruments. Instrument calibration, documentation, and traceability have been longstanding mandates in the military and aerospace industries, as well as in the food, beverage, and pharmaceutical industries. Most regulatory entities require calibration traceability to the National Institute of Standards and Technology (NIST), formerly known as the National Bureau of Standards. However, some industries have not always been this meticulous in complying with established national standards.

Which instruments require calibration?

In general terms, all of them. Specifically, it depends on the type of instrument; where and how the instrument is used; the process, if any, in which it is used; and the documentation and regulatory compliance required by your plant or its regulating organizations.

Temperature instruments are calibrated by substituting the sensor signal with a known electrical equivalent that has a higher degree of accuracy than the instrument being calibrated (Fig. 1). Older equipment consisted of a simple voltage source and a precision voltmeter, which read the source voltage applied to the instrument being calibrated. For example, the uncompensated dc voltage equivalent for a type K thermocouple at 1450 deg F is 32.774 mV. The point of measurement presents a cold junction, the effects of which must be compensated for (see "Instrument calibration glossary").
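To make the compensation arithmetic concrete, here is a minimal sketch with an abbreviated type K lookup table and hypothetical readings; a real instrument would use the full NIST ITS-90 tables or their polynomial coefficients.

```python
# Minimal sketch of thermocouple cold junction compensation.
# The lookup table is abbreviated from standard type K reference
# values (temperature in deg C, EMF in mV, reference junction at
# 0 deg C); a real instrument uses the full ITS-90 tables.
TYPE_K = [
    (0.0, 0.000),
    (25.0, 1.000),
    (100.0, 4.096),
    (500.0, 20.644),
    (800.0, 33.275),
    (1000.0, 41.276),
]

def emf_at(temp_c):
    """Linearly interpolate the type K EMF (mV) at temp_c (deg C)."""
    for (t0, v0), (t1, v1) in zip(TYPE_K, TYPE_K[1:]):
        if t0 <= temp_c <= t1:
            return v0 + (v1 - v0) * (temp_c - t0) / (t1 - t0)
    raise ValueError("temperature outside table range")

def temp_at(emf_mv):
    """Inverse lookup: temperature (deg C) for a given EMF (mV)."""
    for (t0, v0), (t1, v1) in zip(TYPE_K, TYPE_K[1:]):
        if v0 <= emf_mv <= v1:
            return t0 + (t1 - t0) * (emf_mv - v0) / (v1 - v0)
    raise ValueError("EMF outside table range")

# The instrument sees E(hot junction) minus E(cold junction), so the
# cold junction EMF must be added back before converting to temperature.
measured_mv = 31.80       # hypothetical reading at the terminals
cold_junction_c = 23.0    # terminal temperature from an RTD or Balco sensor
hot_c = temp_at(measured_mv + emf_at(cold_junction_c))
print(f"hot junction: {hot_c:.0f} deg C")  # ~787 deg C, about 1450 deg F
```

Note how the compensated result lands near the 1450 deg F example above: the cold junction "subtracts" its own EMF from the reading, and the compensation adds it back.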

Today, many calibrations are performed using multifunction calibrators. Newer multifunction calibrators can both measure and source signals. The ability to source signals allows a multifunction calibrator to emulate signals from thermocouples, RTDs, and thermistors, as well as nontemperature parameters such as voltage, current, resistance, frequencies, and waveforms. Some also support HART devices and can document calibration tasks (Fig. 2).

The more equipment you can calibrate with the fewest calibrators, the better. A multifunction calibrator might be more useful than a single-function calibrator if you have a wide range of equipment to calibrate.

What is traceability?

Traceability of a calibration facility's or company's calibration equipment and procedures means that performance testing has been done on the facility, equipment, or procedures, covering the units tested, the test conditions, the associated instruments, and the personnel performing the calibrations. The results of the testing are quantified against the applicable NIST reference standards. Testing is done periodically to ensure that instruments remain as accurate as specified.

NIST traceability refers to the ability to establish assurance of, and quantify, the imprecision of each component of a given measurement and to assess the systematic error of the result. These uncertainties include not only those of the NIST units, but also those of every step in the process chain, down to the measurement with which you are concerned.
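The sketch below shows how uncertainty accumulates down such a chain, under the simplifying assumption that the error sources are independent and combine in quadrature (the root-sum-square method); all of the figures are hypothetical, not from any actual calibration certificate.

```python
# Hypothetical traceability chain, from the NIST standard down to the
# field instrument. Assumes independent error sources combined in
# quadrature (root sum of squares); real certificates state each link.
import math

chain_degF = [
    0.05,   # NIST reference standard
    0.15,   # calibration lab working standard
    0.50,   # plant multifunction calibrator
    1.00,   # field instrument under test
]

combined = math.sqrt(sum(u ** 2 for u in chain_degF))
print(f"combined uncertainty: +/-{combined:.2f} deg F")
# Each link only adds uncertainty, which is why every step in the
# chain must be documented, not just the NIST standard itself.
```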

Some technicians feel that traceability of instrument calibration to industry standards is unnecessary if their products are manufactured to specifications. Others believe that vendors mean it when they say their instruments don’t need field calibration. However, instrument calibrations not traceable to NIST standards can lead to inconsistent or inferior products. The only way you can be certain that your instruments are accurate according to a known and consistent accuracy standard is to require NIST traceability. If your previous calibrations were not NIST-traceable, you cannot be certain of their accuracy.

Inside or outside?

Many plants have inhouse calibration of instruments and controls used in their manufacturing processes. Others use outsourced calibration services on a periodic or contract basis. Regardless of who does the calibration, the equipment used to calibrate instruments that control manufacturing processes must itself be calibrated, and should be NIST-traceable.

“Calibrating the calibrator” is an important task and should not be overlooked. Deciding whether your company should calibrate these instruments or hire an outside lab to calibrate them depends on many factors, which include time, labor, and equipment needed for calibration tasks.

If your plant has a handful of calibrators, test equipment, and digital multimeters (DMMs), outsourcing their calibration may be the better choice. But if your plant has many such test instruments, it may be more cost-effective for your company to set up a calibration lab inhouse.

A big advantage of setting up your own calibration facility is that you can control the workflow. You can also set up a documentation procedure to fit your needs.

Using an outside calibration lab also has its advantages. Depending on your plant, an outside lab may be more efficient. Some labs can essentially manage your entire calibration program for you.

To determine whether to calibrate your instruments inhouse or to outsource calibration, ask yourself these questions (a rough cost comparison built from them appears after the list):

What is the calibration workload?

How many instruments require calibration?

How often must each instrument be calibrated?

How much time — in both labor and equipment unavailability — will be required for calibration of these instruments?

How much time should be estimated for unscheduled calibration and repair?

How much will your calibration equipment depreciate each year?

How much will calibration lab equipment cost?

What would the calibration lab payroll expense be?

How much would training cost?

How much would overhead cost?
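As a starting point, the sketch below turns several of these questions into a simple annual cost comparison. Every figure is a hypothetical placeholder; substitute your plant's own numbers.

```python
# Rough annual cost comparison: outsourced calibration vs. an inhouse
# lab. Every figure here is a hypothetical placeholder, not a quote.
instruments = 120
cals_per_instrument = 2             # calibrations per instrument per year
outsource_fee = 85.0                # lab fee per calibration, $

equipment_depreciation = 12_000.0   # annualized calibrator/standard cost, $
payroll = 30_000.0                  # technician time devoted to the lab, $
training_and_overhead = 8_000.0     # per year, $

outsourced = instruments * cals_per_instrument * outsource_fee
inhouse = equipment_depreciation + payroll + training_and_overhead

print(f"outsourced: ${outsourced:,.0f}/yr   inhouse: ${inhouse:,.0f}/yr")
# At these (made-up) numbers, outsourcing wins; with more instruments
# or higher per-calibration fees, the inhouse lab starts to pay off.
```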

If you decide to have instruments calibrated at an outside calibration lab, you should audit your outside calibration supplier regularly to ensure that adequate equipment, procedures, and documentation exist to guarantee NIST traceability. However, if you cannot justify the cost and time required to perform these onsite audits, the use of an ISO 9000-certified or MIL-STD-approved calibration supplier is an acceptable alternative.

PLANT ENGINEERING magazine extends its appreciation to E Instruments Group, LLC; Eurotherm; Fluke Corp; Hart Scientific; and TRANSCAT, Inc., for the use of their materials in the preparation of this article.

Instrument calibration glossary

Accuracy — The maximum deviation expected between a meter reading and the actual value being measured under specified operating conditions, usually expressed as a percent of full scale for analog instruments and as a percent of reading for digital instruments.
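The distinction between the two conventions matters most at the low end of the range, as this sketch with hypothetical specifications shows.

```python
# Hypothetical 0-2000 deg F instruments: an analog meter spec'd at
# +/-1% of full scale vs. a digital meter spec'd at +/-1% of reading.
full_scale = 2000.0
reading = 150.0

analog_error = 0.01 * full_scale    # +/-20 deg F at any reading
digital_error = 0.01 * reading      # +/-1.5 deg F at this reading

print(f"analog:  {reading} +/- {analog_error} deg F")
print(f"digital: {reading} +/- {digital_error} deg F")
# Near the bottom of the scale, a percent-of-full-scale spec is far
# looser than a percent-of-reading spec with the same percentage.
```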

Analog — Physical representation of information in a continuous form, as contrasted with digital information, which is represented in a discrete, discontinuous, or stepped form.

Balco — A nickel-iron alloy generally used for low-cost resistance sensors or cold junction compensation for temperature instruments.

Calibration — The process of comparing an instrument or device with a standard to determine its accuracy or to devise a corrected scale.

Cold junction (Seebeck effect) — The Seebeck effect is the production of a voltage when two properly chosen materials form a closed circuit and one junction is hotter than the other (see Thermocouple). By convention, the hotter junction is the measurement junction, while the cold junction is the reference junction.

Cold junction compensation — Circuitry or a correction factor that negates the voltage created by the cold junction, allowing only the voltage produced by the measurement junction to be sensed by the instrument. An RTD or Balco resistor (the compensating component) senses the cold junction temperature at the instrument terminal. This component is placed in the measurement circuit so that it produces a voltage equal to, but opposite of, the voltage produced by the cold junction, canceling its effect.

Conformity error — For thermocouples and RTDs, the difference between the actual reading and the temperature shown in published tables for a specific voltage input.

Digit — A measure of the display span of a meter or instrument. By convention, a full digit can assume any value from 0 through 9; a half digit can assume only a limited set of values, typically 0 or 1 (a 3½-digit meter, for example, can display readings up to 1999).

Drift — A variation in a reading or set point value resulting from changes in component value, ambient temperature, and line voltage.

Hysteresis — An error resulting from the inability of an electrical signal or mechanical system to produce the same reading or position when approached slowly from either direction. Also referred to as dead band.

Precision — An indication of the number of distinguishable valid alternatives presented by an instrument scale or readout for obtaining a measurement. The greater the number of graduations (analog) or significant figures (digital), the higher the precision, assuming that the subdivisions have been meticulously achieved. Precision and accuracy are independent but interrelated characteristics. A high degree of precision suggests, but does not assure, a high degree of accuracy; a high degree of accuracy, however, requires a high degree of precision.
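A quick numeric illustration of that independence, using made-up readings:

```python
# Made-up readings from a meter that repeats tightly (high precision)
# but carries a systematic offset from the true value (poor accuracy).
import statistics

true_value = 100.0
readings = [103.1, 103.2, 103.1, 103.0, 103.2]

print(f"spread (precision): {max(readings) - min(readings):.1f}")
print(f"offset (accuracy):  {statistics.mean(readings) - true_value:.1f}")
```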

Reference junction — The other junction to which the measuring thermocouple junction is compared. The output voltage of a thermocouple is approximately proportional to the temperature difference between the measuring (hot) junction and the reference (cold) junction.

Repeatability — The ability of an instrument to register the same reading in successive measurements of the same input. It is usually expressed as a percentage of full scale.

Resistance temperature detector (RTD) — A resistor made of a material whose electrical resistivity varies with temperature according to a known and measurable function. An RTD is a temperature sensor constructed using a precision-wound element of copper, nickel, Balco, platinum, or tungsten.
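One widely used form of that function is the Callendar-Van Dusen equation. The sketch below uses the standard IEC 60751 coefficients for a 100-ohm platinum element (Pt100); the simplified quadratic form shown is valid from 0 deg C up.

```python
# Callendar-Van Dusen equation for a Pt100 platinum RTD, using the
# standard IEC 60751 coefficients (valid for 0 to 850 deg C).
R0 = 100.0        # resistance in ohms at 0 deg C
A = 3.9083e-3
B = -5.775e-7

def pt100_resistance(temp_c):
    """Resistance (ohms) of a Pt100 element at temp_c (deg C)."""
    return R0 * (1 + A * temp_c + B * temp_c ** 2)

print(f"{pt100_resistance(100.0):.2f} ohms at 100 deg C")  # ~138.51 ohms
```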

Resolution — The degree to which nearly equal values of a quantity can be discriminated. In analog meters, resolution is the difference between the values represented by two adjacent scale divisions. In digital meters, resolution is the value represented by a one-digit change in the least-significant digit.

Sensitivity — The minimum input signal that will produce a specified output.

Stability — Deviation from a steady state or accuracy over a specific period of time due to aging of components or environmental changes. Stability is usually expressed in parts per million (ppm) for a given period of time.

Thermocouple — A temperature sensor formed by joining two electrical conductors made of dissimilar metals. The junction is placed at the point of heat (or cold) application. The resulting voltage developed across the free ends is approximately proportional to the temperature difference between the junction and the free ends and is measured potentiometrically. Reference to standard tables (or calibration against them) determines the temperature. A thermocouple alloy can be any of a number of metal alloys having standardized and controlled thermoelectric properties. Designated dissimilar pairs of these alloys are used together in the measuring or extension portions of thermocouple circuits.

Thermocouple types

Type  Composition (+lead/-lead)                                             Useful range (deg F)  Maximum range (deg F)
J     Iron (Fe)*/constantan (Cu-Ni)                                         32 to 1382            -346 to 2192
K     Chromel (Ni-Cr)/Alumel (Ni-Al)*                                       -328 to 2282          -454 to 2501
T     Copper (Cu)/constantan (Cu-Ni)                                        -328 to 662           -454 to 752
E     Chromel (Ni-Cr)/constantan (Cu-Ni)                                    -328 to 1652          -454 to 1832
N     Nicrosil (Ni-Cr-Si)/nisil (Ni-Si-Mg)                                  -450 to 2372          -450 to 2372
R     Platinum + 13% rhodium (Pt-13% Rh)/platinum (Pt)                      32 to 2642            -58 to 3214
S     Platinum + 10% rhodium (Pt-10% Rh)/platinum (Pt)                      32 to 2642            -58 to 3214
B     Platinum + 30% rhodium (Pt-30% Rh)/platinum + 6% rhodium (Pt-6% Rh)   32 to 3092            32 to 3308
C**   Tungsten + 5% rhenium (W-5% Re)/tungsten + 26% rhenium (W-26% Re)     32 to 4208            32 to 4208

*Denotes magnetic lead.
**Type C is not an ANSI-recognized symbol; all other types listed are.
Table indicates popular thermocouple types; the list is not exhaustive.
All thermocouple types shown have red insulation on the negative lead. This color convention applies to the U.S. only; other countries may use different color codes. Check applicable standards.