
How to Select the Right Load Cell


The process of choosing a load cell is often complicated by the fact that each option has its own unique benefits. Truth be told, different load cell applications have their own special requirements. As such, you should discuss load cell capacity requirements with a knowledgeable supplier in advance of making a selection.

The decision you arrive at based on the load cell selection criteria is only one step in the overall process of implementation. Beyond that, you must still ensure the load cell is correctly installed and equipped with the proper instrumentation; otherwise, you will not get precise measurements. This load cell selection guide will help you make a more informed choice.

Understand Your Application

In order to understand your load cell application, you must determine how to conduct the measurement and how the load should be applied. The process of measurement concerns a variety of factors, including:

  • Bending
  • Compression
  • Multi-axis
  • Tension

Load cell measurements help determine the weight of tanks and the performance of durability and break point tests.

Define Your Capacity Requirements

Determine the maximum amount of load capacity required for your applications, as well as the necessary minimum.


As you determine the load cell capacity requirements of your application, you will need to bear in mind any extraneous factors. To ensure you have the optimal capacity, choose one that exceeds the highest operating load. You will also need to establish which engineering units the process requires.
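As a quick sketch of this check (the loads and the 1.5x safety factor below are hypothetical illustrations, not a universal rule), the chosen capacity can be compared against the highest operating load plus a margin:

```python
# Hypothetical capacity check: the rated capacity should exceed the
# highest expected operating load by a safety margin.
def required_capacity(max_operating_load, safety_factor=1.5):
    """Minimum rated capacity for a given peak load (any consistent unit)."""
    return max_operating_load * safety_factor

def capacity_ok(rated_capacity, max_operating_load, safety_factor=1.5):
    """True if the rated capacity covers the peak load with margin."""
    return rated_capacity >= required_capacity(max_operating_load, safety_factor)

# Example: a 600 lbf peak load with a 1.5x margin needs at least a 900 lbf cell
print(capacity_ok(750, 600))   # False: 750 < 900
print(capacity_ok(1000, 600))  # True: 1000 >= 900
```

The appropriate safety factor depends on the application and should be confirmed with your supplier.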

The combined stress caused by extraneous loads and moments can degrade the performance of an application. Consequently, selecting the wrong load cell could seriously compromise the accuracy of an application.

The majority of in-line sensors are ill-equipped for the possibility of extraneous load. For the more high-endurance applications, choose a sensor with an optimal fatigue rating, as specified by the manufacturer.

Define Your Load Cell Needs

In order to select the right load cell, determine the needs of your applications. The following questions can help you make that determination:

  • Is the load of your application dynamic or static?
  • Do you intend to make in-line use of the load cell?
  • Do you intend to make side-mounted use of the load cell?

Another factor is the design of the device in question. Load cell capacity requirements are in part determined by the shape of the device in use, which could include:

  • Compression washer
  • Female/male thread
  • Flange mount
  • In-line
  • Side mount
  • Thru-hole

When choosing a load cell, make sure you first take these factors into account to ensure that your purchase fits the requirements of the applications.

Define Your Size and Specification Requirements

Another key component of load cell selection criteria is the definition of size requirements. Specifically, you must determine the needs of your application along the following measures:

  • Width and height
  • Weight
  • Length

To ensure you have sufficient load cell capacity, you will also need to determine the possible variances that could arise, such as:

  • Bridge resistance
  • Hysteresis
  • Nonlinearity

Finally, you will need to base your choice in part on whether you will conduct the applications in hot or cold temperatures. Likewise, load cell capacity requirements can be determined by whether applications are carried out in submerged settings or out of water. Factors such as the frequency of response and the possible need for special calibration are also matters of interest when you determine your requirements.

Select Instrumentation

At the same time you choose a load cell, be prepared to also select any necessary instruments for the applications you intend to perform. When you pick all the vital pieces at the same time, you can better ensure system-wide compatibility among every component in use.

You should also include system calibration in your order. This will ensure your instrument and sensor are integrated as one system. Calibration is a vital component of all load cell applications.

Load Cell Applications


A variety of load cell options are preferable in relation to the demands of the setting or application in question. As you learn how to select the right load cell for a particular application, consider your foremost requirements. In general, the performance of a load cell will correlate to the demands of an application as follows:

  • High endurance. In an industrial setting, a strain gage load cell is one of the best options, because the accuracy is near perfect in applications that involve experimental stress.
  • Sanitation and safety. In applications that demand precise mechanical balance, pneumatic load cells are the preferable option.
  • Remote applicability. When an application is conducted in a remote setting, the best choice is a hydraulic load cell, which can operate without a power connection.

Load cell selection criteria — or the choice between a strain gage, pneumatic or hydraulic load cell — should largely be determined by the preceding factors.

Load Cell Types

Strain Gauge Load Cells

The most widely used load cells throughout the industrial sector are of the strain gauge variety. People value strain gauge load cells for their durability, stiffness and resonance values. The strain gauge of the load cell is a planar resistor that deforms along with the load cell material under load. The electrical resistance of the strain gauge changes in proportion to the strain.
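The proportionality between resistance change and strain can be sketched numerically. The gauge factor of roughly 2 is typical of common metal-foil gauges; the 350-ohm nominal resistance and the strain value below are illustrative assumptions:

```python
# Fractional resistance change of a strain gauge: dR/R = GF * strain,
# where GF (gauge factor) is about 2 for common metal-foil gauges.
def resistance_change(nominal_resistance, strain, gauge_factor=2.0):
    """Change in resistance (ohms) produced by a given strain."""
    return nominal_resistance * gauge_factor * strain

# Example: a 350-ohm gauge at 1000 microstrain (strain = 0.001)
print(resistance_change(350.0, 0.001))  # 0.7 ohms
```

Because the change is so small relative to the nominal resistance, strain gauges are normally wired into a Wheatstone bridge so the instrumentation can resolve it.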

Piezoelectric Load Cells

With the piezoelectric load cell, the deformation is similar to that of the strain gauge load cell. However, the piezoelectric material generates a voltage in response to the changing form of the load cell. The voltage does not serve as a measurement of static values, but it remains important when the strain changes. With the conditioning of a charge amplifier, the piezoelectric load cell measures wide ranges particularly well.

Hydraulic Load Cells

The hydraulic load cell works in combination with a cylinder and a diaphragm-covered piston. Oil fills the load cell, and the piston moves in response to the load, increasing the pressure of the oil. The pressure is transferred through a hose to a gauge, which reads the hydraulic pressure. Due to the lack of electrical parts, a hydraulic load cell can safely be used in hazardous environments.
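The pressure-to-force relationship behind a hydraulic load cell is simple to sketch; the 500 psi reading and 2-inch piston diameter below are hypothetical numbers:

```python
import math

# A hydraulic load cell balances the applied force with oil pressure
# acting on the piston: force = pressure * piston_area.
def force_from_pressure(pressure_psi, piston_diameter_in):
    """Estimate the applied force (lbf) from gauge pressure and piston size."""
    area_in2 = math.pi * (piston_diameter_in / 2.0) ** 2
    return pressure_psi * area_in2

# Example: 500 psi read on the gauge with a 2-inch diameter piston
print(round(force_from_pressure(500.0, 2.0)))  # about 1571 lbf
```

In practice the gauge is simply calibrated in force units, but the same geometry underlies the reading.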

Pneumatic Load Cells

The pneumatic load cell is made to control the balance of pressure. One side of the diaphragm is exposed to the pressure of air, which travels through the under-nozzle of the load cell. An attached gauge measures the pressure in the load cell.

Load Cell Shapes

S-Beam Load Cells

You can identify S-Beam load cells — alternately known as Z-Beam load cells — by their shape, which resembles the namesake letter. Used primarily in applications that involve tension, S-Beam load cells provide accuracy when the weighing system is suspended or hung.

In addition to their high precision, S-Beam load cells enjoy popularity due in large part to their affordability and ease of setup. However, S-Beam load cells are made exclusively for in-line applications and tend not to perform accurately with extraneous loads.


Beam Load Cells

You can use beam load cells, named after their rectangular shape, for everything from static weight and dynamic weighing to hopper weighing, silo weighing and tank weighing. Representing a broad category, beam load cells can be subdivided as follows:

  • Bending beam. Designed for bench scale applications, bending beams need to be set up with utmost precaution, otherwise side-loading could occur. Constructed from aluminum alloy, bending beams are ideal for low-capacity operations in the range of 1 to 500 kg.
  • Shear beam. Designed with an internal shear mechanism — which safeguards the cell from side-loading — shear beams come in single-ended and double-ended varieties. Medium-capacity applications generally employ the single-ended type, while applications that call for higher capacity use the double-ended type. Made of nickel-plated carbon-steel alloy, shear beams are resistant to corrosion and optimal for heavy-duty applications.

Overall, bending beam and shear beam load cells are a low-cost option for a range of industrial applications.

Canister Load Cells

Named for their canister-like shape, canister load cells date back to the very beginning of the strain gauge load cell. In contemporary use, canister load cells have become a common choice for compression applications with capacity requirements of 100,000 lbs. or more.

Pancake Load Cells

Applications that involve high precision use the pancake load cell, alternately known as the Low Profile load cell. Pancake load cells come in one of two designs — those with bending beams and those with shear struts. The majority of pancake load cells feature a mounting rig and female center-thread, which makes them suitable for compression and tension applications.

Button Load Cells

As one of the smaller load cell designs, button load cells received their name from their raised center-button. Due to their compactness, button load cells are ideal for applications in narrow, confined settings. Button load cells are especially popular in the medical sector, where operating space comes at a premium. Applications in the automation industry also benefit from the small design of button load cells.

Thru-Hole Load Cells

Alternately known as donut load cells due to their Lifesaver-like shape, thru-hole load cells make an ideal choice for applications that involve the use of clamp force or employ the measurement of bolt force. Designed for high stiffness, thru-hole load cells provide utmost accuracy in press-load and off-center applications.

Adverse Loading Conditions

Consider the following when dealing with adverse loading conditions:

  • Overloading capacity. Determine this first in order to know the level at which the load cell must be proof loaded; otherwise, the risk of damage could be too high.
  • Dynamic loads. A load cell rating must take into account the persistence, speed and pressure of the dynamic loads, otherwise you might fail to determine the overload capacity and leave the load cell vulnerable in the event of high dynamics.
  • Fatigue loading. You must determine the per-cycle load level along with information about cycle frequency and totals throughout the life of a load cell. Otherwise, the succession of cycles endured by the load cell could result in fatigue loading.
  • Off-axis loading. The accuracy and wellbeing of a load cell can be compromised by off-axis loading, which can occur in applications where the load does not conform to the design axis. You can counter the problem with the use of bearings and load buttons, while some load cells resist the problem entirely.

Contact Cooper Instruments Today for Your Load Cell Needs


Since 1988, Cooper Instruments & Systems has been a leading supplier of load cells used for applications that involve air and liquid pressure monitoring, compression testing, data acquisition, insertion forces, overload monitoring, peel force measurement, process control, tensile testing, torque measurement and a whole lot more.

At Cooper Instruments & Systems, we offer equipment for use in the manufacturing sector for the purposes of safety and quality control. Many organizations use our sensors in applications to take measurements on the pressure that results from physical processes.

For example, a construction company could monitor the amount of pressure that goes into effect during a mechanical lifting operation with one of our sensors, to ensure the crane bearing the weight does not become overloaded. Meanwhile, makers of food products accurately weigh ingredients and ensure the proper balance of additives with our sensors.

At Cooper Instruments & Systems, we provide the best customer service in our industry on our full range of items, which include products from competing companies in addition to our own product line. By contrast, most manufacturers only carry their own line, and this forces many customers to go to multiple sources to purchase all the necessary components for a load cell system.

With Cooper Instruments & Systems, you can buy everything you need in one purchase. Remember, the best way to choose a load cell is to determine all the necessary components at once so you can buy them at the same time from the same source, as this will ensure compatibility between all the parts to your system.

The capacity of our load cells ranges from 0-10 grams to 0-3 million lbs. Companies around the world use our load cells, including those in the automotive, aerospace, construction, education, energy, manufacturing and medical sectors, in addition to a range of other industries.

Contact Cooper Instruments & Systems today for more information on the best load cell for your application or to request a quote. We can find what you are looking for and offer custom solutions that will meet your needs.

The AP 3000 Dial Mechanical Dynamometer


The Dillon AP Dynamometer (Cooper part number AP 3000) was first brought to market in 1936 and has since become the market leader in dynamometers. Its simple, mechanical form offers great versatility, and it is used in many diverse jobs such as suspended weighing, mounting cables for bridges, adjusting tension on guy wires and field testing of ropes, chains and wire.

The rugged build of the product makes it suitable for almost any environment; it has been put through its paces by the military on vehicles, submerged under water and in harsh field environments.

Read more in the product’s full specifications

As always, if you have any questions related to this material, our support staff at Cooper Instruments is available to help. Contact them by calling (800) 344-3921 or by email.

History of Force Measurement in the US – Part 23

Dr. Briggs proceeded slowly and cautiously when it came to the atomic issue, but one week before the attack on Pearl Harbor, there was an official recommendation made to commit to the production of an atomic bomb. Less than 2 weeks after Pearl Harbor, a timeline was established that would result in the production of a finished bomb by January 1945. Five promising approaches to bomb production were identified and all were to be explored through the pilot plant stage. The Bureau’s role at this point became “the development of analytical procedures for controlling the purity of critical materials in the reactors and in the bomb.” In this capacity, the Bureau received almost 9,000 material samples on which they performed almost 30,000 analyses.

Work on the bomb progressed through research performed in the military, governmental and educational sectors to the point of the assemblage of a dream team of theoretical and experimental physicists, mathematicians, armament experts, specialists in radium chemistry and in metallurgy, specialists in explosives and in precision measurement at Los Alamos, New Mexico. The Bureau contributed a group from its proximity fuze program and a team dedicated to the purification of U235 scrap so it could be used again. Finally, in July 1945, the bomb was tested successfully.

Though overshadowed by the immensity of the development of the atomic bomb, the WWII era yielded two other amazing advances – namely, the airburst proximity fuze and radar. The airburst proximity fuze allowed for bombs to be detonated in the air prior to impact with the ground which greatly enhanced their destructive power. Using this technology, detonation occurs when radio waves emitted are reflected back to the device with sufficient intensity to indicate close proximity to a large object triggering an electronic switch to initiate the detonation.

The Bureau became involved in work on this type of fuze after the NDRC (National Defense Research Committee) assigned the research to the Department of Terrestrial Magnetism at the Carnegie Institution of Washington in 1940. Within the Bureau, the work fell to the team that had previously constructed the radiosonde and radiotelemeter. Within 6 months, they determined that different types of radio fuze would be necessary for rotating projectiles (used by the Navy in antiaircraft guns) and nonrotating ones (for the Army and Air Force to use with bombs, rockets and mortars). Only the work on the nonrotating element fell to the Bureau, which focused on the potential of either an acoustic fuze or a photoelectric fuze. After eliminating acoustic and other methods, testing began using the Doppler effect of reflected radio waves. By early 1941 they had achieved proof of concept, but it took almost 2 more years to develop the fuze to the point of being used by the military in combat operations.

Testing and development continued, and the program outgrew its laboratory space at the Bureau in 1942. In December of that year, with requests for additional fuze types and other related projects, the Bureau consolidated the various projects into the ordnance development administration. Since the face of the war was constantly changing, the fuze projects were as well. A project established to meet one threat might have to be retooled as that threat gave way to another. Fuze designs had to be tweaked to accommodate different types of exploding weapons. Consider, for example, the differences presented by the dry battery used in the bomb fuze as opposed to the power source for the rocket fuze. The dry battery was far more temperature sensitive and had a very short shelf life. These limitations meant an alternate power source had to be found, ultimately resulting in a small generator being fitted to the spinning vane of the conventional bomb fuze. This solution all but eliminated the problems of shelf life and temperature sensitivity, and also made the bomb safer to handle, since it wouldn’t detonate unless sufficient wind passed the vane (as in a drop) to produce enough power to trip the fuze.

All of the advances in fuze and detonation technologies dramatically increased the destructive power of exploding weapons used by the U.S. during WWII. In fact, the technology was so powerful, that use of the fuzes was forbidden in circumstances where the enemy might be able to recover a fuze for later analysis or identify its nature simply by observation. For example, bombs incorporating the proximity fuze were not used for D-Day for fears that a fuze might be recovered from the beach at Normandy. As the war neared its end, fuze plants were “monopolizing 25 percent of the total facilities of the electronic industry and 75 percent of all molding plastics firms.”

**The information presented here is drawn from “Measures For Progress: A History of The National Bureau of Standards” (Rexmond C. Cochrane)

As always, if you have any questions related to this material, our support staff at Cooper Instruments is available to help. Contact them by calling (800) 344-3921 or by email.

We’d love to hear your feedback regarding this or any other article we’ve posted. To leave feedback, ‘Like’ us on Facebook and then post your feedback on our wall.

History of Force Measurement in the US – Part 22

Based on work out of Germany, France and Finland, and at the request of the US Weather Bureau, two researchers of the Bureau’s electrical division began an endeavor to devise a practical system of radiometeorography for the weather service. A similar request was made by the aerological division of the Navy’s Bureau of Aeronautics, this one being researched by a team from the radio laboratory. This second team’s offering seemed better suited to both requests, so the duo from the electrical division fitted the device they had developed with Geiger counters and began launching them 20+ miles up to gather cosmic ray data. Their findings would impact thinking on radiation and the effect of cosmic rays on radio communication as well as the study of atomic structure. Using data gathered from 18 launches of their device, Leon Curtiss and Allen Astin confirmed international reports proposing that the greater part of cosmic-ray phenomena was caused by secondary effects within the Earth’s atmosphere.

The team from the radio division, meanwhile, successfully devised a unit that transmitted continuous data on cloud height and thickness, temperature, pressure, humidity, and light intensity in the upper atmosphere. Dubbed the “radiosonde”, the device was effective at 15+ miles up and at distances up to 200 miles. By 1940, it completely changed the US weather and meteorological services with 35,000 units being built and launched each year.

During the 1930s and 1940s, the Bureau was party to nearly every expedition sponsored by the National Geographic Society, including visits to the polar regions and balloon flights 14 miles into the stratosphere. The Society and the Bureau co-sponsored an expedition to the USSR to observe and photograph the 1936 solar eclipse, capturing the first-ever natural color photographs of an eclipse using a 14-foot camera conceived and constructed at the Bureau. Both the camera and the Bureau would participate in several other solar eclipse expeditions around the globe over the next few years. Dr. Briggs even organized an eclipse expedition to Brazil in 1947 that comprised 76 researchers from the Bureau, armed forces and National Geographic Society.

Concurrent with this atmospheric research, huge breakthroughs were made across the world in the fields of physics and atomic research. The Bureau’s first studies in this vein were into atomic chemistry, not physics. The existence of isotopes (atoms of the same chemical element with different atomic weights) had been discovered, but researchers were having difficulty finding a heavy isotope of hydrogen using the existing technology. The Bureau stepped in to suggest use of its cryogenic lab to study liquid hydrogen where experiments confirmed the existence of the proposed heavy hydrogen isotope.

In a series of discoveries by American and European scientists, the existence of neutrons was confirmed and the first nuclear reactions were performed. Enrico Fermi experimented using uranium with an atomic weight of 238 and bombarding the atoms with neutrons to split the nucleus, but his results were inconclusive. Later experiments by others confirmed that the same isotope of uranium could be split and finally that it could be split into two nuclei of roughly equal size while producing enormous quantities of energy in the process. These findings were relayed to Albert Einstein by Niels Bohr, who also informed him that Hitler had control of the only known source of uranium ore and had placed an embargo on it.

This news and its significance were conveyed to President Roosevelt, who immediately sought the advice of Dr. Briggs at the Bureau. Within a week, Dr. Briggs was chairman of the newly formed Advisory Committee on Uranium. The Committee’s task was to investigate uranium fission (faster than Nazi scientists could). Less than a month from Einstein’s initial letter to the President, the Committee issued a report indicating the distinct theoretical possibility of a chain reaction that would produce enough energy for an explosive weapon or to power a submarine.

As the Second World War began in Europe, and recognizing the potential implications of researching nuclear fission, Dr. Briggs hesitated as to what he and the Bureau should do next. Was this a line of research he and his organization would or should pursue? The Committee was absorbed, renamed and absorbed again into a series of other national defense programs created as Nazi Germany continued its European conquests, before finally becoming inactive under the umbrella of the Manhattan District division of the Army Corps of Engineers in 1942.

**The information presented here is drawn from “Measures For Progress: A History of The National Bureau of Standards” (Rexmond C. Cochrane)

As always, if you have any questions related to this material, our support staff at Cooper Instruments is available to help. Contact them by calling (800) 344-3921 or by email.

We’d love to hear your feedback regarding this or any other article we’ve posted. To leave feedback, ‘Like’ us on Facebook and then post your feedback on our wall.

History of Force Measurement in the US – Part 21

With much of the US in denial, a group of foreign-born scientists led by Niels Bohr foresaw the country’s eventual involvement in WWII. Bohr, for example, urged a moratorium on publication in the Allied countries of research related to nuclear fission. It was almost a year before the scientific community truly heeded Bohr’s warnings. Dr. Briggs, from his position on the Advisory Committee on Uranium, began to prepare himself and his agency for the possibility of war. Briggs prepared for the Department of Commerce a list of services the Bureau was prepared to offer “in the event of war.” Among these: to test all materials to be purchased under the Strategic Materials Act, to increase its output of optical glass, to certify US materials sent abroad (especially instruments, gages, metals and cement), and more. Dr. Briggs also included with his memorandum a copy of “The War Work of the Bureau of Standards,” which detailed the Bureau’s contributions during WWI.

The country as a whole was totally unprepared for a new war – the armed forces had outdated equipment (and that in short supply) while much of the nation was still facing the high unemployment and sluggish manufacturing of the Great Depression. The general mood of the country was against involvement in the war (as evidenced by the 1940 Democratic Party Platform) and thus mobilization to prepare for war was slow. In taking on projects related to wartime preparation, the Bureau was forced to begin classifying much of its research. As a result, the annual reports from the Bureau became restricted to only nonconfidential research. By 1942, so much of the material was classified that there was no point in printing the annual report at all. The sensitive nature of the work being done at the Bureau also led Dr. Briggs to close the laboratories to visitors, fence in the property and close Van Ness Street, which ran through the site. By the beginning of 1942, 90 percent of Bureau staff were dedicated to war research and Military Police patrolled the “prohibited zone” that was the Bureau grounds.

That the Bureau would be tasked with testing the strength and properties of material like metals used for weapons, airplanes and the like or with finding materials that could be substituted for those in short supply as a result of the war would seem obvious. There were also more obscure aspects of war to be considered, however. One interesting example is the Bureau’s participation in a “joint Army-Navy program to determine the characteristics of sky glow from artificial sources and the extent to which sky glow and shore lights might aid hostile ships offshore.” Among other priority Bureau projects during the early part of the war were research on petroleum conservation (because oil tankers were great targets for enemy submarines) and the production of synthetic rubber. Gas was rationed (to save the rubber in car tires more than to save gas), resulting in numerous citizen inventions intended to save gas being submitted to the Bureau for testing.

Thanks to the war, the Bureau’s staff would increase by more than 238 percent from 1939 to 1945, including over 200 members of the armed forces. Even more dramatic, funding increased from $3 million just prior to US entrance into the war to $13.5 million by 1944. To accommodate the huge demand for testing and the now huge staff, all of the Bureau’s conference and lecture rooms were converted to laboratories and 2nd and 3rd shifts were introduced to make maximum use of the space and equipment. The standard work week was also lengthened from 39 hours before the war to 44 hours.

The Bureau continued to be involved in the development of the atomic bomb by testing the purity of uranium and other elements. While many at the Bureau suspected that a weapon using uranium might be under development, the secrecy ran so deep and the security was so tight that even researchers working directly on the project sometimes failed to realize what the end-game might be, thinking instead that the uranium would be used for power plants to power planes or submarines.

**The information presented here is drawn from “Measures For Progress: A History of The National Bureau of Standards” (Rexmond C. Cochrane)

As always, if you have any questions related to this material, our support staff at Cooper Instruments is available to help. Contact them by calling (800) 344-3921 or by email.

We’d love to hear your feedback regarding this or any other article we’ve posted. To leave feedback, ‘Like’ us on Facebook and then post your feedback on our wall.

Blog: A Perpetual Dilemma: Rent or Buy?

In this article by Robert Preville (Founder & CEO, Kwipped), the author discusses the potential benefits of renting lab equipment, as opposed to buying. Did you know that Cooper Instruments & Systems offers equipment rental options? The article expands on the following advantages renting can offer:

  • Control cash flow
  • “Try before you buy”
  • Bring in job-specific equipment
  • More access to new technology

Click here to view the white paper.

As always, if you have any questions related to this material, our support staff at Cooper Instruments is available to help. Contact them by calling (800) 344-3921 or by email.

We’d love to hear your feedback regarding this or any other article we’ve posted. To leave feedback, ‘Like’ us on Facebook and then post your feedback on our wall.

Q&A: Calibration Curve

Following is the transcript from a question and answer session conducted with a Cooper Instruments technical engineer.

Q: What is a calibration curve?

A: A calibration curve is the performance curve of the transducer; that is to say, its output in relationship to the range of applied loads from a no load condition to that of a full scale load.

If you were to measure the output of a 100 LBF load cell and record the output at 0, 25, 50, 75 and 100 LBF, theoretically you would have a curve much like the one in Fig 1. As this is a perfect example, the performance curve is a straight line. In real-world conditions, however, there will always be a slight error that prevents a perfectly straight line, but depending on the amount of error, the performance could still be accepted as straight. Whether the performance is good or not, the resulting representation is considered a “curve”.

Q: How is it calculated?

A: As shown above, the calibration curve is not calculated but rather measured. Some calibration points are calculated if, during a calibration, the lab is unable to apply certain data points or force points. In cases like this, the laboratory can turn to statistical tools to calculate what the performance is most likely to be at these unmeasured points. It is, however, always best to stay away from extrapolating data unless absolutely necessary.

Q: Most indicators offer a 2-point calibration: what does that mean?

A: An indicator is a device that allows the user to equate a desired engineering unit to the output of the transducer. Most indicators allow for what is called a 2-point calibration, meaning you can define only 2 points on the calibration curve and the meter assumes a straight (linear) curve. So, if you follow the recommendations in the manual, you will use a no load condition of your transducer and define it as your zero load point. The second point is usually your full load point, and in the case found in Fig 1, the full scale would be defined as 100. Consequently, when the meter reads exactly half the value of the full scale point, the meter will report 50.
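The 2-point calibration described above can be sketched as a straight-line mapping between the two defined points; the millivolt readings used here are made-up illustrative values:

```python
# 2-point calibration: the meter assumes a straight line between the
# zero-load reading and the full-scale reading.
def two_point_calibrate(raw, raw_zero, raw_full, full_scale=100.0):
    """Convert a raw reading to engineering units, assuming linearity."""
    return (raw - raw_zero) / (raw_full - raw_zero) * full_scale

# Example: 0.0 mV at no load, 2.0 mV at 100 LBF full scale.
# A reading exactly halfway between the two points reports half scale.
print(two_point_calibrate(1.0, 0.0, 2.0))  # 50.0
```

Any real reading between the two points is simply interpolated along this assumed straight line, which is exactly where the limitation discussed next comes from.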

Q: What are the limitations of a 2-point calibration?

A: The limitation of a 2-point calibration is that it assumes your transducer is performing at a theoretical (perfect) level. You’ve heard the saying “nature abhors a straight line”? It is a very true saying, and it applies to transducer performance just as much as it does to rivers and streams. The effect of the lack of linearity depends on the size of the errors and on the user’s accuracy requirements.

Let’s assume that the 100 lbf transducer in our example performs with a nonlinearity specification of ±1% of full scale at half scale, and that the 2-point indicator has been calibrated at 0 lbf and 100 lbf. When 50 lbf is applied, the display will read 50, but the actual force is somewhere in the range of 49 to 51 lbf. The 2-point calibration thus hides the inaccuracy of the performance curve.
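The arithmetic behind that example, with nonlinearity taken as a percentage of full scale, works out as follows:

```python
# Numbers from the example above: a 100 lbf cell with ±1%-of-full-scale
# nonlinearity, and an indicator calibrated at only 0 and 100 lbf.
full_scale = 100.0      # lbf
nonlinearity = 0.01     # ±1% of full scale
displayed = 50.0        # what the 2-point indicator reports at half scale

band = full_scale * nonlinearity            # 1 lbf of possible error
lo, hi = displayed - band, displayed + band
print(lo, hi)  # 49.0 51.0 — the true applied force lies somewhere in between
```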

So, in short, the limitation of a 2-point calibration is the inability to compensate for nonlinear-performing load cells, yielding inaccurate readings.

**It’s important to note that a load cell will always perform within OEM specification limits when new. However, after prolonged use and age, the performance will begin to degrade, and a 2-point calibration may not be able to compensate for this change.

Q: How does using a multi-point calibration linearize a load cell’s performance?


A: Now that we know there can be some amount of error between 2 points of a curve, it stands to reason that if we shorten the distance between the points, the errors between those same points will be smaller.

So, if we calibrate our meter with more than 2 points, we begin to reduce the measured errors and improve the performance of the load cell system. In Fig-2 below you can see the same 2-point calibration curve as the blue dotted line. Compare this curve with the individual data points: the error grows up to the halfway point and then shrinks as it approaches the highest data point.

If we were to use a 5-point calibration in this instance, you would get a performance more along the lines of what the black line represents. Fig-3 demonstrates the new calibration curve with the red dotted line when the 5-point calibration is applied.
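A multi-point calibration like the 5-point one described above amounts to a piecewise-linear map, which can be sketched as follows. The raw mV/V values here are hypothetical; a real table would come from a laboratory calibration:

```python
import bisect

def multi_point_calibration(raw_points, force_points):
    """Piecewise-linear map from raw transducer output to force, built from
    measured calibration pairs (raw_points must be sorted ascending)."""
    def to_force(raw):
        # Find the calibration segment this reading falls in.
        i = bisect.bisect_right(raw_points, raw)
        i = max(1, min(i, len(raw_points) - 1))
        r0, r1 = raw_points[i - 1], raw_points[i]
        f0, f1 = force_points[i - 1], force_points[i]
        # Interpolate linearly within that segment only.
        return f0 + (raw - r0) / (r1 - r0) * (f1 - f0)

    return to_force

# Hypothetical 5-point calibration of a 100 lbf cell with a slight bow
raw = [0.0, 0.490, 0.995, 1.500, 2.0]   # mV/V, as measured in the lab
lbf = [0.0, 25.0, 50.0, 75.0, 100.0]
to_lbf = multi_point_calibration(raw, lbf)
```

Between each adjacent pair of calibration points the meter still assumes a straight line, but because the segments are short, the residual error is much smaller than with a single zero-to-full-scale line.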

Q: How does a linearized performance prolong the useful life of a load cell?

A: As the load cell ages with use, its linear performance will decay. With a 2-point calibration, there will come a point where the load cell no longer maintains the performance needed, and eventually the errors cannot be compensated for.

With the use of an indicator with multi-point calibration functionality, the errors can be compensated for and performance can be maintained.

Q: What solutions can Cooper Instruments offer when an indicator with 2-point calibration is not sufficient for the user’s application?

A: Cooper Instruments offers several products with multi-point calibration capability.

The DFI INFINITY B, M3, M5 and 7i all allow for multi-point calibration. We also offer the DSC USB, which has software that allows for multi-point calibration. Our sales representatives would be happy to help you select the right product for your application.

As always, if you have any questions related to this material, our support staff at Cooper Instruments is available to help. Contact them by calling (800) 344-3921 or emailing

We’d love to hear your feedback regarding this or any other article we’ve posted. To leave feedback, ‘Like’ us on Facebook and then post your feedback on our wall.

History of Force Measurement in the US – Part 20

So, the last installment of the series left off with the establishment of the Mathematical Tables Project, which, by 1943, had produced 27 book-length tables as well as many shorter ones. The thirties also gave rise to an undertaking to identify and quantify the physical constants of pure substances, especially of industrially important organic compounds. Importing a method devised by a scientist at the Polytechnic Institute of Warsaw, Bureau chemists researched a number of substances by determining their vapor pressure, boiling point and more.


Thus, as mentioned in a previous installment, although the Great Depression brought with it reductions in staff and funding, as well as other hardships, the reduced bureaucracy of the time allowed the Bureau staff who remained to focus their energies on some much-needed fundamental research that would serve as the building blocks for years to come.


In September 1933, two Bureau researchers, Burt Carroll and Donald Hubbard, were awarded medals by the Société Française de Photographie et de Cinématographie in recognition of their contributions to the world of photo-sensitive emulsions. The Bureau’s involvement in this field began in 1921 with the need for emulsions sensitive to infrared spectra for which commercially available film was unsuited. With German equipment installed in the basement of the Bureau’s chemistry building, Carroll and Hubbard set to work on creating a better film. For 7 years, their efforts were largely futile with sometimes over 400 batches of emulsion made in a single year. By 1933, however, the two were publishing their 17th report on the mechanism of photographic hypersensitivity. They were finally creating emulsions superior to commercial ones and in publishing their methods, they would threaten trade secrets of those commercial producers. Therefore, when budget cuts were made, the emulsion project was among the first to go as one of seven projects which the Visiting Committee specifically targeted (the others were heavy hydrogen research, dental cements and alloys, certain industrial concerns, internal combustion engines, production methods for levulose and the design of a telephoto astronomical objective).


In 1933, Congress made its biggest reduction in Bureau appropriations with a cut of 54 percent, which affected over 100 projects. Particularly hard hit were the projects involving automotive engines (over 40 different projects), which had become unpopular with the auto industry when, for example, the Bureau deemed one manufacturer’s engine superior to the others. Also due to budget concerns, the Bureau surrendered work on standardization and specifications to the American Standards Association. Amid backlash from the industrial community at the change, it was agreed that the Bureau would continue to cooperate with the ASA.


Around the same time, the Bureau and Dr. Briggs were embroiled in lawsuits regarding the issuance of patents to Bureau researchers. The practice under Dr. Stratton had been that patentable material would be patented in the name of the Government and would be for public use. This method was challenged in 1922 by two researchers of the radio section, Percival Lowell and Francis Dunmore, who developed a method by which radios could be operated by current rather than the traditional batteries. This innovation fell outside their assigned field of research, and as such, they filed three patents in their own names relating to the technology. In response, a formal policy regarding patents was devised, explicitly stating that patents for inventions and discoveries of Bureau employees would be registered to the Government. The District Court of Delaware later decided in favor of Lowell and Dunmore because the invention was not part of their assigned work. An appeal to the US Circuit Court upheld the District Court’s decision, as did a further appeal to the Supreme Court, which was decided in 1933 in favor of the inventors.


While the funding cuts were made bitterly, Dr. Briggs did acknowledge that some programs had become entrenched, not because they were useful or truly merited ongoing research, but simply because all possible angles of research had not yet been completely exhausted. The reductions in staff and resources forced various projects, such as radio research, down to their most important aspects.


**The information presented here is drawn from “Measures For Progress: A History of The National Bureau of Standards” (Rexmond C. Cochrane)



History of Force Measurement in the US – Part 19

Finally, in 1935, the Bureau could document an increase in requests from industry for data. This coincided with increased building at the state and federal levels which brought an increase in government requests for tests and calibrations (as well as a modest increase in funding, sufficient to rehire former staffers). In 1938, Congress approved construction of a new electrical testing laboratory to replace the obsolete one built 25 years earlier when voltage ranges were much lower than those being produced in the late ‘30s, further evidencing the improving economy. Thanks in large part to new dam-building projects across the country, the opening of new branch laboratories also increased during the late ‘30s.


Efforts to stimulate the economy through low-cost housing also led to Bureau funding for research into the structural and fire-resistant properties of construction materials to be used for housing. This program and its funding were cut from New Deal sponsorship as WWII approached, but the work continued at the urging of the building industry. After a hiatus during the war, building technology became its own division within the Bureau in 1950.


Also during the 1930s, the Bureau completed research relating to the preservation of paper records. The work, funded by the Carnegie Foundation, tested the effects of such forces as light, heat and humidity on storage of paper and books. Sulphur dioxide was determined to be the greatest enemy of paper storage. The work led, in turn, to studies on the preservation of all types of media and to the Bureau’s involvement in the preservation of the Declaration of Independence and the Constitution at the National Archives.


Another interesting line of study at the time related to X-ray dosages and ultraviolet radiation. Although both technologies were becoming quite widely used by medical professionals, the thresholds of safe and unsafe exposure were not really understood, particularly for the equipment operators as opposed to the patients. At the urging of the president of the Radiological Society of North America, Congress provided funding and the Bureau began to research the issue. Physicist Lauriston Taylor, who had been working on X-rays and electronics at Cornell, was brought onto the Bureau staff to lead the work.


Taylor’s first order of business was to construct new equipment for the testing, which he did from parts of other equipment on hand at the Bureau. In 1928, he attended the Second International Congress of Radiology and became the first Chairman of the National Committee on Radiation Protection and Measurements. Taylor published research in 1929 showing that X-ray dosages could be quantitatively measured, and in 1931 he published guides for safety shielding of operating rooms, patients and operators. Similar publications for radium, at the hand of Dr. Leon Curtiss, followed in 1934.


Paints made from compounds including radium were developed for their luminous properties, for application on military instrument panels during WWI and on watch faces. Little was known about the effects of the radium paint at the time. It was later determined that the amount used on a watch face was harmless; the problem was the factory application of the paint during production. In wartime, the factory workers were mostly young women, who put their paintbrush tips in their mouths to draw them to a point, thereby ingesting the paint. Hundreds of these workers died of what was later diagnosed as radium poisoning. In 1932, the American Medical Association discontinued all internal administration of radium as a remedy. The Bureau’s research on the topic appeared in the 1932 handbook on radium protection, and in 1941 the subject had a handbook of its own.


Also during the 1930s, work advanced in spectroanalysis with new and accurate measurements of the atomic emission spectra of chemical elements, rare gases, and rare metals. An index was published by the American Society for Testing Materials that listed almost 1,000 papers on the subject written during the preceding two decades. Dr. Briggs also proposed that the Bureau sponsor a central agency for computing fundamental tables for applied mathematics. With basically no equipment provided, the project began in New York City with hundreds of workers doing calculations by hand. The first order of business? To prepare the 16-place values of natural logarithms, the 15-place values of probability functions, and the 10-place values of Bessel functions of complex arguments. Within a decade, equipment existed to compute in minutes what 400 individuals with pencils did in months, but the Mathematical Tables Project was widely and gratefully recognized at the time.


**The information presented here is drawn from “Measures For Progress: A History of The National Bureau of Standards” (Rexmond C. Cochrane)



History of Force Measurement in the US – Part 18

We left off with the Bureau facing three simultaneous investigations into its functions and budget. All three investigations resulted in the conclusion that it was necessary for the Bureau to continue performing essential testing whether specifically noted in its organic act or not. The investigations also pointed out that the Bureau had been hit disproportionately hard by funding cuts and staff reductions during the Depression, although they differed in their recommendations of how the Bureau should move forward. While no official change was made to formalize Bureau authority to perform testing deemed by some to be outside its scope, tacit approval was given by the new appropriations of 1935, which replaced 29 appropriations items with one act divided into four general funds, one of which included a provision for the Bureau’s continued involvement with the ASA and work on standards for commerce.

As the Depression dragged on through the 1930s, Dr. Briggs repeatedly contended that new inventions would stimulate industry, the consumer and the economy. One stimulus strategy employed at the time was the idea of transforming the nation from a production-based economy to a consumption-based economy, or ushering in “consumerism.” The National Recovery Administration, set up by the Roosevelt administration in 1933, would be advised by an Industrial Board, a Labor Advisory Board and a Consumers’ Advisory Board to ensure maximum benefit to all parties from new legislation concerning issues like minimum wage, work hours and price regulation, to the end of stimulating consumption.

The task of the Consumers’ Advisory Board was to promote the use of specifications and labeling in consumer products through NRA code, but some believed agencies like the ASA and the Bureau of Standards incapable of making such recommendations because of their perceived propensity to favor industry over the consumer. Despite this concern, there were no reasonable grounds for setting up an independent laboratory for the task, so over the NRA’s 2-year life, the Bureau reviewed almost 500 codes for fair competition involving consumer standards.

From the late 1920s to the late 1930s, a number of agencies, publications and laboratories dedicated to consumer education appeared, but they lacked the organization to form a unified national force. The Bureau, for its part, had difficulty working with these consumer groups because its organic act legally oriented its efforts toward industry. Nevertheless, the Bureau recognized the benefit of performing consumer testing within a single institution under the Federal Government, but thought the creation of a consumer testing agency unlikely, as Congressional funding would probably not be approved. The Bureau cooperated with the consumer movement by advising consumer laboratories on their test instruments and equipment, developing new testing equipment, and issuing publications geared toward the individual consumer. One such publication was entitled “Services of the National Bureau of Standards to the Consumer,” which explained how the Bureau’s efforts benefitted the individual.

Even though it was a challenging period, the Depression created lulls in committee assignments and travel, as well as a drop in supervisory duties, which left more time for actual research. The Bureau was forced to make staffing cuts and could not hire qualified scientists, but its funding did provide for “clerks” and “draftsmen.” It was also during this time that Dr. Briggs won his battle to add the word “National” back to the Bureau’s name after 30 years of its being simply the “Bureau of Standards.”

**The information presented here is drawn from “Measures For Progress: A History of The National Bureau of Standards” (Rexmond C. Cochrane)
