What is in this article?:
- NFPA zeros in on reliability
- The Weibull method
The National Fluid Power Association will soon be publishing a comprehensive recommended practice on component reliability. Results based on T2.12.11-2 should be useful for designers of equipment controlled by fluid power.
What is reliability? Of course, technical definitions exist, but what is reliability in practical terms? We can all identify with some common examples:
• your car starts right away on a cold morning
• your flight departs and arrives on time
• your restaurant order meets or exceeds all your expectations, and
• all the parts of your child’s ready-to-assemble toy are in the box — including sensible instructions.
Okay, maybe some of these aren’t so common, but if reliability is a number from zero to one — a percentage — how would you grade these experiences? How would you grade some of the products you purchase? Now, how do you think your customers grade your own products’ reliability?
That last question might be a little more difficult, and we may be a bit more defensive about it. If we wanted to be fair, some guidelines could be established to make a uniform judgment — industry standards to determine the reliability of fluid power products.
NFPA began the formidable task of developing such standards and recommended practices back in 2001, and some of them are now available. The data behind these standards can come from laboratory measurements or from field use — say, a hydraulic component in a directional boring machine. Either way, the analysis rests on the theory of reliability, which includes heavy doses of statistics. Therefore, this article presents a simple, non-mathematical explanation of the statistics on which the published and ongoing NFPA standards and recommended practices are based.
As stated in the new standard and recommended practice, “Reliability is associated with dependability and availability, successful operation and performance, and the absence of breakdowns or failures. Failure occurs because of manufacturing defects, misapplication of product, inadequate maintenance, cumulative wear and degradation, design deficiencies, and random chance.” This means that although some of the responsibility for a component’s reliability lies with the component manufacturer, much of it lies with the designer of the machine in which the component is used and the end user.
Reliability and failure statistics
Reliability statements about products are obtained from samples — either observed in an application or measured in a laboratory. So when a reliability statement is made, it is a prediction of the performance of similar products based on those samples. But predictions are not always accurate; consider a weather report. These reports suggest a probability of rain — not a specific statement that it will or will not rain tomorrow. Likewise, reliability of a fluid power product is a probability that it will perform a specific function, in a specific environment, for a specific duration. That's a lot of specifics!
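The Weibull method named at the top of this article is the usual way to turn that probability statement into a number. As a rough illustration only — the parameter values below are invented, not drawn from any NFPA document — the two-parameter Weibull reliability function gives the probability that a unit survives a stated duration:

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability that a unit survives to time t under a
    two-parameter Weibull model.

    beta: shape parameter (>1 suggests wear-out failures dominate,
          <1 suggests early-life failures dominate)
    eta:  characteristic life (about 63.2% of units fail by t = eta)
    """
    return math.exp(-((t / eta) ** beta))

# Hypothetical component: characteristic life of 10,000 hours,
# shape parameter 2.0 (wear-out behavior).
r = weibull_reliability(5000, beta=2.0, eta=10000)
print(f"Reliability at 5,000 hours: {r:.3f}")
```

Read the result as a prediction about a population, not a promise about one unit — exactly the weather-report sense of probability described above.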
Now consider the variety of conditions in which fluid power products are used, and it becomes obvious that some clarifications are necessary. A Buna-N seal may last a long time at room temperature, but it could fail in a few minutes at high temperatures. But what is failure? Because tests must be run under defined conditions, test standards specify the operating environments. Statistical measures also require a comparison of failures to successes; if no failures occur, no statistics are available — unless you believe in 100% reliability. Two classifications of failure exist: a sudden cessation of function and a gradual deterioration of performance. The first is easy to observe, but the second is subject to interpretation.
The experts developing the NFPA documents tackled this question and settled on specific values for leakage, friction, time delay, and similar measures. It was not easy, and agreement came only after the concept of failure was redefined as a threshold value.
Imagine several competitors gathered to define failure of their products. That question alone could have killed the project before it started. But these participants agreed on a level of performance that, once reached, would terminate a test unit in a laboratory program for statistical analysis. The product would still be functional, but it would be counted as a statistical failure. Thus, statistical failure is a conservative concept for determining the reliability of products.
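Once threshold failures have been recorded in such a laboratory program, the failure times are typically fitted to a Weibull distribution. A minimal sketch of one common approach — median-rank regression, shown here with invented test data rather than results from any NFPA program — looks like this:

```python
import math

def weibull_fit_median_ranks(failure_hours):
    """Estimate Weibull shape (beta) and characteristic life (eta)
    by least-squares regression on median ranks, using Benard's
    approximation for the rank of each ordered failure."""
    n = len(failure_hours)
    xs, ys = [], []
    for i, t in enumerate(sorted(failure_hours), start=1):
        f = (i - 0.3) / (n + 0.4)              # Benard's median rank
        xs.append(math.log(t))                  # x = ln(t)
        ys.append(math.log(-math.log(1.0 - f))) # y = ln(-ln(1 - F))
    # Ordinary least squares for y = beta * x + c
    mx = sum(xs) / n
    my = sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    # eta is the time at which F = 63.2%, i.e. where y = 0
    eta = math.exp(mx - my / beta)
    return beta, eta

# Invented threshold-failure times (hours) from a six-unit test:
times = [1100, 1700, 2100, 2500, 3100, 3800]
beta, eta = weibull_fit_median_ranks(times)
print(f"shape = {beta:.2f}, characteristic life = {eta:.0f} h")
```

The point of the sketch is the workflow, not the numbers: terminate each unit at the agreed threshold, record its hours as a statistical failure, and let the fitted distribution — not any single unit — carry the reliability claim.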