User:David.John.Hartshorne/Total performance characterization
This is not a Wikipedia article: it is an individual user's work-in-progress page, and may be incomplete and/or unreliable.
Total performance characterization (TPC) is a methodology used in both verification and validation of manufactured products ("are you making it right?" and "are you making the right thing?"). It consists of a set of empirical observations that together carry enough information about a system's ability to perform its intended function to support a rapid diagnosis and, ultimately, a full causal explanation of that performance. It is applied both to products and to the processes that make them.
If the system is part of a manufacturing process, the more accessible source of information for a TPC may be the information carried on the process outputs, and this is often adequate for explaining behavior. For any system or device, however, a set of observations showing how well it manages energy will provide a TPC.
Individual observations, or data, may take the form of numbers (measurements), images or even sounds. Observations become information when they are connected within a datum reference framework. The framework is essentially a high-level model of the correct functioning of the system (referred to as topographic[1][2]). A central concept within TPC is the cycle: every function involves a cycle, and every cycle has a datum reference framework in both space and time.
TPC characterizes the actual functioning of a system, which can be compared to a model of the normal functioning of that system in order to expose diagnostic information. Characterizing function is a natural opening move in diagnosing complex systems whose elements have specific functions that are individually recognizable in the overall system response. This method has the virtue of avoiding unnecessary detail in the early stages of diagnosis and of quickly focusing on the problem areas[3]. From a practical perspective, a functional search can be undertaken even when only one hardware system is available to study.
Examples
TPC differs from a process capability study. The purpose of a process capability study is to determine the extent to which a process can meet customer specifications or engineering tolerances, and how stable it is. Process capability is often expressed as the relationship between the required specification width and six standard deviations of the process. It is assessed for individual characteristics over a sample (of at least seventeen parts, usually many more) of the population. These single characteristics do not carry the information needed to explain and improve the process; their purpose is to determine whether process outputs meet specification. TPC applied to process outputs builds on the principles first described by Leonard Seder in 1950[4][5], importantly focusing on the concept of the process cycle and restricting observations to fundamental elements of geometry, avoiding calculated characteristics.
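The capability calculation described above can be sketched with hypothetical data (the measurements and specification limits below are invented for illustration; Cp and Cpk are the conventional capability indices):

```python
import statistics

# Hypothetical diameter measurements (mm) from a sample of 17 parts.
diameters = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.04,
             10.00, 9.96, 10.02, 10.01, 9.98, 10.03, 10.00, 9.99, 10.02]

lsl, usl = 9.90, 10.10               # lower/upper specification limits
mean = statistics.mean(diameters)
sigma = statistics.stdev(diameters)  # sample standard deviation

# Cp compares the specification width to six standard deviations;
# Cpk additionally penalises an off-centre process mean.
cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mean, mean - lsl) / (3 * sigma)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

Note that these two numbers say only whether the outputs meet specification; they carry no explanation of why the process behaves as it does.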
As a simple illustration of the difference between TPC and a process capability study when information is carried on the outputs of a process, consider a round component part with engineering tolerances specified for diameter (maximum and minimum allowed) and for maximum roundness deviation from a perfect circle. To some degree this is a contradiction in terms: if the part is not round, we cannot say what the diameter is, since the recorded diameter depends on where it was measured. Nevertheless, one diameter per part is recorded, perhaps an average of a few measurements. The roundness value reported is the radial separation between two concentric circles: one that just fits inside the actual shape of the part and one that just encloses it. The ideal value is zero, for a part whose shape is a perfect circle. Diameter and roundness become two individual characteristics for each part in a sample of the population, and such characteristics can be subjected to statistical analysis. However, if the relationship between six standard deviations and the required specification is unsatisfactory for either characteristic, it becomes necessary to explain the process behavior.
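The two characteristics can be sketched as follows, with invented radial measurements for one part. This is a simplification: the two circles here are centred on the measurement spindle axis, whereas standards such as ISO 1101 position them to minimise the separation.

```python
# Hypothetical radial measurements (mm) taken at equal angular steps
# around one part, referenced to the measurement-spindle centre.
radii = [5.010, 5.013, 5.008, 5.002, 4.998, 4.995, 4.997, 5.003,
         5.009, 5.012, 5.007, 5.001, 4.996, 4.994, 4.999, 5.005]

# Simplified roundness estimate: the radial separation between the
# smallest circle enclosing the profile and the largest circle that
# fits inside it, both centred on the spindle axis.
roundness = max(radii) - min(radii)

# A single reported "diameter" hides the shape: here it is an average.
diameter = 2 * sum(radii) / len(radii)
print(f"roundness deviation ~ {roundness:.3f} mm, "
      f"mean diameter ~ {diameter:.3f} mm")
```

The two numbers per part discard the angular information; TPC, by contrast, keeps the profile connected to its datum framework.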
A total performance characterization in this simple example starts by taking a single part from one machine. This part is given an identifying mark that orients it precisely with the machine. Typically, the shape of the part is something that repeats non-randomly with respect to the machine if we examine a second or third part, and from this information it is usually a short step to an explanation of how that machine generates both size and shape of the part. It helps the diagnosis greatly if the results are displayed graphically. The analysis could be repeated for many more parts, but that is rarely necessary thanks to the Pareto principle. The rich source of information comes from the data being connected to the datum framework of a cycle of the machine.
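One way the non-random, machine-referenced shape can be drawn out is harmonic analysis of the oriented profile; the sketch below (with a fabricated three-lobed profile) shows the idea. The connection of a three-lobed pattern to, say, a three-jaw chuck is an illustrative assumption, not a claim from the source.

```python
import cmath
import math

# Hypothetical radial profile of one part, sampled at N equal angles,
# oriented by the identifying mark so angle 0 is fixed to the machine.
N = 64
profile = [0.005 * math.cos(3 * (2 * math.pi * k / N))    # 3-lobed shape
           + 0.001 * math.cos(5 * (2 * math.pi * k / N))  # smaller 5-lobe term
           for k in range(N)]

def lobe_amplitude(signal, lobes):
    """Amplitude of the given 'undulations per revolution' via a DFT bin."""
    n = len(signal)
    c = sum(signal[k] * cmath.exp(-2j * math.pi * lobes * k / n)
            for k in range(n))
    return 2 * abs(c) / n

amplitudes = {m: lobe_amplitude(profile, m) for m in range(1, 9)}
dominant = max(amplitudes, key=amplitudes.get)
print(f"dominant lobing: {dominant} undulations per revolution")
# A 3-lobed pattern that repeats part-to-part would point at a 3-fold
# feature of the machine, e.g. three-jaw workholding.
```

Because the profile is oriented to the machine, the same lobe count appearing at the same angle on successive parts ties the shape directly to a machine cycle.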
Another simple illustration shows how a single characteristic and TPC differ when looking at how well devices manage energy when functioning. Consider a simple piston pump. The primary performance characteristic usually specified for such a pump is a volumetric flow when running at a defined speed and against a hydraulic load that is adjusted to provide a defined back pressure. Each pump produced is slightly different, and with an appropriate sample we can see how the flow values are distributed and subject them to statistical analysis.
A total performance characterization of the same pump is also very simple, but it is aimed at understanding behavior. Once again it starts with the characterization of one system (pump). Instead of a single back pressure, however, flow is measured at three or even five back pressures, and the pressure-flow relationship is plotted. The result is the characteristic curve for that particular pump. Again, the analysis could be repeated for many pumps, but that is not normally necessary. A close examination of the curve for a single pump with low flow will reveal much about what causes the flow performance of that specific pump, pointing to the component dimensions driving performance. Importantly, the same is likely to be happening to a large portion, if not all, of the population. Deriving the causal explanation is therefore extremely rapid, and it is simple. Again, displaying the results graphically helps the diagnosis greatly.
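The characteristic curve can be sketched numerically as below. The data are invented, and the diagnostic reading of intercept and slope (displacement versus leakage) is a common interpretation for positive-displacement pumps offered here as an illustrative assumption:

```python
# Hypothetical measurements for one piston pump at fixed speed:
# flow (L/min) recorded at five back pressures (bar).
pressures = [10, 50, 100, 150, 200]
flows = [12.0, 11.6, 11.1, 10.6, 10.1]

# Least-squares line Q = Q0 + s*P.  The intercept Q0 approximates the
# geometric displacement flow; the (negative) slope s reflects how flow
# is lost to internal leakage as back pressure rises.
n = len(pressures)
mp = sum(pressures) / n
mq = sum(flows) / n
s = (sum((p - mp) * (q - mq) for p, q in zip(pressures, flows))
     / sum((p - mp) ** 2 for p in pressures))
q0 = mq - s * mp

print(f"intercept Q0 ~ {q0:.2f} L/min, slope ~ {s:.4f} L/min per bar")
# Low flow with a steep slope points toward leakage paths (clearances,
# seals); low Q0 with a normal slope points toward displacement
# dimensions (stroke, bore).
```

A single pass-or-fail flow value at one back pressure cannot separate these two causes; the curve does.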
Pressure and volumetric flow are known as conjugate energy variables, and together they provide the powerful total performance characterization. There are just four key generic variables, which combine as three conjugate pairs. The four variables each exist in every engineering domain, such as electrical, magnetic, rotational, hydraulic and thermal. Each pair is associated either with energy storage or with energy conversion, transmission and dissipation. Characterizations using a pair, or one of a pair changing with time (throughout a cycle), are the richest source of information about exactly what is happening as a system performs its functions.
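The power-carrying pair (an "effort" and a "flow") takes a different physical form in each domain, but the product is always power. The pairings below are standard textbook examples; the numerical values are invented:

```python
# Power is the product of a conjugate pair in every energy domain.
conjugate_pairs = {
    # domain: (effort name, flow name, effort value, flow value)
    "electrical": ("voltage [V]",   "current [A]",        12.0, 2.0),
    "hydraulic":  ("pressure [Pa]", "vol. flow [m^3/s]",  2e5,  1e-4),
    "rotational": ("torque [N*m]",  "ang. speed [rad/s]", 5.0,  30.0),
}

powers = {d: e * f for d, (_, _, e, f) in conjugate_pairs.items()}
for domain, (effort, flow, e, f) in conjugate_pairs.items():
    print(f"{domain:10s}: {effort} x {flow} -> power = {e * f:.1f} W")
```

Observing both members of the pair through a cycle, rather than one number at one operating point, is what makes the characterization "total".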
Further information about conjugate energy variables can be found in references on the bond graph modelling technique, invented by Henry Paynter in the early 1960s[6]. Bond graphs provide a concise pictorial representation of interacting dynamic systems. They account for all energy and, in so doing, provide the common link among various engineering systems. A "bond" represents the energy exchange among subsystems, and each bond carries power. The principles developed are very important to successful engineering problem solving.
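As a minimal illustration of the energy bookkeeping that bond graphs enforce, the sketch below simulates an effort source driving a resistive and a capacitive element in series. The element values and step sizes are hypothetical; the same first-order equations describe an RC circuit, or a throttle filling an accumulator.

```python
# Effort source Se -> resistance R -> capacitance C, simulated with a
# simple forward-Euler loop.  All values are hypothetical.
Se, R, C = 10.0, 2.0, 0.5   # source effort, resistance, capacitance
dt, steps = 0.001, 5000
q = 0.0                     # displacement stored in C (charge/volume)
dissipated = 0.0            # energy lost in R

for _ in range(steps):
    e_C = q / C             # effort across the storage element
    f = (Se - e_C) / R      # flow shared by every bond in the loop
    q += f * dt
    dissipated += (f * f * R) * dt

stored = q * q / (2 * C)    # energy held in C
supplied = Se * q           # energy delivered by the source
print(f"stored={stored:.2f} J, dissipated={dissipated:.2f} J, "
      f"supplied={supplied:.2f} J")
# The accounting closes: supplied energy ~ stored + dissipated.
```

The closed energy balance is the point: every joule the source supplies is either stored or dissipated, and the bond structure makes that explicit.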
Applications
TPC is a simple methodology for greatly speeding up engineering problem solving by building as much information as possible, as quickly as possible, and so circumventing most of the complex tools and time-consuming steps of Root cause analysis, classical Design of experiments, Taguchi methods, Six Sigma methods and the Shainin System.
Rare, No Trouble Found and intermittent faults are often difficult to reproduce for the engineers concerned with solving the problem. At the same time, such faults are typically associated with marginal performance (products pass the end-of-line test, but only just). By intensively probing performance, TPC overcomes these difficulties, ensuring that underlying weaknesses are identified.
As a result, TPC can also flush out processes and products with marginal performance very early in a product’s life, and at critical junctures such as starting up a new manufacturing process. This makes it an effective means for avoiding problems in engineering and manufacture when used in conjunction with Failure mode, effects, and criticality analysis (FMECA).
References
- ^ Rasmussen, Jens. "Strategies for State Identification and Diagnosis in Supervisory Control Tasks, and Design of Computer-Based Support Systems." Advances in Man-Machine Systems Research. JAI Press, 1984. 139–193.
- ^ Rasmussen, Jens. Information processing and human-machine interaction. An approach to cognitive engineering. North-Holland, 1987.
- ^ Venkatasubramanian, Venkat, Raghunathan Rengaswamy, and Surya N. Kavuri. "A review of process fault detection and diagnosis: Part II: Qualitative models and search strategies." Computers & chemical engineering 27.3 (2003): 313-326.
- ^ Seder, Leonard (1950), "Diagnosis with Diagrams—Part I", Industrial Quality Control, New York, New York: American Society for Quality Control, vol. 7, no. 1, pp. 11–19.
- ^ Seder, Leonard (1950), "Diagnosis with Diagrams—Part II", Industrial Quality Control, New York, New York: American Society for Quality Control, vol. 7, no. 2, pp. 7–11.
- ^ Paynter, Henry M. "Analysis and design of engineering systems." MIT press (1961).
Further reading
editR. Carnap, The Cognition of Inductive Methods, 1952, University of Chicago Press.
R. Davis, Reasoning From First Principles in Electronic Troubleshooting, International Journal of Man-Machine Studies.
Jeroen de Mast and Marcus Bergman Hypothesis Generation in Quality Improvement Projects: Approaches for Exploratory Studies Quality & Reliability Engineering. Int. 2006, John Wiley & Sons.
Dean L. Gano Effective Problem Solving: A New Way of Thinking ASQ Congress Proceedings 2001.
F. M. Gryna, Quality Improvement, 1988, Juran’s Quality Control Handbook 4th Ed, McGraw-Hill.
David J. Hartshorne (2021), Diagnosing Performance and Reliability. United Kingdom: New Science of Fixing Things. ISBN 979-8512866870.
J. M. Hoc and X. Carlier, A Method to Describe Human Diagnostic Strategies in Relation to Design of Human-Machine Cooperation, 2000, International Journal of Cognitive Ergonomics.
A. H. Lesgold and S. Lajoie, Complex Problem Solving in Electronics, 1991, in Complex Problem Solving: Principles and Mechanisms, edited by R. J. Sternberg et al., Lawrence Erlbaum Associates.
Lorenzo Magnani, An Abductive Theory of Scientific Reasoning, 2002, in International Workshop on Computational Models of Scientific Reasoning and Applications (CMSRA’02), Vol. 3, edited by H. R. Arabnia and Younsong Mun, CSREA Press.
R. Milne, Strategies for Diagnosis, 1987, IEEE Transactions on Systems, Man and Cybernetics.
N. M. Morris and W. B. Rouse, Review and Evaluation of Empirical Research in Troubleshooting, 1985, Human Factors.
Nancy J. Nersessian, Model-Based Reasoning in Conceptual Change, 1999, in Model-Based Reasoning in Scientific Discovery, edited by L. Magnani et al., Kluwer Academic/Plenum.
Ellis R. Ott and Edward G. Schilling, Process Quality Control: Troubleshooting and Interpretation of Data, 1990, McGraw-Hill.
N. Rescher, Peirce’s Philosophy of Science: Critical Studies in his Theory of Induction and Scientific Method, 1978, University of Notre Dame Press.
Gerald F. Smith ‘Determining the Cause of Quality Problems: Lessons from Diagnostic Disciplines’ ASQ 1998.
R. W. Traver, Manufacturing Solutions for Consistent Quality and Reliability, 1995, American Management Association.
Edward R. Tufte, The Visual Display of Quantitative Information, 1983, Graphics Press.
P. F. Wilson, L. D. Dell and G. F. Anderson, Root Cause Analysis: A Tool for Total Quality Management, 1993, ASQC Quality Press.