Talk:Six Sigma/Archive 4


Terrible!

This entire article is written in incomprehensible corporate doublespeak. It needs a rewrite. Pw33n (talk) 06:01, 28 April 2008 (UTC)

Did you know that you can help improve the quality of the article tremendously simply by marking the parts that you feel need clarification with {{clarifyme}}? Note that it is vastly more helpful to mark individual sentences or sections than it is to tag the entire article with {{clarify}}. Please see Wikipedia:Please clarify for more details.
-- DanielPenfield (talk) 12:48, 28 April 2008 (UTC)
Agreed. By the nature of the beast, there will always be some "corporate doublespeak" in the article, as well as some statistical reasoning that will be above the head of the general reader. Having said that, I agree that the article can do with improving, so specific pointers to sections that are hard to understand or badly structured might be useful. Jayen466 01:31, 29 April 2008 (UTC)
One suggestion: Do we need the table on "Other Design for Six Sigma methodologies"? It's just a list right now, with a lot of words that aren't really saying very much and are likely to make the reader's eyes glaze over. Should we lose it? Jayen466 01:34, 29 April 2008 (UTC)
Yes. I must admit I sympathise with Pw33n - even the first sentence is so generalised (although entirely accurate) that it is not much help to the average punter. I think the article is being written by, and addressed to, practitioners (or worse) which is not what a general purpose wiki article should be. It is, to use an appropriate metaphor, as though the wiki article(s?) on prostitution were written by pimps and call girls, for pimps and call girls. Greg Locock (talk) 02:03, 29 April 2008 (UTC)
Okay, we can and should lose some of the corporate jargon and use more general language. Much of the article is also woefully unsourced ... Jayen466 00:15, 30 April 2008 (UTC)
Wikipedia:Make technical articles accessible encourages us to avoid "dumbing down the article". Furthermore, "just adding more words" and "just using different jargon" (e.g., "non-fulfilment" instead of "nonconformity") does not help anybody's understanding. Instead of flailing about, why don't we put more effort into understanding what drives these periodic reactions of disgust (i.e., the D, M, and A in DMAIC). Once we understand, then we can improve. Wikipedia has mechanisms that we can take advantage of: Wikipedia:Requests for comment, Wikipedia:Third opinion, and maybe others I don't know about.
Finally, it is clear that we're "editing" the article without having widely read up on the subject. Why not take some time to hit the local college library and dig up sources that are closer to the origin of Six Sigma--in my own effort, I find these are less subject to distortions from Chinese whispers or sales spin (though they are hardly free of that). Arguably this is the first step we ought to take.
-- DanielPenfield (talk) 13:50, 30 April 2008 (UTC)
I agree that we need to access sources, and I'll have a look what I can dig up in google scholar and the questia.com library. As for my recent attempts to clean up the language and the "dumbing down" you fear might occur, I'd like to hear views from other editors. I've been involved in the creation of training materials on these topics for 20 years; in my experience, expressions like "process", "nonconformity" and "reducing variation" are not readily understood by the general public. That is why each foundation course begins with a definition of these terms. So I would like the intro at least to use more general language that does not immediately put the general reader off. Sentences like "Continuous efforts to reduce variation in process outputs is key to business success" are pure jargon (apart from being shoddy grammar) – standard in companies that apply these methods, but not very good style for an encyclopedia. I think this is what bugs readers most. For a general overview – which is all we can aspire to here for now – the language should be more accessible. But I am perfectly happy to be guided by consensus. Jayen466 15:29, 30 April 2008 (UTC)
  • "apart from being shoddy grammar"
Feel free to correct subject-predicate agreement issues independently of stylistic edits, then.
  • "That is why each foundation course begins with a definition of these terms. So I would like the intro at least to use more general language that does not immediately put the general reader off."
It sounds like you haven't read Wikipedia:Make technical articles accessible beyond the first paragraph. In particular, Wikipedia:Make_technical_articles_accessible#"Introduction to..." articles sounds like a better solution than what you've set out to do on your own.
  • "I've been involved in the creation of training materials on these topics for 20 years"
Based on my reading of Six_Sigma#Origin_and_meaning_of_the_term_.22six_sigma_process.22, I have to infer that you're training children, as, quite frankly, that's the level of diction that you seem to prefer. I cannot believe that the average Wikipedia reader needs to have the term "sigma" explained to him. (And if he does need an explanation, that he's incapable of clicking on the hyperlink to find out for himself.) I also cannot believe that the use of smarmy phrases like "often used by quality professionals" is encyclopedic. It really sounds to me like you believe your audience is incapable of understanding anything but the most basic, watered-down content and is incapable of cross-referencing terms that they find novel as they read the article. I believe you are wrong: Your audience is much more intelligent than you believe.
  • "Sentences like "Continuous efforts to reduce variation in process outputs is key to business success" are pure jargon"
As User talk:Greglocock has already pointed out, it is very, very difficult to come up with content that is equivalent, yet could be readily understood by someone with no exposure to this area of knowledge. You were unable to do it yourself. I think most people would agree, however, that it is an improvement over where we were last year. Additionally, had you actually read "The Nature of Six Sigma Quality" (the historically important reference that you deleted and re-added a few months back), you'd have to admit that bullet point is an accurate summary of what Mikel Harry wrote in its introduction ("Why Variation Is the Enemy").
In summary, here's what I understand to be our points of contention:
  1. You believe the audience for this article is much less intelligent than I do.
  2. You believe (or believed) that you have enough knowledge to edit the article without significant background reading.
  3. You believe that there is only one course of action: Hack up the article in isolation, when there are other avenues (notably Wikipedia:Make_technical_articles_accessible#"Introduction to..." articles and Wikipedia:Requests for comment) which I argue would produce better results for everyone.
-- DanielPenfield (talk) 16:30, 30 April 2008 (UTC)
It seems to me you're not listening to the Voice of the Customer. You wrote "We really don't understand what it is about Six Sigma that so bothers people that they express their disgust on Talk:Six Sigma—they are uniformly unable to articulate anything actionable, despite repeated encouragement." That's not entirely true – what the people above have said is that the article is "incomprehensible corporate doublespeak", that it feels like it is "written by, and addressed to, practitioners", and that it is full of "quasi-meaningless management consultancy jargon and fashionable corporate buzzwords". Can't you see what they mean? Jayen466 16:41, 30 April 2008 (UTC)
No, I can't, that's why I asked for {{clarifyme}}, repeatedly (User talk:LaylahM‎, User talk:Statman45‎, User talk:Pw33n). You presume to know what they're talking about, yet your edits lead me to believe otherwise—you rely on simple expansions of the obvious ("achieve greater uniformity (or reduce variation)" instead of "reduce variation", "the non-fulfilment of any aspect" instead of "nonconformity", etc) or statements of no value at all ("A brief outline of the statistical background follows below." in a section already entitled "Origin and meaning of the term"). I'll ask again, rhetorically: Did you really improve the article? Did you make it more clear? Or did you just switch a few words around and find more complex ways of describing things? And did you delete genuinely useful stuff like the diagram you deleted last time because you're not familiar with the material?
-- DanielPenfield (talk) 17:16, 30 April 2008 (UTC)
Also, people freely voiced their disgust before the addition of the bulleted items you're focusing on, so there's got to be something else (or in addition) that really bothers them. -- DanielPenfield (talk) 17:52, 30 April 2008 (UTC)

(outdent) Like I said above, my experience is that "reduce variation" means little to the general reader, whereas "achieving greater uniformity" is a useful paraphrase in non-statistical language that anyone can understand. That is exactly what WP:Make technical articles accessible is talking about. Let's look at what else it says there, apart from "Don't dumb down":

  • Put the most accessible parts of the article up front. It's perfectly fine for later sections to be highly technical, if necessary. Those who are not interested in details will simply stop reading at some point, which is why the material they are interested in needs to come first. Linked sections of the article should ideally start out at about the same technical level, so that if the first, accessible paragraph of an article links to a section in the middle of the article, the linked section should also start out accessible.
  • Add a concrete example. Many technical articles are inaccessible (and more confusing even to expert readers) only because they are abstract. Examples might exist in other language wikis, such as German/Italian wikipedias, which have statistical examples. Use Google, Yahoo! or Systran translation, etc. to help convert.
  • Use jargon and acronyms judiciously. In addition to explaining jargon and expanding acronyms at first use, you might consider using them sparingly thereafter, or not at all. Especially if there are many new terms being introduced all at once, substituting a more familiar English word might help reduce confusion (as long as accuracy is not sacrificed).
  • Use language similar to what you would use in a conversation. Many people use more technical language when writing articles and speaking at conferences, but try to use more understandable prose in conversation. [1]
  • Use analogies to describe a subject in everyday terms. The best analogies can make all the difference between incomprehension and full understanding.

I think the article could benefit a lot from implementing these recommendations, plus solid sourcing. Btw, as per WP:LEDE we shouldn't be having those bullet points in the lede. And if these bullets are sourced to Harry, then it would be advantageous to add a reference that makes that clear. Cheers, Jayen466 18:18, 30 April 2008 (UTC)

Plus, I really think the "alphabet soup" list under "Other Design for Six Sigma methodologies" should go, or should at least go to the end of the article. Too much detail too early in the article, and practically useless without explaining what these companies understand by these terms. Jayen466 18:22, 30 April 2008 (UTC)

I still don't understand precisely to what you're objecting. Perhaps you could humor me by starting with the lede alone and going line by line:
Sentence: Six Sigma is a set of practices originally developed by Motorola to systematically improve processes by eliminating defects.
  • Jayen466: "Process" on its own is unclear. That is why books on Six Sigma first explain what is meant by process.
  • DanielPenfield: Accurate and clear. Buzzword free, with the exception of the article title.
Sentence: A defect is defined as nonconformity of a product or service to its specifications.
  • Jayen466: "Nonconformity to specifications" is a jargon expression here, we could say something like "failure to meet all the requirements defined in its specification". Btw, feel free to come up with something better!
  • DanielPenfield: Required to explain preceding sentence. Buzzword free.
Sentence: While the particulars of the methodology were originally formulated by Bill Smith at Motorola in 1986, Six Sigma was heavily inspired by six preceding decades of quality improvement methodologies such as quality control, TQM, and Zero Defects.
  • Jayen466: Fine
  • DanielPenfield: This sentence is necessary to give one an idea as to what Six Sigma is by relating it to past efforts in the same arena--if Joseph M. Juran can't see the difference between Six Sigma and the Statistical Quality Control he used for seven decades (see [1]), then really, there probably is no material difference. Buzzword free, with the exception of the necessary evil of the methodology names and the article title.
Sentence: Like its predecessors, Six Sigma asserts the following:
  • Jayen466: What could be made clearer is the claimed effect on a company's bottom-line profitability, by reducing the cost of bad quality. That is a major aspect of how Six Sigma is sold to management. Another aspect that could be made clearer is the data-based approach. Yet another is that it is supposed to be a complete management system, changing fundamentally the way a company is managed. That includes the team-based cross-functional approach, and the role of Black Belts. (Also in this vein, see the more recent Mikel Harry quote below, under "Suggested sources".)
  • DanielPenfield: Six Sigma's dirty little secret is that one never seems to be able to find anyone to give a clear overview of what exactly it is and how it differs from its predecessors. I've been looking for two years and haven't found anything that ever comes close. This is a "best effort" attempt to summarize the core principles, which, by the way, aren't really that different from SQC, ZeroDefects, TQM, etc. Buzzword free, with the exception of the article title.
Sentence: Continuous efforts to reduce variation in process outputs are key to business success
  • Jayen466: The whole sentence is jargon and can be put in more general language. You cannot assume that everyone checking into this article is familiar with the concept of variation as measured by standard deviation or variance.
  • DanielPenfield: As User:Greglocock noted, this is "entirely accurate". It's necessary to point out that variation is the thing to be controlled, as the intuitive "no defects" = OK is the thing Deming, Taguchi, Juran, Crosby, etc. criticise as misguided. As near as I can tell, you object to one or more of the following as "fashionable corporate buzzwords": "Continuous efforts", "process outputs", and/or "business success".
Sentence: Manufacturing and business processes can be measured, analyzed, improved and controlled
  • Jayen466: Shorthand jargon. A process can't be measured. You measure parameters of the process, or product characteristics. You then analyze the data collected.
  • DanielPenfield: A plain English restatement of DMAIC. Buzzword free.
Sentence: Succeeding at achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management
  • Jayen466: Fine.
  • DanielPenfield: Repeated to death in pretty much anything one reads about Six Sigma (and SQC, ZeroDefects, TQM, etc. for that matter), so repeated to death here. Buzzword free, with the possible exception of "quality improvement".
Sentence: The term "Six Sigma" refers to the ability of highly capable processes to produce output within specification.
  • Jayen466: "Capable process" is not part of the general vocabulary. It means little to someone who has never heard of a process capability study. Also tautological; capability is defined as the ability of a process to consistently produce output within spec.
  • DanielPenfield: Summary of the "Origin and meaning of the term "six sigma process"" section with wikilinks to technical terms. Buzzword free, technical terms wikilinked.
Sentence: In particular, processes that operate with six sigma quality produce at defect levels below 3.4 defects per (one) million opportunities (DPMO).
  • Jayen466: "processes that operate with six sigma quality" -- please, what does this tell a person who doesn't know this already?
  • DanielPenfield: Perhaps the only thing unique to Six Sigma is specifying the process capability "goal"—this is of course criticized in the "Based on arbitrary standards" section. Buzzword free with the exception of "six sigma quality", which this sentence actually defines; technical terms wikilinked.
Sentence: Six Sigma's implicit goal is to improve all processes to that level of quality or better.
  • Jayen466: Okay, sort of. ;-)
  • DanielPenfield: Required to emphasize preceding sentence and provide motivation for the Six Sigma methodology. Buzzword free, with the exception of the article title.
-- DanielPenfield (talk) 21:20, 30 April 2008 (UTC)

Let's first of all have a look at WP:LEDE. The lede is supposed to be a summary of the article as a whole. It is not supposed to introduce information that is only found in the lede and occurs nowhere in the remainder of the article. Jayen466 22:39, 30 April 2008 (UTC)

Re the last para of the lede, "In addition to Motorola, companies that adopted Six Sigma methodologies early on and continue to practice them today include Honeywell International (previously known as Allied Signal) and General Electric (introduced by Jack Welch).". This makes it sound like there are a half dozen companies out there doing this. Compare this against the following: In the late 1990s, about two-thirds of the Fortune 500 organizations had begun Six Sigma initiatives aimed at reducing costs and improving quality. (Juran Institute's Six SIGMA Breakthrough and Beyond, p. 6.) Jayen466 23:57, 30 April 2008 (UTC)


Jayen466: "Process" on its own is unclear. That is why books on Six Sigma first explain what is meant by process.
DanielPenfield: It is now wikilinked so that those unfamiliar with the term can easily discover what it means. Wikilinking eliminates the need to duplicate its definition in article after article after article that refers to it. You are the only editor that I've encountered in my three years of active editing who has problems grasping this concept.

Jayen466: "Nonconformity to specifications" is a jargon expression here, we could say something like "failure to meet all the requirements defined in its specification". Btw, feel free to come up with something better!
DanielPenfield: If you had actually read up on the subject, you'd know that this is the terminology that quality practitioners, statisticians, and managers use. To invent some new term does nobody any good. Furthermore, I believe most readers are intelligent enough to understand what it means.

Jayen466: What could be made clearer is the claimed effect on a company's bottom-line profitability, by reducing the cost of bad quality. That is a major aspect of how Six Sigma is sold to management. Another aspect that could be made clearer is the data-based approach. Yet another is that it is supposed to be a complete management system, changing fundamentally the way a company is managed. That includes the team-based cross-functional approach, and the role of Black Belts. (Also in this vein, see the more recent Mikel Harry quote below, under "Suggested sources".)
DanielPenfield: If you had actually read up on the subject, you'd realize that SQC, ZeroDefects, TQM, etc., etc., etc. all make these claims. The key is that each successive "methodology" has to portray its predecessors as less comprehensive, not focused on the bottom line, making superficial changes versus the "all encompassing", fundamentally changing, bottom-line focus of the "new" methodology, which invariably incorporates the old methodologies with a new sales spin.
In other words, Six Sigma is pretty much like its predecessors with some new terminology, hence "Like its predecessors, Six Sigma asserts the following".
The fact that you alone refer to it as the "cost of bad quality" when the rest of the world knows it to be the Cost of poor quality is yet another red flag that you simply have not done enough reading to comment intelligently on anything related to quality.

Jayen466: The whole sentence is jargon and can be put in more general language. You cannot assume that everyone checking into this article is familiar with the concept of variation as measured by standard deviation or variance.
DanielPenfield: Put your money where your mouth is and offer a counterproposal. Additionally, I cannot believe that readers are as poorly informed as you claim. We are not writing an encyclopedia for high-school dropouts here.

Jayen466: Shorthand jargon. A process can't be measured. You measure parameters of the process, or product characteristics. You then analyze the data collected.
DanielPenfield: Then change the wording to fit your whim: Use "aspects of manufacturing and business processes" or some equivalent.

Jayen466: "Capable process" is not part of the general vocabulary. It means little to someone who has never heard of a process capability study. Also tautological; capability is defined as the ability of a process to consistently produce output within spec.
DanielPenfield: It is wikilinked so that those unfamiliar with the term can easily discover what it means. Wikilinking eliminates the need to duplicate its definition in article after article after article that refers to it. You are the only editor that I've encountered in my three years of active editing who has problems grasping this concept.

Jayen466: "processes that operate with six sigma quality" -- please, what does this tell a person who doesn't know this already?
DanielPenfield: Can you read? It tells them that six sigma quality == very low defect levels and gives them the magnitude: 3.4 defects per (one) million opportunities (DPMO)

DanielPenfield: You are the only editor that I've encountered in my three years of active editing who has problems grasping this concept.
Jayen466: Then re-read the comments from other editors above. You are ignoring the fact that editors regularly comment on this talk page that the article is very poorly written. Someone commented three days ago that it was written in "incomprehensible corporate doublespeak". Another said, last October, about essentially the same article that we have now, "Ive read this page several times and still have no idea what 'six sigma' is meant to be; the article is just a string of quasi-meaningless management consultancy jargon and fashionable corporate buzzwords. Could someone rewrite the article with all the bullshit removed, in a format suitable for wikipedia rather than an inane powerpoint presentation."

DanielPenfield: If you had actually read up on the subject, you'd know that this is the terminology that quality practitioners, statisticians, and managers use. To invent some new term does nobody any good. Furthermore, I believe most readers are intelligent enough to understand what it means.
Jayen466: Again, this is precisely what an editor criticised a couple of days ago, citing this very sentence we are discussing here: "I think the article is being written by, and addressed to, practitioners (or worse) which is not what a general purpose wiki article should be. It is, to use an appropriate metaphor, as though the wiki article(s?) on prostitution were written by pimps and call girls, for pimps and call girls." I think there is ample evidence that editors have characterised the article as not increasing their understanding. It is ludicrous to argue otherwise.
In addition, the content in the bullet points is unsourced. Did you not say it was closely based on Harry's book? Could you provide a title, page number etc., so we can add a cite? And it needs to be moved from the lede to the article proper, as per WP:LEDE. Jayen466 09:33, 1 May 2008 (UTC)

DanielPenfield: If you had actually read up on the subject, you'd realize that SQC, ZeroDefects, TQM, etc., etc., etc. all make these claims. The key is that each successive "methodology" has to portray its predecessors as less comprehensive ..., not focused on the bottom line, making superficial changes versus the "all encompassing", fundamentally changing, bottom-line focus of the "new" methodology, which invariably incorporates the old methodologies with a new sales spin.
Jayen466: Well, to make this NPOV, I'd suggest we first say what the claims of the Six Sigma folks are. Then there can be a discussion of these claims, using comments like the one sourced to Juran, with attribution and a reference.

DanielPenfield: We are not writing an encyclopedia for high-school dropouts here.
Jayen466: Please show me the policy or guideline that says that. As far as I am aware, we are writing an encyclopedia for use by the general population, not just that section of it that has been educated beyond a certain level. Jayen466 11:06, 1 May 2008 (UTC)

USER FEEDBACK - while the article certainly has room for improvement, it does give a useful introduction to the basic concepts of Six Sigma for somebody with a modicum of business management experience and no prior knowledge of Six Sigma. Stephengeis (talk) 14:31, 30 June 2009 (UTC)

Suggested sources

(ec) We need to base the article on sources. The following books provide well-written, relatively jargon-free explanations of what Six Sigma is. Please state whether you feel these are acceptable (and feel free to add more). Jayen466 16:32, 30 April 2008 (UTC)

  • Pyzdek, The Six Sigma Handbook, McGraw-Hill Professional: [2] -- this has a useful intro to Six Sigma
  • Tennant, Six Sigma: SPC and TQM in Manufacturing and Services, Gower Publishing, Ltd.: [3]
I believe Pyzdek to be a serial spammer, based on http://en.wikipedia.org/w/index.php?title=Talk:Six_Sigma&diff=next&oldid=95763527 and the frequency at which his books are anonymously added as "references" (http://www.google.com/search?hl=en&q=pyzdek+site%3Aen.wikipedia.org). I also took a look at one of his books and wasn't particularly impressed.
I believe the best references will be ones written by Motorola people before 1995, as these are closest to the origin of Six Sigma before it gained its current notoriety and everybody put his own spin on it. In particular:
  • The Harry brochure ("The Nature of six sigma quality") was required reading in a recent certification program I attended (Ph.D. statisticians with decades in the practice, not consultants of dubious origin).
  • Articles from ASQ publications (like Quality Progress) that introduced the topic in the mid- to late- 1990s
  • Books by notable statisticians (e.g., Hoerl, Snee, Marquardt, etc.)
  • Books by notable quality experts or at least with their imprimatur (e.g., Juran, Feigenbaum)
-- DanielPenfield (talk) 16:48, 30 April 2008 (UTC)
The Pyzdek book is well cited according to google scholar ([4]) and from a reputable publisher. Let's agree that we need both those original sources and more recent ones charting the growth of the methodology. How about placing a Request for Comment? Jayen466 18:03, 30 April 2008 (UTC)
As I understand it, editors accepting our Wikipedia:Requests for comment would likely balk at us asking which sources are "best"--they're pretty much there to review articles, templates, categories, or users.
How about this suggestion?:
You, for your own benefit, find a pre-1996 Six Sigma book that you could read or at least browse locally (e.g., using http://www.worldcat.org/search?q=ti%3Asix+sigma&fq=yr%3A..1996+%3E&qt=advanced) and compare it to Pyzdek.
Find Quality Progress back issues at your local library (e.g., via http://www.worldcat.org/search?q=quality+progress&qt=results_page) and read at least the first article with "Six Sigma" in the title, from June 1993, p. 37, "Six Sigma Quality Programs", and compare it to Pyzdek.
You'll be able to verify for yourself what I've been emphatically stating and, more importantly, you'll have broadened your horizons enough where you'd be on a better footing to add or change content.
-- DanielPenfield (talk) 18:51, 30 April 2008 (UTC)
I've got quite a few old Quality Progress copies from the nineties on my bookshelf. ;-) I do not understand why you would want to somehow restrict this article to Six Sigma as it was in 1993. It's like writing an article about communism and insisting that Lenin, Stalin, Tito etc. have nothing to do with it. By all means, let's bring material from these early sources into a history section, but the understanding and relevance of Six Sigma today is not adequately covered that way. And I really don't see that this article is so brilliant as it stands at the moment. Jayen466 22:37, 30 April 2008 (UTC)
Here is a more recent quote from Mikel Harry re the difference between TQM and Six Sigma:
MIKEL HARRY: I think Six Sigma is now squarely focused on quality of business, where TQM is concerned with the business of quality. That is, when you adopt TQM, you become involved in the business of doing quality, and when you adopt Six Sigma, you're concerned about the quality of business. In a nutshell, TQM is a defect-focused quality improvement initiative, whereas Six Sigma is an economics-based strategic business management system. Didn't start off that way, but it has evolved that way. Book Title: Rath & Strong's Six Sigma Leadership Handbook. Contributors: Thomas Bertels - editor. Publisher: John Wiley & Sons. Place of Publication: Hoboken, NJ. Publication Year: 2003. Page Number: 5. Jayen466 23:47, 30 April 2008 (UTC)
"I think Six Sigma is now squarely focused on quality of business, where TQM is concerned with the business of quality." Now, I think that is exactly the sort of smart ass comment that typifies the problem with this article. Yes, it does sort of make sense,and is moderately amusing. But it is profoundly unhelpful in an introductory article for 6S. I would vote very strongly to keep the alpabetti spaghetti such as alternatives to DMAIC out, they are just astroturfing by pimps, not essential to a basic understanding of the subject. I am a bit surprised about the dirty little secret comment, I'd have said the differences were obvious - 6S is marketed to senior management first, and presented in terms they can understand ($). It coopts the technical, statistical and problem solving methodologies built up over the last century and presents them in a coherent framework. Greg Locock (talk) 04:03, 1 May 2008 (UTC)

Another source you might like is the "Juran Institute's Six SIGMA Breakthrough and Beyond". Jayen466 23:39, 30 April 2008 (UTC)

"I do not understand why you would want to somehow restrict this article to Six Sigma as it was in 1993."
I never said this. What I said was "I find [sources closer to the origin] are less subject to distortions from Chinese whispers or sales spin." I never "restricted" anything—that is completely your invention.
"It's like writing an article about communism and insisting that Lenin, Stalin, Tito etc. have nothing to do with it."
Conversely, the analog of your approach would be to just pick just one and only one source from the works of Fidel Castro, Kim Il-sung, or Ho Chi Minh and completely ignore Marx, Engels, Plekhanov, Lenin, and Trotsky.
You also conveniently ignored my final two recommendations, which tend to be less prone to B.S.:
  • Books by notable statisticians (e.g., Hoerl, Snee, Marquardt, etc.) NO DATE SPECIFIED
  • Books by notable quality experts or at least with their imprimatur (e.g., Juran, Feigenbaum) NO DATE SPECIFIED
"And I really don't see that this article is so brilliant as it stands at the moment."
You're confounding the article's need for improvement with your need to improve your understanding of the subject matter: We would be better off if both happened.
"Another source you might like is"
Just do us both a favor: Read (or browse) multiple sources from different timeframes. It would also be beneficial if you read something on statistical process control so that you can start to understand that Juran was right: Six Sigma is pretty much a case of "what's old is new again".
"In a nutshell, TQM is a defect-focused quality improvement initiative, whereas Six Sigma is an economics-based strategic business management system."
This is 100% sales pitch and a graphic illustration as to why one really ought to revisit the older sources which were written before Six Sigma became a road to big $$$$.
I am not interested in your personal opinion on Six Sigma. I am interested in making this an article that is easy to understand, explains the topic, and is based on reliable sources. This should include the notable criticism that Six Sigma appears to have spawned a lucrative cottage industry for consultants, trainers etc. Statements to that effect are available in the literature and can be sourced. Jayen466 09:54, 1 May 2008 (UTC)
Let me add that I've got nothing against Hoerl and Snee. I'd be delighted if you added well-sourced, readable content to the article! But I think you are prejudiced with regard to Pyzdek. As far as I can see in google scholar, Pyzdek's tome is more widely cited than either Hoerl or Snee. Google scholar is commonly used in Wikipedia as a criterion for measuring the influence and notability of a writer. And which book(s) by Marquardt did you mean? Jayen466 12:05, 1 May 2008 (UTC)

Here is another well-cited book on Six Sigma that would make an excellent source: Implementing Six Sigma: Smarter Solutions Using Statistical Methods by Forrest W. Breyfogle, III

Obviously, "Six Sigma: the breakthrough management strategy revolutionizing the world's top corporations" by MJ Harry, R Schroeder belongs in the list as well, along with Harry's earlier publications. Jayen466 12:19, 1 May 2008 (UTC)

Differences between Six Sigma and earlier quality improvement initiatives

Here is a paper entitled "Pros and cons of Six Sigma: an academic perspective" that outlines some purported differences between Six Sigma and earlier quality improvement initiatives. Also has a useful overview of literature. (Originally published in TQM Magazine, I believe.) Jayen466 13:07, 1 May 2008 (UTC)

Statistics and robustness

The core of the Six Sigma methodology is a data-driven, systematic approach to problem solving, with a focus on customer impact. Statistical tools and analysis are often useful in the process. However, it is a mistake to view the core of the Six Sigma methodology as statistics; an acceptable Six Sigma project can be started with only rudimentary statistical tools.

Still, some professional statisticians criticize Six Sigma because practitioners have highly varied levels of understanding of the statistics involved.

Six Sigma as a problem-solving approach has traditionally been used in fields such as business, engineering, and production processes.

Why does this section have the heading "Statistics and Robustness"? We don't explain what robustness is, in fact, the word does not recur in the entire section. It's just a gratuitous buzzword. Moreover, the entire section is unsourced WP:OR. Jayen466 23:19, 30 April 2008 (UTC)

It's something somebody wrote years ago and I would not be sad to see it go, though I don't presume to understand what the original author was trying to convey. BTW, if you actually read anything at all on the subject, you'd know that robustness is a technical term (designing a product so that it performs well no matter what the inputs are, the goal of Taguchi methods, and DFSS). Since you can't be bothered to read anything on process or product improvement you will naturally believe that it is a "gratuitous buzzword". -- DanielPenfield (talk) 00:07, 1 May 2008 (UTC)
Daniel, I know very well what Robustness is, who Taguchi is, what a signal-to-noise ratio is, what Parameter Design is, what an orthogonal array is, what a screening experiment is, what a response-surface design is, etc. But that does not mean we can gratuitously drop in the word robustness just because it sounds cool! Jayen466 00:11, 1 May 2008 (UTC)
Your reading comprehension is horrendous:
You say "Robustness is a gratuitous buzzword"
I say "No, it's a technical term with a specific meaning"
You say "That does not mean we can gratuitously drop in the word robustness just because it sounds cool."
I say "When the hell did I ever drop in the word robustness? And, why, if you "know" that it's a technical term, would you ever claim that 'it's a gratuitous buzzword'?"
-- DanielPenfield (talk) 01:17, 1 May 2008 (UTC)
Robustness is a buzzword – do a google search for "robustness" and "buzzword" and you'll find plenty of people who agree with me. What makes it gratuitous is the fact that there was no context whatsoever to justify its use as a technical term here. Jayen466 09:44, 1 May 2008 (UTC)
You're both right. Robustness is used both as a buzzword and as a specific technical term (see e.g. robust statistics). As long as the text clarifies the intended meaning and why it is applicable here, it's fine. Dcoetzee 10:46, 1 May 2008 (UTC)
Thanks. For your reference, the text in question is reproduced above, at the beginning of this talk page section, in italics. That was all there was. It did not clarify the meaning, it didn't even mention the concept again. Jayen466 11:12, 1 May 2008 (UTC)


Structure

I think one of the big problems is the order of the sections. Here is the current status, and my comments:

Intro

Methodology

1.1 DMAIC

1.2 DMADV

1.3 Other Design for Six Sigma methodologies (this is absolute agony, does it really warrant third place?)

2 Implementation roles (dull, is this really important? Call yourself a Grand poobah for all I care)

3 Origin

3.1 Origin and meaning of the term "six sigma process"

4 Criticism (why is this so far up?)

4.1 Lack of originality

4.2 Studies that indicate negative effects caused by Six Sigma

4.3 Based on arbitrary standards

5 Examples of some key tools used (at long last, some content that somebody new to the topic might be interested in)

5.1 Software used for Six Sigma (incredibly boring detail for the average punter)

5.2 List of Six Sigma companies (mildly interesting)

6 References

7 See also

8 External links

So, let's try and get the information an intelligent, interested, but non expert reader would want up towards the start of the article, and leave the pedantic willy-waving to the end.

Intro

3 Origin

3.1 Origin and meaning of the term "six sigma process"

Methodology

1.1 DMAIC

1.2 DMADV

1.3 Examples of some key tools used (needs a para of explanation and some classification, not just a list of 50-year-old statistics and fads)

5.1 Software used for Six Sigma

5.2 List of Six Sigma companies

2 Implementation roles

1.3 Other Design for Six Sigma methodologies

4 Criticism

4.1 Lack of originality

4.2 Studies that indicate negative effects caused by Six Sigma

4.3 Based on arbitrary standards

6 References

7 See also

8 External links

Just a suggestion Greg Locock (talk) 11:16, 1 May 2008 (UTC)

1.3 Other Design for Six Sigma methodologies (this is absolute agony, does it really warrant third place?) I am in favour of deleting that section. In fact, I had deleted it, but Daniel restored it. If it is kept at all, it should be somewhere near the end of the article. It is just empty words. Jayen466 11:23, 1 May 2008 (UTC)
I'd say that looks already better. A couple of points: The present content of the lede should go into the article proper, most of it probably to the history section, and we need a lede that actually does its job of summing up the article, in 2 or 3 paragraphs. The implementation roles are important, they're what (arguably) sets Six Sigma apart from the earlier initiatives. But I agree the present bullet format is incredibly dull. It needs a well-sourced, well-written paragraph that describes how a business managed according to Six Sigma works, and how this is different from more conventional management structures. Jayen466 11:31, 1 May 2008 (UTC)

Restructure

I have revised the article in line with Greg's suggestion above. Please review. It is still not a great article, but I hope it is a starting point from which we can improve further. Jayen466 19:21, 1 May 2008 (UTC)

I just reviewed your changes and the improvement is dramatic. This is much closer to an NPOV article. I endorse this revision strongly. Dcoetzee 22:51, 1 May 2008 (UTC)
Thanks. Jayen466 23:06, 1 May 2008 (UTC)
Yes that is heaps better. Greg Locock (talk) 02:03, 5 May 2008 (UTC)

Suspect math

I was curious about the section, 'Origin and meaning of the term "six sigma process",' and so I decided to run the numbers for myself. The figure of 3.4 x 10^-6 defects per opportunity coming from a specification limit 4.5 standard deviations away from the mean seems, at first glance, to be incorrect. The probability of a normally-distributed random number being 4.5 or more standard deviations away from the mean is 3.93 x 10^-10 (figure calculated in MATLAB using the complementary error function; 2 * erfc(4.5)). Using the inverse complementary error function we should be able to find the right number of sigmas for a defect rate of 3.4 x 10^-6; erfcinv(3.4 * 10^-6 / 2) = 3.38. So, allowing for the arbitrary, unjustified 1.5 sigma cushion, maybe 6 sigma should be called 4.88 sigma?

If anyone else could verify that the figures provided in the article (3.4 DPMO corresponding to a specification limit 4.5 standard deviations away from the mean) are incorrect, that belongs on the Six Sigma page itself as a criticism. The whole thing reeks of a management process that claims some mathematical pedigree in operations or production research. Showing that the figures supposedly derived from the theoretical underpinnings of this idea are incorrect speaks a lot about the nature of this business management strategy.

In short, stop complaining about how bad the article is and start pointing out what's wrong with Six Sigma!

Additionally, one can trivially satisfy six sigma; just make the specification limit far enough away from the mean! 66.17.137.1 (talk) 12:24, 8 May 2008 (UTC)

Tables showing cumulative normal probabilities give .9999966 for z = 4.50. 1 – .9999966 = .0000034 = 3.4 ppm. Also see the bottom table on this page, giving right-tail probabilities. Jayen466 13:33, 8 May 2008 (UTC)
So p(z>4.5) = 3.4 x 10^-6. But p(|z|>4.5) = 6.8 x 10^-6. 66.17.137.1 (talk) 18:11, 8 May 2008 (UTC)
It's unilateral, as it says in the text. The assumption is that the mean may wander off to one side. As soon as it does so, the figures for the other tail of the distribution become vanishingly small (cf. the table linked to above). Of course, if the mean remains completely stationary and just the standard deviation increases by a third, you'd get the same 3.4 ppm in each tail; but I reckon the Six Sigma guys would argue that empirically, that is not what tends to happen in practice. Jayen466 18:27, 8 May 2008 (UTC)
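For anyone who wants to re-run these numbers, here is a minimal Python sketch (an illustrative editor's aside, assuming SciPy is available; it is not part of the sources under discussion) that reproduces the figures in this thread:

    # Tail probabilities behind the "3.4 DPMO" figure (standard normal distribution).
    from scipy.stats import norm
    from scipy.special import erfc

    print(norm.sf(4.5) * 1e6)             # one-sided tail at 4.5 sigma: ~3.4 per million
    print(2 * norm.sf(4.5) * 1e6)         # two-sided tail, as raised above: ~6.8 per million
    print(norm.sf(6.0) * 1e9)             # one-sided tail at 6 sigma: ~1 per billion

    # Note: erfc(x) measures the tail beyond x*sqrt(2) standard deviations, so the
    # MATLAB check 2*erfc(4.5) corresponds to z of about 6.36, not z = 4.5.
    print(erfc(4.5 / 2**0.5) / 2 * 1e6)   # ~3.4 per million, matching norm.sf(4.5)

The one-sided figure is the one used in the sigma tables discussed here; the two-sided figure would apply only if both specification limits remained equally critical.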

Literature section

Has there been any consideration to what the criteria are for inclusion in the 'Literature' section? It seems to be a rather random assortment of books, and has attracted at least one outright promotional addition. I'd usually attempt to prune it myself, but there are quite a few experts editing here, so I'll simply toss up the question. Kuru talk 00:18, 28 May 2008 (UTC)

I noticed the promotional addition as well (Six Sigma and Minitab). I reckon it should go. Otherwise, the list was based on the bibliography given in ref. 1, plus the work by Tennant that was cited in the article. IIRC, all the other works are American, and Tennant is UK; it's good to have at least one Brit in there. Jayen466 08:50, 28 May 2008 (UTC)

Opposing viewpoint

Denton (talk · contribs)/IP 67.172.245.11, please stop reinserting the "opposing viewpoint". This material is not taken from a reliable source, but appears to be your original research. In addition, what you are saying is based on a misunderstanding. The idea is that a process that has ±6 sigma within tolerance in a short-term study will, over the long term, only fit ±4.5 sigma. This is analogous to the difference between Pp and Cp – Cp is bound to be lower than Pp, because it reflects data collected over a longer time period, where more sources of variation have a chance to affect the process. By describing a 3.4 ppm level as "6 sigma", no one is increasing its Cp index, this is still 1.5. On the contrary, what the 1.5 sigma shift says is that a Pp index of 2 (short term study) will, in real life, only result in a Cp = 1.5 level of quality. This is consistent with the fact that company standards have higher minimum requirements for Pp than Cp. Cheers, Jayen466 18:04, 21 July 2008 (UTC)

  • I've temporarily protected the page to encourage editors to discuss changes here rather than reverting. ·:· Will Beback ·:· 20:17, 21 July 2008 (UTC)


Thank you for your explanation. But I think you do not understand my contribution.

I take the following statements to be correct:

  • 1. A Capability Study is an estimate of a process's ability to meet specifications. It may be an imperfect estimate, but it is an estimate of the actual performance of the process.
  • 2. The sigma level of a process is the number of process standard deviations between the process mean and the nearest specification limit. It is found by multiplying Ppk by 3. A perfectly stable and predictable process with a Ppk of 1.5 will be found to have 4.5 standard deviations between the process mean and the nearest specification limit, and will be properly called a 4.5 sigma process.
  • 3. A 6 sigma process is better than a 4.5 sigma process, all other factors equal.
  • 4. A normally distributed process with a "one-sided" specification 4.5 standard deviations from the process mean will produce 3.4 DPPM.
  • 5. Practically all the relevant literature acknowledges that a 6 sigma process is one that produces 3.4 DPPM.
  • 6. Combining statements 2, 4 and 5 we can write this expression: 4.5 sigma = 3.4 DPPM = 6 sigma. The glaring contradiction in this expression is routinely resolved by resorting to the 1.5 sigma shift.

Do you find fault with any of these statements? If we can agree on a set of statements we both take as true, then I think resolution may be easy for both of us.

There are many other things wrong with the 1.5 sigma shift, but I think it's best we resolve just this issue.

denton (talk) 22:46, 21 July 2008 (UTC)

  • I don't know a thing about this topic or much of anything about statistics. But I suggest that the best way to solve most Wikipedia disputes is by providing sources. This topic does seem to have been covered thoroughly so any significant point should be able to be referenced from reliable sources. ·:· Will Beback ·:· 23:27, 21 July 2008 (UTC)
  • Denton, your maths is spot-on. Six Sigma aims for Ppk = 2 when developing a process, recognising that in real-life, long-term operation this will result (empirically, as a rough and ready guideline – the uncritical, universal application of which has been criticised as "goofy") in a Cpk of 1.5. No one in Six Sigma says that a process with Cpk = 1.5 fits 6 Sigma between the mean and the critical spec.
  • So you aim for ±6 Sigma in development to get ±4.5 Sigma in real life, which is deemed good enough. Jayen466 23:46, 21 July 2008 (UTC)

Jayen, I thoroughly understand your point. Bear with me. Do you accept my six statements? denton (talk) 00:17, 22 July 2008 (UTC)

  • I think I accept them all except no. 6, which does not correspond to how Six Sigma is taught. Jayen466 00:52, 22 July 2008 (UTC)
  • I guess you'd have to write "4.5 sigma in real life = 3.4 DPPM in real life = 6 sigma (and correspondingly fewer DPPM) in a short-term study conducted to verify the process design", to make it right. --Jayen466 00:59, 22 July 2008 (UTC)
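To put numbers on this exchange, here is a small Python sketch (an illustrative aside with assumed helper names, not drawn from the Six Sigma literature itself) relating Ppk, sigma level and defect rate under the 1.5 sigma shift described above:

    from scipy.stats import norm

    def sigma_level(ppk):
        # Distance from the process mean to the nearest spec limit, in sigmas (3 x Ppk).
        return 3 * ppk

    def dpmo(z):
        # Defects per million opportunities for a one-sided spec limit z sigmas away.
        return norm.sf(z) * 1e6

    short_term = sigma_level(2.0)     # 6.0 sigma in the short-term capability study
    long_term = short_term - 1.5      # 4.5 sigma assumed after long-term drift of the mean
    print(dpmo(short_term))           # ~0.001 DPMO (about 1 defect per billion)
    print(dpmo(long_term))            # ~3.4 DPMO, the level conventionally labelled "six sigma"

Read in the other direction, this is the convention at issue in statement 6: a process actually observed at 3.4 DPMO (4.5 sigma) is reported as a 6 sigma process.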


I see Will protected the article page, and such action invites hopefully useful outside comments. Since the reason seems to be a series of misunderstandings, I'll try to make some useful comments and be on my way.
Jayen has spent years working with statistics, so he is probably technically correct at the level of detail. Still, I think I see what Denton is getting at.
My view after reading the issues, but doing no other research, is that Six Sigma is superficially a statistical practice of business management, but is better understood as a long-term financial practice of business management. The critical idea is that a Six Sigma design is useless if it is installed and profits slowly decline year after year. Investors will conclude that it doesn't work.
So, the as-built Six Sigma formula is fudge-factored, so that a statistical Six Sigma design, which investors must pay for, is defined as yielding 3.4 defects per million and a profit proportional to those 3.4 defects. However, during the first year the actual yield (unstated here - find out) is fewer defects, proportional to 'higher than expected profits'. Investors love to hear that.
What's the fix? I notice some really great tables on this page. How about a table to explain the relationship between statistical Six Sigma and financial practice Six Sigma (or whatever are the correct terms of art)?
Yes, there's always those verifiable references to find, but that's always easier if you get clues for what to look for from any helpful OR on the talk page.
I hope this has been helpful. Milo 00:43, 22 July 2008 (UTC)

Hi Milo. :-) I think the degradation from 6 to 4.5 Sigma needs much less than a year to kick in. Basically, any process that works well in theory, or in a controlled environment, works less well in practice, and that's what the 1.5 sigma shift is supposed to take care of. It's a kind of inbuilt safety margin. Cheers, --Jayen466 01:13, 22 July 2008 (UTC)

I expect that Jayen does have good credentials, as stated. He writes well.

And I'm pleased that he has read the six statements I made and asked for agreement on. Five out of six is a good starting point.

And I believe that the current version of his section is now more nearly correct.

I think his disagreement on point 6 finds the root of the issue. Contrary to his statement here, there are an unfortunately large number of Six Sigma loremongers who teach that you calculate the capability of the process, multiply Ppk by 3 to get sigmas, and then arbitrarily add 1.5 sigmas so that a 3.4 DPPM process comes out 6 sigma. That was particularly prevalent with SBTI and others in the early days of Six Sigma. Given his statements here, I think Jayen will agree that they were very wrong on that point.

If we agree that arbitrarily adding 1.5 sigmas to the outcome of a Capability Study is wildly wrong, then I think we are in violent agreement on that point, and continuing with my statements is unnecessary.

I have other fundamental issues with the 1.5 sigma shift that we might take up for amusement, but I don't think the short Wiki article on origins of the name of the program is the place for such a discussion. denton (talk) 02:00, 22 July 2008 (UTC)

It sounds like you may be a Six Sigma process guru. Being an expert with long experience is frustrating at Wikipedia, since articles are about verifiability, not truth. You can convince everyone here on the talk page that you're correct, but your discovered key facts can't go in the article without verifiable references. Sometimes the best that can be done is to get consensus to remove statements that would give readers the wrong impression, on the way to finding out for themselves what you found out for yourself. Milo 02:37, 22 July 2008 (UTC)
  • I think what you are talking about is explained here, for example (the 95.44% mentioned correspond to 1.7 Sigma + 1.5 Sigma = 3.2 Sigma level).
  • So in that example, the 1.5 Sigma subtracted to get from SigmaST to SigmaLT is added back again to get from SigmaLT to SigmaST (LT=long term, ST=short term).
  • The purpose of the shift is not to give spuriously increased Sigma levels, but to increase the expected DPPM to realistic proportions when all you have is short-term Sigma level. But it's true that when you have long-term data, a 4.5 SigmaLT process (i.e. operating at 3.4 ppm) is described as a 6 SigmaST process. That's how the scale is set up, to enable comparison.
  • Note that in doing so, people are not taking "extra credit" for another 1.5 Sigma, because the DPPM associated with 6 SigmaST remains at 3.4 ppm, rather than the 1 in a billion that would be associated with 6 SigmaLT. --Jayen466 02:27, 22 July 2008 (UTC)
Can somebody please show that - in the present context - standard deviations are additive? Unbeatablevalue (talk) 02:52, 22 July 2008 (UTC) Another way of asking the question: In the present context, what meaning is attached to the addition of standard deviations?


The agreed and empirically based assumption is that the mean may move by 1.5 Sigma over the long term. So a distance of 6 sigma to the nearest spec limit under ideal, controlled conditions may reduce to just 4.5 sigma in "real life conditions". --Jayen466 03:09, 22 July 2008 (UTC)
You treat standard deviations as if they are additive, without further justification. In the present context, I continue to question whether such operation passes muster. Just mathematics. Unbeatablevalue (talk) 03:19, 22 July 2008 (UTC)
It's not a question of standard deviations being "additive", as in adding variation from multiple sources. The standard deviation is simply a value expressed in the unit of measurement of the characteristic, e.g. σ = 10 grams. If the process mean is 3,000 grams, and the critical spec is 3,060 grams, the mean is 6σ away from the specification limit. Now, what the Six Sigma people say is: if under controlled conditions, you are 6σ away from the SL, it would behove you to assume that in the long term, you'll only manage to be 4.5σ away from the SL (because your mean may wander off track by 1.5σ, or your sigma may increase to that extent, or a mixture of both). So you should expect correspondingly more defects than you are getting now. You are expressing the amount by which the mean might shift in terms of σ. --Jayen466 03:31, 22 July 2008 (UTC)
I'm only struck by your "1.7 sigma + 1.5 sigma = 3.2 sigma." Mathematically, this doesn't make sense, sorry. Has nothing to do with Six Sigma or any other stochastic process in particular, but it makes me wonder about the proponents of a process so defended. Unbeatablevalue (talk) 04:19, 22 July 2008 (UTC)
Cumulative normal probability for z = 1.69 = 0.95449. Termed a 3.2 Sigma (short-term) process according to the Sigma table. Suggest you look up "1.5 sigma shift" in any of the standard reference works. A brief explanation (off the top of google) is given here. Jayen466 04:32, 22 July 2008 (UTC)
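The table convention described here can be written out in one line; the following Python sketch (again an illustrative aside, assuming SciPy) converts a long-term yield into the tabulated short-term sigma level by adding the 1.5 sigma shift back:

    from scipy.stats import norm

    def short_term_sigma(long_term_yield):
        # Sigma level as listed in the usual Six Sigma conversion tables:
        # normal quantile of the long-term yield plus the 1.5 sigma shift.
        return norm.ppf(long_term_yield) + 1.5

    print(round(short_term_sigma(0.9544), 1))      # ~3.2, the 95.44% example above
    print(round(short_term_sigma(1 - 3.4e-6), 1))  # ~6.0, i.e. 3.4 DPMO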

My proposal: Put everything after “Experience has shown” (with the exception of “Hence the widely accepted definition of a six sigma process is one that produces 3.4 defective parts per million opportunities,” which should stay with Origin and meaning of the term six sigma process) in a new section, say Long-term effect. The dispute about the opposing view can then be solved in the context of that new section, with a renewed hope that the general reader of the article won't be too confused. -- Iterator12n Talk 02:31, 22 July 2008 (UTC)

  • Guys, whatever we put into the article, it has to be cited to a reliable source. If there is literature on this aspect, by all means let's find it and use it. Jayen466 03:11, 22 July 2008 (UTC)
  • Alternatively, we could try and write a summary of the discussion in the source given above. --Jayen466 03:31, 22 July 2008 (UTC)

Jayen, your experience was apparently better than my early experience. There were and are those who teach that the proper way to state sigmas is to multiply Ppk by 3, then add 1.5. Somewhere, I probably even have very dusty copies of the original material and video tapes of the class that caused me to participate in a classroom revolt on that point, a very long time ago. That is the fallacy I object to.

Mikel Harry's own explanation of the subject has changed dramatically over the years. They are similar only in the fact that they both contain the words 1.5 and sigma. Oh... and I don't agree with either of them.

I think we would serve the readers well if we deleted my contribution, and made a small addition to Jayen's, something like:

"Some practitioners have improperly invoked the 1.5 sigma shift as a means of overstating the capability of a process."

With that, I'm happy.

Jayen, if there is a place we can have a private conversation, I'll share something with you that I think you will enjoy.

denton (talk) 04:24, 22 July 2008 (UTC)

You can use my talk page or e-mail me from my user page (click "e-mail this user" on the far left). Cheers, Jayen466 04:42, 22 July 2008 (UTC)
  • Since editors are now engaged in discussion I'll unprotect the page. ·:· Will Beback ·:· 22:08, 22 July 2008 (UTC)
    • Tx. I propose we add a summary of the source referenced above to the Reception section, drawing attention to the criticism and confusion focused on the 1.5 Sigma Shift, and the Sigma table being offset against the table of cumulative normal probabilities. Jayen466 22:42, 22 July 2008 (UTC)

  Done Diff-link for edits made --Jayen466 23:30, 22 July 2008 (UTC)

Reception

I believe this entire section violates the Wikipedia policy of neutrality. Navywings (talk) 14:48, 22 September 2008 (UTC)

Too negative? Jayen466 15:52, 22 September 2008 (UTC)
I guess I just think it shouldn't be there at all. I believe the article itself should be focused on what Six Sigma is and not the controversy surrounding it. If there's only one paper out there with a dissenting opinion, does it constitute an entry for opposing views? Where is the line drawn? My 2 cents. Navywings (talk) 16:14, 22 September 2008 (UTC)
I think there has been quite a lot of criticism of Six Sigma – the sources refer to such, even as they rebut it – and a statement by a notable person like Juran, for example, is okay to have. If there are positive assessments out there, they can be included too, but it kind of goes without saying, as Six Sigma has been so widely adopted (which the article makes clear). As you are a Black Belt, what do you think of the rest of the article? Jayen466 16:42, 22 September 2008 (UTC)
It would be really inappropriate not to have a reception/criticism section, considering the issues surrounding Six Sigma. One question that needs to be considered is whether SS is a set of fundamental scientific/statistical theories or a product. If it is the former, then criticisms are fairly levelled at it for its questionable assumptions; if it is the latter, then criticisms are fairly levelled at it for its questionable results. —Preceding unsigned comment added by 158.35.225.231 (talk) 19:16, 21 July 2009 (UTC)

1.5 Sigma shift

This change, while it seems well sourced, doesn't quite work for me. The 3.4 ppm appear ex-cathedra; it is no longer clear how you get from 6 sigma to those 3.4 ppm, and why. I'll have a go at combining the various sources. Jayen466 23:59, 24 July 2008 (UTC)

Iterator, I didn't revert, I placed the above note and tried to get the best of both edits together. I also updated a number of sources, replacing a blog post with a published work, etc. Please self-revert, and then we can talk. Jayen466 00:58, 25 July 2008 (UTC)
Let's get a consensus first, before we change the text again. -- Iterator12n Talk 01:01, 25 July 2008 (UTC)
That's not how WP:BRD works. Jayen466 01:08, 25 July 2008 (UTC)
Problems: Ref. 10 is a blog post; besides, it does not say what is cited to it. Jayen466 01:09, 25 July 2008 (UTC)
Second, as pointed out above, we do not explain where the 3.4 ppm come from, and why they are stipulated.
Third, the terms "short-term" and "long-term" are easily understood by any layperson. "Momentary variation", unfortunately, is not, rendering the passage somewhat opaque to the general reader. In my edit, I had retained this more statistical wording, at the end of the paragraph.
Fourth, ref. 12 is WP:OR (only cited to a normal distribution table). That wasn't perfect in my version yet either, but at least there was a secondary published source.
Overall, we seem to have lost some detail, some sense of logical progression, and ease of understanding, and replaced good sources with inferior ones. Jayen466 01:21, 25 July 2008 (UTC)
Re. the operation of WP:BRD: You have been bold, as a matter of fact you've been all over the article, including item 1.1.1. Now you have to find the Most Interested Persons. Acc. to WP:BRD you do that by letting others revert your stuff. With a revert you have discovered a Most Interested Person. Next you discuss the matter with the Most Interested Person, to come to a compromise. -- Iterator12n Talk 05:20, 25 July 2008 (UTC)
The article has been stable for a number of months. As for the revision I undertook, the feedback by other editors is above, under #Restructure. --Jayen466 14:19, 25 July 2008 (UTC)
Stable??? Before I made my first edit on July 20, the article had undergone about 420 edits in 2008 alone, with about 170 coming from Jayen. No stability here. Iterator12n Talk 15:03, 28 July 2008 (UTC)
Not that it matters much, but see [5]. What edits were made over that time period were mostly plugs for various companies and websites, reverted by me and others. Article content was stable. --Jayen466 16:56, 28 July 2008 (UTC)
The trouble with the text that Jayen wants to leave goes to these words: “According to this idea, a process that fits six sigmas between the process mean and the nearest specification limit in a short-term study […].” Whatever way you look at these words, they postulate a process that operates at 9.866 defects per 10,000,000,000 outcomes. Even in the short-term, without the longer-term drift and without the longer-term variation in the standard deviation, this process simply does not exist in the context of Six Sigma. -- Iterator12n Talk 01:19, 25 July 2008 (UTC)
That is not so. It corresponds to Pp = 2, and that is what practitioners aim for in development. The traditional requirement was Pp = 1.67, and Cp = 1.33. Six Sigma upped this to Pp = 2, and Cp = 1.5. There are plenty of reliable sources that state so, quite explicitly. See e.g. Pyzdek. On page 437, Pyzdek has an example of a process with Cp = 5 (that would be a 15 sigma process, and you'd have to add a whole number of zeros). Jayen466 01:23, 25 July 2008 (UTC)
Here are any number of sources documenting that Six Sigma aims for a short-term process capability index of Cp/Pp = 2, reduced to 1.5 by long-term variation: [6] [7]
More: [8]
Since the company is pursuing six-sigma process capability, we use this performance goal to determine the expected minimum of the process variation; that is, we set Ppk = 2 and induce the minimum of σp as (μp - LSL)/6. (International Journal of Applied Science and Engineering 2004. 2, 1: 59-71)
Let's build this up. Do we agree that six sigmas stands for 9.866E-10? -- Iterator12n Talk 01:37, 25 July 2008 (UTC)
No more WP:OR please. It is late here. I shall revert now and go to bed. Try and find references that say that Six Sigma does not strive for a Cp of 2, to be reduced by long-term variation to Cp = 1.5. Read Tennant p. 25-27 for an understanding of why 3 Sigma capability is deemed to be insufficient. I am not making this up, this is the general gist of Six Sigma literature. --Jayen466 01:47, 25 July 2008 (UTC)
Any normal distribution table shows that six sigmas stands for 9.866E-10, see for example here. Nothing WP:OR here; don’t use this argument to cut off a discussion. Second, notions like Pp and Cp imply a host of assumptions and conventions, not examined here by either of us. I submit that if we can’t discuss the subject in plain English, we don’t understand the essence of what we are talking about, nor will the general reader who stumbles into this discussion. Third, your revert now is hard to reconcile with a desire to reach consensus. But also, there is nothing in Wikipedia that lets ill will stand. Finally, we don’t have a deadline; take your rest, and we can discuss this later. -- Iterator12n Talk 02:23, 25 July 2008 (UTC)
Rather than thinking of it in terms of producing 1 billion parts and finding the faulty one, think of it in terms of getting sigma to be 1/12 of the tolerance spread. Then that is the result you get. For a real-life example, if you have a spec of 1.45m ± 1.20 for people's standing height, you will only get 2 in a billion "defective". Cheers, Jayen466 01:58, 25 July 2008 (UTC)
While you slept… I had a moment to see where we are not communicating. One of the general problems seems to be that you’re not always telling your readers the whole story. Take for instance the above "real-life example." A minor problem is that you leave out the unit of measure for the "± 1.20". OK, that one is easy to guess: ± 1.20m. Much more important is that you’re not telling the reader that in your example the standard deviation for people’s standing height is 0.20m - only if the standard deviation is 0.20m is it true that in your example 2 in a billion are "defective." Like I said, your reasoning is often difficult if not impossible to follow. Hence my proposal to build up our discussion step by step. Let’s see what we agree upon. Cheers. -- Iterator12n Talk 04:26, 25 July 2008 (UTC)
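For what it is worth, with the missing assumption (σ = 0.20 m) made explicit, the example works out as follows (a sketch using only the numbers from the exchange above):

 from statistics import NormalDist

 mean, sigma = 1.45, 0.20              # metres; the mean is assumed centred in the tolerance
 lsl, usl = 1.45 - 1.20, 1.45 + 1.20   # spec limits 0.25 m and 2.65 m, i.e. 6 sigma either side

 height = NormalDist(mean, sigma)
 p_defective = height.cdf(lsl) + (1 - height.cdf(usl))   # two-sided tail probability
 print(p_defective * 1e9)              # ≈ 1.97 per billion – the "2 in a billion defective"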
When you say “think of it in terms of sigma to be 1/12 of the tolerance spread” you are saying the equivalent of “think of the tolerance ("spread" is redundant) in terms of six sigmas on each side of the process mean.” In building up our discussion, do we first of all agree that six sigmas stands for 9.866E-10? -- Iterator12n Talk 02:32, 25 July 2008 (UTC)
Re-reading your answer above, I take it that your “2 in a billion” acknowledges that six sigmas stands for 9.866E-10, ok, make that 1.0E-9. Next question. Do you think that a Six Sigma process that over the longer term produces 3.4 defects/million outcomes and that has built in 3 sigmas for the combination of longer-term drift and longer-term variation in the standard deviation produces at any particular moment (that is, without the effect of the two longer-term phenomena) widgets with not more than 2 in a billion defects? -- Iterator12n Talk 05:00, 25 July 2008 (UTC)
Please bring reliable sources making your point. Jayen466 12:08, 25 July 2008 (UTC)
To answer your question, yes, I do – and more to the point, so do reliable sources. It is elementary probability theory. I have no desire to spend a great deal of time countering your WP:OR with researching quotes in WP:RS. See Google scholar: [9] Kindly self-revert. Jayen466 13:39, 25 July 2008 (UTC)
Thank you for your “To answer your question, yes I do.” I took the time to follow up on your Google scholar list, searching for empirical, WP:RS evidence that a so-called Six Sigma process indeed produces at any particular moment widgets with only 2 in a billion defects (emphasis on empirical, as we are in agreement on elementary statistical theory). I couldn’t get to the first article (the GE one) on the Google list but I found the second one, the article by Robert Binder. Conclusion: Binder does NOT provide empirical evidence that some Six Sigma process at any particular moment produces widgets with only 2 in a billion defects. Binder talks about 2 in a billion only in the theoretical. (“With 6 sigma tolerances, a single part, and a stable production process, you’d expect to have only 2 defects per billion.”) Before spending more time checking items from a mechanically-produced Google list, may I ask you to point me to one, personally-verified, easily-accessible article that provides empirical evidence of a Six Sigma process with a momentary 2 in a billion defects. Thanks in advance. Presence or absence of empirical evidence will guide my future editing of the relevant parts of the Six Sigma article. (P.S.: It’s worth reading the whole of Binder’s article, particularly where it comes to this conclusion: “Used as a slogan, 6 sigma means some (subjectively) very low defect level. The precise statistical sense is lost.” In due time, I will also return to this WP:RS quote, particularly “The precise statistical sense is lost”, and will add it to the Six Sigma article. For sure, Six Sigma is a positive thing, however, its strength is not in mathematics. The latter too should come across in Wikipedia, and not just at the tail-end of the article.) Cheers. -- Iterator12n Talk 01:09, 28 July 2008 (UTC)
The point is moot, because the article does not mention the 2 per billion. All we are saying is that short-term σ should be 1/12 of the tolerance width, or less. That is a straightforward and easily verifiable goal. Jayen466 09:46, 28 July 2008 (UTC) (See also my post below, dd. 11:55, 26 July 2008 (UTC)). Jayen466 10:09, 28 July 2008 (UTC)
I think commentators are agreed that Six Sigma "fudges" the statistics. Apart from the 1.5 sigma shift, this comes from the normal distribution assumption which gives you those 2 parts per billion. The normal distribution and the probabilities derived from it are an idealised model which will never perfectly describe a real-life process.
Such criticism belongs in the article (and it is there), but it belongs in the "Reception" section.
Having said that, to come back to your original question, if ±6 σST, with σST calculated from, say, 50 values, fits into the tolerance range, it's well-nigh a mathematical impossibility to have a value outside the tolerance range in the data that yielded that σST value. Your set would have to comprise 49 identical values and one outlier – at which point the assumption of normality would break down, again rendering the point moot. Jayen466 10:09, 28 July 2008 (UTC)
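A quick simulation illustrates why: with 50 values drawn from a normal distribution, the furthest point essentially never lies 6 sample standard deviations from the sample mean (a sketch under that assumption; the printed figure is simply what a typical run shows):

 import random, statistics

 random.seed(1)
 worst = 0.0
 for _ in range(10_000):                            # 10,000 simulated 50-value short-term studies
     data = [random.gauss(0, 1) for _ in range(50)]
     xbar, s = statistics.fmean(data), statistics.stdev(data)
     worst = max(worst, max(abs(x - xbar) for x in data) / s)
 print(worst)   # typically around 4 – far short of the 6 needed for an out-of-tolerance value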
Of course, well-nigh is not a mathematical concept. I’m on the road now but when I have a moment I’ll compute the required batch size to be able to say with 90 percent certainty whether the momentary process, according to the distribution of its observed outcomes, performs at a 2 in a billion defect rate. -- Iterator12n Talk 14:58, 28 July 2008 (UTC)
By all means, but, with respect, you're still missing the point. Six Sigma requires that the short-term σ, calculated from data in a "short-term study" (a concept that, note, has no universally agreed definition), should be 1/12 of the tolerance range, or less. If it is, and if you then plug that σ value into an equation that tells you how many sigma units you are away from the nearest specification limit (disregarding confidence intervals for σ estimates, which would depend on how many data points the value is based on, as well as the precise probability density function, which is unknown), and if you then assume normality, then your normal probability table will tell you that for a short-term sigma that is exactly 1/12 of the tolerance range, with the process perfectly centred, you would get 2 defects per billion. But no one makes that point here; our article certainly does not make it. All we are saying is that Six Sigma wants a short-term study to result in a σ that is 1/12 the tolerance width, or less. That is something that is easily verifiable – if your tolerance is ±30 mm, and your short-term sigma is 5 mm or less, then you have done what Six Sigma asks you to do. And Six Sigma does not tell you that you will then have 2 defects in a billion; it tells you that you should assume you will have 3.4 DPMO, and that, empirically speaking, this value is a fairly good assumption to operate on. --Jayen466 16:38, 28 July 2008 (UTC)
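In code, the "easily verifiable" check described here is just (a sketch using the ±30 mm / 5 mm figures from the comment above):

 tol_half, sigma_short = 30.0, 5.0                     # mm, the figures used above
 meets_six_sigma = sigma_short <= (2 * tol_half) / 12  # short-term sigma at most 1/12 of the tolerance width
 print(meets_six_sigma)   # True – and Six Sigma then tells you to plan on 3.4 DPMO, not on 2 per billion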
As somebody else already said below, "Both recent versions of the article seem to agree on this, although one version is rather more verbose than the other." Trim the fat. -- Iterator12n Talk 20:29, 28 July 2008 (UTC)
What confuses me is that it's not clear what the edits have changed - I believe the claim that 4.5 standard deviations suffice to meet the bar of 3.4 ppm, and that the additional 1.5 sigma is an empirically determined allowance to account for long-term shifts in the distribution. Both recent versions of the article seem to agree on this (although one version is rather more verbose than the other). Both editors seem to be acting in good faith, although I caution Jayen to be less aggressive. What exactly are you arguing about? Dcoetzee 23:46, 25 July 2008 (UTC)
Thanks for looking in. As I've said above, I see the following problems with the present version:
  • Ref. 10 is a blog post and does not say what is cited to it. The source I inserted (Snee, now deleted) did.
  • We do not explain where the 3.4 ppm come from – they come from starting with 6 Sigma, the benchmark to be aimed for, and then subtracting (not adding) 1.5 sigma, to account for long-term variation. This leaves 4.5 sigma, with 3.4 DPMO. So the definition of a 6 sigma process as equivalent to 3.4 DPMO is secondary – not primary, as the present text makes it appear. To put it another way, no one started out with the idea that it would be nice to have processes that deliver 3.4 DPMO. There is nothing special or desirable about the number 3.4. People started out with the notion of aiming for 6 Sigma (rather than the previous 3, 4 or 5, which still resulted in defect levels that caused too much cost). Experience had appeared to indicate that over time, the distance between the process mean and the critical specification limit generally ends up diminishing by 1.5 sigma. Starting from a 6 sigma process, this left a distance of just 4.5 sigma, and this happens to correspond to 3.4 DPMO. If we begin by saying to the reader that it's "a central Six Sigma tenet" for a process to have no more than 3.4 defects in one million process outcomes, as we currently do, the reader will say, Why? What's special about 3.4? It makes the aim appear arbitrary, and more difficult to understand. I suppose this is my main concern.
  • The terms "short-term" and "long-term" are part of the general language, while "momentary variations" is a terminus technicus likely to flummox the average reader. Six Sigma is a topic of general interest that many readers without detailed mathematical knowledge are faced with at their place of work. Hence I believe it is a service to the reader to use general language, at least when presenting the basics.
  • Ref. 12 cites a normal distribution table; I believe it would be preferable to cite a textbook (which I had done).
  • I was also concerned that Iterator changed the text, while using the same references that were in place before his edit. Of course that's fine if he had the publication on his desk and based his wording on the wording in the source.
So, my apologies to Iterator for having been impatient; in my defense I will say it was 3.30 in the morning, and I felt rather unwilling to begin a long drawn-out mathematical debate about whether the fundamentals of Six Sigma, as presented in the relevant literature, make sense or not. And I will say again that that is not our job; we are here to summarize Six Sigma thought, as presented in reliable sources, as well as notable criticism of it, as presented in reliable sources. --Jayen466 00:55, 26 July 2008 (UTC)
I see now. The calculations are simple, but the computation of the 3.4 ppm figure is still original research, because it implies a certain intention on the part of Six Sigma, when in fact it is attempting to reverse engineer their intention from their result; my misunderstanding is illustrative of how other readers will misunderstand this figure. Either Six Sigma's original target value should be located, or the language should be generalized to merely indicate that this value is larger than the value that would be obtained by using all six sigmas. Dcoetzee 02:19, 26 July 2008 (UTC)
The primary aim is 6 sigma. This simply means that if the tolerance is, say, ±30 mm, the standard deviation of short-term data should be 5 mm, or smaller. The aim is not to have 2 defects per billion opportunities, which could not realistically be verified; this is simply a probability value that results from the properties of the normal distribution. Another problem with the present wording is that we have lost the term "1.5 sigma shift", which is central to Six Sigma, and is also central to criticism of Six Sigma – in fact, we have a subsection "Criticism of the 1.5 sigma shift" in Reception, and readers won't now know what we are talking about. Hence I am in favour of reverting to the previous version. --Jayen466 11:55, 26 July 2008 (UTC)

←The previous version also has significant problems, but it was an improvement based on the outcome of the previous discussions shown above. Therefore, if an improved compromise version cannot be agreed, I consent to restoration of the previous version of 13:33, 24 July 2008.

Iterator12n (05:20): "You have been bold, as a matter of fact you've been all over the article, including item 1.1.1" My judgment is that Iterator12n is engaged in a tu quoque fallacy, conflating previous editing cycles, which were discussed, with the present one in order to impose his bold version on the article prior to discussing it. (see 11-edit diff).
Furthermore, by removing the word "definition" ("the widely accepted definition of a six sigma process"), removing a subheading in a confusing article that needs more, not fewer, subheadings, and introducing a previously undefined "momentary variations" concept ("moment" is a word to be used only with great contextual care in technical writing, since it can mean anything from an instant to an eon), the article became more confusing.

Part of the overall confusion is Motorola's. They use a rhetorically incorrect word in "Using this scale, 'Six Sigma' equates to 3.4 defects per one million opportunities (DPMO)." It does not equate; it is defined that way. "Equates" implies a simple statistical-mathematical correctness, rather than the Six SigmaSM metric's pragmatic definition. Naturally GE has simply repeated what Motorola wrote as the owner of the service mark, but it should not be repeated here.

As a partial fix, I recommend adding a two-column table to show the relationship between "Six SigmaSM metric definition" and "6 sigma mathematics". This table should include some of the confusing terms, symbols, and variant approximations with units:

           | Six SigmaSM metric definition      | 6 sigma mathematics     |
-----------+------------------------------------+-------------------------+
Standard   | 4.5σ                               | 6σ                      |
deviations |                                    |                         |
-----------+------------------------------------+-------------------------+
Success    | 99.9997%                           | 99.9999998%             |
rate       |                                    |                         |
-----------+------------------------------------+-------------------------+
DPMO       | 3.4 ppm                            | ≈ 2 ppb                 |
           |                                    | = 1.97E-9               |
           |                                    | ≈ 0.002 ppm             |
-----------+------------------------------------+-------------------------+
ST         | 2 ppb only during capability study | 2 ppb                   |
-----------+------------------------------------+-------------------------+
LT         | 3.4 ppm                            | 3.4 ppm                 |
-----------+------------------------------------+-------------------------+
Cp         | 1.5                                | 2.0                     |
-----------+------------------------------------+-------------------------+
Cpk        | ?                                  | ?                       |
-----------+------------------------------------+-------------------------+
Pp         | ?                                  | ?                       |
-----------+------------------------------------+-------------------------+
Ppk        | ?                                  | ?                       |
-----------+------------------------------------+-------------------------+

This table would need a key to the symbols and acronyms. Note the question marks which reflect my reluctance to spend a lot of research time on details well-known to Jayen.

While doing research, I found some of the external links to range from inadequate to poor in quality. In particular, the Honeywell material was unfindable, the Six Sigma Guide is written in ESL, and the USA Today link is stale. GE was not useful for the Six SigmaSM metric, but perhaps had some interesting material on the Six SigmaSM management system, which was not what I was focused on. I had to do a Google search for better ones. Valerie Bolhouse was useful to me for part of the Six SigmaSM metric.

Of top-level importance, the introduction and article structure need to be changed to show that Six SigmaSM is first of all a service mark that has evolved to mean three things distractingly different from the original metric concept rooted in statistics. These three Six SigmaSM things – metric, methodology, and management system – are described at Motorola University What is Six Sigma?. There should be a new section above #Historical overview with three subheads parallel to the Motorola What is Six Sigma? page. Milo 21:55, 26 July 2008 (UTC)

Industry's missing standards for process indices

  • Thanks for your comments. I've reverted for now to the old version; we'll have to work on the other issues. Just for added inspiration, the German article contains the following table, which, once verified and sourced, could perhaps be of use as a starting point (it includes the 1.5 sigma shift; we could add a column for the normal probabilities without the shift):
Sigma   DPMO        % conforming
1       691,462     30.85375
2       308,537     69.14625
3        66,807     93.31928
4         6,210     99.37903
5           233     99.97673
6             3.4   99.99966
7             0.019 99.9999981
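For verification, the table can be regenerated from the standard normal distribution with the 1.5 sigma shift applied (a sketch; it assumes the one-sided convention that the published Sigma tables appear to use):

 from statistics import NormalDist

 std = NormalDist()
 print(f"{'Sigma':>5} {'DPMO':>12} {'% conforming':>14}")
 for level in range(1, 8):
     dpmo = (1 - std.cdf(level - 1.5)) * 1e6    # one-sided defect rate after the 1.5 sigma shift
     print(f"{level:>5} {dpmo:>12,.3f} {100 - dpmo / 1e4:>14.7f}")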

The capability index notation I think could get us into deep waters, since this is not sufficiently standardised. Some people write Cp = 2 and Cpk = 1.5 to denote the effect of the 1.5 sigma shift, others may write Ppk = 2 and Cpk = 1.5 to express the same idea. This requires some more thought and research. Jayen466 01:29, 27 July 2008 (UTC)

Jayen466 (01:29): "the German article contains the following table"
It might eventually fit into the section #Origin and meaning of the term "six sigma process". At present there is no supporting text focusing it into a business article, such as describing a claim I saw on the web that previous process systems were constructed to 3 sigma quality.
Jayen466 (01:29): "capability index notation I think could get us into deep waters"
Perhaps. The problem is that you used those notations to support your side of a debate issue, so they seem to be of considerable importance to the contentious 1.5 sigma shift issues. Being a newcomer to the subject I don't know how deep those waters are, but I subscribe to the teaching practice of leaving clues to knowing what one doesn't know.
There's no brief account in the article of how the 1.5 sigma shift was empirically discovered. The absence of such an account tends to lend weight to Wheeler's "goofy" dismissal. Unstandardized biz/stat-babble notations don't help to counter that claim.
Jayen466 (01:29): "since this is not sufficiently standardised"
It depends on how many possibilities exist. Only two possibilities may not be difficult to put in a table, possibly with the names of schools/authors that teach it. If there's a range of possibilities, such might go into a table as a hyphenated range. This has the advantage of admitting the notations exist in an inherently summarized table that at least reveals a general relationship to the other terms. If they lead to deep water either because of statistical formula complexity, or because of multiple schools of practice, they can be footnote-marked in the table, described as complex for those reasons in the footnote text, and referred to a source. Milo 05:39, 27 July 2008 (UTC)
This page here and this section from Breyfogle illustrate the problems with index notation. It is a mess, I'm afraid, and one that I'd rather steer clear of in this article. The implications of doing justice to it are just too daunting; I fear they would take up a large part of this article. Jayen466 16:07, 27 July 2008 (UTC)
"previous process systems were constructed to 3 sigma quality" – The maths of process capability give you a value of Cpk = 1 if you can just fit xbar ± 3 sigma into the tolerance range. Up until the mid-eighties, this was considered a "capable process". In the late eighties, the automotive industry moved to asking for Cpk = 1.33 (±4 sigma) for long-term data, and asking for Ppk = 1.67 (±5 sigma) for short-term data (this is Ppk in the sense of "preliminary process capability", a short-term study that can be based on as little as 50–100 values). Then Six Sigma came along, asking essentially for a Ppk of 2 (±6 Sigma), and a Cpk of 1.5. (Cp just compares process spread and tolerance width, while Cpk takes into account how close the process mean is to the nearest specification limit.) --Jayen466 16:22, 27 July 2008 (UTC)
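To make the index definitions concrete, here is a small sketch (the formulas are the common textbook ones for Cp and Cpk; the millimetre values are purely illustrative, and the short-term/long-term naming caveats discussed below still apply):

 def cp(usl, lsl, sigma):
     """Potential capability: tolerance width compared with the 6-sigma process spread."""
     return (usl - lsl) / (6 * sigma)

 def cpk(usl, lsl, mean, sigma):
     """Capability allowing for centring: distance to the nearest limit in 3-sigma units."""
     return min(usl - mean, mean - lsl) / (3 * sigma)

 # Illustrative values: tolerance 100 ± 30 mm, sigma = 5 mm
 print(cp(130, 70, 5))          # 2.0 – the short-term "six sigma" target
 print(cpk(130, 70, 100, 5))    # 2.0 with the process centred
 print(cpk(130, 70, 107.5, 5))  # 1.5 after a 1.5-sigma drift of the mean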
Jayen466 (16:07): "I fear they would take up a large part of this article"
While there appear to be only two possibilities, I see your point. Taken all together, the indices are used (or believed to be used) in exactly opposite ways, to mean either short-term or long-term, by at least two groups of industries. (Fortunately, "k" apparently refers in all cases to a centering-sensitive version of each index.)
Thanks for the good choice of those two references, "Pp and Ppk", Samir & Hemanth posts and Breyfogle, "Implementing Six Sigma: Smarter...", pg. 257-260. Breyfogle identifies "AIAG (1995b)" (an automotive group) as one defining standard for process capability and performance indices. Breyfogle further identifies "Opinion 1" and "Opinion 2" (p. 259-260) of the meanings and use of the indices. Posters Samir & Hemanth identify one practice as that of the automotive industry, but since AIAG is North American, is AIAG aligned with Toyota's pre-North American practices? As Motorola, an electronics manufacturer, is the Six SigmaSM mark owner, do they represent the other major practice?
My reading of Breyfogle is that Cp, Cpk, Pp, Ppk, shouldn't be used without identifying them as to whether they mean short-term or long-term. I looked over your previous dialogs using these indices, and I couldn't follow them without an exhaustive (and exhausting) contextual analysis of each referred author. Assuming that I correctly identified the issue, you could use what I understand to be a conventional method of extending notations, by subscripting "ST" or "LT" each time you casually mention an index on this talk page, thus:
CpST , CpkST , PpST , PpkST or CpLT , CpkLT , PpLT , PpkLT .
(The extension subscript wikicodes are <sub>ST</sub> and <sub>LT</sub> .)
This extension subscript should immediately reveal to practitioners of differing notation meanings what you really intend, and guide newcomers to your context. For longer formal expositions, say, quoted from textbooks, you could state the notation standard in use at the beginning.
I suggest another article to handle this problem, perhaps Process performance and capability index standards, which as a new stub should reference Breyfogle. Milo 21:44, 27 July 2008 (UTC)
I guess the right place to address this is Process capability. An added difficulty is that some companies insist that the calculation of process capability indices is only permissible if the process is stable (under statistical control, common cause variation only), while others calculate Cp or Pp from long-term data that include special cause variation. And you're right of course about AIAG saying one thing, and the automotive manufacturers supposedly subscribing to AIAG apparently saying another. <sigh> Here is more of the same confusion. The most I feel competent to point out in Wikipedia at this point is that various companies have different, mutually contradictory definitions. --Jayen466 22:15, 27 July 2008 (UTC)

Jayen466 (22:15): "Here is more of the same confusion"
The good news is that you've studied beyond them in a meta-view, and so were able to point me to a reliable source. That means Wikipedia is in a position to help with disambiguation.
Though Breyfogle is a source that this issue exists and what to do about it, he doesn't identify the schools of practice, other than "AIAG", "Opinion 1", and "Opinion 2". Still, it's more than adequate for a stub.
Jayen466 (22:15): "place to address this is Process capability"
I hadn't previously noticed Process capability, and I see that there is no parallel article for Process performance <-- (currently red linked). Trying to develop issues about competing standards inside what one school might perceive as "their" article doesn't strike me as wise. I had briefly looked at Process capability index and Process performance index, both of which are math-focused articles. My sense is they are all inappropriate to modify until a standards article is stable. The latter two did suggest to me the article name "Process performance and capability index standards".
This issue isn't really about "process capability" or "process performance" per se. Whatever their name or index symbol, each of the two study types, ST and LT, apparently more or less work and each seems to have a distinguishably separate function – if you understand the ST and LT methods without being able to unambiguously name them. The presenting issue is nomenclature and related standards of symbolic identity, with a pressing business need to figure out what the confused customer really needs for a data acquisition method and a matching choice of software calculations.
It seems to me that an apparently important industry communications issue shouldn't be subsumed under an existing title. However, I have no dog in the outcome of this one.
Milo 02:19, 28 July 2008 (UTC)

External links

I removed all but one link as being overly promotional, along with the hidden comment, "These links were chosen after extensive debate. Please do not make changes without discussing it on the talk page first." It looks like these debates happened years ago, and were only relevant to a couple of the links. I think the article has evolved to the point where we can be more selective of what external links are listed. --Ronz (talk) 02:12, 13 August 2008 (UTC)

Good call. Jayen466 02:48, 13 August 2008 (UTC)

I uploaded an independent media source, www.sixsigmazone.com, and it was removed. Why was this? It seemed to me to have good content and articles about the subject, like the USA Today link... JohnN66 (talk) 12:18, 6 May 2009 (UTC)

Hi John, Firstly, thank you for engaging here and welcome to Wikipedia. I removed your link due to a number of concerns. If you take a look at WP:EL it is clear that external links should be kept to a minimum. Imagine if every "notable" site was added to every "suitable" host article, the project would grind to a halt. So, it is much harder to include a link by satisfying the requirements. In particular, your link did not pass based on four points (from links to be avoided):
  • Information is not unique (based on point 1);
  • The link can be viewed as promotion for the website (4);
  • The site exists primarily to promote events, books, training and other services (5);
  • It includes lists of suppliers such as consultants and other vendors (14).
In addition, "avoid linking to a site that you own, maintain, or represent" may apply. So, how to get around this? Well you are clearly expert in the field and would be most welcome to contribute to the article. In fact, this enables editors to understand that your motives are in the best interests of the project. I hope this makes sense. Please review the links policy again. If I can clarify anything further, please don't hesitate to ask either here or on my talk page. Once again, thanks for engaging. Best regards Nelson50T 13:40, 6 May 2009 (UTC)

Lean six sigma

I'm thinking of forking out Lean Six Sigma as a separate article. I would like to know everyone's opinions on this move. Thanks Zithan (talk) 16:09, 26 December 2008 (UTC)

We had an article on Lean Six Sigma previously. It just collected advertisements for various consultants and was eventually deleted. I will support recreation of a new article if it cites appropriate sources published by reputable publishing houses. Jayen466 22:18, 29 December 2008 (UTC)
Absolutely, Lean Six Sigma is a fusion of Lean and Six Sigma concepts. Lean emphasizes transforming processes to improve velocity and eliminate waste, while Six Sigma emphasizes statistical control of processes to minimize defects. Many organizations are recognizing that the two methods are complementary and are adopting formal Lean Six Sigma programs (for example, see the US Army's position at [10]).--Gordon Jones (talk) 15:26, 27 January 2009 (UTC)

Technical error in graph title, "inflection point"

This article has a technical error in the title of the graph. It says, "The Greek letter σ marks the distance on the horizontal axis between the mean, µ, and the curve's inflection point", but the curve does NOT have any inflection point.

Sigma is equal to the standard deviation of the normal distribution, such that a larger sigma makes a wider (more spread-out) bell shape.

What do you think the author meant? What would be a better sentence here? Markgenius (talk) 23:44, 13 February 2009 (UTC)

The normal curve most assuredly does have a point of inflection (or, strictly speaking, two of them). As for the distance between the point of inflection and the mean being equivalent to the standard deviation, that is a pretty well-known fact of statistics. [11] If you think it can be formulated better, by all means let's look at it. Jayen466 00:16, 14 February 2009 (UTC)

All you need to know about six sigma can be learned from this talk page...

I ventured to this page after hearing complaints from my friends about "corporate nerds" at their companies who subscribed to some way of thinking called Six Sigma, and how they were completely immersed in some bizarre world of business double-speak and clichéd thinking, the type of people who probably have those corporate motivational "teamwork" posters hanging up in their homes. After reading the article I decided to check out the talk page, and WHOA NELLIE. Talk about hitting the nail on the head. I now feel as though I know EXACTLY what they were talking about.

Thanks – if there are bits in the article itself (rather than the talk page) that leave you scratching or shaking your head, do list'em here. For we are committed to the pursuit of cross-functional team efforts resulting in appropriate corrective measures designed to raise the quality of process output to six sigma level, and having analysed your data and implemented suitable improvement actions, we will institute controls to safeguard the gains and maintain quality at this level, all the while endeavouring to improve quality even further, true to the principle of neverending improvement. (Note: The second sentence is a joke, the first isn't.) Jayen466 17:07, 26 February 2009 (UTC)

Yellow Belts

[12] Jayen466 17:07, 26 February 2009 (UTC)

Six Sigma Without the Religion

I've heard this term bandied around in C-level discussions – any thoughts on whether this is even definable? —Preceding unsigned comment added by 143.112.144.129 (talk) 22:24, 29 July 2009 (UTC)

Failure mode and effects analysis

Hello, I am trying to make my first Wikipedia edit. In doing so, I wrongly attached my name to a particular edit (twice) and it appears that these edits have been removed, which makes sense...thanks for catching this! The change I am trying to make is very simple and correct. I'm trying to add the acronym FMEA to the tool failure mode and effects analysis. In practice and in the literature, the tool is usually referred to as FMEA. Since other acronyms aren't referenced - for example QFD - I'm assuming that acronyms don't require a reference. Please correct me if I am wrong. HyperTransparency (talk) 17:13, 6 September 2009 (UTC)

  Done. Thanks for contributing to WP, I hope you continue to help with our project, just be careful not to sign articles, only talks! RaseaC (talk) 17:17, 6 September 2009 (UTC)
Ah, I see now what you were trying to do. You're quite right, the abbreviation is good to have there. Best, JN466 21:45, 6 September 2009 (UTC)