Development dates

Most of the initial AHP development was in the 1970s, not the 1980s. The seeds of it were sown from 1963-1969 when Dr. Saaty worked for the State Department (See his "Fundamentals of Decision Making and Priority Theory," page ix.) The first book on it was published in 1980 (Saaty's "The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation.") Lou Sander (talk) 15:09, 3 February 2008 (UTC)

Irrelevant

What I deleted was irrelevant. The article is imbalanced. Sweet Death (talk) 01:48, 7 February 2008 (UTC)

Improving the criticisms section

There are some criticisms of this analytical process, but they have not found wide support. This section does not present the situation fairly --

  • Undue weight is given to 18-year-old debates among professors, including extensive quoting from one side.
  • The claim that "the use of arbitrary scales and certain internal inconsistencies and theoretical flaws have been discussed extensively in the literature" is supported by only two (2) citations, both over ten (10) years old. Nobody in this century ever pays attention to them, let alone repeats them.
  • The use of Wikipedia subheadings gives these matters undue importance in the article's Table Of Contents. They are of historical interest only.

How can these problems be fixed? Skyrocket654 (talk) 18:59, 10 February 2008 (UTC)

It seems Skyrocket654 has a point here. Because I am rather new at being a Wikipedia editor, I am reluctant to make any changes myself. MathDame (talk) 17:03, 11 February 2008 (UTC)
I have also been bothered by the many TOC entries for this small section. I am fixing the problem by replacing the headings with bolded letters. Hope it's O.K. Cleome (talk) 18:38, 11 February 2008 (UTC)
I noted ERosa's remarks. 1) If all we have is 18-year-old stuff, we shouldn't say a lot about it. Maybe mention that there were debates 18 years ago, and provide the references. 2) If the critics are still arguing that their charges weren't answered, we should include current references. 3) We shouldn't care very much about proponents or opponents or what they might think, or how many there are, and so forth. Let them publish their stuff. We can include it if it's appropriate to do so. DCLawyer (talk) 02:06, 12 February 2008 (UTC)
I have been bothered for months by the quotations from those long ago debates. There were at least eight or nine articles in the series. It is not our job to summarize them, especially by arbitrarily selecting quotations. But the debates did take place, and there is no reason not to mention them. Readers who are interested in them can get a complete picture by reading the articles in the references.
The references should be kept but the quotations should be deleted. All this can be fixed by deleting everything between "In these debates,..." and "...good basis from which to develop." The two references that appear there are duplicates of those in the "A series of debates..." sentence. Ohio Mailman (talk) 16:17, 13 February 2008 (UTC)
Again, note my remarks below about why the age of a still-valid criticism matters not. The question should only be about whether they represent the weight of current academic thinking (people tend not to republish the same criticism even though it is still valid, since anyone can just cite the original). Some here have remarked that these criticisms "have not found wide support," but if that is the case they should cite their sources. I find these quotes to be the predominant thinking among people in the decision sciences but, again, a claim either way would be unfounded and OR without a verifiable source. ERosa (talk) 23:18, 13 February 2008 (UTC)
But I think Ohio Mailman makes a valid point about the quotes. All criticisms should stand (rank reversal should be re-entered) but could be replaced with a description of the debating positions instead of quotes. ERosa (talk) 23:20, 13 February 2008 (UTC)

There are some famous theorems in geometry that are over 2500 years old, and they are still good. The problem is that the criticisms were sound 18 years ago and the proponents never answered them in a way that is conclusive proof that the initial criticism is wrong. Contrary to what DCLawyer said, the burden of proof should be on the side that has not provided citations for how the critics were answered. If these criticisms are "out of date" then provide the citation that conclusively disproved them. Otherwise, assume they are still valid. To simply label them old without specific citation amounts to original research. Let's avoid breaking the NOR rule. I could point out that the criticisms of areas such as phrenology, astrology, the Ptolemaic system, and certain aspects of Aristotelian physics are also quite old. If we use DCLawyer's logic, should we now conclude that the criticisms have worn off and it is the burden of the critics to cite new criticisms? No, until the old ones are answered conclusively, they are still good. ERosa (talk) 02:43, 12 February 2008 (UTC)

The debates started 18 years ago but the critics argue that their charges were never adequately answered. No empirical findings to date have overturned these criticisms. If anyone has information to the contrary, please cite the specific source of the controlled experiment (not just testimonials from a case study) or the mathematical proof of the error in the criticism. I realize AHP proponents are almost religious in their support of AHP and prefer not to admit to any current debate on the topic. But the debate is real and is still real after 18 years. Furthermore, you should compare the relative emphasis given to criticisms sections in other methodologies and I think you will find that this is entirely consistent with how criticisms are presented in peer-reviewed literature. Finally, I'm not sure what Sweet Death meant by not having wide support, but among academics I find very few pro-AHP researchers. The one exception I found was a student of Saaty who started an AHP software company. On the other hand, if you feel that you have empirical evidence of how wide or narrow support for AHP is among objective researchers (not, of course, among AHP vendors and managers who bet their careers on buying the AHP tools) then you should cite that in the article. ERosa (talk) 18:56, 11 February 2008 (UTC)
I also just noticed that the history of the article included a point about rank reversal. Proponents argue that this has been addressed with the "ideal process mode". But this is still of interest at least historically because it was a key criticism for many years. It's also interesting that Saaty's initial argument was that when rank reversal happens, it actually *should* be happening and is not, as the critics charged, evidence of a fundamental logical flaw in the process. It was only much later that Saaty apparently agreed that rank reversals are often (I would say always) an absurd result and then developed the Ideal Process mode to correct it. My concern is that when a method has such a fundamental flaw, one has to be concerned about the entire approach and that just fixing the outcome after the fact is putting lipstick on a pig. But, since I don't want to be accused of original research, I'll see if some other source has already responded to Saaty's fix to rank reversal and write a section that cites it. In the meantime, I think rank reversal is important enough to at least be mentioned as part of the historical evolution of AHP. ERosa (talk) 21:15, 11 February 2008 (UTC)

I decided to be bold and make my first edit to this article. I found some good accurate words in two of the existing references and I thought they were better than what was there. MathDame (talk) 03:14, 15 February 2008 (UTC)

Rank reversal

I don't know much about this phenomenon, but it seems like it gives rise to an important criticism of the AHP. Apparently this criticism has been around for a long time, and at least a few people might still be talking about it. We probably should say something about it in the Criticisms section. Here is what I think should be said:

  • We should acknowledge that there is a phenomenon called "rank reversal," and that some people think it's a problem.
  • We should describe the phenomenon, in a way that is appropriate to a general encyclopedia. Maybe include a practical example.
  • Also, in an appropriate way, we should say something about what critics have said about the effect of rank reversal on AHP's suitability, reliability, etc. If there is a debate involved, which it seems there is, we should also say something about the rebuttals to the criticisms. We should NOT cherry pick quotes. Hopefully, somebody can find a reliable, unbiased 21st century source that summarizes the debate.
  • We should carefully avoid giving this criticism undue weight. AHP is used, studied, and taught very extensively in many countries of the world, by intelligent people who know what they are doing. Whatever rank reversal is, it does NOT seem to be something that "proves" that AHP is no good, or that seems to be having much visible effect on its use.

I'm not very interested in pursuing this matter myself. I'm just putting in my two cents worth. Skyrocket654 (talk) 15:51, 15 February 2008 (UTC)

What you've said is fine but I would caution that a method being widely used is no evidence of it actually working or being beneficial. If that were the case, we would have to acknowledge astrology as even more beneficial. The point that some of the critics (including those listed in the article) have made is that a lot of people are spending time with a method that - they argue - doesn't really work at all. I think the fact that AHP is widely used is already clear from the article but the criticisms section should not necessarily be blunted as if wide use were evidence the criticisms are invalid. Regarding this specific criticism, we should mention that Saaty's "Ideal Process Mode" is meant to correct rank reversal. The fact that Saaty originally argued that rank reversal SHOULD occur and then, contrary to his original position, created a method to "fix" the problem may give the issue undue weight as a criticism. But another point I would make is that the issue of rank reversal doesn't even necessarily have to be a criticism. It is simply part of the evolution of AHP. How a method evolved does seem to be a legitimate topic in articles about other methods. ERosa (talk) 17:56, 15 February 2008 (UTC)
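As an aside for readers trying to follow the rank reversal discussion above, here is a small runnable sketch. The numbers are invented purely for illustration (they come from no cited source), and the two normalization rules are simplified stand-ins for what the literature calls the distributive and ideal synthesis modes:

```python
# Illustrative only: invented local scores showing rank reversal under
# sum-normalization ("distributive" mode) and how max-normalization
# ("ideal" mode) preserves the original ranking on the same data.

def overall(scores, weights, mode="distributive"):
    """scores: {alternative: [local score per criterion]} (ratio-scale).
    Distributive mode divides each criterion's scores by their sum;
    ideal mode divides by the maximum score under that criterion."""
    n = len(weights)
    norm = []
    for j in range(n):
        col = [s[j] for s in scores.values()]
        norm.append(sum(col) if mode == "distributive" else max(col))
    return {a: sum(weights[j] * s[j] / norm[j] for j in range(n))
            for a, s in scores.items()}

w = [0.5, 0.5]  # two equally weighted criteria
two = overall({"A": [9, 1], "B": [1, 10]}, w)
# B outranks A with only two alternatives

# Add B2, an exact copy of B, without changing A or B at all.
three = overall({"A": [9, 1], "B": [1, 10], "B2": [1, 10]}, w)
# Under sum-normalization, A now outranks B: a rank reversal

ideal = overall({"A": [9, 1], "B": [1, 10], "B2": [1, 10]}, w, mode="ideal")
# Under max-normalization, B still outranks A
```

With these particular numbers, adding an exact copy of B flips the A/B ranking under sum-normalization, while max-normalization leaves the original order intact, which is the kind of behavior the "fix" debated above is meant to achieve.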
We are not dealing with astrology here. AHP is being used and taught by large numbers of well-qualified and well-informed people. If the critics are arguing that the method doesn't work and that people are wasting their time with it, a lot of serious people don't seem to accept their arguments. DCLawyer (talk) 09:55, 16 February 2008 (UTC)
But you need to understand that the critics are also well-qualified and well-informed. In fact, I would argue that the vast majority of AHP proponents neither would nor could ever be published in the same level of journals as the critics cited. And, yes, if the critics are right then a lot of people *are* wasting their time with it. The mathematical flaws the critics refer to were not arrived at by popular vote. It wouldn't be the first time in history that a popular method (even one with some academic proponents) was debunked. And the popularity of it is all the more reason to point out the flaws (by the way, astrology has also had academic support in the past - so they are, in fact, identical situations). ERosa (talk) 13:59, 16 February 2008 (UTC)
By the way, DCLawyer should review the Wikipedia article on the fallacy Argumentum ad populum. I initially assumed a trained lawyer would be wary of this common fallacy but I guess not. ERosa (talk) 14:13, 16 February 2008 (UTC)

"General Consensus" Source not adequate

I just changed a version that claimed there was "...the general consensus that the [AHP] technique is both technically valid and practically useful...". The cited source was merely an online journal for software developers, not actually an academic research journal in the decision sciences. If anyone has a proper source for such a claim, it should be cited instead of this one. However, you will not find any such claim in a peer reviewed journal.[citation needed] People who know the controversy behind this topic wouldn't claim AHP proponents are even the majority among researchers, much less approaching "consensus".[citation needed] I suspect anyone who thinks there is a consensus is not a researcher but probably one of the vendors or customers of AHP, who are mostly unaware of the problems in AHP or, at least, don't seek out the numerous critics.

The cited source was MUCH less reputable than the sources cited for the criticisms.[citation needed] I would recommend, in fact, that if anyone wants to claim there is anything remotely like a "consensus" that AHP is technically valid, they should cite a source as reputable as those cited for the criticisms (Management Science and the Journal of the Operations Research Society). Be advised that in the decision sciences, that's about as high a standard as there is.

I did change another point. The previous version of the criticisms claimed that it was the use of arbitrary scales that resulted in "rank reversal". Actually, that is not the reason rank reversal occurs. It occurs because of the nature of matrices based on pairwise comparisons both within and among the columns. ERosa (talk) 04:54, 16 February 2008 (UTC)
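Since the pairwise-comparison matrices keep coming up in this thread, here is a minimal sketch of how AHP derives priorities from one. This is an editor's illustration under a simplifying assumption (a perfectly consistent matrix), not code from any source cited in the article; Saaty's method takes the principal right eigenvector of the matrix, approximated below by power iteration:

```python
# Minimal sketch of Saaty's eigenvector method: priorities are the
# principal right eigenvector of the pairwise-comparison matrix,
# approximated by power iteration (plain Python, no dependencies).

def priorities(A, iters=50):
    """Normalized principal eigenvector of a positive square matrix A."""
    n = len(A)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        v = [x / total for x in w]
    return v

# A perfectly consistent matrix built from known weights: a_ij = w_i / w_j.
# In this idealized case the eigenvector method recovers the weights.
true_w = [0.6, 0.3, 0.1]
A = [[wi / wj for wj in true_w] for wi in true_w]
est = priorities(A)  # approximately [0.6, 0.3, 0.1]
```

On an inconsistent matrix the eigenvector only approximates the stated judgments, and it is the subsequent normalization of such priorities across alternatives that the comment above points to as the source of rank reversal.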

ERosa: We realize you are new here, but please do not remove properly sourced material from Wikipedia. And please do not corrupt the formatting of the article. Using the Show Preview button will avoid this. If you need help in editing, others will usually be happy to provide it. DCLawyer (talk) 09:22, 16 February 2008 (UTC)
And I realize you are probably new to research in this area (I'm not) but it was not properly sourced. It was an off-hand comment in an online journal article for software developers and the author provided no evidence whatsoever of this claim. In fact, the citation of rank reversal that was in here over a month ago WAS properly sourced and was removed. If you insist on leaving in such a weak source from someone trained in an unrelated topic, then the comment needs to make that clear. You need to think very carefully about whether any standards should apply at all to sources or it would be possible to cite a source for virtually anything. ERosa (talk) 14:04, 16 February 2008 (UTC)
Also, I apologize for the formatting problem. I'll use preview as you suggest. ERosa (talk) 14:24, 16 February 2008 (UTC)
ERosa: Please do not change the clear meaning of properly sourced material. It is not up to editors to interpret or evaluate it. Articles may not contain any new analysis or synthesis of published material that serves to advance a position not clearly advanced by the sources. (See WP:OR). Also please refrain from making comments about other editors. Please do not give them unsolicited advice. Comment on content, not on the contributor. These things can be seen as personal attacks. (See WP:NPA) DCLawyer (talk) 15:08, 16 February 2008 (UTC)
Regarding your "citation needed" comments within this talk page, please do not confuse discussions with actual edits to the article. If citations were needed on each sentence in the talk page, I would certainly pepper yours. Also you apparently see no irony in your statement "Please do not give [authors] unsolicited advice". Have I solicited any from you? I will change it again in a way that attempts to satisfy your need to elevate a quote from a little-known online journal for software developers to equal standing with the peer-reviewed Management Science. ERosa (talk) 16:25, 16 February 2008 (UTC)

Without removing it, I put the "general consensus" claim in its proper context and moved another properly sourced quote next to it to show the contrast. Since my comment simply explains the source, it cannot be deemed as "changing the meaning" of it in any way. Any change to the part I just added should meet the standards as stated by DCLawyer as 1) we should not change the meaning of the properly sourced quote (it was previously presented as if it were an uncontested fact, not simply a comment in an online journal unrelated to the decision sciences) 2) we should not remove or change the meaning of the properly sourced quotes from the critics, 3) there should be no original research and any changes at all that change the meaning, according to any reader, in any way, would be seen as original research (as DCLawyer apparently interprets the rules).

If we were to take DCLawyer's position literally, a "properly sourced" statement is any reference to any article, whether or not the article itself justifies the claim or merely makes it as an offhand comment. Also, the expertise of the author, the subject matter relevance of the journal, the reputation of the journal and even whether or not it is a peer-reviewed journal are all, as DCLawyer indicates, irrelevant, and to remove such a claim would be removing a "properly sourced" claim. In short, we shouldn't worry about the validity of the source at all because to even question the validity of the source is original research (according to DCLawyer). Clearly, this is not actually how the term "original research" or "properly sourced" should be used. But if this argument continues on a fundamental point about the meaning of the Wikipedia rules, I suggest we seek arbitration. ERosa (talk) 17:04, 16 February 2008 (UTC)

Please stop the personal attacks on people who are trying to help you. Discuss the article, not the editors, including yourself.
My discussion of recent changes to the article is this. MathDame made a very good first paragraph of the Criticisms section. It cited a recent source that put the criticisms in context. That was followed by material from another recent source that listed the categories of criticisms. That was followed by several paragraphs of mostly very old material about criticisms in those categories. The first paragraph is not the place for that very specific, very old material. The MathDame version is entirely appropriate. Ohio Mailman (talk) 22:21, 16 February 2008 (UTC)
It's not about personal attacks, it's about the rules. You have now done exactly what DCLawyer accused me of - deleting a "properly sourced" reference. I undid your unwarranted removal of the properly sourced material and your attempt to present the reference as an uncontested fact. Your citation was an offhand comment without any supporting evidence from an online journal for software developers. The criticisms were from established and respected journals in the decision sciences (including the journal where Saaty originally published AHP). Your citation certainly did not put anything in context and to claim so is, according to the same rules DCLawyer uses, original research. The age of the mathematical flaws is irrelevant (a mathematical proof doesn't age). If you have a more recent citation that conclusively disproves the original criticism, then you should state it. But you cannot make any judgement about which is more valid without breaking the rule against OR. Furthermore, the more recent criticism I just added shows how the debate still goes on. Please do not insert POV or OR. Please stick to the Wikipedia rules. ERosa (talk) 23:16, 16 February 2008 (UTC)

First paragraph of Criticisms section

I believe the first paragraph should be the MathDame version: A statement of the state of the world from MSDN, followed by a listing of the areas of criticism from de Steiguer. I believe it should NOT include specific criticisms. They should be aired in the body of the section. Basically, I agree with Ohio Mailman. DCLawyer (talk) 00:56, 17 February 2008 (UTC)

Thank You. I find the reasons to do otherwise are not very persuasive. Ohio Mailman (talk) 01:00, 17 February 2008 (UTC)
Be careful that your judgement to do otherwise - or anything - is not original research by the standards DCLawyer applied to others (but apparently not to you). This same situation came up a couple months ago in an exchange with me and DCLawyer and it now appears this uneven application of the rules is used with ERosa. If specific criticisms should not be aired in the first paragraph, neither should specific pro-AHP statements (especially sourced from an online journal that has nothing to do with the decision sciences). It appears that the claim that there is a "general consensus" is what caused this most recent problem. This claim, as it was originally inserted, was lifted directly out of the source and presented as if it were a fact instead of the opinion (the author provided no data to support this claim) of an author of an article in a journal that covers an unrelated topic. My previous attempt to remove other comments was criticized by DCLawyer as removal of a "properly sourced" claim, which, according to him, can never be removed without committing an infraction against NOR. Pick which rules you want to stick to. If any removal of "properly sourced" claims is a violation of NOR, then you can't remove any specific quote in the first paragraph. If, as you insist, such a claim must be included without any further qualification, then it should not be the first item in the criticisms section. The rest of the article is full of very flowery language about AHP. If you prefer, there is even a subsection at the bottom of this section which is a response to criticisms by proponents where you could insert this claim from the software developers journal. It should not be the first sentence in the criticisms section. I find the reasons to do otherwise to be completely illogical and hypocritical. Hubbardaie (talk) 01:18, 17 February 2008 (UTC)
Actually, now that I've reviewed the history, I'm not sure this "properly sourced" exchange is something I had with DCLawyer but I apparently did have a similar exchange with GoodCop. So I apologize to DCLawyer for accusing him of this. (I'll presume they aren't just socks.) Hubbardaie (talk) 01:44, 17 February 2008 (UTC)
I had a longer response but I lost the data in an update conflict. Actually, yes, the claim about the alleged existence of a supposed general consensus is what much of this debate traces back to. I made one other recent change regarding a new criticism but I haven't seen a debate about that, specifically. I would be interested in seeing what any of you propose as a change. To be clear, I'm not complaining about the inconsistent application of rules (yet) since I haven't seen DCLawyer do anything completely hypocritical. But it would be inconsistent to remove one quote and leave another, especially given the relative standing of the sources as journals of decision sciences (Saaty didn't publish his first AHP paper in MSDN) and especially if the removed quote is a criticism in this section on criticisms. ERosa (talk) 01:36, 17 February 2008 (UTC)
Please stop the personal attacks. Please discuss the article, not the editors or your debates with them.
The article should begin with a statement backed by the MSDN reference. Then it should list the major areas of disagreement, backed by the de Steiguer reference. Then the doors are open to discussing the specific criticisms, as long as they are not given undue weight.
To start the section with the criticisms gives them undue weight, in my opinion. Thousands of serious people use AHP on a regular basis and are satisfied with it. Its use continues to grow. A handful of researchers publish occasional criticisms with strong and possibly valid claims that AHP has serious flaws. The use of AHP continues to grow. By all means describe the criticisms, but give them the weight they deserve as minority views. DCLawyer (talk) 03:20, 17 February 2008 (UTC)
First, I don't know which personal attacks you are talking about. I see none in my prior statement. Please do not make unfounded accusations. Second, the article should NOT start with the statement "backed" by the online journal which has nothing to do with the decision sciences. To start a section titled "Criticisms" with an actual criticism is not undue weight. I repeat, this is the *criticisms* section. The article itself starts with no criticisms at all. Criticisms are already far down the list and you still want to start even that section with more pro-AHP comments. Your judgement about what gives what undue weight would constitute OR by the same standards you attempt to apply to others. Please apply rules consistently.
And, again, the number of people who use it is irrelevant if the criticisms are formal, mathematical proofs or the result of controlled empirical experiments. As I told you before, that is the common fallacy of Argumentum ad populum. If showing that a lot of people used or believed in something was sufficient proof of anything, imagine what else we could "prove". The fact is that the criticisms listed (with one exception) are from the same journals where Saaty first published AHP. The researchers of mathematical proofs and empirical evidence of errors in AHP are fully aware that lots of people use the method and one listed in his article even responds to that. Science is not a vote. Do not resort to your own OR about how many believers a method has to have to be true.
What you are doing is simply making the same response that already exists at the end of the section where responses by proponents are made. Put your MSDN comment there, not at the top of the section. The rest of the entire article is pro-AHP, the published researchers who found specific flaws are given a small section near the end, and even then there is a place for the proponents to respond to the critics at the end of the section. To insist that failing to put more pro-AHP comments at the top of the one section that mentions any criticisms would constitute undue weight is absurd. Please stop making argumentum ad populum fallacies, claims of personal attacks, and violations of POV and OR. ERosa (talk) 03:53, 17 February 2008 (UTC)
One more thing. Please, cite an actual survey of researchers in the decision sciences (not merely customers or vendors of AHP, of course) that proves your presumption that critics are a "minority" view among those who actually study the topic professionally. I'm not surprised there are few critics among the customers of AHP. Then they wouldn't be customers. As one of the critics already stated, the proponents are simply unaware of these serious flaws. Please, do not continue to violate the NOR rule by presuming who is the minority. Please cite sources.ERosa (talk) 04:14, 17 February 2008 (UTC)
A reliable source contains a verifiable statement about consensus and criticisms. That is enough. The statement also rings true[citation needed], given the voluminous literature about AHP and the numerous users, teachers, and students. There do not seem to be reliable sources which claim that the critics represent any sort of consensus[citation needed]. A few individuals have raised criticisms of AHP. These claims have been published in reliable sources. The criticisms should be heard, but cannot be given undue weight. The critics are few in number[citation needed]. Their views, as true (or false) as they might be, are minority views[citation needed] that have failed to penetrate the larger community of users, teachers, and students. Minority views must not be given undue weight. The first paragraph of the Criticisms section should be the MathDame version. DCLawyer (talk) 04:59, 17 February 2008 (UTC)
I have not claimed that one side or the other holds the consensus, but you claimed one side is the majority. You have the burden of proof. Again, the number of students and teachers is irrelevant (something about the fallacy of argumentum ad populum is not getting through to you). Again, it is not "undue weight" to actually have criticisms in the criticisms section or even to *start* the criticisms section with criticisms. It would, in fact, give undue weight to the proponents if you start the criticisms section with this quote from the software developers journal (which, when you read it, offers no evidence of the claim). You apparently assume that the published critics are the only people who have any criticism and not just representative of some larger group. Unless you think the rate of publication is 100% among all critics (which would make them an unusually prolific group), then there are critics who have not published. Since I found yet another criticism today, I suspect there are even more to be found. In other words, the criticisms are also growing. Let me repeat, if you have a documented survey of academic researchers in this field that shows that critics of AHP are a minority, then cite it. Whether you think this quote from an online journal about software development "rings true" is also OR. Please do not keep promoting your own OR. If you think that any response to any of the mathematical proofs of flaws in AHP conclusively disproves the original criticism, then cite it. Otherwise the proof of the flaw still stands, regardless of the number of users. The first paragraph of the criticisms section should start with criticisms. Pro-AHP comments have plenty of other places to sit in this article.
If your point is that all that one needs to start a section is that the claim be "verifiable" (that is, there is a place I can look online where some guy said it, no matter the relevance of the journal to the decision sciences), then any of the criticisms also meet (and exceed) that standard. But I suggest a possible resolution. If you can prove that the MSDN source is actually supposed to be more credible in this field than Management Science or the OR Journal, or can prove that critics are actually the minority among decision science researchers, then we can leave the first sentence as you suggest. To do otherwise is a violation of NOR. Please do not violate NOR. ERosa (talk) 05:23, 17 February 2008 (UTC)
MSDN claims the consensus. That is sufficient. "([S]omething about the fallacy of argumentum ad populum is not getting through to you)" is an uncivil personal attack. Please stop that. DCLawyer (talk) 09:10, 17 February 2008 (UTC)
It may be sufficient (although I would disagree, given how weak the source is) for inclusion elsewhere in the article, not as the first sentence in the criticisms section. Your actions reveal strong POV. Please refrain from making further unfounded changes to this material. Your refusal to consider the argumentum ad populum fallacy of your argument is an objective observation. We were having a debate about a topic and I, rightly, pointed out the flaw in your argument. That is a legitimate part of this process. If you choose to take it as a personal insult that is of no concern to me other than it causes you to violate the rule of assumption of good faith. Please cease doing that. ERosa (talk) 13:52, 17 February 2008 (UTC)
By the way, the sentence you want to include is also within the first sentence of the paragraph I just reverted to. What we are talking about is not even whether the MSDN quote should lead the criticisms section. What we are talking about is how you want to make it appear. The next quote, "Thus the questions about the validity of AHP are far from having been settled. Also, it is clear that such discussion over a long period has not sufficiently penetrated the AHP users' community, with the result that papers are still being published using a faulty method," is also properly sourced but it is actually from a peer-reviewed decision sciences journal. The contrast between the MSDN article's claim of a consensus and the decision science journal's statement that the validity of AHP is far from settled is important. To ignore this gives undue weight to an unsubstantiated quote from a software development journal over a properly researched quote from a decision sciences journal. Given that the sentence is already in the top of the first paragraph, given that the source is from a journal that is not in the decision sciences (I can't even tell if it is peer reviewed), given that the quote from the reputable decision science journal contradicts it, given that the rest of the article has plenty of room for this claim, and given that this is the criticisms section, your insistence that only this one particular form of presenting the MSDN quote is acceptable is completely unreasonable. Please do not make Wikipedia your own playground for your own POV. ERosa (talk) 14:17, 17 February 2008 (UTC)
Be further advised that the countdown for 3RR has already begun. Please do not violate it.ERosa (talk) 14:04, 17 February 2008 (UTC)

It's actually pretty clear that this process is a mainstream technique in the decision sciences. They teach it at Wharton and Cambridge. There are learned societies devoted to it in Poland, Japan, China, and Indonesia, plus who knows where else. It's also pretty clear that the critics are a vocal minority[citation needed] that doesn't get a lot of ink any more, let alone swing a lot of weight with decision makers. It doesn't make a lot of sense to claim that argumentum ad populum is applicable when the argument is that something is used by a lot of people. 216.183.185.87 (talk) 21:21, 17 February 2008 (UTC)

Actually, this is exactly the argumentum ad populum fallacy. It doesn't matter how many use it or teach it. The errors found were mathematical proofs or were measured in controlled experiments. Such errors do not require a "popular vote" to confirm. These criticisms were published in the major decision science journals. Your claim that they are even a minority among actual professional researchers in the decision sciences has no basis in fact. I'm sure the users of AHP outnumber critics, but the vast majority of users are not schooled in the decision sciences and certainly not published on the topic. The critics listed each have many published works in some of the most respected journals in decision science, and they must be given weight in the only section in the article that even mentions criticisms. Again, present any evidence you have that the majority view *among professional researchers in the decision sciences* finds no error with AHP. Once again, nobody is even saying the quote from the software development journal cannot be used. But this article in an online software developers' journal cannot be given weight over several prolific researchers in this field. ERosa (talk) 21:34, 17 February 2008 (UTC)
Three words: argumentum ad verecundiam Cleome (talk) 23:33, 17 February 2008 (UTC)
But don't forget that the NOR rule and verifiability rule mean that the ultimate basis for every article in Wikipedia is appeal to authority. The issue is whether any real authority in this area believes that the number of users overrides a mathematical proof or experimental evidence. ERosa (talk) —Preceding comment was added at 02:30, 18 February 2008 (UTC)
I think this article might now be past the 3RR, and this debate requires some context at this point. Those who still presume a "consensus" in favor of AHP appear to be misunderstanding the nature of this recent flurry of edits. From what I've seen so far, the different versions offered both have this quote from MSDN. The difference appears to be how the first quote is framed. In the AHP-has-flaws camp (which includes me and most academics I know), the version presents this quote as a view of proponents - which is accurate - and then contrasts it with a quote from a very respected researcher in a very respected journal. In the AHP-is-so-widely-used-it-must-be-perfect camp, the attempt is to present the issue of flaws in AHP as "settled" in favor of AHP. But it's useful to note that I haven't seen anyone here argue that AHP is absolutely useless (although a couple of the critics find pretty serious flaws). It's really only an issue of whether AHP has flaws or not. Some of these flaws, like the ones from Dyer, Schenkerman and Perez, are a matter of formal, mathematical proofs. The one from Poyhonen is shown experimentally. The flaws are real but may not be debilitating. Note that Saaty himself denied that rank reversal was a problem for years before he decided it needed fixing. If you want a REAL consensus, I think the consensus is that AHP has flaws but still might be useful (although the degree of usefulness does vary according to who you ask). To deny that it has any flaws flies in the face of the mathematical proofs uncovered by these experts.
By the way, I address some problems of AHP in my book "How to Measure Anything", but I recuse myself from adding references to my own book due to COI. It looks like my criticisms might overlap some of those listed, but other criticisms I mention are not yet addressed. One is that AHP seems to make people think they can analyze important decisions without empirical real-world measurements. Another problem is that in practice AHP users are asked to compare factors they can't possibly fully understand. I wrote about one example where a group in an AHP workshop was asked whether they prefer "strategic alignment" or "reduced risk". The users don't really know how much of either of these they are comparing (i.e. how much risk reduction vs. how much alignment) nor even what the terms mean. Unlike quantitative economic/statistical/actuarial modeling methods, AHP users don't appear to be encouraged to clarify those points before they choose preferences. If I have any readers reading this, perhaps someone could consider making such a reference. Thanks in advance. Hubbardaie (talk) 03:04, 18 February 2008 (UTC)
Hubbardaie: 1) This is not your blog. 2) Are you and Erosa the same person? 3) Do you recognize THIS material? DCLawyer (talk) 03:39, 18 February 2008 (UTC)
First, I didn't intend it to be my blog. It's a talk page about a particular article, and I was, like a good Wikipedian, acknowledging my COI but explaining why the reference may be relevant. Second, I don't know ERosa but, since you brought it up, are you the same person as GoodCop? The discussions have some curious similarities, plus your page says you are a media lawyer in DC and GoodCop says he does part-time security for media companies. Not conclusive, I admit, but curious. Anyway, I'll assume good faith as the guidelines suggest. Third, I'm aware of the reference and it is straight out of my book. But although Tang clearly has his POV, I only made those comments after citing some of the more respected people in decision science. I'm not sure which parts Tang takes exception to. AHP is, in fact, a subjective weighted scoring method. Even pro-AHP people acknowledged that rank reversal was something that needed to be corrected (that's how the Ideal Process Mode got developed). And the example of trading "strategic alignment" vs. "risk reduction" was something I personally witnessed in a workshop run by a major AHP tool provider. Anyway, I'll defer more detailed responses to Tang (including responses from his peers who agree with me) for my blog. Hubbardaie (talk) 04:15, 18 February 2008 (UTC)
Where is your blog? DCLawyer (talk) 05:47, 18 February 2008 (UTC)
I never started one, but apparently I need to. I only felt obliged to respond since you brought it up as part of this discussion about AHP. But thanks for adding my book page to Wikipedia. I might have done that myself, but THAT would have been COI. If you mean that merely arguing in a way consistent with how I was previously published is itself COI, I don't believe it is. If that were the case, then only unpublished people could contribute to WP, or people could contribute only to areas other than those in which they were published. Sounds like lowering the bar a bit. Frankly, it seems like we are really talking about something pretty minor, since both versions still put the MSDN quote in the first line. I've made my point. Hubbardaie (talk) 06:01, 18 February 2008 (UTC)

I lost data from two edit conflicts, so now I've learned to save my work before I submit. I don't think either side has said outright that AHP is useless. This is just about whether it has flaws. In that regard, I don't think many would disagree that it has flaws. At the same time, it is actually used in a lot of places. My personal opinion is that the main flaw is that it is sometimes used where more specialized mathematical methods are already proven to work. And it seems to encourage fact-finding through group meetings as opposed to real-world measurements. But that may just be due to how I see it applied. Our library DOES appear to have Hubbardaie's book, so I'll check it out. ChicagoEcon (talk) 04:02, 18 February 2008 (UTC)

ChicagoEcon: It is interesting to see that you place more reliance on real-world measurements than you do on group meetings. Those real-world measurements you prefer to rely on were not a standard until relatively recently when someone or some group decided the length of the king's foot was going to be the standard for linear measure and after that it became a real-world measurement. We still have to interpret what a foot means even after it has been made a standard. It depends on the situation. If it is a distance to be travelled, that is one thing; if it is an error in cutting a plank for a table, it is another. Interpreting real-world measurements is still a task for human judgment. I think what really counts is what the group decides, for whatever reason, to push through. Group dynamics probably always trumps logic, facts and data. Consider the current election we are going through.MathDame (talk) 03:41, 19 February 2008 (UTC)
I'm not sure if you are saying that real-world observations using standard measurements are less valid because there were not always standard measurements. If so, that wouldn't make sense, but please clarify. But the topic wasn't really about measurement standards anyway; it was about known mathematical solutions to particular problems. If you have a function and want to find its maximum, there is a procedure for that. Sometimes I see weighted scores applied to problems that already have proven optimal solutions. I'm just learning about AHP, but it does seem to just take subjective statements and turn them into a scale. No matter what math it applies after that, if the first step is flawed, so is the output. Garbage in, garbage out. Perhaps you are using the term "interpretation" broadly, but I sure hope the engineers who designed my building didn't broadly interpret the units for the flexural strength of steel beams. And I hope that the nuclear engineers in the power plant downstate didn't broadly interpret the units for the critical mass of uranium, the neutron absorption ratios of reactor materials, or the tensile strength of a containment vessel. We need real numbers used in real mathematical solutions. The problem is that in many economic decisions, managers just assume that their problem is not like the ones I just mentioned, so they don't even look for a scientific approach. Just think about how your computer optimizes memory, processing, or disk storage. I bet it's not a weighted score. Anyway, that's my "bias", but thanks for the conversation. DCstats (talk) 14:37, 22 February 2008 (UTC)
Erosa: Good job on removing the incorrect reason given for rank reversal. I am not a pro-AHP person. I am just here to use my modest skills in improving this encyclopedia article on AHP. The other editors mostly seem to be doing the same thing. MathDame (talk) 19:02, 18 February 2008 (UTC)

Informal statistics

Here is something from the Helsinki School of Economics. Unpublished as far as I can tell, but from a pretty reputable place:

"Since Saaty developed the AHP, it has become a very popular decision system. From the ISI Web of Knowledge database (02/06/08) we found 3,191 articles (since 1986) using key words: AHP OR Analytic* Hierarch* Process. ("*" refers to a wild character.) The total number of citations to those articles was 37,071. Correspondingly, the key words (AHP OR Analytic* Hierarch* Process) AND Application* found 684 articles which were cited 8,278 times, indicating that there are also quite many published and cited applications. The popularity of AHP has developed very rapidly. In 2007, there were 5,125 citations to the AHP articles mentioned in the ISI Web of Knowledge, whereas before 1992 there were less than 100 citations/year."

There must be a lot of good, recent citations to use as sources for this article. Skyrocket654 (talk) 18:43, 19 February 2008 (UTC)

AHP Criticisms in Context

I have a little bit of knowledge about AHP and related subjects. Here is what I think is the actual situation. I put it here only for informational purposes. I have no plans to add any of it to the main article.

  • The Analytic Hierarchy Process was developed almost forty years ago and has been expanded and improved since then.
  • Today it is in widespread and rapidly-growing use. The AHP user community is large and worldwide. It includes many high-level decision makers in large organizations, using AHP to help them with problems that have important organizational consequences.
  • There is also widespread and rapidly-growing academic study of the AHP and its applications. A recent search of just one academic database identified over 3,000 papers on the subject since 1986; they had been referenced over 35,000 times in other papers in the database, including over 5,000 references in the most recent year.
  • From the very early days, academics from several fields have expressed criticisms of the method, with conclusions ranging from “AHP should be improved” to “AHP doesn’t work.” Other academics have disputed their criticisms, with a similar range of opposite conclusions. In the early 1990s, there were vigorous ongoing debates about this in several academic journals. The debate continues today, but at a greatly reduced level.
  • Much or all of the debate is theoretical, with one side using complex mathematics to show that AHP can give incorrect results under certain conditions, and the other side either using complex mathematics to show that the first side is wrong, or claiming that the first side misunderstands AHP.
  • In spite of the theoretical concerns, there are no known examples of important real-world decisions going astray due to flaws in the AHP.

The "no known examples" claim just means that none are known to me. There may be some for all I know. Skyrocket654 (talk) 00:54, 23 February 2008 (UTC)

I suspect that when you say "theoretical" you might mean "without practical consequence". Actually, the authors of these proofs show the contrary. The researchers Dyer, Holder, Perez, and others are quite accomplished mathematicians and fully understand the relatively simple matrix algebra AHP uses. And don't forget that even though the AHP community initially argued that rank reversal should happen, they eventually agreed this non-problem should be "fixed" with the Ideal Process Mode. In regards to the claim that no known examples have led decisions astray, that would be true for a wide variety of unproven decision making methods, since outcomes are not actually tracked against controls. This is the primary criticism I've always had with AHP. There is no evidence to show that decisions improve by comparing test and control groups against decisions with measurable outcomes. This means that any perception of a benefit could simply be a "placebo" effect. I've read a large number of case studies written where AHP was applied to some problem. But if you read them carefully, you never see the effectiveness of AHP challenged. The benefit of AHP is simply presumed, and you will also find that the researchers in most cases are specialists in the particular field in which AHP is applied, not specialists in the decision sciences. There is no attempt to gauge the results against a control group and show a statistically significant, measured improvement. The case studies simply show what AHP produced. It would be great if someone could find a single case study that met the standards of evidence that the decision was better (i.e. not just anecdotal testimonials). In regards to the level of the debate, I believe it is because the non-AHP researchers in the decision sciences considered the issue settled in their favor quite a while ago. And the study from Perez (which I added to this article) is quite recent.
If you track the early history of medical methods or scientific theories that are now out of favor, you will find it is not much different from AHP. They were criticized for some time even though they were popular, and even though in some cases the opposite was proven. And those same methods were taught in universities, had prestigious proponents, and were used by many people who thought they worked. That's why I never take popular opinion over a mathematical flaw in a method. ERosa (talk) 15:28, 23 February 2008 (UTC)
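For readers following the rank reversal thread above, here is a minimal sketch of the mechanism being debated. The numbers are hypothetical, chosen only to illustrate it: in AHP's original "distributive" mode, each criterion's scores are normalized to sum to 1, so adding a new alternative (here, an exact copy of B) can reverse the ranking of two existing alternatives.

```python
def distributive_priorities(scores, weights):
    """Overall priority of each alternative under distributive-mode AHP.

    scores[k][i] is the ratio-scale score of alternative i on criterion k;
    weights[k] is the weight of criterion k (weights sum to 1).
    """
    totals = [0.0] * len(scores[0])
    for w, crit in zip(weights, scores):
        s = sum(crit)
        for i, v in enumerate(crit):
            totals[i] += w * v / s  # normalize within each criterion
    return totals

weights = [0.5, 0.5]  # two equally weighted criteria

# Two alternatives, A and B
two = [[9, 1],    # criterion 1: A = 9, B = 1
       [1, 10]]   # criterion 2: A = 1, B = 10
a, b = distributive_priorities(two, weights)
print(b > a)      # True: B ranks above A

# Add alternative C, an exact copy of B
three = [[9, 1, 1],
         [1, 10, 10]]
a, b, c = distributive_priorities(three, weights)
print(a > b)      # True: the A/B ranking has reversed
```

This is only the arithmetic of the normalization step, not a reproduction of any particular published counterexample (Belton and Gear's is the best known), and the ideal mode mentioned above was introduced precisely to avoid this behavior.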

Pairwise Comparisons

I just added a subsection on this. There is more to come about pairwise comparisons, as soon as I get it written. It looks like I forgot to sign in on this public computer before I made the edit. Believe me, it was me who made it. Lou Sander (talk) 20:56, 20 February 2008 (UTC)

I just expanded this section. Still to come are comparisons of the alternatives. Give me a few days, please. Does anyone know of an online pairwise comparison "engine" other than the one from the Canadian Conservation Institute? Theirs is easy to use, but not as accurate as desired. Lou Sander (talk) 14:02, 7 March 2008 (UTC)
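The eigenvector calculation behind such an engine is small enough to sketch. Below is a minimal illustration in plain Python (no libraries): given a reciprocal pairwise comparison matrix on Saaty's 1-9 scale, power iteration approximates the principal eigenvector (the priorities), and Saaty's consistency ratio flags inconsistent judgments. The matrix values are made up for illustration; this is a sketch of the standard math, not a substitute for any particular tool.

```python
def ahp_priorities(m, iters=100):
    """Priorities and consistency ratio for a reciprocal comparison matrix m."""
    n = len(m)
    v = [1.0 / n] * n
    for _ in range(iters):  # power iteration toward the principal eigenvector
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    mv = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(mv[i] / v[i] for i in range(n)) / n   # estimate of lambda_max
    ci = (lam - n) / (n - 1)                        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]             # Saaty's random index
    return v, ci / ri                               # priorities, consistency ratio

# Hypothetical judgments for three criteria (1-9 scale, reciprocals below the diagonal)
m = [[1,     3,     5],
     [1 / 3, 1,     2],
     [1 / 5, 1 / 2, 1]]
pri, cr = ahp_priorities(m)
print(pri[0] > pri[1] > pri[2])  # True: priorities preserve the judged order
print(cr < 0.1)                  # True: under Saaty's usual 0.10 threshold
```

The priorities sum to 1, and a consistency ratio above about 0.10 is conventionally taken as a signal to revisit the judgments.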

Included a new reference

I have included an additional reference in the criticisms section. Skyrocket654 (talk) 15:34, 29 February 2008 (UTC)

Comparing alternatives and Making the decision

I have posted extensive new material on comparing alternatives and making the decision. These additions complete the example of the "Jones family car buying decision." Lou Sander (talk) 03:12, 31 March 2008 (UTC)