Talk:Artificial intelligence/Archive 4

robot != ai

What the hell is a picture of Asimo doing on the front page? It is not even an example of weak AI; all of its moves are prerecorded. Although I admit that it can walk up stairs and do other cool things, it is simply a machine and is about as stupid as any other. If some weirdo really wants to use a picture of a robot (which has nothing to do with AI anyway besides stereotypes), at least use a picture of a developmental robot: http://en.wikipedia.org/wiki/Developmental_robotics

Cool it. Although I agree that Asimo's prerecorded performances don't cut it, please also refer to [1] and [2]. What counts as intelligence is, unfortunately, still a matter of opinion. However, feel free to change the picture. --moxon 14:58, 10 April 2007 (UTC) P.S. all forms of AI are simply machines
I agree with the last comment. Any form of AI is a machine. In addition, I suggest you take a look at ASIMO, as you appear to underappreciate his capabilities. Even though his movements are prerecorded to some extent, he balances and evaluates the current situation on his own. Similarly, he does facial, posture, gesture, general-environment, and object recognition. Oh, and he can distinguish sounds and their direction. ---hthth 12:24, 28 April 2007 (UTC)
I think the picture of Asimo is fine. However, it is unfortunate that every picture in the article is of a robot, since AI and robotics are distinct fields. The fields of AI and robotics share many things in common, but neither is a superset of the other. We don't want to give people the impression that AI is only useful for the development of robots. Therefore, I replaced the picture of Asimo with the picture shown on the Deep Blue page. I figure it's a good picture to show at the top of the article, since it has both historic significance (many of the first AI programs were chess or checkers players) and current significance (since the goal of beating the human world champion was achieved fairly recently, and a similar goal is used for the RoboCup competition.) However, I'm open to a different picture if someone has something better to suggest. Colin M. 13:57, 13 May 2007 (UTC)
Good call and argument Colin, I second your suggestion. ---hthth 15:37, 15 May 2007 (UTC)
But isn't this like putting a picture of Pluto at the top of the Planet article? --moxon 12:54, 16 May 2007 (UTC)

Asimo should absolutely not be in the article! Asimo is a very good example of a robot without AI. Ran4 (talk) 21:09, 25 July 2008 (UTC)

I think that robotics does play a role in AI, since researchers like Hans Moravec believe that the sensorimotor skills studied in robotics are essential to other intelligent activities. Sensorimotor skills are an aspect of intelligence, just like learning or talking, for Moravec and others (such as Rolf Pfeifer, Rodney Brooks, etc.). ---- CharlesGillingham (talk) 22:55, 17 August 2008 (UTC)

Image: Kasparov vs Deep Blue

I see the image at the top of the article, depicting Garry Kasparov vs Deep Blue, is flagged for speedy deletion. I hope it stays; despite the ups and downs, this match was historic in the (short) history of AI. For many years many people thought that a computer would never be able to play chess effectively, and the struggle was long, starting at the very beginning of AI with von Neumann. Later, Ken Thompson (builder of Belle) said that the way chess computing developed was a little disappointing in terms of AI, but with all due respect I disagree: cognitive neuroscience shows us massively parallel computation of jillions of individually simplistic (pattern-matching) heuristics, similar to Belle computing 100,000 positions per second with "class C" heuristics. This is getting us somewhere :-) Pete St.John 18:14, 5 September 2007 (UTC)

Permission obtained. Many thanks to Murray Campbell at IBM. Pgr94 (talk) 16:55, 29 February 2008 (UTC)

Todo

  1. The article is a bit too long. (My printer puts the text at just under 10 pages, not counting external links, footnotes and references). I'd rather cut more technical stuff, since this is of less interest to the general reader.
  2. Check for errors, especially on technical subjects.
  3.   Done Unfinished sections: learning, perception, robotics.
  4.   Done Add sources to philosophy of AI section. Borrow them from philosophy of AI.
  5.   Done Add Galatea, Talos and Frankenstein to a new AI in fiction and history section. Use McCorduck as a source. Move it up to the top, so there's a historical progression through the section.
  6.   Done Write AI in the future. Mention Moore's Law. Don't mention anything from science fiction. Kurzweil is a good source, references are already in place.
  7. Look into ways to make the applications section comprehensive, rather than scattershot. I think we need an article applications of AI technology that contains a complete, huge list of all AI's successes. The section in this article should just highlight the various categories.

Little improvements in comprehensiveness (skipping these wouldn't have any effect on WP:GACR or WP:FACR)

  1. Could use a list of architectures in the section on bringing approaches together, such as subsumption architecture, three tiered, etc.
  2. Which learning algorithms use search? Out of my depth here.
  3. For completeness, it should have a tiny section on symbolic learning methods, such as explanation based learning, relevance based learning, inductive logic programming, case based reasoning.
  4. Could use a tiny section on other knowledge representation tools besides logic, like semantic nets, frames, etc.
  5. Similarly for tools used in robotics, maybe replacing the control theory section.

Updated --- CharlesGillingham (talk) 03:02, 29 November 2007 (UTC)
Updated --- CharlesGillingham (talk) 23:59, 29 November 2007 (UTC)
Updated --- CharlesGillingham (talk) 13:39, 6 December 2007 (UTC)
Updated --- CharlesGillingham (talk) 02:53, 22 February 2008 (UTC)

The recent changes by CharlesGillingham (talk) are a great improvement. The previous version was a disgrace.

However, I think it is wrong to characterise production systems as Horn clauses, although I admit that you might get such an impression from the Russell-Norvig AI book. But even Russell and Norvig limit production systems to forward reasoning. The production system article is much more accurate in this respect. Two important characteristics of production systems, which distinguish them from logic, are their use of "actions" (often expressed in the imperative mood) instead of "conclusions" and conflict resolution, which can eliminate logical consequences of the "conditions" part of a rule. Arguably, production systems were a precursor of the intelligent agent movement in AI, because production rules can be used to sense changes in the environment and perform actions to change the environment.

One other, small comment: The example "If shiny then diamond" "If shiny then pick up" shows both the relationship and the differences between logic and production rules. An even more interesting example is "If shiny then diamond" "If diamond then pick up", from which the original example could be derived if only production rules could truly be expressed in logical form. Robert Kowalski (talk) 13:25, 8 December 2007 (UTC)
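As an aside for readers of this archive: the distinction Kowalski draws (imperative actions plus conflict resolution, rather than logical conclusions) can be sketched in a few lines of code. This is an illustrative toy, not drawn from the article or its sources; the rule format and the trivial first-match conflict-resolution strategy are my own simplifications.

```python
# Illustrative sketch of a minimal forward-chaining production system for
# Kowalski's example. A rule pairs a condition with either a new fact
# ("assert") or an ACTION ("do"), and conflict resolution here simply
# fires the first matching rule.

def run_production_system(facts, rules):
    """Repeatedly fire the first applicable rule; return actions taken."""
    actions = []
    changed = True
    while changed:
        changed = False
        for condition, kind, result in rules:
            if condition in facts:
                if kind == "assert" and result not in facts:
                    facts.add(result)       # behaves like a logical implication
                    changed = True
                    break
                if kind == "do" and result not in actions:
                    actions.append(result)  # an imperative action, not a conclusion
                    changed = True
                    break
    return actions

rules = [
    ("shiny", "assert", "diamond"),  # "If shiny then diamond"
    ("diamond", "do", "pick up"),    # "If diamond then pick up"
]
print(run_production_system({"shiny"}, rules))  # ['pick up']
```

Here "If shiny then diamond" resembles a logical implication that adds a fact, while "If diamond then pick up" produces an action, which is exactly the difference in kind that the comment above points out.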

Thank you! I've made the changes you recommended. As I mention in the Todo list above, checking for errors and misunderstandings is a top priority and I appreciate the help. ---- CharlesGillingham (talk) 03:10, 9 December 2007 (UTC)


Peer Review on pushing AI to GA status

I'd like to see if we can get a PR on this article so we could possibly push it to GA class. It is a very nice article at the moment with extensive inline citations and references. - Jameson L. Tai talkcontribs 16:57, 19 February 2008 (UTC)

  Support. The only real criticism I can offer is that it's a little long. But other than that, I've read the article twice and most of the references are great. It's definitely higher than B-class. I support and you can notify me if you need that support.--Sparkygravity (talk) 14:44, 21 February 2008 (UTC)
I am not a PR or FAC regular but coming across this section, here's a few pre-PR notes:
  • Quotation in first sentence (and any subsequent) must have an inline citation directly after the quote (see Wikipedia:Citing sources#When quoting someone;  Done CharlesGillingham (talk) 04:24, 22 February 2008 (UTC)
  • Ontology randomly capitalized in lead   Done CharlesGillingham (talk) 04:24, 22 February 2008 (UTC)
  • Lead does not seem to broadly summarize article as a whole and is a bit short given the length of the article;  Done (Finally) ---- CharlesGillingham (talk) 01:52, 28 October 2008 (UTC)
  • "Craftsman from ancient times to the present have created humanoid robots" unclear; plain statement conveys the meaning that humanoid robots were actually made, which is not what is meant;   Done. They were actually made. Tried to make it clear that we're not talking about intelligent robots. CharlesGillingham (talk) 04:24, 22 February 2008 (UTC)
  • Image:P11 kasparov breakout.jpg if it can be properly used in this article under fair use, has no required fair use rationale for inclusion;
Does anyone else know how to do this (so it sticks)?   Done. Permission granted by IBM. Pgr94 (talk) 20:03, 2 March 2008 (UTC)
  • The article is quite listy--there are many sections where instead of prose neatly summarizing the section that's linked with {{main}}, there's bulleted points. I see this as a constant source of criticism at FAC.--Fuhghettaboutit (talk) 03:24, 22 February 2008 (UTC)
Knocked out two or three of the lists yesterday. Weaving these into shorter paragraphs is also a good way to address the length issue. ---- CharlesGillingham (talk) 19:44, 22 February 2008 (UTC)
  Done I knocked out all of the lists that bothered me. There are several remaining (types of logic, AI languages, philosophical positions, hard problems of knowledge representation, social intelligence, etc.) but these don't seem like a big problem to me. ---- CharlesGillingham (talk) 00:18, 28 October 2008 (UTC)

Done All these issues have been resolved. ---- CharlesGillingham (talk) 01:52, 28 October 2008 (UTC)


I think Image:P11 kasparov breakout.jpg is going to have to be replaced, if the article is going to reach GA status.--Sparkygravity (talk) 02:19, 28 February 2008 (UTC)
OK, so I removed the Kasparov image here http://en.wikipedia.org/w/index.php?title=Artificial_intelligence&diff=prev&oldid=194573913 - personally, I hate my change. I really would much rather have the Kasparov image. Too many people think that AI is represented by robots, which is completely untrue... Most likely a strong AI will be a box, and look like a box... I'd rather show HAL from 2001: A Space Odyssey or WOPR from WarGames, since they're better examples, but of course I can't do that, because they are fictional examples... ASIMO has limited AI, but it's only in spatial reasoning, very shallow... I think we should work on the Deep Blue-Kasparov image, because I think it's a better intro picture. Once we get the copyright issues taken care of, I feel we should revert my change.--Sparkygravity (talk) 11:24, 28 February 2008 (UTC)

Article getting too big

I think the article is starting to become too long and it's perhaps time to discuss what should be chopped (or rather moved into other articles). It's obviously a big subject, so there is a lot to cover. Pgr94 (talk) 03:11, 22 February 2008 (UTC) Suggestion:

  • shrink Applications of artificial intelligence to a paragraph and start a new article with that name.
  • shrink Competitions and prizes to a paragraph and start a new article Competitions and prizes in artificial intelligence
I agree that the article is too long, and, yes, it's my fault. I'm open to any suggestion. As I said in the "todo" list above, I think I would prefer to cut really technical material, since this is of less interest to the general reader. Your suggestion seems sensible, as a start. ---- CharlesGillingham (talk) 03:41, 22 February 2008 (UTC)
Well, I knocked out a few lines here and there and removed several bullet lists. There's much more to do. ---- CharlesGillingham (talk) 07:16, 22 February 2008 (UTC)
It wasn't criticism, it's just the nature of the field - it's so wide-ranging and at the same time fragmented and dissociated. I like to think that we're waiting for the Einstein of AI to provide us with the unifying theory.
I have noticed that most sections lead to a more detailed article, here are a few sections that don't. These are candidates for creating separate articles:
  • Traditional symbolic AI
  • Approaches to AI
  • Sub-symbolic AI
  • Intelligent agent paradigm
  • Evaluating artificial intelligence
  • Competitions and prizes
  • Applications of artificial intelligence Have I chopped too much here?
Pgr94 (talk) 11:31, 22 February 2008 (UTC)
Great work.
But 'Approaches to AI' should include 'Traditional symbolic AI', 'Sub-symbolic AI', and 'Intelligent agent paradigm', so I don't think we need those separate articles yet. I think 'Approaches to AI' would make a great article and also help this page. I don't know how to do it yet, so I hope somebody else will do it.
'Evaluating artificial intelligence' is too small to be a new article, and I think it looks good here. Raffethefirst (talk) 11:49, 22 February 2008 (UTC)
Also, it should not only contain links to other articles, because it might have the functionality of a portal... Raffethefirst (talk) 12:09, 22 February 2008 (UTC)
Thanks. Yes I agree with your suggestion on an Approaches to artificial intelligence article. Pgr94 (talk) 14:04, 22 February 2008 (UTC)
I think a lot of the length is due to just the amount of information we have on the research being done, and that has been done, in AI. If we were to create a page on AI research (named either AI research or Research done on AI, etc.), I really think we could cut the article in half. The only problem I see is how to summarize the topic of AI research correctly, efficiently, and in a way that makes it easy to read.--Sparkygravity (talk) 18:23, 22 February 2008 (UTC)

Weird newlines in the article

Does anyone understand why all the footnotes in the article have multiple newlines in them? It looks like they were introduced by some kind of bot. Is there any reason not to fix them? ---- CharlesGillingham (talk) 04:03, 22 February 2008 (UTC)

Are you talking about the Nilsson 1998, Russell & Norvig 2003 and all that?--Sparkygravity (talk) 18:17, 22 February 2008 (UTC)
Yes. The footnotes throughout the article have been "spread out" vertically for some reason. I've put them back together by hand in the sections I edited yesterday. ---- CharlesGillingham (talk) 18:50, 22 February 2008 (UTC)
Since no one seems to care about how the footnotes are formatted, I've restored them to the way they were originally. ---- CharlesGillingham (talk) 21:13, 2 August 2008 (UTC)

Readability

http://en.wikipedia.org/w/index.php?title=Artificial_intelligence&diff=next&oldid=193220693 - here I notice that you mention that the technical details are pretty boring. I was thinking, since AI is such a broad topic, it might be good to set a target audience age. That way there could be a standard for technical details, and for how they could be sorted and assessed for placement on the page. For instance, if the subject is too complex in detail, the article would give a basic lay-audience intro and then direct the user to a sub-page for more details.
(On a side note, Colorblind is my fav. Counting Crows song)--Sparkygravity (talk) 17:28, 22 February 2008 (UTC)

The target audience is the general, educated reader: not someone who is already familiar with computer science. It should also be useful to computer scientists coming from other fields. The technical subjects have to be mentioned (i.e., each of the tools has to be mentioned) to properly define the field. I'm in the process of removing the names of specific sub-sub-problems or algorithms which can't be properly explained in the room available ---- CharlesGillingham (talk) 18:42, 22 February 2008 (UTC)
Well, I have to say, for an educated reader familiar with the way Wikipedia works, the article is fine. But for a general reader looking for a brief summary, even the most complete sections can leave the reader with more questions than answers. For instance, many of the summarizing descriptions rely completely on the reader's patience and willingness to follow the connecting wikilinks.
Examples where I think the average reader might have to follow wikilinks to understand what was being said:
  1. "In the middle of the 20th century, a handful of scientists began a new approach to building intelligent machines, based on recent discoveries in neurology" (History of AI research). The word neurology is probably understood by someone with a GED, but not by a 12 year old. Suggested fix neurology|brain, so it reads "recent discoveries about the brain".
"Brain" is inaccurate. The discoveries being alluded to are about how individual neurons work, not brains: specifically, that they fire in all-or-nothing pulses, which seemed (to Walter Pitts and Warren McCulloch) similar to digital circuitry, and inspired them to develop the first theory of neural nets. I think "neurology" was fine. As you say, even a high-school graduate probably knows what neurology is. One could argue that this whole paragraph should be cut, since describing their interests and inspirations so briefly can't do them justice, and since these themes are really precursors to AI research, rather than AI research proper. The paragraph is just to give the impression that there was this milieu of (largely unspecified) ideas that gave birth to modern AI. Cut it? Revert it? Which do you think?---- CharlesGillingham (talk) 04:10, 23 February 2008 (UTC)
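For readers following this thread: the all-or-nothing firing that Pitts and McCulloch compared to digital circuitry can be illustrated with a threshold unit. This is the standard textbook construction; the code itself is just my sketch, not from any source cited here.

```python
# Sketch of a McCulloch-Pitts threshold unit: the neuron fires (outputs 1)
# in an all-or-nothing pulse when its weighted input reaches a threshold,
# which is why it can mimic digital circuitry.

def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the unit computes logical AND;
# lowering the threshold to 1 would turn the same unit into OR.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron((a, b), (1, 1), 2))
```

The point of the comparison above is visible here: the output is never "partially on", so networks of such units behave like logic gates.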
  2. "By the middle 60s their research was heavily funded by DARPA." (History of AI research) not a huge issue. But it could read DARPA|U.S. Department of Defense "By the middle 60s their research was heavily funded by the U.S. Department of Defense."
This reads fine your way. ---- 04:10, 23 February 2008 (UTC)
  3. "... no silver bullet..." (Traditional symbolic AI#"Scruffy" symbolic AI) relies on the user knowing what the slang "silver bullet" means. Suggested fix silver bullet|easy answer.
What English speaker hasn't heard of a silver bullet? ---- CharlesGillingham (talk) 04:10, 23 February 2008 (UTC)
English readers aren't the only people reading en.wikipedia you know. Idioms kick butt if you've grown up with them so they're a part of your language, but if you're a foreigner it usually doesn't matter how good your English is, since idioms simply aren't taught in schools. /85.228.39.162 (talk) 08:23, 20 August 2008 (UTC)
  4. ""Naive" search algorithms" (Tools of AI#Search) What is a naive search algorithm? Suggested fix rewrite sentence.
  5. "Heuristic or "informed" search. The naive algorithms quickly..." (Tools of AI#Search) What percentage of the population knows what heuristic means? Answer: 4% (and really, you know that half of those people are pompous, insufferable know-it-alls). Suggested fix rewrite sentence (which leads me to say...)
Rather than continue, I've decided to just make the changes and then you and User:Pgr94, or whoever can revert what you don't like. But again, this seems to be a common problem throughout the entire article.--Sparkygravity (talk) 20:25, 22 February 2008 (UTC)
Rewriting "naive" algorithms to be easier to read. I used a link to Computation time but DTIME may be better, please review--Sparkygravity (talk) 21:07, 22 February 2008 (UTC)
The changes I've seen seem fine. I'm not familiar with the term "naive" search though; is it just a synonym for exhaustive search? If so, I confess to having a slight preference for the latter as it's what I've come across most in the literature. Pgr94 (talk) 01:20, 23 February 2008 (UTC)
Pfft, I have no idea either. I think the difference is the nature of the operation: one works with a search space, the other tends to be a recursive operation. But I don't really know.--Sparkygravity (talk) 02:42, 23 February 2008 (UTC)
Yeah, "naive" was a poor choice. Nilsson and Russell/Norvig call them "uninformed" searches, a designation I find, well, uninformative. This section should probably be rewritten without the bullets (or the titles naive, heuristic, local, etc.). Two paragraphs: (1) Explain what combinatorial explosion is, define what a heuristic is, and how heuristics help solve the problem (2) explain what an optimization search is (as opposed to listing types of opt. searches.) Note that genetic algorithms are, technically, a form of optimization search, explain what genetic algorithms are.
By the way, in the rewrite I'm looking at, method (computer science) is used inaccurately. ---- CharlesGillingham (talk) 04:10, 23 February 2008 (UTC)
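A sketch of the uninformed-versus-informed distinction discussed above, on a toy puzzle of my own invention (reach a target number from 0 using +1 or *2 moves). The puzzle, the heuristic, and the expansion counts are illustrative only, not from the article or its sources.

```python
# Best-first search over a toy puzzle. With a zero heuristic the search is
# uninformed and suffers the combinatorial explosion; with a distance
# heuristic ("informed" search) most of the space is never expanded.

from heapq import heappush, heappop

def search(goal, heuristic):
    frontier = [(heuristic(0, goal), 0, [0])]  # (priority, state, path)
    seen = set()
    expanded = 0
    while frontier:
        _, state, path = heappop(frontier)
        if state == goal:
            return path, expanded
        if state in seen or state > goal:
            continue
        seen.add(state)
        expanded += 1
        for nxt in (state + 1, state * 2):     # the two legal moves
            heappush(frontier, (heuristic(nxt, goal), nxt, path + [nxt]))
    return None, expanded

uninformed = lambda s, g: 0          # every state looks equally promising
informed = lambda s, g: abs(g - s)   # prefer states closer to the goal

path1, n1 = search(100, uninformed)
path2, n2 = search(100, informed)
print(n1, n2)  # the informed search expands far fewer states
```

This is the point made above about heuristics "eliminating choices": both runs find a path to 100, but the heuristic keeps most candidate states sitting unexpanded in the frontier.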
hmm, ok, I'll look at method again. I actually feel that "...heuristic methods to eliminate choices..." is currently a fine description of what the heuristic method does for computer searches. It eliminates choices and reduces possibilities. So I actually feel this sentence is quite a bit clearer than it was, but if you want to take a shot at it, cool.
I like the bullet style but I understand it doesn't really meet WP:STYLE standards, and understand it probably needs to be rewritten so that it isn't an WP:Embedded list, which can cause problems later. I probably won't do it, just because I don't think I'm the best person to do the bullet-to-prose rewrite.
I'll look at the genetic algorithms to see what I can do. I made the changes I suggested yesterday, and then got pulled away by other chores.--Sparkygravity (talk) 16:18, 23 February 2008 (UTC)
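Since genetic algorithms came up above as, technically, a form of optimization search, here is a minimal sketch of one. The toy task (evolve an all-1s bit string), the parameters, and the operators are my own illustrative choices, not from the article.

```python
# Sketch of a genetic algorithm as an optimization search: candidate
# solutions are selected, recombined (crossover), and perturbed (mutation),
# and the fittest survive each generation.

import random

def genetic_search(length=20, pop_size=30, generations=200, seed=0):
    rng = random.Random(seed)
    fitness = lambda s: sum(s)  # count of 1-bits; the maximum is `length`
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:
            break                              # optimum found
        parents = pop[: pop_size // 2]         # selection: keep the top half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]          # single-point crossover
            i = rng.randrange(length)
            child[i] ^= 1                      # single-bit mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search()
print(sum(best))  # typically near the maximum of 20
```

Nothing here searches a tree of states explicitly, which is why the "types of search" bullets read awkwardly: a GA explores the space through its population instead.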

Done I think all these issues have been resolved. ---- CharlesGillingham (talk) 13:21, 27 August 2008 (UTC)

External links

(Preceding discussion archived).

Wow! There are way too many external links, so I forbore to add what methinks is a good one: Machina sapiens and Human Morality. Pawyilee (talk) 16:35, 4 March 2008 (UTC)

The link doesn't work at all for me... I will try again later; maybe it's a server problem... Raffethefirst (talk) 08:23, 5 March 2008 (UTC)
I think this would be a good source for the (very incomplete) article ethics of artificial intelligence. ---- CharlesGillingham (talk) 22:14, 2 August 2008 (UTC)

AI military

A really interesting look at what the military is doing with AI: http://www.aaai.org/AITopics/html/military.html. I'm currently using links from this and other sources to develop an article on the Dynamic Analysis and Replanning Tool, a program that uses intelligent agents to optimize and schedule supplies, transport, and other logistics.--Sparkygravity (talk) 12:33, 28 February 2008 (UTC)

For some recent news and budgets: [3] Pgr94 (talk) 15:53, 28 February 2008 (UTC)
Could this go in applications of artificial intelligence? ---- CharlesGillingham (talk) 07:16, 12 July 2008 (UTC)

Donald McKay

There are a couple of links to an AI researcher of this name, but the link takes you to the famous clipper ship builder. I do not know how to find the link you meant.

--AJim (talk) 04:32, 8 May 2008 (UTC)

Apparently there is no article on this Donald McKay. I have changed these to red-links for now. ---- CharlesGillingham (talk) 07:15, 12 July 2008 (UTC)

Severe bias complaint

There is an article for "AI" on Wikipedia, but there isn't an article for "artificial emotion", and that's bad. AI has been a proven failure again and again for over half a century. Sentient beings developed via basic emotions, not reason. We are wrong to try to set up ever-debating megacomputers in their secluded ivory towers; it just won't work.

If there is ever an android we consider to truly have life in it, she will look and act like the average 12-year-old Japanese schoolgirl in a miniskirt, not a scientist or a philosopher. Therefore research into and application of "artificial emotion" (AEQ) should be properly represented on Wikipedia! 91.83.19.141 (talk) 21:23, 12 June 2008 (UTC)

Marvin Minsky (one of AI's founders and most important contributors) seems to agree with you (see The Emotion Machine). Antonio Damasio (an important neuroscientist) emphasizes the importance of emotions to intelligent, goal directed behavior. Hans Moravec (a leading researcher in robotics) has also addressed the issue of emotion directly (see Philosophy of artificial intelligence#Can a machine have emotions?). Feel free to write an article on this interesting and important subject, using these respected sources and any others you may have.
This article, however, must focus on the ideas and practices that have been most influential in academic and commercial artificial intelligence. Emotion, up to this point, has not been the focus of much serious AI research. ---- CharlesGillingham (talk) 03:51, 19 June 2008 (UTC)

Removed text

Removed this from the lead. ---- CharlesGillingham (talk) 07:19, 12 July 2008 (UTC)

"It is better to state that system could behave in the such way that it deserve to be named intelligent.

Intelligent is the given name to the system capable to demonstrate some kind of behavior and thereoff can't be artificial. It is much better to use some term definable through the observable properties of a system. The term reasonable, for example, could be defined as following: I am suggesting giving to some system the name “Reasonable” if it is capable to define its own behavior being guided by its own subjective representations about the World accessible to it." (Michael Zeldich)

"More precisely we could state that reasonable systems attempting to bring the World in the conditions were its own subjective representation about it will be close to its own imaginary subjective representation about the World generated in the past." (Michael Zeldich)

Artificial Brain as Philosophical Point of View about A.I.

In the section "Philosophy of AI", the artificial brain is shown as a philosophical position about AI, but wouldn't it be more precise to present the artificial brain as an experiment that follows from a materialist vision of intelligence? ---- AFG - 16:30, 08 August 2008 (UTC)

The artificial brain argument follows from a materialist vision of intelligence, absolutely. It's a very direct and simple way of expressing materialism. I'm not sure what you mean by "experiment". Do you mean "thought experiment"? --- CharlesGillingham (talk) 23:11, 27 October 2008 (UTC)

Neats and scruffies is historical (was Examples of the "new neats")

The article refers to the "new neats" - who does this refer to? It would be good to elaborate a little. Pgr94 (talk) 14:28, 10 August 2008 (UTC)

This research direction is described in Russell and Norvig, p. 25. They specifically mention Judea Pearl's work in the late 80s, but they are talking about everyone who is using mathematical models like Bayesian nets, etc. They also argue that modern research into neural networks has become very neat, i.e., mathematics-based, provably correct, etc. ---- CharlesGillingham (talk) 16:31, 10 August 2008 (UTC)
Oh I see. Many fields have an experimental/exploratory phase followed by a formalisation phase, so I see the terms "neats" and "scruffies" as AI-specific jargon. "In many cases [jargon] may cause a barrier to communication as many may not understand." (from Jargon).
I suggest the terms not be used as section titles as they may be confusing for the layman. (I work in the field and I didn't know whom the "new neats" referred to). Would anyone object to renaming the subsection "The new neats" to "Formalisation" or "Mathematical foundations" or "Development of mathematical foundations"? Pgr94 (talk) 20:08, 10 August 2008 (UTC)
How about The revival of formal methods? (I don't like this as much. I would prefer to explicitly mention the revival of "Neatness". Another idea I considered was Victory of the neats (which is a phrase drawn directly from R&N, and therefor specifically verified), but I wasn't sure I agree that they really are victorious. After all, robotics from Roomba to the DARPA Grand Challenge is very scruffy on the whole.)
Neat is jargon, of course, but it's defined a few sections above (under scruffy symbolic AI). The neat/scruffy debate probably had its heyday in the early 80s (see, for example, Nils Nilsson's presidential address to AAAI in 1983). Historically it is one of the major schisms of AI, at least according to my research--the major histories of AI (McCorduck, Crevier) mention it, as does R&N. For this reason, I would argue that neats vs. scruffies is a notable and relevant distinction in approaches to AI.
I like R&N's observation that "neatness" went out of style for awhile but then came back with a vengeance in a new form. This is an interesting fact about the history of AI. The same pattern happened with neural networks, and with control theory/cybernetics. I think this is interesting. McCorduck writes "[A] recurring theme in AI [is that] ideas are picked up, exploited to the maximum extent allowed by available hardware and software, and then picked up again as major improvements in hardware and software (the latter often from AI research itself) allow a new round of exploitation." p. 422. Herbert Simon said on the same subject "Everything waits until it's time." That's why I think it's appropriate to characterize the mathematical methods of the 90s as a revival of "neatness."
I'm not sure I agree that it is fair to characterize neats vs. scruffies as a historical progression of experimental/exploratory->formalization, for several reasons. (1) "Neat" approaches were explored first. As they began to fail (say, around 67 or so), scruffy methods were applied more frequently, and then, as they failed, neat solutions were tried more successfully. (2) "Neat" solutions aren't, in general, applied to the same problems as scruffy solutions. (Story understanding is scruffy. Machine learning is neat.) So it's not accurate to think of neat solutions as formalizations of scruffy solutions. I think it is more accurate to say that neatness has come in and out of style, as McCorduck and R&N observe. ---- CharlesGillingham (talk) 21:23, 10 August 2008 (UTC)
I still think that the article puts too much emphasis on "neats" and "scruffies". If it is any indicator, none of my AI textbooks mention the terms apart from Russell and Norvig who relegate it to a footnote. My feeling is the terms are no longer suitable for modern AI research and belong only in the history section. Apart from that the article is coming on great. Pgr94 (talk) 15:27, 27 August 2008 (UTC)
Russell and Norvig don't mention cognitive simulation or computational intelligence either. Approaches to AI, in the sources I've got, isn't covered very well. All the sources for this section are historical—this is a list of historical trends in how people approach AI.
I'm not sure that I agree these are "subfields" of AI. I think the subfields of AI would be the problems (like "machine learning" or "automated theorem proving") and some of the tools (like "logic programming" or "soft computing"). These approaches don't really constitute sub-fields, just long-standing disagreements about how AI should be done.
Also, I went with your title above, "Formalisation" (UK spelling and all). ---- CharlesGillingham (talk) 19:54, 6 September 2008 (UTC)

Dropped without references.

I dropped this: Holographic associative memory, from the list of neural network learning techniques. There were no references. Without references, it is hard for me to tell whether this is truly notable or not. 99% of this article is based on material in the four major AI textbooks (and two histories), and as it is, we still skip a lot of things covered by those books. It's hard to justify adding things when so much has been left out already. ---- CharlesGillingham (talk) 23:15, 17 August 2008 (UTC)

Issues mostly resolved

I think that all the issues above have now been resolved. I'll introduce what's left on my todo's below. ---- CharlesGillingham (talk) 02:42, 28 October 2008 (UTC)

Question of order: Perspectives/Research/Apps?

I notice somebody moved "perspectives" so that it follows "research" and "applications". Does anybody feel strongly that this is the right place? (Of course, I liked the order Perspectives/Research/Applications. It started with the oldest historical stuff and things that are interesting outside the field. It ended with applications, which seems to me like the "result".) ---- CharlesGillingham (talk) 03:03, 28 October 2008 (UTC)

I support the original Perspectives/Research/Applications order, though I am not sure about the AI in myth, fiction and speculation section. And thank you and the other contributors for the article - I was glad to find out that there is a pretty good and not-too-long coverage of AI topics. Hellinalj (talk) 16:11, 2 November 2008 (UTC)
  Done Restored these to their original order. ---- CharlesGillingham (talk) 03:28, 11 November 2008 (UTC)

Todo: Applications should be a page

I think the "Applications of AI" section should be around a page in length. It shouldn't be a scattershot list of applications; it should try to define the role that AI plays in the world today. How & why it is used, more than where it is used. Why it succeeds and why it fails. I pretty sure that all the sources one would need are at AAAI's AI topics. ---- CharlesGillingham (talk) 03:03, 28 October 2008 (UTC)

Todo: Topics covered by major textbooks, but not this article

I can't decide if these are worth describing (in just a couple of sentences) or not.

  1. Could use a tiny section on symbolic learning methods, such as explanation based learning, relevance based learning, inductive logic programming, case based reasoning.
  2. Could use a tiny section on knowledge representation tools, like semantic nets, frames, etc.
  3. Control theory could use a little filling out with other tools used for robotics.
  4. Should mention Constraint satisfaction. (Under search). Discussion below, at #Constraint programming.

--- CharlesGillingham (talk) 03:03, 28 October 2008 (UTC)
(Updated) ---- CharlesGillingham (talk) 18:23, 11 November 2009 (UTC)

Personally, I feel the different machine learning methods should have a few sentences each, because each is an attempt to approach the problem in a different way... but one that I think is important, and unique.
Semantic nets are pretty useful; I have no idea whether they deserve a section... how many AI researchers use semantic nets to develop code?--Sparkygravity (talk) 05:29, 28 October 2008 (UTC)

Synthetic Intelligence

Despite being linked to by a couple of disambiguation, user, and talk pages, Synthetic Intelligence is very nearly an orphan. As suggestions that the page be merged into this one have arisen on several occasions, it seems worth including some manner of link. I attempted to include such a link quite some time ago, and had the edit promptly reverted/edited away (which prompted one of the several suggestions to merge into this page). Consequently, I'd like to suggest that perhaps someone more involved with this page consider adding such a link in an appropriate space and manner. Darker Dreams (talk) 00:43, 18 December 2008 (UTC)

The trouble is that this article is already quite long and this is a peripheral topic to the subject as a whole. A better merge would be into philosophy of AI, although that article is also quite long. (Sorry for the late reply) ---- CharlesGillingham (talk) 18:39, 10 March 2009 (UTC)
Better delayed replies than none at all. However, the suggestion was for a link, as is appropriate behavior in a wiki for a peripheral topic. It was not another recommendation to merge, as past merge-attempts into various other articles have ended with the removal of everything but the (unexplained) term. Darker Dreams (talk) 00:21, 11 March 2009 (UTC)

approaches section is odd

The article's "Approaches to AI" section lists these approaches: 1) Cybernetics and brain simulation; 2) Traditional symbolic AI; 3) sub-symbolic AI; and 4) Intelligent agent paradigm. This has some validity, especially historically, but is a bit out of keeping with the past 10-15 years of AI. In particular, it's virtually unanimous that the main split in AI today, among people who consider themselves AI researchers, is symbolic vs. statistical. But this section doesn't mention statistical AI at all, despite it being probably the plurality current approach to AI. Historically, statistical AI grew out of connectionist and sub-symbolic AI to some extent, but most of the people who still call themselves by those terms no longer identify as AI at all, instead having split off into computational intelligence; one does not, for example, find many neural-net papers at AAAI, but there are a lot of statistical AI papers there. --Delirium (talk) 03:55, 7 March 2009 (UTC)

I'd agree with the pervasive symbolic/statistical split, but I wouldn't be inclined to agree that they are the main paradigms for building AI. Rather they are approaches to solving subproblems like classification or theorem-proving. Personally, I think both statistical inference and symbolic reasoning are complementary, with both necessary for intelligence. Brain simulation and cognitive architectures on the other hand provide a high-level view on how to achieve AI. However, my preferred classification for this section is: bottom-up (intelligence emerges from primitive components) vs top-down (designed by a human architect). The idea of the top-down vs bottom-up split appears to go back to Turing in 1948. [4] pgr94 (talk) 05:06, 7 March 2009 (UTC)
I think the article (as all Wikipedia articles) should take a more descriptive rather than prescriptive tack, neutrally summarizing current debates in the field. I think it's hard to argue that the current field of AI has any major split other than statistical versus symbolic, although others could be argued for historically or on independent conceptual grounds (but the latter would be original research, so not really suitable for Wikipedia). Bottom-up versus top-down seems like it could be a valid split historically; I'm not really an expert on AI history. My point is mostly that statistical versus symbolic is the present major split, so the article presents a flawed picture if it fails to mention that. --Delirium (talk) 06:45, 7 March 2009 (UTC)
The section titled "formalization" is intended to describe (in broad terms) the "statistical" approach to AI. We had a little trouble settling on a title for this section, as I recall. We couldn't find a term that had wide usage. Perhaps "statistical" would be better.
Basically, the article agrees with you, but it's not quite as obvious as perhaps it should be. It does (attempt to) describe "symbolic" and then "statistical" AI, along with two other modern forms: "embodied" and "computational intelligence", one dead form "cybernetics", and a paradigm for all of AI: "intelligent agents". Since all three modern forms set themselves up as alternatives to symbolic AI, I put them under sub-symbolic so that I could write a paragraph about the rejection of symbolic AI.---- CharlesGillingham (talk) 16:02, 7 March 2009 (UTC)
I've done a little bit of reorganization to address your concerns. Does this seem better? ---- CharlesGillingham (talk) 16:18, 7 March 2009 (UTC)
Yeah, this is a bit better. I suppose it's hard to come out with an outline that all approaches to the field will find equally neutral; many statistical-AI people would probably still balk at being placed in "sub-symbolic". Part of the problem I think is that current statistical-AI is a result of convergent evolution of both AI and statistics. Some proportion of statistical-AI researchers are indeed people from the sub-symbolic camp of the 80s/90s who eventually settled on statistics as the best way to deal with numerical approaches to AI, and jettisoned their former interests in things like neural nets (Richard S. Sutton and other NIPS types are in this category). But others were never associated with that camp, and instead come historically more from the practical-statistics-for-AI-problems camp of people like Hastie, Tibshirani, or Breiman. Those kinds of people would tend not to recognize "sub-symbolic" as a label for their work. --Delirium (talk) 21:39, 7 March 2009 (UTC)
I've pulled "statistical AI" out from under the "sub-symbolic" banner. I think you (Delirium) were right, for several reasons. (1) The symbolic / sub-symbolic debate was really carried out people from the "embodied" side (like Rodney Brooks) and people from the "connectionist" side (like David Rumelhart). Researchers who founded the "statistical" approach (like Judea Pearl) never really attacked symbolic AI with the same fervor. (2) You can apply statistical tools to symbolic problems (like analogy or deduction) as well as to sub-symbolic problems (like edge-detection).
Thanks for your well-informed and helpful advice. ---- CharlesGillingham (talk) 17:37, 10 March 2009 (UTC)
"Statistical" is fine by me; it's not perfect as there are statistical underpinnings in all machine learning (e.g. tree learning, rule learning, clustering) but no better term springs to mind. It would be good to have up-to date sources for the categorisation of current approaches.
I still have issues with the terms "neat" and "scruffy". They are historical terms and, as far as I am aware, deprecated: I don't know of any modern AI researchers that use the terms to describe their work. Besides, jargon should be avoided (or if unavoidable explained) WP:JARGON. pgr94 (talk) 17:51, 10 March 2009 (UTC)
I think Commonsense knowledge bases and CYC belongs in the next subsection on knowledge-based AI. Any objections?
To Pgr94: I'm sorry for replying to this about a billion hours too late, but anyway, on "scruffy": the paragraph that introduces the term does explain it, so we don't run afoul of WP:JARGON here. The only question is whether or not the term is "avoidable".
The issue is unavoidable, certainly. Although, as you say, the whole debate has gone out of fashion, the issue has not been resolved. People still present new "unified field theories" of AI all the time, and yet the most important projects underway (like Cyc, or the DARPA challenge winners), still require details, details, details and the elegant theories still tend to get nowhere. So there is this unresolved issue that the article can't help but point out. But what do we call it, if we don't call it neat/scruffy? I agree with you that the term "scruffy" isn't used very often any more, but I can't find another way to say "no simple elegant theory expected, not statistical-mathematical, not logical-formal, not elegant-model-of-neural-cortex-organization, not anything else, just details, details, details".
So I guess I'm asking you a question. Is it the term you object to, or the issue? If only the term bothers you, let's discuss how the article could raise this issue without using the term. If the issue bothers you, then I must strongly disagree, and let's discuss that.
Assuming the problem is the term, then here is my argument. The term has three advantages I can see: (1) it was once used to describe the issue, so there are sources (admittedly, old sources) to link to. (2) there is a Wikipedia article, Neats vs. scruffies, which describes the issue in detail (although this article could use some help at some point). (3) There doesn't seem to be a new term for this. The term has one disadvantage that you point out: (1) No one uses it any more. For me, the first three considerations take priority. If you differ, let's discuss it. ---- CharlesGillingham (talk) 00:29, 29 June 2009 (UTC)

Constraint programming

I do not see constraint programming in this article. Is it missing, or is it not considered part of AI (in my opinion, it is)? --Nabeth (talk) 18:43, 7 September 2009 (UTC)

Yes, I would agree this merits inclusion. It would go under "Search", in the paragraph about applications of search. There's only room for a one sentence explanation aimed at non-technical readers. I think the sentence should link to constraint satisfaction, rather than constraint programming, since that article includes constraint programming as a subtopic. ----CharlesGillingham (talk) 05:14, 8 September 2009 (UTC)
Yes, my mistake, the term to be used is constraint satisfaction. Concerning where, it could indeed go under search and optimisation. However, I would suggest separating the different methods here much more explicitly. Indeed, 'constraint satisfaction' and 'evolutionary computation' (genetic algorithms) are of a very different nature, and would deserve different sections. In the history of AI, each of them grabbed a reasonable amount of attention on its own. Constraint satisfaction was in particular 'very hot' in Europe 20 years ago and led to several products (Bull / Charme, ILOG / Pecos, Cosytec / CHIP, etc...). And genetic algorithms, to my knowledge, came afterward. --Nabeth (talk) 09:19, 8 September 2009 (UTC)
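For readers of this archive unfamiliar with the technique under discussion: constraint satisfaction can be sketched in a few lines of Python. This is a minimal backtracking solver for the classic Australia map-colouring problem (the textbook example from Russell & Norvig); the variable names and three-colour domain are illustrative only, not something proposed for the article itself.

```python
# A minimal constraint satisfaction sketch: backtracking search over the
# Australia map-colouring problem. Each region is a variable, the domain is
# three colours, and the constraint is that neighbours must differ.

NEIGHBOURS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLOURS = ["red", "green", "blue"]

def consistent(var, colour, assignment):
    # A colour is consistent if no already-assigned neighbour shares it.
    return all(assignment.get(n) != colour for n in NEIGHBOURS[var])

def backtrack(assignment):
    if len(assignment) == len(NEIGHBOURS):
        return assignment  # every variable assigned: solved
    var = next(v for v in NEIGHBOURS if v not in assignment)
    for colour in COLOURS:
        if consistent(var, colour, assignment):
            result = backtrack({**assignment, var: colour})
            if result is not None:
                return result
    return None  # dead end: caller backtracks and tries another colour

solution = backtrack({})
```

The same skeleton (assign, check constraints, backtrack on failure) underlies the industrial constraint solvers mentioned above, which add smarter variable ordering and constraint propagation on top.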

Length is not an issue

I don't think we need to worry about being a few pages over the ten page mark, as long as it's not too crazy. See WP:Article size#Occasional exceptions, and remember that this is a WP:Summary style article.---- CharlesGillingham (talk) 03:03, 28 October 2008 (UTC)

This issue is discussed below. See #Split article. ---- CharlesGillingham (talk) 17:15, 12 October 2009 (UTC)

Split article

This article is growing rather long. Could some of the subtopics be split off? 192.17.199.112 (talk) 20:41, 4 October 2009 (UTC)

This is a WP:Summary article of a very large field. It doesn't break up into obvious pieces. This is mostly because AI, at the moment, is a highly divided field. The majority of this article is nothing more than a series of one-paragraph (or even one sentence) summaries of the most important sub-fields of AI. None of these sub-fields can really be ignored, and they are all relatively isolated from each other. The remainder of the article (i.e. the sections on history, philosophy and fiction) are also important.
I argue that this article falls under WP:Article size#Occasional exceptions: it is a WP:Summary article of a large field. ---- CharlesGillingham (talk) 17:20, 7 October 2009 (UTC)
I agree that AI represents a very big topic, and that this article (may) fall into WP:Article size#Occasional exceptions. However, this should not prevent us from thinking about making it more digestible and easier to update (I have to admit I am reluctant to update it because of its size and because of its quality, since I am always afraid to 'break something'). For instance, an idea could be to create a Portal (see for instance Portal:Education). Maybe we could also create a couple of wikipedia templates for this domain. Having said that, this requires of course some effort, and people ready to do the work :-). --Nabeth (talk) 14:38, 11 October 2009 (UTC)
There is a portal: Portal:AI. Sadly, no one maintains it. We could use a navigation template at the top level, which could be adopted from the portal.
Note that the text of the article is only 10.4 pages long (on my printer), which is only a smidge above the recommended length. The references are another 10 pages, the table of contents is a page, and various other things bring it up to about 23 pages, but, in my view anyway, these don't count. However, the "applications" section has still not been written and it should be at least a page, and if the article was adequately illustrated, it would add another one or two pages. So if the article ever reaches FA, it will be three or four pages over the limit.
If we want to cut it down, I feel strongly that we should primarily remove material that is not covered by major AI textbooks. ---- CharlesGillingham (talk) 17:13, 12 October 2009 (UTC)
This is interesting, I had not noticed the portal! Now I have added a reference in the See also section (maybe we could consider making another reference somewhere else). Concerning the main article, yes the list of notes is a little bit long. However, the notes also make the whole article solid. It is not totally clear what a good solution would be that keeps both this extended version and a simpler introduction for the non-expert. Maybe just the portal will be OK if it is more visible, for instance at the beginning or at the end (for instance I find the template {{portail|informatique}} in the French Wikipedia, present at the bottom of the page, very convenient). Best regards. --Nabeth (talk) 22:18, 19 October 2009 (UTC)
For information, there also exists an information box for AI. {{Infobox artificial intelligence}} --Nabeth (talk) 08:49, 20 October 2009 (UTC)

Removed material

I removed this section, for several reasons:

  1. This material was already covered in the first section of the article Artificial intelligence#AI in myth, fiction and speculation.
  2. I couldn't find McCarthy's opinions on science fiction in the reference. (Although, if these exist, they would be great to have in the first section.)
  3. This article is quite long, so we need to consider each new contribution carefully.

Perhaps this editor would like to contribute to artificial intelligence in fiction? ---- CharlesGillingham (talk) 09:45, 23 May 2009 (UTC)

Artificial Intelligence in the Media

Artificial intelligence is an extremely popular topic in the media. The Terminator movie series includes a sentient computer called Skynet which tries to take over the world and succeeds in wiping out most of the human race. The Matrix trilogy and other related movies are set in a world where, in reality, the Earth has been taken over by sentient machines that use humans as a power source. I, Robot, a movie based on the book by Isaac Asimov, depicts a revolution of robots that had previously been friendly to humanity. In the movie, the AI controlling the robots, V.I.K.I., initiates the revolt in what she claims is the best interest of humanity, because the only way robots can truly protect humanity is to protect humans from themselves. In the movie A.I., a family adopts a robotic duplicate of their dead son, who soon starts exhibiting emotion and other human qualities. John McCarthy, generally considered the founder of the field of artificial intelligence, dismisses the opinions of the popular media as nonsense that has little to no basis in reality. However, he does acknowledge the possibility of an AI growing beyond the ability of humans to control or stop it. (McCarthy and Hayes 1.1)

(McCarthy, John and Patrick J. Hayes. "Stanford University Computer Science page." 19 April 1996. Stanford University Website. 23 March 2009 <http://www-formal.stanford.edu/jmc/mcchay69/mcchay69.html>.)

  • Your edit brought the subsection "AI in myth, fiction and speculation" to my attention. I absolutely support including material on the broader cultural impact of the topic, but does it really have to be the first thing after the lede? Usually, and especially in scientific contexts, this would be discussed in its own section after the scientific/philosophical sections. My suggestion is to merge the two sections and put them into the place of the removed section. Paradoctor (talk) 12:30, 23 May 2009 (UTC)
Well, my thinking on this was that, this way, the article goes in historical order: we begin with the myths and automatons of antiquity, move through the history, and then outline the state of AI research today. Another motivation is that the "tools" section is fairly technical, and I think most of our readers are interested in the social aspects first and the technical aspects last. I'm not sure I agree that we should, in every case, place social aspects last. I do agree that social aspects don't deserve as much space as technical aspects, and the organization of this article also reflects that as well. ---- CharlesGillingham (talk) 13:58, 24 May 2009 (UTC)
Ok, I was a bit hasty, a closer look at the subsection lessened the first jarring impression I had of it. But I'm still not really comfortable with it. It contains a mix of fiction and history which appears a bit unfocused to me. If you prefer "historical order", no problem with that. But then the first section's title should be "History", not "Perspectives". Including the scientific speculations on the future of the field in a history section is fine by me. This would suggest as first section "History" with the subsections "Roots and early history", "Pre-1950" (which should include Lovelace's remark on the Analytical Engine), "AI research", "Predictions". Philosophy is promoted one level up, and AI reception in arts and society also get their own sections each further down the road.
"I think most of our readers": Ahh, the mythical Average Man! ;) I have severe doubts about that, but with out data, these are just our opinions. Seeing as this page is in the top 2000, I think polling the readers might be a good idea.
"we should, in every case, place social aspects last": Jiminy, what gave you the impression that I hold this view? I did qualify my suggestion with "Usually, and especially in scientific contexts". It seems we have different views about our readership here.
"social aspects don't deserve as much space as technical aspects, and the organization of this article also reflects that as well": As a note: I consider the first part to be context-dependent. Well, if the social aspect is not the major concern, then it does not really make sense to put it in the first place, does it?
Regards, Paradoctor (talk) 16:34, 24 May 2009 (UTC)
I'm not sure exactly what you're arguing for here, but I think you have a point which we can use to improve the article. First, the first section is "jarring" and "unfocussed". I see your point, and I think the section needs a little work, although I think the basic content should remain the same. Maybe we could fix this by: (1) writing an opening paragraph that sums up the section, something like the second paragraph of the intro. The paragraph should make explicit the point that futurology and science fiction discuss the same issues, so the reader has a "heads up" that the section is going to jump back and forth from fiction to speculation and back again. (2) The section is incredibly dense, and we could split some of those paragraphs and relax the relentless pace of the ideas in there. There's no reason to make it unreadable for the sake of brevity. (Although the article is too long and this section should not grow past a page and half, I think).
I feel pretty strongly that the structure of the article makes sense. (Short section on social aspects first, long section on technical aspects last). The only improvement that comes to mind from your objections is to lose the "perspectives" header and just bring those first three sections up to the top level. Maybe that would have made that first section easier to find.
Finally, I think Lady Lovelace's objection is a bit too obscure for this article; AI is big topic and a lot has been said about it. Indeed it is a bit obscure for history of AI. Her objection is mentioned in the philosophy of AI#Can a machine be original or creative?, although she is not cited, and perhaps she should be added there.---- CharlesGillingham (talk) 04:10, 29 June 2009 (UTC)
Please, do what you feel is right. Paradoctor (talk) 12:38, 29 June 2009 (UTC)
  Done History & philosophy, I think. "Perspectives" is gone. Philosophy is promoted, as you suggested. The history section now includes the paragraph on the ancient roots, along the lines of your suggestion. For the pre-1950 paragraph, I focussed on the history of formal reasoning specifically, since I think this is the most important point. I feel that Lady Ada Lovelace's comment is not quite influential enough on any specific aspect of AI to merit inclusion. (She did finally get a place in the History of AI article).
  Not done Fiction and speculation. This section could be titled "prediction", as you suggest, but I think fiction and speculation casts a slightly wider net that catches all of the ethical issues. This section is still, in my opinion, too dense. It could use shorter sentences and paragraphs, and a clearer separation of topics, especially in the last paragraph. ---- CharlesGillingham (talk) 19:35, 20 August 2009 (UTC)
This somehow came in under my radar. More below. Paradoctor (talk) 18:34, 12 November 2009 (UTC)

Forgive me if I am wrong, but I think that the section should indeed be titled "Prediction". Speculation has sort of a negative connotation to it... to me, speculation is like "subtle insanity", if that makes sense. So with a little permission, I would like to change that, and possibly expound on possible predictions and ethical issues that may arise should artificial intelligence become more than it currently is. Chucks91 (talk) 19:36, 10 November 2009 (UTC)

Predictions it is. This section is still a bit dense, especially the third paragraph, so please, take a crack at it if you like. Also consider contributing to the article Ethics of AI, which is still a fairly loose collection of things and could be more comprehensive. Artificial intelligence in fiction also has room for more material. ---- CharlesGillingham (talk) 18:19, 11 November 2009 (UTC)
Err, this is a misunderstanding. I meant "Predictions" to be a subsection of "History". ^_^ As it stands, the "History" section is fine. What is now "Prediction" works as "Fiction" if we remove the last two paragraphs. Transhumanism can be dropped, this is IMHO not a concern of AI proper, "artificial" is "contrived through human art or effort", something that is applicable to the first generation alone (which may consist of a single individual), and only for a short time. The passage from "Academic sources" to "same name in 1998" would fit at the end of the "History" section, whether as subsection or as part of the text is a matter of layout. Paradoctor (talk) 18:34, 12 November 2009 (UTC)
Well predictions is something based on what is or may happen... If something is going to happen or could possibly happen, it couldn't be history... So in my opinion, Prediction should have a section of its own. If I have misunderstood in any way PLEASE let me know! =) I'd like to help make this article fantastic! Chucks91 (talk) 18:56, 12 November 2009 (UTC)
That's the spirit! ;) Paradoctor (talk) 20:35, 12 November 2009 (UTC)

Also, I would like some feedback on what to add to the prediction section. I've noticed that no one has added anything about what could happen should artificial intelligence go unchecked after it has reached a point where it can learn on its own. I will add this information whether I receive feedback soon or not, but if it doesn't belong please feel free to delete it, but explain why. Chucks91 (talk) 19:06, 12 November 2009 (UTC)

Don't worry, we will. For information on what happens when machines start to learn on their own, you might want to read the articles on transhumanism and technological singularity, which are both mentioned in the last paragraph of the section "Predictions". Regards, Paradoctor (talk) 20:35, 12 November 2009 (UTC)
In this article in particular, we are mostly looking for brevity & reliable sources & relevance. ---- CharlesGillingham (talk) 20:41, 12 November 2009 (UTC)

I've noticed that there are quite a few examples given in the prediction section. Perhaps too many? Trimming them would help decrease its length for sure. Chucks91 (talk) 17:08, 19 November 2009 (UTC)

Discussion at Intelligence

A bit about machine intelligence has been removed from the Intelligence article. There is a debate about it at Talk:Intelligence#.22Mathematical.22_definition_of_intelligence. It has just been restored, but the person who removed it originally has raised a worry about it being biased original research. Like to give it a look? thanks Dmcq (talk) 22:35, 5 January 2010 (UTC)

Picture?

 
Shadow Dexterous Hand

This picture to the right of the article seems to have no value beyond free marketing for the robotic arm they're selling. Is it really necessary to put it there, and does it really have any use for the article? 93.125.198.182 (talk) 18:02, 28 June 2009 (UTC)

Hopeless. Utterly hopeless. Paradoctor (talk) 19:53, 28 June 2009 (UTC)
Why is there such a problem with pictures (assuming they are not copyrighted)? We are visual creatures and pictures help our memory. AI is such a large topic that no one picture can represent it all - how about a research robot, like MIT's Domo? —Preceding unsigned comment added by Thomas Kist (talkcontribs) 18:53, 29 June 2009 (UTC)
Perhaps we could use the image, but just without the caption? The caption did actually read like ad copy, and I agree that it had to be removed. But without the caption, it's just kind of a non-specific image that evokes the idea of AI. ---- CharlesGillingham (talk) 04:33, 30 June 2009 (UTC)
The new image (face recognition software) works. Best one so far. And it's even from the Commons! Case closed. ---- CharlesGillingham (talk) 10:34, 17 July 2009 (UTC)
I agree, AI is really about algorithms (the mind behind the body) and this is actually a picture about an algorithm. Well Done! —Preceding unsigned comment added by Thomas Kist (talkcontribs) 13:45, 21 July 2009 (UTC)
Good one!93.125.198.182 (talk) 01:02, 27 July 2009 (UTC)
What happened to the face recognition picture???? —Preceding unsigned comment added by Thomas Kist (talkcontribs) 16:11, 6 August 2009 (UTC)
It was deleted from the commons, believe it or not. I posted a message to the editor who deleted it. I suppose that some rule or other was violated by who ever uploaded it in the first place. We just can't catch a break on a lead image for this article. (Remember the Deep Blue vs. Gary Kasparov image? That was good too.) ---- CharlesGillingham (talk)

Red Eye Special

 
Artist's conception of an electronic eye

I was bold and inserted this as interim solution. You know, window to the soul, HAL, artificial vision, should be enough to keep many happy while not enraging too many. Paradoctor (talk) 20:11, 10 November 2009 (UTC)

Hi, if you want to use the iconic eye of HAL as a welcome image, then I suggest you take one of the many images available on the Web, but not this horrible representation. ---- AFG 12:06, 11 November 2009 (UTC)
I'm afraid images from 2001 - A Space Odyssey are not available. If you know of any suitable free images, or any copyright holder willing to grant permission, I'll gladly do the legwork. Paradoctor (talk) 18:05, 11 November 2009 (UTC)

thumb|HAL's iconic camera eye.

What about this one? ---- AFG 13:23, 11 November 2009 (UTC)
Paradoctor is right. We have tried to use this image before and have been told that it is a copyright violation. If the entire article is about the film or characters from the film, then the image falls under the legal principle of fair use. However, because HAL is tangential to the subject of this article, fair use doesn't apply to us here. At least that's how I understand it. WIkipedia has to take copyright law very seriously, I'm afraid, which is why it is so hard to find decent images for a subject like AI. ---- CharlesGillingham (talk) 18:34, 11 November 2009 (UTC)
Of course, we could always use the Shadow Hand pic again. (ducks and covers) Paradoctor (talk) 20:35, 11 November 2009 (UTC)
Regardless of copyright issues, I don't particularly care for an image of HAL. Unless you have seen the film it means nothing. It shouldn't be necessary to see a film to understand/appreciate an article. pgr94 (talk) 16:42, 12 November 2009 (UTC)

Man vs. Machine

Some time ago (I forget when exactly) I went to the trouble of contacting IBM to get permission to use a photograph of Kasparov playing Deep Blue for this article. Was that not an appropriate picture? Now for some reason it doesn't even appear in Deep Blue versus Garry Kasparov. pgr94 (talk) 21:20, 11 November 2009 (UTC)

Lots of Kasparov images here and on Commons, one Deep Blue image, two versions of a fair-use image from the match on the Lithuanian and the Kannada Wikipedias. Your upload log shows only two unrelated files, and the article on the match never had an image, judging from its history and the image request on its talk page. Do you by any chance remember details on the how what when where why whom of your elusive upload? Just noted: shouldn't there be an OTRS ticket? If so, maybe they can help us. Paradoctor (talk) 00:10, 12 November 2009 (UTC)
Dug up the OTRS: [Ticket#2008031110021007] Deep Blue photo‏. It showed Kasparov playing deep blue. http://en.wikipedia.org/wiki/Image:P11_kasparov_breakout.jpg pgr94 (talk) 00:35, 12 November 2009 (UTC)
Was deleted as the result of this IfD. My suggestion seems to become ever more attractive... Paradoctor (talk) 01:20, 12 November 2009 (UTC)
We had the express permission from the copyright holder and it got deleted??? Beats me. If there is any support for the Kasparov vs Deep Blue photo I'll contact IBM again and ask them to put a free license on the picture. pgr94 (talk) 09:12, 12 November 2009 (UTC)
Permission is not the primary concern of WP:NFCC, and according to WP:NFCC#8, such content will only be kept if losing it would significantly hurt understanding of the topic. Silly, but it's policy. About obtaining a free license: In general, I'm all for adding to our collection, the articles on Deep Blue and on the match could use it, and lobbying companies to get used to releasing free content is an excellent idea. As far as this article is concerned, I think we can do better. Chess as AI metaphor is rather dated, and I have a personal dislike for chess. Also, this may be considered over-emphasizing competition/conflict between homo sapiens and his creations in a field where many expect A Better Tomorrow rather than dystopia. Paradoctor (talk) 14:18, 12 November 2009 (UTC)
And I was going to suggest a terminator picture. With a bit more thought maybe one of the best examples of AI might be Data from Star Trek, or even the Enterprise computer. (but there are so many, maybe R2D2 or the most iconic of all 'Robby') If you want something real maybe the best known most iconic example is Asimo, and Asimo does some pretty impressive things. Lucien86 (talk) 15:32, 7 February 2010 (UTC)

Cognition

Brain with cogs: captures the idea of both thought and machinery, hence automated thinking. This kind of depiction is my first choice as a representation of artificial intelligence, e.g. a mechanical brain. If someone finds a suitably licensed image, or if there are any creative types motivated to create something similar, that would be good. pgr94 (talk) 09:12, 12 November 2009 (UTC)

Sorry, but my first association was something like "Ferrari with Yugo engine". ^_^ Gears are definitely 19th century. If something like that in the style of a 19th-century engraving (or even an original) was available, I might go for it. Paradoctor (talk) 14:18, 12 November 2009 (UTC)

Caricature

While symbolic images conveying the themes of AI are great, obtaining a good, uncontroversial, and free image might prove difficult. OTOH, there must be lots of caricatures which, while not perfect, might be appropriate for the lede. Imagine this one: AI lab, grinning robot holding flame thrower, angry scorched scientist saying: I clearly said "touch me!". ;)

A few non-free examples

Paradoctor (talk) 19:22, 12 November 2009 (UTC)

Gallery of potential candidates

Feel free to add your favorites.

Indeed, why?

What about a nicely formatted quote instead of an image? It should be much easier to find something agreeable (e.g. [5][6]). For example, TAOCP does it, too, and nobody thinks anything of it. List of candidates:

  • "Chess is the Drosophila of artificial intelligence. However, computer chess has developed much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila. We would have some science, but mainly we would have very fast fruit flies." John McCarthy
  • (your suggestion here)
  • (your suggestion here)
  • (your suggestion here)

Things I was surprised not to see in this article

  1. Reference to Minsky's Society of Mind.
  2. Reference to The Emperor's New Mind and Consciousness Explained.
  3. Reference to Noam Chomsky.

I also felt we could do with more expansion on Cartesian dualism and Searle's Chinese Room.

Finally, has User:Rjanag seen this? I think his input would be particularly valuable here.—S Marshall Talk/Cont 00:29, 8 August 2009 (UTC)

You seem to be mostly interested in the philosophy of AI, rather than the business of AI research, which is the main focus of the article. Unfortunately, we can only afford about a page to cover this entire subject; there's just enough room to list the "greatest hits" of the philosophy of AI and direct the reader to see that article. (This list is at Artificial intelligence#Philosophy.) The section does include a paragraph on Searle and on Roger Penrose, who are both on your list. Their ideas are covered in detail in the articles Chinese Room and The Emperor's New Mind. Minsky is mentioned in several places in the article; however, I think that his other achievements (founding the field, heading up the MIT lab in the 60s and 70s) are more notable than his popular books.
In my view, the other two on your list are not quite influential enough (on the field of AI itself) to merit inclusion on our short list. Daniel Dennett's ideas, although relevant to the philosophy of AI, are not actually a central part of the field of AI. I assume that Chomsky's relevance has to do with his influence on computer science in general, i.e. the Chomsky hierarchy. This is not relevant to AI in particular, but only to computer science in general. Another way Chomsky might be relevant is that his ideas about language helped give birth to cognitive science, i.e., his point of view helped to defeat behaviorism in the late 50s and gave scientists more freedom to begin to discuss the internal structure of the mind, and so he is a kind of godfather to ideas in philosophy like cognitivism, functionalism and computationalism, ideas in psychology like cognitive psychology, and the central ideas of the philosophy of AI. But, in the end, this is only marginally relevant to the field itself.
Anyway, that's my opinion. This is a very tight article about a very large subject, and we try to evaluate each new entry as fairly as possible. One guide I use is "how often is this topic mentioned in major AI textbooks?", with a few caveats (see my discussion at the top of a recent Peer Review). ---- CharlesGillingham (talk) 01:43, 8 August 2009 (UTC)
  • I'm very conscious of the article's length, and I agree with that concern. (Some trimming could perhaps usefully be achieved via a more conventional referencing style.) And it's true that I'm primarily interested in the philosophy behind it. I'm also interested in the linguistics aspect of cognitive science, but that's tangentially relevant and I'd defer to Rjanag's view of what should be in there in any case.

    I'm a big Dennett fan, and I feel the reason he should be here is as an answer to Penrose (since The Emperor's New Mind is probably the most widely-read popular science book on the subject, readers of this article may well be familiar with it, and Dennett addresses its flaws).

    But I'm not desperately concerned to see changes. I remarked here because I saw you wanted more input, rather than because I feel drastic revisions are necessary.—S Marshall Talk/Cont 02:01, 8 August 2009 (UTC)

The AI video game "Milo and Kate" for the Xbox 360 is being developed by Lionhead Studios.

  1. Reference to Milo and Kate —Preceding unsigned comment added by KCKnight (talkcontribs) 14:16, 20 June 2010 (UTC)

Inappropriate ending

I don't see why an otherwise nicely-written scholarly article should end on a tacky out-note about manga and Dune. Can we either move this to another section or clean up the final paragraph so the conclusion properly captures the essence of the article and not some random fanboy's secret fantasies? 70.27.108.81 (talk) 10:11, 16 November 2009 (UTC)

The section is not about "conclusions", and would be absurd if it were, as AI is a field of active research and progress. Nor is it about capturing the essence of the article; that's what the lede is for. Paradoctor (talk) 11:57, 16 November 2009 (UTC)
That's wrong. Both the introduction and the conclusion of an article are supposed to summarize. The conclusion shouldn't be the point where the writing just stops. --Dekker451 (talk) 06:18, 4 July 2010 (UTC)

Broken Harvard references

Quite a lot of the references using {{harvnb}} in this article are broken, i.e. if somebody clicks on them, nothing happens. List of the broken ones can be seen at [7] (section “Links without IDs”). I fixed those that I knew how, but for many of them there was no similar citation in the section “References”, or the year was wrong (so I was unsure whether it is the right citation). Could someone who knows what the references mean try to fix them? Thanks. Svick (talk) 17:52, 20 February 2010 (UTC)

I fixed most of the ones for the references I have here. There might be one or two left. ---- CharlesGillingham (talk) 06:27, 25 February 2010 (UTC)

Statistical approaches - examples?

The "Statistical approaches" section should list some specific examples of AI applications.

For example, Kalman filtering for vision, or a learning algorithm (such as Q learning) 174.101.136.61 (talk) 20:07, 15 August 2010 (UTC) 174.101.136.61 (talk) 20:11, 15 August 2010 (UTC)

Kalman filters are discussed under "Tools". (The organization of the article separates problems, approaches, tools and applications. This structure overlooks the fact that certain approaches are paired with particular tools. For example, "cognitive simulation" used "means-ends analysis" (i.e. search), "symbolic AI" uses production rules and semantic nets, and statistical AI uses Bayesian nets and so on. Perhaps this is a flaw, but, for myself, I like the way it keeps the reader's attention on the major issues without burying them in too much detail.) ---- CharlesGillingham (talk) 23:05, 15 August 2010 (UTC)

Article has nothing about the current state of *academic* AI research

I am not qualified to add anything to this article, but a friend in "Machine Learning" tells me that classical "AI" does not exist as a field of research any more. Even people who do work in areas once considered "AI", like those in the machine learning community, shy away from describing themselves as "AI researchers", which (he says) is associated with grandiose, pie-in-the-sky nonsense. He said this shift was a consequence of "AI" becoming something of a joke in CS / theory departments, with the "useful" parts subsumed in algorithms, machine learning etc. and the rest relegated to crackpots, amateurs and futurists. —Preceding unsigned comment added by 128.46.211.104 (talk) 10:52, 2 November 2010 (UTC)

You might be interested in the AI winter article. Dmcq (talk) 11:05, 2 November 2010 (UTC)
and AI effect. pgr94 (talk) 11:35, 2 November 2010 (UTC)
and History of AI#AI behind the scenes. ---- CharlesGillingham (talk) 16:09, 2 November 2010 (UTC)
This is an excellent and interesting point, that has been left out of the article for brevity. ---- CharlesGillingham (talk) 16:09, 2 November 2010 (UTC)
Back in 2007 I wrote: "Artificial intelligence is a young science and is still a fragmented collection of subfields. At present, there is no established unifying theory that links the subfields into a coherent whole." but the paragraph has evolved considerably since. To me it's obvious, but sorry no reference. pgr94 (talk) 16:49, 2 November 2010 (UTC)
We still have "AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other." (in the intro) and "There is no established unifying theory or paradigm that guides AI research." (in "approaches"). I think these are the descendants of your text, and both are now cited. ---- CharlesGillingham (talk) 17:46, 2 November 2010 (UTC)

Hilarious quotes!

Here are some awesome quotes from issue 200 (March 2001) of Computer Gaming World, when a number of high-profile developers were asked about the future of AI in video gaming:

Q: When will AI be indistinguishable from a human?

Tom Hall, Ion Storm: Now, Gary Kasparov remarked that in his chess match with Deeper Blue, it made a "human" move - one that was slightly better positionally, but a loss in all other measures. But for an AI to actually respond to you intelligently, like Star Trek's computer? That will be about 2020 to 2030, because there are so many nuances to being human.

Will Wright, Maxis: Depends on the game/activity. For chess, that date might have been around 1990. For a typical real-time strategy (RTS) game, real soon, maybe 2005 or so. For conversation and general interactions, closer to 2050. General AI (not task-specific) has a track record similar to fusion power; it always seems about 10 to 20 years away from practical application.

Ed Fries, Microsoft: Never. I could make an AI today that's better than any human in, for example, a racing game. But what fun would that be? Computers will never behave exactly like humans. We are too strange.

Tim Sweeney, Epic: This is already totally achievable in non-conversational systems. Game AI is a good example of this, where the computer is inherently much better than a human opponent, such that the challenge is to dumb it down in a realistic way. But when it comes to conversation and natural language comprehension, we're still a long way off. This is the problem researchers have always thought we were 20 years away from solving.

Bill Roper, Blizzard: I do not know if this will ever truly happen as envisioned by Alan Turing. The true mettle of any gamer is his or her ability to improvise, and this is an unbelievably high watermark to set, especially when you take into consideration all of the rules and variances involved in any game. Only recently have we seen a computer able to best a human opponent in chess, and that is a game with a relatively simple rule set that is played on a board of fixed size. When you then extrapolate the challenges with modifiable terrain, many more units with many more abilities per piece, you can see that creating true AI for a game is a staggering challenge.

Peter Molyneux, Lionhead Studios: 2004 to 2005. I would say three to four years for limited applications for AI in certain worlds constructed inside computers. It could soon be very difficult to distinguish between humans and computers. But as for conventional AI having intelligence that relates to the real world? That could take a lot longer.

SharkD  Talk  07:05, 17 November 2010 (UTC)

Merging scripts

There is not enough room in this article for scripts (artificial intelligence), beyond a one-sentence definition. I've added this to the "todo" above. (along with frames & semantic nets) ---- CharlesGillingham (talk) 21:37, 13 November 2010 (UTC)

I agree with Charles that there isn't room here. Perhaps both frames and scripts could be incorporated into Knowledge representation and reasoning instead? pgr94 (talk) 20:14, 6 December 2010 (UTC)

Clipart image of a "fiction artificial brain"

I removed File:ArtificialFictionBrain.png from both this article and Intelligence yesterday (it's clipart of a human brain superimposed on a circuit board texture, captioned "There is no consensus on how closely the brain should be simulated."), but was reverted and asked to discuss the change. I appreciate it's hard to find good images for abstract concepts, but this seems rather an odd, magazine-style image for Wikipedia to be using, and I wouldn't say it added anything to the reader's understanding of the subject. What do other editors think? --McGeddon (talk) 14:12, 9 December 2010 (UTC)

I agree that it's hard to find images. If you have any better images that convey the notion of brain simulation or software AI, by all means suggest them. But I believe it is better to have this image than none. There is a common misconception that AI equals robots. Having only images of robots perpetuates this misconception. pgr94 (talk) 14:37, 9 December 2010 (UTC)

Cybernetics

I have been working on an article which combines clairvoyance with technology. The section of the Wikipedia article Artificial intelligence titled "Cybernetics and brain simulation", which I have been working from, has been extremely useful. I am presently researching and writing about designing a machine which could detect clairvoyant signals through systems and interpret them to some degree. Cybernetics definition: the science of communications and control in machines (e.g. computers) and living things (e.g. by the nervous system); cybernetic (adjective). Oxford English Dictionary. 75.203.201.53 (talk) 12:47, 19 December 2010 (UTC)

List defined references

I'm thinking about converting this article to list-defined references. The article has a lot of very large bundled references, and I think it would make the text a great deal easier to edit if these were all moved into the references section. Any thoughts or complaints?

I've converted the first section as an experiment. It's easy to revert if no one likes it. ---- CharlesGillingham (talk) 08:35, 22 January 2011 (UTC)

It looks like you have converted all of the references now. Is that correct ? Chaosdruid (talk) 19:45, 29 January 2011 (UTC)
Still a few sections to go. Do you like it? I think it makes the body of the article much easier to edit. ---- CharlesGillingham (talk) 07:43, 30 January 2011 (UTC)
Not really lol - unfortunately the article is for the reader, not the writers :¬)
The other point is that after doing the first section you then went ahead and did the rest without consensus. Now it would be extremely difficult to return to the previous method would it not ?
Book refs are pretty much unaffected; the problem is online refs - when editing a para the reference was there, with the html link, for one to immediately click on and so check the source and other details in the ref. You could also see if mistakes were made in the other parts as well. Now I am unsure how one would go about that. I realise that the popups still work fine, in fact as you have pointed out in other articles you have edited in this way, they are in some ways easier to use, but only really if it is book references. OK now for a trial run ... so for the first link, which is HTML, I now have to open two versions of the page, use one to edit and the other to look up the references. Not so good, and it will definitely cause problems if editing on a smart-phone or similar where only one page at a time can normally be opened.
On a cursory first attempt I think that the HTML links should have been left inline as they were.
I have seen your preference for this type of linking and am surprised that you have not come across this before, which makes me think maybe I have got something wrong about the way this is done ?
Anyway I am sure you will tell me if there is a better way lol :¬) Chaosdruid (talk) 12:45, 1 February 2011 (UTC)
It's still possible to roll it back; no other content has been added in the meantime. I have improved the bundling a bit here and there and noticed several things that needed a citation, so, if we rolled back, I would try to catch those improvements as we do so.
I'm not sure I understand your objections, exactly. First, I think you have me confused with someone else. I haven't added list defined references to very many articles. Maybe one or two, I think, in my entire career. They're not my first choice. Most of the articles where I am the major contributor (History of AI, AI winter, Chinese room, Dreyfus' critique of AI, etc.,) use inline shortened footnotes. So I'm not sure what you mean by "other articles you have edited in this way" and I also haven't the foggiest what you mean by "popups". I think you're referring to a conversation with someone else.
At any rate, your objection seems to be mostly about "online" refs. I think you should note that this article has only seven or eight online refs (out of four or five hundred individual citations); it consists almost entirely of citations to textbooks bundled by topic. These topical references tend to be quite large, so much so that, in many cases, you could not read an entire paragraph in edit mode without scrolling a few times. This makes it difficult for someone to stop by here and correct an error; it's just too hard to find the offending text. I wanted to make it more accessible to people who have just a few words to contribute. There are several of us who watch this article pretty closely (myself, Pgr94, Arthur Rubin, Thomas Kist) and we are all pretty familiar with the sources and the topics and should be able to spot nonsense when we see it.
I'll hold off till you reply. Anyone else have an opinion? Compare the source today with how it was a few weeks ago here. (Scroll down to the first paragraph of "History" section and you'll see how hard it is to actually read a paragraph in edit mode.) ---- CharlesGillingham (talk) 19:06, 1 February 2011 (UTC)
It may well be that I have you confused with someone else lol
It is true that it is easier to read and as they are mostly book refs then fair enough.
Why are there so many refs? Five refs for one short sentence seems pretty weird, and that seems to be a recurrent theme all the way through - I can see what you mean now about the confusion trying to read between them.
Popups are when you hover your mouse pointer over a link or ref etc. and a small window opens up to show you what they are linked/referring to and allows you to action them from it also.
To be honest I think you have done wonders with it lol - it is much better as it is now!
And yes, you were right, my only concern was that the online refs were problematic; as I said before, the book refs are fine. Perhaps the online ones could be left as they are, or would putting them back inline cause a problem ? Chaosdruid (talk) 01:24, 2 February 2011 (UTC)
Hello all. Here's a little feedback on list-defined references:
  • I agree the body of the article is neater. At the same time it appears a little harder to find the references to edit them. Perhaps it just takes some getting used to.
  • Some references don't work and there is no indication that anything is wrong. E.g. In section "Logic", the reference after "First order logic" (currently [107]) has a link to ACM 1998, but that wikilink doesn't work. If there was some indication that it was a broken link this wouldn't be a problem.
  • Commentary in the references: I'm not sure that this is related to the change in the way references are handled, but it is something that just caught my attention. I think anything said in the commentary should be covered by the same Wikipedia rules as the main body of the text, i.e. it should be verifiable, NPOV, from reliable sources etc. Here is a particular example:

Poole, Mackworth & Goebel 1998, p. 1 (who use the term "computational intelligence" as a synonym for artificial intelligence). Other textbooks that define AI this way include Nilsson (1998), and Russell & Norvig (2003) (who prefer the term "rational agent") and write "The whole-agent view is now widely accepted in the field" (Russell & Norvig 2003, p. 55) These textbooks are the most widely used in academic AI. See Textbooks at AI topics

(my emphasis) While probably true, these claims need to be verifiable. So does that mean we need references within the references?
But overall great work Charles! Apart from the above comments, I have no strong preference either way because whichever way we do it editing references is going to require a certain technical competence. I'm happy to go with the flow. pgr94 (talk) 10:36, 2 February 2011 (UTC)
I agree that the text is easier to edit and the references are a little harder to edit; list-defined references are the lesser of two evils. To find a particular reference in edit mode, you need to use "Find" on your browser and search for the name of the footnote. This is harder but is at least doable. In the old version, the text was just too difficult to edit, especially if you were trying to fix a particular line in one of the larger sections, such as "History" or "Predictions".
There are "references within the references" (they are parenthetical references). "Synonym for AI" is at the bottom of page 1 in Poole, given in that first citation. As for the last one, "most popular textbooks": AI Topics used to have a page where they listed the relative popularity of AI textbooks, and that page is now gone. I suppose I should update that footnote, and remove the bit about them being popular. R&N's website says that it's the most popular textbook, but R&N's website is an ad, so it's no good as a source. Maybe there is somewhere else we could find these numbers? The WayBack machine? It's nice to be able to say this article is based on the most popular textbooks. It gives the article more authority.
I should answer Chaosdruid's question "Why so many references?" First a little background: early versions of this article suffered from problems with WP:Relevance: unimportant sub-topics received a lot of attention and important sub-topics weren't even mentioned. This was not really anybody's fault. The fact is that academic AI is a very fragmented field; different institutions and researchers emphasize different parts of AI. On top of that, the subject is of interest to science fiction readers and futurists, who want to write about a completely different set of issues. And finally, this is a subject that generates more than its share of original research.
So we needed some kind of impartial source that would help determine which subjects had to be included. I looked at the four most popular textbooks (see Talk:Artificial intelligence/Textbook survey) and based the article on that. These textbooks emphasize problems and tools, so I also used two popular histories (McCorduck, Crevier) for more material on approaches, philosophy and history, as well as unsolved problems (which receive relatively little coverage in the textbooks). Together with a couple of judgement calls and later additions, this is what you see in the article.
The upshot is that, for all the essential topics, the article cites several sources from the primary list: (Norvig et al, Luger et al, Nilsson, Poole et al, McCorduck, Crevier). All these extra cites are not necessary for WP:Verifiability, but I already had them and they do help establish WP:Relevance.
I hope that answers your question. Thanks everybody. ---- CharlesGillingham (talk) 23:18, 2 February 2011 (UTC)
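For anyone reading this thread later, the list-defined reference pattern under discussion looks roughly like this (the reference name and citations below are invented for illustration, not copied from the article):

```wikitext
AI research uses intelligent search techniques.<ref name="Search" />

== Notes ==
{{Reflist|refs=
<ref name="Search">
Search algorithms in AI:
* {{Harvnb|Russell|Norvig|2003|pp=59–189}}
* {{Harvnb|Nilsson|1998|loc=chpt. 7–12}}
</ref>
}}
```

The body text carries only the short <ref name="..." /> tag, while the bundled citation lives in the reference list, which is why paragraphs become readable in edit mode.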

Reference broken for more than 6 months

I have been trying to see the full reference for "ACM1998", which is mentioned in several places in the article. (E.g. in section "Logic" after "First-order logic".) It is cited using {{Harvnb|ACM|1998|loc=~I.2.4|ref=ACM1998}}. The link goes nowhere, so it's not possible to see the full reference or know what book or article it refers to. Furthermore, when looking back through the history of this article I see that this reference has been broken for at least 6 months. This is not good. We want to be able to see when there are broken links/references so we can fix them. Can someone else check it's not just me? (Apologies for repeating myself, but this point got lost among my other comments in the previous section) pgr94 (talk) 10:12, 3 February 2011 (UTC)

Fixed. I wish that MediaWiki marked these as red links. Failing that, I wish CitationBot checked this. ---- CharlesGillingham (talk) 18:44, 3 February 2011 (UTC)
Thanks Charles. I absolutely agree with red links for broken section links / anchors. Does anyone know if that has been considered before? pgr94 (talk) 10:12, 4 February 2011 (UTC)
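For future reference, a sketch of the failure mode (citation details invented for illustration): a {{Harvnb}} short citation links to a CITEREF anchor, and if no full citation defines that anchor, clicking the footnote goes nowhere, with no visible error.

```wikitext
Short citation in a footnote — links to the anchor #CITEREFACM1998:
  {{Harvnb|ACM|1998|loc=~I.2.4}}

Full citation that must define that anchor, e.g. in the References section:
  {{cite journal |author=ACM |year=1998
   |title=ACM Computing Classification System |ref={{harvid|ACM|1998}}}}
```

If the full citation is missing, or its |ref= parameter produces a different anchor name, the short citation breaks silently, which is presumably why these went unnoticed for months.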

On the Hunt for Universal Intelligence

Has this already been addressed?

ScienceDaily (Jan. 28, 2011) — How do you use a scientific method to measure the intelligence of a human being, an animal, a machine or an extra-terrestrial? So far this has not been possible, but a team of Spanish and Australian researchers have taken a first step towards this by presenting the foundations to be used as a basis for this method in the journal Artificial Intelligence, and have also put forward a new intelligence test.

— quote

"On the Hunt for Universal Intelligence". Science News. ScienceDaily. January 28, 2011. Retrieved February 17, 2011. The story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by FECYT - Spanish Foundation for Science and Technology, via EurekAlert!, a service of AAAS.

Journal Reference: José Hernández-Orallo, David L. Dowe. Measuring universal intelligence: Towards an anytime intelligence test. Artificial Intelligence, 2010; 174 (18): 1508 DOI: 10.1016/j.artint.2010.09.006

--Pawyilee (talk) 10:14, 17 February 2011 (UTC)

Recent copyedits

A series of recent changes which bring this article in line with the layout more typically used in articles was rolled back here. The most significant of these changes were the reintroduction of large HTML comments to separate different content areas, and the reintroduction of a great deal of superfluous markup in the references section. The former is unnecessary: we do not habitually use comments in this manner in other articles, and especially in the article introduction it should be obvious that the separate paragraphs refer to separate concepts simply by looking at them. As for the references section, the markup used is simply highly idiosyncratic and makes the actual footnote text nearly unreadable (and that's on a desktop browser: I shudder to think how legible it is on a mobile device). In the interests of collaborative editing it would be better for this page to more closely follow the practices of other articles in these regards.

Furthermore, the above idiosyncrasies do not apply solely to the article. It is not clear why a todo list is being maintained as a page section rather than being moved to /to do, as is common practice. If there are no objections I'd like to move it there, as ideally comments on the main talk page should not be repeatedly re-edited.

Chris Cunningham (user:thumperward) - talk 10:00, 26 June 2011 (UTC)

Let's just take it point by point.
I like the Todo list using {{Todo}}; let's implement that.
Next, the references section. I take it you claim that your version is easier to edit than the current version? I've reprinted them in the following section (your version is first, the current version is second). I am not convinced. It is difficult to find an individual footnote in your version. (By the way, this reference section conforms to the standards supported by WP:CITE. These are topical bundled footnotes as per WP:CITEBUNDLE, using shortened footnotes for references used multiple times (see WP:CITESHORT). We've used list-defined references (see WP:LDR) because the sheer volume of references makes it too difficult to edit the text. See our discussion above.)
Finally, the hidden topic descriptions for paragraphs (such as <!-- AI at MIT -->). In my experience, it is very common for new contributions to be off the topic of a paragraph or section, especially in an article such as this. These hidden topic descriptions are a way to try to draw the contributor's attention to the topic of the paragraph and help them to find the right place to add a sentence or two. This is an innovation, and, like all innovations, it is currently idiosyncratic. I think that one could at least agree that these hidden comments cause no harm. ---- CharlesGillingham (talk) 11:06, 26 June 2011 (UTC)
Example of a recent edit
New version:
...
</ref><ref name="Coining of the term AI">
Although there is some controversy on this point (see {{Harvtxt|Crevier|1993|p=50}}), [[John McCarthy (computer scientist)|McCarthy]] states unequivocally "I came up with the term" in a c|net interview. {{Harv|Skillings|2006}}
</ref><ref name="McCarthy's definition of AI">
[[John McCarthy (computer scientist)|McCarthy]]'s definition of AI:
* {{Harvnb|McCarthy|2007}}
</ref><ref name="McCorduck's thesis">
This is a central idea of [[Pamela McCorduck]]'s ''Machines That Think''. She writes: "I like to think of artificial intelligence as the scientific apotheosis of a venerable cultural tradition." {{Harv|McCorduck|2004|p=34}} "Artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized." {{Harv|McCorduck|2004|p=xviii}} "Our history is full of attempts—nutty, eerie, comical, earnest, legendary and real—to make artificial intelligences, to reproduce what is the essential us—bypassing the ordinary means. Back and forth between myth and reality, our imaginations supplying what our workshops couldn't, we have engaged for a long time in this odd form of self-reproduction." {{Harv|McCorduck|2004|p=3}} She traces the desire back to its [[Hellenistic]] roots and calls it the urge to "forge the Gods." {{Harv|McCorduck|2004|pp=340–400}}
</ref><ref name="AI widely used">
AI applications widely used behind the scenes:
* {{Harvnb|Russell|Norvig|2003|p=28}}
* {{Harvnb|Kurzweil|2005|p=265}}
* {{Harvnb|NRC|1999|pp=216–222}}
</ref><ref name="Fragmentation of AI">
Pamela {{Harvtxt|McCorduck|2004|pp=424}} writes of "the rough shattering of AI in subfields—vision, natural language, decision theory, genetic algorithms, robotics ... and these with own sub-subfield—that would hardly have anything to say to each other."
</ref><ref name="Problems of AI">
This list of intelligent traits is based on the topics covered by the major AI textbooks, including:
* {{Harvnb|Russell|Norvig|2003}}
* {{Harvnb|Luger|Stubblefield|2004}}
* {{Harvnb|Poole|Mackworth|Goebel|1998}}
* {{Harvnb|Nilsson|1998}}
</ref><ref name="General intelligence">
General intelligence ([[strong AI]]) is discussed in popular introductions to AI:
* {{Harvnb|Kurzweil|1999}} and {{Harvnb|Kurzweil|2005}}
</ref><ref name="AI in myth">
AI in myth:
* {{Harvnb|McCorduck|2004|pp=4–5}}
* {{Harvnb|Russell|Norvig|2003|p=939}}
</ref><ref name="Cult images as artificial intelligence">
[[Cult image]]s as artificial intelligence:
* {{Harvtxt|Crevier|1993|p=1}} (statue of [[Amun]])
* {{Harvtxt|McCorduck|2004|pp=6–9}}
These were the first machines to be believed to have true intelligence and consciousness. [[Hermes Trismegistus]] expressed the common belief that with these statues, craftsman had reproduced "the true nature of the gods", their ''sensus'' and ''spiritus''. McCorduck makes the connection between sacred automatons and [[613 Commandments|Mosaic law]] (developed around the same time), which expressly forbids the worship of robots {{Harv|McCorduck|2004|pp=6–9}}
</ref><ref name="Humanoid automata">
Humanoid automata:<br>
[[King Mu of Zhou|Yan Shi]]:
* {{Harvnb|Needham|1986|p=53}}
[[Hero of Alexandria]]:
* {{Harvnb|McCorduck|2004|p=6}}
[[Al-Jazari]]:
* {{cite web|url=http://www.shef.ac.uk/marcoms/eview/articles58/robot.html |title=A Thirteenth Century Programmable Robot |publisher=Shef.ac.uk |date= |accessdate=2009-04-25}}
[[Wolfgang von Kempelen]]:
* {{Harvnb|McCorduck|2004|p=17}}
</ref><ref name="Artificial beings">
Artificial beings:<br>
[[Jābir ibn Hayyān]]'s [[Takwin]]:
* {{Cite journal |author=O'Connor, Kathleen Malone |title=The alchemical creation of life (takwin) and other concepts of Genesis in medieval Islam|publisher=University of Pennsylvania |year=1994 |url=http://repository.upenn.edu/dissertations/AAI9503804 |accessdate=2007-01-10 |ref=harv}}
[[Judah Loew]]'s [[Golem]]:
* {{Harvnb|McCorduck|2004|pp=15–16}}
* {{Harvnb|Buchanan|2005|p=50}}
[[Paracelsus]]' Homunculus:
* {{Harvnb|McCorduck|2004|pp=13–14}}
</ref><ref name="AI in early science fiction">
AI in early science fiction.
* {{Harvnb|McCorduck|2004|pp=17–25}}
</ref><ref name="Formal reasoning">
Formal reasoning:
* {{cite book | first = David | last = Berlinski | year = 2000 | title =The Advent of the Algorithm| publisher = Harcourt Books |author-link=David Berlinski | isbn=0-15-601391-6 | oclc = 46890682 }}
</ref><ref name="AI's immediate precursors">
...

Previous (and current) version:

...

<ref name="Coining of the term AI">
Although there is some controversy on this point (see {{Harvtxt|Crevier|1993|p=50}}), [[John McCarthy (computer scientist)|McCarthy]] states unequivocally "I came up with the term" in a c|net interview. {{Harv|Skillings|2006}}
</ref>

<ref name="McCarthy's definition of AI">
[[John McCarthy (computer scientist)|McCarthy]]'s definition of AI:
* {{Harvnb|McCarthy|2007}}
</ref>

<ref name="McCorduck's thesis">
This is a central idea of [[Pamela McCorduck]]'s ''Machines That Think''. She writes: "I like to think of artificial intelligence as the scientific apotheosis of a venerable cultural tradition." {{Harv|McCorduck|2004|p=34}} "Artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized." {{Harv|McCorduck|2004|p=xviii}} "Our history is full of attempts—nutty, eerie, comical, earnest, legendary and real—to make artificial intelligences, to reproduce what is the essential us—bypassing the ordinary means. Back and forth between myth and reality, our imaginations supplying what our workshops couldn't, we have engaged for a long time in this odd form of self-reproduction." {{Harv|McCorduck|2004|p=3}} She traces the desire back to its [[Hellenistic]] roots and calls it the urge to "forge the Gods." {{Harv|McCorduck|2004|pp=340–400}}
</ref>

<ref name="AI widely used">
AI applications widely used behind the scenes:
* {{Harvnb|Russell|Norvig|2003|p=28}}
* {{Harvnb|Kurzweil|2005|p=265}}
* {{Harvnb|NRC|1999|pp=216–222}}
</ref>

<ref name="Fragmentation of AI">
Pamela {{Harvtxt|McCorduck|2004|pp=424}} writes of "the rough shattering of AI in subfields—vision, natural language, decision theory, genetic algorithms, robotics ... and these with own sub-subfield—that would hardly have anything to say to each other."
</ref>

<ref name="Problems of AI">
This list of intelligent traits is based on the topics covered by the major AI textbooks, including:
* {{Harvnb|Russell|Norvig|2003}}
* {{Harvnb|Luger|Stubblefield|2004}}
* {{Harvnb|Poole|Mackworth|Goebel|1998}}
* {{Harvnb|Nilsson|1998}}
</ref>

<ref name="General intelligence">
General intelligence ([[strong AI]]) is discussed in popular introductions to AI:
* {{Harvnb|Kurzweil|1999}} and {{Harvnb|Kurzweil|2005}}
</ref>

<!-- History --------------------------------------------------------------------------------------------------->

<ref name="AI in myth">
AI in myth:
* {{Harvnb|McCorduck|2004|pp=4–5}}
* {{Harvnb|Russell|Norvig|2003|p=939}}
</ref>

<ref name="Cult images as artificial intelligence">
[[Cult image]]s as artificial intelligence:
* {{Harvtxt|Crevier|1993|p=1}} (statue of [[Amun]])
* {{Harvtxt|McCorduck|2004|pp=6–9}}
These were the first machines to be believed to have true intelligence and consciousness. [[Hermes Trismegistus]] expressed the common belief that with these statues, craftsman had reproduced "the true nature of the gods", their ''sensus'' and ''spiritus''. McCorduck makes the connection between sacred automatons and [[613 Commandments|Mosaic law]] (developed around the same time), which expressly forbids the worship of robots {{Harv|McCorduck|2004|pp=6–9}}
</ref>

<ref name="Humanoid automata">
Humanoid automata:<br>
[[King Mu of Zhou|Yan Shi]]:
* {{Harvnb|Needham|1986|p=53}}
[[Hero of Alexandria]]:
* {{Harvnb|McCorduck|2004|p=6}}
[[Al-Jazari]]:
* {{cite web|url=http://www.shef.ac.uk/marcoms/eview/articles58/robot.html |title=A Thirteenth Century Programmable Robot |publisher=Shef.ac.uk |date= |accessdate=2009-04-25}}
[[Wolfgang von Kempelen]]:
* {{Harvnb|McCorduck|2004|p=17}}
</ref>

<ref name="Artificial beings">
Artificial beings:<br>
[[Jābir ibn Hayyān]]'s [[Takwin]]:
* {{Cite journal |author=O'Connor, Kathleen Malone |title=The alchemical creation of life (takwin) and other concepts of Genesis in medieval Islam|publisher=University of Pennsylvania |year=1994 |url=http://repository.upenn.edu/dissertations/AAI9503804 |accessdate=2007-01-10 |ref=harv}}
[[Judah Loew]]'s [[Golem]]:
* {{Harvnb|McCorduck|2004|pp=15–16}}
* {{Harvnb|Buchanan|2005|p=50}}
[[Paracelsus]]' Homunculus:
* {{Harvnb|McCorduck|2004|pp=13–14}}
</ref>

<ref name="AI in early science fiction">
AI in early science fiction.
* {{Harvnb|McCorduck|2004|pp=17–25}}
</ref>

<ref name="Formal reasoning">
Formal reasoning:
* {{cite book | first = David | last = Berlinski | year = 2000 | title =The Advent of the Algorithm| publisher = Harcourt Books |author-link=David Berlinski | isbn=0-15-601391-6 | oclc = 46890682 }}
</ref>

...
From the top:
  1. I'll move the todo later. Thanks.
  2. The problem is the sheer volume of footnotes, together with the provision of an average of three references for almost every assertion made in the article. Fully 50% of the entire page size consists of references, and using the current citation style nearly 75% of the article by length is footnotes. The footnotes also intersperse citations with commentary (such as the current footnote 7) or quotations (such as the current footnote 80), and many of the footnotes are not citations at all but rather general pointers to useful reading (such as the current footnote 12). What's more, the vast majority of the footnotes refer to the five textbooks. It is unclear that there is a benefit from such deep referencing of the same sources over and over again, especially when an additional "Other sources" section with no footnotes is provided as well. But quite aside from this, the use of nested {{refbegin}} templates to make the cited text tiny should certainly be stopped even if the current footnotes are kept.
  3. The use of hidden comments simply exacerbates the problem with the relative code-to-content ratio in the article. Were the (IMO) over-the-top use of footnotes curtailed, there would be less need for comments to navigate them by.
Chris Cunningham (user:thumperward) - talk 11:55, 26 June 2011 (UTC)

Content of the Artificial Intelligence article

Hi all,

Excuse me, I'm French and my English is not good. In my opinion, this article teaches absolutely nothing about what artificial intelligence is, and it is misleading. It is too long, it's a catch-all, and everything is mixed together. The lie is there: this article speaks constantly of the many successes of AI, when in fact its results have been a failure until today. If AI had been a success "In the 1990s and early 21st century", all IT today would be AI. This is not the case! AI is a broken promise, an ambition, and its few successes are ignored (see Jean-Philippe de Lespinay). "In the early 1980s, AI research was revived by the commercial success of expert systems"! "By 1985 the market for AI had reached over a billion dollars"! The impressive "Japan's fifth generation computer project" (in fact a bitter defeat for the Japanese, for example against Turbo Prolog/Borland ... on a simple PC!), "In the 1990s and early 21st century, AI achieved its greatest successes"!!! Horror! This is journalism, not economic fact.

The history of AI did not go like that. You might interview Jean-Philippe de Lespinay, who is still alive and lived this history from 1982 until today. His job has been AI development non-stop since 1982. You have here two samples of "good" articles about AI and expert systems (unfortunately in French): Larousse encyclopedia: Definition of Artificial Intelligence and Larousse encyclopedia: Definition of the expert system.

Best regards and sorry for the criticism

Pat grenier (talk) 17:48, 20 August 2011 (UTC)

I think you missed the sentences where the article specifically calls early AI a failure: "They had failed to recognize the difficulties of the problems they faced". This is harsh enough, I think. The article also mentions that AI failed a second time in the 1980s and early 90s: it says "AI once again fell into disrepute". This is referring directly to expert systems and the Fifth Generation Project. Again, I think this is about as harsh as an encyclopedia needs to be. See also the History of AI article, which has more detail about these failures. (The "history" section in this article is really a summary of that article.)
AI's success in recent decades is in fields like machine learning, pattern recognition and robotics, where the "statistical" approach has provided the technology industry with a lot of successful applications. This is well documented in the sources. Again, see history of AI for more detail. Note that the "statistical approach" has relatively modest goals; they provide specific solutions to specific problems. No one is trying to build a machine that can do "all IT today". That's not what AI is any more. ---- CharlesGillingham (talk) 07:13, 26 August 2011 (UTC)
Here are a few more of the Larousse article's claims:
  • the "statistical" approach that solves small applications is not AI and so a lot of software in industry claims to be AI when it is not.
  • Deep Blue was not AI; there was no reasoning in it at all. ('Pas le moindre raisonnement là-dedans.') While a human simulates plies, compares them, develops a strategy, reasons, and recognises at a glance the value or danger of a position, computer chess programs just calculate.
  • AI is the automation of human reasoning. (’Intelligence Artificielle, c’est d’abord l’automatisation du raisonnement humain.')
For brevity, I've lost some of the nuance, but these claims stuck out the most. pgr94 (talk) 11:04, 26 August 2011 (UTC)
Another interesting paragraph (thanks to google translate, plus a few minor fixes, my emphasis):

It was in France (Université Paris VI) that the expert system became a powerful tool, demonstrating that AI was not a chimera, with the appearance of an engine using the logic called "first order." This strange name describes our daily logic, which allows us to explain things to others or to infer new facts. It just needed programming, which was not difficult. Only French researchers did this (the Pandora expert system) ... and then forgot it, not realizing the scope of their discovery. Yet it was the first to show the "mental" aspects claimed by Minsky: real thinking skills, the ability to solve problems, to decide, to learn and to communicate ("conversational") based on reasoning, the ability to explain what it "thinks", and even the ability to evaluate knowledge by detecting contradictions (Minsky's "critical thinking"). Its most famous example was the expert system generator "Intelligence Service", a commercial product derived from Pandora and marketed by the companies GSI-TECS and Arcane, who sold hundreds of copies. The first-order expert system is the first fully operational tool and technique to come from AI. At the same time, it is the first to have demonstrated the feasibility of what some call "strong" AI.

pgr94 (talk) 11:23, 26 August 2011 (UTC)
pgr94, pardon my English, please; I too am French. I don't understand whether you suggest keeping the whole article or whether you accept its modification. The same article cannot make an apology for AI by citing its many successes if AI is a failure, even while acknowledging its failures. The reader should understand what it is and whether it works. With this article, it is impossible. What the AI article should show with certainty is that AI has not been successful, until now. It has had no operational success, at least in the United States (Mycin included).
I quote you: "AI's success in recent decades is in fields like machine learning, pattern recognition and robotics, where the "statistical" approach has provided the technology industry with a lot of successful applications". But the techniques used in those domains are classic techniques! They are programmed by developers in classic languages (C++, Pascal, etc.). Where is AI? In the newspapers only. Wikipedia repeats what the press says without any checking. Newspapers need sensationalism and "researchers" need publicity. And Wikipedia repeats both. Robotics is not AI. You can put AI inside a robot, but nobody has done it (to my knowledge). You can do pattern recognition and machine learning with classic techniques. How do we know whether it is AI or not?
The problem with AI is that everyone claims to do it. The article should begin by saying what "intelligence" is and what "artificial" means. It must begin with McCarthy's historical definition (a Google translation of a French citation itself translated from an English citation!): "The construction of computer programs that engage in tasks that are, for now, more satisfactorily accomplished by human beings because they require high-level mental processes such as perceptual learning, organization of memory and critical thinking".
The article should say that there are two AIs: the "true" AI, which is a technological breakthrough, and the "false" AI, which is an ambition (= publicity) put into programs with classic techniques.
The true AI is based on real human intelligence: it must at least reason, as all animals can do. Jean-Philippe de Lespinay (talk) 10:05, 29 August 2011 (UTC)

— Preceding unsigned comment added by Jean-Philippe de Lespinay (talkcontribs) 09:01, 29 August 2011 (UTC)

Perhaps the article should make it more clear that people disagree about what AI is. These questions are addressed, very briefly, at the top of the section on "approaches". The definition of AI used in this article comes from the most widely-used textbook, Russell & Norvig. I don't think there is a more reliable source. (Note that, here at Wikipedia, we don't attempt to write the "truth"; we only report what WP:Reliable sources claim, regardless of whether we personally believe it is "true" or not.) ---- CharlesGillingham (talk) 21:52, 29 August 2011 (UTC)
I should also note that all of the points that are made above are covered by either this article or History of AI, with one exception: neither article claims that a particular approach was the only "true" AI. ---- CharlesGillingham (talk) 22:04, 29 August 2011 (UTC)
I added a sentence criticizing the statistical school of AI along the lines you suggest. Search for a heading marked Statistical and read the last sentence. Do you think you could find a citation for this? ---- CharlesGillingham (talk) 22:10, 29 August 2011 (UTC)


Charles, glad to see you are really interested in debating this article. Yes, "the article should make it more clear that people disagree about what AI is", I agree. And from the very beginning of the article. You say Russell & Norvig is the most widely used textbook about AI, but for whom? For neophytes or for AI specialists? Surely not for specialists. In my opinion, one of the most reliable sources is me (I beg your pardon...): I am an expert in AI, I managed an AI company for 16 years (1986-2002), I have produced AI for 25 years, I invented in that domain, I gave reality to the concepts of Minsky and others, and I sold my AI to big companies and administrations. I have published various articles about AI, including the AI and expert system definitions in an encyclopedia (Larousse). My suggestion is to redo the article so that it is short and clear. What do you think?
"Critiques argue that these techniques are too focussed on particular problems and have failed to address the long term goal of general intelligence." You want a citation. But you could say that about every part of AI: fuzzy logic, computational intelligence (a joke!), neural networks, machine learning, natural language, intelligent agents, intelligent chatterbots, pattern recognition, Bayesian networks and, unfortunately, too often expert systems. You surely understand that using the term "artificial intelligence" about a piece of software sells it! It's a business hoax! And who is in charge of verifying it? Nobody... That is why Wikipedia must clarify what AI is in reality (technology) and in the lay world. Jean-Philippe de Lespinay (talk) 14:34, 30 August 2011 (UTC)
I agree with you, in general. I worked in AI from 1988 to 1994 myself; we're from a similar generation and I share your skepticism that current research is on the right track. I think you would agree with me that, at some point, the field must revisit logic and knowledge because these are poorly handled in the current statistical framework. Although we may disagree on the precise details here, I think our general outlook is the same.
However, as I said above, my opinion doesn't really matter. This article must reflect the current consensus, regardless of what you or I think. For better or worse, introductory college courses provide a simple, measurable and objective way to determine what the current consensus is. The plurality of introductory college courses use Russell & Norvig.
Thus the article reports that, according to the current consensus, AI is succeeding; we are in an "AI summer". AI research (and statistical AI in particular) has developed a huge number of successful applications. These handle tasks that would otherwise require human intelligence, and therefore they satisfy McCarthy's definition of AI that you quoted above. They do not fit Minsky's definition of AI, or yours, or mine. However, the article must report that these applications exist and that they are successful. We must also report that the researchers who have created these applications call their field "AI" (for the most part), even when their techniques are drawn entirely from other fields. ---- CharlesGillingham (talk) 22:33, 30 August 2011 (UTC)
OK, we are both old men with lived experience of AI. We are the memory of the 20th century. Then it is OUR responsibility to inform our contemporaries. We agree on what AI is and what it is not. We should write the same article! But would there be a particular vision of the encyclopedia that requires hiding the truth in order to be "consensual"? I can't believe it. Wikipedia should reflect our debate. We must report in Wikipedia that AI has achieved all the goals defined by Minsky 40 years ago: perceptual learning, organization of memory and critical reasoning. We must describe the fantastic concept of the expert system (a tool designed to reason like humans), and we must report on the founding experience of Mycin and its shortcomings at the time ("strong" AI?). Do you agree? And OK, we should report that there is another vision of AI in the business world, more promotional and much less demanding (weak AI). We must also put AI in the modern context where it is really called for and needed: video games. The requirement of the market is so strong, the market volume is growing so much, and pragmatism is so important, that it is probably there that strong AI will finally appear before the public.
Charles, I don't want to continue this (interesting) discussion and take up your attention just for fun. If you think the article can't be changed, let me know and I'll stop there. Jean-Philippe de Lespinay (talk) 09:54, 31 August 2011 (UTC)
I think you'll find it very hard to change Wikipedia. It should be much easier to publish these opinions in respected AI journals. Once published, these perspectives can appear in Wikipedia. That seems like the path of least resistance to me. Also, please see the Expert system article, where expert systems are already covered. Perhaps you could help improve this article? pgr94 (talk) 10:45, 31 August 2011 (UTC)


Thank you for that suggestion, Pgr94. If you want a "respected AI journal", I almost have it, but in French:
1) In a Science et Vie article from May 1991, I explained that AI does not produce anything useful, that researchers do not leave their offices and do research in AI completely disconnected from the realities and needs of businesses, with useless logics (epistemic, modal, fuzzy, "n"th-order, temporal, etc.). I noted that they did not even see the benefit of the invention of zeroth-order logic, possessed by every human, nor the Pandora prototype, a French university expert-system project (1987-88) that worked by reasoning with zeroth-order logic. "Despite all its troubles, AI has managed to give birth, while dying, to zeroth-order expert systems, the only operational AI technique (at the time and today)."
2) In Automates Intelligents, a scientific website, I presented "reasoning AI" in 2008, repeating that AI is in full decay and has produced one interesting thing: the expert system.
It is surely easy to find English articles in respected journals saying that the business world is disappointed by AI's results and that AI is an illusion. With that kind of article, it would be possible to rebuild the Wikipedia article about AI and say that there are two AIs and no more: the strong one and the weak one.
About your suggestion to look at the expert system article: which person at Wikipedia is in charge of that article, please? Jean-Philippe de Lespinay (talk) 16:06, 31 August 2011 (UTC)
It seems Science et Vie is a popular science magazine, not a peer-reviewed scientific journal. For more information about reliable sources, please see WP:RS, and in this case WP:SCHOLARSHIP. Regarding who is in charge of an article: Wikipedia is a collaborative encyclopaedia that anyone can edit, provided they observe the fundamental principles: WP:FIVEPILLARS. Hope that helps. pgr94 (talk) 08:51, 1 September 2011 (UTC)
Thank you pgr94.
"It seems Science et Vie is a popular science magazine, not a peer-reviewed scientific journal". Yes, that's right. In France you can't publish in an "official" scientific journal if you are a private researcher! Publications are reserved for academics, who are public servants, representatives of the French state. Or for companies that have done their R&D thanks to university research. You must understand that when you think about French research. The proof: hundreds of articles were published about my inventions and their installations in companies, and ... zero articles in the official journals. You can't imagine the state repression I suffered from the day I was published in Science et Vie, one of the most widely read scientific magazines in Europe. It would take a whole book to tell it. Jean-Philippe de Lespinay (talk) 10:36, 1 September 2011 (UTC)