Strong AI

I agree with you that the strong AI article is very weak. We disagree, however, on exactly what is wrong with it.

I think it needs solid research. Ideally, it should provide an overview of several points of view, coming from prominent researchers, futurists, philosophers and science fiction writers. All of these viewpoints need to be covered. Most importantly, each of these viewpoints should be tied, via a page-numbered citation, to a reliable source.

I have tried to correct some of the obvious flaws in the article (for example, I added the section on mainstream AI research and the section on the origin of the term), but beyond these sections, in my humble opinion, the article is fairly scattershot.

As an aside, I also agree with you (very strongly) that "consciousness", "sentience", "sapience" and "self-awareness" are not aspects of intelligence. I think you can build intelligent machines that don't have any of these qualities. I think we may be able to build interesting machines with these qualities (as long as they are well defined), but I agree with you that this probably will not be necessary for the task of building intelligent machines.

However, not everyone agrees with you and me. There are serious researchers who think artificial consciousness is necessary. You and I may think that this avenue of research is fruitless, but the research goes on in any case. Sentience is a moral concept: a machine which can "feel" can also theoretically suffer, and therefore has certain legal rights, or at least so the argument goes. Some sources (especially in fiction) consider this an important issue, and it may become an important issue in the future. The other two slippery terms, "sapience" and "self-awareness", come mostly from fiction, where they are used as synonyms for "strong AI with sentience". You will note that most of the links into the article come from fiction, so a reader directed here may actually be looking for a definition of a type of character in fiction. The article has to cover this viewpoint as well.

The point is that the article can't merely reflect what you and I consider to be "the truth". It has to reflect "what prominent people have written". This is what Wikipedia is.

I hate reverting your edits, partly because I actually agree with them and partly because I can see they are well-intentioned. I'm also sorry that I don't have the time to fix this article and make it into a complete overview of the topic. --- CharlesGillingham (talk) 18:57, 19 January 2009 (UTC)

Reliable sources

This is the most important thing about the strong AI article: it needs WP:reliable sources. We can't write our own opinions into the article; Wikipedia doesn't allow that. We're not scientists. We're reporters, journalists. We quote people. That's all a Wikipedia editor is.

Wikipedia has a policy against original research. Please read WP:OR.

Since Gerald Edelman and Igor Aleksander (to name two people) believe consciousness is essential to artificial intelligence, we must report that "some people think that consciousness is essential to artificial intelligence, such as Gerald Edelman and Igor Aleksander." We could also report that "Jeff Hawkins disagrees, and argues that consciousness is unnecessary for intelligence." But without mentioning Jeff Hawkins and a citation to his work, we can't argue that consciousness is unnecessary for intelligence. ---- CharlesGillingham (talk) 11:02, 22 January 2009 (UTC)