Talk:Sentient computer

Rewrote article

Completely rewrote the article to add information on real-world classification and development efforts towards sentient computers, and to reduce the article's focus on the subject as purely the domain of science fiction. Is it OK to remove the stub flag now? PocklingtonDan 15:06, 2 November 2006 (UTC)

Turing Test

"As of 2006, no computer has actually passed the Turing test as such, but it is expected to occur within the next 5 years. Trying to pass the Turing test in its full generality is not, as of 2005, an active focus of much mainstream academic or commercial effort, but some amateurs do work on the problem, mainly in the form of conversational programs such as ELIZA."

I sincerely doubt that the Turing test will be passed within 5 years (what is the reference for this?). The Turing test is not about fooling someone into believing the computer is human (that has happened before in chatboxes); it is about fooling someone who can and will ask very specific questions into believing it is human. The human questioner is allowed to ask about any subject he/she wants and can ask questions that depend on answers given earlier.

For example, I personally tried several online chatterbots, including ELIZA, and not a single one could answer even a simple question like "what did you have for breakfast this morning?" without giving an evasive answer. Even if one does give a plausible answer like "I ate a bacon sandwich", the human questioner can continue with something like "at which supermarket did you buy the ingredients for the sandwich?". If a human keeps questioning like this, he will soon get an evasive response that is very unlikely to come from a real person; someone who bought the sandwich not at a supermarket but from the sandwich lady across the street would simply say so. Of course a chatterbot can be programmed to know a lot about one specific subject, but remember that the questioner can ask almost anything. Just try specific questions on any of the "advanced" state-of-the-art online chatterbots and you will see that the 5-year limit is probably too early (unless I'm mistaken and there are currently some super-secret, really advanced chatterbots that will pop up out of the blue in a few years).
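
The evasiveness follows from how ELIZA-style programs work: they match the latest input against hand-written patterns and return a canned reply, keeping no memory of earlier turns and no facts about the world. A minimal Python sketch of that style (the patterns and replies here are invented for illustration, not taken from any real chatterbot):

    import re

    # ELIZA-style rules: match the latest input against a pattern, return a canned reply.
    RULES = [
        (re.compile(r"breakfast", re.I), "Why do you ask about breakfast?"),
        (re.compile(r"supermarket", re.I), "Let's not talk about shopping."),
        (re.compile(r"\byou\b", re.I), "We were discussing you, not me."),
    ]
    FALLBACK = "Please, go on."

    def reply(utterance):
        # No state is kept between calls, so answers to follow-up questions
        # cannot stay consistent with anything said earlier.
        for pattern, canned in RULES:
            if pattern.search(utterance):
                return canned
        return FALLBACK

    print(reply("What did you have for breakfast this morning?"))     # evasive, no actual answer
    print(reply("At which supermarket did you buy the ingredients?")) # no recall of the previous turn

Because each reply depends only on the current sentence, the chained, context-dependent questioning described above will always expose a program built this way.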

145.97.200.254 02:21, 30 December 2006 (UTC) Strider80

I'll add a cite request for that fact - PocklingtonDan 08:17, 30 December 2006 (UTC)


Turing Test

Personally, I agree that we are nowhere near passing the Turing test, let alone within 5 years. More importantly, others do not think so either; see AI Winter and History of artificial intelligence. Even Hans Moravec says 2030. So I'm removing the 5-year phrase.

BTW - love the rest of the article, it just needs more citations.

Thomas Kist 23:34, 11 October 2007 (UTC)

A couple of fixes

I think this will be a nice article, especially from the point of view of science fiction, and I have some suggestions on how to make it better.

First, a couple of minor factual errors:

  • "[Strong AI is just the ability to] "conceive and plan actions", which falls short of sentience." Strong AI gives machines the ability to be as intelligent as human beings, and while that includes planning, there is far more to intelligence than planning (see Artificial intelligence#Problems of AI).
  • "Clearly there is some key element necessary to achieve sentience that is lacking in modern computers." This is not clear at all.
  • "This is not just outside of our current abilities but outside of our current imagination - humans literally do not know where to start with such an effort as of 2006." This has been "imagined" by AI researchers since the 50s, and all of them have argued that they knew "where to start."

We also need to know who thinks all these things. ---- CharlesGillingham (talk) 01:35, 6 December 2007 (UTC)

Sentience vs. consciousness vs. intelligence (vs. humanity vs. soul)

This article, as it stands, may need to be merged into artificial consciousness or strong AI. To avoid this, it would have to define sentience and explain how sentience ("the ability to feel") is different from consciousness ("having thoughts and experiences") or intelligence ("the ability to solve problems, know, plan, learn, guess, perceive, etc."). Unfortunately, this is a difficult philosophical problem, and I'm not sure that references exist that could unambiguously sort it out. Even if you found one that did make the distinction, you would also find many others that don't, and a Wikipedia article has to include both points of view.

There are a few other options that might be a great deal easier, and still keep what I think is the general direction of this article:

  1. Make this article about a type of character. This would allow you to use science fiction for your references, and I think this is what most of the links into this article are looking for. The lead could read:

    A sentient computer is a type of character in science fiction and other literature that has many or all of the attributes of a human being and yet is still clearly a machine. Such a character is also described as "self-aware", "sapient" or "conscious."

  2. Make this article about the ethical issue of "humanity" or "soul". The word "sentience" is used this way in arguments about animal rights, for example. If an animal can "feel", then we have certain responsibilities towards it. If a machine is intelligent (and conscious), does it also have the same rights as we do? Does it threaten existing religions? Or are they obligated to believe that machines can have souls as well? The lead could read something like:

    A sentient computer is a machine that is sufficiently similar to a human being that it deserves the same rights and bears the same responsibilities as any human being.

In either case, the article needs to lose the sections that discuss the technical difficulties of producing sentience in machines. I think this is probably inevitable, since I find it hard to imagine a technical approach that could produce artificial consciousness and complete artificial intelligence (i.e. strong AI) and yet fail to produce a sentient computer. If there is a difference between the technical approaches to these things, our understanding of it is so far in the future that there is nothing Wikipedia can say about it now: the sources simply don't exist. ---- CharlesGillingham (talk) 01:35, 6 December 2007 (UTC)

Merge

With no response to my comments above, I think I will redirect this to strong AI and add a section to that article, "strong AI in fiction". Any objections? ---- CharlesGillingham (talk) 20:14, 15 December 2007 (UTC)

I am carrying out the redirect. If you would like to revive the article, please consider my comments above. ---- CharlesGillingham (talk) 09:38, 3 January 2008 (UTC)