
Requirements of strong AI

The first sentence of 'Requirements of strong AI': The Turing test is not a definition of intelligence; it is a test of intelligence. --62.16.187.35 (talk) 19:45, 12 June 2008 (UTC)

Hello, I have a problem with these requirements. For example, 'be able to move and manipulate objects (robotics)'. So you can't have Strong AI if it is not a robot? Strong AI cannot exist inside a computer, by definition? A paralyzed human is still considered intelligent. Or maybe this article means that IF Strong AI has a physical form, it then has the intelligence to use it? Or something...

It also has to see? It can't be intelligent without vision? Oh, all blind people in the world fail this test. I'm fine with it having to perceive something, but not "especially see". Right under these requirements it reads "Together these skills provide a working definition of what intelligence is, ....".

Maybe I'm just not understanding Strong AI correctly. 88.114.252.161 (talk) 19:33, 2 October 2008 (UTC)

  Fixed. See a few posts below, where this objection was brought up a second time. ---- CharlesGillingham (talk) 08:40, 30 December 2008 (UTC)

Computer implementation of brain

A few points worth adding

(1) The parallel vs. speed issue is a red herring, because computers can be designed to operate in parallel, in the same way as the brain (see the sketch below this comment). Given a sufficiently large number of transistors, one could create a simulation of a brain which simulated all neurons in parallel and operated incredibly fast compared to a human brain. (The number of transistors would be vast, however.)

(2) If one accepts that it is possible to simulate the operation of a single human cell to a high degree of accuracy, then one is forced to accept that it is possible in principle to create a strong AI via simulation of a human from conception, at a cellular level.

(3) Though the computing power required for such a simulation would be huge, a highly detailed model may not be required, and it would need to be done only once, since the resulting artificial person could be copied/probed/optimized as required. This makes the possibility somewhat more feasible. It might take a million-processor supercomputer 20 years to generate a simulated brain, but it might then be possible to reduce the complexity required by a factor of a million.

(4) Using a distributed.net/SETI@home-style architecture, a million-CPU grid supercomputer isn't as unlikely as it might seem.

Pog 15:00, 25 July 2007 (UTC)
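To make point (1) concrete, here is a minimal sketch in Python/NumPy of stepping every neuron in a model at once. The leaky-integrate-and-fire-style update rule and all of the numbers are illustrative assumptions, not taken from any actual brain-simulation project:

    import numpy as np

    # Toy network: N neurons with sparse random weights. All values here
    # are placeholders; a real brain model would be vastly larger and richer.
    N = 1_000
    rng = np.random.default_rng(0)
    weights = rng.normal(0, 0.1, size=(N, N)) * (rng.random((N, N)) < 0.01)
    THRESHOLD, LEAK = 1.0, 0.9

    def step(potential, spikes):
        # One matrix-vector product updates all N neurons together; on
        # parallel hardware (many cores, a GPU) the updates genuinely run
        # concurrently. Parallelism is an implementation choice, not
        # something unique to biology.
        potential = LEAK * potential + weights @ spikes
        fired = potential >= THRESHOLD
        potential = np.where(fired, 0.0, potential)  # reset fired neurons
        return potential, fired.astype(float)

    potential = np.zeros(N)
    spikes = (rng.random(N) < 0.05).astype(float)  # random initial activity
    for _ in range(100):
        potential, spikes = step(potential, spikes)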

Including "Artificial Intelligence System"

I recommend the project Artificial Intelligence System be mentioned under 2.3 (Simulated Human Brain Model). Blue Brain is mentioned instead, but I believe this project is more directed toward a Strong AI outcome and better suited for this article. CarsynG1979 (talk) 21:45, 10 March 2008 (UTC)

I've been thinking about collecting all this material into the article artificial brain (see my notes on Talk:Artificial brain), and shortening the section in this article to two or three paragraphs that just mention the names of the projects underway and focus on making it very clear what this means to strong AI specifically. ---- CharlesGillingham (talk) 18:39, 11 March 2008 (UTC)
  Fixed. Link included where the text referred to the main ASI result but referenced a Blue Brain article for some reason. Bitstrat (talk) 11:12, 29 March 2010 (UTC)

Incorrect Reference:

Haikonen (2004) does not link to a valid reference. Is this the same as the Haikonen (2003) above? HatlessAtless (talk) 15:41, 30 April 2008 (UTC)

  Fixed. The ref is now the same as on the artificial consciousness page. Bitstrat (talk) 17:09, 29 March 2010 (UTC)

Archive

I have archived all the comments that refer to parts of the article that no longer exist. ---- CharlesGillingham (talk) 02:01, 17 May 2008 (UTC)

Intelligence is not, and does not require, "seeing".

Under the requirements of Strong AI, there is listed:

"perceive, and especially see;",

but many intelligent people are visually impaired or blind. This requirement must be wrong, no?

Natushabi (talk) 08:18, 2 June 2008 (UTC)

It is shocking when you read it, but I kind of agree. On one hand perception is required; on the other hand it depends on how widely or narrowly you take the meaning of "see". If you consider that a bat is able to see, then the phrase is more correct than if you don't. In any case, artificial vision is a hard topic in AI, and the area of the brain devoted to vision is a huge one; vision has high requirements for machines and humans, and it is also quite useful because of the information it provides.
In any case, I completely agree that this phrase should be reworded, as it is unclear and even offensive the way it is now. It's just that I can't come up with a way of wording it now. trylks (talk) 06:35, 28 December 2008 (UTC)
  Fixed, I think. I removed the claim that this is a "working definition of intelligence". Sources are desperately needed for this article. ---- CharlesGillingham (talk) 08:45, 30 December 2008 (UTC)

Threshold to Sentience

You know, there's a complete school of thought that does not agree with this term ("AI") and insists on separation of the term AI (which they feel describes any computerized machine, including your very own pocket calculator) from the term Digital Sentience (which is the self-aware machine).

The reasoning is that while intelligence not only exists but surpassed human intelligence long ago (for example, your pocket calculator can do complex calculations faster and far more efficiently than you, probably), it is a mistake to look at intelligence as the threshold to self-awareness. Sentience, the reasoning goes, does not come from intelligence but from having feelings. It is widely agreed that simple creatures with limited intelligence (for example, cows) do have an awareness of themselves, through their feelings, their wants and needs. Those may be the result of biologically encoded signals in the brain, yet anyone who's seen a cow showing affection to one cow and hostility to another would be able to relate.

Ergo, the reasoning continues, the self-aware machine would evolve from feelings rather than intelligence, making the cruel, heartless AIs of movies like The Terminator and The Matrix nothing more than a modern form of the mythological Golem. The self-aware machine would feel compassion, love, hate, fear and just about any other feeling we feel. Advancements in cybernetic sensors could equip this Digital Sentience with a body capable of the full rainbow of sensory experiences we can experience, leading to Digi units that would marvel at the taste of cheese, despise the smoke of cigars and, when the time is right, even have orgasms....

Stacking "Digital Sentience" into "Strong AI" is a mistake IMO

--Moonshadow Rogue (talk) 14:54, 27 June 2008 (UTC)

I think these are interesting issues, and they are, IMHO, poorly covered in Wikipedia. What's needed are sources, which I don't have. It is clear to me that the usages of "sentience", "Strong AI" and "self-awareness" overlap, and so it makes sense to attempt to disentangle them here. This article, so far, makes a weak attempt to disentangle "intelligence" (as AI researchers understand it) from these other concepts. What's needed is a better treatment of "sentience", and this requires sources which take the concept seriously. ---- CharlesGillingham (talk) 19:08, 5 July 2008 (UTC)

Image copyright problem with Image:RIKEN MDGRAPE-3.jpg

The image Image:RIKEN MDGRAPE-3.jpg is used in this article under a claim of fair use, but it does not have an adequate explanation for why it meets the requirements for such images when used here. In particular, for each page the image is used on, it must have an explanation linking to that page which explains why it needs to be used on that page. Please check

  • That there is a non-free use rationale on the image's description page for the use in this article.
  • That this article is linked to from the image description page.

This is an automated notice by FairuseBot. For assistance on the image use policy, see Wikipedia:Media copyright questions. --02:52, 2 October 2008 (UTC)

  Fixed. Image removed (not needed). Bitstrat (talk) 11:14, 29 March 2010 (UTC)

"Important topic"

I'm concerned about the line "an important topic for anyone interested in the future." I'm not sure whether the problem is the word "anyone," or "future." Either way, this sentence needs to be made more specific. At this time, I do not personally have a suggestion for a change. --65.183.151.105 (talk) 03:58, 13 October 2008 (UTC)

  Fixed --- CharlesGillingham (talk) 08:47, 30 December 2008 (UTC)

What to do when a page is so badly written that the first impulse is to delete it all and start again

This page is worse than useless.

Must AGI be modelled on human intelligence and therefore contain all the weaknesses and flaws of human intelligence?

Most AIs fall into two categories - artificial neural networks and expert systems - and an AGI should be able to implement one or both of these alternatives for the purpose of discussing or solving a system. It should be responsive to a human operator but cannot use natural language for communication, as natural language is simply too ambiguous and full of the same weaknesses and flaws as human intelligence.

No, an AGI does not have to be embodied in a robot; it doesn't have to be able to see or hear. All it has to do is spectacularly fail the Turing test by being able to do math and reason logically.

May I start a separate AGI page?

Paul shallard (talk) 14:14, 25 November 2008 (UTC)

I agree with you that this article is very weak. We disagree about what exactly is wrong with it. I believe it needs more research.
I think this article should reflect the views of prominent researchers, philosophers, futurists and science fiction writers. All of these viewpoints need to be represented in a systematic way that gets to the heart of what these people think. And, most important, each paragraph needs to be tied to a reliable source via a page-numbered citation.
I think mainstream AI research is pretty well covered in its section. The sections on some of the research approaches (such as artificial brain and consciousness) seem too long to me. Other sections need to be added that get into the ideas of futurists and how the term is used in fiction (note that most of the links into this article come from fiction).
This is a controversial topic and there are a lot of conflicting opinions. This article should balance several points of view. I think this is a better approach than splitting the article out into a bunch of (even weaker) articles, each with its own point of view.
I wish I had time to fix this article, but unfortunately I don't.---- CharlesGillingham (talk) 19:13, 19 January 2009 (UTC)
Took a couple of paragraphs over to artificial brain. Bitstrat (talk) 17:06, 29 March 2010 (UTC)
Blow it up, sometimes it's necessary. Cliff (talk) 17:33, 7 April 2011 (UTC)


1) The acronym "cps" is not defined.

2) It's ridiculous to jump into an analysis of the amount of hardware that would be equivalent to a human without referencing Moravec. And that analysis has to be hugely footnoted. The graph derived from Kurzweil suggests 10 Petaflops is the magic amount of hardware. Moravec strongly argued for 100M MIPS -- roughly on the order of 0.1 Petaflops. 71.141.125.153 (talk) 07:52, 18 April 2011 (UTC)
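For anyone checking the arithmetic in point 2: Kurzweil's "cps" is generally read as calculations per second, and the comparison treats one instruction, one calculation, and one floating-point operation as roughly equivalent, which is itself a coarse assumption. A quick sanity check in Python:

    MIPS = 1e6              # 1 MIPS = one million instructions per second
    PETAFLOPS = 1e15

    moravec = 100e6 * MIPS  # Moravec's 100 million MIPS
    kurzweil = 1e16         # the 10^16 cps behind the 10-petaflops figure

    print(moravec / PETAFLOPS)  # 0.1   -> ~0.1 petaflops, as stated above
    print(kurzweil / moravec)   # 100.0 -> the two estimates differ 100-fold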

Citation required in emergence

I can't find the citation now, but I'm quite sure Turing is among those who claimed that. Hope this helps someone to remember, finding it or whatever. trylks (talk) 06:37, 28 December 2008 (UTC)

Emergence, as a concept, postdates Turing by a considerable margin. He did predict human-level intelligence by the year 2000, but in my reading of him he is unlikely to have suggested that it would just happen. He was very methodical in his approach to computing. —Preceding unsigned comment added by Paul shallard (talkcontribs) 15:14, 18 January 2009 (UTC)
You're probably thinking of the section on "Learning machines" in Computing Machinery and Intelligence. Turing argued that a machine could be taught in much the same way a child is, using reinforcement (psychology). I agree with Paul that this has nothing to do with the modern idea of strong emergence. ---- CharlesGillingham (talk) 19:22, 19 January 2009 (UTC)

Petaflops

The text says that "10 million MIPS", or "10 petaflops", is needed. But isn't 10 million MIPS the same as 10 teraflops? 10 petaflops is 10,000 million MIPS. Which is the right amount, 10 million MIPS or 10 petaflops? Shd (talk) 21:38, 2 March 2009 (UTC)

I have a copy of "The Singularity Is Near" by Ray Kurzweil (2005). A quote from the book: "These estimates all result in comparable orders of magnitude (10^14 to 10^15 cps). Given the early stage of human-brain reverse engineering, I will use a more conservative figure of 10^16 cps for our subsequent discussions." This was from "The Computational Capacity of the Human Brain" section of the book. He cited several estimates based on very different independent approaches. Gatortpk (talk) 19:33, 14 April 2009 (UTC)

So in this context (M)IPS, cps, and FLOPS could be used interchangeably. 10 million MIPS is 10 teraflops. Kurzweil quoted 10^14 to 10^15 cps, which is 100 teraflops to 1 petaflops. And his more conservative figure of 10^16 cps is 10 petaflops (10 billion MIPS). Gatortpk (talk) 19:33, 14 April 2009 (UTC)
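Spelling the conversions out (treating instructions, cps, and FLOPS as interchangeable, which is the assumption already being made in this thread), a quick check in Python:

    TERA, PETA, MIPS = 1e12, 1e15, 1e6

    print(10e6 * MIPS / TERA)  # 10 million MIPS = 10.0 teraflops
    print(1e16 / PETA)         # 10^16 cps       = 10.0 petaflops
    print(1e16 / MIPS / 1e9)   # 10^16 cps       = 10.0 billion MIPS

    # "10 million MIPS" and "10 petaflops" differ by a factor of 1,000,
    # so the old article text could not have been right about both.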

Kurzweil may have quoted 10 million MIPS a long time ago. But that would mean a handful of GPUs from several PS3s with good software could simulate the human brain -- I don't think so. Gatortpk (talk) 19:33, 14 April 2009 (UTC)

I think the "10 million MIPS or 10 petaflops" is straight wrong. 10 petaflops is basially correct but that is 10,000 million MIPS. However just changing this makes the rest of the paragraph not fit. I hope to correct it shortly. Tim333 (talk) 18:51, 23 April 2009 (UTC)

I've updated the page, removing much of the "10 million MIPS or 10 petaflops" stuff and using Gatortpk's quote from Kurzweil's book. Tim333 (talk) 19:08, 23 April 2009 (UTC)

Why the article then goes on to say "(for 100 petaflops), up to a more conservative estimate of ~2025 (for 100,000 petaflops)" when the usual estimates are 100 teraflops to 100 petaflops, I don't know... Feel free to correct it, someone! Tim333 (talk) 19:15, 23 April 2009 (UTC)

Removed "dubious" tag

I removed a "dubious" designation from one of the statements. It's possible to serialize every piece of data. We Lisp programmers frequently use this to save a program's state as one large binary. Smalltalk programmers use it even more extensively. If you have a reason for thinking it's dubious then please say so on this talk page and undo my commit. Drama-kun (talk) 17:05, 28 September 2009 (UTC)
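As a trivial illustration of the save-the-image idea mentioned above (in Python rather than Lisp, and only a sketch; a snapshot of a real running system has to capture far more than one object):

    import pickle

    # Any picklable program state can be written out as a single binary
    # blob and restored later -- the Lisp/Smalltalk "save the image"
    # idea in miniature.
    state = {"weights": [0.1, 0.2, 0.3], "step": 42, "log": ["started"]}

    with open("state.bin", "wb") as f:
        pickle.dump(state, f)

    with open("state.bin", "rb") as f:
        restored = pickle.load(f)

    assert restored == state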

The editor who tagged that statement neglected to provide a reason for his doubt. But I think what s/he was getting at is that information theory and physics both tell us that decay happens in all physical systems, so the tagged statement is not true, strictly speaking. This should be reformulated along the lines that we can reversibly interrupt the operation of some computers, while the same is not true for human brains. Regards, Paradoctor (talk) 22:01, 28 September 2009 (UTC)
Might be. At the same time, the Church-Turing thesis tells us that there's nothing computable that can't be implemented on a Turing machine. While it's not formalized, so it can't be falsified, most computer scientists accept it as very probably true. Besides, human brains don't have to be simulated at the molecular level; higher-order abstractions can be used that don't involve quantum mechanics and the like. If I'm smoking crack, just revert me. Drama-kun (talk) 04:01, 30 September 2009 (UTC)

Copy of entire article in References

There is an entire second copy of the article pasted in just after the References heading. It has been that way for many, many revisions; I was not able to find the place where it was first included, but about 200 revisions back it is definitely not there. I don't feel competent to make that major an edit. The text of the second copy is indented for some reason. --David Battle (talk) 01:05, 16 January 2010 (UTC)

Thanks, fixed. The problem began with this edit of mine. I can only assume that the citation bot had indigestion or something. ;) Paradoctor (talk) 02:28, 16 January 2010 (UTC)

  Done

Searle's Chinese room is ridiculous. But regardless of what you think of it, that attempt to dismiss the entire concept of AGI does not belong BEFORE the article.

Move it to a criticism section. 66.91.255.120 (talk) 07:08, 23 January 2010 (UTC)

The "See also" is another use of the same term. See the section in this article on the origin of the term. ---- CharlesGillingham (talk) 16:48, 30 March 2010 (UTC)

The article should be titled Artificial General Intelligence, NOT 'strong AI'

Because AGI is the correct name for this field. 66.91.255.120 (talk) 07:08, 23 January 2010 (UTC)

Unfortunately, both terms refer to the enterprise described in published literature. Strong AI (Artificial general intelligence) might be better Bitstrat 14:36, 27 March 2010 (UTC) —Preceding unsigned comment added by BitStrat (talkcontribs)

There is now a link to Pei Wang's AGI portal - it would be good if somebody brought some of this material here; then a name change might be more justified. Bitstrat (talk) 15:15, 30 March 2010 (UTC)

Simulated human brain model

Much of the first part of this section appears to be original research and/or unsupported opinion. Proposed solution - delete everything before the discussion of the blue brain project, and replace with a few sentences describing the basics of the brain simulation approach. Bitstrat 14:47, 27 March 2010 (UTC) —Preceding unsigned comment added by BitStrat (talkcontribs)

Done some heavy editing to fix this. Hopefully the new version can be adopted Bitstrat (talk) 18:54, 27 March 2010 (UTC)

The text below was removed - it made the section too long without adding clarity: "10^16 cps can be regarded as equivalent to 10 petaflops.[1] It is likely that the first unoptimized simulations of a human brain in real time will require a computer capable of significantly more, perhaps an exaFLOPS (10^18 FLOPS). Using Top500 projections, it can be estimated that such levels of computing power might be reached using the top-performing CPU-based supercomputers by ~2015 (for 100 petaflops), up to a more conservative estimate of ~2025 (for 100,000 petaflops). This capacity might be achieved using a parallel computer (the Cray and NEC vector computers operate as a single machine but perform many calculations in parallel), or using a form of cluster computing, where multiple single computers operate as one. If GPU processing and stream processing power continues to more than double every year, exaFLOPS computers may be realised sooner using GPGPU processing. High-end GPUs such as the AMD FireStream can already process over one teraFLOPS (10^12 FLOPS). For comparison, a 2 GHz Intel microprocessor, as used in a desktop circa 2005, operates at 2 billion cycles per second, uses 1.7 billion transistors,[2] and operates at a few GFLOPS (10^9 FLOPS)." Bitstrat (talk) 12:41, 30 March 2010 (UTC)

Pgr94 commented that the FLOPS bit was too long. Agreed, so it is here (where should it go?). Also expanded the AGI section, aiming at a better balance. Bitstrat (talk) 14:44, 30 March 2010 (UTC)

I think these details belong in a section in artificial brain. ---- CharlesGillingham (talk) 16:45, 30 March 2010 (UTC)

Artificial consciousness

Moved most of this to the artificial consciousness page, where it was duplicated. Bitstrat (talk) 14:47, 30 March 2010 (UTC)

Salience

I think salience should be moved up into the new paragraph that covers "other traits". The final list is supposed to be for terms that are used as "code words for the soul" in popular sources. (Although occasionally a researcher develops a definition for these that manages to avoid essentialism, these subtle distinctions usually disappear by the time it reaches science fiction. These terms are partially defined by science fiction and, (unfortunately, I suppose) Wikipedia must stick with only the most widely used definitions for things.) ---- CharlesGillingham (talk) 17:51, 30 March 2010 (UTC)

Sentience - sensing the impact of the world on the self - is also a candidate. Animals respond to painful stimuli, but we do not know if they experience pain. The tricky concepts, i.e. self-awareness, consciousness and sapience, should definitely stay below SF and ethics. The problem is that "sentient" is not understood in an unconscious way by most people. Ditto "salient". A better answer might be to follow the sense-and-act statement with "This includes the ability to detect and respond appropriately to hazard." —Preceding unsigned comment added by BitStrat (talkcontribs) 18:26, 30 March 2010 (UTC)

I thought consciousness was the term joining them all. But the point of this comment is to remark that the definition of salience given in the article does not match the definitions on the linked Wikipedia page. --trylks (talk) 15:21, 12 April 2010 (UTC)

Nice work

I like the reorganization. It's nice to see this article finally improving. ---- CharlesGillingham (talk) 17:52, 30 March 2010 (UTC)

Artificial General Intelligence (AGI) - all hype?

AI research has been around for half a century, so a new buzzword like "artificial general intelligence" should be put into context and given due weight. The article should not focus more on AGI than on 50 years of research unless there have been spectacular results. I would strongly recommend that this article should scale AGI in proportion to significant peer-reviewed results. What exactly has been achieved since the term was coined in 2003? The article should make this very clear.

The article should focus less on speculation and more on computational models of intelligence (e.g. Newell's SOAR, ACT-R, Minsky's Society of Mind, CLARION and various other cognitive architectures).

In the past, making over-optimistic promises caused significant damage to AI research (See AI winter) so some scepticism is warranted. Wikipedia's guidelines should be observed with particular attention to undue weight WP:UNDUE and speculation WP:SPECULATION. pgr94 (talk) 21:33, 30 March 2010 (UTC)

So far the AGI deliverables look like ideas. Society of Mind is also ideas (that didn't work). SOAR is an ongoing effort (like CYC) that is good for making people think, but arguably has not delivered anything truly significant. Kurzweil's old stuff is purely about making people think. New label, same stuff. The current article is not about "significant peer-reviewed results" - scientific knowledge of cognition is mostly based on psychology research rather than old "cognitive" AI architectures, and progress in computing is largely based on incremental engineering efforts with a smaller scope, both of which are better described elsewhere. It is about a powerful vision (Strong AI or any of the alt labels) that has had significant (Newell's general intelligent action), sustained (Strong AI) and ongoing (AGI) impact on the thought of large numbers of scientists and non-scientists (how many pages link here?). It does not do this well yet, and may never do, but it is at least better referenced and contains less OR than some other pages you could find on similar topics. What it probably does need to balance it better are new sections on strong AI in fiction and on computationalist philosophies. Bitstrat (talk) 00:57, 31 March 2010 (UTC)
I agree with Pgr94's general assessment: any student of the History of AI can't help but notice that every generation has contained a movement like AGI, all have made the same prediction (20 years) and all have overestimated what they could accomplish within the currently popular paradigm.
Still, it's the visionaries who care about strong AI, and that's what this article is about. It's very important that we include all the visionaries, past and present, and we don't allow any one school of thought to carry the day. AGI is not a new idea, by itself. It is, as Pamela McCorduck puts it, the "ancient dream to forge the gods." The various AGI people have not given us any new ideas about "what" we want to do, just a few more suggestions on "how" we might do it. There is no convincing evidence these ideas are better, and, in my view anyway, there is no convincing evidence that the old ideas have completely failed.
Wikipedia should avoid lionizing current AGI research and disparaging older approaches. ---- CharlesGillingham (talk) 13:40, 31 March 2010 (UTC)

Page improvement programme - fixes needed - discuss

The point above is that the page should not get sucked into discussing the significance of knowledge and computational artefacts produced by strong AI research - there are very few, and this route leads to a page of interest to a minority.

The most important target for the current page is to be about the vision and its notable impacts on human activity (including research activity). This is what will attract the biggest audience. Wikipedia is about (persistent) notability. It is not about peer review in high-impact journals - except where that is the most appropriate indicator of notability.

The current page (basically a tighter version of what was already here) indicates that a third generation of visionary researchers (the AGI folk & Kurzweil) have emerged over the last 10 years, making predictions that are certainly optimistic (in the Kurzweil case because his estimate of brain processing power is the same as that of Moravec, i.e. it is based on a naive understanding of neurophysiology from 25 years ago). It does not discuss the new ideas yet (apart from brain emulation, which is hardly new, and a mention of Second Life embodiment - also not new). This seems right. Goertzel, Kurzweil, Markram et al are certainly persistently notable to the media (even if some researchers wish they were not).

The page mentions the symbolist ideas of the first generation of AI visionaries (McCarthy, Newell, Simon and Minsky). This could be expanded a bit.

The second generation (Lenat, Brooks & Pfeifer, Minsky again, Moravec, Kurzweil again, Grossberg) also mostly get mentions, but their ideas are not discussed at all (although embodiment is now mentioned). This should be fixed, but by summarising the ideas (influential books), not by discussing a bunch of architectures.

The inheritors of frames (who did get something useful out of the idea) were multi-agent researchers, who are also not represented (some are now in the AGI group). This too should be fixed.

Evolutionary software (Holland and successors) should also be here somewhere.

The really big hole for an ideas article is that the impact of the ideas on folk outside AI is not discussed at all (apart from the excellent bit about HAL).

New sections on SF and computationalism (in both philosophy and psychology) are proposed above. There are other areas where the ideas have changed things, (perhaps in less obvious ways) that could also be included.

Please comment under each para above Bitstrat (talk) 15:04, 31 March 2010 (UTC)

No one replied, because there is really no one working on this article.
I think your ideas about exactly what material is relevant are excellent, and I would hope that your plan is carried out some day. ---- CharlesGillingham (talk) 07:44, 8 April 2011 (UTC)

Intelligence or Intellect?

Under "Requirements", the attributes of intelligence listed are actually no more than the primary attributes of intellect. Further down, the statement "other aspects of the human mind besides intelligence" makes the same mistake. The "other attributes" listed are some of those that, in addition to intellect, contribute to intelligence. This is a typical computer sicentist/nerd definition of intelligence - a "Mr. Spock" mind. It would be impossible to cope in the real world with such a narrow pure intellect. Emotion is - strictly - the mental mechanism that drives action, without which there is no incentive to action. To quote from the article on RUR, "One of the things Helena requests is that the Robots get paid so that they can buy things they like, but the Robots don't like anything." Consequently, they don't want anything - because they lack emotion. 212.159.59.5 (talk) 17:36, 1 May 2010 (UTC)

Theoretical foundation of AGI

"Universal Algorithmic Intelligence: A mathematical top->down approach" by Markus Hutter provides optimal algorithm for intelligent agent. Is it worth including? --178.44.228.197 (talk) 15:44, 5 June 2010 (UTC)

Dunno; was that one still uncomputable? --Gwern (contribs) 21:53 7 June 2010 (GMT)
  1. ^ A floating-point operation (one unit of FLOPS, floating-point operations per second) requires at least one cycle, but may involve thousands of single-cycle logic operations
  2. ^ Intel spec.