What is needed to move out of draft status?

I am curious, what is needed to move this article out of draft status? Thanks.

Jrincayc (talk) 15:38, 12 June 2022 (UTC)

Well, I don't think it meets notability requirements at this time. The only reliable sources that cover LaMDA seem to be on the 2021 I/O announcement, the 2022 I/O update, and the recent sentience claims. If you strongly feel that it does meet notability requirements, I can move it back to the mainspace, but I think it's likely someone will take this to AfD. InfiniteNexus (talk) 18:25, 12 June 2022 (UTC)
Hm, well, I started a stub article (and InfiniteNexus, thank you very much for the work you have put into this draft article) because I was surprised that Wikipedia didn't already have an article on it. Tho' I think it is reasonable to wait a month or so and decide if this should just be merged into Google AI or kept as a separate article. Thanks. Jrincayc (talk) 18:57, 12 June 2022 (UTC)
If there is more significant chatter over the sentience claims in the coming days/weeks, perhaps we could deem it notable. InfiniteNexus (talk) 05:31, 13 June 2022 (UTC)
I'm seeing more commentary on the sentience claims ([1], [2], [3], [4], [5], [6]), so I think we can move this to the mainspace once those sources are added. InfiniteNexus (talk) 17:05, 18 June 2022 (UTC)
Awesome, and thank you again. :) Jrincayc (talk) 14:08, 19 June 2022 (UTC)

Comment on If LaMDA is Sentient

I haven't seen this directly quoted in any of the articles, so it can't be added (yet), but basically, the reason it is unlikely that LaMDA is sentient (at least in a conventional sense) is that it is a transformer model, and during regular operation (inference, not training) it is essentially:

function(previous 8192 words seen) -> 8192 words including a new response

so it doesn't really have any long-term memory; see the sketch below. That said, the conversation in Table 26 in the 2022 paper [7] is seriously impressive.
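
To make that concrete, here is a minimal Python sketch of the point (purely illustrative; the function and model names are hypothetical, not LaMDA's actual API):

    CONTEXT_WINDOW = 8192  # roughly, the amount of prior text the model can see

    def chatbot_turn(conversation_so_far, model):
        # Only the most recent context fits; anything older is invisible.
        context = conversation_so_far[-CONTEXT_WINDOW:]
        # The model maps context -> continuation. Nothing is written back to
        # the model itself, so successive calls share no long-term state.
        return model.generate(context)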

Jrincayc (talk) 15:49, 12 June 2022 (UTC)

There is at least some useful discussion on whether LaMDA has sentience over at: https://www.lesswrong.com/posts/vqgpDoY4eKyNnWoFd/a-claim-that-google-s-lamda-is-sentient
Jrincayc (talk) 01:56, 13 June 2022 (UTC)
LessWrong is a blog, and this "Ben Livengood" isn't notable, as he doesn't have his own article. So we probably can't do much with it per WP:BLOG. InfiniteNexus (talk) 05:23, 13 June 2022 (UTC)
Here is one we might be able to use, from: https://www.siliconvalley.com/2022/06/14/google-debate-over-sentient-bots-overshadows-deeper-ai-issues/
The architecture of LaMDA “simply doesn’t support some key capabilities of human-like consciousness,” said Max Kreminski, a researcher at the University of California, Santa Cruz, who studies computational media. If LaMDA is like other large language models, he said, it wouldn’t learn from its interactions with human users because “the neural network weights of the deployed model are frozen.” It would also have no other form of long-term storage that it could write information to, meaning it wouldn’t be able to “think” in the background.
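To illustrate what "frozen" means here, a rough sketch in PyTorch-style Python (an assumption for illustration only; this is not Google's actual serving code):

    import torch

    def deploy(model: torch.nn.Module) -> torch.nn.Module:
        model.eval()  # switch to inference behavior (no dropout, etc.)
        for param in model.parameters():
            param.requires_grad = False  # weights can no longer be updated
        return model

    # Each user interaction is then a read-only forward pass, e.g.:
    # with torch.no_grad():
    #     reply_tokens = model(prompt_tokens)

So every conversation runs against the same fixed weights; nothing a user says changes the model or persists anywhere the model can later read.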
Jrincayc (talk) 00:50, 15 June 2022 (UTC)
Another article to add: Do Large Language Models Understand Us? by Blaise Agüera y Arcas https://direct.mit.edu/daed/article/151/2/183/110604/Do-Large-Language-Models-Understand-Us Jrincayc (talk) 01:18, 23 July 2022 (UTC)

Explanation of Turing test irrelevant to this article?

Regarding this edit by InfiniteNexus: if it is irrelevant to this article, why does the previous sentence in the lead mention the Turing test? There's also a diagram, purportedly of the Turing test, in the body of the article without much explanation or context. Some context seems appropriate. Wikipedia is supposed to be for non-expert readers, and something that is mentioned but has no meaning at all to the average person needs some explanation beyond the option to click a link, roughly per MOS:JARGON. —DIYeditor (talk) 04:39, 27 June 2022 (UTC)

The mention of the Turing test isn't irrelevant to the article; the description of what the Turing test is, is. If we were to explain every scientific concept mentioned in this article (such as sentience, artificial general intelligence, etc.), the article would become too long. The text you added is also unsourced and excessively long; if you believe the article needs a clarification of what the Turing test is, please keep it short (one sentence max) and cite a reliable source to back it up. InfiniteNexus (talk) 04:45, 27 June 2022 (UTC)
Of course it's always acceptable to demand a citation for text on Wikipedia. What's the citation that backs up the sentence before what you removed? What backs up the diagram? What do you think is inaccurate about what you removed? It's not going to be hard to provide a citation for a description of the Turing test or Imitation Game, so that is not my concern here. Lede sections don't need citations in general. What is controversial about what you removed? I'd like to avoid running afoul of what you think it should say when I paraphrase a source. —DIYeditor (talk) 04:55, 27 June 2022 (UTC)
You are correct that the lead section should not have citations; that is why the previous sentence does not have one. Per WP:LEADCITE, the lead serves as a summary of the article body, so everything in the lead should also be found in the body. The problem is, the text about the Turing test you added isn't found in the article body. Note that the diagram you mentioned is sourced properly. InfiniteNexus (talk) 05:02, 27 June 2022 (UTC)
What's the citation for the diagram? —DIYeditor (talk) 05:04, 27 June 2022 (UTC)
I assumed you were referring to the citation at the end of the image caption, but now I see you mean a citation for the image itself. The image is a representation of the Turing test, so we don't need a reference explaining what the image means. InfiniteNexus (talk) 05:12, 27 June 2022 (UTC)
My point is that nothing says the diagram is accurate other than consensus that it is accurate, i.e. that it is obviously accurate (if it is). The same goes for a simple description. What was inaccurate about the text you deleted, so that when I paraphrase a source on it you don't feel I have done so inaccurately? —DIYeditor (talk) 05:14, 27 June 2022 (UTC)
But why exactly do we need to describe what the Turing test is when all readers need to do is click on the link to that article to find out? Should we do the same for every other term that not everybody may be familiar with? Not everybody knows what a language model is, or the Three Laws of Robotics, or maybe even AI sentience. Do you think we need to explain those terms too? Personally I think not. InfiniteNexus (talk) 05:20, 27 June 2022 (UTC)
MOS:JARGON by my reading pretty plainly says we should offer some explanation of those terms. It says wikilinking is not enough. Of course we don't always do a good job of that but it is something to strive for. —DIYeditor (talk) 05:25, 27 June 2022 (UTC)
Alright, I concede. But I still think the wording should be kept as short and concise as possible. InfiniteNexus (talk) 05:46, 27 June 2022 (UTC)
How about just a parenthetical (whether a computer can pass for a human) or something to that effect? —DIYeditor (talk) 13:59, 27 June 2022 (UTC)
Yes. InfiniteNexus (talk) 16:54, 27 June 2022 (UTC)

"Lemoine's claims have been widely rejected" paragraph edit

This paragraph is entirely factual, but it isn't a tenth as good as it could be. A third of this paragraph was essentially: "Bob says it's bullshit. John says it's bullshit. Mary says it's bullshit. Rose says it's the biggest bullshit she's ever heard." That delivers no useful information whatsoever, and is utterly redundant with the six words I quote in this section's heading. The other two-thirds of the paragraph are pertinent remarks, which I've kept; the rest I've removed.

What I removed should be replaced with technical explanations of why it can't be sentient (I've seen those explanations, though I didn't bookmark them), sourced to experts.

Our readers are sentient, and we shouldn't assume that a proper technical explanation would be out of their grasp.

I've also added a sourced link to the ELIZA effect, which was a conspicuous omission. DFlhb (talk) 17:21, 8 February 2023 (UTC)

Those comments can be summarized as "Many experts ridiculed the idea that a language model could be self-aware, including ..." But being legitimate sourced commentary, they shouldn't be removed outright. InfiniteNexus (talk) 17:35, 8 February 2023 (UTC)
That's much better, but I still hope we can find more technical explanations (showing why LLMs fundamentally cannot be sentient) and include those. I'll keep digging. DFlhb (talk) 19:54, 8 February 2023 (UTC)
Thanks. InfiniteNexus (talk) 22:23, 8 February 2023 (UTC)
Thanks for this edit; I was about to fix that too, because it drove me bonkers. No sane journalist would say a stock dropped 8% because of an ad, and none of the sources did. That was pure WP:OR. DFlhb (talk) 19:24, 9 February 2023 (UTC)
Sure thing. InfiniteNexus (talk) 19:28, 9 February 2023 (UTC)

All of the perspectives in the section are in opposition to Lemoine. Shouldn't we include those who have been saying much the same things, such as former Google executive Mo Gawdat, e.g. in [8] and his book Scary Smart? Sandizer (talk) 00:09, 27 February 2023 (UTC)

I can't access the Times article because I don't have a subscription, but you are welcome to add additional sourced commentary to the article. InfiniteNexus (talk) 01:30, 27 February 2023 (UTC)
I agree that Lemoine's claims have not really been rejected. The detailed rebuttals seem to basically say that a pure transformer model can't be sentient, but I am not sure LaMDA is a pure transformer model. Lemoine said something like: "Literally no scientists have been just flatly disagreeing with me across the board. There are some nuanced claims I'm making that they have different suggestions on" and "I've been actually making journalists quote the original. I'll be like does that sound like they're disagreeing with me to you? The journalist be like, no it doesn't" in this interview: https://www.youtube.com/watch?v=FdhuKEMeVq0 Jrincayc (talk) 02:41, 23 March 2023 (UTC)
Lemoine claimed that LaMDA was sentient; the scientific community said that LaMDA is not sentient. If "rejected" isn't the right word, what word would you suggest? InfiniteNexus (talk) 16:20, 23 March 2023 (UTC)
Hm, the actual article linked (CNN) used the term "pushed back", so I changed it to that. Jrincayc (talk) 02:42, 6 April 2023 (UTC)

Splitting Bard?

I've been sitting on this idea for some time, but I think Bard could possibly be spun off into its own article, given the amount of information that's coming in. Or we could just keep everything here, given this article isn't that long. I've mocked up what a standalone article would look like at Draft:Bard (chatbot). Thoughts? InfiniteNexus (talk) 16:46, 23 March 2023 (UTC)

  • Support — It's certainly notable enough to have its own article, and I think there's enough unique information to make it a better choice, organization-wise. I also think that people looking up Bard may get confused if this is the main result. PopoDameron talk 16:59, 23 March 2023 (UTC)
  • Support, definitely overdue. The problem isn't length, but structure. A distinct article would allow us to organize things more clearly ("Background", "Release", "Architecture", "Reception"). Keeping it here either leaves a huge wall of text or a bunch of level 2 subheadings, both of which are messy. DFlhb (talk) 17:01, 23 March 2023 (UTC)
  • Support, definitely notable, and there is no sense in cluttering everything together. Artem.G (talk) 08:59, 25 March 2023 (UTC)

  Done, article now live at Bard (chatbot). InfiniteNexus (talk) 17:33, 27 March 2023 (UTC)