Talk:Artwork title


Use of ChatGPT


As an experiment, I used ChatGPT for the initial version of this article, both to understand the technology better and to see its potential applications to Wikipedia. The guidelines I set for myself were to start the article in my sandbox, to clearly indicate the initial version as AI-generated, and to only publish it once it had been corrected, wikified, referenced, and rounded out with additional information. It was important for me to have the first edit be uncorrected pure-AI for transparency, so you can see exactly what was generated, and all the human edits made subsequently.

I believe this is the first time someone has done this transparently on English Wikipedia, though no doubt others have experimented sub rosa.

My experience suggests this is mostly useful as a tool for overcoming writer's block, as this is a very general sort of subject that I had been considering writing an article on for years, but was unsure where to start. Using the tool in this way is like having the assistance of a very hard-working, well-read, but shallow research assistant, the AI resembling more Igor than Frankenstein's monster. Pharos (talk) 19:12, 20 December 2022 (UTC)

Correction: I realize now that I must have made some slight grammatical changes to my initial version. I think it was mostly qualifying language, for example in a number of cases changing "is" to "can be". I would advocate in future for posting a truly pure-AI version as the first edit for full transparency.--Pharos (talk) 19:46, 20 December 2022 (UTC)
Nice. I have also played with ChatGPT in generating articles, though I hadn't saved anything on-wiki until now. Here's my first experiment using the same parameters of saving the raw text, then going in to "wikify" it: Draft:Sexual Assault Kit Initiative. I first prompted it with "Can you write a Wikipedia article about the Sexual Assault Kit Initiative?", then to get references, "Can you provide references for that?", then "Can you format those as Wikipedia references?", which didn't give me wiki markup of <ref> tags, but oh well. Overall I think the writing is decent.
I agree that it helps get over the inertia and get to useful work. Maybe this is a version of Cunningham's Law - the best way to get a good article on Wikipedia is to paste in a substandard one from ChatGPT. :) - Fuzheado | Talk 19:30, 20 December 2022 (UTC)
The Cunningham's Law point applies: it can be satisfying to fix text that is slightly or wildly incorrect, and we Wikipedians have a lot of good experience in that field! Pharos (talk) 20:26, 20 December 2022 (UTC)
This is fascinating; thanks for sharing your process. To be honest, I've been mulling the idea of proposing some kind of policy RFC on whether to ban AI-generated article text, à la what Stack Overflow recently did. Even the best models are kind of a nightmare because they're really good at generating convincing but dangerously wrong bullshit. Photos and media are a whole other can of worms, which has already come up on the Village Pump. Your process here is a really useful model though, and potentially a good framework for a policy where we encourage people to use generated text as a draft starting point and then edit. One thing I'm curious about: what are the copyright implications? Steven Walling • talk 20:01, 21 December 2022 (UTC)
Good seeing you @Steven! We probably know all too well that any RfC phrased as, "Should we ban <insert scary foreign influence> on Wikipedia?" will inevitably be biased towards "yes." (Consider it our own inverted Betteridge's Law). Context and history might be helpful here - the early days of Wikipedia saw direct text import of public domain prose from Britannica 1911 and CIA World Factbook as starting points, but with an advisory to please edit them to conform to our house style. The same goes today for most any PD text that is copy/pasted in.
I could imagine a good first step might be to update our advisory to state that, for now, ChatGPT output – with generative AI treated quite widely as public domain – would be governed by the "Converting and adding open license text to Wikipedia" policy, but that the source of the text as coming from an AI/generative source must be stated clearly in the edit summary, and the factual accuracy needs to be treated with a heavy dose of skepticism. We should think seriously about whether to then mark these creations via a hatnote, category, or other metadata. I'd be disinclined to ban or make something taboo, as that is likely to have unintended Streisand-ey consequences. :) - Fuzheado | Talk 21:49, 21 December 2022 (UTC)
"with generative AI treated quite widely as public domain" I think the interesting question is, is it really PD or not? ChatGPT doesn't have a copyright statement I could find, and its TOS is kind of confused IMO:
That would imply the user who made the prompt has the legal right to republish output on Wikipedia under a free license, but on the other hand, model output isn't unique per user. I wish we could get a real lawyer to provide input and review the terms of the various popular models. Steven Walling • talk 23:21, 21 December 2022 (UTC)
I actually asked ChatGPT itself about the copyright status, here's what it said:

Q: What is the copyright status of text and responses from ChatGPT?

A: ChatGPT is a chatbot developed by OpenAI that uses machine learning to generate responses to user input. As a machine-generated text, ChatGPT does not have any copyright protection. However, the text generated by ChatGPT may be based on or incorporate material that is protected by copyright, such as quotes or excerpts from copyrighted works. In such cases, the copyright in the original work would still be protected and the use of the material would be subject to the copyright owner's permission. It is the responsibility of the user of ChatGPT to ensure that they are not infringing any copyrights when using the service.

Interpret as you will, but some more legal mindpower would be welcome. - Fuzheado | Talk 15:06, 22 December 2022 (UTC)
@Fuzheado - I'm unsure of the relevance of Britannica 1911 and the CIA World Factbook, both of which are sources that at least understood what a fact was and made an effort to present them - in contrast to ChatGPT, which assembles words into plausible-looking patterns regardless of substance. Can you clarify why you mentioned them? The Land (talk) 16:58, 23 December 2022 (UTC)
Dear @Pharos, thank you very much, I will have a close look. I have presented my own dealings with ChatGPT (a readable article for the Klexikon) in this video. Maybe we can make use of AI when the product looks at least as nice as Dr Frankenstein himself. :-)
And I agree with @Steven Walling that the biggest problem is that the text makes a good impression, while the facts are very much kaput. For example, I had the AI write an article about the author James Krüss, and the AI failed to mention his most famous novel, while making one up all by itself! (I then asked the AI what happens in that novel, but it only repeated what it had already written in the article about the author.) Ziko (talk) 21:44, 21 December 2022 (UTC)
Agree that the problem with generative AI (or AI that is not explainable) is that we have no clue as to the confidence or quality level of synthesized text, or how to even tweak it to improve. This is much more problematic than a goofy GAN image where the AI generates a sixth finger when rendering a portrait photo; it has far more serious consequences for us when something just makes crap up for a Wikipedia article. Pinging @Denny, as this was a serious issue while developing policies around Abstract Wikipedia, and more. - Fuzheado | Talk 21:52, 21 December 2022 (UTC)
I think it should go without saying that, at this point in time, no text generated by a language model should be added to Wikipedia without human supervision. That doesn't seem too different from our policies around machine-translated content. If someone likes to use a generative model to create a first draft, because they prefer some text over an empty page, all power to them - but by adding content to Wikipedia they basically own the content they add and are responsible for it, including all the mistakes it contains. I think our current policies should cover all of that already. -- denny vrandečić (talk) 22:28, 21 December 2022 (UTC)
+1. This is an entirely sensible and easy-to-understand guideline, IMO. Steven Walling • talk 23:23, 21 December 2022 (UTC)
@Denny so in practice the first paste of AI-generated output should go in the Draft: namespace, and once human-processed be moved to the main article namespace? (so the first version's form and sourcing is kept) -- Zblace (talk) 12:32, 22 December 2022 (UTC)
@Zblace I wouldn't bother keeping the generated output anywhere, to be honest. Only the human edited and approved version. -- denny vrandečić (talk) 15:37, 22 December 2022 (UTC)
I find it useful to keep the first sandbox edit for full transparency, just so that when there is inevitably a questionable statement, we will know if the weirdness is of human or AI origin. Of course the human posting to English Wikipedia must take responsibility for their work, but subsequent editors would be better off knowing what type of uncaught errors they might be dealing with. Pharos (talk) 00:24, 26 December 2022 (UTC)
There is a certain similarity to automatically translated texts. (Even if it looks great, you cannot be certain about the accuracy.) Ziko (talk) 22:27, 21 December 2022 (UTC)
I find it a very apt comparison. There is a statement in Help:Translation that the Wikipedia consensus is that an unedited machine translation, left as a Wikipedia article, is worse than nothing, and I agree that the same should apply to unedited AI. Pharos (talk) 00:31, 26 December 2022 (UTC)
In my view, we should strongly advise against the use of AI tools to create article drafts, even if the articles are then reviewed by humans. ChatGPT is too good at introducing plausible-sounding falsehoods. People are of course also good at introducing plausible-sounding falsehoods, but people+AI at its current maturity would be a great way to introduce errors into the work of good-faith editors and automate the productivity of bad-faith editors. The Land (talk) 16:58, 23 December 2022 (UTC)
A reactionary prohibition on using ChatGPT and AI would be a real shame given how Wikipedia was built on WP:BEBOLD experimentation. Therefore, I'm glad you're thinking more about how to "advise", and how to warn someone quickly and perhaps "give them the hook" if they are not working with the community cooperatively. The reason "advise" is the better option is that a ban is against the Wikipedia spirit on numerous levels. Not only is it counter to WP:AGF and WP:BEBOLD, but we already have adequate safeguards in terms of copyright status and our content quality policies (i.e. we accept translations or bot-automated content of unknown quality as long as the editor takes responsibility for the quality of the saved text and the community can inspect it). We had rambot in 2002 add thousands of U.S. census-based articles, as well as Lsjbot in Wikidata, Swedish, and Cebuano. We could advise on several things as policy refinements or additions:
  • The scale and speed at which someone adds content that is from an AI source, and whether it is deemed to be of high enough quality to continue at that rate. Again, we already have guidelines about contributions of high/low quality, and about whether to stop, in areas such as our bot or 'WP:SEMIAUTOMATED' policy. But we could update this to say that AI-generated content falls into this category, so WP:NPP and admins have better policies to point to when dealing with this on a day-to-day basis.
  • Encourage transparency as to the origin and creation of such content. I'd differ somewhat from Denny above in that I would consider it better practice if the originator of the AI-generated article did in fact state the system/method used, the prompt or process that was employed, and a "raw" first draft of the article that has had a basic quality check, with all that info in the edit summary in some form (i.e. hashtag, links, etc.). I think this is not unreasonable; it would provide transparency into what is likely to become more common, and it gives the community a way to inspect and track the use of generative AI better.
TL;DR - Like everything else on Wikipedia, treat this as an experiment in radical inclusion and transparency: advise editors to disclose and be responsible for quality, rather than disallow. - Fuzheado | Talk 12:54, 24 December 2022 (UTC)
I agree with people here and elsewhere who have pointed to the apparent danger of this experiment. In fact, I think Wikipedia needs to take a definitive stand and ban ChatGPT-generated text entirely, sooner rather than later, before all the good work done over the years is diluted with plausible nonsense from this experiment. The risks are enormous and the consequences far-reaching when people are allowed/encouraged to use a sophisticated hoax-generating machine to write articles for Wikipedia.

It's good for fun and personal use; for creating Wikipedia articles, it's entirely useless in my view. The effort needed to thoroughly verify what it generates far exceeds what it would take to genuinely start a new article from scratch. What Stack Overflow does for the same issue is entirely reasonable, and ideally Wikipedia needs a similar stance before it's too late. – Ammarpad (talk) 06:50, 24 December 2022 (UTC)

See the above comment to The Land - reactionary banning is not the Wikipedia way. Also, unlike Stack Overflow, which encourages many parallel contributions that require up/down voting, we don't face the same risk of content flooding, because we require there to be one and only one article about a named subject. Theirs is a temporary policy to "slow down the influx of answers" while they sort things out. Our dynamics are very different, and the Stack Overflow decision is not directly applicable here. That said, I think we should have a smart and intentional approach to AI on our systems. Our community and policies are resilient and have served us well for 20+ years. We should not have a reflexive rejection of emerging capabilities. - Fuzheado | Talk 13:00, 24 December 2022 (UTC)
I disagree. Actually, Wikipedia is more threatened than Stack Overflow. What's the consequence of false/invented information on Stack Overflow? Some random code of a fledgling developer not working, right? What about here?

Wikipedia has a reputation to guard. It's not perfect, yes, but ChatGPT (or any insentient AI agent, for that matter) cannot make it so for several decades to come. Per the available facts about this experiment, it can only do the opposite: make Wikipedia worse. Additionally, as already established, it's more difficult to fact-check an AI-generated article than to write a new one. Given all this, I wonder what the gain is.

Yes, "we require there to be one and only one article about a named subject," that's even more reason act now. We do not have chances to waste, any hoax article will remain hoax (read: ChatGPT hallucination), since it is difficult to detect and there are no alternative human written versions (which is possible on StackOverflow). Yes, our policies served us well all this while, because there were no sophisticated machines that can churn out plausible nonsense in split seconds. Now there's, and our policies need to adapt. – Ammarpad (talk) 17:05, 25 December 2022 (UTC)Reply

@Ammarpad, how exactly would these policies help? The point of ChatGPT is to produce convincing prose, so it is hard to spot without actually fact-checking, at which point it just becomes incorrect text to remove: for all intents and purposes, there is no difference between a person making up nonsense and ChatGPT. Of course incorrect information should be removed, and bots should not be misused, but that's already covered by existing policy. There's further discussion of this in the VPP thread linked below. — Qwerfjkl (talk) 10:00, 26 December 2022 (UTC)
Also, is there any evidence of this actually happening (though it may by definition be hard to spot)? As mentioned in the VPP thread, it may be a very Wikipedia-centric point of view to assume people will use ChatGPT to write Wikipedia articles. — Qwerfjkl (talk) 10:02, 26 December 2022 (UTC)
BTW everyone, we seem to be in a tiny room when there is a larger room already talking about this - Wikipedia:Village_pump_(policy)#Wikipedia_response_to_chatbot-generated_content. @Denny, The Land, Steven Walling, Ziko, Pharos, and Zblace: - Fuzheado | Talk 13:03, 24 December 2022 (UTC)
My problem with LLMs is exactly this edit. The LLM did not use these references. No one used these references. The author did not select these sources and then write content from them; they were attached after the fact to support some but not all of the content. This is exactly how poorly-sourced content appears. As a parallel, this is how many newcomer drafts are written - editors write content and then reluctantly sprinkle sources in to pass approval, but more often than not these sources don't support most of the content. The LLM simply predicts what you want to hear (even if you don't know what you want to hear and are wrong). The example of "in a number of cases changing 'is' to 'can be'" shows exactly how the LLM has no concept of context. It's just confidently wrong. The more one edits the output, the more such subtly-wrong information gets buried. I would be first in line to let AI write articles and skip human bias and errors, but this example is a long way from that yet. — HELLKNOWZ TALK 12:46, 26 December 2022 (UTC)
I fully agree with Hellknowz. I generally create new articles by starting with the best references I can find, and it's their content that determines what sections can be created and how the initial draft of the article is formed. My biases on how I see the subject probably influence the content, but they're strongly constrained by the sources. Writing the article the other way around would be WP:SYNTHESIS.
The more general problem with a Google-AI writing a draft of a Wikipedia article is that it's based on Google-selected sources of text, but Alphabet/Google selects text with the bottom line being optimising its advertising revenue (Google Ads + Google AdSense), not optimising the research quality of empirical evidence-based knowledge. Boud (talk) 00:58, 31 December 2022 (UTC)
@Boud, but ChatGPT is not related to Google; it was created by OpenAI. — Qwerfjkl (talk) 03:16, 31 December 2022 (UTC)
Fixed. Thanks for the correction! Boud (talk) 08:58, 31 December 2022 (UTC)

Proposed policy/guideline for the use of large language models


I wrote WP:LLM about a month ago as a draft for a potential policy/guideline regarding the use of large language models on Wikipedia; unfortunately, the only place I linked to it was in a rather long and confusing thread on WP:VPP that I don't think anyone was paying attention to. However, it seems that a lot of eyes are on this thread, so I figure I may as well mention it here; if anyone here has feedback, I would be interested in hearing it. Hopefully it can be proposed for formal adoption sooner rather than later. jp×g 11:23, 14 January 2023 (UTC)

I'd also recommend User:JPxG/LLM demonstration as useful supplementary reading for this; I don't think that having LLMs generate article text from scratch is a use case particularly suited to their strengths (since they cannot dynamically access external sources of text that aren't included in the prompt). However, there are a number of tasks for which they are incredibly well-suited. For example, "please reformat the wikitext source for this table to have more easily readable text, and also, make it in the colors of the University of Michigan" is a task that would probably take me ten or fifteen minutes, which ChatGPT did in fifteen seconds (and it looked a lot better than what I would have done as well). jp×g 11:26, 14 January 2023 (UTC)

Agree with your insights, and I found your experiments useful. I too experimented with some topics here - User:Fuzheado/ChatGPT - and found it a good starting point for phenomena and popular terms, but quite poor at biographies of academics. That said, I don't think we should have a ban or restriction, but we should make sure people are clear about where these texts come from and are transparent about when they are used. - Fuzheado | Talk 17:23, 14 January 2023 (UTC)

Tag that is not actionable


The current OR tag in my opinion rather defeats the purpose of WP:TAGGING, as it is inherently not actionable. The last bit of the tag, which is standard language, does indeed make sense, and I did this already a couple of weeks ago, when I first published: "Please improve it by verifying the claims made and adding inline citations. Statements consisting only of original research should be removed." It is not exactly unprecedented in the history of Wikipedia for an unsourced dodgy text to be improved by experienced editors, and in any case the point of tagging is to encourage the improvement of an article, not to claim the philosophical impossibility of improving it. Pharos (talk) 04:29, 6 January 2023 (UTC)

@Pharos, I presume that if the "original research" produced by ChatGPT has all been verified, then the tag can be removed. — Qwerfjkl (talk) 06:37, 6 January 2023 (UTC)
Agreed. Best, Barkeep49 (talk) 16:16, 12 January 2023 (UTC)
Also agree with the above. OhanaUnited (Talk page) 22:29, 12 January 2023 (UTC)
The "Art criticism" section is currently mostly a series of statements of opinion about what "can" or "may" happen. This is very much about what we could expect from random web pages related to "artwork title", i.e. from Common Crawl, which is 60% of GPT-3's source corpus. In some sense it's worse than OR, because it's too vaguely stated to be encyclopedic; vague statements are generally unverifiable. A Wikipedia article is not the place to write poetry. The final sentence in the section, unsourced, Whether descriptive or abstract, the title of a work of art can be a crucial element of the artistic process almost sounds like it comes from Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity. Vague opinions by notable sources should be attributed, not stated in Wikivoice, especially when their source appears to be CommonCrawlVoice. Boud (talk) 20:50, 13 January 2023 (UTC)Reply

I have removed the tag; maintenance templates are meant as a means to indicate potential problems to readers and point editors to relevant discussions, not as a method of inserting paragraph-long talk page comments at the top of a mainspace page. jp×g 11:19, 14 January 2023 (UTC)

I've done my best to replace it with a neutrally-worded template directing people here; the closest thing I could find was {{cleanup}}. jp×g 11:33, 14 January 2023 (UTC)

New text


I asked ChatGPT, "Write an essay on the development of artwork titles through the ages". This is the raw result:

"Artwork titles are an essential part of an artwork's identity. They provide context, meaning, and insight into the artwork, and can often shape our interpretation and understanding of it. The development of artwork titles has been an interesting evolution through the ages, influenced by historical, cultural, and artistic movements.

In ancient times, artwork titles were often descriptive and functional, simply naming the artwork after its subject matter or function. For example, the ancient Egyptians named their artworks after the person or deity depicted in the piece, such as "The Great Sphinx of Giza." Similarly, the Greeks named their sculptures after the mythological figures they depicted, such as "The Discus Thrower" or "Athena Parthenos."

During the Renaissance, artwork titles became more poetic and descriptive, often referencing classical mythology or literature. Artists began to see their works as intellectual and philosophical statements, and the titles reflected this new artistic vision. For example, Leonardo da Vinci's famous painting of the Last Supper was originally titled "The Disciples' Reaction to the Announcement of One Amongst Them Who Should Betray Jesus."

In the Baroque period, artwork titles became more complex and dramatic, reflecting the emotional and theatrical nature of the art. Baroque artists often used titles to create a sense of drama and tension in their artworks. For example, Caravaggio's painting of Judith beheading Holofernes was titled "Judith Slaying Holofernes," highlighting the violent and gruesome nature of the scene.

During the Romantic period, artwork titles became more introspective and personal, reflecting the individualistic and emotional nature of the art. Romantic artists often used titles to express their personal feelings and experiences, as well as their philosophical beliefs. For example, Caspar David Friedrich's painting "Wanderer Above the Sea of Fog" is a personal meditation on the human condition, depicting a solitary figure contemplating the vastness of nature.

In the modern era, artwork titles have become more abstract and experimental, reflecting the avant-garde and experimental nature of the art. Artists often use titles as a form of artistic expression, creating poetic and enigmatic titles that challenge our preconceptions of art and its meaning. For example, Mark Rothko's abstract paintings often have titles such as "No. 61 (Rust and Blue)" or "Orange and Yellow," which provide little context or meaning to the artwork but invite the viewer to engage with the colors and shapes on a purely visual level.

In conclusion, the development of artwork titles through the ages reflects the historical, cultural, and artistic movements of their time. From functional and descriptive titles to poetic and enigmatic ones, artwork titles have evolved to reflect the changing nature of art and its place in society. They continue to be an essential part of an artwork's identity, shaping our interpretation and understanding of it".

These are the sources:
  • "Art and Its Significance: An Anthology of Aesthetic Theory" edited by Stephen David Ross
  • "The Art of Description: World into Word" by Mark Doty
  • "The Meaning of Pictures: Six Lectures Given for Columbia University at the Metropolitan Museum of Art" by Ernst H. Gombrich
  • "Naming the World: And Other Exercises for the Creative Writer" by Bret Anthony Johnston
  • "Art Titles, from Mere Identification to Connotative Power" by Michael J. Lewis, The Wall Street Journal
  • "Art History Basics: Why Do Paintings Have Titles?" by Ingrid Christensen, The Abundant Artist
  • "The Importance of Art Titles" by Daniel Grant, The Huffington Post
  • "Why Art Titles Matter" by Rebecca Gross, Artsy
  • "A Brief History of Art Titles" by Elizabeth O'Neill, The Culture Trip
  • "The Power of Art Titles: From Classic to Modern Times" by Natalia Mego Podešvová, Widewalls


I also asked it: "Write an essay on translations of artwork titles"

"Translations of artwork titles are an important aspect of understanding art and its context. Translating an artwork title from one language to another can provide insight into the artist's intention and cultural background, as well as help to bridge linguistic and cultural divides.

One of the challenges of translating artwork titles is capturing the nuances of the original language. The title of an artwork is often carefully chosen by the artist to convey a particular meaning or to evoke a certain emotion. When translating a title, it is important to consider not just the literal meaning of the words but also the connotations and associations that the title carries in its original language.

For example, the famous painting by Vincent van Gogh titled "Starry Night" has been translated into many languages, including French ("La nuit étoilée"), Spanish ("La noche estrellada"), and Chinese ("星夜"). Each translation captures the basic meaning of the original title, but there may be subtle differences in how the title is perceived in each language. For instance, the Chinese translation of the title may evoke a different cultural and historical context than the French or Spanish translations, due to the different cultural associations with the night sky in each language.

Another challenge in translating artwork titles is maintaining the artistic integrity of the original work. The title of an artwork is an integral part of its identity, and a poor translation can diminish the impact of the artwork. A translator must balance the need to accurately convey the meaning of the title with the need to preserve its artistic and aesthetic qualities.

One notable example of a mistranslated artwork title is Leonardo da Vinci's famous painting "Mona Lisa." The Italian title of the painting is "La Gioconda," which refers to the smile of the subject. However, the French translation of the title, "La Joconde," mistranslates the word "gioconda" as "jocund," which means cheerful or merry. This mistranslation has led to a misconception that the subject of the painting is smiling in a cheerful manner, when in fact the smile is enigmatic and mysterious.

In conclusion, translating artwork titles is an important aspect of understanding art and its context. Translating a title requires careful consideration of the original language, cultural context, and artistic integrity of the artwork. When done correctly, a well-translated title can provide insight into the artist's intention and help to bridge linguistic and cultural divides."

"The Art of Translating Art Titles" by Emma Cummins, The Culture Trip https://theculturetrip.com/europe/united-kingdom/articles/the-art-of-translating-art-titles/

"Translating Art Titles: What's in a Name?" by Allison Meier, Hyperallergic https://hyperallergic.com/437386/translating-art-titles-whats-in-a-name/

"Lost in Translation: The Importance of Artwork Titles" by Claire Voon, Artsy https://www.artsy.net/article/artsy-editorial-lost-translation-importance-artwork-titles

"The Role of the Title in Art" by Annette Campbell-White, The Huffington Post https://www.huffpost.com/entry/the-role-of-the-title-in_b_9697434

--Marginataen (talk) 11:46, 19 April 2023 (UTC)