User:MichaelGonzalez26/Artificial intelligence art

Legality

Due to recent advancements in generative artificial intelligence, synthetic content such as AI-generated images and artwork has become much more accessible throughout the early 2020s. With this increased presence of AI-generated artwork and images, various legal issues have emerged concerning their use, particularly in fields such as copyright and defamation law.

In regard to the copyrightability of AI-generated artwork, both existing case law and the Copyright Office stipulate that AI-created artwork cannot be granted copyright protection because it was not created by a human author.[1] When examining whether an artist of a copyrighted work can successfully argue infringement against an AI-generated image, a key consideration is whether the image is a near-identical copy of the original or merely incorporates its stylistic cues.[2] Lastly, many commentators argue that using copyrighted materials to train AI models would likely be protected by fair use, as the models merely examine the copyrighted works to learn more about them, rather than using them for their expressive elements.[3]

In areas such as defamation law, holding a party liable for defamatory images generated by AI would prove difficult due to the protection given to online platforms by Section 230; in addition, First Amendment protections also limit liability for those who post defamatory content concerning matters outside of public concern.[4][5]

Copyrightability

When determining whether a work is copyrightable, a key requirement is that the work was created by a human author; non-humans have been found unable to obtain a copyright in their work.[1] For example, a monkey cannot register a copyright in photographs that it captured with a camera, as the Copyright Act uses terms such as "children," "widow," "grandchildren," and "widower," terms which "all imply humanity and necessarily exclude animals."[1] This requirement has long been reflected in copyright law, as well as in the Copyright Office's registration guidance, which frequently emphasizes that a work must owe its existence to an author in order to be copyrightable.[1]

"A Recent Entrance to Paradise", the AI-generated artwork discussed in Thaler v. Perlmutter.

Thaler v. Perlmutter has become a landmark case for the copyrightability of AI-generated work: the court determined that works created by AI cannot receive copyright protection, as the Copyright Act of 1976 requires that works be created by humans in order to receive protection.[1]

In Thaler v. Perlmutter, the plaintiff owned a computer system, which he used to generate a piece of visual art. When the plaintiff attempted to register the work for copyright, the Copyright Office denied the application, stating that the work lacked human authorship, a requirement for receiving copyright protection.[6] Despite the plaintiff's argument that copyright law should extend to works generated by AI because of the law's history of malleability, the court rejected this argument, stating that human creativity is an essential requirement of copyrightability, even if that creativity is channeled through tools or media.[6] Emphasizing the Copyright Act's use of the word "author," the court determined that human authorship is a bedrock requirement of copyright and denied copyright registration for a work created absent any human involvement.[6]

Using this rationale, the Copyright Office has refused to provide copyright protection for works generated by AI on multiple occasions, noting that the works lacked a human author, a prerequisite to obtaining copyright protection in the United States.[7] In one specific instance, an artist registered a comic book that used images generated by AI; although the work originally received copyright protection from the Copyright Office, the Office revoked protection for the work after reaching out to the author for details about how AI was used in its creation.[8] More specifically, the Office decided that while the elements of the comic created by the author could receive protection, the images generated by AI were not the product of human authorship and therefore could not be protected.[8]

The Copyright Office first published its policy denying protection for AI-created works in 1973,[9] reaffirming this position as recently as 2021, emphasizing the "fruits of intellectual labor" from "creative powers of the mind" as a key factor in the creation process, and citing the "original intellectual conceptions of the author" (an author being limited to humans) as a key limit on copyright law.[10] Again in 2023, the Copyright Office adhered to these principles, stating that "If a work's traditional elements of authorship are produced by a machine, the work lacks human authorship and the Office will not register it."[10]

Despite these decisions, however, no case has yet explicitly held that an AI-generated work is unprotectable, meaning that the legal status of AI-generated works could change in the near future.[10] Additionally, some contend that the Copyright Office, despite the position it has taken on AI-generated work, has still registered works created by AI, as shown by a search of the copyright registry. Because the Office also does not frequently check whether a work was created by AI, it is believed that individuals can circumvent the requirement by simply listing a person as the author of the work, with the Copyright Office unlikely to take action to remove the registration; similarly, one could circumvent the requirement by labeling the AI-created work as a type of work that does not require listing an author, such as a work made for hire.[11] Whether a user receives intellectual property rights in AI-generated works also varies depending on the service, with such provisions stipulating that (1) the AI provider owns copyright rights in the content generated by the AI, (2) users can purchase a license to use their generated content, or (3) the rights are transferred from the provider to the user.[12]

Copyright Infringement - Image Training, Outputs, & Fair Use

Image Training and Outputs

AI-generated images have also faced scrutiny in areas such as copyright infringement and fair use. This is because AI models must be "trained" on images, including copyrighted works, typically without the authorization of the owners of that content.[13] Regarding the use of copyrighted images in the "training" of image-generation AI, liability is contingent upon whether the image was downloaded or "stored" by the developer during the AI's training process, a requirement derived from the "fixation" element of the Copyright Act.[14][15] Should it be found that the image was stored for a sufficient period of time, it is possible that the training of AI models would be found to be infringing.[15]

The analysis conducted for an AI-generated image itself can vary depending on whether the image resembles an image used for its training (referred to here as the "input"). When determining copyright infringement for a generated image in the style of another (but not resembling it), a key factor is whether the AI-produced image is substantially similar to the protected "input."[2] For images that merely mimic the style of an "input," infringement would not be found, because stylistic choices are akin to an idea and therefore not protected by copyright law.[16][2] In cases in which the generated image is a near-identical copy of an "input," however, infringement would likely be found.[2]

An AI-generated image by Google Gemini using the following prompt: "Generate an image for me of a dalmatian puppy playing with a ball in the style of a Van Gogh painting."

Recent cases litigated over copyright infringement may provide greater insight into the nuances and issues that AI raises in copyright law:

In Doe 1 v. GitHub, the plaintiffs argued that GitHub and OpenAI used training data that included publicly accessible repositories on GitHub, including material restricted by licenses.[17] While the plaintiffs did not know whether their works were used by the algorithm in its training process, they argued that their work was ingested by the program and therefore returned to users through its output, as the program had been trained on all publicly available GitHub repositories.[17] They additionally claimed that the defendants knowingly failed to reach out to them to obtain authority to use their licensed materials, thereby violating their rights to distribute copies of, and create derivative works based on, their licensed material.[17] In short, the plaintiffs' arguments centered on the fact that Codex, the software used to power the AI, does not identify the owner of the copyright in its output and was not trained to provide attribution to those works, despite open-source licenses requiring attribution to the author, among other conditions.[17]

While the case has not yet concluded, as recently as January 3, 2024, Judge Tigar declined to dismiss the plaintiffs' claims for lack of standing and provided them with an opportunity to file an amended complaint.[18] While this case primarily centers on the misappropriation of code, it is believed that a similar misappropriation could occur in the world of AI-generated art, with AI being used to generate images similar to already existing ones and thereby violating copyright law.[19]

In another recent case, Andersen v. Stability AI Ltd., the plaintiffs challenged the defendant's creation of its Stable Diffusion AI, arguing that the AI was "trained" on the plaintiffs' copyrighted work without obtaining the artists' permission.[20] Although the AI's output images are unlikely to closely match any of the images used in the training data, the plaintiffs argued that "[e]very output image from the system is derived exclusive from the latent image, which are copies of copyrighted images", and therefore violated their copyrights.[20] Among the various arguments raised by the plaintiffs, they most notably asserted that Stability's use of the images constituted direct copyright infringement due to Stability "download[ing] or otherwise acquir[ing] copies of billions of copyrighted images without permission" and using those images to train the AI.[20] The court ultimately refused to dismiss this claim, finding that Stability had not successfully opposed the sufficiency of the plaintiffs' allegations of direct infringement.[20]

While the case has not yet been decided, the court did state that the plaintiffs adequately alleged direct infringement based on the facts they presented.[20] Should the case continue and be decided in the plaintiffs' favor, it could have a large impact on how copyrighted images are used in the training of AI models.

Fair Use

Regarding AI and fair use, much of the literature agrees that the use of copyrighted material to train AI models should be considered fair use, and therefore that an AI does not infringe a copyright owner's rights by using that content as part of its training process (a practice referred to as "fair learning").[3][21] The underlying reasoning is that it is not the expressive elements of copyrighted works that are being copied; rather, the AI merely examines the works' components in order to learn more about them.[3][22] For example, an AI may analyze a photograph of an individual in order to learn about the subject's facial geometry, rather than using the picture for its "creative elements" or aesthetics.[3] Because copyright protection extends only to the expressive elements of a work, an AI that uses a work for its non-expressive elements would not be found to violate copyright law.[16]

This reasoning is derived from the case Sega v. Accolade, Inc.:

In Sega v. Accolade, Sega (plaintiff) had copyrighted computer code that it licensed to other companies for use with Sega's Genesis console.[23] Accolade (defendant), not wanting to be bound by the conditions of Sega's licensing agreement, decided to "reverse engineer" Sega's program in order to make its games compatible with Sega systems.[23] After reverse engineering the code, Accolade created a development manual containing functional descriptions of the interface requirements, but did not include any of Sega's code within that manual.[24] In regard to fair use, the Sega court on appeal concluded that, despite Accolade copying Sega's code, the copying was protected by fair use, emphasizing that Accolade's reverse engineering and copying of Sega's code was necessary in order to understand the functionality requirements for Sega's console.[25] The court sided with Accolade for this reason, concluding that when disassembly is the only way to gain access to a computer program and there is a legitimate reason for that disassembly, disassembling the copyrighted work is protected by fair use.[26]

Others, however, assert that AI art is not protected by fair use, contending that generative AI models (such as those used to create pictures and art) are distinct from other types of AI (such as those used in self-driving cars) because they are trained to mimic the expression of the works they are trained on, unlike AI that merely performs functional tasks.[27] As an example, a user can prompt an AI to generate an image in the style of Pablo Picasso, with the traits of Picasso's work being visible in the generated output.[27] Such arguments also assert that these AI models cannot receive fair use protection, as their emulation of the copied works would not make them transformative and would allow them to serve as a substitute for the original artists and their work, failing factors one and four of the fair use test.[28][29]

Libel & Fabrication Concerns

Pope Francis in puffy winter jacket

There is also concern that AI's generative capabilities can be used as a means to create libelous content, using artificial intelligence to create fake images of people in order to harm their character.[30] For example, in March 2023, an AI-generated photograph of Pope Francis wearing a puffer jacket surfaced on the Internet, with many believing the image to be real.[31] Some, such as the Federal Trade Commission, have emphasized the recent advancements in AI tools, as well as the increased ease of access and use of those tools, as factors which may make "synthetic media" or generated content more commonplace, increasing the potential for people to use AI to create libelous content or, in some cases, terroristic threats.[32] Due to the broad protection that laws such as Section 230 provide to hosting services in the United States, it may prove difficult to hold AI companies liable for the creation of content that may be deemed offensive.[4] The posters of such content would likely be liable only under certain circumstances, such as posting content depicting public figures and officials with "actual malice" (meaning the poster published the content while doubting its truthfulness or knowing it was false).[4] Furthermore, in the event of litigation, AI companies could attempt to absolve themselves of liability by asserting that the individual who provided the prompt, rather than the AI service that generated the image, is liable for the AI-generated content.[4] The First Amendment's limitations also pose an issue should one attempt to take legal action over an AI-generated image, with the process often being difficult for those who are injured in matters outside of public concern.[5]

Apart from libel, AI-generated content has also been weaponized in the form of "deepfakes", fabricated media of people created using AI.[33] Especially due to advances in AI over the past few years, deepfakes have become not only easy to create (often taking only a few hours and appearing on platforms such as YouTube, Facebook, and Instagram) but also increasingly difficult to distinguish from real life.[33] As a result of this technology's rise to prominence, many are concerned that it could be used for malicious and manipulative purposes, such as spreading political misinformation and harming the economy.[33]

One example of such weaponization is the use of AI to generate revenge porn and nonconsensual pornography, taking an image of a person and superimposing it onto a pornographic photo or video.[33][34] Absent any indication that the generated pornographic content was created as a parody or contained false images, anyone generating and publishing such content would very likely be found liable for defamation.[34] As for website operators, Section 230's broad protection would again shield them from liability.[35]

References

  1. ^ a b c d e 88 FR 16190, 16191.
  2. ^ a b c d Alhadeff, Jacob. "Limits of Algorithmic Fair Use". Washington Journal of Law, Technology, & Arts. 19: *28 – via Lexis.
  3. ^ a b c d Alhadeff, Jacob. "Limits of Algorithmic Fair Use". Washington Journal of Law, Technology, & Arts. 19: *28 – via Lexis.
  4. ^ a b c d Garon, Jon M. (2023). "An AI's Picture Paints a Thousand Lies: Designating Responsibility for Visual Libel" (PDF). Journal of Free Speech Law. 3 (2): *442 – via Lexis.
  5. ^ a b Garon, Jon M. (2023). "An AI's Picture Paints a Thousand Lies: Designating Responsibility for Visual Libel" (PDF). Journal of Free Speech Law. 3 (2): *451 – via Lexis.
  6. ^ a b c Thaler v. Perlmutter, 1:22-cv-01564 (D.D.C. Aug. 18, 2023).
  7. ^ Abbott, Ryan (August 8, 2022). "Disrupting Creativity: Copyright Law in the Age of Generative Artificial Intelligence" (PDF). Florida Law Review. 75 (6): 1157–1158 – via Lexis.
  8. ^ a b Abbott, Ryan (August 8, 2022). "Disrupting Creativity: Copyright Law in the Age of Generative Artificial Intelligence" (PDF). Florida Law Review. 75 (6): 1158 – via Lexis.
  9. ^ 88 FR 16190, 16192.
  10. ^ a b c Abbott, Ryan (August 8, 2022). "Disrupting Creativity: Copyright Law in the Age of Generative Artificial Intelligence" (PDF). Florida Law Review. 75 (6): 1161 – via Lexis.
  11. ^ Abbott, Ryan (August 8, 2022). "Disrupting Creativity: Copyright Law in the Age of Generative Artificial Intelligence" (PDF). Florida Law Review. 75 (6): 1165–1166 – via Lexis.
  12. ^ Abbott, Ryan (August 8, 2022). "Disrupting Creativity: Copyright Law in the Age of Generative Artificial Intelligence" (PDF). Florida Law Review. 75 (6): 1166–1167 – via Lexis.
  13. ^ Alhadeff, Jacob. "Limits of Algorithmic Fair Use". Washington Journal of Law, Technology, & Arts. 19: *1 – via Lexis.
  14. ^ 17 U.S.C. § 102(a).
  15. ^ a b Alhadeff, Jacob. "Limits of Algorithmic Fair Use". Washington Journal of Law, Technology, & Arts. 19: *27 – via Lexis.
  16. ^ a b 17 U.S.C. § 102(b).
  17. ^ a b c d Doe v. GitHub, Inc., 22-cv-06823-JST (DMR) (N.D. Cal. Nov. 28, 2023).
  18. ^ Doe v. GitHub, Inc., 22-cv-06823-JST (N.D. Cal. Jan. 3, 2024).
  19. ^ Khan, Mehtab; Hanna, Alex. "The Subject and Stages of AI Dataset Development: A Framework for Dataset Accountability" (PDF). Ohio State Technology Law Journal. 19: *175 – via Lexis.
  20. ^ a b c d e Andersen v. Stability AI Ltd., 23-cv-00201-WHO (N.D. Cal. Oct. 30, 2023).
  21. ^ Khan, Mehtab; Hanna, Alex. "The Subject and Stages of AI Dataset Development: A Framework for Dataset Accountability" (PDF). Ohio State Technology Law Journal. 19: *208–209 – via Lexis.
  22. ^ Khan, Mehtab; Hanna, Alex. "The Subject and Stages of AI Dataset Development: A Framework for Dataset Accountability" (PDF). Ohio State Technology Law Journal. 19: *209 – via Lexis.
  23. ^ a b Sega Enterprises Ltd. v. Accolade, Inc., 977 F.2d 1510, 1514 (9th Cir. 1992).
  24. ^ Sega Enterprises Ltd. v. Accolade, Inc., 977 F.2d 1510, 1515 (9th Cir. 1992).
  25. ^ Sega Enterprises Ltd. v. Accolade, Inc., 977 F.2d 1510, 1526 (9th Cir. 1992).
  26. ^ Sega Enterprises Ltd. v. Accolade, Inc., 977 F.2d 1510, 1527–1528 (9th Cir. 1992).
  27. ^ a b Alhadeff, Jacob. "Limits of Algorithmic Fair Use". Washington Journal of Law, Technology, & Arts. 19: *41 – via Lexis.
  28. ^ 17 U.S.C. § 107.
  29. ^ Alhadeff, Jacob. "Limits of Algorithmic Fair Use". Washington Journal of Law, Technology, & Arts. 19: *41–42 – via Lexis.
  30. ^ Garon, Jon M. (2023). "An AI's Picture Paints a Thousand Lies: Designating Responsibility for Visual Libel" (PDF). Journal of Free Speech Law. 3 (2): *427 – via Lexis.
  31. ^ Garon, Jon M. (2023). "An AI's Picture Paints a Thousand Lies: Designating Responsibility for Visual Libel" (PDF). Journal of Free Speech Law. 3 (2): *426 – via Lexis.
  32. ^ Garon, Jon M. (2023). "An AI's Picture Paints a Thousand Lies: Designating Responsibility for Visual Libel" (PDF). Journal of Free Speech Law. 3 (2): *427 – via Lexis.
  33. ^ a b c d Ratner, Claudia. "When 'Sweetie' Is Not So Sweet: Artificial Intelligence and Its Implications for Child Pornography". Family Court Review. 59: *389.
  34. ^ a b Garon, Jon M. (2023). "An AI's Picture Paints a Thousand Lies: Designating Responsibility for Visual Libel" (PDF). Journal of Free Speech Law. 3 (2): *438 – via Lexis.
  35. ^ Garon, Jon M. (2023). "An AI's Picture Paints a Thousand Lies: Designating Responsibility for Visual Libel" (PDF). Journal of Free Speech Law. 3 (2): *439 – via Lexis.