Deepfake pornography

Deepfake pornography, or simply fake pornography, is a type of synthetic pornography created by altering existing pornographic material, applying deepfake technology to the faces of the actors. Deepfake porn has sparked controversy because it involves making and sharing realistic videos of non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts to address these ethical concerns include legislation and technology-based solutions.



Ethical considerations


Deepfake CSAM


Deepfake technology has made the creation of child sexual abuse material (CSAM), often also referred to as child pornography, faster, safer and easier than it has ever been. Deepfakes can be used to produce new CSAM from already existing material, or to create CSAM depicting children who have not been subjected to sexual abuse. Deepfake CSAM can nevertheless have real and direct consequences for children, including defamation, grooming, extortion, and bullying.[1]

Consent

Most deepfake porn is made using the faces of people who did not consent to their image being used sexually. In 2023, Sensity, an identity verification company, found that "96% of deepfakes are sexually explicit and feature women who didn’t consent to the creation of the content."[2] Deepfake porn is often used to humiliate and harass women, in much the same way as revenge porn.

Combatting deepfake pornography


Technical approach


Deepfake detection has become an increasingly important area of research as fake videos and images have become more prevalent. One promising approach is the use of convolutional neural networks (CNNs), which have shown high accuracy in distinguishing between real and fake images. One CNN-based algorithm developed specifically for deepfake detection is DeepRhythm, which has reported an accuracy score of 0.98 (i.e. it successfully detects deepfake content 98% of the time). The algorithm uses a pre-trained CNN to extract features from facial regions of interest, then applies a novel attention mechanism to identify discrepancies between original and manipulated images. While increasingly sophisticated deepfake technology poses ongoing challenges to detection efforts, the high accuracy of algorithms like DeepRhythm offers a promising tool for identifying and mitigating the spread of harmful deepfakes.[3]
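The pipeline described above — extracting convolutional features from a face crop, then mapping them to a manipulation-confidence score — can be illustrated with a deliberately tiny sketch. This is not DeepRhythm itself (its architecture, attention mechanism, and trained weights are not reproduced here); the kernel and classifier weights below are random stand-ins for learned parameters, included only to show the shape of such a detector.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: the basic CNN feature-extraction step."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fake_score(face, kernel, weight, bias):
    """Map convolutional features of a face crop to a score in [0, 1],
    where higher values indicate likely manipulation."""
    features = np.maximum(conv2d(face, kernel), 0.0)  # convolution + ReLU
    pooled = features.mean()                          # global average pooling
    return sigmoid(weight * pooled + bias)            # linear classifier head

rng = np.random.default_rng(0)
face = rng.random((8, 8))             # stand-in for a grayscale face crop
kernel = rng.standard_normal((3, 3))  # stand-in for a learned filter
print(round(fake_score(face, kernel, weight=2.0, bias=-1.0), 3))
```

A production detector would stack many pre-trained convolutional layers and a learned attention mechanism rather than apply a single random filter; the sketch only traces the crop → convolve → pool → score pipeline.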

Aside from detection models, video authentication tools are also available to the public. In 2019, Deepware launched the first publicly available detection tool, which allowed users to easily scan and detect deepfake videos. Similarly, in 2020 Microsoft released a free, user-friendly video authenticator: users upload a suspected video or enter a link, and receive a confidence score indicating the likelihood that the video has been manipulated.

Legal approach

As of 2023, there is little legislation that specifically addresses deepfake pornography. Instead, the harm caused by its creation and distribution is addressed by the courts through existing criminal and civil laws.

The most common legal recourse for victims of deepfake pornography is pursuing a claim of “revenge porn” because the images are non-consensual and intimate in nature. The legal consequences for revenge porn vary from country to country.[4] For instance, in Canada, the penalty for publishing non-consensual intimate images is up to 5 years in prison,[5] whereas in Malta it is a fine of up to €5,000.[6]

The “Deepfake Accountability Act” was introduced to the United States Congress in 2019 but died in 2020.[7] It aimed to make it a criminal offense to produce and distribute digitally altered visual media that was not disclosed as such. It specified that anyone who creates sexual, non-consensual altered media with the intent of humiliating or otherwise harming the participants may be fined, imprisoned for up to 5 years, or both.[8] A newer version of the bill, introduced in 2021, would have required any "advanced technological false personation records" to contain a watermark and an audiovisual disclosure identifying and explaining any altered audio and visual elements. It likewise provided that anyone who fails to disclose this information with intent to harass or humiliate a person with an "advanced technological false personation record" containing sexual content "shall be fined under this title, imprisoned for not more than 5 years, or both." However, this bill also died, in 2023.[9]

Controlling the distribution


Several major online platforms have taken steps to ban deepfake pornography. As of 2018, Gfycat, Reddit, Twitter, Discord, and Pornhub have all prohibited the uploading and sharing of deepfake pornographic content on their platforms.[10][11] In September of that same year, Google also added "involuntary synthetic pornographic imagery" to its ban list, allowing individuals to request the removal of such content from search results.[12] It is worth noting, however, that while Pornhub has taken a stance against non-consensual content, searching for "deepfake" on its website still yields results, and it continues to run ads for deepfake websites and content.[13]


References

  1. ^ Kirchengast, Tyrone (2020). "Deepfakes and image manipulation: criminalisation and control". Information & Communications Technology Law. 29 (3): 308–323. doi:10.1080/13600834.2020.1794615. S2CID 221058610.
  2. ^ "Found through Google, bought with Visa and Mastercard: Inside the deepfake porn economy". NBC News. 2023-03-27. Retrieved 2023-11-30.
  3. ^ Gaur, Loveleen; Arora, Gursimar Kaur (2022-07-27), DeepFakes, New York: CRC Press, pp. 91–98, doi:10.1201/9781003231493-7, ISBN 978-1-003-23149-3. Retrieved 2023-04-20.
  4. ^ Kirchengast, Tyrone (2020-07-16). "Deepfakes and image manipulation: criminalisation and control". Information & Communications Technology Law. 29 (3): 308–323. doi:10.1080/13600834.2020.1794615. ISSN 1360-0834. S2CID 221058610.
  5. ^ Branch, Legislative Services (2023-01-16). "Consolidated federal laws of Canada, Criminal Code". laws-lois.justice.gc.ca. Retrieved 2023-04-20.
  6. ^ Mania, Karolina (2022). "Legal Protection of Revenge and Deepfake Porn Victims in the European Union: Findings From a Comparative Legal Study". Trauma, Violence, & Abuse. doi:10.1177/15248380221143772. PMID 36565267. S2CID 255117036.
  7. ^ "Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019 (2019 - H.R. 3230)". GovTrack.us. Retrieved 2023-11-27.
  8. ^ Kirchengast, Tyrone (2020-07-16). "Deepfakes and image manipulation: criminalisation and control". Information & Communications Technology Law. 29 (3): 308–323. doi:10.1080/13600834.2020.1794615. ISSN 1360-0834. S2CID 221058610.
  9. ^ "DEEP FAKES Accountability Act (2021 - H.R. 2395)". GovTrack.us. Retrieved 2023-11-27.
  10. ^ Kharpal, Arjun. "Reddit, Pornhub ban videos that use A.I. to superimpose a person's face over an X-rated actor". CNBC. Retrieved 2023-04-20.
  11. ^ Cole, Samantha (2018-01-31). "AI-Generated Fake Porn Makers Have Been Kicked Off Their Favorite Host". Vice. Retrieved 2023-04-20.
  12. ^ Harwell, Drew (2018-12-30). "Fake-porn videos are being weaponized to harass and humiliate women: 'Everybody is a potential target'". The Washington Post. ISSN 0190-8286. Retrieved 2023-04-20.
  13. ^ Cole, Samantha (2018-02-06). "Pornhub Is Banning AI-Generated Fake Porn Videos, Says They're Nonconsensual". Vice. Retrieved 2019-11-09.