If you've checked into social network X, formerly known as Twitter, in the past 24-48 hours, there's a good chance you've come across them: AI-generated deepfake stills and videos featuring Taylor Swift's likeness. The images depicted her engaging in sexually explicit acts with various fans of her professional American football player boyfriend Travis Kelce's NFL team, the Kansas City Chiefs.
The explicit and non-consensual images of Swift were slammed and condemned by her legion of fans, and the hashtag #ProtectTaylorSwift trended on X earlier today alongside "Taylor Swift AI." X has removed the images, but it has struggled to block the content and has been playing "whack-a-mole" as it is reposted by various new accounts, even as the story made headlines in news outlets around the world.
It also led to new calls from US lawmakers to crack down on the rapidly changing market for generative AI.
But how to do so without stifling innovation, or prohibiting parody, fan art, and other traditionally protected unauthorized depictions of public figures, remains an open question, given that the First Amendment to the United States Constitution guarantees the people's right to freedom of expression and speech.
It is still unclear which AI image and video generation tools were used to create the Swift deepfakes. Leading services such as Midjourney and OpenAI's DALL-E 3, for example, prohibit the creation of sexually explicit or even sexually suggestive content at both the policy and technical level.
According to Newsweek, the X account @Zvbear admitted to creating some of the images and has since made the account private.
Independent technology news outlet 404 Media traced the images to a group on the messaging app Telegram, which said it used "Microsoft AI tools," more specifically Microsoft Designer, which is powered by OpenAI's DALL-E 3 image model and likewise prohibits even innocuous creations featuring Swift or other famous faces.
Based on our usage (VentureBeat uses these and other AI tools to generate article header images and text content), these AI image generation tools block instructions (prompts) that would create such images, remove violating content, and warn users that they risk losing their accounts if they violate the terms of service.
Still, Stable Diffusion, a popular image generation AI model created by the startup Stability AI, is open source and can be used by individuals, groups, and businesses to create a variety of images, including sexually explicit ones.
In fact, this is exactly what got the image generation service and community Civitai into trouble with journalists at 404 Media, who observed that users were creating a growing number of non-consensual pornographic and deepfake AI images of real people, celebrities, and popular fictional characters.
Civitai has since stated that it is working to eradicate the creation of such images, and there is no indication yet that it enabled the Swift deepfakes in question this week.
In addition, Clipdrop, model creator Stability AI's own web implementation of Stable Diffusion, also prohibits explicit "pornographic" or violent imagery.
Despite all these policies and technical measures designed to prevent the creation of AI deepfake pornography and explicit images, users have clearly found ways around them, or turned to other services, leading to the flood of Swift images over the past few days.
The episode shows that even as AI generation becomes more accepted as a consensual creative tool, used by increasingly big names in pop culture such as the new HBO series True Detective: Night Country, the rapper and producer formerly known as Kanye West, and before that Marvel, the technology is clearly also being used for nefarious purposes, which could tarnish its reputation among the general public and lawmakers.
AI vendors and those who rely on them may suddenly find themselves having to answer for how they prevent or eradicate explicit and offensive content, even if their own use of the technology is benign or innocuous. And if new regulations do take effect, they could significantly limit the capabilities of AI generation models, and therefore the output of those who rely on them for less objectionable uses.
A lawsuit in the works?
British tabloid the Daily Mail noted that the explicit images of Swift, made without her consent, were uploaded to the website Celeb Jihad, and that Swift is reportedly "furious" about their spread and considering legal action. It remains to be seen whether that action would be directed at Celeb Jihad, which hosted them, or at the AI image generation tool companies, such as Microsoft and OpenAI, that made their creation possible.
The very prevalence of these AI-generated images raises new concerns about generative AI creation tools and their ability to depict real people, famous or not, in damaging, embarrassing, or explicit situations.
Perhaps it wouldn't be surprising to see calls for further regulation of the technology from lawmakers in Swift's home country of the United States.
New Jersey Republican Congressman Tom Kean Jr., who recently introduced two bills aimed at regulating AI, the AI Labeling Act and the Preventing Deepfakes of Intimate Images Act, today issued a statement to news outlets including VentureBeat calling on Congress to take up and pass his legislation.
The first of Kean's proposed bills would require AI multimedia generation companies to add a "clear and conspicuous notice" to their generated work identifying it as "AI-generated content." It is unclear, however, how such a notice would stop the creation and distribution of explicit AI deepfake pornography and images.
Meta already includes one such label, a sticker-like logo, on images generated with its Imagine AI art generator tool, which was trained on user-generated images from Facebook and Instagram and released last month. OpenAI also recently committed to implementing content credentials from the Coalition for Content Provenance and Authenticity (C2PA) in DALL-E 3's generations, as part of its efforts to prevent AI abuse ahead of the 2024 elections in the US and around the world.
C2PA is a nonprofit initiative of technology and AI companies and industry trade groups that labels AI-generated imagery and content with cryptographic digital watermarks so it can be reliably detected as AI-generated in the future.
The second bill, co-sponsored by Kean and his colleague across the aisle, New York Democratic Congressman Joe Morelle, would amend the Violence Against Women Act Reauthorization Act of 2022 to allow victims of non-consensual deepfakes to sue those who made and shared them, and in some cases the software companies behind them, for $150,000 in damages plus legal fees and any additional demonstrated damages.
Neither bill goes so far as to outright ban AI generations of famous faces, and this is probably a wise move, given that such a ban would likely be overturned by a lower court or the U.S. Supreme Court. Unauthorized artistic depictions of celebrities have traditionally been considered protected speech under the First Amendment; even before the advent of AI, they could be widely found in the form of editorial cartoons, caricatures, editorial illustrations, and fan art (including explicit fan art) in media not authorized by the subjects depicted.
This is because courts have found that public figures and celebrities have waived their "right to privacy" by virtue of their public image. However, celebrities have successfully sued those who exploited their image for commercial gain under the "right of publicity," a term coined by federal appeals court Judge Jerome N. Frank in a 1953 case, which essentially boils down to a celebrity's ability to control the commercial use of their likeness. If Swift were to sue, it would likely be on the basis of the latter right. While the new bills are unlikely to help in her particular case, they would probably make it easier for future victims of deepfakes to sue those responsible.
To actually become law, both new bills would need to be taken up by the relevant committees and voted on by the full House of Representatives, and similar bills would need to be introduced and passed in the U.S. Senate. Finally, the President of the United States would have to sign a reconciled bill integrating the efforts of both chambers of Congress. So far, both bills have only been introduced and referred to committee.
Read Kean's full statement on the Swift deepfakes issue below.
Kean speaks out on Taylor Swift explicit deepfake incident
Contact: Dan Scharfenberger
(January 25, 2024) Bernardsville, NJ – Congressman Tom Kean Jr. spoke out today after fake pornographic images of Taylor Swift, reportedly generated using artificial intelligence, were shared on social media and went viral.
"It is clear that AI technology is advancing faster than the necessary guardrails," said Congressman Tom Kean Jr. "Whether the victim is Taylor Swift or any young person across our country, we need to establish safeguards to combat this alarming trend. My bill, the AI Labeling Act, would be a very significant step forward."
In November 2023, it was reported that a Westfield High School student used similar artificial intelligence to create fake pornographic images of other students at the school. The reports revealed that photos of students had been altered and shared within the school, raising concerns in the school and community about the lack of legal recourse against AI-generated pornography. Such altered photos are known online as "deepfakes."
Rep. Kean recently co-hosted a press conference in Washington, D.C. with one of the victims, Francesca Mani, and her mother, Dorota Mani. The Manis have been key advocates for his AI regulation efforts.
In addition to introducing HR 6466, the AI Labeling Act, a bill that would ensure people know when they are viewing AI-generated content or interacting with an AI chatbot by requiring clear labels and disclosures, Kean is also a co-sponsor of HR 3106, the Preventing Deepfakes of Intimate Images Act.
Kean's AI Labeling Act would:
- Direct the Director of the National Institute of Standards and Technology (NIST) to work with other federal agencies to form a working group to help identify AI-generated content and establish a framework for labeling it.
- Require developers of generative AI systems to include a prominently displayed disclosure that clearly identifies AI-generated content.
- Ensure that developers and third-party licensees take responsible steps to prevent the systematic publication of content without the required disclosure.
- Establish a working group of government, AI developers, academia, and social media platforms to identify best practices for recognizing AI-generated content and transparently disclosing it to consumers.
You can read more about the bill here.