X blocks searches for Taylor Swift after explicit AI images of her go viral
Social media platform X has implemented a restriction on searches for Taylor Swift following the circulation of explicit AI-generated images of the singer on the site.
Joe Benarroch, X’s head of business operations, described the action as “temporary” in a statement to the BBC, citing safety as the priority.
Users attempting to search for Swift on the platform encounter a message indicating an error with the prompt: “Something went wrong. Try reloading.”
Instances of fake graphic depictions of the singer surfaced on the platform earlier this week, some of which gained widespread traction, accumulating millions of views and prompting concern from both US officials and Swift’s fanbase.
Her supporters flagged posts and accounts sharing the fabricated images, and flooded the platform with genuine pictures and videos of Swift under the hashtag “protect Taylor Swift”.
In response, X, previously known as Twitter, issued a statement on Friday, asserting that the dissemination of non-consensual nudity violates the platform’s policies.
“We maintain a zero-tolerance stance towards such content,” the statement emphasized. “Our teams are actively removing all identified images and implementing appropriate measures against the responsible accounts.”
It is unclear when X began blocking searches for Swift on the site, or whether the site has blocked searches for other public figures or terms in the past.
In his email to the BBC, Mr Benarroch said the action was taken “with an abundance of caution as we prioritise safety on this issue”.
The issue caught the attention of the White House, which on Friday called the spread of the AI-generated photos “alarming”.
“We know that lax enforcement disproportionately impacts women and they also impact girls, sadly, who are the overwhelming targets,” said White House press secretary Karine Jean-Pierre during a briefing.
She added that there should be legislation to tackle the misuse of AI technology on social media, and that platforms should also take their own steps to ban such content on their sites.
“We believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people,” Ms Jean-Pierre said.
US politicians have also called for new laws to criminalise the creation of deepfake images.
Deepfakes use artificial intelligence to make a video of someone by manipulating their face or body. A 2023 study found a 550% rise in the creation of doctored images since 2019, fuelled by the emergence of AI.
There are currently no federal laws against the sharing or creation of deepfake images, though there have been moves at state level to tackle the issue.
In the UK, sharing deepfake pornography became illegal under the Online Safety Act in 2023.