Is it finally time to regulate AI?

As the new year starts, many people are questioning how ethical AI can be and who is in charge of what content is released. Are the terms and conditions on social media effective? Who is really responsible for verifying that posted content is appropriate and ensuring it isn’t going to hurt someone’s reputation? These questions gained urgency when social media platforms such as Reddit, Facebook, and X (formerly known as Twitter) were flooded with sexually explicit AI-generated pictures of one of the biggest artists in the world, Taylor Swift. The images were uploaded by an anonymous user on X and viewed over 47 million times before the account was suspended. By then, however, the images had already been shared many times across the platform. According to X’s policies, this kind of Non-Consensual Nudity (NCN) is prohibited on the app. So my question is: how did they not flag that type of content as soon as the picture was posted? If images of a smaller female content creator came out on X, would the platform respond as quickly, or at all?

How did this happen?

According to 404 Media, the images originated in a group on the messaging app Telegram, where users shared AI-generated explicit pictures without being flagged. For a multi-billion-dollar company like X, you would think there would be a stricter policy for manipulated content, especially content depicting a public figure like Taylor. Unfortunately, on social media today this is practically normal: waves of AI-generated images circulate across apps, especially X and TikTok. In this case, those who shared the images deliberately wanted everyone to see them. Once the images came out, many Taylor Swift fans flooded X with #ProtectTaylorSwift, a hashtag filled only with positive images of Taylor. Swifties knew what to do to drown out the explicit pictures while bringing attention to the disturbing content.

Criminalizing Non-Consensual Media

This incident also reignited the conversation about criminalizing non-consensual media. Cases of AI tools generating non-consensual sexual images of women have been increasing across social media, so this is not the first time this has happened. White House press secretary Karine Jean-Pierre said women and girls are the overwhelming targets of online harassment and abuse. As sad as that sounds, many girls can agree on how common it is to be targeted on social media. According to The Project, “96% of deep fake images are pornographically targeted toward women.” Deepfakes are a new tool for those who want to humiliate women, especially given how quickly the images can be generated. My main question is: what would have happened if the Swifties hadn’t called for action? If the same thing happened to an ordinary person, would they have gotten the same support? Probably not.


Artists are no strangers to false images. Recently, images were also released of Brazilian soccer player Neymar appearing to have gained weight. Many commented on the pictures, saying “He let his career go” and “He’s not the Neymar we used to know”. The images turned out to be false, but the backlash against the soccer player was real. People on the internet are eager to bring artists, athletes, and influencers down because they are famous, and the chance of those celebrities ever seeing the hateful comments seems slim. No one, man or woman, should be a victim of disturbing manipulated content, no matter how famous they might be. But women have been victims of this harassment for too long, and lawmakers should make an effort to stop it.

Concluding Remarks

The recent Taylor Swift AI-generated images took the internet by storm, but they also made us realize how dangerous AI, technology, and social media can be, and the impact they have on society. As AI advances in the years ahead, it’s important to know how to approach situations like this, and for those in charge of regulating what content is released to realize how many people they can potentially hurt.
