
🔆 Fake Me Not Pt. 2: Explicit Deepfakes

South Korea's response to this widespread issue, a new global agreement on our digital future, and more.


🗞️ Issue 37 // ⏱️ Read Time: 7 min

Hello 👋

We’ve talked about deepfakes before, but we’ve only briefly touched on one of their most widespread and damaging uses: pornography.

In this week's newsletter

What we’re talking about: The rise of AI-generated deepfake pornography and its implications for privacy, consent, and online safety.

How it’s relevant: Deepfake technology is becoming increasingly accessible, blurring the lines between reality and fabrication in digital content, and affecting people of all ages and backgrounds.

Why it matters: The unchecked spread of deepfake pornography threatens to erode trust in digital media, exacerbate gender-based violence, and fundamentally alter our concept of personal identity in the digital age. As this technology evolves, it challenges our legal systems, social norms, and individual rights in unprecedented ways.

Big tech news of the week… 

⚖️ Together with Zambia, Sweden led negotiations on the Global Digital Compact, adopted last week as part of the Pact for the Future. It is the first comprehensive UN agreement addressing digital issues, including AI.

⛽ Constellation Energy Corp. announced a 20-year deal to supply Microsoft Corporation with nuclear power for the company’s AI operations by reopening Three Mile Island, the site of the worst commercial nuclear power plant accident in U.S. history.

🌏 Sony Research and AI Singapore (AISG) will collaborate on research for the SEA-LION family of large language models (LLMs). SEA-LION, which stands for Southeast Asian Languages In One Network, aims to improve the accuracy and capability of AI models when processing languages from the region.

Check your messages

Imagine you’re scrolling through your phone on your morning commute when a message pops up from a friend:

“Hey, have you seen what’s going around on social media? There’s a video…it looks like you.”

Your heart sinks as your mind races with worst-case scenarios. You know what this might be: a deepfake.

We hope this has never happened to you, but it’s becoming an increasingly common reality for many. Deepfakes don't just affect celebrities or public figures; they can target anyone, from your classmate to your colleague.

The emotional toll of discovering a deepfake of yourself can be immense, especially when the perpetrators are peers. It's a violation of privacy that leaves victims feeling exposed, vulnerable, and often powerless. And even after the content is proven fake, the initial impact can linger, affecting relationships, careers, and mental wellbeing.

Let’s take a closer look at this serious issue and what’s being done to combat it.
