
🔆 Fake Me Not: Sora Buzz and Deepfakes

The latest developments in text-to-video generation, a $25 million deepfake heist, and practical ways to protect yourself. Stay informed and read the latest updates now!


🗞️ Issue 6 // ⏱️ Read Time: 7 min

Hello 👋

The term “deepfake” was first coined in late 2017 by a Reddit user of the same name and is a portmanteau of "deep learning" and "fake." Advances in generative AI have since made creating and sharing deepfakes dramatically easier. Keep reading if you want to learn why that matters.

In this week’s newsletter:

What are we talking about? Deepfakes and advancements in text-to-video generation.

How does it work? Deepfakes are synthetic media (images, audio, or video) created or altered using deep learning techniques. Classic face-swap deepfakes are generated using a type of neural network called an autoencoder, consisting of an encoder, which extracts the relevant facial attributes, and a decoder, which imposes those attributes onto the target video (see the sketch after this list).

Why is it relevant? While deepfakes can be entertaining and used responsibly in art, gaming, satire, and other cultural fields, they also pose significant risks. For instance, they can be used to spread misinformation, manipulate political scenarios, commit fraud, and manipulate stock markets.
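To make the autoencoder idea concrete, here is a minimal sketch of the classic face-swap setup, assuming PyTorch and toy 64x64 face crops. The class names and dimensions are illustrative, not any particular tool's implementation; the key idea is one shared encoder and one decoder per identity.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a face into an identity-agnostic attribute code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # latent code: pose, expression, lighting
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: renders the attribute code back as one person's face."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training reconstructs each person through the shared encoder, e.g.
# loss_a = mse(decoder_a(encoder(face_a)), face_a), and likewise for B.
# The swap happens at inference time: encode a frame of A, decode with B.
face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop of person A
swapped = decoder_b(encoder(face_a))  # A's pose and expression, rendered as B
```

Because the encoder is shared, it is forced to learn what the two faces have in common (pose, expression, lighting), while each decoder memorises one identity. That split is what makes the swap possible.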

The internet has been buzzing with OpenAI’s latest reveal: an impressive video-generation model called Sora that can transform text descriptions into photorealistic videos. Sora is quickly being compared to the leading AI text-to-video models from Runway, Pika, and Google. While extremely exciting, this technology is advancing alongside concerns about deepfakes and irresponsibly generated AI content being used to spread misinformation. That concern makes sense when you learn that Sora can “take an existing still image and generate a video from it, animating the image’s contents with accuracy and attention to small detail” and “take an existing video and extend it or fill in missing frames.”

The term "deepfake" is often used to describe both the technology and the manipulated content it produces. It’s essentially a counterfeit of a human being.

Here is what some of the dangers of deepfakes look like in practice:

  1. Influencing Public Opinion and Undermining Trust: Deepfakes can distort democratic discourse, manipulate elections, erode trust in institutions, weaken journalism, and exacerbate social divisions. Considering that 2024 is quite the year for public elections, governments around the world are facing new types of challenges.

  2. Violating Privacy and Damaging Reputation: Deepfake technology can be used to create revenge porn, a harm that disproportionately affects women, as in the recent case where sexually explicit AI-generated photos of Taylor Swift flooded the internet.

  3. Financial Fraud: A couple of weeks ago, a new record was set in Hong Kong when fraudsters used a deepfake of a company’s CFO on a video call to trick an employee into transferring $25 million USD.

While deepfakes are generally legal, some jurisdictions have specific laws concerning their use, such as those related to child pornography, defamation, and hate speech.

As deepfake technology becomes more sophisticated, detecting manipulated content is getting more difficult. To mitigate the associated dangers, efforts are underway to develop automated deepfake detection and to increase public awareness of what deepfakes can do. Common ways to identify a deepfake include looking for visual inconsistencies in the media and analysing the audio; Sensity, Reality Defender, and DeepIdentify.ai are some examples of dedicated detection solutions. At the Munich Security Conference that took place over the weekend, tech giants pledged to collaborate on combating the abuse of deepfakes in this year’s elections. Team Lumiera questions whether trusting companies to self-regulate on a matter this important is a good idea.
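As a rough illustration of the automated approach, here is a minimal Python sketch of frame-level screening, assuming OpenCV for video decoding and a hypothetical pretrained binary classifier (`classifier` below is a stand-in, not a real library call; commercial tools like those named above use far more sophisticated pipelines):

```python
import cv2
import numpy as np

def screen_video(path: str, classifier, every_nth: int = 10) -> float:
    """Return the mean 'synthetic' probability over sampled frames."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if idx % every_nth == 0:
            face = cv2.resize(frame, (224, 224))  # size the hypothetical model expects
            scores.append(classifier(face))       # probability the frame is synthetic
        idx += 1
    cap.release()
    return float(np.mean(scores)) if scores else 0.0

# A sensible policy: flag suspicious clips for human review, don't auto-reject.
# if screen_video("clip.mp4", classifier) > 0.7: escalate_to_reviewer(...)
```

In practice, a per-frame score is only one signal; production systems also check audio-visual sync, compression artifacts, and provenance metadata.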

Here are 3 ways you can protect yourself:

  1. Stay Informed About Deepfakes and AI: Keeping up with the latest developments in AI (like following this newsletter!) can help you recognize potential red flags when encountering suspicious content.

  2. Be Sceptical: Approach online content with a critical eye and be cautious with unsolicited digital communication.

  3. Protect Your Identity: Use services that monitor and provide alerts if your personal information is used, which could include the misuse of your likeness in deepfakes.

What we are excited about:

Guardrails AI, the open and trusted AI assurance company, formally launched during the opening keynote at the AI in Production conference. The company introduced Guardrails Hub, an open-source product that lets developers build, contribute, share, and re-use advanced validation techniques for LLM applications.

Ghanaian company KaraAgro AI developed AI-based early warning systems that help farmers quickly identify problems on their cashew farms, keeping their crops healthier and higher-yielding. Watch more here.

❝

What could efficient regulation of deepfakes look like?

The Lumiera Question of the Week

🎤 Big tech mic drops of the week

Researchers at Google DeepMind and the University of Southern California have unveiled SELF-DISCOVER, a framework that enables language models to independently find logical reasoning prompts for complex tasks. SELF-DISCOVER substantially improves the performance of GPT-4 and PaLM 2 on challenging reasoning benchmarks.

Amazon introduced BASE TTS (Big Adaptive Streamable Text-to-Speech), the largest text-to-speech model yet created. It’s trained on 100K hours of public-domain speech data and achieves a new state of the art in speech naturalness.

💸 Putting your money where your mouth is: a commitment to driving global innovation

Singapore plans to invest 1 billion Singapore dollars (about 689 million euros) over the next five years to strengthen AI capabilities and securely implement its National AI Strategy 2.0. The country aims to build new peaks of excellence by investing in AI computing, talent, and innovation. Part of this will include ensuring access to the advanced chips required for AI development and deployment.

🧠 Stanford research: Deep learning determines sex from brain scans. This could deepen our understanding of sex-related brain differences and their implications for neuroscience and medicine, for example by narrowing the gender gap in ADHD diagnosis.

Until next time.
On behalf of Team Lumiera

Lumiera has gathered the brightest people from the technology and policy sectors to give you top-quality advice so you can navigate the new AI Era.

Follow the carefully curated Lumiera podcast playlist to stay informed and challenged on all things AI.