Episode 2

Deepfakes and the Future of Truth

Hosted by Eric S. Lander
71:22 min

It’s getting easy to create convincing—but false—videos through artificial intelligence. These “deepfakes” can have creative applications in art and education, but they can also cause great harm—from ruining the reputation of an ex-partner to provoking international conflicts or swinging elections. When seeing is not believing, who can we trust, and can democracy and truth survive?


Guests

Joan Donovan
Research Director of the Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School

Joan Donovan is the Research Director of the Shorenstein Center on Media, Politics and Public Policy. She is also the Director of the Technology and Social Change (TaSC) Research Project. Her research and teaching focus on media manipulation, the effects of disinformation campaigns, and adversarial media movements. As a postdoctoral fellow at UCLA, she also studied white supremacists’ use of DNA ancestry tests. Donovan was previously the research lead for Data & Society’s Media Manipulation Initiative.

Noelle Martin
Activist and Law Reform Campaigner

Noelle Martin is a feminist, activist and law reform campaigner in Australia. She campaigns for better policy, legal and regulatory responses to the global issue of image-based sexual abuse, cyberbullying and deepfakes. Her work was a major force behind recently enacted laws that criminalized the distribution of non-consensual intimate images in Australia. In 2019, she was named a Young Australian of the Year.

Danielle Keats Citron
Professor of Law, Boston University School of Law

Danielle Keats Citron is an international expert on privacy who was named a MacArthur Fellow in 2019 for her work on cyberstalking and sexual privacy. Citron is a Professor of Law at the Boston University School of Law, where she teaches and writes about information privacy, free expression, civil rights, the national security challenges of deepfakes, and the automated administrative state. She also serves as Vice President of the Cyber Civil Rights Initiative. She has advised the offices of many US legislators.

Reid Hoffman
Co-Founder, LinkedIn & Partner, Greylock

An accomplished entrepreneur, executive, and investor, Reid Hoffman has played an integral role in building many of today’s leading consumer technology businesses. He is the co-author of three best-selling books: The Start-Up of You, The Alliance, and Blitzscaling. He is the host of Masters of Scale, an original podcast series and the first American media program to commit to a 50-50 gender balance for featured guests.


Halsey Burgund
Sound Artist & Fellow, MIT Open Documentary Lab

Halsey Burgund is a sound artist and technologist whose work combines modern technologies, from mobile phones to artificial intelligence, with fundamentally human “technologies”: language, music, and the spoken voice. He is currently a fellow in MIT’s Open Documentary Lab. Previously, he earned a degree in geophysics, designed and built furniture, and worked in the high-tech industry. With journalist Francesca Panetta, Burgund is co-director of the “Moon Disaster” project.


Francesca Panetta
XR Creative Director, MIT Center for Advanced Virtuality

An artist and experimental journalist, Francesca Panetta is a creative director at the MIT Center for Advanced Virtuality. In her previous role at the Guardian, Panetta pioneered new forms of journalism, including interactive features and location-based augmented reality, and led an in-house VR studio. She was also the Guardian’s head of audio and led its podcast division. As co-director of the “Moon Disaster” project with sound artist Halsey Burgund, Panetta brought Richard Nixon back to life to deliver one last speech.

Hany Farid
Professor of Electrical Engineering & Computer Sciences and the School of Information, University of California, Berkeley

Hany Farid is an expert in digital forensics who has worked with the AP, Reuters, and The New York Times to verify the authenticity of visual content. Farid is a professor at the University of California, Berkeley with a joint appointment in Electrical Engineering & Computer Sciences and the School of Information. His research focuses on digital forensics, image analysis, and human perception. Previously, he was on the faculty at Dartmouth College for twenty years.


Alexei Efros
Professor of Electrical Engineering and Computer Sciences, University of California, Berkeley

Alexei Efros is a computer scientist who combines ideas from computer vision, computer graphics, and artificial intelligence with vast amounts of image data and a large dose of imagination to understand, model, and recreate the visual world around us. Efros is a professor at the University of California, Berkeley and a member of the Berkeley Artificial Intelligence Research Lab (BAIR). Previously, he spent nine years on the faculty of the Robotics Institute at Carnegie Mellon University.


Featured in the Boston Globe

We made a realistic deepfake, and here’s why we’re worried

An op-ed by guest Francesca Panetta, Pakinam Amer, and D. Fox Harrell.

A dangerous form of unanswerable speech

An op-ed by Mary Anne Franks, Professor of Law at the University of Miami School of Law.

Referenced in the Episode

In Event of Moon Disaster 

In July 1969, much of the world celebrated “one giant leap for mankind.” This project from MIT illustrates the possibilities of deepfake technologies by reimagining this landmark event: what if the Apollo 11 mission had gone wrong and the astronauts had not been able to return home?

Everybody Dance Now

A 2018 paper from Alexei Efros’s team at UC Berkeley presents a simple method for “do as I do” motion transfer: given a source video of a person dancing, the performance can be transferred to a novel (amateur) target after only a few minutes of footage of the target subject performing standard moves. (You can watch the YouTube video as well.)

The Great Lengths Taken to Make Abraham Lincoln Look Good in Portraits

An Atlas Obscura article about early photo-editing techniques applied to portraits of Abraham Lincoln.

Generative Adversarial Nets

The seminal paper on Generative Adversarial Networks, published by Ian Goodfellow and colleagues in 2014.
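
For a concrete sense of the mechanics behind the paper, here is a minimal PyTorch sketch of the adversarial training loop: a generator learns to turn random noise into plausible samples while a discriminator learns to tell real samples from generated ones. The network sizes, data dimensions, and hyperparameters below are illustrative assumptions, not the paper’s original configuration.

```python
# Minimal GAN training-loop sketch (illustrative; not the paper's code).
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 64, 784  # e.g., flattened 28x28 images (assumed)

# Generator: maps random noise vectors to fake samples.
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator step: label real data 1 and generated data 0.
    fake = G(torch.randn(n, NOISE_DIM)).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into outputting 1.
    fake = G(torch.randn(n, NOISE_DIM))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

The two networks play a minimax game; as training converges, the generator’s outputs become progressively harder for the discriminator (and, at scale, for people) to distinguish from real data.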

This Person Does Not Exist

Each time a visitor refreshes the page, this website presents a new computer-generated image of a person who has never existed.

Is artificial intelligence set to become art’s next medium?

In October 2018, Christie’s sold an AI-generated portrait for $432,500.

Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

A 2017 paper on CycleGANs (building on Ian Goodfellow’s work) from Alexei Efros and colleagues.

Better Language Models and Their Implications

OpenAI’s blog post about its GPT-2 language generation system. Read the full paper on the project here.
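
For readers who want to experiment, below is a brief sketch that samples a continuation from the publicly released GPT-2 weights using the open-source Hugging Face transformers library. The library choice, prompt, and sampling settings are illustrative assumptions; neither the blog post nor the episode prescribes them.

```python
# Sketch: sampling text from the released GPT-2 weights via the
# Hugging Face `transformers` library (pip install transformers torch).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "When seeing is no longer believing,"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Sample up to 60 tokens; top-k sampling keeps the output varied.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Each run produces a different, often plausible continuation, which is precisely what makes generated text at scale hard to distinguish from human writing in settings like the public comment dockets discussed below.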

Millions of People Post Comments on Federal Regulations. Many Are Fake.

A 2017 Wall Street Journal investigation uncovering thousands of fraudulent posts on federal agencies’ dockets.

Deepfake Bot Submissions to Federal Public Comment Websites Cannot Be Distinguished from Human Submissions

A 2019 study from Harvard student Max Weiss testing whether federal comment processes are vulnerable to automated, unique deepfake submissions that may be indistinguishable from human submissions. Visit his Bot or Not: A Turing Test to see whether you can distinguish AI-generated comments from real ones.

Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life

A 2018 RAND Corporation publication by Jennifer Kavanagh and Michael Rich, which explores the diminishing role of facts and data in American public life. This report is frequently credited with coining the term “truth decay.”

California Assembly Bill No. 730

A law enacted in California in 2019 that banned publishing political deepfakes within 60 days of an election.

Texas Senate Bill No. 751

A law enacted in Texas in 2019 that banned publishing political deepfakes within 30 days of an election.

Section 230 of the Communications Decency Act

The Electronic Frontier Foundation's guide to Section 230 of the Communications Decency Act, which protects platforms from responsibility for content posted by users.

Further Learning

In the Age of AI, Is Seeing Still Believing?

A 2018 story on deepfakes from The New Yorker that profiles Berkeley professors Hany Farid and Alexei Efros.

Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security

A 2019 paper from Danielle Citron and Bobby Chesney about the ways in which deepfake technology can distort reality.  

The Insidious Rise of Deepfake Porn Videos — and One Woman Who Won't Be Silenced

A 2019 Australian Broadcasting Corporation profile of activist Noelle Martin and her fight against deepfake abuse.

Prepare, Don’t Panic: Synthetic Media and Deepfakes

A project of the WITNESS Media Lab focused on emerging and potential malicious uses of so-called “deepfakes” and other forms of AI-generated “synthetic media.” This work is part of a broader initiative on proactive approaches to protecting and upholding marginalized voices and human rights.

Deepfakes and Cheap Fakes

A 2019 publication from Dr. Joan Donovan and her colleague Dr. Britt Paris for Data & Society on the manipulation of audio and video evidence.

Department of Justice's Review of Section 230 of the Communications Decency Act of 1996

In June 2020, the U.S. Department of Justice released a set of reform proposals to update the immunity for online platforms under Section 230 of the Communications Decency Act of 1996.

How Section 230 reform endangers internet free speech

A piece by the Brookings Institution that discusses the movement for Section 230 reform.

These Faces Are Not Real

A July 2020 Reuters piece that details how to spot deepfake images, using as an example a purported British writer named Oliver Taylor, whose profile photo experts say bore all the hallmarks of a GAN-generated image.

New Steps to Combat Disinformation

A September 2020 Microsoft blog post announcing new tools to combat disinformation ahead of the U.S. 2020 presidential election. 

Steve Scalise’s tweet shows the threat of cheap fakes — and what social media sites must do

A 2020 Washington Post opinion piece about political deepfakes on social media and the role social media platforms should play.