  • [ ♪INTRO ]

  • Thanks to CGI, people are used to seeing something that looks incredibly real but is

  • actually fake. Like Captain America's body. There is no way that's real!

  • The computer generated images from TV shows, movies, and videogames require millions of

  • dollars and months of work by trained professionals. But that could be changing, thanks to deepfakes.

  • You probably know about deepfakes. They're based on machine learning and artificial intelligence,

  • and they're beginning to give amateurs the power to doctor footage quickly and on the cheap.

  • Deepfakes aren't perfect quality right now, but in the future, they could be pretty convincing,

  • maybe even indistinguishable from reality. And if that happens... well, people are already preparing.

  • A deepfake video is one where artificial intelligence has been used to make it appear that someone

  • is doing or saying something that they didn't actually do, or say.

  • Like a face swap: putting one person's face on another person's body.

  • You can use this just for fun, like the person who edited clips of famous

  • films so they all starred, who else, Nicolas Cage.

  • But there are also more sinister uses out there.

  • For instance, in 2018, director Jordan Peele created a convincing proof-of-concept video

  • of Barack Obama saying lots of funny swear words.

  • The video was obviously not real, but it was an important PSA about how easily political

  • videos can be faked.

  • Also, unfortunately, we can't not mention what around 95% of deepfake videos are right

  • now: pornography that's been non-consensually edited to feature celebrities and even ordinary people. So... bad stuff.

  • Deepfake technology is still really new, but it's evolving quickly.

  • By one measure, the number of these videos almost doubled in 2019. And that's likely

  • because they're becoming pretty easy to make.

  • Deepfakes take advantage of new, powerful computing methods called generative adversarial networks, or GANs.

  • To make one, you start with neural networks: a type of teachable AI inspired by the connections

  • between the neurons in your brain.

  • You take two of these networks, and you give them lots of training data, which in this

  • case means lots of pictures and/or videos of the person you want to fake.

  • Then, you get your networks to compete with each other.

  • One network is tasked with using the training data to make a deepfake: to take some person's

  • face and animate it onto a template.

  • And the other network is tasked with watching that deepfake, comparing it to the training

  • data, and guessing whether or not the footage is real.

  • These are called the generative network and the discriminative network, respectively.

  • As these two sides compete with each other over multiple tests, the generative network

  • gets better and better at creating convincing deepfakes.

  • And eventually, it reaches an equilibrium where the discriminative network can't tell real from fake.
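To make that competition concrete, here's a heavily simplified, pure-Python sketch of the adversarial loop described above. The "generator" here is just one learnable number and the "discriminator" is a one-feature logistic classifier, and the real data (samples centred on 4.0) is a made-up toy distribution; actual deepfake GANs use deep neural networks, so treat this only as an illustration of the generator-vs-discriminator dynamic, not as how real tools work.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy "real" data: samples centred on 4.0 (an arbitrary target distribution).
def real_sample():
    return 4.0 + random.gauss(0, 0.3)

# Generator G(z) = mu + z, with a single learnable parameter mu (starts at 0).
mu = 0.0
# Discriminator D(x) = sigmoid(w*x + b), guessing "real" vs "fake".
w, b = 0.0, 0.0

lr, batch, steps = 0.05, 16, 2000
for _ in range(steps):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    gw = gb = 0.0
    for _ in range(batch):
        xr = real_sample()
        dr = sigmoid(w * xr + b)
        gw += (1 - dr) * xr          # gradient of log D(real) w.r.t. w
        gb += (1 - dr)
        xf = mu + random.gauss(0, 0.3)
        df = sigmoid(w * xf + b)
        gw += -df * xf               # gradient of log(1 - D(fake)) w.r.t. w
        gb += -df
    w += lr * gw / batch
    b += lr * gb / batch

    # --- Generator step: push D(fake) toward 1 (i.e., fool the critic) ---
    gmu = 0.0
    for _ in range(batch):
        xf = mu + random.gauss(0, 0.3)
        df = sigmoid(w * xf + b)
        gmu += (1 - df) * w          # gradient of log D(fake) w.r.t. mu
    mu += lr * gmu / batch

# After training, the generator's output has drifted toward the real data,
# which is exactly the equilibrium the transcript describes.
print(round(mu, 2))
```

The key point is the alternation: each side's improvement becomes the other side's harder training signal, which is why the fakes keep getting better.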

  • Of course, your deepfake is only as good as your neural networks. And just because your

  • computer is fooled doesn't mean a human would be.

  • These days, you can make a cheap, amateur deepfake video using about 250 images of the

  • target and about two days of computer processing time. But those won't fool a person.

  • They're not perfect forgeries, and are only good for the Nicolas Cage party trick, or

  • for faking really low-quality footage.

  • The best videos, like the Obama one, still involve professional touch-ups.

  • Still, spotting deepfakes is getting harder.

  • Like, you used to be able to spot one by looking for someone who wasn't blinking enough.

  • But lately, experts are saying you can't rely on that anymore.

  • Like the GANs themselves, the detection algorithms are being improved over time,

  • so what works for detecting fakes today may not work tomorrow.

  • So... how close are we to deepfakes that are indistinguishable from the real thing?

  • That's still up for debate. But one way or another, faking the video doesn't seem

  • to be the biggest hurdle here. It's the audio.

  • We're getting pretty good at pasting faces on bodies, and it's only a matter of time

  • before GANs can generate something that really fools us.

  • But making a person say something? That's a whole separate challenge.

  • Generally, faking voices does work in much the same way, with machine learning and GANs.

  • You just need a few recordings of a person.

  • But the state-of-the-art voice-faking software isn't fooling anyone for now, likely

  • because people just haven't studied it as much.

  • Still, that could change in the future. And while that could be really cool for movie

  • makers, it could also make identifying trustworthy information that much harder.

  • This is enough of a concern that some companies are already preparing for this.

  • For example, one called Truepic is trying to combat deepfakes by focusing on the metadata of photos and videos.

  • For smartphones, that means things like the GPS position of the phone when the video was

  • filmed, and hard-to-fake things like the phone's pressure readings and orientation.

  • The company has made a camera app that reads that data as you film the video, and immediately

  • sends it off to the company's servers.

  • That helps prove the original is real and makes it harder to produce forged versions after it's taken.
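The metadata-fingerprinting idea can be sketched in a few lines. This is not Truepic's actual system, just a hypothetical illustration: hash the clip's raw bytes together with its hard-to-fake sensor metadata at capture time, and a real service would then store that fingerprint server-side so later edits are detectable.

```python
import hashlib
import json

def fingerprint(video_bytes, metadata):
    """Hash a clip's bytes plus its capture metadata into one digest."""
    # Canonical JSON (sorted keys) so the same metadata always hashes the same.
    meta_blob = json.dumps(metadata, sort_keys=True).encode("utf-8")
    h = hashlib.sha256()
    h.update(video_bytes)
    h.update(meta_blob)
    return h.hexdigest()

clip = b"\x00\x01raw-frame-data\x02"  # stand-in for real video bytes
meta = {"gps": [37.77, -122.42], "pressure_hpa": 1013.2, "orientation": "portrait"}

fp = fingerprint(clip, meta)
print(fp)  # a 64-character hex digest

# Any later change to the footage (or its metadata) produces a different digest:
assert fingerprint(clip + b"x", meta) != fp
```

The useful property is that the fingerprint is cheap to compute and store, but can't be recreated for doctored footage without the server noticing the mismatch.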

  • Additionally, one group that makes fake audio clips has experimented with adding a watermark

  • to their content: an artifact that will always identify the clip as being made by them.
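As a toy illustration of the watermarking idea (the group's real technique isn't described here, so this is a made-up stand-in), you could embed an identifying bit pattern in the least significant bit of 16-bit audio samples, which is inaudible but machine-readable:

```python
# Made-up "made by us" signature bits; a real watermark would be more robust.
WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]

def embed(samples, bits):
    """Overwrite the least significant bit of the first len(bits) samples."""
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(samples, n):
    """Read back the LSBs of the first n samples."""
    return [s & 1 for s in samples[:n]]

audio = [1000, -2000, 3000, 400, -50, 60, 7, 8000, 123]  # toy PCM samples
marked = embed(audio, WATERMARK)
print(extract(marked, len(WATERMARK)))  # → [1, 0, 1, 1, 0, 0, 1, 0]
```

Each sample changes by at most 1, far below audibility, yet the clip now self-identifies as synthetic to anyone who checks. The obvious weakness, which the transcript hints at, is that a bad actor can simply strip or never add the mark.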

  • Unfortunately, though, no matter how many safeguards people put in place, there will

  • always be those who don't want to play by these rules.

  • But honestly, that's true for many kinds of content. And it's been possible to edit

  • clips and make misleading videos for a very long time.

  • So no matter when the perfect deepfakes arrive, it's probably worth brushing up on your

  • media literacy skills anyway. If nothing else, it can't hurt.

  • If you want to learn more about media literacy, we recommend checking out Crash Course Media

  • Literacy. It's produced by Complexly, the same studio that brings you SciShow, and

  • it's a twelve-episode series that talks all about understanding, analyzing, and accessing

  • information. You can find it at youtube.com/crashcourse.

  • [ ♪OUTRO ]
