
  • Behind some of the coolest premium effects in Hollywood content is the invisible aid of AI: artificial intelligence.

  • It is just blowing the doors wide open on opportunities for new ways to tell stories.

  • This is a good technology to hang our hat on because it is getting so much better every single year.

  • Machine learning is being baked into workflows, helping create previously unimaginable moments, from big blockbusters to nonfiction TV. I think where AI really is impactful is getting it to do things that human beings can't do, including raising the dead, as if you have Andy Warhol standing in the studio right in front of you, and you looked at him and said, I want you to say it like this.

  • I wasn't very close to anyone, although I guess I wanted to be.

  • Let's examine a few specific use cases of how AI is changing Hollywood's creative workflow.

  • The entertainment industry was spawned by new technology.

  • So it makes sense that from talkies to television to digital video, Hollywood has a history of leveraging new tech, especially in the world of visual effects.

  • When I saw Jurassic Park, that was the moment I realized that computer graphics would change the face of storytelling forever.

  • In the last 25 years that I've been working in film, we've been conquering various challenges: doing digital water for the first time in Titanic, doing digital faces for the first time in a movie like Benjamin Button. And now the state of the art is machine learning, AI applications like the kind Matt's company MARZ develops in house. You can throw an infinite amount of data at it, and it will find the patterns in that data naturally.

  • Thanks to thirsty streaming services, Hollywood is scrambling to feed demand for premium content rich in visual effects.

  • Budgets and time are not growing in a way that corresponds to those rising quality expectations.

  • It's outpacing the number of artists available to do the work, and that's where AI comes in, tackling time-consuming, uncreative tasks like denoising, rotoscoping, and tracking-marker removal for motion capture.

  • This was our first time ever trying AI in a production. We had a lot of footage just by virtue of being on the project and doing 400 shots for Marvel.

  • When we received the footage, which we call the plates: in order to manipulate Paul Bettany's face, there needed to be tracking markers during principal photography. We looked at it and said, okay, removing tracking markers is going to take roughly one day per shot. In order to replace or partially replace Vision's head for each shot (a shot is typically defined as about five seconds of footage), the tracking marker removal itself was about 1/10 of that work.

  • So on a 10-day shot, one day was simply removing tracking markers. We developed a neural net that is able to identify the dots on the face; the artificial intelligence averaged out the skin texture around each dot, removed the dot, and then in-filled it with the average of the surrounding texture.
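
The actual pipeline is proprietary, but the steps named above (find the dots, sample the surrounding skin texture, remove and in-fill) map onto standard image-inpainting tools. A minimal sketch, assuming dark dot-shaped markers and using OpenCV's Telea inpainting as a stand-in for the neural in-fill described in the video:

```python
# Illustrative sketch only, not MARZ's actual pipeline.
import cv2
import numpy as np

def remove_tracking_markers(plate: np.ndarray) -> np.ndarray:
    """Return a copy of `plate` (a BGR frame) with small dark dots in-filled."""
    gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
    # Assumption: markers are small dots darker than the surrounding skin.
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    # Keep only dot-sized blobs so we don't in-fill eyebrows, nostrils, etc.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    dot_mask = np.zeros_like(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < 100:  # dot-sized region (tunable)
            dot_mask[labels == i] = 255
    # Dilate slightly so the fill blends into neighbouring texture.
    dot_mask = cv2.dilate(dot_mask, np.ones((5, 5), np.uint8))
    # In-fill each dot from the texture surrounding it.
    return cv2.inpaint(plate, dot_mask, 5, cv2.INPAINT_TELEA)
```

A learned model earns its keep over a classical fill exactly where the transcript says this one faltered: motion blur and oblique head poses, which a fixed threshold-and-fill heuristic handles even worse.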

  • And Marvel loved it because it sped up production and saved money, exactly what we wanted these solutions to do.

  • Where the solution was faltering was whenever there was motion blur.

  • When Paul Bettany moves his head very quickly to the right or to the left, there are moments where those dots will reappear, partially because in the data set itself we didn't have enough motion blur data.

  • Another example would be whenever the character turned his head: when his eyes were out of frame, you would see those dots reappear as well.

  • The AI recognition is using the eyes as a crucial landmark to identify the face. And so if I turn my head this way and you can't see my eyes, the AI can't identify that as a face.

  • You can fix those things with more data.

  • The more data you feed these things, typically the better, right?

  • There wasn't a lot of clean data available in our next AI use case: the star of the film had been dead for 25 years, yet the director wanted more than 30 pages of dialogue read by the iconic artist Andy Warhol himself.

  • So what do you do?

  • You could hire a voice actor to do a great impersonation, but we found that with voices, you kind of want to retain that humanness that Andy had himself. You can get fairly close with a voice actor, but you really can't get it.

  • And that's where AI technology really helps. Generative audio is the ability for an artificial agent to reproduce a particular voice, but also reproduce the style, the delivery, the tone of a real human being, and do it in real time.

  • Welcome to Resemble, a generative audio engine.

  • When the team initially reached out to us, they proposed what they were going to do.

  • We asked them, like, okay, well, what kind of data are we working with?

  • And they sent us these audio files, recordings over a telephone.

  • They're all from the mid-to-late seventies.

  • The thing about machine learning is that bad data hurts a lot more than good data.

  • So I remember looking at the data we had available and thinking, this is going to be really, really difficult to get right. With three minutes of data, we're being asked to produce six episodes' worth of content, with three minutes of his voice.

  • In those three minutes, he hasn't said every word that's out there. So we're able to extrapolate to other phonetics and to other words, and our algorithm is able to figure out how Andy would say those words.

  • That's where neural networks are really powerful.

  • They basically take that speech data, break it down, and understand hundreds and thousands of different features from it.
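
What "breaking speech down into features" typically means in practice: neural text-to-speech systems work on time-frequency representations such as mel spectrograms, plus prosodic signals like the pitch contour. A minimal sketch with librosa; the exact feature set Resemble uses is not public, so these are the standard stand-ins:

```python
# Illustrative feature extraction for a voice-cloning pipeline.
import librosa

def speech_features(path: str, sr: int = 22050):
    y, _ = librosa.load(path, sr=sr)
    # Mel spectrogram: the time-frequency representation most neural
    # TTS models are trained to predict.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=80)
    log_mel = librosa.power_to_db(mel)
    # Fundamental frequency (pitch) contour: carries much of a
    # speaker's characteristic delivery.
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz('C2'),
                                 fmax=librosa.note_to_hz('C7'))
    return log_mel, f0, voiced
```

This is also why three minutes of telephone-quality 1970s audio is such a hard starting point: the features above inherit every bit of band-limiting and noise in the source recordings.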

  • Once we have that voice that sounds like Andy from those three minutes of data, then it's all about delivery, it's all about performance.

  • I went down to the office because they're making a robot of me.

  • And Andy's voice, it's highly irregular, and that's where the idea of style transfer really came in.

  • So style transfer is this ability for our algorithm to take as input someone else's speech, someone else's delivery.

  • I wasn't very close to anyone, although I guess I wanted to be.

  • We're able to say that line, and then our algorithms are able to extract certain features out of that delivery and apply them to Andy's synthetic, or target, voice.

  • The first one was automatically generated, no touch-ups.

  • I wasn't very close to anyone, although I guess I wanted to be.

  • The second one was, like, touched up by adding a pause.

  • I wasn't very close to anyone, although I guess I wanted to be.

  • And then the third one was basically adding the final touch, where it's like, okay, you know what? I really want to place an emphasis on this particular syllable. So, yeah, let's get a voice actor to do that part, to actually place that emphasis on the right word, the right syllable. And then the third output has those features extracted from that voice actor and applied to Andy's voice.

  • I wasn't very close to anyone, although I guess I wanted to be.
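
Resemble's style-transfer interface isn't public, so the sketch below is a hypothetical outline; every class and method name is invented for illustration. It only shows the shape of the workflow described above: clone the target voice from limited data, extract prosodic features from a voice actor's reference take, then condition synthesis on both.

```python
# Hypothetical interface sketch of prosodic style transfer.
import numpy as np

class StyleTransferTTS:
    """Toy stand-in for a prosody-transfer synthesizer (names invented)."""

    def __init__(self, target_speaker_embedding: np.ndarray):
        # Embedding learned from the ~3 minutes of target-speaker audio.
        self.speaker = target_speaker_embedding

    def extract_prosody(self, reference_audio: np.ndarray) -> dict:
        # In a real system: pitch contour, per-phoneme durations, energy,
        # pauses -- the "certain features" extracted from the actor's take.
        raise NotImplementedError

    def synthesize(self, text: str, prosody: dict) -> np.ndarray:
        # Condition the decoder on (text, speaker embedding, prosody) so the
        # output sounds like the target speaker with the reference delivery.
        raise NotImplementedError

# Usage shape, matching the three-pass demo above:
#   tts = StyleTransferTTS(andy_embedding)
#   prosody = tts.extract_prosody(voice_actor_take)   # actor supplies emphasis
#   audio = tts.synthesize("I wasn't very close to anyone...", prosody)
```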

  • You have definitely heard AI voices being used in the past for touch-ups, for a line here or there. This is probably the first major project that's using it so extensively.

  • Most of the effects are still a very manual process. Characters can be extremely challenging: creatures, things like fur and hair. Those things can be extremely challenging and time-consuming.

  • One notable example of where the technology is headed is the scenes involving advanced 3D VFX in Avengers: Endgame, where Josh Brolin plays Thanos.

  • We capture tons and tons of data in this laboratory setting with Josh, and then we use that data to train neural networks inside of a computer to learn how Josh's face moves.

  • They'll say lines, they'll look left, look right.

  • They'll go through silly expressions and we capture an immense amount of detail in that laboratory setting.

  • Then they can go to a movie set and act like they normally would act.

  • They don't have to wear any special equipment.

  • Sometimes they wear a head camera, but it's really lightweight stuff, very unobtrusive, and it allows the actors to act like they're in a normal movie.

  • Then later, when the animators go to animate the digital character, they kind of tell the computer what expression they want the actor to be in, and the computer takes what it knows, based on this really dense set of data, and uses it to plus up, to enhance, what the visual effects animator has done and make it look completely real.
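
As a rough illustration of the train-then-enhance pattern described here (not Digital Domain's actual system; all names and dimensions below are invented for the sketch), one could regress dense facial detail from the coarse controls an animator poses:

```python
# Illustrative "plus up" model: coarse rig controls -> dense mesh offsets.
import torch
import torch.nn as nn

N_CONTROLS, N_VERTS = 64, 30_000  # hypothetical rig/mesh sizes

class FacePlusUp(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_CONTROLS, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, N_VERTS * 3),  # per-vertex 3D offsets
        )

    def forward(self, controls: torch.Tensor) -> torch.Tensor:
        return self.net(controls).view(-1, N_VERTS, 3)

# Training pairs come from the lab session: (rig controls, captured mesh).
def train_step(model, opt, controls, captured_offsets):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(controls), captured_offsets)
    loss.backward()
    opt.step()
    return loss.item()

# Inference: the animator poses the rig; the model adds the learned detail.
#   detailed_mesh = neutral_mesh + model(animator_controls)
```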

  • So there will come a time in the future, maybe it's 10 years, maybe it's 15 years, but you will see networks that are going to be able to do really creative stuff.

  • Again, that's not to suggest that you remove talented artists from the equation, but I mean, that's the bet that we're taking as a business.

  • Is AI going to take over my job?

  • What I see happening right now is actually quite the opposite: it is creating new opportunities for us to spend time on things that are creatively meaningful. Rather than spending lots of time doing menial tasks, we're actually able to focus on the creative things, and we have more time for iteration.

  • We can experiment more creatively to find the best-looking result.

  • I think that the more that AI can do the menial stuff for us, the more we're going to find ourselves being creatively fulfilled.

  • Again, the argument for us is really about creating content that isn't humanly possible.

  • So, you know, we're not interested in creating an ad spot that a real voice actor would do, because in all honesty, that real voice actor would do way better than the AI technology would, and would be way faster, if you're just delivering a particular sentence or a particular line.

  • The technology to do deepfakes is so prevalent. You can get apps on your phone now that can pretty much do a rudimentary deepfake. It's going to be interesting in the future: are we going to have to put limits on this technology?

  • How do we really verify what's authentic and what isn't? There are social repercussions for it as well that I think we don't quite understand yet.

  • I absolutely believe that this technology could be misused.

  • Our number one priority is to make everyone feel comfortable with what we're doing.

  • I think it comes down to educating the general population, eventually making them understand that they should think through whatever they are looking at, whatever they're reading, and now whatever they're hearing. We feel we're directionally correct in our bet that this is a good technology to hang our hat on, because it is getting so much better every single year, and we don't want to miss what we see as, like, a once-in-a-lifetime opportunity here.
