
  • We'll talk about, uh, Deep Learned Super-Sampling.

  • So yeah, it's got a fancy name. It sounds cool. It is quite cool.

  • Let's imagine you're running a game, right?

  • I don't do that as often as I'd like anymore.

  • But- but- maybe you're pushing your graphics card right to the limit of where it- where it's happy, right?

  • You've got a 4K monitor, you're pushing a lot of pixels to the screen,

  • the game is a big game with lots of- lots of effects.

  • The problem is that then your frame rate's going to start suffering.

  • So maybe what we could do is run at a lower resolution,

  • which is going to be that much easier for your graphics card, and then use deep learning

  • to recreate that 4K image - and if the deep learning is good enough, you won't be able to tell the difference.

  • So the most recent generation of Nvidia graphics cards have these Tensor Cores on board, right?

  • Tensor Cores are basically very quick matrix multiplication circuitry.

  • Matrix multiplication comes up a lot in deep learning, which is kind of what these were designed around.

  • But it has some applications to games, a little bit.
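As a minimal sketch of why matrix multiplication dominates deep learning (a pure-Python toy, not actual Tensor Core code): a single fully-connected network layer is essentially one matrix-vector multiply plus a bias, and a deep network is a stack of these, so hardware that multiplies matrices quickly speeds up the whole thing.

```python
# A single fully-connected layer computed as a matrix multiply: y = W x + b.
def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

W = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]   # weights: 3 outputs, 2 inputs
b = [0.5, -0.5, 0.0]                        # bias, one value per output
x = [2.0, 3.0]                              # the layer's input

y = [wx + b_i for wx, b_i in zip(matvec(W, x), b)]
print(y)   # [2.5, 5.5, 5.0]
```

A real network does this with far bigger matrices, millions of times per second, which is exactly the workload Tensor Cores accelerate.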

  • I'm going to betray my lack of knowledge about modern games, but I don't really know much about them.

  • I play some games, but very easy ones.

  • So, I- you know, I'm not running games at 4K and worrying about framerates, right?

  • But some people do concern themselves with these things and they spend a lot of money on these graphics cards,

  • and they want them to run as well as they can.

  • The problem is that maybe some game comes out,

  • and it has a huge demand on your- on your GPU, right? For every pixel in your game.

  • The GPU's gotta work out which triangles in the world it needs to render, what color they're going to be,

  • it's got to include lighting and shadows,

  • you know, blurs and - you know - depth of field effects, like I was talking about last- last video.

  • You know, this takes a long time and the more pixels you use, the worse it gets.

  • OFF-SCREEN: And motion blur, of course?

  • And motion blur, I know people love motion blur. I myself can't get enough of it.

  • What do you do about this? One thing you do is you just make your graphics cards faster, right?

  • This is something that happens every generation, not just on NVIDIA cards, but - you know - all graphics cards, and that helps a lot.

  • All right, but the problem is we've had a jump from 1080p to 4K, and that is not a slight increase in the number of pixels.

  • It's four times the number of pixels, which on a simple level means four times the amount of work to render the screen.

  • And when we talk about things like supersampling, sometimes we're looking at a pixel more than once, which makes it even slower.
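The arithmetic behind that jump, as a quick sketch (using the standard 1920×1080 and 3840×2160 resolutions):

```python
# Pixel counts for 1080p vs 4K, and the naive per-frame work ratio.
pixels_1080p = 1920 * 1080   # 2,073,600 pixels
pixels_4k = 3840 * 2160      # 8,294,400 pixels

ratio = pixels_4k / pixels_1080p
print(ratio)        # 4.0 - four times the pixels, so roughly 4x the work per frame

# Looking at each pixel more than once (e.g. 4x multisampling) multiplies again:
print(ratio * 4)    # 16.0 - 4K with 4 samples/pixel vs plain 1080p
```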

  • So what Nvidia have tried to do here is say, well, maybe we can save quite a lot of performance by, let's say, running our game at 1080p, right?

  • But if you just upsample that to 4K, maybe it won't look very good, because basically it's going to be blurred.

  • So maybe we could do a slightly smarter upsampling, using these sort of deep learned super-resolution techniques.

  • OFF-SCREEN: As I understand it, even modern TVs these days do scaling - they scale things up - and Blu-ray recorders and DVD recorders have always had an element of doing this scaling. Is this just a more advanced version?

  • Yeah, and some TVs have started to bring in, as far as I know, deep learned - you know - smart, AI-driven upsampling.

  • Right, the idea is this: what happens when you run a game is you've got all these effects layered on top of one another to get a good-looking image.

  • Now, if your frame rate starts to drop,

  • what you're going to do, probably, is either drop your resolution - or, if you don't want to drop your resolution because you don't like that,

  • you start to remove some of the effects, right?

  • So you drop your shaders down from high to something else, and that will reduce the amount of work per pixel and slightly increase your frame rate.

  • But some people don't want to do this, right? They spent a lot of money on their computer and they want to run it on full graphics.

  • But maybe a game has come out that's just really, really demanding.

  • There are kind of two problems we want to solve here. One is this problem of aliasing, right, which is that if you rasterize a scene with triangles,

  • they don't always line up exactly where the pixels are, so you get a kind of jagged edge, right?

  • That's one problem, which doesn't look very nice, and there are techniques to get around this.

  • The other problem is this issue of resolution, right? If you drop down from 4K to 1080p, you've, you know, gone four times faster.

  • That's a huge benefit, right? If you could do that without noticing the difference in appearance,

  • well, that's a winner - that's going to be great. And then you can start putting even more computational time into shader effects and things.

  • So yes, you're running at a lower resolution, but those pixels have had a good deal of time spent on them - they look really nice.

  • One of the problems we have is this problem called aliasing, right?

  • Now, I'll just talk about this very briefly, because it's not really what this video is about.

  • But if you're rendering an image and your triangle falls like this, then this pixel actually isn't all of this object here.

  • Maybe this whole area is dark and this object is light. It's not all of this object -

  • it's like 70% dark and 30% light. Now, the problem is that a single sample can't capture that.

  • So if you sample this pixel here, this pixel here, and this pixel here, you're going to get an edge that goes light,

  • dark, light, like this, right? And that looks ugly.

  • So what you'd use is a technique usually called multisample anti-aliasing, where essentially you take multiple readings from within this pixel - like these four here, or something more dense than that.

  • And there are lots of different ways to do this. Then you have those values, and the nice thing is, you've got three readings of dark and one reading of light, so you come out with a reading of about 75% dark, and you get a smooth edge.
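That four-sample average can be sketched in a few lines (hypothetical coverage values; a real GPU does this resolve step in hardware):

```python
# Multisample anti-aliasing in miniature: average several sub-pixel
# samples instead of taking one reading at the pixel centre.
def resolve_pixel(samples):
    """Average the colour samples taken inside one pixel."""
    return sum(samples) / len(samples)

dark, light = 0.0, 1.0

# An edge pixel: three sub-samples hit the dark object, one hits the light one.
edge_pixel = resolve_pixel([dark, dark, dark, light])
print(edge_pixel)   # 0.25 - i.e. 75% dark: a smooth intermediate value, not a jag
```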

  • This is one thing that graphics cards do to try and make things look a little bit better. If you turn off all these anti-aliasing approaches, then what you'll get is your raw, jaggedy lines.

  • It'll run nice and quickly. But if you're sampling four times per pixel, that could be a four-fold decrease in speed,

  • right? And that has a performance hit, unless your graphics card is just amazing. Mine is not. That's one problem, right?

  • The other problem is that, you know, when you go up to 4K, it's just four times the number of pixels - whatever you're doing per pixel is multiplied by four.

  • That's a huge problem. So if you're running 4K and, for example, four samples per pixel, that's a lot more computation than if you were just running without anti-aliasing at 1080p.

  • And so you inevitably have to drop down some setting so that you can get a good frame rate for your game, and it doesn't look as nice.

  • So one option is just to make the graphics cards faster, right?

  • But this isn't the answer to everything, and they get more and more expensive, right?

  • That's also a problem.

  • These new cards have in them these Tensor Cores, which are very good at specific operations, namely matrix multiplications.

  • So there is a chance that what we could do is use a deep network to kind of just clean this image up for us,

  • very quickly, before it's presented to the screen, and that will give us a nice presentation for our image without the huge performance hit of doing all this anti-aliasing and all of this massive resolution, right?

  • So, broadly speaking, it works like this: your computer will render a raw frame, with aliasing and all the kind of problems that it has, and this may or may not be at the same resolution as your monitor.

  • It will then be passed through a deep network - which is not very deep, because this has got to happen pretty quickly - which utilizes the Tensor Cores of these new graphics cards to work very, very quickly, and that will produce your nice 4K, anti-aliased shot.

  • Which theoretically looks really, really nice. The way they train this network - where they take the lower resolution, aliased version and the higher resolution, anti-aliased version - is they're going to train it basically to try and recreate the image on a pixel level as closely as possible.

  • But they'll also add additional terms so that it looks perceptually nice:

  • basically perceptual loss functions, which try to make things look aesthetically closer to the target, right? Now,

  • different loss functions are going to have different effects, so they'll be trying out all these different loss functions.

  • They might even use adversarial loss functions, which come from these generative adversarial networks we've talked about, right?

  • There are loads of different ways to train these, and how you do that is going to influence the actual result you get.
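A sketch of how such a combined loss might be put together. This is pure illustration: the `features` function here is a made-up stand-in for a real perceptual feature extractor (in practice, activations from a pretrained network), and the weights are invented - nothing here is Nvidia's actual training setup.

```python
# Hypothetical combined training loss for the upsampling network.
def pixel_loss(pred, target):
    """Mean absolute per-pixel difference - 'recreate the image pixel by pixel'."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def features(img):
    # Stand-in for a perceptual feature extractor. Here: local differences,
    # so edges and texture contribute to the loss, not just raw pixel values.
    return [b - a for a, b in zip(img, img[1:])]

def perceptual_loss(pred, target):
    """Compare images in feature space rather than pixel space."""
    return pixel_loss(features(pred), features(target))

def total_loss(pred, target, w_pixel=1.0, w_perc=0.1):
    # An adversarial term from a discriminator network could be added
    # here in the same weighted-sum way.
    return w_pixel * pixel_loss(pred, target) + w_perc * perceptual_loss(pred, target)

pred = [0.1, 0.5, 0.9]     # the network's output (a tiny 1-D "image")
target = [0.0, 0.5, 1.0]   # the high-quality ground truth
print(total_loss(pred, target))
```

Changing the weights, or swapping the perceptual term for an adversarial one, changes what the trained network prioritizes - which is exactly why different loss functions give visibly different results.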

  • Right, because it's not going to work perfectly. So there are kind of two unanswered questions here.

  • I mean, firstly: does this work? And on that, personally, I don't know, because I haven't tried it, right? But I think the results vary, right? That's certainly true.

  • But also: how do we train this neural network, right?

  • Because what you don't want to have happen is something like the unlock-your-phone-with-your-face thing, where you have to enrol your own face -

  • you don't want users to have to do this training themselves, right?

  • This is something for Nvidia to do if they're gonna, you know, sell this and make money off this technology. And that's exactly what happens.

  • Sometimes, shortly before a game's release, the game developers will send an early copy to Nvidia, and Nvidia will start generating training data and train a neural network to do this process: to take an image that isn't quite as nice - it's got aliasing, it's lower resolution - and perform this smart upsampling, up to 4K, anti-aliased, right?

  • That's the idea, and they do this by generating essentially a perfect representation of the game using 64-samples-per-pixel anti-aliasing, right?

  • So that is, for every pixel, they do 64 samples from within that pixel instead of just one. Really nice -

  • and sixty-four times slower than normal. Then they take that as their target output, and the input is just the raw frame with no anti-aliasing at all,

  • maybe at a lower resolution, and they train this network to take the raw frame and output the 64-samples-per-pixel, really nice frame, right?
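The data-generation step might look something like this sketch, with a toy one-line "renderer" standing in for the game: one jittered sample per pixel gives the fast, aliased input, and 64 averaged samples per pixel give the slow, smooth target.

```python
import random

def scene(x, y):
    # Toy "renderer": a dark object above a diagonal edge, light below it.
    return 0.0 if y > x else 1.0

def render(width, height, spp):
    """Render a frame with spp jittered samples per pixel, averaged."""
    frame = []
    for py in range(height):
        row = []
        for px in range(width):
            total = sum(scene(px + random.random(), py + random.random())
                        for _ in range(spp))
            row.append(total / spp)
        frame.append(row)
    return frame

random.seed(0)
raw_input = render(8, 8, spp=1)    # the network's input: fast, hard-edged, aliased
target = render(8, 8, spp=64)      # the training target: 64x slower, smooth edges
```

Collect pairs like `(raw_input, target)` over millions of frames and you have the supervised dataset: the network learns to map one to the other.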

  • And so really what it comes down to is whether, in practice, this works.

  • And the answer, I think, is probably some of the time yes, some of the time no.

  • This is true of most deep learning, right? People don't tend to say this as much as they should. But, you know,

  • will it generalize? If you take 10 million frames of Battlefield V and train this network on them, to get as close to this output as possible,

  • and then you generate the ten-million-and-first frame - the next frame, right -

  • will it have as good a performance on that unseen frame? And the answer is: usually pretty good, but it won't ever be perfect,

  • right? Especially if you're going from 1080p to 4K.

  • So I think Nvidia have kind of made the point here that actually this is about when you're running at the very top end of your graphics card's capability. So, in some sense,

  • they're not talking about people who are barely struggling to run the game at 1080p -

  • you should already be able to just about run the game at 4K, and then maybe this will make it look slightly nicer. There are kind of two ways of

  • doing this: one is you take a 4K input and you use this to perform anti-aliasing, and the other is you take a low-resolution input and you use this to perform both

  • anti-aliasing and upsampling. And that's a harder job to do, because if you imagine that you've got a 1080p source,

  • then what actually you're going to have is a series of pixels like this, and you've got to come up with all of these

  • pixels in between, right? And this is just like increasing the size of an image - "enhance, enhance". You know, will it work?

  • I don't know. But it's going to be better than plain bilinear or bicubic upsampling, right?

  • Because it's going to be bearing in mind this local area - it's going to say, well, look, there's an edge coming down here,

  • so this needs to be sharper, this doesn't need to be as sharp, things like this.
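For contrast, the naive upsampling a learned model should beat can be sketched in one dimension (linear rather than bicubic, for brevity): every invented in-between pixel is just the average of its neighbours, so a hard edge smears into a ramp - exactly the blur an edge-aware network avoids by deciding where to stay sharp.

```python
# Naive 1-D linear upsampling by 2x: each new in-between pixel is the
# average of its neighbours, so sharp edges become blurred ramps.
def upsample_2x(row):
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b) / 2)   # the invented in-between pixel
    out.append(row[-1])
    return out

edge = [0.0, 0.0, 1.0, 1.0]    # a hard edge in the low-resolution source
print(upsample_2x(edge))       # [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0] - edge smeared
```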