
  • What is going on, everybody? And welcome to a video in the Python Plays GTA series.

  • Uh, just want to go through some of the code aspects of Charles 2.0, as well as address some of the frequently posted comments.

  • So, first thing: I'm just gonna drag up the TensorBoard.

  • This is the training cycle.

  • I actually ended up having to train this model twice; the first time, I forgot to save the model.

  • Yeah, I was pretty angry.

  • Anyway, um, really, this one took 24 hours.

  • So, like, just one whole day, and that's 24 hours per epoch.

  • And all I did was train this model for one epoch.

  • I went that way because, at least with MobileNet, I trained that for multiple epochs and then went back and saw how each epoch did over time in terms of validation accuracy, and after one epoch, validation accuracy did only one thing, and that's go down.

  • So, with Xception, I honestly have not run it beyond one epoch, looking at validation accuracy and validation loss especially; you know, like, validation accuracy has gone up after, you know, I don't know, a third of training.

  • So after that, like, this looks like it's about to start coming down anyways; plus, validation loss is continuing to drop.

  • So I only did one epoch.

  • At some point I may end up pushing this model a little further just to see, but I don't think it's gonna do any better.

  • So really, only one epoch.

  • A ton of training data, though.

  • And yeah, that one epoch took 24 hours.

  • So anyways, um, yeah, cool.

  • So that's the model that we're using: Xception.

  • You can kind of see how MobileNet compared to Xception.

  • Very, very close in terms of their performance, just a little different, like Xception dropped quite quickly with the loss.

  • And let me just refresh it so we get both. Oh, no, why's this graph all goofy? I guess it's because of the validation loss.

  • But if I un-smooth it any more, you can't see the mean ones.

  • But you can see here that Xception was just slightly better the whole time in all of the metrics that you can track, or at least all metrics that we tracked; so anyways, Xception it was.

  • So, uh, next thing I want to talk about is... um, well, one thing was for sure: the image pre-processing, just divide by 255. So all of your image arrays need to get divided by 255. That's really simple.

  • I don't think I need to show code for that.

  • Just understand that that was one of the changes that got made.
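For the record, that normalization step really is a one-liner; a minimal sketch, assuming the frames are NumPy `uint8` arrays (the random frame below is just a stand-in for real screen captures):

```python
import numpy as np

# Hypothetical example frame: one 270x480 RGB image with uint8 pixels (0-255)
frame = np.random.randint(0, 256, size=(270, 480, 3), dtype=np.uint8)

# The pre-processing change: scale pixel values into the 0.0-1.0 range
normalized = frame.astype(np.float32) / 255.0
```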

  • So I'm gonna pull up the GitHub, full-screen it, and come to two files I just updated.

  • One is the Xception model itself, and the other one is this weighting distribution by class.

  • So coming over here, this is just the setup for the Xception model, because all you really have to do is just import it from Keras.

  • And then you set the, uh, the input shape for that first layer.

  • And obviously you don't accept their weights.

  • They do offer, like, pre-trained weights that you can use transfer learning with.

  • I wanted to use all of my own data.

  • I've got plenty of data, that wasn't an issue, so I didn't need to do transfer learning; specify the shape again, 270 by 480. This is quite a big resolution. If you want, you could take a picture.
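A rough sketch of that kind of setup, assuming TensorFlow's bundled Keras; the classification head and the class count (nine key combinations) are assumptions for illustration, not the exact code from the repo:

```python
from tensorflow.keras.applications import Xception
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# weights=None trains from scratch on our own data; the pre-trained
# ImageNet weights (transfer learning) are deliberately not used here.
base = Xception(weights=None, include_top=False, input_shape=(270, 480, 3))

# Hypothetical classification head over an assumed nine key-combination classes
x = GlobalAveragePooling2D()(base.output)
out = Dense(9, activation="softmax")(x)
model = Model(inputs=base.input, outputs=out)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```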

  • I have... I should have one, maybe even in here; let me scroll down real quick.

  • I don't, but I bet I do.

  • On cyber dot io, uh, I have an image at 480 by 270 from years ago.

  • Yeah.

  • So this is a 480 by 270 resolution.

  • And if we just kind of, like, zoom in a little bit... I mean, I feel like at this resolution, you can see all the things that you really need to see in a model.

  • Like, I can tell that's a car and a bus.

  • And this is a car, a car.

  • This might be a motorcycle, but it's so far off in the distance, I don't really care.

  • I can tell these are cars; the traffic light is easily discernible.

  • Buildings, trees.

  • I feel like 480 by 270 is good enough.

  • Also, the mini map is easy to read.

  • Whereas at 160 by 90? That's, like, impossible.

  • And like, this is clearly a bus.

  • But what the heck is this?

  • I really don't know.

  • Is that a bench, or is that a car? You know? And so I think that's a pretty good resolution.

  • So we're gonna go ahead and just continue with that resolution.

  • Um, really, that's it.

  • Honestly, it's just: use Xception rather than Inception V3.

  • The next thing is the weighting class distributor.

  • I'll come up with a better name at some point, but basically what it does is it distributes the weights... um, rather, it adjusts weights in order to effectively distribute classes evenly.

  • So in the prepared... prepared data set... uh oh, this is actually... I think this will be... that's the training data.

  • But the actual validation data is something like... it's like 500 gigabytes worth of data.

  • It's a ton of training data.

  • Um, I'm sorry, not 500 gigabytes.

  • Uh, like 500 megabytes worth of training data.

  • It's, ah, a few samples from each of the segments of things that we did.

  • So I created some training data, and other people had created some training data, and we kind of pooled them all together.

  • I also created training data at different times.

  • At one point, I started using the speedometer to attempt to see if we could teach Charles, like, how fast he was going at a given time, but then also the waypoints weren't always there.

  • But later we added the waypoint.

  • So this model's been trained with waypoints, without waypoints, with a speedometer, without a speedometer.

  • And then in all the other scenarios, like at night, in the daytime, police, no police, and so on, storms, all that stuff.

  • So this model has training on all that, and that's really why this model works without the, you know, like, the GPS and stuff like that.

  • I suspect that's why this model performs much, much better.

  • One of the things that a lot of people said is, like, "Hey, I see this model goes way too fast."

  • That's kind of the whole point.

  • The whole point of the model is to drive erratically like I don't know about you, but when I play Grand Theft Auto, I drive pretty darn erratically, and I think most people do as well.

  • So the whole point of Charles is to drive like that, and some people are claiming he doesn't brake.

  • Maybe you weren't here for the other models.

  • This is interesting.

  • I just noticed this; that can't be correct.

  • I think I would have to go in there and fix that.

  • I wonder when that came in; that has to have come in at some point.

  • Um, so I gotta fix that.

  • Anyway, Um, sorry.

  • I got distracted when I noticed it; it's 1.8.

  • It must have just gotten hit or something when I was moving things.

  • Anyways, some people are like, "Oh, you know, it doesn't use its brakes."

  • He definitely uses his brakes.

  • So if you tune into the stream or something, you can hear the squeal of his brakes when he's using them, and you can also just see he slows down considerably.

  • This is probably the first model that actually uses its brakes, and it does quite heavily.

  • And then as far as crashing into things, like, I would say the previous Charles model, like, I don't know, avoided things 10 to 20% of the time.

  • Whereas this one, 70% of the time.

  • I mean, he's doing pretty good, in my opinion.

  • Um, I guess that doesn't matter, but yeah.

  • So the other things that I did here... so, this script just goes through... and just ignore that; eh, I'm gonna fix it at some point.

  • And then here's the decay.

  • And then all it does is it goes through these; you know, you start with weights of 1.0, um, and then this is just so I know which is which.

  • Then also, I have a "close" dict.

  • Basically, this is keys that are close, right?

  • So, um, if the model predicts A, but it was meant to be a WA, that's better than if the model predicted D and it was meant to be a WA, right?

  • So I wanted to just know, as I adjusted the weights, if we were losing sight of also just being close to the right answer.

  • Because the theory, of course, is that the validation data is perfectly balanced, so all the classes should be predicted an equal number of times.

  • And so the theory of this script is: let's adjust the weights until that's true.

  • And if we do that, the end result will be a better model.
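That adjust-until-balanced idea can be sketched roughly like this; the class list, the decay factor, and the exact update rule here are all assumptions for illustration, not the actual script:

```python
import numpy as np

CLASSES = ["W", "S", "A", "D", "WA", "WD", "SA", "SD", "NK"]  # hypothetical class order
DECAY = 0.99  # hypothetical decay factor applied per adjustment pass

def adjust_weights(weights, prediction_counts, target_count):
    """Shave a little weight off any over-predicted class, nudging the
    overall prediction distribution toward perfectly even."""
    new_weights = weights.copy()
    for i, count in enumerate(prediction_counts):
        if count > target_count:
            new_weights[i] *= DECAY  # over-predicted, so reduce its weight
    return new_weights

# Start with weights of 1.0 for every class
weights = np.ones(len(CLASSES))

# Hypothetical counts of how often each class was predicted on a
# balanced validation set (a perfectly even split would be 100 each)
counts = np.array([120, 80, 100, 95, 140, 90, 85, 100, 90])
target = counts.sum() / len(CLASSES)

weights = adjust_weights(weights, counts, target)
```

Re-run this, re-counting predictions with the new weights each pass, and the distribution drifts toward even; as noted at the end of the section, run it forever and it just keeps shaving weights off.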

  • And so anyways, this is another way of just kind of measuring, like, is that still happening?

  • And then, of course, running it in game is the final test, right?

  • Is this actually better?

  • So anyway, here we just load in the model.

  • Um, we have this distribution dictionary.

  • Then this is just for simple calculations, and then we just start going through the data.

  • So again, like I said, you bring in the data.

  • This is just opening the training files, which are raw data.

  • So then here, we divide by 255; this is the answer, I think, the classification.

  • We checked the prediction.

  • Um, and then we are using weights because that's the whole point of this.

  • So it's just whatever the output from the neural network was.

  • So this is just a NumPy array of this distribution, of what it thinks the prediction is.

  • So then we apply this NumPy array of weights.

  • We just multiply it.

  • So this is not new.

  • We've been doing that since the very, very beginning.
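The weighting step itself is just an element-wise multiply over the network's output; a minimal sketch (the prediction and weight values here are made up, and the nine-class layout is an assumption):

```python
import numpy as np

# Hypothetical softmax output from the network for one frame (nine classes)
prediction = np.array([0.30, 0.05, 0.10, 0.10, 0.25, 0.08, 0.04, 0.03, 0.05])

# Per-class weights, tuned by hand or by the distribution script
weights = np.array([0.8, 1.0, 1.0, 1.0, 1.1, 1.0, 1.0, 1.0, 1.0])

# Multiply, then the highest-scoring class becomes the chosen key press
weighted = prediction * weights
choice = int(np.argmax(weighted))
```

Note how the weights can flip the decision: the raw argmax here is class 0, but after weighting, class 4 wins.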

  • So if we go to test_model.py and we scroll down, I am sure we will find... it looks like this.

  • Yes.

  • So again, same thing: model.predict, and then we just multiply that prediction by another array of weights.

  • Now, these... these weights, I just adjusted by hand, and then, what, tested in game, adjusted again by hand, tested in game.

  • And I just felt like there's got to be a better way than doing that.

  • And so that's how I came up with this method here, so that all it does is, as it, you know, iterates through all this stuff, it just kind of prints out both the new accuracy, prints out the weights, the act... the real accuracy against the actual data, and then the closeness accuracy.

  • So again, the closeness is just like, "Hey, if it was meant to be a W, but it says WA, we're going to say that's still worth, like, being 30% right."

  • Okay, um, so you still take a hit, but it's not the full hit.
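That partial-credit idea can be sketched like so; the 30% figure follows the description above, but the exact close-key mapping is an assumption:

```python
# Hypothetical "close" mapping: for each true key combination, which
# predictions still count as partially right (e.g. W vs. WA)
CLOSE = {
    "W": ["WA", "WD"],
    "WA": ["W", "A"],
    "WD": ["W", "D"],
}
PARTIAL_CREDIT = 0.3  # a close miss is worth 30% of a correct answer

def closeness_score(true_key, predicted_key):
    """Full credit for an exact match, partial credit for a close one,
    nothing otherwise: you still take a hit, just not the full hit."""
    if predicted_key == true_key:
        return 1.0
    if predicted_key in CLOSE.get(true_key, []):
        return PARTIAL_CREDIT
    return 0.0
```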

  • So anyway, that's what this script does.

  • So then I just ran this for, like, many, many iterations; and basically, you just run it... you can run it indefinitely.

  • But if you just keep running it indefinitely, it's gonna keep shaving weights off,