
  • What is going on, everybody?

  • And welcome to part 10 of the AI in StarCraft II with Python series.

  • In the previous tutorial, what we were doing is working on some code to build training data.

  • And I told you guys that I would build the training data.

  • You didn't necessarily have to do that.

  • And I have done that.

  • So what we're doing now is working on building our model.

  • That's gonna work with that data.

  • And then probably in the next video, we'll feed the data through the model, and then probably in the video after that we'll do the actual test.

  • How did that model do?

  • So, um, in the text-based version of the tutorial, I've put a link to the training data.

  • You can just click that link.

  • Boom.

  • Here you are.

  • Downloaded, extracted.

  • There's your training data.

  • So what we're gonna be doing is using Keras. If you don't have it, you're gonna want to do a pip install keras, and that's pretty well it. But, you know, anyway.

  • Um, and, uh, at least for now I'm using Keras 2.1.2.

  • Ah, and then also, you'll need tensorflow-gpu.

  • Well, I guess you just need TensorFlow, but you probably want the GPU version, so tensorflow-gpu. And the version I'm actually using at this present moment is 1.9, which just came out. In the text-based version,

  • I was using 1.8.

  • Both of them work just fine.

  • Also, some people have been noting that you cannot do this in Python 3.7, which is apparently the case because of websockets.

  • I'm not really sure exactly what the problem is there, but there is a problem there.

  • Okay, so we're gonna be using Keras.

  • So we're going to import keras.

  • And then from keras.models we're going to import the Sequential type of model, which just means it's a feed-forward neural network.

  • It's just, things go in sequence.

  • OK, so it's not anything fancy.

  • From keras.layers we're going to import the Dense layer.

  • That's just your typical layer. We're going to import Dropout, which is just used to make your neural network theoretically more robust.

  • And then we're gonna import Flatten, because we're going to be using a convolutional neural network.

  • But a common thing that people do is, right before your output layer, you do, like, just a simple dense layer, which is why we're importing Dense there. Um, and to do that, you'll need to first flatten the data.

  • So we've got those things.

  • We also want Conv2D, because we've got two-dimensional convolutional layers that we're gonna be using, and then also we're gonna have pooling layers, so import those as well: that's MaxPooling2D, even though we haven't used them yet.

  • Okay, so then from keras.callbacks we're going to import TensorBoard, because we want to be able to visualize how the model is actually doing, so we can track things like accuracy and the loss, and other metrics too if you want.

  • So then, besides that, we're gonna import numpy as np. Whoa.

  • I really don't like that.

  • It's done that to me before.

  • numpy as np. We're gonna import os.

  • So we're gonna use numpy when we're reading in the data to do some shaping, as well as os to iterate through the training directory, and then we're also going to import random.
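  • Put together, the imports look something like this (a minimal sketch; I'm assuming the pooling import is MaxPooling2D, and the pip line just pins the versions mentioned above):

      # pip install keras==2.1.2 tensorflow-gpu==1.9.0
      import keras
      from keras.models import Sequential              # feed-forward model type
      from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
      from keras.callbacks import TensorBoard          # for logging/visualization
      import numpy as np   # shaping the data as we read it in
      import os            # iterating through the training directory
      import random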

  • So in the directory I'm working in here, let me pull it over.

  • We've got the custom sc2 package that we got prior to this.

  • And then I've also extracted the train_data, although that's quite small.

  • Well, I'll need to put a fuller amount of training data in there.

  • Let me see if I've got a better one.

  • I think this one's pretty full.

  • I just want at least enough.

  • So I actually did all my training on a different, um... I did it on a Paperspace machine.

  • Uh, I think I'll just delete that one and then paste it in.

  • There we go.

  • Okay.

  • Hopefully that's done copying the 13 gigabytes by the time we actually need it.

  • Actually, we're gonna need it in the next tutorial anyway.

  • Um, moving along.

  • So the next thing is, we're gonna start to define our model.

  • So our model is going to be a Sequential. Then, um, the next thing we're gonna do is start adding in all of the layers to the model, and, uh, I don't really see much point in writing all that out.

  • So, copy-paste. So, uh, what's going on here is we're just adding in the layers: you've got a convolutional layer, a convolutional layer, then we've got some pooling.
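  • Something like this (a sketch of the pasted code: the first block's 32 features, 3x3 windows, input shape, same padding, and 20% dropout are all described below; the second block's 64 filters are my assumption):

      model = Sequential()

      # 32 features (filters), 3x3 windows, input is the 176x200x3 game data
      model.add(Conv2D(32, (3, 3), padding='same',
                       input_shape=(176, 200, 3), activation='relu'))
      model.add(Conv2D(32, (3, 3), activation='relu'))
      model.add(MaxPooling2D(pool_size=(2, 2)))   # pooling after the conv pair
      model.add(Dropout(0.2))                     # the 20% dropout discussed below

      # a second convolutional block (filter count assumed)
      model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
      model.add(Conv2D(64, (3, 3), activation='relu'))
      model.add(MaxPooling2D(pool_size=(2, 2)))
      model.add(Dropout(0.2))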

  • So the convolutional layer has 32, um, I guess you would call them features.

  • And the windows are three by three.

  • And then the input shape, that is the shape of the game data that we're passing.

  • So that is 176 by 200 by 3.

  • Now, when I first started doing convolutional neural networks, I always thought if you wanted to have color, surely that would be a three-dimensional convolutional neural network.

  • Right?

  • But these are just channels.

  • So even though, yes, that's a third dimension, um, that doesn't matter.

  • It's still a two-dimensional convolutional neural network.

  • So three dimensions would truly be like three dimensional data.

  • Like, if you have... ah, like there was a Kaggle challenge with, like, lung scans basically.

  • And that was three dimensional.

  • Um, so you can actually work with three-dimensional data.

  • Anyway, um, so with padding: as you start shifting a window, basically, at some point you might shift to get some data and then you've got some of the window, like, hanging off the edge.

  • The question is, what do you do?

  • Well, you're gonna pad it, and we're just saying just repeat, with 'same' padding. Um, and yeah, that's basically it. If you want to know more about convolutional neural networks, you can go to pythonprogramming.net.

  • I'm assuming we can type convolution in the search bar.

  • And then here you go.

  • I think I've got some lovely pictures.

  • Yes, I do, about what the windows are and all that.

  • And then how pooling works is, um, super fascinating.

  • Anyways, um, so that is the main bit of our neural network, and then dropout, just in case you're not familiar.

  • Basically, what happens there is, so in this case it's like a 20% dropout, so the idea is 20% of the data we just kind of, like, toss before we go to the next layer's inputs.

  • And the objective here is to have no, like, um, really biasing nodes in the network.

  • And so the thought is that it makes things more robust.

  • There's plenty of research to back it up.

  • It's just not necessarily going to be the case that it's always gonna help you.

  • Um, and it might even hurt you in some cases, so you definitely wanna poke around.

  • Um, lots of times I see this more like at 50% or even 80%, stuff like that.

  • And you'd want to make sure about dropout: sometimes dropout means how much do you want to remain, and then sometimes it means how much do you actually want to drop out.

  • Um, I'm gonna guess this is drop 20%.

  • Um, in fact, let's just search 'keras dropout'. Let's see if Keras is super clear about dropout.

  • So I just typed 'keras dropout' into the Google.

  • For some reason, I'm clicking on this, and it's not... oops.

  • There it is: Dropout 'consists in randomly setting a fraction' of the input units to 0.

  • Okay.

  • Right.

  • So the fraction that we pass in here is indeed the, ah, the amount that gets set to 0.

  • Okay.

  • Anyway, um, the more you know. I think tflearn might be the opposite of that.

  • So just keep that in mind.

  • Always check.
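  • So, to be explicit about the convention (the tflearn comparison above is from memory, so double-check it yourself):

      from keras.layers import Dropout

      # In Keras, the argument is the fraction to DROP:
      Dropout(0.2)   # randomly sets 20% of the inputs to 0 during training

      # Some libraries (tflearn, for example) take the fraction to KEEP
      # instead, so always check the docs for whatever framework you're on.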

  • Okay, so those are the convolutional layers, but we eventually have to get to an output that just outputs four things.

  • But also, we want to throw in that last dense layer.

  • So model.add, and then we're just going to do a Flatten, and then we're going to do a model.add and we're gonna add a Dense layer.

  • We're going to say it's got 512 units.

  • Activation will be rectified linear, so relu, and then we will do another dropout.

  • So I'm just gonna copy-paste, and here we'll do, like, a 50%.

  • And then finally, we just need our output layer.

  • I'm gonna make some space, so we're going to say model.add, I'm gonna add a Dense, it's gonna have 4 because we've got four choices, and activation.

  • In this case, we're gonna go with softmax, and we're good with at least the model itself.
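  • So the tail end of the model looks something like this (a sketch matching what was just typed out):

      model.add(Flatten())                       # flatten conv output to 1D
      model.add(Dense(512, activation='relu'))   # the simple dense layer
      model.add(Dropout(0.5))                    # the heavier 50% dropout

      model.add(Dense(4, activation='softmax'))  # 4 outputs, one per choice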

  • Now we're going to specify some basic parameters of that model.

  • So we're gonna say the learning rate, and the one I eventually settled on was actually 1e-4.

  • Um, typically you'll actually start with 1e-3 and then you keep decaying down to a 1e-4, um, but in this case there was really no learning that was taking place with the 1e-3.

  • So I actually started at 1e-4.

  • And I never even decayed it.

  • I just kind of left it there.

  • We'll see as time goes on.

  • And if we have a much, much, much larger training set, or just different data or more complex data, we might find that we can get away with a different learning rate and decay and all that.

  • But as it stands here, the whole purpose up to this point is purely just to see: um, is there anything there?

  • Can we actually learn on this data, or should we go back and try something else?

  • So lots of people have been making lots of suggestions and stuff like that for how to make things better, and we will get there.

  • But the first thing you want to make sure is: if we simplify this problem to be as simple as we can possibly make it and things still don't work, then it's probably just not gonna work with whatever we have in our mind for, you know, how we want to structure things, so we'd actually need to go back and redo the foundation.

  • So before we spend a whole lot of time getting all fancy, we want to just do something really, really simple. Anyways, um, so there's our learning rate.

  • Now what we're gonna do is specify the optimizer.

  • So in this case, we're just gonna use the Adam optimizer, so keras.optimizers.Adam.

  • lr will equal learning_rate, and then decay...

  • Uh, we can... what?

  • We could actually set the decay.

  • I guess I lied.

  • I had said no decay; my bad.

  • Anyway, now we're gonna do a model.compile, loss equals, and we're gonna go with categorical_crossentropy. Hope I spelled it right.

  • And then we're going to say the optimizer is opt.

  • And then the metrics, we're gonna go with just accuracy for now.

  • Accuracy.

  • There we go, okay?
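  • Put together (a sketch; the decay value shown is my assumption, since it isn't specified here):

      learning_rate = 1e-4   # 1e-3 wasn't learning anything, so start here

      opt = keras.optimizers.Adam(lr=learning_rate, decay=1e-6)  # decay value assumed

      model.compile(loss='categorical_crossentropy',
                    optimizer=opt,
                    metrics=['accuracy'])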

  • And then the last thing I'm gonna add here, probably, before we, uh, finish out this video, is TensorBoard.

  • We're going to specify a TensorBoard object here, so it's tensorboard, and then the log_dir is going to be logs/stage1.

  • Okay.

  • And then, so that way, as this model trains, we're gonna save logging-type data that we can display with TensorBoard.

  • Uh, just so we can see how the model is doing.

  • So sometimes, if you're just, like, watching it and you're just kind of taking mental note of loss or accuracy or whatever, there are a lot of trends that you're going to miss that are gonna be pretty useful to see visualized in something like TensorBoard.

  • So that's why that's there.
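  • The callback itself is one line (a sketch; when we train in the next part, it would get passed to model.fit via the callbacks argument):

      tensorboard = TensorBoard(log_dir='logs/stage1')

      # later, at training time, something like:
      # model.fit(x_train, y_train, callbacks=[tensorboard])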

  • Okay.

  • Um, yeah, I think I'm gonna cut it here, and in the next tutorial, what we're gonna do is just continue building on this script, and then all we have to do now is, like, read in the data and, like, start iterating through the data.

  • And even this is a pretty small training dataset, but it's already like 13 gigabytes, which is probably more RAM than, like, almost everybody watching this probably has.

  • So we need to figure out a way to more, um, reasonably go through our data.

  • Um, so, anyways, that's what we'll be working on in the next tutorial.

  • If I've made any mistakes here, we'll figure them out, probably in the next one.

  • Otherwise, if you've got questions, comments, concerns, whatever, feel free to leave them below.

  • Otherwise I will see you in the next video.
