
  • What's going on, everybody, and welcome to yet another Deep Learning with Python, TensorFlow, and Keras tutorial.

  • In this tutorial, we're gonna continue from the last tutorial, where we're attempting to predict future price movements of a certain cryptocurrency based on the sequence of historical prices and volume of that cryptocurrency, as well as other major cryptocurrencies.

  • And we're trying to do this with a recurrent neural network.

  • So, uh, let's continue.

  • The next thing that we want to do here is we're gonna import time.

  • We're gonna use that just in a moment, and we need to create a few more constants here.

  • The first thing we're gonna do is EPOCHS, so how many epochs we want to train this model for.

  • Then we're gonna go with a batch size, and we're gonna go with 64 to start; we can tinker with that later if we wanted.

  • And then finally, we're gonna go with a NAME.

  • We're gonna make this an f-string, and what you want is a name that is descriptive of the model, because generally you're gonna tinker a little bit with the model, tweak it a little bit here and there, re-run it, and then you're gonna do the same thing again.

  • Hopefully, you don't have that bad habit.

  • I do.

  • Anyway, um, so you want to have a unique name, both for the model that you save as well as in TensorBoard.

  • So later you can compare the results of a bunch of different models and you don't have to be like, "I don't know what model that was," or worse, overwrite the other models.

  • So anyways, uh, yeah.

  • So let's come up with a good name.

  • So I'm gonna go with SEQ_LEN, dash, SEQ, dash, FUTURE_PERIOD_PREDICT, dash, PRED, dash.

  • And then we'll just throw in an int(time.time()) here.

  • And that should be good enough.

  • That'll give us a nice, unique model name.
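
For reference, a minimal sketch of what those constants might look like. SEQ_LEN and FUTURE_PERIOD_PREDICT are carried over from the previous part of the series, and the values shown are assumed placeholders:

```python
import time

# constants assumed from the previous part (values here are placeholders)
SEQ_LEN = 60               # length of each input sequence
FUTURE_PERIOD_PREDICT = 3  # how far into the future we're predicting

EPOCHS = 10      # how many epochs to train for
BATCH_SIZE = 64  # can be tinkered with later

# unique, descriptive name used for saving the model and for TensorBoard
NAME = f"{SEQ_LEN}-SEQ-{FUTURE_PERIOD_PREDICT}-PRED-{int(time.time())}"
```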

  • So now what we're gonna do is import all the TensorFlow stuff that we need.

  • So import tensorflow as tf... as tf, stop it, Sublime. From tensorflow.keras.models we're gonna import... import Sequential.

  • There we go.

  • Uh, ah... oh, this might be a hard one to get through: from tensorflow.keras.layers we're going to import Dense, Dropout, LSTM, probably also CuDNN... do you know, I had it right.

  • CuDNNLSTM, and then BatchNormalization.

  • Um, all of these we've seen, except for I don't remember if we've done batch normalization.

  • I apologize if I've already covered it.

  • Basically, just normalization, but between the layers.

  • So for the same reason you want to normalize your input data, batch normalization can be useful from layer to layer, because really, you can think of each layer's output, if it's going to another layer, as a new input for that next layer.

  • So you would want to normalize that data too, if you could.

  • So we're gonna do batch normalization there.
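
Roughly, the imports so far would look like the following. Note that CuDNNLSTM is the GPU-only fused layer from the TensorFlow 1.x era of tf.keras; on a CPU-only build you'd stick with the plain LSTM:

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, CuDNNLSTM, BatchNormalization
```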

  • Okay. Once we've got that, more imports: from tensorflow.keras.callbacks.

  • We want to import TensorBoard...

  • Tensor... why can't I type right now?

  • Uh, anyway, TensorBoard and ModelCheckpoint. I think the B in TensorBoard probably needs to be capitalized.

  • It does.

  • Okay.

  • TensorBoard.

  • We've really already seen this callback; ModelCheckpoint is a fancy-dancy little callback where, basically, you can set various parameters as to when you want to save certain checkpoints.

  • So I like to use validation accuracy.

  • I think it's a pretty good one.

  • Validation loss would be another good one.

  • Basically, each time it reaches a new max, you could have the model actually save that checkpoint. That way, you know, you can run for 100 epochs, and maybe at the end of the 100 epochs, before you got there, the model overfit and it kind of went crappy.

  • Um, and you could save every single epoch, but that might waste space or whatever.

  • Um, anyway, this makes a nice way to always save the best one.
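
And the callback imports. TensorBoard logs each run's metrics under its name, and ModelCheckpoint can save the model whenever a monitored metric (like validation accuracy) hits a new best:

```python
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint
```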

  • Okay. Uh, once we have that, we're gonna scroll on down.

  • We've got train_x, train_y, validation_x, validation_y.

  • Ready to build the model.

  • So make some space here, uh, and we're just gonna... a lot of this code is stuff we've seen before.

  • So again, model is a Sequential model.

  • We've seen that one. model.add, and we're gonna add... uh, I'm gonna use CuDNNLSTM; if you're on the CPU version of TensorFlow, use a regular LSTM.

  • It's gonna be 128 nodes in this layer, and input shape... and something bangs on this desk, and it makes me mad. What is that noise? I can never find what that noise is; it drives me nuts.

  • And then sometimes it, like, stops happening for, like, months.

  • And then it comes back. Anyway.

  • input_shape equals train_x... uh, train_x.shape, one colon (train_x.shape[1:]).

  • That was about to be an error.

  • That was not gonna be fun to debug.

  • Uh, figure out where we are now.

  • So we're inside the CuDNNLSTM still.

  • Yes, it is. Yes.

  • So then return_sequences equals True.

  • So you should already know what the next layer is gonna be.

  • Are we good? Why does it do that? What is it?

  • Oh, because of the new line... this is maybe a PEP 8 thing. Yeah. Okay. All right.

  • So then model.add, and we will throw in a Dropout, and we're gonna do 0.2.

  • Then we'll add a batch normalization.

  • So model.add(BatchNormalization()), and no parameters there.

  • And then I am going to copy this and then paste it twice: paste and paste. And then we're gonna have a dense layer.

  • So because we're gonna have a dense layer, I'm gonna remove the return_sequences from that last one.

  • And so now it's a 128, to another 128, and then to a final 128. Now, for whatever reason, in my notes, I have this one as a dropout of 0.1.

  • So I'm gonna leave that there.

  • But I'm pretty confident 0.2 would probably work just as good.

  • Uh, anyway, moving on. Simple one, uh, and I think we'll leave everything else the same.
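
As a sketch, the recurrent part of the model described so far might look like this. Swap CuDNNLSTM for LSTM if you're on CPU; train_x is assumed to be the 3D array of training sequences from the previous part, and which of the middle dropouts gets the 0.1 from the notes is a guess:

```python
model = Sequential()

# first recurrent layer: return_sequences=True because another recurrent layer follows
model.add(CuDNNLSTM(128, input_shape=(train_x.shape[1:]), return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())

# second recurrent layer, still passing sequences forward
model.add(CuDNNLSTM(128, return_sequences=True))
model.add(Dropout(0.1))  # 0.1 per the notes; 0.2 would probably work just as well
model.add(BatchNormalization())

# last recurrent layer: no return_sequences, since a Dense layer comes next
model.add(CuDNNLSTM(128))
model.add(Dropout(0.2))
model.add(BatchNormalization())
```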

  • Now we're gonna add a dense layer, but not the output layer.

  • Just a dense layer: model.add, yes, that one, Dense, 32, and the activation will be rectified linear.

  • Which reminds me, if you can't use CuDNNLSTM, make sure you throw in some activations on the LSTM layers.

  • I would do either tanh, because that's what CuDNNLSTM is using, or you could just use rectified linear.

  • So we've got that dense layer.

  • Cool. model.add, we'll throw in a Dropout of 0.2.

  • I really wonder if that 0.1 was, like, a typo or what; everything else is 0.2 except for that one layer.

  • Um, so anyway, um, and then finally, we need the final dense layer.

  • Uh, this is a binary choice, so it should be only two options there, and the activation, because it's the output layer, is softmax.
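
Continuing the sketch, the dense layers on top; the 2 in the output layer is the binary buy/don't-buy target, with a softmax activation:

```python
# plain dense layer on top of the recurrent stack
model.add(Dense(32, activation="relu"))
model.add(Dropout(0.2))

# output layer: two classes for the binary target
model.add(Dense(2, activation="softmax"))
```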

  • Okay, now that we've got that, we're ready to specify the optimizer. We'll go tf.keras.optimizers.Adam with a learning rate of 0.001 and a decay of 1e-6, and then we'll do the model.compile, and we'll go with loss as sparse... categorical crossentropy.

  • You could also go with, like, binary crossentropy.

  • Uh, next, what we're gonna do is optimizer is the optimizer that we defined, and then metrics, we'll go with accuracy.

  • Okay, cool.
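
A sketch of the optimizer and compile step. The exact learning rate is hard to make out in the audio, so 0.001 is an assumption here; the decay of 1e-6 is as stated:

```python
# Adam optimizer (learning rate value assumed)
opt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)

model.compile(
    loss="sparse_categorical_crossentropy",  # binary crossentropy is mentioned as another option
    optimizer=opt,
    metrics=["accuracy"],
)
```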

  • The next thing is, we need to define our callbacks, and I guess we can fix this, make Sublime happy. So we have two callbacks that we want to do; one is TensorBoard, which we've really already seen.

  • And so that's just gonna be a TensorBoard object, log_dir equals logs, uh, and then we'll do some formatting here. And in my text-based version I haven't made this an f-string; I'm trying to move completely to f-strings.

  • They do.

  • They are better.

  • It's just, I'm so used to, you know, doing .format().

  • But it's so much nicer to use f-strings, to be honest; like, build it out and then not have to remember what order things are in.

  • And then later... the worst thing is, like, when you build it for the first time, it's really no big deal.

  • But then, if you want to, like, go in and move things around, or, like, add a bunch more, it really does get kind of challenging with string formatting, whereas with, like, the f-strings, it's just a breeze.

  • Okay, so that's the TensorBoard object.
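
The TensorBoard callback, roughly, using the NAME f-string from earlier so each run gets its own log directory:

```python
# one log directory per run, so different models can be compared in TensorBoard
tensorboard = TensorBoard(log_dir=f"logs/{NAME}")
```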

  • The next thing we're ready to do is the checkpoint object.

  • So here's a filepath.

  • And to be quite honest with you, this is not my code.

  • I basically just searched for, like, an example for the checkpoint thing.

  • So this might be from, like, the Keras documentation.

  • I don't even remember where I found this, but basically, we'll do...

  • RNN_Final... obviously it didn't have this exact name, but then it's epoch, um, and then the zero-two-D ({epoch:02d}).

  • And the reason why I'm doing this is, um, I kind of wanted to just, like, copy and paste this in so I can talk about it and not waste a bunch of time.

  • Um, here paste.

  • So, like, I'm not sure how the formatting works here, because obviously this here is a string formatting placeholder in this filepath-dot-model, and then we've got .format() on the filepath.

  • But then it's like, monitor... I guess I just don't understand how these values are eventually getting passed into here with a .format().

  • I don't know.

  • Anyway, what I want is the epoch and validation accuracy in there, and this code works.

  • And again, I don't... because, obviously, you know, .format(filepath) throws in the filepath, but then it's like these other things in the format.

  • Anyway, I don't get it.

  • I don't know how it works, but it works, anyways.
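
A sketch of what that checkpoint setup might look like (the exact filename template and directory are assumptions). The part that looks confusing is that the {epoch:02d} and {val_acc:.3f} placeholders are not filled in by your own .format() call; ModelCheckpoint formats the filepath itself at save time, pulling epoch and val_acc from the training logs, which is how those values end up in the filename:

```python
# template with named placeholders that ModelCheckpoint fills in when it saves
filepath = "RNN_Final-{epoch:02d}-{val_acc:.3f}"

# save a checkpoint only when validation accuracy hits a new best
checkpoint = ModelCheckpoint(
    "models/{}.model".format(filepath),  # the outer .format just splices the template into the path
    monitor="val_acc",
    verbose=1,
    save_best_only=True,
    mode="max",
)
```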

  • So now we're going to do history = model.fit.

  • Uh, we're gonna fit train_x, train_y.

  • Uh, the batch_size is just BATCH_SIZE.

  • epochs is going to equal EPOCHS, capitals.

  • Let's fix this.

  • They are parameters.

  • Therefore, no spaces.

  • After epochs, we have the validation... validation_data.

  • And then in here we have validation_x, validation_y.

  • And then finally, we have our callbacks that we want to run.

  • So that's TensorBoard and checkpoint.
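
Putting the training call together, roughly; train_x, train_y, validation_x, and validation_y are assumed to be the arrays built in the previous part:

```python
history = model.fit(
    train_x, train_y,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    validation_data=(validation_x, validation_y),
    callbacks=[tensorboard, checkpoint],
)
```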

  • Okay?

  • And then once that's done, um, I do run an evaluate, uh, in the text-based version.

  • But really, every epoch should be running validation data anyways. So at the end, I guess we could just do a model.save, although even here, I'm not really sure what the point is, because the model checkpoint is going to be saved.

  • So I guess we're done at this point.
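
If you did want the explicit evaluate and save at the end, it might look something like this; it's largely redundant given the checkpoint callback, as noted:

```python
# score the final model on the out-of-sample validation data
score = model.evaluate(validation_x, validation_y, verbose=0)
print("Test loss:", score[0])
print("Test accuracy:", score[1])

# save the final model under its unique name (the best epochs were already checkpointed)
model.save(f"models/{NAME}")
```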

  • Okay?

  • Save that.

  • And now let's go ahead and run this thing.

  • python3.6 crypto-rnn.py.

  • Let's see if we have any errors.