  • What is going on everybody and welcome to a much-needed

  • update to the deep learning in Python with TensorFlow, as well as now Keras, tutorial

  • it's been a bit over two years since I did just a basic deep learning video in Python and

  • Since then a lot has changed. It's now much simpler both to get into it

  • But then also just to work with deep learning models

  • So if you want to get into the more nitty gritty details in the lower-level

  • Tensorflow code you can still check out the older video

  • But if you're just trying to get started with deep learning that's not necessary anymore because we have these nice high-level

  • APIs like Keras that sit on top of TensorFlow and

  • Make it super super simple. So anybody can follow along if you don't know anything about deep learning that's totally fine

  • We're going to do a quick run-through of neural networks. Also, you're gonna want Python

  • 3.6, at least as of the release of this video. Hopefully very, very soon

  • TensorFlow will be supported on 3.7 and later versions of Python; it just happens to be the case right now

  • that it isn't. I think it's something to do with the

  • async

  • changes, I'm not really sure. Anyways

  • Let's get into it starting with an overview of how neural networks just work

  • Alright to begin

  • we need to have some sort of balance between treating neural networks like a total black box that we just don't understand at all and

  • understanding every single detail about them. So I'm gonna show you guys what I think is just the kind of bare essentials for understanding

  • What's going on? So a neural network is going to consist of the following things. Like what's the goal of any machine learning model?

  • Well, you've got some input

  • So let's say x1, x2, x3, and you're just trying to map those

  • inputs to some sort of output

  • Let's say that output is determining whether something is a dog or that something is a cat

  • So the output is going to be two neurons in this case. So it's just boom two neurons

  • Now what we want to do is figure out: how are we going to map to those things?

  • We could use a single hidden layer. Let's say we're going to do some neurons here and

  • That's our first

  • hidden layer. Now

  • what's gonna happen is each of these x1, x2, and x3, these are gonna map to that hidden layer

  • each of the

  • input x values gets

  • Connected to each of the neurons in that first hidden layer. And each of those connections has its own

  • Unique weight now from here that first hidden layer could then map and connect to that output layer

  • the problem is if you did this the relationship between x1 and dog or cat and

  • All the other ones those relationships would only be linear relationships

  • so if we're looking to map nonlinear relationships

  • which is probably going to be the case in a complex question, you need to have two or more hidden layers. One

  • hidden layer means you just have a neural network; two or more hidden layers means you have a quote-unquote deep neural network

  • So we'll add one more layer, and then we're gonna fully connect that one too

  • And then once that's fully connected again all unique weights, each of those blue lines has a unique weight associated with it

  • and then that gets mapped to

  • The output and again each blue line has a unique weight associated with it

  • so now what we're gonna do is talk about what's happening on an

  • individual

  • Neuron level. So again that neuron has certain inputs coming to it

  • It might be the input layer X values or it could be inputs coming from the other neurons

  • So we're gonna again we're gonna call the inputs x1 x2 and x3

  • But just keep in mind, this might not actually be your input data

  • It might be data coming from another neuron

  • But regardless that data's gonna come in and we're just gonna get the sum of that data

  • So it's gonna come in and be summed all together

  • But remember we also have those weights. Each of the inputs has a unique weight that gets, you know, multiplied

  • against the input data, and then we sum it together. Finally, and this is kind of where the artificial neural network comes into play,

  • we have an activation function and this activation function is kind of meant to

  • Simulate a neuron actually firing or not

  • So you can think of the activation function like on a graph, you know?

  • You got your x and your y, and then a really basic activation function would be like a step function

  • So if x is greater than a certain value, boom, we step up and we have a value. So let's say here

  • This is zero here. The value is one

  • So let's say this is our x-axis 1 2 3

  • so if x, you know, after all the inputs are multiplied by their weights and summed together, if that value is

  • let's say

  • greater than 3

  • well, then this activation function returns a 1 but today we tend to use more of a

  • sigmoid activation function so it's not going to be a 0 or 1 it's going to be some sort of value between

  • 0 and 1, so instead we might actually return like a 0.79 or something like that
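The per-neuron computation just described (multiply each input by its unique weight, sum everything, then apply an activation) can be sketched like this; the input values, weights, and the threshold of 3 are just the illustrative numbers from the drawing:

```python
import numpy as np

# Hypothetical inputs and weights, just for illustration
x = np.array([0.5, 2.0, 1.5])   # x1, x2, x3
w = np.array([1.0, 1.0, 1.0])   # one unique weight per input connection

z = np.sum(x * w)               # weighted sum of the inputs

def step(z, threshold=3.0):
    # basic step activation: "fires" (1) only if the weighted sum clears the threshold
    return 1.0 if z > threshold else 0.0

def sigmoid(z):
    # sigmoid activation: squashes the weighted sum into a value between 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

print(step(z))                  # 1.0, since 0.5 + 2.0 + 1.5 = 4.0 > 3
print(round(sigmoid(z), 2))     # 0.98, a "sort of fired" value between 0 and 1
```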

  • So coming back to this neural network here that we've been drawing

  • Let's say here on this final output layer. You've got dog and cat

  • well, this output layer is almost certain to have a softmax activation function, and

  • what it's gonna say is maybe dog is a 0.79 and cat is a 0.21

  • These two values are gonna add up to a perfect 1.0, but we're gonna go with whatever the

  • largest value is. So in this case

  • the neural network is, you could say, 79 percent confident that it's a dog, 21 percent confident it's a cat

  • We're gonna say, we're gonna take the argmax basically, and we're gonna say, hmm, we think it's a dog
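Taking the largest output, as described, is just an argmax over the output neurons; the 0.79/0.21 numbers are the ones from the example:

```python
import numpy as np

# Hypothetical output-layer activations for [dog, cat]; like softmax
# output, they sum to 1.0
outputs = np.array([0.79, 0.21])
labels = ["dog", "cat"]

# argmax returns the index of the largest value; we use it to pick the label
prediction = labels[np.argmax(outputs)]
print(prediction)  # dog
```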

  • All right. Now that we're all experts on the concepts of neural networks. Let's go ahead and build one. You're gonna need tensorflow

  • So do a pip install --upgrade tensorflow. You should be on TensorFlow version 1.10 or greater. So

  • one thing you can do is import tensorflow, and then,

  • actually, import tensorflow as tf, and then tf.__version__ will give you your current version

  • so mine is

  • 1.10
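The version check described above, as a quick snippet (this assumes TensorFlow is already installed; the video was recorded against 1.10):

```python
import tensorflow as tf

# Prints your installed TensorFlow version, e.g. "1.10.0" at the time of the video
print(tf.__version__)
```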

  • Now let's go ahead and get started. So the first thing we're going to do is import a dataset. We're going to use

  • MNIST, one of the most classic

  • datasets with machine learning. It is a dataset that consists of 28 by 28 sized images

  • So it's like the resolution

  • images of handwritten

  • Digits 0 through 9. So it'll be like a 0 1 2 3 and so on and it's a handwritten kind of unique image

  • so it's actually a

  • Picture we can graph it

  • soon enough so you can see it's actually an image and the idea is to feed through the pixel values to the neural network and

  • Then have the neural network output

  • Which number it actually thinks that image is

  • So that's our data set, and now what we want to do is

  • Unpack that data set to training and testing variables

  • So this is a far more complex

  • Operation when it's actually a data set that you're kind of bringing in or that you built or whatever

  • For the sake of this tutorial, we want to use something real basic like MNIST

  • so we're gonna unpack it to x_train, y_train, and

  • then we're going to do x_test, y_test, and

  • that's gonna equal

  • mnist.load_data(), so that's gonna unpack it into there now
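Putting those unpacking steps together as code (MNIST ships with Keras, and load_data() will download it the first time it runs):

```python
import tensorflow as tf

# 28x28 grayscale images of handwritten digits 0-9
mnist = tf.keras.datasets.mnist

# load_data() returns two (images, labels) tuples: training and testing
(x_train, y_train), (x_test, y_test) = mnist.load_data()

print(x_train.shape)  # (60000, 28, 28): 60,000 training images, 28x28 each
```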

  • Just to show you guys what this is

  • We're gonna use matplotlib. You can pip install it, or just look at it with me, but we're gonna import matplotlib.pyplot

  • as plt. And what we're gonna do is plt.imshow, and we're gonna do x_train

  • and we'll do the zeroth one

  • So one thing we could do, just for the record,

  • let me just print it so you can see what we're talking about here. So this is just going to be an array

  • It'll be a multi-dimensional array, which is all a tensor is, by the way

  • So this is, here's your tensor

  • right

  • Okay, so that's the the actual data that we're gonna attempt to pass through our neural network and just to show you if we were

  • to actually graph it and then do a plt.show(), it's gonna be the number, and you can just excuse the color

  • It's definitely black and white. It's a single color. It's binary

  • So one thing we could say is the color map is equal to plt.cm.binary, cm for color map

  • Re-plot it and there you go. It's not a color image
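A sketch of the plotting steps above; a random 28x28 array stands in for x_train[0] so the snippet doesn't depend on the dataset, and the Agg backend is used so nothing has to open on screen:

```python
import matplotlib
matplotlib.use("Agg")                # headless backend; plt.show() would open a window
import matplotlib.pyplot as plt
import numpy as np

# Stand-in for x_train[0], which in the video is a 28x28 handwritten digit
img = np.random.rand(28, 28)

plt.imshow(img, cmap=plt.cm.binary)  # binary colormap: grayscale, not false color
plt.savefig("digit.png")             # in the video this is plt.show() instead
```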

  • So anyways back to our actual code up here

  • Once we have the data, one thing we want to do is normalize that data

  • so again, if I print it out, you can see it's data that seems to vary from 0 to

  • Looks like we have as high as 253. It's 0 to 255 for pixel data

  • So what we want to do is scale this data or normalize it but really what we're doing in this normalization is scaling it

  • So we're going to just redefine x_train and x_test, but it's gonna be tf.keras.utils.normalize

  • and we're gonna pass x_train, and it'll be on axis 1, and then we're gonna copy

  • paste and we're gonna do the exact same thing for x_test, and

  • All this does let's just run that and then we'll run this again and you can see how the 5 has changed a little bit

  • looks like it got a little lighter, and

  • Then we come down here and we can see the values here are now

  • Scaled between 0 and 1 and that just makes it easier for a network to learn we don't have to do this

  • But at the end of this video I probably won't have time

  • But if you want to, you know, comment those lines out and see how it affects the network. It's pretty significant
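One detail worth noting: tf.keras.utils.normalize L2-normalizes each sample along the given axis by default (it doesn't simply divide by 255), which still lands non-negative pixel values between 0 and 1. A small numpy sketch of that operation on a made-up row:

```python
import numpy as np

# One "row" of non-negative pixel-like values (hypothetical numbers)
row = np.array([[0.0, 3.0, 4.0]])

# tf.keras.utils.normalize(row, axis=1) divides each row by its L2 norm;
# here the norm is sqrt(0^2 + 3^2 + 4^2) = 5
normed = row / np.linalg.norm(row, axis=1, keepdims=True)
print(normed)  # [[0.  0.6 0.8]] -- every value now between 0 and 1
```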

  • Ok. So the next thing we're gonna do now is actually build the model

  • So the model itself is gonna start as tf.keras.models. and then it's going to be the Sequential type of model

  • There's two types of models

  • The Sequential is your most common one. It's a feed-forward, like the image we drew

  • So we're gonna use this Sequential model, and then from here we can use this like model.add syntax

  • so the first layer is gonna be the input layer and now right now our images are 28 by 28 in this like

  • Multi-dimensional array we don't want that

  • We want them to be just, like, flat. If we were doing like a convnet or something like that

  • We might not want it to be flat

  • but in this case

  • we definitely want to flatten it, so we could use, we could use like numpy and reshape, or

  • we can actually use one of the layers that's built into Keras, which is Flatten. So

  • so we're gonna do add, and what we're gonna add is tf.keras.layers.Flatten

  • so one of the reasons why you you want this to actually be a layer type is like when you have a

  • Convolutional neural network a lot of times at the end of the convolutional neural network. There'll be just like a densely connected

  • layer, and so you need to flatten it before that layer. So it's used for more than just the input layer

  • We're just gonna use it for the input layer

  • Just to make our lives easier. So once we've got that

  • That's our input layer. Now. We want to do our hidden layers again

  • We're going to go with I think just two hidden layers. This isn't a complex problem to solve

  • So again, we're going to use the model.add syntax, and we're gonna add, and in fact

  • I think what I'm gonna do is copy-paste, and then rather than a Flatten layer, it's a Dense layer. In the Dense layer

  • We're gonna pass a couple parameters here. So the first one is gonna be how many units in the layer. So we're gonna use

  • 128 units or 128 neurons in the layer, then we're gonna pass the activation function

  • This is like the function, like I said, like a step function or a sigmoid function

  • What is gonna make that neuron fire, or sort of fire, whatever. So we're gonna use tf.nn
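The model as assembled so far can be sketched like this; the transcript cuts off at tf.nn, so the relu activation below is an assumption (it's the usual choice for hidden layers):

```python
import tensorflow as tf

# Sequential: the common feed-forward model type, like the network we drew
model = tf.keras.models.Sequential()

# Input layer: flattens each 28x28 image into a flat vector of 784 values
model.add(tf.keras.layers.Flatten())

# Two hidden layers of 128 neurons each; tf.nn.relu is assumed here,
# since the section stops at "tf.nn"
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))

print(len(model.layers))  # 3
```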