
  • Hey, welcome to neural nets in JavaScript with Brain.js.

  • I am super excited to teach this course.

  • The goal is to give you a practical introduction to problem solving with neural networks.

  • What you're gonna be learning in this course: propagation, both forward and backward, layers, neurons, training, and error.

  • What feedforward neural networks are, what recurrent neural networks are, and a whole lot more.

  • We're gonna build an XOR gate, a counter, a basic math network, an image recognizer, a sentiment analyzer, and a children's book creator.

  • And how we're gonna do it is with 17 lectures where we're gonna focus on practice over theory.

  • What that means is you are going to get your hands dirty.

  • But more than that, you're gonna walk away knowing the ideas behind neural networks.

  • There are as well a bunch of interactive challenges along the way, and that brings me to our use of Scrimba.

  • Scrimba is a fantastic platform for learning, and at any point during the entire lecture, you can stop me.

  • It won't hurt my feelings.

  • You can just introduce a brand new script, press Command+S if you're on Mac or Control+S if you're on Linux or Windows, and it will execute exactly your code.

  • That is super important throughout this course. As well, I'm gonna make regular reference to the console, which is directly below here.

  • So if you see some numbers go down there, like, I'll go ahead and test that right now: 0.5 just appeared.

  • That's super important.

  • Anytime I start talking about if the neural net was good because it had a low error rate, or if the neural net was bad because it had a high error rate, just look down there and that will give you a little bit of reference as to what we're doing.

  • So let's get started.

  • This is gonna be awesome.

  • This is our very first neural net. This is gonna be awesome.

  • So the first problem we're gonna tackle is called exclusive or, and you can do some research on it if you like.

  • But more or less, this is what happens.

  • Inputs that are the same result in a zero output; when they differ, the result is a one.

  • There's always two inputs.

  • There's always one output.

  • So let's take this very simple comment and let's translate it into something that the neural net, or rather JavaScript, can understand.

  • Let's have a variable.

  • We're gonna call it trainingData, and it's a very simple variable that represents all of our training data.
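
A sketch of what that trainingData could look like, using Brain.js's input/output format:

```js
// XOR training data: matching inputs -> 0, differing inputs -> 1
const trainingData = [
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] },
];
```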

  • Let's go ahead and import Brain.js. I'm just gonna grab a link that imports it from a CDN, a content delivery network.

  • Add that.

  • And next, we want to instantiate a new instance of Brain.

  • And we do that with const net = new brain.NeuralNetwork().

  • And down here, we're going to say net.train, and we're gonna give it our training data.

  • And now, at line 16, by the time we get there, the net will have understood what our inputs and outputs are.

  • So here we can console.log out net.run with one of our inputs.

  • So let's choose the first one. Give it our hidden layers: three. We'll get more into hidden layers later.

  • And now we're gonna go ahead and run.

  • And now we have an output. That's so awesome.
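
Putting the walkthrough together, a minimal sketch, assuming Brain.js is loaded globally (for example from a CDN) and the trainingData array above:

```js
// Instantiate a net with one hidden layer of three neurons
const net = new brain.NeuralNetwork({ hiddenLayers: [3] });

// Train it on the XOR data
net.train(trainingData);

// Run the first input, [0, 0]; expect something close to 0
console.log(net.run([0, 0]));
```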

  • Now, the reason that this number here is not zero is because we're using a neural net, and it's very hard for it to say exactly zero or one.

  • It can get close to that.

  • So that's exactly what we want: a number close to zero.

  • Here's a challenge for you.

  • Go ahead and get the net's other outputs and console.log them out.

  • Just kind of play around with the values and see how the net operates.

  • In our last tutorial, we talked about how to build a neural net, a very simple one to solve exclusive or, and in this one we're going to discuss how it did it.

  • So the neural net has different stages; you'll notice I used two different methods.

  • The first one is train, and the other one's run. Now, in train, we do something called forward propagation and back propagation.

  • Those terms may seem scary at first, but they're actually quite simple.

  • In fact, we're going to reduce their complexity down to something that even a child can understand.

  • Take a look at my slides here: forward propagation and back propagation.

  • We have a ball, we're gonna take a ball, and we're gonna throw it at a goal.

  • Now, when we do that, we're going to make a prediction as to how far the ball needs to go, how much energy to put behind it, the pathway of the ball, et cetera.

  • I want you to go ahead and pause the video here and think about the different steps that happen when you throw a ball at a goal.

  • The first step is prediction: in prediction, we're going to think about how we're gonna throw the ball, where it needs to land, how much power we need to put behind it. That first step, with my illustration showing us that we did not go far enough with the ball, is forward propagation.

  • We ourselves are making a prediction.

  • From that prediction, we can see how far we were off from the actual goal.

  • We can measure that, and that step of measuring is back propagation.

  • Now, the next thing that we want to do is make a determination as to what we're going to do next.

  • That is our second step of back propagation.

  • That is our learning step and you know how the story goes.

  • We throw the ball again; it goes too far.

  • We measure that and make another prediction.

  • We throw the ball again.

  • Third time's the charm. That illustrates this very first method and everything that goes on inside the net.

  • The next stage is running our net.

  • Now, in running our net, we no longer have to measure how far we are from the goal, because we already know.

  • And because of that, there's no need to back propagate.

  • So that step goes away.
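
As a toy sketch of that train/run difference (this is just the ball analogy in code, not Brain.js internals):

```js
// Training: predict (forward propagate), measure, adjust; repeat
const target = 10;          // where the ball should land
let power = Math.random();  // start with a random guess

for (let i = 0; i < 1000; i++) {
  const landed = power * 10;      // forward propagation: make the throw
  const error = target - landed;  // back propagation, step 1: measure
  power += 0.01 * error;          // back propagation, step 2: adjust
}

// Running: just throw; no measuring, no adjusting
console.log(power * 10); // very close to 10
```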

  • Now, throughout this entire training of the neural net, the net is measuring, and that measurement is referred to as error.

  • Check this out.

  • If we go to our net, during its training we can actually enable something really interesting.

  • We're gonna give it a log function.

  • We're gonna console.log the error, and we're gonna set our log period to 200 iterations.

  • And now we can actually see the error, how far off the net was.

  • You can see, for a time, some of these errors may go down or may go up, but eventually the net catches on and starts to accelerate its ability to learn, until that error rate drops to a ridiculously low number, not zero, and training is completed.

  • Once training is completed, there's no need to continue training; as we discussed, we can then just forward propagate.
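
In Brain.js, that logging is passed as training options; a minimal sketch, reusing the net and trainingData from before (exactly what gets handed to log, a status string or object, varies by Brain.js version):

```js
// Print training progress every 200 iterations
net.train(trainingData, {
  log: (status) => console.log(status),
  logPeriod: 200,
});
```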

  • Now, last tutorial, we talked about how neural networks learn using forward propagation and back propagation.

  • In this tutorial, we're going to understand more of the structure of the programmatic neuron.

  • Neural nets are actually quite simple.

  • They're composed of functions that receive inputs as arguments and produce outputs.

  • If we think of our neural net in this very simplistic way, then really, we can reduce the complexity of it down to one of the simplest functions that you can write.

  • Pause the video here and just look at the structure.
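
A sketch of that simplistic view, with placeholder math standing in for the real work:

```js
// A neural net reduced to its simplest form:
// a function that receives inputs and produces an output
function neuron(inputs) {
  return inputs.reduce((sum, x) => sum + x, 0); // placeholder math
}

console.log(neuron([0, 1])); // 1
```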

  • Now we're gonna talk about how the network initializes.

  • Think about when you were first born: likely, you didn't know very much.

  • Over time, though, you begin to know more and more.

  • The neural net begins with a bunch of random values, so everything that affects the outputs is just random.

  • At first, you may ask yourself why; the reason is because mathematically, we've proven that's an effective way to start off with knowledge.

  • The knowledge is very random at first; we don't know the idea.

  • It's not zero and it's not one.

  • It's somewhere in between.

  • Over time we can shape that random data, so that it finally becomes where we store what's going on inside of the neural net.

  • Each neuron is quite literally Math.random.

  • Go ahead and pause the video here, and get comfortable with the idea that the net starts out with random data.
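
A sketch of that random starting point:

```js
// A freshly initialized neuron's weights: quite literally Math.random
const weights = Array.from({ length: 3 }, () => Math.random());
console.log(weights); // e.g. [0.12, 0.87, 0.44], different every run
```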

  • Next, I want to talk about activation.

  • A really popular and effective activation function that's used nowadays is called ReLU. ReLU looks something like this.

  • Were we to put it inside of a function, the function would quite literally look like this.

  • That is our activation function, called ReLU.
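
A sketch of what that function could look like, together with the derivative that comes up next:

```js
// ReLU: negative inputs become 0, positive inputs pass through
const relu = (x) => Math.max(0, x);

// Its derivative, used during back propagation
const reluDerivative = (x) => (x > 0 ? 1 : 0);

console.log(relu(-2), relu(3));                     // 0 3
console.log(reluDerivative(-2), reluDerivative(3)); // 0 1
```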

  • Now, activation functions are measured in back propagation, using what is called their derivative.

  • I'll go ahead and put a link here in our bonus material; I'll throw in some links that take you to where ReLU and its derivative are used in Brain.js.

  • Now, last tutorial, we talked about the structure of a neural net, and in this one we're gonna be talking about layers.

  • Take a look at my illustration: this is a neural net.

  • Each circle represents a neuron.

  • The arrows represent a bit of math.

  • Stacked circles are layers, and so here we have what's called the input layer; that's this first one.

  • This next layer would be a hidden layer.

  • It's composed of two neurons.

  • The next is another hidden layer composed of two neurons, and the last is called an output layer.

  • In Brain.js, the input layers and output layers are configured for you kind of automatically; however, our hidden layers can be configured by us.

  • Our first neural net was composed of two input neurons, one hidden layer that had three neurons, and an output layer that had one neuron, whereas our illustration has two neurons for the input layer, two hidden layers of two neurons each, and an output layer with two neurons.
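
In Brain.js, only the hidden layers are configured explicitly; the input and output sizes are inferred from the training data. A sketch of our first net's shape:

```js
// Two inputs and one output come from the XOR data;
// we only specify the single hidden layer of three neurons
const net = new brain.NeuralNetwork({ hiddenLayers: [3] });
```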

  • What's interesting about hidden layers is that's really where the majority of the storage is.

  • If you liken it to a human, the hidden layers are where the ideas are.

  • You may run into a scenario where your neural net isn't learning.

  • I'll go ahead and recreate that here.

  • I'm going to change the hidden layers from a single hidden layer with three neurons to one with a single neuron, and I'm going to log out the training stats; watch what happens.

  • We hit 20,000 iterations without fully understanding the problem and how to solve it.

  • We can easily fix that by changing our hidden layers to a single hidden layer of three neurons.

  • You can see we're able to train in a short amount of time: 4,800 iterations.
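
A sketch of that experiment, reusing the XOR trainingData (the exact numbers vary run to run, since the starting weights are random):

```js
// One hidden neuron: training can hit the default 20,000-iteration cap
const tooSmall = new brain.NeuralNetwork({ hiddenLayers: [1] });
console.log(tooSmall.train(trainingData)); // e.g. { error: 0.25, iterations: 20000 }

// Three hidden neurons: trains in far fewer iterations
const enough = new brain.NeuralNetwork({ hiddenLayers: [3] });
console.log(enough.train(trainingData)); // e.g. { error: 0.005, iterations: 4800 }
```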

  • We can as well have more than one hidden layer.

  • Our illustration has two hidden layers.

  • We could mimic this exact configuration, two hidden neurons followed by two hidden neurons, in this way.
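
Expressed as a Brain.js option (a sketch):

```js
// Two hidden layers with two neurons each, like the illustration
const net = new brain.NeuralNetwork({ hiddenLayers: [2, 2] });
```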

  • A note of caution, though: the more hidden layers that you add, the longer it takes for the neural net to train. Let's try it.

  • See, we hit 20,000 iterations without actually fully training.

  • There are no hard and fast rules when it comes to hidden layers.

  • This would be an invitation to experiment.

  • Something I have seen, though, is to treat the hidden layers sort of like a funnel.

  • So if you had, for example, 20 inputs, you could have 15 hidden neurons, followed by 10 hidden neurons, followed by two output neurons.

  • That's just an example, though, and many problems take on different configurations.
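
For concreteness, that hypothetical funnel as a Brain.js configuration (the 20 inputs and 2 outputs would come from the training data):

```js
// 20 inputs funneling through 15 and then 10 hidden neurons to 2 outputs
const net = new brain.NeuralNetwork({ hiddenLayers: [15, 10] });
```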

  • Switching gears for a moment.

  • Take a look back at our illustration.

  • You remember that we have these arrows.

  • I said these arrows represent a bit of math, and in a feedforward neural net, that math can be described as this.

  • We have our inputs: our weights times our inputs, plus biases, activated.

  • Now, this is simple math, but the implications of it are huge.

  • Pause here and just think about how simple that is and let it sink into your brain.
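
Written out for a single neuron, reusing the ReLU sketch from earlier (the numbers are illustrative):

```js
const relu = (x) => Math.max(0, x);

// One feedforward step: activate(weights x inputs + bias)
function neuronOutput(weights, inputs, bias) {
  const weightedSum = weights.reduce((sum, w, i) => sum + w * inputs[i], 0);
  return relu(weightedSum + bias);
}

console.log(neuronOutput([0.4, 0.6], [1, 0], 0.1)); // 0.5
```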

  • This tutorial series is really about the practical application of neural nets.

  • But if you're curious like me, you can take a look here.

  • See how Brain.js uses this math.

  • As another bonus, take a look at the additional options that are available for Brain.js; neural nets can be widely configured to solve many different problems.