  • Hi, and welcome to Coding TensorFlow, a show where we focus on coding machine learning and AI applications.

  • I'm Laurence Moroney, a developer advocate for TensorFlow, and in this episode we're going to continue our series about using JavaScript for machine learning in the browser.

  • This is achieved using TensorFlow.js, a JavaScript library for training and deploying ML models in the browser and on Node.js.

  • There's lots of great information about it on the js.tensorflow.org site, including samples, API docs, and frequently asked questions.

  • In the first episode, we took a very basic look at what you need to get up and running with TensorFlow in the browser by building a simple model that fits values to a line, learning from a very small training set that the relationship is actually a line.

  • In episode two, we then wanted to focus on data, and on how you prepare data for training.

  • We did this by taking a popular dataset for classification of the iris flower and turning it from raw CSV into a number of tensors: some with feature data and some with label data, for both the training and test sets.

  • Now that the data is ready, in this episode we'll take a look at how you can create a neural network to build a model that can be used to classify future data.

  • So when it sees unknown measurements, it will infer from that data which iris flower the measurements likely represent.

  • It's a simple scenario, but it is the cornerstone building block of all machine learning: from an existing dataset, learn how to infer the desired results without explicit programming of rules about those results.

  • So let's get coding.

  • I'm going to start with an asynchronous function called doIris, which I will call at the end of my JavaScript block.

  • In the previous episode, we created the iris.js file that contained the data and all the code we wrote to preprocess it.

  • This was orchestrated through a function called getIrisData, which took a parameter that dictated the split between training and test data.

  • By setting it to 0.2, we're saying that 80% of the data will be for training and 20% for test.

  • So let's call that, and we'll get back our xTrain, yTrain, xTest, and yTest values.

  • Now we'll create a model by calling a trainModel function and passing them as parameters to it, as sketched below.
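
A rough sketch of what that doIris function might look like follows. The function names getIrisData and trainModel come from the narration; the destructuring pattern and exact variable names are assumptions, not verified code from the episode.

    async function doIris() {
      // A 0.2 split means 80% of the data is for training, 20% for test.
      const [xTrain, yTrain, xTest, yTest] = getIrisData(0.2);

      // Train a model, passing the training and test tensors as parameters.
      const model = await trainModel(xTrain, yTrain, xTest, yTest);
    }

    doIris();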

  • We haven't yet created that, so let's do it now.

  • We'll start by making this function asynchronous.

  • We're awaiting its return after all.

  • Then we'll create our model in the same way as before.

  • It's a sequential network.

  • Next, we'll set up a couple of values for a learning rate and the number of epochs, or generations, that we want to run the machine learning for.

  • It's nice to have these as constants so we can tweak them later.

  • The learning rate is used to define the optimizer.

  • If you remember, in episode one we used stochastic gradient descent.

  • This time we'll use a different one.

  • It's called an Adam optimizer.

  • There's lots of ways of optimizing machine learning, and Adam was introduced as a methodology in 2015 as an improvement over stochastic gradient descent.

  • It's actually built into TensorFlow, so you just have to turn it on.

  • But if you want the details about it, you can find the paper on the arXiv.org site.
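
Putting that setup together, here is a minimal sketch of the start of trainModel. The 40 epochs matches the figure mentioned later in the episode; the 0.01 learning rate is an illustrative assumption.

    async function trainModel(xTrain, yTrain, xTest, yTest) {
      // A sequential network, the same kind of model as in episode one.
      const model = tf.sequential();

      // Constants so the values are easy to tweak later.
      const learningRate = 0.01;  // assumed value
      const numberOfEpochs = 40;

      // The Adam optimizer is built into TensorFlow.js.
      const optimizer = tf.train.adam(learningRate);

      // ... layers, compile, and fit follow ...
    }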

  • Now let's create our model.

  • There's an art in how you define your neural network, and you can experiment with different combinations for better or faster results.

  • But in this case, I'm going to use two layers.

  • The first has 10 neurons and the second has three.

  • The first layer is activated by something called a sigmoid function.

  • I won't go into all the math of that now.

  • But the important thing to note is that a sigmoid function for all inputs will provide an output between zero and one, which is perfect for classification.

  • That is, zero means it doesn't match, and one means it matches.

  • The second layer, our output layer, will have three units.

  • I do this because we're classifying into three different types of flower.

  • Its activation is softmax, which is a function that normalizes its input values so that they all add up to one.

  • That way, when we get our classification, it will be a likelihood for each flower.

  • But the three likelihoods will add up to one.

  • We can then compile our model with this optimizer and the desired loss function, as well as the metric that we want to read.

  • The loss function this time is called categorical cross entropy, and again, without getting all math-y about it, I've found that when you want to categorize something, as we're doing in this case, picking between different types of flower instead of predicting a value like a house price, then using this loss function instead of something like root mean square works much better.
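
A minimal sketch of the layers and the compile step described above, assuming the four iris measurements as the input features (hence inputShape: [4]):

    // First layer: 10 neurons activated by a sigmoid function, which maps
    // every input to an output between zero and one.
    model.add(tf.layers.dense({
      units: 10,
      activation: 'sigmoid',
      inputShape: [4],  // four measurements per flower
    }));

    // Output layer: three units, one per flower type, with softmax
    // normalizing the outputs so the three likelihoods add up to one.
    model.add(tf.layers.dense({
      units: 3,
      activation: 'softmax',
    }));

    // Compile with the Adam optimizer, categorical cross entropy as the
    // loss function, and accuracy as the metric we want to read.
    model.compile({
      optimizer: optimizer,
      loss: 'categoricalCrossentropy',
      metrics: ['accuracy'],
    });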

  • Okay, now we have our model.

  • It's time to train.

  • We'll do this with the model.fit method, passing it our training and our validation data.

  • We'll also specify the number of epochs that we want to run the training for.

  • To keep track of its progress, we've actually got a callback called onEpochEnd.

  • In this, we can print our current loss value, and when I run it, you'll see this value diminish epoch by epoch.
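
A sketch of that training call, assuming the test tensors are used as the validation data:

    const history = await model.fit(xTrain, yTrain, {
      epochs: numberOfEpochs,
      validationData: [xTest, yTest],
      callbacks: {
        // Print the loss at the end of each epoch; it should diminish over time.
        onEpochEnd: async (epoch, logs) => {
          console.log(`Epoch ${epoch}: loss = ${logs.loss}`);
        },
      },
    });

    return model;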

  • It's really as simple as that for training, and when we're done, we'll have a model that can classify the input data.

  • So let's now take a look at using the model to do a prediction.

  • We've only trained this model for a little time, only 40 epochs, so we may have some errors, and we'll see how to fix that later.

  • So here I've created a tensor with a bunch of input values that match those of one of the items in the real data, and we'll pass this to the model to get a prediction back.

  • Hopefully we get the same results.

  • So let's run it.

  • You'll see that we get three values determining the likelihood of which flower matches the tensor, and it looks like number two is closest to being the winner.
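
A sketch of that prediction. The four measurement values below are illustrative, not the exact numbers used in the episode:

    // A 1x4 tensor holding the four measurements for a single flower.
    const input = tf.tensor2d([[5.8, 2.7, 5.1, 1.9]], [1, 4]);

    // Returns a tensor of three likelihoods, one per flower type.
    const prediction = model.predict(input);
    prediction.print();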

  • Let's see how we could make that even clearer.

  • Now here's the same prediction, but using the argMax function to polarize the values, effectively setting the likelihoods for flowers zero and one to nothing and flower two to one.

  • Think of that as similar to writing a few if-then statements to compare the values to find the biggest.

  • Then think about how much easier this is if you were doing it with 1,000 different values, and the amount of code you would have to write.
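
In code, one way to sketch this: argMax returns the index of the largest value along an axis, which identifies the winning flower without any hand-written comparisons.

    // The index of the highest likelihood, i.e. the most likely flower.
    const winner = prediction.argMax(-1);
    winner.print();  // prints 2 if flower two has the highest likelihood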

  • So if you want to test your model against the test set to see how many it gets right versus how many it gets wrong, you can then do so with this code.

  • For each X in the test set, get the prediction, and then compare it against the real Y.

  • If it's the same, you're right.

  • If it's different, you're wrong.

  • If you get a very high error rate, then you can tweak the epochs and the learning rate from earlier and try again.
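
A rough sketch of that evaluation loop, assuming the labels are one-hot encoded so that argMax recovers each class index:

    // Predicted class indices for the whole test set.
    const predicted = model.predict(xTest).argMax(-1).dataSync();

    // True class indices from the one-hot test labels.
    const actual = yTest.argMax(-1).dataSync();

    let wrong = 0;
    for (let i = 0; i < actual.length; i++) {
      if (predicted[i] !== actual[i]) {
        wrong++;  // the prediction didn't match the real label
      }
    }
    console.log(`Wrong: ${wrong} of ${actual.length}`);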

  • So that's it for this episode of Coding TensorFlow.

  • If you've watched these three episodes and followed along, you have taken your first steps into machine learning in the browser with JavaScript.

  • If you prefer to go beyond the browser and use your JavaScript skills with something like Node.js instead, you can still do so.

  • Learn more about that, and everything JavaScript related, at js.tensorflow.org.

  • If you have any questions, please leave them in the comments below.

  • And don't forget to hit that subscribe button for more great TensorFlow content.
