
  • NICK KREEGER: How's it going, everybody? I'm here to talk about TensorFlow and JavaScript today. My name is Nick, and this is my colleague Ping. And we work on TensorFlow.js here in Mountain View.

  • So the traditional thinking is that machine learning only happens in Python, right? That's kind of what everybody thinks. But is that always the case? Has anybody seen this before?

  • This is something we host on our TensorFlow documentation: the machine learning playground, the TensorFlow playground. It was actually built by our colleagues on the East Coast as a visual to put into some of our ML classes, and it shows how data flows through a connected neural network with different activation functions.

  • This was a really popular project, it was a lot of fun to make, and we've gained a lot of traction from it.

  • So we started to think: maybe it makes sense to do ML in the browser. There are a lot of opportunities for doing ML directly in the browser. We don't need any drivers, there's no CUDA installation or anything; you can just run your code.

  • The browser has a lot of interactive features. Especially over the last several years of development, there is access to things like sensors and cameras, and you can easily hook up to that type of data stream.

  • And the other great part about doing ML directly in the browser is privacy. You don't have to send any user data over the wire, over an RPC, to do inference behind the scenes in your infrastructure. You can just do that directly on the client.

  • So coming back to the TensorFlow playground: this is about 400 lines of JavaScript code, written very specifically for this project. Our team took this prototype and started to build a linear algebra library for the browser. The project started out fully open source under the name deeplearn.js.

  • We took deeplearn.js, aligned it with what we're doing with TensorFlow internally, with eager execution, and launched TensorFlow.js last April.

  • And once we launched it, we saw a lot of really great community- and Google-built products, and I want to highlight a couple.

  • This is one that we built at Google called the Teachable Machine. This is all done in the browser. There are three labels you can train through the webcam: green, purple, and red. It highlights how a basic image recognition model can run directly in the browser. This all still exists online; you can find it today.

  • Another community-built project is a self-driving car, all in the browser, called Metacar. This is cool: you can watch it train, and watch the inference as the car drives.

  • People built games. This is a web game that somebody trained with TensorFlow.js. It's kind of a funny animation: there's a little dude running back and forth, hiding from those big balls. The model is learning to avoid the balls, all by using TensorFlow.js and continuing to play.

  • This one is really cool. Magenta is a Google project that does a lot of ML with audio, and we have a large library called Magenta.js, which is built on TensorFlow.js to do in-browser audio. This is a cool demo somebody built: a digital synthesizer that learns how to play music, and you can play along with it.

  • Another cool example that just came out, all community-built open source, is called face-api.js. It's a library that sits on top of TensorFlow.js and does a few different types of image recognition. It can detect faces and facial features, and it even works pretty well on, like, toddlers.

  • So I want to showcase how our library pieces together. There are two main components to TensorFlow.js: a Core API and a Layers API. In the browser, that is all powered by WebGL. That's how we did the linear algebra aspect for the browser: we bootstrap all the linear algebra through WebGL textures.

  • And on the server side, we actually ship the C library that powers TensorFlow Python, so you get the high-end CPU and GPU support. And eventually, we're working on the TPU integration story for the server side.

  • For those who have used Keras, the Layers API is almost the same as Keras, with very similar syntax. The Core API is op-level, and for anyone who's worked with TensorFlow SavedModels, that API will feel pretty similar.

  • OK, what can you do today with TensorFlow.js? Well, you can author small models directly in the browser. There is a limited amount of resources browsers have, so we'll get into that a little bit later. But right now, you can do pure model training in the browser.

  • You can import pretrained models, meaning a model that has been trained somewhere else, usually in the cloud or in some Python environment. We have a tool to serialize the model and then run inference in Node or in the browser.

  • And we have the ability to retrain models, so very basic transfer learning: we can bring in a model and retrain it. For anyone who's seen TensorFlow for Poets, it's a very similar exercise.

  • So to get started with the Core API, I want to do a very simple, basic example: fitting a polynomial. This is a scatter plot of some data we have, and we're going to write a really simple model to try to find the best fit for this data. It's the classic f(x) = ax² + bx + c.

  • The first line, for those who are familiar, is all ES6-style JavaScript. We import @tensorflow/tfjs, the name of our package, and namespace it as tf.

  • Our first step is to include three different variables, a, b, and c, and we initialize each of those as 0.1. These are going to be passed into our training sequence.

  • The next step is to declare our function. This is all using the tfjs APIs to express f(x) = ax² + bx + c. And we have some sugar to make that a little bit more readable using chainable APIs, which is a very common pattern in JavaScript.

  • The next step is to declare a loss function; we just have a mean squared loss. And then we declare the SGD optimizer with a default learning rate we've declared somewhere in this code.

  • And then finally, we loop through our training. We pass through epochs, and at every step we minimize our loss through the SGD optimizer. This is very similar to eager-style Python, for those who have done that on the Python side.

  • The next thing I want to highlight is the next step up: the Layers, Keras-style API. To do so, we've been working on doing audio recognition directly in the browser, and I want to highlight simply how that works.

  • Really simple spoken commands like up, down, left, and right can be run through an FFT to build a spectrogram. So we take audio in, build a spectrogram as an image, and train our model on that. And we can actually build that convolutional network pretty simply with our Layers API.

  • The first step is the same as our polynomial fitting: we include the tfjs package. And then we're going to build a sequential model.