  • [MUSIC PLAYING]

  • NICK KREEGER: Hey, everybody.

  • My name's Nick, and this is my colleague, Yannick.

  • We're going to talk today about JavaScript and ML.

  • TensorFlow.js is a library we launched last year.

  • It's a library for training and deploying ML models

  • in the browser and on Node.js.

  • We want to showcase what you can do today with the platform

  • and where we're going.

  • One of the great parts about the library

  • is there's really no drivers to install.

  • If you run it in the browser, you can get out-of-the-box GPU

  • acceleration.

  • The browser itself tends to be very interactive by nature,

  • which makes for really great applications

  • and demos for using ML.

  • And privacy is a very important part of the library.

  • You can run inference and training locally

  • on the client, which works around all sorts of privacy

  • issues you might have with doing server-side inference

  • or training.

  • And what can you do today with the library?

  • Well, we have a collection of pre-trained, off-the-shelf

  • models that you can use without any knowledge of ML.

  • We also have the ability to take existing Python models

  • and convert them and run them in TensorFlow.js.

  • We also have a full stack for training, inference,

  • and low-level linear algebra.

  • And that runs in the browser and Node.js.

  • And we also have a bunch of platforms

  • that JavaScript can run on outside of just the browser.

  • The first thing I want to showcase

  • is some of our new, off-the-shelf models

  • we've launched.

  • Off-the-shelf models are a bunch of pre-trained models.

  • They are image, audio, and text classification models.

  • And the APIs are all user-friendly.

  • You don't have to worry about converting images to tensors or resizing them.

  • They're very high-level, easy-to-use APIs.

  • These are available on NPM for local application development.

  • We also have pre-compiled hosted scripts.

  • We'll also be working on this a lot in the upcoming year.

  • We'll have more and more models as we go forward.

  • The first model is BodyPix.

  • We were actually showcasing this at our booth.

  • And I want to show you how easy it is to use this model.

  • So the first thing we need to do is include the library

  • and our BodyPix model.

  • This can be done with our pre-compiled scripts.

  • So it's two simple imports.

  • And the next step is to create an image in the DOM.

  • So this is a body-part detection model.

  • So I have a picture of my toddler trying to do yoga.

  • It's kind of a funny picture, because he isn't posed like a typical

  • standing human, which makes finding arms and legs more interesting.

  • So this is Frank.

  • He's trying to do something on the couch.

  • But I want to actually load the model

  • and find body parts on Frank.

  • So the first thing to do is to load the BodyPix model--

  • just a simple, one-line call.

  • And the next step is to call one of the methods we expose,

  • which is estimatePersonSegmentation.

  • And I can pass in a DOM element.

  • This returns a JavaScript object with the width and height of the image

  • and a value for every pixel in the image indicating whether it's

  • an arm, a leg, a head, et cetera.

  • There's also a bunch of really easy-to-use methods for doing

  • filtering on the image.

  • So I can take the results of that

  • and render it directly on the DOM.

  • So it shows head, body, arm, and so on.
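
In code, that flow looks roughly like the sketch below, assuming the @tensorflow/tfjs and @tensorflow-models/body-pix hosted scripts (or NPM packages) are already included; the element id and image are illustrative.

```js
// Assumes the two script tags (or NPM imports) for TensorFlow.js and
// BodyPix are already on the page.
async function run() {
  const img = document.getElementById('person');  // e.g. the picture of Frank

  // Load the pre-trained BodyPix model -- a single call, no drivers needed.
  const net = await bodyPix.load();

  // Estimate the segmentation for the DOM image element.
  const segmentation = await net.estimatePersonSegmentation(img);

  // A plain JavaScript object: width, height, and one value per pixel.
  console.log(segmentation.width, segmentation.height, segmentation.data);

  // The bodyPix package also ships helper methods for turning this result
  // into a mask and drawing it onto a canvas in the DOM.
}
run();
```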

  • Another model we launched just a couple of weeks ago

  • is the Toxicity model.

  • It's an out-of-the-box text classification model.

  • Again, to use this model, we'll use the pre-hosted scripts--

  • two lines of code.

  • And then I'll load the Toxicity model

  • and ask the model to classify just

  • some really lovely text, pretty PG-- you suck.

  • And I'll get a result back, again as a JavaScript object,

  • with the seven labels we use to identify different types of toxic text,

  • the probabilities, and whether each one matches.
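
A minimal sketch of that usage, assuming the @tensorflow-models/toxicity package; the confidence threshold and sample sentence are illustrative.

```js
import * as toxicity from '@tensorflow-models/toxicity';

// Minimum prediction confidence for a label to count as a match (illustrative).
const threshold = 0.9;

const model = await toxicity.load(threshold);
const predictions = await model.classify(['you suck']);

// One entry per label (insult, threat, etc.), each with probabilities
// and a match flag for the input sentence.
predictions.forEach(p => console.log(p.label, p.results[0].match));
```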

  • We also have the ability to take pre-trained Python models

  • and run them directly in browsers.

  • So if you have a pre-trained model today that's already

  • been trained in Python world, we have a command line tool

  • that makes it very easy to serialize the model as a JSON

  • object, and the weights, and distribute them

  • in a web format.

  • We support Saved Model, TFHub, and Keras models.

  • The converter itself right now supports

  • over 170 ops and counting.

  • And we will be TensorFlow 2.0 compatible.

  • I want to walk through how simple it is to use this.

  • I have a Python model.

  • I run it through the command line tool,

  • and then I can easily just load that in my JavaScript

  • application.

  • Very simple.
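
As a rough sketch of those two steps, assuming a Keras HDF5 model; the file names and paths are illustrative.

```js
// Step 1, on the command line: convert the Python model to the web format
// (a model.json plus binary weight files). Paths are illustrative.
//
//   tensorflowjs_converter --input_format=keras ./my_model.h5 ./web_model
//
// Step 2, in the JavaScript application: load the converted model.
import * as tf from '@tensorflow/tfjs';

const model = await tf.loadLayersModel('web_model/model.json');
model.predict(tf.zeros([1, 28, 28, 1])).print();  // sanity check; shape illustrative
```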

  • And with that, I want to hand it off to Yannick

  • to walk through our training APIs.

  • YANNICK ASSOGBA: Thanks, Nick.

  • So in addition to working with pre-trained models,

  • TensorFlow.js also allows you to author and train models

  • directly in JavaScript, both in the browser and in Node.

  • The primary tool for this is the Layers API,

  • which is a Keras-compatible API for authoring models.

  • There's also a lower level op-driven API,

  • if you need fine control over model architecture

  • or execution.

  • And we're going to take a quick look

  • at what TFJS code for training looks like.

  • And the main takeaway is that it's

  • pretty similar to using Keras in Python,

  • but follows JavaScript conventions.

  • So the first step is to import the library.

  • And when working in Node.js, you can also use the Node.js

  • bindings, which execute the TensorFlow operations using

  • native compiled C++ code.

  • If you're on a system that supports CUDA,

  • you can import tfjs-node-gpu to get

  • CUDA-accelerated performance when

  • doing training or inference.
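
In code, the imports look roughly like this; pick the binding that matches your environment.

```js
// In the browser, or in client code bundled with a tool like webpack:
import * as tf from '@tensorflow/tfjs';

// In Node.js, to run ops with native compiled C++ TensorFlow:
// const tf = require('@tensorflow/tfjs-node');

// In Node.js on a machine with CUDA, for GPU-accelerated training and inference:
// const tf = require('@tensorflow/tfjs-node-gpu');
```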

  • And this is what creating a convolutional model

  • for a classification task looks like in JavaScript.

  • As you can see, it's very similar to Keras code

  • in Python.

  • We start by instantiating a model.

  • We add our convolutional layers, and we finish our model

  • definition by adding a flatten operation and a dense layer

  • with a number of output classes.
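
A minimal sketch of such a model; the layer sizes, input shape, and 10-class output are illustrative rather than the exact model from the talk.

```js
const model = tf.sequential();

// Convolutional feature extractor (input shape is illustrative, e.g. MNIST).
model.add(tf.layers.conv2d({
  inputShape: [28, 28, 1],
  filters: 32,
  kernelSize: 3,
  activation: 'relu',
}));
model.add(tf.layers.maxPooling2d({poolSize: 2}));

// Finish with a flatten op and a dense layer sized to the output classes.
model.add(tf.layers.flatten());
model.add(tf.layers.dense({units: 10, activation: 'softmax'}));
```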

  • Similar to Python, we use model.compile to get

  • it ready for training.

  • And here, we specify our loss function and our optimizer.

  • And model.fit is the function that drives the train loop.

  • In JavaScript, it's an async function.

  • So here, we await the result-- that is, we wait for it to be done.

  • Once the model is done training, we can save the model.

  • And here, we're saving it to the browser's local storage.

  • We support saving to a number of different targets,

  • both on the client and on the server.

  • And finally, just like you're used to,

  • you can use model.predict to get a result from the model.
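
Continuing the sketch above; the loss, optimizer, epoch count, storage key, and tensor names are illustrative.

```js
// Get the model ready for training: pick a loss function and an optimizer.
model.compile({
  loss: 'categoricalCrossentropy',
  optimizer: 'adam',
  metrics: ['accuracy'],
});

// model.fit is an async function in JavaScript, so we await the training loop.
await model.fit(trainXs, trainYs, {epochs: 5, batchSize: 64});

// Save to the browser's local storage; other targets include
// indexeddb://, downloads://, and HTTP endpoints.
await model.save('localstorage://my-model');

// And, just like you're used to, run inference.
model.predict(testXs).print();
```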

  • So over the past year, we've also heard feedback

  • from the community on ways we can improve the experience

  • of training with TensorFlow.js.

  • And two particular requested areas

  • are that of data management and data visualization.

  • So we'd like to show you some of the progress we've

  • made in those areas.

  • So first up is tf.data.

  • And it's an API for managing data

  • pipelines to drive training.

  • It's a JS analog to Python's tf.data

  • and provides a whole set of utility functions

  • for data set transformation.

  • And finally, it works with streams.

  • And the lazy evaluation allows you

  • to work with data that doesn't fit in memory,

  • which can be quite important.

  • So let's take a look at a simple example.

  • So here, we load up a CSV file, using the tf.data.csv loader.

  • And we specify that we want to predict the price column, using

  • the isLabel attribute.

  • So this is going to set it as a label

  • in future transformations.

  • So for example, in this map transformation,

  • the price data has been separated out into the ys object.

  • And the rest of the features are in the xs object.

  • Once we've flattened our data, we

  • can now apply typical ML transformation operations,

  • including things like shuffling, which is an ML best practice,

  • and batching, which will do the work of creating properly

  • sized mini-batches for training and knows what to pull into memory,

  • and when, while the training loop is running.

  • And the other kinds of transformations

  • you may want to do here include things like normalization.

  • And finally, we run the train loop on this data set.

  • So model.fitDataset is an analog to model.fit

  • that supports consuming TF data sets

  • and knows how to pull the right stuff into memory as needed.

  • And that's tf.data.
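
Putting those steps together, a sketch of the pipeline; the CSV URL, column name, shuffle buffer, and batch size are illustrative.

```js
// Load a CSV and mark the price column as the label.
const csvDataset = tf.data.csv('https://example.com/house-prices.csv', {
  columnConfigs: {price: {isLabel: true}},
});

const dataset = csvDataset
  // Each element arrives as {xs: {...features}, ys: {price: ...}}.
  .map(({xs, ys}) => ({xs: Object.values(xs), ys: Object.values(ys)}))
  .shuffle(1000)   // ML best practice
  .batch(32);      // properly sized mini-batches, pulled into memory lazily

// fitDataset is the analog of fit() that consumes a tf.data dataset.
await model.fitDataset(dataset, {epochs: 10});
```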

  • The other area where we've been responding

  • to community feedback is that of visualization.

  • And the first thing I want to talk about here

  • is tfjs-vis, which is a library for in-browser visualization

  • of model behavior.

  • So with it, you can view training behavior,

  • model internals, as well as evaluation metrics.

  • And we're going to take a look at the first two.

  • So first, we import the library.

  • And here, you should note that we do provide tfjs-vis

  • as a separate package.

  • And to visualize training behavior,

  • we can use this show.fitCallbacks function.

  • And we're going to specify a named drawing

  • area to render the charts to, as well as our metrics that we

  • want to see.

  • So in one line, show.fitCallbacks

  • will plot our selected metrics, in this case our loss and our accuracy,

  • at the end of each batch and at the end of each epoch.

  • So this lets us view how the model is converging live

  • in the browser.

  • That lets us adjust hyperparameters as usual.
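
A sketch of that call; the drawing-area name, metric names, and training tensors are illustrative.

```js
import * as tfvis from '@tensorflow/tfjs-vis';

// Stream loss and accuracy charts to a named drawing area while training.
const container = {name: 'Training Performance', tab: 'Training'};
const metrics = ['loss', 'acc'];

await model.fit(trainXs, trainYs, {
  epochs: 5,
  callbacks: tfvis.show.fitCallbacks(container, metrics),
});
```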

  • You can also look at the model internals

  • with functions like show.modelSummary

  • and show.layer.

  • And similarly, we pass these functions named drawing areas.

  • And here, we see the architecture of the model,

  • including things like output shapes of the various layers

  • and the number of trainable parameters.

  • And in this example, we also see the distribution

  • of values in the first convolutional layer

  • of this network, including important statistics, like NaN

  • and zero counts, which are useful for debugging models.
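
For example, with illustrative drawing-area names:

```js
// Architecture: layers, output shapes, trainable parameter counts.
tfvis.show.modelSummary({name: 'Model Summary'}, model);

// Value distributions for one layer, including NaN and zero counts.
tfvis.show.layer({name: 'First Conv Layer'}, model.layers[0]);
```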

  • And finally, we also want to announce TensorBoard support

  • in Node.js.

  • Now you can monitor training performance right

  • in TensorBoard when using the TensorFlow.js layers

  • API in Node.

  • You can see what that looks like.

  • So a single line will generate the necessary callbacks

  • to write the model metrics to a TensorBoard log file,

  • using this tf.node.tensorBoard command.

  • Then you can open it in TensorBoard

  • and look at how your training is going,

  • just like you may be used to.
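
A sketch of that in Node.js; the log directory and training tensors are illustrative.

```js
const tf = require('@tensorflow/tfjs-node');

// tf.node.tensorBoard() returns a callback that writes fit() metrics
// to a TensorBoard log directory.
await model.fit(trainXs, trainYs, {
  epochs: 10,
  callbacks: tf.node.tensorBoard('/tmp/tfjs_logs'),
});

// Then, from a terminal:  tensorboard --logdir /tmp/tfjs_logs
```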

  • And with that, I'm going to hand it back to Nick

  • to talk about some of the platforms we execute on.

  • NICK KREEGER: JavaScript's an interesting language

  • because it actually runs in a lot more places than you think.

  • There's the traditional front end-- running JavaScript in the browser.

  • We all know about that.

  • Node.js is a big server-side solution.

  • Very popular.

  • But there's also a growing trend with JavaScript in more places.

  • One of them is desktop applications.

  • Electron is a very, very popular platform

  • for developing applications.

  • Those of you who have used the Spotify desktop

  • application or Visual Studio Code, those

  • are good examples of Electron.

  • And JavaScript is also moving into the mobile space.

  • I want to highlight a couple of examples

  • that we've seen in the industry on all four platforms.

  • First is the browser.

  • Our friends at Google Creative Labs

  • have built a series of experiments

  • to explore how creative tools can be

  • more accessible for everyone.

  • There's going to be a great lightning

  • talk on this tomorrow, and I encourage you to go.

  • And they'll talk about everything they've

  • built with this project.

  • Uber has built an in-browser tool for

  • model-agnostic visualization of ML performance.

  • They use TensorFlow.js for acceleration of their linear

  • algebra--

  • k-means clustering, KL-divergence computations, and so on.

  • They are also giving a great lightning talk

  • about how they use TensorFlow.js to solve this problem

  • for their platform.

  • And again, this is all in-browser.

  • Another really cool industry example is Airbnb.

  • Airbnb built an identity document detection model

  • that they use as a full TensorFlow ecosystem solution.

  • So if you were to upload a government-issued ID

  • as your Airbnb profile picture, that is a very big trust and safety issue.

  • So the Trust team at Airbnb built a TensorFlow model

  • to detect if a profile picture that you're

  • trying to upload directly in the client

  • contains government-issued IDs.

  • They use this in the browser, using TensorFlow.js,

  • as well as on their mobile devices with TFLite.

  • On Node.js, a good example of this being used in the industry

  • is Clinic Doctor and Clinic.js.

  • This is a Node.js performance analysis tool.

  • And they use our Node.js bindings

  • to filter out GC spikes on Node processes

  • that are running, to give a true, accurate CPU

  • performance benchmark.

  • And on the desktop, our Magenta team here at Google,

  • with their music generation models,

  • has built a series of desktop plugins for Ableton Live.

  • So these are little mini-applications-- full desktop applications

  • that use the Magenta models and accelerate them on the GPU.

  • And we have a demo at our booth, as well, for how this works.

  • Again, the really cool part is all JavaScript and GPU

  • acceleration on the desktop, with no CUDA drivers,

  • all through our WebGL bindings.

  • And mobile is another interesting space.

  • WeChat, for example, is one of the most popular apps

  • in the world.

  • They have over one billion total users

  • and have a sub-application platform

  • called the mini-programs.

  • Mini-programs are great because there's no need to install an app

  • in advance-- you use them on demand.

  • And the platform has over one million apps and 1.5 million developers.

  • The mini-program itself is built using JavaScript.

  • And it makes it really easy for developers

  • to create and deploy and share these on the WeChat platform.

  • I actually want to show a demo of one of our TFJS models

  • running on WeChat.

  • So I have just a regular iOS device here,

  • and I'm going to open WeChat.

  • Someone shared with me this TFJS example.

  • And I can load up the application,

  • and it's running our PoseNet model.

  • And if I aim it at Yannick here, I can do--

  • yeah, there we go.

  • So this is just purely done in JavaScript.

  • And it's running our off-the-shelf MobileNet model.

  • And we're doing about 30 frames a second.

  • And this is all done with the WeChat JavaScript platform.

  • YANNICK ASSOGBA: Thank you.

  • So all this work over the past year and the fantastic projects

  • created by the community make us very excited

  • to announce TensorFlow.js 1.0 today.

  • It's available now.

  • And we're super excited to see what the community builds

  • with it and hope that the API stability will

  • make this even easier for developers, going forward.

  • And really with this release, we're

  • focusing on two main things-- providing a stable API that you

  • can build applications on, which makes

  • managing your upgrades easier, and also bringing

  • marked improvements in performance, particularly

  • on mobile devices.

  • And we'll look at that in a bit more detail.

  • So to look at it a bit closer--

  • since we announced TensorFlow.js last year at the Dev Summit,

  • we've been steadily working on performance improvements

  • across a number of platforms.

  • And today, we see increases of about 1.4x to 9x in some

  • extreme cases.

  • So this chart shows inference performance

  • with a batch size of 1 on MobileNet in Chrome.

  • So MobileNet is a mobile-friendly image

  • classification model.

  • And we see inference times going from about 15 milliseconds

  • on a modern laptop with discrete graphics

  • to about 150 milliseconds on the Pixel 2.

  • And over the past year, we've been

  • able to do quite a bit of work to improve performance

  • on iOS devices as well.

  • So really excited for you to try this.

  • So what's next for us?

  • Well, we want to enable you to execute saved models

  • on our Node.js backend without going through the conversion

  • process.

  • And this will open up many more models

  • to be able to serve using the Node.js stack.

  • We want to provide more off-the-shelf models,

  • like we talked about earlier, to make it easier

  • to build ML-powered JavaScript apps without getting

  • into the nitty-gritty of machine learning models.

  • We're always keeping an eye on browser acceleration proposals,

  • like SIMD and WASM, as well as emerging proposals,

  • like WebGPU and WebML.

  • So the browser's only going to get faster, and so will we.

  • And finally, we also want to work

  • on expanding the platforms on which TensorFlow.js can run.

  • So for example, we saw examples of Electron

  • and things like WeChat.

  • We're also working on platforms,

  • like the Raspberry Pi and other hybrid mobile platforms

  • that run JavaScript.

  • So thanks.

  • And for more information about the things we talked about,

  • you can visit any one of these links.

  • Thank you.

  • [APPLAUSE]

  • [MUSIC PLAYING]
