
  • Cool.

  • Hello, everyone.

  • Hey.

  • Good morning.

  • Thanks for coming here.

  • It's probably been a long week.

  • The last couple of days have been crazy here,

  • and the traffic's not been fun either.

  • So hopefully it's all been working out well for you.

  • Some of you have made your way from different parts

  • of the country, many different areas.

  • But I hope it's been worth it.

  • And there's a lot of interesting things that we've been doing.

  • So we want to talk a little bit about some of them.

  • You've heard our keynotes and the great products we have,

  • et cetera.

  • Today I'm going to talk a little bit about machine learning,

  • right?

  • The title says machine learning is not

  • the future, which is kind of weird

  • given that we're talking about machine learning in everything

  • we're doing here.

  • But the point that I wanted to bring across here

  • is it's not really the future.

  • It's part of everything we have today

  • as you're seeing with our products and all the things

  • that we're talking about.

  • We're using machine learning today

  • in everything we're doing.

  • And I want to talk about some of that

  • and tell you how you can do that too.

  • But first, let's start with how this all started.

  • And this goes back to 1907.

  • I don't know how many of you remember this Tik-Tok

  • from "The Wizard of Oz."

  • He's one of the early versions of robots in modern times,

  • really.

  • Although there are references to automatons going back

  • to the 5th century B.C.,

  • and even to the Egyptians and others way back when.

  • But in the modern times, this is probably

  • one of the earliest references to something

  • that was close to a robot.

  • Any of you familiar with Tik-Tok?

  • How many of you are familiar with Tik-Tok?

  • Not too many.

  • Let me tell you a little bit about him.

  • So this one is-- in fact, the term robot

  • wasn't even coined back when Tik-Tok

  • was created by L. Frank Baum.

  • And what this robot would do is it had a winding thing.

  • And you needed to wind the robot.

  • And it would run.

  • And it could do pretty much what a human could do but not

  • really be alive.

  • And it was a great reference to things

  • that we've always wanted to do.

  • And so a lot of this AI and robotics

  • and a lot of these things have been part of science fiction

  • for a very, very long time.

  • In fact, a lot of what we do today in science

  • has been driven by people, authors really,

  • who were able to really build up and think about what

  • the future might be like.

  • This is one example, but there are many, many more.

  • For example, if some of you have read Asimov,

  • he has a bunch of books.

  • And he has this whole robot series,

  • again way back in the '40s, where he talks about robots

  • and what they might be like and the kind

  • of things that come with it.

  • And that was very interesting.

  • A lot of people over the years have

  • been inspired by these science fiction books and movies

  • to do a lot of interesting things.

  • Another example of that is back in the '70s

  • from "The Hitchhikers Guide to the Galaxy"

  • there is this robot called Marvin,

  • which is this depressed robot-- he's always depressed.

  • He's always talking about these things he's not happy about.

  • He's just too smart for all the things happening around him.

  • And it's really a great example of what

  • people have been thinking about what's

  • going to happen in the future.

  • And now going to another example,

  • "Star Trek," another thing.

  • The example that I put here, you can

  • go to Data in "Star Trek" who talks about-- this

  • is a poem he composes, which I thought was pretty funny,

  • and how people have been thinking about AIs

  • and what they would do and what they

  • would be doing in the future.

  • In this case, again, it's like a robot,

  • like a humanoid, somebody who can do a lot of things,

  • but is not quite human.

  • Then coming to more recent times, closer to 2001,

  • if you've seen Steven Spielberg's movie

  • "A.I." itself, they have these mechanical robots

  • that can really do-- that look like humans, that

  • can do all kinds of things.

  • But they don't feel.

  • And then they build this little kid David,

  • who can actually feel and love as well,

  • and really changes how the perception is

  • and how they think about AI or what it means

  • to the people around them.

  • And then much more recently there's

  • this movie called "Her" in 2013, which

  • talks about AI without the shape, without the robotics,

  • and all of that.

  • But an AI that just lives within your computer as an OS,

  • essentially, that can interact with you.

  • That's your assistant, but much more than that.

  • It actually feels as well.

  • It understands things, et cetera.

  • So all these are great.

  • There are so many different things

  • that people have been talking about,

  • people have been thinking about.

  • But these are still science fiction.

  • This is not really what machine learning is about today.

  • This has always been the future.

  • It's still the future.

  • Maybe at some point it will be a reality.

  • But that's not where we are today.

  • That's not what we're talking about.

  • But there are some real things that we can do today.

  • And those are some of the things that I'm

  • going to talk about today.

  • So we've actually made over the last few years

  • real progress in terms of all the different things that we

  • can do with AI and machine learning,

  • in all the products that you see around you.

  • And I'm going to talk a little bit about those.

  • There's so much in there that has

  • benefited from machine learning from what we call AI as well.

  • And so this is a small sample of the products, really,

  • at Google that we made that use machine learning in some way.

  • But at Google, anytime we think of a product

  • there's, of course, programming.

  • And you build it, and you do all sorts of things with it.

  • But machine learning is an integral part of everything

  • we do in building that, because we want these products to be

  • really smart to give you the right things,

  • to not just follow your actions, but really

  • give you the right things when you want them as you want them.

  • And I'll go over some examples of these in later slides

  • as well.

  • So before I go into other things,

  • let's just go a little bit into what deep learning

  • and machine learning is about.

  • And I'm just quickly going to give you some examples

  • on a website that we have.

  • So part of this slide is going to talk about TensorFlow.

  • And recently we put up a site called

  • playground.tensorflow.org that allows

  • you to play with neural nets that allows you to really do

  • different kinds of things, and allows you to really understand

  • how these networks work, how machine learning works,

  • and be able to play with some of those problems.

  • So I'm going to start with a very, very basic problem

  • classification.

  • The goal in this case is there are two kinds of points.

  • I just want to classify whether it's A or B, in this case

  • the orange or the blue ones.

  • And I'm going to use the very simplest case.

  • It's a very simple linear classification,

  • if any of you guys know what that is.

  • But the idea is you have some inputs.

  • In this case, the x-axis and the y-axis are inputs.

  • And based on those two inputs you

  • want to decide if it's a blue one or an orange one.

  • And so what this model is going to learn

  • is some parameters to figure out-- OK.

  • If I get some point, I get an x1 value and an x2 value,

  • how do I decide that?

  • It's some mathematical computation

  • that it needs to do.

  • So in this case, it does that iteratively.

  • Let's just quickly run this.

  • I've just set it up as a linear model.

  • So let's just run it very quickly.

  • And as you see here, it's looking at x1 and x2.

  • And very, very quickly it figures out

  • how to separate those out.

  • It just draws a line in the middle.

  • And it's just optimizing what that line looks

  • like, how to separate these out very simply.

  • So this was a very easy problem.

  • This is typically not how the real problems are,

  • but it's a great place to start.
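
The playground demo is interactive, but the same idea fits in a few lines of Python. The sketch below is not the playground's code: it makes up its own two clusters of points and fits a linear (logistic) classifier on the two inputs, x1 and x2, by gradient descent.

```python
import numpy as np

rng = np.random.RandomState(0)
blue = rng.randn(100, 2) + [2, 2]        # one cluster around (2, 2)
orange = rng.randn(100, 2) + [-2, -2]    # the other around (-2, -2)
X = np.vstack([blue, orange])            # inputs: x1 and x2
y = np.array([1] * 100 + [0] * 100)      # labels: 1 = blue, 0 = orange

w, b, lr = np.zeros(2), 0.0, 0.1
for step in range(200):
    p = 1.0 / (1.0 + np.exp(-(X.dot(w) + b)))   # sigmoid of a line in x1, x2
    grad = (p - y) / len(y)                     # gradient of the logistic loss
    w -= lr * X.T.dot(grad)
    b -= lr * grad.sum()

print("learned line:", w, b, "accuracy:", ((p > 0.5) == y).mean())
```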

  • So let's go to a slightly more complex problem now.

  • So let's go to this one here where

  • you still have those two kinds of points,

  • but now they're clustered differently.

  • So there's one in the center-- is a circle,

  • and the rest are around it.

  • So can we actually solve this using the same kind of methods?

  • Can we actually classify using the same thing?

  • If I run this, do you think it would work?

  • Any guesses?

  • Let's try it out.

  • So it's trying to separate this out.

  • It's basically trying to draw a line.

  • But it's lost.

  • It's really not making any progress

  • if you see what's happening here.

  • So there are a few things you can do about this.

  • One, in terms of classification, again,

  • going a little bit into machine learning into the details,

  • you can add more features, more kinds of things about the input

  • data that help the model understand

  • how to separate them.

  • So if you guys remember your math back from high school

  • or college, in this particular case,

  • given how these clusters are structured,

  • if you looked at a couple more features

  • based on the inputs, like in this case,

  • x1 squared and x2 squared, you can probably

  • use those to separate this out better.

  • So let's try this.

  • Let's add these two and see what happens.

  • And taking those two values, now it

  • knows how to separate those and clearly

  • separate it out into two separate, different classes.

  • So this works.

  • This is great, right?

  • This is what happens or used to happen

  • in machine learning very often for every single thing

  • that you do.

  • You take a bunch of inputs, and you want to solve some problem.

  • It's typically the inputs themselves

  • that are not in the right form

  • that you need to solve the problem.

  • And what you do is called feature generation

  • where you are combining or crossing

  • those features in different ways to really solve your problem.

  • But there's a lot of work you need

  • to do in figuring out the right features that make sense.

  • In this case, it was a simple problem.

  • So we knew x1 squared or x2 squared would work.

  • But in some cases, the problem is not as simple

  • and you really need to do a little bit more.
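
Here is a rough sketch of the feature-generation idea just described, with made-up data rather than the playground's: the blue points sit in a small blob at the origin and the orange points form a ring around it, so a line in (x1, x2) fails, but adding x1² and x2² as extra features lets the same linear model separate them.

```python
import numpy as np

rng = np.random.RandomState(0)
inner = rng.randn(100, 2) * 0.5                       # blue: small blob at the origin
theta = rng.uniform(0, 2 * np.pi, 100)
outer = np.stack([4 * np.cos(theta), 4 * np.sin(theta)], axis=1) \
        + rng.randn(100, 2) * 0.5                     # orange: ring of radius ~4
X = np.vstack([inner, outer])
y = np.array([1] * 100 + [0] * 100)

# Feature generation: append x1^2 and x2^2 to the original inputs.
X_feat = np.hstack([X, X ** 2])

w, b, lr = np.zeros(4), 0.0, 0.05
for step in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X_feat.dot(w) + b)))    # still a linear model, now on 4 features
    grad = (p - y) / len(y)
    w -= lr * X_feat.T.dot(grad)
    b -= lr * grad.sum()

print("accuracy with squared features:", ((p > 0.5) == y).mean())
```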

  • So for those kind of cases, we have

  • something called deep learning which

  • seems to work really well.

  • So let's go back to this example

  • and let's see if we can solve it in a different way as well.

  • So let's take out these two features first.

  • Let's change this linear to a nonlinearity.

  • In this case, I'm going to pick a rectified linear unit (ReLU).

  • And I'm going to add a hidden layer with a few more neurons.

  • So what this is doing is it has some inputs, as we

  • said in this case just x1 x2.

  • I'm adding another layer, which is basically

  • trying to combine those inputs in whatever

  • way is interesting to solve our particular problem.

  • And then let's see if we can train this to actually solve

  • the same problem as well.

  • So if you see this, we took the same features, no feature

  • generation.

  • We just added one layer.

  • And this is a simple problem, so one layer works.

  • And it's very easily able to separate those out.

  • This is essentially what deep learning is doing for you.

  • As you add more layers, it can understand and learn

  • the right layers of abstraction that

  • are the most interesting to solve your current problem.

  • And of course, it depends on the problem.

  • It doesn't have to be a simple classification.

  • It could be regression.

  • It could be recommendation, whatever you are trying to do,

  • really.

  • You can apply similar techniques to do that.
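
And here is a rough sketch of the "one hidden layer with a ReLU nonlinearity" version, again on made-up circle-versus-ring data and not the playground's actual code: no hand-crafted features, just a small hidden layer that learns its own combinations of x1 and x2.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Made-up circle-vs-ring data, as in the previous sketch.
rng = np.random.RandomState(1)
inner = rng.randn(100, 2) * 0.5
theta = rng.uniform(0, 2 * np.pi, 100)
outer = np.stack([4 * np.cos(theta), 4 * np.sin(theta)], axis=1) + rng.randn(100, 2) * 0.5
X = np.vstack([inner, outer])
y = np.array([1] * 100 + [0] * 100)

H = 8                                     # number of hidden neurons
W1, b1 = rng.randn(2, H) * 0.5, np.zeros(H)
W2, b2 = rng.randn(H) * 0.5, 0.0
lr = 0.05

for step in range(2000):
    h = relu(X.dot(W1) + b1)              # the hidden layer learns its own features
    p = 1.0 / (1.0 + np.exp(-(h.dot(W2) + b2)))
    grad_out = (p - y) / len(y)           # logistic-loss gradient at the output
    grad_W2 = h.T.dot(grad_out)
    grad_b2 = grad_out.sum()
    grad_h = np.outer(grad_out, W2) * (h > 0)   # backpropagate through the ReLU
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * X.T.dot(grad_h); b1 -= lr * grad_h.sum(axis=0)

print("accuracy with one hidden layer:", ((p > 0.5) == y).mean())
```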

  • So let's go back to our slides.

  • In a sense, deep learning's basically

  • the same machine learning ideas with more layers

  • there that allow you to get different kinds of abstraction.

  • Of course, as part of that you get many, many more parameters.

  • You learn things differently.

  • This example is actually for what

  • we call Inception, which is our state-of-the-art image

  • model for image classification.

  • And we'll go through some examples of how we use it.

  • But this basically gives you something

  • near human accuracy.

  • So machine learning has been around for a long time,

  • in fact way more than 10 years.

  • What's really changed since then?

  • One of them is more compute, definitely.

  • Over the years, our computers have gotten faster.

  • We have more of them.

  • We've been using newer chips-- CPUs, GPUs, now TPUs here.

  • And so a lot of things that we're doing here--

  • and more compute allows us to really build

  • more complex models, bigger models.

  • But that needs a lot of other things

  • to go with it as well for that to work, in this case,

  • better algorithms and more data help as well.

  • So over the years we've improved the algorithms slowly.

  • A lot of these algorithms that we talk about in deep learning

  • are 30, 40, 50 years old.

  • But over the last 10 years, we've

  • seen some improvements in the right places.

  • For example, in a case of deep learning,

  • there are some improvements to back propagation

  • or the kind of nonlinearities that we use that really make it

  • much easier to optimize those.

  • It used to be really, really hard

  • to train some of these models, even if you had the compute.

  • Even if you had the data.

  • Now it's much easier and there are better techniques

  • to do that.

  • So the math and the algorithms have improved.

  • And of course, the amount of data that we have,

  • that keeps growing.

  • And that really is something that machine learning can help with.

  • If you think about it with small amounts of data humans

  • are great.

  • You can maybe read 10 pages of a book

  • and understand that and sift it down.

  • But what if you had a million pages?

  • You can't really expect a human to go through those million

  • pages, understand those, and summarize it for you.

  • You really need some machines that can automatically

  • do this for you.

  • And that's really where machine learning comes in.

  • So now let's talk a little bit about what's

  • happening in research in machine learning now.

  • We've talked a little bit about science-fiction,

  • mentioned a bunch of products that are using this.

  • And I'll go a little bit more into the products later.

  • But let's talk a little bit about the research, where this

  • is going, some of the examples of where

  • things of made improvements in the recent past.

  • So the first one I want to talk about is AlphaGo.

  • How many of you have heard of AlphaGo?

  • Wow, quite a few.

  • So if you remember, actually almost 20 years ago now,

  • there was Deep Blue, which was a similar chess computer.

  • In some sense it was similar in that the goal was, OK,

  • can we really beat the best humans--

  • Kasparov, the reigning champion at that time-- using computers?

  • And back in 1997 was the first time

  • Deep Blue, which is this computer built by IBM

  • that actually beat the world champion,

  • beat Kasparov in a full championship match.

  • That was pretty amazing.

  • And that was 20 years ago.

  • It took a long time to go from there to Go,

  • partly because Go itself as a game is

  • way more complex than chess is.

  • The possibilities that you have at every level at every step

  • are much, much bigger, much more than what chess is.

  • And so one of the differences there

  • is with Deep Blue you could get away

  • with some smarts and a lot of brute force.

  • You could have a large supercomputer.

  • You have some basic smarts and a lot of rules around it-- there

  • were a lot of chess players there who

  • actually taught Deep Blue, in the sense they would train it

  • on-- OK,

  • here are the good moves,

  • here are the bad ones, et cetera, et cetera,

  • plus allowing it to do a lot of brute force

  • in trying out-- OK, looking ahead how many different

  • moves you can do.

  • Now those same techniques just could not

  • work for Go because of the different possibilities

  • at every step.

  • At every step you can make a really large number of moves,

  • and supporting that, or being able to look ahead more than a few steps,

  • is extremely hard, even for modern-day computers.

  • So what really worked in this case

  • was a combination of smartness and compute.

  • And in case of smartness, they applied

  • deep learning to figure out, OK, looking at a board, what kind

  • of moves make the most sense?

  • And then let's try them out and see what makes sense

  • and which ones of these play well.

  • And so it's a combination of improved algorithms, improved

  • machine learning, combined with the compute power as well.

  • Another example is ImageNet, which

  • is this data set of a million images with 1,000

  • classes of different kinds.

  • In fact, a lot of classes are so close

  • together that it's very hard even

  • for humans who don't know about them to identify

  • without learning.

  • For example, there are breeds of dogs which look

  • very similar, at least to me.

  • For people who really know dogs, of course, it's easier.

  • Over the last few years, over the last five years,

  • especially on this data set, this

  • has really driven the state of vision understanding

  • for machines.

  • So about five years ago, the accuracy

  • that computers would get was maybe 70%, 75%.

  • This was measured as top-five accuracy, which basically asks:

  • is the correct answer among the top five results

  • that the computer predicts?
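
A tiny sketch of that metric, with made-up scores and labels rather than ImageNet itself: a prediction counts as correct if the true class appears anywhere in the model's five highest-scoring classes.

```python
import numpy as np

def top5_accuracy(scores, labels):
    # scores: (num_examples, num_classes); labels: the true class index per example
    top5 = np.argsort(scores, axis=1)[:, -5:]        # five highest-scoring classes
    return np.mean([label in row for row, label in zip(top5, labels)])

scores = np.random.rand(1000, 1000)                  # made-up scores, 1,000 classes
labels = np.random.randint(0, 1000, size=1000)       # made-up ground truth
print(top5_accuracy(scores, labels))                 # ~0.005 for random guessing
```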

  • Now, five years ago this was something like 75%.

  • Today it's way past that.

  • It's over 96%, which is considered to be better

  • than human accuracy.

  • Humans also make mistakes in this kind of thing.

  • So it's really coming a long way and just shows

  • the kind of progress we can make if we really get down

  • to improving machine learning.

  • This one's an interesting one, taking the kind of image models

  • that we've trained and really letting them dream.

  • This is what we call Deep Dream.

  • There are some folks at Google who

  • basically took those neural networks

  • and wanted to understand what's happening in these networks.

  • What are they learning, right?

  • And so in this case, what they did

  • was took a few neurons in the network,

  • just let them train from that point

  • and try to see what we can generate

  • from those neurons, what they might be understanding

  • from the images.

  • And so they converged to some of these.

  • So in some of these images in the top right,

  • it basically started from some kind of palace or whatever

  • and combined those, and [INAUDIBLE] of that.

  • The one in the top left has some fountains in there

  • and a bunch of other things that sort of combined.

  • And this is very interesting because in some ways

  • there are similarities to what people do as well.

  • Of course, this is a very different view.

  • But it's starting to learn in many, many ways.

  • Machine learning is really doing the same kind of learning

  • that people might do.

  • So now I'm going to give you some examples.

  • Let me just go to this and play some, a couple more

  • videos for you.

  • In this case, it's doing what-- it's basically taking the video

  • and applying different kinds of painting styles.

  • In this particular one it's just picking a blue painting style

  • and applying it to the existing video in real time.

  • Now it's a charcoal sketch.

  • This is just an example of how we're

  • mixing machine learning with things like art

  • to do all kinds of interesting things

  • and interesting combinations, things that you

  • can do with these.

  • Here's another one where it makes it look like a newspaper.

  • So this one's based on a paper

  • by Justin Johnson and others here at Stanford.

  • Let me give you another example of similar things

  • that we can do.

  • So this one is interesting where the network

  • learned from a bunch of Chinese characters.

  • It looked at how they're drawn, and was

  • able to really draw completely new Chinese characters it's

  • never seen before.

  • These aren't real characters, actually,

  • for those of you who understand the script.

  • I don't, but then they look very real.

  • Even for people who understand the script or who

  • know the script, they feel like they're almost there,

  • but they're clearly not a real character or anything

  • that they've ever seen.

  • But it's clearly understanding what's happening there.

  • It gets what they're supposed to look like

  • and is able to understand that whole domain as well.

  • I'm going to show you a couple more videos.

  • Here's one where it's basically learned from a lot of numbers.

  • These are house numbers from Street View.

  • Let's just go back here and play it and pause it at the end.

  • And so it's learned a bunch of-- it's looked at house numbers

  • from Street View and is able to really generate those.

  • It's actually never seen these particular numbers.

  • The network is generating them.

  • This is the kind of things that these networks can do today.

  • And the last one, in this case, it's actually generating faces.

  • And so, again, it learned from a large face data set

  • and now is able to generate faces in all kinds of ways.

  • Again, these are not faces it's seen directly.

  • It's generating them based on all its understanding

  • of what faces look like.

  • Yeah.

  • Another one that we've actually released very recently

  • is this model to parse sentences.

  • So this is a syntactic parsing of sentences

  • where it looks at a sentence.

  • In this case, I booked a ticket to Google,

  • and tries to understand what the subjects are,

  • all the grammatical stuff about that,

  • actually things that I personally don't understand,

  • and it's hard for me to learn.

  • But this is able to actually learn

  • from a lot of sentences that were given to it.

  • In this example, it's actually taking the sentence piece

  • by piece, and then building a hierarchy tree,

  • and doing the parsing in this case.

  • This kind of thing is extremely useful

  • when you're trying to understand natural language.

  • And this just goes to show how much these computers can

  • understand the language that you and I speak in some cases

  • better than humans can as well or at least better than kids

  • can.

  • Here's another example, sort of going back to the Data example

  • where in science fiction this-- here's this sort of poem

  • that a computer might generate.

  • This is actually something that was generated by a real program

  • today or a real model today.

  • It's clearly not great, but in this particular case,

  • it just learned from a bunch of books.

  • It read those books or understood those books.

  • And was able to generate this particular poem.

  • So that was all great research.

  • Now let's talk about a few products

  • and how we're using them.

  • So we had this slide early on where

  • I showed you a bunch of products that [INAUDIBLE] using this.

  • Let's go into some of those products.

  • Let's see how we are actually using them.

  • What are the kind of things we're doing with machine

  • learning in these products?

  • First up, search-- I'm sure most of you

  • have-- or probably all of you have used search

  • or are using search in some ways.

  • This is a very good example of how machine learning is used.

  • The slide that I showed you here,

  • there are at least two ways we are

  • using machine learning here.

  • The first: if you use voice search--

  • so if you click that icon on the right for the mic

  • and you say something, there's machine

  • learning being used to understand what you're saying

  • and converting it to text.

  • So you say something.

  • That's really a bunch of bytes for the computer.

  • To convert that into actual English or whatever

  • language you're speaking, it needs to understand that.

  • And there are pretty sophisticated machine

  • learning models that do that for you today.

  • Then the next part is once you have

  • that, what results to show you?

  • Google has billions or trillions of pages really indexed,

  • which you probably don't even think about.

  • What you see typically when you do that search

  • is just 10 pages on that first page.

  • And you really want those 10 pages

  • to be the best ones, the most appropriate ones for you.

  • In some cases, you might go to the next page or so.

  • But most people are just going to look at that first page

  • and make their decision.

  • So how do we really take those billions of pages,

  • match them to your query, and then even then

  • it's probably millions or maybe more,

  • and then whittle them down

  • to those top 10 pages?

  • And that's, again, machine learning

  • at work in many, many forms.

  • We're taking all these pages, understanding your query,

  • matching it through all the data that we have.

  • Maybe if you're looking for a restaurant near Mountain View,

  • we're going to leverage that and understand

  • that that needs to be combined with the kind of pages

  • that are there.

  • If it's something else, again, understanding

  • the query and the documents together, combining them,

  • and then ranking them, and sorting them,

  • that's something that machine learning does really,

  • really well.

  • This example is of pictures.

  • We talked about vision models.

  • They're actually used in everything you do now.

  • So for example, over the years, image collections, the number

  • of photos I have, are huge.

  • They're just growing.

  • With digital cameras coming along, it's

  • been so much easier to take pictures and just

  • keep collecting them.

  • You never throw them away.

  • But then when you want to find a picture

  • and you really want to go through those collections,

  • it's really, really hard.

  • So how can you do that?

  • In this case, the same kind of research

  • that I was talking about, which understands those images,

  • is really indexing and labeling all the pictures

  • you have in Google Photos.

  • And in this particular example, if you

  • search for cherry blossom, it really

  • shows you all your own pictures of cherry blossoms,

  • and can display them for you.

  • Another example is email.

  • So there is this thing called Smart Reply, which

  • we launched a few months ago, which

  • you might be using in email.

  • And what happens there is you get an email.

  • You typically, in this case, you probably

  • just want to get a quick, easy answer.

  • And it just gives you some choices.

  • Here, pick this one.

  • And what it's doing is it's really

  • understanding what your email thread is about

  • and then suggesting a bunch of replies, ranking them by

  • whatever makes the most sense, and showing

  • you the most interesting ones if it makes sense.

  • Another one is Google Music.

  • There are many things happening in there.

  • And one of the things that we do is we recommend a lot of things

  • that you might want to play, maybe albums that you

  • might want to play, maybe actual songs

  • that you might want to play based on the time,

  • based on what you are trying to do right now,

  • based on your interest.

  • It's really understanding all of that, sorting them,

  • and, again, ranking them, recommending

  • the right things for you.

  • We really want to make it easy for you to find what you want.

  • The ideal case would be you go there.

  • Let's say it's afternoon or let's

  • say you're driving back from work.

  • And you typically like to listen to, say, classical songs

  • at that time.

  • It should really just recommend you

  • the classical songs and maybe the top station

  • that you go to every time.

  • You shouldn't have to look for it just because-- just

  • before this, you were doing something else.

  • So all of these are examples where

  • machine learning can really make a difference

  • and is helping us in our everyday lives.

  • And one more that I'm going to talk about is spam.

  • So everybody uses email.

  • A few years ago, maybe 5, 10 years ago,

  • spam was this huge issue for everyone

  • where most of the email in our inboxes

  • was starting to be spam.

  • You had to actually filter it out manually.

  • So it's not that from that time to now spam has reduced.

  • In fact, spam continues to increase.

  • It's just there are filters, and algorithms

  • have gotten a lot better at recognizing what's spam.

  • Again, they're not perfect.

  • Occasionally you'll still see spam.

  • Or maybe we would say something is spam when it's not.

  • But they're getting so much better,

  • that I don't go to my spam box

  • and see what I might have missed,

  • or the other way around.

  • I rarely get emails that are actually spam,

  • even though, if you actually read the numbers about how many spam

  • emails there are, there is still

  • way more, maybe 10 times more spam than the real email

  • itself.

  • So lots of improvements in machine learning,

  • lots of improvements in the products

  • that are driven by machine learning.

  • So it's not about the algorithms or the technology itself.

  • It's about how do we apply it to the products that we build?

  • How do we make things better for all the things

  • that we're trying to do?

  • All of this technology, and the algorithms

  • and the computers that we're building,

  • it's all towards solving real problems,

  • solving real needs for things that we do today.

  • So now let's talk a little bit about TensorFlow.

  • That's the product I lead.

  • And it was built to help push machine learning forward.

  • Our team is a research team.

  • And part of the goal for us was to really push

  • machine learning forward.

  • And that's why we built it.

  • But that's not the angle, right?

  • We want to take that machine learning as well

  • and apply it like I was saying in all of these products.

  • TensorFlow allows us to do that, and now you as well.

  • And I'm going to tell you how we can do this.

  • So it is open source.

  • It's a machine learning library available on GitHub.

  • It's, in fact, the most popular machine

  • learning library-- we open sourced it six months ago,

  • and it's rapidly gone to the top.

  • There's a lot of interest.

  • There are a lot of people using it in all kinds of areas.

  • Some of the research that I talked about was actually done by

  • people outside Google who've taken up TensorFlow and used

  • it in building those new things

  • and new ideas, and people are including it

  • in different kinds of products as well.

  • So at the core, it's a library that's-- let's go a step

  • forward.

  • So what's this library for, right?

  • There are many different things that we can do.

  • Like I was saying, our group is doing a lot of research.

  • So it is for researchers who want to take machine learning

  • and really push it forward, make it much better than it is,

  • improve the algorithms, improve how we can do things with it.

  • So it is for them.

  • But then there are data scientists

  • who want to really take those algorithms-- say

  • you have your data, you have your product.

  • And you want to improve that product

  • by applying this machine learning to the data that you have.

  • And so the data scientists can take those algorithms

  • because they're also open now.

  • They're made available to you in all kinds of places.

  • You can take those.

  • You can use those and make some improvements.

  • And then there are developers, like a lot of us

  • here, actually, who are trying to make, say, an app,

  • and want to make it better, or, say, deploy a product

  • in their data center.

  • And TensorFlow allows you to really take all the work

  • that researchers and data scientists have done,

  • and really run those same models, the trained

  • models, in production using the exact same APIs without having

  • to worry about-- OK,

  • this seems great, but how am I going

  • to deploy this to production?

  • That's another complexity that's harder.

  • So we definitely try to make sure that from the beginning,

  • from the point that you get started in machine learning,

  • to actually pushing it out, to production,

  • it's a very short time cycle.

  • You shouldn't be stuck because of infrastructure.

  • That's taken care of for you.

  • And you can focus on what's important for you

  • in most cases, which is really building the products,

  • helping your customers, helping your users.

  • So just a couple examples of TensorFlow-- not going

  • to go into too much detail on the code,

  • but this is a very simple example where we just

  • take two inputs, do a matrix multiply,

  • and just print out the result. So what it's doing

  • is in the first two lines, it's creating

  • two nodes in the graph.

  • And actually, let me just-- yeah.

  • It's creating two nodes in the graph,

  • and really doing a multiplication, which

  • is another node, which takes the inputs from the other two

  • nodes.

  • And then the last one really evaluates

  • based on the two inputs, combines them,

  • and generates the output.
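
The code on the slide isn't in the transcript, but a minimal sketch of such a graph, written against the TensorFlow 1.x-style API of that era and with made-up constants, looks roughly like this:

```python
import tensorflow as tf  # TensorFlow 1.x; in TF 2.x use tf.compat.v1

a = tf.constant([[1.0, 2.0]])        # node 1: a 1x2 matrix
b = tf.constant([[3.0], [4.0]])      # node 2: a 2x1 matrix
product = tf.matmul(a, b)            # node 3: multiplies the two inputs

with tf.Session() as sess:           # running the graph evaluates the output
    print(sess.run(product))         # [[11.]]
```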

  • This one's actually a regression problem.

  • So we talked about classification earlier.

  • In this case, it's trying to predict some values:

  • you have some inputs,

  • you have some outputs,

  • and you want to learn to predict the outputs.

  • It's basically doing something similar to what we were seeing

  • on the playground, where you're taking inputs,

  • you're multiplying by a matrix, and you have a loss.

  • You're trying to optimize it, which

  • is a typical machine learning thing.

  • And then you just loop through.

  • So at the end, you basically have this:

  • you're going through a bunch of steps, say 200 times

  • in this particular case, and you just print this out.

  • And over time you're basically learning

  • to predict things better.
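
Again, the slide's exact code isn't in the transcript; a rough sketch of that kind of regression loop, in the same 1.x-style API with made-up data (y = 3x + 2) and about 200 training steps, looks like this:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x; in TF 2.x use tf.compat.v1

x_data = np.random.rand(100).astype(np.float32)
y_data = 3.0 * x_data + 2.0                        # targets we want to learn

w = tf.Variable(0.0)
b = tf.Variable(0.0)
y_pred = w * x_data + b
loss = tf.reduce_mean(tf.square(y_pred - y_data))  # mean squared error
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(201):
        sess.run(train_op)
        if step % 20 == 0:
            print(step, sess.run([w, b]))          # should approach 3.0 and 2.0
```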

  • Again, very simplistic examples, typical models

  • might be much bigger.

  • But in a lot of cases, you might also

  • be able to use models that are already there.

  • Why don't we just give you some examples

  • of the kind of performance we provide comparing it

  • to some standard benchmarks that are there on an open source

  • website.

  • And early on when we launched TensorFlow,

  • it wasn't the fastest library out there.

  • And there were a lot of improvements that we had to do.

  • But over the last six months, we've really focused on that.

  • And at this point, it's really at the top,

  • comparable to everything else out there.

  • And we continue to improve it.

  • This is not the end of it.

  • There are many more improvements that you can do.

  • In this case, we're comparing four different kinds of models.

  • In this case, they happen to be all image models.

  • And as you see, overall it's actually comparable.

  • In a few cases, it's just slightly worse.

  • In a few cases it's slightly-- in one case

  • it's slightly better, and so on.

  • So it's roughly about the same ballpark performance

  • at this point.

  • Now, one thing you can do with TensorFlow as well

  • is you can run it across many, many machines

  • rather than running on just one machine.

  • So the previous benchmark that I was showing you

  • was basically running on a single GPU

  • and training these models on GPUs, something

  • that a lot of people do.

  • In this particular case, we're training on many, many more

  • machines, so each machine having a single GPU

  • going all the way to 100.

  • And you can see it scales quite well.

  • In fact, up to 16 it's almost linear.

  • And it still continues to scale as you go past that.

  • And this is something that we use internally as well

  • to train really large models and large data sets, which

  • is-- as the data is going, it's really important to be

  • able to do this, to be able to run on clusters, which many

  • of you probably do already.

  • Oops.

  • So next I'm going to talk a little bit about TensorFlow

  • for Poets.

  • So there's actually a codelab for this

  • as well if you want to look at it online,

  • or I believe there's a place here too.

  • So the idea here is-- let's say you have a bunch of images,

  • in this case flowers, and you want

  • to learn to recognize those flowers,

  • or maybe you just want to identify if it's a flower

  • or which of five different kinds of flowers it is.

  • And what you can do is you don't have

  • to have a million examples.

  • You might just have 1,000 examples, let's say.

  • But you can get started from a model that works really,

  • really well.

  • For example, our best image model

  • that we've open sourced and made available,

  • and that's what this example talks about.

  • Take the top part of it, just train that

  • on your data set from the point that it's already trained on.

  • So it's just going to learn to classify the flowers.

  • It already understands image well,

  • so it doesn't have to really learn what kind of features

  • are important for classifying images.

  • It knows that already.

  • It can just take your particular models, your particular data,

  • and learn to just separate those out into different classes.
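
The codelab itself uses a retraining script; as a rough sketch of the underlying idea only (the 2048-dimensional "bottleneck" features and the random data below are stand-ins, not the codelab's code), you take the pre-trained model's feature vector for each image and train just a small softmax layer on top for your five flower classes:

```python
import numpy as np

num_classes = 5                                       # five kinds of flowers
rng = np.random.RandomState(0)
features = rng.randn(1000, 2048)                      # stand-in "bottleneck" vectors
labels = rng.randint(0, num_classes, size=1000)       # stand-in flower labels
one_hot = np.eye(num_classes)[labels]

W = np.zeros((2048, num_classes))                     # only this top layer is trained
b = np.zeros(num_classes)
lr = 0.01
for step in range(500):
    logits = features.dot(W) + b
    logits -= logits.max(axis=1, keepdims=True)       # for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    grad = (probs - one_hot) / len(labels)            # softmax cross-entropy gradient
    W -= lr * features.T.dot(grad)
    b -= lr * grad.sum(axis=0)

print("training accuracy:", (probs.argmax(axis=1) == labels).mean())
```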

  • So there are a few tools that I would

  • like to mention at the end that you can play with,

  • or that you can use before you go in terms of just making

  • things work well for you.

  • So a lot of you are probably familiar with Apache Spark,

  • which is used for doing analysis and doing

  • learning on large data sets.

  • So there is an integration of Spark

  • with TensorFlow-- somebody at Databricks

  • did that-- where you can actually take some images.

  • You can create.

  • It's very simplistic in the sense

  • you can create a lot of different models

  • and really use it to pick the best model.

  • You do that a lot because there are some hyperparameters

  • like, say, learning rate or how big your model should be,

  • et cetera.

  • You maybe try 10 different models and use

  • that to pick the best one.
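
Here is a rough sketch of that pattern (not the Databricks integration itself): fan the hyperparameter combinations out with Spark, train one model per combination, and keep the best. `train_and_score` is a hypothetical stand-in for training one TensorFlow model.

```python
from pyspark import SparkContext

def train_and_score(params):
    learning_rate, hidden_units = params
    # ... train one TensorFlow model with these settings and evaluate it ...
    accuracy = 0.0                      # placeholder for the real evaluation result
    return (accuracy, params)

# A small grid of hyperparameters to try.
param_grid = [(lr, h) for lr in (0.1, 0.01, 0.001) for h in (32, 64, 128)]

sc = SparkContext(appName="hyperparameter-search")
results = sc.parallelize(param_grid).map(train_and_score).collect()
best_accuracy, best_params = max(results)
print("best settings:", best_params, "accuracy:", best_accuracy)
sc.stop()
```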

  • Another thing I would like to point you to

  • is Google's own Cloud Machine Learning Platform.

  • We announced this a couple of months ago.

  • It's under limited preview right now, but you can sign up,

  • and we will make it accessible to you

  • soon, where you can really have the same power of TensorFlow

  • that you have on your machine say,

  • but maybe don't have the compute resources,

  • or you don't want to manage all the resources that you need

  • to really run at large scale.

  • And cloud takes care of all of that for you.

  • So you can experiment on your machine,

  • make sure you understand what you want to do, and then

  • push it off to the cloud, let it take care

  • of running large scale stuff.

  • There are a few examples of what you

  • can do with models that have already been trained by others

  • and really take them and just apply them.

  • There are some APIs on the phone today

  • for what's called the Mobile Vision APIs, which

  • allow you to do face tracking.

  • So let's say you have pictures-- it

  • identifies parts of the faces.

  • It first identifies the face, and then

  • parts of the face like the eyes, nose, et cetera,

  • so you can do different kinds of things with it.

  • It also recognizes barcodes and QR codes

  • and automatically understands and parses them for you.

  • On the cloud there are also some APIs for vision, speech,

  • and translation, where you can basically just send your image,

  • and it tells you what kind of image

  • it is among a lot of different classes.

  • For speech, again, you can upload data, upload speech,

  • and it will give you back text for whatever was said.

  • For translation, you can give it text in one language,

  • say English or Japanese or whatever makes sense for you,

  • and it's going to convert that to another language.
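
As one example of what calling such an API can look like, here is a sketch of a label-detection request to the Cloud Vision REST endpoint (images:annotate); the API key and the image file name are placeholders you would supply yourself.

```python
import base64
import json
import requests

with open("flower.jpg", "rb") as f:                   # placeholder image file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": image_b64},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

resp = requests.post(
    "https://vision.googleapis.com/v1/images:annotate",
    params={"key": "YOUR_API_KEY"},                   # placeholder API key
    data=json.dumps(body),
    headers={"Content-Type": "application/json"},
)
print(resp.json())   # label annotations such as "flower", "petal", ...
```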

  • So many different APIs, and this is a growing area

  • that you are going to need to watch out for.

  • So now for the future.

  • It is, really.

  • I started by saying it's really not the future, but it is.

  • It is the present and the future.

  • I just wanted to call out that it's important for you

  • to start thinking about it now and not wait for the future,

  • really.

  • And it's really your turn to start

  • thinking about how you can incorporate machine learning

  • into everything that you do.

  • What people are looking for today

  • is not just applications that do what they want.

  • There are tons of things that are already doing that.

  • How can you add that edge?

  • How can you make it really better for that user?

  • By understanding what they want before they

  • ask for every single thing.

  • And that's really something that you can do.

  • Think about this as you build your next application,

  • as you build your next project.

  • And I think that's how we'll change the future.

  • That's how we'll make machine learning part of the future.

  • So I wanted to leave you with some pointers

  • to where you can read more about TensorFlow.

  • It's on GitHub, and the docs are at tensorflow.org.

  • And going back to my reference for Tik-Tok,

  • Tik-Tok is waiting for you to really make the difference

  • to do these things now.

  • And thank you.

  • [APPLAUSE]

  • [MUSIC PLAYING]
