
  • Good afternoon.

  • Welcome to the future forum, a series of discussions where we

  • are exploring trends that are changing the future.

  • This series is presented by the Sloan Fellows from the Stanford MSX program.

  • My name is Ravi.

  • I'm an engineer by training, with over ten years of experience.

  • I've been fortunate to design and develop products for

  • some of the leading high tech companies here in the US.

  • Currently, as a Sloan Fellow, I'm privileged to spend

  • a year in Silicon Valley and at the Stanford Graduate School

  • of Business participating in the evolution of technology and

  • learning from some of the brightest minds in business.

  • The MSX Program is a full time on-campus one-year management

  • degree specifically designed for accomplished and

  • experienced professionals from around the world.

  • My classmates on average have over 13 years of experience,

  • come from over 40 different industries, and have been leaders in driving change.

  • Today I have the honor of introducing Professor Andrew Ng.

  • Andrew is one of the leading thinkers in artificial intelligence with research

  • focusing on deep learning.

  • He has taught machine learning for

  • over 100,000 students through his online course at Coursera.

  • He founded and led the Google Brain project,

  • which developed massive scale, deep learning algorithms.

  • He's currently the VP and chief scientist of Baidu,

  • the co-chairman and co-founder of Coursera, and last but

  • not least, an adjunct professor right here at Stanford University.

  • Please join me, and the 2017 Sloan Fellows in welcoming Professor Andrew Ng.

  • >> Thank you.

  • >> [APPLAUSE] >> Thank you, and thank you, Ravi.

  • So what I want to do today is talk to you about AI.

  • So as Ravi mentioned, right now I lead a large AI team

  • at Baidu, about 1300 scientists and engineers and so on.

  • So I've been fortunate to see a lot of AI applications, a lot of research in AI

  • as well as a lot of users in AI in many industries and many different products.

  • So as I was preparing for this presentation,

  • I asked myself what I thought would be most useful to you.

  • And what I thought I'd talk about is four things.

  • I want to share with you what I think are the major trends in AI.

  • Because I guess the title of this talk was AI is the New Electricity.

  • Just as electricity transformed industry after industry 100 years ago,

  • I think AI will now do the same.

  • So I share with you some of these exciting AI trends that I and

  • many of my friends are seeing.

  • I want to discuss with you some of the impact of AI on business.

  • Whether, I guess, to the GSB and to the Sloan Fellows, whether

  • you go on to start your own company after you leave Stanford, or whether you join

  • a large enterprise, I think that there's a good chance that AI will affect your work.

  • So I'll share with you some of the trends for that.

  • And then talk a little bit about the process of working with AI.

  • This is some kind of practical advice for how to think about,

  • not just how it affects businesses, but how AI affects specifically products and

  • how to go about growing those products.

  • And then finally, I think for the sign up of this event, there was a space for

  • some of you to ask some questions and

  • quite a lot of you asked questions about the societal impact of AIs.

  • I'll talk a little bit about that as well, all right?

  • So the title of this talk is projected, no, I guess not, all right.

  • I think on the website the title was listed as AI is the New Electricity.

  • So it's an analogy that we've been making over half a year or something.

  • About 100 years ago, we started to electrify the United States, right,

  • develop electric power.

  • And that transformed transportation.

  • It transformed manufacturing, using electric power instead of steam power.

  • It transformed agriculture, right.

  • I think refrigeration really transformed healthcare, and so on and so on.

  • And I think that AI is now positioned to

  • have an equally large transformation on many industries.

  • The IT industry, which I work in, [COUGH] is already transformed by AI.

  • So today at Baidu, web search and advertising are all powered by AI.

  • The way we decide whether or not to approve a consumer loan, really that's AI.

  • When someone orders takeout through the Baidu on-demand food delivery service,

  • AI helps us with the logistics.

  • It routes the driver to your door and helps us estimate how long we think it'll take to get to your door.

  • So it's really up and down the stack: both the major services and many other products in the IT industry are now powered by AI, made literally possible by AI.

  • But we're starting to see this transformation of AI technology

  • in other industries as well.

  • So I think FinTech is well on its way to being totally transformed by AI.

  • We're seeing the beginnings of this in other industries as well.

  • I think logistics is part way through its transformation.

  • I think healthcare is just at the very beginnings, but

  • there's huge opportunities there.

  • Everyone talks about self-driving cars.

  • I think that will come as well, a little bit, that will take a little bit of time

  • to land, but that's another huge transformation.

  • But I think that we live in a world where

  • just as electricity transformed almost everything almost 100 years ago,

  • today I actually have a hard time thinking of an industry that I don't think

  • AI will transform in the next several years, right?

  • And maybe throughout this presentation, maybe at the end of doing Q and

  • A, if you can think of an industry that AI won't transform, okay,

  • like a major industry, not a minor one.

  • Raise your hand and let me know.

  • I can just tell you now, my best answer to that.

  • So, [COUGH] sometimes my friends and I actually challenge each other to name an industry that we don't think will be transformed by AI.

  • My personal best example is hairdressing, right, cutting hair.

  • >> [LAUGH] >> I don't know how to build a robot

  • to replace my hairdresser.

  • Although I once said this same statement on stage.

  • And one of my friends, who is a robotics professor, was in the audience.

  • And so my friend stood up, and she pointed at my head, and she said,

  • Andrew, for most people's hairstyles, I would agree you can't build a robot.

  • But for your hairstyle, Andrew, I can-

  • >> [LAUGH] >> All right.

  • So despite all this hype about AI, what is AI doing?

  • What can AI really do?

  • It's driving tremendous economic value, easily billions.

  • At least tens of billions,

  • maybe hundreds of billions of dollars worth of market cap.

  • >> But what exactly is AI doing?

  • It turns out that almost all of this ridiculously huge amount of value from AI, at least today, and the future may be different, but at least today almost all this massive economic value of AI is driven by one type of AI, by one idea.

  • The technical term for it is Supervised Learning.

  • And what that means is using AI to figure out a relatively simple A to B mapping, a relatively simple input-to-response mapping.

  • So, for example, given a piece of email, if I input that,

  • and I ask you to tell me if this is spam or not.

  • So, given an email, output 0 or 1 to tell me if this is spam or not, yes or no?

  • This is an example of a problem where you have an input A, an email, and you want a system to give you a response B, 0 or 1.

  • And this today is done with Supervised Learning.

  • Or, given an image, tell me what the object in this image is, maybe out of a thousand objects or 10,000 objects.

  • Just try to recognize it.

  • So you input a picture and output a number from say,

  • one to 1000 that tells you what object this is.

  • This, AI can do.

  • Some more interesting examples.

  • When you're given an audio clip, maybe you want to output the transcript.

  • So this is speech recognition, right.

  • Input an audio clip and output the text transcript of what was said,

  • so that's speech recognition.

  • And the way that a lot of AI is built today is by having a piece of software

  • learn, I'll say exactly in a second what I mean by the word learn,

  • what it means for a computer to learn, but a lot of the value of AI

  • today is having a machine learn these input to response mappings.

  • Given a piece of English text, output the French translation, or

  • I talked about going from audio to text or maybe you want to go from text,

  • and have a machine read out the text in a very natural-sounding voice.

  • So it turns out that the idea of supervised learning is that, when you have a lot of data with both A and B,

  • today, a lot of the time, we have very good techniques for automatically learning a way to map from A to B.

  • For example, If you have a giant database of emails, as well as annotations of what

  • is spam and what isn't spam, you could probably learn a pretty good spam filter.
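To make the A-to-B idea concrete, here is a minimal sketch of a supervised spam filter, assuming scikit-learn is available; the handful of inline emails and labels are invented stand-ins for the large annotated dataset described above.

```python
# Minimal sketch of supervised learning as an A -> B mapping (spam filtering).
# The tiny inline dataset is an invented stand-in for a large labeled corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

emails = [                       # A: input emails
    "win a free prize now",
    "meeting agenda for tomorrow",
    "cheap meds limited offer",
    "lunch at noon?",
]
labels = [1, 0, 1, 0]            # B: 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)    # turn the text A into numeric features

model = LogisticRegression()
model.fit(X, labels)                    # learn the A -> B mapping from examples

test = vectorizer.transform(["claim your free prize offer"])
print(model.predict(test))              # likely [1], i.e. predicted spam
```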

  • Or I guess I've done a lot of work on speech recognition.

  • If you have, let's say, 50,000 hours of audio, and if you

  • have the transcript of all 50,000 hours of audio, then you could do a pretty good job

  • of having a machine figure out what is the mapping between audio and text.

  • So, the reason I want to go into this level of detail is because

  • despite all the hype and excitement about AI,

  • it's still extremely limited today, relative to what human intelligence is.

  • And clearly you and I,

  • every one of us can do way more than figure out input to response mappings.

  • But this is driving incredible amounts of economic value, today.

  • Just one example.

  • Given some information about an ad and about a user, can you tell me whether this user will click on this ad?

  • Leading Internet companies have a ton of data about this, because we've shown people huge numbers of ads and observed whether they clicked on them or not.

  • So we have incredibly good models for

  • predicting whether a given user will click on a particular ad.

  • And by showing users the most relevant ads this is actually good for

  • users because you see more relevant ads and this is incredibly lucrative for

  • many of the online internet advertising companies, right.

  • This is certainly one of the most lucrative applications we have today,

  • possibly the most lucrative, I don't know.

  • Now, at Baidu, we work with a lot of product managers.

  • And one question that I got from a lot of product managers is, you're trying to

  • design a product and you want to know, how can you fit AI in some bigger product?

  • So, do you want to use this for a spam filter?

  • Do you want to use this to maybe tag your friends' faces?

  • Or where do you want to build speech recognition into your app, and can AI do other things as well?

  • Where can you fit AI into, you know, a bigger product or a bigger application?

  • So, some of the product managers I was working with were struggling to understand

  • what can AI do and what can't AI do.

  • So I'm curious.

  • How many of you know what a product manager is or what a product manager does?

  • Okay good, like half of you.

  • Is that right?

  • Okay, cool.

  • I asked the same question at an academic AI conference and

  • I think only about one fifth of the hands went up, which is interesting.

  • Just to summarize the workflow: at a lot of tech companies, it's the product manager's responsibility to work with users and look at data to figure out what product users desire.

  • To design the features and sometimes also the marketing and the pricing, as well.

  • But let me just say, design the features and figure out what the product is supposed to do, for example, should you have a 'like' button or not?

  • Do you try to have a speech recognition feature or not?

  • So it's really to design the product.

  • They give the product spec to engineering, which is responsible for building it, right; that's a common division of labor in technology companies between product managers and engineers.

  • So the product managers, when I was working with them, were trying to understand what AI can do.

  • So there's this rule of thumb that I gave many product managers, which is that anything that a typical human can do with, at most, one second of thought,

  • we can probably now or soon automate with AI.

  • And this is an imperfect rule.

  • There are false positives and false negatives with this heuristic, so the rule is imperfect, but we found it to be quite helpful.

  • So today, actually at Baidu, there are some product managers running around

  • looking for tasks that they could do in less than a second and

  • thinking about how to automate that.

  • >> [LAUGH] >> I have to say, before I gave this heuristic,

  • someone else had given the product managers a different rule, which was: assume AI can do anything.

  • >> [LAUGH] >> And

  • that actually turned out to be useful.

  • Some progress was made with that heuristic, but

  • I think this one was a bit better.

  • A lot of these

  • things on the left you could do with less than a second of thought.

  • So one of the patterns we see is that

  • there are a lot of things that AI can do, but AI progress

  • tends to be fastest if you're trying to do something that a human can do.

  • For example, build a self-driving car, right?

  • Humans can drive pretty well, so

  • AI is making actually pretty decent progress on that.

  • Or diagnose medical images.

  • If a human radiologist can read an image, the odds of AI being able to do that in the next several years are actually pretty good.

  • There are some examples of tasks that humans cannot do.

  • For example, I don't think,

  • well, very few humans can predict how the stock market will change, right?

  • Possibly no human can.

  • And so it's much harder to get an AI to do that as well.

  • And there are a few reasons for that.

  • First is that if a human can do it, then first,

  • you're at least guaranteed that it's feasible, right?

  • Even if a human can't do it, like predict the stock market,

  • maybe it's just impossible, I don't know.

  • A second reason is that if a human can do it,

  • you could usually get data out of humans.

  • So we have doctors that are pretty good at reading radiological images.

  • And so if A is an image and B is a diagnosis,

  • then you can get these doctors to give you a lot of data,

  • give you a lot of examples of both A and B, right?

  • So for things that humans can do, you can usually pay people, hire people or something, and get them to provide a lot of data, most of the time.

  • And then finally, If a human can do it,

  • you could use human insight to drive a lot of progress.

  • So say an AI makes a mistake diagnosing a certain radiology image, like an X-ray.

  • If a human can diagnose this type of disease, you can usually talk to the human and get some insight into why they think this patient has lung cancer or whatever, and try to code that into the AI.

  • So one of the patterns you see across the AI industry is that progress

  • tends to be faster when we try to automate tasks that humans can do.

  • And there are definitely many exceptions, but I see many dozens of AI projects and I'm trying to summarize the trends I see. These trends are not 100% true, but maybe 80 or 90% true.

  • So for a lot of projects, you find that if the horizontal axis is time and this line is human-level performance,

  • in terms of how accurately you can diagnose X-ray scans, or how accurately you can classify spam email, or whatever,

  • you find that over time the AI will tend to make rapid progress until you get up to human-level performance.

  • And once you surpass it, very often progress slows down, because of these reasons.

  • And so this is great,

  • because this gives AI a lot of space to automate a lot of things.

  • The downside to this is the jobs implication, right.

  • If AI is especially good at doing whatever humans can do, then I think AI

  • software will be in direct competition with a lot of people for a lot of jobs.

  • I would say probably already a little bit now, but even more so in the future.

  • And I'll say a little about that later as well.

  • The fact that we're very good at automating things people can do, and actually less good at doing things people can't do, makes the competition between AI and people for jobs that much more direct.

  • So all right, let me come back to the AI trends.

  • And one reason I'm going to delve a little bit deeper into the AI trends is that I bet some of you will be asked by your friends afterward, what's going on in AI?

  • And I hope to give you some answers that let you speak intelligently to others about AI.

  • It turns out many of the ideas behind AI have been around for many years, frankly, several decades.

  • But it's only in the last several years, maybe the last five years,

  • that AI has really taken off.

  • So why is this?

  • When I'm asked this question, why is AI only now taking off?

  • There's one picture that I always draw.

  • So I'm going to draw that picture for you now.

  • Which is [COUGH] that, if on the horizontal axis,

  • I plot the amount of data,

  • [COUGH] And on the vertical axis,

  • I plot the performance of our AI system.

  • It turns out that several years ago, maybe ten years ago,

  • we were using earlier generations of AI software, earlier generations of the most common machine learning algorithms, to learn these A to B mappings.

  • And for the earlier generations — so this curve is for earlier machine learning; sorry, let me call these traditional machine learning algorithms, all right.

  • It turns out that for

  • the earlier generations of machine learning algorithms, even as we

  • fed it more data, its performance did not keep on getting better.

  • It was as if beyond a certain point,

  • it just didn't know what to do with all the additional data you are now giving it.

  • And here by data, I mean the amount of A, comma B data, right?

  • With both the input A as well as the target B that you want to output.

  • And what happened over the last several years is that, because of Moore's Law and also GPUs, maybe especially GPU computing, we have finally been able to build machine learning software that is big enough to absorb these huge data sets that we have.

  • So what we saw was that, if you feed your data into a small neural network — we'll say a little bit later what a neural network is, but it's an example of machine learning technology.

  • You may also have heard the term deep learning, which is working really well but is also a bit overhyped; neural network and deep learning are roughly synonyms.

  • Then with a small neural network, the performance looks like that.

  • If you build a slightly larger neural net, the performance looks like that.

  • And only if you have the computational power to build a very large neural net does your performance kind of keep on going up, right?

  • Sorry, I think this line should be strictly above the others, something like that, right?

  • And so what this means is that in today's world,

  • to get the best possible performance, in order to get up here, you need two things.

  • First, you need a ton of data, right?

  • And second, you need the ability to build a very large neural network.
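As a hedged illustration of that picture, the toy experiment below trains a "small" and a "larger" network on growing amounts of data and prints their held-out scores. The synthetic dataset, the scikit-learn MLPs, and the chosen sizes are all invented for this sketch; the exact numbers are illustrative only.

```python
# Toy sketch of "performance vs. amount of data" for a small vs. a larger
# neural network. Synthetic data and network sizes are illustrative
# assumptions, not figures from the talk.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_all = rng.uniform(-3, 3, size=(20000, 5))
y_all = np.sin(X_all).sum(axis=1) + 0.1 * rng.normal(size=20000)  # nonlinear target
X_test, y_test = X_all[-5000:], y_all[-5000:]                     # held-out data

for n_train in (200, 2000, 15000):                  # increasing amounts of data
    for hidden in ((4,), (128, 128)):               # "small" vs. "larger" network
        net = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000, random_state=0)
        net.fit(X_all[:n_train], y_all[:n_train])
        print(f"n={n_train:6d}  hidden={hidden}:  "
              f"held-out R^2 = {net.score(X_test, y_test):.3f}")
```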

  • And large is relative, but because of this, I think the leading edge of AI research, the leading edge of neural net research, is today shifting to high-performance computing, HPC, or supercomputers.

  • So in fact today, the leading AI teams tend to have this org structure, where you have an AI team with some machine learning researchers, which abbreviates to ML,

  • and some HPC, high-performance computing or supercomputing, researchers, working together to build the really giant computers that you need in order to hit today's levels of performance.

  • I'm seeing more and more teams that have an org structure like this.

  • And the org structure is set up like this because, frankly, the work we do at Baidu, for example, requires such specialized expertise in machine learning and such specialized expertise in HPC that there's no one person on this planet who knows both subjects to the level of expertise needed, right?

  • So, let's see.

  • So let me go even further into, [INAUDIBLE], the questions that some of you asked on the website when signing up for this event; some of you asked about evil killer AI taking over humanity and so on- >> [LAUGH]

  • >> People do worry about that.

  • So to kind of address that, I actually want to get just slightly technical and

  • tell you what is a neural network, right?

  • So a neural network is loosely inspired by the human brain, right? A neural network is a little bit like a human brain, all right.

  • So that analogy I just made is so easy for people like me, right,

  • to make to the media, that this analogy tends to make people think

  • we're building artificial brains, just like the human brain.

  • The reality is that today,

  • frankly, we have almost no idea how the human brain works.

  • So we have even less idea of how to build a computer that works just like

  • the human brain.

  • And even though we like to say, neural net works a little bit like the brain,

  • they are so different that I think we've gone past the point where that analogy is

  • still that useful, right?

  • It's just that maybe, we don't have a better analogy right now to explain it.

  • But so then, let me actually tell you what a neural network is, and

  • I think you'll be surprised at how simple it is, right.

  • So let me show you an example of the simplest machine learning problem,

  • which is, let's say you have a data set where you want to predict the price of

  • a house, right?

  • So you have the data set where the horizontal axis is the size of the house,

  • and the vertical axis is the price of the house, square feet, dollars.

  • So you have some data set like this, Right?

  • And so well, what do you do?

  • You fit a straight line to this, right?

  • So this can be represented by a simple neural network,

  • where you input the size, And you output the price, okay?

  • And so just this straight line function is represented via a neuron,

  • which I'm going to draw in pictures as a little circle like that, okay.

  • And if you want a really fancy neuron, maybe it's not just fitting a straight line; maybe it's a bit smarter and realizes that the price should never be negative or something. But to a first approximation, let's just say it's fitting a straight line, maybe clamped so it doesn't go negative or something, [SOUND].

  • Now, this is maybe the simplest possible neural network: one input, one output, with a single neuron.
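In code, that single neuron is nothing more than a straight-line function, optionally clamped so the price never goes negative; the weight and bias below are arbitrary placeholder values.

```python
# A single "neuron" for house prices: a straight line, optionally clamped at
# zero so the prediction is never negative. Weight and bias are arbitrary.
def neuron(size_sqft: float, w: float = 150.0, b: float = 20000.0) -> float:
    return max(0.0, w * size_sqft + b)   # price = max(0, w * size + b)

print(neuron(1000))   # 170000.0 with these placeholder values
```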

  • So what is a neural network?

  • Well, you just take a bunch of these things and string them together.

  • So instead of predicting the price of house just based on the size,

  • maybe you think that the price of a house actually depends on several things,

  • which is, first, there's the size, and then there's the number of bedrooms.

  • And depending on the square footage and the number of bedrooms,

  • this tells you what family size this can comfortably support, right.

  • Can this support a family of two, a family of four,

  • a family of six, whatever, right, and then, well, what else?

  • Based on the zip code of the house, as well as the average wealth of the neighborhood, maybe this tells you about the school quality, right.

  • So you have these little neurons: one that tells us the family size a house can support, one that tells us the school quality, and maybe the zip code also tells us how walkable this is, right?

  • And if I'm buying a house, maybe ultimately I care about the family size it can support, whether this is a walkable region, and what the school quality is.

  • So let's take these things and string them into another neuron, another linear function or something like it, that then [SOUND] outputs the price, okay?

  • So this is a neural network, and one of the magical things about a neural network is this: I gave this example as if, when we're building this neural network, we had to figure out that family size, walkability, and school quality are the three most important things that determine the price of a house, right.

  • As I drew this neural network, I talked about those three concepts, but part of the magic of the neural network is that when you are training one of these things, you don't need to figure out what the important factors are; all you need to do is give it the input A [SOUND] and the response B [SOUND], and it figures out by itself all of the intermediate things that really matter for predicting the price of a house.

  • And part of the magic is when you have a ton of data, when you have enough data,

  • A and B, it can figure out an awful lot of things by itself, all right?
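Here is a hedged sketch of that point: a tiny network is given only raw inputs A (size, bedrooms, a neighborhood-wealth proxy) and prices B, and learns its own hidden features rather than being told about family size, walkability, or school quality. The data-generating rule and the scikit-learn setup are invented for illustration.

```python
# Hedged sketch: give the network only (A, B) pairs and let it learn its own
# intermediate features. The synthetic pricing rule below is invented.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
size = rng.uniform(500, 4000, 5000)       # square feet
bedrooms = rng.integers(1, 6, 5000)       # number of bedrooms
wealth = rng.uniform(0, 1, 5000)          # stand-in for neighborhood wealth

A = np.column_stack([size, bedrooms, wealth])                       # inputs
B = 150 * size + 20000 * bedrooms + 100000 * wealth \
    + rng.normal(0, 10000, 5000)                                    # prices

# Three hidden units play the role of the learned intermediate concepts.
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000,
                                 random_state=0))
net.fit(A, B)                              # only A and B are ever provided
print(net.predict([[1500, 3, 0.5]]))       # predicted price for a new house
```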

  • I've taught machine learning for a long time,

  • I was a full-time faculty at Stanford for over a decade,

  • now I'm still adjunct faculty in the CS department.

  • But whenever I teach people the mathematical details of a neural network,

  • often I get from the students like almost a slight sense of disappointment [LAUGH].

  • Like is this really this simple, [LAUGH] you gotta be fooling me, but

  • then you implement it and it actually works when you feed it a lot of data.

  • Because all the complexity,

  • all the smarts of the neural network comes from us giving it tons of data.

  • Maybe tens of thousands or hundreds of thousands or more of houses and their prices, and only a little bit of it comes from the software.

  • The software itself is really not that complicated, right.

  • The data is a far larger source of the smarts of the neural network than the software that we have to write, [SOUND], so, and let's see, yeah.

  • One of the implications of this is, [SOUND] when you think about building businesses, when we think about building products and businesses, what are the scarce resources, right?

  • If you want to build a defensible business that deeply incorporates AI,

  • what are the moats?

  • Or how do you build a defensible business in AI?

  • Today, we're fortunate that the AI community,

  • the AI research community is quite open.

  • So almost all, maybe all of the leading groups,

  • tend to publish our results quite freely and openly.

  • [SOUND] And if you read our papers at Baidu, we don't hold anything back.

  • If you read our state of the art speech recognition paper, our state of the art

  • face recognition paper, we really try to share all the details.

  • And we're not trying to hide any details, right.

  • And many leading, researchers in AI do that, so

  • it's difficult to keep algorithms secret anyway.

  • So how do you build a defensible business using AI?

  • I think today, there are two scarce resources.

  • One is data: it's actually very difficult to acquire huge amounts of data, right, A comma B pairs.

  • Maybe to give you an example, well, a couple of examples: speech recognition, as I mentioned just now, we've been training on 50,000 hours of data.

  • This year, we expect to train on about 100,000 hours of data.

  • That's over 10 years of audio, right?

  • So literally, if I pulled out my laptop and started playing audio to you, to go through all the data our system listens to, we'd still be here listening until the year 2027, I guess, right?

  • So this is massive amounts of data that is very expensive to obtain.

  • Or take face recognition.

  • We've done work on face recognition.

  • So to give some numbers, the most popular academic computer vision benchmark slash competition has researchers work on about 1 million images, right,

  • and the very largest academic papers in computer vision report results on maybe 15 million images, for the kind of task of recognizing objects in pictures or whatever.

  • At Baidu, to train our really leading edge, possibly best in the world,

  • but I can't prove that, definitely very, very good face recognition system.

  • We train it on 200 million images, right,

  • so this scale of data is very difficult to obtain.

  • And I would say that, honestly, if I were leading a small team of five or ten people, I would have no idea, frankly, how to replicate this scale of data and build a system like we're able to at a large company, with access to just massive-scale data sets.

  • And in fact, at large companies,

  • sometimes we'll launch products, not for the revenue, but for the data, right?

  • We actually do that quite often.

  • Often I get asked, can you give me a few examples, and the answer, unfortunately,

  • is usually no, actually.

  • But I frequently launch products where my motivation is not revenue but

  • is actually data, and we monetize the data through a different product.

  • So I would say that today, in the world of AI, there are two scarce resources. The scarcest resource today is actually talent, because AI needs to be customized for your business context.

  • You can't just download an open source package and apply it to your problem.

  • You need to figure out where does the spam filter fit in your business or

  • where does speech recognition fit in your business.

  • And what context, where can you fit in this AI machine learning thing?

  • And so this is why there is a talent war for AI: because at every company, to exploit your data, you need AI talent that can come in to customize the AI, figure out what is A and what is B, where to get the data, and how to tune the algorithm to work for your business context.

  • I'd say maybe that's a scarce resource today.

  • And then second is data is proving to be a defensible barrier for

  • a lot of AI-powered businesses.

  • So there's this concept of a virtuous circle

  • [COUGH] of AI that we see in a lot of products as well.

  • Which is, [COUGH] you might build a product, [COUGH] right?

  • For example, we built a speech recognition system to enable voice search, right, which we did at Baidu.

  • And some of the US search companies have done that too.

  • So you have the speech recognition system, or whatever, some product, and because it's a great product, you get a lot of users, right?

  • The users using the product naturally generates data, right, and

  • then the data through ML feeds into our product to make the product even better.

  • And so this becomes a positive feedback.

  • That often means that the biggest, most successful, best product often has the most users.

  • Having the most users usually means you get the most data, and

  • with modern ML, having the most data sometimes, usually,

  • often means you can do the best AI, that's machine learning.

  • And therefore have an even better product, and

  • this results in a positive feedback loop into your product.

  • And so when we launch new products,

  • we often explicitly plan out how to drive this cycle as well.

  • And I'm seeing pretty sophisticated strategies in terms of deciding how to

  • roll out the product, sometimes by geography, sometimes by market segment, in order to drive this cycle, right?

  • Now, this concept hasn't been around for that long, but it has become a much stronger positive feedback loop just recently, for the following reason.

  • Traditional AI algorithms worked like that curve, so beyond a certain point, you didn't need more data, right?

  • This axis is data, this is performance.

  • So I feel like ten years ago data was valuable, but it created less of a defensible barrier, because beyond a certain threshold, more data just didn't really matter.

  • But now that AI works like this, data is becoming even more important for creating defensible barriers for AI businesses.

  • Let's see, all right.

  • Strike that question then.

  • Several of you asked me about — actually, Ravi was kind enough to take the audience questions from the sign-up form and summarize them into major categories.

  • So he summarized the questions into a few major heading categories, right?

  • One of them was AI's societal impact.

  • One was practical questions about AI.

  • One of the headings that Ravi wrote was scared.

  • As in, will AI take over the human race or kill humans or whatever?

  • So, this up here is the virtuous circle of AI.

  • And then there is also a, I'm not sure what to call it,

  • I'm going to call it the non-virtuous- >> [LAUGH]

  • >> Circle of hype.

  • >> [LAUGH] >> When preparing for

  • this talk, I actually went to a thesaurus to look up antonyms, opposites,

  • of the word virtuous, and vile came up.

  • But I thought, [LAUGH], vile circle of hype was a bit too provocative, you know.

  • But I feel like, unfortunately, there is this evil AI hype: AI taking over the world from humans, whatever.

  • Unfortunately, some of that evil AI hype, right, fears of AI,

  • is driving funding, because what if AI could wipe out the human race?

  • Then sometimes we have the individuals,

  • or sometimes government organizations or whatever.

  • They now think, well, let's fund some research, and

  • the funding goes to anti-evil AI.

  • >> [LAUGH] >> And

  • the results of this work drive more hype, right, and I think this is actually a very unhealthy cycle that a small part of the AI community is getting into.

  • And I'll be honest.

  • Unfortunately, I see a small group of people, it's a small group, with a clear

  • financial incentive to drive the hype, because the hype drives funding to them.

  • So I'm actually very unhappy about this hype.

  • And I'm unhappy about it for a couple of reasons.

  • First I think that there is no clear path to how AI can become sentient, right?

  • Part of me, I hope that there will be a technological breakthrough that enables AI

  • to become sentient, but I just don't see it happening.

  • That breakthrough might happen in decades.

  • It might happen in hundreds of years.

  • Maybe it'll happen thousands of years from now, I don't know.

  • I really don't know.

  • The timing of technology breakthroughs is very hard to predict.

  • I once made this analogy that worrying about evil AI killer robots today

  • is a little bit like worrying about overpopulation on the planet Mars, right?

  • >> [LAUGH] >> And

  • I do hope that someday we'll colonize Mars and

  • maybe someday Mars will be overpopulated.

  • And someone will ask me, Andrew, there are all these young, innocent children dying of pollution on Mars, how can you not care about them?

  • And my answer is, we haven't landed on the planet yet, so I don't know how to work productively on that problem.

  • >> [LAUGH] >> So, maybe there's a dilemma.

  • If you ask me, do I support doing research on X, right?

  • Do I support research on almost any subject?

  • I usually want to say yes, of course.

  • Isn't research on anti-evil AI a positive thing?

  • But I do see that there's a massive misallocation of resources.

  • I think if there were two people in the United States, maybe ten people in the United States, working on anti-evil AI, that would be fine.

  • Ten people working on overpopulation on Mars is actually fine: form a committee, write some papers.

  • >> [LAUGH] >> But

  • I do think that there is much too much investment in this right now, right?

  • So yeah, so sleep easy.

  • And maybe the other thing: quite a lot of you asked about the societal impact, which I was glad to see.

  • The other thing I worry about is this evil AI hype being used to

  • whitewash a much more serious issue, which is job displacement, right?

  • So frankly, I know a lot of leaders in machine learning, right?

  • And I talk to them about their project.

  • And there's so many jobs that are squarely in the cross hairs of my friends'

  • projects, and the people doing those jobs, frankly, they just don't know, right?

  • And so, in Silicon Valley, we have been responsible for creating tremendous wealth, but part of me feels like we need to be responsible as well for owning up to the problems we cause, and I think job displacement is the next big one, thank you.

  • Thank you.

  • And I'm going to say just a little bit more about that at the end.

  • And we shouldn't whitewash this issue by pretending that there's some other futuristic fear to fearmonger about, and trying to solve that while ignoring the real problem.

  • We'll see.

  • So the last thing I want to talk about is,

  • AI product management.

  • So AI is evolving rapidly, it's super exciting, there are opportunities left and right, but I want to share with you some of the challenges I see as well, right?

  • Even for some of the things we're working on, that I end up leading as well, I feel like our own thinking is not yet mature.

  • But these are challenges you run into if you try to incorporate AI into a business.

  • So, AI product management.

  • So maybe many of you know what a PM is, but let me just draw for

  • you a Venn diagram.

  • That's my simple model of how PMs and engineers should work together, right?

  • So let's say this is the set of all things that users will love.

  • Right, so the set of all possible things,

  • all the possible products that users will love.

  • And this is a set of all things that are feasible.

  • Right, meaning that today's technology or

  • technology now or the near future enables us to build this, right?

  • So, for example, I would love a teleportation device, but I don't think that's technologically feasible, so the teleportation device would sit over here: we'd all love one, but I don't think it's feasible.

  • There are a lot of things that are feasible but that no one wants.

  • We build a lot of those in Silicon Valley as well.

  • And I think the secret is to try to find something in the middle, right?

  • And so, roughly, I think of the PM's job as figuring out what's in the set on the left,

  • and research and engineering's job as figuring out what's in the set on the right.

  • And then the two kind of work together to build something that's actually in the intersection, right?

  • Now, one of the challenges is that AI is such a new thing that the workflows and processes that we're used to in tech companies are not quite working for AI products.

  • So, for example, in Silicon Valley we have pretty well-established processes for product managers and engineers to do their work.

  • For example, for a lot of apps the product manager will draw a wire frame, right?

  • Where, so for example, actually for the search app, right?

  • The PM might decide well put a logo there, put a Search bar there,

  • put a microphone there, put a camera there, and then put a news feed here,

  • and then actually, well we actually moved our microphone button down here and

  • we'll have a social button.

  • This button, this button.

  • So a product manager would draw this on a piece of paper or on the computer, and an engineer would look at this drawing that the product manager drew, and they would write a piece of software — and this is actually roughly the Baidu search app, yeah?

  • The search bar is up top and the newsfeed is here, right?

  • It combines the search as well as a newsfeed — not very social, a newsfeed — both in one.

  • But, so this works if you're building a lot of apps, like a news app or a social feed app or whatever; this type of working together is a well-established process.

  • But how about an AI app?

  • You can't wireframe a self-driving car — what would a wireframe for a self-driving car even be? Or say you want to build a speech recognition system.

  • The PM draws this microphone button, but that doesn't tell me how good, how accurate, my speech recognition system needs to be.

  • So the old processes don't quite work: the wireframe was the way for the PM and the engineer to communicate.

  • We are still, frankly, trying to figure out what the good ways are for a PM and an engineer to communicate a shared vision of what a product should be.

  • Does that make sense?

  • So PM does a lot of work, goes out, figures out what's important to users and

  • they have in their head some idea what this product should be.

  • But how do they communicate that to the engineer?

  • All right. And so, as a concrete example of that,

  • [COUGH] let's say that you're trying to build a speech recognition system — I know, I keep coming back to speech recognition; my team and I all work on speech recognition, so we talk about that a lot.

  • If you're trying to build a speech recognition system, say to enable voice search, there are a lot of ways to improve the speech recognition system.

  • Maybe you want it to work better even in noisy environments, right?

  • But a noisy environment could mean a car environment, or it could mean a cafe environment — people talking versus car noise on the highway.

  • Or maybe you really need it to work on low bandwidth audio, right?

  • Maybe sometimes users are just in a bad cell phone coverage setting, so

  • you need it to work better on low bandwidth audio.

  • Or maybe you need it to work better on accented speech, right?

  • I guess US has a lot of accents.

  • China also has a lot of accents.

  • What does accented speech mean?

  • Does it mean a European accent, or an Asian accent?

  • If European, does it mean British, or Scottish?

  • You know, what does accented really mean? Or maybe you really care about something else, right?

  • maybe you really care about something else, right?

  • So, one of the practices we've come up with, is that one of the good ways for

  • a PM to communicate with an engineer, is through data, and what I mean is for

  • many of my projects we ask the PM to be responsible for coming up with a data set.

  • For example, give me, let's say, 10,000 audio clips [NOISE] that really show me what you really care about, right?

  • And so, if the PM comes up with a thousand or ten thousand examples of recordings of speech and gives that data to the engineer, then the engineer has a clear target to aim for.

  • So we found that having the PM be responsible for collecting, really, a test set is one of the most effective processes for letting the PM specify what they really care about.

  • And so, if all 10,000 audio clips have a lot of car noise, this is a clear way to communicate to the engineer that you really care about car noise.

  • If it's a mix of these different things, then it communicates to the engineer exactly what mix of these different phenomena the PM wants them to optimize for, right?
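A hedged sketch of that practice: the PM hands over a small manifest of (audio clip, reference transcript) pairs chosen to reflect what they care about, and the engineer tracks word error rate on exactly that set. The `recognize` stub and the clip names are hypothetical placeholders, not a real Baidu API.

```python
# Hedged sketch: the PM specifies what matters via a test set; the engineer
# measures word error rate (WER) on exactly that set. `recognize` and the
# clip names are hypothetical placeholders.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level edit distance (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def recognize(clip_path: str) -> str:
    # Placeholder: in reality this would call the team's speech recognizer.
    return "navigate to the nearest gas station"

pm_test_set = [  # hypothetical clips chosen by the PM (car noise, cafes, accents, ...)
    ("clips/car_noise_001.wav", "navigate to the nearest gas station"),
    ("clips/cafe_002.wav", "what is the weather tomorrow"),
]
wers = [word_error_rate(ref, recognize(clip)) for clip, ref in pm_test_set]
print(f"average WER on the PM's test set: {sum(wers) / len(wers):.2f}")
```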

  • I have to say, this is one of those things that's obvious in hindsight, but

  • that surprisingly few AI teams do this.

  • One of the bad practices I've seen is when the PM gives an engineer 10,000 audio clips, but they actually care about a totally different 10,000.

  • That happens surprisingly often, in multiple companies, right? And I feel like we're still in the process of advancing the bleeding edge of these workflow processes, of how to think about new products.

  • So, here's another example.

  • Some of us work on conversational agents, right? So, with a conversational agent,

  • I might say to the AI, can you please order takeout for me? And then the AI says, well, what restaurant do you want to order from?

  • And you'd say, I feel like a hamburger.

  • So you'd go back and forth, like a conversation, with a chatbot that helps you order food or whatever.

  • So again, if you were to draw a wireframe, the wireframe would be: you say this, the chatbot says this, you say this, the chatbot says this. But this is not a good spec for the AI, right?

  • The wireframe is the easy part, the visual design, you can do that, but

  • how intelligent is this really supposed to be?

  • So the process that we developed for doing this is we ask the PM and the engineer to sit down together and write out 50 conversations that the chatbot is meant to have with you, right?

  • So, for example, you sit down and write the following.

  • Let's say the user, U for user, says, please book a restaurant, [SOUND] right, for my anniversary next Monday.

  • I'm abbreviating this just to write faster.

  • Please book a restaurant for my anniversary.

  • The PM then says, well in this case, [SOUND] I want the AI to say,

  • okay, and do you want flowers?, right?

  • Do you want me to order flowers?

  • [SOUND].

  • [SOUND] What we found is that this then creates a conversation between the PM and the engineer, where the engineer asks the PM, wait, do you want me to suggest an appropriate gift for all circumstances, for every possible occasion?

  • Do I need to suggest, I don't know, what to buy for Christmas, I guess? Or is it only for anniversaries that you want to offer flowers, so I don't have to suggest any other gift for anything other than anniversaries, right? [LAUGH]

  • >> And we found that this process of writing out 50 conversations — the conversational-agent engineers and the PMs sitting down and working through these conversations — is a good process to enable the PM to specify what they think is in the set on the left, what users will love, and for the engineer to tell the PM what the engineer thinks is feasible given today's chatbot technology, right?
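A hedged sketch of what those written-out conversations can look like once they are treated as data: they double as a spec the PM and engineer review together and as a simple regression check. The dialogue follows the anniversary example above; the structure and the `agent_reply` stub are assumptions, not an actual Baidu tool.

```python
# Hedged sketch: the PM/engineer example conversations written down as data,
# so they serve both as a spec and as a regression check. `agent_reply` is a
# placeholder for the real dialogue system.
example_conversations = [
    [
        ("user", "Please book a restaurant for my anniversary next Monday."),
        ("agent", "Okay. Do you want me to order flowers as well?"),
        ("user", "Yes, please."),
        ("agent", "Done. Table booked for next Monday, with flowers delivered."),
    ],
    # ... 49 more conversations covering the situations the PM cares about
]

def agent_reply(history):
    # Placeholder for the chatbot under development.
    return "Okay. Do you want me to order flowers as well?"

# Engineer-side check: does the current agent reproduce the specified replies?
for conversation in example_conversations:
    history = []
    for speaker, text in conversation:
        if speaker == "agent":
            predicted = agent_reply(history)
            if predicted != text:
                print(f"MISMATCH: expected {text!r}, got {predicted!r}")
        history.append((speaker, text))
```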

  • And so this is actually a process that we're using in multiple products,

  • so I think that AI Technology is

  • advancing rapidly and there's so many shiny things in AI.

  • The things you see the most in PR are often the shiniest technology but

  • the shiniest technology is often not the most useful, right?

  • But I think we're still missing a lot of the downstream parts of the value chain: how to take the shiny AI technology that we find in research papers and think about how to build the product or the business.

  • And it definitely feels like, you know, software engineering today has established processes like code review and, you know, agile development.

  • Some of you know what those are, right? But those are established processes for writing code; I think we're still in the early phases of trying to figure out how on earth to organize the work of AI and the work of AI products.

  • And this is actually a very exciting time to enter this field. [COUGH] Let's see.

  • [SOUND] All right, I want to leave time for questions, so, quickly, I want to share with you some specific examples of near-term opportunities in AI — these are things that are coming in the very near future. [SOUND] Let's see.

  • I think I mentioned, [COUGH], well, I mentioned FinTech, I already talked about that. In the near-term future, I think speech recognition will take off.

  • It's just in the last year or two that speech recognition has reached a level of accuracy where it's becoming incredibly useful.

  • So about four or five months ago, there was a Stanford University-led study, led by James Landay, who is a professor of computer science, together with us, Baidu, and the University of Washington,

  • which showed that speech input on a cell phone is 3x faster using speech recognition than typing on the cell phone, right?

  • So, speech recognition has passed the accuracy threshold where

  • you actually are much faster and much more efficient using speech recognition than

  • typing on the cell phone keyboard, and that's true for both English and Chinese. And at Baidu over the past year, we saw 100% year-on-year growth in the use of speech recognition across all of our properties.

  • So I think we're beyond the knee of the curve, where speech recognition will take off rapidly.

  • And so, I guess in the U.S., there are multiple companies doing smart speakers.

  • Baidu has made a somewhat different set of moves, but I think that devices you can command with your voice in your home will also take off rapidly, and we've announced an operating system that we release to hardware makers, so they know that, right?

  • What else?

  • Computer vision is coming a little bit later.

  • You know, I see some things take off faster in China than in the US, and because all of us living in the US are familiar with the US ones, let me lean a little bit toward sharing things I see from China.

  • One thing that is taking off very rapidly is face recognition, [SOUND], and I think that's because China is a mobile-first society, right?

  • Most of us in the U.S. were first on a laptop or a desktop, and then we got our smartphones.

  • A lot of people in China really just have a smartphone, or first get a smartphone and then a laptop or a desktop — or a laptop, I guess, I'm not sure who buys desktops anymore.

  • But because of that, in China, you can apply for an educational loan on your cell phone.

  • And just based on pushing a few buttons, just based on using your cell phone, we will send you a lot of money, right, for your education.

  • So because these very material financial transactions are happening over your cell phone, before we send you a lot of money, we would really like to verify that you are who you say you are, right, rather than sending it to someone that claims to be you but isn't you.

  • So, this in turn has driven a lot of pressure for progress and

  • face recognition, and so face recognition on mobile devices as

  • a means of biometric identity verification is taking off in China.

  • And then we've also done things like — today at Baidu headquarters, instead of, do I have it on me, no, I don't —

  • right, instead of having to swipe an ID card to get inside the office building, today at Baidu I can just walk up, and there's a face recognition system that recognizes my face, and I just walk right through.

  • Just yesterday or

  • the day before, I posted a video on my personal YouTube channel demoing this.

  • You can look that up later if you want.

  • But we now have face recognition systems that are good enough that we trust them with pretty security-critical applications, right; if you look just like me, you can actually get inside my office.

  • >> [LAUGH] >> So we really trust our face recognition

  • system, so it's pretty easy.

  • So let's see, and I think both of these have been obvious to us for some time, so our capital investments have been massive.

  • These are well beyond the point where a small group could be competitive with us

  • unless there's some unexpected technological breakthrough.

  • I'll mention some things a little further out.

  • I'm personally very bullish about the impact of AI on healthcare.

  • I've spent quite a bit of time on this myself.

  • And I think, well,

  • the obvious one that a lot of people talk about is medical imaging.

  • I do find it challenging.

  • Yeah, I do think that a lot of radiologists that are graduating today,

  • will be impacted by AI, definitely, sometime in the course of their careers.

  • If you're planning for a 40-year career in radiology,

  • I would say that's not a good plan.

  • >> [LAUGH] >> But beyond radiology, I think there are many other verticals, some of which we're working on, where there's a huge opportunity.

  • And anyway, and on and on and on, right? And I think FinTech is there.

  • I hope education will get there, but I think education has other things to solve before it's ready to be impacted by AI; but I really think that AI will be incredibly impactful in many different verticals.

  • So let's see.

  • And what I talked about today was the AI technology of today, right, so really supervised learning.

  • And I will say that, for the transformation of all of these industries, there's already a relatively clear road map for how to transform multiple industries using just supervised learning.

  • There are researchers working on other forms of AI as well; you might hear terms like unsupervised learning, or reinforcement learning, or transfer learning — other forms of AI that maybe don't need as much data or have other advantages.

  • Most of those are in the research phase, used in relatively small ways; they're not what's driving economic value today.

  • But many of us hope that there will be a breakthrough in these other areas, and if that comes to pass, then that will unlock additional waves of value.

  • So, let's see, the field of AI has had several winters before, right?

  • The field got overhyped, and then some of the hype went down.

  • So I think there were maybe two winters in AI, right, but many disciplines undergo a few winters and then an eternal spring, and I actually think that AI has passed into the phase of eternal spring.

  • I think one of the questions someone asked was, when will AI no longer be the top technology, or something like that, right?

  • And I feel like, if you look at silicon technology, right, I think silicon technology is in its eternal spring; maybe some other metal, some other material, will surpass it, but the concept of a transistor and computational circuits, that seems like it's going to be with the human race for a long time.

  • And I think we have reached that point for AI, where AI, neural networks, deep learning — I think they will be with us for a long time.

  • I can't be completely sure, of course, but it could be a very long time, because it's creating so much value already and because there is this clear road map for transforming several industries even with the ideas we have; but hopefully there will be even more breakthroughs and even more of these technologies.

  • All right, very last topic: you know, the jobs issue.

  • To the extent that we're causing these problems — the job displacement issue — I think we should own up to it.

  • Just as AI displaces jobs, similar to the earlier waves of job displacement, I think that AI will create new jobs as well, maybe even ones we can't imagine.

  • That's part of why I've actually worked on education for a long time.

  • I think one of the biggest challenges of education is motivation, right?

  • As in, it's really good for you to take these courses and study, but it's actually really difficult for an individual to find the time, and the space, and the energy to do the learning that gives them these long-term benefits.

  • So, after

  • automation replaced a lot of agricultural jobs, the United States built

  • its current educational system, the K-12 and university system.

  • It was a lot of work to build the world's current educational system.

  • With AI displacing a lot of jobs I'm confident that there will be new jobs but

  • I think also we need a new educational system to help

  • people whose jobs are displaced reskill themselves to take on the new jobs.

  • So one of the things that some governments, well,

  • one of the things that we should move toward, is a model of basic income, but

  • not universal basic income where you're paid to, quote, do nothing.

  • I think government should give people a safety net, but pay the unemployed to

  • study, right, to provide the structure that helps the unemployed study, so as to

  • increase the odds of gaining the skills needed to re-enter the workforce and

  • contribute back to the tax base that is paying for this basic income.

  • So I think we need a new New Deal in order to evolve society towards this

  • new world where there are new jobs, but where job displacement

  • is also happening faster than before, and I could say more about that.

  • Finally, really, final, final thing.

  • I know that often, here in the GSB,

  • many of you have fantastic product, business, or

  • social change ideas, and one of the things I hope to do

  • is try to connect, frankly, connect GSB and CS.

  • I think that GSB and CS have really complementary sets of expertise, but for

  • various complicated reasons that we won't get into,

  • the two communities don't seem very connected.

  • So- >> [LAUGH]

  • >> Yeah, I'm in the process of organizing

  • some events that I hope will bring together some CS, some GSB,

  • maybe also some VCs, some capital investments,

  • for those of you interested in exploring new opportunities that AI creates.

  • So if you want to be informed of that, sign up for this mailing list

  • at bit.ly/gsb-ai.

  • There are some things being organized.

  • They're already underway, but actually, instead of taking a picture of this,

  • you can just go and sign up for this on your cellphone right now.

  • >> [LAUGH] >> Yes.

  • [LAUGH] You can do this while I'm taking questions.

  • And some of these things are already underway, but

  • when they're ready to be announced, I'll announce it to the mailing list there, so

  • that you can come in and be connected to some of these other parts of the campus.

  • So with that, I'm happy to take questions, but let me say thank you all very much.

  • >> [APPLAUSE] >> Thanks so much, Andrew.

  • >> Thank you.

  • >> That was a great talk, and a lot of us, I know,

  • want to be engaged in product development and product management in the field of AI.

  • And you've given us a lot of good frameworks

  • to think about these conversations.

  • And the mailing list is right there, in case you want to note it down.

  • So Andrew has graciously agreed to field some questions until about 5:30.

  • So if you have any questions, there are going to be some Sloan Fellows who

  • are going to be moving around the room, so please attract their attention.

  • But I can kick off with a question.

  • I really wanted to ask this question, because it reminded me of my GSB essay,

  • which is: what scares you about AI and why?

  • But I guess you already answered part of that, so maybe you can touch on that.

  • And another question, which I felt was interesting was,

  • what is the role of non-technical leaders in the development of AI?

  • Who's in charge of the ethical decisions being made in directing AI?

  • >> All right, so on what scares me, I already said it, the job displacement.

  • I think that, honestly, part of me, to be really honest with you guys, right?

  • Part of me wonders with the recent presidential election,

  • part of me really wonders if many of us in Silicon Valley,

  • have we really failed a large fraction of America? And I'm being really honest.

  • I'm not saying I agree with everything happening with politics right now, but

  • part of me actually wonders if we created tremendous wealth, but also, frankly,

  • if we left a lot of people behind.

  • And I think it is past time for us to own up to it, and

  • also take responsibility for addressing that.

  • Let's see, what was the other question?

  • >> It was about- >> Ethical.

  • And I think in terms of ethical issues, there are some things,

  • but I think that jobs are so important,

  • I'm just tempted not to talk about anything else.

  • But I think that AI is really powerful, and can do all sorts of things.

  • And we see lots of, I think there are some smaller issues.

  • Such as, AI is sometimes biased, right?

  • For example, if you do a web search, right?

  • We want to make sure that if you search for

  • a certain ethnic group, you don't get lots of results that say, well,

  • check out their criminal records or something like that, right?

  • We don't want AI to exhibit bias.

  • Or, if AI thinks you're male versus female, we don't want to show you

  • very different types of information in ways that confirm gender stereotypes.

  • So I think there are some cultural bias issues.

  • I think there's also openness; the AI community is very open today,

  • and I think we must fight to keep it open.

  • But I think the number one issue by far is actually jobs.

  • Maybe take some questions?

  • How does the microphone work?

  • >> Hi, Catherine Shen here, I'm class of '16, graduated last year.

  • And thanks for the talk.

  • I had a question around, you mentioned the defensibility of AI as the three things,

  • so access to data, talent scarcity, and the positive feedback loop.

  • And one and three, so access to data and the positive feedback loop, seem to really

  • benefit large companies or companies that already have the AI technology.

  • And so I'm wondering at what point is it going to be really tough for

  • startups to, well, become an AI startup.

  • And secondly, for investors, at what kind of scale

  • do those investments need to be made for a startup to be successful?

  • >> Sure, yeah, and just to clarify, I think the scarce resources are data and

  • talent.

  • And then a positive feedback loop is a strategy or

  • a tactic to drive the data, right?

  • So I think that for the problems I talked about, like speech recognition or

  • face recognition, it'd be really difficult for a small company to

  • acquire enough data to train the computer effectively.

  • Unless there's an unexpected technological breakthrough that lets small groups do

  • stuff that can't be done with today's technology.

  • But I think there's lots of small verticals.

  • So for example, take medical imaging.

  • There are some medical diseases where there are so

  • few cases around the world that if you have 1,000 images or

  • something, that might be almost all the data that exist in the world.

  • So that's one.

  • There are just some verticals where there isn't that much data.

  • But I think the other thing is that there are so many opportunities in AI today.

  • Honestly, my team regularly writes full-fledged business plans, does the market

  • research, sizes the market, figures out the economics, and all of that looks good.

  • Even with a full-fledged business plan in a new vertical,

  • we decide, let's not do it.

  • Because we just don't have enough talent to go after all the big opportunities.

  • So we decide, let's not do it,

  • because there's something even bigger we want to do, right?

  • So I think today, we're fortunate to have so many opportunities

  • that there are plenty of opportunities that the large companies are,

  • frankly, not pursuing because today's world has more opportunities

  • than talented AI researchers.

  • >> Question over there.

  • >> Hi. Hi, Andrew, what do you think of the use

  • of AI in the creation, sorry, over here, in the creation of inventions?

  • So it's something that's usually the preserve of the human mind,

  • the use of AI to create inventions, even patentable inventions.

  • >> Yeah, we're seeing the very early phases of this.

  • You know, creativity is a very funny thing, right?

  • So, can AI compose music? It's so subjective.

  • I feel like even with 20-year-old technology, automated music composition by

  • computers, a lot of us thought that the automatic compositions sounded horrible.

  • But there were some people that loved it, even the 20-year-old technology.

  • So I don't know.

  • We're seeing a lot of cool work with AI doing special effects on images,

  • synthesizing what this picture would look like if it were painted by a certain painter.

  • I don't know, it feels like a small, but very interesting area right now.

  • But making complex inventions,

  • like inventing a totally new, very complicated system with many pieces,

  • I think that's beyond what I can see a clear path to today.

  • >> Couple of questions here.

  • >> So- >> Yeah, go ahead.

  • >> So as you drew the other [INAUDIBLE] when you talked about data versus

  • performance, and you said [INAUDIBLE]

  • >> [INAUDIBLE]

  • >> Yeah, so could people hear, or

  • should I repeat for the mic?

  • >> Repeat.

  • >> Sure, so the question is, scale drives a lot of progress in AI.

  • But if Moore's Law is coming to an end,

  • how does that affect the scalability of AI?

  • It turns out, let's see, I think I've seen the road maps

  • of multiple high-performance computing hardware companies.

  • And whereas

  • Moore's Law for single processors doesn't seem to be working very well anymore,

  • I have seen specific and I think credible roadmaps of microprocessing companies that

  • show that for the types of computations we need for deep learning, for neural networks,

  • I am confident that it will keep on scaling for the next several years.

  • And so this is SIMD processing, single instruction, multiple data.

  • It turns out it's much easier to parallelize than a lot of other workloads.

  • Your word processor is actually much harder to parallelize, and

  • a neural network is actually much easier to parallelize.

  • So I feel there is still a lot of headroom for faster computation.
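
To make the parallelism point concrete, here is a minimal sketch (illustrative sizes, assuming NumPy) of why neural-network workloads suit single-instruction-multiple-data hardware: the dominant cost is a batched matrix multiply that applies the same multiply-add across many independent elements.

```python
# Why neural nets are SIMD-friendly: the core op is one big, regular matrix multiply.
# Sizes are illustrative; assumes NumPy.
import numpy as np

batch = np.random.randn(512, 2048)     # 512 independent inputs
weights = np.random.randn(2048, 2048)  # one dense layer's weights

# The same multiply-accumulate is applied across millions of independent elements,
# so GPUs and vector units can execute it in parallel.
activations = np.maximum(batch @ weights, 0.0)  # dense layer + ReLU
print(activations.shape)  # (512, 2048)
```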

  • I will say that when I look across a mix of problems,

  • many of the problems, AI problems, are bottlenecked by data.

  • But many of the problems are also just bottlenecked by computational speed.

  • There are some problems where our ability to acquire data

  • exceeds our ability to process that data inexpensively.

  • So further progress in HPC, which I think there is a roadmap for,

  • should open up more of that value.

  • >> There's a question right behind.

  • >> Is this on?

  • Hello.

  • Hi, Andrew, my name is Erica Lee.

  • I'm a startup founder working in machine learning.

  • So two questions, you mentioned that algorithms aren't like the special

  • sauce to being successful in AI.

  • What do you recommend for people, though, building and working on AI about IP

  • protection or best ways to get around that to still build a valuable product?

  • And then two, you mentioned the relationship between the PM and

  • an engineer about the cycle of data and how to communicate.

  • That's for building a product, though.

  • What about people doing R&D research on reinforcement or unsupervised learning?

  • Is there a certain lifecycle or strategy you would go for,

  • for research breakthroughs, or to improve the research process?

  • >> Yeah, maybe, sure, right.

  • Boy, all right, so I think, yeah, IP protection is one of those things

  • where, if I give advice on it, I get in trouble with lawyers or something.

  • >> [LAUGH] >> Honestly,

  • I don't have a strong opinion.

  • I see a lot of companies file for some patents, but how much you can rely on

  • them for defensibility is an open question; check with your lawyer.

  • I actually don't have a strong opinion on that.

  • We do tend to think strategically about data as a defensible barrier,

  • though, we rely on data.

  • In terms of, you said processes for

  • R&D, right?

  • The academic research community tends to favor novelty.

  • Anything novel and shiny, you can get a paper published.

  • I would say that, maybe if you want to train up a team of engineers,

  • I've supervised PhD students at Stanford for a long time.

  • I feel like if you want to be a deep learning researcher, and

  • if you go to published papers, the formula I usually give people is this.

  • Read a lot of papers.

  • Go beyond reading papers but

  • go and replicate existing research papers yourself.

  • This is one thing that is underappreciated, actually.

  • Even pull back a little bit from trying too hard to invent a new thing.

  • I spend a lot of time replicating published results.

  • I found that to be very good training process for new researchers.

  • And then the human brain is this marvelous thing.

  • It works every time.

  • I've never seen it fail.

  • But if you read enough papers and really study them and understand them and

  • replicate enough results, pretty soon you have your own ideas for

  • pushing forward the state of the art.

  • I've mentored enough PhD students to ascertain with high confidence that this

  • is a very reliable process.

  • And then go submit your paper and get it published.

  • >> Over there.

  • >> Thank you.

  • So I'm a mechanical engineering student aspiring to

  • be a roboticist when I graduate.

  • I was wondering what are the best opportunities for

  • mechanical engineers to go into as it relates to AI and robotics.

  • Would you know that?

  • >> Yeah, so I've seen a lot of ME people take up very successful careers in AI.

  • >> Actually, some of my PhD students, actually one of my PhD students was

  • an ME PhD student, and he transferred to the CS PhD program and he did very well.

  • So I think that robotics specifically has many opportunities.

  • Well, you're a Stanford student, right?

  • >> Yes.

  • >> Cool.

  • I would say, take some CS-AI classes and try to work with the AI faculty.

  • I do think that there are a lot of opportunities to build interesting

  • robots in specific verticals.

  • So I think precision agriculture is a very interesting vertical.

  • Right, so there are now multiple startups using AI.

  • Actually, for example, some of my friends are running Blue River,

  • which is using computer vision to look at specific plants, specifically

  • heads of cabbage, and kill some of them off.

  • They have AI decide which heads of cabbage to kill and which to let live so

  • as to maximize crop yield, right?

  • So that's one application where AI is letting you make, well, life and

  • death decisions, but this is life and death for heads of cabbage, not for humans.

  • >> [LAUGH] >> But

  • it is letting you make life and death decisions, one head of cabbage at a time.

  • But I think that precision agriculture is one vertical.
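
As a rough sketch of the per-plant decision loop described above (names, data, and the keep fraction are all hypothetical, not Blue River's actual system), the decision step can be framed as ranking plants by a vision model's predicted yield and thinning the rest:

```python
# Hypothetical precision-agriculture thinning decision; not any real product's API.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Plant:
    plant_id: int
    predicted_yield: float  # would come from a vision model looking at each plant

def thinning_decisions(plants: List[Plant], keep_fraction: float = 0.7) -> Dict[int, bool]:
    """Keep the highest-predicted-yield fraction of plants; mark the rest for removal."""
    ranked = sorted(plants, key=lambda p: p.predicted_yield, reverse=True)
    n_keep = int(len(ranked) * keep_fraction)
    return {p.plant_id: i < n_keep for i, p in enumerate(ranked)}

field = [Plant(1, 2.1), Plant(2, 0.4), Plant(3, 1.7), Plant(4, 0.9)]
print(thinning_decisions(field))  # {1: True, 3: True, 4: False, 2: False}
```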

  • I don't know, yeah, I think there's actually interesting work on surgical robotics as

  • well, but that has a bigger kind of FDA approval process.

  • So that's a longer cycle.

  • But actually, one of the things taking off in China

  • is companionship robots,

  • more social companionship robots that are being built in southern China.

  • It hasn't really taken off in the US yet, but

  • there are surprisingly many of these things in China.

  • >> Thank you.

  • >> Right there?

  • >> Hi, I'm Phil.

  • I'm cofounder of Eurobaby, it's a Palo Alto based startup that helps parents

  • to understand the developmental needs of their child and pairs them with baby products.

  • I'd love to hear your take on pairing AI with humans.

  • Do you think, for most applications, it's usually faster to focus on

  • an AI-only approach right away, or to actually have a hybrid solution of AI and humans?

  • For example, in self-driving cars or chatbots and so on.

  • >> Yeah, I don't have a general rule for that.

  • It's so case by case, I guess.

  • A lot of speech recognition work is about making humans more efficient in

  • terms of how you communicate with or through a cell phone, for example.

  • And then for self-driving cars, we know that if a car is driving and it wants you

  • to take over, you need maybe 10, 15, maybe even more seconds to take over.

  • So it's incredibly difficult to bring the attention of a distracted human

  • back to take over the car.

  • So that's why I think level four autonomy will be safer than trying to have

  • a human take over at a moment's notice when the car doesn't know what to do.

  • So that might be one case where this mix between full and

  • partial automation is challenging from a user interface point of view.

  • So I don't have a general rule for that.

  • >> There's some questions on the top.

  • Let's go a different direction.

  • >> Okay, so when you talk about opportunities for AI,

  • you mentioned the online education.

  • I just wanted to know more about this.

  • You mentioned that the motivation problem is one of the problems for

  • online education.

  • But do you think this is the biggest challenge that online education is facing

  • that AI could probably solve?

  • Or do you think there are some other challenges for online education?

  • On motivation, I mean,

  • people don't want to spend enough time to finish the whole course.

  • >> Yeah, so I think that's actually right, AI is helping in education, and

  • people have talked about personalized tutors for a long time.

  • And today Coursera uses AI to give you customized course recommendations, and

  • there's also AI for grading.

  • So I would say it's definitely helping at the margins, but

  • I would say that education still has a big digital transformation to go through,

  • maybe even without that much involvement of AI.

  • Maybe one pattern that is true for

  • a lot of industries is first comes the data and then comes the AI.

  • So healthcare follows this pattern.

  • Over the past year, thanks to, well, partially, Obamacare, right,

  • there's a huge movement in the United States,

  • a movement in other countries, too, towards electronic health records.

  • EHRs, so the rise of EHRs, and

  • the fact that X-ray scanners all went from film to digital X-rays.

  • So that wave of digitization has now created a lot of data that AI

  • can eat to create more value.

  • I would say that a lot of education still feels like it's first undergoing

  • the digital transformation.

  • And while AI can certainly help, I think there's still a lot of work to do for

  • just a digital transformation.

  • >> I think there's just one more question on the top.

  • >> Yeah, if we could talk a little bit about how Baidu is using AI for managing

  • your own cloud data centers, primarily IT operations management use cases.

  • >> Sure, so I guess, boy, let's see, I'll give one example.

  • We talked about this.

  • Several years ago, almost three years ago,

  • we did a project showing that we can detect hardware failures,

  • specifically predict hardware failures a day ahead of time using AI.

  • And so this allows us to do preemptive maintenance, a hot swap of hard disks,

  • copying the data off even before a disk fails, thus reducing cost and increasing reliability.

  • We've also been working to reduce the power consumption

  • of the data centers, and some of the load balancing uses AI.

  • I can't point to one big thing, but I feel like many places,

  • AI has had an impact on optimizing various aspects of data center performance.
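
As a rough sketch of the failure-prediction idea (features, data, and model choice here are all made up, not Baidu's actual system), one common framing is a supervised classifier over per-disk telemetry, scored regularly to flag disks worth hot-swapping:

```python
# Hypothetical sketch: predict "will this disk fail within 24 hours?" from telemetry.
# Feature meanings and data are invented; assumes scikit-learn and NumPy.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Each row: [reallocated_sectors, read_error_rate, temperature_c, power_on_hours]
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

clf = GradientBoostingClassifier().fit(X, y)

# In production you would score every disk each day and pre-emptively copy data
# off the riskiest ones before they fail.
print(clf.predict_proba(X[:5])[:, 1])  # failure-risk scores for five disks
```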

  • >> We'd like to stay for longer but we have to leave the room for the next event,

  • so this will probably be the last question.

  • >> Hey, how are you?

  • I actually studied at both CS and the GSB before.

  • So my question is, you actually mentioned the sweet spot for

  • AI progress:

  • if a human can process it in less than a second,

  • then it would be a good problem for AI to solve.

  • Can you comment on the other way, on the other side of the spectrum?

  • In your experience, a problem that would take many more seconds or

  • a long time for humans to process, yet after careful modeling or

  • careful planning, you are able to solve the problem with AI.

  • Can you give some examples on that?

  • >> Yeah, so

  • there are things that AI can do that humans can't do in less than a second.

  • So for example, I think Amazon today does a way better job recommending books to me

  • than even my wife does, right?

  • And the reason is Amazon has a much more intimate knowledge of what

  • books I browse and what books I read than even my wife does.

  • Advertising, honestly, leading Internet companies have seen so much data about

  • what ads people click on and don't click on that they've become remarkably good at that task.

  • So there are some problems where a machine can consume way more data than any

  • human can and model the patterns and predictions.

  • So this is somewhere AI surpasses human performance because it consumes so

  • much data, right, like Amazon knowing my book preferences better than my wife does.
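
A minimal sketch of that "learn from more behavior than any one person could observe" idea, using simple item-to-item similarity over a made-up purchase matrix (not Amazon's actual algorithm), assuming NumPy:

```python
# Hypothetical item-to-item recommendation sketch; data and method are illustrative only.
import numpy as np

# Rows = users, columns = books; 1 means the user bought/read that book.
purchases = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 1],
], dtype=float)

# Cosine similarity between books, estimated from everyone's behavior at once.
norms = np.linalg.norm(purchases, axis=0, keepdims=True)
item_sim = (purchases.T @ purchases) / (norms.T @ norms + 1e-9)

def recommend(user_row: np.ndarray, k: int = 2) -> np.ndarray:
    scores = item_sim @ user_row        # books similar to what this user already liked
    scores[user_row > 0] = -np.inf      # don't re-recommend books they already have
    return np.argsort(scores)[::-1][:k]

print(recommend(purchases[0]))  # indices of the top-2 suggested new books for user 0
```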

  • Let me finish.

  • And then the other thing, for tasks that take a human more than one second to do,

  • a lot of the work of designing AI into the workflow

  • is piecing many small AI pieces together into a much bigger system.

  • So for example, to build a self-driving car, we use AI to look at

  • a camera image, radar, LIDAR, whatever, the sensor data.

  • Let's just say, given a picture of another car,

  • supervised learning estimates the position of the other car.

  • Supervised learning estimates the positions of the pedestrians.

  • But these are just two small pieces, well, two important pieces, of the overall AI.

  • Then there's a separate piece that tries to estimate, well,

  • where is this car going to be in five seconds?

  • Where's this pedestrian going?

  • There's another piece that plans, well, given that all of these objects are moving

  • in this way, how do I plan my car so that I don't hit anything?

  • And then after that, there's then how do I turn the steering wheel?

  • Do I turn the steering wheel five degrees or seven degrees to follow this path?

  • So often a complicated AI system has many small pieces, and

  • the ingenuity is in figuring out where to take this superpower, supervised learning,

  • and put it into this much bigger system that creates something very valuable.
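
To make the "many small pieces" point concrete, here is a minimal, hypothetical sketch of that modular structure (stub components only, nothing like a real autonomy stack): perception detects objects, prediction projects them forward, planning picks a safe target, and control turns the plan into an actuator command.

```python
# Hypothetical self-driving pipeline: perception -> prediction -> planning -> control.
# Every component is a stub; a real stack would use learned models and far richer state.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    kind: str      # "car" or "pedestrian"
    x: float       # position ahead of us, meters
    speed: float   # meters per second along our lane

def perceive(sensor_frame) -> List[Detection]:
    # Stand-in for supervised-learning detectors over camera / radar / lidar data.
    return [Detection("car", 30.0, 8.0), Detection("pedestrian", 50.0, 0.0)]

def predict(dets: List[Detection], horizon_s: float = 5.0) -> List[Detection]:
    # Project each object forward over the planning horizon.
    return [Detection(d.kind, d.x + d.speed * horizon_s, d.speed) for d in dets]

def plan(future: List[Detection], own_speed: float) -> float:
    # Choose a target speed that keeps a safe gap to the nearest predicted object.
    nearest = min(d.x for d in future)
    return min(own_speed, max(0.0, (nearest - 10.0) / 5.0))

def control(target_speed: float, own_speed: float) -> float:
    # Turn the plan into a simple throttle/brake command (negative = brake).
    return 0.2 * (target_speed - own_speed)

detections = perceive(sensor_frame=None)
command = control(plan(predict(detections), own_speed=12.0), own_speed=12.0)
print(command)  # -0.8 here: brake, because the predicted gap is too small
```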

  • >> We'll probably take one more question, from behind.

  • >> I'm Mahidhar, I'm a solutions architect at a company called OTP.

  • My question was, you mentioned about jobs and wealth distribution as well.

  • Since it's a management forum, I wanted to ask, what sort of role do you see for

  • product managers when interacting with, for example, sociologists or

  • the legal profession? Let me give an example.

  • Building a self-driving car,

  • if there's a collision which is about to happen, the developer or

  • the AI has to take into consideration the person driving the car,

  • or the pedestrian it's about to hit.

  • That's a legal question.

  • But there'll be a lot of questions like these.

  • What do you see as the role of management in interacting with different functional areas?

  • >> Yeah, so the most famous example of a variation of what you said is called

  • the trolley problem; it's a classic ethical dilemma from philosophy.

  • Where, I guess, instead of your car, in the classical version,

  • you have a trolley running on rails.

  • And the trolley is about to hit and kill five people.

  • And you have the option of yanking on the lever to divert the trolley to

  • kill one person.

  • So the ethical dilemma is do you yank on the lever or

  • not, because if you do nothing, five people die.

  • If you do something, one person dies, but you killed that person.

  • So are you going to kill someone, right, versus not doing anything?

  • So it turns out that the trolley problem wasn't important, even for trolleys.

  • Right, when we built trolleys and whatever,

  • several hundred years in the history of trolleys,

  • I don't know that anyone actually had to decide whether or not to yank the lever.

  • It's just not an important problem outside the philosophy classes.

  • >> [LAUGH] >> And I think that the self-driving

  • car teams are not debating this; philosophers are debating this.

  • Frankly, if you're ever facing a trolley problem,

  • chances are you made a mistake long ago.

  • You should. >> [LAUGH]

  • >> Now, when

  • was the last time you faced a trolley problem, right, when driving your car?

  • I expect a self-driving car to face it about as often as you have driving

  • your car, right, which is probably pretty much never.

  • So I think right now the problem with self-driving cars is there's

  • a big white truck parked across the road.

  • Your options are slam into the truck and kill the driver, or brake.

  • And we don't always make the right decision on that.

  • So I would solve that first- >> [LAUGH]

  • >> Before solving the trolley problem.

  • >> [LAUGH] >> That's, I think,

  • a good point to end this great talk.

  • Thanks a lot.

  • >> Thank you.

  • >> [APPLAUSE]
