Chris Anderson: Help us understand what machine learning is, because that seems to be the key driver around artificial intelligence. How does machine learning work?
Sebastian Thrun: So, artificial intelligence and machine learning is about 60 years old and had not had much success until recently. And the reason is that today, we have reached a scale of computing and datasets that was necessary to make machines smart.

So here's how it works: If you program a computer today, say, your phone, then you hire software engineers that write a very, very long kitchen recipe, like, "If the water is too hot, turn down the temperature. If it's too cold, turn up the temperature." The recipes are not just 10 lines long. They are millions of lines long. A modern cell phone has 12 million lines of code. A browser has five million lines of code. And each bug in this recipe can cause your computer to crash. That's why a software engineer makes so much money.
The new thing now is that computers can find their own rules. So instead of an expert specifying, step by step, a rule for every contingency, what you do now is give the computer examples and have it infer its own rules.
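A minimal sketch of that idea, using the thermostat scenario above: instead of hand-coding the rule, we hand a classifier labeled examples and let it infer the thresholds itself. All data and values here are invented for illustration.

```python
# Instead of writing "if the water is too hot, turn down the temperature"
# by hand, give the computer labeled examples and let a classifier
# infer the rule itself. All values are made up for illustration.
from sklearn.tree import DecisionTreeClassifier

temperatures = [[20], [35], [50], [65], [80], [95]]         # degrees C
actions = ["heat", "heat", "hold", "hold", "cool", "cool"]  # what we did

model = DecisionTreeClassifier().fit(temperatures, actions)
print(model.predict([[30], [90]]))  # -> ['heat' 'cool'], rules inferred
```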
A really good example is AlphaGo, built by DeepMind, which Google recently acquired. Normally, in game playing, you would really write down all the rules, but in AlphaGo's case, the system looked at over a million games and was able to infer its own rules and then beat the world's reigning Go champion. That is exciting, because it relieves the software engineer of the need to be super smart, and pushes the burden towards the data.

As I said, the inflection point where this has become really possible -- very embarrassing, my thesis was about machine learning. It was completely insignificant, don't read it, because it was 20 years ago and back then, the computers were as big as a cockroach brain. Now they are powerful enough to really emulate kind of specialized human thinking. And then the computers take advantage of the fact that they can look at much more data than people can. So I'd say AlphaGo looked at more than a million games. No human expert can ever study a million games. Google has looked at over a hundred billion web pages. No person can ever study a hundred billion web pages. So as a result, the computer can find rules that even people can't find.
CA: So instead of looking ahead to, "If he does that, I will do that," it's more saying, "Here is what looks like a winning pattern, here is what looks like a winning pattern."
ST: Yeah. I mean, think about how you raise children. You don't spend the first 18 years giving kids a rule for every contingency and then set them free with this big program. They stumble, fall, get up, they get slapped or spanked, and they have a positive experience, a good grade in school, and they figure it out on their own. That's happening with computers now, which makes computer programming so much easier all of a sudden. Now we don't have to think anymore. We just give them lots of data.
CA: And so, this has been key to the spectacular improvement in the power of self-driving cars. I think you gave me an example. Can you explain what's happening here?
ST: This is a drive of a self-driving car that we happened to have at Udacity and recently made into a spin-off called Voyage. We have used this thing called deep learning to train a car to drive itself, and this is driving from Mountain View, California, to San Francisco on El Camino Real on a rainy day, with bicyclists and pedestrians and 133 traffic lights.

And the novel thing here is, many, many moons ago, I started the Google self-driving car team. And back in the day, I hired the world's best software engineers to find the world's best rules. This is just trained. We drive this road 20 times, we put all this data into the computer brain, and after a few hours of processing, it comes up with behavior that often surpasses human agility. So it's become really easy to program it. This is 100 percent autonomous, about 33 miles, an hour and a half.
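A hedged sketch of the "just trained" approach described here: record camera frames and the human driver's steering angle over the logged drives, then fit a network that maps each frame to a steering command. The input size and layer widths below are illustrative, loosely in the style of published end-to-end steering networks, not Voyage's actual system.

```python
# End-to-end steering sketch: camera frame in, steering angle out.
# Trained purely from logged human drives; no hand-written driving rules.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(66, 200, 3)),            # one camera frame
    tf.keras.layers.Rescaling(1.0 / 255),                 # normalize pixels
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1),                             # steering angle
])
model.compile(optimizer="adam", loss="mse")
# model.fit(camera_frames, steering_angles, epochs=10)    # from the 20 drives
```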
CA: So, explain it -- on the big part of this program on the left, you're seeing basically what the computer sees as trucks and cars and those dots overtaking it and so forth.
ST: On the right side, you see the camera image, which is the main input here, and it's used to find lanes, other cars, traffic lights. The vehicle has a radar to do distance estimation. This is very commonly used in these kinds of systems. On the left side you see a laser diagram, where you see obstacles like trees and so on depicted by the laser. But almost all the interesting work is centered on the camera image now. We're really shifting over from precision sensors like radars and lasers to very cheap, commoditized sensors. A camera costs less than eight dollars.
CA: And that green dot on the left thing, what is that? Is that anything meaningful?
ST: This is a look-ahead point for your adaptive cruise control, so it helps us understand how to regulate velocity based on how far away the cars in front of you are.
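A toy illustration of that idea, regulating speed from the gap to the car ahead; the gain and gap target are invented, and a real controller is far more careful.

```python
# Toy adaptive cruise control: keep a desired time gap to the car ahead
# by nudging speed proportionally to the gap error. Numbers are made up.
def target_speed(gap_m, own_speed_mps, desired_gap_s=2.0, gain=0.5):
    desired_gap_m = desired_gap_s * own_speed_mps  # e.g. the 2-second rule
    error = gap_m - desired_gap_m                  # positive = too much room
    return max(0.0, own_speed_mps + gain * error)

print(target_speed(gap_m=30.0, own_speed_mps=20.0))  # gap too small -> 15.0
```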
CA: And so, you've also got an example, I think, of how the actual learning part takes place. Maybe we can see that. Talk about this.
ST: This is an example where we posed a challenge to Udacity students taking what we call a self-driving car Nanodegree. We gave them this dataset and said, "Hey, can you guys figure out how to steer this car?" And if you look at the images, it's, even for humans, quite impossible to get the steering right. And we ran a competition and said, "It's a deep learning competition, an AI competition," and we gave the students 48 hours. So if you are a software house like Google or Facebook, something like this costs you at least six months of work. So we figured 48 hours is great. And within 48 hours, we got about 100 submissions from students, and the top four got it perfectly right. It drives better than I could drive on this imagery, using deep learning.

And again, it's the same methodology. It's this magical thing. When you give enough data to a computer now, and give it enough time to comprehend the data, it finds its own rules.
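One plausible way such a challenge could be scored, sketched below: compare each submission's predicted steering angles against the human driver's on a held-out drive, for example by root-mean-square error. The arrays here are placeholders.

```python
# Score a steering submission against the human driver's recorded angles.
# A lower RMSE means the model steered more like the human did.
import numpy as np

def rmse(predicted, actual):
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

human_angles = [0.02, -0.10, 0.00, 0.15]   # radians, from the test drive
model_angles = [0.03, -0.08, 0.01, 0.12]   # a submission's predictions
print(rmse(model_angles, human_angles))    # ~0.02
```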
CA: And so that has led to the development of powerful applications in all sorts of areas. You were talking to me the other day about cancer. Can I show this video?
ST: Yeah, absolutely, please.

CA: This is cool.
ST: This is kind of an insight into what's happening in a completely different domain. This is augmenting, or competing -- it's in the eye of the beholder -- with people who are being paid 400,000 dollars a year, dermatologists, highly trained specialists. It takes more than a decade of training to be a good dermatologist.

What you see here is the machine learning version of it. It's called a neural network. "Neural networks" is the technical term for these machine learning algorithms. They've been around since the 1980s. This one was invented in 1988 by a Facebook Fellow called Yann LeCun, and it propagates data in stages through what you could think of as the human brain. It's not quite the same thing, but it emulates the same thing. It goes stage after stage. In the very first stage, it takes the visual input and extracts edges and rods and dots. And the next stage extracts more complicated edges and shapes like little half-moons. And eventually, it's able to build really complicated concepts.
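A minimal sketch of that stage-after-stage structure as a convolutional network; the layer sizes and class count are illustrative, not the network from the talk.

```python
# Convolutional network sketch: each stage builds on the one before,
# from edges and dots up to whole concepts. Sizes are illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # edges, dots
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),   # simple shapes
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),  # object parts
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),    # final concepts
])
model.summary()  # prints the stage-by-stage structure
```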
Andrew Ng has been able to show that it's able to find cat faces and dog faces in vast amounts of images. What my student team at Stanford has shown is that if you train it on 129,000 images of skin conditions, including melanoma and carcinomas, you can do as good a job as the best human dermatologists. And to convince ourselves that this is the case, we captured an independent dataset that we presented to our network and to 25 board-certified Stanford-level dermatologists, and compared the two. And in most cases, the network was either on par with or above the classification accuracy of the human dermatologists.
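A hedged sketch of how such a classifier might be assembled: fine-tune a pretrained image network on labeled lesion photos (the published study used a pretrained Inception-v3). The class list, data, and training settings below are placeholders, not the Stanford pipeline.

```python
# Fine-tune a pretrained image network to classify skin-lesion photos.
# Class names, data, and training settings are placeholders.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    include_top=False, pooling="avg", input_shape=(299, 299, 3))
outputs = tf.keras.layers.Dense(3, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)  # e.g. benign/melanoma/carcinoma
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(lesion_images, labels, epochs=5)  # the 129,000 labeled images
```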
CA: You were telling me an anecdote, I think, about this image right here. What happened here?
ST: This was last Thursday. That's a moving piece. What we've shown before and published in "Nature" earlier this year was this idea that we show dermatologists images, and our computer program images, and count how often they're right. But all these images are past images. They've all been biopsied to make sure we had the correct classification. This one wasn't. This one was actually done at Stanford by one of our collaborators.

The story goes that our collaborator, who is a world-famous dermatologist, one of the three best, apparently, looked at this mole and said, "This is not skin cancer." And then he had a second moment, where he said, "Well, let me just check with the app." So he took out his iPhone and ran our piece of software, our "pocket dermatologist," so to speak, and the iPhone said: cancer. It said melanoma. And then he was confused. And he decided, "OK, maybe I trust the iPhone a little bit more than myself," and he sent it out to the lab to get it biopsied. And it came back as an aggressive melanoma.

So I think this might be the first time that we actually found, in the practice of using deep learning, an actual person whose melanoma would have gone undetected, had it not been for deep learning.
CA: I mean, that's incredible. It feels like there'd be an instant demand for an app like this right now, even though it might freak out a lot of people. Are you thinking of doing this -- making an app that allows self-checking?
ST: So my in-box is flooded with emails about cancer apps, with heartbreaking stories of people. I mean, some people have had 10, 15, 20 melanomas removed, and are scared that one might be overlooked, like this one, and also about, I don't know, flying cars and speaker inquiries these days, I guess.

My take is, we need more testing. I want to be very careful. It's very easy to give a flashy result and impress a TED audience. It's much harder to put something out that's ethical. And if people were to use the app and choose not to consult the assistance of a doctor because we get it wrong, I would feel really bad about it. So we're currently doing clinical tests, and if these clinical tests commence and our data holds up, we might be able at some point to take this kind of technology out of the Stanford clinic and bring it to the entire world, places where Stanford doctors never, ever set foot.
CA: And do I hear this right -- because you are working with this army of Udacity students, in a way you're applying a different form of machine learning than might take place in a company: you're combining machine learning with a form of crowd wisdom. Are you saying that sometimes you think that could actually outperform what a company can do, even a vast company?
ST: I believe there are now instances that blow my mind, and I'm still trying to understand them. What Chris is referring to is these competitions that we run. We turn them around in 48 hours, and we've been able to build a self-driving car that can drive from Mountain View to San Francisco on surface streets. It's not quite on par with Google after seven years of Google work, but it's getting there. And it took us only two engineers and three months to do this. And the reason is, we have an army of students who participate in competitions.

We're not the only ones who use crowdsourcing. Uber and Didi use crowdsourcing for driving. Airbnb uses crowdsourcing for hotels.