I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you.

And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk." Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States? (Laughter)

The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an "intelligence explosion": the process could get away from us.

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

The first assumption is that intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines. It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

Finally, we don't stand on a peak of intelligence, or likely anywhere near it. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived.

So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken. (Laughter) Sorry, a chicken. (Laughter) There's no reason for me to make this talk more depressing than it needs to be. (Laughter)

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.
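
A minimal back-of-envelope sketch of where that "20,000 years" figure comes from, assuming only the roughly million-fold speed ratio cited above; the constants here are illustrative, not taken from the talk's slides:

```python
# Back-of-envelope check of the "20,000 years per week" claim (illustrative only).
# Assumption from the talk: electronic circuits run roughly 1,000,000x faster
# than biochemical ones, so one real week of runtime buys about a million
# subjective weeks of human-level intellectual work.

SPEEDUP = 1_000_000        # assumed electronic-vs-biochemical speed ratio
WEEKS_PER_YEAR = 52

subjective_weeks = 1 * SPEEDUP                      # one real week of runtime
subjective_years = subjective_weeks / WEEKS_PER_YEAR

print(f"{subjective_years:,.0f} subjective years per real week")
# -> 19,231 subjective years, i.e. roughly the 20,000 years quoted in the talk
```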

How could we even understand, much less constrain, a mind making this sort of progress?

The other thing that's worrying, frankly, is this: imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended.

Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man. (Laughter)

Now, that might sound pretty good, but ask yourself: what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.
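
A similarly hedged sketch of the "six months ahead is 500,000 years ahead" arithmetic, reusing the same assumed million-fold speedup:

```python
# Same illustrative arithmetic applied to the "six months ahead" claim,
# reusing the million-fold speedup assumed earlier.

SPEEDUP = 1_000_000
MONTHS_PER_YEAR = 12

lead_months = 6                                     # real-time head start
subjective_years = lead_months * SPEEDUP / MONTHS_PER_YEAR

print(f"{subjective_years:,.0f} subjective years of extra progress")
# -> 500,000 subjective years, matching the figure in the talk
```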

So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

Now, one of the most frightening things, in my view, at this moment, is the kind of thing that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it." (Laughter)

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains,