
  • [MUSIC PLAYING]

  • WILSON WHITE: Good afternoon, everyone, especially

  • for those of you who are here in California.

  • My name is Wilson White, and I'm on the public policy

  • and government relations team here in California.

  • We have an exciting talk for you today as part of our Talks

  • at Google series, as well as a series of conversations

  • we're having around AI ethics and technology ethics more

  • generally.

  • So today, I'm honored to have Professor Yuval Noah

  • Harari with us.

  • Yuval is an Israeli historian and a professor at the Hebrew

  • University of Jerusalem.

  • He is a dynamic speaker, thinker, and now

  • an international bestselling author.

  • He's the author of three books.

  • We're going to talk about each of those books today.

  • The first book, published in 2014, was "Sapiens," which explored

  • some of our history as humans.

  • His second book in 2016 had an interesting take on our future

  • as humans.

  • It was "Homo Deus."

  • And then he recently published a new book,

  • "21 Lessons for the 21st Century,"

  • which attempts to grapple with some of the issues,

  • the pressing issues that we are facing today.

  • So we'll talk about some of the themes in each of those books

  • as we go through our conversation.

  • But collectively, his writings explore very big concepts

  • like free will and consciousness and intelligence.

  • So we'll have a lot to explore with Yuval today.

  • So with that, please join me in welcoming Professor Yuval

  • to Google.

  • [APPLAUSE]

  • YUVAL NOAH HARARI: Hello.

  • WILSON WHITE: Thank you, Professor, for joining us.

  • Before getting started, I have to say

  • that when the announcement went out

  • across Google about this talk, I got several emails

  • from many Googlers around the world who told me

  • that they had either read or are currently reading

  • one or multiple of your books.

  • So if you are contemplating a fourth book,

  • maybe on the afterlife, no spoilers

  • during this conversation.

  • I want to start with maybe some of the themes in both

  • your current book, "21 Lessons," as well

  • as "Homo Deus," because I'm the father of two young kids.

  • I have two daughters, a five-year-old

  • and a three-year-old.

  • And the future that you paint in "Homo Deus" is interesting.

  • So I'd like to ask you, what should I

  • be teaching my daughters?

  • YUVAL NOAH HARARI: That nobody knows

  • what the world will look like in 2050,

  • except that it will be very different from today.

  • So the most important things to emphasize in education

  • are things like emotional intelligence

  • and mental stability, because the one thing

  • that they will need for sure is the ability

  • to reinvent themselves repeatedly

  • throughout their lives.

  • It's really the first time in history

  • that we don't really know what particular skills to teach

  • young people, because we just don't

  • know in what kind of world they will be living.

  • But we do know they will have to reinvent themselves.

  • And especially if you think about something like the job

  • market, maybe the greatest problem they will face

  • will be psychological.

  • Because at least beyond a certain age,

  • it's very, very difficult for people to reinvent themselves.

  • So we kind of need to build identities.

  • I mean, if traditionally people built

  • identities like stone houses with very deep foundations,

  • now it makes more sense to build identities like tents that you

  • can fold and move elsewhere.

  • Because we don't know where you will have to move,

  • but you will have to move.

  • WILSON WHITE: You will have to move.

  • So I may have to go back to school now

  • to learn these things so that I can teach the next generation

  • of humans here.

  • In "21 Lessons for the 21st Century,"

  • you tackle several themes that even we at Google,

  • as a company on the leading edge of technology

  • and of how technology is deployed in society,

  • wrestle with ourselves.

  • Tell me a bit about your thoughts

  • on why democracy is in crisis.

  • That's a theme in the current book,

  • and I want to explore that a bit.

  • Why you think liberal democracy as we knew

  • it is currently in crisis.

  • YUVAL NOAH HARARI: Well, the entire liberal democratic

  • system is built on philosophical ideas we've inherited

  • from the 18th century, especially the idea

  • of free will, which underlies the basic mottos

  • of the liberal worldview, like the voter knows best,

  • the customer is always right, beauty

  • is in the eye of the beholder, follow your heart,

  • do what feels good.

  • All these liberal mottos are

  • the foundation of our political and economic system.

  • They assume that the ultimate authority is the free choices

  • of individuals.

  • I mean, there are, of course, all kinds of limitations

  • and boundary cases and so forth, but when

  • push comes to shove, for instance,

  • in the economic field, then corporations

  • will tend to retreat behind this last line of defense

  • that this is what the customers want.

  • The customer is always right.

  • If the customers want it, it can't be wrong.

  • Who are you to tell the customers that they are wrong?

  • Now of course, there are many exceptions,

  • but this is the basics of the free market.

  • This is the first and last thing you learn.

  • The customer is always right.

  • So the ultimate authority in the economic field

  • is the desires of the customers.

  • And this is really based on a philosophical and metaphysical

  • view about free will, that the desires of the customer, they

  • emanate, they represent the free will of human beings,

  • which is the highest authority in the universe.

  • And therefore, we must abide by them.

  • And it's the same in the political field

  • with the voter knows best.

  • And this was OK for the last two or three centuries.

  • Because even though free will was always a myth and not

  • a scientific reality--

  • I mean, science knows of only two kinds

  • of processes in nature.

  • It knows about deterministic processes

  • and it knows about random processes.

  • And their combination results in probabilistic processes.

  • But randomness and probability, they are not freedom.

  • They mean that I can't predict your actions

  • with 100% accuracy, because there is randomness.

  • But a random robot is not free.

  • If you connect a robot, say, to uranium, a piece of uranium,

  • and the decisions of the robot are determined

  • by random processes of the disintegration of uranium

  • atoms, then you will never be able to predict exactly

  • what this robot will do.

  • But this is not freedom.

  • This is just randomness.
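
As an illustrative aside (a minimal Python sketch, not part of the talk; all names are made up), the "random robot" described above is easy to write down, and seeing it as code makes the point concrete: the choice is unpredictable to any observer, yet nothing about it resembles free will.

```python
import secrets  # draws on the operating system's entropy source,
                # standing in for the decaying uranium in the example


def random_robot_decision(options: list[str]) -> str:
    """Pick an action using raw randomness.

    No observer can predict the choice in advance, yet this is
    plainly not free will -- it is just noise selecting an index.
    """
    return options[secrets.randbelow(len(options))]


print(random_robot_decision(["turn left", "turn right", "stop"]))
```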

  • Now this was always true from a scientific perspective.

  • Humans, certainly they have a will.

  • They make decisions.

  • They make choices.

  • But they are not free to choose the will.

  • The choices are not independent.

  • They depend on a million factors,

  • genetic and hormonal and social and cultural and so forth,

  • which we don't choose.

  • Now up till now in history, humans

  • were so complicated that from a practical perspective,

  • it still made sense to believe in free will,

  • because nobody could understand you better

  • than you understand yourself.

  • You had this inner realm of desires and thoughts

  • and feelings, and you had privileged access

  • to this inner realm.

  • WILSON WHITE: Yeah, but that hasn't changed today, right?

  • Like, that--

  • YUVAL NOAH HARARI: It has changed.

  • There is no longer--

  • the privileged access now belongs to corporations like Google.

  • They can have access to things happening ultimately

  • inside my body and brain, which I don't know about.

  • There is somebody out there-- and not just one.

  • All kinds of corporations and governments that maybe not

  • today, maybe in five years, 10 years, 20 years, they

  • will have privileged access to what's happening inside me.

  • More privileged than my access.

  • They could understand what is happening in my brain

  • better than I understand it, which means-- they will never

  • be perfect.

  • WILSON WHITE: Right.

  • But you will, as a free person, like, you

  • will have delegated that access or that ability

  • to this corporation or this machine or this--

  • YUVAL NOAH HARARI: No, you don't have to give them permission.

  • I mean, in some countries maybe you have no choice at all.

  • But even in a democracy like the United States,

  • a lot of the information that enables an external entity

  • to hack you, nobody asks you whether you

  • want to give it away or not.

  • Now at present, most of the data that

  • is being collected on humans is still from the skin outwards.

  • We haven't seen anything yet.

  • We are still just at the tip of this revolution,

  • because at present, whether it's Google and Facebook and Amazon

  • or whether it's the government or whatever, they all

  • are trying to understand people mainly

  • on the basis of what I search, what I buy, where I go,

  • who I meet.

  • It's all external.

  • The really big revolution, which is coming very quickly,

  • will be when the AI revolution and machine

  • learning and all that, the infotech revolution,

  • meets and merges with the biotech revolution

  • and goes under the skin.

  • Biometric sensors or even external devices.

  • Now we are developing the ability, for example,

  • to know the blood pressure of individuals

  • just by looking at them.

  • You don't need to put a sensor on a person.

  • Just by looking at the face, you can

  • tell what the blood pressure of that individual is.

  • And by analyzing tiny movements in the eyes, in the mouth,

  • you can tell all kinds of things from the current mood

  • of the person--

  • are you angry, are you bored--

  • to things like sexual orientation.

  • So we are talking about a world in which humans

  • are no longer a black box.

  • Nobody really understands what happens inside, so we say, OK.

  • Free will.

  • No, the box is open.

  • And it's open to others, certain others more

  • than it is open to-- you don't understand what's

  • happening in your brain, but some corporation

  • or government or organization could understand that.

  • WILSON WHITE: And that's a theme that you

  • explore in "Homo Deus" pretty--

  • YUVAL NOAH HARARI: They're both in "Homo Deus"

  • and in "21 Lessons."

  • This is like, maybe the most important thing to understand

  • is that this is really happening.

  • And at present, almost all the attention goes to the AI.

  • Like, now I've been on a two-week tour of the US

  • for the publication of the book.

  • Everybody wants to speak about AI.

  • Like, AI.

  • Previous book, "Homo Deus" came out, nobody cared about AI.

  • Two years later, it's everywhere.

  • WILSON WHITE: It's the new hot thing.

  • YUVAL NOAH HARARI: Yeah.

  • And I try to emphasize, it's not AI.

  • The really important thing is actually the other side.

  • It's the biotech.

  • It's the combination.

  • It's only the combination-- it's only with the help of biology

  • that AI becomes really revolutionary.

  • Because just do a thought experiment.

  • Let's say we had the best, the most developed AI in the world.

  • But humans were not animals.

  • They were not biochemical algorithms.

  • They were something like transcendent souls

  • that made decisions through free will.

  • In such a world, AI would not have mattered much,

  • because AI in such a world could never have replaced teachers

  • and lawyers and doctors.

  • You could not even build self-driving cars

  • in such a world.

  • Because to put a self-driving car on the road,

  • you need biology, not just computers.

  • You need to understand humans.

  • For example, if somebody's approaching the road,

  • the car needs to tell, is this an eight-year-old,

  • an 18-year-old, or an 80-year-old,

  • and needs to understand the different behaviors

  • of a human child, a human teenager, and a human adult.

  • And this is biology.

  • And similarly, to have really effective self-driving taxis,

  • you need the car to understand a lot of things

  • about human psychology.

  • The psychology of the passengers coming in, what they want,

  • and so forth.

  • So if you take the biotech out of the equation, AI by itself

  • won't really go very far.

  • WILSON WHITE: So I want to push you there,

  • because I think it's easy to arrive at a dystopian

  • view of what that world would look

  • like with the bio and AI and cognitive abilities of machines

  • when they meet.

  • Like, how that can end up, right?

  • And we see that in Hollywood, and that dystopian view

  • is well documented.

  • But I want to explore with you, like,

  • what are some of the benefits of that combination?

  • And how can that lead to an alternative world view

  • than what's explored more deeply in "Homo Deus?"

  • YUVAL NOAH HARARI: Well, it should

  • be emphasized that there are enormous benefits.

  • Otherwise, there would be no temptation.

  • If it was only bad, nobody would do it.

  • Google wouldn't research it.

  • Nobody would invest in it.

  • And it should also be emphasized that technology is never

  • deterministic.

  • You can build either paradise or hell with these technologies.

  • They are not just--

  • they don't have just one type of usage.

  • And as a historian and as a social critic and maybe

  • philosopher, I tend to focus more

  • on the dangerous scenarios, simply

  • because for obvious reasons, the entrepreneurs

  • and the corporations and the scientists and engineers

  • who are developing these technologies

  • naturally tend to focus on the positive scenarios,

  • on all the good it can do.

  • But yes, definitely technology

  • can do a tremendous amount of good

  • for humanity. Take the example of self-driving cars.

  • So at present, about 1.25 million people

  • are killed each year in traffic accidents.

  • More than 90% of these accidents are because of human errors.

  • If we can replace humans with self-driving cars,

  • it's not that we'll have no car accidents.

  • That's impossible.

  • But we'll probably save a million lives every year.

  • So this is a tremendous thing.

  • And similarly, the combination of being

  • able to understand what's happening inside my body, this

  • also implies that you can provide people with the best

  • health care in history.

  • You can, for example, diagnose diseases

  • long before the person understands

  • that there is something wrong.

  • At present, the human mind or human awareness

  • is still a very critical junction in health care.

  • Like, if something happens inside my body

  • and I don't know about it, I won't go to the doctor.

  • So if something like, I don't know, cancer

  • is now spreading in my liver and I still don't feel anything,

  • I won't go to the doctor.

  • I won't know about it.

  • Only when I start feeling pain and nausea and all kinds

  • of things I can't explain.

  • So after some time, I go to the doctor.

  • He does all kinds of tests.

  • And finally, they discover, oh, something's wrong.

  • And very often, by that time, it's

  • very expensive and painful.

  • Not necessarily too late, but expensive

  • and painful to take care of it.

  • If I could have an AI doctor monitoring my body

  • 24 hours a day with biometric sensors and so forth,

  • it could discover this long before I feel anything,

  • at the stage when it's still very

  • cheap and easy and painless to cure it.

  • So this is wonderful.

  • WILSON WHITE: But in that world, it's

  • an AI doctor, and not a human doctor.

  • And I think one of the potential outcomes

  • that you warn about is AI or machines or that combination

  • of bio and AI replacing us, replacing us as humans.

  • And I'd like to think that one thing that makes us human

  • is having meaning in life or having a purpose for living.

  • That's kind of a unique thing that humans have.

  • And I don't think it's something that we would readily

  • want to give up, right?

  • So as this technology is evolving

  • and we're developing it, it's likely

  • that we'll bake in this need

  • to have meaning and purpose in life.

  • You talk about in "21 Lessons" this notion that God is dead,

  • or is God back?

  • And the role that religion may play

  • in how we progress as humans.

  • Is there a place for that notion of God

  • or religion to capture and secure

  • this notion of meaning in life or purpose in life?

  • YUVAL NOAH HARARI: Well, it all depends on the definitions.

  • I mean, there are many kinds of gods,

  • and people understand very different things

  • by the word religion.

  • If you think about God, usually people

  • have two extremely different gods in mind

  • when they say the word God.

  • One god is the cosmic mystery.

  • We don't understand why there is something rather than nothing,

  • why the Big Bang happened.

  • What is human consciousness?

  • There are many things we don't understand about the world.

  • And some people choose to call these mysteries

  • by the name of God.

  • God is the reason there is something rather than nothing.

  • God is behind human consciousness.

  • But the most characteristic thing of that god

  • is that we know absolutely nothing about him,

  • her, it, they.

  • There is nothing concrete.

  • It's a mystery.

  • And this is kind of the god we talk

  • about when late at night in the desert we sit around a campfire

  • and we think about the meaning of life.

  • That's one kind of god.

  • I have no problem at all with this god.

  • I like it very much.

  • [LAUGHTER]

  • Then there is another god which is the petty lawgiver.

  • The chief characteristic of this god,

  • we know a lot of extremely concrete things about that god.

  • We know what he thinks about female dress code, what kind

  • of dresses he likes women to wear.

  • We know what he thinks about sexuality.

  • We know what he thinks about food, about politics,

  • and we know these tiny little things.

  • And this is a god people talk about when they stand around,

  • burning a heretic.

  • We'll burn you because you did something

  • that this god-- we know everything about this god,

  • and he doesn't like it that you do this, so we burn you.

  • And it's like a magic trick that when

  • you come and talk about God-- so how

  • do you know that God exists, and so forth?

  • People would say, well, the Big Bang and human consciousness,

  • and science can't explain this, and science can't explain that.

  • And this is true.

  • And then like a magician swapping one card for another,

  • they will, shh!

  • Take out the mystery god and put in the petty lawgiver,

  • and you end up with something strange like,

  • because we don't understand the Big Bang,

  • women must dress with long sleeves

  • and men shouldn't have sex together.

  • And what's the connection?

  • I mean, how did you get from here to there?

  • So I prefer to use different terms here.

  • And it's the same with religion.

  • People understand very different things with this word.

  • I tend to separate religions from spirituality.

  • Spirituality is about questions.

  • Religion is about answers.

  • Spirituality is when you have some big question about life

  • like, what is humanity?

  • What is the good?

  • Who am I?

  • WILSON WHITE: Our purpose in life.

  • Like, why are we here?

  • YUVAL NOAH HARARI: What should I do in life?

  • And this is kind of-- and you go on a quest,

  • looking deeply into these questions.

  • And you're willing to go after these questions

  • wherever they take you.

  • WILSON WHITE: You could just Google it.

  • YUVAL NOAH HARARI: Yeah.

  • Maybe in the future.

  • But so far, at least some of these questions,

  • I think when you type, like, what is the meaning of life,

  • you get 42.

  • Like, it is the number one result in Google search.

  • So you go on a spiritual quest.

  • And religion is the exact opposite.

  • Religion is somebody comes and tells you, this is the answer.

  • You must believe it.

  • If you don't believe this answer,

  • then you will burn in hell after you die,

  • or we'll burn you here even before you die.

  • [LAUGHTER]

  • And they're really opposite things.

  • Now I think that at the present moment in history,

  • spirituality is probably more important

  • than in any previous time in history,

  • because we are now forced to confront spiritual questions,

  • whether we like it or not.

  • WILSON WHITE: And do you think that confrontation

  • with those questions, that will inform how we allow technology

  • to develop and be deployed?

  • YUVAL NOAH HARARI: Exactly. Now throughout history,

  • you always had a small minority of people

  • who were very interested in the big spiritual

  • and philosophical questions of life,

  • and most people just ignored them and went along with their lives,

  • like, you know, fighting about who owns this land

  • and to whom this goat herd belongs, and so forth.

  • Now we live in a very unique time in history

  • when engineers must tackle spiritual questions.

  • If you are building a self-driving car, by force,

  • you have to deal with questions like free will.

  • By force, you have to deal with the example everybody gives.

  • The self-driving car.

  • Suddenly two kids jump--

  • running after a ball jump in front of the car.

  • The only way to save the two kids is to swerve to the side

  • and fall off a cliff and kill the owner of the car who

  • is asleep in the backseat.

  • What should the car do?

  • Now philosophers have been arguing

  • about these questions for thousands of years

  • with very little impact on human life.

  • But engineers, they are very impatient.

  • If you want to put the self-driving car on the road

  • tomorrow or next year, you need to tell

  • the algorithm what to do.

  • And the amazing thing about this question

  • now is that whatever you decide, this will actually happen.

  • Previously, with philosophical discussions, like you had,

  • I don't know, Kant and Schopenhauer and Mill

  • discussing this issue, should I kill the two kids

  • or should I sacrifice my life?

  • And even if they reached an agreement,

  • it had very little impact on actual behavior.

  • Because even if you agree theoretically,

  • this is the right thing to do, at a time of crisis,

  • philosophy has little power.

  • You react from your gut, not from

  • your philosophical theories.

  • But with a self-driving car, if you program the algorithm

  • to kill the driver--

  • not the driver, the owner of the car-- and not

  • the two kids, you have a guarantee,

  • a mathematical guarantee that this is

  • exactly what the car will do.

  • So you have to think far more carefully than ever before,

  • what is the right answer?

  • So in this sense, very old spiritual and philosophical

  • questions are now practical questions of engineering,

  • which you cannot escape if you want, for example,

  • to put a self-driving car on the road.
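
To make the "mathematical guarantee" concrete, here is a minimal Python sketch (hypothetical names and a deliberately crude policy, not any real autonomous-driving API): whatever trade-off the engineers encode in a function like this is exactly what the car will do, every single time.

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    pedestrians_at_risk: int  # e.g. the two kids on the road
    occupants_at_risk: int    # e.g. the owner asleep in the back seat


def choose_maneuver(swerve: Outcome, stay: Outcome) -> str:
    """Return 'swerve' or 'stay' under a hard-coded, illustrative policy:
    minimize the total number of people put at risk."""
    swerve_cost = swerve.pedestrians_at_risk + swerve.occupants_at_risk
    stay_cost = stay.pedestrians_at_risk + stay.occupants_at_risk
    return "swerve" if swerve_cost < stay_cost else "stay"


# Harari's example: staying endangers the two kids, swerving off the cliff
# endangers the single owner. Under this policy the car always swerves.
print(choose_maneuver(swerve=Outcome(0, 1), stay=Outcome(2, 0)))  # -> swerve
```

The philosophical question is unchanged; what is new is that the answer, once written down, executes deterministically.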

  • WILSON WHITE: I want to go back to this concept of religion

  • versus spirituality and the role they play

  • in "Sapiens," your first book.

  • You talk about this concept of human fictions or stories

  • that we create as humans, I guess to get us through life

  • and to get us through our interactions with each other.

  • Those fictions, those stories, as you put it,

  • they've served us well.

  • They've resulted in a lot of good for humankind,

  • but have also been the source of wars and conflict

  • and human suffering.

  • How do you square that with this moment

  • we're in where spirituality is an integral part of how

  • we think about integrating technology in our lives?

  • YUVAL NOAH HARARI: Phew.

  • That's a big question.

  • Well, so far in history, in order

  • to organize humans on a large scale,

  • you always had to have some story, some fiction which

  • humans invented, but which enough humans believed in order

  • to agree on how to behave.

  • It's not just religion.

  • This is the obvious example.

  • And even religious people would agree

  • that all religions except one are fictional stories.

  • [LAUGH]

  • Except for, of course, my religion.

  • If you ask a Jew, then he will tell you, yes.

  • Judaism is the truth.

  • That's for sure.

  • But all these billions of Christians

  • and Muslims and Hindus, they believe in fictional stories.

  • I mean, all this story about Jesus rising from the dead

  • and being the Son of God, this is fake news.

  • WILSON WHITE: Wait, that's not true?

  • YUVAL NOAH HARARI: If you ask a Jew, like a rabbi.

  • Even though rabbis tend to be, like-- to hedge their bets.

  • [LAUGH]

  • So maybe not.

  • But then you go to the Christians.

  • They will say, no, no, no, no, no no.

  • This is true.

  • But the Muslims, they believe in fake news.

  • All this story about Muhammad meeting

  • the archangel Gabriel and the Quran coming from Heaven,

  • this is all fake news.

  • And then the Muslims, they'll tell you this about Hinduism.

  • So even in religion, it's very clear.

  • The more interesting thing is that the same

  • is true of things in the economy.

  • Corporations-- you can't have a modern economy

  • without corporations like Google and without money,

  • like dollars.

  • But corporations and currencies, they

  • are also just stories we invented.

  • Google has no physical or biological reality.

  • It is a story created by the powerful shamans

  • we call lawyers.

  • [LAUGHTER]

  • Even if you ask a lawyer, what is Google--

  • like, if you push them on what it is--

  • they will tell you it's a legal fiction.

  • It's not this chair.

  • It belongs to Google, I think.

  • But it's not it.

  • It's not the money.

  • It's not the managers.

  • It's not the workers.

  • It's a story created by lawyers.

  • And for example, I mean, if somehow

  • some natural calamity destroys--

  • like, there is an earthquake and the Googleplex collapses,

  • Google still exists.

  • Even if many of the workers and managers are killed,

  • it just hires new ones.

  • [LAUGHTER]

  • And it still has money in the bank.

  • And even if there is no money in the bank, they can get a loan

  • and build new buildings and hire new people,

  • and everything is OK.

  • But then if the most powerful shaman,

  • like the Supreme Court of the United States, comes and says,

  • I don't like your story.

  • I think you need to be broken into different fictions.

  • Then that's the end.

  • WILSON WHITE: So-- so you--

  • [LAUGHTER]

  • That's a lot to unpack.

  • [LAUGHTER]

  • So the advent that we're in now with fake news

  • and really seriously questioning what veracity means

  • and how veracity impacts these kinds of foundational things

  • that you laid out earlier in your remarks that have allowed

  • us to work with each other, work across borders, et cetera,

  • with this, where you are on this notion of stories and fictions

  • that we have, is this advent of fake news, is that a reality?

  • Is that where we should be in terms of questioning what's

  • true and what's not true?

  • YUVAL NOAH HARARI: On the one hand, fake news is old news.

  • We've had them throughout history,

  • and sometimes in much worse form than what we see today.

  • WILSON WHITE: But is there such thing as truth?

  • YUVAL NOAH HARARI: Yes, there is absolutely.

  • I mean, there is reality.

  • I mean, you have all these stories

  • people tell about reality.

  • WILSON WHITE: I see.

  • YUVAL NOAH HARARI: But ultimately, there is reality.

  • The best test of reality that I know is the test of suffering.

  • Suffering is the most real thing in the world.

  • If you want to know whether a story is

  • about a real entity or a fictional entity,

  • you should just ask, can this entity actually suffer?

  • Now Google cannot suffer.

  • Even if the stock goes down, even if a judge comes and says,

  • this is a monopoly, you have to break it up, it doesn't suffer.

  • Humans can suffer-- like the managers,

  • the owners of the stocks, the employees, they can suffer.

  • WILSON WHITE: My girls.

  • YUVAL NOAH HARARI: Yeah.

  • They can certainly suffer.

  • But we know, we can tell very easily that Google is just a story

  • by this simple test that it cannot suffer.

  • And it's the same with nations.

  • It's the same with currencies.

  • The dollar is just a fiction we created.

  • The dollar doesn't suffer if it loses its value.

  • WILSON WHITE: Let me push you on that, right?

  • So oftentimes, like just in the US,

  • they say kind of the system we set up in the US

  • is an experiment.

  • It's often styled as an experiment in democracy

  • with checks and balances, et cetera.

  • Under one view of that, you can say that that's kind of a story

  • that we've created in America, right?

  • We've created this kind of really nice story.

  • But if that was broken apart, like,

  • that entity is not suffering.

  • But if that experiment is the thing, the proper functioning

  • of those institutions and the things

  • that support that-- so that's the thing.

  • YUVAL NOAH HARARI: We know that it functions properly

  • because it alleviates suffering.

  • It provides health care, it provides safety.

  • And if it doesn't, then we would say

  • the experiment doesn't work.

  • The experiment--

  • WILSON WHITE: So would you say that experiment is a fiction?

  • Or is that experiment reality?

  • Is it a thing?

  • YUVAL NOAH HARARI: The experiment

  • is a story that we share.

  • It's things that we humans have invented and created

  • in order to serve certain needs and desires that we have.

  • It is a created story, and not an objective reality.

  • But it is nevertheless one of the most powerful forces

  • in the world.

  • When I say that something is a fiction or a story,

  • I don't mean to imply it's bad or that it's not important.

  • No.

  • Some of the best things in the world

  • and the most powerful forces in the world

  • are these shared fictions.

  • Nations and corporations and banks and so forth,

  • they are all stories we created, but they

  • are the most powerful forces today in the world,

  • far more powerful than any human being or any animal.

  • And they can be a tremendous force for good.

  • The key is to remember that we created them to serve us,

  • and not that we are here in order to serve them.

  • The trouble really begins when people

  • lose sight of the simple reality that we are real, they are not.

  • And a lot of people throughout history and also

  • today, they kind of take it upside down.

  • They think the nation is more real than me.

  • I am here to serve it, and not it is here

  • to serve me and my fellow humans.

  • WILSON WHITE: Very interesting.

  • So we're going to open it up for questions

  • from the audience in a few minutes here,

  • but I want to try to get an easy win.

  • So in "21 Lessons," you tackle really big challenges

  • and questions that we're wrestling with today.

  • Of those questions, which do you think is the easiest to solve?

  • And what should we be doing to go about solving them?

  • YUVAL NOAH HARARI: Ooh.

  • What is the easiest to solve?

  • [EXHALE]

  • [LAUGH]

  • WILSON WHITE: Trying to get quick wins on the board here.

  • YUVAL NOAH HARARI: Yeah.

  • I'll address the fake news question,

  • not because it's the easiest to solve, but

  • maybe because it's one of the most relevant to what

  • you're doing here in Google.

  • And I would say that the current incarnation of the fake news

  • problem has a lot to do with the model of the news

  • and information market, that we have constructed a model which

  • basically says, exciting news for free

  • in exchange for your attention.

  • And this is a very problematic model,

  • because it turns human attention into the most scarce resource,

  • and you get more and more competition for human attention

  • with more and more exciting news that-- again,

  • and some of the smartest people in the world

  • have learned how to excite our brain,

  • how to make us click on the next news story.

  • And truth gets completely pushed aside.

  • It's not part of the equation.

  • The equation is excitement, attention.

  • Excitement, attention.

  • And on the collective level, I think

  • the solution to this problem would

  • be to change the model of the news market

  • to high-quality news that costs you a lot of money,

  • but doesn't abuse your attention.

  • It's very strange that we are in a situation when people

  • are willing to pay a lot of money

  • for high-quality food and high-quality cars,

  • but not for high-quality news.

  • And this has a lot to do with the architecture

  • of the information market.

  • And I think there are many things that you here in Google

  • can do in order to help society change the model of the news

  • market.

  • WILSON WHITE: I'd want to continue to explore that,

  • and whether that would create, like, an economic divide

  • or exacerbate the current divide,

  • but I'm going to open it up now for audience questions.

  • We have a microphone here on the side.

  • Start with you.

  • AUDIENCE: Hi.

  • Thank you so much for writing your books.

  • They are completely wonderful, and I've

  • had a lot of joy reading them.

  • So one of the things that you kind of explored here

  • is we are facing a couple of global problems.

  • And historically, we have never created global organizations

  • which are responsible for solving global problems and have

  • any ability to enforce solutions.

  • And even when we've created them,

  • they have come after great tragedies.

  • So how can we sort of make that happen and make somebody

  • responsible, and have the ability

  • to have those organizations enforce those solutions?

  • YUVAL NOAH HARARI: Yeah.

  • I mean, it's not going to be easy.

  • But I think the most important thing

  • is to change the public conversation

  • and focus it on the global problems.

  • If people focus on local problems,

  • they don't see the need for effective global cooperation.

  • So the first step is to tell people again and again

  • and again, look.

  • The three biggest problems that everybody on the planet

  • is now facing are nuclear war, climate change,

  • and technological disruption.

  • And even if we are able to prevent nuclear war and climate

  • change, still AI and biotech are going

  • to completely disrupt the job market and even the human body.

  • And we need to figure out how to regulate this

  • and how to prevent the dystopian consequences,

  • and make sure that the more utopian

  • consequences materialize.

  • And for that, we need global cooperation.

  • So it would be obvious to everybody that

  • you cannot prevent climate change on a national level,

  • and you cannot regulate AI on a national level.

  • Whatever regulation the US adopts,

  • if the Chinese are not adopting it, it won't do much help.

  • So you need cooperation here.

  • And then it goes into practical political issues.

  • I mean, you have elections coming up,

  • mid-term elections in the US.

  • So if you go to a town meeting with an aspiring congressman

  • or congresswoman, you just ask them, if I elect you,

  • what will you do about the danger of climate change,

  • about the danger of nuclear war, and about getting

  • global regulations for AI and for biotech?

  • What's your plan?

  • And if they say, oh, I haven't thought about it,

  • then maybe don't vote for that person.

  • [LAUGHTER]

  • WILSON WHITE: Question.

  • AUDIENCE: Hi, Yuval.

  • Thanks for coming here today.

  • So in one of your talks, you suggested

  • that to avoid getting our hearts hacked,

  • we need to stay ahead by knowing ourselves better.

  • And it seems to me that the process of knowing yourself

  • needs a lot of intelligence.

  • And in some ways, it's a skill that needs to be developed.

  • I mean, the intellect that we have as humans

  • seems fairly new when compared to other properties

  • that we got evolutionarily.

  • So how do you suggest that we can

  • learn to think and use our intelligence better, and also

  • do that at a scale?

  • Because if only some people know themselves

  • but millions or billions around you

  • don't, then you can only go so far.

  • YUVAL NOAH HARARI: No, I don't think that knowing yourself

  • is necessarily all about intelligence.

  • Certainly not in the narrow sense of intelligence.

  • If you include emotional intelligence and so forth,

  • then yes.

  • But in the more narrow sense of IQ, I think this is not--

  • there are many very intelligent people

  • in the world who don't know themselves

  • at all, which is an extremely dangerous combination.

  • Now some people explore themselves through therapy.

  • Some use meditation.

  • Some use art.

  • Some use poems.

  • They go on a long hike, go for a month on the Appalachian Trail

  • and get to know themselves on the way.

  • There are many ways to do it, which are not necessarily

  • about intellect.

  • It's not like reading articles about brain science.

  • That's going to help in some ways.

  • And in this sense, I think it's a very kind

  • of democratizing ability or force to get to know yourself.

  • After all, you-- you're always with yourself.

  • It's not like you need some special observatory or to get

  • some very rare machines that, I don't know,

  • cost millions of dollars.

  • You just need yourself.

  • AUDIENCE: Sure.

  • But what about the art of thinking?

  • YUVAL NOAH HARARI: What about?

  • AUDIENCE: The art of thinking.

  • YUVAL NOAH HARARI: The art of thinking.

  • AUDIENCE: I mean, people are very intelligent,

  • but they don't really use their intelligence

  • to understand themselves [INAUDIBLE].

  • YUVAL NOAH HARARI: Yeah.

  • Again, there is no easy way to do it.

  • If it was easy to get to know yourself better,

  • everybody would do it long ago, and we

  • would be living in a very, very different world.

  • WILSON WHITE: We have folks joining us

  • from around the world as well, so I have a question

  • from the question bank.

  • Compassion is the critical underpinning

  • of any successful society, yet I believe

  • that technology is reducing our capacity for empathy.

  • It feels that we no longer value compassion, perhaps even seeing

  • compassion as weak.

  • What are, in your view, effective ways

  • to motivate members of society to develop their compassion?

  • YUVAL NOAH HARARI: No, I don't think

  • that technology is inherently undermining compassion.

  • It can go both ways.

  • Certainly, communication technology

  • can make you aware of the plight of people

  • on the other side of the world.

  • And without that, you may be extremely

  • compassionate about your immediate, like, family members

  • and neighbors, and won't care at all about people

  • on the other side of the world.

  • So I don't think there is an inherent contradiction

  • or collision between technology and compassion.

  • But it is true that the way we design technology

  • can make us less compassionate, and even the way

  • that we design ourselves.

  • For most of history, you had economic and political systems

  • trying to shape people.

  • And in the past, they did it with education

  • and with culture.

  • And in the present and future, we

  • are likely to do it more and more

  • with biotech and with brain computer interfaces.

  • So our ability to manipulate ourselves is growing.

  • And therefore, it's extremely important

  • to remember to take compassion into account.

  • Otherwise, the danger is that armies and corporations

  • and governments in many cases, they

  • want something like intelligence.

  • They want more intelligent workers and soldiers.

  • They want more decisive workers.

  • And sort of, don't take a whole day to decide.

  • I want you to decide this in half an hour.

  • And as our ability to manipulate humans--

  • and by manipulate I mean

  • re-engineer the body and the brain-- as it grows,

  • we might engineer more decisive and intelligent humans

  • at the price of compassion.

  • Which many corporations and armies and governments

  • find either irrelevant or even problematic,

  • because it causes people to be hesitant

  • and to take more time about the decisions,

  • and so on and so forth.

  • So we need to remember the enormous importance

  • of compassion.

  • And again, it goes back also to the question

  • about getting to know yourself, which

  • I think is the key to developing compassion.

  • Not just because when you understand your own--

  • that this makes me miserable-- then you understand, oh,

  • the same thing may make other people also miserable.

  • It's even much deeper than that.

  • When you really get to know yourself,

  • you realize that when you ignore others

  • and when you mistreat others, very often, it harms you

  • even before it harms them.

  • It's a very unpleasant experience to be angry.

  • So your anger may harm other people, or maybe not.

  • Maybe you're boiling with anger about somebody,

  • and you don't do anything about it because she's your boss.

  • But you don't harm her, but your anger harms you.

  • So the more you understand yourself, the greater incentive

  • you have to do something about your anger, your hatred,

  • your fear.

  • And most people discover that as they develop more compassion

  • towards others, they also experience far more peace

  • within themselves.

  • WILSON WHITE: Wow.

  • Another live question.

  • AUDIENCE: Thank you.

  • After reading your books, it occurs to me

  • that you've most likely educated yourself both broadly

  • and deeply to be the foundation for your ideas.

  • For those of us that are interested in cultivating

  • our mind similarly, wondering if you could share

  • a little bit about your reading habits

  • and how you choose what to consume.

  • YUVAL NOAH HARARI: My reading habits.

  • I read very eclectically.

  • Like, no book is barred from entering the book list.

  • But then I tend to be extremely impatient about the books I

  • actually read.

  • I would begin, like, 10 books and drop nine of them

  • after 10 pages.

  • It's not always the wisest policy,

  • but it's my policy that if a book didn't really teach me

  • something new or have some interesting insight

  • in the first 10 pages, the chances it will--

  • it could be that on page 100 there

  • will be some mind-blowing idea that I'm now missing.

  • But there are so many--

  • I keep thinking, there are so many books,

  • wonderful books out there that I will never read,

  • so why waste time on the less optimal book?

  • So I will try, like, a book on biology and then economics

  • and then psychology and then fiction and whatever,

  • and just go through them quite quickly until I find

  • something that really grabs me.

  • WILSON WHITE: Another live question.

  • AUDIENCE: Hi, Mr. Harari.

  • Thanks for being here.

  • Fascinating talk as always.

  • I do a little bit of meditation myself,

  • and I've heard that you do a lot of meditation

  • on the order of hours a day.

  • Is that right?

  • YUVAL NOAH HARARI: I try to do two hours every day,

  • and I try to go every year to a long retreat of 45 or 60 days.

  • AUDIENCE: So I was wondering, how

  • do you feel that has influenced your life and the ideas

  • that you have?

  • YUVAL NOAH HARARI: Oh, it's had a tremendous influence,

  • I think both on my inner peace of mind,

  • but also on my work as a scientist.

  • And maybe the two most important influences

  • are that, first, it enabled me to have

  • more clarity and more focus.

  • And certainly when you write about such big subjects

  • like trying to summarize the whole of history in 400 pages.

  • So having a very, very focused mind

  • is very important, because the great difficulty

  • is that everything kind of distracts you.

  • You start writing about the Roman Empire

  • and you say, well, I have to explain

  • this and this and this and this, and you end up

  • with 4,000 pages.

  • So we have to be very-- what is really important,

  • and what can be left out?

  • And the other thing is that at least the meditation

  • that I practice, which is vipassana meditation,

  • it's all about really knowing the difference

  • between the fictions and stories generated

  • by our mind and the reality.

  • What is really happening right now?

  • And when I meditate, the thing that happens

  • is that the mind is like a factory that constantly

  • generates stories about myself, about other people,

  • about the world.

  • And they are very attractive.

  • Like, I get identified with them.

  • And the meditation is constantly, don't.

  • It's just a story.

  • Leave it.

  • Just try to stay with what is really happening right now.

  • And this is the central practice in meditation.

  • It's also a guiding principle when I study history

  • or when I study what's happening in the world.

  • AUDIENCE: Great.

  • Thank you.

  • WILSON WHITE: Let's take another question from the Dory.

  • With inequality rising across most nations

  • in the last few decades, what is your perspective

  • on how we can use technological growth to solve this problem

  • and create a more equitable world?

  • Do we need a different economic paradigm to achieve this?

  • YUVAL NOAH HARARI: Yes, we probably

  • need a different economic paradigm, because we

  • are entering kind of uncharted waters,

  • especially because of the automation revolution

  • and the growing likelihood that more and more people might

  • be completely pushed out of the job market,

  • not just because there won't be enough jobs,

  • but simply because the pace of change in the job market

  • will accelerate.

  • So even if there are enough jobs,

  • people don't have the psychological balance

  • and stamina to constantly retrain, reskill, or reinvent

  • themselves.

  • And so I think the biggest problem in the job market

  • is really going to be the psychological problem.

  • And then what do you do when more and more people are

  • left out?

  • And there are explorations of new models

  • like universal basic income and so forth, which

  • are worth exploring.

  • I don't have the answers.

  • I will just say that anybody who thinks

  • in terms like universal basic income

  • should take the word universal very, very seriously,

  • and not settle for national basic income.

  • Because the greatest inequality we

  • are facing will probably be inequality between countries,

  • and not within countries.

  • Some countries are likely to become extremely wealthy

  • due to the automation revolution,

  • and California is certainly one of these places.

  • Other countries might lose everything,

  • because their entire economy depends

  • on things like manual labor, which will lose its importance,

  • and they just don't have the resources

  • and the educational system to kind of turn themselves

  • into high-tech hubs.

  • So the really crucial question is not,

  • what do we do about, I don't know,

  • Americans in Indiana who lose their jobs?

  • The really important question is,

  • what do we do about people in Guatemala or Bangladesh

  • who lose their jobs?

  • This should be, I think, the focus

  • of this question of inequality.

  • WILSON WHITE: OK.

  • We'll take another live question.

  • AUDIENCE: Hello, Mr. Harari.

  • Thank you for doing this Q&A. So at Google,

  • we have a responsibility to build products and services

  • which not only achieve results for our shareholders,

  • but also that actually benefit our end users.

  • So in order to spend less time hacking humans

  • and spend more time reducing suffering,

  • we need to understand what type of future we want to build.

  • So what I wanted to ask you is, what

  • are your personal methodologies for making predictions

  • about the future?

  • And what suggestions would you give

  • to Googlers who want to have a more informed understanding

  • of the future?

  • YUVAL NOAH HARARI: As I said in the very beginning,

  • I don't think we can predict the future,

  • but I think we can influence it.

  • What I try to do as a historian-- and even

  • when I talk about the future, I define myself as a historian,

  • because I think that history is not the study of the past.

  • History is the study of change, how

  • human societies and political systems and economies change.

  • And what I try to do is to map different possibilities

  • rather than make predictions like,

  • this is what will happen in 2050.

  • And we need to keep a very broad perspective.

  • One of the biggest dangers is when

  • we have a very narrow perspective,

  • like we develop a new technology and we think,

  • oh, this technology will have this outcome.

  • And we are convinced of this prediction,

  • and we don't take into account that the same technology might

  • have very different outcomes.

  • And then we don't prepare.

  • And again, as I said in the beginning,

  • it's especially important to take into account

  • the worst possible outcomes in order to be aware of them.

  • So I would say whenever you are thinking

  • about the future, the future impact of a technology

  • you are developing, create a map of different possibilities.

  • If you see just one possibility, you're not looking wide enough.

  • If you see two or three, it's probably also not wide enough.

  • You need a map of, like, four or five different possibilities,

  • minimum.

  • WILSON WHITE: Let's take another live question.

  • AUDIENCE: Hey, Mr. Harari.

  • So my question is--

  • I'll start very broad, and then I'll

  • narrow it down for the focus.

  • I'm really interested in, what do

  • you think are the components that

  • make these fictional stories so powerful in how

  • they guide human nature?

  • And then if I narrow it down is, I'm

  • specifically interested in the self-destruction behavior

  • of humans.

  • How can these fictional stories led by a few people

  • convince the masses to literally kill or die

  • for that fictional story?

  • YUVAL NOAH HARARI: It again goes back to hacking the brain

  • and hacking the human animal.

  • It's been done throughout history, previously just

  • by trial and error, without the deep knowledge of brain science

  • and evolution we have today.

  • But to give an example, like if you

  • want to convince people to persecute

  • and exterminate some other group of people, what you need to do

  • is really latch onto the disgust mechanisms in the human brain.

  • Evolution has shaped homo sapiens

  • with very powerful disgust mechanisms in the brain

  • to protect us against diseases, against all kinds of sources

  • of potential disease.

  • And if you look at the history of bias and prejudice

  • and genocide, one recurring theme

  • is that it repeatedly kind of latches

  • onto these disgust mechanisms.

  • And so you would find things like women are impure,

  • or these other people, they smell bad

  • and they bring diseases.

  • And very, very often disgust is at the center.

  • So you'll often find comparisons between certain types of humans

  • and rats or cockroaches, or all kinds

  • of other disgusting things.

  • So if you want to instigate genocide,

  • you start by hacking the disgust mechanisms in the human brain.

  • And this is very, very deep.

  • And if it's done from an early age,

  • it's extremely difficult to undo afterwards.

  • People can-- they know intellectually

  • that it's wrong to say that these people are disgusting,

  • that these people, they smell bad.

  • But they know it intellectually.

  • But when you place them, like, in a brain scanner,

  • they can't help it.

  • If they were raised--

  • I mean, so we can still do something about it.

  • We can still kind of defeat this.

  • But it's very difficult, because it really

  • goes to the core of the brain.

  • WILSON WHITE: So I'll end on a final question,

  • because we're at time.

  • When Larry and Sergey founded Google,

  • they did so with this deep belief

  • in technology's ability to improve people's lives

  • everywhere.

  • So if you had a magic wand and you could give Google

  • the next big project for us to work on, in 30 seconds or less,

  • what would you grant us as our assignment?

  • YUVAL NOAH HARARI: An AI system that

  • gets to know me in order to protect me and not in order

  • to sell me products or make me click on advertisements and so

  • forth.

  • WILSON WHITE: All right.

  • Mission accepted.

  • [LAUGH]

  • Thank you, guys.

  • [APPLAUSE]

[MUSIC PLAYING]
