
  • [MUSIC PLAYING]

  • WILSON WHITE: Good afternoon, everyone, especially

  • for those of you who are here in California.

  • My name is Wilson White, and I'm on the public policy

  • and government relations team here in California.

  • We have an exciting talk for you today as part of our Talks

  • at Google series, as well as a series of conversations

  • we're having around AI ethics and technology ethics more

  • generally.

  • So today, I'm honored to have Professor Yuval Noah

  • Harari with us.

  • Yuval is an Israeli historian and a professor at the Hebrew

  • University of Jerusalem.

  • He is a dynamic speaker, thinker, and now

  • an international bestselling author.

  • He's the author of three books.

  • We're going to talk about each of those books today.

  • The first book he published in 2014, "Sapiens," which explored

  • some of our history as humans.

  • His second book in 2016 had an interesting take on our future

  • as humans.

  • It was "Homo Deus."

  • And then he recently published a new book,

  • "21 Lessons for the 21st Century,"

  • which attempts to grapple with some of the pressing

  • issues that we are facing today.

  • So we'll talk about some of the themes in each of those books

  • as we go through our conversation.

  • But collectively, his writings explore very big concepts

  • like free will and consciousness and intelligence.

  • So we'll have a lot to explore with Yuval today.

  • So with that, please join me in welcoming Professor Yuval

  • to Google.

  • [APPLAUSE]

  • YUVAL NOAH HARARI: Hello.

  • WILSON WHITE: Thank you, Professor, for joining us.

  • Before getting started, I have to say

  • that when the announcement went out

  • across Google about this talk, I got several emails

  • from many Googlers around the world who told me

  • that they had either read or are currently reading

  • one or multiple of your books.

  • So if you are contemplating a fourth book,

  • maybe on the afterlife, no spoilers

  • during this conversation.

  • I want to start with maybe some of the themes in both

  • your current book, "21 Lessons," as well

  • as "Homo Deus," because I'm the father of two young kids.

  • I have two daughters, a five-year-old

  • and a three-year-old.

  • And the future that you paint in "Homo Deus" is interesting.

  • So I'd like to ask you, what should I

  • be teaching my daughters?

  • YUVAL NOAH HARARI: That nobody knows

  • how the world will look in 2050,

  • except that it will be very different from today.

  • So the most important things to emphasize in education

  • are things like emotional intelligence

  • and mental stability, because the one thing

  • that they will need for sure is the ability

  • to reinvent themselves repeatedly

  • throughout their lives.

  • It's really the first time in history

  • that we don't really know what particular skills to teach

  • young people, because we just don't

  • know in what kind of world they will be living.

  • But we do know they will have to reinvent themselves.

  • And especially if you think about something like the job

  • market, maybe the greatest problem they will face

  • will be psychological.

  • Because at least beyond a certain age,

  • it's very, very difficult for people to reinvent themselves.

  • So we kind of need to build identities.

  • If traditionally people built

  • identities like stone houses with very deep foundations,

  • now it makes more sense to build identities like tents that you

  • can fold and move elsewhere.

  • Because we don't know where you will have to move,

  • but you will have to move.

  • WILSON WHITE: You will have to move.

  • So I may have to go back to school now

  • to learn these things so that I can teach the next generation

  • of humans here.

  • In "21 Lessons for the 21st Century,"

  • you tackle several themes that we at Google,

  • as a company on the leading edge of technology

  • and of how technology is being deployed in society,

  • also wrestle with.

  • Tell me a bit about your thoughts

  • on why democracy is in crisis.

  • That's a theme in the current book,

  • and I want to explore that a bit.

  • Why you think liberal democracy as we knew

  • it is currently in crisis.

  • YUVAL NOAH HARARI: Well, the entire liberal democratic

  • system is built on philosophical ideas we've inherited

  • from the 18th century, especially the idea

  • of free will, which underlies the basic mottos

  • of the liberal world view, like the voter knows best,

  • the customer is always right, beauty

  • is in the eye of the beholder, follow your heart,

  • do what feels good.

  • All these liberal mottos are

  • the foundation of our political and economic system.

  • They assume that the ultimate authority is the free choices

  • of individuals.

  • I mean, there are, of course, all kinds of limitations

  • and boundary cases and so forth, but when

  • push comes to shove, for instance,

  • in the economic field, then corporations

  • will tend to retreat behind this last line of defense

  • that this is what the customers want.

  • The customer is always right.

  • If the customers want it, it can't be wrong.

  • Who are you to tell the customers that they are wrong?

  • Now of course, there are many exceptions,

  • but this is the basics of the free market.

  • This is the first and last thing you learn.

  • The customer is always right.

  • So the ultimate authority in the economic field

  • is the desires of the customers.

  • And this is really based on a philosophical and metaphysical

  • view about free will, that the desires of the customer, they

  • emanate, they represent the free will of human beings,

  • which is the highest authority in the universe.

  • And therefore, we must abide by them.

  • And it's the same in the political field

  • with the voter knows best.

  • And this was OK for the last two or three centuries.

  • Because even though free will was always a myth and not

  • a scientific reality--

  • I mean, science knows of only two kinds

  • of processes in nature.

  • It knows about deterministic processes

  • and it knows about random processes.

  • And their combination results in probabilistic processes.

  • But randomness and probability, they are not freedom.

  • They mean that I can't predict your actions

  • with 100% accuracy, because there is randomness.

  • But a random robot is not free.

  • If you connect a robot, say, to a piece of uranium,

  • and the decisions of the robot are determined

  • by the random disintegration of uranium

  • atoms, then you will never be able to predict exactly

  • what this robot will do.

  • But this is not freedom.

  • This is just randomness.

  • Now this was always true from a scientific perspective.

  • Humans, certainly they have a will.

  • They make decisions.

  • They make choices.

  • But they are not free to choose the will.

  • The choices are not independent.

  • They depend on a million factors,

  • genetic and hormonal and social and cultural and so forth,

  • which we don't choose.

  • Now up till now in history, humans

  • were so complicated that from a practical perspective,

  • it still made sense to believe in free will,

  • because nobody could understand you better

  • than you understand yourself.

  • You had this inner realm of desires and thoughts

  • and feelings to which you had privileged access.

  • WILSON WHITE: Yeah, but that hasn't changed today, right?

  • Like, that--

  • YUVAL NOAH HARARI: It has changed.

  • There is no longer--

  • the privileged access now belongs to corporations like Google.

  • They can have access to things happening ultimately

  • inside my body and brain, which I don't know about.

  • There is somebody out there-- and not just one.

  • All kinds of corporations and governments that maybe not

  • today, maybe in five years, 10 years, 20 years, they

  • will have privileged access to what's happening inside me.

  • More privileged than my access.

  • They could understand what is happening in my brain

  • better than I understand it, which means-- they will never

  • be perfect.

  • WILSON WHITE: Right.

  • But you will, as a free person, like, you

  • will have delegated that access or that ability

  • to this corporation or this machine or this--

  • YUVAL NOAH HARARI: No, you don't have to give them permission.

  • I mean, in some countries maybe you have no choice at all.

  • But even in a democracy like the United States,

  • a lot of the information that enables an external entity

  • to hack you, nobody asks you whether you

  • want to give it away or not.

  • Now at present, most of the data that

  • is being collected on humans is still from the skin outwards.

  • We haven't seen anything yet.

  • We are still just at the tip of this revolution,

  • because at present, whether it's Google and Facebook and Amazon

  • or whether it's the government or whatever, they all

  • are trying to understand people mainly

  • on the basis of what I search, what I buy, where I go,

  • who I meet.

  • It's all external.

  • The really big revolution, which is coming very quickly,

  • will be when the AI revolution and machine

  • learning and all that, the infotech revolution,

  • meets and merges with the biotech revolution

  • and goes under the skin.

  • Biometric sensors or even external devices.

  • Now we are developing the ability, for example,

  • to know the blood pressure of individuals

  • just by looking at them.

  • You don't need to put a sensor on a person.

  • Just by looking at the face, you can

  • tell what the blood pressure of that individual is.

  • And by analyzing tiny movements in the eyes, in the mouth,

  • you can tell all kinds of things from the current mood

  • of the person--

  • are you angry, are you bored--

  • to things like sexual orientation.

  • So we are talking about a world in which humans

  • are no longer a black box.

  • Nobody really understands what happens inside, so we say, OK.

  • Free will.

  • No, the box is open.

  • And it's open to others, certain others more

  • than it is open to-- you don't understand what's

  • happening in your brain, but some corporation

  • or government or organization could understand that.

  • WILSON WHITE: And that's a theme that you

  • explore in "Homo Deus" pretty--

  • YUVAL NOAH HARARI: It's in both "Homo Deus"

  • and in "21 Lessons."

  • This is like, maybe the most important thing to understand

  • is that this is really happening.

  • And at present, almost all the attention goes to the AI.

  • Like, now I've been on a two-week tour of the US

  • for the publication of the book.

  • Everybody wants to speak about AI.

  • Like, AI.

  • When the previous book, "Homo Deus," came out, nobody cared about AI.

  • Two years later, it's everywhere.

  • WILSON WHITE: It's the new hot thing.

  • YUVAL NOAH HARARI: Yeah.

  • And I try to emphasize, it's not AI.

  • The really important thing is actually the other side.

  • It's the biotech.

  • It's the combination.

  • It's only the combination-- it's only with the help of biology

  • that AI becomes really revolutionary.

  • Because just do a thought experiment.

  • Let's say we had the best, the most developed AI in the world.

  • But humans were not animals.

  • They were not biochemical algorithms,

  • but something like transcendent souls

  • that make decisions through free will.

  • In such a world, AI would not have mattered much,

  • because AI in such a world could never have replaced teachers

  • and lawyers and doctors.

  • You could not even build self-driving cars

  • in such a world.