
ALVING: Good afternoon and welcome to the Council on Foreign Relations' discussion on the future of artificial intelligence, robots, and beyond. I'm Amy Alving, and I'll be your moderator for today. We have a very distinguished panel here, and in your information you have a detailed bio on everybody on the stage, so we won't go into those specifics.

But briefly, let me introduce Professor Peter Bock, emeritus from George Washington University, who has decades of experience in building and developing artificial intelligence systems. Next to me we have Paul Cohen, also an academic, from the University of Arizona, who is now at my alma mater, DARPA, working for the Defense Department's most advanced research and development organization. And we also have Andy McAfee from MIT, who comes to this from a business and economics background, with long experience looking at the impact of artificial intelligence from an economic perspective.

So we'll start today with thirty minutes of moderated discussion amongst the panelists, and then we'll turn it over to the audience for Q&A. I think in this area, it's important to make sure that we have some common understanding of what we're talking about when we say artificial intelligence. So I'll ask Peter to start off by describing to us: what is artificial intelligence, more than just smart software?

BOCK: Yeah, in my TED talk, I described people who come up to me and say that AI is really the field that tries to solve very, very, very hard problems, and I always found that definition a bit smarmy, because all of us here are involved in solving very, very, very hard problems. That's not it at all.

It's a general-purpose problem-solving engine that has a more or less broad domain of application, so that a single solution can apply to many different situations, even in different fields. That's a beginning definition for AI. A longer definition: an engine that can eventually be broadened into beginning to imitate, shall we say, or, in fact, emulate the cognition of our own thinking patterns.

I think I'll stop there and let the rest jump in.

ALVING: OK. So, Paul, I know that from your perspective, artificial intelligence is about more than just crunching a lot of numbers. You know, the buzzword out in the world today is big data: big data is going to solve all our problems. But big data isn't sufficient, is that correct?

COHEN: That's right. So do you want me to talk about what's AI or why big data isn't sufficient?

ALVING: Both. How does AI go beyond simply crunching a lot of numbers for big data?

COHEN: Let me give an example. I'm managing a program at DARPA now called Big Mechanism. Big Mechanism is sort of a poke in the eye to big data. But it's actually based on exactly this distinction between crunching numbers and understanding what the data is telling you.

So the purpose of the Big Mechanism program is for machines to read the primary literature in cancer biology, assemble models of cell signaling pathways in cancer biology that are much bigger and more detailed than any human can comprehend, and then figure out from those models how to attack and suppress cancer.

Now, data certainly is an important part of that, but I think the difference between big data and Big Mechanism is that we seek causal models of something really complicated. Data informs those models, but the understanding comes from those models. And AI has always been about understanding: understanding the visual world, understanding speech. It's always been about understanding.

ALVING: And so is the artificial intelligence creating the understanding?

COHEN: Yeah.

ALVING: Or are you building in the understanding...

COHEN: No, no, the machine will read the literature. I mean, you know, you see it in the papers. The papers say things like, well, we suppressed this gene and the following stuff happened, and so you take that little piece of causal knowledge and you put it into your big, complicated model. And as the model gets bigger and more complicated, you get a more and more refined understanding of how cancer works.

ALVING: So, Andy, I know you look at this from more of an economic-impact perspective. Where do you see this play between model-based understanding and big data playing out in the market today?

MCAFEE: And this is the Jets versus the Sharks of the artificial intelligence world. There are these two camps that have been going at it for as long as we've been thinking about these problems. There's the model-first camp: you need to understand cause and effect, you need to understand the world, before we can think about simulating it, or embedding it, in a piece of technology. And there's the other camp that says, no, actually.

And the best distinction I ever heard between those two approaches -- the one that brought it home to me -- was the way a child learns language versus the way an adult learns language. If I were to start learning a language tomorrow, I would do it the model-based way. I'd sit down with a grammar textbook. I'd try to understand how to conjugate the verbs. I'd work out whether there are masculine and feminine forms. I'd go through this arduous model-based process of trying to acquire a new language. And as we all know, that's not how a two-year-old does it. She just sits around and listens to the adults around her talking, and talking to her, and she builds up a very much data-first understanding of the world, to the point that she acquires language flawlessly, without having a single grammar lesson.

And what's further interesting is that if you ask her, why did you add the S to that word? Why is it "I go" but "he goes"? She would say, "I have no idea -- I've never heard the word conjugation before. I just know that's how language works."
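[Editor's note: the data-first route McAfee describes can be sketched in a few lines. The toy learner below only counts which word follows which in raw text -- no grammar rule is ever supplied -- and it still produces "goes" after "he". The tiny corpus is invented purely for illustration.]

```python
from collections import Counter, defaultdict

# A toy data-first language learner: it counts word pairs in raw text.
# No conjugation rules are ever given to it.
corpus = "i go . you go . he goes . she goes . we go . he goes home".split()

bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def next_word(word):
    # Predict the continuation seen most often after `word`.
    return bigrams[word].most_common(1)[0][0]

print(next_word("he"))  # "goes" -- learned from exposure, never from a grammar lesson
print(next_word("i"))   # "go"
```

Like the two-year-old, the model cannot say why it adds the S; it only knows that this is what the data does.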

So this divide is a really, really fundamentally important divide. The news from the trenches that I can bring you is that in real-world applications, the data side is winning. There's almost a dismissive term for the model-based view these days among a lot of the AI geeks doing work at Google and Apple and Facebook and putting things in front of us. They call the model-based view "feature engineering," and they put air quotes around it, and they're almost quite dismissive about it.

And in general, in head-to-head competitions among different approaches in areas that we care about -- image recognition, natural language processing, artificial speech, things like that -- the model-based approach is the one that's winning these competitions and, therefore, is being embedded in the commercial technologies that we're using.

COHEN: You meant to say the data...

MCAFEE: I meant to say the data-based -- thank you -- the data-based side is winning. My single favorite example of that -- this was a crazy demonstration -- a team that founded a start-up called DeepMind built a completely data-first learning system, and they asked it to play old-fashioned Atari videogames from the 1980s. And they said, we're not even going to try to teach you the rules of Pac-Man or Battlezone or anything like that. All you're going to do is try to maximize this thing in the upper-right-hand corner called the score. You figure it out from there. They pointed it at seven different Atari games, and on three of those games the system eventually got better than any human player. So I personally am not taking a stand on model versus data. I'm just saying, over and over again these days, the data world is winning the competitions.
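[Editor's note: the score-only learning loop McAfee describes can be sketched with tabular Q-learning -- a minimal stand-in for DeepMind's far larger deep reinforcement learning system. The two-button "game" below is invented for illustration; the agent is told nothing except the score change after each press.]

```python
import random

random.seed(0)

# A stand-in "game" whose rules the learner never sees: two buttons,
# and button 1 tends to raise the score.
def play(action):
    return 1.0 if action == 1 else 0.0  # observed score change

# Tabular Q-learning: estimate each button's value purely from score feedback.
q = [0.0, 0.0]
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate
for _ in range(500):
    if random.random() < epsilon:
        a = random.randrange(2)                # explore a random button
    else:
        a = max(range(2), key=lambda i: q[i])  # exploit current estimates
    q[a] += alpha * (play(a) - q[a])           # nudge estimate toward observed score

best = max(range(2), key=lambda i: q[i])
print(best)  # the agent settles on button 1 without ever being told the rules
```

The full Atari result replaces this two-entry table with a deep network reading raw screen pixels, but the principle -- improve purely from the score -- is the same.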

ALVING: Peter?

BOCK: I couldn't agree with Andrew more. I'm in that...

MCAFEE: Boring panel.

(LAUGHTER)

BOCK: I'm in the same camp that he describes as being data-driven, not rule-driven. I have been developing, since 1980, programs of the kind he describes, called collective learning systems, that play games and get better than humans at them -- simple games. But it soon became obvious to me that while it's interesting to play games, none of you out there is spending much of your life playing games, and do you really want an opponent who is not a natural opponent to play games with you? I think you should be talking to your therapist about that.

The ultimate result of that, which is a totally data-driven game -- that is, it doesn't have any rules at all -- is the art that you see being generated on the screen out in the place where we were having lunch. That is trained by the masters. We simply show it all the paintings of Van Gogh or Renoir or Rembrandt and so forth, and then we say, here's a photograph: render it in the style of that.

And when it's all through, if you were to say to ELIZA, so what does this mean over here? she would say, "I don't understand the question," because she doesn't know how to answer questions like that. She just knows how to paint. Probably Van Gogh would be incensed by the remark, or at least simply turn away and walk into the fields to paint again.

And one last thing. It was Fu at Purdue University who said, many years ago: if you have the right features, almost any decision-making apparatus will work; if you don't have the right features, no apparatus will work. So, once again, the data-side people do have to pay attention to the extremely important task of extracting, from whatever you're looking at -- or listening to, or smelling, or feeling -- what its important aspects are, and using those features as the basis for assembling a lot of statistical data.
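[Editor's note: Fu's point can be illustrated directly with synthetic data invented for the purpose. Given the right feature, even a one-line threshold "apparatus" classifies well; given an irrelevant feature, no apparatus can.]

```python
import random

random.seed(2)

# Synthetic task: tell tall objects (label 1) from short ones (label 0).
heights = [random.gauss(170, 5) for _ in range(500)] + \
          [random.gauss(150, 5) for _ in range(500)]
labels = [1] * 500 + [0] * 500
noise = [random.random() for _ in range(1000)]  # an irrelevant "feature"

def accuracy(feature, threshold):
    # The simplest possible decision-making apparatus: a single threshold.
    preds = [1 if f > threshold else 0 for f in feature]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

print(accuracy(heights, 160))  # near 1.0: the right feature makes it easy
print(accuracy(noise, 0.5))    # near 0.5: chance -- wrong feature, no apparatus helps
```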

Three of my graduate students are now building exactly the kind of system that Andrew just described: a natural-language understanding system. They have read 11,000 English novels -- not the students, the machine; they haven't read any of them...

(LAUGHTER)

... which disturbs me a bit. And it can carry on a coherent conversation. It's still under development, so I'm not ready to show it yet, but it can carry on a coherent conversation with somebody who cares to converse with it. It tends to wander around a bit, sort of like people who've had perhaps a few beers and are talking about who knows what, but nonetheless it is coherent, and that's a step in the right direction. It is understanding based on learning and experience.

COHEN: So we don't entirely agree about everything. Let's go back to your game example for just a moment, because I don't want people to think that the distinction between model-based approaches and data-based approaches is quite so cut and dried.

Humans who have learned to play one videogame will learn to play the next more quickly. That's not true of computers. It'll be a fixed cost per game, and the rate at which the machine learns the next game will be unaffected by anything it learned in the first game.

Now, why is that not true for humans? Well, it's obviously because, as humans learn games -- or, in fact, learn anything at all -- they abstract general principles from what they're learning. Call that models, if you like; it's as good a word as any, but it is something that we know machines don't do very well. In fact, DARPA has sunk vast amounts of money into programs with names like transfer learning, right, where the goal is to try and transfer knowledge acquired in one context to another. Can't do it.

I also don't want to leave anyone with the impression that we're trying to have humans build models of anything, right? We're trying to have machines...

BOCK: This is important.

COHEN: ... build -- trying to have machines build models of things by reading.

MCAFEE: With data, build the model.

COHEN: With the data, build the model. And it's not any old model. It has to be a causal model.

MCAFEE: Good.

COHEN: Right? Because only when you have a causal model does it make sense to say, push this button and the following thing will happen. If it's not a causal model, you can't say that -- and that brings me to the fundamental limitation of data-driven science as it is practiced today: really, what you've got there is correlation versus cause. What it all boils down to is that machines are exceptionally good at finding associations between things that co-vary -- that's correlation -- and exceptionally bad at figuring out what causes what, right? And if you want to do science, if you want to change the world, it's important to understand why things are the way they are, not just that if you profile your customer in the following way, you can increase the probability that they'll buy some product.
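[Editor's note: Cohen's correlation-versus-cause distinction is easy to demonstrate with synthetic data, invented here for illustration. A hidden common cause makes two variables co-vary strongly even though neither causes the other -- and the correlation alone cannot tell you which picture is true.]

```python
import random

random.seed(1)

# A hidden confounder (say, season) drives both variables; neither causes the other.
n = 10_000
season = [random.random() for _ in range(n)]
ice_cream = [s + random.gauss(0, 0.1) for s in season]  # an effect of season
sunburn = [s + random.gauss(0, 0.1) for s in season]    # also an effect of season

def corr(x, y):
    # Pearson correlation, computed from scratch.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# The association is strong, yet banning ice cream would not prevent sunburn:
print(corr(ice_cream, sunburn))  # high correlation, no causation in either direction
```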

BOCK: My experience is quite different, Paul. My experience is that these machines that learn how to play games can be built to understand the general-purpose rules that hold, by expanding their knowledge base so that one of the features they have is: what is this game like? And they bring that in, and they notice the similarity. They don't bother re-learning the lesson that green is good and red is bad. They know that