
  • SUSANNA: Authors at Google today is very pleased

  • to invite David Mindell.

  • David Mindell is the Dibner professor

  • of the history of engineering and manufacturing at MIT.

  • He has 25 years of experience as an engineer

  • in the field of undersea robotic exploration

  • as a veteran of more than 30 oceanographic expeditions,

  • and more recently, as an airplane pilot and engineer

  • of autonomous aircraft.

  • He is the award winning author of "Iron Coffin-- War,

  • Technology, and Experience aboard the USS Monitor,"

  • and "Digital Apollo-- Human and Machine in Spaceflight."

  • And his most recent book, "Our Robots, Ourselves"

  • was published by Viking on October 13 of this year.

  • Please join me in welcoming David Mindell.

  • [APPLAUSE]

  • DAVID MINDELL: Thank you, Susanna, for that nice introduction.

  • It's a pleasure to be here.

  • And I'm going to talk about my new book,

  • "Our Robots, Ourselves."

  • I come to this book sort of out of the experience

  • of my previous book, which was called "Digital Apollo."

  • And "Digital Apollo" was about the computers

  • and the software that were used inside both the command module

  • and the lunar module for the Apollo lunar landings

  • in the '60s, how they were designed,

  • how they were engineered.

  • It was really the first embedded computer.

  • It was certainly the first computer

  • in which software became central to human life--

  • life critical software-- and one of the first real time

  • control computers, and the first digital fly by wire system.

  • And you can see in this image over on the right, which

  • is the cover image-- it was made actually by John Knoll who you

  • may know from Industrial Light & Magic-- and a little bit more

  • clearly presented here.

  • The book focuses on this kind of climactic moment in the Apollo

  • 11 lunar landing where the mythology went,

  • Armstrong reaches up and turns off

  • the computer at the last minute and lands the spacecraft by hand

  • to avoid this crater that you can see there

  • out the window, West crater.

  • And the book sort of takes that moment as a starting point

  • for why would he turn off the computer,

  • and why was that important?

  • And now it turns out that he didn't turn off the computer.

  • He turned it from a fairly highly automated targeting

  • mode that kind of allowed him a kind of cursor

  • control around the moon to a still fairly highly

  • semi-automated mode, attitude hold

  • in his right hand, rate of descent

  • with a switch in his left hand.

  • Still very much a fly by wire kind of automated mode.

  • That's actually not so far from how pilots

  • fly Airbus airliners today.

  • He didn't turn off the computer.

  • He moved it to a different level of automation

  • to suit what he felt was the situation at the time.

  • And this was a very interesting moment because I learned,

  • in the course of writing this book, at the time,

  • the Soviet spacecraft were controlled by analog computers.

  • And they were very highly automated.

  • They left very little discretion and role for the astronauts.

  • The American computer was a general purpose

  • digital computer, one of the first

  • uses of integrated circuits.

  • In fact, this computer consumed 60%

  • of the national output of integrated circuits

  • for a couple years during the '60s.

  • So it was a very advanced, forward looking thing to do,

  • including all the complexities of the software.

  • And yet all that advanced technology

  • did not make the machine more automated.

  • It just made it automated in a more nuanced, sophisticated way

  • that gave the human better control over the spacecraft.

  • And that gave me this idea, which I then

  • pursued throughout this new book,

  • that often the highest level of automation--

  • we talk about levels of automation--

  • is not necessarily full autonomy.

  • And the most difficult challenging

  • technological problem is not full autonomy,

  • but rather what I've come to call a perfect five.

  • If you think about level one as fully manual,

  • level 10 as fully automated, the perfect five

  • is really the most complicated, difficult,

  • and I think also socially and financially rewarding

  • place to have automated systems where the human is in control.

  • There's trusted, transparent autonomy.

  • And the system can be moved up and down various levels,

  • turning things on, turning things off,

  • in order to suit the particular situation.
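The scale Mindell describes here is easy to make concrete. Below is a minimal sketch, not anything from the book or from Apollo's actual software: the level names and numbers are invented, and the point is only that the operator moves the system along the scale rather than switching autonomy on or off.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Illustrative 1-10 scale: 1 is fully manual, 10 is fully automated."""
    FULLY_MANUAL = 1
    RATE_COMMAND = 3      # human commands rates; the computer stabilizes
    PERFECT_FIVE = 5      # human in control of trusted, transparent autonomy
    TARGETED = 8          # computer steers toward a human-chosen target
    FULLY_AUTOMATED = 10

class Spacecraft:
    def __init__(self) -> None:
        self.level = AutomationLevel.TARGETED

    def set_level(self, level: AutomationLevel) -> None:
        # The key property: the human can move the system up and down the
        # scale at any time, as Armstrong did -- not "computer off," just
        # a different level of automation.
        self.level = level

lander = Spacecraft()
lander.set_level(AutomationLevel.PERFECT_FIVE)
```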

  • And what you see through the Apollo story,

  • and many of the stories in the book is that a lot of systems

  • start out in the engineering phase

  • as imagining full autonomy.

  • The engineers on the Apollo computer

  • thought there would only be two buttons on the interface.

  • One would be go to moon, and one would be take me home.

  • And instead of course what you ended up

  • with was this very rich, very carefully designed

  • mix of instruments and controls.

  • As these systems frequently, time and time again,

  • move from laboratory to field, there

  • are human interventions and human controls

  • put in at critical moments.

  • So again, I've been talking about the perfect five.

  • So the subtitle of the book is "The Myths of Autonomy."

  • I want to read you a little bit from chapter one

  • about what those myths are.

  • First there's the myth of linear progress,

  • the idea that technology evolves from direct human involvement

  • to remote presence, and then to fully autonomous robots.

  • Political scientist Peter Singer--

  • you may be familiar with his book "Wired for War"--

  • epitomizes this pathology when he writes that quote,

  • "this concept of keeping the human in the loop is already

  • being eroded by both policymakers and the technology

  • itself, which are both moving rapidly toward pushing humans

  • out of the loop," unquote.

  • But there's no evidence to suggest

  • that this is a natural evolution,

  • that the technology itself, as Singer puts it,

  • does any such thing.

  • In fact, there's good evidence-- a lot of it

  • is presented in this book-- that people are moving into deeper

  • intimacy with their machines.

  • Second is the myth of replacement,

  • the idea that machines take over human jobs one for one.

  • But human factors researchers and cognitive systems engineers

  • have found that rarely does automation simply

  • mechanize a human task.

  • Rather, it tends to make the task more complex,

  • often increases the workload, or at least shifts it around.

  • And finally, as I mentioned, we have the myth

  • of full autonomy, the Utopian idea that

  • robots today and in the future should operate entirely

  • on their own.

  • Yes, automation can certainly take

  • on parts of tasks that were previously

  • accomplished by humans.

  • Machines do act on their own in response

  • to their environments for certain periods of time.

  • But the machine that operates entirely independently

  • of human direction is a useless machine.

  • And I used to say that only a rock is truly autonomous.

  • But then my geologist friends reminded me

  • that even rocks are shaped and formed by their environments.

  • Automation changes the type of human involvement required,

  • transforms it, but does not eliminate it.

  • For any apparently autonomous system,

  • you can always find the wrapper of human control

  • that makes it useful and returns meaningful data.

  • The questions that I'm interested in then

  • are not manned versus unmanned, human control

  • versus autonomous, but rather where are the people?

  • Which people are they?

  • What are they doing, and when are they doing it?

  • And so you can trace through networks of-- in some sense,

  • any programming task is a kind of placing

  • of human intention, and human design,

  • and human views of the world into a piece of software that

  • is later executed at some future date.

  • So the book covers four extreme environments.

  • And the idea is that in these extreme environments like space

  • flight, people have been forced to use robotics for 30

  • or 40 years because, in many cases,

  • it was the only way to do a job where

  • people couldn't physically be.

  • And we can look at those environments

  • and see something about our robotic future

  • in more ordinary environments like automobiles

  • and other aspects of daily life.

  • So I begin with the deep ocean which is where my career began.

  • And the move from scientists visiting the seafloor

  • in submarines-- you may be familiar with Alvin,

  • it was a three man submarine operated by the Oceanographic

  • Institution at Woods Hole where I used

  • to work-- toward remote robots.

  • And this was the evolution that I

  • was involved in starting in the late '80s,

  • moving into the 1990s.

  • And we were all convinced that the world was moving

  • from human, to remote, to autonomous.

  • There was a great deal of tension over that issue.

  • This image appeared in National Geographic in 1981

  • in an article written by my mentor in that field, Dr.

  • Robert Ballard, whom you may know as the discoverer of the Titanic.

  • And in this article, Ballard laid out this new vision

  • for exploring the seafloor which was remote or telepresence.

  • And you can see there's now a traditional oceanographic

  • ship-- can you see that out there--

  • traditional oceanographic ship on the surface,

  • a fiber optic cable, itself a rather

  • new technology in the 1980s.

  • Then a kind of basic towed sled called Argo in this case,

  • and then a little robot called Jason

  • that would come off this vehicle and explore,

  • in this case, the mid-ocean ridges or hydrothermal vents.

  • And Ballard started a lab at Woods Hole

  • called the Deep Submergence Lab which

  • spent the '80s kind of developing

  • this particular system.

  • Now interestingly, before the Jason robot came online,

  • we had just the sled Argo.

  • And that's the thing that actually discovered the Titanic

  • wreck in 1985.

  • In 1986 though, the Woods Hole groups

  • together went back and revisited the Titanic wreck

  • and sent this little kind of proto Jason

  • called Jason Junior.

  • It connected to Alvin, the three man submarine, little robot,

  • human vehicle, remote robot, down

  • the grand staircase of the Titanic,

  • took tremendous pictures.

  • It was here on the cover of the "National Geographic" magazine.

  • But there was a great deal of tension,

  • in some cases literally tension pulling

  • on the cable between the folks who

  • operated the manned submersible, Alvin,

  • and the folks who felt that remote presence was

  • the way of the future.

  • In fact, I use these two covers because the "Geographic"

  • article, which Ballard wrote, shows

  • only the little remote robot, doesn't deal with Alvin at all.

  • And the "Time" magazine cover-- which actually, Ken Marschall

  • lives out here in LA, did this painting off-- shows

  • only Alvin.

  • And this is why a lot of people still to this day think that

  • Alvin discovered the Titanic, which it didn't.

  • But what you see here is a sense that actually

  • the human and the robotic vehicles

  • are evolving together and kind of playing off

  • each other in the evolution.

  • This image, which is a family tree of undersea vehicles

  • from Woods Hole, kind of implies by its sort

  • of Darwinian ascent nature that you

  • have the manned submersible here, Alvin,

  • then the remote vehicles Jason, then

  • this whole panoply of autonomous vehicles.

  • I worked a little bit on ABE.

  • I talk about it in the book.

  • The REMUS vehicles are the ones that

  • discovered the wreck of the Air France crash

  • that I also discuss in the book, moving

  • toward this level of higher and higher autonomy.

  • But at the top of this are vehicles

  • that are called hybrid ROVs.

  • They are sometimes remote, and they're sometimes autonomous.

  • And so the real evolution is actually,

  • if anything, a kind of convergence

  • and a blurring of the lines between what's

  • a human operated vehicle, what's a remote vehicle,

  • and what's an autonomous vehicle.

  • These are a few different images from some of my colleagues

  • at Woods Hole about what the kind

  • of currently engineered future of oceanography looks like.

  • Here you have what we used to think of as an autonomous vehicle.

  • Send it off the ship.

  • It'll go down like an autonomous robot.

  • Run a bunch of track lines.

  • Run some algorithms.

  • Do some mapping and come back.

  • But of course, as those vehicles evolved,

  • they developed acoustic technologies

  • to stay in touch with what's on the ship.

  • And every expedition you go on is really

  • a collaboration between a manned vehicle, a ship,

  • and an autonomous vehicle.

  • And you send the robotic system out in the morning.

  • It maybe comes back a day or two later.

  • Exchanges energy for information.

  • Talks to its human overlords a little bit.

  • Goes back out, does it again.

  • And you have this kind of constantly going out and coming

  • back.

  • And the autonomy is actually pretty well bounded in time.

  • And there's always, again, this kind of wrapper of, go out,

  • do this, come back.

  • Sometimes they just run straight track lines.

  • But simply getting the vehicle in six miles of water

  • to go out and do something and return

  • is an amazing feat of technology.

  • It requires all kinds of subtlety

  • in the software, and algorithms, and system design, and whatnot.

  • But the challenge is to keep it under control.
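The "wrapper" of human control around these vehicles can be pictured as a simple outer loop: humans plan, the vehicle executes a bounded mission, and the results shape the next day's plan. The toy simulation below is only a sketch of that cycle; the class and names are invented for illustration.

```python
class SurveyVehicle:
    """Toy stand-in for an autonomous undersea survey vehicle."""

    def run_mission(self, tracklines: list) -> list:
        # Autonomy bounded in time: execute the human-defined plan, then return.
        return [f"map segment for {line}" for line in tracklines]

def run_expedition(days: int) -> None:
    vehicle = SurveyVehicle()
    for day in range(days):
        plan = [f"trackline-{day}-{i}" for i in range(3)]  # humans plan each morning
        data = vehicle.run_mission(plan)   # go out, do this, come back
        # Alongside the ship: exchange energy for information, then replan.
        print(f"day {day}: recovered {len(data)} segments; recharging")

run_expedition(2)
```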

  • I'll read you a little passage from the book

  • about my experience with the remote vehicles

  • and how it transformed the presence that we experienced.

  • "On a summer day in 1988, two years after the Titanic

  • exploration, I walked down the stairs of an old green aluminum

  • building in Woods Hole called the Deep Submergence

  • Laboratory.

  • I was there looking for a job straight out of college,

  • and I was there to meet a man named Skip Marquet, one

  • of the original designers of Alvin

  • and the co-founder of the Deep Submergence Lab

  • with Bob Ballard.

  • Touring around this lab, I saw exotic robots, heavy pressure

  • housings, and other things foreign to me.

  • 'This thing's been inside the Titanic,' Marquet said as he

  • pointed out Jason Junior, opened up on a lab bench with

  • electronic guts spilled out.

  • But inside the robots and surrounding

  • them were things that were deeply

  • familiar to me-- electronics, microprocessors,

  • software manuals.

  • And in a moment, I was hooked.

  • I could bring my interests in electronics and my engineering

  • degree to this unusual alien adventure.

  • I was eager to travel the world doing engineering

  • and build electronics that would find

  • their way into extreme environments,

  • and not have to report to work in a cubicle.

  • I joined the Deep Submergence Lab as a junior engineer.

  • What was it like then to operate a remote robot

  • in the deep ocean?

  • First of all, we should qualify the term robot.

  • The term was commonly used for the vehicle,

  • but there was very little resembling autonomy about it.

  • In fact, it was something of a blank slate technically

  • speaking.

  • There was relatively little computing power on board,

  • only enough to flick the lights and instruments on and off,

  • turn on the thrusters, and do some local housekeeping.

  • The video and most of the instrumentation signals

  • went straight up the optical fibers

  • and were multiplexed through the computer

  • to go to the top for processing.

  • Even when Jason was doing something automatic

  • such as holding its own constant depth,

  • the feedback loops were usually closed up on the surface

  • through a computer on the ship.

  • Jason's control room though consisted

  • of five or six 27 inch video monitors mounted on the wall,

  • displaying imagery from the multiple cameras

  • and navigation data.

  • A series of control stations were

  • arranged beneath them, one for the pilot, one for an engineer.

  • A data logger changed the videotapes.

  • This left plenty of room in the van

  • for a chief scientist who usually sat behind the pilot

  • to direct the dive.

  • Then 10 or 20 other people, other scientists, engineers,

  • graduate students, and film teams from the media.

  • When all was stable, the whole control van

  • would become concentrated on the seafloor.

  • 'Now that's the world of telepresence,' the pilot Martin

  • Bowen said.

  • 'That's where I forget about my body.

  • And I project myself into the ocean floor,

  • and have to make that vehicle dance.'

  • The pilots learned to narrow their attention

  • to hear just a few voices.

  • If the compass didn't look right or the surface weather

  • was flaring up, the pilot could ask the navigator about it.

  • I tended to stand the navigation watch,

  • and I learned to anticipate much of what

  • the pilot needed and when he needed it--

  • move the ship a bit, change the navigation quality, watch out

  • you're getting a little bit close to this.

  • 'I just started mapping things in my own head,' Martin Bowen

  • said. 'What the obstacles are, how I need to fly, how low.

  • I have the advantage of being surrounded by people who are

  • also helping to take care.'

  • So our presence on the seafloor was deeply related to what

  • was going on in that darkened control room.

  • As Jason pinged and photographed its way around shipwreck sites,

  • or hydrothermal vents, or other places,

  • the group in the control room was in constant conversation,

  • observing, questioning, speculating on what

  • the cameras and sensors showed.

  • This constant real time seminar about the ongoing exploration,

  • combined with the beautiful haunting

  • images we were seeing on the monitors,

  • is what transported us into this other world.

  • It was the most fundamental surprising difference

  • from Alvin, where even though you were physically

  • near that other world, you only had

  • two people plus the pilot talking to each other."

  • So the rest of the book goes through a series

  • of similar experiences in other environments.

  • I mentioned the undersea environment

  • with both remote and autonomous vehicles.

  • There's a chapter on space where I

  • talk about the groups who use the Mars Exploration

  • rovers, not too far from here, up at JPL, and how there, even

  • though they're experiencing a 20 minute time delay from when

  • they get their signals to when they originated on Mars, a much

  • longer time delay to issue a command

  • and see a response, they still, over time,

  • develop an immensely intense feeling of presence

  • on the red planet.

  • And they feel like they are present in the landscape.

  • The robots up there are not doing any geology on their own.

  • They're remote controlled from the ground.

  • They have certain autonomous features

  • that are used at certain times.

  • But they build a picture on the ground, in this case

  • as much through large scale paper maps as through computer

  • screens that the group on the ground

  • can kind of explore together.

  • Similar stories about the repairs

  • of the Hubble Space Telescope.

  • There were five missions over a 15 year period

  • that conducted those repairs.

  • Beautiful choreographed dance between humans, robots,

  • all networked through these kind of extended networks.

  • There's a chapter on the Predator, the remotely piloted

  • aircraft that the Air Force uses both

  • for persistent surveillance, real time video,

  • in Iraq, and Afghanistan, and elsewhere,

  • and also for firing missiles and shooting people.

  • And what you found there is very similar to what we experienced

  • in the Jason control room.

  • The operators of Predator

  • are physically removed from the battlefield.

  • But their experience of warfare is actually very, very intense,

  • partly because they are doing what

  • aircraft pilots who are literally above the battlefield

  • don't do.

  • The Predator pilots will circle for hours.

  • They have a camera on a single farmhouse,

  • or compound, or social organization.

  • And the social relationships

  • they form through the networks-- sometimes with friendly forces

  • on the ground who they're trying to support,

  • other times with people who they're

  • surveilling and potentially attacking; they're

  • still social relationships-- give them

  • a very intense feeling of presence.

  • And in fact, the Predator operators

  • experience post-traumatic stress disorder

  • at about the same rate that the Air Force airplane pilots do,

  • which is remarkable considering they're physically not at risk.

  • And it's a very, very complicated conversation--

  • I'm sure you've read about in the media-- going on

  • within the Air Force about, are these people warriors?

  • Do they deserve medals?

  • Do they deserve combat pay?

  • There's very high rates of burnout.

  • It's a very kind of unsettling-- I

  • hate to use this word-- but disruptive experience

  • for the people involved.

  • Similar story aboard airliners where

  • I talk about how, in some airlines, pilots--

  • and the airlines themselves-- are actually backing away

  • from highly automated systems like autoland

  • which will land an airplane without the pilot touching

  • the controls, but is a fairly brittle, kind of high stress

  • situation, in favor of things like heads up displays where

  • in any kind of weather, in any kind of situation,

  • the pilot always does the same procedures

  • and is given a flight path vector and a guidance

  • cue overlaid on the actual runway

  • or where the runway will be if he can't see it for weather,

  • and lands that way.

  • Very much dependent on software, very highly automated,

  • integrated technological system.

  • But the perception and the motor coordination

  • are still going through the human body.

  • And actually interestingly, the space shuttle

  • had an autoland system that our taxpayer dollars all paid for.

  • It was never used.

  • The pilots, just like the lunar landing pilots,

  • didn't like the idea, and they never

  • even tested the autoland system on the shuttle.

  • But they had the same guidance cues and flight path vectors

  • in their heads up display, which were actually

  • generated by the autoland.

  • But instead of the autoland operating

  • the flight controls directly, it sort of went

  • through the cognition and the body

  • of the commander of the mission, who flew it

  • by joystick following those cues.

  • So one of the things you see is all autonomous systems--

  • this is actually a quote from a DOD report in 2012.

  • "All autonomous systems are joint human machine cognitive

  • systems.

  • There are no fully autonomous systems

  • just as there are no fully autonomous sailors, soldiers,

  • airmen, or Marines."

  • And you can extend that into the civilian world

  • as, there are no autonomous pilots, drivers, other things.

  • Everybody's embedded in a network.

  • There's always these kinds of trades of autonomy,

  • particularly depending on things like spatial positioning,

  • bandwidth, what kind of networks you're in.

  • Those things vary over time.

  • So I want to read you a more modern story

  • from the world of oceanography about autonomous vehicles

  • to show you how this plays out.

  • "James Kinsey, a young engineering scientist

  • at the Deep Submergence Lab 20 years

  • after I started working there, came to the job

  • with great plans for the autonomy

  • he planned to bestow on his vehicles.

  • He began to build up probabilistic models of how

  • the hydrothermal vent plumes propagate through the ocean,

  • and to try to instruct the vehicles to follow

  • minute detections from their sensors down to the vents.

  • Over time, however, Kinsey realized

  • that trying to imbue that much autonomy into the vehicle

  • was likely a problem from the beginning.

  • Because of the nature of oceanographic exploration,

  • the tasks are poorly defined, and the environment

  • is always changing.

  • Anything programmed into the vehicle

  • constituted assumptions, models about how the world worked

  • that might not be valid in a new context

  • where the people could see more.

  • 'I think it focused on the wrong aspects of autonomy

  • perhaps,' Kinsey told me.

  • 'You're requiring the vehicle to understand a lot of context

  • that may not be available to it.'"

  • Kinsey had his own version of the surprise

  • that I had talked about in a previous chapter, experienced

  • by space engineers as the abstractions

  • of autonomy from a kind of design and research phase

  • met real applications.

  • "One of the problems with having a vehicle that

  • makes its own decisions, Kinsey realized, is quote,

  • 'there's a certain amount of opaqueness to what it's doing.

  • Even if you're monitoring it, you say gee,

  • it just suddenly wandered off to the southwest.

  • Is that a malfunction, or is it a part of the decision tree?'

  • Operating in the deep ocean is expensive,

  • and autonomous vehicles, even though they're unmanned,

  • are far from disposable.

  • Kinsey observes, 'people like to know where their stuff is,

  • especially when they pay a lot of money for it.'

  • Overall in the ocean, the lines between

  • human, remote, and autonomous vehicles are blurring.

  • Engineers envision now an ocean with many vehicles working

  • in concert.

  • Some may contain people.

  • Others will be remote or autonomous.

  • All will be capable of shifting modes at any given time.

  • Alvin recently had an upgrade.

  • And the new software was actually originally designed

  • for autonomous vehicles.

  • The challenges are to coordinate all these machines,

  • keep the humans informed, and ensure

  • that the robots' actions reflect human intentions.

  • Some will operate through high bandwidth channels

  • like optical fibers, others through more constrained

  • channels.

  • Some will circle close to a node to flash up their data,

  • then slide back into the abyss.

  • Each will do as it's told and make some decisions

  • on its own, according to structures imbued

  • by its human programmers.

  • In this world undersea, but also on land,

  • we can imagine autonomy as a strangely shaped three

  • dimensional cloud, with vehicles constantly

  • moving back and forth across its boundaries.

  • Now imagine that one of these vehicles is your car,

  • and the 3D cloud of autonomy is your neighborhood.

  • At certain times in certain places,

  • the car has some kinds of autonomy--

  • stay within a highway lane or drive in a high speed convoy.

  • At other times, such as if you're far from a cell tower

  • or driving in snow and the ice obscures the car's sensors,

  • the autonomous capabilities are reduced,

  • and the driver must be more involved.

  • You drive in and out of the cloud,

  • delicately switching in and out of automatic modes."

  • So what does this mean for engineering?

  • I'm an engineer as well as a scholar,

  • and I like to think that all of this work

  • tells us something about how to build autonomous systems.

  • One of them is this.

  • I've interviewed a lot of airline pilots.

  • And I've asked every single one of them,

  • have you ever asked this question-- "what's it doing now?"

  • And every single one of them says yes.

  • In fact, one of them said, "oh, that's

  • only what the new guys say, what's it doing now.

  • The experienced people say, oh it does that sometimes."

  • And so I've been involved with a partner of mine,

  • a company called Aurora Flight Sciences,

  • building full size autonomous vehicles for research projects.

  • And one of them was this program called

  • AACUS, which was a full size autonomous cargo

  • helicopter designed to deliver cargo to some remote area

  • without putting a human pilot at risk.

  • But again, there's always a wrapper

  • of human activity around any autonomous system.

  • And in this case, part of that wrapper

  • was the people who needed the supplies, right?

  • There's no reason to go somewhere

  • if you don't have something to bring someone.

  • And so there's a marine landing support specialist

  • on the ground.

  • And we were required by ONR that this person is not a pilot,

  • should not have a kind of flight control station,

  • and should be able to work with the system with only

  • 15 minutes of training.

  • So we talked to these folks and we said, how would you like it

  • if we could build an autonomous cargo helicopter that

  • has a lidar that could come in and find a landing zone,

  • and land where you want your supplies?

  • And they all, all of them had experience

  • in Iraq and Afghanistan.

  • They said, oh my goodness, no way, horrifying,

  • terrifying to have a 10 ton Black Hawk

  • helicopter bearing down on you at 100 knots, not fun.

  • They said, you have no idea how scary it

  • is to be in a combat zone and see unmanned aircraft flying

  • around when you don't know whose they are,

  • what their intentions are, what they're doing even.

  • Even if they're friendly, they're scary.

  • So we had to build a system that had

  • a kind of situated or embedded autonomy

  • where the people on the ground could at least

  • have some basic kind of state control of the aircraft.

  • The simplest one is if it's coming in

  • and it actually has a lidar on it,

  • you can just see some of these operational modes here.

  • Here's the helicopter scanning the terrain

  • and choosing a landing zone.

  • But maybe the landing zone the helicopter chooses

  • is not one that's acceptable to the person on the ground,

  • or vice versa.

  • So you have about a minute to do a kind of little negotiation.

  • Do this, do that.

  • And the aircraft has to be able to respond in ways

  • that the human finds predictable, understandable.

  • And that basically means relatively simple.

  • What we ended up with in this case was actually

  • designing the kind of core state machine for the autonomous

  • system-- in this case, it was in MATLAB Stateflow--

  • to have these kind of macro states of autonomy

  • that were very simple, very well understood, very predictable,

  • and then kind of autocoding both the user

  • interface and the core autonomy mission manager out of that

  • to give the system a very kind of predictable look

  • and feel to the person.

  • There's a few things they could command the helicopter to do.

  • Go away, abort, circle for a little, choose another zone.

  • And there were a lot of complex algorithms

  • buried in each of those states.

  • But the states had to be relatively

  • simple and straightforward.
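As a loose illustration of that design, here is what a handful of predictable macro states with a small command vocabulary might look like. The real system was autocoded from a MATLAB Stateflow model; this Python sketch, and its state and command names, are hypothetical.

```python
from enum import Enum, auto

class State(Enum):
    ENROUTE = auto()
    SCANNING = auto()      # complex lidar and zone-selection algorithms live inside
    APPROACHING = auto()
    LOITERING = auto()     # "circle for a little"
    WAVED_OFF = auto()     # "go away"
    LANDED = auto()

# Only simple, predictable transitions are exposed to the person on the ground.
TRANSITIONS = {
    (State.APPROACHING, "abort"): State.WAVED_OFF,
    (State.APPROACHING, "circle"): State.LOITERING,
    (State.APPROACHING, "new_zone"): State.SCANNING,
    (State.SCANNING, "zone_ok"): State.APPROACHING,
    (State.LOITERING, "resume"): State.APPROACHING,
}

def command(state: State, cmd: str) -> State:
    # Unknown commands leave the state unchanged, so behavior stays predictable.
    return TRANSITIONS.get((state, cmd), state)

# The one-minute negotiation: the ground user rejects the chosen landing zone.
state = command(State.APPROACHING, "new_zone")   # back to SCANNING
```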

  • The fly off for that vehicle was in 2014.

  • It went very well.

  • We beat Lockheed Martin at that game.

  • That program is now progressing into other phases.

  • We are very proud of our work on it.

  • And yet, when it was reported in the "Wall Street Journal,"

  • this was the headline-- "Navy Drones

  • With a Mind of Their Own."

  • And so there's still a great kind

  • of public narrative of, Star Wars-y,

  • 20th century science fiction, the drones

  • are coming to kill us kinds of things,

  • despite the fact that people who are closest to it

  • on the front lines, that's the last thing they want.

  • I'll just briefly close with another program

  • that we're working on.

  • This is a DARPA program called ALIAS

  • where the assignment is really to build a kind of robot that

  • sits in the co-pilot seat-- not really a robotic co-pilot

  • because you actually end up changing the roles of both

  • the pilot and the co-pilot.

  • And then can either convert an existing aircraft

  • into a remotely piloted aircraft,

  • or can kind of help a single pilot

  • and kind of-- so really, one of the terms people

  • use these days is a co-robot that allows support and help,

  • but still keeps the human in the loop.

  • Very challenging program from a bunch of different points

  • of view, but one of them that I think is the most worthwhile

  • is learning how to build a system that's

  • truly collaborative and allows the pilot

  • to sort of go through checklists and different procedures.

  • This program was written up by John Markoff in "The New York

  • Times" this summer, "A Machine in the Co-pilot's Seat."

  • There's a kind of prototype of it that Aurora has already made

  • called Centaur, which is an optionally piloted aircraft.

  • It can be flown by a human pilot like a traditional aircraft.

  • It can be remotely operated from the ground station.

  • Or the mode that I think is the coolest,

  • you can fly the ground station from the backseat.

  • And so you're like a UAV operator,

  • but you're sitting in the seat.

  • And again, you see this convergence

  • of remote, human occupied, and autonomous vehicles.

  • So what can we conclude from all this?

  • I'll read a little bit from the conclusion.

  • "The fully autonomous land robot making its way

  • through a landscape under computer control

  • remains an attractive idea for engineers.

  • Perceiving the environment, classifying, matching it

  • up to models and prior experience,

  • and making plans to move forward resemble our daily acts

  • of living.

  • Uncertainties in the world and within the machines,

  • the unexpected that will always foil our prior assumptions,

  • make the problem not only harder, but even more

  • interesting.

  • Thinking these problems through, aided

  • by the medium of technology is a noble effort, engineering

  • at its philosophical best.

  • How do we observe, decide, and act in the world?

  • How should we live with uncertainty?

  • But we should not confuse technical thought experiments

  • with what's useful in a human context.

  • When lives and resources are at stake, time and time

  • again, over decades of examples given in this book,

  • from the deep ocean to the outer planets,

  • we have reined in the autonomy.

  • It's not a story about progress, that one day we'll

  • just advance enough to get it right,

  • but a story about the move from laboratory and R&D

  • into the field.

  • That transition tempers the autonomy,

  • whether the task is to respond to interactions

  • and return scientific data, or to protect and defend

  • human life.

  • In retrospect, Neil Armstrong's last minute manual

  • intervention, turning off the automation on his moon landing,

  • signaled the limits of the 20th century vision of full autonomy

  • and foretold the slow advent of potent human collaboration

  • and presence.

  • The lone autonomous drone is as much

  • an anachronism as the lone unconnected computer.

  • The challenges of robotics in the 21st century

  • are those of situating machines within

  • human and social systems.

  • They're challenges of relationships."

  • So I'll just close with a little startup

  • that I've been working on called Humatics which

  • aims to take what we've done in those two projects for ONR

  • and DARPA and bring them to a larger world of robotics.

  • How do we enable robotics and autonomous systems

  • to work within human environments?

  • And how do we make autonomy transparent, trusted,

  • and by extension, safe and useful for people

  • where other people are around?

  • I often say, if you look at the full autonomy

  • problem in aviation, build a fully unmanned aircraft

  • to take off, fly somewhere, and land,

  • we solved that problem 20 years ago.

  • We know how to do that.

  • To do that same task from an airport

  • that other people are using, through air

  • space that other people are flying through, over places

  • where people are living, landing at another airport

  • that other people are using, we're

  • just beginning to think about that problem.

  • That's a very challenging problem

  • that we really don't even have answers to yet.

  • So I'll just leave it at that.

  • And there's a bunch of books out there.

  • And we have some time for questions.

  • Thank you.

  • [APPLAUSE]

  • AUDIENCE: So I'm really interested in the failure modes

  • between, as the autonomy is transitioning

  • between-- I think the Air France example is a great one.

  • So you put all these safety modes,

  • as we're now putting them in cars, to prevent crashes.

  • And statistically, they're safer.

  • But then you start to see these really catastrophic failures

  • when they don't work.

  • So how do we prevent those?

  • How do we keep the pilots having context

  • when the autopilot says, I need to shut off

  • and you're in control?

  • Like, it's in your lap--

  • DAVID MINDELL: Great question.

  • Obviously, really essential issue.

  • I've talked a lot about the Air France crash

  • because it's sort of the classic hand off tragedy.

  • First of all, you can look into that story

  • and you can see that there are things that the airliner could

  • have been programmed to do to make the transition smoother.

  • For example, it lost its air data from the pitot tubes

  • when they iced over.

  • And at that point, it just checked out,

  • said, OK, no more fly by wire at all.

  • You're in a manual reversion mode.

  • Here you go.

  • Right?

  • Again, if you think about engineering the relationship--

  • which wasn't really done when that airliner was designed--

  • you can think about a lot of alternatives.

  • Anybody who programs flight controls for UAVs

  • will tell you that you can fly perfectly well

  • without pitot tubes, right?

  • You have a sort of basic internal flight model.

  • You knew you were going at a certain rate.

  • It's perfectly capable of holding

  • the airplane stable for a while, quite a while

  • actually, or descending to a safe altitude

  • while the crew can sort of--

  • Secondly, the interfaces on airliners

  • are really not all that well designed by modern standards.

  • And they are designed to kind of shift liability

  • as much as they are to provide insight into what's

  • going on inside the system.

  • Thirdly, and this has come up quite

  • a bit since that crash and a couple other crashes as well,

  • the pilot's manual skills had clearly degraded.

  • I mean, flying an airliner at a very high altitude,

  • often autopilot, is not an easy task.

  • The air is very thin.

  • The line between stalling and overspeeding

  • is a very narrow line.

  • It's not something that's typically done.

  • At the same time, every beginner pilot is taught,

  • when the airplane is stalling, don't pull the stick back.

  • And in the Air France case, one guy pulled it back

  • and one guy pushed it forward.

  • So the hand off is challenging.

  • There are a lot of places that it's done well.

  • There are other places that it's done poorly.

  • There's certainly room for innovation there.

  • In fact, I think, again, if we see the perfect five

  • as a goal of the technology, then

  • we should focus on improving the hand offs in lots

  • of different directions.

  • One of the goals of the book is to at least point out

  • places that it's happened in these other cases that

  • are useful examples for people who are working

  • on other technologies today.

  • So absolutely essential thing.
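The alternative Mindell gestures at in his first point, reprogramming the transition rather than dumping the crew straight into manual reversion, is essentially a graceful degradation ladder. A hypothetical sketch of such a ladder for an air-data failure (the mode names and logic are invented, not Airbus's actual control laws):

```python
def control_mode(airspeed_valid: bool, inertial_valid: bool) -> str:
    """Hypothetical degradation ladder: step down one level at a time."""
    if airspeed_valid:
        return "normal_fly_by_wire"   # full envelope protection
    if inertial_valid:
        # No air data needed: hold attitude and power, descend to a safe
        # altitude, and give the crew time to sort things out.
        return "attitude_hold"
    return "manual_reversion"         # last resort, handed over explicitly

# Iced-over pitot tubes: airspeed is gone, but the inertial model is intact.
mode = control_mode(airspeed_valid=False, inertial_valid=True)   # "attitude_hold"
```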

  • AUDIENCE: Thanks.

  • AUDIENCE: So it's not a huge tactical leap

  • to go from the assisted helicopter,

  • for instance, landing under human supervision, to adding

  • forward looking infrared and a minigun

  • to start blasting anything with heat.

  • Are there things that can be done

  • to prevent that, or prevent the evil genius,

  • or some little offshoot of the CIA from developing

  • these technologies that really, technically, aren't

  • that difficult after these other problems are solved?

  • DAVID MINDELL: Right, that's a great question too.

  • The current US military official policy is we don't do that.

  • Weapons are always released under some kind

  • of human approval process.

  • And I think, if anything, I would almost

  • guess that that policy will become stronger because there's

  • a fear among general military professionals

  • like, what do we actually do?

  • If actually joysticking the aircraft maybe

  • isn't our great skill anymore, and we're

  • getting kind of pushed around in the networks, well,

  • deciding who to kill and when, that's an important thing.

  • That said, there's a long history of autonomy in weapons.

  • And a guided missile from the 1950s

  • is, again, a kind of situated autonomy where a human aims

  • a device and releases it.

  • And it has some sort of automatic function.

  • And the debates are really about what

  • is an appropriate and acceptable line for those things.

  • Landmines are autonomous vehicles, right?

  • And they're autonomous weapons without a lot

  • of discrimination.

  • And a lot of people feel they're beyond an ethical line that's

  • acceptable.

  • So some of these issues have long roots in other questions

  • about weapons.

  • I personally feel you're a lot more likely to get killed

  • by a poorly designed robot than you are

  • to get killed by an evil robot.

  • And that kind of poor design and poor situating

  • is really a much more immediate concern

  • for us, a la Air France or other things,

  • than the evil robots coming to take over.

  • AUDIENCE: I think this may be a little bit of a follow up

  • on the Air France, except on the ground.

  • There's a lot of work and a lot of press

  • lately about the development of these autonomous vehicles,

  • the self driving cars.

  • And I was wondering if you have any comments or ideas of how

  • you think that's going to go, or how

  • you think it should develop.

  • DAVID MINDELL: Yeah, great question.

  • And I do discuss this in the book.

  • I believe that we should be able to improve

  • the safety of automobiles, and even radically change

  • the driver experience using all the good stuff that you've

  • read about-- machine vision, and lidars,

  • other kinds of advanced sensors, path planning.

  • But I think it should be organized

  • around keeping the human deeply involved in the experience.

  • Reducing workload, yes.

  • Allowing you to text, yes.

  • Allowing relief of boring activities

  • like sitting in traffic jams, by all means.

  • Sleeping in the trunk, no.

  • And sleeping in the trunk with your kids in the backseat,

  • again, there are 30 or 40 examples

  • about how that's really not a safe way

  • to think about a system.

  • It's a very interesting, very important debate as you know.

  • In the last week, since the book came out,

  • Tesla released one of its early autopilot features,

  • and a lot of people feel it was released

  • without the proper-- beta testing

  • is not something you really do with your users

  • when they're going down the highway at 90 miles an hour.

  • It's very different from software in other realms.

  • And there is some interesting stuff popping up

  • about various ways that there are problems potentially

  • with that algorithm.

  • I think that particular issue has more

  • to do with software testing and release policy

  • than it does with what's the right way

  • to automate the vehicle.

  • But it's still very much on people's minds.

  • And I know there are people in the driverless car

  • community who are horrified.

  • I think they're right that if there's

  • an accident in the next week or two

  • with this early release of this software,

  • it will set back a lot of people's idea of progress

  • for a long time.

  • That said, I think it's amazing and exciting to think

  • about, how can we use technology and driving to expand

  • your experience of the world, and bring you more

  • richly out into the environment that you're in, whether it's

  • your situatedness vis a vis other people,

  • or the geographic environment, or other parts of the driving

  • task, while at the same time relieving you of-- I'm a pilot

  • and I fly long distances in my plane.

  • But I can mostly do it like this where my hands are crossed

  • and I'm looking.

  • And things don't change all that fast

  • unless there's a small number of emergencies.

  • Whereas when I drive, I'm, you know, got

  • my shoulders tensed up.

  • And a car or a kid could run out in front of you at any moment.

  • That's a much more kind of tense experience.

  • I think there's every reason to believe

  • we can relieve that kind of hyperactive nervousness

  • of driving in ways that will reduce error, reduce

  • fatigue, and reduce workload.

  • But again, I have a lot of trouble imagining

  • sleeping in the back of the trunk

  • while the car is barrelling down the highway.

  • AUDIENCE: Also, what I'm wondering is,

  • is there likely a human factors part?

  • Because one of the issues with some of the autopilots when

  • they kick out is that the highly trained pilots

  • have trouble figuring out how to move into the mode of flying.

  • And is that going to be a problem

  • particularly with drivers?

  • DAVID MINDELL: You get manual skill degradation

  • if you don't practice it enough.

  • And that's something to certainly consider.

  • Although again, when you're going in a fly by wire world,

  • the manual skills are things that you can kind of construct,

  • technically.

  • There's nothing that says that this

  • has to be how you drive a car.

  • People have been trying these things for decades.

  • But two joysticks down near your waist are a possible way

  • you could drive the car.

  • That's actually how they flew the lunar module.

  • And interfaces need to be designed well.

  • There's always challenges with human factors issues.

  • The regulatory environment is challenging.

  • But it's very hard to picture a fully autonomous car

  • that doesn't have a red stop button, or at least

  • some kind of intervention.

  • And if you think about what that constitutes,

  • it constitutes the saying that the driving task

  • has been understood completely, in some prior space and time,

  • by other people.

  • And that you, being in the environment

  • at the particular time, have nothing to add to the story.

  • That's just something we know is not true, because being there,

  • and being the most immediate person, and the one

  • with your rear end on the line, provides you with information

  • that the system may benefit from.

  • And so there are other ways to think

  • about how you might do that.

  • AUDIENCE: Hi.

  • So if I was half my age and entering college,

  • what field of study should I go into if I'm

  • interested in robotics and this field?

  • Is it more software these days, or is there still

  • a lot of hardware, or a combination?

  • DAVID MINDELL: That's a great question.

  • Obviously the software is critical, and exciting,

  • and interesting.

  • And at the same time, all of these problems

  • actually point to improvements in sensors

  • that are needed that you might not even

  • see if you weren't thinking about them in this way.

  • That's one of the things Humatics

  • is going to do: build new kinds of sensors that enable

  • autonomous systems to work in closer

  • proximity to human environments.

  • I'm an electrical engineer.

  • I like building embedded systems and I do some of both.

  • But I do think that the hardware innovation is

  • equally interesting to the software innovation.

  • I think on just the autonomous car side, as much at Google

  • as anywhere else, there's been amazing improvement in lidar

  • and bringing the cost down, simply because of the needs

  • that driving has presented.

  • And lidar has limits.

  • So there are other systems that are going to come out as well.

  • I don't know that it breaks down either hardware or software

  • per se.

  • I do think that all of those innovations kind of need

  • to be coupled with these higher level system configurations.

  • And one of the things that was really important to me

  • in that AACUS helicopter program was that what I'm talking about

  • is not just an interface issue.

  • Yes, we need better interfaces.

  • Yes we need good interfaces.

  • But no amount of good interface is going to

  • put a Band-Aid on a system that behaves unpredictably

  • or in ways that people can't relate to.

  • It's really about how you design the system

  • and kind of conceptualize the core autonomy.

  • AUDIENCE: So you were talking about the self-piloting cars

  • and how you would never be comfortable without having

  • a big red stop button in your car.

  • And that would be very nice.

  • I like that button.

  • But what I really want is that button in everybody else's car.

  • DAVID MINDELL: That you can stop?

  • AUDIENCE: That I can stop.

  • So if a car comes barreling down the road with some crazy

  • driver, and I cannot react in a way that will save me inside

  • of my vehicle, for that vehicle to stop.

  • That's really my greater fear when I drive around

  • Los Angeles-- not the software or even my own reactions,

  • but just the crazy things that sometimes happen.

  • DAVID MINDELL: Yep, that's certainly true anyway.

  • AUDIENCE: In a way, all the time.

  • DAVID MINDELL: No matter what car

  • you make, with no matter what technology,

  • at some level you're only going to eliminate

  • half the accidents, which are the ones where

  • you run into other people.

  • A lot of accidents are people running into things, actually.

  • AUDIENCE: Well like, you can't stop the wall.

  • DAVID MINDELL: Well another way to say it is you

  • can build a car that won't hit anything.

  • And if you build a car that won't hit anything,

  • that doesn't mean it needs to be fully autonomous.

  • It can sort of do collision avoidance

  • or collision prevention.

  • Those are probably never going to be perfect.

  • My Volvo right now will slam on the brakes

  • if I'm about to rear end someone.

  • AUDIENCE: Sure.

  • But my question is, one thing the technology can really do

  • is it has these communication protocols that

  • are much faster than the ways humans can communicate.

  • So there is a way, using software, using robots,

  • to make the driving experience in this case considerably

  • safer by ensuring other people won't hit you.

  • And that's something that I think--

  • I'm sorry to interrupt you-- is something that is really

  • absent from this discussion right now-- all this

  • safety potential of autonomous decision making.

  • So how do you think we can change that conversation?

  • DAVID MINDELL: Well I think that's a great point.

  • And again, I think you have to think about the autonomy as

  • situated in a human setting.

  • That means other cars, other drivers with their pluses

  • and minuses as operators, and thinking

  • about those relationships, whether it's

  • V to V communications, or radar sensors, or ultrasonic sensors,

  • or whatever, are all things that you do need to think about.

  • I doubt we'll achieve perfection in any of them.

  • But I think there's some low hanging fruit probably

  • that can be addressed.

  • So by all means.

  • AUDIENCE: Thank you.

  • DAVID MINDELL: Might you like to know

  • if the person who's behind you on the highway

  • has a history of drunk driving convictions?

  • Yeah, I would like to know that.

  • But there are privacy and equity issues

  • around that kind of thing.

  • AUDIENCE: So I have a quick question that's

  • related to the hand off issue.

  • It seems like to make the hand off, whether it's planned

  • or emergency safe, you're going to have

  • to build up some kind of model of whether to believe and trust

  • the user, the actual human input or not, which

  • strikes me as problematic.

  • Like in a self driving car situation,

  • if something strange happens, hand off occurs.

  • User is maybe not paying attention

  • and does something like slam on the brakes in an unsafe way

  • or swerve in an unsafe way, at a certain point

  • the machine is going to have to decide, no, I don't trust you.

  • Is that problematic?

  • DAVID MINDELL: It is actually.

  • It's an old question.

  • They asked this question about the Apollo computer.

  • They asked it about an Airbus airliner.

  • Should the computer allow the person

  • to do something dangerous?

  • It's a great question, really interesting one.

  • AUDIENCE: And presuming that you know the difference

  • between dangerous and not--

  • DAVID MINDELL: Well exactly.

  • I mean, that's a human judgment.

  • All you're really saying in that case

  • is that I'm shifting the judgment of what's

  • dangerous from the person in the environment

  • to a group of people sitting around the table

  • six months before.

  • And there's good reasons to do that.

  • People sitting around the table are probably not fatigued.

  • They have a lot more time to deliberate.

  • There's collectively more brain power, so on and so forth.

  • And at the same time, the person who's in there, however

  • drunk, or tired, or silly they may be,

  • is seeing things that you couldn't see otherwise.

  • That's just a judgment call really that needs to be made,

  • that is being made every time a parameter is set,

  • a threshold is set, a configuration

  • file-- I actually have a story in the book

  • about the configuration file for one

  • of the DARPA Grand Challenge cars, which

  • is a source of a lot of-- it was 1,300 lines long.

  • There were 1,300 parameters that people had to go in

  • and manually set to make that thing work.

  • I'm curious what the modern cars are for that.

  • And that's fine.

  • And obviously you try to eliminate them and make

  • them more automated.

  • But there's still a lot of tweaking and setting

  • that goes on inside of any kind of AI system these days.

  • And those judgments should be transparent,

  • and it should be understood who's making them, why, for what reason.

  • That gets as simple as, when you drive to the grocery store,

  • do you want to prioritize speed of travel,

  • fuel efficiency, or safety?

  • Very often those three things are

  • in conflict with each other.

  • Somebody's got to make that decision.

  • I'm just saying it might as well be you rather than somebody

  • behind closed doors.
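That grocery-store trade-off can be written down directly. Below is a toy sketch, with invented weights and units, of making the judgment call explicit and user-settable instead of burying it in a 1,300-parameter configuration file.

```python
from dataclasses import dataclass

@dataclass
class DrivePriorities:
    """Weights a route planner might trade off. Someone must choose them; the
    question is whether it's the rider or engineers months in advance."""
    speed: float = 0.2
    fuel_economy: float = 0.3
    safety_margin: float = 0.5   # e.g., following distance, cornering limits

def route_cost(minutes: float, liters: float, risk: float,
               p: DrivePriorities) -> float:
    # Lower is better; the weights make the conflict between goals explicit.
    return p.speed * minutes + p.fuel_economy * liters + p.safety_margin * risk

# In a hurry today: the rider, not a closed-door default, shifts the weights.
errand = DrivePriorities(speed=0.5, fuel_economy=0.1, safety_margin=0.4)
cost = route_cost(minutes=18.0, liters=1.2, risk=0.3, p=errand)
```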

  • AUDIENCE: Hi.

  • I have a question regarding the perfect five scenario.

  • Will this be applicable 20 years down the line,

  • or is it just a stage in the incremental development

  • toward autonomy?

  • DAVID MINDELL: Applicable to?

  • AUDIENCE: 20 years down the line,

  • will the perfect five still be your--

  • DAVID MINDELL: Yeah, I actually do-- I mean,

  • again, all the evidence you've seen in the last 50 years

  • is that systems that get deployed where there are lives

  • at stake end up in that state.

  • So it's an empirical argument based on many, many systems

  • studies, basically.

  • I could be wrong in 20 years.

  • I mean, if you had told someone in 1960

  • that they could make a computer with a 10 megahertz clock rate

  • and all of three megabytes of RAM,

  • they would have said of course, that'll

  • be plenty to get us fully autonomous ships.

  • And indeed that happened, and computers

  • have done all kinds of amazing things,

  • but there are still human interventions

  • in all of these systems.

  • So I could be proven wrong.

  • But it is an empirical argument based

  • on how people have done this for many previous decades.

  • AUDIENCE: But incrementally, with human intervention,

  • wouldn't it be like, we would kind of forget the basics?

  • Like, I came to the US two years back,

  • and now I can't drive stick.

  • So incrementally, all these things start getting added.

  • Now all of these semi-autonomous cars are here,

  • and then I might just forget the very basics

  • of having that intervention.

  • DAVID MINDELL: And again, these things all go on.

  • And ditto with the anti-lock brakes.

  • I'm sure none of you ever have to stop on ice,

  • but from where I come from, we all brake on ice all the time.

  • And you have to change the way that you use the brakes

  • actually when you have that.

  • There are all these little forms of autonomy

  • that come up from the ground up.

  • But I would argue they're still sort of managed by the user.

  • And in fact, your car still has a low gear one and low gear

  • two.

  • I don't know if all cars have that, but most cars have that.

  • And so you have the automatic mode.

  • And then for whatever reasons-- and again,

  • sometimes there are reasons--

  • you can go into those low gears.

  • So you still have a gear shift in the car, interestingly

  • enough.

  • AUDIENCE: Thank you.

  • DAVID MINDELL: Great.

  • Thanks for your attention.

  • [APPLAUSE]
