
  • How do we think about self-driving cars?

  • The technology is essentially here and we now have to take a technology in which machines can make a bunch of quick decisions, oftentimes quicker than we can.

  • That could drastically reduce traffic fatalities, could drastically improve the efficiency of our transportation grid, and could help solve things like the carbon emissions that are causing the warming of the planet.

  • But Joi made a very elegant and simple point, which is: what are the values that we're going to embed in the cars, if in fact we're going to get all the benefits of self-driving cars, and how do we make the public comfortable with it?

  • Now, some of it is just that, right now, the overriding concern of the public is safety.

  • Alright.

  • The notion of essentially taking your hands off the wheel. But as Joi pointed out, there are going to be a bunch of choices that you have to make. A classic problem being: if the car is driving and you can swerve to avoid a pedestrian who maybe wasn't paying attention, so it's not your fault.

  • But if you swerve, you're going to go into a wall and might kill yourself.

  • And how do you make the calculations about odds and airbags and speed and all that, so that you can have that machine make that decision?

  • But that's a moral decision, not just a pure utilitarian decision.
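The "pure utilitarian" calculation contrasted with the moral one above can be made concrete. The following is a minimal, hypothetical sketch: the function names and every number (collision probabilities, severity scores) are made-up assumptions for illustration, not anything stated in the conversation or drawn from any real vehicle's logic.

```python
# Hypothetical sketch of a utilitarian swerve decision: the machine weighs
# each maneuver's expected harm (probability of collision times severity).
# All values and names below are illustrative assumptions only.

def expected_harm(p_collision: float, severity: float) -> float:
    """Expected harm of a maneuver = collision probability * severity."""
    return p_collision * severity

def choose_maneuver(options: dict) -> str:
    """Pick the maneuver with the lowest expected harm."""
    return min(options, key=lambda name: expected_harm(*options[name]))

# Illustrative inputs: (probability of collision, severity on a 0-10 scale).
options = {
    "stay_course": (0.9, 8.0),    # likely hits the pedestrian, severe
    "swerve_to_wall": (1.0, 5.0), # certain wall impact, airbags reduce severity
}

print(choose_maneuver(options))  # → swerve_to_wall
```

The sketch shows why the speakers call this a moral rather than a purely utilitarian question: the arithmetic is trivial, but someone has to assign the severity scores and decide that minimizing expected harm is the right objective in the first place.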

  • And who's setting up those rules?

  • Do we have broad consensus around what those rules are?

  • That's going to be important.

  • We're going to have the same set of questions when it comes to medicine.

  • We have invested heavily in thinking about precision medicine, or individualized medicine: thinking about how the combination of the human genome, computer data, and a large enough sample size can potentially arrive at a whole host of cures.

  • Parkinson's, Alzheimer's, cancer.

  • There are a whole bunch of interesting choices that we're going to have to make as we proceed in this, because the better we get at it, the more predictive we are about certain genetic variations having an impact.

  • How we think about insurance, how we think about medical pricing, who gets what and when: is that something that we're going to hand over to an algorithm, and if so, who is writing it?

  • So these are going to be unavoidable questions.

  • And I think that Joi is exactly right: making sure that the broad public, which is not necessarily going to be following every single iteration of this debate, still feels as if its voice is heard, that it's represented, and that the people in the room are mindful of a range of equities.

  • That's going to be really important.

  • And what is the role of government in that context, as we start to get into these ethical questions?

  • Well, my instinct is that, initially, the role is as a convener.

  • The way I've been thinking about the regulatory structure as AI emerges is that, early in a technology, a thousand flowers should bloom, and the government should have a relatively light touch: investing heavily in research, making sure that there is a conversation between basic research, applied research, and the companies that are trying to figure out how to apply it.

  • A good example of where this has worked pretty well, I think is in predicting the weather.

  • You've got big data, really complex systems.

  • Government basically said, hey, we've got all this data, and suddenly a whole bunch of folks were gathering around, working with the National Weather Service and developing new apps.

  • And we've actually been able to predict an oncoming tornado three or four times faster than we used to.

  • That saves lives.

  • That's a good example of where the government isn't doing all the work initially, but is inviting others to participate.

  • As technologies emerge and mature, figuring out how they get incorporated into existing regulatory structures becomes a tougher problem.

  • And the government needs to be involved a little bit more.

  • Not always to force the new technology into the square peg that exists, but maybe to change the peg.

  • And one of the things that we're trying to do, for example, is to get the Food and Drug Administration, the FDA, to redesign how it's thinking about genetic medicine.

  • A lot of its rules and regulations were designed for a time when, you know, it was worried about heart stents; this is a very different problem.

  • So: basic research, government convening to make sure the conversations are happening, ensuring transparency.

  • But as things mature, making sure that there is a seamless transition and a way to get the regulators to rethink the regulations.

  • And as Joi pointed out, making sure that the regulations themselves reflect a broad-based set of values, because otherwise, if it's not transparent, we may find that it's disadvantaging certain people or certain groups, or that the public is just suspicious of it.

  • I can say one thing about that.

  • So, it ties to two things.

  • One is, when we did this car trolley problem, I think we found that most people like the idea that the driver or the passenger could be sacrificed to save many people, but they would never buy that car.

  • That was sort of the short version of the result.

  • The other related thing is, I don't know if you've heard of the neurodiversity movement, but say we solve autism.

  • And Temple Grandin talks about this a lot.

  • She says that, you know, Mozart and Einstein and Tesla would all be considered autistic if they were here today.

  • I don't know if that's true, but they might have been somewhere on the spectrum.

  • So if we were able to eliminate autism and make everyone neuro-normal, I bet a whole swath of MIT kids would not be the way they are.

  • And you know, you probably wouldn't want Einstein as your kid.

  • As somebody who was in Cambridge, at Harvard Law School, I don't want to echo that stereotype, but some of the brilliant kids are kind of on the spectrum.

  • And I think one of the things that's really important, whether we're talking about autism or just diversity broadly: one of the problems is that allowing the market and each individual to decide, "I just want a normal kid, and I want a car that's going to protect me," is not going to lead to the maximum societal benefit.

  • And I think that, whether it's government or something else, we can't just have this be market-driven.

  • And I think a lot of these decisions are going to be this way.

  • I think that's a great point.

  • And it actually goes to the larger issue that we wrestle with all the time around AI, and science fiction taps into this all the time.

  • Part of what makes us human are the kinks: the mutations, the outliers, the flaws that create art or the new invention, right?

  • We have to assume that if a system is perfect, then it's static; and part of what makes us who we are, part of what makes us alive, is that we're dynamic.

  • And we're surprised.

  • One of the challenges that we'll have over time is to think about where are those areas where it's entirely appropriate for us just to have things work exactly the way they're supposed to, without surprises.

  • So airline flight might be a good example, where, you know, I'm not that interested in having surprises.

  • If I have a smooth flight every time, I'm fine.

  • Right?

  • Yeah.
