  • So, security is two different things:

  • it's a feeling, and it's a reality.

  • And they're different.

  • You could feel secure even if you're not.

  • And you can be secure

  • even if you don't feel it.

  • Really, we have two separate concepts

  • mapped onto the same word.

  • And what I want to do in this talk is to split them apart --

  • figuring out when they diverge and how they converge.

  • And language is actually a problem here.

  • There aren't a lot of good words

  • for the concepts we're going to talk about.

  • So if you look at security in economic terms,

  • it's a trade-off.

  • Every time you get some security, you're always trading off something.

  • Whether this is a personal decision --

  • whether you're going to install a burglar alarm in your home --

  • or a national decision,

  • whether you're going to invade a foreign country --

  • you're going to trade off something: money or time, convenience, capabilities,

  • maybe fundamental liberties.

  • And the question to ask when you look at a security anything

  • is not whether this makes us safer,

  • but whether it's worth the trade-off.

  • You've heard in the past several years that the world is safer

  • because Saddam Hussein is not in power.

  • That might be true, but it's not terribly relevant.

  • The question is: Was it worth it?

  • And you can make your own decision,

  • and then you'll decide whether the invasion was worth it.

  • That's how you think about security: in terms of the trade-off.

  • Now, there's often no right or wrong here.

  • Some of us have a burglar alarm system at home and some of us don't.

  • And it'll depend on where we live,

  • whether we live alone or have a family,

  • how much cool stuff we have,

  • how much we're willing to accept the risk of theft.

  • In politics also, there are different opinions.

  • A lot of times, these trade-offs are about more than just security,

  • and I think that's really important.

  • Now, people have a natural intuition about these trade-offs.

  • We make them every day.

  • Last night in my hotel room, when I decided to double-lock the door,

  • or you in your car when you drove here;

  • when we go eat lunch

  • and decide the food's not poison and we'll eat it.

  • We make these trade-offs again and again,

  • multiple times a day.

  • We often won't even notice them.

  • They're just part of being alive; we all do it.

  • Every species does it.

  • Imagine a rabbit in a field, eating grass.

  • And the rabbit sees a fox.

  • That rabbit will make a security trade-off:

  • "Should I stay, or should I flee?"

  • And if you think about it,

  • the rabbits that are good at making that trade-off

  • will tend to live and reproduce,

  • and the rabbits that are bad at it

  • will get eaten or starve.

  • So you'd think

  • that we, as a successful species on the planet -- you, me, everybody --

  • would be really good at making these trade-offs.

  • Yet it seems, again and again, that we're hopelessly bad at it.

  • And I think that's a fundamentally interesting question.

  • I'll give you the short answer.

  • The answer is, we respond to the feeling of security

  • and not the reality.

  • Now, most of the time, that works.

  • Most of the time,

  • feeling and reality are the same.

  • Certainly that was true for most of human prehistory.

  • We've developed this ability

  • because it makes evolutionary sense.

  • One way to think of it is that we're highly optimized

  • for risk decisions

  • that are endemic to living in small family groups

  • in the East African Highlands in 100,000 BC.

  • 2010 New York, not so much.

  • Now, there are several biases in risk perception.

  • There are a lot of good experiments on this.

  • And you can see certain biases that come up again and again.

  • I'll give you four.

  • We tend to exaggerate spectacular and rare risks

  • and downplay common risks --

  • so, flying versus driving.

  • The unknown is perceived to be riskier than the familiar.

  • One example would be:

  • people fear kidnapping by strangers,

  • when the data show that kidnapping by relatives is much more common.

  • This is for children.

  • Third, personified risks are perceived to be greater

  • than anonymous risks.

  • So, Bin Laden is scarier because he has a name.

  • And the fourth is:

  • people underestimate risks in situations they do control

  • and overestimate them in situations they don't control.

  • So once you take up skydiving or smoking,

  • you downplay the risks.

  • If a risk is thrust upon you -- terrorism is a good example --

  • you'll overplay it,

  • because you don't feel like it's in your control.

  • There are a bunch of other cognitive biases

  • that affect our risk decisions.

  • There's the availability heuristic,

  • which basically means we estimate the probability of something

  • by how easy it is to bring instances of it to mind.

  • So you can imagine how that works.

  • If you hear a lot about tiger attacks, there must be a lot of tigers around.

  • You don't hear about lion attacks, so there aren't a lot of lions around.

  • This works, until you invent newspapers,

  • because what newspapers do is repeat rare risks

  • again and again.

  • I tell people: if it's in the news, don't worry about it,

  • because by definition, news is something that almost never happens.

  • (Laughter)

  • When something is so common, it's no longer news.

  • Car crashes, domestic violence --

  • those are the risks you worry about.

  • We're also a species of storytellers.

  • We respond to stories more than data.

  • And there's some basic innumeracy going on.

  • I mean, the joke "One, two, three, many" is kind of right.

  • We're really good at small numbers.

  • One mango, two mangoes, three mangoes,

  • 10,000 mangoes, 100,000 mangoes --

  • it's still more mangoes than you can eat before they rot.

  • So one half, one quarter, one fifth -- we're good at that.

  • One in a million, one in a billion --

  • they're both almost never.

  • So we have trouble with the risks that aren't very common.

  • And what these cognitive biases do

  • is they act as filters between us and reality.

  • And the result is that feeling and reality get out of whack,

  • they get different.

  • Now, either you feel more secure than you are --

  • that's a false sense of security --

  • or the other way around, and that's a false sense of insecurity.

  • I write a lot about "security theater,"

  • meaning products that make people feel secure,

  • but don't actually do anything.

  • There's no real word for stuff that makes us secure,

  • but doesn't make us feel secure.

  • Maybe it's what the CIA is supposed to do for us.

  • So back to economics.

  • If economics, if the market, drives security,

  • and if people make trade-offs based on the feeling of security,

  • then the economically smart thing for companies to do

  • is to make people feel secure.

  • And there are two ways to do this.

  • One, you can make people actually secure

  • and hope they notice.

  • Or two, you can make people just feel secure

  • and hope they don't notice.

  • Right?

  • So what makes people notice?

  • Well, a couple of things:

  • understanding of the security,

  • of the risks, the threats,

  • the countermeasures, how they work.

  • If you know this stuff, you're more likely

  • to have your feelings match reality.

  • Having enough real-world examples helps.

  • We all know the crime rate in our neighborhood,

  • because we live there, and we get a feeling about it

  • that basically matches reality.

  • Security theater is exposed

  • when it's obvious that it's not working properly.

  • OK. So what makes people not notice?

  • Well, a poor understanding.

  • If you don't understand the risks and you don't understand the costs,

  • you're likely to get the trade-off wrong,

  • and your feeling doesn't match reality.

  • Not enough examples.

  • There's an inherent problem with low-probability events.

  • If, for example, terrorism almost never happens,

  • it's really hard to judge the efficacy of counter-terrorist measures.

  • This is why you keep sacrificing virgins,

  • and why your unicorn defenses are working just great.

  • There aren't enough examples of failures.

  • Also, feelings that cloud the issues --

  • the cognitive biases I talked about earlier: fears, folk beliefs --

  • basically, an inadequate model of reality.

  • So let me complicate things.

  • I have feeling and reality.

  • I want to add a third element. I want to add "model."

  • Feeling and model are in our head;

  • reality is the outside world; it doesn't change, it's real.

  • Feeling is based on our intuition,

  • model is based on reason.

  • That's basically the difference.

  • In a primitive and simple world,

  • there's really no reason for a model,

  • because feeling is close to reality.

  • You don't need a model.

  • But in a modern and complex world,

  • you need models to understand a lot of the risks we face.

  • There's no feeling about germs.

  • You need a model to understand them.

  • This model is an intelligent representation of reality.

  • It's, of course, limited by science, by technology.

  • We couldn't have a germ theory of disease

  • before we invented the microscope to see them.

  • It's limited by our cognitive biases.

  • But it has the ability to override our feelings.

  • Where do we get these models? We get them from others.

  • We get them from religion, from culture, teachers, elders.

  • A couple years ago, I was in South Africa on safari.

  • The tracker I was with grew up in Kruger National Park.

  • He had some very complex models of how to survive.

  • And it depended on whether you were attacked by a lion, leopard, rhino, or elephant --

  • and when you had to run away, when you couldn't run away,

  • when you had to climb a tree, when you could never climb a tree.

  • I would have died in a day.

  • But he was born there, and he understood how to survive.

  • I was born in New York City.

  • I could have taken him to New York, and he would have died in a day.

  • (Laughter)

  • Because we had different models based on our different experiences.

  • Models can come from the media,

  • from our elected officials ...

  • Think of models of terrorism,

  • child kidnapping,

  • airline safety, car safety.

  • Models can come from industry.

  • The two I'm following are surveillance cameras

  • and ID cards;

  • quite a lot of our computer security models come from there.

  • A lot of models come from science.

  • Health models are a great example.

  • Think of cancer, bird flu, swine flu, SARS.

  • All of our feelings of security about those diseases

  • come from models given to us, really, by science filtered through the media.

  • So models can change.

  • Models are not static.

  • As we become more comfortable in our environments,

  • our model can move closer to our feelings.

  • So an example might be,

  • if you go back 100 years,

  • when electricity was first becoming common,

  • there were a lot of fears about it.

  • There were people who were afraid to push doorbells,

  • because there was electricity in there, and that was dangerous.

  • We, on the other hand, are very facile around electricity.

  • We change light bulbs without even thinking about it.

  • Our model of security around electricity is something we were born into.

  • It didn't change as we were growing up.

  • And we're good at it.

  • Or think of the risks on the Internet across generations --

  • how your parents approach Internet security,

  • versus how you do,

  • versus how our kids will.

  • Models eventually fade into the background.

  • "Intuitive" is just another word for familiar.