[APPLAUSE]
- Two out of the three
fundamental mysteries

about our place in the universe
have already been resolved.

The first is literally about
our place in the universe.

Many years ago Copernicus told
us that we were not at its

centre, that we were just a
tiny dot suspended in the abyss.

This is an image of the earth
taken from the probe Voyager 1

as it was leaving the
solar system from about six

billion kilometres away.
All of human history, all of
the history of life on Earth,

has taken place on
that pale blue dot.

The second mystery,
Darwin then revealed

that we humans are just
one branch, or one twig,

of a beautifully rich and
delicate evolutionary tree.

And that much of the
machinery of life

is shared even with the lowliest
of our fellow creatures.

The third mystery is
that of consciousness,

our inner universe.
Now earlier this year, for
the third time in my life,

I ceased to exist.
As the propofol anaesthetic
flowed from the cannula

in my wrist into my bloodstream
and then into my brain,

there was a falling apart.
A blackness.
An absence.
And then, I was back.
Drowsy and disoriented,
but definitely there.

And when you wake
from a deep sleep,

you might be confused about what
time it is, especially

if you've been flying somewhere, but you'll
know that some time has passed.

There seems to be
some basic continuity

between your consciousness then,
and your consciousness now.

But coming around from
a general anaesthetic,

it could have been five minutes.
It could have been five hours.
It could have been five
days, or five years.

I was simply not there.
A premonition of the
oblivion of death.

And general anaesthesia doesn't
just work on your brain.

It doesn't just
work on your mind.

It works on your consciousness.
By altering the delicate
electrochemical circuitry

inside your head, the
basic ground state

of what it is to be is
temporarily abolished.

And in this process lies one
of the greatest remaining

mysteries in science
and philosophy.

How does consciousness happen?
Why is life in the first person?
Why does it go away,
and come back?

The modern incarnation
of this problem

is usually traced to Descartes,
who in the 17th century

distinguished between matter
stuff, res extensa, the stuff

that these desks are made
of, that clothes are made of.

But also what brains and bodies
are made of, material stuff.

And res cogitans, the stuff
of thought, of feelings.

The stuff of consciousness.
And in making this
distinction, he gave rise

to the now infamous
mind/body problem,

and life has never
been simple since.

But Descartes actually
generated even more mischief

with his doctrine of
the beast machine,

which I'm going to mention now,
because it anticipates where

I'm going to end up as the bell
rings when I finish in an hour.

Before Descartes, people
commonly believed in something

called the great chain of
being, with rocks and plants

at one end, non-human
animals a bit higher

up, then humans, and then angels
and gods at the very top.

And this great scale
of being was also

a scale of moral virtue, so that
humans had more moral virtues

than animals and plants,
and then rocks and so on.

Now Descartes, in making
this division between mind

and matter, argued that only
humans had minds, and therefore

moral status, while other
animals didn't have minds.

They were merely physiological
machines, or beast machines,

morally equivalent to
plants, and to rocks.

And in this view, the
physiological mechanisms

that give rise to the
property of being alive

were not relevant
to the presence

of mind or consciousness.
Now I'm going to propose, at the
end of this talk, the opposite.

That our conscious sense of
self arises because of, and not

in spite of, the fact that
we, too, are beast machines.

So to get there, let's return
to the apparent mystery

of consciousness.
Now as recently as 1989, which
is quite a while ago, but not

that long ago,
Stuart Sutherland,

who was founding professor
of experimental psychology

at my university of
Sussex, had this to say.

"Consciousness is a fascinating
but elusive phenomenon.

It is impossible to specify
what it is, what it does,

or why it evolved.
Nothing worth reading
has been written on it."

[LAUGHTER]
It's quite a pessimistic
point of view.

And that may have
been true then.

I don't think it was true
then, but in any case

things have changed a lot since.
And more or less, about
the time that Sutherland

made these remarks, we can
see the birth, or the rebirth,

of the study of consciousness
within the neurosciences.

And a good landmark
is this paper

by Francis Crick and Christof
Koch, published in 1990.

And they start their paper by
saying that it is remarkable

that most of the work
in cognitive sciences,

and the neurosciences, makes
no reference to consciousness

or awareness at all.
And then they go on to
propose their own theory

of what the neural correlates
of consciousness are.

What it is in the brain that
is responsible for being

conscious.
And since then, over
the last 25 years,

there's been first a trickle,
and now a deluge of research

on the brain basis of
conscious experience.

Some of this work
is being carried out

in my laboratory, the Sackler
Centre for Consciousness

Science, which was founded
six years ago with Hugo

Critchley, my co-director.
And there are now even
specialised academic journals,

The Neuroscience
of Consciousness,

which I started last year
with Oxford University Press.

And this is a real
change of the tide.

When I started out
more than 20 years ago,

it was thought

to be career suicide to
want to study consciousness,

scientifically.
And it may still
be, we don't know.

Let's see.
So while the brain
basis of consciousness

is still a mystery,
it is, in some sense,

an accessible mystery.
And the author, Mark Haddon,
put this very nicely, I think.

He said the raw material
of consciousness

is not on the other
side of the universe.

It didn't happen 14
billion years ago.

And it's not squirrelled
away deep inside an atom.

The raw material
of consciousness

is right here, inside
your head, and you can

hold the brain in your hands.
But the brain won't deliver
its secrets very easily.

What's extraordinary
about the brain

is not so much the
number of neurons,

though there are
about 90 billion.

It's not even the
number of connections,

though there are so many that
if you counted one every second,

it would take you about
three million years

to finish counting.
What's truly extraordinary
are the patterns

of connectivity, which
to a large extent,

are still not known, but within
which are inscribed everything

that makes you, you.
The challenge is then this,
at least the way I see it.

How can the structure
and dynamics

of the brain, in connection with
the body and the environment,

account for the subjective
phenomenological properties

of consciousness?
And considering
things this way, we

come up against what
the philosopher David

Chalmers has often called the
hard problem of consciousness.

And the idea is this.
There is an easy problem.
The easy problem is to
understand how the combined

operations of the brain and the
body give rise to perception,

to cognition, to thinking,
to learning, to behaviour.

How the brain works,
in other words.

The hard problem
is to understand

why and how any of this
should have anything

to do with consciousness at all.
Why aren't we just robots,
or philosophical zombies,

without any inner universe?
Now there's a tempting
intuition here,

which is that, even if we

solve the easy problem, the
hard problem would still remain

as mysterious as it seems now.
But this just seems
wrong-headed to me.

It may not be necessary to
explain why consciousness

exists at all, in order to
make progress in understanding

its material basis.
And this for me, is the real
problem of consciousness;

how to account for its
various properties in terms

of biological mechanisms without
pretending that it doesn't

exist at all, as you do if
you solve the easy problem,

and without trying to
account for why it's

part of the universe in
the first place, which

is the hard problem.
And in the history
of science, we've

been somewhere similar before.
It's hard to say if it's
exactly the same situation.

But in our
understanding of life,

eminent biochemists of the time
found it entirely mysterious

how biological mechanisms
could give rise

to the property of being alive.
And there were
proposals of things,

like élan vital
and essence vitale,

and all sorts of other stuff.
And although we don't
yet understand everything

about life, this initial
sense of mystery about life

has, to a large
extent, dissolved

as biologists have just
got on with the business

of understanding the properties
of living systems in terms

of mechanisms.
An important part
of this story was

the realisation that life
is not just one thing,

but rather a constellation of
partially dependent, partially

separable, processes, like
metabolism, homeostasis,

and reproduction.
In the same way,
to make progress

on the real problem
of consciousness,

it can be useful to distinguish
different aspects or dimensions

of what it is to be conscious.
The space of possible
minds, if you like.

And one simple classification
is into conscious level,

which is the property of
being conscious at all.

For example, the
difference between being

in a dreamless sleep, or
under general anaesthesia,

and being awake and
conscious as you are now.

And the conscious content,
when you are conscious,

you're conscious of something.
The myriad of sights, sounds,
smells, emotions, feelings,

and beliefs that populate your
inner universe at any one time.

And one thing you are conscious
of when you are conscious,

is the experience, the specific
experience, of being you,

and this is the conscious self.
And it's the third
dimension of consciousness.

Now I don't claim
these distinctions mark

completely independent aspects
of what it is to be conscious,

but they're a pragmatically
useful way of breaking down

the problem a bit.
So let's start with
conscious level.

What are the fundamental
brain mechanisms

that underlie our ability
to be conscious at all?

And we can think of this, at
least as a first approximation,

as a scale from being
completely unconscious,

as if you were in a coma, or
under general anaesthesia,

to being awake, alert, and
fully conscious as you are now.

And there are various states
in between: being drowsy,

being mildly sedated, and so on.
What's important is that, while
being conscious and being awake

often go together, this
is not always the case.

For instance, when you are
dreaming you are asleep,

but you are having
conscious experiences.

The conscious experience
of your dreams.

And on the other
side of this diagram,

there are pathological states,
like the vegetative state,

where physiologically you will
go through sleep/wake cycles,

but there is nobody at home.
There is no
consciousness happening.

So what are the
specific mechanisms

that underlie being conscious
and not simply being

physiologically awake?
Well there are a number
of possibilities.

Is it the number of neurons?
Well actually, probably not.
There are more neurons
in your cerebellum,

this bit at the
back of your brain,

than in the rest of
your brain put together.

In fact there are about four times
more neurons in your cerebellum

than in the rest of your cortex.
But if you have damage
to your cerebellum, yeah,

you'll have some problems
with coordination

and other things, some
cognitive problems,

but you won't lose
consciousness.

It's not just the
number of neurons.

Doesn't seem to be
any particular region.

In fact, there are regions
that, if you suffer damage,

you will permanently
lose consciousness;

the intralaminar
nuclei of the thalamus

deep inside the brain.
But these seem to be
more like on/off switches

than actual generators
of conscious experience.

It's not even neural
activity, at least not

simple kinds of neural activity.
Your brain is
still highly active

during unconscious
states, during sleep.

And it's not even neural synchrony.
One of the first

theories of consciousness
was that it depended on neurons

firing in synchrony
with each other.
But if your brain is
too synchronised,

you will lose consciousness,
and this happens

in states of absence epilepsy.
What seems to be the case is
that being conscious at all

depends on how different
brain regions talk

to each other in specific ways.
And this was some groundbreaking
work by Marcello Massimini

in Milan about 10 years ago.
And what he did here,
was he stimulated

the cortex of the brain
with a brief pulse

of electromagnetic energy,
using a technique called

transcranial magnetic
stimulation or TMS.

And then he used EEG
electroencephalography

to listen to the brain's echoes.
A little bit like
banging on the brain

and listening to its
electrical response.

And what he noticed
when you do this,

and you can see on
the left is asleep,

and on the right is awake.
And this is very
much slowed down.

When you stimulate the
brain in a sleep condition,

there is still a response.
There's still an echo, but
the echo stays very localised

to the point of stimulation.
It doesn't travel
around very much,

and it doesn't last very long.
But when you stimulate
a conscious brain,

there's a spatiotemporally
complex response.

This echo bounces around
all over the cortex

in very interesting ways.
What's more, the complexity of
this echo can be quantified.

You can apply some
simple algorithms

to describe how
complex, how rich,

this pattern of
interactivity is.

This is also from
the Milan group.

And what they've done here is,
they basically look at the echo

as it moves around the brain.
And they see the extent to
which you could describe it,

the minimum description length.
How much can you compress
the image of that echo?

Much the same way
that algorithms make

compressed files from
digital images in your phone.
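
Just to make the compression idea concrete, here is a toy sketch, in Python, of a Lempel-Ziv-style phrase count over a binarised signal. This is only an illustration of the compressibility idea, not the actual analysis pipeline behind the measure described here.

```python
def lempel_ziv_complexity(sequence: str) -> int:
    """Count the phrases in a simple Lempel-Ziv (LZ76-style) parsing.

    A regular, highly compressible signal yields few phrases;
    a rich, differentiated signal yields many more.
    """
    i, phrases, n = 0, 0, len(sequence)
    while i < n:
        length = 1
        # extend the current phrase while it already occurs earlier
        while i + length <= n and sequence[i:i + length] in sequence[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

# A flat, regular "signal" compresses to very few phrases...
print(lempel_ziv_complexity("0000000000000000"))  # → 2
# ...while an irregular one needs many more.
print(lempel_ziv_complexity("0110100110010110"))
```

The real measure normalises such counts and applies them to the spatiotemporal pattern of the brain's response, but the intuition is the same: more structure that resists compression means a higher index.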

And they came up
with an index called

the perturbational
complexity index.

And what you find
is, you now have

a number that you can attach
to how conscious you are.

This is, I think,
really intriguing,

because it's a first
step towards having

an actual measurement
of conscious level.

This graph on the bottom
shows this measure

applied to a variety
of conscious states,

ranging from pathological
conscious states,

like the vegetative
state, where there

is no consciousness at all,
all the way through locked

in syndrome, and
then healthy waking.

And you can immediately
see that techniques

like this might already have
clinical value in diagnosing

the potential for
consciousness that patients

might have after
severe brain injury.

Now at Sussex, we are continuing
work along these lines.

Instead of bashing

on the brain with a
sharp pulse of energy,

we want to see whether we can
get something similar just

by recording the spontaneous
activity of the brain.

So we look at
spontaneous dynamics

from, in this case, waking
states and anaesthesia.

This is work with my PhD
students Michael Schartner

and Adam Barrett.
We measure its
complexity, and indeed, we

find that we can
distinguish different levels

of consciousness just by
the spontaneous activity

of the brain.
In a way this isn't
that surprising,

because we know
various things change.

The balance of
different frequencies

of your brain activity changes
when you lose consciousness.

But this doesn't
have to do with that.

This is independent of that.
There's something specific
that is being detected

by these changes in complexity.
More recently we've applied
the same measures to sleep,

in this case taking
advantage with colleagues

in Milan of recordings
taken from directly

within the human
cerebral cortex.

These are implanted electrodes.
And we see much the same story.
If you compare where
the two areas are,

you compare the complexity
of wakeful rest,

and early non-rem sleep, where
you are not dreaming very much.

You see that complexity
falls a great deal.

What's interesting here is, if
you compare wakeful rest to REM

sleep, where people will often
report dreams if you wake them

up, the level of
complexity is very much

as it is during
the wakeful state.

There's something
else going on here,

which is that the complexity in
the frontal part of the brain

seems to be higher than in
other parts of the brain.

And that's something we still
don't understand fully, yet.

I just wanted to
give you something

hot off the press, so to speak,
which is that we've also

been applying these
measures now to data taken

from people under the
influence of psychedelic drugs;

psilocybin, ketamine, and LSD.
And what we find, at least in
our hands to start with here,

is that the level of
complexity actually

increases as compared to the
baseline state, which is not

something we've seen before
in any other application

of these measures.
So what's important about
this way of looking at things

is that, it's grounded
in a theory that

tries to explain why certain
sorts of brain dynamics

go along with being conscious.
And put very simply,
the idea is this--

and it goes back to Giulio
Tononi and Gerald Edelman,

people that I went to work with
in America about 18 years ago--

the idea is very simple.
Consciousness is
extremely informative.

Every conscious experience
you have, or have had,

or will have, is different
from every other conscious

experience you have had,
are having, or will have.

Even the experience
of pure darkness

rules out a vast repertoire
of alternative possible

experiences that you could have,
or might have in the future.

There's a huge amount of
information for the organism

in any conscious experience.
At the same time every
experience that you have

is highly integrated.
Every conscious scene is
experienced all of a piece;

it is bound together.
We don't experience colours and
shapes separately in any way.

So conscious experiences,
at the level of phenomenology,

combine these properties.
They are on the one hand
highly informative, composed

of many different parts.
On the other, bound
together in a unified whole.

And this motivates us to search
for mathematical measures which

have the same properties,
which are neither

lacking in information
nor lacking in integration.
On the left, you
see a system which

is all connected together,
so it can't enter

very many different states.
On the right is a system which
is completely dissociated,

so its parts can enter many different states,
but it's not a single system.

We want measures that track
this middle ground of systems,

that combine both integration
and differentiation.

And a number of these
measures now exist.

There are some
equations here, which

we can talk about
later if you like,

that try to target
this middle ground.
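
To make the integration/differentiation idea concrete, here is a toy calculation under simplifying assumptions (binary channels, short samples): differentiation scored as Shannon entropy, and integration as multi-information, the gap between the summed per-channel entropies and the joint entropy. This is an illustration of the idea, not any of the published measures.

```python
from collections import Counter
from math import log2

def entropy(states):
    """Shannon entropy (bits) of a sequence of hashable states."""
    n = len(states)
    return -sum(c / n * log2(c / n) for c in Counter(states).values())

def multi_information(samples):
    """Integration: summed per-channel entropies minus the joint entropy.

    `samples` is a list of tuples, one tuple of channel states per time step.
    """
    joint = entropy(samples)
    channels = list(zip(*samples))
    return sum(entropy(ch) for ch in channels) - joint

# Fully synchronised channels: integrated, but not differentiated.
locked = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Fully independent channels: differentiated, but not integrated.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(multi_information(locked))       # → 1.0 (high shared information)
print(multi_information(independent))  # → 0.0 (no shared information)
```

The measures in the literature look for systems scoring high on both axes at once; this sketch only separates the two extremes.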

And time will tell whether,
by applying these more

phenomenologically
grounded measures,

we come up with even more
precise practical measures

of consciousness.
Now why is this business
of measurement important?

And I want to make a
general point here,

which is that, if you're
trying to naturalise

a phenomenon which
seems mysterious,

the ability to measure it
is usually one of the most

important steps you can take.
And we've seen numerous
examples of this.

The history of our understanding
of heat and temperature

is one very good example.
Here's an early thermometer

from the 18th century, which
used the expansion of air.

But of course there
are many problems

in generating a
reliable thermometer

and a scale of temperature,
if you don't already

have fixed points.
And if you don't
know what heat is you

get trapped in a kind
of vicious circle

that took a long
time to break out of.

But people did
break out of this,

and the development
of the thermometers

catalysed our
understanding of heat

from being something that flowed
in and out of objects, to being

something that was identical
to a physical property.

The mean molecular kinetic
energy of molecules

in a substance.
And having that
concept of heat now

allows us to talk
about temperature

far beyond the realms
of human experience.

We can talk about
the temperature

on the surface of
the sun, in a sense,

the way we can talk about the
temperature of interstellar

space, close to absolute zero.
None of these things
make any sense

in terms of our phenomenological
experience of hot and cold.

So this brings me to my
first take-home message.

Measurement is important, and
consciousness, conscious level,

depends on a complex balance of
differentiation and integration

in brain dynamics,
reflecting the fact

that conscious
experiences themselves

are both highly informative
and always integrated.

Now when we are conscious, we
are conscious of something.

So what are the brain
mechanisms that determine

the content of consciousness?
And the hero for this
part of the story

is the German physicist and
physiologist Hermann Von

Helmholtz.
And he proposed the
idea that the brain is

a kind of prediction machine.
That what we see,
hear, and feel are

nothing other than the brain's
best guess about the causes

of sensory inputs.
And the basic idea is,
again, quite simple.

The brain is locked inside
its bony skull home,

and has very indirect access
to the external world.

All it receives are ambiguous
and noisy sensory signals,

which are only
indirectly related

to this external world
of objects, and so on,

if there is an external world
of objects out there at all.

We never know about that.
Perception in this
view is, by necessity,

a process of inference in
which the brain interprets

these ambiguous and noisy
sensory signals with respect

to some prior
expectations or beliefs

about the way the world is.
And this forms the
brain's best guess

of the causes of the
sensory signals that

are impacting our sensory
surfaces all the time.

What we see is the brain's
best guess of what's out there.

I want to give you
a couple of examples

that illustrate this process.
It's quite easy to do, in a way.
This first example is a
well-known visual illusion

called Adelson's checkerboard.
Now here, you'll
see two patches.

You'll see patches A and B. And
I hope to you they look to be

different shades of grey.
Do they?
They look to be
different shades of grey.

Of course they are exactly
the same shade of grey.

I can illustrate that by
putting an alternative image,

and joining up
those two patches.

You'll see that it's
the same shade of grey.

You may not believe
me, so what I'll do

is, I'll shift it along, and
you'll see even more clearly.

There are no sharp edges.
It's the same shade of grey.
What's going on
here, of course, is

that the brain is unconsciously
applying its prior knowledge

that a cast shadow dims
the appearance of surfaces

that it casts onto.
So we therefore see
the patch B as being

lighter than it
really is, in order

to account for that effect.
And this is of course
an illustration

of the success of the visual
system, not its failure.

The visual system is a very
bad physical light meter,

but that's not what
it's supposed to do.

It's supposed to, or one
thing it's supposed to do,

is to interpret the
causes of sensory signals

in terms of meaningful
objects in the world.

It's also an example
of what we sometimes

call cognitive impenetrability.
Even if you know the patches
are the same shade of grey,

when I take that bar away,
they again look different.

Can't do much about that.
The second example
just shows you

how quickly the brain can
take in new prior information

to change the nature of
conscious perception.

This is a so-called
Mooney image.

And if you haven't seen
it before, hopefully what

you will see here
is just a pattern

of black and white splotches.
Does everybody kind of get that?
Black and white splotches?
Some of you might
have seen this before.

And now what I'm going
to do is fill it in,

and you see something
very different.

What you'll see is a very
meaningful scene here involving

at least two objects, a
woman in a hat, and a horse.

Now if you stare at
this for a while,

I won't leave it
up for too long--

but if you just look
at it for a little bit,

and then I take that
image away again,

you should still be able to see
the objects within that image.

Now for me this is
quite remarkable,

because the sensory information
hasn't changed at all.

All that's changed are your
brain's prior expectations

about what that
sensory data signifies.

And this changes what
you consciously see.

Now this also works in
the auditory domain.

Here are two spectrograms.
This is something
called sine wave speech,

and what you see here are two
time frequency representations

of speech sounds.
The one on the top has all the
sharp acoustical features that

characterise normal speech removed.
A little bit like
thresholding an image.

And the bottom is
something else.

So I'm going to
play the top first,

and let's see what
it sounds like.

[STRANGE BEEPS AND NOISES]
And now I'll play
you something else.

(BOTTOM SOUND - A MAN'S
VOICE): Jazz and swing fans

like fast music.
- So I hope you all understood
that piece of sage advice.

And now if I play the
original sound again--

[BEEPS AND WHISTLES THAT SOUND
LIKE THE SENTENCE]

- Yeah?
This is exactly the same.
Again, all that's
changed is what we

expect that sound to signify.
[SAME SOUND PLAYED AGAIN]
- One more time, just for luck.
It's not just a bunch of
noisy whistles, it's speech.

Now the typical
framework for thinking

about these kinds of effects
is Bayesian Inference.

And this is a form of
probabilistic reasoning, which

is applicable in all
sorts of domains,

not just in neuroscience,
in medical diagnosis,

and all sorts of things,
like finding lost submarines.

But in neuroscience, we talk
about the Bayesian brain.

And it's a way of
formalising Helmholtz's idea

that perception is a
form of best guessing.

And the idea is that sensory
signals and prior beliefs

can be represented as
probability distributions.

So for instance,
this yellow curve

is the probability of
something being the case,

maybe that you've got a brief
glimpse of an object moving

to the right.
The sensory data may
say something different.

It may have a
probability that peaks

at a different
angle of movement.

Maybe it's drifting in
a different direction.

And the optimal
combination of the prior,

and the likelihood, the yellow
curve and the red curve,

is this green curve, which
we will call the posterior

distribution.
And that represents the
optimal combination

of these two sorts of evidence.
And the idea is, well
that's what we perceive.
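
For Gaussian curves like these, the posterior has a simple closed form: precisions (inverse variances) add, and the posterior mean is the precision-weighted average of the prior and likelihood means. A minimal sketch with made-up numbers:

```python
def fuse_gaussians(mu_prior, var_prior, mu_like, var_like):
    """Combine a Gaussian prior and likelihood into the Gaussian posterior."""
    precision = 1.0 / var_prior + 1.0 / var_like
    var_post = 1.0 / precision
    # posterior mean: each estimate weighted by how precise (trusted) it is
    mu_post = var_post * (mu_prior / var_prior + mu_like / var_like)
    return mu_post, var_post

# Prior: object drifting right at +10 degrees, but uncertain (variance 4).
# Sensory evidence: motion at -2 degrees, more reliable (variance 1).
mu, var = fuse_gaussians(10.0, 4.0, -2.0, 1.0)
print(mu, var)  # → 0.4 0.8: the percept sits much closer to the reliable evidence
```

Note that the posterior is also narrower than either input curve: combining two uncertain estimates yields a more confident one.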

Thinking about
perception in this way

does something rather
strange to the way,

classically in
neuroscience, people

have thought about perception.
The classical view
is that the brain

processes sensory
information in a bottom-up,

or feed-forward direction.
This is a picture of the
visual system of the monkey,

and the idea is that information
comes in through the retina,

then goes through the thalamus.
It then goes to the
back of the brain.

And as the sensory signals
percolate deeper and deeper

and deeper into the
brain, they encode

or represent progressively
more sophisticated features

of objects.
So you start out at early
levels of the visual cortex

with responses to
luminance and edges,

and then higher up to objects,
including other monkeys.

What's important here is
the perceptual heavy lifting

is done by information
flowing in this bottom-up or

feed-forward direction.
Now the Bayesian brain idea
says something very different.

It says that what's really
important are the top-down

or inside-out connections
that flow from the centre

of the brain back out.
And we've known for
a long time there's

a large number, a very large
number of these connections,

by some descriptions more
than flow the other way around.

But the function has
been rather mysterious.

Thinking about
the Bayesian brain

gives us a nice way
to interpret this.

Which is that it's exactly
these top-down or inside-out

connections that
convey predictions

from high levels of the brain
to lower levels,

back out to the
sensory surfaces.

So these blue arrows convey
the brain's predictions

about the causes
of sensory signals.

And then what flows in the
feed-forward or bottom-up

direction, from the outside-in,
that's just the prediction

error, the difference between
what the brain expects

and what it gets at each
level of description.

So this is often called
predictive coding,

or predictive processing,
in some formal frameworks.

And the idea is that
minimization of prediction

error occurs across all
levels of this hierarchy

at the same time,
and what we then

perceive is the consequence
of this joint minimization

of prediction error.
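
A one-level caricature of this joint minimisation, with made-up quantities: the belief is nudged to reduce both the sensory prediction error and the error against the prior, settling on a compromise weighted by how much each signal is trusted.

```python
def settle_belief(prior, observation, prior_precision=1.0,
                  sensory_precision=1.0, lr=0.1, steps=200):
    """Gradient descent on precision-weighted squared prediction errors."""
    belief = prior
    for _ in range(steps):
        sensory_error = observation - belief  # what arrived vs what was predicted
        prior_error = belief - prior          # belief vs higher-level expectation
        belief += lr * (sensory_precision * sensory_error
                        - prior_precision * prior_error)
    return belief

# Equal trust: the settled percept lies midway between prior and input.
print(settle_belief(prior=0.0, observation=2.0))  # ≈ 1.0
# Trust the senses more: the percept moves toward the observation.
print(settle_belief(prior=0.0, observation=2.0, sensory_precision=4.0))  # ≈ 1.6
```

In the full story this plays out across many hierarchical levels at once, but the fixed point of even this single loop is exactly the precision-weighted compromise that Bayesian inference prescribes.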
So you can think of
perception as a sort

of controlled
hallucination, in which

our perceptual predictions are
being reined in at all points

by sensory information from
the world and the body.

Now there's quite a
lot of experiments

that show that
something like this

is probably going
on in the brain.

These are a couple of examples.
And since I was looking for
the best examples,

so they don't come
from my lab at all.

[LAUGHTER]
This is from Lars
Muckli in Glasgow.

He's shown using advanced
brain reading techniques, which

I won't describe, that you
can decode the content of what

a person is seeing from parts
of the visual cortex that

aren't even receiving any input.
And what's more, you
can decode better

when you decode from the top
part of the cortex, which

is supposed to receive
predictions from higher levels.

So that suggests there are
predictions being fed back.

And another study by Andre
Bastos and Pascal Fries

in Germany, they used a method
called Granger causality, which

is sensitive to information
flow in systems.

And they find that top-down
signals and bottom-up signals

are conveyed in
different frequency bands

in the cortex,
which is what you'd

predict from predictive coding.
One last experiment which I
find particularly interesting

is an experiment from a Japanese
group of Masanori Murayama.

And they used
optogenetics, which

is a way of using lights
to selectively turn

on or off neural circuits.
And in this
experiment they showed

that by just deactivating
top superficial levels

of somatosensory
cortex in a mouse

brain, the part
of the mouse brain

that's sensitive to
touch, they could

affect how well that
mouse was able to do

tactile discriminations.
Those top-down connections were
coming from a motor cortex.

So there's a lot of evidence
that top-down connections

in the brain are
important for perception,

is the basic message there.
But what's rather strange,
and what I'm going to tell

you next is that all this
stuff is all very good,

but predictive processing is
not a theory of consciousness.

Nothing I've said has anything
to do with consciousness,

at all.
It's a
very general theory of how

brains do what they do.
How they do
perception, how they do

cognition, how they do action.
So somewhat
counter-intuitively, I

think this is exactly why it's
a great theory of consciousness.

And the reason I think
this is because it

allows us to ask all
sorts of questions

about the real problem.
About what it is,
what happens in brains

that underlies what you happen
to be conscious of right now,

without getting sucked into the
metaphysical plughole of why

you are conscious
in the first place.

In other words, it provides
a powerful approach

to looking for neural
correlates of consciousness,

those things in the brain that
go along with being conscious.

Because we can
now take advantage

of a very general theory
of how brains do what

they do, rather
than just looking

at this region or that region.
So what does
predictive processing,

or the Bayesian brain say about
consciousness, specifically?

Well many years ago, some
influential experiments

revealed a very
strong connection

between top-down signalling
and conscious contents.

In this example by Alvaro
Pascual-Leone and Vincent

Walsh, what they
did was they had

people look at visual motion,
examples of visual motion.

And they used TMS, this
interventional technique

where you can zap the
brain very briefly.

I mentioned it before.
But they used it
here, specifically

to interrupt the
top-down signalling

that was evoked by this
perception of visual motion.

And the result was that, if
you interrupted specifically

the top-down feedback,
you would abolish

the conscious perception
of visual motion,

even if you left the
bottom-up signalling intact.

So that was an early key finding.
Now, more recently, in our
lab and in many other labs

all over the place, we've been
asking some other questions

about the relationship
between what you expect

and what you
consciously experience.

One of the most basic
questions you can ask

is, do we consciously see
what we expect to see?

Or do we consciously see
what violates our expectations?

And a recent study from our
group, led by Yair Pinto,

used a method called
continuous flash suppression

to address this question.
It's illustrated here.
You see different images
in the different eyes.

In one eye you see this rapidly
changing Mondrian pattern

of squares.
And in the other eye, you
either see a face or a house.

And they change
contrast like this.

So initially, the person would
just see this random pattern,

and then they'll see
either a house or a face.

And simply, you just
ask them to expect

to see-- you just tell
them a face is more likely

or a house is more likely.
And what we find over
a number of studies

is that we see faces
more quickly when that's

what we're expecting to see.
It may seem obvious, but it
could be the other way around.

At least in these studies,
we see what we expect to see,

not what violates
our expectations.

That's the data.
And the same goes for houses.
These kinds of studies
support the idea

that it's the top-down
predictions that

are really important
for determining

what we're conscious of.
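For readers of this transcript, the logic of these expectation effects can be written out as Bayes' rule. The numbers below are made up for illustration, not parameters from the actual study: with ambiguous sensory evidence that slightly favours "house", a strong prior for "face" is enough to tip the brain's best guess the other way.

```python
def posterior_face(prior_face, lik_face, lik_house):
    """Bayes' rule over two hypotheses, 'face' vs 'house':
    posterior probability of 'face' given the sensory evidence.
    Posterior is proportional to prior times likelihood."""
    num = prior_face * lik_face
    return num / (num + (1 - prior_face) * lik_house)

# Ambiguous evidence slightly favouring 'house' (hypothetical numbers)
with_expectation = posterior_face(0.8, 0.45, 0.55)  # told 'face is likely'
flat_prior = posterior_face(0.5, 0.45, 0.55)        # no expectation
```

With the flat prior the percept leans "house" (posterior for "face" is 0.45), but the expectation pushes the posterior for "face" above 0.75: the same sensory input, a different best guess.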
There's another experiment
which I will just mention.

We did pretty much
the same thing.

This is called motion-induced blindness.

If you are in a lab rather
than in a lecture theatre,

and you stare at this
central point here,

then this red dot might
disappear from time to time.

And what we did was
after it disappeared,

we changed its colour and we
led people to expect the colour

change to be one
thing or another.

And again, it reappeared more
quickly if it changed colour

in the way you were expecting.
Again, confirming that
once your expectations were

validated, then that accelerated
your conscious awareness

of something in the world.
Now that's just
behavioural evidence.

That's just asking people what
they see and when they see it.

We've also been interested
in the brain mechanisms that

underlie and shape how
our expectations change,

what we consciously see.
And we've been particularly
interested in something

called the alpha rhythm.
Now the alpha rhythm
is an oscillation

of about 10 hertz or
10 cycles per second.

That's especially prominent
in the visual cortex,

across the back of the brain.
In one study, led in this case by a PhD student,
Maxine Sherman, with Ryota Kanai in Sussex,
what we did was manipulate people's expectations
of what they were likely to see.

And it was a very boring
experiment, this one.

The only thing they could see
was what we call Gabor patches.

They're just very dim
patches of lines that

are blurry around the edges.
But the visual system loves
these kinds of things.

They activate early visual
cortex very, very well.

And people were expecting
either that a patch

was there, or that it wasn't
there, in different conditions.

And while doing this we
measured brain activity.

And to cut a long story
short, what we found

was that there were certain
phases, certain parts

of the cycle, this
10 Hertz cycle,

at which your expectation
had a greater effect on what

you said that you saw.
So there was part of this cycle
of the alpha rhythm where
your expectations dominated
your perception.
And there was another
part of it which

was the opposite, where
your sensory signals were

more important in determining
what you saw at that point.

So this suggests
that this oscillation

in the back of the
brain is orchestrating

this exchange of predictions
and prediction errors.

And that's the
sort of cycle that

might be the neural mechanism
for conscious vision.
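One way to picture that orchestration, for readers of this transcript, is a mixing weight that sweeps with the phase of the 10 hertz cycle. This is purely a toy model, not the analysis from the study: expectations dominate at one phase, sensory evidence half a cycle later.

```python
import math

def percept(prior, sensory, phase):
    """Toy phase-dependent mixing: at phase 0 the top-down prior
    dominates; at phase pi (half a 10 Hz cycle, i.e. 50 ms later)
    the bottom-up sensory signal dominates."""
    w = 0.5 * (1.0 + math.cos(phase))   # weight on the prior: 1 -> 0 -> 1
    return w * prior + (1.0 - w) * sensory
```

At phase 0 the percept equals the prior, at phase pi it equals the sensory value, and in between it is a blend of the two.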

There are many other theories about
what the alpha rhythm is doing.

One is that it's doing nothing, that it's
just the brain idling away, and I think this is
at least a more interesting way
to think about it.
Another experiment we've
done, with another PhD student,
Asa Chang: we showed people these
very fast-changing luminance sequences.

And it turns out that your brain
will learn to predict the specific changes
in these sequences, which
change very quickly.

And the signature
of this learning,

again, happens to be
in the alpha rhythm,

and suggests that this
oscillation has something

important to do with how
the brain learns and updates

predictions about
sensory signals.

But we do not go around the
world looking at Gabor patches

or rapidly changing
things like this.

We go around the world
looking at people and objects.

And that's what our visual
world is composed of.

So can any of these
ideas say anything

about our everyday
visual experience?

And I think that's a very important
challenge for neuroscience to meet:
get out of the lab and think
about real-world experiences.

So we've been using virtual
reality over the last few years

to try to get at
some of these ideas.

This is an Oculus Rift, which is
now available to buy, I think.

And we've been using
these to address

some of these real world
aspects of visual perception.

And one of these
real world aspects

is called perceptual presence.
And this is the
observation that,

in normal perception, objects
really seem to be there,

as opposed to being
images of objects.

And this is, of
course, what Magritte

plays with in his
famous painting,

The Treachery of Images.
For instance, this is an object.
I think it's there,
and in some sense

I can perceive the back
of it, even though I

can't see the back of it,
even though the back of it's

not giving me any sensory
data, I perceive it

as an object with a back.
How does one explain that?
Well one idea you can come up
with within this Bayesian brain

framework is that,
the brain is not only

predicting the possible causes
of the sensory signals getting

right here, right now.
But it's also predicting
how sensory signals

would change were I to
make particular actions.

Were I to pick this object
up and move it around,

or just move my eyes from
one place to another.

There's a long paper I wrote about that,
which-- please don't read it.
[LAUGHTER]
- But that's the basic idea.
And how do you test
an idea like that?

So we've been using some
innovative virtual reality

methods, or augmented
reality methods,

with my post-doc,
Keisuke Suzuki.

And what we do is, we
have virtual objects.

They're all weird,
unfamiliar objects,

but they can either behave as
a normal object would behave,

so that you can learn to
predict what would happen.
This one is weird.
It always shows
you the same face,

a little bit like having
the moon on a plate

in front of you.
And then there are
other conditions

where objects respond
to your movements,

but they do so in
unreliable and strange ways.

So the question is,
what does the brain

learn about these objects,
and how do we experience them?

Do we experience them as
objects in different ways

when they behave differently?
And we're still doing
those experiments.

Another way we can use
VR, is to investigate

what happens in visual
hallucinations of the kind

experienced in psychosis,
and in certain other more

pharmacologically
induced conditions.

What we're doing here
is, we're coupling

immersive virtual
reality, imagine

you've got a headset
strapped to your head

again, with clever image
processing that models

the effects of overactive
priors on visual perception

to generate a highly
immersive experience.

This is Sussex campus,
actually, but now it

seems quite different than
it did at lunchtime today.

I'll tell you that.
What we've done is, we've
recorded this panoramic video,

which we can feed back
through the VR headset.

And we've processed
this video through one

of these Google deep dream
algorithms you might have seen,

that can generate
sort of bowls of pasta
that look like animals.
And this might seem
like a lot of fun.

It is fun, but there is
a serious purpose here,

because it allows us to model
very unusual forms of perception,
and how they might play out in different
ways in the visual hierarchy.

And understanding how visual
hallucinations might happen,
and the wider effects they have on the mind,
is, I think, a very important part
of studying visual perception.

So that brings us to the
second take-home message,

which is that what
we consciously see

is the brain's best guess of
the causes of its sensory input.

Normal perception is a fantasy
that is constrained by reality.

Now before I move on
to the last section,

I want to pay tribute to an
unlikely character in a talk

about neuroscience,
which is Ernst Gombrich.

Ernst Gombrich was one of
the foremost historians

of art of the 20th century.
And it turns out that Gombrich's
approach to understanding art
had a lot in common with these
ideas of the Bayesian brain.

And more specifically,
with the idea

that perception
is largely an act

of imagination, or construction,
on the part of the perceiver.

And this is most apparent in
his concept of the beholder's

share, which emphasises
that the viewer brings

an awful lot to the table in the
act of experiencing an artwork.

So he had this to say in his
1960 book, Art and Illusion,

"the artist gives the beholder
'more to do', he draws him

into the magic circle of
creation and allows him

to experience something of the
thrill of making which had once

been the privilege
of the artist."

I think for me this
is very powerful

when looking at, especially,
things like Impressionist art.

And here, one way
to think about this

is, that the artist
has reverse engineered

the whole perceptual
process, so that what's

there are not the objects,
the end points of perception,

but rather the raw materials;
the patterns of light that

engage our perceptual
machinery in doing its work.

And for me this might be
why paintings like this

are particularly powerful.
Now the final dimension
of consciousness

I want to talk about
is conscious self.

The fundamental experience
of being someone.

Being someone like you.
There are many aspects
to our experience

of being a conscious self.
There is the bodily
self, the experience

of being and identifying
with a particular body.

A bit of the world that goes
around with you all the time.
There's the perspectival
self, the experience

of seeing the world, or
experiencing the world,

from a particular first
person perspective,

usually somewhere in the
body, but not always.

There is the volitional
self, the experience
of intending to do things,
and of making things happen
in the world: the experience of agency.
And these ideas are, of
course, often associated

with concepts of will.
Then there's the narrative self.
Until now, we didn't have to worry
about the concept of 'I',
but when we get to the narrative
self, there is now an 'I'.

There is a continuity of self
experience from hour to hour,

from day to day,
from month to month,

and from year to year, that
you associate a name with,

and a particular set of
autobiographical memories.

And finally, there's
a social self.

The way I experience
being me is partly

dependent on the way I
perceive you as perceiving me.

I'm just going to talk,
in the minutes that remain,
about the bodily self.
This is something we're working
on quite a lot in Sussex.

The experience of
identifying with, and owning,

a particular body.
And the basic idea I want to
convey, is again, very simple.

It's just that we should think
of our experience of body

ownership in the
same way that we

think about our experience
of other things, as well.

That is, it's the brain's
best guess of the causes

of body-related signals.
And the brain is always
making this inference.

It's making its inference
about what in the world

is part of the body, and
what is not part of the body.

But it has access, in
this case, to other sorts

of sensory signals, not
just visual signals,

or tactile signals, but
also proprioceptive signals.

The orange arrows here.
These inform the brain about
the body's configuration

and position in space.
And then also, and
often overlooked,

are interoceptive signals.
These are signals that originate
from within the body, that
tell the brain about the physiological
condition of the body's internal milieu.
And the idea is that our experience
of embodied selfhood is the brain's best
guess of the causes of all
these signals put together.

Yeah, that's just to
emphasise interoception.

An important part
of this idea is

that interoception, the sense
of the body from within,

should work along the same
principles, the same Bayesian
principles that we've been applying
to vision and audition previously.
That is, our experience of
the inside of our bodies

is the brain's best guess of the
causes of the signals that come

from the inside of our bodies.
So we can think of, again,
top-down predictions

carrying predictions about
what the bodily state is like,

and bottom-up
prediction errors that

report the differences
between what's going on

and what the brain expects.
So what is our experience
of the inside of our bodies?

Well, way back at the
beginning of psychology,

William James and
Carl Lange proposed

that emotions,
emotional experience,

was really about the brain's
perception of changes

in its physiological state,
rather than perception

of the outside world.
So, in this classic
example, seeing a bear

does not in itself generate
the experience of fear.

Rather, seeing the
bear sets in train
a load of physiological changes,
preparing fight-or-flight responses.
And it's the perception
of those bodily changes

in the context of the
bear being around that leads

to our experience of fear.
So the Bayesian perspective
just generalises that idea,

and says that
emotional experience

is the brain's best
guess of the causes

of interoceptive signals.
And this fits very nicely
with a lot of evidence.

And this is just one study
done by a Finnish group.

And all they did here was,
they had people report
where on their bodies they felt
various emotions taking place.

And so you feel anxiety
in one part of your body.

You feel fear in
another, and so on.

So our experience
of emotion does

seem to be
intrinsically embodied.

Now another part of our
experience of being a body

is the body as a physical
object in the world.

And this might seem quite
easy to take for granted,

since our physical body
is just always there.

It goes around with us,
it changes over the years,

in unfortunate ways.
But it's always there.
But it would be a mistake to
take our experience of body

ownership for granted.
And there are some
classic experiments

that demonstrate how malleable
our experience of body

ownership is.
This is the famous
rubber hand experiment.

Probably some of
you have seen this.

What happens here is that a
volunteer has their hand hidden

under a table, and a fake
hand is put on top of the table,

and then both hands
are simultaneously

stroked with a paintbrush.
And it turns out that just
seeing a hand-like object where

a hand might be, feeling
touch, and then seeing

the object being touched,
is enough evidence

that the brain's best guess
becomes that the fake hand is,
in fact, part of my body.
Sort of part of my body.
This is what it looks
like in practise.

Here you can see the fake
hand, which they're focusing on.

There's the real hand,
which they're not focusing on.

Simultaneous stroking-- and there are
various ways you can test it.
[AUDIENCE LAUGHTER]
- I've found in doing this that it
works even better on children,
by the way.
So that's interesting,
because that's

using visual and tactile
signals to convince

the brain that this object
is part of its body.

In my lab, we've been
interested in whether these

signals that come from inside
the body also play a role.

So we set up a virtual reality
version of this rubber hand

illusion, where people
wear these goggles,

and they see a
virtual fake hand.

And we also record
their heartbeats.

And now what we can do is,
we can make the virtual hand

flash either in time or out
of time with their heart beat.

And we asked the
question, do people

experience this
virtual hand as more

belonging to them
when it's flashing

in time, rather than out of
time, with their heartbeat?

And the answer is that they do.
And this is just some data--
basically, that bar is bigger than
that one-- which means that, indeed,
they experience the hand as more their own.
The way we measure
that actually,

is that first we can ask them.
That's the easiest way.
Then we can also
ask them to point

to where they think
their hand really is,

and we can see
how far they drift

from where their hand really is
to where the virtual hand is.

And that provides a
more objective way

of measuring the
strength of the effect.

Here's what it looks
like in practise.

Again if you can see this,
that's the real hand.

That's somebody's virtual hand.
Again, imagine you're
wearing a headset

so you'll see this
in 3-D. And you

can just about see it pulsing
to red and back, I hope.

And you can also do
some other things

with these virtual
reality rubber hands

that you couldn't do
with real rubber hands.

For instance, you can map
movements of the real hand

to the virtual hand, so you
can start to ask questions

about the extent to which
the virtual hand moves

as I predict it to move.
How much does that affect
the extent to which I

feel it to be part of my body?
You can make it change colour.
So you can have somebody embody
a skin colour associated with

a cultural out-group, and see
if they become less racist

as a result.
And then my favourite is where
you can change, actually,

the size of the body.
And that's coming up here.
So here what we do is, we
can have the hand telescope

up and down in size.
And again, this might seem
like fun, and it is fun,

but there is a serious purpose.
There are various conditions.
There's in fact,
a condition called

Alice in Wonderland
Syndrome, where people report

that all parts of their body
are, indeed, telescoping

up and down in size.
And in a more subtle way, there
are lots of body dysmorphias,
subtle misperceptions
of body shape, which

might be associated
with eating disorders.

And so these sorts of
techniques allow us to approach,

in a very fine-grained
way, how people might

mis-perceive their own bodies.
That brings me to the third
take-home message about self.

And with apologies to
Descartes, the take home message

is that, I predict
myself, therefore I am.

In the last nine minutes,
before the bell rings,

I want to go full circle and
return to this Cartesian idea

of the beast machine.
To try to convince you that
our experience of being

a conscious self is intimately
tied up with our beast machine

nature.
And to do this, I need to
mention one final aspect
of perceptual inference,
which has a lot
to do with Karl Friston,
who's done a lot of work
on the Bayesian brain
at UCL here in London.

And if we think of the brain
as being in the business

of minimising prediction
errors, this can be done either

by updating our
perceptual predictions,

which is what I've been
talking about so far.

And this is what Helmholtz said.
Or we can minimise prediction
errors by making actions.

We can change what
we predict, or we

can make an action so that
our predictions come true.

You can change
your sensory input,

or you can change
what you believe

about your sensory input.
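For readers of this transcript, that choice can be captured in a toy sketch. Everything here is hypothetical: a scalar "world" state and a scalar belief, with the same prediction error driven to zero either by revising the belief or by acting on the world.

```python
def minimise_error(belief, world, by_action, step=0.5, n=50):
    """Shrink the prediction error (world - belief) in one of two ways:
    perceptual inference revises the belief to match the world;
    active inference changes the world to match the belief."""
    for _ in range(n):
        error = world - belief
        if by_action:
            world -= step * error    # act: make the prediction come true
        else:
            belief += step * error   # perceive: update the prediction
    return belief, world
```

Starting from a belief of 0.0 and a world state of 10.0, perception converges the belief to 10.0 while leaving the world alone; action drags the world down to 0.0 while the belief never moves. Either way, the prediction error ends up at zero.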
One point of doing this is,
that you can make actions,

then, to find out more about
the world that surrounds you.

And this is what
Helmholtz had in mind

when he says that
each movement we make,

by which we alter the
appearance of objects,

should be thought of as
an experiment designed

to test whether we've
understood correctly

the invariant relations of
the phenomena before us.

And Gregory, much later,
said something similar

when he talked about perception
as hypothesis testing.

The point of this
is, that we make

eye movements, and other
kinds of movements,

to understand what
the world is like.

That, in fact, there is a
tomato there, for instance.

But there's another way to
think about active inference,

which is that, when we
minimise prediction error,

what we're actually doing is
controlling a sensory variable.

We're preventing
it from changing,

because we're making our
prediction about what

it is come true.
And this is the use
of active inference,

to control or
regulate something,

rather than to understand what
the causes of that something

are.
And this brings a very
different tradition

to mind, which is 20th
century cybernetics.

And this is Ross Ashby,
who was a pioneer

of this way of thinking.
And he, with Roger Conant,
right at the end of his life,

wrote a paper.
The title of the paper,
was "Every good regulator

of a system must be a
model of that system".

The idea here is, if you want
to regulate something very

precisely, then you
need a good model

of what affects that system.
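Here's a toy illustration of Conant and Ashby's point for readers of this transcript, with entirely made-up numbers: a regulator holding "body temperature" at a set-point. When it has an accurate internal model of how the environment perturbs the body, it can cancel the disturbance; without one, the body drifts toward the environment.

```python
def final_temperature(outside_temps, has_model, setpoint=37.0, gain=0.1):
    """Each step, the environment pulls body temperature toward the
    outside temperature.  A regulator with a model of that disturbance
    predicts it and emits an equal-and-opposite control signal;
    without a model it does nothing."""
    body = setpoint
    for outside in outside_temps:
        disturbance = gain * (outside - body)
        control = -disturbance if has_model else 0.0
        body += disturbance + control
    return body
```

In a freezing environment (a long run of 0-degree steps), the model-equipped regulator holds the body at exactly 37.0; the model-free one lets it fall to nearly zero. The good regulator succeeds precisely because it embodies a model of the system it regulates.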
Now you could apply this idea
to the external world, as well.

When you try to
catch a cricket ball,

you are actually trying
to control the angle of the
ball above the horizon.
But it applies more
naturally, I think,

to the internal
state of our body.

So really, when it comes to my
internal physiological condition,
I don't need to know exactly
what it's like inside my body.

But I do need to control it, and
my brain needs to regulate it.

So this way of thinking
about active inference

applies more naturally
to interoception.

Think about it this way:
having good predictive
models is always useful,

but we can have a pendulum
that swings, on the one hand,

towards control.
We can use these predictive
models for control,

and that's more applied to the
state of our internal body.

Or we can swing the other way,
and think about perception,

understanding.
You could think of these as the
instrumental and epistemic ways
of thinking about the roles
of action and perception.

And this brings to mind--
I mentioned Karl Friston.
He's come up with this
thing called the free energy

principle.
And I can only nod
to the vast body
of work he's done on this.
The slogan is that
organisms, over the long run,
maintain themselves in the states
they expect to be in,
by virtue of having
good predictive models
of their own
internal condition.

So this takes us right
back to Descartes,

but in a very different way.
As I said right at the beginning
of this lecture, for Descartes

our physiological
reality was rather

irrelevant to our minds, our
rationality, our consciousness.

This is a quote from a 1968
paper on his beast machine

argument, "without minds to
direct their bodily movements,

animals must be regarded as
unthinking, unfeeling machines

that move like clockwork."
Now I think if you
take this idea of our
predictive models
controlling our internal
physiological states,
and the perceptual experiences that
this might give rise to, you
can make the opposite case.

And the opposite case would
be that conscious self-hood

emerges because of, and
not in spite of, the fact

that we are beast machines.
And I think this is a deeply
embodied view of consciousness

and self, and it speaks to this
fundamental link and continuity

between what it is to be alive,
what it is to have a mind,

and what it is to
be a conscious self.

So I repeat, the third
take-home message should

make even more sense now.
That I predict myself,
therefore I am.

And I am a conscious
self because I'm

a bag of self-fulfilling
predictions

about my own physiological
persistence over time.

Now why does any of this matter?
It's a lot of interesting
ideas, but why should we

be interested in
studying consciousness?

Well it's a very
interesting thing,

I hope I've convinced you.
But there are lots
of practical reasons

to be interested as well.
There are between 20,000 and
60,000 patients in the UK

alone who have
disorders of consciousness.

They are in the vegetative
state, or in a coma,

or in some other
severely abnormal state

of consciousness.
Having better measures
of conscious level,
as I described at
the beginning, is
really going to change how
we treat people like this.
And of course, in psychiatry.
Psychiatric disorders are
increasing in prevalence

across all modern
societies, and it's

estimated one in six
of us, at any one time,

are suffering from a
psychiatric condition.

And understanding the mechanisms
that underlie conscious content
and the conscious self can help us
understand the mechanisms involved
in psychiatric disorders,
not just the symptoms,

because a lot of psychiatric
disorders include disturbances
of the way we experience our body,
even though that might not
be the most obvious symptom.
There are also some
more general reasons

for studying consciousness,
which bring up

some ethical questions.
When does consciousness
emerge in development?
Are newborn babies conscious?
Or does consciousness
start even in the womb?

Maybe different dimensions
of consciousness

emerge at different times.
Are other animals conscious?
Well I think you can make a
very good case for mammals

and primates, but what
about the octopus?

The octopus has more
neurons in its arms

than in its central brain.
They're very smart creatures.
Here, you have to ask the
question not only, what

is it like to be an
octopus, but what is

it like to be an octopus arm?
And finally, with the rise
of artificial intelligence,

we should begin to ask
questions about what it would
take for a machine to have some
kind of subjective experience.

I don't think we're
anywhere near that yet,

but we should consider
what science can tell us

about its possibility, because
that would raise some very,

very tricky ethical questions.
But, fundamentally,
consciousness

remains fascinating for
me for the same reason

that it's motivated people
throughout the ages.

I mean, Hippocrates, the
founder of modern medicine,

put it one way.
He said, from the brain and
from the brain alone arise

our sorrows, our joys.
And he also had an early
view of psychiatry:
that madness comes
from the brain's moistness.

And then there's Francis Crick,
who I mentioned at the beginning,
and who in the 1990s gave birth,
if you like, to the modern
neuroscience of consciousness.
He said much the same thing
in his Astonishing Hypothesis.

But there is this
mystery and wonder still,

about how the biological
machinery inside our heads

gives rise to the rich
inner universe that

makes life worth living.
And despite this mystery, modern
science is making progress.

I hope I've given you a flavour
of how, even though we don't understand

how consciousness happens, we
can begin to understand its

mechanisms.
So we should not be afraid of
naturalising consciousness.

It's not a bad thing
to understand its basis

in the material world.
As so often in science,
with greater understanding

comes a larger sense of wonder,
and a greater realisation

that we are part of, and not
apart from, the rest of nature.

[AUDIENCE APPLAUSE]
The Neuroscience of Consciousness with Anil Seth

Josh published on October 2, 2018