
Subtitles

  • It's an old idea. It's an idea that came from neuroscience.

  • Does a thought live in an individual neuron?

  • Or do all neurons in the human brain participate in all ideas?

  • And it's been very hard to test in humans because

  • you can't put a probe on every single neuron in the human brain.

  • In an artificial neural network, we have this luxury of being able to look at everything that's going on.

  • [music]

  • Our project is really asking the question:

  • "What is a neural network learning inside?"

  • And we study a specific kind of network called a GAN.

  • That's a generative adversarial network.

  • We tell it, "Imagine an image that you haven't seen that looks like these million other images."

  • [music]

  • The surprising result of the project is that

  • neural networks actually show evidence

  • of composition.

  • And so the question is, "How the heck is it doing it?"

  • [music]

  • If it's just memorizing, then it's approaching things the way we normally program computers to do things, right?

  • If it's composing, it's sort of a sign that it's thinking in a more human-like way,

  • that it's understanding the structure of the world.

  • [music]

  • But correlation is not the same as causation.

  • It could be that the neuron that correlates with trees

  • is actually what the neural network is using to think about the color green.

  • So how do we know the difference?

  • And just like those individual neurons that correspond to trees or doors,

  • we found that there are individual neurons that actually correlate with these visible bugs,

  • with these visible artifacts.

  • So that was really surprising to us.

  • Because not only is the network sifting through things, and sorting out things that make sense,

  • it's also sifting and assigning the things that don't make sense to their own variables as well.

  • And so it was really surprising to us that we could go into a neural network

  • and do a certain type of brain damage, right?

  • Basically perform a lobotomy on these twenty neurons,

  • and instead of doing damage to the network, we actually got the network to perform better.

  • And so why is it that a network actually has neurons in it that cause problems?

  • Are mistakes an important part of learning?

  • It's one of the mysteries that we uncovered. We don't know the answer to that.

  • But I think that there are more profound reasons to be interested in this

  • beyond the ancient puzzle of, "How does thinking work?" and, "How do humans work?"

  • Because we're also using these AIs to build our future world, to build our future societies,

  • and it's important that

  • we are able to understand, anticipate, and control the world that we create,

  • and as long as we don't really understand what rules they're applying inside,

  • we're not going to be able to do that.

  • And so, I think, I don't know, I think it's the most important thing in the world to study this kind of thing.

  • [music]
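The ablation experiment described above — zeroing out a handful of "neurons" and regenerating the image — can be sketched in a few lines. In work on dissecting GANs, a "neuron" is typically one channel of a convolutional feature map inside the generator, and ablating it means clamping that channel to zero before the later layers run. A minimal NumPy illustration of that idea (the tensor shapes and the choice of channels here are illustrative, not taken from any actual network):

```python
import numpy as np

def ablate_channels(activations, channels):
    """Zero out selected feature-map channels ("neurons").

    `activations` has shape (num_channels, height, width), like one
    layer's output in a convolutional generator. Returns a copy with
    the chosen channels clamped to zero, leaving the rest untouched.
    """
    out = activations.copy()
    out[list(channels)] = 0.0
    return out

# Toy activation tensor: 8 channels of 4x4 feature maps.
rng = np.random.default_rng(0)
acts = rng.normal(size=(8, 4, 4))

# "Lobotomize" two channels, as in the twenty-neuron ablation above.
ablated = ablate_channels(acts, [2, 5])
```

In the real experiment, the ablated activations would be fed through the remaining generator layers, and the output images compared before and after — which is how one can observe that removing artifact-correlated units makes the generated images look better rather than worse.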

