
  • Intelligence is what gives us power over the world.

  • If we're going to make entities, machines that are more intelligent than us, they would be more powerful than human beings.

  • And so then the question is:

  • How do we retain power forever over entities that are more powerful than ourselves?

  • About five years ago, what had previously been a very obscure branch of AI technology, suddenly began to take off.

  • These developments represent a tipping point in the history of the field.

  • And they're giving people now, in a very real sense, a preview of what it would be like if we had artificial general intelligence without regulation to ensure that those systems are safe.

  • We may well lose control over our own future.

  • My name is Stuart Russell.

  • I work in the area of artificial intelligence, and have been doing so for about 45 years.

  • As soon as we had working computers, the people who developed them wanted to make those machines intelligent.

  • And then about five years ago, language models suddenly began to take off.

  • So a "language model" is a predictor that says, "Given a sequence of words, what's the next word likely to be?"

  • The latest model, GPT-4 from OpenAI, is a 32,768-gram model, meaning it predicts the next word from the preceding 32,767 words.

  • So that's an enormous model.
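
To make the prediction game concrete, here is a minimal n-gram sketch in Python, using a bigram model over a tiny invented corpus. The corpus and the counting scheme are illustrative assumptions; GPT-4 is a neural network, not a count table, but the interface is the same: preceding words in, next-word prediction out.

```python
from collections import Counter, defaultdict

# Toy bigram (2-gram) model: predict the next word from the 1 preceding word.
# The corpus here is a made-up assumption; GPT-4 plays the same prediction
# game but with a neural network and a context of 32,767 preceding words.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    following = counts[word]
    best, n = following.most_common(1)[0]
    return best, n / sum(following.values())

print(predict_next("the"))  # ('cat', 0.666...): "the" -> "cat" twice, "mat" once
```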

  • All of a sudden, these language models went from being a little, niche technology to something that appears to be extremely intelligent.

  • You can ask it to draw pictures.

  • You can ask it to write code.

  • You can say, "I've forgotten the proof of Pythagoras' theorem, but I'd love you to give it to me in the form of a Shakespeare sonnet," and it will do that.

  • The interesting question is: Is it intelligence?

  • Has the system built an internal representation of the world?

  • And the answer is: We haven't the faintest idea.

  • A lot of people ask me, "Should we be worried about these systems?"

  • I think it's unlikely that the current generation of models represents a real threat to human control.

  • I don't wanna come across as a naysayer or a Luddite, but I think we could face serious risks:

  • When you release a system, you should provide convincing evidence that it's going to behave itself,

  • that it's not going to put people at risk by giving them harmful medical advice or advising them to commit suicide.

  • The threat that most people recognize is that something that can outthink human beings,

  • what some people call 'AGI' or Artificial General Intelligence,

  • would clearly represent a threat to humanity if we couldn't figure out how to solve the control problem.

  • The way we develop AI systems, we specify an objective, and off it goes.

  • If the system knows the objective, it's gonna pursue it at all costs.

  • What happens if we specify the objective incorrectly?

  • Imagine, for example, that an AI system is helping us figure out how to fix climate change.

  • One of the consequences we don't like is the acidification of the oceans.

  • The AI system figures out a catalytic reaction that de-acidifies the oceans, but that reaction starts to absorb oxygen from the atmosphere, and that's enough for us all to die slowly and painfully.
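
Here is a toy Python sketch of that failure mode; the plan names, numbers, and scoring are all hypothetical. An optimizer handed a single objective maximizes it literally and happily trades away anything the objective forgot to mention.

```python
# Hypothetical illustration of a misspecified objective. "ocean_ph_gain" is
# the stated goal; "oxygen_loss" is a side effect the objective never
# mentions, so the optimizer is free to trade it away. All numbers invented.
candidate_plans = [
    {"name": "slow buffering", "ocean_ph_gain": 0.2, "oxygen_loss": 0.00},
    {"name": "catalytic fix",  "ocean_ph_gain": 0.9, "oxygen_loss": 0.25},
    {"name": "algae seeding",  "ocean_ph_gain": 0.5, "oxygen_loss": 0.02},
]

def objective(plan):
    # The objective mentions ONLY ocean pH; nothing penalizes oxygen loss.
    return plan["ocean_ph_gain"]

best = max(candidate_plans, key=objective)
print(f"{best['name']}: loses {best['oxygen_loss']:.0%} of atmospheric oxygen")
# -> catalytic fix: loses 25% of atmospheric oxygen
```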

  • If we change the way we think about AI so that its only goal is to bring about the futures that we humans prefer,

  • we get this very different kind of behavior: behavior that defers to humans, that asks permission, that is cautious.

  • It'll ask permission before messing with the oceans.

  • It'll say, "Is it okay to get rid of a quarter of the oxygen in the atmosphere?"

  • And we would say, "Ah, no, don't like that."
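
Here is a rough Python sketch of that deferential behavior; the approval model and the 0.9 threshold are invented for illustration and are not Russell's actual formulation. Instead of executing its best plan outright, the agent checks how confident it is that humans would approve, and asks permission when it is not.

```python
# Illustrative sketch of a deferential agent: uncertain about human
# preferences, it asks permission before high-impact actions rather than
# pursuing its objective at all costs. Model and threshold are invented.

def approval_probability(plan):
    # Crude, assumed preference model: the bigger the side effect, the less
    # likely the agent believes humans would approve.
    return max(0.0, 1.0 - 3.0 * plan["oxygen_loss"])

def act(plan, ask_human):
    if approval_probability(plan) < 0.9:  # not confident enough: defer
        if not ask_human(plan):
            return "plan abandoned"
    return f"executing {plan['name']}"

def human(plan):
    print(f"Is it okay to get rid of {plan['oxygen_loss']:.0%} of the oxygen?")
    return False  # "Ah, no, don't like that."

print(act({"name": "catalytic fix", "oxygen_loss": 0.25}, human))
# The agent asks first, hears no, and abandons the plan.
```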

  • We have to start designing standards to make sure that whenever you design AI systems and release them into the world, they conform to this basic design template.

  • So what I'm working on is a new way of thinking about AI that is not susceptible to this problem.

  • We're going to need regulation.

  • In fact, I think it's long past the time when we need some regulation.

  • If we do, we really could do marvelous things.

  • We could greatly accelerate the rate of scientific progress.

  • We could have much better healthcare.

  • Maybe we could have better politics.

  • So I don't want to say we should cut off AI research at this stage.

  • We accept these types of regulations in many, many other spheres.

  • For example, in aviation, you can develop supersonic aircraft, but you can't put passengers in them until you've shown that they are safe.

  • We're still at the very early stages of figuring out how to do this type of regulation, but now that AI systems are really quite powerful,

  • now that they can talk to someone for days on end, we need to have guarantees that the systems don't pose an undue risk.
