Placeholder Image

Subtitles section Play video

  • The AI arms race is on, and it seems nothing can slow it down.

  • Google says it's launching its own artificial-intelligence-powered chatbot to rival ChatGPT.

  • Too much AI too fast.

  • It feels like every week, some new AI product is coming onto the scene and doing things never remotely thought possible.

  • We're in a really unprecedented period in the history of artificial intelligence.

  • It's really important to note that it's unpredictable how capable these models are as we scale them up.

  • And that's led to a fierce debate about the safety of this technology.

  • We need a wake-up call here.

  • We have a perfect storm of corporate irresponsibility, widespread adoption of these new tools, a lack of regulation, and a huge number of unknowns.

  • Some researchers are concerned that, as these models get bigger and better, they might, one day, pose catastrophic risks to society.

  • So, how could AI go wrong, and what can we do to avoid disaster?

  • So, there are several risks posed by these large language models.

  • [Large language model: An artificial intelligence tool which uses large data sets to process and generate new content.]

  • One class of risk is not all that different from the risk posed by previous technologies like the internet, social media, for example.

  • There's a risk of misinformation, 'cause you could ask the model to say something that's not true, but in a very sophisticated way and posted all over social media.

  • There's a risk of bias, so they might spew harmful content about people of certain classes.

  • Some researchers are concerned that, as these models get bigger and better, they might, one day, pose catastrophic risks to society.

  • For example, you might ask a model to produce something from a factory setting that it requires a lot of energy for.

  • And, in service of that goal of helping your factory production, it might not realize that it's bad to hack into energy systems that are connected to the internet.

  • And because it's super smart, it can get around our security defenses, hacks into all these energy systems, and that could cause, you know, serious problems.

  • Perhaps a bigger source of concern might be the fact that bad actors just misuse these models.

  • For example, terrorist organizations might use large language models to, you know, hack into government websites

  • or produce biochemicals by using the models to, kind of, discover or design new drugs.

  • You might think most of the catastrophic risks we've discussed are a bit unrealistic, and, for the most part, that's probably true.

  • But one way we could get into a very strange world is if the next generation of big models learned how to self-improve.

  • One way this could happen is if we told, you know, a really advanced machine learning model to develop, you know, an even better, more efficient machine learning model.

  • If that were to occur, you might get into some kind of loop where models continue to get more efficient and better,

  • and then that could lead to even more unpredictable consequences.

  • There are several techniques that labs use to, you know, make their models safer.

  • The most notable is called "reinforcement learning from human feedback", or RLHFs.

  • The way this works is, labelers are asked to prompt models with various questions, and if the output is unsafe, they tell the model,

  • and the model is then updated so that it won't do something bad like that in the future.

  • Another technique is called "red teaming", throwing the model into a bunch of tests and then seeing if you can find weaknesses in it.

  • These types of techniques have worked reasonably well so far,

  • but, in the future, it's not guaranteed these techniques will always work.

  • Some researchers worry that models may eventually recognize that they're being red-teamed,

  • and they, of course, want to produce output that satisfies their prompts, so they will do so,

  • but then, once they're in a different environment, they could behave unpredictably.

  • So, there is a role for society to play here.

  • One proposal is to have some kind of standards body that sets, you know, tests that the various labs need to pass before they receive some kind of certification that, hey, this lab is safe.

  • Another priority for governments is to invest a lot more money into research on how to understand these models under the hood and make them even safer.

  • You can imagine a body like, you know, a CERN that, that lives currently in Geneva, Switzerland for physics research,

  • something like that being created for AI safety research, so, we can try to understand them better.

  • For all these risks, artificial intelligence also comes with tremendous promise.

  • Any task that requires a lot of intelligence could potentially be helped by these types of models.

  • For example, developing new drugs, personalized education, or even coming up with new types of climate change technologies.

  • So, the possibilities here truly are endless.

  • So, if you'd like to read more about the risks of artificial intelligence and how to think about them,

  • please click the link and don't forget to subscribe.

The AI arms race is on, and it seems nothing can slow it down.

Subtitles and vocabulary

Click the word to look it up Click the word to find further inforamtion about it