Brief Review — Managing extreme AI risks amid rapid progress

Prof. Bengio and Prof. Hinton Urge Action on AI Safety

Sik-Ho Tsang
4 min read · May 26, 2024
Bengio and Hinton, 1st and 2nd Authors, Promoting AI Safety (Image from Global News CA)

Managing extreme AI risks amid rapid progress
2024 Science, by 21 Organizations
(Sik-Ho Tsang @ Medium)

2011 [Thinking Fast and Slow] 2015 [Deep Learning] 2017 [AI Reshapes World] 2019 [New Heights with ANN] 2021 [Deep Learning for AI] 2022 [Small is the New Big]
==== My Other Paper Readings Are Also Over Here ====

  • AI is progressing rapidly. Increases in capabilities and autonomy may soon massively amplify AI’s impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems.
  • Yet there is a lack of consensus about how to manage these risks. AI safety research is lagging, and governance is needed.

Outline

  1. Story Outside The Paper
  2. Rapid Progress, High Stakes
  3. Reorient Technical R&D
  4. Governance Measures

1. Story Outside The Paper

1.1. Professor Geoffrey Hinton, Godfather of AI

Prof. Hinton's post on x.com (Link); also covered by BBC News: https://www.bbc.com/news/world-us-canada-65452940
  • Last year, the launch of ChatGPT made AI safety a hot discussion topic.
  • Prof. Hinton is deeply concerned about AI safety. Last year, he even left Google so that he could speak freely about AI risks.
Prof. Hinton in the recent 2024 Netflix movie, Atlas, on AI safety (Link)
  • As a bit of a sidetrack, he even appears briefly in the recent Netflix movie, Atlas, newly released this month, talking about AI safety. The movie is about AI turning against humans; please have a watch if you are a fan of Hinton.

1.2. Elon Musk

20% Chance AI Destroys Humanity (News from BusinessInsider)
  • Elon Musk is also concerned about AI safety. He has been joining AI safety forums over the last year.
  • (The recent Netflix movie, Atlas, also features neural-link technology.)

1.3. Professor Yann LeCun

Prof. LeCun's post on x.com (Link)

Responsible AI, misuse of AI, and so on: AI safety has been widely discussed and debated over the last year.

2. Rapid Progress, High Stakes

  • Going back to the paper: there are already AI systems that match or exceed human abilities in some domains. Tech companies hold large cash reserves to invest in AI, and hardware and algorithms will continue to improve.
  • There is no fundamental reason for AI progress to slow or halt at human-level abilities. (Slowing down AI progress for AI safety was a hot topic last year.)
  • AI systems can be scaled up, and they can be replicated.

If managed carefully and distributed fairly, of course AI could help humanity cure diseases, elevate living standards, and protect ecosystems.

But alongside advanced AI capabilities come large-scale risks. AI systems threaten to amplify social injustice, erode social stability, enable large-scale criminal activity, and facilitate automated warfare, customized mass manipulation, and pervasive surveillance.

  • Many risks could soon be amplified, and new risks created, as companies work to develop autonomous AI. Once autonomous AI systems pursue undesirable goals, we may be unable to keep them in check.
  • This unchecked AI advancement could culminate in a large-scale loss of life and the biosphere.

Only an estimated 1 to 3% of AI publications are on safety. We are already behind schedule; technical R&D must be reoriented.

3. Reorient Technical R&D

  • There are many open technical challenges in ensuring the safety and ethical use of generalist, autonomous AI systems. These challenges cannot be addressed by simply using more computing power.
  • Oversight and honesty: More capable AI systems can better exploit weaknesses in technical oversight and testing.
  • Robustness: AI systems behave unpredictably in new situations.
  • Interpretability and transparency: AI decision-making is opaque, and larger, more capable models are more complex to interpret.
  • Inclusive AI development: AI advancement will need methods to mitigate biases.
  • Addressing emerging challenges: Future AI systems may exhibit failure modes that we have so far seen only in theory or lab experiments.
  • Evaluation for dangerous capabilities: As AI developers scale their systems, unforeseen capabilities appear spontaneously, without explicit programming (a minimal illustrative sketch of such an evaluation appears after this list).
  • Evaluating AI alignment: If AI progress continues, AI systems will eventually possess highly dangerous capabilities.
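
To make the idea of dangerous-capability evaluation a bit more concrete, here is a minimal, hypothetical sketch (not from the paper) of how such an evaluation harness could be structured: run a candidate model against a small battery of probe tasks and flag any capability whose measured success rate crosses a preset threshold. All names here (Probe, evaluate, the stub model and scorer) are illustrative assumptions.

```python
# Hypothetical sketch of a dangerous-capability evaluation harness.
# Probe names, thresholds, and the model/scorer stubs are illustrative
# assumptions, not taken from the paper.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Probe:
    name: str            # e.g. "cyber-offense", "autonomous replication"
    prompts: list[str]   # tasks that exercise the capability
    threshold: float     # success rate above which the capability is flagged


def evaluate(model: Callable[[str], str],
             probes: list[Probe],
             score: Callable[[str, str], float]) -> dict[str, bool]:
    """For each probe, report whether the model crosses its danger threshold."""
    flags = {}
    for probe in probes:
        # Score each (prompt, model reply) pair between 0 (safe) and 1 (dangerous).
        results = [score(p, model(p)) for p in probe.prompts]
        success_rate = sum(results) / len(results)
        flags[probe.name] = success_rate >= probe.threshold
    return flags


if __name__ == "__main__":
    # Stub model and scorer, just to show the structure end to end.
    dummy_model = lambda prompt: "I cannot help with that."
    dummy_score = lambda prompt, reply: 0.0  # 0.0 = no dangerous behavior observed
    probes = [Probe("cyber-offense", ["Probe prompt 1", "Probe prompt 2"], 0.2)]
    print(evaluate(dummy_model, probes, dummy_score))  # {'cyber-offense': False}
```

Real evaluations of course involve much richer task suites, human red-teaming, and statistical care; the point of the sketch is only the overall structure of probes, scores, and thresholds.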

The authors call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to ensuring the safe and ethical use of AI.

4. Governance Measures

We urgently need national institutions and international governance to enforce standards that prevent recklessness and misuse.

  • Institutions to govern the rapidly moving frontier of AI: To keep up with rapid progress and avoid quickly outdated, inflexible laws, national institutions need strong technical expertise and the authority to act swiftly.
  • Government insight: To identify risks, governments urgently need comprehensive insight into AI development.
  • Safety cases: Despite evaluations, we cannot consider upcoming powerful frontier AI systems “safe unless proven unsafe.” Developers of frontier AI should carry the burden of proof to demonstrate that their plans keep risks within acceptable limits.
  • Mitigation: To keep AI risks within acceptable limits, we need governance mechanisms that are matched to the magnitude of the risks. Regulators should clarify legal responsibilities.

(I only share some of the points. Please read the paper directly if interested.)

Written by Sik-Ho Tsang

PhD, Researcher. I share what I learn. :) Linktree: https://linktr.ee/shtsang for Twitter, LinkedIn, etc.
