AI experts sign doc comparing risk of ‘extinction from AI’ to pandemics, nuclear war

31 May 2023

Cointelegraph By Tristan Greene

The “Godfather of AI” and the CEOs of OpenAI, Google DeepMind and Anthropic are among the hundreds of signatories.


Dozens of artificial intelligence (AI) experts, including the CEOs of OpenAI, Google DeepMind and Anthropic, recently signed an open statement published by the Center for AI Safety (CAIS).

We just put out a statement:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Signatories include Hinton, Bengio, Altman, Hassabis, Song, etc. https://t.co/N9f6hs4bpa

(1/6)

— Dan Hendrycks (@DanHendrycks)

May 30, 2023

The statement contains a single sentence:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Among the document’s signatories are a veritable “who’s who” of AI luminaries, including the “Godfather” of AI, Geoffrey Hinton; University of California, Berkeley’s Stuart Russell; and Massachusetts Institute of Technology’s Lex Fridman. Musician Grimes is also a signatory, listed under the “other notable figures” category.

Related: Musician Grimes willing to ‘split 50% royalties’ with AI-generated music

While the statement may appear innocuous on the surface, the underlying message is a somewhat controversial one in the AI community.

A seemingly growing number of experts believe that current technologies may, or inevitably will, lead to the development of an AI system capable of posing an existential threat to the human species.

Their views, however, are countered by a contingent of experts with diametrically opposed opinions. Meta chief AI scientist Yann LeCun, for example, has noted on numerous occasions that he doesn’t necessarily believe that AI will become uncontrollable.

Super-human AI is nowhere near the top of the list of existential risks. In large part because it doesn't exist yet.

Until we have a basic design for even dog-level AI (let alone human level), discussing how to make it safe is premature. https://t.co/ClkZxfofV9

— Yann LeCun (@ylecun)

May 30, 2023

To LeCun and others who disagree with the “extinction” rhetoric, such as Andrew Ng, co-founder of Google Brain and former chief scientist at Baidu, AI isn’t the problem; it’s the answer.

On the other side of the argument, experts such as Hinton and Conjecture CEO Connor Leahy believe that human-level AI is inevitable and, as such, the time to act is now.

Heads of all major AI labs signed this letter explicitly acknowledging the risk of extinction from AGI.

An incredible step forward, congratulations to Dan for his incredible work putting this together and thank you to every signatory for doing their part for a better future! https://t.co/KDkqWvdJcH

— Connor Leahy (@NPCollapse)

May 30, 2023

It is, however, unclear what actions the statement’s signatories are calling for. The CEOs or heads of AI at nearly every major AI company, along with renowned scientists from across academia, are among those who signed, making it clear the intent isn’t to halt the development of these potentially dangerous systems.

Earlier this month, OpenAI CEO Sam Altman, one of the above-mentioned statement’s signatories, made his first appearance before Congress during a Senate hearing to discuss AI regulation. His testimony made headlines after he spent the majority of it urging lawmakers to regulate his industry.

Altman’s Worldcoin, a project combining cryptocurrency and proof-of-personhood, has also recently made the media rounds after raising $115 million in Series C funding, bringing its total funding after three rounds to $240 million.

  
