With the rise of ChatGPT, Bard and other large language models (LLMs), we've been hearing warnings from industry figures like Elon Musk about the risks posed by artificial intelligence (AI). Now, a group of high-profile industry leaders has issued a one-sentence statement effectively echoing those fears.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
It was posted on the website of the Center for AI Safety, an organization with the stated mission "to reduce societal-scale risks from artificial intelligence." The signatories are a who's who of the AI industry, including OpenAI chief executive Sam Altman and Google DeepMind head Demis Hassabis. Turing Award-winning researchers Geoffrey Hinton and Yoshua Bengio, considered by many to be the godfathers of modern AI, also put their names to it.
It's the second such statement in the past few months. In March, Musk, Steve Wozniak and more than 1,000 others called for a six-month pause on AI development to give the industry and the public a chance to catch up with the technology. "Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control," the letter states.
Though AI is likely not self-aware, as some have feared, it already presents risks of misuse and harm through deepfakes, automated disinformation and more. LLMs could also change the way content, art and literature are produced, potentially affecting numerous jobs.
US President Joe Biden recently said "it remains to be seen" whether AI is dangerous, adding, "tech companies have a responsibility, in my view, to make sure their products are safe before making them public… AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security." In a recent White House meeting, Altman called for regulation of AI due to its potential risks.
With so many opinions floating around, the new, brief statement is meant to show a common concern about AI risks, even if the parties don't agree on what those risks are.
“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI,” a preamble to the statement reads. “Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”
Source: www.engadget.com