Four artificial intelligence experts have raised concerns after their work was cited in an open letter, co-signed by Elon Musk among others, demanding an urgent pause in AI research.
The letter, dated March 22 and with more than 1,800 signatures as of Friday, called for a six-month break in developing systems "more powerful" than OpenAI's new GPT-4, backed by Microsoft, which can hold human-like conversations, compose songs, and summarize long documents.
Since the release last year of ChatGPT, the predecessor of GPT-4, rival companies have rushed to launch similar products.
The open letter states that AI systems with "human-competitive intelligence" pose profound risks to humanity, and cites 12 pieces of research by experts, including university academics as well as current and former employees of OpenAI, Google, and its subsidiary DeepMind.
US and EU civil society groups have since lobbied lawmakers to rein in OpenAI's research. OpenAI did not immediately respond to requests for comment.
Critics have accused the Future of Life Institute (FLI), the organization behind the letter, which is funded primarily by the Musk Foundation, of prioritizing imagined doomsday scenarios over more immediate concerns about AI, such as racist or sexist biases.
Among the research cited is "On the Dangers of Stochastic Parrots," a paper co-authored by Margaret Mitchell, who previously oversaw AI ethics research at Google.
Mitchell, now chief ethics scientist at AI firm Hugging Face, criticized the letter, telling Reuters it was unclear what counted as "more powerful than GPT-4."
"By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI," she said. "Ignoring active harms right now is a privilege that some of us don't have."
Mitchell and her co-authors, Timnit Gebru, Emily M. Bender, and Angelina McMillan-Major, subsequently published a response to the letter, accusing its authors of "fearmongering and AI hype."
"It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a 'flourishing' or 'potentially catastrophic' future," they wrote. "Accountability properly lies not with the artifacts but with their builders."
FLI's president, Max Tegmark, told Reuters the campaign was not an attempt to hamper OpenAI's corporate advantage.
"It's quite hilarious. I've seen people say, 'Elon Musk is trying to slow down the competition,'" he said, adding that Musk had no role in drafting the letter. "This is not about one company."
Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, told Reuters she agreed with some points in the letter, but took issue with the way her work was cited.
Last year, Dori-Hacohen co-authored a study arguing that the widespread use of AI already posed serious risks. Her research held that present-day use of AI systems could influence decision-making on climate change, nuclear war, and other existential threats.
"AI does not need to reach human-level intelligence to exacerbate those risks," she said. "There are non-existential risks that are really, really important, but don't receive the same kind of Hollywood-level attention."
Asked about the criticism, FLI’s Tegmark said that both the short- and long-term risks of AI should be taken seriously. “If we quote someone, it just means we’re saying they endorse that sentence. It doesn’t mean they’re endorsing the letter, or that we endorse everything they think,” he told Reuters.
Dan Hendrycks, director of the California-based Center for AI Safety, who was also cited in the letter, defended its content, telling Reuters it was sensible to take black swan events into account: those that seem improbable but would have devastating consequences.
The open letter also warned that generative AI tools could be used to flood the Internet with "propaganda and untruth."
Dori-Hacohen said it was ironic that Musk had signed it, citing a rise in misinformation on Twitter following his acquisition of the platform, documented by the civil society group Common Cause and others.
Musk and Twitter did not immediately respond to requests for comment.
Source: Ambito