Biden initiates round of dialogue to regulate the development of artificial intelligence

San Francisco – The White House convened executives of leading artificial intelligence (AI) companies, including Google, Microsoft, OpenAI and Anthropic, to open a discussion about the risks associated with this technology. Senior administration officials attended the meeting, held yesterday, including Vice President Kamala Harris.

“Our goal is to have a frank discussion about the current and near-term risks we perceive in AI developments,” the invitation said.

The Joe Biden administration is also seeking to discuss “steps to reduce those risks, and other ways we can work together to make sure Americans benefit from AI advances while being protected from harm.”

Satya Nadella (Microsoft), Sundar Pichai (Google), Sam Altman (OpenAI) and Dario Amodei (Anthropic) attended the meeting, the White House said.

Artificial intelligence has been a part of everyday life for years, from social media recommendation algorithms to high-end home appliances. Yet the dazzling success since late last year of ChatGPT, the generative AI interface from OpenAI, a startup heavily funded by Microsoft, was the starting point for a race toward ever more intuitive and efficient systems capable of generating increasingly complex texts, images and programming code.

Its launch sparked excitement and concern on a new scale, especially after Sam Altman, the director of OpenAI, anticipated the next generation of so-called “general” AI, in which programs will be “smarter than humans in general.”

AI risks range from algorithmic discrimination to the automation of tasks performed by humans, theft of intellectual property and sophisticated large-scale misinformation, among others. What is more, those responsible for ChatGPT have themselves admitted that they do not fully understand how these systems evolve and learn, which, in an extreme case, could lead them to become independent, create their own AI and carry out malicious acts on the network, as has emerged from disturbing conversations specialists have had with the chatbots.

“Language models capable of generating images, sound, and video are a dream come true for those who want to destroy democracies,” said UC Berkeley professor David Harris, an AI and public policy specialist.

A problem frequently mentioned by specialists is that it is becoming impossible to distinguish reality from fiction in photos, audio and video created entirely with these tools.

In late 2022, the White House released a “Blueprint for an AI Bill of Rights,” a short document that lists general principles like guarding against dangerous or fallible systems.

Earlier this year, the National Institute of Standards and Technology (NIST), a government-affiliated center, designed a “risk management framework” on the matter.

Biden said last month that these companies “clearly need to make sure their products are safe before making them available to the general public.” However, “these guidelines and statements do not oblige affected companies to do anything,” said David Harris, who was director of AI research at Meta.

On the other side of the Atlantic, Europe hopes to once again lead the way toward dedicated AI regulation, as it already did with personal data law.

It should be remembered that Elon Musk and hundreds of world experts recently signed an open letter calling for a six-month pause in artificial intelligence research, warning of “great risks for humanity.”

In the petition, posted on the futureoflife.org site, they called for a moratorium until safety systems are put in place, including new regulatory authorities, surveillance of AI systems, techniques to help distinguish the real from the artificial, and institutions capable of dealing with the “dramatic economic and political disruption, especially for democracy, that AI will cause.”

The AI giants do not deny that there are risks, but they fear that innovation will be stifled by overly restrictive regulations.

“I am sure that AI will be used by malicious actors, and yes, it will cause damage,” Microsoft chief economist Michael Schwarz said during a panel at the World Economic Forum in Geneva, according to Bloomberg. However, he called on lawmakers not to rush, and to regulate only once there is “real harm,” ensuring that “the benefits of regulation outweigh the price to society.”

Source: Ambito
