OpenAI CEO asks to regulate the artificial intelligence sector

Washington – Sam Altman, the CEO of OpenAI, creator of ChatGPT, the world’s most widely used artificial intelligence (AI) platform, told US senators yesterday that regulating the technology is critical to limiting the risks it poses.

“We believe that regulatory intervention by governments will be crucial to mitigate the risks of increasingly powerful models,” Altman, 38, said at a hearing of the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law.

“It is essential that the most powerful AI be developed with democratic values in mind, which means that the leadership of the United States is decisive,” he said.

Yesterday was the CEO’s first appearance before that forum, at a time when the debate over AI regulation is intensifying, driven by a series of incidents demonstrating the technology’s power to create entirely credible fake news capable of manipulating public debate and electoral campaigns. Last week, Altman took part in a White House meeting on AI that discussed how to apply regulatory safeguards. Asked whether the companies agree on regulation, Altman told reporters: “We are surprisingly in agreement on what needs to happen.”

Christina Montgomery, IBM’s chief privacy officer, also appeared before the Upper House.

Goals

“Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Senator Richard Blumenthal, the panel’s chairman.

“This hearing kicks off the work of our subcommittee to oversee and illuminate the advanced algorithms and powerful technology of AI,” a technology capable of learning and developing new skills in ways that even specialists admit they do not understand.

The spread of images created entirely with AI, which show no formal flaws but are completely false and capable of distorting political debate, has caused alarm in recent times. One such case: a fake gallery of photos of Donald Trump, former president of the United States, being detained by the New York police circulated on social networks.

Those images were created in March with artificial intelligence by Eliot Higgins, founder of the investigative journalism platform Bellingcat, who explained on Twitter: “I was kidding, I thought maybe only five people would retweet them.” Still, the message was clear: AI’s power to produce “fake news” is enormous.

A few days ago, Trump himself posted on his Truth Social network a video with artificially generated audio in which a CNN anchor appears to say, in his own voice, things he never actually said.

Threats

Along these lines, an article published this week by the Associated Press pointed out that “technologists list a whole series of alarming scenarios in which generative AI could be used to create (…) digital content created or modified by algorithms in order to confuse voters, smear a candidate or even incite violence.”

Democratic debate and the integrity of electoral processes, many warn, are in danger.

“For example, automated messages in a candidate’s voice instructing his supposed supporters to vote on the wrong date, audio recordings of a candidate’s voice confessing to a crime or expressing racist views, video footage showing a politician giving a speech or an interview they never gave, or fake images designed to look like real reports from local media, falsely claiming that a candidate has dropped out of the race,” the article detailed.

“We are not prepared for what is coming,” warned AJ Nash, vice president of intelligence at the cybersecurity company ZeroFox, in the article. “For me, the real leap forward is the emergence of new audio and video capabilities. And when that can be produced at scale and distributed through the networks, the impact is multiplied,” he added.

Options

The regulatory options that are on the table are varied. Some proposals focus on AI that could endanger people’s lives, jobs or livelihoods, such as in medicine and finance. Other possibilities include rules to ensure that it is not used to discriminate or violate someone’s civil rights.

Another debate is whether to regulate the developer of the AI or the company that uses it to interact with consumers. OpenAI, the company behind the ChatGPT chatbot, has discussed creating an independent regulatory body. It is not clear which approaches will prevail, but some members of the business community, such as IBM and the US Chamber of Commerce, favor regulating only critical areas such as medical diagnostics, an approach they describe as risk-based.

The growing popularity of so-called generative AI, which uses data to create new content such as the human-like prose of ChatGPT, has sparked concerns that the rapidly evolving technology could enable cheating on exams, fuel misinformation and give rise to new types of scams.

As lawmakers catch up, the top priority for big tech companies is lobbying against a “premature overreaction,” said Adam Kovacevich, head of the Chamber of Progress, a pro-tech group.

Source: Ambito
