Artificial Intelligence: Summit in England: How dangerous will AI become?

Economics Minister Habeck is discussing artificial intelligence with other top politicians in England. Great Britain is hosting the summit at the site where codebreakers once cracked the Nazis' encryption.

During the Second World War, British codebreakers deciphered messages from enemy states at Bletchley Park, north of London. The British government is now holding an international summit there on the dangers of artificial intelligence. Federal Economics Minister Robert Habeck, US Vice President Kamala Harris and other politicians are meeting today, along with representatives of several companies, including tech billionaire Elon Musk.

What is the summit about?

British Prime Minister Rishi Sunak announced that he wants to discuss the dangers of new technologies. He compared the future impact of artificial intelligence (AI) on the world to the Industrial Revolution, the discovery of electricity or the invention of the Internet. Sunak would like to discuss possible risks with states and companies. At the same time, he is using the occasion to promote Great Britain as a business location.

How dangerous is artificial intelligence?

Sunak’s government warns of various scenarios in a paper. It could become easier for dangerous groups to commit fraud, plan cyberattacks or influence society with false information. In principle, it could become easier for terrorists to develop biological or chemical weapons, but they would still have to get the necessary substances. In the next 18 months, AI will increase existing risks, but will not represent completely new threat scenarios.

Could computers wipe out humanity?

In the extreme case, according to Sunak at least, there is a risk that humanity loses control to a kind of artificial superintelligence. That need not cost anyone sleep; some experts believe it will never happen. Still: "But no matter how uncertain and unlikely these risks are: if they were to occur, the consequences would be extremely serious."

Some researchers believe it is misleading to focus too much on existential risk. It is a “dangerous distraction from the discussions we need to have about the regulation of AI,” said Mhairi Aitken from the Alan Turing Institute to the “Politico” portal.

Sasha Costanza-Chock from the Berkman Klein Center at Harvard University and the Algorithmic Justice League sees it similarly: "We have to repair the damage that has already been done." The real story is not that AI could one day kill us, but that people and institutions are already using AI to cause harm at this very moment. Critics see open questions around data protection, copyright and the discrimination against people through algorithms. Some also criticize the working conditions of the people who train AI systems.

Some say companies urgently need to be held more accountable. When social media emerged, there was relatively little government oversight, warned Andrew Rogoyski from the University of Surrey at the British Conservative Party conference in Manchester; this had negative consequences for mental health. That mistake, he said, should not be repeated.

Are there no rules?

Politicians are gradually taking action. In the USA, President Joe Biden wants to use a legal framework to minimize the risks of software with artificial intelligence (AI). A presidential executive order from Monday stipulates, among other things, that developers of programs that could pose a danger to national security, the economy or health must inform the US government when training such AI models. They will also have to share the results of security tests with the authorities. The European Union likewise wants to create a legal framework to regulate the technology as part of its AI strategy.

Sunak warns against hasty regulation: one cannot write laws for something whose workings are not yet understood. With his summit, Sunak wants to position Great Britain as an important player and location, but it is questionable how much will really come out of the meeting.

Which companies should you know?

The start-up OpenAI, which developed the chatbot ChatGPT, has become particularly well-known in recent months. The software can form sentences at the linguistic level of a human. It estimates, word by word, how a sentence is likely to continue. For this to work, such programs are trained on enormous amounts of text and information. In addition to numerous start-ups, deep-pocketed tech companies such as Google, Amazon, Microsoft, Meta and Apple also play an important role in AI with their resources.
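The word-by-word prediction described above can be illustrated very loosely with a toy sketch. This is only an analogy: real systems like ChatGPT use large neural networks trained on vast text corpora, not simple word-pair counts, and the corpus below is invented for the example.

```python
# Toy illustration of next-word prediction: count which word
# follows which in a tiny training text (a "bigram" model),
# then predict the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

The real systems differ in scale and method, but the core task is the same: given the text so far, estimate which word is most likely to come next.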

What can AI do today? And what good can it bring?

Software with artificial intelligence is already ubiquitous, but it is usually narrowly specialized for particular tasks. It appears, for example, in image enhancement, in autocorrect, in chatbots that are gradually replacing telephone hotlines, and in healthcare, for instance to analyze symptoms. Germany's Digital Minister Volker Wissing believes that alongside the risks, the opportunities must also be seen; otherwise one falls behind: "We have to work with partners who share our values to set standards that form the basis for a new industry."

Why is it sometimes called KI and sometimes AI?

This is just a question of translation. German speaks of Künstliche Intelligenz (KI), English of artificial intelligence (AI). The British government defines it as the ability of machines to achieve a specific goal by performing cognitive tasks.

What can the summit achieve?

That is questionable. Sunak's summit looks professional, wrote the left-liberal newspaper "The Guardian", but a closer look should set alarm bells ringing. Sunak is presenting himself as a leading figure while he is under pressure at home over poor poll numbers. In an open letter, several critics complained that too few civil-society representatives had been invited. Instead, they said, it is an event behind closed doors that focuses on speculation about existential risks, posed by systems built by the very companies that now want to shape their regulation.

Source: Stern
