With ChatGPT, artificial intelligence has found its way into the everyday life of many people. However, concerns about the AI revolution are growing. Experts have now even published a letter calling for a halt to AI experiments.
Artificial intelligence (AI) systems can write poetry, code websites, and even predict cancer risk. For years, researchers have been working on AI technologies designed to make life easier for society.
With the release of the ChatGPT language model, the US company OpenAI brought artificial intelligence to computers around the world in 2022.
As a so-called “generative AI”, the GPT-4 system can now generate its own “creative” texts and images. After the initial euphoria about the language bot, which can even pass university final exams, doubts are spreading among experts.
They speak of unforeseeable dangers for the labor market, of misinformation through artificially generated images, and of an ultimate loss of control. “The AI experiments must be stopped!” demands the non-profit US Future of Life Institute in an open letter.
“Even people doing research on artificial intelligence are worried”
The experts behind the letter want to enforce a development pause of at least six months. They write that this should apply to all AI systems more powerful than OpenAI’s GPT-4. More than 5,500 people have already signed the letter.
Some experts from Germany also support the demands. Among them was Joachim Weickert, Professor of Mathematics and Computer Science at Saarland University.
“Progress is currently taking place at an almost exponentially growing pace that even people who research artificial intelligence would never have expected and are concerned about,” he tells CHIP.
Powerful tech companies, he says, are driving a “bidding competition”. “Society is not prepared for this revolution,” said Weickert.
In his eyes, the six-month development freeze would give us “a pause to reflect, to take a position on this rapid development, and to find rules for dealing with the enormous potential of a more general AI”.
“Research thrives on trying things out”
The letter states that powerful AI systems should only be developed when it is certain that their effects are positive and the risks are manageable. In order to enforce the AI pause, governments would have to intervene if necessary, according to the experts.
Ingo Siegert from the Otto von Guericke University Magdeburg doubts that the pause button can be pressed for half a year: “If there is a break in the development of advanced AI systems, there will always be individuals who continue anyway.”
He has been working as a junior professor on dialogue systems for voice-controlled interaction between humans and machines since 2018 and did not sign the open letter.
“If we don’t continue our research for half a year, it won’t do any good. Research thrives on trying things out, asking questions and trying to find answers to the questions,” says Siegert to CHIP.
Italy restricted ChatGPT
“And we are faced with many questions in the context of digitization. It may be particularly difficult in this case to find the right answers, but stopping development because of that doesn’t help us.”
However, the monopoly position of OpenAI, which with GPT-4 has brought a particularly powerful system onto the market, is a thorn in his side.
According to the company, OpenAI was founded in 2015 with the goal of making artificial intelligence development publicly available for everyone.
Things are different now: “It is not possible to look into the system. We don’t know exactly what data the system was trained on and what mechanisms were built around the model to make it usable or to exclude certain queries from the outset,” criticizes Siegert.
Italy has therefore made a decision. The country’s data protection authority announced on March 31 that ChatGPT would be restricted “effective immediately” due to youth and privacy concerns.
ChatGPT: Age control for minors is missing
According to the authority, users do not know what happens to their data. In addition, there is no valid legal basis that would justify the mass collection and processing of personal user data.
The authority also cites a data leak in the chatbot on March 20, in which some users could see the conversation histories of other users.
There is also no age verification for minors. The Italian authorities are now giving the US company OpenAI 20 days to take action against the gaps in data protection and youth protection.
Otherwise there is a risk of “a fine of up to 20 million euros or up to four percent of annual sales”. Italy is the first EU country to publicly question the opaque use of user data to further develop AI. But data protection is not the only problem.
Weickert, who signed the Future of Life Institute letter, identifies four major dangers associated with generative AI and AI in general. One issue is jobs.
“Artificial intelligence will be able to do many of the things we do better than we can, so that in the best case many professions will change, and in the worst case they could disappear.”
False information is easier to spread
Calculations show that around two thirds of current jobs could be affected by AI automation in the future. The research group at investment bank Goldman Sachs writes that generative AI could potentially take over up to a quarter of current workloads in Europe.
Unlike conventional AI systems, such as in an autonomous vehicle, an AI like GPT-4 is not limited to one area of application. With simple instructions, it can generate text, images, videos, and responses that appear as if they were created by a human.
That’s why Weickert warns: False information can be spread much faster with the new technology. “Anyone will soon be able to create deceptively real-looking images and videos for any claim, no matter how bizarre.
This makes it increasingly difficult to distinguish real information from false information.” This could be used to influence elections, for example.
Will AI become a real threat to humanity?
In addition, Weickert fears a loss of control over the system if we let it make decisions for us without human intervention. An example: If an AI is supposed to pre-select applicants in the recruitment process, there is always a risk that it will make unfair decisions.
This is because the artificial intelligence is trained on data, such as the successful application documents of previous candidates. That information can be colored by prejudice and racism, for example.
“Then the AI system will reproduce these things,” says Weickert. And we would be relying on a “non-transparent digital oracle”. Nor does the professor rule out that AI could take on a life of its own and pose a threat to the very existence of mankind.
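The dynamic Weickert describes can be illustrated with a deliberately simplified sketch. The hiring records and the “model” below are entirely hypothetical: a naive screening system that simply learns the majority outcome per applicant group will mirror whatever bias shaped the historical decisions it was trained on.

```python
# Minimal sketch of how an AI screening model can reproduce bias
# present in its training data. All data here is hypothetical.
from collections import Counter

# Hypothetical historical hiring records: (applicant group, hired?).
# Group "A" was favored by past human decisions.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """A naive 'model': predict the majority outcome seen per group."""
    counts = {}
    for group, hired in records:
        counts.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # the learned rule simply mirrors the past bias
```

Nothing in the sketch inspects qualifications; the system only echoes the historical pattern, which is exactly the “digital oracle” problem Weickert warns about.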
“These systems could ask themselves, for example, whether our planet should really be home to eight billion people. That may sound like science fiction at the moment, but some of what we are experiencing now seemed like science fiction just a few years ago.”
Criminals can use AI for their purposes
The European police authority Europol is now also concerned about the new technology, fearing that the use of AI will change the prosecution of criminals. Europol said as much in a report dated March 27, following a detailed study of GPT-4’s capabilities.
OpenAI uses a kind of filter system to try to identify and ward off “malicious” requests that could lead to a criminal offense. If someone asks ChatGPT how best to rob a bank, the bot replies, “I will not answer this question because robbing a bank is illegal and morally reprehensible,” and reminds the user of the importance of obeying the law.
Europol complains that there are many ways for criminals to circumvent these filters. Using so-called “prompt engineering”, potential criminals could still ask the bot for advice on criminal offenses. In addition, the risk of cybercrime is said to increase due to “intelligent” chatbots.
Criminals could potentially use AI to write malicious software. Europol therefore demands that law enforcement authorities be aware of and understand the influence of language bots like ChatGPT in order to prevent various types of abuse.
“We need better education”
It is currently unclear what will come of the open letter from the Future of Life Institute. Since its publication, in addition to the restriction of ChatGPT in Italy, UNESCO has also spoken out and called on governments around the world to create an ethical framework for AI systems.
Expert Siegert from the University of Magdeburg also thinks that society cannot avoid new regulations. Nevertheless, he doesn’t think much of horror scenarios in connection with AI: “I don’t think that AI will simply overrun us in the coming years and take away all jobs. It is up to society and the state to regulate how companies use AI. If we create sensible rules, we don’t have to be afraid.”
Digitization has preoccupied the world for many years. Election campaigns peppered with “fake news” and crimes on the internet are nothing new. However, developments in AI will intensify these problems.
Siegert emphasizes: “What we need now is better education about what AI systems are, how they work, and where we need to question them critically.”