Whenever there are revolutionary technical innovations in Germany, the first thing people call for is regulation. This also applies to ChatGPT, which is already used almost daily by high-school and university students and which, as a text- and language-processing program, presents both opportunities and risks. Undoubtedly, it increases the possibilities for cheating, especially in academia. ChatGPT still produces many errors at the moment. However, most researchers consider it a false hope that texts generated with artificial intelligence (AI) will remain technically easy to identify.
Steffen Albrecht, an AI researcher at the Institute for Technology Assessment and Systems Analysis at KIT in Karlsruhe, who wrote a background paper for this week's hearing of the Committee for Education, Research and Technology Assessment in the German Bundestag, points to the distinctive character of ChatGPT-generated texts. Existing software for tracking down plagiarism fails on them, and new programs are being trained, so far without resounding success. He proposes a kind of watermark: certain patterns interspersed in the text that do not disturb human readers but are recognizable by machines.
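The article does not say which watermarking scheme Albrecht has in mind. One way such a machine-readable watermark can work, described in the research literature, is the "green list" idea: the generator is nudged to prefer tokens from a pseudorandom half of the vocabulary seeded by the preceding token, and a detector then checks whether a suspiciously large share of tokens falls into those green sets. The sketch below is a toy illustration of that detection step only (the function names and word-level tokenization are our own simplifications, not part of any deployed system):

```python
import hashlib


def green_set(prev_token: str, vocab: set[str], fraction: float = 0.5) -> set[str]:
    """Deterministically select a 'green' subset of the vocabulary,
    seeded by a hash of the previous token. A watermarking generator
    would bias its sampling toward this subset at every step."""
    chosen = set()
    for word in vocab:
        digest = hashlib.sha256((prev_token + "|" + word).encode()).digest()
        if digest[0] / 255 < fraction:  # roughly `fraction` of words land here
            chosen.add(word)
    return chosen


def green_fraction(tokens: list[str], vocab: set[str]) -> float:
    """Fraction of tokens that fall in the green set determined by their
    predecessor. Unwatermarked text should score near 0.5; watermarked
    text, whose generator preferred green tokens, scores well above it."""
    hits = 0
    for prev, cur in zip(tokens, tokens[1:]):
        if cur in green_set(prev, vocab):
            hits += 1
    return hits / max(len(tokens) - 1, 1)


if __name__ == "__main__":
    vocab = {"the", "cat", "sat", "on", "mat", "dog", "ran", "home"}
    text = ["the", "cat", "sat", "on", "the", "mat"]
    print(f"green fraction: {green_fraction(text, vocab):.2f}")
```

Because the green sets are derived from a hash rather than stored anywhere, the detector needs no access to the original model output, only to the hashing rule, which is what makes the pattern invisible to readers yet checkable by machines.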
When writing scientific texts, AI could help researchers get an overview of relevant literature or publish in another language. Scientific publishers nevertheless have good reason to refuse texts written by AI systems: German copyright law requires a personal intellectual creation, so only texts created by humans can be protected. The growing pressure to publish, especially during the qualification phase, could still tempt some researchers to have their studies written by an AI system, not to mention term papers and dissertations.
Exam formats need to be changed
The first universities have already taken action and changed their examination formats, relying more on in-person examinations than on take-home papers. In the social sciences and language subjects, however, this is difficult. In law and other humanities subjects, ChatGPT could draft counterarguments to one's own position and thus help train controversial debate. The Higher Regional Court of Stuttgart is currently testing the use of AI in contract reviews and other routine legal tasks in a pilot project.
In schools, the process of searching for sources and building an argument, that is, the preparatory work for one's own text, could play a much greater role in the future. The Technical University of Munich has developed the tool "PEER" (Paper Evaluation and Empowerment Resource) in the department of Enkelejda Kasneci, which is intended to support students in writing essays. Students can photograph or upload their text, which the AI then examines, providing personalized feedback with suggestions for improvement. Kasneci herself believes that weaker students in particular can benefit from such tools. But this also means that ChatGPT cannot replace teachers; it requires continuous support of the learning process. Otherwise, what has already been observed with other digital teaching and learning offerings will repeat itself: the stronger students benefit enormously, and the weaker ones learn even less effectively. When children are learning to read, so-called eye-tracking technology in AI-supported textbooks could recognize whether they can follow what they are reading. Similar models are conceivable for language deficits and learning disabilities.
For AI researchers, one of the biggest risks, alongside copyright issues, is that students feed ChatGPT large amounts of personal information over which they then have no control, because the system is operated by a private company in the United States. In addition, users can easily commit copyright infringement if ChatGPT reproduces protected text passages that are similar or even identical to the original without this being recognizable to the user.
FDP: Don’t be afraid of artificial intelligence
Berlin was the first German state to publish a handout for dealing with AI in schools, using ChatGPT as its example. It points to the possibilities for self-directed learning and for checking one's own learning progress, but also states that a text generated by ChatGPT and submitted as one's own work is in any case to be graded as unsatisfactory. Other states and the Conference of Ministers of Education will follow.
In a position paper on AI in education, the FDP parliamentary group primarily emphasizes the opportunities: "Fear of AI must not determine our actions." However, the FDP overshoots the mark when it assumes that knowledge transfer will in future be handled primarily by AI-based learning tools. While such tools can relieve teachers, they certainly cannot replace them. More sensible, by contrast, is the FDP's proposal to use AI for educational diagnostics and performance assessment, in order to identify support needs and ensure objective evaluation criteria.
The Liberals strictly reject classifying chatbots as "high-risk applications," as is currently being discussed at the European level. In that case, "their use, for example in schools, would be practically impossible," according to the paper, which is available to the FAZ. In diagnostics, for example before the start of school or during school-entrance examinations, AI applications could be of great help, provided teachers know how to use them. "Ethical and data protection debates must be realistic rather than detached from reality," says the position paper of the Free Democrats.