The information environment in Finland during the coronavirus pandemic was exceptional and intense in many ways. The spread of disinformation and the number of actors involved reached unprecedented levels. The demand for accurate information was enormous, and the situation was constantly evolving. Information was disseminated through various channels. Official information played a crucial role, but at the same time, social media posed challenges in the fight against false and misleading information.
The activity of malicious bots increased significantly during the pandemic. Bots – programs that imitate human users – operated particularly aggressively around key coronavirus measures, such as the largest information campaigns on COVID-19 vaccinations and guidelines. This was evident in a study that analyzed a total of 1.7 million Finnish tweets on the topic of COVID-19 posted on Twitter/X over the course of three years.
Bots accounted for 22 percent of the messages, whereas bots normally produce about 11 percent of the content on Twitter/X. Of the identified bot accounts, 36 percent (4,894) acted maliciously. These accounts focused in particular on spreading misinformation, i.e. incorrect information shared without deliberate intent to mislead. About a quarter (approximately 460,000) of all messages contained incorrect information, and roughly the same proportion expressed a negative attitude towards vaccines.
According to the study, malicious bots used the Finnish Institute for Health and Welfare’s (THL) Twitter account to intentionally spread disinformation, i.e. deliberately misleading information, but did not actually target THL itself. The bots increased the effectiveness and reach of their posts in various ways; for example, they mentioned other accounts in 94 percent of their tweets. The bots also proved adaptable, varying their messages according to the situation.
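As an illustration of the kind of amplification metric reported above, the following minimal Python sketch shows how the share of tweets that mention other accounts could be computed from raw tweet texts. The sample tweets and the simple @-handle pattern are hypothetical and not part of the study's actual pipeline:

import re

# Hypothetical sample of tweet texts; the study analyzed roughly 1.7 million
# Finnish COVID-19 tweets, not this toy list.
tweets = [
    "@THLorg rokotteet ovat turvallisia",
    "Koronarajoitukset jatkuvat ensi viikolla",
    "@user1 @user2 lue tämä ennen rokotusta",
]

# A tweet "mentions" another account if it contains an @handle.
mention_pattern = re.compile(r"@\w+")

mentioning = sum(1 for text in tweets if mention_pattern.search(text))
mention_rate = mentioning / len(tweets)

print(f"{mention_rate:.0%} of tweets mention at least one other account")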
“The study utilized the latest version of Botometer (4.0) to classify bot accounts, going beyond mere identification to differentiate between regular bots and COVID-19-specific malicious bots. This distinction is critical, as it reveals that traditional binary classifications of bots are insufficient.
The findings highlight how regular bots often align with governmental messaging, enhancing their credibility and influence, while malicious bots employ more aggressive and deceptive tactics. Malicious bots may amplify false narratives, manipulate public opinion, and create confusion by blurring the line between credible and non-credible sources,” says Senior Researcher Ali Unlu, the lead author of the study.
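For context, Botometer is available to researchers through a Python client and the Botometer Pro API hosted on RapidAPI. The Python sketch below illustrates how a single account could be scored and flagged with a simple threshold; the placeholder credentials, the example handle, and the 3.5 cut-off on the 0–5 display scale are illustrative assumptions, not the criteria used in the study, which combined such scores with content-level analysis to separate regular from malicious bots:

import botometer  # pip install botometer

# Placeholder credentials: real keys come from RapidAPI and the X/Twitter
# developer portal. These values are illustrative only.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Score one account. For non-English content such as Finnish tweets, the
# language-independent "universal" score is the relevant one.
result = bom.check_account("@example_account")  # hypothetical handle
universal_score = result["display_scores"]["universal"]["overall"]

# Illustrative threshold on the 0-5 display scale; higher means more bot-like.
if universal_score >= 3.5:
    print(f"Likely bot (universal score {universal_score})")
else:
    print(f"Likely human (universal score {universal_score})")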
Bot activity should be taken into account in public health communication
Malicious bots pose a persistent threat even after the pandemic’s peak. They continue to spread misinformation, particularly concerning vaccines, by exploiting public fears and skepticism.
The research suggests that these bots could have long-term implications for public trust in health institutions and highlights the importance of developing more sophisticated tools for detecting and mitigating the influence of such bots.
“Public health agencies need to enhance their monitoring and response strategies. Our study suggests preemptive measures such as public education on bot activity and improved detection tools. It also calls on social media platforms to do more to curb clearly false information and to verify account authenticity, which could significantly improve public trust and the effectiveness of public health communication,” says Lead Expert Tuukka Tammi from THL.
Non-English setting makes the research unique
Unlike most studies in this domain, which are predominantly in English, this research is one of the few that investigates social media bots in a non-English language, specifically Finnish. This unique focus allows for a detailed examination of external factors such as geographical dispersion and population diversity in Finland, providing valuable insights that are often overlooked in global studies.
“This study represents a significant contribution to understanding the complex role of bots in public health communication, particularly in the context of a global health crisis. It highlights the dual nature of bot activity: regular bots can support public health efforts, while malicious bots pose a serious threat to public trust and the effectiveness of health messaging. The research provides a roadmap for future studies and public health strategies to combat the ongoing challenge of misinformation in the digital age,” concludes Professor of Practice Nitin Sawhney from Aalto University’s computer science department.
The study was conducted as part of the joint Crisis Narratives research project between Aalto University and THL, and was funded by the Research Council of Finland from 2020 to 2024.
Source:
Finnish Institute for Health and Welfare
Journal reference:
Unlu, A., et al. (2024). Unveiling the Veiled Threat: The Impact of Bots on COVID-19 Health Communication. Social Science Computer Review. https://doi.org/10.1177/08944393241275641