The recent launch of Chinese artificial intelligence (AI) lab DeepSeek is raising privacy and security concerns, a cybersecurity expert warns.
DeepSeek develops open-source large language models and claims on its website that its model rivals one of OpenAI’s ChatGPT models. Both companies are private and not publicly traded.
DeepSeek sent shockwaves through global tech markets, temporarily sinking some publicly traded tech stocks amid allegations that it and other rivals used ChatGPT’s proprietary work for their own AI apps.
From a cyber perspective, DeepSeek prompts questions related to data privacy laws, government oversight of data and the potential for cyberattacks.
“It is critical to approach interactions with such platforms with a degree of caution, especially given the data privacy laws that vary significantly from one jurisdiction to another,” cautions Adrianus Warmenhoven, a cybersecurity expert at NordVPN, in a statement.
For example, as a Chinese AI startup, DeepSeek operates within a regulatory environment where government oversight of data is stringent. This raises concerns related to data collection, storage and usage, he says.
Different privacy, security standards
“DeepSeek’s privacy policy, which can be found in English, makes it clear: user data, including conversations and generated responses, is stored on servers in China,” Warmenhoven says.
“This raises concerns because of the scope of data collection outlined, ranging from user-shared information to data from external sources, and the risks associated with storing such data in a jurisdiction with different privacy and security standards.”
DeepSeek’s AI model has also faced growing backlash for its refusal to address political topics, Warmenhoven adds. CBC reported Tuesday that DeepSeek seems to struggle with questions that would upset Chinese authorities, such as those related to Taiwan and Tiananmen Square.
Users should be aware that any data shared with the platform could be subject to government access under China’s cybersecurity laws, which mandate companies to provide access to data upon request by authorities, Warmenhoven says.
Another concern lies in the lack of transparency that often surrounds how AI models are trained and how they operate. “Users should consider whether their interactions or uploaded data might inadvertently contribute to machine learning processes, potentially leading to data misuse or the development of tools that could be exploited maliciously.”
DeepSeek was hit by a cyberattack and outages earlier this week. As AI platforms become more sophisticated, they also become prime targets for hackers looking to exploit user data or the AI itself, Warmenhoven says.
Culturally, there are differences in data practices when developing AI tools, he adds. In some regions, data collection may occur not out of “malicious intent” but because it’s standard practice to gather extensive user or app usage information as part of app development.
“This contrasts with Western approaches that prioritize minimizing data collection to protect user privacy, operating on the principle that if data is not essential, it should not be collected,” he says.
Generally speaking, Warmenhoven recommends users scrutinize the terms and conditions of these AI platforms, and understand where data is stored and who has access to it.
Feature image: The icon for the smartphone app DeepSeek is seen on a smartphone screen in Beijing, Tuesday, Jan. 28, 2025. (AP Photo/Andy Wong)