'We should be alarmed': UK experts wary over use of China's DeepSeek

If AI chatbot is left unchecked, it could feed disinformation campaigns and erode public trust: expert

By Web Desk

Deepseek app is seen in this illustration taken, January 28, 2025. — Reuters

UK experts have expressed reservations and urged caution over the rapidly increasing use of the Chinese artificial intelligence platform DeepSeek. They cited concerns about it spreading misinformation and alleged that the Chinese state might exploit users' information and data. 

The British government has said it is up to citizens whether they want to use the new AI, but officials are monitoring DeepSeek's handling of data for anything that might pose a threat to national security and have said they would take action if any threats emerged, according to The Guardian.

Michael Wooldridge, a professor of the foundations of AI at the University of Oxford, said it would not be unreasonable to assume that data shared with the chatbot could be passed to the Chinese state.

"I think it’s fine to download it and ask it about the performance of Liverpool football club or chat about the history of the Roman empire, but would I recommend putting anything sensitive or personal or private on them? Absolutely not […] Because you don’t know where the data goes," he said. 

Dame Wendy Hall, a member of the United Nations' high-level advisory body on AI, stated: "You can’t get away from the fact that if you are a Chinese tech company dealing with information, you are subject to the Chinese government’s rules on what you can and cannot say".

Ross Burley, co-founder of the Centre for Information Resilience (CIR), put his concerns bluntly: "We should be alarmed".

He warned that if the AI chatbot is left unchecked, it could "feed disinformation campaigns, erode public trust and entrench authoritarian narratives within our democracies".

The UK technology secretary, Peter Kyle, told the News Agents podcast on Tuesday that people need to make their own choices about DeepSeek for now because "we haven’t had the time to fully understand it […] this is a Chinese model that […] has censorship built into it".

"So, it doesn’t have the kind of freedoms you would expect from other models at the moment. But of course, people are going to be curious about this," he added. 

Some users and testers of DeepSeek have found that it refuses to answer questions on sensitive topics. 

"The biggest problem with generative AI is misinformation,” Hall said. “It depends on the data in a model, the bias in that data and how it is used. You can see that problem with the DeepSeek chatbot."