It took less than a week for the overwhelming euphoria caused by the latest version of Chinese AI DeepSeek (R1) is turning into growing distrust. The first strong visible signs of this turnaround are the Taiwan's decision, followed three days later by Australia to ban the use of DeepSeek programs on all government devices as of February 4, each time citing investigations that concluded that there was a " unacceptable risk to national security ". Next to, South Korea, France, Italy have demanded explanations from DeepSeek that they did not have regarding the security of user data. Beijing immediately responded, accusing it of politicizing technological and commercial issues, forgetting in passing that DeepSeek had just been the target of a massive attack revealing probable flaws.

A crazy week

Following the massive hack which temporarily paralyzed DeepSeek on the weekend of January 25-26, Many cyber security experts have looked into the reliability of its security. Like the company Wiz Research, which wanted to assess the vulnerability of Chinese AI protections, the effectiveness and cost of use of which had just excited the world.

In a dedicated blog, the Romanian firm expert in complex digital systems, decided on January 29:

“Within minutes, we found a publicly accessible ClickHouse database linked to DeepSeek, completely open and unauthenticated, exposing sensitive data.” And to hammer home the point a few lines later: “More importantly, The exhibition allowed full control of the database and possible elevation of privileges within the DeepSeek environment, without any authentication or defense mechanism against the outside world”!

Since then, warnings have multiplied, as flaws, weaknesses and even suspicious connections with Chinese entities were discovered. And institutional distrust was thus established to the point of pushing several governments, as early as February 2, to ban or restrict the use of Chinese AI on government sites and equipment.

A cascade of reproaches

The various investigations carried out by companies or laboratories specializing in cybersecurity have thus made it possible to update multiple data security issues, those of individual users as well as those of organizations classified as “sensitive”. At least three types of risks emerge.

  • Cybersecurity risks : a database containing millions of sensitive information has been discovery without the need for authentication, potentially exposing passwords and confidential data. Although quickly patched, this incident exposed a glaring lack of security.
  • Jailbreak Vulnerability: Many AI models, including DeepSeek, include a "system prompt" that allows you to define the behavior and limits of the AI. This prompt is, of course, secure and confidential, but researchers at Wallarm, an American platform specializing in APIs, have managed to extract this prompt from DeepSeek and modify the behavior parameters.
  • Suspicions of Chinese interference : DeepSeek is also seen as a threat due to suspected links to the Chinese government. According to the media The Independent, researchers have discovered a suspicious hidden code in DeepSeek. Supposed to allow the exchange of user data with the operator China Mobile, it would be activated when creating an account and connecting to DeepSeek from a web browser. Problem, China Mobile is in the crosshairs of the American authorities because it is suspected of having close ties to the Chinese army. This is reminiscent of other tensions, those resulting from the arrival on the 5G network market of another Chinese suspected of collusion with the Beijing regime and boycotted by certain countries: Huawei.

A new topic for the AI Action Summit

These examples illustrate at least DeepSeek's serious security gaps, exposing users to significant risks of data theft and privacy compromise.

As the emergence of DeepSeek R1 was set to fuel discussions about the most efficient AI model, during the AI Action Summit to be held in Paris on February 11-12, it is very likely that the DeepSeek model will now be invited into the debates on the still very insufficient security of generative AI and particularly of Chinese productions.