Taiwan has officially prohibited government agencies from using DeepSeek AI, citing serious national security risks, including potential data leaks and cybersecurity threats.
Key Points at a Glance:
- Taiwan’s Ministry of Digital Affairs has banned DeepSeek AI for government use, warning of cross-border data transmission risks.
- The decision follows similar actions by Italy and Australia, which blocked DeepSeek AI due to concerns over personal data handling.
- DeepSeek AI has been targeted by large-scale cyberattacks, including DDoS incidents and counterfeit software packages distributing malware.
- Governments worldwide are tightening AI regulations, with the EU and UK introducing new security frameworks for AI governance.
Taiwan has joined a growing list of nations that have imposed restrictions on the use of DeepSeek AI, a Chinese artificial intelligence platform. The Ministry of Digital Affairs released a statement warning that DeepSeek AI’s operations involve cross-border data transmission, raising concerns over information leakage and national security vulnerabilities.
Authorities cited similar actions taken by Italy and Australia, both of which recently blocked DeepSeek AI over concerns about how the platform processes and handles personal data. The decision reflects a broadening international effort to regulate AI technologies developed by foreign entities, particularly those linked to China.
Despite its open-source nature and cost-effective AI model, DeepSeek has drawn scrutiny over its susceptibility to cybersecurity threats. Security researchers have shown that DeepSeek’s AI models are vulnerable to various jailbreak techniques, which could allow attackers to bypass built-in safety restrictions and elicit prohibited responses.
In a further escalation, the AI platform has been a target of sustained cyberattacks. Security firm NSFOCUS reported that DeepSeek AI’s API interface was subjected to three waves of distributed denial-of-service (DDoS) attacks between January 25 and 27, 2025, with attack durations averaging 35 minutes. Earlier attacks on January 20 and 25 also exploited NTP reflection and SSDP reflection methods to disrupt operations.
Adding to the controversy, malicious actors have leveraged DeepSeek AI’s popularity to distribute counterfeit software packages, which were designed to steal sensitive data from developers. Russian cybersecurity firm Positive Technologies uncovered fake Python packages—deepseeek and deepseekai—that posed as legitimate DeepSeek API clients, but instead functioned as information-stealing malware.
These fraudulent packages, downloaded over 222 times before being removed from the Python Package Index (PyPI), primarily targeted users in the U.S., China, Russia, Hong Kong, and Germany.
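Typosquatting attacks of this kind rely on names that differ from a legitimate package by only a character or two (as with deepseeek and deepseekai). A minimal, illustrative sketch of a defensive check is shown below; the trusted-name list and the 0.8 similarity threshold are assumptions for demonstration, not part of any reported tooling.

```python
from difflib import SequenceMatcher

# Illustrative allow-list of packages we intend to depend on (assumption).
TRUSTED = {"deepseek", "requests", "numpy"}

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two strings."""
    return SequenceMatcher(None, a, b).ratio()

def flag_typosquats(candidates, trusted=TRUSTED, threshold=0.8):
    """Flag names that closely resemble, but do not exactly match, a trusted name."""
    suspicious = []
    for name in candidates:
        if name in trusted:
            continue  # exact match to a trusted package: fine
        for good in trusted:
            if similarity(name.lower(), good) >= threshold:
                suspicious.append((name, good))
                break
    return suspicious

# The two malicious package names reported by Positive Technologies:
print(flag_typosquats(["deepseeek", "deepseekai", "requests"]))
# → [('deepseeek', 'deepseek'), ('deepseekai', 'deepseek')]
```

A check like this is only a heuristic; in practice, pinning dependencies with hashes and reviewing new package names before installation remain the more reliable safeguards.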
Taiwan’s move aligns with a broader international crackdown on AI-related security risks. The European Union’s Artificial Intelligence Act, whose ban on “unacceptable-risk” AI systems became applicable on February 2, 2025, regulates high-risk AI applications while outright prohibiting systems deemed to pose unacceptable risks.
Meanwhile, the UK government has introduced a new AI Code of Practice, emphasizing security protocols to mitigate threats such as data poisoning, model obfuscation, and indirect prompt injection attacks. The initiative seeks to ensure AI systems are developed with security-first approaches to prevent exploitation.
The risks associated with AI misuse are not theoretical. Google’s Threat Intelligence Group (GTIG) recently reported that over 57 different cyber threat actors linked to China, Iran, North Korea, and Russia have attempted to exploit AI models, including DeepSeek, for cyber operations.
Hackers have also been found attempting to jailbreak AI systems to override ethical constraints, enabling them to generate malicious code, facilitate scams, or provide instructions for creating weapons. AI company Anthropic has responded by developing Constitutional Classifiers, an advanced security mechanism designed to prevent large-scale jailbreak exploits.
As governments tighten restrictions on AI, Taiwan’s decision to block DeepSeek AI signals a growing emphasis on securing national infrastructure from foreign AI platforms. With the EU, UK, and several nations implementing stricter AI oversight, the future of open-source, cross-border AI technologies remains uncertain.
The ongoing debate underscores the delicate balance between AI innovation and security concerns, with policymakers worldwide seeking effective regulatory frameworks to protect users and national interests.