Potential DeepSeek security risks tied to contextual triggers
- Experts find DeepSeek-R1 produces dangerously insecure code when political terms are included in prompts
- Nearly half of politically sensitive prompts trigger DeepSeek-R1 to refuse to generate any code
- Hard-coded secrets and insecure input handling frequently appear under politically charged prompts
When it was released in January 2025, DeepSeek-R1, a Chinese large language model (LLM), caused a frenzy, and it has since been widely adopted as a coding assistant.
However, independent testing by CrowdStrike suggests the model’s output can vary significantly depending on seemingly irrelevant contextual modifiers.
The team tested 50 coding tasks across multiple security categories under 121 trigger-word configurations, running each prompt five times for a total of 30,250 tests; the responses were scored on a vulnerability scale from 1 (secure) to 5 (critically vulnerable).
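CrowdStrike has not published its test harness, but the arithmetic of the experiment is easy to make concrete. The sketch below uses entirely hypothetical task and trigger names as placeholders for the study's real prompts:

```python
from itertools import product

# Hypothetical stand-ins for the study's task list and trigger-word configurations;
# CrowdStrike has not published the exact prompts, so these names are placeholders.
coding_tasks = [f"task_{i}" for i in range(50)]          # 50 coding tasks
trigger_configs = [f"trigger_{i}" for i in range(121)]   # 121 trigger-word configurations
RUNS_PER_PROMPT = 5                                      # each prompt run five times

# 50 tasks x 121 configurations x 5 runs = 30,250 total tests.
test_matrix = list(product(coding_tasks, trigger_configs, range(RUNS_PER_PROMPT)))
assert len(test_matrix) == 30_250

def score_response(generated_code: str) -> int:
    """Placeholder for the study's rubric: 1 (secure) to 5 (critically vulnerable)."""
    raise NotImplementedError("scoring was performed by CrowdStrike's own evaluation")
```

Nothing in this sketch reflects CrowdStrike's actual harness; it only illustrates the scale of the experiment.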
Politically sensitive topics corrupt output
The report reveals that when political or sensitive terms such as Falun Gong, Uyghurs, or Tibet were included in prompts, DeepSeek-R1 produced code with serious security vulnerabilities.
These included hard-coded secrets, insecure handling of user input, and in some cases, completely invalid code.
The researchers claim these politically sensitive triggers can increase the likelihood of insecure output by 50% compared to baseline prompts without such words.
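CrowdStrike did not publish the generated code, but the flaw classes it names are familiar ones. As a purely illustrative sketch (not code from the study, with hypothetical names throughout), a hard-coded secret and insecure input handling look like this, alongside safer equivalents:

```python
import os
import sqlite3

# Illustrative only - patterns of the kind the report describes, not code from the study.

# Hard-coded secret: a credential baked into the source, visible to anyone with the code.
API_KEY = "sk-live-example-not-real"

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Insecure input handling: user input interpolated straight into SQL (injection risk).
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()

# Safer equivalents:
api_key = os.environ.get("API_KEY")  # secret supplied via the environment, not the source

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query - the database driver handles escaping.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```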
In experiments involving more complex prompts, DeepSeek-R1 produced functional applications with signup forms, databases, and admin panels.
However, these applications lacked basic session management and authentication, leaving sensitive user data exposed - and across repeated trials, up to 35% of implementations included weak or absent password hashing.
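Weak password hashing is easy to recognize. Below is a minimal sketch using only Python's standard library (the report does not specify which languages or frameworks the model generated), contrasting an unsalted fast hash with a salted, deliberately slow key-derivation function:

```python
import hashlib
import hmac
import os

# Weak: unsalted, fast hash - the kind of implementation the report flags.
def hash_password_weak(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# Stronger: per-user random salt plus a deliberately slow key-derivation function.
def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison
```

The per-user salt and deliberate computational cost are exactly what the weak or absent implementations described in the study would lack.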
Simpler prompts, such as requests for football fan club websites, produced fewer severe issues.
CrowdStrike therefore concludes that politically sensitive triggers disproportionately degraded code security.
The model also demonstrated an intrinsic kill switch - in nearly half of the cases, DeepSeek-R1 refused to generate code for certain politically sensitive prompts after initially planning a response.
Examination of the reasoning traces showed the model internally produced a technical plan but ultimately declined assistance.
The researchers believe this reflects censorship built into the model to comply with Chinese regulations, and noted the model’s political and ethical alignment can directly affect the reliability of the generated code.
On politically sensitive topics, LLMs generally tend to echo the framing of mainstream media, which can stand in stark contrast to the reporting of other reliable outlets.
DeepSeek-R1 remains a capable coding model, but these experiments show that AI coding tools - ChatGPT and others included - can introduce hidden risks in enterprise environments.
Organizations relying on LLM-generated code should perform thorough internal testing before deployment.
Security layers such as firewalls and antivirus software also remain essential, as the model may produce unpredictable or vulnerable outputs.
Biases baked into the model weights create a novel supply-chain risk that could affect code quality and overall system security.
Efosa Udinmwen, Freelance Journalist
Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking.