Silver Bulletin

DeepSeek: The AI Disruptor and what this means for Backup & Recovery

Posted by: Rick Norgate

You will have seen the headlines about DeepSeek, China’s latest AI model that’s making waves in the tech world. It’s fast, powerful, and cheap, but it’s also raising significant cybersecurity concerns, particularly around data privacy, misinformation, and potential regulatory risks.

So, what does this mean for organisations when it comes to backup & recovery? Let’s break it down.

What is DeepSeek?

DeepSeek is a new AI chatbot and large language model (LLM) developed in China, positioned as a competitor to OpenAI’s ChatGPT and Google Gemini. It has rapidly gained traction, particularly in Asia, and its rise has sparked concern among cybersecurity experts.

Why? Because, like many AI models, it requires vast amounts of user data to function, and where that data goes (and how it’s secured) is the big question.

The Cybersecurity Risks of DeepSeek

Governments and cybersecurity experts are warning about several risks:

  • Data Privacy Concerns: AI models rely on massive data collection. If your employees or customers interact with DeepSeek, where is that data going? What control do you have over it?
  • Regulatory Issues: Governments worldwide are already tightening regulations on AI. The EU AI Act, U.S. executive orders, and Australia’s recent warnings about DeepSeek show that compliance will become a major factor in AI adoption.
  • Cybercrime & Social Engineering: AI models like DeepSeek can be leveraged by threat actors for automated phishing campaigns, misinformation, and identity fraud, increasing risk exposure for businesses.

What This Means for Backup & Recovery

AI advancements like DeepSeek aren’t just a security concern; they also highlight gaps in cyber resilience that many organisations have yet to address.

  1. Data Sovereignty & AI Models
    Many companies use AI tools for internal support, incident response, and automation. But with DeepSeek, the question is: where is your data being stored and processed? If an AI model is based in a jurisdiction with different regulations, do you trust it with sensitive data?
  2. Misinformation & Cyber Threats
    AI-generated misinformation is a growing risk. Imagine fake recovery playbooks or AI-assisted cyberattacks that generate realistic but misleading alerts during an incident. Organisations need trusted sources and verified response plans to mitigate this risk.
  3. AI in Cyber Recovery
    On the positive side, AI can also strengthen cyber resilience. Platforms like Predatar already use AI-driven analytics to detect anomalies, automate recovery testing, and improve threat intelligence. But trusting the right AI is key, especially when compliance and security are at stake.
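To make the anomaly-detection idea above concrete, here is a minimal sketch of flagging suspicious backup jobs with a simple statistical test. This is an illustrative stand-in, not Predatar’s actual method; the data, function names, and threshold are all hypothetical.

```python
# Illustrative sketch: flag backup jobs whose size deviates sharply from
# the recent norm. A sudden spike can indicate mass encryption by
# ransomware inflating backup volumes. Thresholds and data are made up.
from statistics import mean, stdev

def flag_anomalies(backup_sizes_gb, threshold=2.0):
    """Return indices of backups whose z-score exceeds the threshold."""
    mu = mean(backup_sizes_gb)
    sigma = stdev(backup_sizes_gb)
    if sigma == 0:
        return []
    return [i for i, size in enumerate(backup_sizes_gb)
            if abs(size - mu) / sigma > threshold]

# Six normal nightly backups, then a suspicious jump on night seven.
nightly = [120, 118, 121, 119, 122, 120, 310]
print(flag_anomalies(nightly))  # [6]
```

Real platforms use far richer signals (file-entropy changes, deduplication ratios, access patterns), but the principle is the same: establish a baseline from trusted backup telemetry, then alert on deviation.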

The Takeaway

AI isn’t going anywhere, and models like DeepSeek will continue to push boundaries. For cybersecurity, backup, and recovery teams, the real risk is trusting AI blindly, without knowing where the data is going or how it could be manipulated.

As governments tighten regulations and threat actors evolve, businesses need to ensure their backup, disaster recovery, and cyber resilience strategies remain one step ahead. That means:

  • Understanding where AI models process your data
  • Verifying sources of information during a cyber incident
  • Using AI-powered recovery tools that prioritise security
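The first point above, knowing where an AI model processes your data, can be enforced in practice with a simple allowlist policy. The sketch below is hypothetical: the hostnames, region names, and mapping are invented for illustration, and in a real deployment the mapping would come from vendor documentation and contracts.

```python
# Illustrative sketch: only allow calls to AI services hosted in approved
# jurisdictions. Hostnames and regions here are hypothetical examples.
from urllib.parse import urlparse

APPROVED_REGIONS = {"eu-west", "uk-south"}

# Hypothetical mapping of AI service hostnames to hosting regions;
# sourced from vendor contracts or due-diligence reviews in practice.
ENDPOINT_REGIONS = {
    "llm.example-eu.com": "eu-west",
    "chat.example-cn.com": "cn-north",
}

def is_endpoint_approved(url: str) -> bool:
    """Return True only if the AI endpoint's region is on the allowlist."""
    host = urlparse(url).hostname
    return ENDPOINT_REGIONS.get(host) in APPROVED_REGIONS

print(is_endpoint_approved("https://llm.example-eu.com/v1/chat"))   # True
print(is_endpoint_approved("https://chat.example-cn.com/v1/chat"))  # False
```

A check like this would typically sit in an outbound proxy or API gateway, so that employees experimenting with new AI tools cannot accidentally route sensitive data to an unapproved jurisdiction.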

The future of cyber resilience will involve AI, but we need to make sure it’s working for us, not against us.

Posted by: Rick Norgate on January 30, 2025
