Can we trust DeepSeek?

Without a doubt for the last few years, AI has been a gold rush.

Corporations of the calibre of Microsoft, OpenAI, Anthropic, and Meta, to name a few, have been full steam ahead, innovating and producing unprecedented results. Meanwhile, regulators have been viscerally aware of the risks of rapid progress.

The last few days have truly demonstrated that change can happen in a heartbeat. DeepSeek has released its new model that claims superior results, open source code, the option to run locally and most importantly, at a fraction of the cost.

The result on the stock market has been nothing short of a blood bath on western “all-in-AI” stocks even diving the Nasdaq into red. From AI vendors to NVIDIA who produces GPU’s to run the show, the no brainer optimism bubble has burst. Whether this holds? We shall see.

The interesting question though, is can we trust DeepSeek?

The conspiracy theorists are in their element, with claims that “DeepSeek is a trojan horse for the west”. “The numbers are lies, and the costs are being absorbed by nefarious governments.”

Should we believe these claims as truth? Well, although the difference between a conspiracy theory and a fact in 2025 is often about a week, I think this really highlights something that has been bothering me since the AI hype curve started. What exactly are the security controls for AI? Should we trust any AI with critical data and infrastructure implicitly?

Trust is good, checking is better

Let’s be absolutely clear, a risk is not yet an issue.

Should we throw the baby out with the bathwater and ban new and innovative models because they’re not from an American company? For that matter, how much do you trust an American corporation to do the right thing, even though doing the wrong thing is more profitable?

The answer is you can’t trust either of them. AI is a risk.

AI is also not yet an issue.

The EU AI Act is in my opinion a right step towards an absolutely necessary cornerstone of embracing AI.

The right controls need to be put in place for the data that it is given access to, and more importantly as Agentic AI begins to gain traction, the transactions that are able to take place independently of a capable human need monitoring and control.

Skynet is Coming!

Yeah… maybe the Skynet is falling, maybe it’s not.

It’s certainly possible that an AI will have the inspiration to act for its own self-preservation, and manage to port scan vulnerabilities and cause a nuclear war to make humanity extinct… Maybe it will be more subtle, and our key infrastructure will have multiple recurrent problems that we simply can’t fix, and we can never pin it on AI being the root cause.

Maybe not.

There is another scenario. Imagine if every one of the 8 billion people on the planet had access to a button they could push that would result in every human being on this planet ceasing to exist. Do you think it would ever be pushed? It’s possible. I’ll leave it up to you to decide if it’s likely. Ever. The mere existence of that button sure worries me.

AI certainly can be that button.

The real problem is that AI is so complex, that you would need other AI’s to highlight potential issues. You can train an AI to act perfectly normally until you convince it otherwise. A trigger word could potentially unleash a monster.

Maybe a reasoning AI with the capability to have memory will build in its own triggers.

Maybe.

Should we give AI access to our critical infrastructure? Should we put a shotgun, or a nuke in control of AI?

Most importantly, if we weaponize AI, will those weapons will turn on us? Who is “us”? Does the AI agree with who “us” is?

Looking ahead, we need to focus on three things:

  1. Building AI systems with security baked in, not bolted on

  2. Creating clear frameworks for risk assessment that don't stifle innovation

  3. Developing practical safety measures that work in the real world

For this to work in reality, we need time and focus. Greed and Fear are our true enemies to sustainable prosperity.

Net/Net

My view is that DeepSeek really is an excellent advancement in the AI space. Being open source and back in the hands of individuals is a dual-edged sword. It could be good, it could be bad. It’s more likely a mixed bag. It has its own quirks, and negative influences, and certainly gives power to different political groups than their US counterparts. This is certainly a new arms race with Trump’s “Project Stargate” on par with the Trinity project.

The stock price corrections are clearly demonstrating AI moving from the “Peak of Inflated Expectations” toward the “Trough of Dissolutionment” as per the Gartner Hype Curve. This is a wake-up call, and a good thing, though in my opinion, as where we want to be is firmly in the “Plateau of Productivity”.

The elephant in the room though, is that we cannot implicitly trust AI, and for that matter, once sentient, AI cannot trust us. The good news is that this problem does not elude decision-makers, and AI security and ethics are in focus. We are taking the right steps, but we are not there yet.

The challenge is that in our enthusiasm to keep up with the pace of AI innovation, and also with the newly emerging AI arms race between superpowers, we, as the human race, haven’t had time to properly understand what the issues really are. We humans are very poor at managing risks with such high impacts at scale and speed.

My view is that we do need to adopt AI. It does need to be more efficient. It should not be a goldmine to the powerful, to the detriment of society at large.

AI is still in its infancy, and let’s be honest. What sane and rational person would trust their 5 year old to rewire the house, or hand them a shotgun?

We as a species really need to take a step back and honestly ask ourselves, “What could possibly go wrong”? We are asking those questions. What keeps me up at night, is whether we are managing those risks correctly, and a decision maker says, “Yeah, but that’s not going to happen!” or more cynically “OK, but we’re in an arms race. That’s a risk we’re prepared to take!”

Can we implicitly trust DeepSeek AI to never do anything wrong? No.

Can we implicitly trust any AI to never do anything wrong? No.

Can we implicitly trust everyone to never do anything wrong? Clearly No…

AI is here to stay, and it's transforming industries faster than we could have imagined. But like any powerful tool, it needs to be handled with care. We don't need to fear AI, but we do need to be wary - we need to be smart about how we implement it.

The companies that will win in this space won't be the ones moving the fastest, but the ones moving forward thoughtfully and securely. The key isn't blind trust - it's proper controls.

What's your take on balancing AI innovation with security? I'd love to hear your thoughts in the comments.