Singapore EN

What the Dawn of DeepSeek Means for Global AI

Ryan Cox

Head of AI , Synechron

Artificial Intelligence

On 27th January 2025, DeepSeek, a Chinese AI startup, disrupted overnight our understanding of AI economics.

Achieving the seemingly impossible, DeepSeek had created a high-performing AI model for only $5.6M (although some are already questioning whether this figure is reliable), a sum dwarfed by the more than $100M budgets required by industry leaders like OpenAI. DeepSeek’s surge in popularity, led to a mass sell-off of shares in Nvidia, Meta and Microsoft. It’s important to note though that demand for chips is far from spent and tech stocks have already partially recovered.

What makes DeepSeek different?

What sets DeepSeek apart is its efficient architecture, innovative training techniques, and optimized resource management. In theory, this opens the door for businesses of all sizes to adopt advanced AI without the exorbitant costs. Beyond cost savings, this democratization of AI also introduces new competition, spurring better results and innovation for end users across the globe. It also underscores the importance of leveraging global AI talent.

In response to the development, OpenAI CEO, Sam Altman, called DeepSeek “impressive” but has promised that "we will obviously deliver much better models" in the near future. OpenAI has also claimed that the reason DeepSeek is cheaper and more efficient is because they may have used OpenAI models to train it. A problem here is that open-weight models don’t disclose their training sources, so you don’t know if the data has been obtained legally or is biased (i.e. if it’s copyrighted or gleaned from other closed sourced models like OpenAI, where DeepSeek or others don’t have permission to use it for training their models).

Contrast with the Stargate Project

The cheaper, more agile DeepSeek contrasts markedly with the recently announced Stargate Project. Launched with some fanfare at the White House on 21st January 2025, Stargate is a strategic AI joint venture that merges private sector innovation (led by OpenAI, Oracle, and SoftBank and investment firm MGX) with US national interests to reshore critical AI capabilities. It is hoped that this significant investment in AI infrastructure will create 100,000 new jobs and drive economic growth. Furthermore, by accelerating research and development, the US is looking to throw down the gauntlet to China in the race for global AI dominance – and that challenge seems now to have been accepted.

DeepSeek’s achievement partly reflects a geopolitical response to US export restrictions, raising questions about the country’s long-term AI leadership versus strategic positioning. Crucially, this disruption also highlights the need for smarter, not bigger AI investments – which will speed up AI model training, leading to faster breakthroughs and broader innovations (Altman has already stated that it's "legit invigorating to have a new competitor"). Ultimately it's always a good thing for users to have more choice and AI adoption will continue to increase.

Governance and security

The challenge of DeepSeek for companies isn't just cost optimization – it's governance. While open-weight models offer great customization potential, they also demand robust model validation frameworks. Added to this, some models are developed with levels of censorship, particularly concerning sensitive political topics, cultural issues, or content that might be considered inappropriate – or which go against Chinese government policies. These open-weight models also lack built-in security certifications, placing the burden of compliance on the deploying organization.

By contrast, OpenAI supports customers’ compliance with privacy laws (GDPR and CCPA), and has attained CSA STAR Level 1 and SOC 2 Type 2 compliance. Additionally, before deploying new models, OpenAI publishes safety research, conducts external read teaming, and performs frontier risk evaluations to assess high-stakes AI risks.

So the deploying organization still bears responsibility

Open-weight models like Llama and DeepSeek offer greater transparency, which is beneficial for audits and research; but, as stated above, they shift more of the burden of compliance, security, and risk mitigation onto the deploying organization.

LLMs – both open and closed source – must be validated at deployment, during model updates, when system prompts change, and continuously, as user interactions evolve. This includes testing for accuracy, bias, and security vulnerabilities. Just because today's model exhibits no geopolitical biases, how will companies ensure the next version remains neutral? So, it’s important to think of model validation frameworks as the unit test coverage for GenAI.

CIOs and tech leaders must implement governance and validation frameworks that combine qualitative policy reviews (e.g. fairness and compliance audits) with quantitative benchmarks (e.g. accuracy, security, and robustness metrics) to ensure their GenAI solutions remain compliant, secure, and aligned with business objectives.

At Synechron, our testing of DeepSeek's 32B parameter model has revealed promising capabilities and important considerations for bias mitigation and compliance standards before it’s rolled out more broadly – and these are things we’ll be examining in forensic detail over the coming months.

A call to action: Establish model validation frameworks

Organizations need to start planning their AI infrastructure now. With a focus on scalable computing resources, talent development, and strategic partnerships, business leaders can future proof their businesses, moving fast when new technology and models emerge to gain a competitive edge.

Successful AI implementation in 2025 will also mean balancing innovation with practical governance. Organizations and technology leaders should focus on three key areas: establishing clear model validation frameworks for open-weight models; developing robust compliance protocols; and creating flexible architectures that can adapt to this rapidly evolving landscape.

The Author

Ryan Cox, Head of AI
Ryan Cox

Head of AI

Ryan Cox is a Senior Director and Synechron’s Co-Head of Artificial Intelligence. We partner with companies to explore the potential of AI technology to revolutionize their business. Synechron's AI practice specialises in large language models, generative AI technologies, AI strategy and architecture, and AI research and development. We ensure AI systems and solutions deployed at our clients' sites are ethical, safe and secure. Contact Ryan on LinkedIn or via email

See More Relevant Articles