The DeepSeek Shock
The moment the market realized frontier AI capability was no longer geographically exclusive.
Background
By late 2024, the prevailing assumption in Western AI was that frontier capability required frontier capital. OpenAI had raised $6.6B. Anthropic had secured billions from Amazon and Google. NVIDIA's market cap exceeded $3 trillion on projected GPU demand. The scaling hypothesis, that performance tracked compute spend, underpinned a trillion-dollar infrastructure buildout.
This assumption was never unchallenged. Chinese AI labs had been publishing competitive research for years. Alibaba's Qwen, Baidu's ERNIE, and Zhipu AI's GLM models were all tracking Western benchmarks at lower price points. The open-source ecosystem, driven by Meta's Llama releases, Mistral in France, and a growing community of independent researchers, had been steadily closing the gap between proprietary and open models.
What US export controls on advanced NVIDIA chips did was force Chinese labs to prioritize efficiency over scale. DeepSeek, a research lab spun out of the quantitative hedge fund High-Flyer and founded by Liang Wenfeng with $700M in backing, was among the most methodical. Operating under hardware restrictions, the team developed architectural innovations, particularly Mixture-of-Experts routing, that reduced training costs by orders of magnitude.
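The efficiency gain from Mixture-of-Experts comes from sparse activation: a router sends each token to only a few of the layer's experts, so compute per token scales with the number of experts selected rather than the total. A minimal sketch of top-k routing follows; all sizes, weights, and names (`moe_forward`, `N_EXPERTS`, `TOP_K`) are illustrative, not DeepSeek's actual architecture.

```python
# Minimal sketch of top-k Mixture-of-Experts routing. Every dimension and
# weight here is hypothetical; this shows the general mechanism only.
import math
import random

random.seed(0)

N_EXPERTS = 8   # experts available in the layer
TOP_K = 2       # experts actually run per token
D_MODEL = 4     # hidden dimension (tiny, for illustration)

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

# Each "expert" is a small feed-forward weight matrix; the router scores them.
experts = [rand_matrix(D_MODEL, D_MODEL) for _ in range(N_EXPERTS)]
router = rand_matrix(N_EXPERTS, D_MODEL)

def moe_forward(x):
    """Route one token vector through its top-k experts only."""
    scores = matvec(router, x)                         # one logit per expert
    top = sorted(range(N_EXPERTS), key=lambda i: scores[i])[-TOP_K:]
    exps = [math.exp(scores[i]) for i in top]
    gates = [e / sum(exps) for e in exps]              # softmax over selected experts
    out = [0.0] * D_MODEL
    for g, i in zip(gates, top):
        for j, y in enumerate(matvec(experts[i], x)):
            out[j] += g * y                            # gated sum of k expert outputs
    return out

token = [random.gauss(0, 1) for _ in range(D_MODEL)]
out = moe_forward(token)
```

With 8 experts and top-2 routing, only a quarter of the expert parameters are exercised per token; scaled up, this is how a model can hold hundreds of billions of parameters while activating a small fraction of them on each forward pass.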
On December 26, 2024, DeepSeek released V3: a 671-billion-parameter MoE model that matched GPT-4-class models on standard benchmarks. The reported training cost was approximately $5.5 million, a fraction of publicly estimated training budgets for comparable Western models.
Three weeks later, they released R1.
Aftermath
DeepSeek R1 launched on January 20, 2025. It was a reasoning model with chain-of-thought capabilities comparable to OpenAI's o1. The model was open-weight. The chatbot app was free. Performance on math and coding benchmarks was competitive with the best proprietary offerings.
On January 27, the financial markets reacted. NVIDIA lost over $590 billion in market value in a single session. The broader NASDAQ shed approximately $1 trillion as investors reassessed whether the AI infrastructure buildout had been mispriced. It was among the largest single-day market capitalization declines in US stock market history.
The same day, DeepSeek's app surpassed ChatGPT as the most downloaded on the US App Store. It was the first Chinese AI product to reach that position.
Government responses followed within days. Italy's data protection authority banned DeepSeek on January 30, citing GDPR violations related to data storage in China. By February 1, NASA, the US Navy, the Pentagon, and other federal agencies had restricted it from government devices. The concerns centered on data sovereignty, not model capability.
Over the following year, DeepSeek continued publishing updates under increasingly open licenses. V3 moved to the MIT license in March. R1-0528 improved AIME 2025 math scores from 70% to 87.5% while halving hallucination rates. By December 2025, V3.2-Speciale was benchmarking alongside Gemini 3.0 Pro.
In February 2026, OpenAI sent a memo to the US House Select Committee on China alleging that DeepSeek employees had used obfuscated third-party routers to circumvent access restrictions and extract capabilities through model distillation.
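Distillation, the technique named in the allegation, trains a smaller "student" model to imitate a "teacher" model's output distribution rather than learning from raw labels alone. A minimal sketch of the standard distillation loss follows; the logits and temperature are arbitrary, and this illustrates only the generic technique, not any specific party's conduct.

```python
# Minimal sketch of knowledge distillation: the student is penalized by the
# KL divergence between its softened output distribution and the teacher's.
# All values are hypothetical, for illustration only.
import math

def softmax(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)   # teacher targets
    q = softmax(student_logits, temperature)   # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 0.5, -1.0]
loss_same = distillation_loss(teacher, teacher)        # matching teacher: zero loss
loss_diff = distillation_loss(teacher, [0.0, 0.0, 0.0])  # mismatch: positive loss
```

The temperature softens both distributions so the student also learns the teacher's relative rankings among unlikely outputs, which is why access to a model's full output distribution, rather than just its top answer, is what makes distillation effective.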
Industry Impact
The DeepSeek moment did not create the trends it exposed. Chinese AI labs had been progressing steadily. Open-source models had been gaining ground. Efficiency research was accelerating globally. What DeepSeek did was make the convergence visible and force the market to price it in.