September 17, 2025
Secrets of DeepSeek AI Model Revealed in Landmark Paper
The first peer-reviewed study of the DeepSeek AI model shows how a Chinese start-up firm made the market-shaking LLM for about $300,000
DeepSeek says its R1 model did not learn by copying examples generated by other LLMs.
The success of DeepSeek’s powerful artificial intelligence (AI) model R1, which sent the US stock market plummeting when it was released in January, did not hinge on being trained on the output of its rivals, researchers at the Chinese firm have said. The statement came in documents released alongside a peer-reviewed version of the R1 model, published today in Nature.
R1 is designed to excel at ‘reasoning’ tasks such as mathematics and coding, and is a cheaper rival to tools developed by US technology firms. As an ‘open weight’ model, it is available for anyone to download and is the most popular such model on the AI community platform Hugging Face to date, having been downloaded 10.9 million times.
The paper updates a preprint released in January, which describes how DeepSeek augmented a standard large language model (LLM) to tackle reasoning tasks. Its supplementary material reveals for the first time how much R1 cost to train: the equivalent of just US$294,000. This comes on top of the roughly $6 million that the Hangzhou-based company spent to make the base LLM that R1 is built on, but the total is still substantially less than the tens of millions of dollars that rival models are thought to have cost. DeepSeek says R1 was trained mainly on Nvidia’s H800 chips, which the United States barred from sale to China in 2023 under its export controls.
Rigorous review
R1 is thought to be the first major LLM to undergo the peer-review process. “This is a very welcome precedent,” says Lewis Tunstall, a machine-learning engineer at Hugging Face who reviewed the Nature paper. “If we don’t have this norm of sharing a large part of this process publicly, it becomes very hard to evaluate whether these systems pose risks or not.”
In response to peer-review comments, the DeepSeek team reduced anthropomorphizing in its descriptions and added clarifications of technical details, including the kinds of data the model was trained on, and its safety. “Going through a rigorous peer-review process certainly helps verify the validity and usefulness of the model,” says Huan Sun, an AI researcher at Ohio State University in Columbus. “Other firms should do the same.”
DeepSeek’s major innovation was to use an automated trial-and-error approach, known as pure reinforcement learning, to create R1. The process rewarded the model for reaching correct answers, rather than teaching it to follow human-selected reasoning examples. The company says that this is how its model learnt its own reasoning-like strategies, such as how to verify its workings, without following human-prescribed tactics. To boost efficiency, the model also scored batches of its own attempts against one another, rather than employing a separate algorithm to evaluate them, a technique known as group relative policy optimization.
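For readers who want the mechanics, the sketch below illustrates the group-relative scoring idea in Python. It is a minimal, hypothetical example, not DeepSeek’s implementation: the reward function, group size and random ‘answers’ are toy stand-ins, and a real trainer would use the resulting advantages to weight policy-gradient updates on the model’s output probabilities.

```python
import random
import statistics

def group_relative_advantages(rewards):
    # Score each attempt relative to its group: no separate value
    # network (critic) is needed, which is the efficiency gain
    # described above.
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # avoid dividing by zero
    return [(r - mean) / std for r in rewards]

# Toy reward: 1.0 if the sampled "answer" matches the target,
# 0.0 otherwise (a stand-in for checking a maths or coding answer).
def toy_reward(answer, target):
    return 1.0 if answer == target else 0.0

# Sample a group of G candidate answers for one prompt; random
# guesses stand in for model outputs in this sketch.
G = 8
target = 42
samples = [random.randint(40, 44) for _ in range(G)]
rewards = [toy_reward(s, target) for s in samples]
advantages = group_relative_advantages(rewards)

for s, r, a in zip(samples, rewards, advantages):
    print(f"answer={s}  reward={r:.1f}  advantage={a:+.2f}")
# Correct answers receive positive advantages and incorrect ones
# negative; a real training loop would scale each attempt's
# log-probability gradient by its advantage.
```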
The model has been “quite influential” among AI researchers, says Sun. “Almost all work in 2025 so far that conducts reinforcement learning in LLMs might have been inspired by R1 one way or another.”
Training technique
Media reports in January suggested that researchers at OpenAI, the San Francisco, California-based company that created ChatGPT and the ‘o’ series of reasoning models, thought DeepSeek had used outputs from OpenAI’s models to train R1, a method that could have boosted the model’s abilities while using fewer resources.
DeepSeek has not published its training data as part of the paper. But, in exchanges with referees, the firm’s researchers stated that R1 did not learn by copying reasoning examples that were generated by OpenAI models. However, they acknowledged that, like most other LLMs, R1’s base model was trained on the web, so it will have ingested any AI-generated content already on the Internet.
This rebuttal is “as convincing as what we could see in any publication”, says Sun. Tunstall adds that, although he can’t be 100% sure R1 wasn’t trained on OpenAI examples, replication attempts by other labs suggest that DeepSeek’s recipe for reasoning is probably good enough not to have needed it. “I think the evidence now is fairly clear that you can get very high performance just using pure reinforcement learning,” he says.
For researchers, R1 is still very competitive, Sun says. In a challenge to complete scientific tasks such as analyzing and visualizing data, known as ScienceAgentBench, Sun and colleagues found that although R1 was not first for accuracy, it was one of the best models in terms of balancing ability with cost.
Other researchers are now trying to apply the methods used to create R1 to improve the reasoning-like abilities of existing LLMs, as well as extending them to domains beyond mathematics and coding, says Tunstall. In that way, he adds, R1 has “kick-started a revolution.”
This article is reproduced with permission and was first published on September 17, 2025.