Two American AI labs released open-source models this week, each taking dramatically different approaches to the same problem: how to compete with China’s dominance in publicly accessible AI systems.
Deep Cogito dropped Cogito v2.1, a massive 671-billion-parameter model that its founder, Drishan Arora, calls “the best open-weight LLM by a U.S. company.”
Not so fast, countered The Allen Institute for AI, which just dropped Olmo 3, billing it as “the best fully open base model.” Olmo 3 boasts complete transparency, including its training data and code.
Ironically, Deep Cogito’s flagship model is built on a Chinese foundation. Arora acknowledged on X that Cogito v2.1 “forks off the open-licensed Deepseek base model from November 2024.”
The admission sparked criticism and debate over whether fine-tuning a Chinese model counts as American AI advancement, or whether it simply shows how far U.S. labs have fallen behind.
> best open-weight LLM by a US company
>
> this is cool but i’m not sure about emphasizing the “US” part since the base model is deepseek V3 https://t.co/SfD3dR5OOy
>
> — elie (@eliebakouch) November 19, 2025
Regardless, the efficiency gains Cogito shows over DeepSeek are real.
Deep Cogito claims Cogito v2.1 produces 60% shorter reasoning chains than DeepSeek R1 while maintaining competitive performance.
Using what Arora calls “Iterated Distillation and Amplification”—teaching models to develop better intuition through self-improvement loops—the startup trained its model in a mere 75 days on infrastructure from RunPod and Nebius.
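The mechanics of IDA are easiest to see in miniature. The sketch below is a toy illustration of the general amplify-then-distill loop, not Deep Cogito’s actual training code; `EchoModel`, `amplify`, and `ida` are hypothetical stand-ins for an LLM policy, an inference-time compute budget, and the training loop.

```python
# Toy sketch of an Iterated Distillation and Amplification (IDA) loop.
# This illustrates the general technique only; it is NOT Deep Cogito's
# training code. EchoModel is a hypothetical stand-in for an LLM policy.

class EchoModel:
    """Fake 'LLM': answers from a lookup table, falling back to a draft."""
    def __init__(self, knowledge=None):
        self.knowledge = dict(knowledge or {})

    def answer(self, prompt: str) -> str:
        return self.knowledge.get(prompt, f"draft answer to: {prompt}")

    def finetune(self, examples: dict) -> "EchoModel":
        # Distillation: internalize (prompt -> improved answer) pairs so
        # the next round produces them in a single fast pass.
        return EchoModel({**self.knowledge, **examples})


def amplify(model: EchoModel, prompt: str, budget: int) -> str:
    """Amplification: spend extra inference-time compute (here, naive
    self-refinement rounds) to beat the model's single-pass answer."""
    best = model.answer(prompt)
    for _ in range(budget):
        best = model.answer(f"improve: {prompt} | previous: {best}")
    return best


def ida(model: EchoModel, prompts: list, rounds: int = 3, budget: int = 2) -> EchoModel:
    for _ in range(rounds):
        # Slow, compute-heavy answers from the current model...
        targets = {p: amplify(model, p, budget) for p in prompts}
        # ...are distilled back into the fast policy for the next round.
        model = model.finetune(targets)
    return model


trained = ida(EchoModel(), ["What is 2 + 2?"])
print(trained.answer("What is 2 + 2?"))
```

The idea is that each distillation round bakes the slow, amplified behavior into the model’s fast path, which is one way a model could learn to reach good answers with shorter reasoning chains.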
If the benchmarks hold up, this would be the most powerful open-source LLM currently maintained by a U.S. team.
Why it matters
So far, China has been setting the pace in open-source AI, and U.S. companies increasingly rely—quietly or openly—on Chinese base models to stay competitive.
That dynamic is risky. If Chinese labs become the default plumbing for open AI worldwide, U.S. startups lose technical independence, bargaining power, and the ability to shape industry standards.
Open-weight AI determines who controls the raw models that every downstream product depends on.
Right now, Chinese open-source models (DeepSeek, Qwen, Kimi, MiniMax) dominate global adoption because they are cheap, fast, highly efficient, and constantly updated.
Image: Artificialanalysis.ai
Many U.S. startups already build on them, even when they publicly avoid admitting it.
That means U.S. firms are building businesses on top of foreign intellectual property, foreign training pipelines, and foreign hardware optimizations. Strategically, that puts America in the same position it once faced with semiconductor fabrication: increasingly dependent on someone else’s supply chain.
Deep Cogito’s approach—starting from a DeepSeek fork—shows the upside (rapid iteration) and the downside (dependency).
The Allen Institute’s approach—building Olmo 3 with full transparency—shows the alternative: if the U.S. wants open AI leadership, it has to rebuild the stack itself, from data to training recipes to checkpoints. That’s labor-intensive and slow, but it preserves sovereignty over the underlying technology.
In theory, if you already liked DeepSeek and use it online, Cogito will give you better answers most of the time. Via the API, the efficiency gains pay off directly: shorter reasoning chains mean fewer output tokens billed per reply of comparable quality.
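To make that concrete, here is a back-of-the-envelope calculation. The per-token price and token counts below are assumed round numbers for illustration, not published rates.

```python
# Back-of-the-envelope cost comparison. The price and token counts are
# assumed round numbers for illustration, not published rates.
PRICE_PER_M_OUTPUT_TOKENS = 2.00    # USD, hypothetical
BASELINE_REASONING_TOKENS = 10_000  # per query, hypothetical

def cost(tokens: float) -> float:
    return tokens / 1_000_000 * PRICE_PER_M_OUTPUT_TOKENS

cogito_tokens = BASELINE_REASONING_TOKENS * (1 - 0.60)  # 60% shorter chains

print(f"baseline reasoning cost: ${cost(BASELINE_REASONING_TOKENS):.4f}/query")
print(f"cogito reasoning cost:   ${cost(cogito_tokens):.4f}/query")
# At the same per-token price, 60% shorter chains cut reasoning spend by 60%.
```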
The Allen Institute took the opposite tack. The whole family of Olmo 3 models arrives with Dolma 3, a 5.9-trillion-token training dataset built from scratch, plus complete code, recipes, and checkpoints from every training stage.
The nonprofit released three model variants—Base, Think, and Instruct—with 7 billion and 32 billion parameters.
“True openness in AI isn’t just about access—it’s about trust, accountability, and shared progress,” the institute wrote.
Olmo 3-Think 32B is the first fully open reasoning model at that scale, trained on roughly one-sixth the tokens of comparable models like Qwen 3 while achieving competitive performance.
Image: Ai2
Deep Cogito secured $13 million in seed funding led by Benchmark in August. The startup plans to release frontier models up to 671 billion parameters trained on “significantly more compute with better datasets.”
Meanwhile, Nvidia backed Olmo 3’s development, with vice president Kari Briski calling it essential for “developers to scale AI with open, U.S.-built models.”
The institute trained on Google Cloud’s H100 GPU clusters, and says Olmo 3 required 2.5 times less compute than Meta’s Llama 3.1 8B.
Cogito v2.1 is available for free online testing here. The model can be downloaded here, but beware: at 671 billion parameters, running it locally takes far more than a single consumer GPU.
Olmo 3 is available for testing here, and the models can be downloaded here. These are more consumer-friendly: the 7-billion-parameter variants can run on a single high-end GPU, while the 32-billion-parameter versions need more substantial hardware.
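For readers who want to try a downloadable checkpoint, here is a minimal sketch of loading a smaller Olmo 3 model with Hugging Face’s `transformers` library. The repo ID is an assumption based on Ai2’s naming conventions; check the official release page for the exact identifier.

```python
# Minimal sketch of loading a smaller Olmo 3 checkpoint with Hugging Face
# transformers. The repo ID is an assumption based on Ai2's naming
# conventions; check the official release page for the exact identifier.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "allenai/Olmo-3-7B"  # hypothetical repo ID
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Why does open training data matter?", return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern works for the larger checkpoints: `device_map="auto"` shards the weights across whatever GPUs are available, though the 32B and 671B-class models need multi-GPU setups.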