Huawei Technologies is preparing for a significant escalation in its artificial intelligence (AI) chip ambitions, announcing plans to double the annual output of its flagship Ascend 910C processors to 600,000 units in 2026, up from 2025 levels.
The move comes as Washington continues to ratchet up export restrictions on Chinese chipmakers and their supply chains, thereby intensifying Beijing’s efforts to achieve semiconductor self-sufficiency. It was first reported by Bloomberg, which cited sources familiar with the situation.
Huawei is now projecting an overall output of up to 1.6 million Ascend-series dies next year, compared with about 1 million units this year, according to the report. Meeting that target will require Huawei to keep improving its production yields and chip designs.
The report came after Huawei’s Rotating Chairman Xu Zhijun said in a speech at Huawei Connect 2025 in Shanghai on September 18 that the company will introduce four new Ascend chips, including the Ascend 950PR in the first quarter of next year, the Ascend 950DT by the end of 2026, the Ascend 960 in 2027 and the Ascend 970 in 2028. He emphasized that this new Ascend series will utilize Huawei’s self-developed high-bandwidth memory (HBM) technology.
“Because of US sanctions, Huawei cannot rely on Taiwan Semiconductor Manufacturing Co (TSMC) for production. This is why a single Ascend chip still lags Nvidia in performance,” Xu said. “However, with more than 30 years of expertise in linking machines, we have invested heavily in super‑node interconnection technology. That breakthrough allows us to scale clusters to tens of thousands of graphics cards to create world‑class AI computers.”
He said the company’s strategy is to pursue cluster-level scale through super-nodes to compensate for the limitations of single-chip performance.
Some state-affiliated media said that the 910B and 910C chips have already matched Nvidia’s A100 in specific benchmarks, and that the upcoming Ascend 950 series will match Nvidia’s Blackwell series.
“Huawei plans to reach a very substantial scale of Ascend production by 2026, meaning that the Ascend 950 is not a distant concept but a tangible product that enterprises can buy and deploy,” Zhang Zikan, a Jiangsu-based technology columnist, writes in an article titled “Huawei’s 950 series can break Nvidia’s monopoly!”
“Technology companies can now plan long-term strategies without fear that geopolitical shocks will suddenly undercut their computing infrastructure,” he said. “A stable and reliable domestic chip supply gives these companies the confidence and resilience to expand.”
He added that Huawei’s announcement should be understood not as the launch of a single chip, but as the unveiling of a comprehensive solution and a clear roadmap for future development.
Widening scope of US curbs
On September 29, 2025, the Bureau of Industry and Security (BIS) issued an interim rule extending its Entity List and Military End-User List to cover affiliates of sanctioned firms. Any company at least 50% owned by a listed entity now automatically inherits the same restrictions. This closes a loophole that allowed subsidiaries and spin-offs to keep trading freely, widening licensing and denial rules to many more companies.
Analysts note that the broadened controls may prompt Semiconductor Manufacturing International Corp (SMIC) to explore more complex alternative methods for sourcing critical machinery parts and raw materials from overseas suppliers.
To understand why Huawei plans to pursue a cluster-level scale of AI chips, one must first understand SMIC’s technological bottleneck and the unique features of the Ascend 950 series.
Since 2019, the US has stopped ASML, the world’s largest supplier of lithography machines, from shipping extreme ultraviolet (EUV) lithography systems to China. With only deep ultraviolet (DUV) lithography, SMIC can produce 14-nanometer chips in a single exposure and chips as advanced as 7 nanometers using multiple exposures. By comparison, Nvidia’s AI chips are faster and more energy-efficient, as they are built on 3nm or 5nm processes, two generations ahead of China’s.
Additionally, Washington banned Nvidia from selling high-end AI chips, such as the A100 and H100, as well as the most advanced Blackwell series, including the B200, to China. It only allows the shipment of the H20, a downgraded version of the H100, to China. However, China accuses Nvidia of dumping low-end AI chips on its soil.
Now, Huawei wants to compensate for quality with quantity. It plans to enhance the interconnectivity of its chips using its self-developed HBM technology, SuperPod architecture and “UnifiedBus” (Lingqu in Chinese) interconnect protocol. In theory, a company could use Huawei chips to build an AI cluster as powerful as one built with Nvidia chips, although it would require more chips than if it used Nvidia’s.
“For clusters at the scale of tens of thousands of cards, interconnection technology is absolutely critical,” a Chinese writer at Eetrend.com writes. “The Ascend 950PR relies on Huawei’s SuperPod architecture and its proprietary ‘UnifiedBus’ interconnect protocol, which in theory can support clusters of more than 500,000 cards.”
He says Huawei’s strategy is to increase overall computing power through architectural innovation in response to the limitations of single-chip performance.
Huawei has recently unveiled its latest super-node products, the Atlas 950 SuperPod and Atlas 960 SuperPod, which support 8,192 and 15,488 Ascend cards, respectively. It has also introduced the Atlas 950 SuperCluster and Atlas 960 SuperCluster, with capacities exceeding 500,000 cards.
Huawei’s technical limitations
While Huawei claims its AI cluster will surpass Nvidia’s, many technology columnists remain skeptical, given that the current 910B and 910C chips still cannot match the performance of Nvidia’s A100 or H100 chips.
A columnist at idcsp.com, a technology news website, notes that while domestic substitution is feasible, there are apparent technical limitations for Huawei’s AI chips when competing with Nvidia chips.
“The Ascend 910B still relies on a 14nm process, far behind the 4nm node used for Nvidia’s H100, resulting in a significant efficiency gap,” he writes. “In terms of applications, the 910B offers a cost advantage in small and medium-scale AI inference but remains less competitive for large-scale training and high-performance computing, where the H100 dominates.”
The columnist concludes that Huawei’s 910B may have surpassed Nvidia’s A100 in efficiency and price-performance, but it still falls short in memory capacity when compared with the H100. He says all these factors mean higher migration costs for enterprises considering a switch from Nvidia to Huawei.
Dai Hong, another Chinese columnist, says the Ascend 910B delivers 5.7 tera operations per second (TOPS) per watt compared with the H100’s 15 TOPS per watt, and its memory bandwidth stands at 1 terabyte per second (TB/s) versus the H100’s 3 TB/s. He says Nvidia’s CUDA core architecture also contributes to its advantage in the AI chip market.
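The arithmetic behind the columnist’s comparison, and behind Huawei’s “quantity over quality” cluster strategy, can be sketched as follows. This is an illustrative back-of-the-envelope calculation using only the figures quoted above, which are the columnist’s numbers rather than official vendor specifications:

```python
# Figures quoted by columnist Dai Hong (illustrative, not official specs)
ASCEND_910B = {"tops_per_watt": 5.7, "mem_bw_tbps": 1.0}
NVIDIA_H100 = {"tops_per_watt": 15.0, "mem_bw_tbps": 3.0}

def gap(weaker: dict, stronger: dict, key: str) -> float:
    """How many times larger the stronger chip's figure is."""
    return stronger[key] / weaker[key]

efficiency_gap = gap(ASCEND_910B, NVIDIA_H100, "tops_per_watt")
bandwidth_gap = gap(ASCEND_910B, NVIDIA_H100, "mem_bw_tbps")

print(f"Energy-efficiency gap: {efficiency_gap:.2f}x")  # ~2.63x
print(f"Memory-bandwidth gap:  {bandwidth_gap:.2f}x")   # 3.00x

# At an equal power budget, matching one H100's throughput would take
# roughly "efficiency_gap" Ascend 910B chips -- which is why Huawei's
# roadmap leans on super-node interconnects to scale chip counts
# rather than on single-chip performance.
```

On these numbers, closing the gap with clusters means deploying roughly two to three times as many chips (and correspondingly more power and interconnect capacity), which is consistent with Huawei’s emphasis on SuperPod scale.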
Read: China accuses Nvidia, other US chipmakers of monopoly and dumping
Follow Jeff Pao on Twitter at @jeffpao3