Reports indicate that Huawei has developed the Ascend 920, the successor to its Ascend 910C AI chip, and plans to officially launch the next-generation chip in the second half of 2025.
According to the latest market updates, Huawei has already scheduled mass production of the new AI chip to begin in the second half of this year. Several industry experts have said that the Ascend 920 is poised to fill the gap left in the Chinese market by Nvidia's H20, which now falls under the latest U.S. export restrictions.
On April 15, the U.S. Department of Commerce announced new export licensing requirements for AI chips destined for China, including Nvidia's H20, AMD's MI308, and equivalent products. Under these export controls, vendors must now obtain a license before selling such chips into the Chinese market.
China is known to be a key market for the H20. With the new restrictions in force indefinitely, the Ascend 920 is expected not only to challenge Nvidia's dominance of the AI chip sector but also to provide substantial computing power for the development of China's AI industry, accelerating AI adoption and innovation across more domains.
According to SevenTech, the Ascend 920 comes with the following specifications and performance highlights:
Process Technology: The Ascend 920 will be manufactured using SMIC’s 6nm (N+3 node) process technology.
Compute and Memory Bandwidth: Powered by HBM3 memory modules, it will deliver 900 TFLOPS of BF16 performance and 4,000 GB/s of memory bandwidth (a rough roofline calculation based on these two figures follows after this list).
Architecture and Training Efficiency: Built on the same architectural design as the Ascend 910C, the new chip reportedly achieves 30%–40% better training efficiency than its predecessor, with overall performance expected to surpass Nvidia's H20.
Interface Support: The chip supports PCIe 5.0 and next-generation high-throughput interconnect protocols, allowing for optimized resource scheduling and cross-node collaboration.
Enhanced Features: The chip includes improved tensor operation accelerators and optimizations for Transformer and Mixture-of-Experts (MoE) models, making it more suitable for training larger and more complex AI models.
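To put the reported compute and bandwidth figures in perspective, the short sketch below applies standard roofline arithmetic to the two numbers quoted above. Only the 900 TFLOPS and 4,000 GB/s values come from the report; the ridge-point interpretation is generic roofline reasoning, not a claim from the source.

```python
# Back-of-the-envelope roofline balance for the reported Ascend 920 figures.
# Only the two constants below come from the report; everything derived from
# them is illustrative arithmetic, not a measured benchmark.

peak_bf16_tflops = 900.0    # reported BF16 throughput, in TFLOPS
mem_bandwidth_gbs = 4000.0  # reported HBM3 bandwidth, in GB/s

peak_flops = peak_bf16_tflops * 1e12       # FLOP/s
bandwidth_bytes = mem_bandwidth_gbs * 1e9  # bytes/s

# Arithmetic intensity (FLOPs per byte moved) at which a kernel stops being
# limited by memory bandwidth and becomes limited by compute.
ridge_point = peak_flops / bandwidth_bytes
print(f"Roofline ridge point: {ridge_point:.0f} FLOPs per byte")
# -> 225 FLOPs/byte: kernels with lower arithmetic intensity (for example,
#    bandwidth-heavy inference steps) would be bound by the 4 TB/s of HBM3
#    rather than by the 900 TFLOPS of BF16 compute.
```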
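The report does not describe what the Transformer and MoE optimizations actually consist of. For readers unfamiliar with the workload, the minimal sketch below shows the top-k expert gating at the core of a Mixture-of-Experts layer, the kind of sparse, data-dependent dispatch such accelerators need to handle well; all sizes and names in it are arbitrary illustration, and nothing is specific to the Ascend 920.

```python
import numpy as np

# Generic top-k MoE gating: each token is routed to a small subset of
# "expert" feed-forward blocks. Sizes are arbitrary; this only shows the
# dispatch pattern, not any vendor's implementation.
rng = np.random.default_rng(0)
tokens, d_model, n_experts, top_k = 8, 16, 4, 2

x = rng.standard_normal((tokens, d_model))                    # token activations
w_gate = rng.standard_normal((d_model, n_experts))            # router weights
experts = rng.standard_normal((n_experts, d_model, d_model))  # one matmul per expert

logits = x @ w_gate                                           # router scores
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
chosen = np.argsort(logits, axis=1)[:, -top_k:]               # top-k experts per token

out = np.zeros_like(x)
for t in range(tokens):
    gate = probs[t, chosen[t]] / probs[t, chosen[t]].sum()    # renormalize over chosen experts
    for g, e in zip(gate, chosen[t]):
        out[t] += g * (x[t] @ experts[e])                     # weighted sum of expert outputs

print(out.shape)  # (8, 16): output matches the input shape, but only 2 of 4 experts ran per token
```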