Huawei’s artificial intelligence research arm, Noah’s Ark Lab, has denied allegations that its Pangu Pro Moe large language model copied elements from Alibaba’s Qwen 2.5-14B model.
The lab issued a statement on Saturday asserting that its AI model was independently developed and trained.
The rebuttal follows a report published on Friday by an anonymous group called HonestAGI on the code-sharing platform GitHub. The report alleged that Huawei’s model showed “extraordinary correlation” with Alibaba’s Qwen 2.5-14B, suggesting that it may have been derived through “upcycling” rather than being trained from scratch. The post sparked widespread debate in AI circles and Chinese tech media, raising questions about copyright infringement, misrepresentation of technical development, and the authenticity of Huawei’s reported training efforts.
In its response, Noah’s Ark Lab said the model was “not based on incremental training of other manufacturers’ models,” and emphasized that the team had made “key innovations in architecture design and technical features.” It also noted that the Pangu Pro Moe model is the first large-scale model fully trained on Huawei’s proprietary Ascend AI chips.
The lab added that its developers followed open-source license requirements when referencing any third-party code, but did not specify which models or components were used.
Alibaba has not commented on the accusations, and Reuters was unable to verify HonestAGI’s identity or contact its members.
The controversy comes amid a heated race among Chinese tech giants to dominate the domestic AI landscape. While Alibaba's Qwen 2.5-14B, released in May 2024, is consumer-focused and used in chatbot applications, Huawei's Pangu models are more commonly deployed in government, finance, and manufacturing sectors.
Huawei, which first entered the large model space in 2021, open-sourced its Pangu Pro Moe models on the Chinese developer platform GitCode in June, aiming to broaden adoption by offering developers free access.