Executive Summary
This report provides a comprehensive analysis of the potential and current state of DeepSeek R2 optimization for Huawei's range of AI chips. DeepSeek R2, a prominent open-source AI model, has garnered significant attention for its advanced capabilities in coding and multilingual reasoning, coupled with a strong emphasis on efficiency. Huawei, in its pursuit of AI self-sufficiency, has developed the Ascend series of AI chips, offering a domestic alternative to established players. The synergy between DeepSeek's software prowess and Huawei's hardware capabilities holds considerable strategic importance, particularly within the context of the global AI landscape. While direct official announcements of DeepSeek R2 optimization for Huawei chips are currently limited in the available information, the existing collaboration around DeepSeek's earlier R1 model and the reported performance of Huawei's Ascend chips in inference tasks suggest a strong potential and ongoing effort in this direction. This report delves into the architecture and intended uses of DeepSeek R2, the capabilities of Huawei's Ascend chips, reported performance benchmarks, technical requirements for compatibility, community discussions, and the potential benefits and drawbacks of leveraging this technological combination.
Introduction
DeepSeek R2 has emerged as a noteworthy open-source AI model, distinguished by its sophisticated architecture and a broad spectrum of capabilities, particularly excelling in code generation and understanding across numerous programming languages, as well as demonstrating robust multilingual reasoning.1 Its design prioritizes both efficiency and capability, aiming to provide a powerful AI tool without demanding extensive computational resources.1 DeepSeek's commitment to open-source principles further enhances its appeal, fostering community-driven improvements and wider accessibility.1 The rapid succession of model releases from DeepSeek, including the accelerated launch of R2 3, indicates a proactive strategy to solidify its position in the rapidly evolving AI landscape and potentially address competitive pressures from other advanced AI models. This urgency in development suggests a keen focus on real-world performance and seamless integration with various hardware platforms.
Concurrently, Huawei has been making significant strides in the development of its Ascend series of AI chips, such as the 910C and the more recent 920.8 This initiative is a crucial component of China's broader strategy to achieve self-reliance in critical technologies, particularly in the face of increasing US sanctions.8 Reports indicate that Huawei's Ascend chips, especially the 910C variant, have demonstrated competitive performance in AI inference tasks when compared to industry-leading Nvidia GPUs like the H100.8 Moreover, Huawei is actively developing a comprehensive software ecosystem around its Ascend hardware, aiming to provide a robust and developer-friendly environment that could potentially challenge Nvidia's established CUDA dominance.8 The reported inference performance of the Ascend 910C, achieving a substantial portion of Nvidia H100's capabilities 10, signifies a viable alternative for inference workloads within China, creating a strong impetus for optimizing advanced AI models like DeepSeek R2 for this domestic platform.
The potential synergy between DeepSeek R2 and Huawei's Ascend chips carries significant strategic weight, particularly within the context of the ongoing technological competition between the US and China and China's determined pursuit of technological independence.3 Optimizing DeepSeek R2 for Huawei's chips could lead to the development of cost-effective AI solutions, reducing reliance on Western technology and fostering a more resilient domestic AI ecosystem.1 The US export restrictions on advanced AI chips 5 underscore the critical need for Chinese AI companies like DeepSeek to effectively utilize domestic hardware such as Huawei's Ascend series. The demonstrated success of DeepSeek's R1 model, which achieved notable performance even when trained on less powerful Nvidia chips 20, highlights the potential of algorithmic efficiency to compensate for certain hardware limitations, making the optimization of R2 for Huawei chips a particularly promising endeavor.
DeepSeek R2: Architecture and Capabilities
DeepSeek R2 is built upon a state-of-the-art transformer-based architecture, meticulously optimized for coding and computational tasks.1 This architecture incorporates several key highlights, including advanced transformer layers that implement multiple attention mechanisms, enabling a more nuanced understanding of code context.1 The model also features enhanced tokenization techniques that improve code comprehension and generation accuracy, along with an adaptive context window capable of handling complex, long-form code snippets with remarkable precision.1 A significant architectural innovation employed by DeepSeek R2 is the Mixture-of-Experts (MoE) system 3, which enhances model efficiency by activating only a subset of its parameters relevant to a specific task. This allows for improved performance without a proportional increase in computational cost. Additionally, DeepSeek R2 leverages a Multi-Head Latent Attention (MLA) mechanism 3, which compresses key-value matrices into smaller latent vectors, significantly reducing memory usage and enabling the processing of longer contextual information. This focus on architectural innovation suggests that DeepSeek R2 is designed for efficient scaling and operation, which could be advantageous when running on hardware with different performance profiles compared to the Nvidia GPUs it was initially developed on. Compared to its predecessor, R1, DeepSeek R2 represents a substantial leap in design and functionality, incorporating a refined training methodology and an expanded knowledge base, allowing it to learn more efficiently and with greater contextual comprehension.2
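To make the memory-saving idea behind MLA concrete, the following minimal PyTorch-style sketch shows how a hidden state can be projected down to a small latent vector that is cached in place of full per-head key/value tensors and re-expanded at attention time. All dimensions, layer names, and the omission of causal masking and positional embeddings are simplifying assumptions for illustration; this is not DeepSeek's actual implementation, and the MoE routing that activates only a subset of expert feed-forward blocks is left out entirely.

```python
import torch
import torch.nn as nn


class SimplifiedLatentAttention(nn.Module):
    """Illustrative sketch of latent key/value compression (MLA-style).

    Instead of caching full per-head keys and values, the hidden state is
    projected down to a small latent vector that is cached, then projected
    back up to keys and values when attention is computed. Dimensions are
    illustrative, not DeepSeek's configuration; masking is omitted.
    """

    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_down = nn.Linear(d_model, d_latent)  # compress to the cached latent
        self.k_up = nn.Linear(d_latent, d_model)     # expand latent back to keys
        self.v_up = nn.Linear(d_latent, d_model)     # expand latent back to values
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x, latent_cache=None):
        B, T, _ = x.shape
        latent = self.kv_down(x)                     # (B, T, d_latent): this is what gets cached
        if latent_cache is not None:
            latent = torch.cat([latent_cache, latent], dim=1)
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(latent).view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(latent).view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, -1)
        return self.out_proj(out), latent            # return the latent as the new cache
```

The practical payoff of this arrangement is that the inference cache grows with the small latent dimension rather than the full key/value width, which is what allows longer contexts at lower memory cost.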
DeepSeek R2 is intended for a wide array of use cases across various domains. In software development, it can accelerate code generation, understand and debug code across multiple programming languages, and assist in creating efficient data manipulation and analysis scripts.1 Its capabilities extend to generating technical documentation automatically and providing AI-powered coding assistance for developers of all levels.1 Researchers find DeepSeek R2 valuable for complex algorithm development, aiding in the creation of sophisticated machine learning algorithms and accelerating research by quickly generating computational prototypes.1 Beyond coding-centric tasks, DeepSeek R2 demonstrates potential in specialized domains such as healthcare, finance, and legal services, where accuracy and domain expertise are critical.36 Its scalable architecture makes it suitable for both small-scale projects and enterprise-level applications, offering a versatile AI solution for diverse needs.1 The broad applicability of DeepSeek R2, particularly its proficiency in multilingual reasoning and coding 3, underscores the value of optimizing its performance across various hardware platforms to facilitate wider adoption.
Independent benchmarks have consistently demonstrated DeepSeek R2's exceptional performance across various metrics, often outperforming other open-source and even proprietary AI models. For instance, DeepSeek R2 has shown high accuracy in generating production-ready code snippets and superior performance across a wide range of programming languages.1 It has also been reported to exhibit faster processing speeds and improved accuracy in complex reasoning tasks compared to earlier models.1 A key aspect of DeepSeek's approach is its focus on achieving this high level of performance with significantly lower computational resource requirements compared to models like GPT-4 or Claude.1 This emphasis on efficiency suggests that DeepSeek R2 might be inherently more adaptable to different hardware architectures, including those with potentially varying strengths compared to the Nvidia GPUs on which it was initially developed and benchmarked. The competitive performance of DeepSeek models, such as V3, against top proprietary models 29 highlights the importance of ensuring optimal performance across diverse hardware platforms to maintain its competitive edge in the rapidly evolving AI landscape.
Huawei's Ascend AI Chips
Huawei's Ascend series of AI chips represents a significant endeavor in China's pursuit of self-reliance in critical computing technologies. This series includes several key processors, such as the Ascend 910, 910B, 910C, and the anticipated 920.8 While the initial Ascend 910 was primarily designed for AI training, subsequent iterations like the 910C are increasingly optimized for inference workloads.8 These chips are indigenously designed and manufactured by Huawei's HiSilicon division, often in collaboration with Semiconductor Manufacturing International Corporation (SMIC), reflecting China's strategic push for independence in advanced semiconductor technology.10 The development of these different Ascend chip variants suggests that optimization strategies for AI models like DeepSeek R2 might need to be tailored to the specific strengths and potential limitations of each hardware iteration.
The architecture of the Huawei Ascend 910C chip incorporates approximately 53 billion transistors and is manufactured using SMIC's 7nm N+2 process technology.10 Reported tests conducted by DeepSeek researchers indicate that the Ascend 910C achieves around 60% of the inference performance of Nvidia's high-end H100 GPU.8 This performance is considered unexpectedly good for inference tasks, and there is potential for further improvements through manual optimizations, such as the use of hand-written CUNN (Compute Unified Neural Network) kernels.11 While the Ascend 910C demonstrates strong capabilities in inference, it is generally acknowledged that Nvidia currently maintains a lead in the domain of AI training, primarily due to its more mature and deeply integrated hardware and software ecosystem, including CUDA.10 However, Huawei is actively developing newer and more powerful Ascend chips, such as the 920, which aim to rival the performance of Nvidia's top-tier offerings, potentially bridging this gap in the future.11 The significant inference performance achieved by the Ascend 910C makes it a relevant platform for optimizing inference-heavy AI models like DeepSeek R2. The necessity for manual optimization through CUNN kernels suggests that specific software adaptations might be required to fully exploit the hardware's capabilities.
To support its Ascend series of AI chips, Huawei has been developing a comprehensive software ecosystem. This includes the Ascend Computing Language (AscendCL), which provides a set of APIs for managing various aspects of the hardware, such as device management, memory management, and model execution.8 Huawei has also developed its MindIE inference engine, which is designed to optimize the execution of AI models on Ascend hardware.8 Furthermore, Huawei has been working to ensure compatibility with popular AI development frameworks like PyTorch, which is reported to have support for Ascend chips.15 This compatibility facilitates the porting and deployment of AI models developed using PyTorch onto Huawei's hardware. Huawei's ModelArts Studio platform further simplifies this process by offering pre-optimized DeepSeek models, including a distilled version of R1, for inference on Ascend hardware.11 The growing software support for Huawei's Ascend chips, including PyTorch compatibility and the development of AscendCL and MindIE, lowers the technical hurdles for optimizing and deploying advanced AI models like DeepSeek R2 on this platform. Huawei's proactive offering of DeepSeek R1 on its cloud service indicates a clear strategic alignment and a demonstrated capability to support DeepSeek's AI models.
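As a rough illustration of what PyTorch compatibility on Ascend looks like in practice, the sketch below targets an Ascend NPU through Huawei's torch_npu adapter, which sits on top of the CANN/AscendCL driver stack. The package name, device string, and availability check reflect Huawei's published PyTorch adapter but are stated here as assumptions, since exact APIs can vary across toolkit versions.

```python
import torch

# Assumption: Huawei's torch_npu adapter is installed together with the
# CANN/AscendCL toolkit; importing it registers the "npu" device backend.
import torch_npu  # noqa: F401

# Fall back to CPU so the sketch still runs on machines without Ascend hardware.
device = torch.device("npu:0" if torch.npu.is_available() else "cpu")

x = torch.randn(4, 1024, device=device)
w = torch.randn(1024, 1024, device=device)
y = x @ w  # dispatched to Ascend kernels when an NPU is present
print(y.device, y.shape)
```

The same pattern, leaving the model code unchanged and swapping only the device target, is what lowers the porting barrier described above.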
Official Optimization Efforts for Huawei Chips
An examination of official announcements and publications from both DeepSeek and Huawei reveals a collaborative relationship, particularly concerning the deployment of DeepSeek's AI models on Huawei's hardware infrastructure.8 Huawei's cloud-computing division has partnered with AI infrastructure firms like SiliconFlow to make DeepSeek's models, specifically V3 and R1, available through Huawei's Ascend cloud service.12 Huawei itself has explicitly stated that the version of DeepSeek R1 offered on its ModelArts Studio platform is "Ascend-adapted," indicating optimization for Huawei's Ascend data center GPUs.11 This collaboration extends to integrating DeepSeek's models into Huawei's smartphones, blending Pangu and DeepSeek AI models within the Celia virtual assistant.52 While these announcements clearly demonstrate a partnership and optimization efforts for DeepSeek's earlier models, the available research material does not contain explicit official announcements from either DeepSeek or Huawei specifically detailing the optimization of DeepSeek R2 for Ascend chips. However, the strong foundation of collaboration established around R1 suggests a high likelihood of ongoing or future optimization initiatives for R2, especially given the strategic importance of this synergy for both entities and the broader Chinese AI ecosystem.
Huawei provides technical documentation outlining the process of building an inference system for DeepSeek R1 on its Huawei Cloud Flexus X instances, utilizing Ollama to deploy a distilled version of the model.53 This documentation indicates the compatibility of DeepSeek's AI models with Huawei's cloud infrastructure and the Ascend chips powering it. DeepSeek's Wikipedia page describes its software ecosystem, including libraries like hfreduce for asynchronous communication and HaiScale Distributed Data Parallel (DDP) for parallel training.7 While these tools could be leveraged for optimizing model performance on distributed systems potentially utilizing Huawei hardware, the documentation does not explicitly detail specific optimization strategies for DeepSeek R2 on Huawei Ascend chips. The availability of Huawei's documentation for deploying DeepSeek R1 provides valuable insights into the general process and potential software and hardware requirements for running DeepSeek models on their platform, which can serve as a basis for understanding the likely approach for optimizing R2.
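As an illustration of the Ollama-based deployment path that Huawei's documentation describes, the hedged sketch below queries a locally served distilled DeepSeek R1 model through Ollama's standard REST endpoint. The model tag deepseek-r1:7b and the localhost port are assumptions and would need to match whatever distilled variant and service configuration were actually set up on the Flexus X instance; an R2 deployment would presumably follow the same shape with a different model tag.

```python
import requests

# Sketch: send one prompt to an Ollama server that is already serving a
# distilled DeepSeek R1 model (assumed tag "deepseek-r1:7b") on the instance.
resp = requests.post(
    "http://localhost:11434/api/generate",   # Ollama's default REST endpoint
    json={
        "model": "deepseek-r1:7b",
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,                      # return one JSON object, not a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```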
Reported Performance and Compatibility
While direct performance benchmarks for DeepSeek R2 running on Huawei Ascend chips are not present in the provided snippets, there is information available regarding the performance of DeepSeek R1 on this hardware. DeepSeek's researchers reportedly tested Huawei's Ascend 910C chip and found that it achieves approximately 60% of the inference performance of Nvidia's H100 GPU.8 This indicates that Huawei's Ascend 910C is capable of delivering strong inference results for DeepSeek's models. The fact that DeepSeek R1, a sophisticated reasoning model, can run on the Ascend 910C with a significant fraction of the performance of a top-tier Nvidia card suggests a degree of inherent compatibility between DeepSeek's model architecture and Huawei's hardware.
Discussions within the AI community highlight the compatibility of DeepSeek models with Huawei's hardware architecture, noting the efforts to adapt the software for optimal performance. DeepSeek has reportedly implemented optimizations at a low level, using assembly-like PTX (Parallel Thread Execution) programming, which allows for more fine-grained control over the hardware, potentially including non-Nvidia architectures.19 Furthermore, DeepSeek's native support for Ascend processors and its PyTorch repository are mentioned as factors that facilitate a smoother transition from Nvidia's CUDA environment to Huawei's CUNN.15 The fact that Huawei offers an "Ascend-adapted" version of DeepSeek R1 on its cloud platform and that this involves a CUDA-to-CUNN conversion 11 further underscores the achieved compatibility, at least for the earlier model. This suggests that the groundwork for compatibility between DeepSeek's AI models and Huawei's Ascend chips is already in place, which would likely extend to DeepSeek R2.
Technical Requirements on Huawei Chips
Based on the available information regarding DeepSeek R1's deployment on Huawei hardware, it is possible to infer some of the potential technical requirements for running DeepSeek R2 optimally on Huawei chips. Given that DeepSeek R1 inference is reportedly running on the Huawei Ascend 910C 8, it is plausible that DeepSeek R2 would also require a similar class of Huawei Ascend processor for achieving comparable inference performance. The specific Ascend model might depend on the size and architectural complexity of DeepSeek R2, which boasts advancements over R1.2 While DeepSeek models have been trained on Nvidia's high-performance H800 GPUs 8, the inference phase often has different hardware demands, and Huawei's Ascend 910C appears to meet the requirements for DeepSeek's current top-tier reasoning model.
In terms of software requirements, running DeepSeek R2 on Huawei chips would likely necessitate the use of Huawei's MindIE inference engine 8, which is specifically designed for Ascend hardware. Optimizations at the kernel level, potentially leveraging Huawei's Ascend Computing Language (AscendCL) 47, could further enhance performance. While DeepSeek's models are likely developed using frameworks like PyTorch 15, the deployment on Huawei hardware might involve a conversion process from CUDA-based implementations (common in PyTorch for Nvidia GPUs) to their CUNN equivalents for Ascend chips.11 The availability of Huawei's documentation on deploying DeepSeek R1 using Ollama on their cloud platform 53 suggests that a similar approach, potentially with updated software components optimized for R2, would be employed.
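At the application level this conversion burden is smaller than it might sound, because most PyTorch inference scripts touch CUDA only through the device string and backend dispatch, while the kernel-level work (the CUDA-to-CUNN conversion and AscendCL optimizations mentioned above) happens beneath that layer. The hedged sketch below loads DeepSeek's published distilled R1 checkpoint with Hugging Face Transformers and runs generation on an Ascend device; the torch_npu import, the device naming, and the expectation that an eventual R2 release would be packaged the same way are all assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: torch_npu (Huawei's PyTorch adapter) is installed and registers
# the "npu" backend on top of the CANN/AscendCL stack.
import torch_npu  # noqa: F401

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # published distilled R1 checkpoint
device = "npu:0" if torch.npu.is_available() else "cpu"
dtype = torch.float16 if device != "cpu" else torch.float32

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype).to(device)

inputs = tokenizer("Summarize mixture-of-experts in one sentence.", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```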
Community Insights and Experiences
Discussions within online communities, particularly on platforms like Reddit's r/LocalLLaMA and r/DeepSeek, offer additional perspectives on the use of DeepSeek models with Huawei hardware. Users have noted that DeepSeek R1 is indeed being run for inference on Huawei's Ascend 910C chips, often facilitated through Huawei's cloud infrastructure.18 There is also considerable anticipation and speculation within these communities regarding the potential release and performance of DeepSeek R2, with some members expressing the belief that Huawei's new generation of AI chips will play a crucial role in enabling DeepSeek to effectively compete with leading US AI models.26 The sentiment within these discussions suggests a strong interest in seeing DeepSeek R2 optimized for and running on Huawei's Ascend platform, driven by factors such as cost-effectiveness and the desire to support domestic technological advancements.
While the community discussions confirm the deployment of DeepSeek R1 on Huawei hardware and express enthusiasm for R2, the provided snippets do not contain specific user-generated performance benchmarks for DeepSeek R2 running on Huawei Ascend chips. The benchmarks that are available primarily focus on the performance of R1 on the Ascend 910C relative to Nvidia's H100.8 However, the active engagement of the community and the reported success of R1 on Huawei hardware imply a likely continuation of these efforts for the more advanced R2 model.
Potential Benefits and Drawbacks
Leveraging DeepSeek R2 on Huawei's Ascend chips presents several potential benefits. One of the most significant advantages is the potential for cost-effectiveness, particularly within the Chinese market, compared to relying on high-end Nvidia GPUs, which can be expensive and subject to supply constraints.1 This combination also offers the benefit of reduced reliance on Western technology, contributing to a more resilient and self-sufficient domestic AI ecosystem, which is a strategic priority for China.8 Furthermore, this alignment with China's national AI strategy could potentially lead to increased government support and investment in both DeepSeek and Huawei's AI initiatives.5 The close integration of DeepSeek's software with Huawei's specific hardware architecture could also lead to optimized performance tailored to the strengths of the Ascend chips.
However, there are also potential drawbacks to consider. While Huawei's Ascend chips have made significant progress, their performance, especially in training large AI models, might still have limitations compared to the most powerful Nvidia GPUs.10 There could also be software compatibility challenges or the need for specific and potentially time-consuming optimizations to ensure DeepSeek R2 runs smoothly and efficiently on Huawei hardware. Additionally, the long-term reliability and scalability of Huawei's AI chip ecosystem, while promising, are still being established compared to the more mature and widely adopted platform offered by Nvidia.10 Despite the improving inference performance of Huawei's Ascend chips 8, the established dominance of Nvidia in AI training and the maturity of its CUDA ecosystem might still make Nvidia the preferred choice for certain computationally intensive tasks or advanced research endeavors.
Conclusion and Future Outlook
Based on the available research, while there is no explicit official confirmation of DeepSeek R2 being fully optimized for Huawei's Ascend chips, the strong collaborative relationship between the two companies, particularly evident in the support for DeepSeek R1 on Huawei's cloud infrastructure 8, strongly suggests that optimization efforts for DeepSeek R2 are likely underway or planned. The reported inference performance of Huawei's Ascend 910C chip, achieving a significant portion of Nvidia H100's capabilities, provides a solid foundation for running advanced AI models like DeepSeek R2.
The strategic importance of this synergy for both companies, coupled with China's broader goals of technological self-reliance in the AI domain, further reinforces the likelihood of continued and deepening optimization efforts. As the AI industry continues to evolve at a rapid pace, the collaboration between leading AI model developers like DeepSeek and domestic hardware providers like Huawei will be crucial in shaping the future landscape of AI, particularly in regions seeking alternatives to established global players. Continued research, development, and open communication from both DeepSeek and Huawei will be essential to fully understand the extent and effectiveness of DeepSeek R2 optimization for Huawei's Ascend chips.
Table 1: Performance Comparison: Huawei Ascend 910C vs. Nvidia H100 (Inference)

| Chip | Reported relative inference performance | Notes |
| --- | --- | --- |
| Nvidia H100 | Baseline (100%) | Reference point in the reported DeepSeek tests |
| Huawei Ascend 910C | Approximately 60% of the H100 | Manufactured on SMIC's 7nm N+2 process with roughly 53 billion transistors; positioned primarily for inference workloads |

Note: This table summarizes the reported relative inference performance based on the provided research snippets.8 Specific workload benchmarks and power consumption data were not consistently available across the snippets.
Works cited
DeepSeek R2: The Best Open-Source Model - BytePlus, accessed April 22, 2025, https://www.byteplus.com/en/topic/397968
DeepSeek Launches New AI Model R2: A Leap Beyond R1 - BytePlus, accessed April 22, 2025, https://www.byteplus.com/en/topic/406685
DeepSeek R2 and the Dawn of a Multipolar AI World Order by Luca Moretti - 1950.ai, accessed April 22, 2025, https://www.1950.ai/post/deepseek-r2-and-the-dawn-of-a-multipolar-ai-world-order
What Are the Potential Challenges DeepSeek Might Face with the Early Release of R2, accessed April 22, 2025, https://deepseek.ai/blog/deepseek-r2-challenges
DeepSeek Fast-Tracks Launch Of R2 AI Model - StratNews Global, accessed April 22, 2025, https://stratnewsglobal.com/world-news/deepseek-fast-tracks-launch-of-r2-ai-model/
New DeepSeek R2 is Insane: AI Revolution - BytePlus, accessed April 22, 2025, https://www.byteplus.com/en/topic/406688
DeepSeek - Wikipedia, accessed April 22, 2025, https://en.wikipedia.org/wiki/DeepSeek
DeepSeek R1 Adds Huawei Ascend Support Shaking Up AI Hardware and Challenging Nvidia's Dominance - CTOL Digital Solutions, accessed April 22, 2025, https://www.ctol.digital/news/deepseek-r1-adds-huawei-ascend-support-shaking-up-ai-hardware-and-challenging-nvidia-dominance/
Deepseek Rushes to Launch New AI Model as China Goes All In - Sci En.tempo.co, accessed April 22, 2025, https://en.tempo.co/read/1979852/deepseek-rushes-to-launch-new-ai-model-as-china-goes-all-in
[News] DeepSeek Reportedly Reveals Huawei's Ascend 910C Reaches 60% of NVIDIA H100's Inference Power - TrendForce, accessed April 22, 2025, https://www.trendforce.com/news/2025/02/05/news-deepseek-reportedly-reveals-huaweis-ascend-910c-reaches-60-of-nvidia-h100s-inference-power/
Huawei adds DeepSeek-optimized inference support for its Ascend AI GPUs, accessed April 22, 2025, https://www.tomshardware.com/tech-industry/artificial-intelligence/huawei-adds-deepseek-inference-support-for-its-ascend-ai-gpus
Huawei, Moore Threads adopt DeepSeek's AI models - Tech in Asia, accessed April 22, 2025, https://www.techinasia.com/news/huawei-moore-threads-adopt-deepseeks-ai-models
Huawei-DeepSeek Alliance May Become Threat to Nvidia, Expert Warns | Nasdaq, accessed April 22, 2025, https://www.nasdaq.com/articles/huawei-deepseek-alliance-may-become-threat-nvidia-expert-warns
DeepSeek uses Huawei's Ascend 910C chip for AI inference - Tech With Muchiri, accessed April 22, 2025, https://techwithmuchiri.com/deepseek-uses-huaweis-ascend-910c-chip/
DeepSeek research suggests Huawei's Ascend 910C delivers 60% of Nvidia H100 inference performance | Tom's Hardware, accessed April 22, 2025, https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseek-research-suggests-huaweis-ascend-910c-delivers-60-percent-nvidia-h100-inference-performance
Huawei Ascend 910C offers 60% of Nvidia H100 performance: Report, accessed April 22, 2025, https://www.huaweicentral.com/huawei-ascend-910c-offers-60-of-nvidia-h100-performance-report/
DeepSeek highlights potential of Huawei Ascend 910C - DevX, accessed April 22, 2025, https://www.devx.com/daily-news/deepseek-highlights-potential-of-huawei-ascend-910c/
Investors discover there is no moat in AI (Deepseek R1) - Page 2 - Linus Tech Tips, accessed April 22, 2025, https://linustechtips.com/topic/1598652-investors-discover-there-is-no-moat-in-ai-deepseek-r1/page/2/
Deepseek 'clearly not interested' in scaling up — 160-person team focused on developing new models | Tom's Hardware, accessed April 22, 2025, https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseek-clearly-not-interested-in-scaling-up-160-person-team-focused-on-developing-new-models
DeepSeek's R2 AI Leap: Disrupting Markets and Raising Eyebrows Worldwide - OpenTools, accessed April 22, 2025, https://opentools.ai/news/deepseeks-r2-ai-leap-disrupting-markets-and-raising-eyebrows-worldwide
DeepSeek, Huawei, Export Controls, and the Future of the U.S.-China AI Race - CSIS, accessed April 22, 2025, https://www.csis.org/analysis/deepseek-huawei-export-controls-and-future-us-china-ai-race
DeepSeek Accelerates Launch of New AI Model R2 After Sucess of R1 - TECHi, accessed April 22, 2025, https://www.techi.com/deepseek-accelerates-launch-of-new-ai-model-r2/
DeepSeek accused of risking US security with data to China - Tech in Asia, accessed April 22, 2025, https://www.techinasia.com/news/deepseek-accused-risking-security-data-china
Nvidia's $589B DeepSeek rout - Hacker News, accessed April 22, 2025, https://news.ycombinator.com/item?id=42839650
Cover Story: DeepSeek Sets Up Race for Chinese Dominance in AI - Global Neighbours, accessed April 22, 2025, https://www.globalneighbours.org/cover-story-deepseek-sets-up-race-for-chinese-dominance-in-ai/
DeepSeek R2 Release Date Ideas? - Reddit, accessed April 22, 2025, https://www.reddit.com/r/DeepSeek/comments/1k4hz0t/deepseek_r2_release_date_ideas/
how does DeepSeek manage to do it without NVIDIA technologies? : r/LocalLLaMA - Reddit, accessed April 22, 2025, https://www.reddit.com/r/LocalLLaMA/comments/1ibgb9o/how_does_deepseek_manage_to_do_it_without_nvidia/
Huawei Pitches Ascend 910C AI Chip as Local Alternative to NVIDIA, accessed April 22, 2025, https://www.turtlesai.com/en/pages-1431/huawei-pitches-ascend-910c-ai-chip-as-local-altern
DeepSeek V3-0324 tops non-reasoning AI models in open-source first - AI News, accessed April 22, 2025, https://www.artificialintelligence-news.com/news/deepseek-v3-0324-tops-non-reasoning-ai-models-open-source-first/
Deepseek R2 is coming - Reddit, accessed April 22, 2025, https://www.reddit.com/r/DeepSeek/comments/1k4akt3/deepseek_r2_is_coming/
Reports Suggest DeepSeek Running Inference on Huawei Ascend 910C AI GPUs, accessed April 22, 2025, https://www.techpowerup.com/forums/threads/reports-suggest-deepseek-running-inference-on-huawei-ascend-910c-ai-gpus.331763/
Huawei Ascend 910B Accelerators Power Cloud Infrastructure for DeepSeek R1 Inference | TechPowerUp Forums, accessed April 22, 2025, https://www.techpowerup.com/forums/threads/huawei-ascend-910b-accelerators-power-cloud-infrastructure-for-deepseek-r1-inference.331994/
DeepSeek: Disrupting the AI Industry with Low-Cost, High-Performance Innovation, accessed April 22, 2025, https://blog.win-source.net/q-a/deepseek-disrupting-the-ai-industry-with-low-cost-high-performance-innovation/
Deepseek R2: BEST Opensource Model Will Change the World! Cheap, Fast, & Coming Soon! - YouTube, accessed April 22, 2025, https://www.youtube.com/watch?v=UC4sa_AdMns
DeepSeek's AI breakthrough bypasses Nvidia's industry-standard CUDA, uses assembly-like PTX programming instead : r/LocalLLaMA - Reddit, accessed April 22, 2025, https://www.reddit.com/r/LocalLLaMA/comments/1icaq2z/deepseeks_ai_breakthrough_bypasses_nvidias/
DeepSeek vs. OpenAI: What is DeepSeek? What does it do? | Mindflow Blog, accessed April 22, 2025, https://mindflow.io/blog/deepseek-vs-openai-what-is-deepseek-what-does-deepseek-do
DeepSeek's R2 Is Coming Sooner Than You Think And It Will Shock the World | Fello AI, accessed April 22, 2025, https://felloai.com/2025/02/deepseeks-r2-is-coming-sooner-than-you-think-and-it-will-shock-the-world/
DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence - GitHub, accessed April 22, 2025, https://github.com/deepseek-ai/DeepSeek-Coder-V2
DeepSeek - R2 - in just 2 days - according to the official announcement, accessed April 22, 2025, https://forum.cursor.com/t/deepseek-r2-in-just-2-days-according-to-the-official-announcement/82245
DeepSeek R2 vs Competition: Outperforming Kimi K1.5 - BytePlus, accessed April 22, 2025, https://www.byteplus.com/en/topic/386635
Timeline of DeepSeek, accessed April 22, 2025, https://timelines.issarice.com/wiki/Timeline_of_DeepSeek
New DeepSeek benchmark scores : r/LocalLLaMA - Reddit, accessed April 22, 2025, https://www.reddit.com/r/LocalLLaMA/comments/1jj3w03/new_deepseek_benchmark_scores/
Why Huawei & Alibaba are Surprise Winner of DeepSeek vs Nvidia Battle #NVDA #BABA #Deepseek ... - YouTube, accessed April 22, 2025, https://www.youtube.com/watch?v=hI5gLTVi_bk
Why Huawei & Alibaba are Surprise Winner of DeepSeek vs Nvidia Battle #NVDA #BABA #Deepseek ... - YouTube, accessed April 22, 2025, https://m.youtube.com/watch?v=hI5gLTVi_bk
DeepSeek-R2 may be released ahead of schedule next Monday : r/LocalLLaMA - Reddit, accessed April 22, 2025, https://www.reddit.com/r/LocalLLaMA/comments/1j8mrxj/deepseekr2_may_be_released_ahead_of_schedule_next/
Huawei introduces the Ascend 920 AI chip to fill the void left by Nvidia's H20 - Reddit, accessed April 22, 2025, https://www.reddit.com/r/DeepSeek/comments/1k3s0pp/huawei_introduces_the_ascend_920_ai_chip_to_fill/
Installation Description - Released Model Training Environment Setup - Released Model - ModelZoo - Ascend Documentation - Ascend Community, accessed April 22, 2025, https://www.hiascend.com/document/detail/en/ModelZoo/releasedmodel/rmtes/rmtes_00001.html
DeepSeek is running inference on the new home Chinese chips made by Huawei, the 910C, accessed April 22, 2025, https://www.reddit.com/r/LocalLLaMA/comments/1ic03lx/deepseek_is_running_inference_on_the_new_home/
DeepSeek and Huawei Collaborate to Develop Advanced AI and Cloud Technologies., accessed April 22, 2025, https://www.mfec.co.th/en/tech-talk/solution-and-services/deepseek-and-huawei-collaborate-to-develop-advanced-ai-and-cloud-technologies/
Huawei, SiliconFlow launch DeepSeek AI models on cloud - Tech in Asia, accessed April 22, 2025, https://www.techinasia.com/news/huawei-siliconflow-launch-deepseek-ai-models-cloud
Alibaba joins Microsoft, Amazon, and Huawei in offering DeepSeek - Cloud Tech News, accessed April 22, 2025, https://www.cloudcomputing-news.net/news/alibaba-joins-microsoft-amazon-and-huawei-in-supporting-deepseek-ai/
Huawei blending Pangu and Deepseek AI models in its smartphones, accessed April 22, 2025, https://www.huaweicentral.com/huawei-blending-pangu-and-deepseek-ai-models-in-its-smartphones/
Solution Overview_AI_Huawei Cloud - Huawei Cloud, accessed April 22, 2025, https://support.huaweicloud.com/intl/en-us/deepseek-aislt/deepseek_01.html
Huawei Is POWERING DeepSeek Now... What Happens? - YouTube, accessed April 22, 2025, https://www.youtube.com/watch?v=-7gOjty8Gso
Cover Story: DeepSeek Sets Up Race for Chinese Dominance in AI - Caixin Global, accessed April 22, 2025, https://www.caixinglobal.com/2025-03-03/cover-story-deepseek-sets-up-race-for-chinese-dominance-in-ai-102293734.html
r/DeepSeek - Reddit, accessed April 22, 2025, https://www.reddit.com/r/DeepSeek/new/
DeepSeek | 深度求索, accessed April 22, 2025, https://www.deepseek.com/
Will Microsoft Replace Electron? A Discussion of WebView2 | Huawei Developer Alliance, accessed April 22, 2025, https://developer.huawei.com/consumer/cn/blog/topic/03737379734280303