
AMD VP Anush Elangovan Vows Stronger Korea Developer Support

Dong-A Ilbo | Updated 2025.12.16
“South Korea is an extremely important and strategic partner in AMD’s AI journey. The size of our engineering team in Korea is steadily growing. AMD appreciates everyone who takes part in the open ecosystem, from developers to journalists, and will continue to provide sustained support.”

On December 12, Anush Elangovan, Senior Vice President and General Manager of AMD AI Software, shared these thoughts during a group interview with Korean media regarding ROCm 7, which was released in September. Today, NVIDIA stands at the center of AI accelerators, and at the heart of that position is CUDA, a parallel computing platform and programming model for NVIDIA GPUs. CUDA is the foundational software layer for building and running AI models on NVIDIA GPUs and for processing the data those models require.

Anush Elangovan, Senior Vice President and General Manager of AMD AI Software / Source=AMD

Competitor AMD has likewise provided its ROCm software since 2016 to enable AI workloads on AMD GPUs, and accelerated construction of an AMD-based AI ecosystem with the launch of ROCm 7 in September 2025. ROCm 7 officially supports the AMD Instinct MI350 platform released in June this year, and adds features such as distributed inference for efficient GPU resource utilization and capabilities for running enterprise AI. It is also integrated with open-source frameworks such as vLLM and PyTorch, and AMD is expanding partnerships to secure performance and broader support.
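In practice, the PyTorch integration means ROCm builds of PyTorch expose the same `torch.cuda` API surface (backed by HIP), so CUDA-style code generally runs unchanged on AMD GPUs. The following is a minimal sketch, assuming only a standard PyTorch installation; `describe_backend` is an illustrative helper, not part of any AMD or PyTorch API:

```python
import torch

# torch.version.hip is a version string on ROCm builds of PyTorch
# and None on CUDA-only or CPU-only builds; torch.version.cuda
# behaves symmetrically for CUDA builds.
def describe_backend() -> str:
    if torch.version.hip is not None:
        return f"ROCm/HIP {torch.version.hip}"
    if torch.version.cuda is not None:
        return f"CUDA {torch.version.cuda}"
    return "CPU-only build"

# Device selection is written the same way on NVIDIA and AMD GPUs:
# on ROCm, torch.cuda.is_available() reports the AMD GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(2, 3, device=device)
print(describe_backend(), tuple(x.shape))
```

This single-source property is what lets frameworks such as vLLM target NVIDIA and AMD hardware from largely the same codebase.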

Accordingly, AMD is collaborating with Korean AI companies such as Moreh and Mangoboost to build a user ecosystem based on AMD GPUs in the domestic market. The following is a summary of a small-scale roundtable held to address ROCm 7 developers’ questions and the latest support status.

ROCm is the foundation of AMD’s AI ecosystem, supporting every layer of innovation


AMD ROCm 7 is an open software stack including drivers, development tools, and APIs. In simple terms, it is the toolkit required to develop and run AI models on AMD graphics cards / Source=AMD

Q : Please explain the main changes in ROCm and the key updates from the initial release to the present.

A : ROCm is the foundation of the AMD-based AI ecosystem. Accordingly, AMD continues to support innovation and provides support for algorithms, operators, and standardization. ROCm 7 delivers three times the performance of the previous generation and introduces new approaches such as optimized attention algorithms for LLMs. It also supports new hardware such as the AMD Instinct MI350 and offers distributed inference, enabling management of enterprise-grade and cluster AI.

Q : What efforts are being made to strengthen compatibility in Windows environments, and what challenges lie ahead?

A : Through a direct partnership with Canonical, AMD now supports Ubuntu Linux, so ROCm 7 libraries can be used directly on Linux-based laptops and desktops. ROCm 7 is now supported on both Windows and Linux PCs.

At this point, Windows is the most important user segment, and AMD maintains about 80 builders to support both Windows and Linux. Anyone with an AMD GPU can start using ROCm immediately. AMD considers Windows support extremely important and continues to invest in it.

The AMD ROCm 7.0.0 software stack listing hardware and basic environment, runtime, compiler, development and management tools, and supported libraries / Source=AMD

Q : Technologies from Shark, Nod.ai’s software stack (Nod.ai was acquired by AMD in October 2023), and from Silo AI (acquired by AMD in July 2024) have been partially integrated into ROCm. When is full official integration expected, and what ongoing integration efforts can you share?

A : The Nod.ai team is working to improve the core machine learning compiler, built on the IREE compiler (which lowers a wide range of machine learning models through the unified MLIR intermediate representation so they can run efficiently on many hardware targets), with support for the Triton programming language and the LLVM compiler infrastructure (an open-source collection of reusable compiler and toolchain technologies). The Silo AI team is working on integration with enterprise AI; the recently added enterprise AI features came directly from the Silo AI acquisition.

Update cycle now every six weeks; core AI models supported on day one through early collaboration

Q : Please share the respective cycles for minor and major updates of ROCm software.

A : Through the previous version, updates shipped on a six-month cycle, but AMD now releases updates every six weeks. Going forward, AMD will deliver service improvements every six weeks and large-scale updates every six months.

Q : AMD has pledged and implemented day-zero model support, but from a developer’s standpoint, optimization may be more concerning than mere executability. What internal processes are in place to narrow the optimization gap?

A : To achieve day-zero support, AMD first collaborates with leading AI organizations. AMD works with organizations like OpenAI and Meta from the model-building stage, and as a result, these models run on AMD platforms at launch. Second, AMD releases support tailored to each model and optimizes it to deliver the best performance. Third, AMD provides tools to end users and customers with each subsequent update to identify issues and fine-tune performance.

List of recommended graphics cards for ROCm version 7.0.0 and above / Source=AMD

Q : On December 2, a developer released a build of ROCm 7.1.1 on GitHub that supports RDNA 2 (the AMD Radeon RX 6000 series). Are there any plans to officially support older GPUs such as RDNA 2 in ROCm 7?

A : ROCm is an open-source platform, and AMD supports the developer community. Any developer can modify functions and other components according to their needs, and if there is sufficient demand among developers, AMD is willing to add such support to the official version. However, AMD needs to review the specific details of the case you mentioned, and if user demand is confirmed, inclusion in the release process will be considered.

Earlier this month, the official PyTorch blog published documentation on conducting MoE pre-training using 1,024 AMD Instinct MI325 GPUs / Source=PyTorch

Q : Earlier this month, the official PyTorch blog (an open-source machine learning library and deep learning framework) carried a post about pre-training using 1,024 AMD GPUs. The article mentions pipeline evolution, expanded kernel support, and plans to support next-generation hardware such as AMD Instinct MI450, but does not address benefits such as total cost of ownership or performance per watt. Please explain AMD’s position on this.

A : Performance per watt and power efficiency are very important in AMD’s software strategy. Although the post does not cite performance per watt directly, AMD has confirmed roughly a 1.2-fold improvement over the previous generation in that regard. In terms of total cost of ownership and power, AMD continues to show strong results in both inference and training.

Furthermore, AMD emphasizes responsible AI and is working to innovate in open software strategies and energy sustainability. Regarding power consumption, anyone can approach optimization from an open-source perspective to improve power efficiency, and such improvements can be applied across the various fields in which GPUs are used.

AMD will spare no effort to expand the ROCm ecosystem


Mangoboost has a track record of submitting MLPerf inference results for the AMD Instinct MI300 and MI350 series / Source=Mangoboost

Q : Ultimately, for the AMD-based GPU ecosystem to expand, the number of ROCm experts and developers must grow. How is AMD supporting developers to build out the software ecosystem?

A : To increase the number of ROCm developers, AMD operates training programs and accessibility enhancement programs. For the developer cloud, AMD provides resources so developers can directly use AMD Instinct. AMD also collaborates with universities to offer ROCm-related education and continues to support various developer events so that developers can directly experience and adopt AMD technologies.

In Korea, AMD is working with companies such as Moreh, Clevi, and Mangoboost. These are very important partners in the AI development environment, and AMD collaborates closely with them on machine learning libraries and ROCm software. Mangoboost has contributed significantly to scaling training and inference based on AMD hardware and submitted MLPerf results this year using MI300X.

Q : Please also describe how AMD engages with, and what role it plays in, the broader developer ecosystem.

A : Anyone with a GitHub account can use AMD hardware through the AMD Developer Cloud, and many developers are already participating. AMD is investing heavily to expand the capabilities of the Developer Cloud and also hosts hackathons and other events.

“AMD AI Developer Meetup Korea with Moreh” held on December 10 / Source=Moreh

Q : Finally, the AMD ROCm developer ecosystem in Korea is still not very large. Nevertheless, many developers are contributing to this ecosystem and working to apply it to real-world work environments and industries. As the executive overseeing AI at AMD, what message would you like to convey to Korean developers participating in AMD’s developer ecosystem?

A : ROCm is more than just software; it is a philosophy. It embodies openness, inclusivity, collaboration, and the spirit of co-development. It is difficult to claim that AMD has every solution. However, together with developers, it is possible to build the future of AI. That future will be open and will enable collaboration.

In addition, South Korea is an extremely important and strategic partner in AMD’s AI journey. The size of the engineering team in Korea is steadily growing. AMD appreciates everyone who can join the open ecosystem, from developers to journalists, and will continue to provide sustained support.

Reporter Nam Si-hyun, IT Donga (sh@itdonga.com)
AI-translated with ChatGPT. Provided as is; original Korean text prevails.