Meta, OpenAI, and Microsoft have announced plans to use AMD’s newest AI chip, the Instinct MI300X, signaling a shift away from Nvidia graphics processors. AMD’s high-end chip, set to start shipping early next year, could reduce the cost of developing AI models and challenge Nvidia’s dominance in the AI chip market. The MI300X features a new architecture with 192GB of high-performance HBM3 memory, enabling faster data transfer and accommodating larger AI models.
Meta and Microsoft, among the largest purchasers of Nvidia H100 GPUs in 2023, have already committed to using the MI300X for AI inference workloads. Microsoft CTO Kevin Scott said the company would offer access to MI300X chips through its Azure cloud service, and Oracle’s cloud will also incorporate the chips. OpenAI said it would support AMD GPUs in Triton, its software product used in AI research.
AMD aims to compete with Nvidia by improving its ROCm software suite to rival Nvidia’s CUDA, addressing a key shortcoming. The success of AMD’s chip will depend on whether companies accustomed to Nvidia are willing to invest the time and resources required to adopt another GPU supplier. AMD has not disclosed the MI300X’s pricing but has emphasized that it must be more cost-effective than Nvidia’s offerings.
While AMD projects around $2 billion in data center GPU revenue for 2024, it expects the total market for AI GPUs to reach $400 billion over the next four years, underscoring both the opportunity and the competition in the high-end AI chip market. AMD CEO Lisa Su believes the company does not need to beat Nvidia outright to succeed, but rather aims to capture a significant share of the expanding market.