NVIDIA Maintains Dominance in AI Processors for Cloud and Data Center


NVIDIA maintained its dominant position in the global market for AI processors used in the cloud and in data centers in 2020, with an 80.6% share of global revenue, according to Omdia. NVIDIA generated cloud and data center AI processor revenue totaling $3.2 billion in 2020, up from $1.8 billion in 2019. The company continued to benefit from its supremacy in the market for GPU-derived chips, which currently represent the leading type of AI processor employed in cloud and data center equipment, including servers, workstations and expansion cards.

“NVIDIA in 2020 continued to capitalize on its strong incumbent position in GPU-derived chips to maintain its leadership position in cloud and data center AI processors,” said Jonathan Cassell, principal analyst, advanced computing, at Omdia. “With their capability to accelerate deep-learning applications, GPU-based semiconductors became the first type of AI processor widely employed for AI acceleration. And as the leading supplier of GPU-derived chips, NVIDIA has established itself and bolstered its position as the AI processor market leader for the key cloud and data center market.”

The market for AI processors is undergoing rapid growth, attracting a flood of suppliers vying to challenge NVIDIA’s leadership. Global market revenue for cloud and data center AI processors rose 79% to reach $4 billion in 2020. Revenue is expected to soar by a factor of nine to reach $37.6 billion in 2026, according to Omdia. During the past few years, competitive suppliers ranging from small startups to major semiconductor vendors have entered the AI processor market with offerings spanning their own GPU-based chips, programmable devices and new varieties of semiconductors designed specifically to accelerate deep learning.

Behind NVIDIA, Xilinx ranked second with its field-programmable gate array (FPGA) products, which are commonly used for AI inferencing in cloud and data center servers. Google ranked third; its Tensor Processing Unit (TPU) AI ASIC is employed extensively in its own hyperscale cloud operations. Intel ranked fourth, supplying its Habana proprietary-core AI ASSPs and its FPGA products for AI cloud and data center servers. AMD ranked fifth, offering GPU-derived AI ASSPs for cloud and data center servers.