AI Back-End Networks Projected to Expand the Data Center Switch Market

The data center switch market is expected to grow by 50 percent, driven by increased spending on switches deployed in AI back-end networks, according to the latest AI Networks for AI Workloads report from Dell’Oro Group. Today, data center switch spending is concentrated in front-end networks that connect general-purpose servers, but the rise of AI workloads requires building out a new back-end infrastructure. This has intensified the rivalry between InfiniBand and Ethernet as vendors vie for dominance in AI back-end networks. Although InfiniBand is projected to maintain its lead, Ethernet is anticipated to make significant gains, potentially capturing 20 additional percentage points of revenue share by 2027.

“Generative AI applications usher in a new era in the age of AI, standing out for the sheer number of parameters that they have to deal with,” said Sameh Boujelbene, Vice President at Dell’Oro Group. “Several large AI applications currently handle trillions of parameters, with this count increasing tenfold annually. This rapid growth necessitates the deployment of thousands or even hundreds of thousands of accelerated nodes. Connecting these accelerated nodes in large clusters requires a data center-scale fabric, known as AI back-end networks, which differs from the traditional front-end network used mostly to connect general-purpose servers.”

“This predicament poses the pivotal question: what is the most suitable fabric that can scale to hundreds of thousands and potentially millions of accelerated nodes while ensuring the lowest Job Completion Time (JCT)? One could argue that Ethernet is one speed generation ahead of InfiniBand. Network speed, however, is not the only factor. Congestion control and adaptive routing mechanisms are also important. We analyzed AI back-end network build-outs by the major Cloud Service Providers (such as Google, Amazon, Microsoft, Meta, Alibaba, Tencent, ByteDance, Baidu, and others) as well as various considerations driving their choices of the back-end fabric to develop our forecast,” continued Boujelbene.
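
The following toy model is a rough illustration of the JCT point above, not anything from the Dell’Oro report: it folds congestion control and adaptive-routing quality into a single hypothetical “effective utilization” factor and shows how a fabric that is one speed generation ahead can still deliver a worse Job Completion Time if congestion erodes its usable bandwidth. The message sizes, link speeds, and utilization figures are assumptions chosen purely for illustration.

```python
# Toy Job Completion Time (JCT) sketch: one training step modeled as compute
# followed by a gradient exchange. All figures are hypothetical, not from the report.

def step_jct(compute_s, bytes_to_move, link_gbps, effective_utilization):
    """JCT for one step = compute time + communication time, where congestion
    control and routing quality are folded into 'effective_utilization'."""
    link_bytes_per_s = link_gbps * 1e9 / 8
    comm_s = bytes_to_move / (link_bytes_per_s * effective_utilization)
    return compute_s + comm_s

# Hypothetical comparison: a faster fabric hampered by congestion vs. a slower
# fabric with good congestion control and adaptive routing.
fast_but_congested = step_jct(0.050, 2e9, 800, 0.35)   # 800 Gbps, 35% effective
slower_but_clean   = step_jct(0.050, 2e9, 400, 0.85)   # 400 Gbps, 85% effective

print(f"800 Gbps at 35% effective utilization: {fast_but_congested * 1000:.1f} ms per step")
print(f"400 Gbps at 85% effective utilization: {slower_but_clean * 1000:.1f} ms per step")
```

In this hypothetical case the slower but better-managed fabric finishes each step sooner, which is the substance of the argument that raw speed alone does not decide the choice of back-end fabric.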

Additional highlights from the AI Networks for AI Workloads Report:

  • AI networks will accelerate the transition to higher speeds. For example, 800 Gbps is expected to comprise the majority of the ports in AI back-end networks by 2025, within just two years of the latest 800 Gbps product introduction.
  • While most of the market demand will come from Tier 1 Cloud Service Providers, spending by Tier 2/3 providers and large enterprises is forecast to be significant, approaching $10 billion over the next five years. The latter group will favor Ethernet.

About the Report

Dell’Oro Group’s AI Networks for AI Workloads Advanced Research Report explores the use cases that favor InfiniBand vs. Ethernet in AI back-end networks, as well as the choices made by large Cloud Service Providers. The report also provides a five-year, worldwide market forecast by customer type, by InfiniBand vs. Ethernet, and by port speed.



About the Author

Taylor Graham is a marketing graduate and a perpetual researcher at heart, currently focused on all things IT. Personally and professionally, she is known for her tenacity and encouraging spirit. When not working, you can find her spending time with friends and family.