Qualcomm AI200 and AI250 Redefine Data Center Inference for the AI Era

Qualcomm Technologies, Inc. unveiled its next-generation AI inference solutions for data centers: the Qualcomm AI200 and AI250 chip-based accelerators and racks. Leveraging the company’s NPU expertise, these solutions provide rack-scale performance and enhanced memory capacity for rapid generative AI inference, delivering exceptional performance per dollar and per watt. This marks a significant advancement in scalable, efficient, and flexible generative AI deployment across industries.

Qualcomm AI200 introduces a purpose-built rack-level AI inference solution designed to deliver low total cost of ownership (TCO) and optimized performance for large language model (LLM) and large multimodal model (LMM) inference and other AI workloads. It supports 768 GB of LPDDR per card for higher memory capacity at lower cost, enabling exceptional scale and flexibility for AI inference.
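
To put the 768 GB per-card figure in perspective, here is a quick back-of-the-envelope sizing sketch. The precision formats, bytes-per-parameter figures, and the share of memory reserved for KV cache and activations are illustrative assumptions, not Qualcomm specifications.

```python
# Back-of-the-envelope: how many model parameters fit in 768 GB of card memory?
# The bytes-per-parameter figures and the KV-cache/activation reserve below are
# illustrative assumptions, not Qualcomm specifications.

CARD_MEMORY_GB = 768
RESERVE = 0.20  # assume ~20% of memory held back for KV cache and activations

usable_gb = CARD_MEMORY_GB * (1 - RESERVE)  # ~614 GB available for weights

for fmt, bytes_per_param in [("FP16", 2.0), ("FP8/INT8", 1.0), ("INT4", 0.5)]:
    # GB divided by (bytes per parameter) yields billions of parameters
    billions = usable_gb / bytes_per_param
    print(f"{fmt}: ~{billions:.0f}B parameters in {usable_gb:.0f} GB of weights")
```

Under these assumptions a single card could hold roughly a 300B-parameter model at FP16, or well over 600B parameters at 8-bit precision, which is the kind of headroom the capacity claim implies.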

The Qualcomm AI250 solution will debut with an innovative memory architecture based on near-memory computing, providing a generational leap in efficiency and performance for AI inference workloads: greater than 10x higher effective memory bandwidth at much lower power consumption. It also enables disaggregated AI inference, allowing hardware to be utilized efficiently while meeting customer performance and cost requirements.
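
The focus on effective memory bandwidth follows from how generative decoding behaves: each output token must stream the model's weights (and a growing KV cache) from memory, so the single-stream decode rate is roughly bounded by bandwidth divided by bytes read per token. A minimal sketch of that roofline-style bound, with all numbers chosen for illustration rather than taken from AI250 specifications:

```python
def max_decode_tokens_per_sec(bandwidth_gb_s: float, bytes_per_token_gb: float) -> float:
    """Rough roofline bound for memory-bound autoregressive decode:
    each generated token reads the full weight set (plus KV cache) once."""
    return bandwidth_gb_s / bytes_per_token_gb

# Hypothetical 70 GB of weights (e.g., a ~70B-parameter model at 8-bit precision).
weights_gb = 70.0
base = max_decode_tokens_per_sec(500.0, weights_gb)   # ~7 tokens/s at 500 GB/s
x10  = max_decode_tokens_per_sec(5000.0, weights_gb)  # ~71 tokens/s at 10x bandwidth
print(f"baseline: {base:.0f} tok/s; with 10x effective bandwidth: {x10:.0f} tok/s")
```

Because the bound scales linearly with bandwidth, a greater-than-10x gain in effective memory bandwidth translates almost directly into decode throughput for memory-bound workloads.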

Both rack solutions feature direct liquid cooling for thermal efficiency, PCIe for scale-up, Ethernet for scale-out, confidential computing for secure AI workloads, and a rack-level power consumption of 160 kW.

“With Qualcomm AI200 and AI250, we’re redefining what’s possible for rack-scale AI inference. These innovative new AI infrastructure solutions empower customers to deploy generative AI at unprecedented TCO, while maintaining the flexibility and security modern data centers demand,” said Durga Malladi, SVP & GM, Technology Planning, Edge Solutions & Data Center, Qualcomm Technologies, Inc. “Our rich software stack and open ecosystem support make it easier than ever for developers and enterprises to integrate, manage, and scale already trained AI models on our optimized AI inference solutions. With seamless compatibility for leading AI frameworks and one-click model deployment, Qualcomm AI200 and AI250 are designed for frictionless adoption and rapid innovation.”

Our hyperscaler-grade AI software stack spans end-to-end from the application layer to the system software layer and is optimized for AI inference. The stack supports leading machine learning (ML) frameworks, inference engines, generative AI frameworks, and LLM/LMM inference optimization techniques such as disaggregated serving. Developers benefit from seamless model onboarding and one-click deployment of Hugging Face models via Qualcomm Technologies' Efficient Transformers Library and the Qualcomm AI Inference Suite. The software also provides ready-to-use AI applications and agents, comprehensive tools, libraries, APIs, and services for operationalizing AI.
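
For context, onboarding a Hugging Face model typically starts from the standard transformers flow sketched below. This is a generic illustration: the press release does not show the Efficient Transformers Library or AI Inference Suite APIs, so the sketch sticks to the vanilla Hugging Face interface and assumes only that deployment begins from an off-the-shelf checkpoint.

```python
# Generic Hugging Face onboarding flow using the standard transformers API.
# The press release describes a one-click path through Qualcomm's Efficient
# Transformers Library; that library's actual API is not shown in the source,
# so this sketch uses the vanilla transformers interface instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # stand-in for any Hugging Face causal LM checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Rack-scale AI inference is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The value proposition described in the release is that an already-trained model of this kind can be taken from checkpoint to optimized deployment on the AI200/AI250 stack without a bespoke porting effort.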

For more information about the Qualcomm AI200 and AI250, visit Qualcomm's website. The AI200 and AI250 are expected to be commercially available in 2026 and 2027, respectively. Qualcomm Technologies is committed to a data center roadmap with an annual cadence moving forward, focused on industry-leading AI inference performance, energy efficiency, and TCO.


About Author

Bio: Riley Wilson, a journalist from the bustling streets of Pittsburg, possesses a perspective shaped by her upbringing in the southern United States. Her eye for a story reflects an urban, capitalistic edge softened by southern charm. While her last career focused on community and small business, she is now taking the IT world by storm. Outside the office, Riley gives back by volunteering her time at community-building organizations and goes on regular hikes with her dog Wilson.