At CXL DevCon 2025, XConn Technologies will showcase dynamic memory allocation powered by Compute Express Link (CXL) switch technology. This demonstration marks a significant step forward in memory flexibility, illustrating how CXL switching enables seamless, on-demand pooling and expansion of memory across diverse system architectures.
The milestone, achieved in collaboration with AMD, unlocks a new level of efficiency for cloud, artificial intelligence (AI), and high-performance computing (HPC) workloads. By dynamically allocating memory via the XConn Apollo CXL switch, data centers can eliminate over-provisioning, enhance performance, and significantly reduce total cost of ownership (TCO).
“Memory agility is the next frontier in computing, and this demonstration is a pivotal step toward delivering scalable, software-defined infrastructure for the most demanding AI and HPC environments,” said JP Jiang, Senior Vice President of Product Marketing and Management at XConn. “With CXL-enabled dynamic memory allocation, we’re showing how memory can be pooled and distributed in real time, unlocking efficiencies that static architectures simply can’t deliver.”
At the core of the demo is XConn’s Apollo CXL switch, the industry’s first to support both CXL 2.0 and PCIe 5.0 on a single chip. The switch enables terabyte-scale memory expansion with near-native latency and coherent memory access across CPUs, GPUs, and accelerators, including the 5th Gen AMD EPYC processors.
“CXL switching offers tremendous potential for the next generation of datacenter computing, especially in use cases like distributed shared memory, which can greatly enhance efficiency and reduce costs for data-intensive applications,” said Raghu Nambiar, corporate vice president, Data Center Ecosystems and Solutions, AMD. “By combining AMD EPYC processors with the XConn Apollo CXL switch, XConn is helping to deliver highly flexible and adaptive memory infrastructure for next-gen data centers.”
Key benefits of XConn’s CXL-based dynamic memory allocation on display during the event include:
- On-Demand Memory Pooling: Share and scale memory across systems to avoid over-provisioning.
- Low-Latency Performance: Coherent access ensures memory behaves like local DRAM.
- Terabyte-Scale Expansion: Ideal for key-value (KV) caching in AI inference, in-memory databases, and virtualization workloads.
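For context on how software can target pooled memory like this: on Linux, CXL-attached memory expanders are typically surfaced as CPU-less NUMA nodes, so applications can direct allocations to them with standard NUMA tooling (e.g., `numactl` or libnuma). The illustrative sketch below, which is not part of XConn's demo, simply enumerates the NUMA nodes the kernel exposes via sysfs; on a system with CXL memory online, the pooled capacity would appear as one or more additional node IDs.

```python
# Illustrative sketch (not XConn demo code): list the NUMA node IDs the
# Linux kernel exposes under sysfs. CXL-attached memory typically shows
# up here as extra, CPU-less nodes that software can bind allocations to.
import os
import re


def list_numa_nodes(sysfs_root="/sys/devices/system/node"):
    """Return sorted NUMA node IDs, or an empty list if sysfs is unavailable."""
    if not os.path.isdir(sysfs_root):
        return []  # non-Linux host or sysfs not mounted
    nodes = []
    for name in os.listdir(sysfs_root):
        m = re.fullmatch(r"node(\d+)", name)
        if m:
            nodes.append(int(m.group(1)))
    return sorted(nodes)


if __name__ == "__main__":
    print("NUMA nodes visible to the OS:", list_numa_nodes())
```

An application could then bind hot data structures to a local node and colder, capacity-hungry data (such as a large KV cache) to a CXL-backed node, for example with `numactl --membind=<node>`.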
CXL DevCon 2025 attendees can experience the live demo at the XConn booth and learn more about the breakthrough in the technical session, “Showcasing a CXL 2.0 Memory Pooling/Sharing System,” presented by XConn’s Jiang on April 29 at 2:40 p.m.
To learn more about XConn Technologies' dynamic memory allocation, visit the XConn website.
Related News:
XConn Unveils Apollo 2 Hybrid Switch: CXL 3.1 and PCIe 6.2 Compliant
XConn Technologies Gains PCI-SIG Compliance for PCIe 5.0 Switch