Breakthrough CXL Memory Solution Targets AI Inference Workloads
As organizations rapidly adopt large language models (LLMs), generative AI, and real-time inference applications, memory…
XConn Technologies announced a collaborative demonstration of its Compute Express Link® (CXL®) memory pool, showcasing…
XConn Technologies and ScaleFlux announced that they have optimized performance and achieved interoperability between XConn’s…
At CXL DevCon 2025, XConn Technologies will showcase dynamic memory allocation powered by Compute Express…
XConn Technologies has introduced the Apollo 2 hybrid switch, combining Compute Express Link (CXL) 3.1…
XConn Technologies announced that its PCIe 5.0-capable switch, XC51256 “Apollo,” has successfully completed testing at…
XConn Technologies announced that its PCIe Gen 5 switch, the XC51256 “Apollo,” has passed NVIDIA’s…
With a mission to accelerate AI in data centers and high-performance computing (HPC) applications,…
XConn Technologies announced its membership as a Contributor in the newly formed Ultra Accelerator Link™…
XConn Technologies (XConn) and MemVerge® announced they will demonstrate the industry’s first scalable CXL memory…