DDN Announces Compatibility with NVIDIA DGX H100 Systems

DDN® announced compatibility with the next generation of NVIDIA® DGX™ systems, each including eight NVIDIA H100 Tensor Core GPUs. NVIDIA DGX H100 supercomputers are the fourth generation of NVIDIA’s purpose-built AI systems and are designed to handle the most demanding training workloads, such as natural language processing and deep learning recommender models. These workloads require large data models and high-speed throughput to deliver breakthrough results, and pairing NVIDIA DGX systems with DDN’s A3I solutions is a proven combination for AI centers of excellence worldwide.

DDN AI400X2 storage appliance compatibility with DGX H100 systems builds on DDN’s field-proven deployments of DGX A100-based DGX BasePOD reference architectures (RAs) and DGX SuperPOD systems, which customers have leveraged for a wide range of use cases. Offered as part of DDN’s A3I infrastructure solution for AI deployments, the AI400X2 lets customers scale to support larger workloads with multiple DGX systems. DDN also supports the latest NVIDIA Quantum-2 and Spectrum-4 400Gb/s networking technologies. Validated with NVIDIA QM9700 Quantum-2 InfiniBand and NVIDIA SN4700 Spectrum-4 400GbE switches, the systems are recommended by NVIDIA in the newest DGX BasePOD RA and DGX SuperPOD. With double the I/O capability of the prior generation, DGX H100 systems further necessitate high-performance storage solutions like DDN’s AI400X2.

“The demand for scalable AI infrastructure continues to grow, as enterprises realize the power that AI delivers to transform their business,” said Dr. James Coomer, senior vice president for products at DDN. “We see more and more organizations moving from assessing AI to applying AI to deliver business results. These organizations are looking for proven infrastructure that integrates into their data center in a simple and efficient manner, which is exactly what NVIDIA DGX systems with DDN storage deliver.”

In addition to these on-premises deployment options, DDN is also announcing a partnership with Lambda to deliver a scalable data solution based on NVIDIA DGX SuperPOD with over 31 DGX H100 systems. Lambda intends to use the systems to allow customers to reserve between two and 31 DGX instances backed by DDN’s parallel storage and the full 3,200 Gb/s GPU fabric. This hosted offering provides rapid access to GPU-based computing without the commitment of a large data center deployment, along with a simple, competitive pricing structure. Lambda chose DDN as the backend storage for this project because of DDN’s established track record of successful DGX SuperPOD deployments and its expertise in storage at scale. Lambda will also sell DGX BasePOD and DGX SuperPOD with DDN A3I storage for customers looking to establish on-site deployments.

“As organizations continue to modernize around AI, they’re experiencing explosive demand around performance and data needs,” shared David Hall, head of high performance computing, Lambda. “To address that need, Lambda, as a market leader in the deep learning infrastructure space, is bringing NVIDIA DGX systems with DDN A3I storage into our reserved cloud offering. This provides our customers with a full-service experience coupled with industry-leading performance in a matter of weeks rather than months.”

Learn more about DDN’s new RAs, why efficient high performance parallel storage supplies a significant advantage for AI workflows, and Lambda’s GPU cloud by watching DDN’s on-demand video here.



