StorONE has introduced ONEai, a fully automated AI solution designed for enterprise data storage. Developed in collaboration with Phison Electronics (8299TT), a leader in NAND flash innovation, ONEai integrates Phison’s aiDAPTIV+ AI technology into StorONE’s storage platform to streamline AI adoption and enable domain-specific insights from stored data. The solution features AI-optimized GPU and memory use, smart data placement, and native support for LLM inferencing and fine-tuning within the storage layer. ONEai simplifies deployment while reducing power, hardware, and operational costs, and empowers users to perform on-premises LLM training and inferencing on proprietary data.
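To make the on-premises angle concrete, the minimal sketch below shows what local LLM inferencing over proprietary data looks like in generic Python, using the open-source Hugging Face transformers library with a small stand-in model. This is an illustration of the workflow category only; it is not StorONE's ONEai interface, and the file path and prompt are hypothetical.

```python
# Minimal sketch of on-premises LLM inference over proprietary data.
# Uses the generic Hugging Face transformers API as a stand-in; this is
# NOT the ONEai interface. "gpt2" stands in for an enterprise model.
from transformers import pipeline

# Load an open-weight model entirely on local hardware; once the weights
# are cached, no data leaves the premises.
generator = pipeline("text-generation", model="gpt2")

# Proprietary context stays on-site; the prompt is assembled locally.
# (Hypothetical file name for illustration.)
with open("proprietary_notes.txt") as f:
    context = f.read()

prompt = f"Summarize the following internal document:\n{context}\n\nSummary:"
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])
```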
“ONEai sets a new benchmark for an increasingly AI-integrated industry, where storage is the launchpad to take data from a static component to a dynamic application,” said Gal Naor, CEO of StorONE. “Through this technology partnership with Phison, we are filling the gap between traditional storage and AI infrastructure by delivering a turnkey, automated solution that simplifies AI data insights for organizations with limited budgets or expertise. We’re lowering the barrier to entry to enable enterprises of all sizes to tap into AI-driven intelligence without the requirement of building large-scale AI environments or sending data to the cloud.”
“We’re proud to partner with StorONE to enable a first-of-its-kind solution that addresses challenges in access to expanded GPU memory, high-performance inferencing and larger capacity LLM training without the need for external infrastructure,” said Michael Wu, GM and President of Phison US. “Through the aiDAPTIV+ integration, ONEai connects the storage engine and the AI acceleration layer, ensuring optimal data flow, intelligent workload orchestration and highly efficient GPU utilization. The result is an alternative to the DIY approach for IT and infrastructure teams, who can now opt for a pre-integrated, seamless, secure and efficient AI deployment within the enterprise infrastructure.”
- Native LLM training and inference built directly into the storage stack; no external AI infrastructure required
- ONEai eliminates the need for a separate AI stack or in-house AI expertise (plug-and-play deployment), with full on-premises processing for complete data sovereignty and control over sensitive data
- High GPU efficiency minimizes the number of GPUs required, reducing power and operational costs. Integrated GPU modules reduce AI inference latency and deliver up to 95% hardware utilization
- Tailored for real customer environments to enable immediate interaction with proprietary data, ONEai automatically tracks and updates changes to data, feeding them into ongoing AI activities (pictured conceptually in the sketch after this list)
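The change-tracking behavior in the last bullet can be pictured as a delta scan: hash the stored files, detect what is new or modified, and hand only those deltas to the AI layer for re-indexing or incremental fine-tuning. The sketch below is a hypothetical stdlib-only illustration of that pattern, not StorONE's implementation; the storage path and the re-indexing hook are assumptions.

```python
# Hypothetical sketch of change tracking that feeds data updates into
# ongoing AI activities. Illustrative only; this does not represent
# StorONE's implementation or API.
import hashlib
import os

def snapshot(root: str) -> dict[str, str]:
    """Map each file path under root to a SHA-256 hash of its contents."""
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                hashes[path] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def changed_files(before: dict[str, str], after: dict[str, str]) -> list[str]:
    """Return paths that are new or whose contents changed between scans."""
    return [p for p, h in after.items() if before.get(p) != h]

# Usage: rescan periodically and queue only the deltas for the AI layer.
root = "/mnt/storage"  # hypothetical mount point
baseline = snapshot(root)
# ... stored data changes over time ...
current = snapshot(root)
for path in changed_files(baseline, current):
    # Stand-in for the real hook (e.g., re-embedding or fine-tuning feed).
    print(f"queueing {path} for re-indexing")
```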