A coalition led by the Cloud Security Alliance (CSA) and Noma Security—joined by Harmonic Security and Haize Labs—has launched RiskRubric.ai, now generally available as the industry’s first AI model risk leaderboard. The platform evaluates hundreds of LLMs across six key dimensions: transparency, reliability, security, privacy, safety, and reputation.
RiskRubric.ai is a free resource for AI builders and users who need to innovate rapidly with AI but struggle to establish confidence in AI security. Engineering teams face weeks-long approval bottlenecks, while security teams lack the specialized tools to properly evaluate AI-specific risk. RiskRubric.ai eliminates AI model risk guesswork by providing instant, actionable risk grades for the models enterprises most commonly deploy.
Addressing the AI Trust Crisis at Scale
RiskRubric.ai evaluates hundreds of leading AI models through rigorous testing protocols, including 1,000+ reliability prompts, 200+ adversarial security tests, automated code scans, and comprehensive documentation reviews. Each model receives objective scores from 0-100 across six risk pillars, rolling up to A-F letter grades that enable rapid risk assessment without requiring deep AI expertise.
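As an illustration of the kind of roll-up described above, the sketch below maps 0-100 pillar scores to A-F grades. The six pillar names come from the announcement, but the grade cutoffs and the unweighted averaging are assumptions for illustration only; RiskRubric.ai's actual thresholds and weighting are not published here.

```python
# Hypothetical roll-up of 0-100 pillar scores into A-F grades.
# The cutoffs (90/80/70/60) and the unweighted average are assumptions,
# not RiskRubric.ai's published methodology.

PILLARS = ("transparency", "reliability", "security",
           "privacy", "safety", "reputation")

def letter_grade(score: float) -> str:
    """Map a 0-100 score to an A-F letter grade (assumed cutoffs)."""
    if not 0 <= score <= 100:
        raise ValueError(f"score out of range: {score}")
    for cutoff, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return grade
    return "F"

def grade_model(scores: dict[str, float]) -> dict[str, str]:
    """Grade each pillar, plus an overall grade from the plain average."""
    missing = set(PILLARS) - scores.keys()
    if missing:
        raise KeyError(f"missing pillars: {sorted(missing)}")
    grades = {p: letter_grade(scores[p]) for p in PILLARS}
    average = sum(scores[p] for p in PILLARS) / len(PILLARS)
    grades["overall"] = letter_grade(average)
    return grades
```

A team could feed per-pillar scores into `grade_model` and read off the overall letter, e.g. a model scoring 95/85/72/60/50/88 across the six pillars would average 75 and grade "C" overall under these assumed cutoffs.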
“Every AI-forward organization faces two critical challenges: how to embed meaningful security into model selection, and how to confidently communicate AI risks to stakeholders. Without standardized risk assessments, teams are essentially flying blind,” said Niv Braun, CEO and Co-Founder of Noma Security. “RiskRubric.ai is an excellent starting point on the path to more mature and secure AI for both enterprise cybersecurity teams and AI innovators. Contextualized, evidence-based LLM risk intelligence will direct model selection so CISOs can more confidently speak to AI risk with concrete metrics, and engineering teams can accelerate AI innovation. This collaborative effort with CSA and our industry partners represents a watershed moment as we make AI model security a reality through accessibility and transparency.”
The project’s launch comes as AI agents rapidly proliferate across enterprises, with agentic models gaining increasing autonomy and access to critical business systems. Traditional security frameworks, designed for predictable, deterministic technology, have proven inadequate for the breakneck pace of AI development, where new models launch weekly and capabilities shift dramatically between versions. The project currently covers 150+ popular AI models, including GPT-4, Claude, Llama, Gemini, and specialized enterprise models, with new assessments added continually.
“The rapid adoption and evolution of AI has created an urgent need for a standardized model risk framework that the entire industry can trust,” said Caleb Sima, Chair of the CSA AI Safety Initiative. “RiskRubric.ai embodies CSA’s mission to deliver AI security best practices, tools and education to the cybersecurity industry at large. By providing transparent, vendor-neutral assessments free to the community, we’re ensuring that organizations of all sizes can make informed decisions about AI development and deployment. This isn’t only about identifying model risk, it’s about enabling responsible AI innovation at scale.”
Industry-Wide Collaboration for Comprehensive AI Model Assessment
The Cloud Security Alliance brought together leading talent in AI security to build and deliver RiskRubric.ai, each contributing unique expertise to the project. Noma Security is also working with partners and leading AI platform providers such as Hugging Face and Databricks on the RiskRubric.ai initiative, underscoring the importance of standardized AI safety for the benefit of the global AI community.
Noma Security, as the technical architect and AI security platform provider for RiskRubric.ai, brings deep expertise in AI and agent security to the project and forms the technical backbone of the LLM risk assessment engine. “We’ve taken our experience securing millions of AI interactions monthly across the world’s most complex Fortune 500 enterprises and channeled it into building RiskRubric.ai’s assessment methodology,” said Gal Moyal of Noma Security’s Office of the CTO. “Our platform doesn’t just identify risks, it provides the actionable intelligence teams need to mitigate AI risk through posture management and runtime protection, all in real time. By combining insights to create AI risk context, our assessments can help prioritize and address vulnerabilities and real-world attack patterns we’ve observed at scale.”
Michael Machado, RiskRubric.ai Product Lead, said, “Building RiskRubric.ai required solving a fundamental challenge: how do you create consistent, comparable risk metrics across wildly different AI architectures? We’ve developed an assessment framework that scales from evaluating a single model in minutes to continuously monitoring hundreds of models as they evolve. What excites me most is seeing security teams go from spending weeks on manual model reviews to getting comprehensive risk intelligence instantly. This isn’t just a leaderboard, it’s an AI ops transformation that aligns AI governance with AI innovation.”
Haize Labs, specializing in rigorous AI testing and red-teaming, contributed advanced adversarial testing methodologies to the project. “The black-box nature of modern AI systems demands sophisticated testing approaches that go beyond traditional security assessments,” said Leonard Tang, CEO of Haize Labs. “Our automated red-teaming capabilities integrated into RiskRubric.ai help uncover failure modes and vulnerabilities that might otherwise remain hidden until they’re exploited in production. This level of rigorous, dynamic testing is essential for building trust in AI systems.”
Harmonic Security, pioneers in AI Usage and Control, provided critical insights on privacy assessment and data leakage prevention. “Organizations are terrified that AI models will train on their sensitive data, and legacy DLP solutions struggle to address this,” said Alastair Paterson, CEO of Harmonic Security. “RiskRubric.ai’s privacy pillar leverages our expertise in detecting sensitive data exposure risks, helping organizations understand not just whether a model is secure, but whether it can be trusted with their most sensitive information. This granular approach to privacy assessment is crucial for maintaining compliance in an AI-driven world.”
To learn more, visit the RiskRubric.ai website.