Bugcrowd has unveiled its AI Pen Testing feature on the Bugcrowd Platform, aimed at assisting AI adopters in identifying prevalent security vulnerabilities before threat actors exploit them. This addition enhances Bugcrowd’s AI Safety and Security Solutions portfolio, complementing the previously announced AI Bias Assessment offering.
The conversational interfaces in Large Language Model (LLM) applications can be vulnerable to prompt injection, training data extraction, data poisoning, and other types of attacks. Bugcrowd AI Pen Tests are designed to uncover the most common flaws in these areas using a testing methodology based on its open-source Vulnerability Rating Taxonomy – which draws from the OWASP Top 10 for LLM Applications while adding other flaws reported by hackers on our platform.
Commoditized access to AI is revolutionizing how work is done in every industry. AI also presents new categories of potential security vulnerabilities, as reflected in President Biden’s Executive Order 14110 that calls for “AI red teaming” (methods unspecified) by all government agencies.
Many AI applications are highly integrated with other systems, amplifying risk by serving as a potential access point for wider infiltration by attackers. As generative AI becomes universally adopted, the expanded attack surface will require Bugcrowd’s unique brand of rigorous pressure testing to detect the new vulnerabilities that come along with it.
Pentesters are curated from a deep bench of trusted testers selected from the global hacker community for their skills and track record. The Bugcrowd Platform’s data-driven approach to researcher/hacker/pentester sourcing and activation, known as CrowdMatch AI, allows it to rapidly create and optimize crowds with virtually any skill set, to meet any risk reduction goal.
For over a decade, Bugcrowd’s “skills-as-a-service” approach to security has been shown to uncover more high-impact vulnerabilities than traditional methods for customers like T-Mobile, Netskope, and Telstra Corporation, while offering clearer line of sight to ROI. With unmatched flexibility and access to more than a decade of vulnerability intelligence data, the Bugcrowd Platform has evolved over time to reflect the changing nature of the attack surface – including the adoption of mobile apps, hybrid work, APIs, crypto, cloud workloads, and now AI.
“AI serves as a tool for enhancing attacker productivity, a target for exploitation of weaknesses in AI systems, and a threat due to the unintended security consequences stemming from its use,” said Dave Gerry, CEO of Bugcrowd. “With our new AI Pen Testing offering, our customers now have a solution to address any AI-based risks—ranging from standard tests for web apps, mobile apps, and networks to continuous, crowd-powered testing of complex apps, cloud services, APIs, IoT devices, and now AI systems, for maximum risk reduction.”
“The rapid adoption of LLMs in government and enterprise use cases has led to an unprecedented growth in attack surface that adversaries are already exploiting,” said Julian Brownlow Davies, VP of Advanced Services for Bugcrowd. “Bugcrowd’s world-class crowdsourced security platform with CrowdMatch AI has enabled us to bring to market high-impact AI/LLM penetration testing delivered by trusted testers with deep domain experience, providing safety and security to our customers against these evolving threats.”
For additional information on Bugcrowd AI Pen Tests and AI Safety and Security Solutions portfolio, visit the website here. Or visit Bugcrowd at the RSA Conference, May 6-9, 2024.
Related News:
Parasoft’s Software Testing Tools Boost Efficient Comprehensive Testing