SlashID has launched AI Identity Governance, introducing the first native governance capability for its identity access graph. The solution extends visibility, access controls, and lifecycle policies beyond traditional users and service accounts to include AI applications, agents, and MCP servers. This approach helps eliminate governance gaps and addresses the growing risk of Shadow AI as a source of unmanaged access to corporate data.
The release arrives after SlashID’s analysis of the April 2026 Vercel security incident, in which attackers compromised an employee’s Google Workspace account through a malicious OAuth 2.0 application originating from a third-party AI tool. Traditional governance platforms, built for SaaS applications with predictable lifecycles, cannot keep pace with AI tools. These tools are installed in seconds, inherit broad OAuth scopes, and often connect further downstream via MCP and agent frameworks.
“AI governance is fundamentally about identity and entitlements,” said Vincenzo Iozzo, SlashID’s Co-Founder. “Every time an employee authorizes a new AI assistant, connects an MCP server, or hands a task to an autonomous agent, they are effectively creating a new non-human identity with access to corporate resources. Security teams need the same visibility, policy enforcement, and lifecycle controls for those identities that they already have for users and service accounts — and they need it today, not after a year-long IGA re-platforming project.”
Enterprises are investing heavily in point solutions for AI security — DLP proxies, prompt firewalls, and CASB-style shadow AI discovery. These tools operate in isolation from the identity fabric, produce alerts without the context needed to act on them, and cannot answer the core governance question: which identities, human or non-human, can reach which resources through which AI applications? The result is that the same OAuth grant patterns that caused the Vercel breach remain unmanaged in most organizations.
SlashID’s AI Identity Governance solves these challenges with three core capabilities:
- Unified Visibility Across the AI Identity Surface: Continuous discovery of OAuth 2.0 grants issued to AI applications and MCP servers, along with shadow AI usage surfaced through the SlashID Browser Extension. It also covers models hosted on Amazon Bedrock, Azure OpenAI, and equivalent CSP-native services. The Access Graph models OAuth scopes as first-class edges, so security teams can see not just that a user connected to an AI app, but exactly which mailboxes, drives, calendars, or repositories that app can reach.
- Policy-Based Access Control for AI Applications and Agents: Allows teams to permit, restrict, or disable access to specific AI applications, model providers, or agentic identities using any attribute in the graph. Define rules once — for example, preventing HR or finance personnel from authorizing consumer AI tools — and enforce them continuously across the joiner-mover-leaver lifecycle, with a full audit trail for SOC 2, ISO 27001, and HIPAA reporting.
- Continuous Segregation-of-Duties Enforcement: Security teams can express toxic combinations as saved Access Graph queries — for instance, “identities with access to regulated customer data that also hold active grants to external LLMs.” These queries can be scheduled to automatically trigger remediation workflows, such as revocation, MFA step-up, ticket creation, or Slack notifications. The same primitive powers a range of AI-specific SoD policies without requiring a separate product.
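To make the graph-centric approach concrete, the sketch below models OAuth scopes as first-class edges between identities and AI applications, and expresses the "toxic combination" example above as a query over those edges. This is a minimal illustration of the pattern described in the announcement, not SlashID's actual API: all names (`Grant`, `AccessGraph`, the scope strings) are assumptions for the example.

```python
from dataclasses import dataclass, field

# Hypothetical access graph in which each OAuth 2.0 grant is an edge
# from an identity (human or non-human) to an AI application, labeled
# with the scopes that grant carries. Illustrative only.

@dataclass(frozen=True)
class Grant:
    identity: str        # user, agent, or MCP server identity
    app: str             # AI application that received the grant
    scopes: frozenset    # OAuth scopes, e.g. "customers.read"

@dataclass
class AccessGraph:
    grants: list = field(default_factory=list)

    def add_grant(self, identity: str, app: str, scopes: set) -> None:
        self.grants.append(Grant(identity, app, frozenset(scopes)))

    def reachable_resources(self, identity: str) -> set:
        """Which (app, scope) pairs can this identity reach through AI apps?"""
        return {(g.app, s)
                for g in self.grants if g.identity == identity
                for s in g.scopes}

    def toxic_combination(self, sensitive_scope: str, external_apps: set) -> set:
        """SoD query: identities holding a sensitive scope that *also*
        hold an active grant to an external LLM application."""
        holders = {g.identity for g in self.grants
                   if sensitive_scope in g.scopes}
        external = {g.identity for g in self.grants
                    if g.app in external_apps}
        return holders & external

graph = AccessGraph()
graph.add_grant("alice@corp.example", "crm", {"customers.read"})
graph.add_grant("alice@corp.example", "chat-llm", {"conversations.write"})
graph.add_grant("bob@corp.example", "chat-llm", {"conversations.write"})

# Alice can reach regulated customer data AND holds a grant to an
# external LLM, so the SoD query flags her; Bob is not flagged.
print(graph.toxic_combination("customers.read", {"chat-llm"}))
```

A saved query like `toxic_combination` is the kind of primitive that, per the announcement, can be scheduled to drive remediation workflows (revocation, MFA step-up, ticketing) rather than producing standalone alerts.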
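The policy-based access control capability can be pictured as an attribute check evaluated at grant time. The snippet below sketches the example rule from the announcement (HR and finance personnel blocked from consumer AI tools); the attribute names and app categories are assumptions for illustration, not SlashID's schema.

```python
# Hypothetical attribute-based rule evaluated when a user attempts to
# authorize an AI application. App names and the "department" attribute
# are illustrative assumptions.

CONSUMER_AI_APPS = {"consumer-chatbot", "free-image-gen"}
RESTRICTED_DEPARTMENTS = {"hr", "finance"}

def authorize_grant(user_attributes: dict, app: str) -> bool:
    """Return True if the OAuth grant should be allowed under the rule:
    restricted departments may not authorize consumer AI tools."""
    if (app in CONSUMER_AI_APPS
            and user_attributes.get("department") in RESTRICTED_DEPARTMENTS):
        return False
    return True

assert authorize_grant({"department": "hr"}, "consumer-chatbot") is False
assert authorize_grant({"department": "engineering"}, "consumer-chatbot") is True
```

Defining the rule once against graph attributes, rather than per application, is what lets it apply continuously across the joiner-mover-leaver lifecycle.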
Unlike standalone AI security tools, SlashID’s AI Identity Governance operates at the identity graph layer, governing AI applications with the same primitives used for SaaS, cloud, and on-premise entitlements. It requires no changes to how employees use AI, no inline proxies, and no additional agents. The solution is available today to SlashID customers at no additional cost as part of the existing Identity Governance and Administration product, covering every major identity provider, cloud, and SaaS platform SlashID already integrates with.