Perforce Software unveiled the results of its 2025 State of Data Compliance and Security Report. The study highlights troubling trends in AI and data privacy, showing widespread uncertainty about protecting sensitive data during AI model training and the growing risk of data privacy exposures.
The report reveals that 91% of organizations believe sensitive data should be allowed in AI training, yet 78% express high concern about theft or breach of model training data. This paradox, coupled with a lack of understanding that sensitive data allowed into an AI model can never be removed or secured, highlights the urgent need for clearer guidance and robust solutions that help organizations accelerate AI initiatives more securely. Given the confusion and concerns around safety, it is no surprise that 86% of organizations plan to invest in AI data privacy solutions over the next one to two years.
“The rush to adopt AI presents a dual challenge for organizations: Teams are feeling both immense pressure to innovate with AI and fear about data privacy in AI,” said Steve Karam, Principal Product Manager, Perforce. “To navigate this complexity, organizations must adopt AI responsibly and securely, without slowing down innovation. You should never train your AI models with personally identifiable information (PII), especially when there are secure ways to rapidly deliver realistic but synthetic data into AI pipelines.”
The report also reveals alarming trends related to sensitive data exposures in non-production environments: 60% of organizations experienced data breaches or theft in software development, AI, and analytics environments, an 11% increase from last year. Despite the known threat landscape, 84% of organizations surveyed still allow data compliance exceptions in non-production, perpetuating these exposures.
“These findings underscore the critical need for organizations to address the growing data security and compliance risks in non-production environments,” said Ross Millenacker, Senior Product Manager, Perforce. “There’s a perception that protecting sensitive data through measures like masking is cumbersome and manual. Too many organizations see the cure of masking data and implementing those steps as worse than the disease of allowing exceptions. But this leads to a significant vulnerability. It’s time to close these gaps and truly protect sensitive data.”
Perforce is prepared to help organizations address these risks. Earlier this month, Perforce introduced AI-powered synthetic data generation in the Delphix DevOps Data Platform. By uniting data masking, data delivery, and AI-powered synthetic data generation in a single platform, the offering ensures that privacy compliance and AI/ML model training can go hand in hand.
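The report and the product announcement stay at the conceptual level, but the underlying pattern (masking direct identifiers and substituting realistic synthetic values before data reaches AI pipelines) can be sketched in a few lines of Python. The example below is a hypothetical illustration built on the open-source Faker library, not the Delphix DevOps Data Platform's actual API; the record schema and the mask_record helper are assumptions made purely for the sketch.

```python
# Hypothetical sketch: replacing PII with masked or synthetic values before
# records enter an AI training pipeline. Illustrative only; this is not the
# Delphix DevOps Data Platform's API.
import hashlib
from faker import Faker

fake = Faker()

def mask_record(record: dict) -> dict:
    """Return a copy of the record with direct identifiers replaced.

    Names and emails are swapped for realistic synthetic values; the
    customer ID is pseudonymized with a one-way hash so referential
    integrity is preserved without exposing the original value.
    """
    return {
        "customer_id": hashlib.sha256(record["customer_id"].encode()).hexdigest()[:12],
        "name": fake.name(),    # synthetic, format-preserving stand-in
        "email": fake.email(),  # synthetic email, not traceable to the source
        "purchase_total": record["purchase_total"],  # non-sensitive field kept as-is
    }

if __name__ == "__main__":
    raw = {"customer_id": "C-1002", "name": "Jane Doe",
           "email": "jane.doe@example.com", "purchase_total": 182.40}
    print(mask_record(raw))  # safe to feed into model-training or analytics jobs
```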
The full Perforce 2025 State of Data Compliance and Security Report is available for download on the Perforce website.