State of Digital Quality Report 2025: AI Testing Adoption Has More Than Doubled

Applause has published its fourth annual State of Digital Quality in Functional Testing 2025 report, aimed at helping organizations enhance the quality of apps, websites, and digital experiences. The findings reveal that AI adoption in functional testing has more than doubled over the past year, though most companies emphasize the continued importance of keeping humans in the loop. The report also highlights that one-third of organizations use crowdtesting to achieve stronger digital quality outcomes.

Users remain in the driver’s seat when it comes to defining and measuring the goals of software development and QA departments. Customer satisfaction and customer sentiment/feedback are the top metrics to assess software quality, and user experience (UX) testing continues to be the most popular testing type. However, familiar challenges persist, including aggressive timelines and a lack of resources and stability across internal teams. The report’s findings are based on a recent survey of more than 2,100 software development and testing professionals around the world.

Key findings:

AI is becoming more deeply integrated into testing, but human oversight is paramount.
  • 60% of survey respondents reported that their organization uses AI in the testing process. By comparison, the 2024 AI survey found that only 30% were using the technology to build test cases monthly, weekly, or daily, and just under 32% were using it for test reporting.
  • Organizations leverage AI to develop test cases (70%), automate test scripts (55%), and analyze test outcomes and recommend improvements (48%). Other use cases include test case analysis and prioritization, autonomous test execution and adaptation, identification of gaps in test coverage and self-healing test automation.
  • AI and automation alone cannot provide the comprehensive, end-to-end test coverage that enterprises demand. One-third of survey respondents (33%) leverage crowdtesting, an effective approach to mitigating risk through human-in-the-loop (HITL) test coverage, particularly in the age of agentic AI.
Significant challenges in pre-release testing persist, despite AI efficiencies.
  • With the swift rise in AI adoption, 80% of respondents are challenged by a lack of in-house AI testing expertise.
  • Keeping up with rapidly changing requirements was the most prevalent testing challenge at 92%. Nearly a third of respondents lean on a testing partner to bridge this gap.
  • Additional testing obstacles include inconsistent or unstable environments (87%) and a lack of time for sufficient testing (85%).
Organizations are embracing a blended, shift-left approach to quality assurance (QA).
  • A significant shift is underway in the software development lifecycle (SDLC): While a previous survey found that 42% of respondents tested at only a single stage of the SDLC, this year just 15% limit testing to a single stage.
  • Over half of organizations are now addressing QA during the planning (54%), development (59%), design (52%) and maintenance (57%) phases of the SDLC. 91% of respondents reported that their team conducts multiple types of functional tests, including performance testing, user experience (UX) testing, accessibility testing, payment testing and more.
  • Of the 83% of organizations using multiple metrics to monitor digital quality, 67% use test case reporting and metrics to analyze trends and identify areas for improvement. 58% use the combined data to guide future development.

“Software quality assurance has always been a moving target,” said Rob Mason, Chief Technology Officer, Applause. “And, as our report reveals, development organizations are leaning more on generative and agentic AI solutions to drive QA efforts. To meet increasing user expectations while managing AI risks, it’s critical to assess and evaluate the tools, processes and capabilities we’re using for QA on an ongoing basis – before even thinking about testing the apps and websites themselves. ‘Are we meeting demands in terms of performance? Accuracy? Safety?’ Humans must be kept in the loop to answer these questions effectively.”

Additional findings:

Digital quality is customer-driven – UX, usability and user acceptance testing and metrics are preferred.

  • Customer satisfaction and customer sentiment/feedback are the top metrics for assessing software quality.
  • User experience (UX) testing is the most popular testing type at 68%. This type of testing leverages qualitative research to ensure digital experiences are intuitive, compelling and engaging.
  • Usability testing (59%), which measures ease-of-use, and user acceptance testing or UAT (54%) are also popular.
“Internal QA structure and consistency” was rated highly by respondents, though teams lack comprehensive documentation.
  • 69% of respondents rated their organizations’ structure and consistency around digital quality as falling into the “Excellence” and “Expansion” framework categories.
  • Yet, only 33% reported having comprehensive documentation for test cases and plans.
  • 84% of respondents find it challenging to reproduce defects with available test data – reproducing bugs is crucial to understanding, analyzing and fixing issues.

“The fact is, what we’ve long predicted has become our reality – machines can develop and validate software, to a degree,” continued Mason. “But, even agentic AI – especially agentic AI – requires human intervention to avoid quality issues that have the potential to do serious harm, given the speed and scale at which agents operate. The trick is to embed human influence and safeguards early and throughout development without slowing down the process, and we know this is achievable given the results of our survey and our own experiences working with global enterprises that have been at the forefront of AI integration.”

Applause’s State of Digital Quality content series provides insight into the latest software testing and QA practices and trends, including preferred methods and tools, as well as common challenges faced by software development and testing professionals worldwide.

Related News:

Applause Releases State of Digital Quality in Accessibility Survey

Applause Annual State of Digital Quality in AI Survey Released


About Author

Taylor Graham, a marketing grad with an innate drive to be a perpetual researcher, currently covers all things IT. Personally and professionally, Taylor is someone to know for her tenacity and encouraging spirit. When not working, you can find her spending time with friends and family.