There’s a massive paradigm shift underway in the testing landscape. Today, even a small change to an application forces test code to be rewritten, reviewed, and pushed through a Continuous Integration (CI) system. Consider a button moving to a new location on a website and requiring a coding change in the tests. What looks like a minor tweak snowballs into a two- or three-hour process, with a lot of overhead for something that should not require coding changes or a human in the loop. Worse yet, only a finite number of people in the organization can make those updates.
To realize the efficiency gains testing promises and to get more out of it, testing needs to be democratized so that it no longer demands the depth of technical skill it historically has. The days of heavy code and of reliance on traditional frameworks like Selenium and Playwright, which require a dedicated coding background and frequent code adjustments and are slow to respond to changes in applications, are behind us.
To really turn the current process on its head, the whole testing paradigm needs to be rethought. Product owners, automation engineers, and manual testers need to be able to work together at scale on a single platform, using streamlined interfaces that help them understand why things break in testing and what to do when they do. Just as the saying “if you don’t find a way to make money while you sleep, you will work until you die” is a call to automate and work smarter, the same is true of testing. Tests should run overnight and in the background, surfacing only the issues that require human intervention. When a test isn’t going to pass, someone can step in and make a change. AI can let 10 people do the work of 100, and the companies that don’t tap it to drive efficiencies will be left behind.
Quality Engineers’ Days Are Looking Different
Quality Engineers want to focus on mission-critical processes and flows, not on updating a button’s location in 15 tests to accommodate an aesthetic change to a website or application. AI now helps Quality Engineers get out of the coding weeds of repetitive tasks.
Moving ahead, as Quality Engineers work more closely with AI, they will find that prompt engineering skills (being able to “talk with AI” and figure out how best to prompt AI agents to get the best responses) will be at least as important as a coding background. Communicating clearly with an artificial system in natural language is something that people who have built test automation platforms and tests through traditional coding approaches will likely face a learning curve with. In this regard, junior testers with just a few years of coding experience may actually find the transition to conversing back and forth with agents easier than experienced testers and engineers do.
Engineers and testers who have used CSS selectors or XPath expressions to find elements will notice that AI is somewhat slower, because it takes time to reason about what it is selecting. Unlike Selenium, which relies on defined, deterministic selectors, AI reasons about what it sees on the page, much as a human would when navigating a tool or website. This humanlike approach will uncover far more issues from a user experience (UX) and performance standpoint than scripted approaches, and it will provide both cost savings and maintenance benefits.
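To make the contrast concrete, here is a minimal Playwright sketch (the URL, selectors, and button name are hypothetical). The first test is pinned to a specific CSS path and fails the moment the button moves; the second locates the element by role and accessible name, which survives layout changes. The AI-driven approaches described above go further still, reasoning over the rendered page rather than any pre-written locator, but the principle is the same: the less a test encodes about layout, the less it breaks when layout changes.

```typescript
import { test, expect } from '@playwright/test';

test('checkout button, pinned to page layout', async ({ page }) => {
  await page.goto('https://example.com/checkout');
  // Deterministic CSS path: breaks as soon as the button leaves the sidebar.
  await page.locator('#checkout > div.sidebar > button.buy-now').click();
  await expect(page).toHaveURL(/confirmation/);
});

test('checkout button, located by intent', async ({ page }) => {
  await page.goto('https://example.com/checkout');
  // Role- and text-based locator: keeps working as long as a "Buy now"
  // button exists somewhere on the page, wherever it is rendered.
  await page.getByRole('button', { name: 'Buy now' }).click();
  await expect(page).toHaveURL(/confirmation/);
});
```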
AI Debunks the “Quality Theatre” of Test Outcomes
Think about getting a car inspection sticker: have you ever had a mechanic say, “I’ll pass you, but you really should x…”, or turn off an indicator light without diagnosing or fixing the underlying problem? If something really isn’t right under the hood, giving your car a sticker and sending you off with an undiagnosed issue isn’t doing you any good. The same is true in testing. If an application’s performance varies or a test suite is unstable, and a report surfaces those issues but nobody digs into them, the software’s quality suffers, the site may not operate as it should, and the user experience won’t be great either. Testers should always be focused on driving quality, not just passing tests.
With AI assistants in the loop, organizations can do just that. Testers no longer have to make coding updates because engineering moved a button or changed a color, and engineering teams can ship twice as much while putting their resources where they matter most. Better yet, self-healing platforms can adapt to UI/UX changes in applications based on contextual information, and they won’t break the way traditional scripted approaches would. While tests may execute more slowly, they will uncover more issues that can be rectified, leading to much less time spent on maintenance and future test creation. The reduction in test maintenance costs can be significant; GE Healthcare, for example, reduced maintenance costs by 40%. This approach also pays dividends in areas that may not be as obvious at first, like the productivity gains that come with converting manual testing to automated testing (a 5x increase in GE Healthcare’s case) and broader test coverage, which carry a strong ROI as well.
It’s time to let AI remove the drudgery and pain from the testing cycle. As with anything new, once the initial learning curve is behind you, the road ahead is a whole lot smoother, ensuring better experiences for users and for the engineers behind the scenes.
Learn more about how democratized testing is transforming efficiency, collaboration, and software quality at scale.