The Seismic Shift in How We Test Software


Developing software is an expensive and time-consuming process, and months of hard work can be ruined by the smallest of mistakes. Businesses have learned the hard way how important it is to test a product properly; a single missing word in code has caused products to crash at launch. It is far easier to prevent these kinds of issues than it is to repair a reputation after problems occur.

In the example cited above, the developers had done their due diligence. The code had been thoroughly unit tested, had passed manual testing and followed contemporary best practices. However, a bug still slipped through and caused a launch-day disaster. How can companies anticipate problems that occur even when a traditional testing methodology is followed?

Numerous technological advancements have created a seismic shift in the world of software testing. These new technologies offer developers and employers greater code coverage in their tests and greater stability in their code than was previously possible. Read on to learn about these advancements.

Headless browser testing

Creating code that works across all devices and browsers is incredibly important in the modern marketplace. A website can reasonably expect to be visited by users on iPhones, Android tablets, Linux laptops and Windows PCs, all of which are potentially running different versions of different browsers. 

This wealth of devices and platforms places a huge onus on client-side testing, and with new devices coming to market all the time, it is difficult to stay on top of. Browser automation tools, such as Selenium, have become an absolute necessity in any tester's toolkit. Headless browsers allow developers to write tests that run programmatically against different browsers, while tools like Selenium Grid allow those same tests to be applied programmatically across many different types of devices.
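
As a rough illustration, a cross-browser suite runs the same test function against a matrix of browser configurations. In a real suite, the driver would be a headless Selenium WebDriver session; here `make_driver` and `StubDriver` are hypothetical stand-ins so the sketch stays self-contained.

```python
# Sketch: run one page-load test across a matrix of browser configs.
# A real make_driver would return a Selenium WebDriver (e.g. headless
# Chrome or Firefox); StubDriver is a stand-in for illustration only.

BROWSER_MATRIX = [
    {"browser": "chrome", "headless": True},
    {"browser": "firefox", "headless": True},
]

class StubDriver:
    """Stand-in for a headless WebDriver session."""
    def __init__(self, config):
        self.config = config

    def get(self, url):
        self.url = url           # pretend to navigate to the page

    def title(self):
        return "Checkout"        # pretend the page loaded correctly

def make_driver(config):
    # Real version: build a Selenium driver for config["browser"].
    return StubDriver(config)

def checkout_loads(driver):
    driver.get("https://example.com/checkout")
    return driver.title() == "Checkout"

def run_matrix():
    results = {}
    for config in BROWSER_MATRIX:
        driver = make_driver(config)
        results[config["browser"]] = checkout_loads(driver)
    return results

print(run_matrix())  # one pass/fail result per browser config
```

The point of the pattern is that the test body is written once; only the driver factory knows about browsers, so adding a new platform means adding one entry to the matrix.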

This type of automation is also available as a service, so companies can test their products against a wide variety of platforms and devices without having to budget for new test hardware each year.

Autodetection and autogeneration

Artificial Intelligence (AI) and machine learning have seen extensive academic research in recent years, and the application of these tools has led to developments in many different areas of software development. One such way these have impacted software testing is through autodetection and autogeneration.

AI solves a problem with the traditional approach to software testing. Testers write the code that software is tested against. While significant code coverage in tests can give the illusion of robust, well-tested software, the test suite is only as good as the tests contained within it. Particularly when user input is expected, the number of possible input combinations is too large to expect any testing team to reasonably anticipate. 

The application of AI to testing has resulted in the emergence of autodetection and autogeneration. These are tools that rely on machine learning to detect anticipated problem areas in code (such as where user input is required) and then generate tests to cover those anticipated issues.
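
A minimal sketch of the generation step: given a detected input field, produce candidate test inputs. A real autogeneration tool would infer these cases with a trained model; simple boundary-value heuristics stand in here, and the field dictionaries are invented for illustration.

```python
# Sketch: generate test inputs for detected user-input fields.
# A real tool would learn these cases from data; boundary-value
# heuristics stand in here.

def generate_cases(field):
    """Return candidate test inputs for one detected input field."""
    if field["type"] == "number":
        lo, hi = field["min"], field["max"]
        # boundaries, just-outside values, and a non-numeric probe
        return [lo, hi, lo - 1, hi + 1, "not-a-number"]
    if field["type"] == "text":
        n = field["maxlength"]
        # empty, exactly at the limit, and one past the limit
        return ["", "a" * n, "a" * (n + 1)]
    return [None]

age_cases = generate_cases({"type": "number", "min": 0, "max": 120})
name_cases = generate_cases({"type": "text", "maxlength": 3})
print(age_cases)   # [0, 120, -1, 121, 'not-a-number']
print(name_cases)  # ['', 'aaa', 'aaaa']
```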


AI toolsets can draw on various forms of data, such as product and usage analytics (capturing event metadata to build page object models), server logs and API calls, to automatically detect test cases. Depending on the robustness of this data, the maturity of the AI toolset and the extent of its access to a test environment, those same toolsets can then automatically generate tests tailored to the target environment.

Automating these otherwise time-consuming processes frees up tester time for work elsewhere, while also offering a level of code coverage that is otherwise hard to match, and it may help prevent organizational burnout as well.


Record and playback

Machine learning has also been applied to record and playback testing. This kind of testing essentially uses macros to record a series of inputs that can then be played back programmatically against the software as user input for testing purposes. The difficulty with these tests lies in defining them and keeping them up to date as the software evolves.
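
Conceptually, a recording is just an ordered list of events that playback re-applies to the application under test. In this sketch the event list, selectors and `apply_event` hook are all hypothetical; a real tool would drive an actual UI.

```python
# Sketch: a recorded session as a list of (action, target, value)
# events; playback re-applies them in order. apply_event is a
# hypothetical hook into the real UI driver.

RECORDING = [
    ("type", "#email", "user@example.com"),
    ("type", "#password", "hunter2"),
    ("click", "#submit", None),
]

def apply_event(state, event):
    action, target, value = event
    if action == "type":
        state[target] = value        # fill the field
    elif action == "click":
        state["clicked"] = target    # press the button
    return state

def playback(recording):
    state = {}
    for event in recording:
        state = apply_event(state, event)
    return state

final = playback(RECORDING)
print(final["clicked"])  # '#submit'
```

The maintenance problem described above follows directly: every `#email`-style selector in the recording is a hard dependency on the current UI.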

Many record and playback tests rely on following an expected series of DOM elements: an input box with a specific ID is expected, into which certain input data must be placed. These tests thus depend on UI elements that are liable to change as the software develops. When the expected input box is no longer there, the tests will fail until they are rewritten.

Machine learning allows testers to overcome these issues. Most testers refer to DOM elements by ID, or possibly by a class tag; ID attributes are intended to be unique, but in practice that is not always the case. A machine learning model instead considers all of a DOM element's attributes and uses them together to identify it for testing. This method is self-healing: if one attribute changes while the rest stay the same, the model updates its understanding of the element and accommodates the change without breaking the test.
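
The self-healing idea can be sketched without any machine learning at all: score candidate elements by how many known attributes still match, instead of trusting a single ID. The attribute dictionaries below are invented for illustration; a real tool would weight attributes learned from data.

```python
# Sketch: locate an element by attribute-overlap score rather than
# by a single ID, so a changed ID does not break the test as long
# as the other attributes still match.

def overlap_score(known, candidate):
    """Fraction of the known attributes the candidate still has."""
    matches = sum(1 for k, v in known.items() if candidate.get(k) == v)
    return matches / len(known)

def locate(known, candidates, threshold=0.5):
    best = max(candidates, key=lambda c: overlap_score(known, c))
    return best if overlap_score(known, best) >= threshold else None

known = {"id": "login-btn", "tag": "button", "text": "Log in", "class": "primary"}
candidates = [
    # same button, but a developer renamed its ID
    {"id": "signin-btn", "tag": "button", "text": "Log in", "class": "primary"},
    {"id": "cancel-btn", "tag": "button", "text": "Cancel", "class": "secondary"},
]

match = locate(known, candidates)
print(match["id"])  # 'signin-btn': healed despite the changed ID
```

The first candidate scores 3/4 (tag, text and class still match) and clears the threshold, so the locator "heals" to the renamed button; a test keyed only to `id="login-btn"` would simply have failed.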

Regression testing

An important part of testing new features is verifying not simply that they work, but that they do not break existing code. Regression testing is not new, but applying AI massively speeds up the process. As new features are added and software grows, so too do the number of regression tests and the time it takes to run them.

Machine learning backed by cloud processing gives developers the power to handle computationally intensive tasks with ease. The size and complexity of a test suite becomes a much smaller problem when the process is automated, and machine learning can be applied to detect small changes and anomalies that are otherwise difficult to identify manually.
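
One common way to shrink a growing regression run is test selection: rerun only the tests whose covered code changed. The coverage map below is hand-written for illustration; a real tool would measure or learn it from instrumented runs.

```python
# Sketch: select only the regression tests whose covered files
# changed, instead of rerunning the whole suite. COVERAGE is a
# hypothetical, hand-written map for illustration.

COVERAGE = {
    "test_checkout": {"cart.py", "payment.py"},
    "test_signup":   {"users.py"},
    "test_search":   {"search.py", "index.py"},
}

def select_tests(changed_files, coverage=COVERAGE):
    """Return the tests whose covered files intersect the change set."""
    changed = set(changed_files)
    return sorted(name for name, files in coverage.items() if files & changed)

to_run = select_tests(["payment.py"])
print(to_run)  # ['test_checkout']
```

With a change touching only `payment.py`, two of the three tests never need to run, and the saving grows with the size of the suite.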

The future of testing

Testing is a costly and time-consuming endeavour. Software tests are not run in a vacuum but against a backdrop of expectant clients and executives. A balance must be found that prioritizes testing as much code as possible given the time provided. Each item mentioned above represents an individually small adaptation to testing, but applied as a whole, AI is creating opportunities to increase testing coverage, enhance software robustness, and save tester time. The value AI brings to testing looks set to only increase as time goes on and more companies apply it to their testing processes.



Erik Fogg

Erik Fogg is the Chief Operating Officer at ProdPerfect, an autonomous E2E regression testing solution that leverages live user behaviour data.
