Regression testing

by Resource on 11th May 2020

How to run a regression test

Let’s look at how to run a regression test – one that checks every item on every screen and reports all differences.

Let’s go

A complete end-to-end test

You’ve built your data-driven regression test pack complete with a logic-driven Play List and Scripts covering a complete end-to-end test. What you need now is to know if anything whatsoever has changed since your selected baseline.

Run the test – get results.

Overnight?

While you can sit and watch your test execute, possibly on multiple physical or virtual devices to reflect different configurations, in reality, the regression test will probably be scheduled to run as part of a code promotion or simply overnight.

Results

Once complete you will have a set of results for the completed execution that will highlight failed Quality Checks, performance data and much more.

But did anything change?

Automatic capture of every attribute

TestDrive automatically captures every important attribute of every element on every screen or page. When you select a baseline for comparison, TestDrive highlights every difference between the two executions, subject of course to any exceptions you define – dates, for example.
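To make the idea concrete, here is a minimal sketch of baseline comparison. TestDrive's actual engine is proprietary; the function name, data shapes, and the `exceptions` parameter below are all illustrative assumptions, showing only the general technique of diffing captured attributes while skipping volatile fields such as dates.

```python
def diff_executions(baseline, current, exceptions=()):
    """Return {(element, attribute): (old, new)} for every attribute that
    changed between two captured executions, skipping any attribute name
    listed in `exceptions` (e.g. dates that change on every run)."""
    differences = {}
    # Compare the union of elements, so added/removed elements are caught too.
    for element in baseline.keys() | current.keys():
        old = baseline.get(element, {})
        new = current.get(element, {})
        for attr in old.keys() | new.keys():
            if attr in exceptions:
                continue  # user-defined exception: ignore this attribute
            if old.get(attr) != new.get(attr):
                differences[(element, attr)] = (old.get(attr), new.get(attr))
    return differences

# Two hypothetical captures of the same screen, one run apart:
baseline = {"login.title": {"text": "Welcome", "date": "2020-05-10"}}
current = {"login.title": {"text": "Welcome!", "date": "2020-05-11"}}
print(diff_executions(baseline, current, exceptions={"date"}))
# → {('login.title', 'text'): ('Welcome', 'Welcome!')}
```

The date changed between runs but is excluded, so only the genuine text change surfaces as a difference.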

Handling the differences

The process of handling these differences is as important as the differences themselves. It is almost certain that some differences were expected – but are there other differences, collateral damage that the regression test is designed to find?

The process can be likened to an airport luggage carousel where every difference found in a regression test is automatically captured and tracked in the same way that all the baggage from an aircraft arrives on the carousel.

Collecting your bags

Intended changes

The Business Analysts, or whoever originated the intended changes, should then come and ‘collect’ their bags, just like passengers. When viewing a test result, you can simply select a reported difference and mark it as expected.

Complete audit trail

TestDrive automatically creates a full audit trail of these differences, so you can easily see who collected each bag and why.
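The carousel-and-audit-trail workflow can be sketched as follows. This is not TestDrive's API – the class and field names are hypothetical – but it shows the pattern: every difference stays on the carousel until someone claims it, and every claim is logged with who, why, and when.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    difference_id: str
    claimed_by: str
    reason: str
    timestamp: str

class DifferenceCarousel:
    """Track reported differences until someone 'collects' them."""

    def __init__(self, difference_ids):
        self.unclaimed = set(difference_ids)
        self.audit_trail = []

    def mark_expected(self, difference_id, claimed_by, reason):
        """Claim a bag: remove it from the carousel and log the claim."""
        self.unclaimed.discard(difference_id)
        self.audit_trail.append(AuditEntry(
            difference_id, claimed_by, reason,
            datetime.now(timezone.utc).isoformat()))

carousel = DifferenceCarousel(["login.title/text", "report.total/value"])
carousel.mark_expected("login.title/text", "analyst@example.com",
                       "New welcome message in release 4.2")
print(sorted(carousel.unclaimed))  # whatever remains is a red flag
# → ['report.total/value']
```

Anything left in `unclaimed` at the end of the review is a lonely bag: a difference nobody expected, which must be investigated before the release proceeds.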

A perfect ending?

Everything collected?

Collecting the bags shouldn’t take long, and if everything has gone as intended, all bags will have been collected, no collateral damage will have occurred, and you can get on with your next development cycle or sprint.

The lonely bag(s)

Unclaimed baggage

But what if the carousel isn’t empty? We’ve all seen unclaimed bags circling at the airport and wondered about their fate. In a regression test, unclaimed bags are the red flags: differences between two test executions that were not expected. They need to be understood and resolved.

Safety net

The safety net has done its job – running your regression test just proved its worth!
