Manual regression testing has always been a burden for testers. It’s one of those practical problems we all face: a gap at the top of the testing pyramid that can’t be covered by automation. At NVM we have a suite of automated tests that do a lot of the leg work when it comes to checks. This post isn’t going to be a discussion about the difference between testing and checking, or the merits of automation over manual checking. It is simply a demonstration of a problem we faced as a team, and the solution we came up with as a team.
We realised that we had a problem. Some time ago we found that manual regression was being done by only two or three people, and it was taking hours and hours of those individuals’ time. It was usually the same two or three people for each release, too. It was pulling them away from their ‘more exciting’ feature team work, where we do a lot of funky exploratory testing. But more than that, it was creating division and ill feeling within the team, as some individuals felt they were carrying the burden alone.
We also saw that we weren’t getting the visibility of the process that we wanted: the number of checks we were doing, the quality and appropriateness of the scenarios under test, or whether these checks were already (or could be) covered by automation. We didn’t want to duplicate effort. All of these tests are maintained on our internal wiki, rather than in some impenetrable test tool. We are all responsible for maintaining the regression suite, so if we feel something needs adding or changing, we take the initiative and do it ourselves.
The scenarios went through a process of review and streamlining over a number of iterations, to make sure we had the best possible set of checks. These were all described in terms of agents and supervisors operating within a contact centre, which is of course a core part of New Voice Media’s business.
Getting a kit ready for live deployment is the top priority for us, and we want to release as regularly as possible… weekly if resources and time allow. There is a challenge in balancing this against the needs of our own feature teams and their priorities, but there is a business to support, so the release takes priority.
So, on to the physical execution of the regression… a whole challenge in itself.
At New Voice Media we are lucky enough to have a dedicated test lab, with desks, networking and telephone handsets that allow us to run our tests in one place. All available testers meet there at a set time, with our PCs, telephone handsets, tablets and other tools.
We have created a Kanban board, along with some other supporting elements. We have story cards for each of the scenarios under test, with two separate groups in the team handling different logical streams of the application. The board lets us see progress through the testing and delegate tasks, but it also gives us instant visual feedback: what worked well (and deserves praise), what didn’t work well, and any ideas we have for improving the process.
This is a great example of skilled testers working together to solve a testing problem, and has started to make regression testing an enjoyable event rather than an onerous chore.