Virgin Territory



Romania Testing Conference is held in Cluj-Napoca, in the north-west of Romania; you can find out more about it here. It is an intimate multi-track conference followed by a workshop day. There were a lot of local speakers, plus some from Holland, India, Spain and the UK. As I arrived late on the first day, I was only able to see a few of the speakers.

Firstly Luis Faile, who talked about “Myths of Exploratory Testing”. A lot of his talk resonated with me, as when I have talked to other testers or IT folk who haven’t experienced exploratory testing before, these are the questions they generally want to ask: how do we measure it, who is accountable, what evidence is there, and so on. Luis deals with these issues in a pragmatic way, and invited experiences and suggestions from the floor. Altogether a great way for good testers to champion this particular aspect of testing, and to communicate its benefits to those who are not yet aware of them.

Last up for me was the opportunity to sit in on Andy Glover’s Visual Testing workshop. Here he encouraged our creative abilities by getting us to think about how we can communicate visually. He inspired us to explore how visual aids such as diagrams, annotations and cartoons can be used to describe information, data, bugs, ideas and other testing concepts. Andy was inspired by Dan Roam’s book The Back of the Napkin, which is definitely one to add to the reading list.


RTC 2014 was a great experience for me for a number of reasons. Not only was it my first ever international conference, it was also my first opportunity to speak at a conference, running a workshop on security testing. If you are interested, my slide deck is available here.

The testing community in Romania is vibrant and rapidly growing. It was a great chance to see how testers from outside the UK learn and develop their skills. All the testers I met and talked to, whether from Cluj, Bucharest or other parts of Europe, were enthusiastic about their craft and keen to learn from each other and from the speakers on the circuit. This is rewarding to see, as the key objective of my workshop was that testers would take what they learned back to their places of work, implement that learning where possible, and ultimately add value.


I was grateful to be invited to speak there and would relish the chance to do so again. I’ll be speaking at Nordic Testing Days in Estonia in a few weeks’ time, so expect a post on that event soon!

Reinventing Regression

Manual regression testing has always been a burden for testers. It’s one of the practical problems we face: there is a gap at the top of the testing pyramid that can’t be covered by automation. At NVM we have a suite of automated tests which do a lot of the leg work with regard to checks. This post isn’t going to be a discussion about the difference between testing and checking, or the merits of automation over manual checking. It is simply a demonstration of a problem we faced as a team, and a solution we came up with as a team.

We realised that we had a problem. Some time ago we found that manual regression was being done by only two or three people, and it was taking hours and hours of those individuals’ time. And it was usually the same two or three people for each release. It was taking those individuals away from their ‘more exciting’ feature-team work, where we do a lot of funky exploratory testing. But more than that, it was creating division and ill feeling within the team, as some individuals felt they were carrying the burden.

We also saw that we weren’t getting the visibility of the process that we wanted: the number of checks we were doing, the quality and appropriateness of the scenarios under test, or whether these checks were already (or could be) covered by automation. We didn’t want to duplicate effort. All of these tests are maintained on our internal wiki, rather than in some impenetrable test tool. We are all responsible for maintaining the regression suite, so if we feel something needs adding or changing, we take the initiative and do it ourselves.

The scenarios went through a process of review and streamlining over a number of iterations, to make sure we had the best possible set of checks. These were all described in terms of agents and supervisors operating within a contact centre, which is of course a core part of New Voice Media’s business.

Getting a kit ready for live deployment is the top priority for us, and we want to release as regularly as possible…weekly if resources and time allow. There is of course a challenge in balancing the needs of our own feature teams and their priorities, but there is a business to support, so the release takes priority.

So to the physical execution of the regression…a whole challenge in itself.

At New Voice Media we are lucky enough to have a dedicated test lab, with desks, networking and telephone handsets that allow us to run our tests in one place. All available testers meet at a set time, with our PCs, telephone handsets, tablets and other tools.

We have created a Kanban board, along with some other elements. We have story cards for each of the scenarios under test, with two separate groups in the team handling different logical streams of the application. The board lets us see progress through the testing and delegate tasks, but it also gives us a chance to provide instant visual feedback: what worked well (and deserves praise), what didn’t work well, and any ideas we might have to improve the process.

This is a great example of skilled testers working together to solve a testing problem, and it has started to make regression testing an enjoyable event rather than an onerous chore.