Mad as a March Hare – Musings on Rapid Software Testing and TestBash Brighton 2016

So another year has come around, and TestBash has come and gone. What was initially planned as a series of posts around TestBash and the Rapid Software Testing course didn’t come to be, due to an unforeseen technical hitch. So instead, this is my response to the events in Brighton last week. My annual pilgrimage to Brighton for TestBash had a few extra dimensions this year, beyond just attending the workshops and the conference day.

Rapid Software Testing with Michael Bolton

So, as I mentioned in my last post, I was about to embark on the journey into Rapid Software Testing, on this occasion led by Michael Bolton. I’ve followed Michael’s (and James Bach’s) work for some time now, like a lot of testers who identify as context-driven. I’ve been using many of the techniques, models and approaches to testing that RST champions for a while, mostly via learning from others, reading and applying techniques in practice. I had, however, not undertaken the course itself before now.

 

RST with Michael Bolton – March 2016

 
However, the greatest impact upon my life as a tester was being able to discuss the thinking behind RST with one of its authors. The discourse with Michael gave the concepts, thinking and knowledge the practical grounding they needed. A most rewarding experience. Now, the unusual thing was that not only was I attending and learning at RST, but I was also facilitating on behalf of the Ministry of Testing. That meant that my primary concern was the needs of the group and the coach, so balancing my needs against those of the group was challenging.

We discussed heuristics, oracles and models for testing in extreme depth, at least within the time allowed. We explored and practised techniques for deep learning, exploration, test design and strategy. We would often revisit, review and tune our thinking on each topic, with Michael feeding into our learning while responding actively to questions, challenging and exploring each one in depth.

Here, for example, is one of the initial activities we did: a 15-minute charter using the “Triangles” application. Here are my notes, capturing what I explored and discovered with my partner.

As a tester, I wanted to observe how the triangles application recorded data, inputs and outputs in the system:

We entered a range of values and inputs into Triangles:

  1. Integers
  2. Decimals
  3. Negative

We observed that a text file, triangles.txt, was created by the application.

Bug – the triangles.txt file is written to the folder above the application folder, i.e. /thingstotest rather than /thingstotest/triangle.

Bug – when we entered values into triangles, there was no feedback on the values, just the shapes.

Bug –

SIDEA: -1
SIDEB: 2-
SIDEC: 1
TYPE: X
SIDEA.ERROR: ILLEGAL CHARACTER
SIDEB.ERROR: ILLEGAL CHARACTER

These illegal characters are not fed back to the user in the UI, so they don’t know what is or isn’t an illegal value. Not enough information is presented to the user.

Now, the Triangles application was one I had been familiar with for a while, thanks to a great session at the office led by Chris Simms (@kinofrost on Twitter). My familiarity was not the point here. I reviewed my notes from that session, and they differed in terms of the information captured; I had also had more time available in that lunchtime session. We had the opportunity to dig deep and produce some good testing in both sessions, but recognising the limited time available is a factor that applies to all testing sessions.

Triangles is a fairly simple application. You enter three values for the sides, and it responds with the type of triangle and an appropriate image of the shape. It also has a log file which records the values entered and the application’s responses. The depth of the group’s testing belies the simplicity of the application under test. Each group found different problems and further questions to ask, though there were also many overlaps in our observations. However, the learning was in being able to design testing on the fly, with little or no prior information. Easy, right?
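
For illustration, here is a minimal sketch in Python of the kind of classification logic an application like Triangles might implement, with validation errors surfaced to the user rather than buried in a log file. This is my own reconstruction for the blog, not the actual application code:

# A sketch only – not the real Triangles code. Side lengths in, type out.
def classify_triangle(a, b, c):
    sides = (a, b, c)
    # Reject non-positive sides up front, and say so in the UI.
    if any(s <= 0 for s in sides):
        return "ERROR: sides must be positive numbers"
    # Triangle inequality: each side must be shorter than the other two combined.
    if a + b <= c or a + c <= b or b + c <= a:
        return "ERROR: these sides do not form a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

print(classify_triangle(3, 4, 5))   # scalene
print(classify_triangle(-1, 2, 1))  # error reported to the user, not just the log

Even a toy version like this raises the questions our charter surfaced: what counts as a legal value, and where does the user find out?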

Well, no, not easy. Asking good questions is never easy, and that’s the whole point of RST, I feel. A lot of traditional testing practice expects us to read documents, write documents and write test scripts; testing through those approaches appears to be something of an afterthought.

RST challenges us to ask good questions. Through asking and answering good questions we develop good testing ideas, strategies and approaches. The documentation becomes the notes we take, or in our case the mind maps we captured in another session. Our advocacy of, and responsibility for, our testing and its products are at the heart of what RST is to me.

Below are some notes from the team, captured during the course. Bear in mind that these are incomplete and a work in progress. They do have problems of ‘translational’ and ‘transactional’ awareness between the group, the coach (Michael) and me (the scribe). Trying to capture people’s thoughts and learning is really hard, especially in large and vocal groups. The handwriting is mine, but this is the work of the whole group.

 

RST – Heuristics and Oracles

  
RST – Properties of Good Bug Reports

  
RST – PEOPLE WORKING – a mnemonic for problem reporting

  
RST – Some testing models

 

There is a lot to digest and process from RST, probably too much to share in a busy blog post. In summary, RST was an incredible experience. It has afforded me the opportunity to both challenge and consolidate my existing learning, and to enhance my note-taking and observation skills whilst testing. It also allowed me, through facilitation, to place a greater priority on the learning experience of my colleagues than on my own. It was hugely valuable, and I would jump at the chance to do it again. I’m grateful to Rosie Sherry for letting me facilitate on behalf of the Ministry of Testing (that’s me being all corporate in the red MoT T-shirt), and to Michael for his time, knowledge and insight. Many thanks!

Rapid Software Testing Alumni – March 2016

TestBash Brighton 2016 – Workshops

So on to TestBash. The workshop day is an event introduced both through demand for rich learning opportunities in the testing community and through the Ministry of Testing’s desire to create an environment where that can happen. In the afternoon I was running my own workshop on proxy tools, which I will leave others to reflect on in public.

For my own needs, I like to balance the technical learning I get with softer skills. It’s an area I have huge problems with. Technical learning is often a case of broad reading, practice and being open to developing the skills required. My personal route to developing a reputation as a ‘security expert’ has meant that I’ve had to focus hard on the technical skills, rather than on developing other elements of what testing (and software development) can be.

Christina Ohanian (@ctohanian) and Nicola Sedgwick (@nicolasedgwick) ran their workshop “Connecting the Dots: Empowering people through play” in the morning session, and I have to say that attending it was one of the best professional decisions I have made.

Christina and Nicola are great proponents of the power of play to engage and develop people within the workplace. They have worked together at The App Business, and have developed a great rapport with both each other and the folks in the workshop.

Through a series of activities, both practical and thought-provoking, we were encouraged to develop our thinking and learning, to enable us to solve problems and adapt to change.

The activities were: 

  1. In a circle, we each gave our name and revealed a fact about ourselves. We then went round the circle again and had to recall the name and fact of each person in sequence. This was as much a challenge as it was an icebreaker. Memory is hugely at play here, but it does fail you sometimes. Some names and facts were easier to remember than others, perhaps because they were unusual. The rhythm and pattern of the sequence almost became second nature by the end of the game, so by the end we had all got to know each other through the confines of the activity.
  2. Lego story boards – using an iterative process, we built a narrative story board, and then constructed the narrative in Lego. At various points in the activity, a new element or complication was introduced that meant we needed to replan and refactor our work. In short order, we discovered that our resources, imagination and ability to adapt were to be put to the test.
  3. Project Jenga – the team were split into three groups: developers, testers and designers. With Nicola acting as project manager, we were asked to design and build a mobile application to accompany the conference. Through discussion, we had to meet certain acceptance criteria and explore problems and risks. With each encounter we had to remove a block from a Jenga tower, and with each move the risk increased that the project (and the Jenga tower) would collapse. This activity allowed us to explore our questioning skills, as well as our empathy and co-operation with other teams.
  4. Posters – in groups, we were asked to design a poster for TestBash, using certain acceptance criteria. We then wrote a description of our poster, which was shared with the other team, who had to draw a poster using our instructions, and vice versa. A fantastic activity that dug into our ability to analyse and interpret instructions and acceptance criteria, whilst engaging with the others in the team in a visual medium. We then compared each team’s efforts to see which was closest to the desired product.

   
  

   
Nicola and Christina did a fantastic job. I said on Twitter that this was one of the most fantastic learning experiences I have ever had, and I meant it. I still have a lot to analyse and interpret, so that I can apply the learning practically in the workplace. Maybe we can use some of these activities or others to enhance our communication, our empathy and our ability to adapt.

TestBash Brighton 2016 – Conference

The conference day, in hindsight, was a bit of a blur for me. With all my might I tried to concentrate on Lisa Crispin and Emma Armstrong’s opening talk “Building the Right Thing: How Testers Can Help”. Unfortunately (or fortunately) I was up next on the bill, so I was a little distracted, trying to maintain focus on not collapsing in a heap. Lisa and Emma kicked off TestBash with a bang, with an insightful exploration of how testers can be the guiding light on projects, ensuring that teams not only do the job, but do it right.

Lisa Crispin and Emma Armstrong

After the break, one of my favourite talks was Katrina Clokie’s “A Pairing Experiment”. This talk described and explored how Katrina led and developed pairing activities within her team at the Bank of New Zealand. When questioned on the challenge of convincing managers to relinquish team members for paired work, she responded that whilst they might be losing one tester for an hour a week, they’d be getting an extra tester for another hour each week. I’ve been lucky enough to see Katrina speak before about her work, and each time it’s been a revelation. She works hard to develop the testing at every organisation she has worked at, and beyond, into the community itself. Great stuff!

Katrina Clokie

Up next was John Stevenson (@steveo1967). In his half-talk, half-conversation “Model fatigue and how to break it”, he invited us to examine critically the models we use for testing every day.

He challenged us to re-evaluate the models we use: cut them up, adapt the ones we find successful, combine them with other models, throw away the ones we don’t use, and create new ones if need be. He challenged us to be diverse in our approaches to testing, not relying on the same models all the time; that way we can find more interesting information through our testing. John is a great presenter, who engaged and enthused the audience, inviting them to throw questions at him for the last 10 minutes of his time on stage, rather than talking to the end of his slot.

It’s something I can reflect on in my own day-to-day work, where we use a model to evaluate and plan our user stories and the testing that is discussed in those stories. It has been adapted and changed over time to suit our needs, and I am sure it will be changed, or even chucked away, if it doesn’t suit our purpose in the future.

 

John Stevenson

 
Later in the day we heard from Patrick Prill, in his debut talk “Accepting Ignorance – The Force of a Good Tester”. He led us in a discussion of how ignorance is not necessarily a wilful lack of knowledge, but simply an absence of knowledge, and of how, by understanding where our ignorance exists, we can develop our knowledge. That gap between what we know and what we don’t is a huge force for change in testing. Reflecting on that gap in our own work as testers is where this talk had its greatest impact.

While Patrick was not an experienced speaker (you wouldn’t know that from his talk), he drew on his many years of testing experience in Germany, and on the problems, both cultural and technical, that he encounters in his work, which gave his talk huge insight.

Patrick Prill

After lunch we had our guest speaker, Grammy award-winning singer (and tester) Michael Wansley (@teewanz), who gave us a highly entertaining, somewhat controversial but engaging talk, “Test/QA: A gatekeeper’s experience”. Testers as gatekeepers is not a very popular paradigm amongst the vocal members of the (particularly) context-driven testing community. But within the wider view of testing as a process involved in developing and selling products, gatekeepers are often what testers and testing are perceived as.

It’s a popular view (one that I subscribe to) that testers should be information providers, learners and investigators, but not necessarily decision makers about whether software ‘goes live’ or not. We may be part of that decision-making process, but not the arbiter of it.

It’s within this context that I have a certain amount of empathy with Michael’s experience of working on a number of iterations of the Microsoft Windows operating system. He understands that testing cannot exist in a vacuum, where there is no recourse to customers, managers and Vice Presidents, or consequences for screwing up. His talk did (quite rightly) invite comment, and Michael stood up for his view honestly and with vigour. Whether you agree or disagree with his view, that should be applauded.

 

Michael Wansley

  
Zachary Borelli introducing Michael Wansley

 

After that it was “Having all your testers code: It doesn’t have to be a big deal” by Anna Baik (@TesterAB) and Andrew Morton (@TestingChef), on the challenging task of ensuring all testers contribute to the automation strategy at Brightpearl. I have to declare some personal interest here: not only are Anna and Andrew friends of mine, they are also former colleagues from my time contracting at Brightpearl in Bristol.

It’s a fast-paced, highly charged environment of great development and testing across the business. I was tasked with testing integrations between the Brightpearl service and a number of third parties. I didn’t get too involved on the automation side of things, but I do know what a challenge it was to implement.

This was, I think, a challenging talk for many to follow, as the style was unusual (no slides), but the content was highly pertinent and valuable to the many teams now trying to grow and mature their testing capabilities and automation strategies.

Anna Baik and Andrew Morton

As a tester who too often focuses on the technical rather than the human elements of testing, the next talk turned out to be my absolute favourite of the day. “Do testers need a thick skin, or should we admit we’re simply human” by Nicola Sedgwick (@nicolasedgwick) was a bold and brave exploration of our ability to communicate, and of how we as testers sometimes fail to recognise problems, not with software, but in ourselves.

One of the key aspects of this talk was our response to stress, and how it compounds upon other stress. A lack of challenging activity, or of work we care about, can lead to boredom or even more stress. Some of my close friends in testing know that the last couple of years have been difficult for me, professionally and personally, and for this reason the talk really resonated with me. Nicola challenged us to ask what kind of tester we were. Well, I’m not sure I can answer that question yet, but I’ll be one that never forgets that humans are fallible, in a world that increasingly looks to punish those who fail to realise that.

 

Nicola Sedgwick

 
So to the final talks of the day. Next was my friend and mentor Bill Matthews (@bill_matthews), who introduced us to the concept of smart algorithms: the maths and logical flows that allow systems to learn, recognise patterns and process data based on a wide range of inputs and variables. He challenged us to examine the testing concerns that might arise from working with such applications – a really complex problem which Bill was able to present with humour and deep, practical knowledge. I have to add here that with glasses I am a Golden Retriever, but without I am a German Shepherd.

Bill Matthews

And finally… Nicola Owen (@NicolaO55), also from New Zealand, but recently relocated to Sweden to work with the great folks at House of Test. In “Nowhere to hide: Adjusting to being a team’s sole tester” Nicola guided us through two case studies where she was the sole tester on two very different projects. She reflected upon her experience, what she learned, and her developing confidence and skill, with great depth, clarity and insight. In one case study she felt insulated from the problems that software development teams encounter; in the other, far more exposed as the sole tester. In each case she showed how she approached the problem and dealt with it head-on. Another awesome talk, to round off the day.

 

Nicola Owen

 
So, to round off the proceedings, our host Vernon Richards (@TesterFromLeic) and his able assistant Mark Tomlinson led us into a round of the always amazing 99-second talks. This is the first time I have not done a 99-second talk, so it was refreshing to just sit back and enjoy. Highlights for me were Emma Keavney’s rap (@EmJayKay80) and Deborah Lee’s sit-in (@DeborahLee89). Also a special mention to the new Software Testing Clinic (@TesterClinic), announced by Mark Winteringham (@2bittester) and Dan Ashby (@danashby04), which I hope to get involved in soon! Well done to all involved. A great potential showcase for future speaking talent, I hope.

 

Mark Winteringham and Dan Ashby from Software Testing Clinic

  
Deborah Lee

  
Emma Keavney

 
So, to wrap up: I felt TestBash 2016 was an enormous success, both from a personal point of view and in terms of the rude health of the conference. Rosie has done a great job again this year, and I hope to be involved again in future.

 

After party with Jess, Rosie and Helena

  
Breakfast with Chris, Nicola and Martin

  

Rapid Software Testing – Before

This is the first in a series of posts on my experiences of RST and the TestBash conference this week.

I’m on my way to Brighton today, to facilitate Rapid Software Testing, led by Michael Bolton. I’m nervous about that, but I’m more nervous about this. 

My day is off to a great start: I overslept by 30 minutes, I need to wear my layers rather than pack them, and my train into Brighton is cancelled.

Bus replacement service to Eastbourne

So, to anyone who travels regularly on the British transport network, you’ll be familiar with the phenomenon that is the bus replacement service. 

The bus is full, and I’m sat in the jump seat next to the driver, having picked up everyone from Hastings to Eastbourne on the way. There are probably many buses and bus drivers doing similar work across the country. (Subsequent seat moves to allow an elderly lady to sit down, and I’m now on the train from Eastbourne to Brighton, via Lewes.)

It makes me think of the services we test when they are non-performant or under stress. What do systems do when they are under heavy load, or when a link in the chain is broken? How do you monitor and check that the system is performing as it should?

Clearly a system of checks and monitoring has come together to arrange this bus I’m travelling on. Service performance was seen to be dysfunctional due to a systems failure, so an additional service was put in place to pick up the slack.
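
In software terms, the same pattern is a health check with failover. Here is a minimal sketch in Python of what that might look like – the service names and URLs are invented for illustration, not any real system:

# Failover sketch, loosely mirroring the rail-replacement bus:
# if the primary service fails its health check, fall back to a spare.
# All names and URLs here are hypothetical.
import urllib.request

PRIMARY = "http://primary.example.com/health"    # the usual train
FALLBACK = "http://fallback.example.com/health"  # the replacement bus

def is_healthy(url, timeout=2):
    # The service is healthy if it answers its health endpoint in time.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except OSError:  # covers URLError and timeouts
        return False

def choose_service():
    # Prefer the primary; degrade gracefully rather than failing outright.
    return PRIMARY if is_healthy(PRIMARY) else FALLBACK

Slower and less comfortable, perhaps, but like the bus it keeps the passengers moving.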

What can testers learn from this?

Well, my first observation is to consider what your weak areas are. Is it the infrastructure, the application, or the connectivity between systems? Do you know why they are weak, and can you improve or replace them?

As I’ve seen today, a replacement or temporary service isn’t necessarily better or more comfortable, but it is getting where I need to go.

I could have easily waited to get a lift from my Mum, but she was off conducting her own business elsewhere. I would still get there, but maybe not on time.

What monitoring do you have in place?

Monitoring isn’t just for your operations teams. At NewVoiceMedia, the DevOps team use all sorts of tools to allow us to keep an eye on performance, load, volume, throughput, page impressions and browser usage, as well as where any breaks in our systems might be.

It’s hugely important so we can adapt to problems, or see them off before they become issues for our customers. Peak times (like the rush hour on the transport network) are one of the main concerns.
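
As a toy illustration of the kind of check such tools run – the metric names and thresholds below are invented for the example, not our actual setup:

# A toy threshold monitor: compare live metrics against limits and flag breaches.
# Metric names and thresholds are invented for illustration.
THRESHOLDS = {
    "p95_latency_ms": 500,   # 95th-percentile response time
    "error_rate_pct": 1.0,   # percentage of failed requests
    "queue_depth": 1000,     # backlog waiting to be processed
}

def check_metrics(current):
    # Return a human-readable alert for any metric over its limit.
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = current.get(name)
        if value is not None and value > limit:
            alerts.append("ALERT: %s = %s (limit %s)" % (name, value, limit))
    return alerts

# During peak time (the transport network's rush hour), latency spikes:
print(check_metrics({"p95_latency_ms": 740, "error_rate_pct": 0.4}))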

Why is this a problem for testers?

Well, it isn’t a problem really. It’s more of a change of mindset. As organisations have to change and evolve to meet customer needs, testers need to adapt too.

Testers can and should be more aware of the wider needs of customers, who need performant systems, rather than having a narrow focus on the applications alone.

We should be clear and concise in our communications, and be involved in the decisions that underpin our systems.

Why?

Well, in a DevOps organisation everyone has to muck in and get their hands dirty. Sure, there are people with specialist roles and positions of responsibility. But I see testers as the glue that holds systems together. We can get involved at any point, and not just on the application layer. 

More and more will be expected of testers as organisations change to meet customer need, and we will have to meet that challenge. 

So…RST

I’ve been wanting to do this course for years. And by chance, luck or fate I have the opportunity to do so now. I’ll be facilitating, so my priorities will be on the needs of Michael and the group, rather than my own.

It’s going to be a huge challenge, and like the needs of any complex system I will need to adapt.

I like to ask a lot of questions, but I anticipate a need to let the group generate those questions rather than generating them myself. I’ve been told in the past that I can sometimes “not shut up” or “meander” during group discussions.

It’s taken a lot of time and mindful thinking to try to control my natural instinct to ask questions or share knowledge where others might be unwilling, unable or nervous. I need to be aware of that for the next three days.

It’s going to be epic. Just like the scenery today.

My home, The South Downs