Introducing…Ticket Magpie

Solving a problem of learning

I’d like to introduce you to a little project that David Hatanian and I have been working on. David is a member of the fantastic team at Codurance, and we first started working together on this project in February 2016.

Following my experiences at European Testing Conference in Bucharest, I realised the time had come for me to create and build my own vulnerable application, so that I could run my own workshops on security testing, coach my colleagues and other testers, and demonstrate vulnerabilities such as the OWASP Top 10.

My initial forays into learning security testing relied upon a number of publicly available web applications, including AltoroMutual, Gruyere from Google, and Supercar Showdown by Troy Hunt.

I also worked closely with Bill Matthews, initially shadowing him, but then helping him to deliver workshops at international conferences. For these workshops, he built his own web application, Ace Encounters, which is a travel and wild adventure website.

Of course, practising these skills against a real-world application without permission is illegal. So, students of security testing need a safe place to practise and learn. We aren’t hackers after all, we are testers. We aren’t there to steal, undermine or attack. We are there to explore and learn.

Pairing with David has been incredibly rewarding for us both. I’ve supported him with his understanding of security vulnerabilities, and he has supported me with my learning of object-oriented programming (in this case Java).

A couple of months ago I ran a session using Ticket Magpie for the testers at NewVoiceMedia. The session was well received, and everyone appeared to have fun. The team there are really great at generating interesting test ideas, developing their skills, and following through with practical application of their learning. Taking this out into the wider community of testers was to be the next step, at Test Masters Academy.


Get Ticket Magpie

Ticket Magpie is easy to get from David’s GitHub project. Check out the repository, follow the instructions on the page, and see the additional installation guidance below.

Local Installation

  1. Install the components locally on your machine. You’ll need Maven, a Java Development Kit (JDK) and the Ticket Magpie project.
  2. Configure the JAVA_HOME and PATH environment variables as appropriate to your operating system (macOS, Windows and Linux are all supported).
  3. Run the application from the command line (see the sketch after this list).
  4. You may set up your own database, or use the in-memory database, which persists only while the application is running.
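
For macOS or Linux, the whole sequence looks something like the sketch below. Treat it as a guide rather than gospel: the JDK path and the repository URL are assumptions on my part (the URL is inferred from the Docker image name further down), and I’m assuming the project builds with the standard Spring Boot Maven plugin. Setting SPRING_PROFILES_ACTIVE=hsqldb selects the in-memory database, matching the Docker command below.

    # Assumes a JDK and Maven are already installed; adjust the JDK path for your system.
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk   # hypothetical location
    export PATH="$JAVA_HOME/bin:$PATH"

    # Fetch and build Ticket Magpie (repository URL inferred from the Docker image name)
    git clone https://github.com/dhatanian/ticketmagpie.git
    cd ticketmagpie
    mvn package

    # Run with the in-memory HSQLDB profile, so no database setup is needed
    SPRING_PROFILES_ACTIVE=hsqldb mvn spring-boot:run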

Virtual Machine Installation

  1. Install Oracle VirtualBox, or your favourite virtualisation tool, on your machine.
  2. Create a virtual machine using your OS of choice.
    • I like to use Linux Mint for this. It’s lightweight and easy to configure.
    • Remember to give your VM enough disk space, or make the disk dynamic. 8 GB should more than suffice.
  3. Follow the steps above and on the GitHub page for the project and you can’t go wrong.

Docker (this is by far the quickest and easiest way of getting things running)

  1. Install Docker on your machine.
  2. Run the application from the Docker Hub image, using the provided command line:
    docker run -e "SPRING_PROFILES_ACTIVE=hsqldb" -p8080:8080 "dhatanian/ticketmagpie"
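
If you’d rather not tie up a terminal, the standard Docker flags below run the same image in the background under a memorable name, which makes it easy to watch the logs and stop the container later. This is just a convenience on top of the command above.

    # Optional: run detached (-d) and give the container a name
    docker run -d --name ticketmagpie -e "SPRING_PROFILES_ACTIVE=hsqldb" -p 8080:8080 dhatanian/ticketmagpie

    # Follow the application logs, and stop the container when you're done
    docker logs -f ticketmagpie
    docker stop ticketmagpie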

Running Ticket Magpie

Once Ticket Magpie is installed in your chosen environment, run the appropriate command, then navigate your browser to:

http://localhost:8080
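
If you’d like to check from the command line before opening a browser, a quick probe with curl (assuming you have it installed) will tell you whether the application has finished starting up:

    # Expect an HTTP 200 response once the application is up
    curl -I http://localhost:8080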

If you are successful, your browser should display the application, and it should look like this:

[Screenshot: Ticket Magpie, "the place to get all the tickets"]

Bug Hunt

I invite you to have a go at exploring Ticket Magpie. There are some fun features for you to take a look at. I’m not going to spoil things for you by listing everything here. You might also find some interesting problems.

Because the application runs on your local machine, in Docker or in a VM, you can use any technique, tool and gnarly hack you want, without harming anything or anyone else.

Take your time and let me know what you think. If you feel the need, you are welcome to use this form to provide feedback about the application: Ticket Magpie Survey. Alternatively, just message me on Twitter, or comment on this blog.

Good Luck, and Thanks!


A community in contrast

I’ve not blogged recently for various reasons, both personal and professional. But on the anniversary of my blog, I want to return to it with a more positive attitude after a fallow period. This is a quick post by way of a catch-up on the last few months’ activities, and an opportunity to share some highlights of my recent experiences in the testing community, which has been warm and welcoming during some difficult times.

A few months ago I attended the inaugural Brighton Testing Meetup, catching up with some of the good folk I last met at TestBash 3. Brighton is sort of my home town, yet I have never worked there, so having a foot in the pond that is the testing community there has been a great thing. We talked, we ate and drank, and we shared ideas. Early plans have been made for my future involvement, leading talks and discussions around some exciting testing topics. Emma Keaveny and Kim Knup are developing a vibrant new community of interest and I can’t wait to be more involved. Roll on 2015.

The community of testing is as varied and as exciting as the variety of people who work and learn within it. This is a good thing, perhaps the greatest thing about the community…and this is where the contrast lies.

The same week I went to Brighton, I also attended the latest Special Interest Group in Software Testing conference. SIGIST is organised by established, more academic people in the testing industry, on behalf of the BCS, and meets quarterly in London. There were a number of interesting topics being discussed, but it didn’t set my heart on fire. Only one or two talks out of the whole day really engaged me with the subject matter. Whilst there was the opportunity to learn from some experienced practitioners, there wasn’t the same emphasis on collaborative learning, challenging established testing paradigms and positive enquiry. It wasn’t a bad experience, it just didn’t make me more passionate about my craft, nor help me understand something new about testing. It was good, however, to catch up with some people I have met before, and some who I hadn’t…but who were on my radar: namely Tonnvane Wiswell, Declan O’Riordan, Paul Gerrard, and Mike Jarred.

Another recent experience has been with some of the free, online and collaborative forums for learning and discussion that I have participated in. Firstly, Stephen Blower’s Testing Couch forum. This is a free and open Skype forum for any testers who are interested in talking about their craft. On the couple of occasions I have attended, the chat has always been productive, supportive and non-judgemental. Stephen makes this forum available periodically, usually every month or two. It’s a fantastic opportunity for experienced or novice testers to throw ideas around, be challenged, and share thinking and learning.

Lastly, and probably my most positive experience, was being a guest speaker at October’s Weekend Testing Europe forum. I shared my recent learning and experience in software testing, leading the attendees in an exploratory session with security as the focus. For many of the people in the chat, security testing was a new concept they had had little experience of, or opportunity to learn. It was incredibly rewarding to facilitate this session, not only on a personal level, but also to see so many others taking up the challenge of securing their applications, and considering security as part of their testing.

Amy Phillips and Neil Studd have really breathed new life into Weekend Testing Europe, which had been dormant for a while. Keep an eye out for WTEU in the future, as it is a great way of keeping in touch with the testing community around the world. Be prepared to go in with eyes open, lots of questions, and a hunger to learn. All you need to do is volunteer two hours of your time on a Sunday afternoon. It sure beats watching Columbo repeats or traipsing round a garden centre.

So, that’s it for now. I’ll be blogging again soon. The Test Doctor will return!

Something for the weekend, sir?

In what now seems to have been a storming comeback, the European chapter of Weekend Testing was a breath of fresh air among the learning opportunities for testers. You can find a link to the latest session here. Ably facilitated by Amy Phillips (@itjustbroke) and Neil Studd (@neilstudd), the session was dynamic and a great chance to talk with other testers in a relaxed environment. I didn’t even have to leave my house!

The main focus of the session was heuristics, how we understand, use and learn from them. There is a lot of great material on what heuristics are and how they can be used to inform and drive our testing ideas and execution. I won’t dwell too much on these areas but just hope to point you to some useful material:

Elisabeth Hendrickson’s Testing Heuristics Cheat Sheet

Michael Bolton’s blog post – heuristics for understanding heuristics

Anyway, my main takeaway from this session was the ruts that we testers can sometimes get stuck in. I chose the Constraints heuristic, using data type attacks on the World Chat Clock application we were all discussing.

I found myself falling back on what I now feel to be a bit of a party piece. I immediately decided to perform a few simple XSS and SQL injection attacks against the application. As I expected, though couldn’t be sure, the application’s user interface prevented these kinds of basic security vulnerabilities from being exploited. I did ultimately find a way of injecting XSS, via OWASP Mantra, but couldn’t get it to expose any data. The bug did, however, cause some interesting display and wrapping issues.
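
For anyone new to these techniques, the probes involved are rarely more exotic than pasting well-known strings into input fields and watching how the application reflects or stores them. The examples below are illustrative classics rather than the exact payloads I used:

    # Reflected XSS probe: does the page execute script from my input?
    <script>alert('xss')</script>

    # XSS probe that tries to break out of an HTML attribute
    "><img src=x onerror=alert(1)>

    # Classic SQL injection probe for a login or search field
    ' OR '1'='1' --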

Rather than looking at the functionality, usability, accessibility and overall purpose of the software under test, I have somehow begun to think the worst of it before giving myself a chance to evaluate it critically, honestly and objectively. I immediately questioned how secure the application was before I considered any other factors.

In my work at New Voice Media, I am part of a cross-functional development team, and part of a community of testing interest within the business. During this time I’ve taken on board a lot of security testing skills, with still a lot more left to learn. It may be that I have taken these skills to heart and want to use them at every opportunity, to develop them further and to discover more about the underlying behaviour of the application under test.

Yet sometimes I feel guilty that I am not approaching the testing of software from any number of other directions, using other skills and techniques. Maybe the newer skills I have learned sit higher in my mental priority list than other approaches. There are, of course, biases at play here, and I’d like to explore and challenge them in the future.

Perhaps this has something to do with the way I personally learn things? Early in my career everything was driven from scripts and spreadsheets. There was no impetus to learn better ways of testing, only how to get testing done faster with fewer bugs and more coverage. I was learning how to manage my testing, but not being critical of the testing I was doing, nor evaluating the testing of other people.

Now this kind of learning is the bread and butter of the testers I work with. We learn, explore, test, check, learn some more, share, improve, and the cycle continues. It is a much more positive way of working. It’s not without its problems: quite rightly, you are much more accountable for your work, justifying your choices and decisions. There is a certain level of emotional maturity that we as testers need to develop in order to sustain this cycle: to be accountable, share our learning appropriately, learn well from mistakes and improve from them.

This is one of the reasons why I enjoyed Weekend Testing so much. You can’t really hide or be a silent observer. You need to get stuck in and get your hands dirty!

A couple of hours on a Sunday afternoon has not been a huge cost to me in the past, as I would only be doing a bit of housework, DIY, gardening, Scouting, sport or watching something geeky on TV. Soon, however, my weekends will be taken up with the ultimate challenge of parenthood, so chances to learn with peers in a relaxed environment will become few and far between. More on that learning experience and how it relates to testing another time.

Weekend Testing: infinitely better and more rewarding than mowing your lawn. Thanks to Neil and Amy for running such a fun and exciting session. The same goes to the other participants for the opportunity to learn from you and the excellent conversation.

Reinventing Regression

Manual regression testing has always been a burden for testers. It’s one of the practical problems we face: a gap at the top of the testing pyramid that can’t be covered by automation. At NVM we have a suite of automated tests which do a lot of the leg work with regard to checks. This post isn’t going to be a discussion about the difference between testing and checking, or the merits of automation over manual checking. It is simply a demonstration of a problem we faced as a team, and a solution we came up with as a team.

We realised that we had a problem. Some time ago we found that manual regression was only being done by two or three people, and it was taking hours and hours of those individuals’ time. It was usually the same two or three people for each release, and it was taking them away from their ‘more exciting’ feature team work, where we do a lot of funky exploratory testing. But more than that, it was creating division and ill feeling amongst the team, where some individuals felt they were carrying the burden.

We also saw that we weren’t getting the visibility of the process that we wanted, such as the number of checks we were doing, the quality and appropriateness of the scenarios under test, or whether these checks were already (or could be) covered by automation. We didn’t want to duplicate effort. All of these tests are maintained on our internal wiki, rather than in some impenetrable test tool. We are all responsible for maintaining the regression suite, so if we feel something needs adding or changing, we take the initiative and do it ourselves.

The scenarios went through a process of review and streamlining over a number of iterations, to make sure that we had the best possible set of checks. These were all described in terms of agents and supervisors operating within a contact centre, which is of course a core part of New Voice Media’s business.

Getting a kit ready for live deployment is the top priority for us, and we want to release as regularly as possible, weekly if resources and time allow. There is of course a challenge in balancing the needs of our own feature teams and their priorities, but there is a business to support, so the release takes priority.

So to the physical execution of the regression…a whole challenge in itself.

At New Voice Media we are lucky enough to have a dedicated test lab, where we have desks, networking and telephone handsets that allow us to run our tests in one place. All available testers meet at a set time, with our PCs, telephone handsets, tablets and other tools.

We have created a Kanban board, along with some other supporting elements. We have story cards for each of the scenarios under test, with two separate groups in the team handling different logical streams of the application. The board allows us to see progress through the testing and delegate tasks, but it also gives us a chance to provide instant visual feedback: what worked well (and deserves praise), what didn’t work well, and any ideas we might have to improve the process.

This is a great example of skilled testers working together to solve a testing problem, and has started to make regression testing an enjoyable event rather than an onerous chore.
