Wednesday, 27 August 2014

How to create a visual test coverage model

Creating a visual test coverage model to show test ideas in a mind map format is not a new idea. However, it can be a challenging paradigm shift for people who are used to writing test cases that contain linear test steps. Through teaching visual modelling, I have had the opportunity to observe how some people struggle when attempting to use a mind map for the first time.

Though there is no single right way to create a visual test coverage model, I teach a simple approach to help those testers who want to try using mind maps but aren't sure where to begin. I hope that, from this seed, as people become confident in using a mind map to visualise their test ideas, they will adapt this process to suit their own project environment.

Function First

The first step when considering what to test for a given function is to try to understand what it does. A good place to start a mind map is with the written requirements or acceptance criteria.

Imagine a story that includes developing the add, edit, view, and delete operations for a simple database table. The first iteration of the visual test coverage model might look like this:


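As a rough textual stand-in for the diagram, the first iteration can be sketched as a simple nested structure: the function under test at the root, with one branch per behaviour taken from the written requirements. This is only an illustrative sketch, not a prescribed notation.

```python
# A minimal sketch of the first iteration of the model: the function
# at the root, one branch per behaviour from the requirements.
model = {
    "Simple database table": {
        "Add": {},
        "Edit": {},
        "View": {},
        "Delete": {},
    }
}

# Each top-level branch is a behaviour that test ideas will later hang off.
behaviours = sorted(model["Simple database table"])
```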
Next consider whether all the behaviour of this function is captured in the written requirements. There are likely to be items that have not been explicitly listed. The UI may provide a richer experience than was originally requested. The business analyst may think that "some things just go without saying". There may be application level requirements that apply to this particular function.

Collaboration is the key to discovering what else this function can do. Ask a business analyst and a developer to review the mind map to be sure that every behaviour is captured. This review generally doesn't take long, and a quick conversation early in the process can prevent a lot of frustration later on.

Imagine that the developer tells us that the default design for view includes sort, filter, and pagination. Imagine that the business analyst mentions that we always ask our users to confirm before we delete data. The second iteration of the visual test coverage model might look like this:

Think Testing

With a rounded understanding of what the function does, the next thing to consider is how to test it.

For people who are brand new to using a mind map, my suggestion is to start by thinking of the names of the test cases that they would traditionally scope. Instead of writing down the whole test case name, just note the key word or phrase that differentiates that case from the others. This is a test idea.

Test ideas are written against the behaviour to which they apply. This means that tests and requirements are directly associated, which supports audit requirements.

Imagine that the tester scopes a basic set of test ideas. The third iteration of the visual test coverage model might look like this:

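The third iteration can also be sketched textually: each behaviour now carries a list of short test-idea phrases rather than full test case names, so every idea traces back to the behaviour, and therefore the requirement, it belongs to. The specific ideas below are invented for illustration, not a complete or recommended set.

```python
# A minimal sketch of the third iteration: short test-idea phrases
# attached to the behaviour they apply to. Ideas here are illustrative.
model = {
    "View": {
        "Sort": ["ascending", "descending", "mixed case"],
        "Filter": ["single match", "no match", "partial text"],
        "Pagination": ["first page", "last page", "empty table"],
    },
    "Delete": {
        "Confirmation": ["accept", "cancel"],
    },
}

# Because ideas hang off behaviours, tests and requirements stay
# directly associated, which is what supports audit needs.
idea_count = sum(
    len(ideas) for branch in model.values() for ideas in branch.values()
)
```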
Expand your horizons

When inspiration evaporates, the next challenge is to consider whether the test ideas captured in the model are sufficient. There are some excellent resources to help answer this question.

The Test Heuristics Cheat Sheet by Elisabeth Hendrickson is a quick document to scan through, and there is almost always a Data Type Attack that I want to add to my model. The Heuristic Test Strategy Model by James Bach is longer, but I particularly like the Quality Criteria Categories that prompt me to think of non-functional test ideas that may apply. Considering common test heuristics can help achieve better test coverage than when we think alone.

Similarly, if there are other testers working in the project team, ask them to review the model. A group of testers with shared domain knowledge and varied thinking is an incredibly valuable resource.

Imagine that referring to test heuristic resources and completing a peer review provides plenty of new test ideas. The fourth iteration of the visual test coverage model would have a lot of extra branches!

Lift Off!

From this point the visual test coverage model can be used in a number of ways: as a base for structured exploratory testing using session-based testing, as a visual representation of a test case suite, as a tool to demonstrate whether test ideas are covered by automated checks or testing, or as a radar to report the progress and status of test execution. Regardless of use, the model is likely to evolve over time.

I hope that this process encourages those who are new to visual test coverage modelling to try it out.

Tuesday, 12 August 2014

Context Driven Change Leadership

I spent my first day at CAST2014 in a tutorial facilitated by Matt Barcomb and Selena Delesie on Context Driven Change Leadership. I thoroughly enjoyed the session and wanted to share three key things that I will take from this workshop and apply in my role.

Change models as a mechanism for feedback

The Satir Change Model shows how change affects team performance over time.

Selena introduced this model at the end of an exercise designed to demonstrate organisational change. She asked each of us to mark where we felt our team was by placing an X at the appropriate point on the line.

Most of the marks were placed in the integration phase. There were a couple of outliers in new status quo and a single person in resistance. It was a quick and informative way to gauge the feeling of a room full of people who had been asked to implement change.

I often talk about change in the context of a model, but had never thought to use one as a mechanism for feedback; this is definitely something I will try in future.

Systems Thinking

Matt introduced systems thinking by talking about legs. If we were to consider each bone or muscle in the leg in isolation, then they would mean less than if we considered the leg as a whole.

Matt then drew a parallel to departments within an organisation. Where people focus on their individual pieces, but not on the system as a whole, there is opportunity for failure.

Matt spoke about containers, differences, and exchanges (the CDE model by Glenda Eoyang [1]). These help identify the opportunities to manipulate connections within a complex system.

Containers may be physical, like a room, but they can also be implicit. One example of an implicit container that was discussed in depth was performance reviews, which may drive behaviour that impacts connections between individuals, teams and departments in both positive and negative ways.

Differences may include obvious ones like gender, race, culture, or language. They also include subtle differences, like the level of skill within a team. To manipulate connections you could amplify a difference to create innovation, dampen a difference to remove harmful behaviour, or choose to ignore a difference that is not important.

Exchanges are the interactions between people. One example is how communication flows within the organisation: does it travel hierarchically via a management structure, or freely among employees? Another is how someone who comes to work in a bad mood can lower the morale of those around them. Conversely, when one person is excited and happy, they can improve the mood of the whole team.

In our end of day retrospective, Betty took the idea of exchanges further.

How will I apply all this?

I have spent a lot of time recently focused on my team. When I return to work, I'd like to step back and attempt to model the wider system within my own organisation. Within this system I want to identify what containers, differences, and exchanges are present. From this I will have the information to influence change through connections, instead of solely within my own domain.

Fantastic Facilitation

Matt and Selena had planned a full day interactive exercise to take a group of 30 people through a simulated organisational change.

We started by becoming employees of a company tasked with creating wind catchers. The first round of the exercise was designed to show the chaos of change. I was one of many who spent much of this period feeling frustrated at a lack of activity.

At the start of round two, Erik Davis pulled a bag of Lego from his backpack. He usurped the existing chain of command in our wind catcher organisation to ask Matt, in his role as "the market", whether he had any interest in wind catchers made from Lego. As a result, a small group of people who just wanted to do something started to build Lego prototypes.

Matt watched the original wind catcher organisation start to splinter, and came over to Erik to suggest that the market would also be interested in housing. As houses were a much more appropriate and easy item to build from Lego, there was a rapid revolt. Soon I was one of around seven people working in a new start-up, located in the foyer of the workshop area, building houses from Lego.

There were a lot of interesting observations from the exercise itself, but as someone who also teaches I was really impressed by the adaptability of the facilitators. Having met Matt at TestRetreat on Saturday, I knew that he had specially purchased a large quantity of pipe cleaners for the workshop. Now here we were using Lego to build houses, which was surely not what he had in mind!

When I raised this during the retrospective, both Matt and Selena gave excellent teaching advice.

Selena said that when she first started to teach workshops, she wanted them to go as she had planned. What she had since discovered was that if she gave people freedom, within certain boundaries, then the participants often had a richer experience.

Matt expanded this point by detailing how to discover those boundaries that should not move. He tests the constraints of an exercise by removing and adding rules, thinking in turn about how each affects the ultimate goal of the activity.

As a result of this workshop I intend to challenge some of the exercises that I run. I suspect that I am not providing enough freedom for my students to discover their own lessons within the learning objective I am ultimately aiming to achieve.

Sunday, 3 August 2014

Creating a common test approach across multiple teams

I was recently involved in documenting a test strategy for a technology department within a large organisation. This department runs an agile development process, though the larger organisation does not, and they have around 12 teams working across four different applications.

Their existing test strategy document was over five years old and no longer reflected the way that testers were operating. A further complication was that every team had moved away from the original strategy in a different direction and, as a result, there was a lack of consistent delivery from testers across the department.

To kick things off, the Test Manager created a test strategy working group with a testing representative from each application. I was asked to take a leadership role within this group as an external consultant, with the expectation that I would drive the delivery of a replacement strategy document. After an initial meeting of the group, we scheduled our first one hour workshop session.

Before the workshop

I wanted to use the workshop for some form of Test Strategy Retrospective activity, but the one I had used before didn't quite suit. In the past I was seeking feedback from people with different disciplines in a single team. This time the feedback would be coming from a single discipline across multiple teams.

As preparation for the workshop, each tester from the working group was asked to document the process that was being followed in their team. Though each documented process looked quite different, there were some commonalities. Upon reading through these, I decided that the core of the new test strategy was the high-level test process that every team across the department would follow, and that finding this would be the basis of our workshop.

I wanted to focus conversation on the testing activities that made up this core process without the group feeling that other aspects of testing were being forgotten. I decided to approach this by starting the workshop with an exercise in broad thinking, then leading the group towards specific analysis.

When reflecting on my own observations of the department, and reading through the documented process from each application, I thought that test activities could be categorised into four groups.

  1. Every tester, in every team in the department, does this test activity, or should be doing it.
  2. Some testers do this activity, but not everyone.
  3. This test activity happens, but the testers don't do it. It may be done by other specialists within the team, other departments within the organisation, or a third party.
  4. This test activity never happens.

I wrote these categories up on four coloured pieces of paper:

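The sorting exercise can be sketched as a simple mapping, with each activity tagged with exactly one category, mirroring one coloured post-it note per activity. The activity names below are invented for illustration, not taken from the actual workshop.

```python
# An illustrative sketch of the four-category sort. Each activity is
# tagged with exactly one category, like a single coloured post-it note.
CATEGORIES = ("everyone does", "some do", "others do", "never happens")

activities = {
    "exploratory testing": "everyone does",
    "automated checks for middleware": "some do",
    "performance testing": "others do",
    "accessibility testing": "never happens",
}

# A quick check that every activity landed in a recognised category;
# anything unplaced would need discussion at the wall.
unplaced = [a for a, c in activities.items() if c not in CATEGORIES]
```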
At the workshop

To start the workshop I stuck these categories across a large wall from left to right.

I asked the group to reflect on what they did in their roles and write each activity on an appropriately coloured post-it note. For example, if I wrote "automated checks for the middleware layer", but thought that not everyone would do so, then I would write this activity on a blue post-it note.

After five minutes of thinking, everyone got up and stuck their post-it notes on the wall under the appropriate heading. We stayed on our feet through the remainder of the session.

Our first task with this information was to identify activities that appeared in multiple categories. There were three or four instances of disagreement. It was interesting to talk through the reasoning behind choices and determine the final location of each activity.

Once every testing activity appeared in only one place we worked across the wall backwards, from right to left. I wanted to discuss and agree on the areas that I considered to be noise in the wider process so that we could concentrate on its heart.

The never category made people quite uncomfortable. The test activities in this category were being consistently descoped, even though the group felt that they should be happening in some cases. There was a lot of discussion about moving these activities to the sometimes category. Ultimately we didn't; we wanted to reflect to the business that these activities were consistently being treated as unimportant.

As we talked through what others were doing, we annotated each activity with those responsible for it. The level of tester input was also discussed, as this category included tasks happening within the team. For example, unit testing was determined to be the developer's responsibility, but the tester would be expected to understand the coverage provided.

When we spoke about what people might do, most activities ended up shifting to the left or the right. Either they were items that were sometimes completed by the tester when they should have been handled elsewhere, or they were activities that should always be happening.

Finally we talked through what everyone was doing. We agreed on common terminology where people had referred to the same activities using different labels. We moved the activities into an agreed end-to-end test process. Then we talked through that process to assess whether anything had been forgotten.

After the workshop

At the end of the hour, the group had clarity about how their individual approaches to testing would fit together in a combined vision. The test activities that weren't common were still captured, and those activities outside the tester's area of responsibility were still articulated. This workshop created a strong common understanding within the working group, which made the process of formalising the discussion in a document relatively easy. I'd recommend this approach to others tasked with a similar brief.