
Monthly Archives: April 2010

Lessons learned in software testing – an experience report

In the Google group Software Testers New Zealand, a discussion thread was posted about testers' favourite testing books. One of the books mentioned most often is Lessons Learned in Software Testing.

Cynthia, a tester based in Auckland, New Zealand, wrote this great experience report using the book Lessons Learned in Software Testing (quoted verbatim with permission).

“On 9 April 2010 08:33, Cynthia wrote:
It seems like everyone is recommending Lessons Learned in Software Testing! I do too.
I have it on my desk and dip into it quite often.

I thought I would give an example of it being useful to me in a practical way.
Yesterday I found a bug, quite late in the development. It was quite an obvious one too, once you’d seen it. But it took me ages to “see” it.
I was kicking myself for not seeing it earlier, and thought I’d review how I missed it for so long, so to refresh my mind and give a different focus I glanced through the chapter “Thinking Like a Tester”.

Wow, there it was, Lesson 41 – “When you miss a bug, check whether the miss is surprising or just the natural outcome of your strategy”.

I realised that my strategy had been a bit “off”. The function I was testing was to provide some mathematical and date information to our client’s customers. I’d spent ages making sure the dates and calculations were correct.
But I hadn’t asked the right question – what was the reason our client wanted to give their customers this information? What was the problem they wanted us to solve for them by providing this function?

The bug was that calculation results were being presented in different numerical formats in different places – as 3-decimal places, as 2-decimal places and as integers. They were all correct. But they looked confusing, especially when rounding came into play.

And one of the problems our client wanted us to solve was to reduce the number of phone calls at their Call Centres from confused customers. (This was not documented in their Requirements, but had been discussed informally).

So I finally “saw” that this correct data didn’t solve their problem, when it was presented that way.

Now I know that I need to add into my strategy for each new bit of functionality that I work on – always ask the question “What is the real problem we’re trying to solve here?”

Does anyone else have stories about how their favourite testing books help in real life?

Regards
Cynthia”

What a fantastic experience report – thank you for sharing, Cynthia, and thank you to the authors of Lessons Learned in Software Testing – Cem Kaner, James Bach and Bret Pettichord.
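
To make the formatting bug concrete, here is a minimal Python sketch of the effect Cynthia describes (the value and the page names are invented for illustration, not taken from her system):

```python
# One correct calculation result, displayed the three ways Cynthia
# describes. The value and page labels are hypothetical.
value = 12.3456

print(f"Summary page:  {value:.0f}")    # 12     (integer)
print(f"Detail page:   {value:.2f}")    # 12.35  (two decimal places)
print(f"Export report: {value:.3f}")    # 12.346 (three decimal places)

# All three figures are "correct", but a customer comparing screens sees
# 12, 12.35 and 12.346 for the same quantity - exactly the sort of
# discrepancy that drives confused calls to a call centre.
```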


Posted on April 12, 2010 in Test Planning, Testing

 


What actually happens on an Agile Project – part 1

I have frequently been asked how work actually changes on an Agile project. Here is the first part of what I hope will become a detailed examination of what actually goes on inside an Agile project:

I’ve been working with a team at a company in New Zealand recently, mentoring and coaching their implementation of Agile practices.

Last week we had a series of workshops to scope and define a new project. This is a major rewrite of all their branch systems, with an SOA link to the core legacy system backend. The basic architecture has already been defined, largely based on the existing technology environment the company works with.

The team consists of a project manager, an iteration manager, developers, testers, business analysts, the project sponsor (the senior manager responsible for the branch network), a branch manager, and four branch staff members – one from head office, three from branches around the country.

We started the workshops with the project sponsor setting the scene – explaining his vision for what the company is trying to achieve. We then used a brainstorming approach to define the key success criteria for the project. Each person wrote their understanding of the key success criteria on post-it notes, which were then stuck onto a flipchart. We grouped and clustered the statements, then agreed on the key drivers for the work (which matters most – delivering on time, meeting budget, meeting functional requirements, meeting quality needs …).

Once we had a common understanding of the goals and drivers for the project, we spent the bulk of the first afternoon identifying key stakeholders, constraints, assumptions and risks – again using a brainstorming and discussion approach. We ended the day with a set of flipcharts around the room identifying the overall scope of the project, key success criteria, assumptions, constraints and risks (with trigger points and mitigating actions for the risks currently agreed to be most important).

We then defined the key features the new system needs, in their currently understood priority order. This formed the basis for the next day's discussion about specific features and releases. The second day started with a review of the wallware from the first session, double-checking that we had a common perspective. A number of items were reprioritised at this point, as people had pondered them overnight.

We then took each feature in turn and broke it down to the level of user stories. Some of the stories were written in the “As a <role>, I want <goal>, so that <benefit>” structure, but most were simple one-line statements of high-level requirements. We then prioritised the stories and roughly grouped them into three releases.
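
For illustration (a made-up example, not one of the actual workshop stories), a story in that format might read: “As a branch teller, I want to see a customer's recent transactions on a single screen, so that I can answer queries without switching to the legacy system.”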

It is understood that the releases are not fixed; this is just an initial starting point. We focused most of the team's time on the features and qualities that must be delivered in the first release, which has a clear target date. The remaining features have been identified and ordered, but only loosely – this is expected to change as we actually do the work.

One clear outcome that has already been of benefit is the way expectations have been set. The project sponsor acknowledged that their initial goals were unrealistic, and already expects to either cut scope or spend more money than was originally allocated.

The business analysts have found that their role is largely facilitating the workshops and asking a lot of “what if” questions. They are also taking responsibility for recording the decisions being made in the workshops and keeping the team focused on the task at hand.

The next step is to take the top 10 or so stories and expand them in detail. A workshop involving the same team has been scheduled for next week. The technical team are setting up environments and doing all the iteration zero tasks in the meantime, so we'll be ready to start work straight after the workshop.

I’ll post another message explaining what happens at the next workshop – we’ve labelled it “story elaboration”, and the intent is to expand on the stories so the team can start work on iteration one immediately afterwards. The key user representatives will be co-located with the technical team for the duration of the project and there is strong management support, so the project should be set up for success.

Many thanks to the team – you know who you are 🙂

Posted by Shane Hastie

 

 


An Agile Maturity Model?

Yesterday (perhaps not by coincidence) Scott Ambler published an article in Dr. Dobb's Journal titled The Agile Maturity Model.

In the article he offers five levels of “maturity” that organisations and teams go through as they move from blind-faith acceptance of agile mantras to the considered application of sensible techniques drawn from good practices, irrespective of their source.

The five levels he describes are:

Level 1: The Rhetorical Stage. At this level teams often latch onto the rules and practices of their chosen Agile methodology with the fervour of religious converts. He states, “Level 1 agile teams typically succeed because agile strategies were applied on hand-picked pilot projects, by a small team of flexible and often highly-skilled people, and were given sufficient management support.”

Level 2: The Certified Stage. He is particularly scathing about Level 2 implementations: “The distinguishing feature of the Certified stage is that most team members have stayed awake in a two-day certification course and have successfully parroted back agile rhetoric in a multiple guess test which few people seem to fail.”

Level 3: The Plausible Stage. At this level organisations and teams apply common sense in choosing appropriate techniques that work within the context of that particular organisation's ecosystem. “We also abandon the certification façade and instead focus on gaining the skills, and understanding the strategies, required to successfully deliver high-quality working software to our stakeholders and thereby pay down the integrity debt built up during the Certified stage.”

Level 4: The Respectable Stage. This level is about realising the need to deliver not just software but fully working solutions that deliver business value. “Furthermore, disciplined agile delivery teams at this stage are self organizing within an appropriate governance framework, recognizing that they work within the constraints of a larger organizational ecosystem.”

Level 5: The Measured Stage. At this level teams and organisations are actively improving their processes and approaches based on objective metrics collected in real time, scaling their practices and adjusting their projects to ensure a continuous focus on maximum return on investment. Level 5 agile practitioners are truly respectful of history, recognizing that the people who built the systems which are currently running the world had a bit of a clue after all.

Through our training and work with teams and organisations, I believe the Software Education team helps them start at Levels 4 and 5 and remain there – are we succeeding?

Posted by Shane Hastie

 

Posted on April 2, 2010 in Agile

 
