Month: June 2012

Adaptability vs Context-Driven

Last week Rik Marselis sent me an email pointing me to an article titled “The adaptability of a perceptive tester”. He added: “Have you read this article? Should appeal to you!”. The article was written by a couple of Dutch (Sogeti) testers who, so the introduction tells me, get together once in a while to discuss the profession of testing and quality assurance. This time they discussed some remarkable examples from their own experience showing that perceptive testers (who are aware of the specific situation they’re in) adapt their approach to fit the specific needs.

I replied to Rik with the following email:

Hey Rik,

Nice article, I had already seen it. But being adaptive or perceptive is not the same as being context-driven. I also totally disagree with the conclusion:
“Together we concluded that although TMap NEXT was introduced some six years ago it still has a solid foundation for any testing project since the essence of adaptiveness makes sure that any perceptive tester can collaborate in doing the right things at the right moment in any project and ensure that the proper quality is delivered to solve the problem of the stakeholders.”

TMap Next contains a number of dogmas (or rather, is based on a number of dogmas) like: testing can be scheduled, the number of test cases is predictable, the content of a test case is predictable, the sequence of the process is fixed, etc.
Therefore I think TMap Next is not usable in every situation. At least not effectively and efficiently. Being adaptive is good, but I can imagine situations where TMap Next has to be adjusted so rigorously that the result is no longer recognizable as TMap. In addition: TMap Next says that ET is a technique: a good example of another dogma. And it shows me that TMap is not about testing but about a process and deliverables … Maybe I should write a blog post about this.

Regards,
Huib

Rik replied with this email:

Hi Huib,

We really need to reserve some time to discuss this from different sides, because I totally disagree with some of the things you say. A conscious tester can handle any situation with TMap. I think whether ET is a technique or an approach is really a non-discussion. TMap calls it a technique, so you can approach testing in different ways in different situations. And since TMap itself is a method, you cannot call ET a method too.

I think Context-driven means you choose your approach depending on the situation.
I think Adaptive means you choose your approach depending on the situation.

Perceptive means conscious: as you are aware of the situation, you can choose an appropriate approach. Well, it is worth discussing.

Regards,
Rik

Okay, so let’s discuss!

Exploratory testing
Let’s start with the ET discussion. What does TMap say about this? ET is a test design technique. And the definition of a test design technique (on page 579 of the book “TMap Next for result driven testing”) is: “a test design technique is a standard method to derive, from a specific test basis, test cases that realise a specific coverage”. Test cases are described on page 583 of the same book: “a test case must therefore contain all of the ingredients to cause that system behaviour and determine whether or not it is correct … Therefore the following elements must be recognisable in EVERY test case, regardless of which test design technique is used: initial situation, actions, predicted result”.

Let’s connect the dots: ET is called a test design technique. A test design technique is defined as a method used to derive test cases. But ET doesn’t use test cases, not in the way TMap defines them. It can, but most of the time it doesn’t… Mmm, an inconsistency with image, a claim, expectations, the product, statutes & standards. I would say: a blinking red zero! Or in other words, there /is/ something wrong here!
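To make that contrast concrete, here is a minimal sketch. This is my own illustration with hypothetical names, not something taken from the TMap book or from any ET tool: a TMap-style test case carries its predicted result up front, while an exploratory charter deliberately has no predicted result at all.

```python
# Illustrative sketch only (hypothetical names, my own invention):
# the three elements TMap says EVERY test case must contain, next to
# what a typical exploratory session starts from instead.
from dataclasses import dataclass, field

@dataclass
class TMapStyleTestCase:
    initial_situation: str  # the precondition to set up before testing
    actions: list           # the steps to execute, fixed in advance
    predicted_result: str   # the expected outcome, known before execution

@dataclass
class ExploratoryCharter:
    mission: str            # what to explore and why
    areas: list = field(default_factory=list)  # where to look
    # note: no predicted_result field; expectations emerge while testing

scripted = TMapStyleTestCase(
    initial_situation="logged in as a regular user",
    actions=["open profile page", "change e-mail address", "save"],
    predicted_result="confirmation shown, new address stored",
)

session = ExploratoryCharter(
    mission="explore profile editing for data-loss risks",
    areas=["input validation", "concurrent edits"],
)
```

If a test design technique is, by definition, a method to derive things shaped like the first structure, then an activity that starts from the second structure does not fit that definition.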

What is Exploratory Testing? Paul Carvalho wrote an excellent blog post on this topic, simply titled “What is Exploratory Testing?”, and I suggest people read it if they want to understand what ET is. Elisabeth Hendrickson says: “Exploratory Testing is simultaneously learning about the system while designing and executing tests, using feedback from the last test to inform the next.”

Michael Bolton and the context-driven school like to define it as: “a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to optimize the quality of his or her work by treating test design, test execution, test interpretation, and test-related learning as mutually supportive activities that continue in parallel throughout the project.” Michael has a collection of interesting resources about ET, which can be found here.

So Rik, your argument “since TMap itself is a method you cannot call ET a method too” is total bullshit! It sounds to me like “there is only one God…”.

Context-driven testing
Don’t get me wrong, being adaptive and perceptive is great, but that doesn’t make testing context-driven. A square is a rectangle, but a rectangle is not necessarily a square! Please have a look at the context-driven testing website by Cem Kaner. Also read the text of the keynote “Context-Driven Testing” that Michael Bolton gave last year at CAST 2011. In his text you will see that being adaptive (paragraph 4.3.4) is only a part of being context-driven. I admit, it is not easy to really comprehend context-driven testing.

Do you think it was TMap Next that was the common success factor in the stories shared in the article? I doubt it!

Basic training for software testers must change

This blog post was originally written as a column for www.testnewsonline.com (English) and www.testnieuws.nl (Dutch).

On this blog I recently wrote about my meeting with James Bach in a post with the provocative title “What they teach us in TMap Class and why it is wrong“. In mid-July I am going to San Jose for the CAST conference, and during the preceding weekend I will participate in Test Coach Camp. The title of this post is the title of a proposal that I submitted for discussion at Test Coach Camp.

In the past I have been a trainer for quite a few ISTQB and TMap courses. The groups attending the training were often a mix of inexperienced and experienced testers. The courses cover topics like: the reason for testing, what testing is, the (fundamental) processes, the products that testers create, test levels, test techniques, etc. In these three-day courses all exercises are done on paper. Throughout the whole training, not once is actual software tested! I wonder if courses for developers exist where not a single line of code is written.

In San Jose at Test Coach Camp I want to discuss the approach of these courses with my peers. How can we improve them? I feel these courses are not designed to prepare testers to test well, let alone to encourage testers to become excellent in their craft.

During my dinner with James, I asked him what he would do if he were to train novices to become good testers. He replied that he would let them test some software from the start. He would certainly not start with lectures on processes, test definitions and vocabulary. During a session the students will (unknowingly) use several techniques, which can be named and explained further as they are stumbled upon. A beautiful exploratory approach I would like to try myself: learning by doing! But there are many more opportunities to improve testing courses. People learn by making mistakes, by trying new things. Testing is much more about skills than about knowledge. Imagine a carpenter doing basic training: his training would mainly consist of exercises! My neighbour is doing a course to become a furniture maker. She is learning the craft through many hours of practice creating workpieces. Practice is the biggest part of her training!

One of the comments on my blog objected to James Bach’s suggestion. Peter says: “I have been both a tester and trainer in ISTQB and TMap. Yes we can make testing fun but without a method that testing has no structure and more importantly has no measurable completion. How will those new people on “more practical” course know when they have finished? What tests did they do? What did they forget? What defect types did they target? Which ones did they not look for? What is the risk to the system? My view after 40 years as a developer and tester is that this idea might be fun but is not just WRONG but so dangerously wrong that I am sad that no one else has seen it.”

What do you think?

Let’s Test 2012: an awesome conference! – Part 4

Wednesday 9th: Let’s Test day 3: sessions

Keynote Scott Barber

Last day of this awesome conference. Because we went to bed quite late (or early, depending on how you look at it), I was a bit hungover. But the adrenalin for my upcoming talk made my hangover disappear unbelievably quickly. The day started with a keynote by Scott Barber titled “Testing Missions in Context From Checking To Assessment”. I had no clue where this talk would take us, but the title intrigued me. Scott started with a fun incoming message in which he was asked to test a website, with all discrepancies in production to be blamed on him. The message ended with the rhetorical question: “do you accept this mission?”. Scott talked about missions and tasks and gave some nice examples drawing on his military history. His advice: “always look at the mission two command levels up”. At the end of his talk he presented his new “Software System Readiness Assessment” and the Software System Assessment Report Card. I think I like his model, but I have to give it some more thought.

Scott Barber - Testing Missions in Context From Checking To Assessment

Scott relaxed on a bar stool on stage

Sessions

On Wednesday there were only two sessions, because there was also a second keynote in the afternoon. I went to Michael Albrecht’s talk “From Good to Great with xBTM”. Michael talked about session-based and thread-based test management. Both are very good, but combined they are great, he claims. He showed the tool SBTExecute. I haven’t had the chance to try it yet, but it looks promising. His talk showed how SBTM and TBTM can be combined using mind maps and how this approach can be used in agile.

My talk “So you think you can test?” was planned in the second session slot, right after lunch. The room was packed and Zeger van Hese (did I mention that he is program chair of EuroStar 2012?) was facilitating the session. What could go wrong? After my talk there was a nice Open Season with some great questions. Thanks all for attending and participating, and thanks Zeger for facilitating. I hope everybody enjoyed my talk as much as I did.

Michael Albrecht - From Good to Great with xBTM

Look who's talking

So you think you can test? (Photo: Zeger van Hese)

Keynote Julian Harty

Julian gave a magnificent keynote titled “Open Sourcing Testing”. He called testers to action: share your stuff with others so all can benefit, learn and eventually become better testers. One of his slides said: ask “what can I share?” that doesn’t risk “too much”. And I think he is right. We should share more and we should be more open about what we do. Sure, there is a lot of stuff that is too privacy-sensitive to share, but why should we reinvent the wheel in every organisation or project? By sharing we can also learn faster…

Julian Harty - Open Sourcing Testing

The End: Organisers on stage saying goodbye

The end…

Here you can find all the presentations and lots of other blog posts about Let’s Test 2012. I had a wonderful time, met loads of great people and learned a lot. So I think I can truly say: it was an awesome conference. I have already registered for Let’s Test 2013 … the countdown has begun. See you there? Or if you can’t wait that long: maybe we’ll meet in San Jose at CAST 2012 in a couple of weeks? That promises to be an awesome conference as well!

People leaving, taxis driving on and off

My physical takeaways: Julian, Rikard, Torbjörn and Scott: thanks!