Category: Exploratory Testing

Adaptability vs Context-Driven

Last week Rik Marselis sent me an email pointing me to an article titled “The adaptability of a perceptive tester”. He added: “Have you read this article? Should appeal to you!”. The article was written by a couple of Dutch (Sogeti) testers who, so the introduction tells me, get together once in a while to discuss the profession of testing and quality assurance. This time they discussed some remarkable examples from their experience of perceptive testers (who are aware of the specific situation they’re in) adapting their approach to fit the specific needs.

I replied to Rik with the following email:

Hey Rik,

Nice article, I had already seen it. But adaptive or perceptive is not context-driven. I also totally disagree with the conclusion:
“Together we concluded that although TMap NEXT was introduced some six years ago it still has a solid foundation for any testing project since the essence of adaptiveness makes sure that any perceptive tester can collaborate in doing the right things at the right moment in any project and ensure that the proper quality is delivered to solve the problem of the stakeholders.”

TMap Next contains a number of dogmas (or rather is based on a number of dogmas) like: testing can be scheduled, the number of test cases is predictable, the content of a test case is predictable, the sequence of the process, etc.
Therefore I think TMap Next is not usable in every situation. At least not effectively and efficiently. Being adaptive is good, but I can imagine situations in which TMap Next has to be adjusted so rigorously that the result is no longer recognizable as TMap. In addition: TMap Next says that ET is a technique: a good example of another dogma. And it shows me that TMap is not about testing but about a process and deliverables … Maybe I should write a blog about this.

Regards,
Huib

Rik replied with this email:

Hi Huib,

We really need to reserve some time to discuss this from different sides because some things that you say I totally disagree with. A conscious tester can handle any situation with TMap. I think whether ET is a technique or approach is really a non-discussion. TMap calls it a technique so you can approach testing in different ways in different situations. And since TMap itself is a method you cannot call ET a method too.

I think Context-driven means you choose your approach depending on the situation.
I think Adaptive means you choose your approach depending on the situation.

Perceptive means conscious: as you are aware of the situation, you can choose an appropriate approach. Well, it is worth discussing.

Regards,
Rik

Okay, so let’s discuss!

Exploratory testing
Let’s start with the ET discussion. What does TMap say about this? ET is a test design technique. And the definition of a test design technique (on page 579 of the book “TMap Next for result driven testing”): “a test design technique is a standard method to derive, from a specific test basis, test cases that realise a specific coverage”. Test cases are described on page 583 of the same book: “a test case must therefore contain all of the ingredients to cause that system behaviour and determine whether or not it is correct … Therefore the following elements must be recognisable in EVERY test case regardless of the test design technique used: initial situation, actions, predicted result”.

Let’s connect the dots: ET is called a test design technique. A test design technique is defined as a method used to derive test cases. But ET doesn’t use test cases, not in the way TMap defines them. It can, but most of the time it doesn’t… Mmm, an inconsistency with image, a claim, expectations, the product, statutes & standards. I would say: a blinking red zero! Or in other words: there IS something wrong here!

What is Exploratory Testing? Paul Carvalho wrote an excellent blog post on this topic, simply titled “What is Exploratory Testing?”, and I suggest people read it if they want to understand what ET is. Elisabeth Hendrickson says: “Exploratory Testing is simultaneously learning about the system while designing and executing tests, using feedback from the last test to inform the next.”

Michael Bolton and the context-driven school like to define it as: “a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to optimize the quality of his or her work by treating test design, test execution, test interpretation, and test-related learning as mutually supportive activities that continue in parallel throughout the project.” Michael has a collection of interesting resources about ET, which can be found here.

So Rik, your argument “since TMap itself is a method you cannot call ET a method too” is total bullshit! It sounds to me like “there is only one God…”.

Context-driven testing
Don’t get me wrong, being adaptive and perceptive is great, but that doesn’t make testing context-driven. A square is a rectangle, but a rectangle is not necessarily a square! Please have a look at the context-driven testing website by Cem Kaner. Also read the text of the keynote “Context-Driven Testing” Michael Bolton gave last year at CAST 2011. In his text you will see that being adaptive (paragraph 4.3.4) is only part of being context-driven. I admit, it is not easy to really comprehend context-driven testing.

Do you think it was TMap Next that was the common success factor in the stories shared in the article? I doubt it!

Basic training for software testers must change

This blog post was originally written as a column for www.testnewsonline.com (English) and www.testnieuws.nl (Dutch).

On this blog I recently wrote about my meeting with James Bach under the provocative title “What they teach us in TMap Class and why it is wrong”. In mid-July I will go to San Jose for the CAST conference, and during the preceding weekend I will participate in Test Coach Camp. The title of this post is the title of a proposal that I submitted for discussion at Test Coach Camp.

In the past I have been a trainer for quite a few ISTQB and TMap courses. The groups attending the training were often a mix of inexperienced and experienced testers. The courses cover topics like the reasons for testing, what testing is, the (fundamental) processes, the products that testers create, test levels, test techniques, etc. In these three-day courses all exercises are done on paper. Throughout the whole training, not once is actual software tested! I wonder if courses for developers exist where not a single line of code is written.

In San Jose at Test Coach Camp I want to discuss the approach of these courses with my peers. How can we improve them? I feel these courses are not designed to prepare testers to test well, let alone to encourage them to become excellent in their craft.

During my dinner with James, I asked him what he would do if he were to train novices to become good testers. He replied that he would let them test some software from the start. He would certainly not start with lectures on processes, test definitions and vocabulary. During a session the students will (unknowingly) use several techniques, which can be named and explained further as they are stumbled upon. A beautiful exploratory approach I would like to try myself: learning by doing! But there are many more opportunities to improve testing courses. People learn by making mistakes, by trying new things. Testing is much more about skills than about knowledge. Imagine a carpenter doing basic training: it would mainly consist of exercises! My neighbour is doing a course to become a furniture maker. She is learning the craft through many hours of practice creating workpieces. Practice is the biggest part of her training!

One of the comments on my blog opposed James Bach’s suggestion. Peter wrote: “I have been both a tester and trainer in ISTQB and TMap. Yes we can make testing fun but without a method that testing has no structure and more importantly has no measurable completion. How will those new people on “more practical” course know when they have finished? What tests did they do? What did they forget? What defect types did they target? Which ones did they not look for? What is the risk to the system? My view after 40 years as a developer and tester is that this idea might be fun but is not just WRONG but so dangerously wrong that I am sad that no one else has seen it.”

What do you think?

Let’s Test 2012: an awesome conference! – Part 4

Wednesday May 9th: Let’s Test day 3: Sessions

Keynote Scott Barber

Last day of this awesome conference. Because we went to bed quite late (or early, depending on how you look at it), I was a bit hungover. But the adrenalin for my upcoming talk made my hangover disappear unbelievably quickly. The day started with a keynote by Scott Barber titled “Testing Missions in Context From Checking To Assessment”. I had no clue where this talk would take us, but the title intrigued me. Scott started with a fun incoming message in which he was asked to test a website, with all discrepancies in production to be blamed on him. The message ended with the rhetorical question: “do you accept this mission?”. Scott talked about missions and tasks and gave some nice examples drawing on his military history. His advice: “always look at the mission two command levels up”. At the end of his talk he presented his new “Software System Readiness Assessment” and the Software System Assessment Report Card. I think I like his model, but I have to give it some more thought.

Scott Barber - Testing Missions in Context From Checking To Assessment

Scott relaxed on a bar stool on stage

Sessions

On Wednesday there were only two sessions because there was also a second keynote in the afternoon. I went to Michael Albrecht’s talk “From Good to Great with xBTM”. Michael talked about session-based and thread-based test management. Both are very good, but combined they are great, he claims. He showed the tool SBTExecute. I haven’t had the chance to try it yet, but it looks promising. His talk showed how SBTM and TBTM can be combined using mind maps and how this approach can be used in agile.

My talk “So you think you can test?” was planned in the second session slot, right after lunch. The room was packed and Zeger van Hese (did I mention that he is program chair of EuroSTAR 2012?) was facilitating the session. What could go wrong? After my talk there was a nice Open Season with some great questions. Thanks to all for attending and participating, and thanks to Zeger for facilitating. I hope everybody enjoyed my talk as much as I did.

Michael Albrecht - From Good to Great with xBTM

Look who's talking

So you think you can test? (Photo: Zeger van Hese)

Keynote Julian Harty

Julian gave a magnificent keynote titled “Open Sourcing Testing”. He called on testers to share their stuff with others so all can benefit, learn and eventually become better testers. One of his slides said: ask “what can I share?” that doesn’t risk “too much”. And I think he is right. We should share more and we should be more open about what we do. Sure, there is a lot of stuff that is too privacy-sensitive to share, but why should we reinvent the wheel in every organisation or project? By sharing we can also learn faster…

Julian Harty - Open Sourcing Testing

The End: Organisers on stage saying goodbye

The end…

Here you can find all the presentations and lots of other blogs about Let’s Test 2012. I had a wonderful time, met loads of great people and learned a lot. So I think I can truly say: it was an awesome conference. I have already registered for Let’s Test 2013 … the countdown has begun. See you there? Or if you can’t wait that long: maybe we will meet in San Jose at CAST 2012 in a couple of weeks? That promises to be an awesome conference as well!

People leaving, taxis driving on and off

My physical takeaways: Julian, Rikard, Torbjörn and Scott: thanks!

Let’s Test 2012: an awesome conference! – Part 3

Tuesday May 8th: Let’s Test day 2: Sessions

Keynote Rob Sabourin

After a “tired” but funny introduction by Paul “Hungover” Holland, Rob Sabourin climbed the stage to give a great keynote titled “Elevator Parable”, in which he told a story about a conversation he overheard in the elevator. The central element of his talk was a bug from that conversation. Rob entertained the audience with voice mail messages from famous testers like James Bach and Rex Black, and after every message the audience was asked to triage the bug. A very entertaining and interesting talk. Duncan has written a much more comprehensive blog post about this talk, which you can find here.

Paul "hungover" Holland

Rob Sabourin (photo by Anders Dinsen)

Rob Sabourin – Elevator Parable

Sessions

After the keynote I went to four really good sessions:

  1. Markus Gärtner – “Charter your tests”: we worked in small groups to create charters to test an application of our choice. A nice “dojo” style exercise with some good discussions in the retrospective part.
  2. Rikard Edgren – “Curing Our Binary Disease”: a great presentation in which Rikard warns us about the Binary Disease, a serious disease with four symptoms: pass/fail addiction, coverage obsession, metrics tumor and sick test techniques.
  3. Louise Perold – “Tales from the financial testing trenches”: for me, working for a bank, this case study was very interesting. She shared her experiences with context-driven testing in the financial domain; some topics she covered were the low-tech dashboard, reporting with mind maps, learning & motivation, effect mapping and debriefing.
  4. Anne-Marie Charrett – “Coaching Testers”: in this session Anne-Marie gave a short introduction to the coaching model she and James Bach have developed. After that she did a short coaching session via Skype, projected on the beamer, to show how it works. Next the group was invited to coach five anonymous testers via Skype on the laptops in the back of the room. The exercise was fun and it was interesting to see how a coaching session via Skype evolves. David was one of the testers being coached; read his write-up.

Markus Gärtner - Charter your tests

Working in groups in Markus’ session

Sharing and discussing results

Some results and flip chart art


Lunch Outside (who is the person in the middle?)


Rikard Edgren - Curing Our Binary Disease


Louise Perold - Tales from the financial testing trenches


Anne-Marie Charrett - Coaching Testers


Live coaching testers


Evening fun

Again the evening brought lots of fun: Let’s Test had an amazing evening program including guided art and nature tours, sports, open space, lightning talks, competitions in the test lab, quizzes and … sponsored free beer! It was great to have all attendees of the conference present at the same venue all night. This created a wonderful ambiance with fun, good conversations and a lot of interesting people to meet.

Xbox ski fun

Let's Try


Test Lab Heroes


The Test Lab packed


Ideal game for testers: Set!


Sun comes up, time to go to bed!


Let’s Test 2012: an awesome conference! – Part 2

Monday May 7th: Let’s Test Tutorial day

Keynote Michael Bolton
Ola officially opened the conference with this great song by one of my favorite bands. When the music started I looked around the grand hall to see where they had hidden the cannons… you never know.

And we rocked! The first keynote was by Michael Bolton, who gave a great talk, “If it is not context-driven, you can’t do it here”, reminding us in one of his first slides that the title is ironic. See the live blog by Markus Gärtner for a full report on this great talk. There were a lot of tweets during the talk, like these: “Adopt or adapt a clients context is part of the paradox of being a context driven tester”, “Mature people don’t try to get rid of failure, they manage it” and “Testers are in the business of reducing damaging certainty”. Meike Mertsch created some awesome drawings to capture the opening and keynote. This reminds me that I want to do the same course Markus and Meike did…

The event hall filling up

Michael Bolton – If it’s not CD you can’t do it here!

Live blogging by @MeikeMertsch

Tutorial Fiona Charles
After the keynote it was tutorial time and I joined Fiona Charles on the topic “test leadership”. A nice big group of 25-30 people sat in a circle; we introduced ourselves briefly and explained our motivation for being in this tutorial. I always enjoy the variety of reasons why people choose a specific session. During the day we did a couple of interesting exercises and debriefed them quite extensively. In one of the exercises we were divided into two groups. Each group had to create a leadership challenge for the other group in 45 minutes, and the other group would get 45 minutes to solve it. After creating the challenges, both were solved by the groups. The interesting thing about these exercises is that while working on them, you are also the subject of them, and you are aware of that fact. I got some interesting insights and takeaways to chew on.

Fiona and some attendees


The tutorial group


More tutorial attendees


Working in groups

Teamwork during exercise

More discussion

Results drawn by Markus

Results in a mind map

An evening in the Test Lab
After a beautiful dinner it was time for the evening program. There was so much to do, but I chose the test lab, since I like actual testing with my peers on these occasions. We did a group exercise in collaborative planning with corkboard.me as the planning tool. Here I noticed the common problem when people all have their own device: everybody is too focused on what they are doing themselves and not really collaborating. A lot of lessons and a good experience again, so cheers to James and Martin, who organized the lab here. The rest of the evening we spent having a couple of beers and discussing all kinds of test and non-test related topics.

Working with corkboard


Discussion and concentration

A lot of hard work

Let’s Test 2012: an awesome conference! – Part 1

The world of software testing is changing and context-driven testing (CDT) is on the rise. In the USA it is better known and more widely applied than here in Europe. Over there, the Association for Software Testing (AST) is fairly context-driven. Every year they organize a conference called CAST, where CDT is one of the main topics; in 2011 the theme of the whole conference was context-driven testing. People like Cem Kaner, Michael Bolton and James Bach travel the world to teach others about CDT. They encourage others to create peer workshops like DEWT in the Netherlands, SWET in Sweden and GATE in Germany, which help spread the word about CDT. At other conferences CDT gets some attention, but that isn’t enough… In the summer of 2011 five brave gentlemen from Sweden decided to create something beautiful: a context-driven conference in Europe! Let’s Test was born. I am very pleased to be one of the approximately 140 people who took part in the first Let’s Test ever, and I feel very honoured that I was part of the excellent line-up of speakers.

Peter (aka Simon) Schrijver and I arrived at Runö in Åkersberga early Sunday morning after picking up Fiona Charles at her hotel in Stockholm. Here we met the organisers of the conference: Johan Jonasson, Henrik Andersson, Ola Hyltén, Torbjörn Ryber and Henrik Emilsson. The venue is beautiful!

The venue: Runö – Möten & Events

Hotel buildings 1 and 2

Keynote hall (l) and main building (r)

My Room

The view

More view

The garden

Sunday May 6th: LTWET (LEWT goes Sweden / Let’s LEWT)

After a quick breakfast I joined a LEWT-style peer workshop organized by James Lyndsay. Peter joined the people being trained by Paul Holland as facilitators for the Let’s Test conference; later that morning some of them would join the peer workshop. The theme of the peer workshop was “design” and we started with a small group: James Lyndsay, Desi (James’ wife), Neil Thompson, Fiona Charles and me. Later Paul Holland, Ilari Aegerter, Ben Kelly, Torbjörn Ryber, Rikard Edgren, Peter/Simon Schrijver and Simon Morley joined.

Dot voting monkeys

Great topics


Fiona, Paul and Ilari

Torbjörn and Peter/Simon

Desi, Simon and Rikard

James, Ben and Neil

Neil on relation between analysis and design


Rikard on Charisma Testing

Simon on Experimental design

Sunday evening May 6th: pre Let’s Test fun

After dinner drinks

More drinks in the cafe

They also have coke 😉

Testing Dojo

Last week I organized a testing dojo. I had wanted to try a dojo for a long time and I was quite disappointed when Markus Gärtner was ill at Agile Testing Days and the testing dojos there were cancelled. What do you do when you want to experience a testing dojo? Right! You just get a bunch of people in a room and try it yourself! To make this dojo a success I read the website Testingdojo.org from start to finish and scanned testing-challenges.org for a good challenge. I also Skyped with Markus for an hour to discuss the dojo. For me the goals of the dojo were:

  • Experience what a testing dojo is (useful? fun?)
  • Learn from group
  • Practice stuff we learned in RST last year

At our dojo 20 people showed up: mostly testers, but also a business analyst and a developer. After ordering pizza for all, I gave a short intro presentation. I explained what a testing dojo is, the rules for the dojo, and the mission for the evening. I wrote the rules on a flip chart and put it on the wall:

  • Work in pairs
  • We recognize 3 roles:
    1. Driver / tester
    2. Recorder / note taker
    3. Observers (audience)
  • Switch roles after 5 minutes:
    1. Tester –> audience
    2. Recorder –> tester
    3. Audience –> recorder
  • Everybody in the audience (observers) takes notes using the checklist
  • The audience does not interfere with the testing, unless asked
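
As an aside, the rotation rule can be made concrete with a minimal sketch in Python (the function and the participant names are mine, purely illustrative): the recorder becomes the tester, the first observer becomes the recorder, and the tester joins the back of the audience.

```python
from collections import deque

def dojo_rotation(participants, rounds):
    """Yield (tester, recorder, observers) per round.

    Implements the flip chart rule: tester -> audience,
    recorder -> tester, first observer -> recorder.
    """
    queue = deque(participants)
    for _ in range(rounds):
        tester, recorder = queue[0], queue[1]
        observers = list(queue)[2:]
        yield tester, recorder, observers
        queue.append(queue.popleft())  # tester moves to the back of the audience

for tester, recorder, observers in dojo_rotation(["Ann", "Bob", "Cas", "Dee"], 4):
    print(f"tester={tester}, recorder={recorder}, observers={observers}")
```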

Mission:
Test the parking calculator of Gerald R. Ford International Airport using SFDPOT (Structure, Function, Data, Platform, Operations, Time) and the Test Heuristic Cheat Sheet.

Observer checklist:

  • What happened?
    • Testing
    • Communication
    • Collaboration
  • How do they communicate?
  • Are they listening to each other?
  • Who is in charge?
  • Also note emotions
  • What path do the testers take?
    • depth
    • breadth
    • Explore first: what do we have?
    • When does what happen?
    • What happens if they find a bug?

I gave the group a sneak preview of the parking calculator. I also showed them the webpage with the information on the car parking and the parking rates. While preparing the dojo I thought the participants would like to do some preparation while eating pizza: they could think about their approach and maybe write down some test ideas or other things they thought could be useful. But the group wanted to start right away. There were some questions about the exact purpose of testing this application. Playing the role of product owner, I told the group that the goal was to inform me whether this calculator needs adjustments.

In the room the participants were sitting in a U-shape, all facing the screen where the beamer projected the display of the laptop at the front of the room. The pair behind the laptop was facing the group. I think it’s not easy to test when 20 people are watching you while you sit in front of the room. In the back of the room were two flip charts with the parking rates. We decided that everybody would have 3 minutes of testing time, after which we would switch roles. After three pairs had finished their testing, we stopped for a short retrospective to discuss the progress so far. We noticed that nobody was really in charge and communication between the pairs was minimal. They were trying stuff without a clear strategy or approach. The first pair could get away with this by saying they were exploring the application to see what they had in front of them, what it could do, etc. The other pairs had some sort of strategy but couldn’t explain it. A discussion on the further approach took place, and the group decided to create a list on a flip chart with items that needed testing. I noticed that SFDPOT wasn’t used explicitly and that the strategy followed by the group was almost fully focused on testing functionality.
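
To show what explicit SFDPOT use might have looked like for this mission, here is a sketch of a charter. The test ideas below are my own illustrations (the real calculator’s rates and fields may differ), not the charter the group produced:

```python
# Illustrative SFDPOT test ideas for a parking rate calculator.
# These examples are mine; the actual rates and fields may differ.
sfdpot_charter = {
    "Structure": ["What makes up the page: form fields, rate table, links?"],
    "Function": [
        "Does each lot type calculate the advertised rate?",
        "What happens when the exit time is before the entry time?",
    ],
    "Data": [
        "Boundary durations: exactly 1 hour, 24 hours, a full week",
        "Invalid input: empty fields, letters in the date fields",
    ],
    "Platform": ["Different browsers, JavaScript disabled"],
    "Operations": ["How would real travellers use it: round trips, long stays?"],
    "Time": ["Stays over midnight, daylight saving transitions, leap days"],
}

for element, ideas in sfdpot_charter.items():
    print(element)
    for idea in ideas:
        print(" -", idea)
```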

After one of the participants created a flip chart with the “test strategy”, testing became more focused. But I still noticed that it was difficult for the participants to test in pairs (communication) and to stay focused without scripted tests (structure). None of the pairs debriefed or communicated with the next pair. To fill this “gap” I asked every pair, after their time was up, whether I could “tick off” one of the items from the test strategy. One of the participants created a list of test ideas before he took his place behind the laptop. It was fun to see a lot of different styles and approaches. I also noticed that being an observer is a rather difficult job. Not many notes were taken on observations, despite the checklist on the wall. Most of the participants focused their observations on the testing and the application being tested, rather than on the people testing and their communication and collaboration.

After all pairs had had their go at testing, we did a retrospective. Below is the feedback from the group.

Lessons learned:

  • Group was too big, a group of up to 10 people is better.
  • More time to test: with a smaller group this will probably be easier.
  • Give the group a bit more structure. It is helpful when the starting point is clear.
  • Stop more often to evaluate. Idea: 1. Mission – 2. Test – 3. Self debrief – 4. Group feedback – 5. Test again

It was a great evening, although it didn’t go as smoothly as I had hoped. But we learned a lot about testing dojos, and for a first time I think it was quite a successful evening; I hope the attendees went home with the same feeling. In a small group we discussed the dojo somewhat further and the reactions were enthusiastic. There is still enough room for improvement; as always, practice makes perfect! A second dojo is already planned and I am looking forward to it.

Other notes made during the test dojo:

 
