Category: Context-Driven (Page 3 of 4)

Misconceptions about testing

Shmuel Gershon’s tweet pointed me to an article on the Scrum Alliance website, “Agile Methodology Is Not All About Exploratory Testing” by Dele Oluwole. I share Sigge Birgisson’s concerns: “the post clearly shows what I mean when having deep concerns about the knowledge of testing in agile community”.

I think Dele doesn’t fully understand what testing is, or at least he uses a different definition than I do. And he certainly doesn’t understand exploratory testing. Rikard Edgren wrote an open letter to explain what testing is. Please read his high-level summary of software testing to understand what testing means to me.

"It is imperative to state in clear terms why Agile testing cannot be all about exploratory testing"

(The text in these gray frames is quoted from the article by Dele.)

I wrote a post some weeks ago about agile testing, titled “What makes agile testing different?”: agile testing isn’t that much different from “other” testing. Why do some people think agile is so different? To me there is no such thing as Agile testing. There is testing in an agile context. And every agile context is probably different. So saying that agile testing cannot be about exploratory testing makes no sense to me.

"It is unequivocally the case that: you cannot estimate your time for exploratory testing, i.e., assign points realistically."

Estimating testing is an interesting topic. This blogpost by Morgan Ahlström nicely emphasizes that estimates are guesses. Martin Jansson of the Test Eye writes about “utopic estimations” here. Michael Bolton wrote a lot about estimation here, here and here. He explains that testing is an open-ended task which depends on the quality of the products under test. The decision to ship the product (which includes a decision to stop testing) is a business decision, not a technical one.

Especially the fifth part of the black swan series is interesting in this discussion, because Michael writes about the fallacies surrounding “development” and “testing” phases (described by James Bach). He also explains why estimating the duration of a “testing project” by counting “test cases” or “test steps” is not a smart thing to do.

"You cannot plan for exploratory testing, as you do not have defined expected results."

Why are some people so obsessed by expected results? And why is there a need to have expected results to be able to plan testing? Expected results can be very helpful, but there is much more to quality than doing some tests with an expected result. A definition I like is Jerry Weinberg’s: “Quality is value to some person”. You might want to read this blogpost to see that there is more to quality than the absence of bugs. Also have a look at the excellent free ebook “The Little Black Book on Test Design” by Rikard Edgren. On pages 2-6 he uses the “testing potato” to explain that there are more important aspects to the system than the requirements alone.

"There is no defined scope for exploratory testing."

In the exploratory testing I do in my projects there is a “scope”. I do targeted and focused testing using charters, sessions and planning my testing using a dashboard that resembles a scrum board. Have a look at the slides of my presentation “Boosting the testing power with exploration“.

Using a coverage outline in a mind map or a simple spreadsheet, I keep track of what I have tested. My charters (a one- to three-sentence mission for a testing session) help me focus; my wrap-ups and/or debriefings help me determine how good my testing was. My notes and session sheets keep track of what I have done in my testing. Like in scrum, I use “standup” meetings to plan my testing. In these meetings we discuss progress, risks, priorities and charters to be executed. This helps us make sure we continuously do the best testing we can.
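As a hypothetical illustration (this is not my actual dashboard or session sheets, just a sketch of the idea), the bookkeeping above, a coverage outline plus a charter per session, could be modeled in a few lines of code:

```python
# Minimal sketch of session-based bookkeeping: charters per session,
# mapped against a coverage outline. All names here are made up.
from dataclasses import dataclass, field

@dataclass
class Session:
    charter: str                 # one- to three-sentence mission
    areas: list                  # parts of the coverage outline touched
    minutes: int = 90            # session length
    notes: list = field(default_factory=list)

def coverage(outline, sessions):
    """Report which outline areas were visited by at least one session."""
    visited = {area for s in sessions for area in s.areas}
    return {area: (area in visited) for area in outline}

outline = ["login", "search", "checkout", "reporting"]
sessions = [
    Session("Explore login error handling", ["login"]),
    Session("Follow a purchase end to end", ["search", "checkout"]),
]
print(coverage(outline, sessions))
# {'login': True, 'search': True, 'checkout': True, 'reporting': False}
```

The point of such an overview is exactly what the standup discussion needs: which areas have not been visited yet, so the next charters can be chosen deliberately.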

"The tester, product owner, and Scrum team are not in control."

I am not sure I understand what you are trying to point out here. Are you saying that the team is not in control when doing exploratory testing? My model above shows that you can be in control when doing exploratory testing. Exploratory testing is NOT ad hoc or unstructured when done right. If you do it right, you will have control!

"There is no measure of progress, as testers cannot determine when testing is enough."

How do we determine how much testing is enough? Stopping heuristics might help here. No matter how simple the system we are building, there are simply too many variables to test everything. So testing is about making choices about what to test and what not to test. Even with a huge amount of automated checks, we cannot check/test everything (to understand the difference between testing and checking, read this). Testing is not about executing X test cases and being done when they all pass. Testing is providing information so managers can make good decisions. And when do managers have enough information to inform their decisions?

"Still, not many Agile projects will require just two phases, like integration and regression. But it's definitely not only exploratory testing that's needed, as is erroneously believed in some quarters."

I am not sure what you mean by two phases, or by testing in phases at all. I like to use the agile testing quadrants when I try to explain how I think of testing in an agile context.

A team is developing software and the programmers test before checking the software in and making it available to the team. What do we call that sort of testing? Unit testing? But is that the only testing done by the programmer? I argue that the programmer might do all kinds of testing before checking his code in, even functional and acceptance tests. They will probably create a lot of automated checks and maybe even do some exploratory testing to see if the software meets their expectations: quickly testing some usability and performance aspects. So before the software is checked in, the programmer has covered testing in all four quadrants. This does not mean testing is done. More testing can be done; it depends on the context. What does the product owner want to know? What are the risks involved? How much time do we have left? When discussing a test strategy I try not to speak about phases; I like to discuss what gets covered and why. What information is needed by the team and its stakeholders? When talking about coverage I do not mean code coverage but test coverage: the extent to which we have traveled over some map (model) of the system.

So I don’t think you can say “only exploratory testing is needed or not”.

Dele concludes his article with this statement:

"It is the responsibility of the tester (and the Agile/Scrum team) to ensure that acceptance testing is in line with the expectation of the product owner. If we agree that there is an expectation, we therefore have to design test cases (even if minimal) that will verify the specified acceptance criteria."

Dear Dele, please read more about testing and exploratory testing. Some good starting points are the lists of resources on this blog or the one made by Michael Bolton: resources on Exploratory Testing, Metrics, and Other Stuff. I am happy to point you to more good sources of information if you are interested. Just let me know.

Software Quality Characteristics in Dutch

At EuroStar 2012 in Amsterdam, Henrik Emilsson gave a talk about the Software Quality Characteristics poster made by The Test Eye. After his talk he asked if anyone was interested in translating this poster into other languages. My DEWT colleagues and I happily picked up this gauntlet, and we proudly present the Dutch translation: Software Kwaliteit Kenmerken. I use this checklist often when preparing my testing. Now available in Dutch!

 

Why use checklists?

The modern world has given us stupendous know-how. Yet avoidable failures continue to plague us in health care, government, the law, the financial industry—in almost every realm of organized activity. And the reason is simple: the volume and complexity of knowledge today has exceeded our ability as individuals to properly deliver it to people—consistently, correctly, safely. We train longer, specialize more, use ever-advancing technologies, and still we fail. Atul Gawande makes a compelling argument that we can do better, using the simplest of methods: the checklist (source: Amazon.com)

What makes agile testing different?

Last week Pete Walen asked me the following question via Twitter (read it carefully!): What is it that makes Agile Testing different from “other” testing?

Agile vs agile?

What is agile testing? And what is agile? Let me first say that I do not distinguish agile and Agile. For me it is all the same. Agile is a mindset, a way of looking at the world. For me it is not a process or method. It is more a container than a way of working. This blogpost discusses agile vs Agile and I like it. It describes agile as a mindset and Agile with a capital A as something commercial: “The problem here is that all of these sensible suggestions got formalized into “Agile”, with a Capital A. This set of suggestions for “things that seem to work” got boxed up into a package with a bow on top, to be sold to companies and managers.”

Agile testing?

And agile testing? What is agile testing? I’d rather say testing in an agile context than agile testing. Good testing in an agile context is done by first looking at the details of the specific situation. Remember the 7th principle of context-driven testing: “Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.”

Testing is an essential part of software realization. Implementing testing in an agile context is a challenge. It comes with some interesting challenges for testers:

  • the iterative way of working leaves less time to test than most of us are used to in a more traditional (waterfall) context; it requires a different approach to testing and a different phasing of testing
  • making sure you can perform sufficient testing fast enough to keep up with the project
  • ensuring that “self-managing teams” do enough testing
  • coping with changing team dynamics, in which people work closely together and interaction is important
  • integrating structured testing in an environment where change is common
  • delivering added value as a tester when there is no software to test

Testers need to deliver immediate value. In an agile environment, rapid feedback allows the team to move forward continuously. Testing should instantly provide useful and understandable information about the status of the products in development. This allows the team to continuously deliver value to the business and to make maximum progress.

Different?

So what is the difference? I think the testing itself is not so much different; it is the context in which you do the testing that is different. If you are looking for differences between waterfall and agile, my list of the most important ones would be:

  • less time to prepare, execute and report (short sprints).
  • iterative and incremental approach: excellent unit testing is essential.
  • test automation (some rather call it automated checking or tool assisted testing) is essential for fast feedback and continuous integration.
  • role change: less testing, more coaching. Testers become “test coaches” or “quality directors” to make sure the team is doing sufficient testing. Enough (not too much nor too little) and of good quality.
  • cope with less certainty: change is common. Test documentation needs to deal with change by being transparent, using simple dashboards and light weight test documentation.
  • team work: where many testers are used to work in TEST teams, they are now working in DEVELOPMENT teams.
  • continuous critical thinking: testers need to help the team by thinking critically about impact and risks. Where testers used to do that upfront while writing documents like master test plans, they now have to do it continuously throughout the project: in grooming sessions, daily standups, planning sessions, etc. But also in their own work: making choices about what to cover, in breadth and in depth.

Again: the testing itself is not so much different; it is the context in which you do the testing that is different!

On the scale of Context-driven…

In the last edition of Testkrant (in Dutch) I published an article on context-driven testing called “I am a context-driven tester! Huh? Really? So?“. In this article I (try to) explain what context-driven testing means and why I think I am context-driven. Jan Jaap Cannegieter reacted via email with an interesting question that has crossed my mind several times already. The following quote is from his email, translated and slightly changed:

“Isn’t everyone context-driven to some extent? And I mean that on a sliding scale. People who always use the same method and implement it slightly differently every time are maybe 2% context-driven (I have combined context-driven and context-aware, sorry for the simplification). The Jedi tester who blends dozens of test methods into a unique test approach for a specific situation is perhaps 98% context-driven.”

Jon Bach presented a “freedom” scale in his presentation Telling Your Exploratory Story at the Agile 2010 Conference. Jon contrasts scripted testing and exploratory testing by plotting them on the freedom scale above.

Could such a scale also be applied to being a context-driven tester? Contrasting “Context-oblivious” with “Context-driven”? Maybe putting “context-aware” somewhere in the middle of the scale? Context-driven, context-oblivious and context-aware are explained on the website www.context-driven-testing.com.

I am not totally happy with this model yet, but I can’t put my finger on how to improve it. There is more to being context-driven than just applying methods and techniques. I also ask myself: what is the added value of such a scale? I think it helps testers better understand the differences between context-oblivious, context-aware and context-driven. It might also make it easier to bridge the gap between the extremes, or even to advocate that everybody is, or can be, context-driven to some extent.

What do you think?

DEWT2 was awesome!!

Last Friday (October 5th) a bunch of software testing nerds and one agile girl gathered in Hotel Bergse Bossen in Driebergen to talk about software testing, with the central theme “experience reports: implementing context-driven testing”. Ruud published almost all the photos I took on the DEWT website, so I have to write a report in text here. Thanks dude 😉

After some drinks and dinner we gathered in the conference room called “the chalet”. I suspect they call it the chalet because of the humid smell inside, because it certainly didn’t look like one.

But never mind, the room was good enough for a peer conference. Lightning talks were on the program for Friday, and Jean-Paul started with a talk in which he asked the question: “Is the context-driven community elitist?”. Jean-Paul sees a lot of tweets and blogs from people in the context-driven community who seem to look down on the rest of the testing community, sending the message “look how great we are!”. Is this effective behaviour, given that the context-driven community wants to change the way people test, he asked himself.

Joris had a short and clear answer to the first question: “yes!” (the context-driven community is elitist). We had a long discussion that went everywhere but never came close to answering the question about effectiveness and how it could be done better. Nevertheless it was a valuable and fun discussion. I will blog about this later this year.

We had a great evening/night with the home-brewed beer “Witte Veer”, Belgian beer brought by Zeger, and a bottle of Laphroaig quarter cask brought by Joris. Sitting at the fireplace, many of us stayed up until late.

Saturday morning the last DEWTs and guests arrived. With 23 people in the room we had a great group. Our guests were: Markus Gärtner, Ilari Henrik Aegerter, Tony Bruce, Gerard Drijfhout, Pascal Dufour, Rob van Steenbergen, Derk-Jan de Grood, Joep Schuurkes, Bryan Bakker, Lilian Nijboer and Philip-Jan Bosch. The DEWTs: Adrian Canlon, Ruud Cox, Philip Hoeben, Zeger van Hese, Jeanne Hofmans, Joris Meerts, Ray Oei, Jeroen Rosink, Peter Schrijver, Jean-Paul Varwijk and myself.

Peter was our facilitator, and for the first time at DEWT we used k-cards to facilitate the discussion. I really like this method, but Lilian didn’t. She had an interesting discussion on Twitter with some DEWTs.

Jean-Paul: I like the LAWST format we are using for #DEWT2
Lilian: @Arborosa i don’t. impersonal and inhibits useful discussion
Markus: @llillian @Arborosa What’s the improvement you’re suggesting? 🙂
Lilian: @mgaertne @Arborosa but i am invited and feel i at least have to try it this way
Jean-Paul: @llillian @mgaertne Invite us to an agile event to learn alternatives
Markus: @Arborosa @llillian Funny thing, I would like to introduce more focused-facilitated sessions quite often in agile discussions using k-cards.
Lilian: @mgaertne @Arborosa different ppl prefer different things 🙂

Ilari kicked off with a presentation about the introduction of context-driven testing at eBay. He had two very interesting ideas. The first is the “Monday Morning Suggestion”, a short email to his team with tips, tricks or interesting links. The second was that he supports his team in getting better and learning by paying for their books and conference visits. Great stuff!

Second was Markus Gärtner with a story about coaching a colleague to become a better, more context-driven tester. He talked about his lessons learned while coaching, where teaching gradually turned into coaching. He also gave some insight into the transpection sessions he did using the Socratic method.

The third presentation, by Ray Oei, was about changing people to become more context-driven. An interesting discussion developed about how to get people to change: we focus too much on the people who do not want to change instead of working with the people who do. Pascal asked an interesting question, which I put on Twitter: “If all testers become context-driven overnight, are we happy?” James Bach replied: “Yes, we are happy if all testers become CDT” and “We are happy if all testers take seriously the world around them, and how it works, and dump authoritarianism”. To me it doesn’t really matter. I do hope more people become context-driven, but I prefer good software testing, no matter what they call it.

After the discussion Joris said he was attending the wrong peer conference, since he had expected a lot of discussion about testing instead of what he calls “people management”. An interesting point, but isn’t testing also a lot about people? Just an hour late, we went for lunch and a walk in a wet forest. After the walk the group photo was taken.

The fourth talk was by Ruud Cox, who talked about testing in a medical environment. He described the way he has been working: he and his fellow tester use exploratory testing to learn and explore, while scripted testing is used to do the checking. Ruud explained that exploratory testing fits very well in an R&D environment. After his talk we had an interesting discussion about auditors and their role in testing.

Around 16:00 it was my turn to talk about implementing context-driven testing within Rabobank. I did a short version of the talk I will give at EuroStar in November, describing the challenges we had and what we did to change our context. What did we do? What worked and what didn’t? Some interesting questions were asked, and we had a nice discussion about the personal side of becoming more context-driven, in which Joep explained how he became context-driven. The first stage was being interested, and he went looking for a book. The second stage took him over a year, struggling with the stuff he learned at RST and trying to adapt his way of working. In the third and last stage he doesn’t need anything anymore, because he will always find a way to make things work.

After my talk and the discussion, the day ended. We did a quick round to share our experiences and thanked the people for attending, organizing and facilitating. Our agile girl still doesn’t know what to think of the facilitation with the k-cards; she obviously had to get used to discussing with the cards. I am curious what she thinks of them once she has had the chance to reflect on it some more.

Altogether I had a great time. A pity we didn’t play the Kanban game on Friday, but I hope Derk-Jan will give me another chance at the game night at TestNet in November. Thanks all for the awesome weekend!!

DEWT3 is already being planned for April 20 and 21 next year and James Bach will attend. Let me know if you are interested in joining us in April 2013. The topic hasn’t been decided yet, but if you have great ideas please share them.

Standing left to right: Ray Oei, Jean-Paul Varwijk, Adrian Canlon, Markus Gärtner, Ruud Cox, Joris Meerts, Pascal Dufour, Philip Hoeben, Gerard Drijfhout, Bryan Bakker, Derk Jan de Grood, Joep Schuurkes, Lilian Nijboer, Philip-Jan Bosch, Jeroen Rosink, Jeanne Hofmans
Kneeling left to right: Tony Bruce, Zeger van Hese, Ilari Henrik Aegerter, Huib Schoots, Peter Simon Schrijver

15 test bloggers you haven’t heard about, but you should…

Everybody knows about James Bach, Michael Bolton, Lisa Crispin, Elisabeth Hendrickson, Gojko Adzic and other famous test bloggers. If not, this is a serious wake-up call: WAKE UP! Open your eyes and start reading some of these excellent blogs about testing. But there are many other blogs on testing worth reading. I have a large list of blogs by colleagues which I read regularly. I went through this list and selected these fairly unknown bloggers for you. I invite you to read their blogs, comment on them and come back often. As you can see, it is an international community!

1. Australia: David Greenlees – Martial Tester

David is from down under and I “know” him from the Miagi-Do School of software testing. He has a goal in his testing life: to get mentioned on the commendations list by James Bach. A great goal, and while typing this and reading through that list, I realized I want to get mentioned on it too!

2. Belgium: Zeger van Hese – Testsidestory

Fellow DEWT, test friend and honorary Dutchman. Did I mention that Zeger is program chair for EuroStar 2012? How can a EuroStar program chair be so unknown when he is a) a great and fun guy to hang out with and b) a great tester who has given awesome presentations and writes great blog posts? “On being context-driven” and “Finding Porcini” are some of the posts I like.

3. Canada: Iain McCowatt – Exploring Uncertainty

Iain was one of the trainers when I took BBST Foundations last April, and I started reading his blog. He did a great series on regression testing and writes well-thought-out blog posts. Can anyone tell me how to pronounce his name?

4. Finland: Pekka Marjamäki – How do I Test

Pekka is an interesting guy. I “met” him via Twitter and a while later I found his blog. Interesting stuff! Even Rex Black has recently commented on one of his posts. I still have to talk to him about Testing with the stars; read about it on his blog. Very interesting concept.

5. Japan: Ben Kelly – Testjutsu

I met Ben in Sweden at Let’s Test 2012, where he gave a great presentation called The Testing Death. Ben is a thoughtful person of not so many words, but still fun to hang around with. Give his blog a read. He hasn’t been very active recently, but you can still find some jewels there.

6. Netherlands: Jean-Paul Varwijk – The wondrous world of software testing

He cannot be absent from this list: my fellow DEWT and testing buddy at work. Jean-Paul has an impressive track record over the last year: he managed to get a black belt at the Miagi-Do School of software testing during his introduction challenge (which is very rare), he impressed James Bach with his answer to a challenge given to him over dinner, and he is giving several national and international talks at big testing conferences all over Europe this year. A guy to keep an eye on…

7. Netherlands: Joep Schuurkes – The testing curve

I met Joep at the Let’s Test conference in Sweden and found out he is actually Belgian 😉 Joep is context-driven and writes about that on his blog; his posts on The Seven Basic Principles of the context-driven school are great! Joep is also a great guy to hang out with (at a conference, at least).

8. Romania: Jari Laakso – Software Testimonies

Jari is hyperactive on Twitter and loves puzzles! He creates his own very challenging puzzles and blogs about them. Find him on Skype and he will have you puzzling for at least a day. This guy likes to be challenged; read this post about “18 Testing Challenges from Santhosh Tuppad”.

9. Scotland: Darren McMillan – Better Testing

Darren has been around quite a while but hasn’t been very active lately. Still, he is the king of mind mapping in testing. His blog post “Mind Mapping 101” has 53 comments and is awesome, just like “Essential mind mapping: Rapid test design”. If you want to do something with mind mapping, come to my tutorial at Agile Testing Days OR read Darren’s blog. By the way, he also writes about other topics.

10. Sweden: Johan Jonasson – Testing Thoughts

Not a very original name for a blog, since Shmuel Gershon uses the same name for his great blog. [Update: Johan changed the name to “Let’s talk testing”.] Johan is one of the people behind Let’s Test, and after reading all the posts about the conference he must have thought he could do that too; well, have a look yourself. I like his post on “Thinking Visually” very much.

11. Sweden: Rikard Edgren, Henrik Emilsson and Martin Jansson – Thoughts from the Test Eye

These gentlemen were also highly involved in Let’s Test and have been blogging for some time now. They get help from Torbjörn Ryber and Henrik Andersson, who also write interesting stuff. This blog is stuffed with great posts; hours of interesting reading guaranteed.

BTW: download the two free books written by Rikard and Torbjörn! Both highly recommended!!

12. Sweden: Simon Morley – The Tester’s Headache

Simon has had a blog since 2009, and I have been reading his stuff for a year or maybe a little more. I met him at Agile Testing Days last year and had some great conversations with him. Simon has a sharp mind, and that shows in his writings…

13. Switzerland: Ilari Henrik Aegerter – ilari.com

Ilari lives in Switzerland and does online coaching like James and Anne-Marie, and he does a great job. Ilari is fun and writes interesting stuff. Have you answered his question in the post “Major Consensus Narrative, Asking Supposedly Hyper-Smart Questions and Being Context-Driven” yet?

14. UK: Duncan Nisbet – Bespoke Testing

I met Duncan at Let’s Test in Sweden last May, but before that we did BBST together and we are both in Miagi-Do. Duncan is a polar bear: when everybody in Sweden was cold and wearing jackets and sweaters, Duncan still walked around in shorts and flip-flops. He wrote a nice post about working together online: “Collaboration In Testing”.

15. UK: Tony Bruce – Have you ever danced with the tester?

I met Tony last year at Test Bash, and over dinner I found out he is a great guy. Cheerful, subtle and fun. Tony is the driving force behind the tester gatherings in the UK: meetups of testers in a bar with short talks and lots of beer! One of his latest posts is great: “Which do you have? A job or a career?”

 

Adaptability vs Context-Driven

Last week Rik Marselis sent me an email pointing me to an article with the title “The adaptability of a perceptive tester”. He added: “Have you read this article? Should appeal to you!”. The article is written by a couple of Dutch (Sogeti) testers who, the introduction tells me, get together once in a while to discuss the profession of testing and quality assurance. This time they discussed some remarkable examples from their experience of perceptive testers (who are aware of the specific situation they’re in) adapting their approach to fit specific needs.

I replied to Rik with the following email:

Hey Rik,

Nice article, I had already seen it. But adaptive or perceptive is not context-driven. I also totally disagree with the conclusion:
“Together we concluded that although TMap NEXT was introduced some six years ago it still has a solid foundation for any testing project since the essence of adaptiveness makes sure that any perceptive tester can collaborate in doing the right things at the right moment in any project and ensure that the proper quality is delivered to solve the problem of the stakeholders.”

TMap Next contains a number of dogmas (or rather is based on a number of dogmas), like: testing can be scheduled, the number of test cases is predictable, the content of a test case is predictable, the sequence of the process, etc.
Therefore I think TMap Next is not usable in every situation, at least not effectively and efficiently. Being adaptive is good, but I can imagine situations in which TMap Next has to be adjusted so rigorously that the result is no longer recognizable as TMap. In addition, TMap Next says that ET is a technique: a good example of another dogma. And it shows me that TMap is not about testing but about a process and deliverables… Maybe I should write a blog about this.

Regards,
Huib

Rik replied with this email:

Hi Huib,

We really need to reserve some time to discuss this from different sides because some things that you say I totally disagree with. A conscious tester can handle any situation with TMap. I think whether ET is a technique or approach is really a non-discussion. TMap calls it a technique so you can approach testing in different ways in different situations. And since TMap itself is a method you cannot call ET a method too.

I think Context-driven means you choose your approach depending on the situation.
I think Adaptive means you choose your approach depending on the situation.

Perceptive means conscious, as you are aware of the situation, you can choose an appropriate approach. Well, it is worth discussing.

Regards,
Rik

Okay, so let’s discuss!

Exploratory testing
Let’s start with the ET discussion. What does TMap say about this? ET is a test design technique. And the definition of a test design technique (on page 579 of the book “TMap Next for result driven testing”): “a test design technique is a standard method to derive, from a specific test basis, test cases that realise a specific coverage”. Test cases are described on page 583 of the same book: “a test case must therefore contain all of the ingredients to cause that system behaviour and determine whether or not it is correct … Therefore the following elements must be recognisable in EVERY test case, regardless of the test design technique used: initial situation, actions, predicted result”.

Let’s connect the dots: ET is called a test design technique. A test design technique is defined as a method used to derive test cases. But ET doesn’t use test cases, not in the way TMap defines them. It can, but most of the time it doesn’t… Mmm, an inconsistency with image, claims, expectations, the product, statutes & standards. I would say: a blinking red zero! Or in other words, there IS something wrong here!

What is exploratory testing? Paul Carvalho wrote an excellent blog post, simply titled “What is Exploratory Testing?”, on this topic, and I suggest people read it if they want to understand what ET is. Elisabeth Hendrickson says: “Exploratory Testing is simultaneously learning about the system while designing and executing tests, using feedback from the last test to inform the next.”

Michael Bolton and the context-driven school like to define it as: “a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to optimize the quality of his or her work by treating test design, test execution, test interpretation, and test-related learning as mutually supportive activities that continue in parallel throughout the project”. Michael has a collection of interesting resources about ET, which can be found here.

So Rik, your argument “since TMap itself is a method you cannot call ET a method too” is total bullshit! It sounds to me like “there is only one God…”.

Context-driven testing
Don’t get me wrong, being adaptive and perceptive is great, but that doesn’t make testing context-driven. A square is a rectangle, but a rectangle is not a square! Please have a look at the context-driven testing website by Cem Kaner. Also read the text of the keynote “Context-Driven Testing” that Michael Bolton gave last year at CAST 2011. In his story you will see that being adaptive (paragraph 4.3.4) is only a part of being context-driven. I admit, it is not easy to really comprehend context-driven testing.

Do you think it was TMap Next that was the common success factor in the stories shared in the article? I doubt it!

Let’s Test 2012: an awesome conference! – Part 4

Wednesday 9th: Let’s Test day 3: sessions

Keynote Scott Barber

Last day of this awesome conference. Because we went to bed quite late (or early, depending on how you look at it), I was a bit hungover. But the adrenalin for my upcoming talk made my hangover disappear unbelievably quickly. The day started with a keynote by Scott Barber titled “Testing Missions in Context From Checking To Assessment”. I had no clue where this talk would take us, but the title intrigued me. Scott started with a fun incoming message in which he was asked to test a website and all discrepancies in production would be blamed on him. The message ended with the rhetorical question: “do you accept this mission?”. Scott talked about missions and tasks and gave some nice examples drawing on his military history. His advice: “always look at the mission two command levels up”. At the end of his talk he presented his new “Software System Readiness Assessment” and the Software System Assessment Report Card. I think I like his model, but I have to give it some more thought.

Scott Barber - Testing Missions in Context From Checking To Assessment

Scott relaxed on a bar stool on stage

Sessions

On Wednesday there were only 2 sessions because there was also a second keynote in the afternoon. I went to Michael Albrecht’s talk “From Good to Great with xBTM”. Michael talked about session-based and thread-based test management. Both are very good, but combined they are great, he claims. He showed the tool SBTExecute. I haven’t had the chance to try it yet, but it looks promising. His talk showed how SBTM and TBTM can be combined using mind maps and how this approach can be used in agile.

My talk “So you think you can test?” was planned in the second session, right after lunch. The room was packed and Zeger van Hese (did I mention that he is program chair of EuroStar 2012?) was facilitating the session. What could go wrong? After my talk there was a nice Open Season with some great questions. Thanks all for attending and participating, and thanks Zeger for facilitating. I hope everybody enjoyed my talk as much as I did.

Michael Albrecht - From Good to Great with xBTM

Look who's talking

So you think you can test? (Photo: Zeger van Hese)

Keynote Julian Harty

Julian did a magnificent keynote titled “Open Sourcing Testing”. He called on testers to share their stuff with others so all can benefit, learn and eventually become better testers. One of his slides said: Ask “what can I share?” that doesn’t risk “too much”. And I think he is right. We should share more and we should be more open about what we do. Sure, there is a lot of stuff that is too privacy-sensitive to share, but why should we reinvent the wheel in every organisation or project? By sharing we can also learn faster…

Julian Harty - Open Sourcing Testing

The End: Organisers on stage saying goodbye

The end…

Here you can find all the presentations and lots of other blogs about Let’s Test 2012. I had a wonderful time, met loads of great people and learned a lot. So I think I can truly say: it was an awesome conference. I have already registered for Let’s Test 2013 … The countdown has begun. See you there? Or if you can’t wait that long: maybe we will meet in San Jose at CAST 2012 in a couple of weeks? That promises to be an awesome conference as well!

People leaving, taxis driving on and off

My physical take aways: Julian, Rikard, Torbjörn and Scott: thanks!

Let’s Test 2012: an awesome conference! – Part 3

Tuesday 8th May: Let’s Test day 2: Sessions

Keynote Rob Sabourin

After a “tired” but funny introduction by Paul “Hungover” Holland, Rob Sabourin climbed the stage to deliver a great keynote titled “Elevator Parable”, in which he told a story about a conversation he overheard in an elevator. The central element of his talk was a bug from this conversation. Rob entertained the audience with voice mail messages from famous testers like James Bach and Rex Black. After every message the audience was asked to triage the bug. A very entertaining and interesting talk. Duncan has written a much more comprehensive blog post about this talk; you can find it here.

Paul "hungover" Holland

Rob Sabourin (photo by Anders Dinsen)

Rob Sabourin – Elevator Parable

Sessions

After the keynote I went to four really good sessions:

  1. Markus Gärtner – “Charter your tests”: in which we worked in small groups to create charters to test an application of our choice. A nice “dojo” style exercise with some good discussions in the retrospective part.
  2. Rikard Edgren – “Curing Our Binary Disease”: a great presentation in which Rikard warns us about the Binary Disease. A serious disease with four symptoms: pass/fail addiction, coverage obsession, metrics tumor and sick test techniques.
  3. Louise Perold – “Tales from the financial testing trenches”: for me, working for a bank, this case study was very interesting. She shared her experiences testing context-driven in the financial domain: some topics she covered were low-tech dashboard, reporting with mind maps, learning & motivation, effect mapping and debriefing.
  4. Anne-Marie Charrett – “Coaching Testers”: in this session Anne-Marie gave a short introductory presentation of the coaching model she and James Bach have developed. After that she did a short coaching session via Skype, projected on the beamer, to show how it works. Next the group was invited to coach 5 anonymous testers via Skype on the laptops in the back of the room. The exercise was fun and it was interesting to see how a coaching session via Skype evolves. David was one of the testers coached; read his write-up.

Markus Gärtner - Charter your tests

Working in groups in Markus’ session

Sharing and discussing results

Some Results
and flip chart art


Lunch Outside (who is the person in the middle?)


Rikard Edgren - Curing Our Binary Disease


Louise Perold - Tales from the financial testing trenches


Anne-Marie Charrett - Coaching Testers


Live coaching testers


Evening fun

Again the evening brought lots of fun: Let’s Test had an amazing evening program including guided art and nature tours, sports, open space, lightning talks, competitions in the test lab, quizzes and … sponsored free beer! It was great to have all attendees of the conference present at the same venue all night. This creates a wonderful ambiance with fun and good conversations, and you meet a lot of interesting people.

Xbox ski fun

Let's Try


Test Lab Heroes


The Test Lab packed


Ideal game for testers: Set!


Sun comes up, time to go to bed!


Let’s Test 2012: an awesome conference! – Part 2

Monday May 7th: Let’s Test Tutorial day

Keynote Michael Bolton
Ola opened the conference officially with this great song by one of my favorite bands. When the music started I looked around the grand hall to see where they had hidden the cannons… you never know.

And we rocked! The first keynote was Michael Bolton, who did a great talk, “If it is not context-driven, you can’t do it here”, reminding us in one of his first slides that the title is ironic. See the live blog by Markus Gärtner for a full report on this great talk. There were a lot of tweets during the talk, like these: “Adopt or adapt a client’s context is part of the paradox of being a context driven tester”, “Mature people don’t try to get rid of failure, they manage it” and “Testers are in the business of reducing damaging certainty”. Meike Mertsch created some awesome drawings to capture the opening and keynote. This reminds me that I want to do the same course Markus and Meike did…

The event hall filling up

Michael Bolton – If it’s not CD you can’t do it here!

Live blogging by @MeikeMertsch

Tutorial Fiona Charles
After the keynote it was tutorial time and I joined Fiona Charles on the topic “test leadership”. A nice big group of 25-30 people sat in a circle; we introduced ourselves briefly and explained our motivation for being in this tutorial. I always enjoy the variety of reasons why people choose a specific session. During the day we did a couple of interesting exercises and debriefed them quite extensively. In one of the exercises we were divided into two groups. Each group had to create a leadership challenge for the other group in 45 minutes. The other group would then get 45 minutes to solve it. After creating the challenges, both groups set about solving them. The interesting thing in these exercises is that while working on them, you are also the subject of the exercises and you are aware of that fact. I got some interesting insights and take-aways to chew on.

Fiona and some attendees


The tutorial group


More tutorial attendees


Working in groups

Teamwork during exercise

More discussion

Results drawn by Markus

Results in a mind map

An evening in the Test Lab
After a beautiful dinner it was time for the evening program. There was so much to do, but I chose the test lab, since I like actual testing with my peers on these occasions. We did a group exercise in collaborative planning with corkboard.me as the planning tool. Here I noticed the common problem when people all have their own device: everybody is too focused on what they are doing themselves and not really collaborating. A lot of lessons and a good experience again, so cheers to James and Martin, who organized the lab. The rest of the evening we spent having a couple of beers and discussing all kinds of test and non-test related topics.

Working with corkboard


Discussion and concentration

A lot of hard work
