Category: Exploratory Testing

What testing can learn from social science – The video

What Testers Can Learn From Social Sciences

This is the video of my talk at TestBash 2.0 in Brighton, March 2013.

TestBash 2.0 – What Testers Can Learn From Social Sciences – Huib Schoots from Software Testing Club on Vimeo.

What testing can learn from social science – Part 5

What can testing learn from social science?
Why is this important to testers? My conclusion is that testing and the social sciences are deeply interconnected and there are many lessons we can learn from this field. We should see what we can apply to our daily testing jobs. We should be more aware of what we do in testing: social research, making observations, thinking critically and, most importantly, continuing to learn. We should start learning from what people have done and are still doing in the social sciences. Testers should not focus only on quantitative analysis like bug counts or test cases passed and failed, but also do qualitative research. Test reports should be stories about the product and the testing we did (see Michael Bolton's article on test reporting and the telling of the story). We should use the numbers to support or back up our story. I often see it the wrong way around: lots of tables full of counts that tell us nothing without context. Managers tend to draw their own conclusions and make decisions based on this data if we do not help them by telling the story. Again, testing is about collecting information for the people who matter, to enable them to make informed decisions.

Coverage?
If a manager comes up to you and asks you:

“So what is the coverage?”
or
“How many test cases do you have?”

What do you say? It is really hard to talk to managers who are obsessed with numbers and think that testing is about the number of test cases, right and wrong, green and red.

Consider this the next time you talk to them. Make a simple calculation of all the possible tests you could do for the project you are working on, regardless of how simple or complex it is. Think of all the possible combinations, both positive and negative.

What number have you ended up with?

  • 1 thousand?
  • 1 million?
  • 1 billion?
  • More?

The number of possible things you can test is endless; exhaustive testing is futile. Even a simple requirement has infinite possibilities to test.

So what is the coverage if we do 1,000,000 test cases?

  • Coverage: 1,000,000 divided by infinity is very close to zero!
  • Coverage: 10,000,000 divided by infinity is still very close to zero!

So no matter how many test cases you have, the coverage of all the possible test cases you could have done is close to zero. This is why risk, priority and making choices become important in testing, but that is a different topic.
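To make this concrete, here is a minimal back-of-the-envelope sketch in Python. The form and its field sizes are my own invented assumptions, not taken from any real project; the point is only how quickly the input space explodes:

```python
from math import prod

# A hypothetical "simple" form; all field sizes below are invented
# for illustration.
fields = {
    "username": 26 ** 8,   # 8 lowercase letters only
    "age": 130,            # 0..129
    "country": 200,        # a dropdown
    "newsletter": 2,       # a checkbox
}

combinations = prod(fields.values())
print(f"{combinations:,} possible input combinations")   # ~1.1e16

# And this still ignores invalid input, timing, state, configuration
# and environment. Now run 1,000,000 test cases against it:
print(f"coverage: {1_000_000 / combinations:.2e}")        # effectively zero
```

Change the assumptions however you like; the denominator will always dwarf any number of test cases you can realistically run.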

Be a scientist
Science is important. It gave us critical thinking, and that helps us prove (or disprove) the theories we have about the product. Try to prove yourself wrong instead of proving yourself right.

Ask critical questions:

  • Could it be something else?
  • Is this what I expected?
  • What did I do differently?
  • What else can I do?
  • How can I explain what I did?

While testing we should practice critical thinking: question the things we encounter and make sure that what we see is true (or not). While thinking we should be aware of hasty conclusions, biases and fallacies. We often do it the wrong way around: if we focus too much on the numbers and the averages, we miss the outliers, the unique random events that can do the most damage.

Qualitative research
The grounded theory method is a research method that operates almost in a reverse fashion from traditional social science research. You start with a view, theory or expectation. However, as you gather more data while testing, your theory or expectation becomes more ‘grounded’ in the information you uncover or discover. Basically, as you experience and gather more and more data, you change your perspective and viewpoint. This is what social scientists do when they go and live with social groups: they have something they want to find out – an assumption or otherwise – and they find out whether it is right by taking part (compare missions in exploratory testing).

In qualitative research done by anthropologists, for example, the context of the research is very important. They accept and deal with ambiguity, situation-specific results and partial answers. Qualitative data deals with meanings. Use it when you want to understand the underlying thoughts and intentions.

Observation
Testers should not observe from the sidelines. We should act like anthropologists do: become part of the group we are observing. Let me give you an example from my own experience.

A couple of years ago I worked as a test coordinator for a media company selling newspapers and magazines. We were implementing a new CRM system and my assignment was to organize the user acceptance testing. For some days I worked with the people selling the newspapers to learn how they were selling subscriptions, and while doing that I learned what was important to them. At first I was observing, but after a day I was selling newspapers myself and really learned how the users were working and what it took to make the department successful. We had requirements and designs, but what I found out about processes and user sentiments was also very valuable for testing. There were important steps that were not documented. I saw people use the software in ways I didn’t expect, ways that weren’t written down anywhere. I asked them what they did and why, and they answered: “that is normally how we do this”. I learned that these people took shortcuts to do their work. The team designing and building the software had no idea what I was talking about when I told them about my observations.

Humans will always take the shortest, quickest route, the one that requires the least amount of thinking. This made clear how important it is to find out what people are thinking and, more importantly, the reasoning behind why they do the things they do. The product is a solution. If the problem isn’t solved, the product doesn’t work (the 5th basic principle of context-driven testing).

Now I know this is called qualitative research, and I think every team developing software should do something similar. Try to really understand the users and the environment in which the product will be used. IT is often way too focused on technical stuff. Testers: go out there and meet the people who are using the product. Be part of their world for a while and start asking those critical questions.

Humans
Software is built by humans for humans. Social science is about people. Software should solve problems and help humans. To really solve a problem, we need to know more about how the users work, what they think, how they feel, their emotions, their desires. Too often I hear development teams say things like: “The user should not do that”. Or the all-time classic: “No real user would ever do that!”. IT is way too technically focused.

Read John’s blog post titled “The Human Element”. It is an awesome story about his wife, a nurse, who explains why the human element is very important in her work. Do you see the parallel with our work?

Now it is your turn!
Use the reading list to learn more about the social sciences, biases and other relevant topics. I am curious and I want to learn from you too. So please share your thoughts and experiences with social science.

“Great software is not produced or tested in factories, but in studios
and rehearsal halls.” (Michael Bolton)

I owe John Stevenson and Michael Bolton many thanks for their inspiration, great discussions and reviewing these blogs.

Reading List

What testing can learn from social science – Part 4

Social science: three presentations
Social science is about society, human nature and human interaction. It is an umbrella term to refer to sciences like anthropology, economics, education, linguistics, communication studies, sociology and psychology.

Anthropology teaches us how people live and interact, and something about culture. Education and didactics help us acquire new knowledge, behaviours, skills, values or preferences, or modify existing ones; they help us understand how we learn and how we can teach others. Sociology teaches us empirical investigation and critical analysis and gives insight into human social activity. Psychology is the study of the mind and behaviour and helps testers understand individuals and groups. Now how is this useful in testing? I’ll try to answer that question later. Let me first tell you about three awesome presentations on the subject of social science and testing.

Testing as a social science
Cem Kaner first did a talk titled “Testing as a social science” in 2006 (slides are here). I haven’t had the pleasure of seeing the talk myself, but the slides drew my interest. Cem made me aware that to test effectively, our theories of error have to be theories about the mistakes people make and when and why they make them. We design and run tests in order to gain useful information about the product’s quality.

Testing is always a search for information. Cem talks about measurement and metrics and the dangers of using metrics wrongly to measure test completeness (a new, updated article on this can be found here). He argues that bad models are counterproductive. Cem also touched on the topic of inattentional blindness: humans often don’t see what they don’t pay attention to. He reminded us that programs never see what they haven’t been told to pay attention to, which is especially valuable to remember when thinking about test automation (see the toy sketch after the list below). When testing we can’t pay attention to all the conditions: the systems under test are simply too complex and there are too many factors that are variable (and uncontrollable). He concludes that thinking in terms of human issues leads us into interesting questions:

  • What tests are we running and why?
  • What risks are we anticipating and how?
  • Why are these risks important?
  • What can we do to help our clients gather the information they need?
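As a toy sketch of that point about automation (my illustration, not Cem’s; the function and its bug are made up), consider how an automated check stays blind to everything it was not told to assert:

```python
# Hypothetical function under test: it formats a price label.
def price_label(amount: float) -> str:
    # Bug: whole amounts lose both their decimals and the euro sign.
    return f"€{amount:.2f}" if amount % 1 else f"{int(amount)}"

def check_price_label():
    # The check only "sees" the digits it was told to look for.
    assert "12.50" in price_label(12.50)
    assert "10" in price_label(10.0)

check_price_label()
print("all checks pass; the missing euro sign goes unnoticed")
# A human tester glancing at the output would spot the inconsistency
# at once: the program never sees what it wasn't told to attend to.
```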

At EuroStar 2012 in Amsterdam I was track chair for two excellent talks, which inspired me to study the subject of social science and qualitative research further.

Curing Our Binary Disease
Rikard Edgren talked about getting cured of the binary disease (slides are here, video is here). The binary disease is when testers don’t provide useful information because (project) managers don’t allow them to: managers demand counts of passes and fails and insist everything must be verifiable. The binary disease limits our thinking. Testers get addicted to counting passes and fails and don‘t communicate what is most important. When addicted, there is no attention for moments of serendipity. A model can help testers find important things, but a percentage might not include things that are important. A coverage model is therefore useful to get ideas, but not useful as a metric of completion. In his talk he introduces the testing potato to show that there are more important things besides the written requirements. More about the potato can be found in his fabulous, must-read free ebook “The Little Black Book on Test Design”.

Testing Through The Qualitative Lens
Michael Bolton’s talk (slides of the StarEast version are here) elaborated on the differences between the physical and the social sciences. In physics, humans are ideally irrelevant and mostly get in the way of the experiment. In the social sciences we use both quantitative and qualitative research methods, accept ambiguity, context-specific results and partial answers, and stay aware of biases while doing research. We should value “partial answers that might be useful”. You do qualitative research when you want to understand something; you do quantitative research to inform that understanding. Qualitative research puts human values first, uses participant observation and practices storytelling and narration. Software testing is the investigation of systems composed of people, computer programs, products, and the relationships between them. Excellent testing is more like anthropology: interdisciplinary, systems-focused, investigative, and it uses storytelling.

To be continued… part 5: So what can we learn from social science?

What testing can learn from social science – Part 3

People are predictably irrational
You think you are rational, but you are not. People fail to realize the irrationality of their actions and believe they are acting perfectly rationally, possibly due to flaws in their reasoning. People’s actual interests differ from what they believe them to be. Mechanisms that have evolved to give optimal behaviour in normal conditions can lead to irrational behaviour in abnormal conditions. Many people put on one “mask” for one group of people and another for a different group, and many become confused as to which they really are or which they wish to become (source: Wikipedia). The subject of irrational behaviour is huge and I recommend reading more about it. Thanks to the many studies done in this field, we can predict irrational behaviour to a degree.

John gave me two book tips by Dan Ariely on this topic that I haven’t checked myself yet:

Or check this website, also by Dan Ariely: Predictably Irrational – Investigating the Hidden Forces that Shape Our Decisions

You are not so smart
A great collection of examples showing how easily people are fooled can be found in the book “You Are Not So Smart” by David McRaney. This book is a dose of psychology research served in tasty anecdotes that will make you better understand both yourself and others. The author describes cognitive biases, logical fallacies and heuristics. For example, there is the well-known confirmation bias, where you tend to look for information that confirms your beliefs and ignore information that challenges them. Another interesting phenomenon is the availability heuristic: a mental shortcut that occurs when people judge the probability of events by how easy it is to think of examples. The availability heuristic operates on the notion that “if you can think of it, it must be important”. Examples are lotteries, where you only see the winners, so you might think it is easy to win. Or school shootings in the USA: people believe that since Columbine there have been more and more school shootings, but the opposite is true! Before Columbine there were more, but we don’t know about them. After reading this book, an interesting exercise is to recognize the biases and fallacies in your own thinking and testing.

Thinking fast and slow
Daniel Kahneman wrote a fascinating book about how our brain works, “Thinking, Fast and Slow”, which has been a bestseller for some time now. This book changed the way I think about thinking. Although it was sometimes hard to read for me as a non-native English speaker, I read the book almost in one go. The book is about the two different ways our brain works: System 1 is fast, instinctive and emotional; System 2 is slower, more deliberative and more logical. I encourage testers to read the book and watch this video, in which the author explains the main points from his book. You might also want to have a look at this shorter video, where the same material is presented more visually. The book will help you understand how your brain works, and it will also make you aware of how people make judgements and come to conclusions. Read what software tester Andy Patterson writes about the book here.

There is a great video of Daniel Kahneman and Nassim Taleb (The Black Swan) talking at the New York Public Library about how people make decisions – a fascinating video to watch – details and access to the video can be found here.

Dancing gorillas
An interesting source to read to learn more about inattentional blindness and other illusions of memory and knowledge is the book “The Invisible Gorilla” by Christopher Chabris and Daniel Simons. It makes you aware of how you can be fooled by your illusions and perception. For more reading on gorillas and inattentional blindness, see this article. Alan Page loves the gorilla! Especially the video. Check what he has to say about the gorilla here.

To be continued… part 4: social science

What testing can learn from social science – Part 2

Testers need to do a lot of thinking. To me, testing is investigating, gathering and providing information about things that are important. I like the definition by Jerry Weinberg: “testing is gathering information with the intention of informing a decision”. Rikard Edgren recently wrote an excellent “open letter” to define testing. Testing is much more than finding bugs or checking whether requirements are met.

Systems thinking
We should not only investigate the “system under test” but also keep related products in mind. What about the people using all these products, or the organisations and processes in which the products are used? Testers should know more about systems thinking: the process of understanding how things, regarded as systems, influence one another within a whole (source: Wikipedia).

A system is not just a collection of things. A system is an interconnected set of elements that is coherently organized in a way that achieves something. It must consist of three things: elements, interconnections and a function or purpose (source: “Thinking in Systems: A Primer” by Donella Meadows). If you want to learn more about systems thinking, you might want to watch this YouTube video by Russell Ackoff and read this post by Aleksis Tulonen about what you can learn from Ackoff.

In one of my projects my client was moving a hospital from several old locations to a huge new building. It was logical that the location codes changed, since it was a new building with a very different layout. But initially they overlooked that this location code was actually used as a department code in several information systems. And these systems used the codes to book costs (finance), plan staff (HR) and distribute food and medicine (logistics). Moving to the new building without overseeing the full impact would have paralysed the whole information landscape. Defining a temporary coding scheme and making minor changes to several systems solved the problem.

Critical thinking
Testing can be seen as a form of research: investigating the system and finding information about it. In research, critical thinking is important: collecting, analysing and interpreting information requires critical thinking skills. Critical thinking to me is thinking (critically) about your own thinking: framing your own assumptions, using that to try to remove bias and, hopefully, clarifying your thoughts with reasoning.

In this video James Bach helps you gain a quick understanding of critical thinking by asking three simple questions:

  • Huh? What does this mean? What is the point?
  • Really? Are you absolutely certain? How can I know?
  • So? Where does this lead? So what?

These questions are very helpful for understanding and thinking critically about anything. This picture (click to enlarge) is taken from the book “Critical Thinking: A User’s Manual” by Debra Jackson and Paul Newberry. This book is a helpful source for learning about critical thinking.

Rule of Three
“If I can’t think of at least three different interpretations of what
I received, I haven’t thought enough about what it might mean.”
(Jerry Weinberg)

Creative thinking
At EuroStar in Amsterdam I met John Stevenson, who has an excellent blog with the intriguing title “The expected result was 42. Now what was the test?”. We talked about what testing can learn from the social sciences, and early this year we had some fantastic conversations via Skype. John pointed me to some very interesting reading about qualitative research: “Qualitative Data Analysis: a user-friendly guide for social scientists“ by Ian Dey. On his blog he wrote some very interesting posts related to testing and social science that you might want to read:

John is currently writing an awesome series of articles about creative and critical thinking. Part 1 of “Creative and Critical Thinking and Testing” can be found here. From there you can find the other parts about the different styles of thinking.

So why is this important?
Systems thinking reminds us to look at the big picture and see systems as a whole. What is the purpose of the organisation we work for? And is the project we are doing contributing to that? Creative thinking helps us to solve problems in a creative way or come up with more things to test and how to do it effectively. Critical thinking helps you to really understand what you are doing. Like in research we have to process large amounts of data and make sense of it. But we also have to recognize, analyse and evaluate (see critical thinking diagram above) information, arguments and problems.

Thinking is an underappreciated subject. Thinking is very important for testers and we should learn from science: doing research, learning to design and perform experiments, collecting, organizing and analysing data and using the results to decide on the next steps in our work. Critical thinking helps us ask better questions in our projects and identify problems faster. It also helps us avoid traps: biases and assumptions. More about that in my next post.

To be continued… part 3: irrationality and biases

What testing can learn from social science – Part 1

Last February and March I had the privilege of talking at Belgium Testing Days in Brussels and TestBash in Brighton about what testing can learn from social science. In a series of daily blog posts I am going to write about this subject: why I chose this topic, what sources I studied and, finally, how I have applied this material in my work.

Rapid Software Testing
In the Rapid Software Testing class I took in 2011, Michael Bolton talked about being empirical and a critical thinker as a tester. About collecting data from experiments using a heuristic and exploratory approach. About reporting by telling stories about the testing instead of only reporting figures and numbers. Testing is about providing valuable information to inform management decisions. This awesome class empowered me to connect the dots of things I had been thinking about for years. It pointed me towards a lot of books and information “outside” the IT and testing domain, and it triggered me to learn more about social science.

Test reports
Do you recognize test reports like this? (Click the report to enlarge.)

I used to write test reports like that. I was counting test cases and issues and advising my clients to take applications into production. But what do these numbers tell us? What if we didn’t test the most important functionality in the software? Numbers don’t mean anything without context!

Another example was an assignment I did at a telecom company years ago. Testing was estimated by the number of test cases: “we have 8 weeks to test, so we can do 800 test cases” was a normal way to plan and estimate testing. Somewhere along the project my project manager told me his budget had been cut by 10%. He asked me to drop 80 test cases from our 800-test-case scope. What was he thinking? As if all test cases take equally long to create, execute and report!

Exact or social science?
Testing and informatics (the science of information) are often seen as exact or physical sciences. People perceive that computers always do exactly the same thing. This gets reflected in the way they think about testing: a bunch of repeatable steps to see if the program is working and the requirements are met. But is that really what testing is all about? I like to think of testing more as a social science. Testing is not only about technical computer stuff; it is also about human aspects and social interaction.

Traditionally the focus in testing is on technical and analytical skills, but testing requires a lot more! Testing is also about communication, human behaviour, collaboration, culture, social interaction and (critical, creative and systems) thinking. The seven basic principles of the context-driven school tell us that people, working together, are the most important part of any project’s context, and that good software testing is a challenging intellectual process; only through judgement and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

Quality
Can we measure the quality of software? And can we do that objectively? When I ask people about quality they often refer to requirements: “quality is compliance to functional and non-functional requirements”. In my experience I have never seen a document that contained all the requirements for a software product. We can argue that requirements engineers have to do a better job. Are they doing a bad job? Can we solve the problem by writing better requirements? When discussing quality I like to use coffee as an example. I like strong, black coffee without any sugar or milk. But what if you do not like coffee? For somebody who doesn’t drink coffee, my cup of coffee has no value at all. But it is still the same cup. How can that be? And how about taste? Why doesn’t coffee from an average office machine taste very good, while it meets the “requirements” I just mentioned? And what if I change my mind? Not so long ago I drank lots of cappuccino; nowadays I don’t like it any more. That is why I like the definition by Jerry Weinberg and the additions made by James Bach and Michael Bolton.

Quality is value to some person (Jerry Weinberg)
Quality is value to some person who matters (James Bach)
Quality is value to some person at some time (Michael Bolton)

I began to believe that there is much more to quality than requirements alone. I also believe that software quality is very subjective and will change over time. To better understand the subjective, human aspects of software quality I started to study social science in general and our thinking and qualitative research in particular.

Qualitative and quantitative research
Quantitative research is about quantities and numbers. The results are based on numeric analysis and statistics. There is nothing wrong with numbers, but we need to understand the story behind them! Like the test report example: what is the story behind these numbers? What did we test? And how good was our testing? That is where the qualitative aspects come in. Qualitative research focuses on differences in quality and is usually used for more exploratory purposes. It is more open to different interpretations. Qualitative research accepts and deals with ambiguity, situation-specific results and partial answers. When doing this, testers may be more prone to bias and personal subjectivity.

To be continued… In part 2 I will discuss critical and systems thinking.

Misconceptions about testing

Shmuel Gershon’s tweet pointed me to an article on the Scrum Alliance website, Agile Methodology Is Not All About Exploratory Testing by Dele Oluwole. I share Sigge Birgisson’s concerns: “the post clearly shows what I mean when having deep concerns about the knowledge of testing in agile community”.

I think Dele doesn’t fully understand what testing is, or at least he uses a different definition than I do. And he certainly doesn’t understand exploratory testing. Rikard Edgren wrote an open letter to explain what testing is. Please read his high-level summary of software testing to understand what testing means to me.

"It is imperative to state in clear terms why Agile testing cannot be all about exploratory testing"

(The text in these gray frames consists of quotes from the article by Dele.)

I wrote a post some weeks ago titled “What makes agile testing different”: agile testing isn’t that much different from “other” testing. Why do some people think agile is so different? To me there is no such thing as agile testing. There is testing in an agile context. And every agile context is probably different. So saying that agile testing cannot be about exploratory testing makes no sense to me.

"It is unequivocally the case that: you cannot estimate your time for exploratory testing, i.e., assign points realistically."

Estimating testing is an interesting topic. This blog post by Morgan Ahlström nicely emphasizes that estimates are guesses. Martin Jansson of the Test Eye writes about “utopic estimations” here. Michael Bolton wrote a lot about estimation here, here and here. He explains that testing is an open-ended task whose duration depends on the quality of the product under test. The decision to ship the product (which includes a decision to stop testing) is a business decision, not a technical one.

The fifth part of the black swan series is especially interesting in this discussion, because there Michael writes about the fallacies surrounding “development” and “testing” phases (described by James Bach). He also explains why estimating the duration of a “testing project” by counting “test cases” or “test steps” is not a smart thing to do.

"You cannot plan for exploratory testing, as you do not have defined expected results."

Why are some people so obsessed with expected results? And why is there a need to have expected results to be able to plan testing? Expected results can be very helpful, but there is much more to quality than doing some tests with an expected result. A definition that I like is by Jerry Weinberg: “Quality is value to some person”. To understand this, you might want to read this blog post explaining that there is more to quality than the absence of bugs. Also have a look at the excellent free ebook “The Little Black Book on Test Design” by Rikard Edgren. On pages 2-6 he uses the “testing potato” to explain that there are more important aspects to the system than the requirements alone.

"There is no defined scope for exploratory testing."

In the exploratory testing I do in my projects there is a “scope”. I do targeted and focused testing using charters and sessions, and I plan my testing using a dashboard that resembles a scrum board. Have a look at the slides of my presentation “Boosting the testing power with exploration“.

Using a coverage outline in a mind map or a simple spreadsheet, I keep track of what I have tested. My charters (a one-to-three-sentence mission for a testing session) help me focus; my wrap-ups and/or debriefings help me determine how good my testing was. My notes and session sheets keep track of what I have done in my testing. As in scrum, I use “stand-up” meetings to plan my testing. In these meetings we discuss progress, risks, priorities and the charters to be executed. This helps us make sure we continuously do the best testing we can.
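For readers who have never seen session-based test management, here is a minimal sketch of what such a session record roughly captures. The field names and the example content are my own invention for illustration, not a standard session-sheet format:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Session:
    """One time-boxed exploratory testing session (illustrative fields)."""
    charter: str                # the 1-3 sentence mission
    tester: str
    day: date
    duration_minutes: int
    areas: list[str] = field(default_factory=list)  # coverage outline items touched
    notes: list[str] = field(default_factory=list)  # observations and questions
    bugs: list[str] = field(default_factory=list)   # issues raised

# A made-up example session:
session = Session(
    charter="Explore subscription renewal with expired credit cards, "
            "looking for misleading error messages.",
    tester="Huib",
    day=date(2012, 11, 5),
    duration_minutes=90,
    areas=["renewal", "payment", "error handling"],
    notes=["Retry button appears before the card check has finished"],
    bugs=["Expired card accepted when renewal crosses midnight"],
)
print(session.charter)
```

Rolled up across sessions, records like this feed the dashboard and the debriefs: they show where the testing effort actually went, without pretending to be a count of “test cases”.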

"The tester, product owner, and Scrum team are not in control."

I am not sure I understand what you are trying to point out here. Are you saying that the team is not in control when doing exploratory testing? My model above shows that you can be in control when doing exploratory testing. Exploratory testing is NOT ad hoc or unstructured when done right. If you do it right, you will have control!

"There is no measure of progress, as testers cannot determine when testing is enough."

How do we determine how much testing is enough? Stopping heuristics might help here. No matter how simple the system we are building, there are simply too many variables to test everything. So testing is about making choices: what to test and what not to test. Even with a huge number of automated checks, we cannot check/test everything (to understand the difference between testing and checking, read this). Testing is not about doing X test cases and being done when they all pass. Testing is providing information for managers to make good decisions. And when do managers have enough information to inform their decisions?

"Still, not many Agile projects will require just two phases, like integration and regression. But it's definitely not only exploratory testing that's needed, as is erroneously believed in some quarters."

I am not sure what you mean by two phases. What do you mean by testing in phases? I like to use the agile testing quadrants when I try to explain how I think of testing in an agile context.

A team is developing software and the programmers do testing before checking the software in and making it available to the team. What do we call that sort of testing? Unit testing? But is that the /only/ testing done by the programmer? I argue that the programmer might do all kinds of testing before checking his code in, even functional and acceptance tests. He will probably create a lot of automated checks and maybe even do some exploratory testing to see if the software meets his expectations: quickly testing some usability and performance aspects. So before the software is checked in, the programmer has covered testing in all four quadrants. This does not mean testing is done. More testing can be done; it depends on the context. What does the product owner want to know? What are the risks involved? How much time do we have left? When discussing a test strategy I try not to speak about phases; I like to discuss what gets covered and why. What information is needed by the team and its stakeholders? When talking about coverage I do not mean code coverage but test coverage: the extent to which we have traveled over some map (model) of the system.

So I don’t think you can make blanket statements like “only exploratory testing is needed” or the opposite.

Dele concludes his article with this statement:

"It is the responsibility of the tester (and the Agile/Scrum team) to ensure that acceptance testing is in line with the expectation of the product owner. If we agree that there is an expectation, we therefore have to design test cases (even if minimal) that will verify the specified acceptance criteria."

Dear Dele, please read about testing and exploratory testing. Some good starting points are the lists of resources on this blog or the one made by Michael Bolton: resources on Exploratory Testing, Metrics, and Other Stuff. I am happy to point you to more good sources of information if you are interested. Just let me know.

DEWT2 was awesome!!

Last Friday (October 5th) a bunch of software testing nerds and one agile girl gathered in Hotel Bergse Bossen in Driebergen to talk about software testing, with the central theme “experience reports: implementing context-driven testing”. Ruud published almost all the photos I took on the DEWT website, so I have to write a report in text here. Thanks dude 😉

After some drinks and dinner we gathered in the conference room called the chalet. I think they call it the chalet because of the humid smell inside, because it certainly didn’t look like a chalet.

But never mind, the room was good enough for a peer conference. Lightning talks were on the program for Friday, and Jean-Paul started with a talk in which he asked the question: “Is the context-driven community elitist?”. Jean-Paul sees a lot of tweets and blogs from people in the context-driven community who seem to look down on the rest of the testing community, sending the message “look how great we are!”. Is this effective behaviour, he asked himself, given that the context-driven community wants to change the way people test?

Joris had a short and clear answer to the first question: “yes!” (the context-driven community is elitist). We had a long discussion that went everywhere but never came close to an answer to the question about effectiveness and how it could be done better. Nevertheless it was a valuable and fun discussion. I will blog about this later this year.

We had a great evening and night with the home-brewed beer “Witte Veer”, Belgian beer brought by Zeger and a bottle of Laphroaig quarter cask brought by Joris. Sitting at the fireplace, a lot of us stayed up until late.

On Saturday morning the last DEWTs and guests arrived. With 23 people in the room we had a great group. Our guests were: Markus Gärtner, Ilari Henrik Aegerter, Tony Bruce, Gerard Drijfhout, Pascal Dufour, Rob van Steenbergen, Derk-Jan de Grood, Joep Schuurkes, Bryan Bakker, Lilian Nijboer and Philip-Jan Bosch. The DEWTs: Adrian Canlon, Ruud Cox, Philip Hoeben, Zeger van Hese, Jeanne Hofmans, Joris Meerts, Ray Oei, Jeroen Rosink, Peter Schrijver, Jean-Paul Varwijk and myself.

Peter was our facilitator, and for the first time we used k-cards at DEWT to facilitate the discussion. I really like this method, but Lilian didn’t. She had an interesting discussion on Twitter with some DEWTs.

Jean-Paul: I like the LAWST format we are using for #DEWT2
Lilian: @Arborosa i don’t. impersonal and inhibits useful discussion
Markus: @llillian @Arborosa What’s the improvement you’re suggesting? 🙂
Lilian: @mgaertne @Arborosa but i am invited and feel i at least have to try it this way
Jean-Paul: @llillian @mgaertne Invite us to an agile event to learn alternatives
Markus: @Arborosa @llillian Funny thing, I would like to introduce more focused-facilitated sessions quite often in agile discussions using k-cards.
Lilian: @mgaertne @Arborosa different ppl prefer different things 🙂

Ilari kicked off with a presentation about the introduction of context-driven testing at eBay. He had two very interesting ideas. The first is the “Monday Morning Suggestion”, a short email to his team with tips, tricks or interesting links. The second is that he supports his team in getting better and learning by paying for their books and conference visits. Great stuff!

Second was Markus Gärtner with a story about coaching a colleague to become a better and more context-driven tester. He talked about his lessons learned while coaching, where teaching gradually turned into coaching. He also gave some insight into the transpection sessions he did using the Socratic method.

The third presentation, by Ray Oei, was about changing people to become more context-driven. An interesting discussion developed about how to get people to change: we focus too much on the people who do not want to change instead of working with the people who do. Pascal asked an interesting question, which I put on Twitter: “If all testers become context-driven overnight, are we happy?” James Bach replied: “Yes, we are happy if all testers become CDT” and “We are happy if all testers take seriously the world around them, and how it works, and dump authoritarianism”. To me it doesn’t really matter: I do hope more people become context-driven, but I prefer good software testing no matter what they call it.

After the discussion Joris said he was attending the wrong peer conference, since he expected a lot of discussion about testing instead of what he calls “people management”. An interesting point, but isn’t testing also a lot about people? Just an hour late, we went for lunch and a walk in a wet forest. After the walk the group photo was taken.

The fourth talk was by Ruud Cox, who talked about testing in a medical environment. He described the way he has been working: he and his colleague tester use exploratory testing to learn and explore, while scripted testing is used to do the checking. Ruud explained that exploratory testing fits very well in an R&D environment. After his talk we had an interesting discussion about auditors and their role in testing.

Around 16:00 it was my turn to do my talk about implementing context-driven testing at Rabobank. I did a short version of the talk I will give at EuroStar in November, about the challenges we had and what we did to change our context. What did we do? What worked and what didn’t? Some interesting questions were asked, and we had a nice discussion about the personal side of becoming more context-driven, in which Joep explained how he became context-driven. The first stage was being interested, and he went looking for a book. The second stage took him over a year, as he struggled with the things he learned at RST and tried to adapt his way of working. The third and last stage is where he doesn’t need anything any more, because he will always find a way to make things work.

After my talk and the discussion the day ended. We did a quick round to share our experiences and thanked the people for attending, organizing and facilitating. Our agile girl still doesn’t know what to think about the facilitation with the k-cards; she obviously had to get used to discussing with them. I am curious what she will think of them once she has had the chance to reflect some more.

Altogether I had a great time. A pity we didn’t play the Kanban game on Friday, but I hope Derk-Jan will give me another chance at the game night at TestNet in November. Thanks all for the awesome weekend!!

DEWT3 is already being planned for April 20 and 21 next year, and James Bach will attend. Let me know if you are interested in joining us in April 2013. The topic hasn’t been decided yet, but if you have great ideas please share them.

Standing left to right: Ray Oei, Jean-Paul Varwijk, Adrian Canlon, Markus Gärtner, Ruud Cox, Joris Meerts, Pascal Dufour, Philip Hoeben, Gerard Drijfhout, Bryan Bakker, Derk Jan de Grood, Joep Schuurkes, Lilian Nijboer, Philip-Jan Bosch, Jeroen Rosink, Jeanne Hofmans
Kneeling left to right: Tony Bruce, Zeger van Hese, Ilari Henrik Aegerter, Huib Schoots, Peter Simon Schrijver

Adaptability vs Context-Driven

Last week Rik Marselis sent me an email pointing me to an article with the title “The adaptability of a perceptive tester”. He added: “Have you read this article? Should appeal to you!”. The article is written by a couple of Dutch (Sogeti) testers who, the introduction tells me, get together once in a while to discuss the profession of testing and quality assurance. This time they discussed some remarkable examples from their experience of perceptive testers (who are aware of the specific situation they’re in) adapting their approach to fit specific needs.

I replied to Rik with the following email:

Hey Rik,

Nice article, I had already seen it. But adaptive or perceptive is not context-driven. I also totally disagree with the conclusion:
“Together we concluded that although TMap NEXT was introduced some six years ago it still has a solid foundation for any testing project since the essence of adaptiveness makes sure that any perceptive tester can collaborate in doing the right things at the right moment in any project and ensure that the proper quality is delivered to solve the problem of the stakeholders.”

TMap NEXT contains a number of dogmas (or rather is based on a number of dogmas) like: testing can be scheduled, the number of test cases is predictable, the content of a test case is predictable, the sequence of the process, etc.
Therefore I think TMap NEXT is not usable in every situation, at least not effectively and efficiently. Being adaptive is good, but I can imagine situations in which TMap NEXT has to be adjusted so rigorously that the result is no longer recognizable as TMap. In addition: TMap NEXT says that ET is a technique: a good example of another dogma. And it shows me that TMap is not about testing but about a process and deliverables … Maybe I should write a blog about this.

Regards,
Huib

Rik replied with this email:

Hi Huib,

We really need to reserve some time to discuss this from different sides because some things that you say I totally disagree with. A conscious tester can handle any situation with TMap. I think whether ET is a technique or approach is really a non-discussion. TMap calls it a technique so you can approach testing in different ways in different situations. And since TMap itself is a method you cannot call ET a method too.

I think Context-driven means you choose your approach depending on the situation.
I think Adaptive means you choose your approach depending on the situation.

Perceptive means conscious, as you are aware of the situation, you can choose an appropriate approach. Well, it is worth discussing.

Regards,
Rik

Okay, so let’s discuss!

Exploratory testing
Let’s start with the ET discussion. What does TMap say about this? ET is a test design technique. And the definition of a test design technique (on page 579 of the book “TMap Next for result driven testing”) is: “a test design technique is a standard method to derive, from a specific test basis, test cases that realise a specific coverage”. Test cases are described on page 583 of the same book: “a test case must therefore contain all of the ingredients to cause that system behaviour and determine whether or not it is correct … Therefore the following elements must be recognisable in EVERY test case regardless of the test design technique used: initial situation, actions, predicted result”.

Let’s connect the dots: ET is called a test design technique. A test design technique is defined as a method used to derive test cases. But ET doesn’t use test cases, not in the way TMap defines them. It can, but most of the time it doesn’t… Mmm, an inconsistency with image, claims, expectations, the product, statutes & standards. I would say: a blinking red zero! Or in other words, there /is/ something wrong here!

What is exploratory testing? Paul Carvalho wrote an excellent blog post simply titled “What is Exploratory Testing?” on this topic, and I suggest people read it if they want to understand what ET is. Elisabeth Hendrickson says: “Exploratory Testing is simultaneously learning about the system while designing and executing tests, using feedback from the last test to inform the next.”

Michael Bolton and the context-driven school like to define it as: “a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to optimize the quality of his or her work by treating test design, test execution, test interpretation, and test-related learning as mutually supportive activities that continue in parallel throughout the project.”. Michael has a collection of interesting resources about ET and it can be found here.

So Rik, your argument “since TMap itself is a method you cannot call ET a method too” is total bullshit! It sounds to me like “there is only one God…”.

Context-driven testing
Don’t get me wrong, being adaptive and perceptive is great, but that doesn’t make testing context-driven. A square is a rectangle, but a rectangle is not a square! Please have a look at the context-driven testing website by Cem Kaner. Also read the text of the keynote “Context-Driven Testing” that Michael Bolton gave last year at CAST 2011. In his story you will see that being adaptive (paragraph 4.3.4) is only a part of being context-driven. I admit, it is not easy to really comprehend context-driven testing.

Do you think it was TMap NEXT that was the common success factor in the stories shared in the article? I doubt it!

Basic training for software testers must change

This blog post was originally written as an column for www.testnewsonline.com (English) and www.testnieuws.nl (Dutch).

On this blog I recently wrote about my meeting with James Bach under the provocative title “What they teach us in TMap class and why it is wrong“. In mid-July I am going to San Jose for the CAST conference, and during the preceding weekend I am participating in Test Coach Camp. The title of this post is the title of a proposal that I submitted for discussion at Test Coach Camp.

In the past I have been a trainer for quite a few ISTQB and TMap courses. The groups attending the training were often a mix of inexperienced and experienced testers. The courses cover topics like: the reasons for testing, what testing is, the (fundamental) processes, the products that testers create, test levels, test techniques, etc. In these three-day courses all exercises are done on paper. Throughout the whole training, not once is actual software tested!? I wonder if courses for developers exist where not a single line of code is written.

In San Jose at Test Coach Camp I want to discuss the approach of these courses with my peers. How can we improve them? I feel these courses are not designed to prepare testers to test well. Let alone to encourage testers to become excellent in their craft.

During my dinner with James, I asked him what he would do to train novices to become good testers. He replied that he would let them test some software from the start. He would certainly not start with lectures on processes, test definitions and vocabulary. During a session the students will (unknowingly) use several techniques, which can be named and further explained when stumbled upon. A beautiful exploratory approach I would like to try myself: learning by doing! But there are many more opportunities to improve testing courses. People learn by making mistakes, by trying new things. Testing is much more about skills than about knowledge. Imagine a carpenter doing basic training: his training would mainly consist of exercises! My neighbour is doing a course to become a furniture maker. She is learning the craft through many hours of practice creating work pieces. Practice is the biggest part of her training!

One of the comments on my blog opposed to the suggestion by James Bach. Peter says: “I have been both a tester and trainer in ISTQB and TMap. Yes we can make testing fun but without a method that testing has no structure and more importantly has no measurable completion. How will those new people on “more practical” course know when they have finished? What tests did they do? What did they forget? What defect types did they target? Which ones did they not look for? What is the risk to the system? My view after 40 years as a developer and tester is that this idea might be fun but is not just WRONG but so dangerously wrong that I am sad that no one else has seen it.”

What do you think?
