Do we need testers?

The first version of this post was published on LinkedIn on February 5, 2020, titled “Do we need testers? No! Do we need skilled testing? Yes!”. I have added my new insights to this blog post.

Last week I gave a talk at the Agile, Testing & DevOps Showcase in Amsterdam. My topic was “Testing in modern times“.

In agile and especially DevOps approaches the motto is: automate everything! Companies like Facebook claim they do not have testers at all. Microsoft only has SDETs (software development engineers in test), and other companies are T-shaping developers to do the testing. The new kid on the block is AI and machine learning, which I hear people claim will definitely replace testing. What is really happening globally?

Do we no longer need testers? I am not sure anymore. Why? Because testers have a bad name. If you cannot automate nowadays, you are no longer valuable, people say. That makes me sad. I think skilled testing is super important! Testing informs decisions about value, quality and risk by learning about the product (and a serious professional tester will also give you insight into the status of the project or team they are working in). Modern technology and tooling are reducing the need for dedicated testers. Not by replacing testers, but by reducing certain types of risks we used to test for. More and more people are getting obsessed with “automate everything”. Sadly, even some testers I meet are obsessed with trying to automate everything they do…

How can we make valuable software for our clients? I believe that personal leadership and collaboration ultimately make the difference. The quality of software is crucial nowadays. Therefore I like to focus on an integrated quality approach. As a tester, mentor and coach, I help people and teams learn effectively and develop continuously, with attention to sustainable adaptability that improves (team) results and ways of working. This enables teams to create more customer value by building quality solutions!

In IT we need insight into risks. Risks and value. For that we need to learn continuously. And I think we need smart people who do skilled testing, determined to find problems that matter. Teams need to create insight into, and an overview of, the risks we take by creating, releasing and using (IT) products. Often people with excellent testing skills excel at this… I hear new job titles such as “Quality Coach” and “Quality Engineer”. Is this the way to go? Well, if it solves the problem of “automation obsession” and the lack of testing skills in teams, I am game.


So far the original post on LinkedIn. Recent events make me doubt whether we will ever get to a point where we do not need dedicated testers.

Dan Ashby reacted on LinkedIn:

"It's a really interesting question. My take: yes, we need testers... because we need skilled testing, and the Dev communities haven't kept up with what skilled testing is and how to do it. One thing too: Facebook use offshore testers (lots of them via a tester contracting company - I know people that work at FB and at the 3rd party testing company). Also MS have done a U-turn too, and they also employ testers again as well as SDETs. They also have test manager roles again too. Maybe lots of the big companies have hit that realisation that they needed good testing (and hence needed the testers who have those skills)? If so, hopefully the smaller companies that tend to imitate the big companies will soon follow suit. 😁"

I like what he says here: “we need testers… because we need skilled testing.” Although there are some developers who have really good testing skills, many are not interested in testing nor in learning to do skilled testing. So I think he has a point there. But dedicated testers alone do not solve the problem. We need better testers and better thinking about testing too!

I see a huge fixation on “test automation” and this is causing us to lose connection with the human, social purposes of software development and testing. The essence of software development is that during development we learn about what we need, what the customer really wants and how the product we are building actually works. This is research and development: learning along the way, and needing sense-making and feedback to get it right.

This “Vision on the Future of Software Testing” has nothing to do with skilled testing. It shows that even an institute like ISTQB understands neither IT in general nor testing. That makes me so sad!

Test approaches like TMap and ISTQB neglect the human aspects of testing. They try to approach testing with mechanistic thinking and do not deal with the complexity and uncertainty that developing software and working with people bring. Leading test experts have told me we cannot make testing too complicated, or else testers will not understand it.

Recently the new TMap book was published, called “Quality for DevOps teams”. Reading how TMap deals with risk analysis and test strategy gives me goosebumps. The risk analysis is just a list of quality attributes with a simple calculation (possible impact x chance of failure); based on the resulting number a “risk class” (high, medium, low) is assigned, and based on the risk class the test intensity is expressed in dots. See for yourself what it looks like here and here. Now let’s look at some quotes from the book that made my eyes roll.
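To make the mechanics concrete, here is a minimal sketch in Python of that kind of calculation. The scales, thresholds and example attributes are my own illustrative assumptions, not the values used in the book.

```python
# Illustrative sketch of a TMap-style risk classification.
# The 1-5 scales, thresholds and example attributes are assumptions,
# not the actual values from "Quality for DevOps teams".

def risk_class(impact: int, chance_of_failure: int) -> str:
    """Derive a risk class from possible impact x chance of failure."""
    score = impact * chance_of_failure
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Test intensity expressed in dots per risk class, as the book does.
TEST_INTENSITY = {"high": "●●●", "medium": "●●", "low": "●"}

for attribute, impact, chance in [("security", 5, 4), ("usability", 2, 3)]:
    cls = risk_class(impact, chance)
    print(f"{attribute}: {cls} risk, test intensity {TEST_INTENSITY[cls]}")
```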

Chapter 47 “Experience-based testing”

“Exploratory testing is an experience-based approach of testing, the most important approach of experience-based testing in our opinion. We distinguish coverage-based and experience-based testing. Others use terms like scripted testing and free-style testing, but we prefer the division in focus on either experience or coverage.”

All testing that you do uses experience, because there is no way you can shut it off. And doing any test will give you some coverage. I guess they just do not know how to talk about coverage in a way that makes sense. Why make this strict distinction into only two categories? There are so many more ways to classify test techniques (note: what TMap calls an “approach” is called a “technique” in BBST). There is no strict distinction between test techniques. See slide 62 of BBST Test Design: “Every test addresses all of these. A specific technique typically addresses 1 to 3 of them, leaving the rest to be designed into the individual test“. Personally I like the way BBST classifies techniques by looking at the driving ideas behind the testing.

“The main downside of applying error guessing is the lack of documentation. Therefore, tests are not reproducible. This may result in a developer not being able to investigate an anomaly, the tester not being able to retest a fix, and the test cannot be added to a regression test set.”

Why do tests need to be reproducible? If a tester is capable of telling or showing the developer what goes wrong, you do not need any documentation. I think with enough product knowledge, it is not that difficult to retest. So we are solving a problem in the wrong way, aren’t we?

Chapter 48 “Is there any value in unstructured testing?”

“Any testing lacking a plan containing what to do and what to expect of a system, or lacking preparation of the test, is unstructured. This is also called ad-hoc testing. Some people see a great advantage in unstructured testing because, as they say: “You can start testing right away.” That is, without “losing” any time on preparation.”

The only structure TMap knows is plans and test cases. Structure is “the arrangement of and relations between the parts or elements of something complex”. Michael Bolton wrote about this here. Testing is about learning, and learning involves mental models. TMap seems to pay no attention at all to how people learn. Deep learning is a confusing process in the beginning, but it gets clearer along the way. By letting it rest (defocus), we give our brains the chance to process what we have learned and integrate it with the models we have in our heads. So good (mental) models are important. These “mechanistic test approaches” forget the whole learning part.

“When you have an IT system that is of good quality, the testers do their unstructured testing and don’t encounter any faults or failures. Can they now say the quality is good and that there are no significant risks? Did they really measure the quality and risks? No, the only thing they can truly say is they did “some” testing and didn’t find any problems. However, they cannot explain which requirements or quality risks have been covered. They are not even sure which parts of the system have been covered.”

This is an interesting move by TMap, called an “appeal to ignorance”: the testers cannot explain, so the approach must be wrong! But is the approach the problem here, or are the skills of the testers the problem?

7 Rules for Positive, Productive Change – a book review

At Agile Testing Days 2019, during the keynote “I can’t do this… alone! A Tale of Two Learning Partners” by Lisi Hocke and Toyer Mamoojee, I got inspired by their story about learning pacts. During the keynote Nicole Errante and I started a learning pact too. In our first call we created a plan. One of the books I added to our pact was “7 Rules for Positive, Productive Change – Micro Shifts, Macro Results” by Esther Derby. In this blog post Nicole and I share a summary and our learnings from the book.

Huib: The book is about change, and from the first page on it resonated with me. Esther opens with: “People hire me because they want different outcomes and different relationships in their workplaces. My work almost always involves change at some level…” and that is exactly what I do and have been doing for many years now. While reading and discussing the book with Nicole I recognized so many things from my own experience. The introduction talks about change as a social process! Work, and life in general, is heavily influenced by social processes, and everything we do has major social aspects to it. That aspect is unfortunately often underexposed, especially when people want to be in control. Best practices do not work in the complex situations we often find ourselves in in IT; we know that from the work of Dave Snowden and the Cynefin framework. This is why I got so inspired by context-driven testing years ago. Finally I had found people who took the human aspects of testing and IT seriously: not trying to approach (testing) problems with mechanistic thinking, not seeing IT as technology-centered, not striving for certainty but being okay with uncertainty, and a community where human interaction and feelings played a prominent role in solving problems. This book takes the same approach with change. Change is a social thing. Esther’s book hands you interventions she calls rules; these rules are heuristics or guidelines that help change happen.

Nicole: Change in life, whether work or personal, is inevitable. The philosopher John Locke said “Things of this world are in so constant a flux, that nothing remains long in the same state.” But we, as humans, tend to be resistant to change; we like stability, routines, and the known. However, maybe we would be less resistant to change if it were implemented in a way that considered this human side of it. That is one of the things I love about Esther’s book: it constantly keeps people in the focus of the change process. The success of the change is not just about the process but about the people in it. I have been lucky enough to work at the same company for the past 13 years, but that means my experience is a bit narrower than Huib’s when it comes to experiencing change in the workplace. However, a few years ago our company wanted to move from a very waterfall-based software development process to one that was more agile. I think most people at my company would agree that the change was more painful and took more time than anyone expected. It is through the lens of that change that I read this book: what could we have done to make the change process better? And what lessons can we learn for future changes?

Summary: The introduction ends with Lessons Learned from a project Esther did. Many of them I recognized, and they made me want to read the book even more. The lessons are:

  • Skill and will aren’t always the problem
  • Training is useful and necessary, but it’s not sufficient
  • Standardizing nonstandard work may make matters worse
  • Long feedback loops delay learning and improvement
  • Observed patterns result from many underlying influences

The first chapter deals with change by attraction. If you try to force change upon people, they will react with resistance. Mandating change makes people feel a loss of control and they have no personal buy-in to the change. Esther says “At best, coercion, rewards, and positional authority result in compliance, not engagement…” You don’t want people just going through the motions, you want them actively involved and eager to give things a try. In order for change to happen, things need to be learned and other things need to be unlearned. There is no best practice that works in every situation in knowledge work, quite the opposite: we need to experiment to find out what works and what doesn’t. It is a matter of responding to people, adapting to their needs and attracting and engaging people instead of pushing and persuading. 

  1. Strive for Congruence

Congruence is an alignment of a person’s interior and exterior worlds, balancing the needs and capabilities of self, others, and context. Ignoring other people’s needs and capabilities is probably the most common cause of incongruence. When this incongruence happens, you are in a stress state. When people are stressed, it is hard to think, learn, or engage. You cannot have successful change when learning and engagement are suppressed. Congruence is essential for change by attraction. Congruence contributes to safety, which is essential for people to solve problems, to learn and to speak up about mistakes and things they don’t know. Being empathetic will help you understand where someone else is coming from and what they have to lose by changing, thus avoiding ignoring the context of others in the process of change. Empathy helps people feel safe and understood. Empathy and congruence go hand-in-hand and are essential for making long term changes. At the end of chapter 2, Esther lists a couple of questions you can use to be more congruent.

  2. Honor the past, present and people

When implementing change, it is important to show respect to existing belief systems, the experiences and knowledge people have, and the effort people have made to keep things going with the system currently in place. Build trust and relationships before coaching others. People seldom think that they themselves are wrong. They also may want to improve but most do not want to hear from an outsider that they are doing it wrong. So we have to choose our language with care. Remember that while you have ideas of how things can be better, the people you want to change know things that you don’t know that will be important in this process. By acknowledging and exploring the negative space of change, we prevent unpleasant surprises along the way. Again Esther has a great list of questions to discover what lives in this negative space. People don’t resist change, they respond to its implementation. You can learn from reasons behind the responses to help adapt the change. Use Transformational Communication (inquiry, dialogue, conversation, understanding) instead of Convincing and Persuading (advocacy, debate, argument, defending) to gain openness, trust, and shared understanding. Finally, don’t take for granted what works by only focussing on the problem. Build upon what already works.

  3. Assess what is

Every system is perfectly designed to get the result that it does (W. Edwards Deming). Change starts from where you are now, and paying attention to the context increases the chance that problems will get solved. How did the existing conditions in the organization produce the current patterns and results? This chapter introduces three techniques intended to look beyond symptoms and find influencing factors: 

  1. Containers, Differences, and Exchanges (CDE): this describes the three conditions that determine the speed, direction, and path of a system as it self-organizes. It helps discover both the formal structures and the invisible structures that factor into the behavior of the organizational system.
  2. SEEM Model: Steering, Enabling-and-Enhancing, Making. This model shows the different perspectives of people in the organization on the basic set of concerns each company has: how to achieve clarity so people know what to do, what conditions are needed for people to do good work, and what productive constraints will streamline decision making and guide actions and interactions.
  3. Circle of Influences: to see how factors influence one another and where to find virtuous and vicious loops. This method helps to find the many influencing factors that result in problems, instead of focussing too much on the problems themselves. Find factors that influence several others as possible places to run experiments.

  4. Attend to networks

An organisation has a formal and an informal side. Informal social networks within the organisation have great influence and cannot be ignored, although they are not visible on the org chart. The most important social networks for change are those that people turn to and trust for advice. That is why Esther suggests mapping the networks within the organisation, to be able to use them productively and not break them inadvertently. You can also enhance existing networks by reducing the number of hops between people who are not directly connected, which reduces bottlenecks. Networks also carry rumors. Esther suggests capturing them on a rumor control board to find out what people worry about. Finally, if people do not want to change something, just do it with the people who do (change by attraction). The resisters will probably follow when they see other people doing it. This is the fear of missing out in action.

  5. Experiment

Solving big complex problems is not easy because many factors are involved and they cannot be solved independently. Trying to solve them in big changes will cause big disruptions, and that is risky. Small experiments foster learning and will engage the people you work with. Experiments are FINE (Fast feedback, Inexpensive, require No permission and are Easy). Landing zones make big changes small by defining intermediate states to which the organisation can evolve. After reaching a landing zone, you can reassess whether your bigger goal is still relevant and course-correct as needed. Safe-to-fail probes are good examples of experiments. Find something that you can try without asking for budget or permission. Don’t worry about failing: keeping the experiment small means any risk should be contained and you can learn from what went wrong. Esther lists a great set of questions to assist in shaping the experiments and another set to test assumptions. Reflecting on what works in the experiments involves double-loop learning.

  6. Guide and allow for variation

Knowledge work and complex organizations need to allow for variation and emergence to perform effectively. Unnecessary standardization will lead to inefficiency and suboptimal behavior. Coherence is more desirable than consistency and that is why Esther suggests using boundary stories: to help people focus on gaining a similar outcome. Boundary stories give people a guideline on reaching the outcome you want (and avoiding those you don’t) while allowing people to mindfully decide how to get there based on their unique situation. Also change will be evolutionary: small evolutionary steps towards the end goal. Landing zones are useful, so is a horizon map: a thinking tool where you start with the desired outcome and work from right to left filling in conditions and constraints needed for the change to take place. Since change is social, it requires changing habits of thought and cognitive frameworks. Change will happen if we manage to influence metaphors and narratives within the organisation. Explain the outcomes you want and why, then add some boundary stories as a guide to how to get there. This will allow people to refine the change based on their knowledge and experience, thus owning the change rather than being forced into it.

  7. Use your self

People bring their personalities, characteristics, belief systems, and life experiences to work and this influences what they do and how they do it. This includes you, the person involved in bringing the change. Change is a social process which needs personal connection with the people involved. This works best using empathy, curiosity, patience, and observation. These skills can and must be practiced. A nice list of questions to help prompt empathy is given. Esther also supplies nice overviews with types of questions and how to focus questions to be curious and patient. These questions help you avoid “why” questions, which often make people defensive. The question, and how you ask it, determines the answer you get. Making sense of your observations requires bias awareness and testing your observations. Be generous when trying to interpret the motivation behind what people do and the results they achieve.

Learnings Huib

It was fun to work with Nicole and read the book together, talking about two chapters each time we met. Sharing our stories and experiences with change and discussing situations at work helped us understand what the book is about and gave us ideas about where we could try the things we were reading. The parts on empathy, curiosity and patience really resonated with me. As I described in my blog post “Mastering my mindset”, I have become more and more aware that growth, learning, improving and change need empathy. I am working on that. This book gave me more tools and inspiration to get there. I have already used several questions from the lists in the book and I enjoy using them. The landing zones are a great way to create small steps of change. I am now working on a horizon map with a scrum master to get insight into what is going on in his team. I cannot wait to work with the team on the map.

Learnings Nicole

I agree with Huib that the method we used (reading the book a couple of chapters at a time and then discussing them) really was a fun way to read a book. It really helped reinforce the learnings of each chapter as well. Being able to share what we thought and our experiences not only made things clearer but also gave insight into where to apply it in our work. When our organization went through the big agile change, the main reason I thought it went so roughly was that people didn’t understand the reason behind the change. While that is true, Esther’s book also helped me realize there were other factors involved as well. The change we bit off was too big: we should have started where we were and done smaller experiments to learn and adjust along the way. People’s knowledge, experience, and feelings should also have been considered in order to get them actively involved in the change. I look forward to being able to apply the lessons in this book to future changes in our organization. I also want to incorporate some of the questions from the book to help work through the day-to-day challenges that we face in our team.

Finally

The book is easy to read and has some great stories to illustrate the rules and lessons. It has many valuable, ready-to-use lists of questions and methods that will help in experimenting with change. Every chapter ends with a great set of takeaways that summarizes what you just read in different wording. We absolutely recommend this book to anybody dealing with change in their work.

Mastering my mindset

Dutch version is here

People perceive me as extremely confident. One of my personal mentors once said: “you are fearless”. That is only the outside. Inside I am soft and insecure like most people are. I have fears, quite a few to be honest. My behavior and my inside weren’t congruent.

In my life, I like challenges and when I want something, I go for it. In those cases my determination and will to learn or to achieve something are just bigger than the fear of failing. Personal leadership is important in my life. I am trying to become a better version of myself every day. I want to be an even better person. My passion and energy, in combination with being a fighter, brought me where I am today. But it came with a lot of “collateral damage”. Since my youth I have been struggling with depression, low self-esteem and restlessness. Over time I got much better at dealing with them. But once in a while I still suffer from burn-out and depression symptoms and mental health issues. My pitfalls are: fighting all the time, pushing people too much, going into debates to win, not being able to switch myself off. In September 2019 I started a new episode in my life: I started working with Bureau Idee in Haarlem after an anxiety attack. Working with Peter Spelbos was like coming home. He helped me reflect and made me aware of aspects of myself that I had known about for a long time, without ever realizing the impact of my thoughts and inner beliefs on my behavior and mental health.

On one hand, I am someone who is always ready to help others, a good and dear friend who is very good at his job. On the other hand I am afraid of many things because of my fear of failure in combination with perfectionism. Having low self-esteem drains energy because you care about what others think of you. I see patterns where I put on a mask and hide my real self. Another personal mentor told me that “I am good at breaking down the walls of others and with those stones making my own wall higher and stronger”. I find it difficult to make myself vulnerable. I also see a pattern of running away from problems when they get too difficult. My head is full of negative thoughts, all the time.

But I am dealing with them. Together with my therapist I gained deeper knowledge about myself and my inner beliefs, and he helped me reflect on what to work on. He helped me take matters into my own hands. By writing this, I feel strong and confident that I will overcome my issues and become a better version of myself. I am already reaping the fruits of my efforts.

As a coach, I teach people that vulnerability and self-reflection are important. By sharing this, I want to be a role model and show that having mental issues is okay as long as you work on them. Being vulnerable is nothing to be ashamed of; it is a super power. I believe that personal leadership and collaboration ultimately make the difference, in work and in your personal life. Mindset is the most important thing in becoming the best version of yourself. Vitality is key in that! To do the best work and to live the best life, physical and mental health are important, and they are directly related! I learned that I need to become mentally strong by being less aggressive and more assertive (have a look at this awesome video). I started guarding my limits and boundaries, reasoning from my own position, being less judgemental, becoming a better listener, being more humble and thankful, meditating and practicing tolerance. I found a lot of inspiration in the Dutch book “Master your mindset” by Michael Pilarczyk.

I am mastering my mindset. I am dealing with it! It feels good and it makes me feel strong.

Watch your thoughts, they become your words
Watch your words, they become your actions
Watch your actions, they become your habits
Watch your habits, they become your character
Watch your character, it becomes your destiny
― Lao Tzu


The art of reflection

“Once is coincidence
Twice is striking
Three times is pattern”

On October 19-21, 2018, DEWT held its 8th annual peer conference with “Developing expertise in software testing” as its theme. I had the honour of opening the conference with my experience report “Mentoring and coaching to develop skills”. In the open season after my experience report, and during the rest of the peer conference, we talked about reflection on several occasions. I think one of the most important skills in learning is reflection.

My vision of learning
Learning is the process of acquiring new, or adapting existing, knowledge, behaviours, skills, values or preferences. Learning is much more than knowing: putting what you have learned into practice and gaining experience with it is important to truly internalise knowledge and to gain skill. Learning must be linked to experiences from daily practice. By reflecting on knowledge and skills, so-called learning loops are created.

Learning is an ongoing process: the world is changing fast and to be excellent in your role requires many skills. So it is important to keep up! To learn effectively, learning should come from yourself, with intrinsic motivation and personal responsibility.

Learning requires a positive and open learning environment in which you can safely come out of your comfort zone to try new things. A safe environment gives you the confidence to make mistakes and to experiment with new knowledge and skills. It also demands a certain degree of challenge. How big that challenge is differs for everyone. You do not always have to step radically out of your comfort zone: you learn best right outside of it, in your stretch zone.

Effective learning requires focused learning with clear learning objectives and preferably evaluation criteria. It should be clear what you want to learn and how you can measure that. Where do you want to grow? And how do you know that you have made a step? By setting clear learning objectives, you focus on learning.

Levels of learning
There are different levels of learning: (ref: Joris van de Griendt)

  1. Single-loop learning (improvement, behavioural improvement) is about the visible and concrete behavioural level (what): you do something and that has a certain effect. This effect may be desirable or not, and based on that assessment you can adjust your behaviour or not.
  2. Double-loop learning (reframing, behavioural renewal) comes into play if you want to change your behaviour permanently: researching which patterns the behaviour involves (how). Which patterns and mechanisms are behind the behaviour? Which helpful or obstructing thoughts are involved? Insight into this can provide a more well-founded choice to change behaviour.
  3. Triple-loop learning (transformation, behavioural development) goes even deeper. You involve your values, your purpose, the why question: why do I really want to change this behaviour? Which important values in me support this and what stops me?

What is reflection?
Reflection is a process of exploring and examining ourselves, our perspectives, attributes, experiences and actions / interactions. It helps us gain insight and see how to move forward. (ref: University of Edinburgh).

When we reflect, we deeply consider something that we might not otherwise have given much thought to. This helps us to learn. Reflection is concerned with consciously looking at and thinking about our experiences, actions, feelings, and responses, and then interpreting or analysing them in order to learn from them. (ref: The Open University).

Reflection is looking back at your behaviour in a certain situation. You reflect on that situation by asking yourself meaningful questions that make you think about it. The difference between just thinking and reflection is the intention to learn. There is also a difference between evaluation and reflection. Evaluation means making a judgement about something you did, for example: did you reach your goal? Did you do the right thing? Reflecting, on the other hand, means creating a safe space to investigate behaviour without judgement, with the intention to grow.

Like learning, there are different levels of reflecting:

  1. Single-loop reflection focuses on behaviour and actions and is very close to evaluation.
  2. Double-loop reflection means trying to get hold of underlying convictions that interfere with the adjustment of interaction and behaviour.
  3. Triple-loop reflection is about motives and matters that touch on one’s own identity. There is often a relationship with issues at a higher level, that of the organisation or even of an entire system.

Iceberg

(image credit: Dutch Vision Institute)

Above the waterline behaviour is perceptible and statements are audible. But opinions, beliefs, feelings and emotions are not visible; they are below the waterline. These invisible elements, however, are often the motives for visible behaviour. An important part of the reflection will therefore consist of researching these deeper layers of the Iceberg.

By not addressing all layers in coaching or competence management, you allow the coachee to act incongruously (saying A and doing B, or venting a belief that contradicts their own motivation). Just like external fragmentation (not being integrated with the context), internal fragmentation (a mismatch between what one does, thinks and wants) leads to a real risk of energy loss.

Korthagen

(ref: How do I use the Korthagen reflection circle diagram?)

Korthagen’s reflection cycle is a tool, or a strategy to follow, for learners to gain insight into their performance and to improve it. By applying this cycle step by step, one learns to reflect systematically on a skill to be learned.

Phase 1: Describe the experience/situation you wish to reflect upon. What was the actual situation? You can do this by using the STARR(S) method: Situation-Task-Action-Result-Reflection-Strengthen (see appendix).

  • What did I have to do in this situation?
  • What action did I actually take?
  • What was the outcome of this action?

Phase 2: Looking back: What exactly happened?

  • What did I see?
  • What did I do?
  • What did I think?
  • What did I feel?

Phase 3: Awareness of essential aspects

  • What does that mean to me now?
  • What is the problem (or the positive discovery)?
  • What has all that caused? What does it involve?

Phase 4: Alternative methods

  • What alternative methods do I see (solutions or ways of making use of what I have discovered)?
  • What are their advantages and disadvantages?
  • What will I remember for next time?

Phase 5: Trial/action

  • What do I want to achieve?
  • What should I watch out for?
  • What do I want to try out?

Danger of thinking too much

Reflecting is an active process that demands skill. It often happens that professionals think they are reflecting, while they are actually worrying.

This overview points out the important differences:

Worry

  • Involved with oneself, looking only from one’s own perspective, alone
  • Focused on mistakes
  • Focused on judging and condemning
  • Global approach
  • Mono causal approach

Reflection

  • Involved in the problem, also looking at a perspective outside of oneself, in contact with others
  • Focused on solutions
  • Focused on understanding
  • Analytical approach
  • Multi causal approach

Tips for reflecting

You can reflect on every situation and every problem that concerns you. You can learn a lot from that, but the pitfall is that you will be overwhelmed by the information and will keep looking back endlessly. Another danger is that you may feel that you are actually doing a good job – the work is going well, there is no criticism from colleagues or your supervisor – and so you see no reason to reflect. Yet it can also be very instructive to reflect on yourself and your way of acting.

The following tips can help you reflect:

  • Choose a concrete situation and look back on that specific moment and your course of action
  • Reflect regularly and ‘schedule’ at least once a week a reflection moment, preferably at a fixed time
  • Ask yourself open questions
  • Postpone judgments about yourself; first see what really happened before judging yourself
  • Reflect in a methodical way, for example by going through a list of questions or using a reflection model
  • Reflect not only on problem situations but also on success experiences
  • Use feedback from others to reflect from that point of view
  • Read more about reflection to get inspired. This is a nice blogpost about reflection: How Self-Reflection Gives You a Happier and More Successful Life. For more inspiration read this: Tools to help you with self-reflection

Together you learn even more

It is already very instructive to consider your own functioning in this way, but it can be even more profitable if you do this together with others. Do not judge here either, neither others nor yourself: that way you feel free to tell everything. For example, start doing intervision with peers or colleagues. More about intervision here: Intervision: what is it about?

Start a journal

Writing in a journal regularly (preferably daily at a fixed time) helps you analyse your professional and personal growth. Journaling can give you a different perspective on things. Writing in your journal is a very useful tool to help you understand yourself and the world around you. Write down activities, thoughts, ideas, reasons, actions, techniques and reflections on specific topics or skills you want to improve. By writing in a journal you get an overview of your thoughts in which you can identify patterns. Journaling helps you to get thoughts and ideas out of your head, but more importantly it enables making sense of things that happened. After doing something related to your learning goal, take notes on your observations and summarise facts and experiences. Also write down how it made you feel.

More about reflective journaling in this article: Reflective Journals and Learning Logs.

 

Here are two helpful checklists to help you reflect:

Considerations when testing a software application in a context-driven way

Written by: Joris Meerts (main author), Huib Schoots and Ruud Cox, all working at Improve Quality Services in the Netherlands.

Introduction
One of the cornerstones of context-driven testing (Lessons Learned in Software Testing by Kaner, Bach, & Pettichord, 2001) is that the way of working when testing software is determined by the situation in which the tester finds himself. A good approach is not driven by a prescribed process or by a collection of steps that one habitually executes. Instead, it arises from the use of skills that ensure that testing matches the circumstances of the software project. Within the framework of context-driven testing, a wide range of skills is discussed, including critical thinking, modeling and visualization, note-taking and applying heuristics. These skills are not easy to learn. One way to sharpen them is to do exercises and discuss the results. This was the purpose of a meeting of four testers at Improve Quality Services. In the following report we elaborate on the exercises that were done during that meeting and on the results.

Purpose of this report
As we pointed out in the introduction, it is not easy to learn the skills associated with context-driven testing. Practitioners of context-driven testing regularly refer to professional literature that describes these skills. But applying these skills is a process of trial and error. That is why it is important to simulate real life situations and learn from exercises we do. That was the purpose of the meeting. In this report we share the steps we took and the results of those steps, for example in the form of notes, sketches or models. We also share our experiences to make it easier to apply the skills in practice.

The meeting
The meeting was held on February 14, 2017 at the office of Improve Quality Services in Eindhoven. The participating testers, Jos Duisings, Ruud Cox, Joris Meerts and Huib Schoots, are all employees of Improve Quality Services.

Background information
Date: 14 February 2017
Location: Improve Quality Services Eindhoven
Start: 9 am
End: 5 pm
Team A: Jos Duisings, Ruud Cox
Team B: Joris Meerts, Huib Schoots

The assignment

The assignment of the meeting is to select and execute tests on a software application. It is carried out by two teams of two testers each. This makes it possible to take different approaches and to provide feedback from one team to the other.

Division in sessions
The meeting is divided into sessions in advance. The sessions each have their own goal and their own evaluation.

  • Welcome & introduction
  • Creating a coverage outline (per team)
  • Debriefing the coverage outline
  • Drafting a test strategy (as a group)
  • Debriefing the test strategy
  • Selecting a few charters
  • Performing a test session (charter) per team.
  • Debriefing the test session
  • Retrospective

Introduction of the application to be tested
The application to be tested has been selected prior to the meeting. We choose the application Task Coach (Task Coach, 2018), an open source to-do list manager. This software is publicly available and runs on multiple platforms (Windows and Mac OS). The application is relatively simple but still offers sufficient complexity to be able to test thoroughly. According to the description on the website, Task Coach is “a simple open source to-do manager to keep track of personal tasks and to-do lists. It is designed for composite tasks, and also offers effort tracking, categories, notes and more.” The participants have not worked with Task Coach before and are therefore not familiar with this specific piece of software.

Creating a coverage outline
Because Task Coach is new for all participants, the software will have to be explored. Only then can we say more about what can be tested in the given time. Such an exploration of the product can be done in many different ways. During the meeting we chose to make a ‘product coverage outline’, inspired by the Heuristic Test Strategy Model of James Bach (Bach, 1996). The product coverage outline provides an overarching view of the product in which the functionality of the software is divided into seven different categories. The categories are described in the mnemonic SFDIPOT. The software tester is reminded by this heuristic that when mapping a product he can look at Structure, Function, Data, Interfaces, Platform, Operations and Time. By looking at the application from these perspectives, a relatively complete sketch is created of what the software product is and what it can do. However, the mnemonic does not only serve for exploration. The ‘map’ of the application that is created in this way can also be used to visualize the coverage of the tests and to select the tests to be carried out.
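As an illustration of how such an outline can be captured and reused, here is a small sketch in Python: a structure seeded with the SFDIPOT categories. The Task Coach entries are loose examples drawn from this report, not a complete or authoritative outline.

```python
# Sketch of a product coverage outline seeded with the SFDIPOT categories.
# The Task Coach entries are loose examples taken from this report,
# not a complete outline of the product.

coverage_outline = {
    "Structure": ["installation", "task files", "template files (.tsktmpl)"],
    "Function": ["create/edit/delete tasks", "subtasks", "templates", "reminders"],
    "Data": ["dates and times", "task statuses", "notes"],
    "Interfaces": ["import and export of tasks", "menus and toolbar"],
    "Platform": ["Windows", "Mac OS"],
    "Operations": ["primary process: maintaining a to-do list", "multiple users"],
    "Time": ["planned start/end dates", "time zones", "summer and winter time"],
}

def print_outline(outline: dict[str, list[str]]) -> None:
    """Print the outline as a simple indented tree."""
    for category, items in outline.items():
        print(category)
        for item in items:
            print(f"  - {item}")

print_outline(coverage_outline)
```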

The exercise
We decide that both teams will be given half an hour to create a product coverage outline and that each team is free to decide in which format the outline is presented. The choice of a half-hour time limit is mainly motivated by the fact that the software must also be tested before the end of the day. There is no room to spend much more time on making the product outline.

It turns out that half an hour is not enough time for mapping an application that, despite the fact that it claims to be simple, still contains a lot of functions. Team B decides to create a mind map in which the seven categories are elaborated. This mind map is not finished after half an hour. Especially the branch ‘Function’, in which the functions of the application are described, is very large and has a deeply nested structure. To build this structure, team B explores the application by clicking through it and captures the functionality (in text or in image) in the mind map. Due to time constraints, team B presents a product sketch that is not finished. This triggers a discussion about what we expect the mind map to look like given the available time. The expectations, such as completeness, can be related to the purpose of the mind map. Through a discussion it becomes clear that no clear goal has been defined for drawing up the product sketch and that the result of the assignment is difficult to assess.

Preference for the mind map
A mind map is a tool that is used by software testers on a regular basis to create a product outline. The tree structure of the mind map lends itself to classifying (grouping) properties. It is easy to start with an empty mind map, enter the categories from SFDIPOT and expand these. Moreover, a navigable structure is created in this way. The mind map forms a kind of geographical map in which you can choose to zoom in and zoom out to determine the location of something in relation to the whole. Software essentially consists of zeros and ones and the software product is an abstract concept. By making the ‘map’ the abstract software gets a concrete shape. A mind map can also be used as an instrument for reporting on the results and the progress of the tests. So there are a number of good reasons to work with a mind map from the beginning.

Team A starts the session by creating a mind map but after a short time it switches to a different form. When studying Task Coach, the team finds out that there is a help file in the application. After a short study, the help file appears to describe a large part of the application. The categories from SFDIPOT all come back in the file to a sufficient extent and so Team A chooses to use this file, converted to Word format, as a product outline. By marking the categories in the document with different colors, structure is added to the document. In addition, the table of contents provides a global overview of the functionality in the application; the details are mentioned in the paragraphs. In this way, team A delivers a product sketch that is relatively complete within the stipulated time.

Drafting the test strategy
The result of the first session is a product outline. With this product sketch and the knowledge gained during the exploration of the software it is easier to discuss a test strategy. Because exhaustive testing of an application is not feasible for the majority of software applications and because exhaustive testing in many cases does not yield better information, it is desirable to make choices with regard to the test work to be performed. We find these choices in the test strategy.

By making the product outline we have gained insight into a number of aspects of the application. We take these aspects into account and we hope to indicate per category whether it needs to be tested and, if so, at which depth. The most decisive factor in that decision is the risk the company runs when the software is put to use. Risk is a multifaceted concept and can only be determined if the software product is viewed from different angles. In any case, the perspective of the tester alone is not sufficient to get a good picture of the risks of a software product. During the second session it becomes clear that a representative is missing who can reason about risk from the perspective of a user or of the organization. This point is also discussed in the debriefing of the second session and we conclude that for that reason the risk assessment is incomplete.

Assessing risk
In the Heuristic Test Strategy Model three dimensions are discussed that influence a test strategy. These dimensions are the project, the product and the requirements that are set for that product; the quality characteristics. Risks play a role in each of these dimensions. In the exercise, we decide not to look at the project risks. As far as product risks are concerned, we make our own assessment with regard to the categories of the product. We consider categories that are used the most, are the most prone to errors and categories where a possible error has the most impact. Because there are no users involved in the exercise, we try to place ourselves in the role of the users. This way the following product categories are discussed.

  • The primary process from the Operations category. This will tell us which functionality is used the most.
  • Using the application by multiple users from the Operations category. We recognize risks associated with synchronization: tasks are not updated properly.
  • The importing and exporting of tasks from the Interfaces category.
  • Dealing with reminders from the Functionality category. Reminders are crucial for not missing appointments.
  • Dealing with date and time from the Time category. Date and time play an important role in planning tasks, they touch the core of the application.

After we have identified the categories of the product, we try to assess which quality aspects of the product are the most in demand. Here too, it has been decided to use our own assessments as a guideline. The following quality aspects are named:

  • Functionality,
  • Usability and
  • Charisma

Coverage
We briefly discuss the coverage of the tests that we want to carry out. Since the tasks that are maintained in Task Coach can have different statuses, the tester can, from the perspective of state transitions, visualize the coverage level of the tests carried out. The state transitions are covered in the different paths through the application that a user travels. These paths are helpful when mapping the coverage. Furthermore, coverage can be looked at from the perspective of the test data used.
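As a sketch of what that could look like, the snippet below tracks which state transitions of a task have been exercised during testing. The statuses and allowed transitions are assumptions made for illustration; they are not taken from Task Coach.

```python
# Sketch of state-transition coverage for a task's life cycle.
# Statuses and allowed transitions are illustrative assumptions,
# not taken from the Task Coach documentation.

ALLOWED_TRANSITIONS = {
    ("planned", "active"),
    ("active", "completed"),
    ("completed", "active"),   # reopening a completed task
    ("planned", "deleted"),
    ("active", "deleted"),
}

covered: set[tuple[str, str]] = set()

def record(old_status: str, new_status: str) -> None:
    """Record a transition exercised during a test session."""
    covered.add((old_status, new_status))

# Transitions exercised in an imaginary session:
record("planned", "active")
record("active", "completed")

print(f"Transition coverage: {len(covered)}/{len(ALLOWED_TRANSITIONS)}")
print("Not yet covered:", sorted(ALLOWED_TRANSITIONS - covered))
```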

Drafting the charters
We decide to make two charters (Session Based Test Management) and to execute them. We prioritize the product categories mentioned in the risk analysis based on our own insights. From this we conclude that ‘the primary process’ and ‘dealing with date and time’ are the two most important product categories. We translate these two categories into charters. In one charter we will look at the primary process; in the other, the handling of date and time will be investigated. Each team selects a charter. After the charter has been executed, each team reports the work done to the other team in a debrief session. During this short evaluation, the tester tells his story and answers any questions. The evaluation has several goals, such as assessing whether the mission was successful, translating new insights into follow-up sessions, looking at notes and descriptions of findings, and coaching the tester. Through the debrief, the understanding of the application under test grows.
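For readers unfamiliar with session-based test management, here is a sketch of how such a charter could be recorded. The field names follow a common SBTM convention and the session length is an assumption; the report does not document the exact template we used.

```python
# Sketch of an SBTM charter record. Field names follow a common SBTM
# convention; the 90-minute session length is an assumption.

from dataclasses import dataclass, field

@dataclass
class Charter:
    mission: str
    areas: list[str]
    testers: list[str]
    duration_minutes: int = 90
    notes: list[str] = field(default_factory=list)
    bugs: list[str] = field(default_factory=list)
    issues: list[str] = field(default_factory=list)

# The primary-process mission quoted later in this report, as an example:
primary_process = Charter(
    mission="Explore the basic flow of tasks using CRUD "
            "to discover/test the life cycle of a task",
    areas=["Operations: primary process"],
    testers=["Team A"],  # illustrative; the report does not say which team took it
)
```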

The charter for testing the primary process
To formulate a starting point for this charter, we look at the risks that can be associated with the execution of the primary process. There is a risk that no task can be created. Or it could be that the task is created but errors occur when saving or modifying a task or changing the status of a task. These considerations lead to the following mission that serves as the starting point for the charter: “Explore the basic flow of tasks using CRUD to discover/test the life cycle of a task”.

The charter for testing the handling of date and time
Prior to drafting the charter for testing the handling of date and time, the following test ideas are mentioned:

  • Start and end times are the same
  • The end time is before the start time
  • The date of a task is far in the past or far in the future
  • System time
  • Work in different time zones
  • Dealing with winter time and summer time
  • Different date formats

With these test ideas in mind, the following charter is drawn up: “Explore tasks using date/time to discover date and time related bugs”.
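As an illustration, the test ideas above can also be captured as data-driven checks. The sketch below uses pytest and a hypothetical create_task stand-in; Task Coach offers no such API here, so this only shows the shape of the ideas, not an actual automation of them.

```python
# Sketch: the date/time test ideas expressed as data-driven checks.
# `create_task` is a hypothetical stand-in for driving the application;
# the dates below are illustrative examples of the listed ideas.

from datetime import datetime
import pytest

def create_task(start: datetime, due: datetime) -> dict:
    """Stand-in for creating a task in the application under test."""
    return {"start": start, "due": due}  # a real driver would talk to the product

TEST_IDEAS = [
    ("start and end times are the same", datetime(2017, 2, 14, 9, 0), datetime(2017, 2, 14, 9, 0)),
    ("end time before start time",       datetime(2017, 2, 14, 17, 0), datetime(2017, 2, 14, 9, 0)),
    ("date far in the past",             datetime(1900, 1, 1, 9, 0),  datetime(1900, 1, 1, 10, 0)),
    ("date far in the future",           datetime(3000, 1, 1, 9, 0),  datetime(3000, 1, 1, 10, 0)),
    ("around the summer time switch",    datetime(2017, 3, 26, 1, 30), datetime(2017, 3, 26, 3, 30)),
]

@pytest.mark.parametrize("label, start, due", TEST_IDEAS)
def test_date_time_handling(label, start, due):
    task = create_task(start, due)
    # A real check would assert on the product's observed behaviour
    # (validation messages, colour changes, computed duration, ...).
    assert task["start"] == start and task["due"] == due
```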

Execution of the charters

Testing the primary process
The charter for testing the primary process (basic flow) begins with the creation of a task. Several tasks are created using the button on the taskbar. Subsequently, several underlying tasks are created for a single task, up to 8 levels deep. Under one of the underlying tasks, a new tree structure of underlying tasks is created. During the creation of the task structure, questions arise about the maximum depth of the task structure and about the sorting of the underlying tasks. In addition, it is found that it is possible to delete underlying tasks and then to undo these deletions. It is proposed to investigate this functionality further in a separate charter, since it is suspected that it does not always go well. The resulting tree structure is removed by means of the ‘Delete’ button on the taskbar. After this, all tasks have disappeared.

On the taskbar there is also a button that offers the possibility to create new tasks from a template. There are two default templates available. Tasks are created with these templates. It appears that it is not possible to create underlying tasks from a template. The question is why this functionality is not available for underlying tasks. Finally, the created tasks are deleted.

To get a better picture of how templates work in the application, the help text about templates is read. Furthermore, the team explores the menu structure in search of more functionality that could be related to the use of templates. From the ‘File’ menu it is possible to save tasks as templates, to import templates and to edit templates. A template is created.

The dialog that appears when choosing to import a template shows that template files have the extension ‘.tsktmpl’. But when a task is saved as a template, it is not possible to find out whether a template file is created from this task and, if so, where this file is stored. The newly created templates are visible when one chooses to create a new task based on a template from the toolbar.

A task is created and saved, and it is checked whether this task can be imported. Again, all created tasks are deleted, this time by selecting the menu option Edit > Delete. The team looks a bit further at the options for removing tasks. It appears that there is a keyboard shortcut for removing tasks: Ctrl + DEL. The team wonders why it is not possible to simply use the Delete key.

In summary, this session looked at creating, viewing, editing and deleting tasks and underlying tasks. A finding was made regarding the undoing of changes: undoing the removal of tasks in a tree structure of tasks and underlying tasks does not reproduce the original structure. Following the session, it is proposed to look more extensively at the use of templates in new charters. The functionality for creating, viewing, editing and deleting tasks and underlying tasks also deserves more attention, especially because this functionality can be called from many different places in the application; the various possibilities have not all been covered in the first session. Furthermore, a charter could be made for creating a task that depends on another task.

Feedback
At the end of the session, the report of the session was discussed with a member of the other team in a debrief, with the aim of obtaining feedback about the course of the session. The feedback shows that there was not enough time to complete the charter; another thirty to sixty minutes would be needed. It is noted that the description of the detected bug is not clear enough. It also becomes clear that not all possible statuses of a task have actually been addressed in the session. Finally, a new charter is suggested for testing filtering and sorting.

Testing the handling of date and time
The session is started with a clean installation of the application. First a new task is created. Attention is given to the Data tab on which various data can be entered. The date and time can be changed in separate input fields. The time can be changed by means of a dropdown or by using the arrow keys on the keyboard. It turns out that ‘00:00’ cannot be used as time. It appears that the range of time can be set under the ‘Preferences’ menu. After an adjustment, ‘00:00’ can be entered.

It is noted that the planned start date and the planned end date are not included in the calculation of duration, but it is unclear what these dates will be used for. When the planned end date is in the past, the task will turn red. When the planned end date is in the future, the task will turn purple. It is remarkable that the planned start date can be after the planned end date; apparently there is no validation on the order of the dates. Dates that are far in the future are accepted as valid dates. To change a date, the task will first have to be closed and then opened again.

An interesting finding occurs when the year of the planned start date is set to a year before 1900. In that case, the planned start date cannot be changed after closing and reopening the task. While studying this finding, the team finds out that the application creates a log file (in the Documents folder) in which errors are logged. The attempt to adjust the planned start date results in the following error message: “ValueError: year=1853 is before 1900; the datetime strftime() methods require year >= 1900”. After this finding, further attention is paid to other functionality around time and date. For example, a reminder on a task is set to 5 minutes. The reminder works.
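For readers who wonder where this error comes from: Task Coach is written in Python, and on Python 2 the strftime methods of datetime objects refuse years before 1900, which matches the logged message. The sketch below is a minimal illustration of that boundary; the helper function is made up for this example, and on Python 3 the same call will typically just succeed.

```python
from datetime import datetime

def safe_format(dt, fmt="%Y-%m-%d"):
    """Format a date; on Python 2, strftime raises ValueError for years before 1900."""
    try:
        return dt.strftime(fmt)
    except ValueError as error:  # e.g. "year=1853 is before 1900; ..." on Python 2
        return "formatting failed: {}".format(error)

print(safe_format(datetime(1853, 1, 1)))
```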

In addition to planning tasks, Task Coach can also be used to keep track of the time spent per task. After some research it appears that this functionality is complex and that it is not easy to find out how Task Coach deals with time spent. Time tracking can be started with a button, but can also be done by increasing the time spent on a separate tab. The team notices that when entering an hour of time spent, the actual time spent shown on the tab is just a few seconds. It is possible to aggregate time spent at month level. If we do this, we see that the descriptions we have entered for each entry are combined into a single text field.

The time that an application uses depends on the time setting on the system. For this reason, the team manipulates the system time of a MacBook Pro. The calendar is changed to a Coptic calendar. This adjustment causes the application to crash after startup. In the log a line appears mentioning an error relating to an invalid date format.

Finally, the team looks at the connection between the budgeted time and the time spent. Task Coach is able to calculate the remaining time on the basis of these two variables. Some tests are performed with variations in hours, minutes and seconds. Task Coach handles all these variations correctly.

Feedback
A team member verbally reports on the session to a member of the other team in a debrief. The feedback shows that this report contains a lot of detailed information. To put the detailed information into context, a frame of reference is needed; the product coverage outline could have been used for this. One new charter emerges from the feedback, namely testing the synchronization of tasks between systems with different system times.

Conclusion
The meeting is concluded with the completion of the test sessions. Looking back, in a single day and with two teams we conducted a number of tests on an application that was unknown to us beforehand. We have shown that techniques and methods exist that help the tester to acquire knowledge about the application, develop a strategy and perform tests. Exploring the application provides insights that serve as a starting point for risk assessments and for conducting test sessions with a well-defined goal. By quickly arriving at concrete and well-substantiated tests, the tester provides valuable feedback on the application in a short period of time. The test sessions are debriefed, which provides starting points for further investigation where necessary.

Due to the popularity of agile methods, it is common for the tester to be asked to test something, while at that moment he has incomplete insight into the functionality of the application. Moreover, the tester is expected to deliver results within a limited time. It requires an approach in which the tester quickly draws up and executes a strategy by modeling, thinking critically, discovering and investigating. This is the approach that we applied during the meeting described above.

A Clash of Models
The creation of the product coverage outline led to quite different outlines being delivered by the teams. These differences and their possible causes are discussed in a separate article, written by Joris Meerts and Ruud Cox. The article is called A Clash of Models.

References
Bach, J. (1996). Heuristic Test Strategy Model.
Kaner, C., Bach, J., & Pettichord, B. (2001). Lessons Learned in Software Testing. John Wiley & Sons.
Task Coach. (2018). Retrieved from Task Coach: http://taskcoach.org/

Let’s stop talking about testing, let’s start thinking about value

This year Alex Schladebeck and I did two keynotes titled “Let’s stop talking about testing, let’s start thinking about value” at QA Expo in Spain and at TestNet in the Netherlands. This blogpost covers the most important points we made in our talk.

The keynote was inspired by some of our frustrations: “Testing is under-appreciated” (Alex) and “Most testers are unable to explain what we do” (Huib). I wrote about my frustration back in 2016 already. That blogpost was about my frustration that most testers cannot come up with a decent definition of testing. And even worse: a big majority of the people who call themselves professional testers are not able to explain what testing is and how it works! They have trouble explaining what they are testing and why they are doing specifically the thing they are doing! How can anybody take a tester seriously who cannot explain what he is doing all day?

Alex’s frustration is that testing is not valued by others. Developers are seen as the rockstars of the project because they create the software that adds value. But why are testers often not valued?

  • Lowered expectations for testing expertise by stuff like ISO standards and ISTQB: I wrote about certification and standards before. ISTQB and standards put too much emphasis on process and documentation, rather than on the real testing. By assuming there can be a standard, you say that there is one best way to organize and document your testing. But isn’t your test strategy heavily dependent on its context? When using standards we tend to focus on complying with the standard, and lose sight of the real goal. This sort of goal displacement is a familiar problem in many situations. Also, the idea that you can learn how to test in a couple of days of training is dangerous. Remember lesson 272: if you can get a black belt in only two weeks, avoid fights (Lessons Learned in Software Testing: A Context-Driven Approach by Kaner, Bach and Pettichord).
  • Avoiding controversy: nowadays more and more people advocate being nice! I think that we confuse being nice with being kind! An interesting article about this phenomenon was written by Marcia Sirota. Of course we need to respect other people, but to push the testing craft forward, we need to have firm discussions and disagree with others way more often. Being nice doesn’t help. Serious feedback does!
  • We devalue our own work by becoming tool jockeys: unfortunately there are too many testers (and teams) out there who focus on automating as much as possible. Why? Because they can! The testers in those teams are often so busy doing automation that they do not have the time to test anything…
  • We do not stand up for our craft: we do not fight back enough when other people say they do not need testers, or when they tell us how to do our jobs, to name a few examples. We have to learn “testers’ self-defence”: to stand up to people who try to dictate how we do our jobs. We have to learn how to organize effective (and efficient) testing. And we need to learn how to talk about our work in a way others understand. This requires practice!
  • We do not learn or practice enough: testing is difficult! We have to deal with complexity, ambiguity, change and people. Testing is a craft, not something you do as a hobby. To become a craftsperson, you have to practice (also see my blogpost: a road to awesomeness).
  • We don’t know how to talk about testing: as said before: how can anybody take a tester seriously who cannot explain what he is doing all day? To be really valuable, testers need to learn to talk about their testing in a way others understand and find valuable.

So looking at these things, are we okay with this? I don’t think so. But what can we do about it? We are trapped in a vicious circle: we need to talk about testing! It is good for our soul to explain what we did and why, but we don’t know how to talk about our testing in a way that others understand.

Alex and I listed some traps:

  • Stories decay into Numbers: testing is about providing information to enable others to make informed decisions. The number of test cases or the number of bugs does not really matter; it is the story about the product and the risks involved that matters. The numbers might back up your story, but they do not tell the story!
  • A performance decays into Deliverables: testing is about finding problems, collecting information, exploring and experimenting to discover new information. Sure, documents and stuff sometimes help us, but testing is a performance. (James Bach talks about that here: a test is a performance and here: Test cases are not testing: towards a culture of test performance).
  • Test strategy decays into Test execution: when was the last time you saw a really good test strategy? In many cases I find master test plans where everything is described except the strategy. It is hard to create a test strategy and it is even harder to write it down or visualise it. Many testers I meet focus on test execution: creating test cases and scenarios and calling that the strategy.
  • Tool supported testing decays into Automation: testing using tools is a great idea. It gives us more opportunities to test and improves testability. But as said earlier: it becomes a problem when we focus too much on automation or even try to automate all our work. We cannot automate testing.
  • Many kinds of coverage decay into One kind of coverage: testing benefits from diversity! You find a certain type of bug with a certain test technique or approach. By using lots of different views, approaches and techniques, we find more problems.
  • Learning activity decays into Formalized static tasks: testing is learning about the product for our stakeholders. It’s not about verification and validation; there is much more to it. I like to replace those words: instead of “verify” I say “challenge the belief that”, and instead of “validate” I say “investigate”. Those activities provide the valuable information we need.
  • Balance risk and uncertainty decays into Certainty: people like to be comfortable and we like to give others comfort as well. But as testers we need to stay unsure when others are sure. It is our job to keep asking critical questions. We are not here to give confidence or comfort, we are here to demolish unwarranted confidence! Also keep in mind that to find new, unexpected problems, we have to go where nobody has thought to go and nobody has gone before us. That will cause confusion, which feels uncomfortable for many. I learned to be okay with confusion, since it is essential for learning new things.
  • Business Impact decays into Bugs: some testers are frustrated when bugs aren’t fixed. But that is part of the deal: some things that bug us are just not important enough.
  • Product story decays into Testing jargon: I think this is the main reason people do not listen to testers. We talk too much in jargon and about the details of what we do. We say stuff like: “We’ve executed 17 test cases in the system test, we’ve automated 50% of the test cases for area C and now have 30% code coverage. We found three major and five medium bugs”. And then we are surprised that nobody listens. We need to talk about the product! So you have found 8 bugs? Who cares? Talk about the risks involved, about the threats to the value of the product.

So maybe testers need to stop talking about testing?
Well, not exactly. We need to remember that the information from testing enables other people to do better work! So the testing itself isn’t always interesting, but the story about the results and the impact on the business is!

Just imagine a conversation between a tester and the PO.
Tester: The testing is going well!
PO: Okay, great. How is the product?
Tester: It sucks!

The role of testers

What is the role of testers? Testers see things for what they are. Testers help others make informed decisions about quality, because we think critically about software. This means creating awareness about the state of the product by staying sceptical when everybody else is sure. So we have to know what our clients want from testing. What information do they need to take these decisions? Project managers have one big question to be answered: are there problems that threaten the on-time, successful completion of the product?

Product Risk Knowledge Gap

I like to explain testing using the “Product Risk Knowledge Gap” like we teach in RST. Knowledge Gaps are the things that we need to learn in order to make good decisions. We need to learn about the product to close the knowledge gap. The more we know, the less risky our decisions will be. Testers should focus on questions like: what does the client need to know right now? What might hinder the successful completion of the product? What role do I need to take on in this situation to ensure we achieve our aims? Does this information matter? To whom?

But there is a way to avoid talking about testing. Just find enough questions and problems so that your stakeholders simply won’t have time to ask you questions back! Also, if you tell a credible story and give them the information they need, nobody cares how you got the information in the first place. In this case you need to stand your ground: tell people what they need to hear despite what they want to hear. Again: it’s your job to see things for what they are. If you give people the chance to doubt what you are doing, because you do not deliver the information they need, they will start asking questions about how you do your job. And if you have to talk about how you do your testing, then prepare to be able to tell a damned good story about your testing. Something they can understand and relate to.

The testing story

The testing story by Rapid Software Testing can help you tell that story. Tell a story about the product, what you saw, what you did to gather that information and how valuable that information is. (See “Braiding The Stories” by Michael Bolton). The testing story contains three stories that feed into each other:

  1. The product story: a qualitative report on how the product can work, how it fails, and how it might fail in ways that matter to our clients.
  2. The testing story: to make the product story credible, the testing story is about how we configured, operated, observed, and evaluated the product; what we actually did and what we actually saw.
  3. The quality of testing story: to make the testing story credible, tell a story about the quality of the testing. Describe why the testing we’ve done has been good enough. It includes details on what made testing harder or slower and what we might need or recommend in order to provide better, more accurate, more timely information.

Modern testing
As testers we do way more than just testing. We are enablers of testing, doing all kinds of other things to be of service to the team and our clients. Researching this, Alex and I found the Modern Testing principles by Alan Page and Brent Jensen. There is a lot of good stuff in there, and yet we feel that there is not enough focus on the actual testing in their principles. Furthermore, we think that the seventh principle, “We expand testing abilities and knowhow across the team; understanding that this may reduce (or eliminate) the need for a dedicated testing specialist.”, is formulated too negatively. We do not want to talk about the dedicated testing specialist as a function; we prefer to talk about testing skills. And although we think there should not be a need for a dedicated testing specialist, we see too many people in teams who do not like testing. Passion (or at least motivation) for what you do is a precondition for becoming good at anything. So we created our own testing principles (inspired by the Modern Testing principles, of course):

  1. Deliver insight into status of the product
  2. Practice (and enact) critical thinking
  3. Enable testing: lead, coach, teach, support
  4. Discuss testability
  5. Explore & experiment
  6. Promote waste removal / avoidance
  7. Help to accelerate the team
  8. Advocate continuous improvement
  9. Foster quality culture
  10. Keep critical distance and close social distance

Stop talking about testing?

So do we need to stop talking about testing? Not really. But we need to talk about the product, risks and value more. We can talk about the actual testing to back our story up or when people ask questions, and even then we need to make our story understandable and relatable to others. Make sure you are of service to the team. We created our own testing principles to explain what value we add. We also have a pretty clear story on what testing is and how it adds value. We got there by practicing our stories many times, but we also figured out our own testing paradigm. That makes it easier to talk about what we do and how we add value.

Software development is research & development: a series of experiments that ultimately lead to a suitable solution. We are dealing with customers who do not know exactly what they want. Furthermore, we are dealing with complexity, confusion, change, new insights and half answers. That requires research. As a team we are looking for what works and what doesn’t. Testing is of great importance for this! Testing provides insight and overview. Testing shines a light on the actual status of the product and the project. These insights enable others to make better decisions and eventually make better products.

The slides are here.

Note: this is an Alex Schladebeck and Huib Schoots co-production and this blogpost was co-authored by Alex. So where you read I, you could read Alex and I.

Creating a Test Strategy

At EuroStar 2017 I did an experiential workshop with Pekka Marjamaki and Carsten Feilberg called “The Magic of Sherlock Holmes – Test Strategy in a blink of an eye”. The goal of this full-day workshop was to teach how to create a Test Strategy rapidly so you can start testing as soon as possible. This blog post summarizes what we taught and shares the example I made for the participants.

What is a Test Strategy?

In the workshop we defined Test Strategy as a solution to a complex problem: how do we meet the information needs of the testers and stakeholders in the most efficient way possible? In Rapid Software Testing we define Test Strategy as “the set of ideas that guide your test design or choice of tests to be performed”. We also talk about logistics and the test plan: logistics is the set of ideas that guide your application of resources to fulfilling the test strategy, and the test plan is the set of ideas that guide your test project. A Test Plan is the sum of logistics and strategy.

Rikard Edgren did an excellent workshop on Test Strategy at EuroStar 2014. In his workshop he says: Test strategy contains the ideas that guide your testing effort; and deals with what to test, and how to do it. (Some people mean test plan or test process, which is unfortunate…). It is in the combination of WHAT and HOW you find the real strategy. If you separate the WHAT and the HOW, it becomes general and quite useless.

What influences the Test Strategy?

Your Test Strategy is influenced by many factors.

  • Context: your testing is influenced by the details of the specific situation, like the information available, the tester(s) doing the testing, what has been tested before, what tools and environments are available, how much time you have, etc.
  • Missions: what do your stakeholders need to know about the product? 
  • Risks: testing is mostly motivated by problems that might happen (risks). We want to find the important problems in the product. 
  • Product: products have many dimensions. By modelling the product we find important and unique aspects of the product.
  • Quality Criteria: various criteria or requirements that define what the product should be for the stakeholders.
  • Testing: your testing changes the Strategy constantly. Each experiment teaches you more about the product and the risks involved.

How to create a Test Strategy?

To create a Test Strategy, you have to examine the factors mentioned above. This can be done through several activities (which do not need to be done in this specific order). Most likely you will do this in an iterative way, building your Test Strategy as you go.

  1. Missions for your testing
  2. Product analysis
  3. Oracles & information sources
  4. Quality characteristics
  5. Context: project environment
  6. Test strategies

I like to use the Heuristic Test Strategy Model (HTSM). It reminds me what to think about when I am creating my Test Strategy and tests.

The HTSM is a model that consists of several sets of heuristics (more about heuristics here and here). The full model can be found here. Below the list that follows, I have added a small sketch that turns these heuristic lists into a quick checklist.

  • Project Environment helps to understand our context and missions: MIDTESTD (mission, information, developer relations, test team, equipment & tools, schedule, test items, deliverables).
  • Product Elements helps to identify dimensions and factors of the product that could be examined in a test: SFDIPOT (structure, function, data, interfaces, platform, operations, time).
  • Quality Criteria helps to identify value and threats to various criteria of the product: CRISP DUCCS (capability, reliability, installability, security, performance, development, usability, charisma, compatibility, scalability). In this case you could also use the software quality characteristics by the Test Eye.
  • Risk analysis reveals potential problems. Risks motivate your testing, but testing is risk analysis in itself: after analysing potential risks, your testing informs you about actual problems and teaches you new aspects of the product, which helps you identify new risks. The general test techniques offer different strategies for addressing those risks: FDSFSCURA (function testing, domain testing, stress testing, flow testing, scenario testing, claims testing, user testing, risk testing, automatic checking). Coming up with good test ideas is an important skill; Erik Brickarp has an excellent blog post called How to come up with test ideas.
  • Identifying oracles and information sources helps with learning about the product and identifying potential problems. To design a good test strategy we need to know what’s important.
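As mentioned above the list, a lightweight way to keep these guideword sets at hand during a strategy session is to encode them as plain data and print a quick checklist. The Python sketch below uses only the mnemonics listed above and implies nothing more about the HTSM.

```python
# Quick HTSM reminder checklist, built from the guideword sets listed above.
HTSM = {
    "Project Environment (MIDTESTD)": [
        "mission", "information", "developer relations", "test team",
        "equipment & tools", "schedule", "test items", "deliverables"],
    "Product Elements (SFDIPOT)": [
        "structure", "function", "data", "interfaces",
        "platform", "operations", "time"],
    "Quality Criteria (CRISP DUCCS)": [
        "capability", "reliability", "installability", "security", "performance",
        "development", "usability", "charisma", "compatibility", "scalability"],
    "General Test Techniques (FDSFSCURA)": [
        "function testing", "domain testing", "stress testing", "flow testing",
        "scenario testing", "claims testing", "user testing", "risk testing",
        "automatic checking"],
}

for guideword_set, items in HTSM.items():
    print(guideword_set)
    for item in items:
        print("  [ ]", item)
```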

Examples of Test Strategy

Before giving you my example, I would like to link to two great examples by Rikard Edgren of how to create a thorough Test Strategy.

The exercise in the workshop was defined as follows:

Product: tinyurl.com/SherlockES

Approach used to create my Test Strategy below:

  1. Look at mission and things we know (from the class) [1 min]
  2. Explore wix platform website [1 min]
  3. Explore wix casies website and create SFDIPOT mindmap while learning about the product [5-10 min]
  4. Risk analysis [5-10 min]
  5. Think of test ideas / approaches to deal with risks [5-10 min]
  6. Wrap-up. Create testing story about what I know already. List next steps [5 min]
  7. Tidy document and add some comments to make it readable for students

Total time used: 50-60 min

1. What do we know (and what important questions do I still have):

  • No developers available –> No access to code
  • Target customers? Who are they?

(Considering the short time period, I chose not to do a thorough context analysis using MIDTESTD; if I had more time, I would do so.)

Mission:

Casies is a web shop built with the Wix platform where customers can buy a case for their mobile phone. Your mission is to find problems we want to fix before release. The owner of the website needs information to decide if this web shop can be released.

Most important quality criteria:

  • Usability & charisma
  • Reliability and security of the purchase process
  • Functionality
    • Find, sort & filter
    • Purchase, cart, payment
    • Bestsellers
    • Contact
  • Performance

2. Look at Wix platform site

(url: https://www.wix.com/features/main)

Product exploration: look at website about Wix platform

Claims about the product:

  • Easy Drag and Drop
  • Free & Reliable Hosting
  • App market –> what else is there?
  • Mobile Friendly
  • loads of templates
  • SEO

3. Explore Wix casies

Start using the product

Product exploration: play with casies website using SFDIPOT

Download Xmind mind map (created in Xmind Zen beta4)

4. Risks

The risks mentioned here are probably too vague in some cases. Since risk analysis is a continuous process, I will update the risks later, making them more concrete and actionable. I will also add more risks while testing.

  • Web shop not available
  • Web shop not easy to use
  • Target customers do not like the web shop
  • Web shop is not secure: customer data accessible by 3rd party
  • Customer cannot add items to cart
  • Customer cannot buy items in cart
  • Customer cannot find the items wanted
  • Web shop is not easy to find

5. Risks – Testing

Test ideas

Used the document “Software Quality Characteristics” by the Test Eye

(More info on session based testing: here)

6. Testing Story & Next Steps

Looking at the website I found that the web shop doesn’t look complete to me: there is no possibility to check out and pay. Is this okay? To be able to do thorough testing to fulfil the mission “Your mission is to find problems we want to fix before release. The owner of the website needs information to decide if this web shop can be released” the website needs to be completed and payment functionality needs to be added. I am also interested in the maintenance model: how can I add cases? This would be very handy to create more test data and play with parameters in there to see how that comes out in the shop. Does this need to be part of my testing?

The results of my short initial exploration are captured in the SFDIPOT mind map I made while playing & interacting with the product. After that I made an initial risk analysis. I haven’t gone deep on anything yet.

The next step will be to discuss my initial test strategy with the product owner. If the payment module isn’t available soon, I will start with the first three charters, although I will not be able to fully execute the purchase process charter, so I will have to split it and focus on the cart part only.

  • Purchase process: cart & payment including investigation of fields
    (using the Test heuristic cheat sheet)
  • Finding cases – sorting & filtering cases
  • GUI tour: check all links and info

Used heuristics

Below is an excerpt from the “Software Quality Characteristics” by the Test Eye, used as heuristics for creating this Test Strategy.

Usability

  • Affordance: product invites to discover possibilities of the product.
  • Intuitiveness: it is easy to understand and explain what the product can do.
  • Minimalism: there is nothing redundant about the product’s content or appearance.
  • Learnability: it is fast and easy to learn how to use the product.
  • Memorability: once you have learnt how to do something you don’t forget it.
  • Discoverability: the product’s information and capabilities can be discovered by exploration of the user interface.
  • Operability: an experienced user can perform common actions very fast.
  • Interactivity: the product has easy-to-understand states and possibilities of interacting with the application (via GUI or API).
  • Control: the user should feel in control over the proceedings of the software.
  • Clarity: is everything stated explicitly and in detail, with a language that can be understood, leaving no room for doubt?
  • Errors: there are informative error messages, difficult to make mistakes and easy to repair after making them.
  • Consistency: behavior is the same throughout the product, and there is one look & feel.
  • Tailorability: default settings and behavior can be specified for flexibility.
  • Accessibility: the product is possible to use for as many people as possible, and meets applicable accessibility standards.
  • Documentation: there is a Help that helps, and matches the functionality.

Charisma

  • Uniqueness: the product is distinguishable and has something no one else has.
  • Satisfaction: how do you feel after using the product?
  • Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
  • Attractiveness: are all types of aspects of the product appealing to eyes and other senses?
  • Curiosity: will users get interested and try out what they can do with the product?
  • Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
  • Hype: should the product use the latest and greatest technologies/ideas?
  • Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
  • Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
  • Directness: are (first) impressions impressive?
  • Story: are there compelling stories about the product’s inception, construction or usage?

Reliability

  • Stability: the product shouldn’t cause crashes, unhandled exceptions or script errors.
  • Robustness: the product handles foreseen and unforeseen errors gracefully.
  • Stress handling: how does the system cope when exceeding various limits?
  • Recoverability: it is possible to recover and continue using the product after a fatal error.
  • Data Integrity: all types of data remain intact throughout the product.
  • Safety: the product will not be part of damaging people or possessions.
  • Disaster Recovery: what if something really, really bad happens?
  • Trustworthiness: is the product’s behavior consistent, predictable, and trustworthy?

Security

  • Authentication: the product’s identifications of the users.
  • Authorization: the product’s handling of what an authenticated user can see and do.
  • Privacy: ability to not disclose data that is protected to unauthorized users.
  • Security holes: product should not invite to social engineering vulnerabilities.
  • Secrecy: the product should under no circumstances disclose information about the underlying systems.
  • Invulnerability: ability to withstand penetration attempts.
  • Virus-free: product will not transport virus, or appear as one.
  • Piracy Resistance: no possibility to illegally copy and distribute the software or code.
  • Compliance: security standards the product adheres to.

Performance

  • Capacity: the many limits of the product, for different circumstances (e.g. a slow network).
  • Resource Utilization: appropriate usage of memory, storage and other resources.
  • Responsiveness: the speed of which an action is (perceived as) performed.
  • Availability: the system is available for use when it should be.
  • Throughput: the product’s ability to process many, many things.
  • Endurance: can the product handle load for a long time?
  • Feedback: is the feedback from the system on user actions appropriate?
  • Scalability: how well does the product scale up, out or down?

Final thoughts

As my example shows: you can create a Test Strategy in an hour. Of course this Test Strategy is not complete. But after the first tests (3 sessions) we learn and discover more about the product, so we can identify new risks, which inform new missions and will help us come up with new Test Ideas. Our Test Strategy will grow over time!

Extra info:

  • The slides are here.
  • The pictures and flipcharts from the workshop are here.


Does certification have value or not?

I read a blogpost in Dutch named “Does certification have value or not?” by Jan Jaap Cannegieter. I wanted to reply, but there was no option to do so, so I decided to turn my comments into a blogpost. Since the original blogpost is in Dutch, I have translated the quotes here.

“The proponents claim that with certification you prove to have a foundation in testing, you possess certain knowledge and it supports education.” (quoted text is from the blogpost, translated by me).

Three things are said here:

  1. prove to have a foundation
    Foundation? What foundation? You learn a few terms/definitions and an over-simplified “standard” process? And how important is this anyway? Also, the argument of a common language is nicely debunked by Michael Bolton here: “Common languages ain’t so common”.
  2. possess certain knowledge
    When passing an exam, you show that you can remember certain things. It doesn’t prove you can apply that knowledge. And is that knowledge really important in our craft? I think knowledge is over-appreciated and skills are undervalued. I’d rather have someone who has the skills to play football well than somebody who merely knows the rules. From a foundation training, wouldn’t you at least expect to learn basic testing skills? In no ISTQB training do students use a computer. Imagine giving someone a driver’s license without them ever having sat in a car…
  3. supports education
    Really? Can you tell me how? I think the opposite is true! As an experienced teacher (I also did my share of certification training in the past), my experience is that there is too much focus on passing the exam rather than on learning useful skills. Unfortunately, preparing students for the exam takes a lot of time and focus away from the stuff that really matters. Time I would rather use differently.

Learning & tacit knowledge

So how do people learn skills? There are many resources I could point to. Try these:

In his wonderful book “The Psychology of Software Testing”, John Stevenson talks about learning on page 49:

The “sit back and listen” approach can be effective in acquiring information but appears to be very poor in the development of thinking skills or acquiring the necessary knowledge to apply what has been explained. The majority of trainers have come to realise the importance of hands on training “Learn by doing” or “experiential learning”.

John points to resources like: Learningfromexperience.com and the book “Experiential learning: experience as the source of learning and development” by David Kolb. Also Jerry Weinberg has written books on experiential learning.

The resources on learning skills I mentioned earlier will tell you that experienced people know what is relevant and how things are related. Practice, experimentation and reflection are also important parts of learning. Learning a skill depends heavily on tacit knowledge. On page 50 of his book John Stevenson writes:

Päivi Tynjälä makes an interesting comment in the International Journal of Educational Research: “The key to professional development is making explicit that which has earlier been tacit and implicit, and thus opening it to critical reflection and transformation” – This means that what we learn may not be something we can explain easily (tacit), and that as we learn we try to find ways to make it explicit. This is the key to understanding and knowledge: taking something which is implicit and making it explicit, and therefore being able to reflect on what is learned and explain our understanding.

And since testing is collecting information or learning about a product, the importance of tacit knowledge also applies to testing: John writes in his book on page 197:

“However testing is about testing the information we do not know or cannot explain (the hidden stuff). To do this we have to use tacit knowledge (skills, experience, thinking) and we need to experience it to be able to work it out. This is what is meant by tacit knowledge.”

Back to the blogpost:

The opponents say certification only shows that you’ve learned a particular book well, it says nothing about the tester’s ability and can be counterproductive because the tester is trained to a standard tester.

  • Learned a particular book
    Agree, see arguments 1 and 2 above.
  • it says nothing about the tester’s ability
    Agree, see my argumentation on skills in point 2 above: “knowledge is over appreciated and skills are undervalued”. To learn we need practice and reflection. Also tacit knowledge is an important part of learning.
  • Trained to a standard tester
    Agree. No testing that I know of, is standard. Testing is driven by context. And testers with excellent skills have the ability to work in any context without using standards or templates. Have a look at the Ted Talk by Dr. Derek Cabrera “How Thinking Works“. He explains that critical thinking is a skill that is extremely important. Schools (and training providers)  nowadays are over-engineering the content curriculum: students do not learn to think, they learn to memorize stuff. Students are learned to follow instructions, like painting by the numbers or fill in templates. To fix this, we need to learn how to think better! Learning to paint by numbers is exactly what certification based on knowledge does with testers! Read more about learning, thinking and how to become an excellent tester in one of my earlier blogpost: “a road to awesomeness“.

Comparison with driving license
Does a driving license show anything? Well, at least you have studied the traffic rules well and know them. And, while driving, it is quite useful if we all use the same rules. If you doubt that, you should drive a couple of rounds in Mumbai.

In testing we should NEVER use the same rules as a starting point: “The value depends on the context!” Driving in Mumbai or anywhere else by strictly adhering to the rules will result in accidents and will get you killed. You need skills to drive a car and to be able to anticipate, observe and respond to unexpected behaviour of others. That is what will keep you out of trouble while driving.

As I explained earlier on the TestNet website, this comparison is wrong in many ways. For a driver’s license, you must do a practical exam. And to pass the practical exam, most people take lessons! You will have been driving for at least 20 hours before your exam. And the exam is not a laboratory: you go on the (real) road in a real car. A multiple-choice exam does not even remotely resemble a real situation. That is also why ISTQB or TMap certificates are so pointless: nowhere in the training or the exam does the student use software, nor does the student have to test anything!

This is the heart of the problem! People do not learn how to test, but they learn to memorize outdated theory about testing. Unfortunately in many companies new and inexperienced testers are left unattended in complex environments without the right supervision and support!

So what would you prefer in your project: someone who can drive a car (someone who has the basic skills to test software), or someone who knows the rules (someone who knows all the process steps and definitions by heart)? In addition, ISTQB states here that the training is intended for people with 6 months of experience. So how are new testers going to learn during their first 6 months?

The foundation for a tester?

The argument that the ISTQB foundation training provides a basis for a beginner to start is nonsense! It teaches the students a number of terms and a practically unusable standard process. In addition, there is a lot of theory about test techniques and approaches, but the practical implementation is lacking. There are many better alternatives as described in the resources earlier in this blogpost: learning by doing! Of course with the right guidance, support and supervision. Teach beginners the skills to do their work, as we learn the skills to drive a car in driving lessons. In a safe environment with an experienced driver next to us. Until we are skilled enough to do it without supervision. Sure, theory and explicit knowledge are important, but skills are much more important! And we need tacit knowledge to apply the explicit knowledge in our work.

So please stop stating that foundation training like TMap and ISTQB is a good start for people to learn about testing. It isn’t. Learning to drive a car starts with practicing actually driving the car.

Jan Jaap states he thinks a tester should be certified: “And what about testers? I think that they should also be certified. From someone who calls himself a professional tester we may expect some basic knowledge and knowledge about certain methods?“.
I think we may expect professional testers to have expertise in different methods. They should be able to do their job, which demands skills and knowledge. We may expect quite a bit more from professional testers than some basic knowledge and knowledge about methods.

Many of the well-known certification programs originated when IT projects looked very different and, in my view, these programs did not grow with the developments. So they train for the old world

Absolutely true.

Another point where the opponents are right is the value purchasing departments and intermediaries attach to certificates. In many purchasing departments and intermediaries, the attitude seems to be that if someone has a certificate, he is also a good tester. And to say that, more is needed.

It is indeed very sad that this is the main reason why certificates are popular. Many people get certified because of the demand from organisations that do not recognise the true value of these certificates. Organisations are often not able (or do not want to spend the time needed) to recognise real professional testers, so they rely on certificates. On how to solve this problem I did a webinar, “Tips, Tricks & Lessons Learned for Hiring Professional Testers”, and wrote an article about it for Testing Circus.

Learning goals & value

On the ISTQB website I found the Foundation Level learning goals. Let’s have a look at them; the learning goals below are quoted from the website.

Foundation Level professionals should be able to:

  • Use a common language for efficient and effective communication with other testers and project stakeholders.
    Okay, we can check if the student knows how ISTQB defines stuff with an exam. However, understanding what it means or how to deal with it in a daily practice is very different. Also, again, common language is a myth.
  • Understand established testing concepts, the fundamental test process, test approaches, and principles to support test objectives.
    Concepts and test process: okay, you can check if a student remembers these. However, the content is old and outdated and in many places incorrect! I think understanding approaches cannot be checked in a (multiple-choice) exam. Maybe some definitions, but how to apply them? No way.
  • Design and prioritize tests by using established techniques; analyze both functional and non-functional specifications (such as performance and usability) at all test levels for systems with a low to medium level of complexity.
    Design and prioritize tests? Interesting. Where is this trained? Or tested in the exam? Analyse specifications? That is not even part of the training. Applying some techniques is, but there is a lot more to designing and prioritizing tests and to analysing specifications.
  • Execute tests according to agreed test plans, and analyze and report on the results of tests.
    Neither the execution of tests nor the analysis and reporting of test results is part of the exam. In class only the theory about test reporting is discussed; it is never practiced.
  • Write clear and understandable incident reports.
    How do you check this with a multiple-choice exam? And how do you train this skill without actually testing software in class? There are no exercises in class that actually ask you to write such reports.
  • Effectively participate in reviews of small to medium-sized projects.
    The theory about reviews is part of the class. To effectively participate in reviews, you need to do it and learn from experience.
  • Be familiar with different types of testing tools and their uses; assist in the selection and implementation process.
    Some tools and their goals and uses are mentioned in class. So I will agree with the first part. But to assist in selection and implementation, again you need skills.

So looking at the learning goals above, I doubt whether the current classes teach this. The exam certainly doesn’t prove that a Foundation Level professional is able to do these things. A lot of promises that are just wrong! Certification training like ISTQB-F and TMap, as it is now, is simply not worth the money! The training and exam typically take 3 days and cost around 1,700 euro in the Netherlands. I think that is a crazy investment for what you get in return… There are better ways to invest that money, time and effort!

I think that a more valuable 3-day foundation training is doable, but surely not the way it is done now by TMap or ISTQB. I wrote a blog post about it years ago: “What they teach us in TMap Class and why it is wrong!“.



Test improvement in an agile/CDT environment

This post and the article have been updated on April 4 2017.

One day during a team meeting at Joep‘s previous job at a bank, the Team Manager of Testing listed a number of topics his testers could work on in the coming months. One of those topics was “testing maturity”. This topic was on the list not because this manager was such a fan of maturity models, but because the other team managers (Business Analysis and Development) had produced one for their own teams and higher management wanted one for testing as well. And although Joep saw little value in a classic five-tiered maturity model either, he was intrigued by the question: what can you do with respect to maturity models that is of value?

Joep asked Huib to help him think of a way to create a valuable, context-driven way to work on maturity. Since Huib had been working for the same bank, they met and discussed the possibilities. Soon they found out that the criteria should be variable since maturity depends on context. They started experimenting with stack ranking and quite soon they had the first version of their “maturity model”.

Maturity or improvement?

After discussing the first version of the article with James and Michael, we felt the need to update our article. Their comments helped us realize that we needed to explore maturity and maturity models a bit more. After doing this, we decided to rename our model into a test improvement model.

Maturity mission: better testing

What is the mission of a maturity assessment? We think the assessment should be a pathway to better testing. As part of solving problems, we think the mission should be: “An investigation of strengths and weaknesses. A starting point for a discussion about potential (testing) problems and how to solve them.” Or as James Bach says: a maturity model is a plan for achieving maturity. And this is exactly what we created. Our maturity model isn’t anything like the staged, fixed models available in the market. Maybe we shouldn’t call our method a maturity model, since basically it isn’t one. It is a tool designed to help teams assess and improve their testing: a method supported by a card game that helps teams retrospect and identify strengths and weaknesses in their way of working, the stuff they create, the team, their skills and their context.

Result

After a first try-out at the bank where Joep worked, we let it rest for a while. After a couple of months we wrote this article. It is the first version and it needs to be refined and polished. The heuristics lists are probably too long and need to be reduced. We think of this model as a card game that can be played with teams.

Currently we are also working on an agile version of this model, a card game for agile teams to assess their “maturity” to help them to find possible areas for improvements. More about that later.

We are curious about your thoughts. What do you think? Maybe you want to try the game? Feel free to try it out. We hope you will share your experiences with us.

Article (pdf) – Card game (pdf)

The slides of the meetup about our model are here.

Help Linnea

“There is a saying that it takes a whole village to raise a child. Now we need a whole village to save our Linnea”

Linnea, Kristoffer Nordström’s daughter, is five and a half years old and comes from Karlskrona in Sweden. Her world until recently revolved around My Little Ponies, riding her bicycle and popcorn… lots of popcorn. She has one best friend: her beloved big brother Kristian.
That was her world – until a few months ago, when she suddenly and shockingly fell ill and had emergency surgery for a brain tumor.
After the operation, we hoped that the bad news would end. But now the family lives in the hospital and has been told that the tumor is an aggressive variety called DIPG (Diffuse Intrinsic Pontine Glioma). The short story is that there is a heart-breakingly minimal chance of survival using established treatments.

There is a possible treatment that we are now aiming for: one in which the tumor is treated through catheters implanted directly into the tumor. Studies and reports show that such a direct treatment gives Linnea the best chance of one day becoming healthy. The cost of the treatment and the journeys is very high, higher than the average person can pay for: £65,000 for the first operation and then £6,500 for treatments thereafter. In the current situation, it is unclear how many of these Linnea will need.

Please help Kristoffer and his family!

Update July 3 2017

Great news!! The treatment seems to work, Kristoffer writes on his Facebook: http://www.facebook.com/kristoffer.nordstrom.792
We know a lot of people are waiting for this so me and Giedre want to give you the fantastic news.
Linnea had her third Intra Arterial treatment today and all went well without complications, this was the first time that we added the immunotherapy to the treatment, Linneas immune system is now being taught how to recognise and attack the tumor cells itself (Autologous Dendritic Cell Immunotherapy)!
The amazing news of today is that the treatment continues to do its job and we now see further shrinkage in the tumor from the last treatment three weeks ago.
The doctors see a distinct reduction in size since last time!
We now know that this is a treatment that is working for Linnea and for the other children here in Mexico, there are now over 30 families from all over the world here.
The treatment is very effective, but also very expensive, with the combined Immunotherapy and Chemotherapy the cost is 30.000USD (250.000SEK) every third week, later once Linneas immunesystem is trained the treatment will go back to chemotherapy only.
We realise we need to do this for an unforseen time going forward and looking at the costs of each treatment and our budget we need to ask all of you who have helped us get here to help us even further in saving our daughter.
Any donations, big or small, are more than welcome, you can help us at our fundraising site.
Thank you so much everyone who has helped us get here.
Swish donations from Sweden are also very welcome, the number is: +46 723 58 09 53