Author: Huib Schoots

Considerations when testing a software application in a context-driven way

Written by: Joris Meerts (main author), Huib Schoots and Ruud Cox, all working at Improve Quality Services in the Netherlands.

Introduction
One of the cornerstones of context-driven testing (Lessons Learned in Software Testing by Kaner, Bach, & Pettichord, 2001) is that the way of working when testing software is determined by the situation in which the tester finds himself. A good approach is not driven by a prescribed process or by a collection of steps that one habitually executes. Instead, it arises from the use of skills that ensure that the testing matches the circumstances of the software project. Within the framework of context-driven testing, a wide range of skills is discussed, including critical thinking, modeling and visualization, note-taking and applying heuristics. These skills are not easy to learn. One way to sharpen them is to do exercises and discuss the results. This was the purpose of a meeting of four testers at Improve Quality Services. In the following report we elaborate on the exercises that were done during that meeting and on their results.

Purpose of this report
As we pointed out in the introduction, it is not easy to learn the skills associated with context-driven testing. Practitioners of context-driven testing regularly refer to professional literature that describes these skills. But applying these skills is a process of trial and error. That is why it is important to simulate real-life situations and to learn from the exercises we do. That was the purpose of the meeting. In this report we share the steps we took and the results of those steps, for example in the form of notes, sketches or models. We also share our experiences to make it easier to apply the skills in practice.

The meeting
The meeting was held on February 14, 2017 at the office of Improve Quality Services in Eindhoven. The participating testers, Jos Duisings, Ruud Cox, Joris Meerts and Huib Schoots, are all employees of Improve Quality Services.

Background information
Date: 14 February 2017
Location: Improve Quality Services Eindhoven
Start: 9 am
End: 5 pm
Team A: Jos Duisings, Ruud Cox
Team B: Joris Meerts, Huib Schoots

The assignment

The assignment for the meeting is to select and execute tests on a software application. It is carried out by two teams of two testers each. This makes it possible to take different approaches and to have the teams give feedback to each other.

Division in sessions
The meeting is divided into sessions in advance. The sessions each have their own goal and their own evaluation.

  • Welcome & introduction
  • Creating a coverage outline (per team)
  • Debriefing the coverage outline
  • Drafting a test strategy (as a group)
  • Debriefing the test strategy
  • Selecting a few charters
  • Performing a test session (charter) per team
  • Debriefing the test session
  • Retrospective

Introduction of the application to be tested
The application to be tested was selected prior to the meeting. We chose Task Coach (Task Coach, 2018), an open source to-do list manager. This software is publicly available and runs on multiple platforms (Windows and Mac OS). The application is relatively simple but still offers sufficient complexity to be tested thoroughly. According to the description on the website, Task Coach is "a simple open source to-do manager to keep track of personal tasks and to-do lists. It is designed for composite tasks, and also offers effort tracking, categories, notes and more." The participants have not worked with Task Coach before and are therefore not familiar with this specific piece of software.

Creating a coverage outline
Because Task Coach is new to all participants, the software has to be explored first. Only then can we say more about what can be tested in the given time. Such an exploration of the product can be done in many different ways. During the meeting we chose to make a 'product coverage outline', inspired by the Heuristic Test Strategy Model of James Bach (Bach, 1996). The product coverage outline provides an overarching view of the product in which the functionality of the software is divided into seven categories, captured in the mnemonic SFDIPOT. This heuristic reminds the software tester that when mapping a product he can look at Structure, Function, Data, Interfaces, Platform, Operations and Time. By looking at the application from these perspectives, a relatively complete sketch is created of what the software product is and what it can do. However, the mnemonic does not only serve exploration. The 'map' of the application that is created in this way can also be used to visualize the coverage of the tests and to select the tests to be carried out.
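
To make the idea concrete, here is a minimal sketch of what a SFDIPOT-based outline for Task Coach might look like, expressed as a nested Python structure. The branches are illustrative examples drawn from observations in this report, not the outlines the teams actually produced; a real outline would be far larger and deeper.

```python
# A product coverage outline skeleton based on the SFDIPOT mnemonic.
# The example branches are illustrative only, taken from this report.
coverage_outline = {
    "Structure": ["installation", "task file", "log file", "help file"],
    "Function": ["create task", "edit task", "delete task", "undo", "templates"],
    "Data": ["task fields", "dates and times", "categories", "notes"],
    "Interfaces": ["GUI", "import/export", "keyboard shortcuts"],
    "Platform": ["Windows", "Mac OS"],
    "Operations": ["primary process", "use by multiple users"],
    "Time": ["reminders", "time zones", "summer/winter time", "date formats"],
}

for category, branches in coverage_outline.items():
    print(category, "->", ", ".join(branches))
```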

The exercise
We decide that both teams will be given half an hour to create a product coverage outline and that each team is free to decide in which format the outline is presented. The choice of a half-hour time limit is mainly motivated by the fact that the software must also be tested before the end of the day. There is no room to spend much more time on making the product outline.

It turns out that half an hour is not enough time for mapping an application that, despite its claim to be simple, still contains a lot of functions. Team B decides to create a mind map in which the seven categories are elaborated. This mind map is not finished after half an hour. Especially the branch 'Function', in which the functions of the application are described, is very large and has a deeply nested structure. To build this structure, team B explores the application by clicking through it and captures the functionality (in text or in images) in the mind map. Due to time constraints, team B presents a product outline that is not finished. This triggers a discussion about what we expect the mind map to look like given the available time. Expectations, such as completeness, can be related to the purpose of the mind map. The discussion makes clear that no explicit goal was defined for drawing up the product outline, which makes the result of the assignment difficult to assess.

Preference for the mind map
A mind map is a tool that software testers regularly use to create a product outline. The tree structure of the mind map lends itself to classifying (grouping) properties. It is easy to start with an empty mind map, enter the categories from SFDIPOT and expand these. Moreover, a navigable structure is created this way. The mind map forms a kind of geographical map in which you can zoom in and out to determine the location of something in relation to the whole. Software essentially consists of zeros and ones, and the software product is an abstract concept. By making the 'map', the abstract software gets a concrete shape. A mind map can also be used as an instrument for reporting on the results and the progress of the tests. So there are a number of good reasons to work with a mind map from the beginning.

Team A starts the session by creating a mind map but after a short time switches to a different form. When studying Task Coach, the team finds out that there is a help file in the application. A short study shows that the help file describes a large part of the application. The categories from SFDIPOT are all sufficiently represented in the file, so team A chooses to use this file, converted to Word format, as a product outline. By marking the categories in the document with different colors, structure is added to the document. In addition, the table of contents provides a global overview of the functionality in the application; the details are covered in the paragraphs. In this way, team A delivers a product outline that is relatively complete within the stipulated time.

Drafting the test strategy
The result of the first session is a product outline. With this outline and the knowledge gained during the exploration of the software it is easier to discuss a test strategy. Because exhaustive testing is not feasible for the majority of software applications, and in many cases does not yield better information, it is desirable to make choices with regard to the test work to be performed. We capture these choices in the test strategy.

By making the product outline we have gained insight into a number of aspects of the application. We take these aspects into account and aim to indicate per category whether it needs to be tested and, if so, to what depth. The most decisive factor in that decision is the risk the business runs when the software is put to use. Risk is a multifaceted concept and can only be determined if the software product is viewed from different angles. The perspective of the tester alone is not sufficient to get a good picture of the risks of a software product. During the second session it becomes clear that a representative is missing who can reason about risk from the perspective of a user or of the organization. This point is also discussed in the debriefing of the second session and we conclude that for that reason the risk assessment is incomplete.

Assessing risk
The Heuristic Test Strategy Model discusses three dimensions that influence a test strategy: the project, the product and the requirements set for that product, the quality criteria. Risks play a role in each of these dimensions. In the exercise, we decide not to look at the project risks. As far as product risks are concerned, we make our own assessment of the categories of the product. We consider the categories that are used the most, that are the most prone to errors and in which a possible error has the most impact. Because there are no users involved in the exercise, we try to place ourselves in the role of the users. In this way the following product categories are identified.

  • The primary process from the Operations category. This will tell us which functionality is used the most.
  • Use of the application by multiple users, from the Operations category. We recognize risks associated with synchronization: tasks are not updated properly.
  • The importing and exporting of tasks from the Interfaces category.
  • Dealing with reminders from the Function category. Reminders are crucial for not missing appointments.
  • Dealing with date and time from the Time category. Date and time play an important role in planning tasks, they touch the core of the application.

After we have identified the categories of the product, we try to assess which quality aspects of the product matter most. Here too, we decide to use our own assessment as a guideline. The following quality aspects are named:

  • Functionality
  • Usability
  • Charisma

Coverage
We briefly discuss the coverage of the tests that we want to carry out. Since the tasks maintained in Task Coach can have different statuses, the tester can visualize the coverage of the executed tests from the perspective of state transitions. The state transitions are covered by the different paths through the application that a user travels, and these paths are helpful when mapping the coverage. Furthermore, coverage can be looked at from the perspective of the test data used.
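
As a sketch of what such state-transition coverage might look like, the snippet below enumerates the transitions of a small task life-cycle model. The status names are hypothetical simplifications; Task Coach's actual statuses may differ.

```python
# A hypothetical, simplified state model of a task; each (from, to)
# pair is one transition that at least one test should cover.
transitions = {
    "inactive": ["active", "deleted"],
    "active": ["completed", "deleted"],
    "completed": ["active", "deleted"],  # e.g. un-completing a task
}

pairs = [(src, dst) for src, dsts in transitions.items() for dst in dsts]
for src, dst in pairs:
    print(f"cover: {src} -> {dst}")
print(f"{len(pairs)} transitions to cover")
```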

Drafting the charters
We decide to make two charters (Session Based Test Management) and to execute them. We prioritize the product categories mentioned in the risk analysis based on our own insights. From this we conclude that 'the primary process' and 'dealing with date and time' are the two most important product categories. We translate these two categories into charters. In one charter we will look at the primary process, in the other charter the handling of date and time will be investigated. Each team selects a charter. After the charter has been executed, each team reports the work done to the other team in a debrief session. During this short evaluation, the tester tells his story and answers any questions. The evaluation has several goals, such as assessing whether the mission was successful, translating new insights into follow-up sessions, looking at notes and descriptions of findings, and coaching the tester. Through the debrief, the understanding of the application under test grows.

The charter for testing the primary process
To formulate a starting point for this charter, we look at the risks that can be associated with the execution of the primary process. There is a risk that no task can be created. Or it could be that the task is created but errors occur when saving or modifying a task or changing the status of a task. These considerations lead to the following mission that serves as the starting point for the charter: “Explore the basic flow of tasks using CRUD to discover/test the life cycle of a task”.

The charter for testing the handling of date and time
Prior to drafting the charter for testing the handling of date and time, the following test ideas are mentioned:

  • Start and end times are the same
  • The end time is before the start time
  • The date of a task is far in the past or far in the future
  • System time
  • Work in different time zones
  • Dealing with winter time and summer time
  • Different date formats

With these test ideas in mind, the following charter is drawn up: “Explore tasks using date/time to discover date and time related bugs”.
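
As an illustration, the test ideas above can be pinned down as concrete boundary cases before execution. The sketch below assumes one plausible (but unconfirmed) consistency rule, namely that a task's end should not precede its start, and simply evaluates the cases against that rule; the session notes below show what Task Coach actually does with such input.

```python
from datetime import datetime

# One plausible consistency rule for task dates (an assumption, not a
# documented Task Coach rule): the end should not precede the start.
def dates_are_ordered(start, end):
    return start <= end

cases = {
    "start and end are the same": (datetime(2017, 2, 14, 9), datetime(2017, 2, 14, 9)),
    "end before start": (datetime(2017, 2, 14, 9), datetime(2017, 2, 13, 9)),
    "date far in the past": (datetime(1853, 1, 1), datetime(1853, 1, 2)),
    "date far in the future": (datetime(3017, 1, 1), datetime(3017, 1, 2)),
}

for label, (start, end) in cases.items():
    verdict = "ok" if dates_are_ordered(start, end) else "violates ordering"
    print(f"{label}: {verdict}")
```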

Execution of the charters

Testing the primary process
The charter for testing the primary process (basic flow) begins with the creation of a task. Several tasks are created using the button on the taskbar. Subsequently, several underlying tasks are created for a single task, up to 8 levels deep. Under one of the underlying tasks, a new tree structure of underlying tasks is created. During the creation of the task structure, questions arise about the maximum depth of the task structure and about the sorting of the underlying tasks. In addition, it is found that it is possible to delete underlying tasks and then to undo these deletions. It is proposed to investigate this functionality further in a separate charter, since it is suspected that it does not always work correctly. The resulting tree structure is removed by means of the 'Delete' button on the taskbar. After this, all tasks have disappeared.

On the taskbar there is also a button that offers the possibility to create new tasks from a template. There are two default templates available. Tasks are created with these templates. It appears that it is not possible to create underlying tasks from a template. The question is why this functionality is not available for underlying tasks. Finally, the created tasks are deleted.

To get a better picture of how templates work in the application, the help text about templates is read. Furthermore, the team explores the menu structure in search of more functionality that could be related to the use of templates. From the 'File' menu it is possible to save tasks as templates, to import templates and to edit templates. A template is created.

The dialog that appears when choosing to import a template shows that template files have the extension '.tsktmpl'. But when a task is saved as a template, it is not possible to find out whether a template file is created from this task and, if so, where this file is stored. The newly created templates are visible when one chooses to create a new task based on a template from the toolbar.

A task is created and saved, and the team checks whether it can be imported. Again, all created tasks are deleted by selecting the menu option Edit > Delete. The team looks a bit further at the options for removing tasks. It appears that there is a keyboard shortcut for removing tasks: Ctrl + DEL. The team wonders why it is not possible to simply use the Delete key.

In summary, this session looked at creating, viewing, editing and deleting tasks and underlying tasks. A finding was made regarding the undoing of changes: undoing the removal of tasks in a tree structure of tasks and underlying tasks does not reproduce the original structure. Following the session, it is proposed to look more extensively at the use of templates in new charters. The functionality for creating, viewing, editing and deleting tasks and underlying tasks also deserves more attention, especially because this functionality can be invoked from many different places in the application. The various possibilities have not all been tested in the first session. Furthermore, a charter could be made for creating a task that depends on another task.

Feedback
At the end of the session, the report of the session is discussed with a member of the other team, with the aim of obtaining feedback about the course of the session in a debrief. The feedback shows that there was not enough time to complete the charter; another thirty to sixty minutes would be needed. It is noted that the description of the detected bug is not clear enough. It also becomes clear that not all possible statuses of a task have actually been addressed in the session. Finally, a new charter is suggested for testing filtering and sorting.

Testing the handling of date and time
The session starts with a clean installation of the application. First a new task is created. Attention is given to the Dates tab, on which various dates can be entered. The date and time can be changed in separate input fields. The time can be changed by means of a dropdown or by using the arrow keys on the keyboard. It turns out that '00:00' cannot be used as a time. It appears that the range of times can be set under the 'Preferences' menu. After an adjustment, '00:00' can be entered.

It is noted that the planned start date and the planned end date are not included in the calculation of duration. But it is unclear what these dates are used for. When the planned end date is in the past, the task turns red. When the planned end date is in the future, the task turns purple. It is remarkable that the planned start date can be after the planned end date. Apparently there is no validation on the order of dates. Dates that are far in the future are accepted as valid dates. To change a date, the task first has to be closed and then opened again.

An interesting finding occurs when the year of the planned start date is set to a date before 1900. If this is the case, the planned start date cannot be changed after closing and reopening the task. During the study of this finding, the team finds out that the application creates a log file (in the Documents folder) in which any errors are logged. The attempt to adjust the planned start date results in the following error message: "ValueError: year=1853 is before 1900; the datetime strftime() methods require year >= 1900". After this finding, further attention is paid to other functionality around time and date. For example, a reminder on a task is set to 5 minutes. The reminder works.
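
The logged message points at the implementation: Task Coach is written in Python, and Python 2's strftime() refused years before 1900 (Python 3 lifted this restriction). A minimal sketch that reproduces the underlying limitation, assuming a Python 2 interpreter:

```python
# Python 2 reproduction of the limitation behind the logged error;
# Python 3 accepts years before 1900 in strftime().
from datetime import datetime

try:
    datetime(1853, 1, 1).strftime('%Y-%m-%d')
except ValueError as e:
    print(e)
    # year=1853 is before 1900; the datetime strftime() methods
    # require year >= 1900
```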

In addition to planning tasks, Task Coach can also be used to keep track of the time spent per task. After some research it appears that this functionality is complex and that it is not easy to find out how Task Coach deals with time spent. Time tracking can be started with a button, but can also be done by increasing the time spent in a separate tab. The team notices that when entering an hour of time spent, the actual time spent shown on the tab is just a few seconds. It is possible to aggregate time spent at month level. If we do this, we see that the descriptions that we have entered for each entry are combined in a single text field.

The time that an application uses depends on the time settings of the system. For this reason, the team manipulates the system time of a MacBook Pro. The calendar is changed to a Coptic calendar. This adjustment causes the application to crash after startup. In the log a line appears mentioning an error relating to an invalid date format.

Finally, the team looks at the connection between the budgeted time and the time spent. Task Coach is able to calculate the remaining time on the basis of these variables. Some tests are performed with variations in hours, minutes and seconds. Task Coach handles all these variations correctly.

Feedback
A team member verbally reports on the session to a member of the other team in a debrief. The feedback shows that this report contains a lot of detailed information. To put the detailed information in context, a framework is needed; the product outline could have been used as such a framework. One new charter emerges from the feedback, namely testing the synchronization of tasks between systems with different system times.

Conclusion
The meeting is concluded with the completion of the test sessions. Looking back, in one day and with two teams we conducted a number of tests on an application that was unknown to us beforehand. We have shown that techniques and methods exist that help the tester to acquire knowledge about the application, develop a strategy and perform tests. Exploring the application provides insights that serve as a starting point for risk assessments and for conducting test sessions with a well-defined goal. By quickly arriving at concrete and well-substantiated tests, the tester provides valuable feedback on the application in a short period of time. The test sessions are debriefed, and this provides starting points for going deeper where necessary.

Due to the popularity of agile methods, it is common for the tester to be asked to test something while he has, at that moment, incomplete insight into the functionality of the application. Moreover, the tester is expected to deliver results within a limited time. This requires an approach in which the tester quickly draws up and executes a strategy by modeling, thinking critically, discovering and investigating. This is the approach that we applied during the meeting described above.

A Clash of Models
The creation of the product coverage outline led to quite different outlines being delivered by the teams. These differences and their possible causes are discussed in a separate article, written by Joris Meerts and Ruud Cox. The article is called A Clash of Models.

References
Bach, J. (1996). Heuristic Test Strategy Model.
Kaner, C., Bach, J., & Pettichord, B. (2001). Lessons Learned in Software Testing. John Wiley & Sons.
Task Coach. (2018). Retrieved from Task Coach: http://taskcoach.org/

Let’s stop talking about testing, let’s start thinking about value

This year Alex Schladebeck and I did two keynotes titled "Let's stop talking about testing, let's start thinking about value" at QA Expo in Spain and TestNet in the Netherlands. This blogpost summarizes the most important points we made in our talk.

The keynote was inspired by some of our frustrations: "Testing is under appreciated" (Alex) and "Most testers are unable to explain what we do" (Huib). I already wrote about my frustration back in 2016. That blogpost was about my frustration that most testers cannot come up with a decent definition of testing. And even worse: a big majority of the people who call themselves professional testers are not able to explain what testing is and how it works! They have trouble explaining what they are testing and why they are doing specifically the thing they are doing! How can anybody take a tester seriously who cannot explain what he is doing all day?

Alex’s frustration is that testing is not valued by others. Developers are seen as the rockstars of the project because they create the software that adds value. But why are testers often not valued?

  • Lowered expectations for testing expertise by stuff like ISO standards and ISTQB: I wrote about certification and standards before. ISTQB and standards put too much emphasis on process and documentation rather than on the real testing. By assuming there can be a standard, you say that there is one best way to organize and document your testing. But isn't your test strategy heavily dependent on its context? When using standards we tend to focus on complying with the standard and lose sight of the real goal. This sort of goal displacement is a familiar problem in many situations. Also, the idea that you can learn how to test in a couple of days of training is dangerous. Remember lesson 272: if you can get a black belt in only two weeks, avoid fights (Lessons Learned in Software Testing: A Context-Driven Approach by Kaner, Bach and Pettichord).
  • Avoiding controversy: nowadays more and more people advocate being nice! I think that we confuse being nice with being kind! An interesting article about this phenomenon was written by Marcia Sirota. Of course we need to respect other people, but to push the testing craft forward we need to have firm discussions and disagree with others way more often. Being nice doesn't help. Serious feedback does!
  • We devalue our own work by becoming tool jockeys: unfortunately there are too many testers (and teams) out there who focus on automating as much as possible. Why? Because they can! The testers in those teams are often so busy doing automation that they do not have the time to test anything…
  • We do not stand up for our craft: we do not fight back enough when other people say they do not need testers or tell us how to do our jobs, to name a few examples. We have to learn "tester self-defence": to stand up to people who try to dictate how we do our jobs. We have to learn how to organize effective (and efficient) testing. And we need to learn how to talk about our work in a way others understand. This requires practice!
  • We do not learn or practice enough: testing is difficult! We have to deal with complexity, ambiguity, change and people. Testing is a craft, not something you do as a hobby. To become a craftsperson, you have to practice (also see my blogpost: a road to awesomeness).
  • We don’t know how to talk about testing: as said before, how can anybody take a tester seriously who cannot explain what he is doing all day? To be really valuable, testers need to learn to talk about their testing in a way others understand and find valuable.

So looking at these things, are we okay with this? I don’t think so. But what can we do about it? We are trapped in a vicious circle: we need to talk about testing! It is good for our soul to explain what we did and why, but we don’t know how to talk about our testing in a way that others understand.

Alex and I listed some traps:

  • Stories decay into Numbers: testing is about providing information to enable others to make informed decisions. The number of test cases or the number of bugs does not really matter. What matters is the story about the product and the risks involved. Those numbers might back up your story, but they do not tell the story!
  • A performance decays into Deliverables: testing is about finding problems, collecting information, exploring and experimenting to discover new information. Sure, documents and stuff sometimes help us, but testing is a performance. (James Bach talks about that here: a test is a performance and here: Test cases are not testing: towards a culture of test performance).
  • Test strategy decays into Test execution: when was the last time you saw a really good test strategy? In many cases I find master test plans where everything is described except the strategy. It is hard to create a test strategy and it is even harder to write it down or visualise it. Many testers I meet focus on test execution: creating test cases and scenarios and calling that the strategy.
  • Tool supported testing decays into Automation: testing using tools is a great idea. It gives us more opportunities to test and improves testability. But as said earlier: it becomes a problem when we focus too much on automation or even try to automate all our work. We cannot automate testing.
  • Many kinds of coverage decays into One kind of coverage: testing benefits from diversity! You find a certain type of bug with a certain test technique or approach. By using lots of different views, approaches and techniques, we find more problems.
  • Learning activity decays into Formalized static tasks: testing is learning about the product for our stakeholders. It’s not about verification and validation; there is much more to it. I like to replace such words: “challenge the belief that” (for verify) and “investigate” (for validate). Those activities provide the valuable information we need.
  • Balance risk and uncertainty decays into Certainty: people like to be comfortable and we like to give others comfort as well. But as testers we need to stay unsure when others are sure. It is our job to keep asking critical questions. We are not here to give confidence or comfort, we are here to demolish unwarranted confidence! Also keep in mind that to find new, unexpected problems, we have to go where nobody has thought to go and nobody went before us. That causes confusion, which feels uncomfortable to many. I learned to be okay with confusion, since it is essential for learning new things.
  • Business Impact decays into Bugs: some testers are frustrated when bugs aren’t fixed. But that is part of the deal: some things that bug us are just not important enough.
  • Product story decays into Testing jargon: I think this is the main reason people do not listen to testers. We talk too much in jargon and about the details of what we do. We say stuff like: “We’ve executed 17 test cases in the system test, we’ve automated 50% of the test cases for area C and now have 30% code coverage. We found three major and five medium bugs”. And we are surprised that nobody listens. We need to talk about the product! So you have found 8 bugs? Who cares? Talk about the risks involved, about the threats to the value of the product.

So maybe testers need to stop talking about testing?
Well, not exactly. We need to remember that the information from testing enables other people to do better work! So the testing itself isn’t always interesting, but the story about the results and the impact on the business is!

Just imagine a conversation between a tester and the PO.
Tester: The testing is going well!
PO: Okay, great. How is the product?
Tester: It sucks!

The role of testers

What is the role of testers? Testers see things for what they are. Testers help others make informed decisions about quality, because we think critically about software. This means creating awareness about the state of the product by staying sceptical when everybody else is sure. So we have to know what our clients want from testing. What information do they need to make their decisions? Project managers have one big question to be answered: are there problems that threaten the on-time, successful completion of the product?

Product Risk Knowledge Gap

I like to explain testing using the “Product Risk Knowledge Gap”, as we teach it in Rapid Software Testing (RST). Knowledge gaps are the things that we need to learn in order to make good decisions. We need to learn about the product to close the knowledge gap. The more we know, the less risky our decisions will be. Testers should focus on questions like: what does the client need to know right now? What might hinder the successful completion of the product? What role do I need to take on in this situation to ensure we achieve our aims? Does this information matter? To whom?

But there is a way to avoid talking about testing: just find enough questions and problems so that your stakeholders simply won’t have time to ask you questions back! Also, if you tell a credible story and give them the information they need, nobody cares how you got the information in the first place. In this case you need to stand your ground: tell people what they need to hear, despite what they want to hear. Again: it’s your job to see things for what they are. If you give people the chance to doubt what you are doing, because you do not deliver the information they need, they will start asking questions about how you do your job. And if you have to talk about how you do your testing, then be prepared to tell a damned good story about it. Something they can understand and relate to.

The testing story

The testing story from Rapid Software Testing can help you tell that story. Tell a story about the product, what you saw, what you did to gather that information, and how valuable that information is. (See “Braiding The Stories” by Michael Bolton). The testing story consists of three stories that feed into each other:

  1. The product story: a qualitative report on how the product can work, how it fails, and how it might fail in ways that matter to our clients.
  2. The testing story: to make the product story credible, the testing story is about how we configured, operated, observed, and evaluated the product; what we actually did and what we actually saw.
  3. The quality of testing story: to make the testing story credible, tell a story about the quality of the testing. Describe why the testing we’ve done has been good enough. It includes details on what made testing harder or slower and what we might need or recommend in order to provide better, more accurate, more timely information.

Modern testing
As testers we do way more than only testing. We are enablers of testing, doing all kinds of other things to be a service to the team and our clients. Researching this, Alex and I found the Modern Testing principles by Alan Page and Brent Jensen. There is a lot of good stuff in there, and yet we feel that there is not enough focus on the actual testing in their principles. Furthermore, we think that the seventh principle, “We expand testing abilities and knowhow across the team; understanding that this may reduce (or eliminate) the need for a dedicated testing specialist.”, is formulated too negatively. We prefer not to talk about the dedicated testing specialist as a function, but rather about testing skills. And although we think there should not be a need for a dedicated testing specialist, we see too many people in teams who do not like testing. Passion (or at least motivation) for what you do is a precondition for becoming good at anything. So we created our own testing principles (inspired by the Modern Testing principles, of course):

  1. Deliver insight into status of the product
  2. Practice (and enact) critical thinking
  3. Enable testing: lead, coach, teach, support
  4. Discuss testability
  5. Explore & experiment
  6. Promote waste removal / avoidance
  7. Help to accelerate the team
  8. Advocate continuous improvement
  9. Foster quality culture
  10. Keep critical distance and close social distance

Stop talking about testing?

So do we need to stop talking about testing? Not really. But we need to talk more about the product, risks and value. We can talk about the actual testing only to back our story up or when people ask questions. And even then, we need to make our story understandable and relatable to others. Make sure you are a service to the team. We created our own testing principles to explain what value we add. We also have a pretty clear story on what testing is and how it adds value. We got there by practicing our stories many times. But we also figured out our own testing paradigm. That makes it easier to talk about what we do and how we add value.

Software development is research & development: a series of experiments that ultimately lead to a suitable solution. We are dealing with customers who do not know exactly what they want. Furthermore, we are dealing with complexity, confusion, changes, new insights and half answers. That requires research. As a team we are looking for what works and what doesn’t. Testing is of great importance for this! Testing provides insight and overview. Testing shines a light on the actual status of product and project. These insights enable others to make better decisions and eventually make better products.

The slides are here.

Note: this is an Alex Schladebeck and Huib Schoots co-production and this blogpost was co-authored by Alex. So where you read I, you could read Alex and I.

Creating a Test Strategy

At EuroSTAR 2017 I did an experiential workshop with Pekka Marjamaki and Carsten Feilberg called “The Magic of Sherlock Holmes – Test Strategy in a blink of an eye”. The goal of this full-day workshop was to teach how to create a Test Strategy rapidly, so you can start testing as soon as possible. This blogpost summarizes what we taught and shares the example I made for the participants.

What is a Test Strategy?

In the workshop we defined Test Strategy as a solution to a complex problem: how do we meet the information needs of the testers & stakeholders in the most efficient way possible? In Rapid Software Testing we define Test Strategy as “the set of ideas that guide your test design or choice of tests to be performed”. We also talk about logistics and the test plan. Logistics is the set of ideas that guide your application of resources to fulfilling the test strategy; the test plan is the set of ideas that guide your test project. A Test Plan is the sum of the strategy and logistics.

Rikard Edgren did an excellent workshop on Test Strategy at EuroSTAR 2014. In his workshop he says: test strategy contains the ideas that guide your testing effort, and deals with what to test and how to do it. (Some people mean test plan or test process, which is unfortunate…). It is in the combination of WHAT and HOW that you find the real strategy. If you separate the WHAT and the HOW, it becomes general and quite useless.

What influences the Test Strategy?

Your Test Strategy is influenced by many factors.

  • Context: your testing is influenced by the details of the specific situation, like the information available, the tester(s) doing the testing, what has been tested before, what tools and environments are available, how much time you have, etc.
  • Missions: what do your stakeholders need to know about the product? 
  • Risks: testing is mostly motivated by problems that might happen (risks). We want to find the important problems in the product. 
  • Product: products have many dimensions. By modelling the product we find important and unique aspects of the product.
  • Quality Criteria: various criteria or requirements that define what the product should be for the stakeholders.
  • Testing: your testing changes the Test Strategy constantly. Each experiment teaches you more about the product and the risks involved.

How to create a Test Strategy?

To create a Test Strategy, you have to examine the factors mentioned above. This can be done in several activities (which do not need to be done in this specific order). Most likely you will do this in an iterative way, building your Test Strategy as you go.

  1. Missions for your testing
  2. Product analysis
  3. Oracles & information sources
  4. Quality characteristics
  5. Context: project environment
  6. Test strategies

I like to use the Heuristic Test Strategy Model (HTSM). It reminds me what to think about when I am creating my Test Strategy and tests.

The HTSM is a model which consists of several sets of heuristics (more about heuristics here and here).  The full model can be found here.

  • Project Environment helps to understand our context and missions: MIDTESTD (mission, information, developer relations, test team, equipment & tools, schedule, test items, deliverables).
  • Product Elements helps to identify dimensions and factors of the product that could be examined in a test: SFDIPOT (structure, function, data, interfaces, platform, operations, time).
  • Quality Criteria helps to identify value and threats to various criteria of the product: CRISP DUCCS (capability, reliability, installability, security, performance, development, usability, charisma, compatibility, scalability). In this case you could also use the software quality characteristics by the Test Eye.
  • Risk analysis reveals potential problems. Risks motivate your testing. But testing is risk analysis in itself: after analysing potential risks, your testing informs you about actual problems and teaches you new aspects of the product, which helps you identify new risks. Each test has its own strategy; the HTSM lists general techniques in FDSFSCURA (function testing, domain testing, stress testing, flow testing, scenario testing, claims testing, user testing, risk testing, automatic checking). Coming up with good Test Ideas is an important skill. Erik Brickarp has an excellent blog post called How to come up with test ideas.
  • Identifying Oracles and Information sources helps in learning about the product and identifying potential problems. To design a good test strategy we need to know what’s important.

Examples of Test Strategy

Before giving you my example, I would like to link to a great example of how to create a thorough Test Strategy: the Screen Pluck Test Analysis example by Rikard Edgren.

The exercise in the workshop was defined as follows:

Product: tinyurl.com/SherlockES

Approach used to create my Test Strategy below:

  1. Look at mission and things we know (from the class) [1 min]
  2. Explore the Wix platform website [1 min]
  3. Explore the Casies website (built with Wix) and create a SFDIPOT mind map while learning about the product [5-10 min]
  4. Risk analysis [5-10 min]
  5. Think of test ideas / approaches to deal with risks [5-10 min]
  6. Wrap-up. Create testing story about what I know already. List next steps [5 min]
  7. Tidy document and add some comments to make it readable for students

Total time used: 50-60 min

1. What do we know (and what important questions do I still have):

  • No developers available –> No access to code
  • Target customers? Who are they?

(Considering the short time period, I chose not to do a thorough context analysis using MIDTESTD; if I had more time, I would do so.)

Mission:

Casies is a web shop built with the Wix platform where customers can buy a case for their mobile phone. Your mission is to find problems we want to fix before release. The owner of the website needs information to decide if this web shop can be released.

Most important quality criteria:

  • Usability & charisma
  • Reliability and security of the purchase process
  • Functionality
    • Find, sort & filter
    • Purchase, cart, payment
    • Bestsellers
    • Contact
  • Performance

2. Look at Wix platform site

(url: https://www.wix.com/features/main)

Product exploration: look at website about Wix platform

Claims about the product:

  • Easy Drag and Drop
  • Free & Reliable Hosting
  • App market –> what else is there?
  • Mobile Friendly
  • loads of templates
  • SEO

3. Explore Wix casies

Start using the product

Product exploration: play with casies website using SFDIPOT

Download Xmind mind map (created in Xmind Zen beta4)

4. Risks

The risks mentioned here are probably still too vague in some cases. Since risk analysis is a continuous process, I will update the risks later, making them more concrete and actionable. I will also add more risks while testing.

  • Web shop not available
  • Web shop not easy to use
  • Target customers do not like the web shop
  • Web shop is not secure: customer data accessible by 3rd party
  • Customer cannot add items to cart
  • Customer cannot buy items in cart
  • Customer cannot find the items wanted
  • Web shop is not easy to find

5. Risks – Testing

Test ideas

Used: the document “Software Quality Characteristics” by the Test Eye

(More info on session based testing: here)

6. Testing Story & Next Steps

Looking at the website I found that the web shop doesn’t look complete to me: there is no possibility to check out and pay. Is this okay? To be able to do thorough testing to fulfil the mission “Your mission is to find problems we want to fix before release. The owner of the website needs information to decide if this web shop can be released”, the website needs to be completed and payment functionality needs to be added. I am also interested in the maintenance model: how can I add cases? This would be very handy for creating more test data and playing with the parameters in there to see how that comes out in the shop. Does this need to be part of my testing?

The results of my short initial exploration are captured in the SFDIPOT mind map I made while playing & interacting with the product. After that I made an initial risk analysis. I haven’t gone deep on anything yet.

The next step will be talking to the product owner about my initial test strategy. If the payment module isn’t available soon, I will start with testing the first three charters, although I will not be able to fully do the purchase process charter, so I have to split this charter and focus on the cart part only.

  • Purchase process: cart & payment including investigation of fields
    (using the Test heuristic cheat sheet)
  • Finding cases – sorting & filtering cases
  • GUI tour: check all links and info

Used heuristics

Below is an abstract of the “Software Quality Characteristics” by the Test Eye, used as heuristics for creating this Test Strategy.

Usability

  • Affordance: product invites to discover possibilities of the product.
  • Intuitiveness: it is easy to understand and explain what the product can do.
  • Minimalism: there is nothing redundant about the product’s content or appearance.
  • Learnability: it is fast and easy to learn how to use the product.
  • Memorability: once you have learnt how to do something you don’t forget it.
  • Discoverability: the product’s information and capabilities can be discovered by exploration of the user interface.
  • Operability: an experienced user can perform common actions very fast.
  • Interactivity: the product has easy-to-understand states and possibilities of interacting with the application (via GUI or API).
  • Control: the user should feel in control over the proceedings of the software.
  • Clarity: is everything stated explicitly and in detail, with a language that can be understood, leaving no room for doubt?
  • Errors: there are informative error messages, difficult to make mistakes and easy to repair after making them.
  • Consistency: behavior is the same throughout the product, and there is one look & feel.
  • Tailorability: default settings and behavior can be specified for flexibility.
  • Accessibility: the product is possible to use for as many people as possible, and meets applicable accessibility standards.
  • Documentation: there is a Help that helps, and matches the functionality.

Charisma

  • Uniqueness: the product is distinguishable and has something no one else has.
  • Satisfaction: how do you feel after using the product?
  • Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
  • Attractiveness: are all types of aspects of the product appealing to eyes and other senses?
  • Curiosity: will users get interested and try out what they can do with the product?
  • Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
  • Hype: should the product use the latest and greatest technologies/ideas?
  • Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
  • Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
  • Directness: are (first) impressions impressive?
  • Story: are there compelling stories about the product’s inception, construction or usage?

Reliability

  • Stability: the product shouldn’t cause crashes, unhandled exceptions or script errors.
  • Robustness: the product handles foreseen and unforeseen errors gracefully.
  • Stress handling: how does the system cope when exceeding various limits?
  • Recoverability: it is possible to recover and continue using the product after a fatal error.
  • Data Integrity: all types of data remain intact throughout the product.
  • Safety: the product will not be part of damaging people or possessions.
  • Disaster Recovery: what if something really, really bad happens?
  • Trustworthiness: is the product’s behavior consistent, predictable, and trustworthy?

Security

  • Authentication: the product’s identifications of the users.
  • Authorization: the product’s handling of what an authenticated user can see and do.
  • Privacy: ability to not disclose data that is protected to unauthorized users.
  • Security holes: product should not invite to social engineering vulnerabilities.
  • Secrecy: the product should under no circumstances disclose information about the underlying systems.
  • Invulnerability: ability to withstand penetration attempts.
  • Virus-free: product will not transport virus, or appear as one.
  • Piracy Resistance: no possibility to illegally copy and distribute the software or code.
  • Compliance: security standards the product adheres to.

Performance

  • Capacity: the many limits of the product, for different circumstances (e.g. slow network.)
  • Resource Utilization: appropriate usage of memory, storage and other resources.
  • Responsiveness: the speed of which an action is (perceived as) performed.
  • Availability: the system is available for use when it should be.
  • Throughput: the product’s ability to process many, many things.
  • Endurance: can the product handle load for a long time?
  • Feedback: is the feedback from the system on user actions appropriate?
  • Scalability: how well does the product scale up, out or down?


Final thoughts

As my example shows: you can create a Test Strategy in an hour. Of course this Test Strategy is not complete. But after the first tests (3 sessions) we learn and discover more about the product, so we can identify new risks, which inform new missions and will help us come up with new Test Ideas. Our Test Strategy will grow over time!

Extra info:

  • The slides are here.
  • The pictures and flipcharts from the workshop are here.


Does certification have value or not?

I read a blogpost in Dutch named “Does certification have value or not?” by Jan Jaap Cannegieter. I wanted to reply, but there was no option to reply, so I decided to turn my comments into a blogpost. Since the original blogpost is in Dutch I have translated it here.

“The proponents claim that with certification you prove to have a foundation in testing, you possess certain knowledge and it supports education.” (Text in blue is from the blogpost, translated by me.)

Three things are said here:

  1. prove to have a foundation
    Foundation? What foundation? You learn a few terms/definitions and an over-simplified “standard” process? And how important is this anyway? Also, the argument of a common language is nicely debunked by Michael Bolton here: “Common languages ain’t so common”
  2. possess certain knowledge
    When passing an exam, you show that you are able to remember certain things. It doesn’t prove you can apply that knowledge. And is that knowledge really important in our craft? I think knowledge is over-appreciated and skills are undervalued. I’d rather have someone who has the skills to play football well than somebody who merely knows the rules. From a foundation training, wouldn’t you at least expect to learn the basic testing skills? In no ISTQB training do students use a computer. Imagine giving someone a driver’s license without them ever having sat in a car…
  3. supports education
    Really? Can you tell me how? I think the opposite is true! As an experienced teacher (I also did my share of certification training in the past), my experience is that there is too much focus on passing the exam rather than on learning useful skills. Unfortunately, preparing the students for the exam takes a lot of time and focus away from the stuff that really matters. Time I would rather use differently.

Learning & tacit knowledge

So how do people learn skills? There are many resources I could point to. Try these:

In his wonderful book “The Psychology of Software Testing” John Stevenson talks about learning on page 49:

The “sit back and listen” approach can be effective in acquiring information but appears to be very poor in the development of thinking skills or acquiring the necessary knowledge to apply what has been explained. The majority of trainers have come to realise the importance of hands on training “Learn by doing” or “experiential learning”.

John points to resources like: Learningfromexperience.com and the book “Experiential learning: experience as the source of learning and development” by David Kolb. Also Jerry Weinberg has written books on experiential learning.

The resources on learning skills mentioned earlier will tell you that experienced people know what is relevant and how things are related. Practice, experimentation and reflection are also important parts of learning. Learning a skill depends heavily on tacit knowledge. On page 50 of his book John Stevenson writes:

Päivi Tynjälä makes an interesting comment in the International Journal of Educational Research: “The key to professional development is making explicit that which has earlier been tacit and implicit, and thus opening it to critical reflection and transformation” – This means that what we learn may not be something we can explain easily (tacit) and that as we learn we try to find ways to make it explicit. This is the key to understanding and knowledge: taking something which is implicit and making it explicit, and therefore being able to reflect on what is learned and explain our understanding.

And since testing is collecting information or learning about a product, the importance of tacit knowledge also applies to testing: John writes in his book on page 197:

However testing is about testing the information we do not know or cannot explain (the hidden stuff). To do this we have to use tacit knowledge (skills, experience, thinking) and we need to experience it to be able to work it out. This is what is meant by tacit knowledge.”

Back to the blogpost:

The opponents say certification only shows that you’ve learned a particular book well; it says nothing about the tester’s ability and can be counterproductive because the tester is trained to become a standard tester.

  • Learned a particular book
    Agree, see arguments 1 and 2 above.
  • it says nothing about the tester’s ability
    Agree, see my argumentation on skills in point 2 above: “knowledge is over appreciated and skills are undervalued”. To learn we need practice and reflection. Also tacit knowledge is an important part of learning.
  • Trained to become a standard tester
    Agree. No testing that I know of is standard. Testing is driven by context. And testers with excellent skills have the ability to work in any context without using standards or templates. Have a look at the TED Talk by Dr. Derek Cabrera, “How Thinking Works”. He explains that critical thinking is an extremely important skill. Schools (and training providers) nowadays are over-engineering the content curriculum: students do not learn to think, they learn to memorize stuff. Students are taught to follow instructions, like painting by numbers or filling in templates. To fix this, we need to learn how to think better! Learning to paint by numbers is exactly what knowledge-based certification does to testers! Read more about learning, thinking and how to become an excellent tester in one of my earlier blogposts: “a road to awesomeness”.

Comparison with driving license
Does a driving license show anything? Well, at least you have studied the traffic rules well and know them. And, while driving, it is quite useful if we all use the same rules. If you doubt that, you should drive a couple of rounds in Mumbai.

In testing we should NEVER use the same rules as a starting point: “The value depends on the context!”. Driving in Mumbai, or anywhere else, by strictly adhering to the rules will result in accidents and will get you killed. You need skills to drive a car: to be able to anticipate, observe, and respond to unexpected behaviour of others. This is what will keep you out of trouble while driving.

As I explained earlier on the TestNet website: this comparison is wrong in many ways. For a driver’s license, you must do a practical exam. And to pass the practical exam, most people take lessons! You will have driven for at least 20 hours before your exam. And the exam is not a laboratory: you go on the (real) road with a real car. A multiple-choice exam does not even remotely resemble a real situation. That is how pointless ISTQB or TMap certificates are. Nowhere in the training or the exam does the student use software, nor does the student have to test anything!

This is the heart of the problem! People do not learn how to test, but they learn to memorize outdated theory about testing. Unfortunately, in many companies new and inexperienced testers are left unattended in complex environments without the right supervision and support!

So what would you prefer in your project: someone who can drive a car (someone who has the basic skills to test software), or someone who knows the rules (someone who knows all the process steps and definitions by heart)? In addition, ISTQB states that the training is intended for people with 6 months of experience. So how are new testers going to learn during their first 6 months?

The foundation for a tester?

The argument that the ISTQB foundation training provides a basis for a beginner to start is nonsense! It teaches the students a number of terms and a practically unusable standard process. In addition, there is a lot of theory about test techniques and approaches, but the practical application is lacking. There are many better alternatives, as described in the resources earlier in this blogpost: learning by doing! Of course with the right guidance, support and supervision. Teach beginners the skills to do their work, as we learn the skills to drive a car in driving lessons: in a safe environment with an experienced driver next to us, until we are skilled enough to do it without supervision. Sure, theory and explicit knowledge are important, but skills are much more important! And we need tacit knowledge to apply the explicit knowledge in our work.

So please stop stating that foundation training like TMap and ISTQB is a good start for people to learn about testing. It isn't. Learning to drive a car starts with practicing actually driving the car.

Jan Jaap states he thinks a tester should be certified: "And what about testers? I think that they should also be certified. From someone who calls himself a professional tester we may expect some basic knowledge and knowledge about certain methods?"
I think we may expect professional testers to have expertise in different methods. They should be able to do their job, which demands skills and knowledge. We may expect a bit more from professional testers than only some basic knowledge and knowledge about methods.

"Many of the well-known certification programs originated when IT projects looked very different and, in my view, these programs did not grow with the developments. So they train for the old world."

Absolutely true.

Another point where the opponents have a point is the value purchasing departments or intermediaries attach to certificates. In many purchasing departments and intermediaries, the attitude seems to be that someone who has a certificate is also a good tester. And more is needed to be able to say that.

It is indeed very sad that this is the main reason why certificates are popular. Many people get certified because of the popular demand of organisations that do not recognise the true value of these certificates. Organisations are often not able (or do not want to spend the time needed) to recognise real professional testers and so they rely on certificates. On how to solve this problem, I did a webinar "Tips, Tricks & Lessons Learned for Hiring Professional Testers" and wrote an article about it for Testing Circus.

Learning goals & value

On the ISTQB website I found the Foundation Level learning goals. Let's have a look at them; quotes from the website are in the bullet points below.

Foundation Level professionals should be able to:

  • Use a common language for efficient and effective communication with other testers and project stakeholders.
    Okay, with an exam we can check if the student knows how ISTQB defines stuff. However, understanding what it means or how to deal with it in daily practice is very different. Also, again, a common language is a myth.
  • Understand established testing concepts, the fundamental test process, test approaches, and principles to support test objectives.
    Concepts and test process: okay, you can check if a student remembers these. However, the content is old, outdated and in many places incorrect! And I think understanding of approaches cannot be checked in a (multiple-choice) exam. Maybe some definitions, but how to apply them? No way.
  • Design and prioritize tests by using established techniques; analyze both functional and non-functional specifications (such as performance and usability) at all test levels for systems with a low to medium level of complexity.
    Design and prioritize tests? Interesting. Where is this trained? Or tested in the exam? Analyse specifications? That is not even part of the training. Applying some techniques is, but there is a lot more to designing and prioritizing tests and analysing specifications.
  • Execute tests according to agreed test plans, and analyze and report on the results of tests.
    Neither execution of tests nor analysis and reporting of test results is part of the exam. In class only the theory about test reporting is discussed, but it is never practiced.
  • Write clear and understandable incident reports.
    How do you check this with a multiple-choice exam? And how do you train this skill without actually testing software in class? There are no exercises in class that actually ask you to write such reports.
  • Effectively participate in reviews of small to medium-sized projects.
    The theory about reviews is part of the class. To effectively participate in reviews, you need to do it and learn from experience.
  • Be familiar with different types of testing tools and their uses; assist in the selection and implementation process.
    Some tools and their goals and uses are mentioned in class, so I will agree with the first part. But to assist in selection and implementation, again, you need skills.

So looking at the learning goals above, I doubt the current classes teach this. The exam for sure doesn't prove that a foundation level professional is able to do these things. A lot of promises that are just wrong! Certification training like ISTQB-F and TMap, as it is now, is simply not worth the money! The training and exam mostly take 3 days and cost around 1.700 euro in the Netherlands. I think that is a crazy investment for what you get in return… There are better ways to invest that money, time and effort!

I think that a more valuable 3-day foundation training is doable, but surely not the way it is done now by TMap or ISTQB. I wrote a blog post about it years ago: "What they teach us in TMap Class and why it is wrong!".



Test improvement in an agile/CDT environment

This post and the article have been updated on April 4 2017.

One day during a team meeting at Joep's previous job at a bank, the Team Manager of Testing listed a number of topics his testers could work on in the coming months. One of those topics was "testing maturity". This topic was on the list not because this manager was such a fan of maturity models, but because the other team managers (Business Analysis and Development) had produced one for their own teams and higher management would like to have one for testing as well. And although Joep saw little value in a classic five-tiered maturity model either, he was intrigued by the question: what can you do with respect to maturity models that is of value?

Joep asked Huib to help him think of a way to create a valuable, context-driven way to work on maturity. Since Huib had been working for the same bank, they met and discussed the possibilities. Soon they found out that the criteria should be variable since maturity depends on context. They started experimenting with stack ranking and quite soon they had the first version of their “maturity model”.

Maturity or improvement?

After discussing the first version of the article with James and Michael, we felt the need to update our article. Their comments helped us realize that we needed to explore maturity and maturity models a bit more. After doing this, we decided to rename our model to a test improvement model.

Maturity mission: better testing

What is the mission of a maturity assessment? We think the assessment should be a pathway to better testing. As a part of solving problems we think the mission should be: "An investigation of strengths and weaknesses. A starting point for a discussion about potential (testing) problems and how to solve them." Or as James Bach says: a maturity model is a plan for achieving maturity. And this is exactly what we created. Our maturity model isn't anything like the staged, fixed models available in the market. Maybe we shouldn't call our method a maturity model, since basically it isn't. It is a tool designed to help teams assess and improve their testing. It is a method supported by a card game that helps teams retrospect and identify strengths and weaknesses in their way of working, the stuff they create, the team, their skills and their context.

Result

After a first try-out at the bank where Joep worked, we let it rest for a while. After a couple of months we wrote this article. It is the first version and it needs to be refined and polished. The heuristics lists are probably too long and need to be reduced. We think of this model as a card game that can be played with teams.

Currently we are also working on an agile version of this model: a card game for agile teams to assess their "maturity" and help them find possible areas for improvement. More about that later.

We are curious about your thoughts. What do you think? Maybe you want to try the game? Feel free to try it out. We hope you will share your experiences with us.

Article (pdf) – Card game (pdf)

The slides of the meetup about our model are here.

Help Linnea

“There is a saying that it takes a whole village to raise a child. Now we need a whole village to save our Linnea”

Linnea, Kristoffer Nordström's daughter, is five and a half years old and comes from Karlskrona in Sweden. Her world revolved up until recently around My Little Ponies, riding her bicycle and popcorn… lots of popcorn. She has one best friend: her beloved big brother Kristian.
That was her world, until a few months ago when she suddenly and shockingly fell ill and had emergency surgery for a brain tumor.
After the operation, we hoped that the bad news would end. But now the family lives in the hospital and has been told that the tumor is an aggressive variety called DIPG (Diffuse Intrinsic Pontine Glioma). The short story is that there is a heart-breakingly minimal chance of survival using established treatments.

There is a possible treatment that we are now aiming for: one that means the tumor is treated through catheters implanted directly into the tumor. Studies and reports show that such a direct treatment gives Linnea the best chance of one day becoming healthy. The cost of treatment and the journeys are very high. Higher than the average person can pay for: £ 65.000 for the first operation and then £ 6.500 for treatments thereafter. In the current situation, it is unclear how many of these Linnea will need.

Please help Kristoffer and his family!

Update July 3 2017

Great news!! The treatment seems to work, Kristoffer writes on his Facebook: http://www.facebook.com/kristoffer.nordstrom.792
We know a lot of people are waiting for this so me and Giedre want to give you the fantastic news.
Linnea had her third Intra Arterial treatment today and all went well without complications, this was the first time that we added the immunotherapy to the treatment, Linneas immune system is now being taught how to recognise and attack the tumor cells itself (Autologous Dendritic Cell Immunotherapy)!
The amazing news of today is that the treatment continues to do its job and we now see further shrinkage in the tumor from the last treatment three weeks ago.
The doctors see a distinct reduction in size since last time!
We now know that this is a treatment that is working for Linnea and for the other children here in Mexico, there are now over 30 families from all over the world here.
The treatment is very effective, but also very expensive, with the combined Immunotherapy and Chemotherapy the cost is 30.000USD (250.000SEK) every third week, later once Linneas immunesystem is trained the treatment will go back to chemotherapy only.
We realise we need to do this for an unforseen time going forward and looking at the costs of each treatment and our budget we need to ask all of you who have helped us get here to help us even further in saving our daughter.
Any donations, big or small, are more than welcome, you can help us at our fundraising site.
Thank you so much everyone who has helped us get here.
Swish donations from Sweden are also very welcome, the number is: +46 723 58 09 53

A Road to Awesomeness

At TestBash Manchester I did a presentation titled “A road to awesomeness”. In this talk I tried to explain how testers can become awesome. I talked about learning and testing skills. To prepare this talk I created a mind map to list skills and ways to learn. The mind map is here and will be continuously updated in the future. Let me know if you have things to add.

The 4-hour tester

In another talk at TestBash Manchester, Helena Jeret-Mäe and Joep Schuurkes talked about the 4-hour Tester Experiment. They took the challenge of teaching new testers useful skills in only 4 hours. They focus on 5 skills:

  • Interpretation
  • Modeling
  • Test design
  • Note taking
  • Bug reporting

Their experiment is an interesting attempt to teach software testers essential skills in only 4 hours. Of course it is impossible to teach anyone to test in 4 hours. But the 4-hour tester has simple exercises that take at most one hour to complete. It is an interesting approach to help inexperienced testers get started.

Learning

To become awesome, you have to learn a lot. But how do we learn? I like the 70/20/10 model: 70 percent of learning comes from job-related experiences, 20 percent from interactions with others, and 10 percent from formal education. So training is only a small part of a learning journey. Interaction with others, like communities and networking, coaching and mentoring, is a part of it as well. But the biggest part of learning takes place through work-related experience. This doesn't mean just working. Many years of working doesn't necessarily make you learn effectively if you do not get feedback and reflect on what you do in the right environment. Doing the same work for 10 years doesn't give you 10 years of useful experience. If you never retrospect, reflect or get any feedback, I'd say you have 10 times 1 year of experience. To make learning effective in your job you need:

  • Concrete, challenging and achievable tasks
  • Realistic application of skills, processing and reflection
  • Personal interpretation, exchange with others and constructive feedback
  • A safe environment to experiment and make mistakes

But there is more to learning. In her TED Talk "Learning how to learn", Barbara Oakley talks about two fundamental learning modes of our brain: the focused mode and the more relaxed diffuse mode. When you are learning, you want to go back and forth between these two modes. When you are stuck, you defocus into the diffuse mode, generating new ideas. If you want to learn more about this, I recommend the Coursera MOOC "Learning How to Learn: Powerful mental tools to help you master tough subjects". Two of the things I took from that course are that active listening (e.g. by asking questions) is far more effective, and that learning by doing is a great way to learn fast.

Learning is the most important skill for a tester. And learning is closely related to thinking skills. I’ll come to that later.

What makes an awesome tester?

An awesome tester has many skills. In my blogpost "Heuristics for recognizing professional testers" I described 18 heuristics to recognize professional testers. This was the starting point for my talk at TestBash Manchester. The first step to help me think about what it takes to become awesome was creating a mind map with a couple of things to focus on: who (characteristics), what (skills) and how (ways to learn and how to become an expert).

There are many skills that make an awesome tester. I think the most important skill is the ability to learn! Remember the definition of testing we use in Rapid Software Testing: "Testing is evaluating a product by learning about it through exploration and experimentation". Besides learning I identified 5 categories of skills testers need:

  • Thinking skills
  • Testing skills
  • Technical skills
  • Domain skills
  • Soft skills

And of course there are many, many skills that make up these categories. Have a look at the mind map to learn more about them. It is very important to recognise which skills are involved in being a great tester. If you want to learn, you need to know what to focus on. Not knowing which skill to train will result in unfocused and ineffective learning. I see many testers apply test techniques without knowing which skills are used and which methods underlie the technique. This makes them apply techniques as recipes or tricks. Being able to apply approaches, tactics and techniques effectively in any situation requires the right skills.

Thinking skills

Learning and thinking are closely related. While researching thinking and thinking skills, I stumbled upon a great TED talk by Dr. Derek Cabrera called "How Thinking Works". He talks about the schooling system and about 4 universal thinking skills. Schools nowadays are over-engineering the content curriculum: students do not learn to think, they learn to memorize stuff. Kids are taught to follow instructions, like painting by numbers. To fix this, we need to learn how to think better, and for that he suggests four thinking skills: DSRP, a simple set of rules to become better at systems thinking.

  • Making Distinctions – Any idea or thing can be distinguished from other ideas or things
  • Organizing Systems – Any idea or thing can be split into parts or lumped into a whole
  • Recognizing Relationships – Any idea or thing can relate to other ideas or things
  • Taking Perspectives – Any idea or thing can be the point or the view of a perspective

In his book "Systems Thinking Made Simple", Dr. Cabrera talks about metacognition. "Systems thinking is a particular type of metacognition that focuses on and attempts to reconcile the mismatch between one's mental models and how the real world works." He continues to describe acts of metacognition: "Awareness that everything you experience is a mental model that approximates (either poorly or well) the real world" and "The ability to distinguish among cognition (thinking), emotion (feelings), and conation (motivations) and the awareness of how these different human capacities influence our mental models and behavior". Recognizing that models are a big part of our thinking makes you a better tester. That your biases and emotions influence your thinking is another great insight we have to take into account in our testing.

A second TED talk I recommend is “How to think, not what to think” by Jesse Richardson. He says: “The way we approach education needs to adapt! What’s different in teaching children how to think, we are involving them in the process of their own learning. Instead of just telling them to memorize the right answer, we ask them to engage their own minds, their own awareness by questioning things, attaining understanding not just knowledge. And that involvement, that engagement, is so important, because it keeps the spark of curiosity alive.”

He mentions the famous TED Talk by Ken Robinson: “Do schools kill creativity?“. One of my all time favorites.

Activities in testing
To fully understand what skills we use in testing, I made a list of testing activities. In Rapid Software Testing we teach the 9 Elements of testing and I took these as a starting point.

  1. Model the test space and risks
  2. Determine coverage
  3. Determine oracles
  4. Determine test procedures
  5. Configure the test system
  6. Operate the test system
  7. Observe the test system
  8. Evaluate the test results
  9. Report test results

The next step was to zoom in on the different elements in the list. Zooming in on “Model the test space and risks” we can distinguish:

  • Context Analysis
  • Creating a product Coverage Outline
  • Test Plan
  • Test scope
  • Risk & Value Analysis

I did this with all 9 elements and came to a pretty long list of activities. Maybe you can think of more activities testers do; let me know if you do.

This (making distinctions or factoring) exercise resulted in a huge list of things we do in testing and products we make. The next step was to list the skills that we use doing these activities. Obviously the first things I thought of were learning and thinking. Researching this, I stumbled upon a lot of interesting stuff, part of which I have described earlier. But there was more, way more, as you can see in my mind map. And this mind map expands every time I visit it. Factoring the activities and skills helps me to see what is really going on while we are testing.

So how do you become awesome?

So now we have a map of activities and skills. But how do we learn them? And where do we start?

The most important part of becoming awesome is knowing what to learn and how. The first step is to know what to learn and to focus on that. Stop trying to learn 10 things at the same time; it doesn't work! A way to get to know what you want to learn is writing a Personal Development Plan, answering 5 simple questions: Who am I? What are my skills? What do I want? What do I need? How do I get there? This can give you great insight into your skills, ambition and learning needs. The second step is to find mentors who can help you and give you feedback. After that it is "just" a matter of doing it and practicing!

I learned to be awesome by doing many things:

  • Observing others and myself: by becoming a manager and a coach I found out that observing others gives you great insight into how testing is done and what the common problems are for people who are learning how to test. By observing myself, which is quite difficult in the beginning, I learned what I was doing; this was a big eye-opener and learning enabler. Using a journal and writing down stuff every day on my way home gave me great insights into what my problems were. Knowing your problems is the first step to solving them.
  • Explaining, presenting, teaching & coaching: by explaining stuff to others, you learn to structure your thoughts. To be able to present or teach you really have to know your stuff. And still, every time I teach, I learn new things about what I teach: new examples, new problems, new ways of doing things. Also, answering difficult questions is very helpful to learn deeply about topics. That is why I think we should encourage debate and questioning at conferences way more.
  • Pairing: pairing is a fun and nice way to learn from others. Seeing how others do things is helpful and has many learning opportunities. Also the feedback you get from your pairing-partner is valuable.
  • Writing my blog: same thing as explaining and teaching. Writing this blog post made me think about what to write and how to explain it. I needed to structure my thoughts. Great exercise.
  • Keeping a journal: see observing others and myself.
  • Always having a notebook with me: to practice note taking and it also helps me not to forget interesting stuff that others tell me. It is great to read back notes from conferences and courses from way back. It refreshes my memory and I often see new insights I didn’t see before.
  • Discussing & debating testing: see explaining.
  • Trying new stuff: experimenting is fun and also a way to learn about new techniques, tools, approaches, etc.
  • My coaches and mentors: I have had and still have many mentors. All these people help me to become better every day. It is great to have many mentors, each with their own expertise and specialties. I have a big international network of people I know and who I work with. So many sources of knowledge and skill help me quickly find an answer to almost every difficult testing problem I have.

My advice is to try the same exercise I did when creating the mind map. Create your own model of test activities and skills. Observe what you do and how you do it, and make your own mind map (or model it any other way you like). Then come back and compare yours to my mind map.

So why is this important?

I see many similarities between the problems in the current schooling system and in testing: over-engineering the content curriculum, students who do not learn to think but learn to memorize stuff and to follow instructions (like painting by numbers). In most testing classes testers learn tricks, procedures and the use of template-like approaches instead of the skills required to do an excellent job. In Rapid Software Testing (RST) we teach thinking skills. We teach testers how to think effectively and solve problems instead of using tricks, templates and standards to do their work. By learning to think better, you learn better. And isn't testing all about learning? Also, if you are able to think, you are able to do better work and apply your skills in the right way: adapt to a changing context and find your own solutions to problems that occur. We also teach testers to dynamically manage their focus: focus/defocus is similar to the fundamental learning modes Prof. Barbara Oakley talks about in her TED Talk.

The DSRP model (distinctions, systems, relationships, perspectives) is very useful in testing. In RST we talk about models and teach our students how to model (distinctions, systems and relationships) and how to think from different perspectives using a diversified strategy and heuristics. It is great to see that the stuff we teach is actually backed up by scientists who study metacognition (thinking about thinking) and epistemology (the study of knowledge). Making a product coverage outline, for instance, helps you see distinctions and relationships and supports systems thinking.

Good luck on your journey to awesomeness!

References

The slides I used in Manchester are here: slideshare.
The updated slides are here: pdf.

All talks at TestBash are recorded. My talk is here.

Context Driven Clichés

Today I read a blogpost by Olaf Agterbosch on the ViQiT site with the intriguing title “Context Driven Clichés“. Since it is written in Dutch, I will first translate his post.

Do you recognize the following? In recent years I often hear the words "Context Driven" when it comes to developments in ICT. Apparently, those using them consider the context in their activities, in other words the overall environment in which their activities take place. As if nobody kept that in mind before, these people talk about the importance of the environment and the way their products, services and processes have to connect to the context.

I sometimes wonder why people choose to kick in an open door and yell how well it's working. But what's even worse: people parroting it. Everyone jumps on the bandwagon and before you know it the latest hype is born. A hype that is fully wrung out by, for example, consultants, who will not fail to emphasize the importance of this development. You really cannot live without it!

Old wine in new bottles. Old wine in Agile bags. Old wine in faded bags…

Apparently there is marketing potential.

Context Driven testing is exemplary of an overhyped branch on the Agile tree. The nice thing is that there is nothing wrong with it: you are aware of the environment in which you perform your job. What I regret is that many people get sucked into this development, running along with the latest hype.

Does it pay? Possibly. As a prediction of the trends for the coming years, we will get occupied with:

Context Driven Requirements engineering;
Context Driven Application Management;
Context Driven Directed Lining;
Context Driven Project Management;
Context Driven CRM;
Context Driven Innovation;
Context Driven Migration;
Context Driven Whatever etc.

What did you say? We were already doing that, weren't we? We obviously haven't been paying attention.

Now let us just go back to work. And yes, work like the good ones among us have always done it: in a Context-Driven way. Or Risk-Based. Or Business-Case-Driven. But please stop those cheap semantic tricks. Try to be less weighty when performing the same trick again. The trick won't get better from it. Try to simply create real added value. Perhaps less familiar to you and others, but effective!

Not sure what his problem with context-driven is…

Fortunately, I hear about people becoming more and more context-driven. To me being context-driven is not just keeping the context or environment in mind; it is way more than that… As I wrote in a post on the DEWT blog: "Context-driven testing made my testing more personal. Not doing stuff everybody does, but it encouraged me to develop my own style. It is a mind set, a paradigm and a culture. It is not only about what I do, it is more about who I am!"

A hype? Maybe, because it gets more and more attention. Although I think it isn't. Far from it! A hype, according to The Free Dictionary, is:

  1. Excessive publicity and the ensuing commotion
  2. Exaggerated or extravagant claims made especially in advertising or promotional material
  3. An advertising or promotional ploy
  4. Something deliberately misleading; a deception

I can’t see why context-driven testing would be a hype. Exaggerated? Misleading? A promotional ploy? Excessive publicity?

Old wine in new bags? I don't think so. The saying means "things are presented differently, but not fundamentally changed." I think context-driven testing is fundamentally different. Of course testers have been taking context into consideration for years. But how well do they do that? I still hear things like "that's how we do things here" or "you have to play by the standard/procedure" quite often. I almost never hear testers speak about serious context analysis and adapting their approaches accordingly. But there is a lot more to context-driven testing than taking context into consideration. To name a few aspects:

  • developing and using skills to effectively support the software development project
  • teaching testers to be less dependent on documentation
  • modeling by mapping various aspects of the software product
  • diversifying tactics, approaches and techniques
  • thinking skills like logical reasoning, using heuristics and critical thinking
  • dealing with complexity, ambiguity, half answers and changes

In the last paragraph Olaf states that the good testers among us have always been using a context-driven approach. Really? How does he define good testers? And if they use a context-driven approach, why is he complaining? Unfortunately, I see too many testers not using context-driven approaches and creating a lot of waste!

Then he continues: "try to be less weighty when performing the same trick again. The trick won't get better doing this. Try to simply create better real added value". If you look at testing as performing a trick, I can see why Olaf sees context-driven testing as a cheap semantic trick! The opposite is true: adding real value is exactly what context-driven testing is trying to do. Context-driven testers do this by focusing on their skills, using heuristics, considering cost versus value in all they do, continuously learning by deliberate practice, etc. When I look at the most commonly used test approaches in the Netherlands (TMap and ISTQB), I wonder how they add value by focusing on standards, document-heavy procedures, the use of many templates, best practices and the same approach every time.

That leaves me with one question… What is Olaf's problem with context-driven exactly?

Too controversial?

On May 11, 2016 TestNet (*) held its spring conference with "Strengthen your foundation: new skills for testers" as the central theme. The call for papers that was sent out made me frown. It said:

“In the final keynote of the TestNet autumn event, speaker Rini van Solingen referred to the end of software testing as we know it. ‘What one can learn in merely four weeks, does not deserve to be called a profession’, he stated. But is that true? Most of our skills, we learn on the job. There are many tools, techniques, skills, hints and methods not typical for the testing profession but essential for enabling us to do a good job nonetheless. Furthermore the testing profession is constantly evolving as a result of ICT and business trends. Not only functional testing, but also performance, security or other test varieties. This presses us to expand our knowledge, not just the testing skills, but also of the contexts in which we do our jobs. The TestNet Spring Event 2016 is about all topics that are not addressed in our basic testing course, but enable us to do a better job: knowledge, skills, experience.”

I think that there are a lot of skills that are not addressed in our “basic testing course” where they should have been addressed. I am talking about basic testing skills! So I wrote an abstract for a keynote for the conference:

The theme for the spring event is "Strengthen your foundation: new skills for testers". My story takes a step back: to the foundation! Because I think that the foundation of most testers is not as good as they think. The title would then be: "New skills for testers: back to basics!"

Professional testers are able to tell a successful story about their work. They can cite activities and come up with a thorough overview of the skills they use. They are able to explain what they do and why. They can report progress, risk and coverage at any time. They will gladly explain what oracles and heuristics they use, know everything about the product they are testing and deliberately try to learn continuously.

It surprises me that testers regularly can't give a proper definition of testing, let alone describe what testing is. A large majority of people who call themselves professional testers cannot explain what they do when they are testing. How can anyone take a tester seriously if he or she cannot explain what he or she is doing all day? Try it: go to one of your testing colleagues and ask what he or she is doing and why it contributes to the mission of the project. Nine out of ten testers I've asked this simple question start to stutter.

What exactly do you do if you use a "data combination test" or a "decision table"? What skills do you use? "Common sense" in this context does not answer the question, because it is not a skill, is it? I think of: modeling, critical thinking, learning, combining, observing, reasoning and drawing conclusions, just to name a few. Looking in detail at what skills you are actually using helps you recognize which skills you could or should train. A solid foundation is essential to build on in the future!

How can you learn the right skills if you do not know what skills you are using in the first place? In this presentation I will take the audience back to the core of our business: skills! By recognizing these skills and training them, we are able to think and talk about our profession with confidence. The ultimate goal is to tell a good story about why we test and the value it adds.

We need a solid foundation to build on!
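
As an aside, here is what that question looks like in practice for a decision table. Filling in such a table is mechanical; the skill is in choosing the conditions, spotting the missing combinations and judging the expected outcomes. The sketch below is minimal and entirely hypothetical: the "free shipping" rule, its conditions and its expected results are invented for illustration, not taken from any real product.

  # A minimal, hypothetical sketch: a decision table driving checks.
  # The rule, the conditions and the expected outcomes are invented.

  def free_shipping(order_total: float, is_member: bool) -> bool:
      # Imaginary rule under test: members always ship free, others from 50 euro.
      return is_member or order_total >= 50.0

  # Decision table: one row per combination of conditions, plus the expected action.
  DECISION_TABLE = [
      # (order_total, is_member, expected_free_shipping)
      (10.0, False, False),
      (10.0, True,  True),
      (50.0, False, True),
      (50.0, True,  True),
  ]

  def run_decision_table() -> None:
      # Mechanically walk the table and compare actual behaviour to expectations.
      for order_total, is_member, expected in DECISION_TABLE:
          actual = free_shipping(order_total, is_member)
          assert actual == expected, (
              f"free_shipping({order_total}, {is_member}) returned {actual}, "
              f"expected {expected}"
          )

  if __name__ == "__main__":
      run_decision_table()
      print("all decision table combinations behaved as expected")

Executing the table is the easy part. Noticing that a boundary like 49.99 is missing from it, or questioning whether the rule itself makes sense, is where the skills come in.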

My keynote wasn’t selected. So I send it in as a normal session, since I really am bothered by the lack of insight in our community. But it didn’t make it on the conference program as a normal session either. Why?  Because it is too controversial they told me. After applying for the keynote the chairman called me to tell me that they weren’t going to ask me to do a keynote because the did want a “negative” sound on stage. I guess I can imagine that you do not want to start the day with a keynote who destroys your theme by saying that we need to strengthen our foundation first before moving on.

But why is this story too controversial for the conference at all? I guess it is (at least in the eyes of the program committee) because we don't like to admit that we lack skills, and that we don't really know how to explain testing. I wrote about that before here. It bothers me that we think our foundation is good enough, while it really isn't! We need to up our game, and being nice and ignoring this problem isn't going to help us. A soft and nice approach doesn't wake people up. That is why I wanted to shake things up a bit: to wake people up and give them some serious feedback. I wrote about serious feedback before here. But the Dutch Testing Community (represented by TestNet) finds my ideas too controversial…

 


(*) TestNet is a network of, by and for testers. TestNet offers its members the opportunity to maintain contacts with other testers outside the immediate work environment and share knowledge and experiences from the field.

Must read: A Context-Driven Approach to Automation in Testing

Test automation is a hot item in our industry. Many people talk about it and much has been written on this topic. Sadly, there is still a lot of misconception about test automation. Also, some people say context-driven testing is anti test automation. I think that is not true. Context-driven testers use different names for it and they are more careful when they speak about automation and tooling to aid their testing. Also, context-driven testers have been fighting the myth that testing can be automated for years. In 2009 Michael Bolton wrote his famous blog post "Testing vs. checking", later followed up by "Testing and checking refined" and "Exploratory testing 3.0". These tremendously important blog posts teach us how context-driven testers define testing and that testing is a sapient process: a process that relies on skilled humans. Recently Michael Bolton and James Bach published a white paper to share their view on automation in testing, a vision of test automation that puts the tester at the center of testing. This is a must-read for everyone involved in software development.

The following text is taken from the "A Context-Driven Approach to Automation in Testing" white paper written by James Bach and Michael Bolton.

We can summarize the dominant view of test automation as “automate testing by automating the user.” We are not claiming that people literally say this, merely that they try to do it. We see at least three big problems here that trivialize testing:

  1. The word “automation” is misleading. We cannot automate users. We automate some actions they perform, but users do so much more than that.
  2. Output checking can be automated, but testers do so much more than that.
  3. Automated output checking is interesting, but tools do so much more than that.

Automation comes with a tasty and digestible story: replace messy, complex humanity with reliable, fast, efficient robots! Consider the robot picture. It perfectly summarizes the impressive vision: "Automate the Boring Stuff." Okay. What does the picture show us?

It shows us a machine that is intended to function as a human. The robot is constructed as a humanoid. It is using a tool normally operated by humans, in exactly the way that humans would operate it, rather than through an interface more suited to robots. There is no depiction of the process of programming the robot or controlling it, or correcting it when it errs. There are no broken down robots in the background. The human role in this scene is not depicted. No human appears even in the background. The message is: robots replace humans in uninteresting tasks without changing the nature of the process, and without any trace of human presence, guidance, or purpose. Is that what automation is? Is that how it works? No!

The problem is, in our travels all over the industry, we see clients thinking about real testing, real automation, and real people in just this cartoonish way. The trouble that comes from that is serious…
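
To illustrate point 2 of the quoted list, this is roughly what an automated output check looks like. The example below is my own hypothetical sketch, not taken from the white paper; the function and expected value are invented. The check verifies exactly one predefined output and nothing else: it will not notice slow responses, confusing wording or risks in the surrounding workflow. That evaluation is the tester's work, which the check supports but does not replace.

  # Hypothetical sketch of an automated output check (not from the white paper).
  # The product code and the expected value are invented for illustration.

  def format_price(cents: int) -> str:
      # Imaginary product code: render an amount in cents as a euro price.
      return f"€{cents // 100},{cents % 100:02d}"

  def check_format_price() -> None:
      # The check compares one actual output against one predefined expectation.
      expected = "€12,05"
      actual = format_price(1205)
      assert actual == expected, f"expected {expected!r}, got {actual!r}"

  if __name__ == "__main__":
      check_format_price()
      # Passing tells us only that this one output matched; it says nothing
      # about anything we did not explicitly encode in the check.
      print("output check passed")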

Read more in the fabulous white paper “A Context-Driven Approach to Automation in Testing” by James Bach and Michael Bolton.
