
Considerations when testing a software application in a context-driven way

Written by: Joris Meerts (main author), Huib Schoots and Ruud Cox, all working at Improve Quality Services in the Netherlands.

Introduction
One of the cornerstones of context-driven testing (Lessons Learned in Software Testing by Kaner, Bach, & Pettichord, 2001) is that the way of working when testing software is determined by the situation in which the tester finds himself. A good approach is not driven by a prescribed process or by a collection of steps that one habitually executes. Instead, it arises from the use of skills that ensure that testing matches the circumstances of the software project. Within the framework of context-driven testing, a wide range of skills is discussed, including critical thinking, modeling and visualization, note-taking and applying heuristics. These skills are not easy to learn. One way to sharpen them is to do exercises and discuss the results. This was the purpose of a meeting of four testers at Improve Quality Services. In the following report we elaborate on the exercises that were done during that meeting and on their results.

Purpose of this report
As we pointed out in the introduction, it is not easy to learn the skills associated with context-driven testing. Practitioners of context-driven testing regularly refer to professional literature that describes these skills. But applying these skills is a process of trial and error. That is why it is important to simulate real life situations and learn from exercises we do. That was the purpose of the meeting. In this report we share the steps we took and the results of those steps, for example in the form of notes, sketches or models. We also share our experiences to make it easier to apply the skills in practice.

The meeting
The meeting was held on February 14, 2017 at the office of Improve Quality Services in Eindhoven. The participating testers, Jos Duisings, Ruud Cox, Joris Meerts and Huib Schoots, are all employees of Improve Quality Services.

Background information
Date: 14 February 2017
Location: Improve Quality Services Eindhoven
Start: 9 am
End: 5 pm
Team A: Jos Duisings, Ruud Cox
Team B: Joris Meerts, Huib Schoots

The assignment

The assignment of the meeting is to select and execute tests on a software application. It is carried out by two teams of two testers each. This makes it possible to take different approaches and to provide feedback from one team to the other.

Division in sessions
The meeting is divided into sessions in advance. The sessions each have their own goal and their own evaluation.

  • Welcome & introduction
  • Creating a coverage outline (per team)
  • Debriefing the coverage outline
  • Drafting a test strategy (as a group)
  • Debriefing the test strategy
  • Selecting a few charters
  • Performing a test session (charter) per team
  • Debriefing the test session
  • Retrospective

Introduction of the application to be tested
The application to be tested was selected prior to the meeting. We chose the application Task Coach (Task Coach, 2018), an open source to-do list manager. This software is publicly available and runs on multiple platforms (Windows and Mac OS). The application is relatively simple but still offers sufficient complexity to be tested thoroughly. According to the description on the website, Task Coach is “a simple open source to-do manager to keep track of personal tasks and to-do lists. It is designed for composite tasks, and also offers effort tracking, categories, notes and more.” The participants have not worked with Task Coach before and are therefore not familiar with this specific piece of software.

Creating a coverage outline
Because Task Coach is new for all participants, the software will have to be explored. Only then can we say more about what can be tested in the given time. Such an exploration of the product can be done in many different ways. During the meeting we chose to make a ‘product coverage outline’, inspired by the Heuristic Test Strategy Model of James Bach (Bach, 1996). The product coverage outline provides an overarching view of the product in which the functionality of the software is divided into seven different categories. The categories are described in the mnemonic SFDIPOT. The software tester is reminded by this heuristic that when mapping a product he can look at Structure, Function, Data, Interfaces, Platform, Operations and Time. By looking at the application from these perspectives, a relatively complete sketch is created of what the software product is and what it can do. However, the mnemonic does not only serve for exploration. The ‘map’ of the application that is created in this way can also be used to visualize the coverage of the tests and to select the tests to be carried out.
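To illustrate, a product coverage outline can also be captured as a simple structured document. The sketch below is a hypothetical, heavily abridged outline for a to-do manager such as Task Coach; the branch contents are illustrative guesses, not the product's actual feature set.

```python
# Hypothetical, heavily abridged SFDIPOT outline for a to-do manager.
# The entries are illustrative guesses, not Task Coach's actual features.
product_outline = {
    "Structure": ["executable", "task file", "help file"],
    "Function": ["create task", "edit task", "delete task",
                 "templates", "reminders", "effort tracking"],
    "Data": ["subject", "dates", "priority", "notes", "categories"],
    "Interfaces": ["GUI", "import/export", "printing"],
    "Platform": ["Windows", "Mac OS"],
    "Operations": ["personal to-do list", "shared task file"],
    "Time": ["due dates", "reminders", "time zones", "time tracking"],
}

# Print the outline as a flat overview, one SFDIPOT category per line.
for category, branches in product_outline.items():
    print(f"{category}: {', '.join(branches)}")
```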

The exercise
We decide that both teams will be given half an hour for creating a product coverage outline and that each team is free to decide in which format the outline is presented. The choice of a half-hour time limit is mainly motivated by the fact that the software must also be tested before the end of the day. There is no room to spend much more time on making the product outline.

It turns out that half an hour is not enough time for mapping an application that, despite the fact that it claims to be simple, still contains a lot of functions. Team B decides to create a mind map in which the seven categories are elaborated. This mind map is not finished after half an hour. Especially the branch ‘Function’, in which the functions of the application are described, is very large and has a deeply nested structure. To build this structure, team B explores the application by clicking through it and captures the functionality (in text or in images) in the mind map. Due to time constraints, team B presents a product outline that is not finished. This triggers a discussion about what we expect the mind map to look like given the available time. The expectations, such as completeness, can be related to the purpose of the mind map. The discussion makes clear that no clear goal was defined for drawing up the product outline and that the result of the assignment is therefore difficult to assess.

Preference for the mind map
A mind map is a tool that software testers regularly use to create a product outline. The tree structure of the mind map lends itself to classifying (grouping) properties. It is easy to start with an empty mind map, enter the categories from SFDIPOT and expand them. Moreover, a navigable structure is created in this way. The mind map forms a kind of geographical map in which you can choose to zoom in and out to determine the location of something in relation to the whole. Software essentially consists of zeros and ones, and the software product is an abstract concept. By making the ‘map’, the abstract software gets a concrete shape. A mind map can also be used as an instrument for reporting on the results and the progress of the tests. So there are a number of good reasons to work with a mind map from the beginning.

Team A starts the session by creating a mind map, but after a short time it switches to a different format. When studying Task Coach, the team finds out that there is a help file in the application. After a short study, the help file appears to describe a large part of the application. The categories from SFDIPOT are all sufficiently represented in the file, and so team A chooses to use this file, converted to Word format, as a product outline. By marking the categories in the document with different colors, structure is added to the document. In addition, the table of contents provides a global overview of the functionality in the application; the details are covered in the paragraphs. In this way, team A delivers a product outline that is relatively complete within the stipulated time.

Drafting the test strategy
The result of the first session is a product outline. With this outline and the knowledge gained during the exploration of the software it is easier to discuss a test strategy. Because exhaustive testing is not feasible for the majority of software applications, and in many cases does not yield better information, it is desirable to make choices with regard to the test work to be performed. These choices are captured in the test strategy.

By making the product outline we have gained insight into a number of aspects of the application. We take these aspects into account and aim to indicate per category whether it needs to be tested and, if so, to what depth. The most decisive factor in that decision is the risk the organization runs when the software is put to use. Risk is a multifaceted concept and can only be determined if the software product is examined from different angles. In any case, the perspective of the tester alone is not sufficient to get a good picture of the risks of a software product. During the second session it becomes clear that a representative is missing who can reason about risk from the perspective of a user or of the organization. This point is also discussed in the debriefing of the second session and we conclude that for that reason the risk assessment is incomplete.

Assessing risk
In the Heuristic Test Strategy Model three dimensions are discussed that influence a test strategy. These dimensions are the project, the product and the requirements that are set for that product: the quality characteristics. Risks play a role in each of these dimensions. In the exercise, we decide not to look at the project risks. As far as product risks are concerned, we make our own assessment with regard to the categories of the product. We consider which categories are used the most, which are the most prone to errors, and where a possible error has the most impact. Because there are no users involved in the exercise, we try to place ourselves in the role of the users. In this way the following product categories are discussed:

  • The primary process from the Operations category. This will tell us which functionality is used the most.
  • Using the application by multiple users from the Operations category. We recognize risks associated with synchronization: tasks are not updated properly.
  • The importing and exporting of tasks from the Interfaces category.
  • Dealing with reminders from the Function category. Reminders are crucial for not missing appointments.
  • Dealing with date and time from the Time category. Date and time play an important role in planning tasks, they touch the core of the application.

After we have identified the categories of the product, we try to assess which quality aspects of the product matter most. Here too, we decide to use our own assessment as a guideline. The following quality aspects are named:

  • Functionality
  • Usability
  • Charisma

Coverage
We briefly discuss the coverage of the tests that we want to carry out. Since the tasks that are maintained in Task Coach can have different statuses, the tester can visualize the coverage of the executed tests from the perspective of state transitions. The state transitions are covered by the different paths through the application that a user travels, and these paths are helpful when mapping the coverage. Furthermore, coverage can be looked at from the perspective of the test data used.
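As a concrete sketch, the status model can be written down as a set of transitions and compared against the paths actually travelled during a session. The statuses and transitions below are assumptions for illustration, not Task Coach's actual status model.

```python
# Assumed task statuses and transitions (illustrative only, not
# Task Coach's actual status model).
model = {
    ("inactive", "active"),
    ("active", "completed"),
    ("active", "overdue"),
    ("overdue", "completed"),
    ("completed", "active"),  # reopening a completed task
}

# Paths a tester travelled through the application during testing.
tested_paths = [
    ["inactive", "active", "completed"],
    ["inactive", "active", "overdue", "completed"],
]

# Derive which transitions the paths cover, and which remain untested.
covered = {pair for path in tested_paths for pair in zip(path, path[1:])}
print("covered transitions:", sorted(covered))
print("missed transitions: ", sorted(model - covered))  # reopening is missed
```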

Drafting the charters
We decide to draw up two charters (in the sense of Session Based Test Management) and to execute them. We prioritize the product categories mentioned in the risk analysis based on our own insights. From this we conclude that ‘the primary process’ and ‘dealing with date and time’ are the two most important product categories. We translate these two categories into charters. In one charter we will look at the primary process; in the other the handling of date and time will be investigated. Each team selects a charter. After the charter has been executed, each team reports the work done to the other team in a debrief session. During this short evaluation, the tester tells his story and answers any questions. The evaluation has several goals, such as assessing whether the mission is successful, translating new insights into follow-up sessions, looking at notes and descriptions of findings, and coaching the tester. Through the debrief, the understanding of the application under test grows.
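For readers unfamiliar with session-based test management: a charter is little more than a mission plus a timebox, around which notes and bugs are collected during the session. A minimal sketch of that structure follows; the field names follow common SBTM practice, not a formal standard.

```python
from dataclasses import dataclass, field

# Minimal sketch of an SBTM charter; field names follow common
# practice, not a formal standard.
@dataclass
class Charter:
    mission: str
    areas: list[str]
    timebox_minutes: int = 90
    notes: list[str] = field(default_factory=list)
    bugs: list[str] = field(default_factory=list)

# The mission of the first charter, as formulated during the meeting.
primary_process = Charter(
    mission="Explore the basic flow of tasks using CRUD "
            "to discover/test the life cycle of a task",
    areas=["tasks", "underlying tasks", "templates"],
)
```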

The charter for testing the primary process
To formulate a starting point for this charter, we look at the risks that can be associated with the execution of the primary process. There is a risk that no task can be created. Or it could be that a task is created but errors occur when saving or modifying it, or when changing its status. These considerations lead to the following mission that serves as the starting point for the charter: “Explore the basic flow of tasks using CRUD to discover/test the life cycle of a task”.

The charter for testing the handling of date and time
Prior to drafting the charter for testing the handling of date and time, the following test ideas are mentioned:

  • Start and end times are the same
  • The end time is before the start time
  • The date of a task is far in the past or far in the future
  • System time
  • Work in different time zones
  • Dealing with winter time and summer time
  • Different date formats

With these test ideas in mind, the following charter is drawn up: “Explore tasks using date/time to discover date and time related bugs”.
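The test ideas above translate naturally into concrete boundary values for the planned start and end of a task. A minimal sketch, with illustrative value choices:

```python
from datetime import datetime, timedelta

now = datetime.now()

# Boundary pairs of (planned start, planned end) derived from the
# test ideas above; the concrete values are illustrative choices.
cases = {
    "start equals end": (now, now),
    "end before start": (now, now - timedelta(hours=1)),
    "far in the past": (datetime(1900, 1, 1), datetime(1900, 1, 2)),
    "far in the future": (datetime(9999, 12, 30), datetime(9999, 12, 31)),
}

for name, (start, end) in cases.items():
    print(f"{name}: start={start:%Y-%m-%d %H:%M} end={end:%Y-%m-%d %H:%M}")
```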

Execution of the charters

Testing the primary process
The charter for testing the primary process (basic flow) begins with the creation of a task. Several tasks are created using the button on the taskbar. Subsequently, several underlying tasks are created for a single task, up to 8 levels deep. Under one of the underlying tasks, a new tree structure of underlying tasks is created. During the creation of the task structure, questions arise about the maximum depth of the task structure and about the sorting of the underlying tasks. In addition, it is found that it is possible to delete underlying tasks and then to undo these deletions. It is proposed to investigate this functionality further in a separate charter, since it is suspected that it does not always work correctly. The resulting tree structure is removed by means of the ‘Delete’ button on the taskbar. After this, all tasks have disappeared.

On the taskbar there is also a button that offers the possibility to create new tasks from a template. There are two default templates available. Tasks are created with these templates. It appears that it is not possible to create underlying tasks from a template. The question is why this functionality is not available for underlying tasks. Finally, the created tasks are deleted.

To get a better picture of how templates work in the application, the team reads the help text on templates. Furthermore, the team explores the menu structure in search of more functionality that could be related to the use of templates. From the ‘File’ menu it is possible to save tasks as templates, to import templates and to edit templates. A template is created.

The dialog that appears when choosing to import a template shows that template files have the extension ‘.tsktmpl’. But when a task is saved as a template, it is not possible to find out whether a template file is created from this task and, if so, where this file is stored. The newly created templates are visible when one chooses to create a new task based on a template from the toolbar.

A task is created and saved as a template, and it is checked whether this template can be imported. Again, all created tasks are deleted, this time by selecting the menu option Edit > Delete. The team looks a bit further at the options for removing tasks. It appears that there is a shortcut combination for removing tasks: Ctrl + DEL. The team wonders why it is not possible to simply use the Delete key.

In summary, this session looked at creating, viewing, editing and deleting tasks and underlying tasks. A finding has been made regarding the undoing of changes. It turns out that undoing the removal of tasks in a tree structure of tasks and underlying tasks does not reproduce the original structure. Following the session, it is proposed to look more extensively at the use of templates in new charters. Also the functionality for creating, viewing, editing and deleting tasks and underlying tasks deserves more attention, especially because this functionality can be called from many different places in the application. The various possibilities have not all been tested in the first session. Furthermore, a charter could be made for creating a task that depends on another task.

Feedback
At the end of the session, the report is discussed with a member of the other team in a debrief, with the aim of obtaining feedback about the course of the session. The feedback shows that there was not enough time to complete the charter; another thirty to sixty minutes would be needed. It is noted that the description of the detected bug is not clear enough. It also becomes clear that not all possible statuses of a task have actually been addressed in the session. Finally, a new charter is suggested for testing filtering and sorting.

Testing the handling of date and time
The session is started with a clean installation of the application. First a new task is created. Attention is given to the Dates tab, on which various dates can be entered. The date and time can be changed in separate input fields. The time can be changed by means of a dropdown or by using the arrow keys on the keyboard. It turns out that ‘00:00’ cannot be used as a time. It appears that the range of times can be set under the ‘Preferences’ menu. After an adjustment, ‘00:00’ can be entered.

It is noted that the planned start date and the planned end date are not included in the calculation of duration. But it is unclear what these dates are used for. When the planned end date is in the past, the task turns red. When the planned end date is in the future, the task turns purple. It is remarkable that the planned start date can be after the planned end date; apparently there is no validation on the order of dates. Dates that are far in the future are accepted as valid dates. To change a date, the task first has to be closed and then opened again.

An interesting finding occurs when the planned start date is given a year before 1900. If this is the case, the planned start date cannot be changed after closing and reopening the task. During the study of this finding, the team finds out that the application creates a log file (in the Documents folder) in which any errors are logged. The attempt to adjust the planned start date results in the following error message: “ValueError: year=1853 is before 1900; the datetime strftime() methods require year >= 1900”. After this finding, further attention is paid to other functionality around time and date. For example, a reminder on a task is set to 5 minutes. The reminder works.
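Task Coach is written in Python, and the quoted message matches the strftime() limitation of Python 2, under which years before 1900 cannot be formatted. A minimal sketch of the behaviour (runnable under both Python versions; only Python 2 raises):

```python
from datetime import date

try:
    # Python 2's strftime() refuses years before 1900 and raises:
    # "ValueError: year=1853 is before 1900; the datetime strftime()
    # methods require year >= 1900". Python 3 formats it without complaint.
    print(date(1853, 1, 1).strftime("%Y-%m-%d"))
except ValueError as error:
    print("strftime failed:", error)
```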

In addition to planning tasks, Task Coach can also be used to keep track of the time spent per task. After some research it appears that this functionality is complex and that it is not easy to find out how Task Coach deals with time spent. Time tracking can be started with a button, but can also be done by increasing the time spent in a separate tab. The team notices that when entering an hour of time spent, the actual time spent shown on the tab is just a few seconds. It is possible to aggregate time spent at month level. If we do this, we see that the descriptions that we have entered for each entry are combined in a single text field.

The time that an application uses depends on the time settings of the system. For this reason, the team manipulates the system time of a MacBook Pro. The calendar is changed to a Coptic calendar. This adjustment causes the application to crash after startup. In the log a line appears mentioning an error relating to an invalid date format.

Finally, the team looks at the connection between the budgeted time and the time spent. Task Coach is able to calculate the remaining time on the basis of these variables. Some tests are performed with variations in hours, minutes and seconds. Task Coach handles all these variations correctly.

Feedback
A team member verbally reports on the session to a member of the other team in a debrief. The feedback shows that this report contains a lot of detailed information. To put the detailed information in context, a framework is needed; the product outline could have served as such a framework. One new charter emerges from the feedback, namely testing the synchronization of tasks between systems with different system times.

Conclusion
The meeting is concluded with the completion of the test sessions. Looking back, in one day and with two teams, we conducted a number of tests on an application that was unknown to us beforehand. We have shown that techniques and methods exist that help the tester to acquire knowledge about the application, develop a strategy and perform tests. Exploring the application provides insights that serve as a starting point for risk assessments and for conducting test sessions with a well-defined goal. By quickly arriving at concrete and well-substantiated tests, the tester provides valuable feedback on the application in a short period of time. The test sessions are debriefed, and this provides starting points for further deepening where necessary.

Due to the popularity of agile methods, it is common for the tester to be asked to test something, while at that moment he has incomplete insight into the functionality of the application. Moreover, the tester is expected to deliver results within a limited time. It requires an approach in which the tester quickly draws up and executes a strategy by modeling, thinking critically, discovering and investigating. This is the approach that we applied during the meeting described above.

A Clash of Models
The creation of the product coverage outline led to quite different outlines being delivered by the teams. These differences and their possible causes are discussed in a separate article, written by Joris Meerts and Ruud Cox. The article is called A Clash of Models.

References
Bach, J. (1996). Heuristic Test Strategy Model.
Kaner, C., Bach, J., & Pettichord, B. (2001). Lessons Learned in Software Testing. John Wiley & Sons.
Task Coach. (2018). Retrieved from Task Coach: http://taskcoach.org/

Let’s stop talking about testing, let’s start thinking about value

This year Alex Schladebeck and I did two keynotes titled “Let’s stop talking about testing, let’s start thinking about value” at QA Expo in Spain and TestNet in the Netherlands. This blogpost summarizes the most important points we made in our talk.

The keynote was inspired by some of our frustrations: “Testing is under appreciated” (Alex) and “Most testers are unable to explain what we do” (Huib). I wrote about my frustration back in 2016 already. That blogpost is about my frustration that most testers cannot come up with a decent definition of testing. And even worse: a big majority of the people who call themselves professional testers are not able to explain what testing is and how it works! They have trouble explaining what they are testing and why they are doing specifically the thing they are doing! How can anybody take a tester seriously who cannot explain what he is doing all day?

Alex’s frustration is that testing is not valued by others. Developers are seen as the rockstars of the project because they create the software that adds value. But why are testers often not valued?

  • Lowered expectations for testing expertise by stuff like ISO standards and ISTQB: I wrote about certification and standards before. ISTQB and standards put too much emphasis on process and documentation, rather than on the real testing. By assuming there can be a standard, you say that there is one best way to organize and document your testing. But isn’t your test strategy heavily dependent on its context? When using standards we tend to focus on complying with the standard, and lose sight of the real goal. This sort of goal displacement is a familiar problem in many situations. Also, the idea that you can learn how to test in a couple of days of training is dangerous. Remember lesson 272: if you can get a black belt in only two weeks, avoid fights (Lessons Learned in Software Testing: A Context-Driven Approach by Bach, Kaner and Pettichord).
  • Avoiding controversy: nowadays more and more people advocate being nice! I think that we confuse being nice with being kind! An interesting article about this phenomenon was written by Marcia Sirota. Of course we need to respect other people, but to push the testing craft forward, we need to have firm discussions and disagree with others way more often. Being nice doesn’t help. Serious feedback does!
  • We devalue our own work by becoming tool jockeys: unfortunately there are too many testers (and teams) out there who focus on automating as much as possible. Why? Because they can! The testers in those teams are often so busy doing automation that they do not have the time to test anything…
  • We do not stand up for our craft: we do not fight back enough when other people say they do not need testers, or when they tell us how to do our jobs, to name a few examples. We have to learn “tester self-defence”: to stand up to people who try to dictate how we do our jobs. We have to learn how to organize effective (and efficient) testing. And we need to learn how to talk about our work in a way others understand. This requires practice!
  • We do not learn or practice enough: testing is difficult! We have to deal with complexity, ambiguity, change and people. Testing is a craft, not something you do as a hobby. To become a craftsperson, you have to practice (also see my blogpost: a road to awesomeness).
  • We don’t know how to talk about testing: as said before: how can anybody take a tester seriously who cannot explain what he is doing all day? To be really valuable, testers need to learn to talk about their testing in a way others understand and find valuable.

So looking at these things, are we okay with this? I don’t think so. But what can we do about it? We are trapped in a vicious circle: we need to talk about testing! It is good for our soul to explain what we did and why, but we don’t know how to talk about our testing in a way that others understand.

Alex and I listed some traps:

  • Stories decay into Numbers: testing is about providing information to enable others to make informed decisions. The number of test cases or the number of bugs do not really matter. It is the story about the product and the risks involved. Those numbers might back up your story, but they do not tell the story!
  • A performance decays into Deliverables: testing is about finding problems, collecting information, exploring and experimenting to discover new information. Sure, documents and stuff sometimes help us, but testing is a performance. (James Bach talks about that here: a test is a performance and here: Test cases are not testing: towards a culture of test performance).
  • Test strategy decays into Test execution: when was the last time you saw a really good test strategy? In many cases I find master test plans where everything is described except the strategy. It is hard to create a test strategy and it is even harder to write it down or visualise it. Many testers I meet focus on test execution: creating test cases and scenarios and calling that the strategy.
  • Tool supported testing decays into Automation: testing using tools is a great idea. It gives us more opportunities to test and improves testability. But as said earlier: it becomes a problem when we focus too much on automation or even try to automate all our work. We cannot automate testing.
  • Many kinds of coverage decay into One kind of coverage: testing benefits from diversity! You find a certain type of bug with a certain test technique or approach. By using lots of different views, approaches and techniques, we find more problems.
  • Learning activity decays into Formalized static tasks: testing is learning about the product for our stakeholders. It’s not about verification and validation; there is much more to it. I like to replace such words with “challenge the belief that” (verify) and “investigate” (validate). Those activities provide the valuable information we need.
  • Balancing risk and uncertainty decays into Certainty: people like to be comfortable and we like to give others comfort as well. But as testers we need to stay unsure when others are sure. It is our job to keep asking critical questions. We are not here to give confidence or comfort, we are here to demolish unwarranted confidence! Also keep in mind that to find new, unexpected problems, we have to go where nobody has thought to look and nobody went before us. That will cause confusion, which feels uncomfortable for many. I learned to be okay with confusion, since it is essential for learning new things.
  • Business Impact decays into Bugs: some testers are frustrated when bugs aren’t fixed. But that is part of the deal: some things that bug us are just not important enough.
  • Product story decays into Testing jargon: I think this is the main problem for people not listening to testers. We talk jargon and about what we do in detail too much. We say stuff like: “We’ve executed 17 test cases in the system test, we’ve automated 50% of the test cases for area C and now have 30% code coverage. We found three major and five medium bugs”. And we are surprised that nobody will listen. We need to talk about the product! So you have found 8 bugs? Who cares? Talk about the risks involved, about the threats to the value of the product.

So maybe testers need to stop talking about testing?
Well, not exactly. We need to remember that the information from testing enables other people to do better work! So the testing itself isn’t always interesting, but the story about the results and the impact on the business is!

Just imagine a conversation between a tester and the PO.
Tester: The testing is going well!
PO: Okay, great. How is the product?
Tester: It sucks!

The role of testers

What is the role of testers? Testers see things for what they are. Testers help others make informed decisions about quality, because we think critically about software. This means creating awareness about the state of the product by staying sceptical when everybody else is sure. So we have to know what our clients want from testing. What information do they need to make these decisions? Project managers have one big question to be answered: are there problems that threaten the on-time, successful completion of the product?

Product Risk Knowledge Gap

I like to explain testing using the “Product Risk Knowledge Gap”, as we teach in RST. Knowledge gaps are the things that we need to learn in order to make good decisions. We need to learn about the product to close the knowledge gap. The more we know, the less risky our decisions will be. Testers should focus on questions like: what does the client need to know right now? What might hinder the successful completion of the product? What role do I need to take on in this situation to ensure we achieve our aims? Does this information matter? To whom?

But there is a way to avoid talking about testing. Just find enough questions and problems so that your stakeholders simply won’t have time to ask you questions back! Also, if you tell a credible story and give them the information they need, nobody cares how you got the information in the first place. In this case you need to stand your ground: tell people what they need to hear, regardless of what they want to hear. Again: it’s your job to see things for what they are. If you give people the chance to doubt what you are doing, because you do not deliver the information they need, they will start asking questions about how you do your job. And if you have to talk about how you do your testing, then be prepared to tell a damned good story about it. Something they can understand and relate to.

The testing story

The testing story from Rapid Software Testing can help you tell that story. Tell a story about the product, what you saw, what you did to gather that information and how valuable that information is. (See “Braiding The Stories” by Michael Bolton.) The testing story contains three stories that feed into each other:

  1. The product story: a qualitative report on how the product can work, how it fails, and how it might fail in ways that matter to our clients.
  2. The testing story: to make the product story credible, the testing story is about how we configured, operated, observed, and evaluated the product; what we actually did and what we actually saw.
  3. The quality of testing story: to make the testing story credible, tell a story about the quality of the testing. Describe why the testing we’ve done has been good enough. It includes details on what made testing harder or slower and what we might need or recommend in order to provide better, more accurate, more timely information.

Modern testing
As testers we do way more than only testing. We are enablers of testing, doing all kinds of other things to be a service to the team and our clients. Researching this, Alex and I found the Modern Testing principles by Alan Page and Brent Jensen. There is a lot of good stuff in there, and yet we feel that there is not enough focus on the actual testing in their principles. Furthermore, we think that the seventh principle, “We expand testing abilities and knowhow across the team; understanding that this may reduce (or eliminate) the need for a dedicated testing specialist”, is formulated too negatively. We do not talk about the dedicated test specialist as a function, but we do like to talk about testing skills. And although we think there should not be a need for a dedicated testing specialist, we see too many people in teams who do not like testing. Passion (or at least motivation) for what you do is a precondition for becoming good at anything. So we created our own testing principles (inspired by the Modern Testing principles, of course):

  1. Deliver insight into status of the product
  2. Practice (and enact) critical thinking
  3. Enable testing: lead, coach, teach, support
  4. Discuss testability
  5. Explore & experiment
  6. Promote waste removal / avoidance
  7. Help to accelerate the team
  8. Advocate continuous improvement
  9. Foster quality culture
  10. Keep critical distance and close social distance

Stop talking about testing?

So do we need to stop talking about testing? Not really. But we need to talk about the product, risks and value more. We can talk about the actual testing to back our story up or when others ask questions. And even then, we need to make our story understandable and relatable to others. Make sure you are a service to the team. We created our own testing principles to explain what value we add. We also have a pretty clear story on what testing is and how it adds value. We got there by practicing our stories many times. But we also figured out our own testing paradigm. That makes it easier to talk about what we do and how we add value.

Software development is research & development: a series of experiments that ultimately lead to a suitable solution. We are dealing with customers who do not know exactly what they want. Furthermore, we are dealing with complexity, confusion, changes, new insights and half answers. That requires research. As a team we are looking for what works and what doesn’t. Testing is of great importance for this! Testing provides insight and overview. Testing shines a light on the actual status of product and project. These insights enable others to make better decisions and eventually make better products.

The slides are here.

Note: this is an Alex Schladebeck and Huib Schoots co-production and this blogpost was co-authored by Alex. So where you read I, you could read Alex and I.

Creating a Test Strategy

At EuroStar 2017 I did an experiential workshop with Pekka Marjamaki and Carsten Feilberg called “The Magic of Sherlock Holmes – Test Strategy in a blink of an eye”. The goal of this full-day workshop was to teach how to create a Test Strategy rapidly so you can start testing as soon as possible. This blog post summarizes what we taught and publishes the example I made for the participants.

What is a Test Strategy?

In the workshop we defined Test Strategy as a solution to a complex problem: how do we meet the information needs of the testers & stakeholders in the most efficient way possible? In Rapid Software Testing we define Test Strategy as “the set of ideas that guide your test design or choice of tests to be performed”. We also talk about logistics and the test plan. Logistics is the set of ideas that guide your application of resources to fulfilling the test strategy, and the test plan is the set of ideas that guide your test project. A Test Plan is the sum of logistics and strategy.

Rikard Edgren did an excellent workshop on Test Strategy at EuroStar 2014. In his workshop he says: Test strategy contains the ideas that guide your testing effort; and deals with what to test, and how to do it. (Some people mean test plan or test process, which is unfortunate…). It is in the combination of WHAT and HOW you find the real strategy. If you separate the WHAT and the HOW, it becomes general and quite useless.

What influences the Test Strategy?

Your Test Strategy is influenced by many factors.

  • Context: your testing is influenced by the details of the specific situation, like the information available, the tester(s) doing the testing, what has been tested before, what tools and environments are available, how much time you have, etc.
  • Missions: what do your stakeholders need to know about the product? 
  • Risks: testing is mostly motivated by problems that might happen (risks). We want to find the important problems in the product. 
  • Product: products have many dimensions. By modelling the product we find important and unique aspects of the product.
  • Quality Criteria: various criteria or requirements that define what the product should be for the stakeholders.
  • Testing: your testing changes the Strategy constantly. Each experiment teaches you more about the product and the risks involved.

How to create a Test Strategy?

To create a Test Strategy, you have to examine the factors mentioned above. This can be done in several activities (which do not need to be done in this specific order). Most likely you will do this in an iterative way, building your Test Strategy as you go.

  1. Missions for your testing
  2. Product analysis
  3. Oracles & information sources
  4. Quality characteristics
  5. Context: project environment
  6. Test strategies

I like to use the Heuristic Test Strategy Model (HTSM). It reminds me what to think about when I am creating my Test Strategy and tests.

The HTSM is a model which consists of several sets of heuristics (more about heuristics here and here).  The full model can be found here.

  • Project Environment helps to understand our context and missions: MIDTESTD (mission, information, developer relations, test team, equipment & tools, schedule, test items, deliverables).
  • Product Elements helps to identify dimensions and factors of the product that could be examined in a test: SFDIPOT (structure, function, data, interfaces, platform, operations, time).
  • Quality Criteria helps to identify value and threats to various criteria of the product: CRISP DUCCS (capability, reliability, installability, security, performance, development, usability, charisma, compatibility, scalability). In this case you could also use the software quality characteristics by the Test Eye.
  • Risk analysis reveals potential problems. Risks motivate your testing, but the testing is risk analysis in itself: after analysing potential risks, your testing informs you about actual problems and teaches you new aspects of the product, which helps you identify new risks. Each test has its own strategy: FDSFSCURA (function testing, domain testing, stress testing, flow testing, scenario testing, claims testing, user testing, risk testing, automatic checking). Coming up with good Test Ideas is an important skill. Erik Brickarp has an excellent blog post called How to come up with test ideas.
  • Identifying Oracles and Information sources helps us learn about the product and identify potential problems. To design a good test strategy we need to know what’s important.

Examples of Test Strategy

Before giving you my example, I would like to link to two great examples of how to create a thorough Test Strategy by Rikard Edgren.

The exercise in the workshop was defined as follows:

Product: tinyurl.com/SherlockES

Approach used to create my Test Strategy below:

  1. Look at mission and things we know (from the class) [1 min]
  2. Explore the Wix platform website [1 min]
  3. Explore the Wix Casies website and create an SFDIPOT mind map while learning about the product [5-10 min]
  4. Risk analysis [5-10 min]
  5. Think of test ideas / approaches to deal with risks [5-10 min]
  6. Wrap-up. Create testing story about what I know already. List next steps [5 min]
  7. Tidy document and add some comments to make it readable for students

Total time used: 50-60 min

1. What do we know (and what important questions do I still have):

  • No developers available –> No access to code
  • Target customers? Who are they?

(Considering the short time period, I chose not to do a thorough context analysis using MIDTESTD; if I had more time, I would do so.)

Mission:

Casies is a web shop built with the Wix platform, where customers can buy a case for their mobile phone. Your mission is to find problems we want to fix before release. The owner of the website needs information to decide if this web shop can be released.

Most important quality criteria:

  • Usability & charisma
  • Reliability and security of the purchase process
  • Functionality
    • Find, sort & filter
    • Purchase, cart, payment
    • Bestsellers
    • Contact
  • Performance

2. Look at Wix platform site

(url: https://www.wix.com/features/main)

Product exploration: look at website about Wix platform

Claims about the product:

  • Easy Drag and Drop
  • Free & Reliable Hosting
  • App market –> what else is there?
  • Mobile Friendly
  • Loads of templates
  • SEO

3. Explore Wix Casies

Start using the product

Product exploration: play with the Casies website using SFDIPOT

Download Xmind mind map (created in Xmind Zen beta4)

4. Risks

The risks mentioned here are probably too vague in some cases. Since risk analysis is a continuous process, I will update the risks later, making them more concrete and actionable. I will also add more risks while testing.

  • Web shop not available
  • Web shop not easy to use
  • Target customers do not like the web shop
  • Web shop is not secure: customer data accessible by 3rd party
  • Customer cannot add items to cart
  • Customer cannot buy items in cart
  • Customer cannot find the items wanted
  • Web shop is not easy to find

5. Risks – Testing

Test ideas

Used document: “Software Quality Characteristics” by the Test Eye

(More info on session based testing: here)

6. Testing Story & Next Steps

Looking at the website I found that the web shop doesn’t look complete to me: there is no possibility to check out and pay. Is this okay? To be able to do thorough testing to fulfil the mission “Your mission is to find problems we want to fix before release. The owner of the website needs information to decide if this web shop can be released” the website needs to be completed and payment functionality needs to be added. I am also interested in the maintenance model: how can I add cases? This would be very handy to create more test data and play with parameters in there to see how that comes out in the shop. Does this need to be part of my testing?

The results of my short initial exploration are captured in the SFDIPOT mind map I made while playing & interacting with the product. After that I made an initial risk analysis. I haven’t gone deep on anything yet.

The next step will be talking to the product owner about my initial test strategy. If the payment module isn’t available soon, I will start with testing the first three charters, although I will not be able to fully do the purchase process charter, so I have to split this charter and focus on the cart part only.

  • Purchase process: cart & payment including investigation of fields
    (using the Test heuristic cheat sheet)
  • Finding cases – sorting & filtering cases
  • GUI tour: check all links and info

Used heuristics

Below is an excerpt from “Software Quality Characteristics” by the Test Eye, used as heuristics for creating this Test Strategy.

Usability

  • Affordance: product invites to discover possibilities of the product.
  • Intuitiveness: it is easy to understand and explain what the product can do.
  • Minimalism: there is nothing redundant about the product’s content or appearance.
  • Learnability: it is fast and easy to learn how to use the product.
  • Memorability: once you have learnt how to do something you don’t forget it.
  • Discoverability: the product’s information and capabilities can be discovered by exploration of the user interface.
  • Operability: an experienced user can perform common actions very fast.
  • Interactivity: the product has easy-to-understand states and possibilities of interacting with the application (via GUI or API).
  • Control: the user should feel in control over the proceedings of the software.
  • Clarity: is everything stated explicitly and in detail, with a language that can be understood, leaving no room for doubt?
  • Errors: there are informative error messages, difficult to make mistakes and easy to repair after making them.
  • Consistency: behavior is the same throughout the product, and there is one look & feel.
  • Tailorability: default settings and behavior can be specified for flexibility.
  • Accessibility: the product is possible to use for as many people as possible, and meets applicable accessibility standards.
  • Documentation: there is a Help that helps, and matches the functionality.

Charisma

  • Uniqueness: the product is distinguishable and has something no one else has.
  • Satisfaction: how do you feel after using the product?
  • Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
  • Attractiveness: are all types of aspects of the product appealing to eyes and other senses?
  • Curiosity: will users get interested and try out what they can do with the product?
  • Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
  • Hype: should the product use the latest and greatest technologies/ideas?
  • Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
  • Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
  • Directness: are (first) impressions impressive?
  • Story: are there compelling stories about the product’s inception, construction or usage?

Reliability

  • Stability: the product shouldn’t cause crashes, unhandled exceptions or script errors.
  • Robustness: the product handles foreseen and unforeseen errors gracefully.
  • Stress handling: how does the system cope when exceeding various limits?
  • Recoverability: it is possible to recover and continue using the product after a fatal error.
  • Data Integrity: all types of data remain intact throughout the product.
  • Safety: the product will not be part of damaging people or possessions.
  • Disaster Recovery: what if something really, really bad happens?
  • Trustworthiness: is the product’s behavior consistent, predictable, and trustworthy?

Security

  • Authentication: the product’s identifications of the users.
  • Authorization: the product’s handling of what an authenticated user can see and do.
  • Privacy: ability to not disclose data that is protected to unauthorized users.
  • Security holes: product should not invite to social engineering vulnerabilities.
  • Secrecy: the product should under no circumstances disclose information about the underlying systems.
  • Invulnerability: ability to withstand penetration attempts.
  • Virus-free: product will not transport virus, or appear as one.
  • Piracy Resistance: no possibility to illegally copy and distribute the software or code.
  • Compliance: security standards the product adheres to.

Performance

  • Capacity: the many limits of the product, for different circumstances (e.g. slow network.)
  • Resource Utilization: appropriate usage of memory, storage and other resources.
  • Responsiveness: the speed of which an action is (perceived as) performed.
  • Availability: the system is available for use when it should be.
  • Throughput: the product’s ability to process many, many things.
  • Endurance: can the product handle load for a long time?
  • Feedback: is the feedback from the system on user actions appropriate?
  • Scalability: how well does the product scale up, out or down?

Final thoughts

As my example shows: you can create a Test Strategy in an hour. Of course this Test Strategy is not complete. But after the first tests (3 sessions) we learn and discover more about the product, so we can identify new risks, which inform new missions and will help us come up with new Test Ideas. Our Test Strategy will grow over time!

Extra info:

  • The slides are here.
  • The pictures and flipcharts from the workshop are here.

References: