Category: Context-Driven

Do we need testers?

The first version of this post was published on LinkedIn on February 5, 2020, titled “Do we need testers? No! Do we need skilled testing? Yes!”. I have added my new insights to this blog post.

Last week I gave a talk at the Agile, Testing & DevOps Showcase in Amsterdam. My topic was “Testing in modern times“.

In agile and especially DevOps approaches the motto is: automate everything! Companies like Facebook claim they do not have testers at all. Microsoft only has SDETs (software development engineers in test), and other companies are T-shaping developers to do the testing. The new kid on the block is AI and machine learning, which I hear people claim will definitely replace testing. What is really happening globally?

Do we no longer need testers? I am not sure anymore. Why? Because testers have a bad name. If you cannot automate nowadays, you are no longer valuable, people say. That makes me sad. I think skilled testing is super important! Testing informs decisions about value, quality and risk by learning about the product (and a serious professional tester will also give you insight into the status of the project or team they are working in). Modern technology and tooling are reducing the need for dedicated testers. Not by replacing testers, but by reducing certain types of risks we used to test. More and more people are getting obsessed with “automate everything”. Sadly, even some testers I meet are obsessed with trying to automate everything they do…

How can we make valuable software for our clients? I believe that personal leadership and collaboration ultimately make the difference. The quality of software is crucial nowadays. Therefore I like to focus on an integrated quality approach. As a tester, a mentor and a coach, I help people and teams learn effectively and develop continuously, with attention to sustainable adaptability that improves (team) results and ways of working. It enables teams to create more customer value by building quality solutions!

In IT we need insight into risks. Risks and value. For that we need to learn continuously. And I think we need smart people who do skilled testing, determined to find problems that matter. Teams need to create insight into and an overview of the risks we take by creating, releasing and using (IT) products. Often people with excellent testing skills excel in doing this… I hear new job titles like “Quality Coach” and “Quality Engineer”. Is this the way to go? Well, if that solves the problem of “automation obsession” and the lack of testing skills in teams, I am game.


So far the original post on LinkedIn. Recent events make me doubt whether we will ever get to a point where we do not need dedicated testers.

Dan Ashby reacted on LinkedIn:

"It's a really interesting question. My take: yes, we need testers... because we need skilled testing, and the Dev communities haven't kept up with what skilled testing is and how to do it. One thing too: Facebook use offshore testers (lots of them via a tester contracting company - I know people that work at FB and at the 3rd party testing company). Also MS have done a U-turn too, and they also employ testers again as well as SDETs. They also have test manager roles again too. Maybe lots of the big companies have hit that realisation that they needed good testing (and hence needed the testers who have those skills)? If so, hopefully the smaller companies that tend to imitate the big companies will soon follow suit. 😁"

I like what he says here: “we need testers… because we need skilled testing.” Although there are some developers who have really good testing skills, many are not interested in testing, nor in learning to do skilled testing. So I think he has a point there. But dedicated testers alone do not solve the problem. We need better testers and better thinking about testing too!

I see a huge fixation on “test automation” and this is causing us to lose connection with the human, social purposes of software development and testing. The essence of software development is that during development we learn about what we need, what the customer really wants and how the product we are building actually works. This is research and development: learning along the way, and needing sense-making and feedback to get it right.

This “Vision on the Future of Software Testing” has nothing to do with skilled testing. It shows that even an institute like ISTQB understands neither IT in general nor testing. That makes me so sad!

Test approaches like TMap and ISTQB neglect the human aspects of testing. They approach testing with mechanistic thinking and do not deal with the complexity and uncertainty that developing software and working with people bring. Leading test experts told me we cannot make testing too complicated, or else testers will not understand.

Recently the new TMap book was published, called “Quality for DevOps teams”. Reading how TMap deals with risk analysis and test strategy gives me goosebumps. The risk analysis is just a list of quality attributes with a simple calculation (possible impact × chance of failure); based on that number a “risk class” (high, medium, low) is assigned, and based on the risk class the test intensity is assigned in dots. See for yourself what it looks like here and here. Now let’s look at some quotes from the book that made my eyes roll.
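To make concrete how little that arithmetic captures, here is a minimal sketch in Python (my own illustration; the rating scales, thresholds and dot mapping are assumptions, not taken from the book):

```python
# Sketch of a TMap-style risk classification (illustration only; the scales,
# thresholds and dot mapping below are my assumptions, not the book's).

def risk_class(impact: int, chance_of_failure: int) -> str:
    """Classify risk from impact and chance of failure, each rated 1-3."""
    score = impact * chance_of_failure  # "possible impact x chance of failure"
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Test intensity expressed in "dots" per risk class (assumed mapping).
TEST_INTENSITY = {"high": "●●●", "medium": "●●", "low": "●"}

for attribute, impact, chance in [("usability", 3, 2), ("security", 1, 1)]:
    cls = risk_class(impact, chance)
    print(f"{attribute}: class={cls}, intensity={TEST_INTENSITY[cls]}")
```

Two numbers multiplied, bucketed and turned into dots; all the judgment about users, value and context is left outside the model.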

Chapter 47 “Experience-based testing”

“Exploratory testing is an experience-based approach of testing, the most important approach of experience-based testing in our opinion. We distinguish coverage-based and experience-based testing. Others use terms like scripted testing and free-style testing, but we prefer the division in focus on either experience or coverage.”

All testing that you do uses experience, because there is no way you can shut it off. And doing any test will give you some coverage. I guess they just do not know how to talk about coverage in a way that makes sense. Why make this strict distinction into only two categories? There are so many more ways to classify test techniques (note: what TMap calls an “approach” is called a “technique” in BBST). There is no strict distinction between test techniques. See slide 62 of BBST Test Design: “Every test addresses all of these. A specific technique typically addresses 1 to 3 of them, leaving the rest to be designed into the individual test“. Personally I like the way BBST classifies techniques by looking at the driving ideas behind the testing.

“The main downside of applying error guessing is the lack of documentation. Therefore, tests are not reproducible. This may result in a developer not being able to investigate an anomaly, the tester not being able to retest a fix, and the test cannot be added to a regression test set.”

Why do tests need to be reproducible? If a tester is capable of telling or showing the developer what goes wrong, you do not need any documentation. I think with enough product knowledge, it is not that difficult to retest. So we are solving a problem in the wrong way, aren’t we?

Chapter 48 “Is there any value in unstructured testing?”

“Any testing lacking a plan containing what to do and what to expect of a system, or lacking preparation of the test, is unstructured. This is also called ad-hoc testing. Some people see a great advantage in unstructured testing because, as they say: “You can start testing right away.” That is, without “losing” any time on preparation.”

The only structure TMap knows is plans and test cases. Structure is “the arrangement of and relations between the parts or elements of something complex”. Michael Bolton wrote about this here. Testing is about learning, and learning involves mental models. TMap seems to pay no attention at all to how people learn. Deep learning is a confusing process in the beginning, but it gets clearer along the way. By letting it rest (defocusing), we give our brains the chance to process the learned information and integrate it with the models we have in our heads. So good (mental) models are important. These “mechanistic test approaches” forget the whole learning part.

“When you have an IT system that is of good quality, the testers do their unstructured testing and don’t encounter any faults or failures. Can they now say the quality is good and that there are no significant risks? Did they really measure the quality and risks? No, the only thing they can truly say is they did “some” testing and didn’t find any problems. However, they cannot explain which requirements or quality risks have been covered. They are not even sure which parts of the system have been covered.”

This is an interesting move by TMap, which is called an “appeal to ignorance”. The testers cannot explain, so the approach must be wrong! But is the approach the problem here, or are the skills of the testers the problem?

Considerations when testing a software application in a context-driven way

Written by: Joris Meerts (main author), Huib Schoots and Ruud Cox, all working at Improve Quality Services in the Netherlands.

Introduction
One of the cornerstones of context-driven testing (Lessons Learned in Software Testing, Kaner, Bach & Pettichord, 2001) is that the way of working when testing software is determined by the situation in which the tester finds himself. A good approach is not driven by a prescribed process or by a collection of steps that one habitually executes. Instead, it arises from the use of skills that ensure that the testing matches the circumstances of the software project. Within the framework of context-driven testing, a wide range of skills is discussed, including critical thinking, modeling and visualization, note-taking and applying heuristics. These skills are not easy to learn. One way to sharpen them is to do exercises and discuss the results. This was the purpose of a meeting of four testers at Improve Quality Services. In the following report we elaborate on the exercises that were done during that meeting and on the results.

Purpose of this report
As we pointed out in the introduction, it is not easy to learn the skills associated with context-driven testing. Practitioners of context-driven testing regularly refer to professional literature that describes these skills. But applying these skills is a process of trial and error. That is why it is important to simulate real-life situations and learn from the exercises we do. That was the purpose of the meeting. In this report we share the steps we took and the results of those steps, for example in the form of notes, sketches or models. We also share our experiences, to make it easier to apply the skills in practice.

The meeting
The meeting was held on February 14, 2017 at the office of Improve Quality Services in Eindhoven. The participating testers, Jos Duisings, Ruud Cox, Joris Meerts and Huib Schoots, are all employees of Improve Quality Services.

Background information
Date: 14 February 2017
Location: Improve Quality Services Eindhoven
Start: 9 am
End: 5 pm
Team A: Jos Duisings, Ruud Cox
Team B: Joris Meerts, Huib Schoots

The assignment

The assignment of the meeting is to select and execute tests on a software application. It is carried out by two teams of two testers each. This makes it possible to take different approaches and to provide feedback from one team to the other.

Division in sessions
The meeting is divided into sessions in advance. The sessions each have their own goal and their own evaluation.

  • Welcome & introduction
  • Creating a coverage outline (per team)
  • Debriefing the coverage outline
  • Drafting a test strategy (as a group)
  • Debriefing the test strategy
  • Selecting a few charters
  • Performing a test session (charter) per team
  • Debriefing the test session
  • Retrospective

Introduction of the application to be tested
The application to be tested has been selected prior to the meeting. We choose the application Task Coach (Task Coach, 2018), an open source to-do list manager. This software is publicly available and runs on multiple platforms (Windows and Mac OS). The application is relatively simple but still offers sufficient complexity to be able to test thoroughly. According to the description on the website, Task Coach is “a simple open source to-do manager to keep track of personal tasks and to-do lists. It is designed for composite tasks, and also offers effort tracking, categories, notes and more.” The participants have not worked with Task Coach before and are therefore not familiar with this specific piece of software.

Creating a coverage outline
Because Task Coach is new for all participants, the software will have to be explored. Only then can we say more about what can be tested in the given time. Such an exploration of the product can be done in many different ways. During the meeting we chose to make a ‘product coverage outline’, inspired by the Heuristic Test Strategy Model of James Bach (Bach, 1996). The product coverage outline provides an overarching view of the product in which the functionality of the software is divided into seven different categories. The categories are described in the mnemonic SFDIPOT. The software tester is reminded by this heuristic that when mapping a product he can look at Structure, Function, Data, Interfaces, Platform, Operations and Time. By looking at the application from these perspectives, a relatively complete sketch is created of what the software product is and what it can do. However, the mnemonic does not only serve for exploration. The ‘map’ of the application that is created in this way can also be used to visualize the coverage of the tests and to select the tests to be carried out.
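As an illustration of what the start of such an outline can look like (my own sketch, not the teams’ actual deliverable; the Task Coach entries are assumptions):

```python
# Sketch of a product coverage outline seeded with the SFDIPOT categories.
# The Task Coach entries are illustrative assumptions, not the real outline.
coverage_outline = {
    "Structure":  ["task file on disk", "help file", "plugins"],
    "Function":   ["create task", "subtasks", "templates", "reminders"],
    "Data":       ["dates and times", "budgets", "categories", "notes"],
    "Interfaces": ["GUI", "import/export", "printing"],
    "Platform":   ["Windows", "Mac OS"],
    "Operations": ["primary to-do flow", "use by multiple users"],
    "Time":       ["time zones", "daylight saving time", "date formats"],
}

for category, items in coverage_outline.items():
    print(f"{category}: {', '.join(items)}")
```

Whether it lives in a mind map, a marked-up document or a data structure, the value is the same: a navigable model of the product against which test coverage can later be mapped.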

The exercise
We decide that both teams will be given half an hour to create a product coverage outline and that each team is free to decide in which format the outline is presented. The choice of a half-hour time limit is mainly motivated by the fact that the software must also be tested before the end of the day. There is no room to spend much more time on making the product outline.

It turns out that half an hour is not enough time for mapping an application that, despite the fact that it claims to be simple, still contains a lot of functions. Team B decides to create a mind map in which the seven categories are elaborated. This mind map is not finished after half an hour. Especially the branch ‘Function’, in which the functions of the application are described, is very large and has a deeply nested structure. To build this structure, team B explores the application by clicking through it and captures the functionality (in text or in image) in the mind map. Due to time constraints, team B presents a product sketch that is not finished. This triggers a discussion about what we expect the mind map to look like given the available time. The expectations, such as completeness, can be related to the purpose of the mind map. Through a discussion it becomes clear that no clear goal has been defined for drawing up the product sketch and that the result of the assignment is difficult to assess.

Preference for the mind map
A mind map is a tool that is used by software testers on a regular basis to create a product outline. The tree structure of the mind map lends itself to classifying (grouping) properties. It is easy to start with an empty mind map, enter the categories from SFDIPOT and expand these. Moreover, a navigable structure is created in this way. The mind map forms a kind of geographical map in which you can choose to zoom in and zoom out to determine the location of something in relation to the whole. Software essentially consists of zeros and ones and the software product is an abstract concept. By making the ‘map’ the abstract software gets a concrete shape. A mind map can also be used as an instrument for reporting on the results and the progress of the tests. So there are a number of good reasons to work with a mind map from the beginning.

Team A starts the session by creating a mind map but after a short time it switches to a different form. When studying Task Coach, the team finds out that there is a help file in the application. After a short study, the help file appears to describe a large part of the application. The categories from SFDIPOT all come back in the file to a sufficient extent and so Team A chooses to use this file, converted to Word format, as a product outline. By marking the categories in the document with different colors, structure is added to the document. In addition, the table of contents provides a global overview of the functionality in the application; the details are mentioned in the paragraphs. In this way, team A delivers a product sketch that is relatively complete within the stipulated time.

Drafting the test strategy
The result of the first session is a product outline. With this product sketch and the knowledge gained during the exploration of the software it is easier to discuss a test strategy. Because exhaustive testing of an application is not feasible for the majority of software applications and because exhaustive testing in many cases does not yield better information, it is desirable to make choices with regard to the test work to be performed. We find these choices in the test strategy.

By making the product outline we have gained insight into a number of aspects of the application. We take these aspects into account and we hope to indicate per category whether it needs to be tested and, if so, at what depth. The most decisive factor in that decision is the risk the company runs when the software is put to use. Risk is a multifaceted concept and can only be determined if the software product is viewed from different angles. In any case, the perspective of the tester alone is not sufficient to get a good picture of the risks of a software product. During the second session it becomes clear that a representative is missing who can reason about risk from the perspective of a user or of the organization. This point is also discussed in the debriefing of the second session and we conclude that for that reason the risk assessment is incomplete.

Assessing risk
In the Heuristic Test Strategy Model three dimensions are discussed that influence a test strategy. These dimensions are the project, the product and the requirements that are set for that product; the quality characteristics. Risks play a role in each of these dimensions. In the exercise, we decide not to look at the project risks. As far as product risks are concerned, we make our own assessment with regard to the categories of the product. We consider categories that are used the most, are the most prone to errors and categories where a possible error has the most impact. Because there are no users involved in the exercise, we try to place ourselves in the role of the users. This way the following product categories are discussed.

  • The primary process from the Operations category. This will tell us which functionality is used the most.
  • Using the application by multiple users from the Operations category. We recognize risks associated with synchronization: tasks are not updated properly.
  • The importing and exporting of tasks from the Interfaces category.
  • Dealing with reminders from the Function category. Reminders are crucial for not missing appointments.
  • Dealing with date and time from the Time category. Date and time play an important role in planning tasks, they touch the core of the application.

After we have identified the categories of the product, we try to assess which quality aspects of the product are the most in demand. Here too, it has been decided to use our own assessments as a guideline. The following quality aspects are named:

  • Functionality,
  • Usability and
  • Charisma

Coverage
We briefly discuss the coverage of the tests that we want to carry out. Since the tasks that are maintained in Task Coach can have different statuses, the tester can, from the perspective of state transitions, visualize the coverage level of the tests carried out. The state transitions are covered in the different paths through the application that a user travels. These paths are helpful when mapping the coverage. Furthermore, coverage can be looked at from the perspective of the test data used.
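A minimal sketch of that state-transition idea (the statuses and allowed transitions below are assumptions about Task Coach, purely for illustration):

```python
# Sketch: tracking state-transition coverage of a task's life cycle.
# The statuses and allowed transitions are assumptions, for illustration only.
MODEL = {
    ("inactive", "active"),
    ("active", "completed"),
    ("completed", "active"),   # reopening a completed task
    ("active", "deleted"),
    ("inactive", "deleted"),
}

covered = set()

def record(old_status, new_status):
    """Note a transition exercised during a test session."""
    covered.add((old_status, new_status))

record("inactive", "active")
record("active", "completed")

missing = MODEL - covered
print(f"covered {len(covered)}/{len(MODEL)} transitions")
print("still to exercise:", sorted(missing))
```

The same bookkeeping works for the paths a user travels through the application, or for the test data used.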

Drafting the charters
We decide to make two charters (Session Based Test Management) and to execute them. We prioritize the product categories mentioned in the risk analysis based on our own insights. From this we conclude that ‘the primary process’ and ‘dealing with date and time’ are the two most important product categories. We translate these two categories into charters. In one charter we will look at the primary process, in the other charter the handling of date and time will be investigated. Each team selects a charter. After the charter has been executed, each team reports the work done to the other team in a debrief session. During this short evaluation, the tester will tell his story and answer any questions. The evaluation has several goals, such as assessing whether the mission is successful, translating new insights into follow-up sessions, looking at notes and descriptions of findings and coaching the tester. Due to the debrief, the understanding of the application under test grows.

The charter for testing the primary process
To formulate a starting point for this charter, we look at the risks that can be associated with the execution of the primary process. There is a risk that no task can be created. Or it could be that the task is created but errors occur when saving or modifying a task or changing the status of a task. These considerations lead to the following mission that serves as the starting point for the charter: “Explore the basic flow of tasks using CRUD to discover/test the life cycle of a task”.
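As an aside, such a charter fits in a tiny SBTM-style record; a sketch below (the field names and values are my own assumptions, not a prescribed format):

```python
# Sketch of a Session-Based Test Management charter record (field names and
# values are illustrative assumptions, not a prescribed SBTM format).
charter = {
    "mission": ("Explore the basic flow of tasks using CRUD "
                "to discover/test the life cycle of a task"),
    "team": "A or B",          # each team selects one charter
    "duration_minutes": 90,    # assumed session length
    "notes": [],               # session notes taken while testing
    "bugs": [],                # findings to discuss in the debrief
    "issues": [],              # questions and proposals for follow-up charters
}

charter["bugs"].append("undo of a deleted subtask does not restore the tree")
print(charter["mission"])
```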

The charter for testing the handling of date and time
Prior to drafting the charter for testing the handling of date and time, the following test ideas are mentioned:

  • Start and end times are the same
  • The end time is before the start time
  • The date of a task is far in the past or far in the future
  • System time
  • Work in different time zones
  • Dealing with winter time and summer time
  • Different date formats

With these test ideas in mind, the following charter is drawn up: “Explore tasks using date/time to discover date and time related bugs”.
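To make a few of those ideas concrete, here is a small sketch (my own illustration; the chosen boundary values are assumptions) that turns them into candidate test data:

```python
# Sketch: turning the date/time test ideas above into candidate test data.
# Purely illustrative; the boundary values chosen are assumptions.
from datetime import datetime, timedelta

now = datetime.now()
date_time_cases = [
    ("start equals end",  now,                          now),
    ("end before start",  now,                          now - timedelta(hours=1)),
    ("far in the past",   datetime(1853, 1, 1),         datetime(1853, 1, 2)),
    ("far in the future", datetime(9999, 12, 30),       datetime(9999, 12, 31)),
    ("EU DST switch",     datetime(2017, 3, 26, 1, 59), datetime(2017, 3, 26, 3, 1)),
]

for name, start, end in date_time_cases:
    print(f"{name}: start={start.isoformat()} end={end.isoformat()}")
```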

Execution of the charters

Testing the primary process
The charter for testing the primary process (basic flow) begins with the creation of a task. Several tasks are created using the button on the taskbar. Subsequently, several underlying tasks are created for a single task, up to 8 levels deep. Under one of the underlying tasks, a new tree structure of underlying tasks is created. During the creation of the task structure, questions arise about the maximum depth of the task structure and about the sorting of the underlying tasks. In addition, it is found that it is possible to delete underlying tasks and then to undo these deletions. It is proposed to further investigate this functionality in a separate charter, since it is suspected that this is not always going well. The resulting tree structure is removed by means of the ‘Delete’ button on the taskbar. After this, all tasks have disappeared.

On the taskbar there is also a button that offers the possibility to create new tasks from a template. There are two default templates available. Tasks are created with these templates. It appears that it is not possible to create underlying tasks from a template. The question is why this functionality is not available for underlying tasks. Finally, the created tasks are deleted.

To get a better picture of how templates work in the application, the help text about templates is read. Furthermore, the team explores the menu structure in search of more functionality that could be related to the use of templates. From the ‘File’ menu it is possible to save tasks as templates, to import templates and to edit templates. A template is created.

From the dialog that appears when choosing to import a template, it becomes clear that template files have the extension ‘.tsktmpl’. But when a task is saved as a template, it is not possible to find out whether a template file is created from this task and, if so, where this file is stored. The newly created templates are visible when one chooses to create a new task based on a template from the toolbar.

A task is created and saved, and it is checked whether this task can be imported. Again, all created tasks are deleted, this time by selecting the menu option Edit > Delete. The team looks a bit further at the options for removing tasks. It appears that there is a shortcut combination for removing tasks: Ctrl + DEL. The team wonders why it is not possible to simply use the Delete key.

In summary, this session looked at creating, viewing, editing and deleting tasks and underlying tasks. A finding has been made regarding the undoing of changes. It turns out that undoing the removal of tasks in a tree structure of tasks and underlying tasks does not produce the same structure as the original structure. Following the session, it is proposed to look more extensively at the use of templates in new charters. Also the functionality for creating, viewing, editing and deleting tasks and underlying tasks deserves more attention, especially because this functionality can be called from many different places in the application. The various possibilities have not all been tested in the first session. Furthermore, a charter could be made for creating a task depending on another task.

Feedback
At the end of the session, the report of the session was discussed with a member of the other team with the aim of obtaining feedback about the course of the session in a debrief. The feedback shows that there was not enough time to complete the charter. To complete the charter another thirty to sixty minutes would be needed. It is noted that the description of the detected bug is not clear enough. It also becomes clear that not all possible statuses of a task have actually been addressed in the session. Finally, a new charter is suggested for testing filtering and sorting.

Testing the handling of date and time
The session is started with a clean installation of the application. First a new task is created. Attention is given to the Data tab, on which various data can be entered. The date and time can be changed in separate input fields. The time can be changed by means of a dropdown or by using the arrow keys on the keyboard. It turns out that ‘00:00’ cannot be used as a time. It appears that the range of times can be set under the ‘Preferences’ menu. After an adjustment, ‘00:00’ can be entered.

It is noted that the planned start date and the planned end date are not included in the calculation of duration. But it is unclear what these dates are used for. When the planned end date is in the past, the task turns red. When the planned end date is in the future, the task turns purple. It is remarkable that the planned start date can be after the planned end date. Apparently there is no validation on the order of dates. Dates that are far in the future are accepted as valid dates. To change a date, the task first has to be closed and then opened again.

An interesting finding occurs when the year of the planned start date is set to a year before 1900. If this is the case, the planned start date cannot be changed after closing and reopening the task. While studying this finding, the team finds out that the application creates a log file (in the Documents folder) in which any errors are logged. The attempt to adjust the planned start date results in the following error message: “ValueError: year=1853 is before 1900; the datetime strftime() methods require year >= 1900”. After this finding, further attention is paid to other functionality around time and date. For example, a reminder on a task is set to 5 minutes. The reminder works.
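That error message comes from the Python runtime underneath Task Coach: under Python 2, datetime.strftime() refused years before 1900, a restriction Python 3 later lifted. A minimal sketch of the behavior:

```python
# Sketch: the runtime limitation behind the logged error. Under Python 2,
# datetime.strftime() raised ValueError for years before 1900; on Python 3
# (used here) the same call succeeds.
from datetime import datetime

planned_start = datetime(1853, 1, 1)
try:
    print("formatted:", planned_start.strftime("%Y-%m-%d"))
except ValueError as error:
    # On Python 2 this branch would run, matching the Task Coach log line.
    print("logged:", error)
```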

In addition to planning tasks, Task Coach can also be used to keep track of the time spent per task. After some research it appears that this functionality is complex and that it is not easy to find out how Task Coach deals with time spent. Time tracking can be started with a button, but can also be done by increasing the time spent on a separate tab. The team notices that when entering an hour of time spent, the actual time spent shown on the tab is just a few seconds. It is possible to aggregate time spent at month level. If we do this, we see that the descriptions that we listed for each entry are combined into a single text field.

The time that an application uses depends on the time setting on the system. For this reason, the team manipulates the system time of a MacBook Pro. The calendar is changed to a Coptic calendar. This adjustment causes the application to crash after startup. In the log a line appears mentioning an error relating to an invalid date format.

Finally, the team looks at the connection between the budgeted time and the time spent. Task Coach is able to calculate the remaining time on the basis of these variables. Some tests are performed with variations in hours, minutes and seconds. Task Coach handles all these variations correctly.

Feedback
A team member verbally reports on the session to a team member of the other team in a debrief. The feedback shows that this report contains a lot of detailed information. To be able to place the detailed information, a framework is needed. The product outline could have been used as a framework. One new charter emerges from the feedback, namely the testing of the synchronization of tasks between systems with different system times.

Conclusion
The meeting is concluded with the completion of the test sessions. Looking back, we conducted, in one day and with two teams, a number of tests on an application that was unknown to us beforehand. We have shown that techniques and methods exist that help the tester to acquire knowledge about the application, develop a strategy and perform tests. Exploring the application provides insights that serve as a starting point for risk assessments and for conducting test sessions with a well-defined goal. By quickly arriving at concrete and well-substantiated tests, the tester provides valuable feedback on the application in a short period of time. The test sessions are debriefed, and this provides starting points for further deepening where necessary.

Due to the popularity of agile methods, it is common for the tester to be asked to test something, while at that moment he has incomplete insight into the functionality of the application. Moreover, the tester is expected to deliver results within a limited time. It requires an approach in which the tester quickly draws up and executes a strategy by modeling, thinking critically, discovering and investigating. This is the approach that we applied during the meeting described above.

A Clash of Models
The creation of the product coverage outline led to quite different outlines being delivered by the teams. These differences and their possible causes are discussed in a separate article, written by Joris Meerts and Ruud Cox. The article is called A Clash of Models.

References
Bach, J. (1996). Heuristic Test Strategy Model.
Kaner, C., Bach, J., & Pettichord, B. (2001). Lessons Learned in Software Testing. John Wiley & Sons.
Task Coach. (2018). Retrieved from Task Coach: http://taskcoach.org/

Test improvement in an agile/CDT environment

This post and the article were updated on April 4, 2017.

One day, during a team meeting at Joep‘s previous job at a bank, the Team Manager of Testing listed a number of topics his testers could work on in the coming months. One of those topics was “testing maturity”. This topic was on the list not because this manager was such a fan of maturity models, but because the other team managers (Business Analysis and Development) had produced one for their own teams and higher management would like to have one for testing as well. And although Joep saw little value in a classic five-tiered maturity model either, he was intrigued by the question: what can you do with respect to maturity models that is of value?

Joep asked Huib to help him think of a way to create a valuable, context-driven way to work on maturity. Since Huib had been working for the same bank, they met and discussed the possibilities. Soon they found out that the criteria should be variable since maturity depends on context. They started experimenting with stack ranking and quite soon they had the first version of their “maturity model”.

Maturity or improvement?

After discussing the first version of the article with James and Michael, we felt the need to update our article. Their comments helped us realize that we needed to explore maturity and maturity models a bit more. After doing this, we decided to rename our model to a test improvement model.

Maturity mission: better testing

What is the mission of a maturity assessment? We think the assessment should be a pathway to better testing. As a part of solving problems, we think the mission should be: “An investigation of strengths and weaknesses. A starting point for a discussion about potential (testing) problems and how to solve them.” Or as James Bach says: a maturity model is a plan for achieving maturity. And this is exactly what we created. Our maturity model isn’t anything like the staged, fixed models available in the market. Maybe we shouldn’t call our method a maturity model, since basically it isn’t one. It is a tool designed to help teams assess and improve their testing. It is a method, supported by a card game, that helps teams retrospect and identify strengths and weaknesses in their way of working, the stuff they create, the team, their skills and their context.

Result

After a first try-out at the bank where Joep worked, we let it rest for a while. After a couple of months we wrote this article. It is the first version and it needs to be refined and polished. The heuristics lists are probably too long and need to be reduced. We think of this model as a card game that can be played with teams.

Currently we are also working on an agile version of this model: a card game for agile teams to assess their “maturity” and help them find possible areas for improvement. More about that later.

We are curious about your thoughts. What do you think? Maybe you want to try the game? Feel free to try it out. We hope you will share your experiences with us.

Article (pdf) – Card game (pdf)

The slides of the meetup about our model are here.

Context Driven Clichés

Today I read a blogpost by Olaf Agterbosch on the ViQiT site with the intriguing title “Context Driven Clichés“. Since it is written in Dutch, I will first translate his post.

Do you recognize the following? In recent years I have often heard the words “Context Driven” when it comes to developments in ICT. Apparently those using them consider the context in their activities, in other words the overall environment in which their activities take place. As if nobody kept that in mind before, these people talk about the importance of the environment and the way their products, services and processes have to connect to the context.

I sometimes wonder why people choose to kick in an open door and yell how well it’s working. But what’s even worse: people parroting it. Everyone jumps on the bandwagon and before you know it the latest hype is born. A hype that is fully wrung out by, for example, consultants, who will not fail to emphasize the importance of this development. You really cannot live without it!

Old wine in new bottles. Old wine in agile bags. Old wine in faded bags…

Apparently there is marketing potential.

Context Driven testing is exemplary of an overhyped branch on the agile tree. The nice thing is that there is nothing wrong with it: you are aware of the environment in which you perform your job. What I regret is that many people get sucked into this development, running along with the latest hype.

Does it pay? Possibly. A prediction of the trends we will get occupied with in the coming years:

Context Driven Requirements engineering;
Context Driven Application Management;
Context Driven Directed Lining;
Context Driven Project Management;
Context Driven CRM;
Context Driven Innovation;
Context Driven Migration;
Context Driven Whatever etc.

What did you say? We were already doing that, weren’t we? We obviously haven’t been paying attention.

Now let us just go back to work. And yes, work like the good ones among us have always done it: in a Context-Driven way. Or Risk-Based. Or Business Case Driven. But please stop those cheap semantic tricks. Try to be less weighty when performing the same trick again. The trick won’t get better doing this. Try to simply create better real added value. Perhaps less familiar to you and others, but effective!

Not sure what his problem with context-driven is…

Fortunately, I hear about people becoming more and more context-driven. To me, being context-driven is not just keeping the context or environment in mind; it is way more than that… As I wrote in a post on the DEWT blog: “Context-driven testing made my testing more personal. Not doing stuff everybody does, but it encouraged me to develop my own style. It is a mindset, a paradigm and a culture. It is not only about what I do, it is more about who I am!”

A hype? Maybe, because it gets more and more attention. Although I think it isn’t one. Far from it! A hype, according to The Free Dictionary, is:

  1. Excessive publicity and the ensuing commotion
  2. Exaggerated or extravagant claims made especially in advertising or promotional material
  3. An advertising or promotional ploy
  4. Something deliberately misleading; a deception

I can’t see why context-driven testing would be a hype. Exaggerated? Misleading? A promotional ploy? Excessive publicity?

Old wine in new bags? I don’t think so. The saying means “things are presented differently, but not fundamentally changed.” I think context-driven testing is fundamentally different. Of course testers have been taking context into consideration for years. But how well do they do that? I still hear things like “that’s how we do things here” or “you have to follow the standard/procedure” quite often. I almost never hear testers speak about serious context analysis and adapting their approaches accordingly. But there is a lot more to context-driven testing than taking context into consideration. To name a few aspects:

  • developing and using skills to effectively support the software development project
  • teaching testers to be less dependent on documentation
  • modeling by mapping various aspects of the software product
  • diversifying tactics, approaches and techniques
  • thinking skills like logical reasoning, using heuristics and critical thinking
  • dealing with complexity and ambiguity, coping with half answers and changes

In the last paragraph Olaf states that the good testers among us have always been using a context-driven approach. Really? How does he define good testers? And if they use a context-driven approach, why is he complaining? Unfortunately I see too many testers not using context-driven approaches and creating a lot of waste!

Then he continues: “try to be less weighty when performing the same trick again. The trick won’t get better doing this. Try to simply create better real added value“. If you look at testing as performing a trick, I can see how Olaf sees context-driven testing as a cheap semantic trick! The opposite is true: adding real value is exactly what context-driven testing is trying to do. Context-driven testers do this by focusing on their skills, using heuristics, considering cost versus value in all they do, and learning continuously through deliberate practice. When I look at the most commonly used test approaches in the Netherlands (TMap and ISTQB), I wonder how they add value by focusing on standards, document-heavy procedures, many templates, best practices and the same approach every time.

That leaves me with one question… What exactly is Olaf’s problem with context-driven?

Must read: A Context-Driven Approach to Automation in Testing

Test automation is a hot item in our industry. Many people talk about it and much has been written on the topic. Sadly, there are still a lot of misconceptions about test automation. Also, some people say context-driven testing is anti test automation. I think that is not true. Context-driven testers use different names for it and they are more careful when they speak about automation and tooling to aid their testing. Also, context-driven testers have been fighting the myth that testing can be automated for years. In 2009 Michael Bolton wrote his famous blog post “Testing vs. checking“, later followed up by “Testing and checking refined” and “Exploratory testing 3.0“. These tremendously important blog posts teach us how context-driven testers define testing and that testing is a sapient process: a process that relies on skilled humans. Recently Michael Bolton and James Bach published a white paper to share their view on automation in testing: a vision of test automation that puts the tester at the center of testing. It is a must-read for everyone involved in software development.

The following text is taken from the white paper “A Context-Driven Approach to Automation in Testing”, written by James Bach and Michael Bolton.

We can summarize the dominant view of test automation as “automate testing by automating the user.” We are not claiming that people literally say this, merely that they try to do it. We see at least three big problems here that trivialize testing:

  1. The word “automation” is misleading. We cannot automate users. We automate some actions they perform, but users do so much more than that.
  2. Output checking can be automated, but testers do so much more than that.
  3. Automated output checking is interesting, but tools do so much more than that.

Automation comes with a tasty and digestible story: replace messy, complex humanity with reliable, fast, efficient robots! Consider the robot picture. It perfectly summarizes the impressive vision: “Automate the Boring Stuff.” Okay. What does the picture show us?

It shows us a machine that is intended to function as a human. The robot is constructed as a humanoid. It is using a tool normally operated by humans, in exactly the way that humans would operate it, rather than through an interface more suited to robots. There is no depiction of the process of programming the robot or controlling it, or correcting it when it errs. There are no broken down robots in the background. The human role in this scene is not depicted. No human appears even in the background. The message is: robots replace humans in uninteresting tasks without changing the nature of the process, and without any trace of human presence, guidance, or purpose. Is that what automation is? Is that how it works? No!

The problem is, in our travels all over the industry, we see clients thinking about real testing, real automation, and real people in just this cartoonish way. The trouble that comes from that is serious…

Read more in the fabulous white paper “A Context-Driven Approach to Automation in Testing” by James Bach and Michael Bolton.

Why testers are not taken seriously…

Some time ago I was invited to talk to a group of testers at a big consultancy firm in the Netherlands. They wanted to learn more about context-driven testing. I do these kinds of talks on a regular basis. During these events, I always ask the audience what they think testing is. It surprises me each time that they cannot come up with a decent definition of testing. But it gets worse when I ask them to describe testing. The stuff most people come up with is embarrassingly bad! And it is not only them: a big majority of the people who call themselves professional testers are not able to explain what testing is and how it works…

How can anybody take a tester seriously who cannot explain what he is doing all day? Imagine a doctor who tells you he has to operate on your knee.

Doctor: “I see there is something wrong there.”
Patient: “Really? What is wrong, doctor?”
Doctor: “Your knee needs surgery!”
Patient: “Damn, that is bad news. What are you going to do, doctor?”
Doctor: “I am going to operate on your knee! You know, cut you with a scalpel and make it better on the inside!”
Patient: “Okay… but what are you going to do exactly?”
Doctor: “Euh… well… you see… I am going to fix the thingy and the whatchamacallit by doing thingumabob to the thingamajig. And if possible I will attach the doomaflodgit to the doohickey, I think. Get it?”
Patient: “Thank you, but no thanks, doctor. I think I’ll pass.”

But it is much worse… Many testers by profession have trouble explaining what they are testing and why. Try it! Walk up to one of your tester colleagues and ask what he or she is doing and why. Nine out of ten testers I have asked this simple question begin to stutter.

How can testers be taken seriously, and how can they learn a profession, when they cannot explain what they do all day?

Only a few testers I know can come up with a decent story about their testing. They can name activities and come up with a sound list of real skills they use. They are able to explain what they do and why. At any given time they are able to report progress, risks and coverage. They will be happy to explain what oracles and heuristics they are using, they know what the product is all about, and they practice deliberate continuous learning. In the Rapid Testing class (in NL) we train testers to think and talk about testing with confidence.

How about you? Can you explain your testing?

Refusing to do bad work…

I talked at TestBash about context-driven testing in agile. In my talk I said that I refuse to do bad work. Adam Knight wrote a great blog post, “Knuckling Down”, about this: “One of the messages that came up in more than one of the talks during the day, most strongly in Huib Schoots talk on Context Driven in Agile, was the need to stick to the principle of refusing to do bad work. The consequential suggestion was that a tester should leave any position where you are asked to compromise this principle.”

Adam also writes: “What was missing for me in the sentiments presented at TestBash was any suggestion that testers should attempt to tackle the challenges faced on a poor or misguided project before leaving. In the examples I noted from the day there was no suggestion of any effort to resolve the situation, or alter the approach being taken. There was no implication of leaving only ‘if all else fails’. I’d like to see an attitude more around attempting to tackle a bad situation head on rather than looking at moving on as the only option. Of course we should consider moving if a situation is untenable, but I’d like to think that this decision be made only after knuckling down and putting your best effort in to make the best of a bad lot.”

Interesting, because I think I said exactly that: “if all else fails, leave!” But maybe I only thought it and forgot to say it out loud, I am not sure. Let’s wait for the video; that will give us the answer. But in the meantime: of course Adam is right, and I am happy that he wrote his blog post, because if I was too firm or too absolute, he gave me a chance to explain. Looking back, I have done many projects that, if I hadn’t tried to change stuff, I would have left in the first couple of days. There is a lot of bad testing around. So what did I try to say?

Ethics again.

This topic touches very closely on ethics in your work! Refusing to do bad work is an ethical statement. Ethics are very important to me, and I hope more testers will recognize that only by being ethical will we change our craft. Ethics help us decide what is right and what is wrong. Have a look at the ethics James Bach summed up in his blog post “Thoughts Toward The Ethics of Testing“. Nathalie pointed to an article she wrote on ethics in a reply to my last post.

Ethics and integrity go hand in hand. Ethics are the external “rules and laws” and integrity is your internal system of principles that guides your behaviour. Integrity is a choice rather than an obligation and will help you do what is right even if no one is watching.

I refuse to do bad work!

Bad work is any work that is deliberately bad. I am thinking along the lines of restrictions in a context, or demands placed on people that they don’t know how to handle. Or even worse: intentionally doing stuff you know can be done better, because it is faster or easier, or because others ask you to do it like that. Of course there are novices in the field and they do work that can be done better. I do not call that bad work, since they are still learning. Still, there is a limit to that as well. If you have been a tester for several years and you still do not know how to do more than three test techniques without having to look them up, I will call that bad work too. I expect continuing professional development from everybody in the field, simply because working in IT (but really in any profession) we need to develop ourselves to become better.

Lying is always bad work. And I have seen many people lie in their work: lying to managers to get off the hook, making messages sound just a little better by leaving out essential stuff. Telling people what they want to hear to make them happy is bad work too. What do you do when your project manager asks you to change your test report because it will harm his reputation? Or what do you tell the hiring manager in a job interview when he asks you if you are willing to learn? Many people say that they are very willing to learn, but are they really?

Bad work is claiming things you can’t accomplish, like assuring quality or testing everything. It is also bad work when you do not admit your mistakes and hide them from your colleagues. Bad work is accepting an assignment when you know you do not have the right skills or the right knowledge. In secondment assignments this is sometimes an issue. I once took on a project where the customer wanted something I couldn’t deliver, but because my boss wanted me in the position, I accepted. That was wrong and the assignment didn’t work out. I felt very bad about it: not because I failed, but because I knew upfront I would fail! I won’t do that again, ever.

So how do I handle this?

I push back! Of course I do not run away from a project when I see or smell bad work. I do try to tackle the challenges I am faced with. I use three important means of trying to change the situation: my courage, asking questions and my ethics. An example: when a manager starts telling me what I should do and explicitly tells me how I should do it, I often ask how much testing experience the manager has. When given the answer, I tell him in a friendly way that I am very willing to help him achieve his goals, but that I am the expert and I will decide how I do my work. Surely there is more to it, and I need to be able to explain why I want it done differently.

I also ask a lot of questions that start with “why”. Why do you want me to write test cases? What problem will that solve? I have found that people often ask for things like test cases or metrics because they are “common practice” or folklore, not because they serve a certain purpose. And once I know the reasons behind the requests, it becomes easier to discuss them and to push back. A great example of this is the recent blog post “Variable Testers” by James Bach.

Adam talks about changing people’s minds: “One of the most difficult skills I’ve found to learn as a tester is the ability to justify your approach and your reasons for taking it, and being able to argue your case to someone else who has a misguided perspective on what testing does or should involve. Having these discussions, and changing peoples minds, is a big part of what good testing is.”

I fully agree. In my last blog post, “Heuristics for recognizing professional testers”, my first heuristic was: “They have a paradigm of what testing is and they can explain their approach in any given situation. Professional testers can explain what testing is, what value they add and how they would test in a specific situation.” To become better as testers and to advance our craft, we should train the skills Adam mentions: justifying our approach and being able to argue our case.

It will make you better and happier!

Jerry Weinberg listed his set of principles in a blog post “A Code of Work Rules for Consultants“. In this blog post he says: “Over the years, I’ve found that people who ask these questions and set those conditions don’t wind up in jobs that make them miserable. Sometimes, when they ask them honestly they leave their present position for something else that makes them happier, even at a lower fee scale. Sometimes, a client manager is outraged at one of these conditions, which is a sure indication of trouble later, if not sooner.

It will make you a happier person when you know what your limits are and you are able to clearly communicate them to the people you work with. It will prevent you from getting into situations that make you miserable. “That’s the way things are” doesn’t exist in my professional vocabulary. There is always something you can do about it. And if the situation you end up in, after you have tried the best you can, isn’t satisfying to you: leave! Believe me, it will make you feel good. I have got the t-shirt! And… being clear about your values will also make you better at your work. Maybe not directly, but indirectly it will.

Daniel Pink speaks about self-determination theory in his book “Drive“. The three keywords in his book are Autonomy, Mastery and Purpose: “human beings have an innate drive to be autonomous, self-determined and connected to one another, and that when that drive is liberated, people achieve more and live richer lives” (source: http://checkside.wordpress.com).

But but but….

Of course I know there is the mortgage and the family to support. Maybe it is easy for me to refuse bad work. Maybe I am lucky to be in the position I am in. But think again… Are you really sure you can’t change anything? And if your ethics are violated every day, do you resign yourself to that? Your ethics will act as heuristics, signalling that there is a problem and you need to do something. I didn’t say you have to leave immediately, and if you are more patient than I am, maybe you do not have to leave at all… But remember: for people who are good at what they do, who are confident in what they will and will not do and who speak up for themselves, there will always be a place to work.

Now you!

Have you ever thought about integrity? What are your guiding principles, values or ethics? What would you call bad work? And what will you do next time somebody asks you something that conflicts with your ethics?

While reading stuff online about (refusing) bad work I ran into a blog post by Cal Newport about being bad at work: “Knowledge Workers are Bad at Working (and Here’s What to Do About It…)“. Interestingly enough, Cal Newport wrote a book called “So Good They Can’t Ignore You: Why Skills Trump Passion in the Quest for Work You Love”, in which he questions the validity of the passion hypothesis: the idea that occupational happiness has to match pre-existing passion. In several recent talks and blog posts I have talked about passion. In the talk discussed in this blog post, too, I claim that passion is very important, and I show a fragment of the 2005 Stanford commencement speech by Steve Jobs. Exactly the passage I showed in my talk, Cal uses in the first chapter of his book: “You’ve got to find what you love. And that is as true for your work as it is for your lovers. Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle. As with all matters of the heart, you’ll know when you find it. And, like any great relationship, it just gets better and better as the years roll on. So keep looking until you find it. Don’t settle.” Anyway, interesting stuff to research. I bought the book and started reading it. To be continued…


Heuristics for recognizing professional testers

Via Twitter, Helena Jeret-Mäe asked this question: “What are your criteria for professionalism for testers in CDT community?”. Later, via email, she updated her question to: “So the updated version of my question is what are the heuristics for recognizing professional testers in your opinion? I changed “criteria” to heuristics… it’s less categorical. And I’ll leave the term “professionalism” up to you as well – I don’t know exactly what you meant by it.”

In my talk “How to become a great tester” at ContextCopenhagen last January I talked about testers and their skills. I said that most testers don’t know what they’re doing and can’t explain effectively what value they add. I have seen many testers who use the same approach over and over again. If I ask them to name test techniques, they can only name a few. If I ask them to explain techniques to me or show how they work, I get no answers. I find that shocking and I cannot understand why testers who call themselves professionals know so little about their craft and do not study their craft.

That is why I make a distinction between professional testers (of which I think there are only a few) and testers by profession. Of course I know and understand that there will always be people who have a 9 to 5 mentality, do not read books or blogs and only want to do courses when the boss pays for them. I accept that reality, but that doesn’t mean I want to work with them!

Let me now answer the questions asked; I have done enough ranting for now…

Professionalism is about what it means to be a professional and what is expected of a professional. Professional testing is complex and diverse and has several dimensions: knowledge, skills, experience, attitude, ethics and values. In 2011 I wrote a blog post “What makes a good tester?” on this topic; my view has broadened since, although everything I wrote then still holds. In my earlier blog posts I didn’t mention values and ethics, and I now think they are extremely important. James Bach writes about them in his blog post “Thoughts Toward The Ethics of Testing”. A great practical example of ethics and values is Rapid Testing. Look at “the premises of Rapid Testing” and “the themes of Rapid Testing”, both of which can be found in the slides of Rapid Software Testing.

It is hard to recognize professional testers. Every tester is unique and brings different characteristics to the table. Every project is different too, and to be successful in finding the right professional tester for your project, different characteristics may be important. There are many characteristics to consider, so heuristics can be used to recognize professional testers. Heuristics are fallible methods for solving a problem or making a decision: shortcuts that reduce complex problems, rules of thumb. They are used to find good enough, feasible solutions to difficult problems within reasonable time. On Wikipedia I found this definition: “Heuristics are experience-based techniques for problem solving, learning, and discovery that give a solution that is not guaranteed to be optimal. Where the exhaustive search is impractical, heuristic methods are used to speed up the process of finding a satisfactory solution via mental shortcuts to ease the cognitive load of making a decision.”
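To make the idea of a heuristic concrete, here is a minimal sketch in Python of a typical testing heuristic: ranking product areas to test first by a crude risk score. The data and the weights are invented for illustration. The result is fast and fallible, not guaranteed to be optimal, which is exactly what makes it a heuristic rather than an algorithmic guarantee.

```python
# A fallible rule of thumb: rank product areas to test first by combining
# recent code churn with failure history. Data and weights are invented
# for illustration; the ranking is a shortcut, not a guaranteed optimum.
areas = [
    {"name": "checkout", "recent_changes": 14, "past_bugs": 9},
    {"name": "search",   "recent_changes": 3,  "past_bugs": 2},
    {"name": "profile",  "recent_changes": 8,  "past_bugs": 1},
]

def risk_score(area: dict) -> int:
    # Heuristic: code that changed a lot and broke before is likely risky.
    return area["recent_changes"] * 2 + area["past_bugs"] * 3

for area in sorted(areas, key=risk_score, reverse=True):
    print(f"{area['name']}: risk score {risk_score(area)}")
```

A professional tester treats the outcome of such a shortcut as a starting point for thinking, not as a verdict.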

My heuristics for recognizing professional testers:

1) They have a paradigm of what testing is and they can explain their approach in any given situation.
Professional testers can explain what testing is, what value they add and how they would test in a specific situation.

2) They really love what they do and are passionate about their craft.
Testing is difficult, and to be successful testers need to be persistent in learning and in their work. Passion helps them become real professionals. Watch this video where Steve Jobs talks about passion.

3) They consider context first and continuously.
To be effective, testers choose their testing objectives, techniques and deliverables by looking first at the details of the specific situation. They recognize that there are no best practices, only good practices in a given context.

4) They consider testing as a human activity to solve a complex and difficult problem that requires a lot of skill.
Professional testers recognize that testing is not a purely technical profession. Testing has many aspects of social science, since software is built for humans, by humans.

5) They know that software development and testing is a team sport.
Collaboration is key to becoming more effective and efficient in testing. Software development is a team sport: people, working together, are the most important part of any project’s context. A great tester knows how to work with developers and other stakeholders in any situation.

6) They know that things can be different.
Professional testers use heuristics, think critically and work empirically. They know that time to test is limited, that systems are becoming increasingly complex and that thinking of everything is hard; that is why heuristics are such a helpful “tool” for testers. They also know that biases and logical fallacies can fool them, so they practice critical thinking to deal confidently and thoughtfully with difficult and complex situations. Testers have to accept and deal with ambiguity, situation-specific results and partial answers.

7) They ask questions before doing anything.
Testing depends on many things: what is the context of the product I am looking at? What information do we need to find? What is the testing mission? Giving a tester an exercise in an interview easily tests this: if he or she starts working on it or gives an answer without asking questions, that tells you something.

8) They use diversified approaches.
There is no single approach or technique that will find all kinds of bugs or fulfil all test goals. Different bugs are found using different techniques. To be able to do this, testers need to know many techniques and approaches. This demands training, practice and some more practice.

9) They know that estimation is more like negotiation.
Have a look at the blog posts by Michael Bolton on this topic.

10) They use test cases and test documentation wisely.
The context determines what test documentation you should create and what kind of documentation is useful. Quite recently a great (and long) article by James Bach and Aaron Hodder was published in Testing Trapeze: “Test cases are not testing: Toward a culture of test performance”. Fiona Charles has also written some interesting stuff about test documentation in her article “Breaking the Tyranny of Form”.

11) They continuously study their craft seriously, practice a lot and practice “deliberate practice”.
Deliberate practice is a structured activity with the goal of improving performance. According to K. Anders Ericsson there are four essential components of deliberate practice: it must be intentional and aimed at improving performance, designed for your current skill level, combined with immediate feedback, and repetitious.

12) They refuse to do bad work, never fake and have the courage to tell their clients and the people they work with about their ethics and values.
Testing is often underestimated, and many believe that “everybody can test”. I have experienced project managers who tell me how to do my work. Professional testers know how to push back and are able to explain why they do what they do.

13) They are curious and like to learn new things.
Testers find pleasure in finding things out. A good read is “Curious Behaviour” by Guy Mason: “a curious mind is one that could be described as an active, engaged and inquisitive mind. Such a mind frequently seeks out new information, enjoys discovering what there is to discover and enjoys the process that comes along with this goal.”

14) They could have any of the important interpersonal skills mentioned in the list below. As stated earlier, which skills are most important depends on the context.

  • Writing skills (like reporting, note taking and concise messages)
  • Communication skills (many different ones, like listening, storytelling, presenting, saying no, verbal reporting, arguing and negotiating)
  • Social and emotional skills (like empathy, inspiring, networking, conflict management and consulting)
  • Problem solving skills
  • Decision making skills
  • Coaching and teaching skills
  • Being proactive and assertive

15) They have excellent testing skills:

  • Thinking skills (critical, lateral, creative, systems thinking)
  • Analytical skills
  • Modelling
  • Risk analysis
  • Planning and estimation
  • Applying many test techniques
  • Exploring
  • Designing experiments
  • Observation

Have a look at Exploratory Testing Dynamics in the RST appendices, where several lists of skills can be found.

16) They have sufficient technical skills.
There are many technical skills a tester may need: being able to use tooling; coding skills, or the willingness to learn what they need to know about the technical structure of the application they are testing; test automation skills like scripting; SQL skills to work with databases; being able to configure and install software; and the knowledge and skills to work with the platform the system under test runs on (Windows, Linux, mobile, etc.).
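To give a flavour of what I mean by scripting and SQL skills, here is a minimal sketch of the kind of check a tester might write: a small Python script that compares an order count in a database with what an API reports. All names (the database file, table and endpoint) are hypothetical; the point is the kind of skill, not the specific tool.

```python
import json
import sqlite3
import urllib.request

# Hypothetical example: does the number of orders the API reports match
# what is actually stored in the database?
DB_PATH = "shop.db"                                  # invented database file
API_URL = "http://localhost:8000/api/orders/count"   # invented endpoint

def count_orders_in_db(db_path: str) -> int:
    with sqlite3.connect(db_path) as conn:
        (count,) = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
    return count

def count_orders_via_api(url: str) -> int:
    with urllib.request.urlopen(url) as response:
        return json.load(response)["count"]

if __name__ == "__main__":
    db_count = count_orders_in_db(DB_PATH)
    api_count = count_orders_via_api(API_URL)
    if db_count != api_count:
        print(f"Possible problem: database has {db_count} orders, API reports {api_count}")
    else:
        print(f"Counts match: {db_count}")
```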

This list is probably not complete, and it is quite possible that I have made some mistakes, so please let me know if you have any contributions or improvements. Also: being a professional doesn’t mean you are an expert in all of the skills mentioned. Have a look at the Dreyfus model to learn more about expert levels. A real professional knows what he or she can do and when to ask for help. They do not fear learning and they are not afraid to make mistakes. I guess that would be the 17th heuristic in the list.

BTW: I googled “code of ethics software testing” and found the ISTQB Code of Ethics for test professionals. I wonder if people who pass the exam know about it AND, more importantly, practice it… What about you?


More information

There are more posts online that discuss great testers.

And last but not least: have a look at the Testers Syllabus by James Bach, an awesome document which lists many important skills and areas of knowledge. It inspired me to read about and study different areas.

Why I am context-driven!

Wanna know why I am context-driven? Published on the DEWT blog: Why I am context-driven!

“Because testing (and any engineering activity) is a solution to a very difficult problem, it must be tailored to the context of the project, and therefore testing is a human activity that requires a great deal of skill to do well. That’s why we must study it seriously. We must practice our craft. Context-driven testers strive to become the Jedi knights of testing.” – James Bach
