Category: Exploratory Testing

Do we need testers?

The first version of this post was published on LinkedIn on February 5, 2020, under the title “Do we need testers? No! Do we need skilled testing? Yes!”. I have added my new insights to this blogpost.

Last week I gave a talk at the Agile, Testing & DevOps Showcase in Amsterdam. My topic was “Testing in modern times”.

In agile and especially DevOps approaches the motto is: automate everything! Companies like Facebook claim they do not have testers at all. Microsoft only has SDETs (software development engineers in test), and other companies are T-shaping developers to do the testing. The new kids on the block are AI and machine learning, which I hear people claim will definitely replace testing. What is really happening globally?

Do we no longer need testers? I am not sure anymore. Why? Because testers have a bad name. If you cannot automate nowadays, you are no longer valuable, people say. That makes me sad. I think skilled testing is super important! Testing informs decisions about value, quality and risk by learning about the product (and a serious professional tester will also give you insight into the status of the project or team they are working in). Modern technology and tooling are reducing the need for dedicated testers. Not by replacing testers, but by reducing certain types of risks we used to test for. More and more people are getting obsessed with “automate everything”. Sadly, even some testers I meet are obsessed with trying to automate everything they do…

How can we make valuable software for our clients? I believe that personal leadership and collaboration ultimately make the difference. The quality of software is crucial nowadays. Therefore I like to focus on an integrated quality approach. As a tester, a mentor and a coach, I help people and teams learn effectively and continuously develop, with attention to the sustainable adaptability that leads to improved (team) results and ways of working. It enables teams to create more customer value by building quality solutions!

In IT we need insight into risks. Risks and value. For that we need to learn continuously. And I think we need smart people who do skilled testing, determined to find problems that matter. Teams need to create insight into, and an overview of, the risks we take by creating, releasing and using (IT) products. Often people with excellent testing skills excel at doing this… I hear new job titles such as “Quality Coach” and “Quality Engineer”. Is this the way to go? Well, if that solves the problem of “automation obsession” and the lack of testing skills in teams, I am game.

So far the original post on LinkedIn. Recent events make me doubt whether we will ever get to a point where we do not need dedicated testers.

Dan Ashby reacted on LinkedIn:

"It's a really interesting question. My take: yes, we need testers... because we need skilled testing, and the Dev communities haven't kept up with what skilled testing is and how to do it. One thing too: Facebook use offshore testers (lots of them via a tester contracting company - I know people that work at FB and at the 3rd party testing company). Also MS have done a U-turn too, and they also employ testers again as well as SDETs. They also have test manager roles again too. Maybe lots of the big companies have hit that realisation that they needed good testing (and hence needed the testers who have those skills)? If so, hopefully the smaller companies that tend to imitate the big companies will soon follow suit. 😁"

I like what he says here: “we need testers… because we need skilled testing.” Although there are some developers who have really good testing skills, many are not interested in testing, nor in learning to do skilled testing. So I think he has a point there. But dedicated testers alone do not solve the problem. We need better testers and better thinking about testing too!

I see a huge fixation on “test automation”, and this is causing us to lose connection with the human, social purposes of software development and testing. The essence of software development is that during development we learn about what we need, what the customer really wants and how the product we are building actually works. This is research and development: learning along the way, needing sense-making and feedback to get it right.

ISTQB’s “Vision on the Future of Software Testing” has nothing to do with skilled testing. It shows that even an institute like ISTQB understands neither IT in general nor testing. That makes me so sad!

Test approaches like TMap and ISTQB neglect the human aspects of testing. They try to approach testing with mechanistic thinking and do not deal with the complexity and uncertainty that developing software and dealing with people bring. Leading test experts have told me we cannot make testing too complicated, or else testers will not understand it.

Recently the new TMap book, called “Quality for DevOps teams”, was published. Reading how TMap deals with risk analysis and test strategy gives me goosebumps. The risk analysis is just a list of quality attributes with a simple calculation (possible impact x chance of failure); based on the resulting number a “risk class” (high, medium, low) is assigned, and based on the risk class a test intensity is assigned in dots. See for yourself what it looks like here and here. Now let’s look at some quotes from the book that made my eyes roll.
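
To make the mechanics concrete, here is a minimal sketch in Python of such a calculation. The 1–5 scales, the thresholds and the dot mapping are my own assumptions for illustration; the book’s actual tables may differ.

    # Hedged sketch of a TMap-style risk calculation: risk score = impact x
    # chance of failure, a threshold maps the score to a risk class, and the
    # class maps to a test intensity in dots. Scales and thresholds assumed.

    def risk_class(impact, chance_of_failure):
        """Both factors on an assumed 1-5 scale."""
        score = impact * chance_of_failure        # 1..25
        if score >= 15:
            return "high", "•••"                  # highest test intensity
        if score >= 8:
            return "medium", "••"
        return "low", "•"

    print(risk_class(impact=5, chance_of_failure=4))   # ('high', '•••')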

Chapter 47 “Experience-based testing”

“Exploratory testing is an experience-based approach of testing, the most important approach of experience-based testing in our opinion. We distinguish coverage-based and experience-based testing. Others use terms like scripted testing and free-style testing, but we prefer the division in focus on either experience or coverage.”

All testing you do uses experience, because there is no way you can shut it off. And any test you run will give you some coverage. I guess they just do not know how to talk about coverage in a way that makes sense. Why make this strict distinction into only two categories? There are so many more ways to classify test techniques (note: what TMap calls an “approach” is called a “technique” in BBST). There is no strict distinction between test techniques. See slide 62 of BBST Test Design: “Every test addresses all of these. A specific technique typically addresses 1 to 3 of them, leaving the rest to be designed into the individual test”. Personally I like the way BBST classifies techniques by looking at the driving ideas behind the testing.

“The main downside of applying error guessing is the lack of documentation. Therefore, tests are not reproducible. This may result in a developer not being able to investigate an anomaly, the tester not being able to retest a fix, and the test cannot be added to a regression test set.”

Why do tests need to be reproducible? If a tester is capable of telling or showing the developer what goes wrong, you do not need any documentation. And I think that with enough product knowledge, it is not that difficult to retest. So we are solving a problem in the wrong way, aren’t we?

Chapter 48 “Is there any value in unstructured testing?”

“Any testing lacking a plan containing what to do and what to expect of a system, or lacking preparation of the test, is unstructured. This is also called ad-hoc testing. Some people see a great advantage in unstructured testing because, as they say: “You can start testing right away.” That is, without “losing” any time on preparation.”

The only structure TMap knows is plans and test cases. Structure is “the arrangement of and relations between the parts or elements of something complex”. Michael Bolton wrote about this here. Testing is about learning, and learning involves mental models. TMap seems to pay no attention at all to how people learn. Deep learning is a confusing process in the beginning, but it gets clearer along the way. By letting it rest (defocusing), we give our brains the chance to process the learned information and integrate it with the models we have in our heads. So good (mental) models are important. These “mechanistic test approaches” forget the whole learning part.

“When you have an IT system that is of good quality, the testers do their unstructured testing and don’t encounter any faults or failures. Can they now say the quality is good and that there are no significant risks? Did they really measure the quality and risks? No, the only thing they can truly say is they did “some” testing and didn’t find any problems. However, they cannot explain which requirements or quality risks have been covered. They are not even sure which parts of the system have been covered.”

TMap takes an interesting rhetorical approach here, known as an “appeal to ignorance”: the testers cannot explain, so the approach must be wrong! But is the approach the problem, or are the skills of the testers the problem here?

Considerations when testing a software application in a context-driven way

Written by: Joris Meerts (main author), Huib Schoots and Ruud Cox, all working at Improve Quality Services in the Netherlands.

Introduction
One of the cornerstones of context-driven testing (Lessons Learned in Software Testing, Kaner, Bach & Pettichord, 2001) is that the way of working when testing software is determined by the situation in which the tester finds himself. A good approach is not driven by a prescribed process or by a collection of steps that one habitually executes. Instead, it arises from the use of skills that ensure that the testing matches the circumstances of the software project. Within the framework of context-driven testing, a wide range of skills is discussed, including critical thinking, modeling and visualization, note-taking and applying heuristics. These skills are not easy to learn. One way to sharpen them is to do exercises and discuss the results. This was the purpose of a meeting of four testers at Improve Quality Services. In the following report we elaborate on the exercises that were done during that meeting and on the results.

Purpose of this report
As we pointed out in the introduction, it is not easy to learn the skills associated with context-driven testing. Practitioners of context-driven testing regularly refer to professional literature that describes these skills. But applying these skills is a process of trial and error. That is why it is important to simulate real life situations and learn from exercises we do. That was the purpose of the meeting. In this report we share the steps we took and the results of those steps, for example in the form of notes, sketches or models. We also share our experiences to make it easier to apply the skills in practice.

The meeting
The meeting was held on February 14, 2017 at the office of Improve Quality Services in Eindhoven. The participating testers, Jos Duisings, Ruud Cox, Joris Meerts and Huib Schoots are all employees of Improve Quality Services.

Background information
Date: 14 February 2017
Location: Improve Quality Services Eindhoven
Start: 9 am
End: 5 pm
Team A: Jos Duisings, Ruud Cox
Team B: Joris Meerts, Huib Schoots

The assignment

The assignment of the meeting is to select and execute tests on a software application. It is carried out by two teams of two testers each. This makes it possible to take different approaches and to provide feedback from one team to the other.

Division in sessions
The meeting is divided into sessions in advance. The sessions each have their own goal and their own evaluation.

  • Welcome & introduction
  • Creating a coverage outline (per team)
  • Debriefing the coverage outline
  • Drafting a test strategy (as a group)
  • Debriefing the test strategy
  • Selecting a few charters
  • Performing a test session (charter) per team
  • Debriefing the test session
  • Retrospective

Introduction of the application to be tested
The application to be tested had been selected prior to the meeting. We chose Task Coach (Task Coach, 2018), an open source to-do list manager. This software is publicly available and runs on multiple platforms (Windows and Mac OS). The application is relatively simple but still offers sufficient complexity to be able to test it thoroughly. According to the description on the website, Task Coach is “a simple open source to-do manager to keep track of personal tasks and to-do lists. It is designed for composite tasks, and also offers effort tracking, categories, notes and more.” The participants had not worked with Task Coach before and were therefore not familiar with this specific piece of software.

Creating a coverage outline
Because Task Coach is new for all participants, the software will have to be explored. Only then can we say more about what can be tested in the given time. Such an exploration of the product can be done in many different ways. During the meeting we chose to make a ‘product coverage outline’, inspired by the Heuristic Test Strategy Model of James Bach (Bach, 1996). The product coverage outline provides an overarching view of the product in which the functionality of the software is divided into seven different categories. The categories are described in the mnemonic SFDIPOT. The software tester is reminded by this heuristic that when mapping a product he can look at Structure, Function, Data, Interfaces, Platform, Operations and Time. By looking at the application from these perspectives, a relatively complete sketch is created of what the software product is and what it can do. However, the mnemonic does not only serve for exploration. The ‘map’ of the application that is created in this way can also be used to visualize the coverage of the tests and to select the tests to be carried out.
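
As an illustration of the shape such an outline can take, here is a small sketch in Python with the seven SFDIPOT categories and a few Task Coach branches. The branch content is ours, for illustration only; it is not the outline either team actually produced.

    # Sketch of a SFDIPOT-based product coverage outline for Task Coach.
    # The branches are illustrative guesses, not the teams' actual outlines.
    coverage_outline = {
        "Structure":  ["executable", "task file", "templates", "log file"],
        "Function":   ["create/edit/delete tasks", "subtasks", "reminders",
                       "effort tracking", "templates"],
        "Data":       ["dates and times", "budgets", "categories", "notes"],
        "Interfaces": ["GUI", "import/export", "help file"],
        "Platform":   ["Windows", "Mac OS", "system clock and locale"],
        "Operations": ["personal to-do list", "shared task file"],
        "Time":       ["start/end dates", "time zones", "summer/winter time",
                       "date formats"],
    }
    for category, branches in coverage_outline.items():
        print(category, "->", ", ".join(branches))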

The exercise
We decide that both teams will be given half an hour to create a product coverage outline and that each team is free to decide in which format the outline is presented. The choice of a half-hour time limit is mainly motivated by the fact that the software must also be tested before the end of the day. There is no room to spend much more time on making the product outline.

It turns out that half an hour is not enough time for mapping an application that, despite the fact that it claims to be simple, still contains a lot of functions. Team B decides to create a mind map in which the seven categories are elaborated. This mind map is not finished after half an hour. Especially the branch ‘Function’, in which the functions of the application are described, is very large and has a deeply nested structure. To build this structure, team B explores the application by clicking through it and captures the functionality (in text or in image) in the mind map. Due to time constraints, team B presents a product sketch that is not finished. This triggers a discussion about what we expect the mind map to look like given the available time. The expectations, such as completeness, can be related to the purpose of the mind map. Through a discussion it becomes clear that no clear goal has been defined for drawing up the product sketch and that the result of the assignment is difficult to assess.

Preference for the mind map
A mind map is a tool that is used by software testers on a regular basis to create a product outline. The tree structure of the mind map lends itself to classifying (grouping) properties. It is easy to start with an empty mind map, enter the categories from SFDIPOT and expand these. Moreover, a navigable structure is created in this way. The mind map forms a kind of geographical map in which you can choose to zoom in and zoom out to determine the location of something in relation to the whole. Software essentially consists of zeros and ones and the software product is an abstract concept. By making the ‘map’ the abstract software gets a concrete shape. A mind map can also be used as an instrument for reporting on the results and the progress of the tests. So there are a number of good reasons to work with a mind map from the beginning.

Team A starts the session by creating a mind map, but after a short time it switches to a different form. When studying Task Coach, the team finds out that there is a help file in the application. After a short study, the help file turns out to describe a large part of the application. The categories from SFDIPOT all appear in the file to a sufficient extent, and so team A chooses to use this file, converted to Word format, as a product outline. By marking the categories in the document with different colors, structure is added to the document. In addition, the table of contents provides a high-level overview of the functionality of the application; the details are mentioned in the paragraphs. In this way, team A delivers a product sketch that is relatively complete within the stipulated time.

Drafting the test strategy
The result of the first session is a product outline. With this product sketch and the knowledge gained during the exploration of the software it is easier to discuss a test strategy. Because exhaustive testing of an application is not feasible for the majority of software applications and because exhaustive testing in many cases does not yield better information, it is desirable to make choices with regard to the test work to be performed. We find these choices in the test strategy.

By making the product outline we have gained insight into a number of aspects of the application. We take these aspects into account and try to indicate per category whether it needs to be tested and, if so, at what depth. The most decisive factor in that decision is the risk the company runs when the software is put to use. Risk is a multifaceted concept and can only be determined if the software product is viewed from different angles. In any case, the perspective of the tester alone is not sufficient to get a good picture of the risks of a software product. During the second session it becomes clear that a representative is missing who can reason about risk from the perspective of a user or of the organization. This point is also discussed in the debriefing of the second session, and we conclude that for that reason the risk assessment is incomplete.

Assessing risk
In the Heuristic Test Strategy Model three dimensions are discussed that influence a test strategy. These dimensions are the project, the product and the requirements that are set for that product: the quality characteristics. Risks play a role in each of these dimensions. In the exercise, we decide not to look at the project risks. As far as product risks are concerned, we make our own assessment with regard to the categories of the product. We consider which categories are used the most, which are the most prone to errors, and where a possible error has the most impact. Because there are no users involved in the exercise, we try to place ourselves in the role of the users. In this way the following product categories are identified:

  • The primary process from the Operations category. This will tell us which functionality is used the most.
  • Using the application by multiple users from the Operations category. We recognize risks associated with synchronization: tasks are not updated properly.
  • The importing and exporting of tasks from the Interfaces category.
  • Dealing with reminders from the Function category. Reminders are crucial for not missing appointments.
  • Dealing with date and time from the Time category. Date and time play an important role in planning tasks, they touch the core of the application.

After we have identified the categories of the product, we try to assess which quality aspects of the product matter most. Here too, we decide to use our own assessment as a guideline. The following quality aspects are named:

  • Functionality
  • Usability
  • Charisma

Coverage
We briefly discuss the coverage of the tests that we want to carry out. Since the tasks that are maintained in Task Coach can have different statuses, the tester can visualize the coverage of the tests carried out from the perspective of state transitions. The state transitions are covered by the different paths through the application that a user travels; these paths are helpful when mapping the coverage. Furthermore, coverage can be looked at from the perspective of the test data used.
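
As a sketch of what coverage from the state-transition perspective could look like: the states and transitions below are an assumed task life cycle, not taken from Task Coach.

    # Assumed life cycle of a task; covered transitions are ticked off during
    # sessions and compared against all transitions in the model.
    transitions = {
        "created":   ["active"],
        "active":    ["completed", "deleted"],
        "completed": ["active", "deleted"],   # a completed task can be reopened
    }
    all_edges = {(s, t) for s, targets in transitions.items() for t in targets}
    covered = {("created", "active"), ("active", "completed")}
    print(f"transition coverage: {len(covered)}/{len(all_edges)}")   # 2/5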

Drafting the charters
We decide to make two charters (following Session Based Test Management) and to execute them. We prioritize the product categories mentioned in the risk analysis based on our own insights. From this we conclude that ‘the primary process’ and ‘dealing with date and time’ are the two most important product categories. We translate these two categories into charters. In one charter we will look at the primary process; in the other charter the handling of date and time will be investigated. Each team selects a charter. After the charter has been executed, each team reports the work done to the other team in a debrief session. During this short evaluation, the tester tells his story and answers any questions. The evaluation has several goals, such as assessing whether the mission was successful, translating new insights into follow-up sessions, looking at notes and descriptions of findings, and coaching the tester. Thanks to the debrief, the understanding of the application under test grows.

The charter for testing the primary process
To formulate a starting point for this charter, we look at the risks that can be associated with the execution of the primary process. There is a risk that no task can be created. Or it could be that the task is created but errors occur when saving or modifying a task or changing the status of a task. These considerations lead to the following mission that serves as the starting point for the charter: “Explore the basic flow of tasks using CRUD to discover/test the life cycle of a task”.
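
As an illustration, such a charter can be captured in a small record. The field names below are our own, loosely following Session Based Test Management; they are not a prescribed format.

    from dataclasses import dataclass, field

    # Hedged sketch of an SBTM-style charter record (field names assumed).
    @dataclass
    class Charter:
        mission: str                                 # one-sentence goal of the session
        areas: list = field(default_factory=list)    # product areas touched
        duration_minutes: int = 90                   # typical session length
        notes: list = field(default_factory=list)    # observations, questions, bugs

    primary_process = Charter(
        mission="Explore the basic flow of tasks using CRUD "
                "to discover/test the life cycle of a task",
        areas=["Operations: primary process"],
    )
    print(primary_process.mission)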

The charter for testing the handling of date and time
Prior to drafting the charter for testing the handling of date and time, the following test ideas are mentioned:

  • Start and end times are the same
  • The end time is before the start time
  • The date of a task is far in the past or far in the future
  • System time
  • Work in different time zones
  • Dealing with winter time and summer time
  • Different date formats

With these test ideas in mind, the following charter is drawn up: “Explore tasks using date/time to discover date and time related bugs”.
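
The test ideas above translate naturally into table-driven checks. A minimal sketch follows; Task Coach offers no such API, so the plausibility check merely stands in for whatever the application actually does with a task’s dates.

    from datetime import datetime

    # Each case: description, planned start date, planned end date.
    cases = [
        ("start equals end",  datetime(2017, 2, 14, 9),  datetime(2017, 2, 14, 9)),
        ("end before start",  datetime(2017, 2, 14, 17), datetime(2017, 2, 14, 9)),
        ("far in the past",   datetime(1901, 1, 1),      datetime(1901, 1, 2)),
        ("far in the future", datetime(9999, 12, 30),    datetime(9999, 12, 31)),
    ]

    for name, start, end in cases:
        verdict = "needs a closer look" if end <= start else "plausible"
        print(f"{name}: {verdict}")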

Execution of the charters

Testing the primary process
The charter for testing the primary process (basic flow) begins with the creation of a task. Several tasks are created using the button on the taskbar. Subsequently, several underlying tasks are created for a single task, up to 8 levels deep. Under one of the underlying tasks, a new tree structure of underlying tasks is created. During the creation of the task structure, questions arise about the maximum depth of the task structure and about the sorting of the underlying tasks. In addition, it is found that it is possible to delete underlying tasks and then undo these deletions. It is proposed to investigate this functionality further in a separate charter, since it is suspected that it does not always go well. The resulting tree structure is removed by means of the ‘Delete’ button on the taskbar. After this, all tasks have disappeared.

On the taskbar there is also a button that offers the possibility to create new tasks from a template. There are two default templates available. Tasks are created with these templates. It appears that it is not possible to create underlying tasks from a template. The question is why this functionality is not available for underlying tasks. Finally, the created tasks are deleted.

To get a better picture of how templates work in the application, the help text about templates is read. Furthermore, the team explores the menu structure in search of more functionality that could be related to the use of templates. From the ‘File’ menu it is possible to save tasks as templates, to import templates and to edit templates. A template is created.

The dialog that appears when choosing to import a template reveals that template files have the extension ‘.tsktmpl’. But when a task is saved as a template, it is not possible to find out whether a template file is created from this task and, if so, where this file is stored. The newly created templates are visible when one chooses to create a new task based on a template from the toolbar.

A task is created and saved, and it is checked whether this task can be imported. Again, all created tasks are deleted, this time by selecting the menu option Edit > Delete. The team looks a bit further at the options for removing tasks. It appears that there is a shortcut combination for removing tasks: Ctrl + DEL. The team wonders why it is not possible to simply use the Delete key.

In summary, this session looked at creating, viewing, editing and deleting tasks and underlying tasks. A finding was made regarding the undoing of changes: undoing the removal of tasks in a tree structure of tasks and underlying tasks does not reproduce the original structure. Following the session, it is proposed to look more extensively at the use of templates in new charters. The functionality for creating, viewing, editing and deleting tasks and underlying tasks also deserves more attention, especially because this functionality can be invoked from many different places in the application. The various possibilities were not all tested in the first session. Furthermore, a charter could be made for creating a task that depends on another task.

Feedback
At the end of the session, the report of the session is discussed with a member of the other team, with the aim of obtaining feedback about the course of the session in a debrief. The feedback shows that there was not enough time to complete the charter; another thirty to sixty minutes would be needed. It is noted that the description of the detected bug is not clear enough. It also becomes clear that not all possible statuses of a task were actually addressed in the session. Finally, a new charter is suggested for testing filtering and sorting.

Testing the handling of date and time
The session is started with a clean installation of the application. First a new task is created. Attention is given to the Dates tab, on which various dates can be entered. The date and time can be changed in separate input fields. The time can be changed by means of a dropdown or by using the arrow keys on the keyboard. It turns out that ‘00:00’ cannot be used as a time. It appears that the range of times can be set under the ‘Preferences’ menu. After an adjustment, ‘00:00’ can be entered.

It is noted that the planned start date and the planned end date are not included in the calculation of the duration. But it is unclear what these dates are used for. When the planned end date is in the past, the task turns red. When the planned end date is in the future, the task turns purple. It is remarkable that the planned start date can be after the planned end date; apparently there is no validation on the order of dates. Dates that are far in the future are accepted as valid dates. To change a date, the task first has to be closed and then opened again.

An interesting finding occurs when the year of the planned start date is set to a value before 1900. If this is the case, the planned start date cannot be changed after closing and reopening the task. While studying this finding, the team finds out that the application creates a log file (in the Documents folder) in which any errors are logged. The attempt to adjust the planned start date results in the following error message: “ValueError: year=1853 is before 1900; the datetime strftime() methods require year >= 1900”. After this finding, further attention is paid to other functionality around time and date. For example, a reminder on a task is set to 5 minutes. The reminder works.
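
The message in the log points to a known limitation of Python 2, the language Task Coach is written in: strftime() refused dates before 1900. A minimal reproduction, assuming a Python 2 interpreter (Python 3 handles such dates fine):

    from datetime import datetime

    # On Python 2 this raises: "ValueError: year=1853 is before 1900; the
    # datetime strftime() methods require year >= 1900". On Python 3 it
    # simply prints "1853-01-01".
    print(datetime(1853, 1, 1).strftime("%Y-%m-%d"))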

In addition to planning tasks, Task Coach can also be used to keep track of the time spent per task. After some research it appears that this functionality is complex and that it is not easy to find out how Task Coach deals with time spent. Time tracking can be started with a button, but can also be done by increasing the time spent in a separate tab. The team notices that when entering an hour of time spent, the actual time spent shown on the tab is just a few seconds. It is possible to aggregate time spent at month level. If we do this, we see that the descriptions that we entered for each entry are combined into a single text field.

The time that an application uses depends on the time settings of the system. For this reason, the team manipulates the system time of a MacBook Pro. The calendar is changed to a Coptic calendar. This adjustment causes the application to crash on startup. In the log a line appears mentioning an error relating to an invalid date format.

Finally, the team looks at the connection between the budgeted time and the time spent. Task Coach is able to calculate the remaining time on the basis of these variables. Some tests are performed with variations in hours, minutes and seconds. Task Coach handles all these variations correctly.

Feedback
A team member verbally reports on the session to a member of the other team in a debrief. The feedback shows that this report contains a lot of detailed information. To be able to place that detailed information, a frame of reference is needed; the product outline could have served as such a framework. One new charter emerges from the feedback, namely testing the synchronization of tasks between systems with different system times.

Conclusion
The meeting is concluded with the completion of the test sessions. Looking back, we conducted, in one day and with two teams, a number of tests on an application that was unknown to us beforehand. We have shown that techniques and methods exist that help the tester to acquire knowledge about the application, develop a strategy and perform tests. Exploring the application provides insights that serve as a starting point for risk assessments and for conducting test sessions with a well-defined goal. By quickly arriving at concrete and well-substantiated tests, the tester provides valuable feedback on the application in a short period of time. The test sessions are debriefed, and this provides starting points for further deepening where necessary.

Due to the popularity of agile methods, it is common for the tester to be asked to test something while having incomplete insight into the functionality of the application. Moreover, the tester is expected to deliver results within a limited time. This requires an approach in which the tester quickly draws up and executes a strategy by modeling, thinking critically, discovering and investigating. That is the approach we applied during the meeting described above.

A Clash of Models
The creation of the product coverage outline led to quite different outlines being delivered by the teams. These differences and their possible causes are discussed in a separate article, written by Joris Meerts and Ruud Cox. The article is called A Clash of Models.

References
Bach, J. (1996). Heuristic Test Strategy Model.
Kaner, C., Bach, J., & Pettichord, B. (2001). Lessons Learned in Software Testing. John Wiley & Sons.
Task Coach. (2018). Retrieved from Task Coach: http://taskcoach.org/

What testing can learn from social science – Part 5

What can testing learn from social science?
Why is this important to testers? My conclusion is that testing and the social sciences are very much interconnected, and there are many lessons we can learn from this field. We should see what we can apply to our daily testing jobs. We should be more aware of what we do in testing: for example, doing social research, making observations, thinking critically and, most importantly, continuing to learn. We should start learning from what people have done, and are still doing, in the social sciences. Testers should not focus only on quantitative analysis like bug counts or test cases passed or failed, but also do qualitative research. Test reports should be stories about the product and the testing we did (see Michael Bolton’s article on test reporting and the telling of the story). We should use the numbers to support or back up our story. I often see it the wrong way around: lots of tables full of counts that do not tell us anything without context. Managers tend to draw their own conclusions and make decisions based upon this data if we do not help them by telling the story. Again, testing is about collecting information for people who matter, to enable them to make informed decisions.

Coverage?
If a manager comes up to you and asks you:

“So what is the coverage?”
or
“How many test cases do you have?”

What do you say? It is really hard to talk to managers who are obsessed with numbers and think that testing is about the number of test cases, right and wrong, green and red.

Consider this the next time you talk to them. Make a simple calculation of all the possible tests you could do for the project you are working on, regardless of how simple or complex it is. Think of all the possible combinations, both positive and negative.

What number have you ended up with?

  • 1 thousand?
  • 1 million?
  • 1 billion?
  • More?

The number of possible things you can test is endless; exhaustive testing is futile. Even a simple requirement has infinite possibilities to test.

So what is the coverage if we do 1,000,000 test cases?

  • Coverage: 1,000,000 divided by infinity is very close to zero!
  • Coverage: 10,000,000 divided by infinity is still very close to zero!

So no matter how many test cases you have, your coverage of all the possible test cases you could have done is close to zero. This is why risk, priority and making choices are important in testing, but that is a different topic.
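
The arithmetic is easy to demonstrate: even one short free-text field allows more distinct inputs than any test effort will ever execute. A quick sketch (the field size and character set are arbitrary assumptions):

    # One 10-character field limited to the ~95 printable ASCII characters:
    possible_inputs = 95 ** 10
    print(possible_inputs)                # about 6 * 10**19 distinct inputs

    executed = 1_000_000                  # a heroic number of executed tests
    print(executed / possible_inputs)     # roughly 1.7e-14: effectively zero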

Be a scientist
Science is important. It gave us critical thinking, and that helps us test the theories we have about the product. Try to prove yourself wrong instead of proving yourself right.

Ask critical questions:

  • Could it be something else?
  • Is this what I expected?
  • What did I do differently?
  • What else can I do?
  • How can I explain what I did?

While testing we should practice critical thinking: question the things we encounter and check whether what we see is true (or not). While thinking we should be wary of hasty conclusions, biases and fallacies. We often do it the wrong way around: if we focus too much on the numbers and the averages, we miss the outliers: the unique random events that can do the most damage.

Qualitative research
The grounded theory method operates almost in reverse compared to traditional social science research. You may start with a view, theory or expectation, but as you gather more data while testing, your theory becomes more ‘grounded’ in the information you uncover or discover. Basically, as you experience and gather more and more data, you change your perspective and viewpoint. This is what social scientists do when they go and live with social groups: they have something they want to find out – an assumption or otherwise – and find out whether it is right by taking part (compare missions in exploratory testing).

In qualitative research done by anthropologists, for example, the context of the research is very important. They accept and deal with ambiguity, situation-specific results and partial answers. Qualitative data deals with meanings. Use it when you want to understand underlying thoughts and intentions.

Observation
Testers should not observe from the sidelines. We should act like anthropologists do: become part of the group you are observing. Let me give you an example from my own experience.

A couple of years ago I worked as a test coordinator for a media company selling newspapers and magazines. We were implementing a new CRM system, and my assignment was to organize the user acceptance testing. For some days I worked with the people selling the newspapers to learn how they sold subscriptions, and while doing that I learned what was important to them. At first I was just observing, but after a day I was selling newspapers myself and really learned how the users worked and what it took to make the department successful. We had requirements and designs, but what I found out about processes and user sentiments was also very valuable for testing. Important steps were not documented. I saw people use the software in ways I didn’t expect, ways that weren’t written down anywhere. I asked them what they did and why, and they answered: “that is normally how we do this”. I learned that these people took shortcuts to do their work. The team designing and building the software had no idea what I was talking about when I told them about my observations.

Humans will always take the shortest, quickest route, the one that requires the least amount of thinking. This made clear to me how important it is to find out what people are thinking and, more importantly, the reasoning behind why they do the things they do. The product is a solution. If the problem isn’t solved, the product doesn’t work (5th basic principle of context-driven testing).

Now I know this is called qualitative research, and I think every team developing software should do something similar. Try to really understand the users and the environment in which the product will be used. IT is often way too focused on technical stuff. Testers: go out there and meet the people who are using the product. Be part of their world for a while and start asking those critical questions.

Humans
Software is built by humans, for humans. Social science is about people. Software should solve problems and help humans. To really solve a problem, we need to know more about how the users work, what they think, how they feel, their emotions, their desires. Too often I hear development teams say things like: “The user should not do that”. Or the all-time classic: “No real user would ever do that!”. IT is way too technically focused.

Read John’s blog post titled “The Human Element”. It is an awesome story about his wife, a nurse, who explains why the human element is very important in her work. Do you see the parallel with our work?

Now it is your turn!
Use the reading list to learn more about the social sciences, biases and other relevant topics. I am curious and I want to learn from you too. So please share your thoughts on and experiences with social science.

“Great software is not produced or tested in factories, but in studios
and rehearsal halls.” (Michael Bolton)

I owe John Stevenson and Michael Bolton many thanks for their inspiration, great discussions and reviewing these blogs.

Reading List

What testing can learn from social science – Part 4

Social science: three presentations
Social science is about society, human nature and human interaction. It is an umbrella term to refer to sciences like anthropology, economics, education, linguistics, communication studies, sociology and psychology.

Anthropology teaches us how people live and interact, and something about culture. Education and didactics help us acquire new, or modify existing, knowledge, behaviours, skills, values and preferences; they help us understand how we learn and how we can teach others. Sociology teaches us empirical investigation and critical analysis, and gives insight into human social activity. Psychology is the study of mind and behaviour and helps testers understand individuals and groups. Now how is this useful in testing? I’ll try to answer that question later. Let me first tell you about three awesome presentations on the subject of social science and testing.

Testing as a social science
Cem Kaner first gave a talk titled “Testing as a social science” in 2006 (slides are here). I haven’t had the pleasure of seeing the talk myself, but the slides drew my interest. Cem made me aware that to test effectively, our theories of error have to be theories about the mistakes people make, and about when and why they make them. We design and run tests in order to gain useful information about the product’s quality.

Testing is always a search for information. Cem talks about measurement and metrics and the dangers of using metrics wrongly to measure test completeness (a new, updated article on this can be found here). He argues that bad models are counterproductive. Cem also touched on the topic of inattentional blindness: humans often don’t see what they don’t pay attention to. He reminded us that programs never see what they haven’t been told to pay attention to. This is especially valuable when thinking about test automation. When testing, we can’t pay attention to all the conditions. The systems under test are simply too complex and there are too many factors that are variable (and uncontrollable). He concludes that thinking in terms of human issues leads us to interesting questions:

  • What tests are we running and why?
  • What risks are we anticipating and how?
  • Why are these risks important?
  • What can we do to help our clients gather the information they need?

At EuroSTAR 2012 in Amsterdam I was track chair for two excellent talks, which inspired me to study the subjects of social science and qualitative research further.

Curing Our Binary Disease
Rikard Edgren talked about getting cured of the binary disease (slides are here, video is here). The binary disease is when testers don’t provide useful information because (project) managers don’t allow them to: they demand counts of passes and fails and insist everything must be verifiable. The binary disease limits our thinking. Testers get addicted to counting passes and fails and don’t communicate what is most important. When addicted, there is no attention left for serendipity. A model can help testers find important things, but a percentage number might not include things that are important. Therefore a coverage model is useful to get ideas, but not useful as a metric of completion. In his talk he introduces the testing potato to show that there are more important things than the written requirements. More about the potato can be found in his fabulous, must-read, free eBook “The Little Black Book on Test Design”.

Testing Through The Qualitative Lens
Michael Bolton’s talk (slides of the STAREAST version are here) elaborated on the differences between the physical and social sciences. In physics, humans are ideally irrelevant and mostly get in the way of the experiment. In social science, you use both quantitative and qualitative research methods, accept ambiguity, context-specific results and “partial answers that might be useful”, and stay aware of biases while doing research. You do qualitative research when you want to understand something; you do quantitative research to inform that understanding. Qualitative research puts human values first, uses participant observation and practices storytelling and narration. Software testing is the investigation of systems composed of people, computer programs, products, and the relationships between them. Excellent testing is more like anthropology: interdisciplinary, systems-focused, investigative, and it uses storytelling.

To be continued… part 5: So what can we learn from social science?

What testing can learn from social science – Part 3

People are predictably irrational
You think you are rational, but you are not. People fail to realize the irrationality of their actions and believe they are acting perfectly rationally, possibly due to flaws in their reasoning. People’s actual interests differ from what they believe them to be. Mechanisms that have evolved to give optimal behaviour in normal conditions can lead to irrational behaviour in abnormal conditions. Many people put on one “mask” for one group of people and another for a different group, and many become confused as to which they really are or which they wish to become (source: Wikipedia). The subject of irrational behaviour is huge, and I recommend reading more about it. Thanks to the many studies done in this field, we can predict irrational behaviour to a degree.

John gave me two book tips by Dan Ariely on this topic that I haven’t checked myself yet:

Or check this website, also by Dan Ariely: Predictably Irrational – Investigating the Hidden Forces that Shape Our Decisions

You are not so smart
A great collection of examples showing that people are easily fooled can be found in the book “You Are Not So Smart” by David McRaney. This book is a dose of psychology research served in tasty anecdotes that will make you better understand both yourself and others. The author describes cognitive biases, logical fallacies and heuristics. For example, there is the well-known “confirmation bias”, where you tend to look for information that confirms your beliefs and ignore information that challenges them. Another interesting phenomenon is the availability heuristic: a mental shortcut that occurs when people judge the probability of events by how easy it is to think of examples. The availability heuristic operates on the notion that “if you can think of it, it must be important”. Examples are lotteries, where you only see the winners, so you might think it is easy to win. Or school shootings in the USA: people believe that since Columbine there have been more and more school shootings, but the opposite is true! Before Columbine there were more, but we don’t know about them. After reading this book, an interesting thinking exercise is to recognize the biases and fallacies in your own thinking and testing.

Thinking fast and slow
Daniel Kahneman wrote a fascinating book about how our brain works, “Thinking, Fast and Slow”, which has been a bestseller for some time now. This book changed the way I think about thinking. Although it was sometimes hard to read for me as a non-native English speaker, I read the book almost in one go. The book is about the two different ways our brain works: System 1 is fast, instinctive and emotional; System 2 is slower, more deliberative and more logical. I encourage testers to read the book and watch this video, in which the author explains the main points from his book. You might also want to have a look at this shorter video, where the same material is presented more visually. The book will help you understand how your brain works, and it will also make you aware of how people make judgements and come to conclusions. Read what software tester Andy Patterson writes about his thoughts on the book here.

There is a great video of Daniel Kahneman and Nassim Taleb (The Black Swan) talking at the New York Public Library about how humans make decisions – a fascinating video to watch – details and access to the video can be found here.

Dancing gorillas
An interesting source for learning more about inattentional blindness and other illusions of memory and knowledge is the book “The Invisible Gorilla” by Christopher Chabris and Daniel Simons. It makes you aware of how you can be fooled by your illusions and perception. For more reading on gorillas and inattentional blindness, see this article. Alan Page loves the gorilla! Especially the video. Check what he has to say about the gorilla here.

To be continued… part 4: social science

What testing can learn from social science – Part 2

Testers need to do a lot of thinking. To me, testing is investigating: gathering and providing information about things that are important. I like the definition by Jerry Weinberg: “testing is gathering information with the intention of informing a decision”. Rikard Edgren recently wrote an excellent “open letter” to define testing. Testing is much more than finding bugs or checking whether requirements are met.

Systems thinking
We should not only investigate the “system under test” but also keep related products in mind. What about the people using all these products, or the organisations and processes in which the products are used? Testers should know more about systems thinking: the process of understanding how things, regarded as systems, influence one another within a whole (source: Wikipedia).

A system is not just a collection of things. A system is an interconnected set of elements that is coherently organized in a way that achieves something. It must consist of three things: elements, interconnections and a function or purpose (source: “Thinking in Systems: A Primer” by Donella Meadows). If you want to learn more about systems thinking, you might want to watch this YouTube video by Russell Ackoff and read this post by Aleksis Tulonen about what you can learn from Ackoff.

In one of my projects, my client was moving a hospital from several old locations to a huge new building. It was logical that the location codes changed, since it was a new building with a very different layout. But they initially overlooked the fact that this location code was also used as a department code in several information systems. And these systems used the codes to book costs (finance), plan staff (HR) and distribute food and medicine (logistics). Moving to the new building without overseeing the full impact would have paralysed the whole information landscape. Defining a temporary coding scheme and making minor changes to several systems solved the problem.

Critical thinking
Testing can be seen as a form of research: investigating the system and finding information about it. In research, critical thinking is important; collecting, analysing and interpreting information requires critical thinking skills. Critical thinking to me is thinking (critically) about your own thinking: framing your own assumptions, using this to try to remove bias, and hopefully clarifying your thoughts with reasoning.

In this video James Bach helps us gain a quick understanding of critical thinking by asking three simple questions:

  • Huh? What does this mean? What is the point?
  • Really? Are you absolutely certain? How can I know?
  • So? Where does this lead? So what?

These questions are very helpful for understanding, and for thinking critically about, anything. This picture is taken from the book “Critical Thinking: A User’s Manual” by Debra Jackson and Paul Newberry, a helpful source for learning about critical thinking.

Rule of Three
“If I can’t think of at least three different interpretations of what
I received, I haven’t thought enough about what it might mean.”
(Jerry Weinberg)

Creative thinking
At EuroStar in Amsterdam I met John Stevenson who has an excellent blog with the intriguing title: “The expected result was 42. Now what was the test?”. We talked about what testing can learn from social sciences and early this year we had some fantastic conversations via skype. John pointed me to some very interesting readings about qualitative research: “Qualitative Data Analysis: a user-friendly guide for social scientists“ by Ian Dey. On his blog he wrote some very interesting posts related to testing and social science you might want to read:

John is currently writing an awesome series of articles about creative and critical thinking. Part 1 of “Creative and Critical Thinking and Testing” can be found here. From there you can find the other parts about the different styles of thinking.

So why is this important?
Systems thinking reminds us to look at the big picture and see systems as a whole. What is the purpose of the organisation we work for? And is the project we are doing contributing to that? Creative thinking helps us solve problems in creative ways, come up with more things to test, and find ways to test effectively. Critical thinking helps you really understand what you are doing. As in research, we have to process large amounts of data and make sense of it. But we also have to recognize, analyse and evaluate information, arguments and problems (see the critical thinking diagram above).

Thinking is an underappreciated subject. Thinking is very important for testers, and we should learn from science: doing research, learning to design and perform experiments, collecting, organizing and analysing data, and using the results to decide on the next steps in our work. Critical thinking helps us ask better questions in our projects and identify problems faster. It also helps us avoid traps: biases and assumptions. More about that in my next post.

To be continued… part 3: irrationality and biases

What testing can learn from social science – Part 1

Last February and March I had the privilege of speaking at Belgium Testing Days in Brussels and at TestBash in Brighton about what testing can learn from social science. In a series of daily blog posts I am going to write about this subject: why I chose this topic, what sources I studied and, finally, how I have applied this material to my work.

Rapid Software Testing
In the Rapid Software Testing class I took in 2011, Michael Bolton talked about being empirical and thinking critically as a tester. About collecting data from experiments using a heuristic and exploratory approach. About reporting by telling stories about testing instead of only reporting figures and numbers. Testing is about providing valuable information to inform management decisions. This awesome class empowered me to connect the dots of things I had been thinking about for years. It pointed me towards a lot of books and information “outside” the IT and testing domain, and it triggered me to learn more about social science.

Test reports
Do you recognize test reports like this?

I used to write test reports like that. I was counting test cases and issues and advising my clients to take applications into production. But what do these numbers tell us? What if we didn’t test the most important functionality in the software? Numbers don’t mean anything without context!

Another example comes from an assignment I did at a telecom company years ago. Testing was estimated by numbers of test cases: “we have 8 weeks to test, so we can do 800 test cases” was a normal way to plan and estimate testing. Somewhere along the project, my project manager told me his budget had been cut by 10%. He asked me to drop 80 test cases from our scope of 800. What was he thinking? As if all test cases take equally long to create, execute and report!

Exact or social science?
Testing and informatics (the science of information) are often seen as exact or physical sciences. People perceive that computers always do exactly the same thing. This gets reflected in the way they think about testing: a bunch of repeatable steps to see if the program is working and the requirements are met. But is that really what testing is all about? I like to think of testing more as a social science. Testing is not only about technical computer stuff; it is also about human aspects and social interaction.

Traditionally the focus in testing is on technical and analytical skills, but testing requires a lot more! Testing is also about communication, human behaviour, collaboration, culture, social interaction and (critical, creative and systems) thinking. The seven basic principles of the context-driven school tell us that people, working together, are the most important part of any project’s context, and that good software testing is a challenging intellectual process: only through judgement and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

Quality
Can we measure the quality of software? And can we do that objectively? When I ask people about quality, they often refer to requirements: “Quality is compliance with functional and non-functional requirements”. In my experience, I have never seen a document that contained all the requirements for a software product. We could argue that requirements engineers have to do a better job. Are they doing a bad job? Can we solve the problem by writing better requirements? When discussing quality, I like to use coffee as an example. I like strong, black coffee without any sugar or milk. But what if you do not like coffee? For somebody who doesn’t drink coffee, my cup of coffee has no value at all. But it is still the same cup. How can that be? And how about taste? Why doesn’t coffee from an average office machine taste very good, while it meets the “requirements” I just mentioned? And what if I change my mind? Not so long ago I drank lots of cappuccino; nowadays I don’t like it any more. That is why I like the definition by Jerry Weinberg and the additions made by James Bach and Michael Bolton.

Quality is value to some person (Jerry Weinberg)
Quality is value to some person who matters (James Bach)
Quality is value to some person at some time (Michael Bolton)

I began to believe that there is much more to quality than requirements alone. I also believe that software quality is very subjective and will change over time. To better understand the subjective, human aspects of software quality, I started to study social science in general, and human thinking and qualitative research in particular.

Qualitative and quantitative research
Quantitative research is about quantities and numbers. The results are based on numeric analysis and statistics. There is nothing wrong with numbers, but we need to understand the story behind these numbers! Like the test report example: what is the story behind those numbers? What did we test? And how good was our testing? That is where the qualitative aspects come in. Qualitative research is focused on differences in quality and is usually done for more exploratory purposes. It is more open to different interpretations. Qualitative research accepts and deals with ambiguity, situation-specific results and partial answers. When doing this, testers may be more prone to bias and personal subjectivity.

To be continued… In part 2 I will discuss critical and systems thinking.

Misconceptions about testing

Shmuel Gershon’s tweet pointed me to an article on the Scrum Alliance website, Agile Methodology Is Not All About Exploratory Testing by Dele Oluwole. I share Sigge Birgisson’s concerns: “the post clearly shows what I mean when having deep concerns about the knowledge of testing in agile community”.

I think Dele doesn’t fully understand what testing is, or at least he uses a different definition than I do. And he certainly doesn’t understand exploratory testing. Rikard Edgren wrote an open letter to explain what testing is. Please read his high-level summary of software testing to understand what testing means to me.

"It is imperative to state in clear terms why Agile testing cannot be all about exploratory testing"

(The text in these gray frames is quoted from the article by Dele.)

Some weeks ago I wrote a post about agile testing, titled “What makes agile testing different”: agile testing isn’t that much different from “other” testing. Why do some people think agile is so different? To me there is no such thing as agile testing. There is testing in an agile context. And every agile context is probably different. So saying that agile testing cannot be about exploratory testing makes no sense to me.

"It is unequivocally the case that: you cannot estimate your time for exploratory testing, i.e., assign points realistically"

Estimating testing is an interesting topic. This blogpost by Morgan Ahlström nicely emphasizes that estimates are guesses. Martin Jansson of the Test Eye writes about “utopic estimations” here. Michael Bolton wrote a lot about estimation here, here and here. He explains that testing is an open-ended task which depends on the quality of the products under test. The decision to ship the product (which includes a decision to stop testing) is a business decision, not a technical one.

The fifth part of the black swan series is especially interesting in this discussion, because Michael writes about the fallacies surrounding “development” and “testing” phases (as described by James Bach). He also explains why estimating the duration of a “testing project” by counting “test cases” or “test steps” is not a smart thing to do.

"You cannot plan for exploratory testing, as you do not have defined expected results."

Why are some people so obsessed with expected results? And why would we need expected results to be able to plan testing? Expected results can be very helpful, but there is much more to quality than doing some tests with an expected result. A definition that I like is by Jerry Weinberg: “Quality is value to some person”. You might want to read this blogpost to understand that there is more to quality than the absence of bugs. Also have a look at the excellent free ebook “The Little Black Book on Test Design” by Rikard Edgren. On page 2-6 he uses the “testing potato” to explain that there are more important aspects to the system than the requirements alone.

"There is no defined scope for exploratory testing."

In the exploratory testing I do in my projects there is a “scope”. I do targeted and focused testing using charters and sessions, and I plan my testing using a dashboard that resembles a scrum board. Have a look at the slides of my presentation “Boosting the testing power with exploration“.

Using a coverage outline in a mind map or a simple spreadsheet, I keep track of what I have tested. My charters (a one- to three-sentence mission for a testing session) help me focus; my wrap-ups and/or debriefings help me determine how good my testing was. My notes and session sheets keep track of what I have done in my testing. Like in scrum, I use “standup” meetings to plan my testing. In these meetings we discuss progress, risks, priorities and charters to be executed. This helps us make sure we continuously do the best testing we can.
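For readers who are not familiar with session-based test management: here is a minimal sketch of what such a session record could look like in code. The field names are my own invention for illustration, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """One time-boxed block of exploratory testing (fields are illustrative)."""
    charter: str              # a one- to three-sentence mission
    tester: str
    duration_minutes: int
    areas_covered: list[str]  # maps onto the coverage outline
    notes: list[str] = field(default_factory=list)
    bugs: list[str] = field(default_factory=list)
    issues: list[str] = field(default_factory=list)  # questions, obstacles

# A hypothetical session sheet, ready to be debriefed at the next standup:
session = Session(
    charter="Explore the payment flow with expired cards, "
            "looking for unclear error messages.",
    tester="Huib",
    duration_minutes=90,
    areas_covered=["payments", "error handling"],
)
session.notes.append("Error text differs between web and mobile.")
session.bugs.append("Expired card is accepted when retrying within 5 seconds.")
```

The point is not the code, of course: a spreadsheet row or a page of notes works just as well, as long as charter, coverage and findings are captured for the debrief.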

"The tester, product owner, and Scrum team are not in control."

I am not sure I understand what you are trying to point out here. Are you saying that the team is not in control when doing exploratory testing? My model above shows that you can be in control when exploratory testing is done right. Exploratory testing is NOT ad hoc or unstructured when done right. If you do it right, you will have control!

"There is no measure of progress, as testers cannot determine when testing is enough."

How do we determine how much testing is enough? Stopping heuristics might help here. No matter how simple the system we are building is, there are simply too many variables to test everything. So testing is about making choices about what to test and what not to test. Even with a huge amount of automated checks, we cannot check/test everything (to understand the difference between testing and checking, read this). Testing is not about doing X test cases and being done when they all pass. Testing is providing information for managers to make good decisions. And when do managers have enough information to inform their decisions?

"Still, not many Agile projects will require just two phases, like integration and regression. But it's definitely not only exploratory testing that's needed, as is erroneously believed in some quarters."

I am not sure what you mean by two phases. What do you mean by testing in phases? I like to use the agile testing quadrants when I try to explain how I think of testing in an agile context.

A team is developing software and the programmers do testing before checking the software in and making it available to the team. What do we call that sort of testing? Unit testing? But is that the only testing done by the programmer? I argue that the programmer might do all kinds of testing before checking his code in, even functional and acceptance tests. They will probably create a lot of automated checks and maybe even do some exploratory testing to see if the software meets their expectations: quickly testing some usability and performance aspects. So before the software is checked in, the programmer has covered testing in all four quadrants. This does not mean testing is done. More testing can be done; it depends on the context. What does the product owner want to know? What are the risks involved? How much time do we have left?

When discussing a test strategy I try not to speak about phases; I like to discuss what gets covered and why. What information is needed by the team and its stakeholders? When talking about coverage I do not mean code coverage but test coverage: the extent to which we have traveled over some map (model) of the system.
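As an illustration of coverage as travel over a map: a coverage outline can be as simple as a list of product areas with a judgement per area. The areas and session IDs below are made up for the example:

```python
# Hypothetical coverage outline: a simple model of the product,
# with a rough judgement of how much testing each area has seen.
# "none" / "some" / "deep" is a judgement call, not a metric.
coverage = {
    "payments":       {"status": "some", "sessions": ["2012-10-01-A"]},
    "error handling": {"status": "some", "sessions": ["2012-10-01-A"]},
    "user accounts":  {"status": "deep", "sessions": ["2012-09-28-B", "2012-09-29-A"]},
    "reporting":      {"status": "none", "sessions": []},
}

# The untested areas are the interesting part of the conversation with
# the product owner: is the risk of not testing them acceptable?
untested = [area for area, info in coverage.items() if info["status"] == "none"]
print("Not yet covered:", untested)
```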

So I don’t think you can say in general whether only exploratory testing is needed or not; it depends on the context.

Dele concludes his article with this statement:

"It is the responsibility of the tester (and the Agile/Scrum team) to ensure that acceptance testing is in line with the expectation of the product owner. If we agree that there is an expectation, we therefore have to design test cases (even if minimal) that will verify the specified acceptance criteria."

Dear Dele, please read about testing and exploratory testing. Some good starting points are these lists of resources on this blog, or the one made by Michael Bolton: resources on Exploratory Testing, Metrics, and Other Stuff. I am happy to point you to more good sources of information if you are interested. Just let me know.

DEWT2 was awesome!!

Last Friday (October 5th) a bunch of software testing nerds and one agile girl gathered in Hotel Bergse Bossen in Driebergen to talk about software testing, with the central theme “experience reports: implementing context-driven testing”. Ruud published almost all the photos I took on the DEWT website, so I will write a report in text here. Thanks dude 😉

After some drinks and dinner we gathered in the conference room called the chalet. I think they call it the chalet because of the humid smell inside; it certainly didn’t look like one.

But never mind, the room was good enough for a peer conference. Lightning talks were on the program for Friday, and Jean-Paul started with a talk in which he asked the question: “Is the context-driven community elitist?”. Jean-Paul sees a lot of tweets and blogs from people in the context-driven community who seem to look down on the rest of the testing community, sending the message “look how great we are!”. Is this effective behaviour, he asked himself, given that the context-driven community wants to change the way people test?

Joris had a short and clear answer to the first question: “yes!” (the context-driven community is elitist). We had a long discussion that went everywhere, but never came close to an answer to the question about effectiveness and how it can be done better. Nevertheless it was a valuable and fun discussion. I will blog about this later this year.

We had a great evening/night with home-brewed beer “Witte Veer”, Belgian beer brought by Zeger and a bottle of Laphroaig quarter cask brought by Joris. Sitting at the fireplace, a lot of us stayed up until late.

Saturday morning the last DEWTs and guests arrived. With 23 people in the room we had a great group. Our guests were: Markus Gärtner, Ilari Henrik Aegerter, Tony Bruce, Gerard Drijfhout, Pascal Dufour, Rob van Steenbergen, Derk-Jan de Grood, Joep Schuurkes, Bryan Bakker, Lilian Nijboer and Philip-Jan Bosch. The DEWTs: Adrian Canlon, Ruud Cox, Philip Hoeben, Zeger van Hese, Jeanne Hofmans, Joris Meerts, Ray Oei, Jeroen Rosink, Peter Schrijver, Jean-Paul Varwijk and myself.

Peter was our facilitator, and for the first time we used k-cards at DEWT to facilitate the discussion. I really like this method, but Lilian didn’t. She had an interesting discussion on twitter with some DEWTs.

Jean-Paul: I like the LAWST format we are using for #DEWT2
Lilian: @Arborosa i don’t. impersonal and inhibits useful discussion
Markus: @llillian @Arborosa What’s the improvement you’re suggesting? 🙂
Lilian: @mgaertne @Arborosa but i am invited and feel i at least have to try it this way
Jean-Paul: @llillian @mgaertne Invite us to an agile event to learn alternatives
Markus: @Arborosa @llillian Funny thing, I would like to introduce more focused-facilitated sessions quite often in agile discussions using k-cards.
Lilian: @mgaertne @Arborosa different ppl prefer different things 🙂

Ilari kicked off with a presentation about the introduction of context-driven testing at eBay. He had two very interesting ideas: the first is the “Monday Morning Suggestion”, a short email to his team with tips, tricks or interesting links. The second is that he supports his team in getting better and learning by paying for their books and conference visits. Great stuff!

Second was Markus Gärtner with a story about coaching a colleague to become a better tester and more context-driven. He talked about the lessons he learned while coaching, where teaching gradually turned into coaching. He also gave some insight into the transpection sessions he did using the Socratic method.

The third presentation, by Ray Oei, was about changing people to become more context-driven. An interesting discussion developed about how to get people to change: we focus too much on the people who do not want to change, instead of working with the people who do. Pascal asked an interesting question which I put on twitter: “If all testers become context-driven overnight, are we happy?” James Bach replied: “Yes, we are happy if all testers become CDT” and “We are happy if all testers take seriously the world around them, and how it works, and dump authoritarianism”. To me it doesn’t really matter; I do hope that more people become context-driven, but I prefer good software testing no matter what they call it.

After the discussion Joris said he was attending the wrong peer conference, since he expected a lot of discussion about testing instead of what he calls “people management”. An interesting point, but isn’t testing also a lot about people? Just an hour late we went for lunch and a walk in a wet forest. After the walk the group photo was taken.

The fourth talk was by Ruud Cox, who talked about testing in a medical environment. He described the way he has been working: he and his fellow tester use exploratory testing to learn and explore, while scripted testing is used to do the checking. Ruud explained that exploratory testing fits very well in an R&D environment. After his talk we had an interesting discussion about auditors and their role in testing.

Around 16:00 it was my turn to do my talk about implementing context-driven testing within Rabobank. I did a short version of the talk I will do at EuroStar in November, telling about the challenges we had and what we did to change our context. What did we do? What worked and what didn’t? Some interesting questions were asked, and we had a nice discussion about the personal side of becoming more context-driven, in which Joep explained how he became context-driven. The first stage was being interested, and he went looking for a book. The second stage took him over a year, struggling with the stuff he learned at RST and trying to adapt his way of working. In the third and last stage he doesn’t need anything any more, because he will always find a way to make things work.

After my talk and the discussion the day ended. We did a quick round to discuss our experiences and thanked the people for attending, organizing and facilitating. Our agile girl still doesn’t know what to think about the facilitation with the k-cards; she obviously had to get used to discussing with the cards. I am curious what she will think of them once she has had the chance to reflect some more.

Altogether I had a great time. Pity we didn’t play the Kanban game on Friday, but I hope Derk-Jan will give me another chance at the game night at TestNet in November. Thanks all for the awesome weekend!!

DEWT3 is already being planned for April 20 and 21 next year and James Bach will attend. Let me know if you are interested in joining us in April 2013. The topic hasn’t been decided yet, but if you have great ideas please share them.

Standing left to right: Ray Oei, Jean-Paul Varwijk, Adrian Canlon, Markus Gärtner, Ruud Cox, Joris Meerts, Pascal Dufour, Philip Hoeben, Gerard Drijfhout, Bryan Bakker, Derk Jan de Grood, Joep Schuurkes, Lilian Nijboer, Philip-Jan Bosch, Jeroen Rosink, Jeanne Hofmans
Kneeling left to right: Tony Bruce, Zeger van Hese, Ilari Henrik Aegerter, Huib Schoots, Peter Simon Schrijver
