The tester's job is to make things work, not the opposite!
In my last post I forgot to mention the main idea I'd like to retain, which is: the tester's mission is to make the software work.
What we want, in the end, is working and satisfying software, with few relevant bugs. Let's not forget it. The more I get the software to work, the more complex scenarios I will build, and eventually the more interesting and relevant defects I will find.
If a tester gets stuck on basic defects, often superficial, often interface-level, but sometimes truly blocking ones, he won't be able to advance in his mission, which is to have working software. That's why I advised in my last post to start with simple, common, expected scenarios.
Corner cases are often interesting and exciting from an academic point of view, but in the long term they will not lead as efficiently to working software. Although testing is usually seen as a way to find defects, I have found that on complex systems our mindset should be focused on software success, not on software failures.
Friday, October 9, 2009
Thursday, October 8, 2009
What should first tests address?
When planning a testing effort, or if you wish, when designing a Test Plan, some considerations may turn out very useful.
First tests on a given area are decisive. If we consider that testing is learning, or asking valuable questions of the product, then the first questions are the most important ones, because we are making first contact with the product (and first impressions tend to stick in our minds).
When addressing these questions, we should expect answers like "Yes, this feature is working exactly as I expected", or "Not at all, there is no way this could work". Answers like "in this particular situation, this might not work", or "I think this is working, but I am not sure it is very useful", are discardable at an early stage, because we are trying to gather basic information, not exploring detailed scenarios.
We should seek the most common scenario, the main functionality, and try to set up ideal conditions so the software succeeds. At this point we can say whether the feature is working or, on the other hand, not working at all.
Even after passing the very first test, after we say the feature is working, we will increase the complexity of testing, but always trying to go for the next simplest and most common scenario that someone might want. This way we achieve:
-Continuous and steady knowledge growth, and easier, logical learning. Our mental model of the product is built from the base to the upper structure, not the other way around.
-An efficient testing effort: every result can probably be matched to our mental model, which is growing continuously, thus minimizing gaps between the complexity levels of our knowledge. Suppose we go for a complex scenario at a very early stage and find it is not working. My experience tells me that the test is often discarded, or at least will have to be repeated later, after doing some simpler tests. Often what we see as a product failure is a (tester) knowledge failure, because we are exploring conditions and scenarios for which the product was not built.
Because our knowledge at this early stage is extremely dynamic, Exploratory Testing (ET) may be very efficient, due to its adaptability and dynamism (see the literature on ET).
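As a sketch of this progression (in Python, using a deliberately trivial, hypothetical login feature; none of these names come from a real product), ordering tests from the simplest common scenario upward might look like:

```python
# Hypothetical feature, used only to illustrate ordering tests
# from the simplest common scenario toward less common ones.
def login(username, password, otp=None):
    users = {"alice": "secret"}
    if users.get(username) != password:
        return "rejected"
    if otp is not None and otp != "123456":
        return "rejected"
    return "ok"

# 1. First question: does the main scenario work at all, under ideal conditions?
def test_simplest_common_scenario():
    assert login("alice", "secret") == "ok"

# 2. Only after that passes, the next most common scenario someone might want...
def test_wrong_password_is_rejected():
    assert login("alice", "guess") == "rejected"

# 3. ...and only later the less common ones, building complexity steadily.
def test_login_with_one_time_code():
    assert login("alice", "secret", otp="123456") == "ok"
```

The point is not the feature itself but the order: each test answers the next simplest question, so every result fits the mental model built so far.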
Tuesday, October 6, 2009
Status vs State
Ok, I'll give you my two cents: a state is any condition that is durable or lasting (Zustand, Istzustand), whereas a status is a classification of state among several well-defined possibilities. An immigrant's status is either legal or illegal, and if he is the latter, he may be in an anxious emotional state.
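To put the same distinction in a programmer's terms (a hypothetical sketch with made-up names, not from any real system): a status maps naturally onto an enum, because it is one of a few well-defined possibilities, while a state is an open-ended, durable condition.

```python
from enum import Enum

class ImmigrationStatus(Enum):
    # A status: a classification among well-defined possibilities.
    LEGAL = "legal"
    ILLEGAL = "illegal"

class Immigrant:
    def __init__(self, status):
        self.status = status            # one of a closed set of values
        self.emotional_state = "calm"   # a state: any lasting condition

person = Immigrant(ImmigrationStatus.ILLEGAL)
if person.status is ImmigrationStatus.ILLEGAL:
    person.emotional_state = "anxious"  # the durable condition changes freely

assert person.status is ImmigrationStatus.ILLEGAL
assert person.emotional_state == "anxious"
```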
Monday, July 13, 2009
The best in testing is not the Tester: the domain problem
While reading some reviews of a book about Lester Young on the ever-best-blog-of-the-world, thebadplus, I arrived at the paradox that often the best work is done by people from other domains, and that this work is often not recognized by the specialists, due to a lack of versatility and interdisciplinarity.
There is this book about a musician, written by a historian specialized in Black Studies. Most people who will read the book are musicians (like me) and are not sensitive, or informed, or even interested in historical details or literary value. However, that book is very well written, because it was written by a writer (not a musician), and has insightful information, because it was written by a historian (not a musician).
Nevertheless, most reviews are negative, because they are written by the majority of musician-readers and a minority of social studies people.
Sometimes in testing, and of course in life, we are victims of our domain knowledge and our lack of cross-domain knowledge. I have seen psychologists and lawyers do great work in the field of testing.
The cross-domain exercise can be really refreshing, and is surely one of the brainstorming unblocking techniques.
Monday, June 22, 2009
tester skills: domain (business) knowledge vs. "technical" knowledge
Here is my response to a post, followed by a (much better) argument from Cem Kaner:
>Having the ability to understand the implementation gives you the confidence and knowledge to make sound decisions on what to test and where to focus your time and energy. It allows you to challenge the system design and test pieces of the system as they are developed.
On the other hand, you will be thinking in the same biased way as the developer, only (hopefully) with a deeper caution and inspection level. If you don’t have any idea about the implementation, you might stumble across totally different issues. Certainly everyone has the experience of looking at a product from a new point of view, for instance when the product is new, or when the tester is new, and how productive that might be (in terms of number of reported defects).
>However we agree that technology understanding is more expensive or more difficult to master. We don't believe that domain knowledge is not important but that it is easier to acquire.
With all respect, I don't see how you agree on that. When a tester is hired, academic qualifications are required, but in my experience there is no specific need for a programming language or a particular technology. The academic qualifications indicate that the tester is able to learn, has already learned technical disciplines, and has the ability to think as an engineer, solve problems, etc. They are usually not a guarantee that he will master any particular technology. The professional will learn to master a new technology as the work demands it.
But acquiring business expertise usually takes time and money. While the technical knowledge is assumed at hiring, you will have to pay the tester's salary while he learns the domain's particularities. So it is not cheap!
In my field (contact center software) there are visible differences between someone who has 1 year of experience, or 2, or 5, or 10! Lately we hire people from other departments within the company to do the testing, because they start producing results much faster.
You might think that is because my case is a complex one (contact centers, client databases, telephony switches, etc.), but even a simple form to input a client name can have lots of problems. Let me give an example: recently I bought an airplane ticket from a well-known company over the internet. Can you imagine a simpler issue than requesting the traveler's name? Well, there were problems at check-in, because it was not clear that only the first and last names were requested, so the ticket only caught middle names!
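A minimal sketch of the kind of failure I mean (hypothetical code and names; I don't know how the airline actually implemented it): a form that silently keeps only some tokens of a name produces a ticket that no longer matches the passport.

```python
def ticket_name(full_name):
    # Naive extraction: keep only the first and last tokens,
    # discarding everything in between.
    parts = full_name.split()
    return f"{parts[0]} {parts[-1]}"

passport_name = "Ana Maria Costa Silva"
printed = ticket_name(passport_name)

assert printed == "Ana Silva"      # the middle names are silently dropped
assert printed != passport_name    # so the ticket no longer matches the passport
```

Testing even the "simplest" form means checking it against the real names people carry, not just against the happy path.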
Having said this to defend the "business" side, I should say I always try to model (pseudo-code) the implementation and to know the technology being used, because it actually helps me find out "what's happening" when something unexpected happens (and that's all the time).
Me
-------------
> First our argument about domain was insurance. I argue that it does not take several
> years to know what you need about insurance to be an effective developer tester or
> system architect. This is why people in banking and insurance can move easily.
Funny thing about insurance.
I too have seen incompetent programmers, architects, project managers – and testers – at insurance companies, and banks and government agencies and universities and telephone companies (and even at game companies). I too have worked with people who have no relevant education and who serve as their corporation's advocates of excess paperwork, invalid metrics, and the politics of blame. (And who take exams so they can put on their resume that they are Internationally Certified as Professionally Incompetent.)
However, I’ve also seen really good people in these kinds of organizations. After all, SOMEONE has to get the stuff done and working.
I don’t know why incompetents survive in these organizations, but when I think about knowledge, skills and attitudes needed to do a job well, my first point of reference is that I need people who can do the job well, not that they should not be aggressively incompetent.
Insurance companies create or acquire a lot of different types of software.
For example, testing software that does risk estimation (a core issue for insurers) takes deep subject matter knowledge. You CAN do the superficial testing (OK, it doesn’t take letters in number fields), but that doesn’t tell you whether the program is processing the probabilities appropriately – and that’s what the company is betting its future on.
Here’s another example: the software that salespeople use has to get them the information they need, quickly (quickly by their standards, not programmers’ standards), accurate in every potentially relevant respect (where relevance and accuracy are evaluated through the eyes of the salesperson and corporate counsel, not by the “good enough” filter of the programmer who has never done this job), in a format they can understand (not a format that we tell them they should be able to learn if they’re smart enough and work hard enough), displayed in a way that is not going to cause difficulties if someone looks over their shoulder at their screen, and that facilitates closing the deal. Having sat on the other side of sales software (as a user, rather than developer), my impression is that very few programmers understand what salespeople actually need to do their job well and even fewer have enough respect for the work salespeople do to CARE what the salespeople need. Sales is difficult work. Tools that support salespeople well are difficult to design and evaluate. There is such a thing as genuine domain expertise in this field, it is rare, and it takes time and work (including a lot of visits to the field) to develop.
Let’s pretend that insurance companies operate with 10 different types of software (actuarial being one example, sales support a second, etc.) I think that means that a central test organization has to develop subject matter expertise in each of these areas.
Unit tests, and most other types of glass-box tests, are verification-oriented. The question is whether the program does what the programmer intended. It’s an interesting question whether verification should be done by the programmers (whether this is a task for people designated “tester” at all), but whoever does it, that doesn’t take us to the other classes of issues that I think testers SHOULD be spending time on. Not whether the programmers implemented a model correctly but whether it was the right model. Not whether the program meets specified performance criteria but whether performance is fast enough to meet the needs of the client. Not whether the program presents the intended menus and graphics but whether they are natural and useful to the people for whom the software was written. Not whether the software works on the specified platforms but whether it works on the platforms in use. Whether it interoperates effectively with other software, etc.
Verification is the handmaiden of validation and evaluation. It is easier to evaluate a program that operates in a consistent and predictable way. Someone has to do verification, it is important work, and the test group can help with this, but if the program doesn't solve the problems that its users need solved, if it doesn't provide the benefits that they need provided, then it doesn't matter how well the program does what it does. Programming-focused testers can help a lot with verification, but testers need a lot of other knowledge to do the rest.
Cem Kaner, J.D., Ph.D.
Thursday, April 2, 2009
McLuhan for testers (reminder)
The four questions to raise when facing a new tool/technique/heuristic/etc.:
(replace /medium/ with /tool/ or whatever)
1- What capabilities does the medium extend? (E)
2- What medium or media do the new medium obsolesce? (O)
3- How might the message of the medium eventually become reversed from its original message? (R)
4- How did things change when the retrieved medium appeared as a new figure in its corresponding ground? (R)
Mnemonic EORR
Michael Bolton pointed out this great definition:
a medium is something that causes change to happen.
Friday, March 27, 2009
Creativity in Testing
I would go even more general and say that engineering is a creative activity. I see engineering as the craft of solving problems, which requires deep domain knowledge along with technical skills and an open, creative, and critical mind.
For testers, I think creativity comes up in things like:
-Test design aimed at exposing specific product features
-Finding workarounds for known problems
-Finding probable flaws or defects, based on our knowledge of the product history and on human nature
-Finding potential ways to break a fix
-Finding faster/smarter ways to accomplish my objectives
-Finding out what would be the best/ideal tool for a specific task
-Finding tools
-Creating tools
-Imagining how the tool I am using might threaten the value of my work, for instance by giving me confidence in a certain area through false or biased results
-Generically, imagining how my concepts might be deceived. For instance: am I pretending to advocate for the customer when in fact she doesn't share my values?
-Last, but most important, finding or previewing defects just by carrying our thought through to the conclusion. Imagine a chess game: if a developer thinks 3 moves in advance and the tester thinks 4, then the tester can anticipate defects that the developer didn't. I often think of this as imagination, rather than creativity, in the sense that we let our mind go all the way to the end.
Lisa Crispin wrote:
>
> IMO, software development in general is a creative activity, testing
> included. Testers add the most value when they strive to hone their craft,
> collaborating with team members to find good ways to ensure delivering high
> quality software and delighting customers. Exploratory testing requires
> creativity as well as critical thinking. Testers need lots of creativity,
> imagination and empathy to come up with good test cases and
> scenarios.Testers on my team come up with creative solutions to
> coding/development problems, not only testing problems (and the reverse is
> true too).
> -- Lisa
>
> On Fri, Mar 27, 2009 at 9:17 AM, Strg Pune wrote:
>
> > Hi All,
> >
> > Recently I watched a video on YouTube and fallen in love with the thought
> > expressed by Sir Robinson.
> > Sir Ken Robinson: Do schools kill creativity?
> > The question is how can we as testers' community adopt the notion of
> > creativity?
-----------
Michael Bolton added a plus 1 :) and:
> If one believes in the theory of multiple intelligences of Howard Gardner,
> what is it that the tester has to possess to be a creative tester?
Among other things...
- independence of mind
- diversity of experience, approaches, models, tools, education...
- freedom and encouragement to innovate, explore, and discover
- freedom from arbitrary constraints
- responsibility to explain and justify his/her work
---Michael B.
Another add-on:
The creative process is said to consist of five stages.
1. Preparation
2. Incubation
3. Illumination
4. Elaboration
5. Verification