Thursday, July 24, 2008

Failure Improvement vs crime investigation

Brian Marick said:
I'd probably say the most important skill is
"failure improvement":

<http://www.testingcraft.com/failure-improvement.html>

It, at least, is the skill I see testers paying attention to that I
almost never see anyone else paying attention to.

---
As I was reading the link, I realized this part of a tester's work is what I usually compare to detective work (especially when someone says testing is boring):
- As the crime (failure) is found, I will try to gather all the information from the crime scene (the context)
- I will try to rebuild the story (remember exactly what I did), using all the evidence I can gather (traces and logs)
- I will try to predict where the criminal will strike again (or what the precise conditions are for the failure to occur)
- Then I will try to catch him at the crime scene (replicate the failure)

In fact, all six items that Brian refers to in his article (the purposes of failure improvement, or defect isolation) map quite well onto this crime metaphor:

1-Split defects - maybe there is more than one criminal; possibly even more than one crime
2-Simplify the defect - when there is a simple explanation for a crime, it has a great chance of being the truth. When it is complex, it may still be the truth, if there is no simpler one (as Sherlock Holmes said :)
3-Replicate - the criminal always comes back to the crime scene.
4-Find the "ugliest side" of a defect - what is the most damage this criminal can do if he is not caught? I believe the authorities often ask this question in order to decide how much effort to put into the investigation (or the risk of "not fixing the defect")
5-Describe the failure boundaries accurately - or define precisely the pattern of the crimes
6-Find new failures - find new crimes - preferably before they happen!!
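
A side note from me: the "simplify defect" step can even be sketched in code. Here is a toy Python sketch, assuming we have a fails() predicate that replays a scenario and says whether the failure still occurs; the predicate and the step names are invented by me, so this is just an illustration of greedy simplification, not Brian's method:

    # Keep dropping steps from a failing scenario while the failure
    # (the "crime") still reproduces; what remains is a simpler defect.
    def simplify(steps, fails):
        assert fails(steps), "we can only simplify a failure we can replicate"
        changed = True
        while changed:
            changed = False
            for i in range(len(steps)):
                candidate = steps[:i] + steps[i + 1:]  # drop one step
                if fails(candidate):                   # still a crime scene?
                    steps = candidate
                    changed = True
                    break
        return steps

    # Hypothetical usage: the failure really needs only "open" and "save".
    repro = ["open", "edit", "resize", "save", "close"]
    print(simplify(repro, lambda s: "open" in s and "save" in s))
    # -> ['open', 'save']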

Thursday, June 5, 2008

Software Artist

Indeed [Chris said that the Software Artist is *not* a metaphor]. A metaphor (in this context) is a conceptual tool to trigger creativity through unexpected relations between apparently different subjects. It is used both in the software world and in the traditional world of the pure arts.
---
[Shrini asked about performance vs. productivity]
Both terms apply to both worlds. I will try to give counterexamples:

-In musical creation, we talk a lot about productivity. For example, the modern jazz artist Anthony Braxton releases dozens of albums a year. In classical music there are famous people who were extremely *productive* despite living only 30 years, and others who composed very few pieces but became famous anyway.
I remember hearing conversations like this back when I studied at the conservatory of music.

The value of productivity is arguable; some may do more with less. This is true for the software world and for the arts. Of course, when we talk purely about money, we always want "more" (and that is also true for both worlds).

-When designing a piece of software, sometimes there are problems, like a feature we don't know exactly how to implement. At my company, there is a guy who is known for always finding great solutions. He is known for his *creativity* in working around hard problems.

Maybe there is a problem with the words. In a previous post Chris suggests using "create" instead of "write", "code", or "build". This seems like a great idea.

.....
Sorry, I used "creativity" in my previous post instead of "performance", but the reasoning is still simple.
The word "performance" is used very often in engineering fields: note how it is used in marketing language to describe new car models ("modelxxx has great performance off-road") so that people get a fast, intuitive idea of an overall characteristic of the vehicle.

We could equally say "John has a great performance as an exploratory tester", or "the new kernel has great performance in that scenario".

-----------------

Michael Bolton describes metaphor even better than I do:
Actually, "is like" denotes simile, not metaphor. What you'resaying--something like "software IS art" is metaphor, quite distinct from"software IS LIKE art".

>I thoroughly do not intend for The Software Artists to be interpreted as an unsupported metaphor.

That's too bad, because metaphors rule. In addition, metaphors don't require support or references or citations. Metaphors are models that (as long as we're paying attention and participating in the exercise) highlight both the similarities and the differences. Metaphors get their power principally from the cognitive friction that gets generated from equating (not merely comparing) something with something else that is manifestly different AND manifestly the same.
---Michael B.

Wednesday, June 4, 2008

Software artists explained

Shrini, thanks for your always-critical thoughts. You attribute some phrasing to me that is not my own; if you read my post carefully, you will see I say:

"engineering is also a form of art, in what relates to creativity, beauty and emotion drive"
and
"Every piece of software we create [...], has a beauty that will echo [...]"
which is not the same as saying:
"all engineernig is art" (typo kept)

but I get your idea, so I will try to explain a little better.
I think there is a continuum between the two extremes: the completely mechanical process (call it automated, or automatic) and the human or artistic process.

Take some examples:
In a painting, for example, a large amount of the work is technical: the materials, inks, brushes and the painting technique itself. There is also a lot that is conceptual, emotional, artistic.
On a musical instrument the same applies: technique, sound engineering, scales, arpeggios, etc. are ideally automated "under the fingers", so that our mind can search for more musical ideas.
So these activities, although traditionally from the "art" field, often consist of 90% mechanics and 10% art, if you know what I mean.

Now for the field of engineering:
-Suppose my job is to test a piece of software. I will get to know the business context, so I can explore some useful scenarios - just as a painter will focus on a theme and explore it.
-I will place myself in the end user's position and try to see what value means for this product - just as an artist who is constantly evaluating his work.
-Sometimes I will complain because the GUI does not communicate the idea I think it should, thus misleading the user - sometimes music fails to communicate, doesn't "click", as well.

So, just to summarize my point: I don't think "everything is art"; instead, I think we can learn a lot if we look at processes that are traditionally not artistic through an "art lens". Particularly through the use of metaphors, which are everywhere in art, but also in software (for example, a button is a metaphor for an action; sometimes the button is there only so we can notice we can do something).

This said, I would like to add that we cannot separate the artistic individual from the practical individual inside each one of us, although we use one or the other at different moments (some say they are located in different zones of the brain...)

Tuesday, June 3, 2008

Software Artists

I think this theme is definitely important, and I believe people are realizing this now and will keep realizing it in the near future. So I congratulate you on this article.

Now for the criticism... When you split engineering from art, I tend to disagree. I think engineering is also a form of art, in what relates to creativity, beauty and emotional drive. For example, the beauty of a construction: one may argue that the architect is the artist, but the engineering itself can be beautiful, for instance in the simplicity of the structures, or the way the light enters the interior (this is really an engineering problem). The effort of the engineer was driven by his feelings, his ideas, and the emotions he invested in the future users of the house.

More generally, I think there is no world where we can drop the art, just as we cannot drop the feelings. Every piece of software we create, every test we design, every chair we build, has a beauty that will echo when someone else sees our mind through that work (of art)


--- "Chris McMahon" wrote:
>
> A couple of times on this list I've mentioned starting what I call the
> "Artistic School" of software development and testing. I think it's
> critically important to have intellectually rigorous descriptions of
> software practice that are *not* based in the language of
> manufacturing or engineering.
>
> To that end, I have published The Software Artists:
> http://chrismcmahonsblog.blogspot.com/2008/05/software-artists-index-of-links-to-all.html
> This is an attempt to describe software practice in the language of
> art and performance instead of manufacturing and engineering.
>
> I"m interested in comments and criticism on the paper. I vacillate
> between thinking it's either the coolest thing I've ever written, or
> completely and totally irrelevant. It might also be just a mildly
> interesting failure.
>
> Anyway, if you read it, let me know what you think.
> -Chris

Friday, May 16, 2008

Funny Quotes

Bill Hetzel, 1988, The Complete Guide to Software Testing
"The only exhaustive testing there is is so much testing that the tester is exhausted! "

The variation of this that is my wording is "Programs aren't released. They escape!" The variation of this that is uniquely James' is the notion that there are some testers who won't release a product--"you have to pry it from their cold, dead hands." -- Cem Kaner

Tuesday, March 18, 2008

Delightful engineering

I usually relate the delight of operating software to usability or functionality.

Delightful software is more easily sold. A demo or a talk featuring nice usability features is much more successful, like:
-the software "guessing" what the user will do next and providing shortcuts; removing redundancy often cuts several useless steps in wizards, for example; and, according to context, useless options are simply not offered;
-software that is self-explanatory, and makes the user feel like a genius instead of making him feel stupid;
-functionalities presented as a bonus; the user is surprised by capabilities that were not intended at first, but tooltips tell him they exist and are easily accessible.
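
To make the "according to context, do not provide useless options" idea concrete, here is a toy Python sketch; the option names and the context flags are invented for illustration:

    # Hide options that make no sense in the current context, instead of
    # showing everything and making the user feel stupid.
    OPTIONS = {
        "Save":       lambda ctx: ctx["document_open"],
        "Undo":       lambda ctx: ctx["history_len"] > 0,
        "Eject Disc": lambda ctx: ctx["disc_present"],
    }

    def visible_options(context):
        return [name for name, relevant in OPTIONS.items() if relevant(context)]

    print(visible_options({"document_open": True, "history_len": 0, "disc_present": False}))
    # -> ['Save']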

The problem here is that delightful features often become standards. If they are good and useful features, people will come to expect them, and may even submit defect reports when the "delightful features" are not present. Then the term "delight" often evolves into "good design".

Just to summarize, I think good "engineering" has the implicit capability of creating delight, even if that is not its purpose. The world is full of this: good furniture, cars, engines, planes, architecture, music, art...

Friday, February 29, 2008

The sooner you discover a problem, the easier it is to fix it?

The last sentence (“it is better to think of the "cost" of the fix more than the ease of fixing it”) made me think of different concepts that are often confused, leading to over-simplified heuristics. Heuristics tend to drift further from the truth as they become more popular and generic. Here are some concepts I could filter out:

1-Cost of a defect: this is the sum of several other costs (2+3+4+5+6+7)

2-Cost of finding a defect: it seems that, usually, the later, the easier it is to find. Finding a defect in the specs may be hard; in working software it may be easier, but requires testing; and when a client finds it, the cost of finding is zero.

3-Cost of debugging/fine-tuning the defect: for instance, after a tester finds a defect, he will try to reproduce it, check logs, etc., to find out more about it before reporting it. I think when a defect is found in the design this cost is zero; when found in integration testing the cost is low; when found by a beta tester or a client this cost may be higher (developers will have to investigate more)

4-Cost of reporting

5-Cost of the /harm/ of a defect: if found in the specs, it does no harm; if found during integration or system testing it may delay the tests and cause some entropy/confusion; if found by a client it may cause more harm to the software factory's image, or to the client's service.

6-Cost of fixing the defect: the minimum cost is at the design phase; depending on the language it may cost more or less to fix it in the code; depending on the complexity of the system the cost may be quite high due to integration issues, for instance.

7-Cost of applying/propagating a fix: may be zero for an in-house tool; may be high for a product sold around the world. But note that even in that case the cost of propagating a specific fix may tend to zero, because Service Packs (SPs) are already scheduled: introducing a fix in an SP may be automatic, and SP distribution may be automatic as well, so the cost of propagating a single fix is almost zero!

8-Cost of verifying the fix: does not seem to depend on when it is done.

As a sort of conclusion:
- I would not use the word "easier" but the word "cost", as Paul Holland suggests;
- I would not stick to the over-simplified heuristic "the sooner the better" without paying attention to the context, as James Bach said.
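
To make item 1 concrete: the breakdown above can be read as a simple sum whose terms shift with the phase in which the defect is found. A toy Python sketch, with all the numbers invented purely for illustration:

    # Total cost of a defect = sum of the per-activity costs above.
    # Each term varies with the phase in which the defect is found.
    costs = {
        # phase found:  (find, debug, report, harm, fix, propagate, verify)
        "design":       (  5,    0,     1,     0,    1,      0,       1),
        "system test":  (  2,    2,     1,     2,    3,      0,       1),
        "client":       (  0,    5,     1,    10,    5,      4,       1),
    }

    for phase, parts in costs.items():
        print(f"{phase:12s} total defect cost = {sum(parts)}")
    # Finding gets cheaper later, but harm, fixing and propagation explode.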

Wednesday, February 13, 2008

Developer quotes

Question: How many developers does it take to change a light bulb?
Answer: What's the problem? It lights up on my desk!

Tuesday, February 12, 2008

schools of testing

[Cem Kaner] I wrote a blog post on this at http://www.satisfice.com/kaner/?p=15 that you might find helpful.

The notion of a “school” has little to do with what techniques people use. Any school can use any technique (unless, for some reason, that group has decided that, in principle, the technique is flawed or unethical).

The notion of a school has more to do with your foundational thinking of why you do what you do. And from that, it drives your strategy, it drives the way you educate yourself further, and it drives your notions of success and failure.

Take a look at the exchanges I’ve had with Don recently. The differences there are not about techniques. They reflect fundamentally different views about the role of testing in a product development effort, the ethics of test management, the definition of success, the perceived professionalism and motivation of the programming team, the definition of quality, and so on. It is easy for us to talk past each other because our basic assumptions are so different that it is hard for us to notice them.

For example,
· I think that testing is primarily about learning and I am deeply interested in ways to make testers better learners.
· Other people think testing is primarily about techniques, more a mechanical activity than a highly cognitive inquiry.
· The implications of this difference are fundamental. For example, think about the recent thread involving automation of “mainstream tests.” One of the first tasks I give a tester new to a project is to use it in ways that help the tester understand what the product would do if it worked really well. James calls this “sympathetic testing.” The goal is not to find bugs, not to regression test, not to smoke test, but to gain an appreciation for the product that helps the tester THEN hunt more effectively for problems. You would only reserve time for this type of work if you thought that it is important for the tester, this new human on the project, to learn this kind of information and that this time-consuming task is a good way to foster that learning. If you were less worried about developing the knowledge and judgment of the tester, you’d probably just automate these tests. From my point of view, that sabotages the effort because the human doing the testing will probably think about the product with less depth and thus less effectively prioritize later work and less effectively interpret and communicate later test results. But from the point of view of someone who thinks testing is about using well-understood techniques to expose classically-defined errors or deviations from a well-written specification, the automation saves time and frees the tester to apply more techniques to more code. Who’s right?
It is hard to get people to articulate this difference; it is often hard for people to articulate it to themselves. They say instead, “testing is testing, and I think testing is about what testers do,” and then they talk about the broad mix of activities they do, rather than about the underlying intuition they have about how to prioritize them.

I don’t think that people consciously join schools until they are senior enough to start asking, “why am I doing this?” and “how do I make decisions when decisions are difficult?” In those situations, professionals come up with guiding principles or guiding heuristics or guiding attitudes that influence the rest of their work.

My goal, in abstracting out the “schools” was to see what the common threads were that characterized groups of people who seemed to have reached the same principles. My primary goal, initially, was descriptive.

When I was a student in psychology, the contrasts between “schools” in that field were useful for exam study—we could organize clusters of thought in ways that the professor would grade well. And they were useful for putting some of the research we studied into context. But undergraduates who puffed their chests out and said, “I am a behaviorist!” or “You behaviorists are all stupid! I am a Gestaltist!” were (usually) playing with the ideas. Trying them on for size, experiencing them, but not really committed to them.

As a doctoral student I ran what was expected to be a “safe” set of experiments (boring but original in the sense that no one had ever done this particular boring variation on a class of boring surgery-plus-maze experimental situations) (it was like applying a same-old-boring test technique to a part of the program that hadn’t been tested this way before, expecting to find the same types of bugs that this technique usually exposes on new code. Zzzz Zzzz Zzzz) But instead of giving us the expected results, our rats behaved in entirely unexpected ways. At this point, I was nearly fired from the lab for incompetence. Instead of backing off, friends and I set up video cameras, scripted the experimental method more carefully, and showed that the rats really did behave this way. Then I simplified the experimental situation and showed that they STILL behaved that way. That convinced the head of the lab that I was on to something.

UP TO THIS POINT, everything I was doing was about technique. To the extent that theory was involved, it was the professor’s, or the senior doctoral students who were coaching my work. But now I had a set of decisions to make and those decisions would determine how I would spend the next almost-a-year or so of my research time in this lab. Time is limited, resources are limited, lots of ideas are possible, lots of experiments can be run, but if you don’t think carefully about why you want to go further and what types of things you want to learn and why, then in research, you are likely to drift in a sea of not-very-meaningful techniques. A lot of research papers come out that way, from immature researchers who spend their time doing the experiments and not much time on why or what they’ll learn or why they really care, but at some point, choices and constraints hit at the same time.

I asked, “How would Edward Tolman have analyzed this situation and how would he have prioritized the next work?” (Tolman, see http://psychclassics.yorku.ca/Tolman/formula.htm, for an example of his foundational thinking) and from there, I was operating within the mental framework of one of the dominant schools of psychology, as it applied to animal learning theory. Had I adopted that school as my own? Not exactly. Was it like my religion? No. But I used it to structure my approach to the problems, to help me choose what to do next and how (what questions, what techniques), and to help me interpret my results. If I had adopted a different worldview, I would have done entirely different experiments from the same starting point. If I had simply followed a pragmatic course, I would have done what several colleagues (other grad students, a post-doctoral researcher, and one professor) advised me to do—put this line of work on the shelf and go do something else that was safer and more likely to yield easy publications in a reasonable period of time. I adopted a theoretically-driven approach and had the good luck to get some really interesting data.
(You can see details of that experiment in my video lecture on scripted testing at www.testingeducation.org/BBST).

People can drift through their work for a long time without facing really tough choices. They don’t have to adopt a theoretical framework, or develop one for themselves, because the next decision and the next one and the next one seem pretty obvious. Not easy. But obvious. In practice, these folk pick up a smattering of ideas from the different competing viewpoints in our field. Some of the things that people like Don say sound interesting. Some of the things that James or I say sound interesting. We all (well, most of us) describe our techniques and some of those seem useful, and so, in practice, most testers I have interviewed and most test groups I have worked with have operated in a way that they considered largely atheoretical, pragmatic, and (to varying degrees) eclectic (willing to adopt a variety of different techniques and approaches to developing test ideas).

At some point, as a manager, you might hit a set of situations that are difficult enough, and feel important enough, that you start thinking at a more strategic level. Why are we in this situation? Whose opinion counts about what services we are supposed to provide, what information we are supposed to report, what information we are supposed to NOT report? How hard should we fight, over what issues? What should we do next and how can we explain it? How much more time and money do we REALLY need to do an adequate job, and what would “adequate” really mean in our situation, and how can we explain it?

Imagine you finally come up with some answers. If you’re running a group, say 6 testers (or 60), some of those people will be doing things that don’t seem like the most important things to do under the strategy you have finally thought through. Some of them can’t explain their decisions in ways that make sense to you. Some of them seem determined to do stuff that you consider wrong-headed, unlikely to yield any value at all.

At that point, whether you realize it or not, you have probably adopted a viewpoint, a way of thinking about your project, your tasks, your staff, your priorities and your values, that maps to one of the schools. Maybe not. But it’s worth considering. What we tried to do was to bring them out into the open. So that in some critical situation, you could say, “I’m not sure what to do next, but in principle, most of my decision making is more consistent with this way of thinking than that way of thinking. So in my copious spare time (3 hours) that I can spare to do some reading about how other people approach problems like mine, I think I should look first at this person’s writing, at this group’s ideas.” At that point, whether you have fully bought into it, you are operating within the context of a school.

Now, let’s consider your questions:

1.What happens when people of different schools work together?
2.Can we cross-fertilize between schools?
3.Do I have to pick a specific school?

1.What happens when people of different schools work together?

It really depends on the people involved. Doug Hoffman and I started a long way apart from each other. We decided to teach courses together and consult together, partially to gain insight into each other’s views. We learned a lot from each other. In some of our courses, our students had a fun time because one of us would lecture for a while and then the other one would butt in and contradict many of the ideas presented. They got a richer course, and in long debrief meetings after class, Doug and I worked through a lot of stuff. In this particular case, the experience (which continued over several years) was transformational. Both of us ended up in a different place. I abandoned some of my old ideas; so did he.

In small test groups, it is harder to manage testers who operate from fundamentally different perspectives. This often turns into what looks like a personality conflict, as one person described what the other person does as stupid and wasteful. (Some people say this more politely, but it can be very hard.) The problem is not just that they are doing different tasks or using different techniques or even pursuing different priorities. The problem is that their ethical analyses are different, their ideas about who is to blame, what it means to improve, who has to be retrained and why, who should be told what about the project, what autonomy individual testers should have, what control the project manager should have, what documentation is sufficient, what constitutes professionally competent work and competent recording of the work—when people have fundamentally different visions of the work they are doing:
· Sometimes (probably the most common case), one group quits
· Sometimes (not an uncommon case), the manager adopts one view or supports one faction and makes the other folks conform (and eventually, people quit)
· Sometimes people move into competing workgroups and we have bitter corporate politics
· Sometimes a particularly charismatic manager identifies the foundational differences, brings everyone together, lays them on the table, gets people to articulate them as their own, and then persuades the groups to tolerate each other:
We don’t know which is the right way, we have an impossibly complex problem/project in front of us, let’s agree to approach it from different directions and see what happens. In my limited experience, success here doesn’t just come from charisma (whatever that is), it comes from humility and the ability to convey that humility. This really IS a problem bigger than what we know how to solve, we really SHOULD accept the idea of our own fallibility, and agree to do the best we can while suspending judgment on an approach that looks less promising to us but, with our imperfectness of knowledge, might be more useful than we realize.
I have seen this, I think, but not often.

I am sure there are other cases, but they are outside of my personal experience.


2. Can we cross-fertilize between schools?

Yes. I’ve learned a lot from people whose approaches are fundamentally different from mine. In a few cases, other people have learned a lot from me, despite their fundamentally different view.

More often, I think people learn the superficial things and miss the underlying ones—learning very little. Transfer of techniques is pretty trivial. Transfer of ways of thinking, and transformation of them, is harder and much slower.

I think most of the cross-fertilization that I have seen has come from intentional, explicit effort. Often it happens at the individual level – two friends, or people who become friends, decide to take the time to work together closely enough to really appreciate the other one’s art.

3. Do I have to pick a specific school?

To the extent that you adopt any foundational thinking about what you do, you’re in the territory of people who do foundational thinking.

You can read about lots of different viewpoints—Bret’s goal in publishing his talk was to articulate those differences in a way that would be useful for educators, to help their students appreciate the diversity of the field. (Bret’s original presentation was to the Workshop on Teaching Software Testing.) Coming to understand their similarities and differences is useful.

At some point, if you are to become a mature professional in the field, you have to adopt your own foundational views. I’ve given enough examples of foundational questions above that I won’t repeat them here. At some point, you have to come up with a set of answers, and an approach reflected within that set of answers, that makes sense to you and that has some coherency.

At that point, whether you have intentionally adopted one of the schools or not, you have probably adopted the point of view of one of the schools.

The goal of a descriptive taxonomy (Bret’s, mine, others to come) is to capture the essence of most of the leading, rational, competing viewpoints. If it’s a good taxonomy, and you end up doing a good job developing your own viewpoint, your view should fit somewhere on the taxonomy.

As I mentioned in the blog post, my first shot at a taxonomy was technique-focused and it succeeded in the sense of leading to some interesting ideas, but it failed as a way of describing schools of thought. I like the new one better, and will continue to like it until we find a better replacement.

Not everyone belongs to a school:
· Newcomers don’t have enough experience or insight. Some people never develop enough experience or insight to develop or adopt a foundational view of their work.
· If you mean, do you have to take sides like voting in an election or choosing to murder or be murdered in the latest religious crusade, no. Absolutely not. I (me, personally, my choice) happen to think that the way of thinking that I have adopted is a really good one, and I talk favorably about it a lot. I also happen to think that some other viewpoints are wrongheaded, and I explain why. But that’s my choice, and it reflects my personality.
· Many people in our field read so little, or read so superficially, that they never understand what any of the schools is really trying to say. If you spend most of your reading time on mailing lists or qaforums and on magazines like STQE / Better Software that boil things down into short, simplified presentations, then you might never be exposed to enough serious professional writing to make sense of the schools anyway. That doesn’t mean you won’t adopt your own foundational views. It will mean that you’ll reinvent the wheel, probably with the same mistakes that the last N generations of people who invented this wheel made. In testing, I think that usually means a naïve version of the factory school or the control school.
· In some fields, some people are professionally eclectic. Different professionals have different reference examples for this. For me, a good example field is psychotherapy. Psychotherapists operate within rich intellectual traditions (not all of them—for example, some hacks are focused more on the business of psychotherapy than on the service—but the ones I know have studied the intellectual history of their field and have their own viewpoint, and that viewpoint profoundly influences their work.)
o Some very senior people are professionally eclectic, and brilliant. The ability to say, “How would Freud have analyzed this situation?” in one case and “How would Jacques Lacan think about this?” for a different case – and really understand each one and really follow in the mental shoes of one for one case and the other for the other case – that ability is very special, and I think very rare.
o Some people say they are eclectic, but it really means that they haven’t found themselves yet, or that they’re not really good at anything.
o Some people say they are eclectic, but it really means that they are still thinking about techniques (and learning how to manage their practice or work as a therapist) and haven’t faced the life crises that make them ask themselves what they really believe. (I guess this is another way of saying they haven’t found themselves yet.)

=========
Schools are not about technical skill sets. They are about the foundational visions of the field within which people develop and deploy their skill sets.

Similarly, schools are not about specific contexts (such as types of applications). It is a damning criticism of a school, a way of demonstrating the superficiality of its thinking, to say that it applies only to one type of software or one type of customer. A school that tells you to treat all situations in much the same way TACTICALLY (same techniques, same “best” practices) across different categories of product, customer or risk—well, to me, that’s a poor way of thinking. In other fields, that kind of intellectual narrowness relegated a lot of schools to history books. But then again, what answer would you expect from someone in the context-driven school?

-- Cem Kaner

Monday, February 11, 2008

Heuristic is....

"A fallible method of solving a problem or making a decision."

There are only two key ideas that make a heuristic: 1. It may help solve a problem or make a decision. 2. It may not.

I consider a test idea to be any idea related to a test you might want to perform.

A heuristic (as a noun) is "a (fallible) method for solving a problem or making a decision". Some people also say "guideline" or "rule of thumb". When used as an adjective (e.g. "a heuristic approach"), "heuristic" means "conducive to learning (albeit fallible)". Contrast "heuristics" with "algorithms", which I'll define here as "step-by-step procedures or recipes for solving one particular class of problem".

Heuristics are:
- not to be confused with rules or edicts
- context-dependent, situational
- potentially contradicted by other heuristics
- to be used well, in need of someone with the appropriate level of judgement and skill to choose and use them appropriately
- not to be considered perfect, but good enough
- typically "fast and frugal" - much faster than more rigorous and complex forms of analysis and, according to Gigerenzer, often more accurate than other rigorous and complex forms of analysis

The hidden secret is that /all/ of our methods of solving problems or making decisions are heuristic--good-enough approximations. Even mathematics or physics are fallible outside of a context in which they apply. (There's a wonderful line from "An Introduction To General Systems Thinking", by Weinberg, that goes something like: "Mechanics, then, is the study of things for which the approximations of mechanics work sufficiently well.")

If you want to understand heuristics, I STRONGLY recommend Gerd Gigerenzer's recent book "Gut Feelings", or George Polya's not-at-all-recent "How to Solve It", or Billy Vaughan Koen's recent but now-hard-to-find book "Discussions of the Method". They're all very different books, in fact, so maybe that "or" should be an "and".

(ideas stolen from James Bach, Cem Kaner and Michael Bolton)
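
To make the contrast between heuristics and algorithms concrete, here is a toy Python sketch of my own (not from James, Cem or Michael): greedy coin change is a heuristic (fast, usually right, fallible), while the dynamic-programming version is an algorithm (a step-by-step recipe that is always correct for this class of problem):

    def greedy_change(amount, coins):           # heuristic: may fail
        count = 0
        for c in sorted(coins, reverse=True):   # always grab the biggest coin
            count += amount // c
            amount %= c
        return count if amount == 0 else None

    def exact_change(amount, coins):            # algorithm: always correct
        best = [0] + [None] * amount            # best[a] = min coins for amount a
        for a in range(1, amount + 1):
            options = [best[a - c] for c in coins
                       if c <= a and best[a - c] is not None]
            best[a] = min(options) + 1 if options else None
        return best[amount]

    # With coins {1, 3, 4} and amount 6, the heuristic answers 3 (4+1+1)
    # while the algorithm answers 2 (3+3): good enough, but not perfect.
    print(greedy_change(6, [1, 3, 4]), exact_change(6, [1, 3, 4]))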

Tuesday, January 29, 2008

Product Maps (M. Bolton)

>1) How do you empirically or quantitatively measure "depth of coverage"? Do you have an algorithm or formula?

I empirically and heuristically (but not quantitatively) assess (but not measure) depth of coverage, thus I have no algorithm or formula. Some people claim to have reliable algorithms or formulae, but I've never seen one that, on its own, stands up to critical scrutiny. Code coverage tools are wonderful examples: someone can show me that they've exercised every line of code in the product, but that doesn't really tell me anything useful about what they observed and evaluated.

2) Would you please provide me with a real example of what is being referred to as a "Map of the product" - and how do you build one? And how is "Map completeness" measured?
To me, a map is anything that models some space such that it links ideas and reality. A map may exist only in the mind; it might not even hit paper or pixels, but a representation of it might, in some form. For me, those forms can include diagrams, lists, mind maps, tables, narratives, checklists, flowcharts, matrices (all of which I have used effectively), cause and effect diagrams (which I haven't)--you name it. These are all typically supplemented by description, often in the form of conversation. I'm usually biased towards maps and lists that are fast and frugal and that develop retrospectively--that is, maps that I fill in through testing--rather than maps that attempt to be comprehensive from the get-go, since we can be pretty sure they won't be.

Examples abound:

Test coverage outlines and risk lists: http://www.satisfice.com/rst-appendices.pdf has several, in several forms. See pages 59-94, especially, for examples of this in list and tabular formats.

Mind maps: Rob Sabourin's feature article in StickyMinds from November 2006.

Test matrices: http://www.developsense.com/testing/TestMatrices.html

Annotated structural diagrams: older versions of Rapid Software Testing--and we may restore them in the future; plus an upcoming presentation at STAR East.

Checklists: Elisabeth Hendrickson's Test Heuristics Cheat Sheet; Michael Hunter's You Are Not Done Yet list.

Guideword heuristics:
a) James Bach's Heuristic Test Strategy Model (http://www.satisfice.com). Cem has a very interesting way of using this to analyze a requirements document: use exactly one of the guidewords to tag each statement in a requirements document, then assess which areas are over-represented (and thereby, which ones are underrepresented).

b) the list of heuristics that NASA used for geological surveys for the Apollo missions, also in http://www.satisfice.com/rst-appendices.pdf.

---Michael B.
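
A note from me: Cem's guideword-tagging trick in (a) above is easy to picture in code. Here is a toy Python sketch; the guidewords and the requirement statements are invented by me, just to show the counting idea:

    # Tag each requirement statement with exactly one guideword, then count
    # the tags: over-represented areas float to the top, and guidewords
    # that never appear are the gaps in the document.
    from collections import Counter

    tagged_requirements = [
        ("The report must total all invoices for the month", "Function"),
        ("Totals must match the ledger to the cent",         "Data"),
        ("A report must render in under five seconds",       "Time"),
        ("Reports are exported as PDF",                      "Function"),
        ("Only managers may run payroll reports",            "Function"),
    ]

    counts = Counter(tag for _, tag in tagged_requirements)
    for tag, n in counts.most_common():
        print(f"{tag:10s} {n}")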

Big Up-Front Design (still...)

BUFD / BUFP = Big Up-Front Design / [ditto] Process.

Matt Heuser wrote: "I believe this is what Weinberg calls the primary management sin: 'If it ain't working, do more of it.'"

Also reminds me of my guitar teacher: "It is not by practicing your mistake a thousand times that it will become correct"

Customer collaboration over contract negotiation

Don wrote:
“a requirement specification was supposed to detail the problem space: business context, business goals, business data, business rules, and business constraints: but usually the problem was insufficiently well described for an adequate product to be developed to reliably resolve it. As a result, lots of time was wasted designing and building inappropriate products, which failed to incorporate or correctly implement fundamental business rules regarding data control, transformation, and information generation (roughly, input-process-output). These requirements were nothing to do with software, but everything to do with achieving business success.”

As a law student, I heard story after story from commercial law specialists of lawyers who insisted that their contracts lock down all the details. It was an interesting controversy:

· On the one hand, there was the risk of inadequacy of the contract, because unanticipated details would cause later disputes.
· On the other hand, negotiation after negotiation failed because the legal costs were so high and the negotiating delays were so long.

Later, as a lawyer, I got to see some of these disasters myself and, through my work on computer-related legislation, I got to discuss contracting with people who had negotiated Big Deals and participated in Big Expensive Failures. It was a second education.

In my own negotiations as a lawyer, I adopted two key heuristics:

1. In deals of any complexity, there will always be holes in the understanding of the parties. Even if a problem has been solved by many other people, this problem probably hasn’t been solved this way before by these people in this deal. Their education will come to some degree as they negotiate the deal, but to a larger degree as they work through the project (do the work together) and gain life experience.
2. A contract is a symbol of an underlying agreement. No amount of paper will compensate for bad faith or protect the parties against it. Between parties of good faith, it is more valuable to specify a dispute resolution process that helps the project move forward when unexpected things happen than to try to anticipate all of the future contingencies.

I heard lots of preaching in favor of locking down every last detail. And we certainly studied lots of horror stories (that is, after all, what lawsuits are all about). But if our client didn’t have infinite time and an infinite budget for “transaction costs,” and if the other party(s) didn’t just agree to whatever we asked for, then negotiations had to stop and life had to go on long, long before all the details (what you are calling the project requirements) could be specified.

Rather than locking all of those requirements down—or blaming people for not being willing to achieve this impossibly expensive task—I saw my task as helping people find the smallest set of things that had to be locked down and finding a process for resolving the rest later.

I was delighted when the agile methods started gaining traction because they finally took seriously and constructively the same issue, presenting a technical-process solution to the need for ongoing requirements evolution, rather than legal-process (mid-project mediation, mid-project arbitration and other structures for supervising negotiation of the evolving deal).

Let me suggest a different pair of interpretations of what you are writing. When you say that lots of time was wasted creating the wrong things in the wrong ways, I suspect that:

· Lots of time was wasted because the vendor had no incentive to meet regularly with the client to steer the project to success, and the client was locked into a contracting process that gave them little authority to steer without driving their costs through the roof. The contract replaced the requirement of ongoing mutual-satisfaction-oriented communication. That is the essential difference between BUFD (big up front design) and iterative development (read Tom Gilb on evolutionary development—he wrote about using it with the air force long before XP was a formalized methodology).
· Lots of time was wasted because the vendors who play in this space are the ones who have learned how to profit from a grossly inefficient development process. There was lots to dislike about Donald Rumsfeld (our previous Secretary of Defense), but this was one of the issues he raised time and again. Rather than failing more expensively with larger warehouses full of paper, Rumsfeld wanted a more agile contracting and development process. I’m not sure how successful this has been, but I am sure that a big part of the problem has been inertia on the part of the vendors.

During my legislative work, I spent a lot of time working with lawyers who represented (or worked for) insurance companies, banks, and other big non-government institutions. One of the most striking things was the extent to which they mistrusted (sometimes, hated) software vendors (especially custom software development companies) compared to their attitudes toward other vendors in other industries they dealt with. I think BFUD (oops, I mean BUFD) has played a big role in the evolution of that profound dissatisfaction.

- Cem Kaner

Monday, January 14, 2008

Testers find bugs that users don't find

It is hard to admit, but many times we testers find bugs that a user most probably won't find.

For instance, suppose I am testing piece A, which is designed to work with piece B.
They seem completely independent, but in real life they are always used together.
However I, as a tester, set up piece A and test it immediately (unit testing, remember?).

It fails. I find out later that it only works when piece B is correctly set up and working.

A system administrator would never run into this, because he would set up piece A and piece B, and only then try the system...
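
Here is a toy Python sketch of the story above (all the names are invented for illustration):

    # Piece A is always deployed together with piece B, but a tester
    # exercising A in isolation trips over the hidden dependency.
    class PieceB:
        def __init__(self):
            self.ready = True

    class PieceA:
        def __init__(self, b=None):
            self.b = b
        def run(self):
            if self.b is None or not self.b.ready:
                raise RuntimeError("piece B is not set up")  # the bug the tester finds
            return "ok"

    # The tester's path: set up piece A immediately, on its own -> it fails.
    try:
        PieceA().run()
    except RuntimeError as e:
        print("tester sees:", e)

    # The sysadmin's path: set up both pieces, then try the system -> it works.
    print("admin sees:", PieceA(PieceB()).run())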