Thursday, November 15, 2007

Encourage Software Testing as a career (or not)

Cem Kaner's thoughts about this:

I don't generally encourage my students to pursue software testing AS A CAREER. They can make that decision later, after they have more experience.

I prefer to encourage them to try SOFTWARE DEVELOPMENT as a career -- to me, development includes testing. And that they take a job doing serious, skilled testing as PART of that career.

Most of the best testers I know have significant experience outside of testing and apply that experience to what they do as testers or test managers.

Most (all?) of the hidebound process-pushers that I know in the field have never done serious professional work outside of testing. From their narrow perspective, they think they know more about how to manage a development project than the people who retain their testing services. Instead of trying out their ideas as project managers (where they will be accountable if they fail) these process advocates undermine the projects they work on by trying to control things they don't understand with rigid policies and procedures, standards and superstitions, whose costs and impacts are beyond their imagination. We have too many of these people in our field. We need more people who have a broader view of the tremendous value that testing can offer--within its limited role--and are glad to embrace a service-provider role that provides that value.


What makes testing worth spending time on--as a job and maybe as a career?

We are professional investigators. Rather than building things, we find ways to answer difficult questions about the quality of the products or services we test. Our job--if we choose to do it well--requires us to constantly learn new things, about the product, its market, its implementation, its risks, its usability, etc. To learn these, we are constantly developing new skills and new cognitive structures in a diversity of fields. It also requires us to communicate well to a diverse group of people. We ALSO get to build things (test tools), but very often, we build to our own designs, which can be more satisfying than building an application that does something we'll never personally do (or want to do). Learning to do good software testing requires learning to do critical thinking well, and to back it up with empirical research.

Not everyone will like to do testing. Not every engineer or programmer will have the skills or the interest to do professional-level testing. But for those of us who enjoy critical thinking, experimentation, and keeping the human relevance of what we do always in mind, there is nothing else like it in software development (except, for some people on some projects, requirements analysis backed with rapid prototyping and prototype-based research).



-- cem

Wednesday, November 14, 2007

Should major defects be fixed earlier than usability/interface defects?

Usually there is a bug tracking system in which the reporter (tester) assigns a severity to the defect, and then a development lead assigns a priority. Usability issues are usually considered less severe than crashes, for instance, so crashes get higher priority and are fixed earlier, while the usability/interface issues are postponed to a later phase, "when the product is more stable".

Now something about the context. I am talking about a large commercial application, and I am referring to an alpha version. The product is not officially released; it has already been delivered to the testing team, but there is a long way to go until it is beta-ready (let's say about 6 months).

I think that in this scenario, interface and usability issues should be prioritized above other problems, like crashes or errors, provided that the latter are not blocking testing or development.

Here are the reasons for this:
-Interface changes, like standardizations, tend to spread to many areas of the application, so the cost tends to be high. By fixing them earlier we reduce the cost of the fix.

-Interface changes are likely to introduce instability and break the code in many areas, so the sooner these major changes are made, the sooner the new defects they cause can be found and fixed.

-Incorrect labels cause confusion: testers and tech writers have to talk to developers to understand the software, developers have to do extra work to find, understand and fix things, and so on. This "minor" defect can increase the cost of the software considerably, while fixing it would be very cheap. (I would include here the names used in the code, whose importance tends to be underestimated by developers. I wish they could count the defects caused by naming issues, and the time spent hunting down badly named variables...)

Does anyone agree with me on this?
If so, any tips on how we can explain to leaders that a "confusing label" minor defect should be fixed earlier than that ugly "application crash" defect?

Wednesday, October 31, 2007

Creativity in software design: greatest dangers

Creativity has always been held up as a desirable characteristic in a developer, but there are some drawbacks we should be aware of as testers. For instance, consider interface issues.

When: when you find something that you don't understand at first, and then realize it is a completely new way of doing something.

Beware of your own repulsion: you might tend to reject the new solution just because it is new. People are emotionally attached to well-known, predictable software, and thus fear everything that departs from the standards.
Beware of the developers' defensiveness: people are emotionally attached to their own creations, for a variety of reasons:
- Because it took a lot of writing work (coding)
- Because it took a lot of creative work
- Because people need creativity to fulfil their lives
- Because they really think it is a great idea

When is the new solution acceptable?
-When it is the best way to do the task (by usability criteria)
-When it is easy for the user to figure out

Friday, October 12, 2007

Installation testing: should it be so hard?

I share your concern.
It is indeed a huge task to verify setups, because if you need to repeat the test you typically must:
1-Uninstall
2-Check that everything was removed (registry entries, etc.) – probably you have to verify the uninstall feature as well…
3-Install
4-Perform the verifications (a sketch of automating steps 2 and 4 follows below)
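
As a rough idea of what automating steps 2 and 4 could look like: snapshot the file system before installing, then diff it after uninstalling to spot leftovers. This is only a sketch under my own assumptions (the path is hypothetical, and a real check would also cover registry keys):

import os

def snapshot(root):
    # Return the set of all file paths currently under root.
    paths = set()
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            paths.add(os.path.join(dirpath, name))
    return paths

before = snapshot(r"C:\Program Files")   # taken before installing
# ... install, run the verifications, uninstall ...
after = snapshot(r"C:\Program Files")    # taken again after uninstalling

for path in sorted(after - before):      # anything new that survived the uninstall
    print("left behind:", path)

The same diff, run right after the install instead, doubles as a quick check that the expected files actually arrived.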

In my company we have patches every week, and you cannot imagine the immense diversity of problems, such as:
-This file is missing because we forgot to include it
-This file is missing because we did not know we needed to include it
-This other team did not tell us this shared file was changed, so we did not include it in our module
-There were “compilation problems” (corrupted files)

It seems to me that the way to resolve this problem is more on their side. I mean, they should find procedures and methods that are more reliable and more automated (in the sense that they don’t have to think every time about which files they should include, etc.). The more tasks they have to do “by hand”, the more likely they are to miss some. They should get checklists or other tools to help them, for example a manifest check like the sketch below. I ought to say that if a file is missing from a release, the responsibility lies with whoever released the product. I don’t see our function (the tester’s function) as a safety net for the developers’ work, because it is impossible to verify everything.
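
To make the idea concrete, a minimal manifest check could look something like this; the file and folder names are my own invention, just to illustrate:

import os

def release_files(release_dir):
    # Collect every file in the release, as a path relative to its root.
    found = set()
    for dirpath, dirnames, filenames in os.walk(release_dir):
        for name in filenames:
            found.add(os.path.relpath(os.path.join(dirpath, name), release_dir))
    return found

# manifest.txt lists one expected relative path per line (hypothetical file)
with open("manifest.txt") as f:
    expected = {line.strip() for line in f if line.strip()}

found = release_files("release")
for path in sorted(expected - found):
    print("MISSING from release:", path)
for path in sorted(found - expected):
    print("unexpected file:", path)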
Quality is not the responsibility of the quality department (QA, test team, whatever you call it); it is the responsibility of the whole R&D team. (I read this in the paper “Classic Testing Mistakes”, I think.)
I’m afraid I did not help much with your task, but I hope you now have more arguments to reduce it to more “reasonable” levels.

Joao Pedro

________________________________________
From: software-testing@yahoogroups.com [mailto:software-testing@yahoogroups.com] On Behalf Of John McConda
Sent: Thursday, October 11, 2007 17:33
To: software-testing@yahoogroups.com
Subject: [software-testing] Installation Testing
Hi everyone.
I'm perplexed by a problem that seems fairly simple on the outside, but has proved to be a frustrating part of testing on a particular project. I will try to include as much detail as I can here without being overwhelming.
The basic problem is installation testing. We test a C++ client/server and Web app that comes with an installer built with InstallShield. The app has existed since 1998 and has gradually grown very large and complex, with multiple services and modules needing to be installed in the folder structure.
There is one developer who does nothing but installers for the project. The other developers create a build folder on a server that the install developer uses to create the installation exe.
Currently, it is the test team's job to verify each installation in various ways. This includes running every new installer that is created and then using a tool called Beyond Compare to compare the files that are installed with the build folder to ensure all files are identical. Secondly, for each build, the testers must compare all files of a new installed build with an installed build of the last version and note that all changes are expected.
We often find problems when we do these checks, and it will come back that the install developer left something out or had the wrong version of a file, and we will need to do the whole process over again.
This seems horribly inefficient to me, and takes much time away from my team doing actual testing. This is my first time testing something with such a complicated installer, though.
Has anyone else found a better way to deal with this type of problem? Are there any good tools for creating installers I can suggest to the install developer that will take some of the burden off of the test team?

-------------
Here is another response, this time from the setup developer's point of view:

John,
In my last company I was also on the other side of the fence; that is, I was the one who had the responsibility of creating the installers besides testing them. To add to the complexity, the product I created the installer for was supported on the Windows platform plus 3 flavours of UNIX [HP, AIX & Solaris].
Below is the approach I took:

1. Created a single installation script file (we used InstallAnywhere, so that helped) for all platforms.
2. Created a single ANT script that did the following:
a) First, it would check the build date of all the files that are to be packaged and compare them to the last check-in date (since all files were sourced from a version control system).
b) Second, it checked that none of the files was zero size.
c) Then it created the installer using InstallAnywhere's command line utility.
d) Once the installer was created, we executed it in silent mode and did a complete installation for each platform.
e) Once the installation was complete, we again checked that all the files were copied into the appropriate directory structures and that the values in the properties files were changed appropriately (compared to baseline files).
f) Then the product was brought up for the testing team to continue working.
g) Once all the checks passed, the installer was copied to a central location.
If any of the checks failed, a mail was sent to the appropriate people with all the details. Also, if the post-installation check failed, the script would remove the new installation and restore the backup of the working setup.
All the installations were done on the setups that were used daily by the testing team.
Now, since you are not the one creating the installers, you'll need development's buy-in to set up this kind of process. But if you can translate this into a scenario specific to you, I'm sure the value derived would be too great for development to look the other way.

Hope this helps.
Thanks,
Neeraj
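
The Beyond Compare step John describes could also be scripted. Here is a minimal sketch of such a comparison, hashing every file in the build folder and in the installed folder and reporting anything missing, extra, or different; the directory names are hypothetical:

import hashlib
import os

def tree_hashes(root):
    # Map each relative path under root to an MD5 hash of its contents.
    hashes = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            with open(full, "rb") as f:
                hashes[rel] = hashlib.md5(f.read()).hexdigest()
    return hashes

build = tree_hashes("build_folder")
installed = tree_hashes("installed_folder")

for path in sorted(set(build) | set(installed)):
    if path not in installed:
        print("not installed:", path)
    elif path not in build:
        print("not in the build:", path)
    elif build[path] != installed[path]:
        print("content differs:", path)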

Monday, October 8, 2007

"Once we're off the map, we don't know where we're going next"

By Michael Bolton.

A reply to a testing engineer who was looking for advice on how to persuade developers to fix "small defects":

>I have the following question: what should I tell the developer(s) when an error occurs with a low severity, occurring in specific situations with a low probability of happening in the real world, and he asks me where or how this error can affect the safety of the system? Every time the developer asks that, I can't seem to find a logical explanation of how the defect can affect the safety.

The answer that I'd give to this question is that any defect for which we don't have a good, clear explanation is a potential security threat. One way to deal with the problem is to ask the developer, "Can you be sure that this problem /doesn't/ represent a threat to the safety of the system? What makes you /sure/? And if you're sure, why don't we handle this condition in a way that makes our certainty evident?"

One thing that tends to make us "sure" that a small symptom isn't a big problem is something called "representativeness bias". We have an apparently natural tendency to associate the significance of the symptom with the significance of the underlying problem. In complex systems, the first sign of a problem may not be a big sign.

One prominent and memorable example of this is the Challenger disaster. O-ring seals, intended to keep gasses from coming together, degraded. Instead of remaining intact, they burned partially through. NASA managers used the fact that they didn't burn ALL the way through as evidence of safety, but (as is now obvious) the opposite was true. As Richard Feynman said in his Appendix to the Rogers Commission Report on the Space Shuttle Challenger Accident (I've added emphasis below),

[quote]
There are several references to flights that had gone before. The acceptance and success of these flights is taken as evidence of safety. /But erosion and blow-by are not what the design expected. They are warnings that something is wrong. The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in this unexpected and not thoroughly understood way./ The fact that this danger did not lead to a catastrophe before is no guarantee that it will not the next time, unless it is completely understood. When playing Russian roulette the fact that the first shot got off safely is little comfort for the next. The origin and consequences of the erosion and blow-by were not understood. They did not occur equally on all flights and all joints; sometimes more, and sometimes less. Why not sometime, when whatever conditions determined it were right, still more leading to catastrophe?
[/quote]

When the system is in an unpredicted state, it's in an unpredictable state. Once we're off the map, we don't know where we're going next.

---Michael B.

(Kaner)

perhaps take a look at the video at http://www.testingeducation.org/BBST/Bugs1.html

and my contribution:

Sometimes it is not easy to give up on a defect after spending several hours trying to figure it out. But sometimes it is the wiser thing to do.

The story usually goes like this:
-We find something that behaves in an unexpected way.
-We try to find out the causes with further testing, perhaps reading documentation, searching the web, talking to the developer.
-Eventually we find out that it is not noticeable to the user, it definitely does not compromise safety, the cost of fixing it clearly exceeds the benefit of having it fixed, or it is just a matter of logical/mathematical consistency.

Sometimes we like to keep everything on the right shelves, even if we’re never going to need it. It kind of gives us “peace of mind”. If we recognize that the reason for wanting these defects fixed is only psychological, it will be easier to leave them alone.

Anyway, all these defects should be reported formally, and it is not our job or responsibility to discard them. But it is our responsibility not to allow them to be discarded when we have some reason to believe that the consequences might not have been completely studied (or might be more serious than what is acceptable).

Joao Pedro

Thursday, August 9, 2007

Is a bug in the code necessarily a bug in the process?

(pasted from agile forum)
>As Jim Shore says in his forthcoming book, a bug found in released code is
not only a bug in the code, but a bug in the development process. What
caused this bug to be produced? What caused it to not be caught prior to
release? How can we prevent this error in the future? The causes may be
anywhere from a coding error to a product concept that's not clear.

In traditional models of testing, I rarely see any credence given to the
idea that we can (even hope to) know everything before we start. This is
one of the reasons that I like agile--and sometimes Agile--processes, and
it's a reason why I'm enjoying "Artful Making" (Austin). I'd argue that a
bug is not necessarily linked to a bug in the development process if we
learn something from it. Austin goes so far as to say that it's not
necessarily a problem if we fail in the same way ten times, because we might
need ten iterations before we can get insight into what the problem really
is.

---Michael B.

Tuesday, July 10, 2007

Creating a more open team

There is an ongoing thread on the software-testing forum about how you can turn your team into a great team with regard to knowledge sharing, interest in investigating new ideas, and state-of-the-art testing practices. It all started with this email:

"I work in a team and I'd like to look at moving the team away from what often seems like a team of individuals and create a more open culture where people willingly talk about experiences gained within projects, techniques being used outside of our company and other conversations like simply mentioning an article they've read regarding testing. Has anyone out there had a similar culture-shift they've tried to achieve and therefore has a suggestion on ways to go about this?"

Here are some suggestions:

"A wiki is the ideal tool to do this sort of thing. If your team is not experienced using wikis, you might consider designating some experienced team members as "content creators" at the beginning, and make it part of their job to find and post interesting and important things to the wiki. Less experienced team members might take a more passive role, but be sure that the less-experienced are encouraged to read and comment on the content. Make it so that everyone feels that contributing to the community on the wiki is an important activity. "

"Creating forums is a good idea. You would need to have one dedicated team to drive the whole activities. You can also have brain storming sessions, solution boards.

You can also initiate Knowledge Exchange Session by bring prominent personalities across different areas (it could be testing, architects, domain experts etc)

Reward programs for sharing knowledge will be a good motivational tool to bring about the willingness in sharing experiences."

"I've found the wiki we use at my place of work to be a good place to put processes and how-to guides, but I honestly think that it won't encourage your team to further interact with each other. You can fill it full of useful knowledge, but that's not what makes a team great.
Try a practical demonstration of the value of working more closely together. I recently reproduced an exercise from the 'Rapid Software Testing' course I attended (thanks James :) ), whereby I had the team look at a completely unfamiliar GUI and write down what they thought it did and how they'd test it. After a few minutes, I had them read their answers and pressed them a bit for more. Every single person in the room (8 of them) said something different (and useful). That five-minute exercise alone was enough to show them that while they have a common set of skills, they also have different approaches and ways of thinking about a problem. They are now much more apt to ask each other questions or opinions than they previously were. I am encouraging the team to come up with ideas and exercises and questions that they have thought of or discovered, and present them to the rest of the group.

Maybe you should be asking them what sort of forum they would enjoy using to communicate their ideas. That question alone might be a good one to get intra-team discussion happening. You could have your test leads present a test post-mortem for a given project, have them identify (and defend) key decisions they made, what risks they identified, how they reacted to adversity and so on - then open it up to the rest of the group for comments and questions. You may need to encourage the quieter ones to speak up, but that will happen regardless. I find that people want to feel useful. If you provide a forum that encourages people to interact, to share their knowledge in a way that is useful to others, then you will probably get more buy-in than giving them a screen to regurgitate their thoughts into."

"Utilize the stickyminds editorials, find some older ones perhaps that are relevant, with some interesting comments as well, send out the link and discuss it over lunch. That will start the process going.
Another possibility now is just watching some of the google testing videos presented by testing luminaries over lunch, then having a discussion. At the google page, select Video then "Google techtalks testing", which finds 66 videos!
It has to be within the scope of people's work commitments though: I often push for people to have an hour or two a week (on average) for private study/research as part of their job role, and encourage them to present their discoveries etc at brown bag lunches."

"A nice idea (the open team one that is) but be careful not to "create" anything artificial or cringe making.
I can't count the number of times I've been on the receiving end of horrific artificial "team building" exercises.
I've watched these things alienate teams rather than build them or open them.
I suppose they do give the team an opportunity to bitch about something they have in common, but is that what you are after? ;-)

The wiki idea actually seems counterproductive to your goals to me, i.e. just another excuse for technical folk NOT to talk to each other.

The only thing I think every member of your team will have as a common goal is "better work/life balance".
For me, better planning and use of human resources i.e. going home on time AND having lunch (by myself or with friends), is what floats my boat.
I moved from permanent work to contract/consulting work because it actually encouraged my management to send me home on time, i.e. every extra hour I spent in a day affected their budget."

Friday, July 6, 2007

TestLink after one year...

We used Word documents and Excel worksheets as our documentation tools for scripted testing, and at one point I was given the task of evaluating and choosing a new tool.

TestLink is a browser based tool to document Test scripts (planning) and execution results. It was chosen because it has some interesting features:
-there is an area for test design and another for test execution. The same test plan (design) may be instantiated several times, for example for different platforms. It also supports builds. Every result is easily stored and consulted.
-Main metrics are built in. For example, we may see at any point how many tests have passed, failed, are blocked, or have not been run (see the sketch after this list).
-Roles are supported: a tester has different privileges than a team leader, an administrator or a test designer
-Through Access rights, test executions are easily assigned to testers. Each tester only sees what matters.
-It is possible to import test cases from an Excel worksheet (though we had to write a macro to convert Word to Excel…)
-It is possible to connect to a bug tracking system
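
To show how trivial those tallies become once results live in one place, here is a sketch that counts statuses from a results export; the CSV format here is hypothetical, not TestLink's actual one:

import csv
from collections import Counter

counts = Counter()
with open("testlink_results.csv") as f:        # hypothetical export file
    for row in csv.DictReader(f):              # expects a "status" column
        counts[row["status"]] += 1

for status in ("passed", "failed", "blocked", "not run"):
    print(status, counts.get(status, 0))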

Note that this is not a tool for automation, or for calling automated scripts. It is a tool to store test documentation.

Now what is not so good:
-it does not fit smoothly with exploratory testing, so one should think about the real value of scripted testing for the system under test. In our case we do have to verify some basic features recurrently, so we do need a detailed test execution record. If the tests are not going to be reused, the usefulness of TestLink should be questioned.

Brilliant post from Michael Bolton, related to context-driven testing

Imagine reading this message:

"I'm looking to take a drug, but don't have the budget for the name brand pharaceuticals, and want more than is provided by folk remedies. I'm looking at ampicillin and ritalin, but don't know anyone who has taken these drugs. Does anyone have any experience with these drugs, or other drugs that they would recommend?"

Wouldn't you be curious to know what malady the writer was interested in relieving?

---Michael B.

From: agile-testing@yahoogroups.com [mailto:agile-testing@yahoogroups.com] On Behalf Of gmel74
Sent: July 5, 2007 10:17 AM
To: agile-testing@yahoogroups.com
Subject: [agile-testing] test management tools

I'm looking to implement a test management tool, but don't have the budget for the name brand solutions and want more than is provided with the open source solutions. I'm looking at TCM by Seapine and Devtest from techexcel, but don't know anyone that has used these tools. Does anyone have any experience with these tools or other test management tools that they would recommend?

Thursday, June 28, 2007

4 probes to a medium

Here is an enlightened post from Michael Bolton (as usual):

Marshall McLuhan talked a lot about media. A lot of people don't realize that his ideas were different from the ones that we usually associate with the words that he used. To McLuhan, a medium is anything that effects a change--a new tool or technology, an idea, a programming language, a hat... anything that appears as a new figure somewhere in an existing ground.

In order to investigate the impact or significance of a medium (or as McLuhan called it, the medium's "message"), McLuhan proposed four probes, or questions, that one could ask about a medium:
1) To what extent does this medium enhance or extend or intensify or expand or enable some human capability?
2) What previously obsolete medium (perhaps from a vastly different field or from a long time ago) does this new medium retrieve, or remind us of?
3) What existing (and perhaps ubiquitous) medium does this new medium make obsolete?
4) How might this medium reverse its intended or expected or beneficial effect when it exceeds its capacity, or when it becomes the norm?

Every medium, said McLuhan, exhibits the general characteristics that the probes ask about--extension, retrieval, obsolescence, reversal. The intention is to try to look at a new or anticipated medium using these probes.

(Mark Federman (http://whatisthemessage.blogspot.com) explains Marshall McLuhan's Laws of Media very well and very clearly; "Creating a Culture of Innovation" is the paper to look for on the right of the blog page. Even better is his book, co-authored by Derrick de Kerckhove, "McLuhan for Managers", which is hard but still possible to get.)

So: the test team enhances the ability of the product team to find bugs, to report problems, to understand the product. It retrieves the idea of the test pilots, the food taster, the editor, the lab researcher. It obsolesces, at least to some degree, the field technician, the support centre, the unhappy customer. And it reverses into the developers who won't check their own code, blaming and fingerpointing, mindless confirmation of trivial test ideas, automated behaviour, product managers passing responsibility for release decisions on to the testers...

That list is after a minute or two of brainstorming; it could be much longer. What would happen if you spent 15 minutes on this with your test team, or your development team, or your managers? You might be able to set clearer goals (or limits) about what you want to extend, how you want to model the team, what or who the team might make redundant, and (at least) to anticipate and control for some of the reversal effects of the test team.

One more thing: rather than thinking about metrics, try thinking about observations that you might make. If it makes sense simply to describe the observation, do that. If it makes sense to measure the observation, remember that metrics are media too. That is, each metric you use will extend, retrieve, obsolesce, and reverse some aspect of your observation.

---Michael B.

See the StickyMinds article here.

Wednesday, June 27, 2007

tester skills

I couldn't say it better, so here it goes:


>A tester with no programming experience at all will not be able to keep up.

Keep up with what?

James Bach, Becky Fiedler and I are having an amusing time this week wrestling with bibliographic software. EndNote (Thomson) is the leading package, but it has bugs. Thomson bought out EndNote's leading two competitors a few years ago. Not surprisingly, progress in advancing the quality of the product has not been so quick.

From a bibliography management viewpoint, I think the EndNote people understand their market's needs well, but they have a multi-year history of corrupting files (including too many of my files). I suspect that their product might benefit -- A LOT -- from adding a few testers with stronger programming and debugging skills. As it stands, Becky, James and I are each ready (enthusiastically ready) for a change.

So I was initially glad to see Microsoft bringing some competition to this market by adding bibliographic support to MS Word. I suspect that their software engineers in test did a fine job checking for file-corrupting bugs. On the other hand, I think this product would have benefitted enormously from serious evaluation by people who really understand how bibliography managers are used by graduate students, researchers and technical writers. In its current form, I think the product is useless for us as writers and I would not recommend it to my undergraduates when they are writing their first essays.

A tester with no programming experience will not be able to keep up with programming-related issues, but a tester with no understanding of the non-programming domains will not be able to keep up with those many aspects of the product's quality (quality = value to some person) that have nothing to do with the technical quality of the code but everything to do with whether it is worth using.

Rather than insisting that all testers be programmers or all testers be domain experts (as if any product involves just one domain), I far prefer a staffing model in which people have a variety of capabilities. Some people know code. Some people know peripherals. Some people know accounting (if that's relevant), etc. They cross-train, they pair-test, and they keep up as a team. Better a group of strong individuals than a group of mediocre generalists. And better a group of mediocre generalists than a group of programmers who know a lot of test automation and don't understand what the people who will (not) use the product will (not) use it for.

-- Cem Kaner

Friday, June 15, 2007

Pair-programming

I only tried pair-programming when I was in college, but now we do something like pair-testing, for example when someone more skilled helps another, or when we reproduce defects so that the developer “watches” them happen.


Besides everything else, I think pair-working helps to keep the focus, thus saving a lot of time. I bet we won’t check new email, handle phone calls, surf the web, or reply to forums like these while we are pair-working. So we should count that as time saved, to balance the cost of having two workers on the same task. Due to this “forced” focus, we also save a lot of brain effort on context switching.


The drawback is that, as in any focused, concentrated, demanding activity, our mind will not use the self-resting and balancing mechanisms that would work if we were alone. I would dare to say that 8 hours a day of pair-programming must harm our mental health.


Do the words meditation and silence ring any bells in this context?