Thursday, November 15, 2007

Encourage Software Testing as a career (or not)

Cem Kaner's thoughts about this:

I don't generally encourage my students to pursue software testing AS A CAREER. They can make that decision later, after they have more experience.

I prefer to encourage them to try SOFTWARE DEVELOPMENT as a career -- to me, development includes testing. And that they take a job doing serious, skilled testing as PART of that career.

Most of the best testers I know have significant experience outside of testing and apply that experience to what they do as testers or test managers.

Most (all?) of the hidebound process-pushers that I know in the field have never done serious professional work outside of testing. From their narrow perspective, they think they know more about how to manage a development project than the people who retain their testing services. Instead of trying out their ideas as project managers (where they will be accountable if they fail) these process advocates undermine the projects they work on by trying to control things they don't understand with rigid policies and procedures, standards and superstitions, whose costs and impacts are beyond their imagination. We have too many of these people in our field. We need more people who have a broader view of the tremendous value that testing can offer--within its limited role--and are glad to embrace a service-provider role that provides that value.


What makes testing worth spending time on--as a job and maybe as a career?

We are professional investigators. Rather than building things, we find ways to answer difficult questions about the quality of the products or services we test. Our job--if we choose to do it well--requires us to constantly learn new things, about the product, its market, its implementation, its risks, its usability, etc. To learn these, we are constantly developing new skills and new cognitive structures in a diversity of fields. It also requires us to communicate well to a diverse group of people. We ALSO get to build things (test tools), but very often, we build to our own designs, which can be more satisfying than building an application that does something we'll never personally do (or want to do). Learning to do good software testing requires learning to do critical thinking well, and to back it up with empirical research.

Not everyone will like to do testing. Not every engineer or programmer will have the skills or the interest to do professional-level testing. But for those of us who enjoy critical thinking, experimentation, and keeping the human relevance of what we do always in mind, there is nothing else like it in software development (except, for some people on some projects, requirements analysis backed with rapid prototyping and prototype-based research).



-- cem

Wednesday, November 14, 2007

Should major defects be fixed earlier than usability/interface defects?

Usually there is a bug tracking system in which the reporter (tester) assigns a severity to the defect, and then a development leader assigns a priority. Usability issues are usually not as severe as, say, crashes, so the crashes get higher priority and are fixed earlier. The usability/interface issues, on the other hand, are postponed to a later phase, "when the product is more stable".
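To make the severity/priority distinction concrete, here is a minimal sketch in Python. The field names and values are invented for illustration; real trackers (Bugzilla, etc.) have their own schemas, but the shape of the problem is the same:

# Sketch of a defect record: severity is set by the reporter (tester),
# priority is assigned afterwards by the development lead.
# Field names and values are made up for illustration.
defects = [
    {"title": "Application crashes on save",
     "severity": "critical",   # tester's assessment of impact
     "priority": 1},           # dev lead: fix first
    {"title": "Inconsistent label on the Settings dialog",
     "severity": "minor",
     "priority": 5},           # dev lead: postponed "until the product is more stable"
]

# The fix queue is ordered by priority, not severity --
# which is exactly the decision I am questioning for an alpha-stage product.
for d in sorted(defects, key=lambda d: d["priority"]):
    print(d["priority"], d["severity"], d["title"])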

Now something about the context. I am talking about a large commercial application, and I am referring to an alpha version. The product is not officially released; it has already been delivered to the testing team, but there is a long way to go until it is beta-ready (let's say about six months).

I think that in this scenario, interface and usability issues should be prioritized above other problems, like crashes or errors, provided that they are not blocking testing or development progress.

Here are the reasons for this:
-Interface changes, like making labels and controls uniform across the product, tend to spread to many areas of the application, so the cost of the fix tends to be high. By fixing them earlier we reduce that cost.

-Interface changes are likely to introduce instability and break the code in many areas, so the sooner these major changes are made, the sooner the new defects they introduce can be found and fixed.

-Incorrect labels cause confusion: testers and tech writers have to talk to developers to understand the software, developers have to do extra work to find, understand and fix things, etc. So this "minor" defect can increase the cost of the software, while fixing it would be very cheap. (I would include here the names used in the code, whose importance tends to be underestimated by developers. I wish they could count the defects caused by naming issues, and the time spent tracking down badly named variables, etc. See the sketch below for the kind of confusion I mean.)
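A small invented example (Python, names and numbers made up) of the question a careless identifier forces everyone to keep asking:

# Invented example of the cost of a careless name.
# With the bad names, nobody can tell what the units are or whether the
# limit is inclusive without interrupting the original author.

def check_upload_bad(size, limit):
    # Is "size" in bytes or KB? Is "limit" inclusive? Who knows.
    return size <= limit

def check_upload_good(file_size_in_bytes, max_upload_bytes):
    # The names answer the questions the bad version forces people to ask.
    return file_size_in_bytes <= max_upload_bytes

print(check_upload_good(900000, 1000000))  # True: under the 1 MB limit

The code runs either way; the difference is how much time testers, writers and the next developer spend asking what it means.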

Does anyone agree with me on this?
If so, any tips on how we can explain to leaders that the "label is confusing" minor defect should be fixed earlier than that ugly "application crash" defect?