Here is my response to a post, followed by a (much better) argument from Cem Kaner:
>Having the ability to understand the implementation gives you the confidence and knowledge to make sound decisions on what to test and where to focus your time and energy. It allows you to challenge the system design and test pieces of the system as they are developed.
On the other hand, you will be thinking in the same biased way as the developer, only (hopefully) with more caution and closer inspection. If you have no idea about the implementation, you might stumble across totally different issues. Surely everyone has experienced looking at a product from a fresh point of view, for instance when the product is new, or when the tester is new, and seen how productive that can be (in terms of the number of reported defects).
>However we agree that technology understanding is more expensive or more difficult to master. We don't believe that domain knowledge is not important but that it is easier to acquire.
With all respect, I don’t see how you agree on that. When a tester is hired, academic skills are required, but in my experience there is no specific need for a programming language or a particular technology. The academic skills indicate that the tester is able to learn, has already learned technical disciplines, and has the ability to think as an engineer, solve problems, and so on. They are usually not a guarantee that he has mastered any particular technology. The professional will learn to master a new technology as the work demands it.
But acquiring business expertise usually takes time and money. While technical knowledge is assumed at hiring time, you will have to pay the tester a salary while he learns the particularities of the domain. So it is not cheap!
In my field (contact center software) there are visible differences between someone who has 1 year of experience, or 2, or 5, or 10! Lately we have been hiring people from other departments within the company to do the testing, because they start producing results much faster.
You might think that is because my case is a complex one (contact centers, client databases, telephony switches, etc.), but even a simple form for entering a client name can have lots of problems. Let me give an example: recently I bought an airplane ticket from a well-known company over the internet. Can you imagine a simpler issue than requesting the name of the traveler? Well, there were problems at check-in because it was not clear that only first and last names were requested, so the ticket ended up showing only middle names!
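The failure mode can be sketched in a few lines. This is hypothetical code, not the airline's actual system: assume the site takes one free-text "traveler name" field and naively splits it into first and last name. Any name with more than two parts silently loses information, which is exactly the kind of bug a domain-aware tester (who knows that many legal names have several given or family names) would probe for.

```python
def split_traveler_name(entered: str) -> tuple[str, str]:
    """Naive split: assumes the traveler types exactly 'First Last'."""
    tokens = entered.split()
    return tokens[0], tokens[-1]

# A multi-part legal name breaks the assumption: the middle names
# vanish, so the printed ticket no longer matches the passport.
first, last = split_traveler_name("Ana Maria dos Santos Oliveira")
print(first, last)  # Ana Oliveira
```

A tester who only verifies "letters in, letters out" passes this function; a tester who knows the check-in desk compares the ticket against a full legal name does not.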
Having said this to defend the “business” side, I should say I always try to model (pseudo-code) the implementation, and try to know the technology that is being used, because it actually helps me to find out “what’s happening” when something unexpected happens (and that’s all the time).
Me
-------------
> First our argument about domain was insurance. I argue that it does not take several
> years to know what you need about insurance to be an effective developer tester or
> system architect. This is why people in banking and insurance can move easily.
Funny thing about insurance.
I too have seen incompetent programmers, architects, project managers – and testers – at insurance companies, and banks and government agencies and universities and telephone companies (and even at game companies). I too have worked with people who have no relevant education and who serve as their corporation’s advocates of excess paperwork, invalid metrics, and the politics of blame. (And who take exams so they can put on their resume that they are Internationally Certified as Professionally Incompetent.)
However, I’ve also seen really good people in these kinds of organizations. After all, SOMEONE has to get the stuff done and working.
I don’t know why incompetents survive in these organizations, but when I think about knowledge, skills and attitudes needed to do a job well, my first point of reference is that I need people who can do the job well, not that they should not be aggressively incompetent.
Insurance companies create or acquire a lot of different types of software.
For example, testing software that does risk estimation (a core issue for insurers) takes deep subject matter knowledge. You CAN do the superficial testing (OK, it doesn’t take letters in number fields), but that doesn’t tell you whether the program is processing the probabilities appropriately – and that’s what the company is betting its future on.
Here’s another example: the software that salespeople use has to get them the information they need, quickly (quickly by their standards, not programmers’ standards), accurate in every potentially relevant respect (where relevance and accuracy are evaluated through the eyes of the salesperson and corporate counsel, not by the “good enough” filter of the programmer who has never done this job), in a format they can understand (not a format that we tell them they should be able to learn if they’re smart enough and work hard enough), displayed in a way that is not going to cause difficulties if someone looks over their shoulder at their screen, and that facilitates closing the deal. Having sat on the other side of sales software (as a user, rather than developer), my impression is that very few programmers understand what salespeople actually need to do their job well and even fewer have enough respect for the work salespeople do to CARE what the salespeople need. Sales is difficult work. Tools that support salespeople well are difficult to design and evaluate. There is such a thing as genuine domain expertise in this field, it is rare, and it takes time and work (including a lot of visits to the field) to develop.
Let’s pretend that insurance companies operate with 10 different types of software (actuarial being one example, sales support a second, etc.). I think that means that a central test organization has to develop subject matter expertise in each of these areas.
Unit tests, and most other types of glass-box tests, are verification-oriented. The question is whether the program does what the programmer intended. It’s an interesting question whether verification should be done by the programmers (whether this is a task for people designated “tester” at all), but whoever does it, that doesn’t take us to the other classes of issues that I think testers SHOULD be spending time on. Not whether the programmers implemented a model correctly but whether it was the right model. Not whether the program meets specified performance criteria but whether performance is fast enough to meet the needs of the client. Not whether the program presents the intended menus and graphics but whether they are natural and useful to the people for whom the software was written. Not whether the software works on the specified platforms but whether it works on the platforms in use. Whether it interoperates effectively with other software, etc.
Verification is the handmaiden of validation and evaluation. It is easier to evaluate a program that operates in a consistent and predictable way. Someone has to do verification, it is important work, and the test group can help with this, but if the program doesn’t solve the problems that its users need solved, if it doesn’t provide the benefits that they need provided, then it doesn’t matter how well the program does it. Programming-focused testers can help a lot with verification, but testers need a lot of other knowledge to do the rest.
Cem Kaner, J.D., Ph.D.