Creativity has always been pointed out as a desirable characteristic for a developer, but there are some drawbacks that we, as testers, should be aware of. For instance, consider interface issues.
When: you find something that you don't understand at first, and then you see it is a completely new way of doing something.
Beware of your own aversion: you might tend to reject it just because it is new. People are emotionally attached to well-known and predictable software, and thus fear everything that differs from the standards.
Beware of the developers' defensiveness: people are emotionally attached to their own creations, for a variety of reasons:
- Because it took a lot of writing work (coding)
- Because it took a lot of creative work
- Because people need creativity to fulfil their lives
- Because they really think it is a great idea
When is the new solution acceptable?
- It is the best way to do the task (usability criteria)
- It is easy to figure out
Wednesday, October 31, 2007
Friday, October 12, 2007
Installation testing: should it be so hard?
I share your concern.
It is indeed a huge task to verify setups, because if you need to repeat the test you typically must:
1-Uninstall
2-Check that everything was removed (registry entries, etc.) – you probably have to verify the uninstall feature as well… (a rough sketch of partly automating this check follows the list)
3-Install
4-Perform the verifications
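As mentioned in step 2, checking for leftovers can be partly automated, at least for files on disk (registry entries would need a separate, platform-specific pass). The following is only a minimal sketch of that idea; the paths and file names are placeholders, not anything from a real setup:

import json
import os
import sys

def snapshot(root):
    """Record every file path currently under a directory tree."""
    files = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            files.add(os.path.join(dirpath, name))
    return files

if __name__ == "__main__":
    # Hypothetical usage:
    #   python leftover_check.py before "C:/Program Files/MyApp" baseline.json
    #   ... install, test, uninstall ...
    #   python leftover_check.py after "C:/Program Files/MyApp" baseline.json
    mode, root, baseline = sys.argv[1], sys.argv[2], sys.argv[3]
    if mode == "before":
        with open(baseline, "w") as f:
            json.dump(sorted(snapshot(root)), f, indent=2)
    else:
        with open(baseline) as f:
            before = set(json.load(f))
        leftovers = snapshot(root) - before
        for path in sorted(leftovers):
            print("LEFT BEHIND:", path)
        sys.exit(1 if leftovers else 0)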
In my company we have patches every week, and you cannot imagine the immense diversity of problems, such as:
- This file is missing because we forgot to include it
- This file is missing because we did not know we needed to include it
- This other team did not tell us this shared file was changed, so we did not include it in our module
- There were “compilation problems” (corrupted files)
It seems to me that the way to resolve this problem is more on their side. I mean, they should find procedures and methods that are more reliable and more automated (in the sense that they don’t have to think every time about which files they should include, etc.). The more tasks they have to do “by hand”, the more likely they are to miss some. They should get checklists or any other tools to help them.
I ought to say that if a file is missing from a release, the responsibility lies with whoever released the product. I don’t see our function (the tester’s function) as a parachute for the developers’ work, because I guess it is impossible to verify everything.
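One concrete example of such a tool: a plain-text manifest of the files that must ship, checked automatically against the release folder before anything is handed over. The sketch below is only an illustration of that idea; the manifest format and file names are assumptions, not something we actually use:

import os
import sys

def read_manifest(manifest_path):
    """One expected file path per line; blank lines and '#' comments are ignored."""
    with open(manifest_path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.lstrip().startswith("#")]

def missing_files(release_dir, manifest_path):
    """Return every manifest entry that is absent from the release folder."""
    return [rel for rel in read_manifest(manifest_path)
            if not os.path.isfile(os.path.join(release_dir, rel))]

if __name__ == "__main__":
    # Hypothetical usage: python release_check.py ./release_folder manifest.txt
    missing = missing_files(sys.argv[1], sys.argv[2])
    for rel in missing:
        print("MISSING FROM RELEASE:", rel)
    sys.exit(1 if missing else 0)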
Quality is not the responsibility of the quality department (QA, test team, whatever you call it); it is the responsibility of the whole R&D team. (I read this in the paper “Classic Testing Mistakes”, I think.)
I’m afraid I have not helped much with your task, but I hope you now have more arguments to reduce it to more “reasonable” levels.
Joao Pedro
________________________________________
From: software-testing@yahoogroups.com [mailto:software-testing@yahoogroups.com] On Behalf Of John McConda
Sent: Thursday, 11 October 2007 17:33
To: software-testing@yahoogroups.com
Subject: [software-testing] Installation Testing
Hi everyone.
I'm perplexed by a problem that seems fairly simple on the outside, but has proved to be a frustrating part of testing on a particular project. I will try to include as much detail as I can here without being overwhelming.
The basic problem is installation testing. We test a C++ client/server and Web app that comes with an installer built with Install Shield. The app has existed since 1998 and has gradually grown very large and complex, with multiple services and modules needing to be installed in the folder structure.
There is one developer who does nothing but installers for the project. The other developers create a build folder on a server that the install developer uses to create the installation exe.
Currently, it is the test team's job to verify each installation in various ways. This includes running every new installer that is created and then using a tool called Beyond Compare to compare the files that are installed with the build folder to ensure all files are identical. Secondly, for each build, the testers must compare all files of a new installed build with an installed build of the last version and note that all changes are expected.
We often find problems when we do these checks, and it will come back that the install developer left something out or had the wrong version of a file, and we will need to do the whole process over again.
This seems horribly inefficient to me, and takes much time away from my team doing actual testing. This is my first time testing something with such a complicated installer, though.
Has anyone else found a better way to deal with this type of problem? Are there any good tools for creating installers I can suggest to the install developer that will take some of the burden off of the test team?
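(As an aside on the comparison step John describes: that file-by-file check is the kind of thing that can be scripted instead of run interactively in Beyond Compare. The sketch below is only an illustration, not his actual setup: it hashes every file under the build folder and under the installed folder and reports anything missing or different; the directory names are placeholders.)

import hashlib
import os
import sys

def file_hashes(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    hashes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            with open(full, "rb") as f:
                hashes[rel] = hashlib.sha256(f.read()).hexdigest()
    return hashes

if __name__ == "__main__":
    # Hypothetical usage: python compare_install.py ./build_folder "C:/Program Files/App"
    built = file_hashes(sys.argv[1])
    installed = file_hashes(sys.argv[2])
    problems = []
    for rel in sorted(built):
        if rel not in installed:
            problems.append("NOT INSTALLED: " + rel)
        elif built[rel] != installed[rel]:
            problems.append("CONTENT DIFFERS: " + rel)
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)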
-------------
Here is another response, this time from the setup developer's point of view:
John,
In my last company I was also on the other side of the fence; that is, I was the one who had the responsibility of creating the installers besides testing them. To add to the complexity, the product I created the installer for was supported on the Windows platform plus 3 flavours of UNIX [HP, AIX & Solaris].
Below is the approach I took:
1. Created a single installation script file (we used Install Anywhere, so that helped) for all platforms.
2. Created a single ANT script that did the following:
a) First, it would check the build date of all the files that are to be packaged and compare them to the last check-in date (since all files were sourced from a version control system).
b) The second check was to make sure that none of the files is of zero size.
c) Then it created the installer using Install Anywhere's command-line utility.
d) Once the installer was created, we executed the installer in silent mode and did a complete installation for each platform.
e) Once the installation was complete, we again checked that all the files were copied into the appropriate directory structures and that the values in the properties files were changed appropriately (compared to baseline files).
f) Then it brought up the product for the testing team to continue working.
g) Once all the checks passed, the installer was copied to a central location.
If any of the checks failed, a mail was sent to the appropriate people with all the details. Furthermore, if the post-installation check failed, the script removed the new installation and restored the backup of the working setup.
All the installations were done on the setups that were used daily by the testing team.
Now, since you are not the one who is creating the installers, you'll need to have development's buy-in for setting up this kind of process. But if you can translate this into a scenario specific to you, I'm sure the value derived will be too great for development to look the other way.
Hope this helps.
Thanks,
Neeraj
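A rough sketch of the sequence of checks Neeraj describes (zero-size files, silent install, post-install verification) might look like the following. This is only an illustration in Python rather than the Ant and Install Anywhere setup he actually used; the installer path, the silent-mode flag, and the baseline listing are all assumptions:

import os
import subprocess
import sys

def zero_size_files(package_dir):
    """Flag any file that is zero bytes before packaging (check b)."""
    empty = []
    for dirpath, _dirs, files in os.walk(package_dir):
        for name in files:
            full = os.path.join(dirpath, name)
            if os.path.getsize(full) == 0:
                empty.append(full)
    return empty

def run_silent_install(installer_path):
    """Run the installer unattended (check d); the '-i silent' flag is an
    assumption and depends on the installer tool actually used."""
    result = subprocess.run([installer_path, "-i", "silent"])
    return result.returncode == 0

def missing_after_install(install_dir, baseline_listing):
    """Every path in the baseline listing must exist after install (check e)."""
    with open(baseline_listing) as f:
        expected = [line.strip() for line in f if line.strip()]
    return [rel for rel in expected
            if not os.path.exists(os.path.join(install_dir, rel))]

if __name__ == "__main__":
    # Hypothetical usage:
    #   python install_pipeline.py ./package ./MyInstaller.exe /opt/myapp baseline.txt
    package_dir, installer, install_dir, baseline = sys.argv[1:5]

    problems = ["ZERO SIZE: " + p for p in zero_size_files(package_dir)]
    if not problems and not run_silent_install(installer):
        problems.append("SILENT INSTALL FAILED")
    if not problems:
        problems += ["MISSING AFTER INSTALL: " + p
                     for p in missing_after_install(install_dir, baseline)]

    for p in problems:
        print(p)
    # In Neeraj's process, failures triggered a mail to the team and a rollback
    # to the previous working setup; that part is omitted here.
    sys.exit(1 if problems else 0)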
Monday, October 8, 2007
"Once we're off the map, we don't know where we're going next"
By Michael Bolton.
A reply to a testing engineer who was looking for advice on how to persuade developers to fix "small defects":
>I have the following question: what should I tell the developer(s) when an error occurs with a low severity, occurring in specific situations with a low probability of happening in the real world, when he asks me where or how this error can affect the safety of the system? Every time the developer says that, I can't seem to find a logical explanation of how the defect can affect the safety.
The answer that I'd give to this question is that any defect for which we don't have a good, clear explanation is a potential security threat. One way to deal with the problem is to ask the developer, "Can you be sure that this problem /doesn't/ represent a threat to the safety of the system? What makes you /sure/? And if you're sure, why don't we handle this condition in a way that makes our certainty evident?"
One thing that tends to make us "sure" that a small symptom isn't a big problem is something called "representativeness bias". We have an apparently natural tendency to associate the significance of the symptom with the significance of the underlying problem. In complex systems, the first sign of a problem may not be a big sign.
One prominent and memorable example of this is the Challenger disaster. O-ring seals, intended to keep gases from coming together, degraded. Instead of remaining intact, they burned partially through. NASA managers used the fact that they didn't burn ALL the way through as evidence of safety, but (as is now obvious) the opposite was true. As Richard Feynman said in his Appendix to the Rogers Commission Report on the Space Shuttle Challenger Accident (I've added emphasis below),
[quote]
There are several references to flights that had gone before. The acceptance and success of these flights is taken as evidence of safety. /But erosion and blow-by are not what the design expected. They are warnings that something is wrong. The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in this unexpected and not thoroughly understood way./ The fact that this danger did not lead to a catastrophe before is no guarantee that it will not the next time, unless it is completely understood. When playing Russian roulette the fact that the first shot got off safely is little comfort for the next. The origin and consequences of the erosion and blow-by were not understood. They did not occur equally on all flights and all joints; sometimes more, and sometimes less. Why not sometime, when whatever conditions determined it were right, still more leading to catastrophe?
[/quote]
When the system is in an unpredicted state, it's in an unpredictable state. Once we're off the map, we don't know where we're going next.
---Michael B.
(Kaner)
Perhaps take a look at the video at http://www.testingeducation.org/BBST/Bugs1.html
and my contribution:
Sometimes it is not easy to give up on a defect after spending several hours trying to figure it out. But sometimes it is the wiser thing to do.
The story usually goes like this:
- We find something that behaves in an unexpected way.
- We try to find out the causes with further testing, perhaps reading documentation, searching the web, or talking to the developer.
- Eventually we find out that it is not noticeable to the user, that it definitely does not compromise safety, that the benefit of having it fixed clearly does not cover the cost of fixing it, or that it is just a matter of logical/mathematical consistency.
Sometimes we like to keep everything on the right shelves, even if we’re never going to need it. It kind of gives us “peace of mind”. If we recognize that the reason for wanting the defects fixed is only psychological, it will be easier to leave those defects alone.
Anyway, all these defects should be reported formally, and it is not our job or responsibility to discard them. But it is our responsibility not to allow them to be discarded when we have some reason to believe that the consequences have not been completely studied (or might be more serious than what is acceptable).
Joao Pedro