Friday, October 12, 2007

Installation testing: should it be so hard?

I share your concern.
It is indeed a huge task to verify setups, because if you need to repeat the test you typically must:
1-Uninstall
2-Check that everything was removed (registry entries, leftover files, etc.); you probably have to verify the uninstall feature as well
3-Install
4-Perform the verifications
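Step 2 is the one most worth automating, since it is repeated on every cycle. A minimal sketch of what such a leftover check could look like (the paths are illustrative placeholders, not from this thread; on Windows you would also walk the relevant registry keys):

```python
# Hypothetical sketch: after an uninstall, flag anything that was left behind.
# Substitute your product's real install directory and data folders.
from pathlib import Path

def find_leftovers(paths):
    """Return every path from `paths` that still exists after uninstall."""
    return [str(p) for p in map(Path, paths) if p.exists()]

if __name__ == "__main__":
    suspects = [
        r"C:\Program Files\MyProduct",   # main install folder (example)
        r"C:\ProgramData\MyProduct",     # shared data folder (example)
    ]
    leftovers = find_leftovers(suspects)
    if leftovers:
        print("Uninstall left these behind:", leftovers)
    else:
        print("Clean uninstall")
```

A checklist of "paths that must be gone" like this can be kept under version control next to the installer project, so it grows as the product does.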

In my company we have patches every week, and you cannot imagine the immense diversity of problems, such as:
-This file is missing because we forgot to include it
-This file is missing because we did not know we needed to include it
-This other team did not tell us this shared file was changed, so we did not include it in our module
-There were "compilation problems" (corrupted files)

It seems to me that the way to resolve this problem is more on their side: they should find procedures and methods that are more reliable and more automated (in the sense that they don't have to think every time about which files they should include). The more tasks they have to do "by hand", the more likely they are to miss some. They should use checklists or other tools to help them.
I ought to say that if a file is missing from a release, the responsibility lies with whoever released the product. I don't see our function (the tester's function) as a parachute for the developers' work, because I believe it is impossible to verify everything.
Quality is not the responsibility of the quality department (QA, test team, whatever you call it); it is the responsibility of the whole R&D team. (I think I read this in the paper "Classic Testing Mistakes".)
I'm afraid I have not helped much with your task, but I hope you now have more arguments for reducing it to more "reasonable" levels.

Joao Pedro

________________________________________
From: software-testing@yahoogroups.com [mailto:software-testing@yahoogroups.com] On Behalf Of John McConda
Sent: Thursday, 11 October 2007 17:33
To: software-testing@yahoogroups.com
Subject: [software-testing] Installation Testing
Hi everyone.
I'm perplexed by a problem that seems fairly simple on the outside, but has proved to be a frustrating part of testing on a particular project. I will try to include as much detail as I can here without being overwhelming.
The basic problem is installation testing. We test a C++ client/server and Web app that comes with an installer built with InstallShield. The app has existed since 1998 and has gradually grown very large and complex, with multiple services and modules needing to be installed in the folder structure.
There is one developer who does nothing but installers for the project. The other developers create a build folder on a server that the install developer uses to create the installation exe.
Currently, it is the test team's job to verify each installation in various ways. This includes running every new installer that is created and then using a tool called Beyond Compare to compare the files that are installed with the build folder to ensure all files are identical. Secondly, for each build, the testers must compare all files of a newly installed build with an installed build of the last version and note that all changes are expected.
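The build-folder comparison described here is mechanical enough to script. A minimal sketch (directory names are placeholders, not from the original message) that diffs an installed tree against the build folder by relative path and content hash, so a missing file or a wrong-version file is flagged without manual inspection:

```python
# Hypothetical sketch of the comparison step: walk both trees, hash every
# file, and report what is missing or differs from the build folder.
import hashlib
from pathlib import Path

def snapshot(root):
    """Map each file's path relative to `root` to its SHA-256 digest."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def diff_trees(build_dir, installed_dir):
    """Return (files missing from the install, files whose content differs)."""
    build, installed = snapshot(build_dir), snapshot(installed_dir)
    missing = sorted(set(build) - set(installed))
    changed = sorted(k for k in set(build) & set(installed)
                     if build[k] != installed[k])
    return missing, changed
```

The same two-snapshot idea covers the second chore as well: snapshot the previous installed version instead of the build folder, and the `changed` list becomes the set of differences the testers must confirm are expected.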
We often find problems when we do these checks, and it will come back that the install developer left something out or had the wrong version of a file, and we will need to do the whole process over again.
This seems horribly inefficient to me, and takes much time away from my team doing actual testing. This is my first time testing something with such a complicated installer, though.
Has anyone else found a better way to deal with this type of problem? Are there any good tools for creating installers I can suggest to the install developer that will take some of the burden off the test team?

-------------
Here is another response, this time from the setup developer's point of view:

John,
In my last company I was also on the other side of the fence; that is, I was the one responsible for creating the installers as well as testing them. To add to the complexity, the product I created the installer for was supported on the Windows platform plus 3 flavours of UNIX [HP, AIX & Solaris].
Below is the approach I took:

1. Created a single installation script file (we used InstallAnywhere, so that helped) for all platforms.
2. Created a single ANT script that did the following:
a) First, it would check the build date of all the files that are to be packaged and compare them to the last check-in date (since all files were sourced from a version control system).
b) Second, it checked that none of the files is of zero size.
c) Then it created the installer using InstallAnywhere's command-line utility.
d) Once the installer was created, we executed it in silent mode and did a complete installation for each platform.
e) Once the installation was complete, we again checked that all the files were copied into the appropriate directory structures and that the values in properties files were changed appropriately (compared to baseline files).
f) Then it brought up the product for the testing team to continue working.
g) Once all the checks passed, the installer was copied to a central location.
If any of the checks failed, a mail was sent to the appropriate people with all the details. Also, if the post-installation check failed, the script would remove the new installation and restore the backup of the working setup.
All the installations were done on the setups that were used daily by the testing team.
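The pre-packaging sanity checks in steps (a) and (b) are small enough to sketch. A hypothetical version (the original used an ANT script and pulled the last check-in date from the version control system; here a plain cutoff timestamp stands in for that, and the directory name is a placeholder):

```python
# Hypothetical sketch of steps (a) and (b): flag zero-size files and files
# whose modification time is older than the last check-in cutoff.
from pathlib import Path

def pre_package_checks(build_dir, last_checkin_epoch):
    """Return a list of human-readable problems found in `build_dir`."""
    problems = []
    for p in sorted(Path(build_dir).rglob("*")):
        if not p.is_file():
            continue
        if p.stat().st_size == 0:
            problems.append(f"zero-size file: {p.name}")
        if p.stat().st_mtime < last_checkin_epoch:
            problems.append(f"stale file (older than last check-in): {p.name}")
    return problems
```

An empty return list means packaging can proceed; a non-empty one is exactly the detail that would go into the failure mail described below.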
Now, since you are not the one creating the installers, you'll need development's buy-in to set up this kind of process. But if you can translate this into a scenario specific to your situation, I'm sure the value derived would be too great for development to look the other way.

Hope this helps.
Thanks,
Neeraj
