Wednesday, October 10, 2012

regression testing

In the first place, the quantity and type of regression testing we do depend largely on the phase of the project.


Mainly, we do three types of regression testing:

1- Automated regression testing done by the developer team: consists of an automated framework that runs overnight and performs some hundreds of tests covering basic operations (some areas of functionality cannot be covered by these tests for logistical reasons, or are not efficient to test this way). Runs before releasing a minor version to the Test Team.
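The nightly run described above could be sketched as a minimal suite runner. Everything here is illustrative: the operations (a toy create/delete on a dictionary) and the check names are assumptions, not the actual framework.

```python
# Minimal sketch of a nightly regression runner over basic operations.
# The "project" model and the operations are hypothetical stand-ins.

def op_create(project, name):
    """Basic operation: create an item and confirm it exists."""
    project[name] = {}
    return name in project

def op_delete(project, name):
    """Basic operation: delete an item and confirm it is gone."""
    project.pop(name, None)
    return name not in project

def run_suite():
    """Run every basic-operation check and collect pass/fail results."""
    results = {}
    project = {}
    results["create"] = op_create(project, "attr1")
    results["delete"] = op_delete(project, "attr1")
    return results

if __name__ == "__main__":
    failed = [name for name, ok in run_suite().items() if not ok]
    print("FAILED:", failed if failed else "none")
```

A real framework would of course load hundreds of such checks and write a report for the morning, but the shape is the same: independent checks, a pass/fail map, and a summary gate before handing the build to the Test Team.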

2- Manual testing done by the testing team: consists of running test cases to certify that the main functionalities still work. It is a subset of a large test suite in which tests are loosely defined and testers are (or should be) encouraged to make variations and to think for themselves, instead of just running predefined steps (when steps are defined at all). Some test specifications consist simply of a parameter name, for instance. These tests are usually run after a minor release and continue for a sustained period; between releases they are almost always being executed. In our case it is not cost effective, or even possible, to automate most of these tests, given the nature of the software we build. I will not elaborate on this, because much has been said about the dangers of automation and I mostly agree with it.

3- Half-automated tests done by testers: consist of running predefined environments. These insist on repetition and load or stress rather than unit functionality. Also run continuously after minor releases, they usually find important crashes not visible in simpler tests, and they allow taking some measurements of system performance to compare with other releases, aiming at continuous improvement.
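The performance-comparison side of these half-automated runs could look roughly like this sketch: repeat one operation many times, measure elapsed time, and flag a regression against a baseline from a previous release. The operation, iteration count, and tolerance are all placeholder assumptions.

```python
# Sketch of a half-automated load check: hammer one operation repeatedly,
# time it, and compare against a stored baseline from an earlier release.
import time

def operation(store, i):
    store[i] = i * i          # stand-in for a real system operation

def measure(iterations=10000):
    """Run the operation many times and return elapsed seconds."""
    store = {}
    start = time.perf_counter()
    for i in range(iterations):
        operation(store, i)
    return time.perf_counter() - start

def regressed(current, baseline, tolerance=1.5):
    """Flag a performance regression if the current run is much slower."""
    return current > baseline * tolerance
```

The interesting part is not the numbers themselves but the trend across releases, which is what lets these runs support continuous improvement rather than a one-off pass/fail verdict.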



Although not recognized as regression testing, I have found, over twelve years in this kind of job, that much regression testing also happens in an activity that involves a lot of exploration and dispersion: defect verification. Even when there was no formal regression testing, I think the activity of verifying defect fixes after a release amounts to very effective regression:

-when trying to reproduce a (presumably fixed) defect, side effects are often found;

-in this process, and in the variations on the test, the changed code is heavily exercised. When verifying a defect, it is important that the tester not only reproduces the exact scenario but also makes some variations, including the “opposite tests”, i.e. the ones that had no problem. For instance, if there was a problem deleting an attribute, I should also make sure that, after the fix, other attributes are not inadvertently deleted.

-I find the mental process of verifying a defect very motivating for “going a little further”, thus inspiring new test cases and new areas to explore.





What could be improved (in my case):

-Synchronization between 1 and 2 could allow more efficient manual testing coverage; this could mean including some testers on the team that performs 1.

-I am still looking for the right tools/methodologies for managing the manual tests more efficiently.



Tuesday, April 12, 2011

Developers, find a creative hobby (like everyone else)

Many times, as a software tester, I find problems and end up asking: "Why are they/we reinventing the wheel?"...



I think it is a fair assumption that most of the problems in a software project arise from new or changed features. Maybe less obvious is the fact that most of the design/implementation work could be saved if developers searched for what has already been done, and how it is done, instead of just creating things according to their own idea of the "perfect way". For instance, when designing a UI for an email client.

Most of the creative work of developers comes from a natural human instinct, which is to create: to use imagination, intelligence, association, abstraction, the brain's capacities, to deliver something to the world. Something that hopefully will be recognized as a great masterpiece, aesthetically satisfying to the developer and to the stakeholder, or to the artist and to the "connoisseur" of art.

The problem is that what we expect from software is not what we expect from art. Very concisely: we expect software functions to be recognizable (unlike art, which should be original), and emotionally we want software to be innocuous, not to awaken strong emotions the way art does.
So what I am saying is that developers should find some creative hobby to express themselves, so that at work they can focus on the plain and simple ways to get bug-free software!

PS: Of course, this advice is valid for everyone, not just developers: "Find a creative hobby".

Wednesday, October 27, 2010

Defect quantity estimation

There are some indicators of the quantity of defects a certain area of the product might have.

Some say that testing is asking the product the important questions. Well, I think that when a certain area raises too many questions about the expected product behaviour, it will probably have proportionally many defects. I guess this is based on these principles:

  • what's hard for me (the user) to understand is also hard for the developer to implement.
  • the more complex an area is, the more difficult it is for it to be self-explanatory, the more questions it will raise for the user, and the more defects it might have (because it is complex)
  • bug quantity grows exponentially with product complexity, or, equivalently,
  • bug quantity grows exponentially with product size, if we count as part of product size the ways areas of the product are connected
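One way to make the connection-counting intuition concrete: with n areas that can all interact, there are n(n-1)/2 pairs, so interactions grow quadratically even before considering larger groups of areas; counting every non-trivial subset of areas that could interact gives 2^n - n - 1, which is truly exponential. A small sketch (the formulas are standard combinatorics, not a claim about any real defect model):

```python
def pairwise_interactions(n_areas):
    """Number of potential pairwise connections between n areas."""
    return n_areas * (n_areas - 1) // 2

def interaction_subsets(n_areas):
    """Number of subsets of 2 or more areas that could interact: 2^n - n - 1."""
    return 2 ** n_areas - n_areas - 1
```

Doubling from 5 to 10 areas takes pairs from 10 to 45, and interaction subsets from 26 to 1013: adding areas adds defect opportunities much faster than linearly.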

Other things that make me suspect an abundance of defects:

  • when the same operation/feature/information is presented in different places in different ways. More generally, redundancy usually leads either to incorrectness or to misinformation, because when a user sees apparently the same information in different places, he might think it is not the same.

Friday, September 17, 2010

testing is...

There is great ignorance and misconception about what testing is. Commonly confused with "verifying", it is considered a monotonous task, one that demands no creativity, reasoning, or on-the-spot decision making. The definition I currently adopt is this: "testing is this: helping our clients to ask and answer their most important questions" (M. Bolton). The client may be whoever writes the code, whoever sets the specs, whoever "runs" the company, or the customer proper. Testing is essentially learning: learning what the product does, what it should do, what they want it to do, what it might inadvertently do. Testing is scientific experimentation, statistics gathering, criminal investigation (finding a bug, chasing it, determining the necessary or circumstantial evidence, etc.), and it is not at all monotonous. The verification, or confirmation, part proper is only a small percentage of our activity, and tends to be a candidate for automation. Testing is also communicating, advocating, mediating.

This leads to my second remark: software engineering, generally speaking, is a social science. Contrary to what was thought for decades, it is not an exact science, nor is it even comparable to other engineering disciplines such as mechanical or civil engineering. It is a field in which the human factor is present throughout the whole process: in the requirements and in the constant adaptation to change, in the compromise between idealized functionality, actual functionality, and achievable functionality. If we analyze how we spend our time, we conclude that only part of it is used in the "production" of software proper. Negotiation, learning, experimentation, debate, knowledge transfer, etc., dictate that this domain we work in is, like all social sciences, unpredictable. There are no projects without surprises of every kind, and only the most naive of students can believe that the requirements will not change, that the code will work the first time, that the technologies will not change...

It is not a "defect" of the companies; it is the "nature" of the field in which they operate!

Friday, October 9, 2009

The tester's job is to make things work, not the opposite!

In my last post I forgot to mention the main idea I'd like to retain, which is: the tester's mission is to make the software work.

What we want, in the end, is working and satisfying software, with few relevant bugs. Let's not forget it. The more I get the software to work, the more complex scenarios I will build, and eventually the more interesting and relevant defects I will find.

If a tester gets stuck with basic defects, often superficial, often interface-level, but sometimes truly blocking, he won't be able to advance in his mission, which is to have working software. That's why I advised in my last post to start with simple, common, expected scenarios.

Corner cases are often interesting and exciting from an academic point of view, but in the long run they do not lead as efficiently to working software. Although testing is usually seen as a way to find defects, I have found that on complex systems our mindset should be focused on software success, not on software failures.

Thursday, October 8, 2009

What should first tests address?

When planning a testing effort, or if you wish, when designing a Test Plan, some considerations may turn out to be very useful.

First tests on a given area are decisive. If we consider that testing is learning, or asking valuable questions of the product, then the first questions are the most important ones, because we are making first contact with the product (first impressions tend to stick in our minds more easily).

When addressing these questions, we should expect answers like "Yes, this feature is working exactly as I expected" or "Not at all, there is no way this could work". Answers like "in this particular situation, this might not work" or "I think this is working, but I am not sure it is very useful" can be discarded at an early stage, because we are trying to gather basic information, not exploring detailed scenarios.

We should seek the most common scenario, the main functionality, and try to set up ideal conditions so the software succeeds. At this point, we can say whether the feature is working or, on the other hand, not working at all.

Even after passing the very first test, after we say the feature is working, we will increase the complexity of testing, but always going for the next simplest and most common scenario someone might want. This way we achieve:

-Continuous and steady knowledge growth, easier and more logical learning. Our mental model of the product is built from the base up to the upper structure, not the other way around.

-Efficient testing effort: every result can probably be matched to our mental model, which is growing continuously, thus minimizing gaps between the complexity levels of our knowledge. Suppose we go for a complex scenario at a very early stage, and we find it is not working. My experience tells me the test is often discarded, or at least it will have to be repeated later, after doing some simpler tests. Often what we see as a product failure is a (tester) knowledge failure, because we are exploring conditions and scenarios the product was not built for.
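The simplest-first ordering can be sketched as a tiny runner that executes scenarios in increasing complexity and stops at the first failure, so complex scenarios are not attempted (and then discarded) before the basics pass. The scenario names and checks are illustrative only.

```python
# Run test scenarios ordered from simplest to most complex, stopping at
# the first failure: that is where learning (and fixing) should resume.

def run_in_order(scenarios):
    """scenarios: list of (name, check) pairs, ordered simplest first.
    Returns (names that passed, first name that failed or None)."""
    passed = []
    for name, check in scenarios:
        if not check():
            return passed, name
        passed.append(name)
    return passed, None
```

For example, with three scenarios where only the most complex fails, the runner reports the two simpler ones as passed and names the complex one as the point to investigate, instead of leaving us guessing whether the failure is a product problem or a gap in our own model.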

Because our knowledge at this early stage is extremely dynamic, Exploratory Testing (ET) may be very efficient, due to its adaptability and dynamism (see the literature on ET).