Alexandre Martins On Agile Software Development

17 Jul 2008

Measuring test effort

One of the most difficult tasks for consultants is to influence business people to embrace and support test-driven development. They seem to "understand" the values and "agree" with them, but when it comes to putting them into practice the picture is generally a bit different. By putting into practice, I mean sticking with it steadily, even when dealing with unexpected situations. A typical one is a project with delivery delays, a tight deadline, and an invariant scope. In my experience, when such a situation happens, the first decision made is to cut test development and give up code quality in order to deliver faster. No matter how hard you try to revert that decision by showing the bad outcomes it leads to, they simply ignore them and take the risks, because there are no concrete risks on the table other than not delivering the software.

Not having a way to show managers that not writing tests, at least for the most critical functionalities, is indeed a concrete risk has always puzzled me. One day, while talking to Kristan Vingrys about this, he showed me a risk matrix he has been using to help him influence people to understand the value of tests. See the image:

Basically, it measures the rate of test coverage required and tells you what types of tests (unit, functional) should be implemented, based on the impact of the functionality to the business stakeholders and the amount of new code needed to implement it (you may be re-implementing it from existing code). The more business impact the feature has, and the more new technology it needs, the more test implementation it should have.
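To make the idea concrete, here is a rough sketch of how such a matrix could be encoded. This is not Kristan's actual matrix; the levels, thresholds, and recommended test types below are my own illustrative assumptions.

```python
# Minimal sketch of a test-effort risk matrix. The levels and the
# recommendations are illustrative assumptions, not the original matrix.

LEVELS = ("low", "medium", "high")

# RISK_MATRIX[business_impact][new_code] -> recommended minimum test effort
RISK_MATRIX = {
    "low": {
        "low": "smoke tests only",
        "medium": "unit tests for new code",
        "high": "unit tests, ~60% coverage",
    },
    "medium": {
        "low": "unit tests for changed code",
        "medium": "unit tests, ~70% coverage",
        "high": "unit + functional tests, ~80% coverage",
    },
    "high": {
        "low": "unit + functional tests for the critical path",
        "medium": "unit + functional tests, ~90% coverage",
        "high": "unit + functional + end-to-end tests for the whole feature",
    },
}


def recommended_test_effort(business_impact: str, new_code: str) -> str:
    """Return the suggested minimum test effort for a feature.

    business_impact: how much the feature matters to stakeholders (low/medium/high)
    new_code: how much new (rather than reused) code the feature needs (low/medium/high)
    """
    if business_impact not in LEVELS or new_code not in LEVELS:
        raise ValueError(f"expected one of {LEVELS}")
    return RISK_MATRIX[business_impact][new_code]
```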

The ideal approach would be, for each feature to be implemented, to make the team responsible for evaluating and deciding how much test effort they want to put into the story. The best time to do this is during the iteration planning meeting, so that the final output you get is both the iteration goals/features and the minimum test effort for each of them.
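Continuing the sketch above, a hypothetical planning output could simply record that minimum next to each story. The story names and ratings here are invented for illustration.

```python
# Hypothetical iteration-planning output: (story, business impact, amount of new code).
stories = [
    ("checkout payment", "high", "high"),
    ("profile page copy change", "low", "low"),
    ("order history export", "medium", "medium"),
]

# Uses recommended_test_effort() from the sketch above.
for name, impact, new_code in stories:
    print(f"{name}: {recommended_test_effort(impact, new_code)}")
```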

And as all features generally have at least some significant business value, you will always have the guarantee that these get tested.

Comments (5) Trackbacks (2)
  1. There is some amount of testing that just can’t be negotiated. That’s the minimum level of what you, as a professional, require to say that something is done. For most agile teams this will include unit and integration testing; for non-agile teams maybe unit testing and manual testing is enough.

    The business has the right to reduce effort in software quality and focus on delivery but this can’t affect whatever you require to say something is done.

    cheers

  2. I feel that, more and more, I need to take a stand and fight for tests to be included in every schedule.

  3. I would echo Senhor Shoe’s thoughts. Also, looking at the matrix and thinking about the other important dimension – time – I would expect that in general, a lack of tests would increase the likelihood of negative impact as time went by.

    So it might be reasonable to cut some corners on tests, refactorings, and general cleanliness in the last mad dash to a release, but those things really need to be addressed immediately for long-term benefit and easier implementation of future business value.

    If such a decision about cutting down on testing is made, it’s important to immediately start reporting changes in things like defects and rework time, to highlight the cost of _not_ taking the time to write software properly.

  4. Yes Josh, in the short term delivery is going to be faster if you cut test effort, but in the long term, as bugs start popping up, more time will be needed to fix them.

  5. Isn’t it wise, in the face of a delay, to drop practices you are still learning and have not yet mastered in order to deliver quicker? You can resume learning on the next iteration/project.
