How can product quality improvement be tracked from sprint to sprint?
I’m interested in defining the best metrics for checking whether product quality is improving as time, and sprints, pass by. One obvious candidate is the number of automated tests passing in the continuous integration builds, taken together with how much of the code those tests cover.
Or is it? Do the tests really show the product quality? What if the finished article isn’t what the customer wanted? You would then have a suite of tests making sure that the software does something it’s not meant to do.
Number of defects? Measured against what? The list of possibilities goes on; what I’d like to find are tried-and-tested measurables. Does anyone have a list they use in every project?
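To make the first candidate concrete, here is a minimal sketch of the kind of sprint-by-sprint pass-rate and coverage tracking I have in mind. The sprint figures are made up for illustration; in practice they would come from the CI server’s test and coverage reports.

```python
from dataclasses import dataclass

@dataclass
class SprintResult:
    sprint: int
    tests_passed: int
    tests_total: int
    coverage_pct: float  # coverage figure reported by the CI build

    @property
    def pass_rate(self) -> float:
        # Fraction of automated tests passing in the sprint's CI builds
        return self.tests_passed / self.tests_total if self.tests_total else 0.0

# Hypothetical history pulled from CI reports, one entry per sprint
history = [
    SprintResult(1, 180, 200, 55.0),
    SprintResult(2, 230, 240, 61.5),
    SprintResult(3, 300, 305, 68.2),
]

# Report the sprint-on-sprint trend for both metrics
for current, previous in zip(history[1:], history):
    print(f"Sprint {current.sprint}: pass rate {current.pass_rate:.1%} "
          f"({current.pass_rate - previous.pass_rate:+.1%}), "
          f"coverage {current.coverage_pct:.1f}% "
          f"({current.coverage_pct - previous.coverage_pct:+.1f} pts)")
```

Of course, as the rest of this post argues, a rising trend here only tells you the code does what the tests expect, not that it does what the customer wants.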
Update by Kenny 08/11/09 :-
OK, I’ll try to answer my own question here (the power of reflection!). There are two project scenarios which require different approaches to metrics gathering. They are :-
1) A project run using mature agile methodologies from day 1
2) A project which started off “fr-agile”, i.e. with some, but not all, of the necessary processes in place, and has improved sprint by sprint.
Taking scenario 1, the metrics model is much simpler to build and use. In this project the user stories are clearly defined in each sprint, the acceptance criteria are agreed between the product owner and the team, and the automated scripts written to verify them are passing constantly throughout the project.
Quality has been set “to the max” from day 1 and, although retrospectives remain an ever-present and necessary part of the process, in terms of test coverage and making sure the customer is getting what they want, the criteria have been and are being met.
(OK, the improvements don’t stop there; there’s always room for improvement, but a high level of focus and comprehension is apparent. As long as it’s maintained, the product should be well received by the client.)
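To illustrate what “automated scripts written to verify the acceptance criteria” might look like, here is a hypothetical example. The story, the criterion and the apply_discount function are all assumptions made up for this sketch; the point is that each agreed criterion maps to a check the CI build runs on every commit.

```python
import pytest

def apply_discount(order_total: float, voucher_pct: float) -> float:
    """Stand-in for the feature under test: apply a percentage voucher to an order."""
    if not 0 <= voucher_pct <= 100:
        raise ValueError("voucher must be between 0 and 100 percent")
    return round(order_total * (1 - voucher_pct / 100), 2)

# Acceptance criterion agreed with the product owner (hypothetical):
# "A 10% voucher reduces a 50.00 order to 45.00; invalid vouchers are rejected."
def test_valid_voucher_reduces_total():
    assert apply_discount(50.00, 10) == 45.00

def test_invalid_voucher_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(50.00, 150)
```

When every criterion has a test like this and the suite stays green, the “are we building what the customer asked for?” question is answered continuously rather than at release time.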
For scenario 2, however, it’s an uphill battle. The team has a legacy lack of quality in its product and processes to date, and must play catch-up on that while taking on board new code and functionality changes.
Tricky stuff. Are the new automated tests really checking that the product is what the customer wants? We’re still not exactly sure what it’s meant to be yet! Our processes haven’t nailed down feature definition to the point where everyone on the team is 100% certain of what we’re building.
This is where the bugs found in production play a key part. For the first release there will, no doubt, be problems. What’s important is that their number and severity are monitored closely and that both show a noticeable decrease over time (the quicker the better, obviously!). This is a key indicator that codebase stability is improving. If not, drastic measures are required (halting new feature development and focusing everyone solely on refactoring and testing).
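A minimal sketch of that production-defect monitoring follows. The severity weights and the sample data are assumptions; in reality the input would be an export from whatever bug tracker the team uses.

```python
from collections import Counter

# Assumed weighting of defect severity (not from any particular tracker)
SEVERITY_WEIGHT = {"critical": 5, "major": 3, "minor": 1}

# (sprint, severity) for each defect reported from production -- hypothetical data
production_defects = [
    (1, "critical"), (1, "major"), (1, "major"), (1, "minor"),
    (2, "major"), (2, "minor"), (2, "minor"),
    (3, "minor"),
]

# Raw defect count and severity-weighted score per sprint
counts = Counter(sprint for sprint, _ in production_defects)
weighted = Counter()
for sprint, severity in production_defects:
    weighted[sprint] += SEVERITY_WEIGHT[severity]

for sprint in sorted(counts):
    print(f"Sprint {sprint}: {counts[sprint]} production defects, "
          f"weighted severity {weighted[sprint]}")

# If neither the raw count nor the weighted severity trends downwards over a few
# sprints, that's the signal to halt feature work and focus on refactoring and testing.
```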
What must also be monitored closely is the client’s and end users’ satisfaction - the code might not crash every 20 minutes, but if no one uses it then it’s a worthless enterprise! These metrics are usually easy to gather - they come in the form of either glowing or damning reports from the business team…