What Experiences of QA in an Agile Environment have you had?
The practice of QA in agile teams has evolved to the point where it adds a high degree of quality to product development. Not only can thorough tests be designed, written and executed within a single iteration, but post-mortem analysis also allows for team reflection and, as a result, process improvement.
So how best to perform QA in, say, a two-week iteration? One approach is as follows:
* The user stories are chosen and the team members select which tasks to perform from these.
* Developers are given the morning to sit and work out which unit tests and low-level functional tests they will write. The QA person then meets each developer to discuss these, finding out at a high level what the expected functionality is.
* The QA person has the opportunity to request that the developer add more tests to the suite, change existing ones, and so on - the idea is that QA input is provided before the developer writes a line of code.
* Once agreement is reached, the developer writes the unit tests (with Javadoc/XYZdoc to generate documentation) and then the code.
* At the same time the QA person works their way round the rest of the team to a) learn the functionality to be implemented and b) provide QA input.
* The QA person takes the code from the previous iteration (which is working and usable) and writes high-level tests using a QA automation tool. These tests are demonstrated at the end of the present iteration; their structure is already known because they were performed manually in the previous iteration.
* The QA person performs exploratory testing on the newly developed functionality and compiles a manual test suite to verify this.
* The developers complete their unit tests, Javadoc and code; everything is verified to build and the tests pass.
* The QA person’s scripts from the previous iteration are run continually against the present build throughout the sprint to ensure that no regression issues creep in. This, along with the manual tests passing, ensures a stable build to release to production.
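To make the test-first agreement in the steps above concrete, here is a minimal sketch in Java of a single story unit: the Javadoc documents the behaviour agreed with QA before any implementation code was written. Everything here (the `DiscountCalculator` name, the range rules) is an illustrative assumption rather than anything from the process described; a real team would likely use JUnit, but plain Java keeps the sketch self-contained.

```java
/**
 * Applies a percentage discount to a price. Hypothetical example of a
 * test-first unit: the expected behaviour below was agreed with QA
 * before this implementation was written.
 */
public class DiscountCalculator {

    /**
     * Returns {@code price} reduced by {@code percent} percent.
     *
     * @param price   the original price; must be non-negative
     * @param percent the discount percentage, 0 to 100 inclusive
     * @return the discounted price
     * @throws IllegalArgumentException if either argument is out of range
     */
    public static double apply(double price, double percent) {
        if (price < 0 || percent < 0 || percent > 100) {
            throw new IllegalArgumentException("price/percent out of range");
        }
        return price * (100.0 - percent) / 100.0;
    }
}
```

Because the tests and Javadoc exist before the body of `apply`, the QA review in the steps above has something concrete to critique - edge cases like a 100% discount or a negative price are visible up front.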
Co-ordination, communication and co-operation are the keys, as in every project. Without them the rest of the team quickly loses track of whether the QA people are manually testing the present release, automating the tests from the previous one, finding bugs in either, or reviewing the unit tests. Because the QA person’s activities ‘overlap’ between iterations, all automated tests must be written, related bugs raised and resolved, and all verifications passing by the end of a sprint. If anything drags into the next release, the structure quickly breaks down.
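The idea of scripts being run continually against the present build can be sketched as a tiny regression harness: each automated check is registered by name, and every run reports which checks failed. This is purely an illustrative assumption about how such a suite might be structured, not a real QA automation tool; in practice a team would use something like JUnit driven by a CI server.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BooleanSupplier;

/** Minimal regression-suite runner: each named check is a boolean supplier. */
public class RegressionSuite {
    private final Map<String, BooleanSupplier> checks = new LinkedHashMap<>();

    /** Registers a named check to run against every build. */
    public void add(String name, BooleanSupplier check) {
        checks.put(name, check);
    }

    /** Runs every check in order and returns the names of those that failed. */
    public List<String> run() {
        List<String> failures = new ArrayList<>();
        for (Map.Entry<String, BooleanSupplier> e : checks.entrySet()) {
            boolean ok;
            try {
                ok = e.getValue().getAsBoolean();
            } catch (RuntimeException ex) {
                ok = false; // a crashing check counts as a failure
            }
            if (!ok) {
                failures.add(e.getKey());
            }
        }
        return failures;
    }
}
```

Run after every build, an empty failure list is the ‘no regression issues creep in’ signal described above; a non-empty one names exactly which previous-iteration behaviour broke.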
So, that’s one process that works, does anyone else have one? Differences from the one described?
——————-
Addendum: 15th August 2009
After reading the above article again recently, a glaring omission came to the fore. In the structure I described, I mentioned that code from the previous iteration would have automation tests written for it. Now this is bad. Well, it could be a lot better at least.
No doubt I wrote this idea from past experiences (as I’ve never been given ‘formal Agile QA training’ - which raises a point, is there such a thing?) and it highlights a big problem in Agile QA. If there’s a compromise to be made in the adoption of agile processes, the traditional ‘last piece of the puzzle’ usually carries the can. And that means code not being ready for demo or test until the final day of the sprint.
A highly functioning agile team will take the user stories and develop them in such a way that every two or three days there is something testable. In the classic case of the login user story, within a couple of days there’ll be an interface (which will no doubt change during the sprint) with entry fields and some form of simple backend to receive values. As the sprint progresses, the rest of the functionality will be added (e.g. parameter checks, a link to a member page, error handling, database structure changes …).
The key point is that something of substance can be checked through exploratory testing within the first couple of days, and there will be more work to check throughout the rest of the sprint. This allows automation scripts to be written within the same sprint, so that when demo time arrives the QA person has their corresponding automation suite ready - and demos it as well!
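The first testable slice of the login story described above might look like this: a few days in, only the parameter checks exist, yet QA can already explore and automate against them while the backend and error pages arrive later in the sprint. The class name and validation rules here are hypothetical assumptions for the sketch, not rules from any real story.

```java
/**
 * First testable slice of a hypothetical login story: input validation
 * only. The backend lookup, member-page link and error handling come
 * later in the sprint, but QA can already write exploratory and
 * automated checks against this slice.
 */
public class LoginValidator {

    /** Returns true when both fields are present and minimally well-formed. */
    public static boolean isValidInput(String username, String password) {
        if (username == null || password == null) {
            return false;
        }
        String user = username.trim();
        // Assumed rules for this sketch: a non-blank user name and a
        // password of at least 8 characters. Real rules would come from
        // the user story's acceptance criteria.
        return !user.isEmpty() && password.length() >= 8;
    }
}
```

Even this small slice gives the QA person real behaviour to probe (blank names, null fields, short passwords), which is exactly the ‘something of substance’ the paragraph above argues for.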
Think of it: a product owner gets to see a working demo of new functionality, and then sees every test scenario checked against it. Their confidence in the stability of the product is sure to rise, along with that of the whole team.
So, if your agile team can function in such a way that the necessary automation scripts are written and working within the present sprint, you are doing very well. Showing these scripts is firstly a way of advertising what QA is doing for the project and the benefits it provides, and it also really helps to build the respectability of QA services company-wide.