The ScrumButt Test (2): Working Software
The second line in the ScrumButt Test says: Software must be tested and working by the end of each iteration.
This is the second of three items that confirm the team (or project) is “iterative”. The ScrumButt Test is, in my opinion, a quick series of small checks for whether the team is really doing Scrum.
Why this second line (element)?
As with almost everything in Agile, there are many good answers to that question. I will highlight three.
1. We can get better feedback. Only by having the software tested and working can the Product Owner and the Stakeholders give the best feedback. And we want short, small, fast feedback: “Yes, it’s what I said, but now that I see it, it’s not what I want.” When it works, and they put it together with everything else that is working, they can lift their eyes from the weeds and start to see whether a real customer product is beginning to emerge. Sometimes this allows them to creatively discover other visions for the product (or improvements to the vision of the product).
Arguably, one could get feedback without the software being fully tested. I am not particularly impressed by that argument, but I’ll leave it for now. My next reason for this line in the ScrumButt Test addresses it.
2. Working (fully tested) software is the primary measure of progress. This is straight from the Agile Principles that were agreed when the Agile Manifesto was written.
Why is that important or right? Well, before that, many were measuring progress by how much paper was churned out, or how many detailed tasks were done, or by the dev team saying “We’re 63.2% done”. None of these were ever very reliable (at least in my experience and that of many others). Certainly they had minimal meaning to a business-side person who had to manage the risk of delivery by a specific date.
OK, so what does it really mean to have working, fully-tested software? Well, each team must define at some level of detail what “done” means. A company, at a slightly higher level, might also have a standard definition of done (with perhaps some wiggle room for special cases).
Definition of done would typically include (or at least address) things like:
* coded (duh!)
* automated unit tests built, checked into the configuration management system, run, and fully passed
* refactored (anything that needs refactoring has been: code, design, architecture). Note: some refactoring might also occur after some of the items below.
* put into the integrated build and a new (QA?) environment (the new story does not break other things, etc.)
* automated functional tests built, checked into the CM system, reviewed by the business folks, and fully passed per a QA person
* other testing done (the effort varies; e.g., some performance or exploratory testing)
* business side testing and review (maybe by the Product Owner…full thumbs up)
* fully documented (any docs that need to change because of this story have been changed and reviewed and are perfect)
* no outstanding bugs (or none of any consequence)
If a story passes the above criteria, then a business person (in most projects) can assume a fairly clear and small amount of additional effort to take that story or feature live. This knowledge can be very powerful and give the Product Owner the courage to identify more early releases.
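To make the “automated unit tests” item in the list above concrete, here is a minimal sketch using Python’s standard `unittest` module. The story, the `apply_discount` function, and its business rules are all hypothetical, invented for illustration; the point is that each story ships with small, automated checks that run and fully pass before the story is called done.

```python
import unittest

# Hypothetical story: "apply a percentage discount to an order total".
# The function and its rules are illustrative only, not from the article.
def apply_discount(total, percent):
    """Return the order total after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_zero_discount_leaves_total_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()
```

Tests like these, kept in the configuration management system and run on every build, are what let the team say “fully passed” without hand-waving.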
3. Working and fully tested software is necessary to know (meaningfully) the team’s velocity. (Velocity is really a later element in the ScrumButt Test, but this line in the test is setting up the team to have a meaningful velocity.) Velocity is useful in many ways, but I’ll just explain it with the family vacation metaphor. When the kids in the back seat ask “when will we be there?”, if I know we are going 60 mph (our velocity) and there are 180 miles to go, I can give a pretty accurate answer. Good enough to make mission-critical decisions like whether to pull over for a potty break. And, as it turns out, good enough for most real business decisions, with about as much quality as we can realistically get.
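The vacation arithmetic carries straight over to a Scrum team. This small sketch does the math both ways; the numbers (miles, story points, velocity) are made up for illustration.

```python
# Vacation metaphor: 180 miles to go at 60 mph -> 3 hours.
assert 180 / 60 == 3.0

def sprints_remaining(remaining_points, velocity):
    """Estimate the sprints left to finish the remaining backlog.

    Rounds up with ceiling division, since you can't run a partial sprint.
    Velocity only means something if the completed work was fully done
    (tested and working) -- otherwise the "miles covered" number is fiction.
    """
    if velocity <= 0:
        raise ValueError("velocity must be positive")
    return -(-remaining_points // velocity)  # ceiling division

# Team version: 120 story points left, velocity of 25 points per sprint.
print(sprints_remaining(120, 25))  # 5 sprints
```

The forecast is only as honest as the “done” behind the velocity number, which is exactly why this line of the ScrumButt Test comes first.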