Tuesday, October 26, 2010

Vive La Revolution!

In the agile revolution, are managers the overprivileged aristocracy? And if so, why is nobody shouting “Off with their heads!”?


I must confess there is a reason I’m partial to agile software development that might unnerve some people: I like its dark underbelly, that thread of anti-establishment sentiment flowing through it. You won't find much of this stated explicitly in the Scrum guide or other key agile works, but hang out in the right places and you'll discover it's there.

Much agile thought has an egalitarian, meritocratic theme: it recognizes the need for leaders within a team but discourages hierarchies and superiority through tenure or job title, and it rattles the cage of old-school authority. I love the shunning of traditional command-and-control management and the respect for the people who do the work as competent professionals who can be trusted. And I share the distaste held by many agile practitioners for practices like detailed estimates in hours that are used to berate you later, ridiculously detailed timesheets, performance reviews and meaningless certification. These things have never sat right with me.

I’ve never been very good with rules, mind you. I don’t like being told what to do or how to do it. I don’t like speed limits of 35mph when, clearly, it would be OK to go 40mph. I don’t like signs that say the park isn’t open until 6am (what if I want to go hike at 5:45am one summer morning?). I could go on, but you get the idea I’m sure… This is why so much of the thinking emanating from the agile community resonates with me.

All of which got me thinking about agile as a kind of revolution. And if it’s a revolution, who’s the aristocracy? Shouldn’t heads be rolling for the waterfall mess and soul-destroying corporate drivel that preceded this?

Clearly the nobility in this revolution is management. But should their heads be on the chopping block? Take a look at this for a moment. It’s a spoof of the original Agile Manifesto put together by Kerry Buckley.


We have heard about new ways of developing software by
paying consultants and reading Gartner reports. Through
this we have been told to value:

Individuals and interactions over processes and tools
and we have mandatory processes and tools to control how those
individuals (we prefer the term ‘resources’) interact

Working software over comprehensive documentation
as long as that software is comprehensively documented

Customer collaboration over contract negotiation
within the boundaries of strict contracts, of course, and subject to rigorous change control

Responding to change over following a plan
provided a detailed plan is in place to respond to the change, and it is followed precisely

That is, while the items on the left sound nice
in theory, we’re an enterprise company, and there’s
no way we’re letting go of the items on the right.



It’s funny. Except it’s probably all too real in some organizations. And who makes it real? Managers. Managers drafting and implementing policy. Managers policing those policies. Managers punishing people who fail to adhere to the new policies.

But we know it’s not right. We know it’s not meant to be that way. Moreover, we’re empowered, self-organizing, self-managing teams. Do we need them any more? Can’t we vote these charlatans off the island?

Well, as someone who’s gone back and forth between individual contributor and management roles more than once, I can tell you without doubt: the answer is no. Sorry about that. Yes, you really do need some organizational structure in most contemporary organizations. You do still need departmental directors and line managers and so on.

Regrettably there is an extraordinary amount of work to be done that is best suited to those outside a basic agile team. There’s financial stuff: setting budgets, planning growth, managing remuneration, capital expenditure etc. There’s medium and long term planning, setting strategy. There’s market research, marketing and PR for a group, internal and external. There’s HR, pastoral care, career development. There’s departmental coordination, networking, finding internal opportunities, connecting people together to achieve things neither could independently. There’s politics: negotiating for office space, people, budget, processes and practices that are fair and equitable. And there’s more than this too.

Could you turn every group in a large organization into an agile team, with their own backlog? Could you treat them all as peers and dispense with a great deal of hierarchy? Well maybe. But it’s a long time until something like that becomes the norm, and in the interim, even if we are in the midst of an agile revolution, beheading management is not the way to go, even if there are some that deserve it ;-)


Thursday, October 21, 2010

Seven Deadly Sins of Scrum

1. Wrath
Something anyone on a Scrum team might succumb to amid the frustrations of learning to work a new way, but one the Scrum Master may be especially tempted by. As the guardian of Scrum principles, the Scrum Master can find it frustrating to watch people doing it "wrong." But it's important to remember that people often learn best by doing, by earning the knowledge for themselves. Scrum is a huge cultural change and requires patience as well as guidance. Besides, the basics of Scrum are very simple, and few things are truly wrong so long as you manage to keep to them. Even then, sometimes letting people step outside the framework for a bit so they can compare may help.

2. Greed
When the stakeholders or the PO start to see productive, reliable output from the team and take note of velocity, it may be tempting for them to ask the team to do more than its historical accomplishments indicate is feasible. "Can't you just squeeze in a few extra points?" This must be guarded against, because allowing it risks people cutting corners, or even gaming things so it *appears* more points are being done per sprint by falsely inflating estimates. Trust the team to work hard to a professional standard and use velocity for what it was intended: predicting what will be done by release time. (For example, if the team averages 20 points a sprint and six sprints remain, plan on roughly 120 points, not 150.)

3. Sloth
To a large extent the whole-team emphasis on committing to a body of work and getting it done by the end of the sprint, coupled with daily stand-ups, leaves fewer places for slackers to hide than on a waterfall team. Nonetheless, calling out people who aren't pulling their weight can be uncomfortable. Traditionally a person's line manager would be responsible for this, but in a self-managing team everyone can hold each other accountable for producing. Sloth harms the whole team and cannot be ignored. Sometimes people need to be "voted off the island."

4. Pride
We should always take pride in a job well done, but it should be a shared thing. A Scrum team thrives on mutual respect and collaboration. Egocentric prima donnas have no place. It’s a team thing. Similarly, whilst great talent is always appreciated, singling out "rock stars" and putting them on a pedestal is counterproductive.

5. Lust
Yeah, um. The usual advice about dating people at the office stands, methinks.

6. Envy
Yeah, that other Scrum team may have their own team room/bigger whiteboard/better computers/more interesting product... Whatever. Find the interesting angle in what you’ve got, work to improve what needs improving, and stop coveting thy neighbor’s 27” dual-monitor setup.

7. Gluttony
Sprint planning can take a while. With inexperienced teams I've seen it start with a late breakfast and run past lunch. Too many bagels with lashings of cream cheese, Danish pastries and  pizza for lunch will catch up with you. Perhaps it's a good thing you will be sprinting afterwards!

Tuesday, October 19, 2010

Requirements are dead. Long live requirements.

One of the first “Agile” books I read that really inspired me to change how we went about creating software was Mike Cohn’s User Stories Applied. I think it’s because, perhaps more than anything else in my career in software development, I had seen problems that boiled down to “requirements problems”.

We were really suffering from some of the classic problems mentioned in Mike’s book including:

  • People not really understanding what needed to be built or why, but just focusing on making sure the software did what the requirements said because the requirements couldn’t possibly be wrong...
  • People obsessing over where commas went (ironically, when the basic wording in play was often not even clear enough for punctuation to be the thing worth worrying about)
  • People testing what they thought the requirements meant, rather than what the customer needed.
  • Requirements fatigue – huge, weighty tomes of requirements, typically starting “the system shall…”, yet rarely containing any decent narrative providing an overview. This kind of stuff would put an insomniac to sleep.
  • Ambiguous wording, cryptic wording and just plain iffy writing: many times things were phrased to suit a business analyst's, software engineer's or tester's view of the world. Often this rendered the document almost impenetrable to our customers, whom we expected to sign off on it(!)
  • Conversations and emails that involved convoluted phrases like “…but don’t you realize that requirement 2.7.3.2.17b clearly means that in this context…”

There was so much in that book that had me nodding my head and thinking “Yep!” with every page. Of course, criticizing traditional-style requirements documents isn't hard. I don't know that I've ever met many people who don't have a story or complaint about how bad they can be. They're an easy target, and I hope I have something more useful to say. I'm not going to rehash what user stories are or how they may benefit you; if you're not already familiar with them, I heartily recommend the book mentioned above. For a more immediate introduction, hit up Google and you'll find plenty of material.

What I want to talk about is how to balance working with stories with environments that are looking for something a bit more traditional when it comes to requirements specifications.

The industry I work in provides services to the pharmaceutical and biotech industries. The work we do is subject to FDA guidelines and the audits (from internal quality groups, customers and potentially the FDA itself) that go along with them. This has bred a certain set of expectations amongst auditors, not least that there will be a classically styled requirements specification from which they can trace the design, implementation, verification and validation of features. Overcoming this expectation may eventually be possible. Perhaps even as soon as one, maybe two decades from now ;-) but for now trying to do so is tantamount to pushing water up a hill with a rake.

Given this situation, we (and I strongly suspect we are not alone here) still need to produce something that we can call a requirements specification, print out and sign, and from which we can show some kind of traceability to our tests. There have been a handful of ideas over time on how we might do this. Different Scrum teams in my organization have tried different things, but I'm not sure any of us have got this completely licked yet.

The first approach was really just business as usual. That is to say that although we had a backlog of stories, they were really just placeholders for work. We still captured the detailed aspects of what we were building in a traditional Requirements Specification. Our “Definition of Done” actually included “update RS” as an item for every story. This idea more or less worked, although it felt a little unwieldy. All the classic problems of big RS documents remained with us; we just bit things off a story at a time, which definitely helped in some ways (not least by eliminating the need to write the entire thing and sign it off before beginning work).

We also flirted briefly with use cases, for which even a moment's research will turn up various “templates” and so forth for authoring them. The problems we had here were twofold. Firstly, a good use case is actually larger than a story, so in trying to take story-sized pieces of work and describe them in use case format, things got kind of artificial and we created more work for ourselves. If we'd carried on down that path we would have had innumerable documents, since each use case was its own Word file. Secondly, use cases seem best suited to very GUI-centric functionality with obvious human “actors” interacting with the system. For web services and other non-GUI work they were quite hard to do well and tended to come out convoluted. Sure, you can have the other system as an actor, but things still came out weird. Maybe we sucked at writing use cases, but in truth I suspect they just weren't the right tool for capturing story requirements.

Around this time I had just finished reading Gojko Adzic's Bridging the Communication Gap, which introduced me to concepts like specification by example and acceptance testing. These ideas resonated with me and seemed like useful, sensible things we could try out. As an alternative to the use cases we were writing I proposed a story-plus-acceptance-criteria-with-examples template (heavily influenced by Gojko's book) that I thought we might try. In actuality this never caught on. Perhaps it's just as well, because with hindsight the idea of requirements scattered across hundreds of documents ran straight into one of my own big pet peeves about our prior approach: no good narrative providing an overview of different product areas.
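For what it's worth, the template amounted to little more than a story with its acceptance criteria and a handful of concrete examples underneath. Something along these lines (the story here is invented for illustration; it isn't lifted from Gojko's book or from our product):

Story: As a lab manager, I want to search for samples by ID so that I can check their status quickly.

Acceptance criteria:
  • Searching for an ID that exists shows that sample's details
  • Searching for an ID that doesn't exist shows a "no samples found" message
  • Searching is not case sensitive

Examples:
  • "S-1234" (exists) – sample details are displayed
  • "s-1234" – the same sample details are displayed
  • "S-9999" (doesn't exist) – "no samples found" is displayed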

One reason that idea may not have gained traction is that we had started to make use of the Rally tool for agile project management. I'm actually not a big fan of Rally's tool (more on that here if interested) but nonetheless, in our "new toy fervor" there did seem to be an exciting chance to do away with Word document based requirements altogether. In our environment this was a radical proposal which I firmly supported…if only it weren't for those auditor types who want to see one. In response, the idea of exporting story details from Rally emerged. This made sense at the time too. One piece of advice I'd seen from several people was, "If you need to keep a written spec, just keep the story cards" (or in our case, the story records in Rally). Rally's product has a set of web services and various ways to interact with them, including a simple Ruby script for importing and exporting data. It does work, but it's pretty simplistic at this time and requires quite a lot of effort to convert the CSV output into anything resembling a normal-looking requirements document. It would certainly be possible to hack away on that script, or create your own variation on the theme a bit more geared to producing this kind of output. But I'm no longer convinced that's what we want to do.
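To give a flavor of what such a "variation on the theme" might look like, here is a minimal sketch in Ruby that turns a CSV export of stories into a plain-text document. It assumes Ruby 1.9's standard CSV library, and the column names ("Name", "Description", "Notes") are made up for illustration; a real export would have its own field names that you'd need to map in.

#!/usr/bin/env ruby
# Rough sketch, not production code: convert a CSV export of stories into a
# plain-text requirements document. Column names here are hypothetical --
# adjust them to whatever your export actually contains.
require 'csv'

input  = ARGV[0] || 'stories.csv'
output = ARGV[1] || 'requirements.txt'

File.open(output, 'w') do |doc|
  doc.puts "Requirements Specification (generated #{Time.now.strftime('%Y-%m-%d')})"
  doc.puts '=' * 60

  CSV.foreach(input, :headers => true) do |row|
    doc.puts
    doc.puts row['Name']                              # story title as a section heading
    doc.puts '-' * row['Name'].to_s.length
    doc.puts row['Description']                       # body of the requirement
    doc.puts "Notes: #{row['Notes']}" if row['Notes'] # optional extra detail
  end
end

puts "Wrote #{output}"

Even a script like this only rearranges the stories, of course; it doesn't turn a transactional backlog into a coherent specification, which is exactly the realization I describe below.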

Most recently, in anticipation of a brand new product starting, I've been reading about and dabbling in behavior driven development (BDD) and acceptance test driven development (ATDD). In particular I've been looking at Cucumber and Cuke4Nuke (which enables one to write "test fixtures" in C#). See here for my first post after encountering Cucumber. Something this exposed me to was the "Given/When/Then" (G/W/T) form of expressing requirements. Even without automating your tests and doing full ATDD, this simple approach to describing requirements seems pretty powerful. In many ways G/W/T stands on the shoulders of use cases, but it feels more lightweight and simpler, and it makes splitting features into smaller parts easier.
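For anyone who hasn't seen the format, a Cucumber feature written in G/W/T style looks something like this (the feature itself is made up for illustration, not taken from our product):

Feature: Sample search
  In order to check on work quickly
  As a lab user
  I want to find samples by their ID

  Scenario: Searching for a sample that exists
    Given a sample with ID "S-1234" has been registered
    When I search for "S-1234"
    Then I should see the details for sample "S-1234"

  Scenario: Searching for a sample that does not exist
    Given no sample with ID "S-9999" has been registered
    When I search for "S-9999"
    Then I should see a "no samples found" message

Each Given/When/Then line maps to a step definition (in Ruby for plain Cucumber, or in C# via Cuke4Nuke), which is what makes the plain-English text executable.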

One thing I thought had lodged firmly in my mind after all this was the idea of tests as your specification, or “executable specifications.” The concept is simple: if your tests are readable and express what the software should do, then running them shows what it actually does, and you're never playing “catch-up” with an old-school requirements specification.

And yet…clearly this idea hadn’t stuck. My team was discussing last week whether we had to keep our RS for our legacy product and whether we could maybe forgo such an artifact for the new product we will be starting soon. We got onto talking about how we couldn’t just document things in Rally because the export wasn’t good enough for what we needed. I resolved to try and do something about this by investigating the export script. And I was doing so when I realized that this was completely the wrong problem to solve. I hadn’t seen it as clearly before, but I realized then: stories are transactional. They only make sense if you review them chronologically because story #42 can completely modify the feature you put in through story #7. Keeping your stories and transforming them into some kind of document as a surrogate requirements specification isn’t the right approach. Instead, we should be transforming our tests into a specification because they represent exactly what the product is meant to do in a much more succinct way than a huge transactional list of stories.

Of course, to do this one needs tests that are readable by humans, and not just by humans who can interpret code. This brings us back to the Given/When/Then format, because it sits natural-language descriptions on top of automated tests. And, lo and behold, it's entirely possible to transform your Cucumber feature descriptions into various outputs, including a PDF.
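As a rough illustration, producing such a document is little more than a one-liner: Cucumber ships an HTML formatter that turns the same feature files that drive the tests into a browsable document, and there is a PDF formatter too (it relies on the prawn gem being installed). Something along these lines, though the exact formatter names may vary between Cucumber versions:

cucumber --format html --out specification.html
cucumber --format pdf --out specification.pdf    # assumes the prawn-based PDF formatter is available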

When we first embarked on transitioning to agile software development I wrote a series of four blog posts about our early experiences. I concluded by looking ahead to where we would need to focus next, and I said at the time that I thought requirements would be a big deal for us. It's taken longer than I anticipated, but I think we can now start to see a path forward.

We don’t want a big old requirements specification to maintain. We do need something that helps us meet our regulatory obligations. ATDD and the G/W/T approach can help us with this. We can generate what we need as a natural by-product of a way of working that’s far more integrated into software development than a traditional document ever could be. Requirements are dead. Long live requirements.

Tuesday, October 5, 2010

Commitment: what can we really commit to?

I want to discuss a foundational topic that influences the management of software development: commitment. In particular I’m looking at this from the perspective of a business that develops software not for sale, but as an enabler of its operations. As such a business, we commit to customers that we can do things. That we can do them by a certain date. That we can do them to a certain level of service, or quality. These commitments cascade down as implicit obligations for the operational teams that conduct the sold work and those that support them.

Except…in reality, no matter how much we want it to be so, the software development teams creating products can’t really commit at this level when they have no idea what promises are being made or what complexities are involved.

I know. That sounds pretty bloody useless. But it’s true. And to think otherwise is naïve optimism that is just waiting for disappointment.

What we can commit to is releasing working software regularly. In fact, when you’re working to Scrum principles, at the end of any sprint the software should be “potentially releasable.” If you can handle it that often, we’re ready to go.

In many businesses dates are already determined, and features are already determined. Given that resources (that is, the people available) are also largely determined (“no you can’t hire any more people, make do with what you’ve got”) it’s rather apparent that all three variables of software development are being fixed. (We’ll assume for now that the fourth variable, quality, is considered non-negotiable by the team developing software, i.e. they as professionals will not compromise quality as a response to the inability to negotiate dates, features and resources.)

Fixing all three like this can work with a great team and a bit of luck. But given a bit of bad luck, or the changes in staff that inevitably occur over time, problems will likely arise. Whether we acknowledge this and try anyway, or wait until disaster ensues and then start investigating, the point is the same: this is not a reliable way to develop software.

There is a “better way”. This “better way”, however, requires a bit of a mind-shift for those used to the more traditional approach. At its heart is prioritization. This is where things can get difficult for the people commissioning work. It’s natural to want to say, “Hey, I just want it all, and by March too; we’ve sold it so it’s basically promised. Just find a way to make it work.” For a business to allow people in this position to operate this way is, in my opinion, a little bit like the proverbial ostrich burying its head in the sand.

Indulge me by considering the following contrived little story.
If a tap* can, when fully open, fill a 5 gallon** bucket in a minute, then that is just a simple fact with which you have to contend. No amount of wishing it could do more will make it so.
Further consider: you have two buildings on fire, burning away, your garage and your house, and you’d like them to be put out. The firefighting team can deploy the water anywhere. Now ordinarily you wouldn’t get to influence what the firefighters did, but imagine for a minute that you can. Further imagine that the garage contains a stash of priceless artworks, but that the house is empty. Left to their own devices, the firefighters would probably think they were doing the right thing by concentrating on the house. But YOU know better. Obviously you would have them put every last drop of available water into preserving the garage. You realize there isn’t enough water to tackle both burning buildings. You realize you can’t have them save both the priceless art and the house. You realize that demanding the firefighters save both buildings ain’t gonna make it happen. You prioritize, and save the art.
In other words, you have recognized that you need to decide how much water to put on which fire and when, and you need to accept that you only get 5 gallons every minute to distribute amongst them.

Once you’ve realized this is the way the tap works, and that there isn’t anything you can do to make it spit out water faster, you’ve learned one of the most valuable truisms about software development: you can only do so much with limited resources. To make the best use of them you need to prioritize how to deploy them. And if you want to do more, you need more resources.

* or faucet if that’s your thing
** or, again, 18.92706 liters (or litres) if you must – as you can see I’m conflicted and inconsistent with both terminology for everyday items and units of measure after living in the US for 8+ years

Friday, October 1, 2010

Delayed Gratification

(Or, "Why I believe fixed length release cycles are best")


Anyone who’s done some kind of Scrum reading or training will have had some exposure to release planning techniques and ideas. There are plenty of ideas and opinions out there about that stuff, in particular how to organize your releases around themes, epics and stories, how to prioritize your backlog and how to run a release planning meeting. I, however, have been thinking about things one level up from the mechanics, and here I want to discuss some of my opinions and ideas on how a business should look at managing releases. In particular I’m interested in how to balance responsiveness to changing needs against predictability on longer-term commitments. I’m also interested in simplicity and in optimizing the way product development works. Agile software development offers the promise of delivering high value sooner and of responding to changing needs. But to deliver on this promise, I believe certain preexisting attitudes and tendencies that are ingrained in many of us need to be reconsidered.

In Scrum, at the end of any sprint the software should be “potentially releasable.” Usually we choose not to release that often, though, because:

  1. There isn’t enough value to justify the “cost” (deployment, distribution, training etc.) of releasing
  2. Customers (users too!) aren’t necessarily best pleased to have their software in a seemingly constant state of flux.

An easy way, then, to manage releases is to let business needs dictate release dates. This doesn’t require any new special rules or planning processes, and it sounds reasonable: “Yeah, we’re gonna need that BIG FEATURE out by CERTAIN MONTH for IMPORTANT CUSTOMER, so we should release then. And then we’ll fit in whatever else from the top of the backlog we can.” Sounds pretty good, right?

Ken Schwaber’s Scrum guide, in describing release planning meetings, says nothing more than that they establish a “probable delivery date and cost that should hold if nothing changes.” He doesn’t seem to be advocating anything different.

But is this optimal? Are there potentially better alternatives? I think so. Here’s how I think it should be done and why.

In a nutshell, I think we should take a leaf out of the finance and accounting book and use a year chopped into quarters (i.e. three-month chunks) for planning and release purposes. In other words:
  • I wouldn’t much bother trying to look further than a year ahead
  • I would [try and] release every quarter (maybe sometimes you can skip, although I think if you have your deployment act together this is unlikely)
  • I would encourage the business to plan things around this quarterly release schedule…that is to think of things in terms of “I’d like this in the 1st quarter release” or “I can wait until the 3rd quarter release,” not “I want this in October.”

Why? Well, hopefully the idea of not looking more than a year ahead is not too controversial. All I’m saying is that things change so much in 12 months that looking further ahead is usually crystal ball gazing. That’s an idea you’ll see in many communities, whether agile, Scrum, lean startup or whatever. I think it makes a lot of sense in many places, certainly in Perceptive MI.

Also fairly easy to swallow I think is the notion of thinking in 3 month chunks. Three months is around 12 weeks, or 6 two-week sprints. It’s long enough to get something of substance and value done, but a short enough time-horizon to keep everyone focused and avoid the risk of over-committing badly. It’s also probably not too long to wait for a new feature. Of course unexpected needs and critical defects occur, and interim or maintenance releases are always possible, particularly when using Scrum, since we’re potentially shippable at the end of a sprint.

But this notion of rigidly sticking to a plan of releasing every three months…I know that is a little different and needs explaining. Well, for starters it provides a nicely time-boxed predictability. There aren’t so many surprises. There isn’t the need to frequently revisit and negotiate the question of when releases will occur – everybody knows, it’s once a quarter.

This predictability is not just comforting and simple; it provides other benefits too. For starters, it makes it easy to handle certain common situations. Let’s consider four:
1. The madly urgent problem/bug in production
Everyone is familiar with this situation. One moment everything is calm and ticking over nicely, the next minute chaos abounds: a bug without workaround has been found, or a really important customer has to have something and only just thought to ask.

Either way, within our proposed framework dealing with this is straightforward. There are really only two choices. The first question to ask is, “Can this wait until the next sprint?” The answer will depend on how truly urgent the matter is, and how long it is until the next sprint begins. With two-week sprints it’s never far away. Few problems require more immediate attention. For those that do, the team would simply abandon the current sprint and focus on the urgent issue at hand.

2. The unexpected feature
When planning for a new three-month release cycle you usually have a fairly good idea of what you want to get done in a timeframe that short. Nonetheless, there are a number of very good reasons why a feature that nobody was aware of at the time can suddenly emerge and need attention.

This is not really a problem when using Scrum, though. The Product Owner (and by extension the stakeholders and other interested parties) is able to completely change the priority ordering of the product backlog every two weeks. Simply bump the new feature up to the top and it can get all the attention it needs in the next sprint. This means that, worst case, the business waits two weeks for work to start on something that was previously completely unplanned.

3. The client-driven enhancement
Somewhere between the unexpected feature and pure market-driven planning lies the customer-driven enhancement. The distinction here is that one has the capacity to pick and choose whether to bump current plans and place it into the release cycle that’s in progress, or negotiate as needed to place it in the next.

4. The market-driven plan
Above the cut and thrust of weekly operational needs and urgent customer requests sits the longer-term approach to product development. This is simply identifying desirable features based on market trends and selecting appropriate release cycles, up to a year ahead, in which to implement them.

You see how, with all of those, there’s an established pattern and we work to fit things in? This reduces or even eliminates the need to frequently invest energy in discussing and negotiating with colleagues outside the Scrum team (IT, domain experts, business development, professional services etc.) over when a release will occur. The schedule is published, and everyone can consult it and, with time, even comes to anticipate it. Things happen without energy being needed to push or chase them: IT comes to anticipate doing deployments, for example. Business development can talk confidently about the likely lead time for getting new features in to serve particular customer needs.

Finally, and ultimately, it encourages better planning and discipline across the business as a whole, from sales through to professional services and product development. I believe this leads to a better-run business, better thinking, less burnout, and people selecting the right opportunities to pursue. I cannot see how that is a bad thing :-)

For me, this approach to managing the release cycles for a software product intuitively makes sense. The combination of simplicity, predictability and responsiveness it can achieve is highly desirable, and a worthwhile reason to move away from the more traditional approach of variable-length release cycles driven by external events. What do you think?