Tuesday, December 15, 2009

Agile Adoption: Part III

This is the third installment in my series about our agile adoption. In the first post I discussed how we got started and how things began to pan out for us after a few months at it. In the second I looked at a few of the practices we began trying after figuring out the basics. In this post I'm going to share how we moved beyond adopting agile in a general sense and selected Scrum, along with some of what that led to.

Ever since I started learning about agile I've become a voracious reader again: blog posts, articles, books. I tore through many in the first months and continue to do so still. At some point I'll blog on what I found to be the most helpful material. For now, though, what I want to talk about is how all this reading quickly deepened my understanding of the various ideas sitting under the general agile umbrella. In particular I was struck by the dominance of Scrum as a framework for agile project management. Although there was mention of XP, Lean and Kanban, and to a lesser extent I would stumble across RUP or Crystal, it was Scrum that seemed to have taken center stage. And the more I've learned about it, the clearer it is why. It's a pretty light framework with minimal mandatory, dogmatic things prescribed for you to follow. It makes a helluva lot of sense and it's not "hard" to internalize and figure out how to use. At least not conceptually -- I do think it takes a lot of effort to employ it consistently. If you're not convinced that Scrum is simple, try reading one of the short, simple explanations of Scrum, or even the official Scrum Guide. Really not that hard, eh? Of course there is more to it than that: applying the framework and its principles to an actual project leads to lots of decisions. Many times you need to "try" stuff and see what happens. Scrum is great for that, with feedback loops very obviously built into it.

All this reading about Scrum started to change the language we were using. In the past we always talked about iterations; now they were sprints. People were getting comfortable with points and velocity -- much less talk of hours and days. We framed things in terms of stories now. Our former project managers were Scrum Masters. Our status meetings were daily stand-ups. We didn't have a lessons-learned or post-mortem meeting; we had retrospectives. We didn't have iteration kick-off meetings; we had sprint planning meetings.

This change in language happened first just inside the immediate team, but eventually we were talking to our customers about velocity, story points and so on. I can't quite articulate why, but I found this to be immensely cool. I think maybe it was because we had started to get away from the "dammit I want all my stuff done by February" type of conversation and on to a much more productive and healthy "here's where the team is, here's what it looks like they can do by February, maybe we should alter the priorities in the backlog so X, Y and Z get done."
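To make that kind of velocity-based conversation concrete, here is a minimal sketch of the arithmetic behind "here's what they can do by February". Every number in it (the velocities, the backlog size, the dates) is made up purely for illustration:

```python
import math
from datetime import date, timedelta

# Illustrative numbers only -- velocities, backlog size and dates are invented.
recent_velocities = [21, 18, 24, 19]   # points completed in the last four sprints
backlog_points = 60                    # points left in the prioritized backlog
sprint_length = timedelta(days=14)
next_sprint_start = date(2009, 12, 16)

# Yesterday's weather: average the recent sprints rather than guessing.
avg_velocity = sum(recent_velocities) / len(recent_velocities)

# Whole sprints needed to burn down the backlog (round up -- partial
# sprints still occupy the team for the full sprint).
sprints_needed = math.ceil(backlog_points / avg_velocity)

forecast_date = next_sprint_start + sprints_needed * sprint_length

print(f"average velocity: {avg_velocity:.1f} points/sprint")
print(f"sprints needed:   {sprints_needed}")
print(f"forecast finish:  {forecast_date}")
```

If the forecast lands after the date the customer cares about, the healthy conversation is about reordering the backlog, not about demanding more points per sprint.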

This sea change in language was deeper than just terminology though. It spread in part through a kind of osmosis, but also through people's interest being piqued, through training (a number of people attended CSM courses) and through people sharing articles, doing their own research and so on. Again, for me this was just the coolest thing to see: an organic growth in learning and interest from people.

Up until around this point we still had a rather large shadow hanging over what we were doing: stories weren't shared between the software engineers and QA staff on the team. A set of stories would be developed during a sprint and, at the end, a build handed over to the QA folks to test during the subsequent sprint. Any defects found resulted in a bug fix and a new build. We were in effect doing mini-waterfalls, or "scrummerfalls".

We knew we wanted to get out of that pattern, and as we embarked on a new product release cycle we went all in on doing shared stories. Specifically, a story was now shared by the software and QA engineers, and it needed to emerge from the sprint completely done: developed and tested, bug free and potentially shippable. But there was more to "done" than this. We work in a business subject to FDA regulations. We can (and do) get audited by our clients and the FDA. They expect to see that we follow a rigorous software development process, that we have SOPs and follow them; that we follow what they believe to be best practices such as traceability from requirements to testing, that we unit test, peer review code and so on. To help with this notion of "done" -- meaning a feature was (potentially) ready to ship -- we decided to come up with a checklist that would serve as our "definition of done." This basically contained items like: code the story, develop tests for it, peer review the code, run unit tests, execute manual tests, fix any defects, update any related documentation and so on.
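As an illustration only -- the checklist items below paraphrase the ones mentioned above, and the data structure is my own invention, not anything we actually built -- a definition of done is essentially an all-or-nothing checklist:

```python
# Hypothetical sketch: checklist items paraphrased from the post,
# code structure invented for illustration.
DEFINITION_OF_DONE = [
    "story coded",
    "tests developed for the story",
    "code peer reviewed",
    "unit tests passing",
    "manual tests executed",
    "defects fixed",
    "related documentation updated",
]

def is_done(story_checklist: dict) -> bool:
    """A story is done only when every checklist item is ticked."""
    return all(story_checklist.get(item, False) for item in DEFINITION_OF_DONE)

story = {item: True for item in DEFINITION_OF_DONE}
story["code peer reviewed"] = False
print(is_done(story))  # False -- an unreviewed story is not potentially shippable
```

The point of the all-or-nothing check is exactly the "guard" role: a story that skips any item isn't partially done, it's not done.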

As an aside... I was quite humbled to learn recently that we (that is, me and my peers) did a piss-poor job of getting this definition-of-done idea out to the team. I realize now that we dropped it on them already figured out, affording them no opportunity to discuss and agree on what should be on it. Very bad, and a lesson learned for me: people are going to feel like it's a mandate from their managers rather than a tool to help them. And really it should be a tool to help them -- something that makes clear all the good work they need to be allowed to do to develop software professionally; a guard against manager types and customers pressuring people to cut corners and build up "debt" that we'll then have to rush to address or skip altogether.

This move to shared stories that need to be really done and potentially shippable at the end of the sprint had several more knock-on effects:
  • the order in which we implemented and tested stories now mattered -- previously we only cared which sprint a story was done in; now people needed to know the order within a sprint
  • we were clearly going to be releasing multiple builds during a sprint as stories were implemented and ready for testing
  • we needed, more than ever, to keep our build "clean" -- ready to tag and build a version that could be put into the testing environment
  • keeping our build clean meant an even greater need for unit tests and continuous integration (i.e. build on check in)
  • much greater shared understanding and transparency between software engineers and QA engineers was necessary too
  • the whole team now shared estimation via planning poker

In addition to the above, a number of other things started to happen. We explored FitNesse (don't let the dodgy logo and generally retro look put you off) as a means of adding acceptance tests. That helped a lot with figuring out how far our unit tests should go. In the past there was some pretty heavyweight stuff there -- really integration or acceptance tests masquerading as unit tests. Thinking in terms of acceptance criteria or tests will, I believe, also help people do a better job of focusing on requirements.
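To show the distinction being drawn here -- fast, isolated unit tests versus acceptance tests that state a story's criteria in the customer's terms -- here is a hedged sketch in plain Python unittest. We used FitNesse for the acceptance side; the function and all the tests below are invented for illustration:

```python
import unittest

# Hypothetical example -- the function and tests are invented, not from our
# codebase. A unit test exercises one small piece in isolation, edge cases
# included; an acceptance test mirrors a story's acceptance criterion.

def apply_discount(total, percent):
    """Tiny domain function exercised by both kinds of test below."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

class UnitTests(unittest.TestCase):
    # Fast and isolated: one function, including the unhappy path.
    def test_zero_discount_changes_nothing(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(80.0, 150)

class AcceptanceTests(unittest.TestCase):
    # Reads like the story's criterion: "a 25% discount on an $80 order
    # costs the customer $60" -- something a customer could sign off on.
    def test_quarter_off_eighty_dollar_order(self):
        self.assertEqual(apply_discount(80.0, 25), 60.0)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Keeping the two kinds of test separate is what stops heavyweight acceptance checks from masquerading as unit tests and slowing the build down.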

With two product teams now running Scrum and a third already exhibiting many agile behaviors, we decided to try holding a "Scrum of Scrums" meeting. We had tried many times in our pre-agile past to do *something* to get cross-team communication going. It typically failed in one way or another. Part of that, I think, was due to two-thirds of the product development being done by something less than a full-time, well-defined team. This led to a situation where, when you got people together, you needed too many of them and nobody was terribly interested in anybody else's situation. Now, with stable teams, we can have one or two members represent each team *and* there are only three streams of product development rather than a dozen. It still took us quite a while to figure out how to run the Scrum of Scrums; there's not a lot of concrete info out there, although talking with other folks provided some ideas. We still have some way to go, and may even change our focus in the future, but I believe we've found a good approach for now, which I will write about at some later point.

After all of this change, what were we getting out of it? Well, I'm pleased to say some pretty good stuff. Two key product releases shipped on the promised date. In fact we were done a little early, which never hurts. Beyond that, whilst we could clearly see some people struggling with the quantity and rapidity of the changes introduced, most were thriving. There was a renewed spark in people's eyes -- they were starting to own things more, to realize they could influence things more. And that is clearly leading to a better working environment for everyone.

I'm confident that as we continue we will have more and more success. We will keep delivering what's needed on time and with high quality. We won't have frenzied periods at the end of product release cycles burning people out on death marches. We will have predictability; we'll know the teams' capacities and we'll be able to plan well based on that. We'll use more automation. We'll see stronger relationships amongst team members. Our customers will be happier, more involved and satisfied with what they get. They'll trust us to deliver what they need.

In a bit I'm going to write one more part in this series of posts on our agile adoption. It will be the forward-looking piece, talking about what I think we need to focus on next. Top of that list for me right now is figuring out how to capture requirements in a way that satisfies our regulatory obligations and is compatible with our Scrum approach to product development. Additionally, I believe we should focus on more of the XP technical practices, test automation and really getting the Product Owner role accepted by the business.
