Tuesday, December 15, 2009

Agile Adoption: Part III


This is my third installment writing about our agile adoption. In the first posting I discussed how we got started and how things began to pan out for us after a few months at it. In the second I moved on to look at a few of the practices we began trying after figuring out the basics. In this posting I'm going to share how we went beyond adopting a general notion of agile and settled on Scrum specifically, along with some of the things that followed from that choice.

Ever since I started learning about agile I've become a voracious reader again. Blog posts, articles, books. I tore through many in the first months and continue to do so still. At some point I'll blog on what I found to be the most helpful material. For now though, what I want to talk about is how all this reading quickly deepened my understanding of the various ideas sitting under the general agile umbrella. In particular I was struck by the dominance of Scrum as a framework for agile project management. Although there was mention of XP and Lean (aka Kanban to some), and to a lesser extent I would stumble across RUP or Crystal, it was Scrum that seemed to have taken center stage. And the more I've learned about it, the clearer the reasons seem. It's a pretty light framework with a minimum of mandatory, dogmatic things prescribed for you to follow. It makes a helluva lot of sense and it's not "hard" to internalize and figure out how to use. At least not conceptually. Of course I think it takes a lot of effort to employ it consistently. If you're not convinced that Scrum is simple, try reading one of the short, simple explanations of Scrum or even the official Scrum Guide. Really not that hard, eh? Of course there is more to it than that. Applying this framework and these principles to an actual project leads to lots of decisions. Many times you need to "try" stuff and see what happens. Scrum is great for that, with feedback loops very obviously built into it.

All this reading about Scrum started to change the language we were using. In the past we always talked about iterations; now they were sprints. People were getting comfortable with points and velocity -- much less talk of hours and days. We framed things in terms of stories now. Our former project managers were scrum masters. Our status meetings were daily stand-ups. We didn't have a lessons learned or post-mortem meeting, we had retrospectives. We didn't have iteration kick-off meetings, we had sprint planning meetings.

This change in language happened first just inside the immediate team, but eventually we were talking to our customers about velocity, story points and so on. I can't quite articulate why, but I found this to be immensely cool. I think maybe it was because we had started to get away from the "dammit I want all my stuff done by February" type conversations and on to much more productive and healthy "here's where the team is, here's what they look like they can do by February, maybe we should alter priorities of what's in the backlog so X, Y and Z get done."

This sea change in language was deeper than just terminology though. It spread in part through a kind of osmosis, but also through people's interest being piqued, through training (a number of people attended CSM courses) and through people sharing articles, doing their own research and so on. Again for me this was just the coolest thing to see, an organic growth in learning and interest from people.

Up until around this point we still had a rather large shadow hanging over what we were doing: stories weren't shared between software engineers and QA staff on the team. A set of stories would be developed during a sprint and, at the end, a build given over to the QA folks for them to test during the subsequent sprint. Any defects found resulted in a bug fix and new build. We were in effect doing mini-waterfalls, or "scrummerfalls".

We knew we wanted to get out of that pattern, and as we embarked on a new product release cycle we went all in on shared stories. Specifically, a story was now shared by the software and QA engineers, and it needed to emerge from the sprint completely done: developed and tested, bug free and potentially shippable. But there was more to "done" than this. We work in a business subject to FDA regulations. We can (and do) get audited by our clients and the FDA. They expect to see that we follow a rigorous software development process, that we have SOPs and follow them, and that we follow what they believe to be best practices such as traceability from requirements to testing, unit testing, peer review of code and so on. To help with this notion of "done" -- meaning a feature was (potentially) ready to ship -- we decided to come up with a checklist that would serve as our "definition of done." This basically contained items like: code the story, develop tests for the story, peer review the code, write unit tests, execute manual tests, fix any defects, update any related documentation and so on.

As an aside...I was quite humbled to learn recently that we (that is, me and my peers) did a piss-poor job of getting this definition of done idea out to the team. I realize now that we dropped it on them already figured out, affording them no opportunity to discuss and agree on what should be on there. Very bad, and a lesson learned for me: people are going to feel like it's a mandate from their managers rather than a tool to help them. And really it should be a tool to help them -- something that makes clear all the good work they need to be allowed to do to develop software professionally; a guard against manager types and customers pressuring people to cut corners and build up "debt" that we'll then have to rush to address later or skip altogether.

This move to shared stories that need to be really done and potentially shippable at the end of the sprint had several more knock-on effects:
  • the order in which we implemented and tested stories was now important -- previously we just cared which sprint they were done in; now people needed to know what order they would get done in during a sprint
  • we were clearly going to be releasing multiple builds during a sprint as stories were implemented and ready for testing
  • we needed, more than ever, to keep our build "clean" -- ready to tag and build a version that could be put into the testing environment
  • keeping our build clean meant an even greater need for unit tests and continuous integration (i.e. build on check in) -- see the small test sketch after this list
  • much greater shared understanding and transparency between software engineers and QA engineers were necessary too
  • estimation was now shared by the whole team, using planning poker
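
As a small, concrete illustration of that unit testing and continuous integration need, here's the kind of tiny, fast JUnit test we wanted sitting behind every check-in so a build-on-check-in job could flag a broken commit within minutes. The class and the numbers are invented for this post, not lifted from our codebase:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical example of the kind of small, fast unit test a
    // build-on-check-in CI job can run against every commit to keep the build clean.
    public class RemainingWorkTest {

        // A tiny piece of production-style logic, inlined here only so the example
        // is self-contained; in real code it would live in its own class.
        static int remainingPoints(int[] estimates, boolean[] done) {
            int total = 0;
            for (int i = 0; i < estimates.length; i++) {
                if (!done[i]) {
                    total += estimates[i];
                }
            }
            return total;
        }

        @Test
        public void onlyUnfinishedStoriesCountTowardsRemainingWork() {
            int[] estimates = { 3, 2, 5 };
            boolean[] done = { false, true, false };

            // Only the two unfinished stories (3 + 5 points) should count.
            assertEquals(8, remainingPoints(estimates, done));
        }
    }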

In addition to the above, a number of other things started to happen. We explored FitNesse (don't let the dodgy logo and generally retro look put you off) as a means of adding acceptance tests. That helped a lot with figuring out how far our unit tests should go. In the past there was some pretty heavyweight stuff there which was really integration or acceptance tests masquerading as unit tests. I believe thinking in terms of acceptance criteria and tests is also going to help people do a better job of focusing on requirements.
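
For anyone curious what a FitNesse acceptance test can look like on the Java side, here's a minimal sketch in the classic Fit ColumnFixture style. The discount rule and class name are made up for illustration; in a real fixture the logic would delegate to production code rather than live in the fixture itself:

    import fit.ColumnFixture;

    // Hypothetical column fixture: a FitNesse wiki table supplies values for the
    // public field (orderTotal) and checks the result of the "discount?" column.
    public class DiscountCalculationFixture extends ColumnFixture {

        public double orderTotal;   // input column in the wiki table

        public double discount() {  // output column, written as "discount?" in the table
            // Toy rule, standing in for a call into the real application code:
            return orderTotal >= 100.0 ? orderTotal * 0.05 : 0.0;
        }
    }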

With two product teams now running Scrum and a third already exhibiting many agile behaviors we decided to try holding a "Scrum of Scrums" meeting. We had tried in our pre-agile past many times to do *something* to get cross-team communication going. That had always typically failed in one way or another. Part of that, I think, was due to us having two thirds of the product development done by something less than a full-time, well defined team. This led to a situation where, when you got people together, you needed too many and nobody was terribly interested in anybody else's situation. Now with stable teams we can have one or two members represent each team *and* there are only three streams of product development rather than a dozen. It still took us quite a while to figure out how to do the Scrum of Scrums; there's not a lot of concrete info out there, although talking with other folks provided some ideas. Right now we still have some ways to go, and may even change our focus in the future, but I believe for now we've found a good approach which I will write about at some later point.

After all of this change, what were we getting out of it? Well I'm pleased to say some pretty good stuff. Two key wins were product releases delivered on the promised date. In fact we were done a little early, which never hurts. But in addition to that, whilst we could clearly see some people struggling with the quantity and rapidity of the changes introduced, most were thriving. There was a renewed spark in people's eyes -- they were starting to own things more, to realize they could influence things more. And that is clearly leading to a better working environment for everyone.

I'm confident that as we continue we will have more and more success. We will keep delivering what's needed, on time and with high quality. We won't have frenzied periods at the end of product release cycles burning people out on death marches. We will have predictability; we'll know the teams' capacities and we'll be able to plan well based on that. We'll use more automation. We'll see stronger relationships amongst team members. Our customers will be happier, more involved and more satisfied with what they get. They'll trust us to deliver what they need.

In a bit I'm going to write one more part in this series of posts on our agile adoption. It's going to be the forward-looking piece, talking about what I think we need to focus on next. Top of that list for me right now is figuring out how to capture requirements in a way that satisfies our regulatory obligations and is compatible with our Scrum approach to product development. Additionally I believe we should be focusing on more XP technical practices, test automation and really getting the Product Owner role accepted by the business.

Monday, December 14, 2009

Meetings


The other day I retweeted what I thought was an interesting article from the Harvard Business Review about meetings. The idea was to avoid those days of back-to-back meetings by scheduling 50-minute rather than 60-minute meetings. In this way the day is punctuated with 10-minute interludes that allow one to catch up a little bit.

I thought it was a great idea, but in response someone asked "how does that apply to meetings to discuss 'issues' where an email with the answers would do?" This is a good question, and I can't help but notice that there are a lot of gratuitous meetings. I wanted to share my thoughts and ideas about organizing meetings optimally for an agile team. Absent the full opportunity to try out some of these ideas, it is more theory than proven methodology. Nonetheless it's based on my own observations and a fair bit of discussion and background reading...so it's not completely off the wall.

Personally, and I've read that others feel the same (can't remember who though!), just being invited to a meeting is a huge mental burden. Compared to emailing me, IM-ing me or even stopping by my desk I find it considerably more invasive. With any one of those other low-key approaches I can probably be done in minutes without really feeling too imposed upon. If needs be I can ignore/deflect the interruption too.

But you send me a meeting invite and it's an entirely different proposition. As the allotted time approaches it tends to creep into my mind...especially when that "15 minutes to go" Outlook reminder chimes in. I wind down in anticipation, wander off to the meeting room (or dial in to the call) -- we do the small talk as we wait for everyone to assemble, then we're done...then I go make some tea or something. Finally I sit back down at my desk and try to get back into whatever I was doing before. The cost of the meeting in terms of flow is high. This is especially true for some: see Paul Graham's "Maker's Schedule, Manager's Schedule" for a good explanation of why.

All this preamble brings me to my first premise of meetings: I'm only interested in really necessary ones. It's far, far too easy for people to adopt calling meetings as their default mode of addressing things. Only recently two of my colleagues and I were invited to a meeting to discuss a bug by the people that found it. Of course the bug didn't really need a meeting. All salient information about the bug could be communicated via the bug report and clarification handled through informal discussion. The meeting was really all about impressing upon us the severity of the bug and just how much we needed to pull our finger out and fix it pronto. I'd rather be told that straight so we're all clear about people's expectations.

I digress; moving on...so if we're only to have "really necessary" meetings -- which for me right now is those recommended by Scrum and probably some requirements workshop/story writing/estimation type things -- then how do we deal with all that other necessary communication that must happen on a product development team? How do we get clarity on various issues, or thrash out design decisions, etc.? My answer would be to just go grab the colleague(s) you need and get to it.

That notion may strike fear into some people: "Eeek no, leave me alone, if you must talk to me book some time, I don't want to be interrupted like that without warning."

But there are ways to handle this. In the Paul Graham article above he suggests one approach, which I think is spot on for all the meetings people like myself and my management peers want to have with people on product development teams: try to put them at the very beginning or very end of the day. That way people are left with a big hunk of uninterrupted time to do their thing.

But for the intra-team communication that must happen, I think this would be too rigid. Something I believe is more suited here is the Pomodoro Technique (I've sketched its mechanics in a toy snippet after the lists below). At its very simplest you can understand it to work as follows:

  • set a timer for 25 minutes and focus on whatever your next work task is without stopping until the end
  • at the end of the 25 minutes take a ~5 minute break
  • repeat
  • at the end of four of these "pomodoros" take a longer break

It's more subtle than this, but the key additional points that I see are:

  • don't permit interruptions when the timer is ticking: ignore email, IM, phone and people walking up to you (gesture pointedly at the ticking timer...)
  • deal with any important interruptions during the breaks, e.g. go see what Bob wanted when he wandered over looking for you
  • If Bob is also using the Pomodoro Technique he may either have decided to wait until you are free or started up another pomodoro (25 minute work session) of his own. Obviously if he's done the latter whatever he wanted wasn't that important...or maybe he IMed/emailed his question. If he's waited then maybe he's looking to work on something with you: it could even be done as a shared pomodoro.
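
Purely as a toy illustration of the mechanics above (this has nothing to do with our product code, and a kitchen timer does the job just as well), the cycle is trivially simple; note the 20-minute long break is my own assumption, since the technique just says "longer":

    // Toy sketch of the pomodoro cycle: 25 minutes of focus, a short break,
    // and a longer break after every fourth pomodoro. Runs until you kill it.
    public class PomodoroTimer {

        private static final int WORK_MINUTES = 25;
        private static final int SHORT_BREAK_MINUTES = 5;
        private static final int LONG_BREAK_MINUTES = 20; // assumed value for "a longer break"

        public static void main(String[] args) throws InterruptedException {
            for (int pomodoro = 1; ; pomodoro++) {
                countDown("Pomodoro " + pomodoro + ": focus, ignore interruptions", WORK_MINUTES);

                boolean longBreak = (pomodoro % 4 == 0);
                countDown(longBreak ? "Long break" : "Short break -- go see what Bob wanted",
                          longBreak ? LONG_BREAK_MINUTES : SHORT_BREAK_MINUTES);
            }
        }

        private static void countDown(String label, int minutes) throws InterruptedException {
            System.out.println(label + " (" + minutes + " minutes)");
            Thread.sleep(minutes * 60L * 1000L);
        }
    }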

I think this approach would be a very effective way of handling interrupts without ever forcing people to wait an unreasonable amount of time for access to another team member. I say "I think" because I've not had a chance to put this fully into practice with a team where everyone agrees to work this way. I've used pomodoros myself as a simple way to just focus on getting stuff done. Procrastination is harder when using them. And it stops me from "allowing" myself to be interrupted. Based on that evidence alone I feel pretty good about how well it could work for a team. Organized this way people would still be able to have frequent interactions as needed. But they wouldn't be filling up their calendars with "meetings" just to talk to one another.

Of course one key enabler for this approach is probably co-location, maybe even ideally a shared team room/workspace where everybody sits. But even without that I believe it would be a good approach.

There's also a psychological benefit I believe to not having a calendar full of nasty white blocks of meetings. You wouldn't feel so much as though you had been subjected to a day of meetings, unable to "get anything done". Rather, you'd feel like you had been working all day long, some of that time collaboratively with your colleagues as you and they found it helpful.

Therefore, my second premise of meetings is: handling issues via informal discussion doesn't mean you need to be unpredictably interrupted all day long.

Of course there are meetings that have to happen. Sprint planning meetings, retrospectives, requirements workshops etc. And for these I believe it's key to adhere to a few principles to enjoy success.

First of all, any real meeting needs an agenda, preferably with each item on it timeboxed. I've been known to reject invitations or at least tersely question the sender in the absence of agendas. Now obviously you can't go around doing that to everyone...but for a group of peers running a Scrum product development team everyone is free to call out colleagues requesting meetings that are all too vague.

Additionally it's important that any meeting is well facilitated: who's keeping notes? Who's ensuring we stay with the agenda? Who helps encourage the less talkative members of the team to voice their opinions? Some of these things can be shared amongst a good team, but having an identified facilitator makes for a good experience I believe.

Given how invasive meetings are they should be of a reasonable duration. I particularly liked that idea from the HBR of stopping 10 minutes early to avoid filling up entire hours. Multi-hour meetings (like sprint planning) should have built-in break times, and they should be kept to.

Lastly, good meetings need respect from and for all participants. In particular I see the following as key:
  • give people and their ideas and contributions a fair hearing
  • invite people who haven't yet made a contribution to volunteer their opinion
  • be there and start on time -- lateness is a lack of respect unless there is a good reason
  • quality phone communications (rooms with poor hardware or acoustics plain suck for those on the other end of the phone)
  • make the effort to be clear for those not physically present (i.e. those on the phone)
  • use a WebEx if appropriate -- it helps those not in the room follow along

Thus my third and final premise of meetings is: if we're going to have a meeting, let's bloody do it well.

Yeah, I'm biased about the not physically present stuff...I do so many meetings over the phone :-)

Sunday, November 29, 2009

Agile adoption: Part II


In my initial post discussing the adoption of agile practices in my engineering group, I gave some background on where we were a few months back and how we started to switch from waterfall to agile.

In this post I'm going to get into some details about the practices we have tried and how they have helped us (or not). Before I get into the meat of that though, one of my colleagues, Ram, commented on that first post. He had a very interesting point which gave me pause for thought. Amongst other things Ram said, "did the points estimating really helped us??i personally could not see the benefits out of it."

Chewing this over for a few days has led me to realize there are a couple of important things anyone reading this needs to know. Firstly, what I write here are my observations, and I'm not a member of any of our scrum teams as (for example) Ram is. For those that don't work with or otherwise know me, these postings are the observations of a manager of a team of engineers. My concerns and observations will, necessarily, be different from those of the people actually on a scrum team doing the hard work. Secondly, I may need to make clearer why I see some things as beneficial -- perhaps they would be obvious to peers, other managers etc., but I would like to share these views also with people on actual scrum teams. I think having a more holistic view of the whys, wherefores and benefits for more than just the team would be useful and (hopefully!) interesting. I know that two of the guys on our scrum teams have their own blogs and hopefully they will post their thoughts on our agile adoption over time. You can find Ram's here, and Rajiv's here. Rajiv has already posted some preliminary thoughts.

Now on to the real substance of this post. I'm going to cover what I remember as the next four items of consequence in our agile exploration: task visibility, proper iteration kick-off meetings, iteration retrospectives and lastly Twitter. Yes really, Twitter.

Task Visibility
Truth be told, having a decently detailed understanding at any given moment of what people were working on and whether it was going according to plan was historically difficult for me. In part I attributed this to not playing the role of project manager for any of our ongoing endeavors...so of course I didn't know. But finding out when I needed to know was always a tedious process. It felt like it took more time than it should and never left me with much confidence that I had the full story anyway.

Our agile adoption almost immediately started to help with this. We employed two key tools: a simple task board and Basecamp, an online project collaboration and task management tool. The task board was set up in a small room we were able to use on a daily basis for our stand up meetings and was shared amongst the entire team. It took us a while to tease out "good" tasks and longer still to realize that the RA should have tasks on there too. At this stage we weren't really story based, and so the granularity of what was on the board was fairly variable. Nonetheless, being able to see at a glance how many tasks there were for an iteration, how many remained in committed vs. in progress vs. done was very powerful.

This physical task board was augmented by Basecamp. Basecamp was mostly popular with the software engineers on the team, and we wrestled for a bit over what should "drive" the stand-ups: the physical task board or Basecamp. Our situation was complicated by the fact that we had team members in both Eastern Europe and India. This led some of us to feel that Basecamp was the way to go but, ultimately, the physical board won out. Upon reflection I think this was a good thing -- the simplicity and visibility of the board was key. The software engineers continued to use Basecamp to a greater or lesser extent for most of the remainder of this release cycle.

Proper Iteration Kick-offs
Even before our first tentative steps in the agile direction, we had typically divided our product development work into two week "iterations." This didn't actually afford us that much of a benefit besides perhaps enforcing a certain regular cycle when the PM would check in with the developers and obtain some vague indication of what they might do in the next couple of weeks.

Now the kick-off meetings had much more purpose: what are we putting into the "committed" column of our task board? If we estimate that stuff in points, is it <= our velocity? From my perspective this seemed to give us a much more regulated rhythm to our work. Things seemed more predictable -- the team met their commitments. They thought harder and talked more about what they were committing to. There was greater consensus across the team even though work on different facets of the project was still silo-ed in some ways.

Having seen how this panned out, I would maintain that there's really no point doing work in iterations unless you are also measuring velocity in some way. If you're not measuring velocity, all you're doing is arbitrarily punctuating work with a frequent but meaningless end point for no benefit. When you do iterations with velocity, however, you start to gather data about the capacity of the team. From this the team can become better at predicting what they can accomplish. There's a straightforward but worthwhile sense of accomplishment in knowing you can approximate what can get done in a couple of weeks.
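
To make the arithmetic concrete, here's a little sketch of the capacity check that measuring velocity makes possible. The numbers are invented, and in practice a spreadsheet (or the task board itself) does this job; the point is just how simple the check is once you have the data:

    // Hypothetical sketch: use points completed in past iterations to sanity-check
    // what the team is about to commit to in the next one.
    public class IterationPlanningSketch {

        public static void main(String[] args) {
            int[] completedPoints = { 18, 22, 19, 21 }; // points finished in recent iterations

            int total = 0;
            for (int points : completedPoints) {
                total += points;
            }
            double velocity = (double) total / completedPoints.length;

            int proposedCommitment = 24; // what the team is thinking of committing to next

            System.out.printf("Average velocity: %.1f points per iteration%n", velocity);
            System.out.println(proposedCommitment <= velocity
                    ? "Proposed commitment fits within measured velocity."
                    : "Proposed commitment exceeds measured velocity -- trim the plan.");
        }
    }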

Retrospectives
For me retrospectives, and what they imply if done well, are the key to agile success. If (for some odd reason) I could introduce a fledgling agile team to just one new idea it would be retrospectives. I've never been a big fan of meetings, but with a retrospective you can get so much:
  • sense of team cohesion ("We're all in this together...something sucks? What would you like to do about it?")
  • shared ownership of process (so many people are used to being "told" what the process is...and now there is a chance for them to shape it)
  • an opportunity to challenge existing dogmas ("our SOPs won't let us do that")
  • a forum for people to shine with insights and ideas that often don't get an audience
  • a plan of improvement and a place to measure that improvement

What I also learned was that retrospectives need a great facilitator. It's all too easy for someone to go around the room and ask what went well and what didn't and get fairly anemic responses: "I guess what went well was we got all our committed work done...can't really think of much that didn't work out so good..." Getting people to surface the less rosy side of things is hard. There are of course those who always have views on this kind of thing (I'm kinda one of those myself) but for others it's awkward. Until people find that volunteering this kind of information is not just acceptable but welcomed, some skill is required in extracting it. One way of doing this is through just lots of dialog. In a one-on-one situation people are often more willing to offer up something that's driving them nuts. I would often then try to find a way to toss this out for discussion without specifically identifying the source.

Twitter
I may stand alone in believing Twitter was a valuable resource that contributed to our agile adoption. Certainly I do not believe I managed to convince any of my peers or my boss to sign up (or if I did they're obviously using a pseudonym and cyber-stalking me). And this despite the copious links to informative blog posts etc. I sent them. Hmm, perhaps I did send a few too many...

Anyway, personally I found Twitter to be a fabulous new tool for finding reading material relevant to agile. I developed a twice-a-day ritual of scanning my Twitter stream and flagging interesting looking links for later reading. I started out with just a few people to follow, but through the way Twitter works I was soon discovering others, from internationally known thought leaders through to ordinary people like myself writing about their experiences. With the recently added "lists" feature of Twitter I have created an agile list which may be of interest to anyone wanting a starting point.


OK I think that's enough for a post. In the next installment I'll talk about some of the subsequent events including how we started to talk more about a specific agile methodology (Scrum) rather than a general notion of "agile" and some of the technical practices we started to put some focus on.

Chipotle Chicken


Ingredients

  • 4 chicken breasts
  • 1 pint heavy cream
  • 3 tbsp chipotle en adobo sauce (search google for recipe, very easy to make)
  • chopped cilantro for garnish
  • butter for frying

Directions

Cover chicken breasts with plastic wrapping and slap it around a bit with a rolling pin until it's uniformly flattened. Heat butter until melted and bubbling over medium heat in a large frying pan. Add the chicken and let it sizzle for a few minutes. Lift it up and have a look underneath; you're looking to see some nice color has been added. Once this is the case flip it over and do the same to the other side. At this stage the chicken shouldn't be cooked all the way through.

Now add the chipotle en adobo -- use more or less than suggested depending on your preference for heat. Pour in the heavy cream and kind of mix it all together. The cream and adobo sauce will blend together and should start bubbling away quite nicely. I usually turn the chicken over a couple of times during this stage. When the sauce has reduced to a thicker state, if all is well the chicken will be nicely cooked too. I usually cut the pieces in half at this point both to check done-ness and because that's a nice serving size. Taste the sauce and if you feel it's lacking add in some salt & pepper.

Transfer the chicken to serving plates. Check your sauce, it may be ready to go or you may want to reduce it further by letting it bubble away a little longer. When ready, spoon it over the chicken on each plate.

Serve with a simple salad of shredded lettuce, diced tomato seasoned with lime juice and black pepper. You may also want to add your preferred Mexi-style accompaniments, e.g. rice, refried beans etc.

Wednesday, November 25, 2009

Book Review: The No Asshole Rule

I recently read Robert I. Sutton's book, The No Asshole Rule. Apparently it was born of an article originally published in the Harvard Business Review. I cannot claim to have read that original article but the book is great; I enjoyed it immensely.

Besides the frequent smirks spawned by the amusement of reading a "serious" work about assholes, there's plenty of just great writing, sound advice and lots of "yeah I agree" moments.

The book is pleasantly short, with just seven exquisitely crafted chapters covering everything from the definition of an asshole in this context (not just someone who pisses you off ;-) through to eradicating them if you have the power, or dealing with them as best you can if you don't. In addition there's material on the virtues of assholes (there are some) and how to stop your own "inner jerk" getting out too often. It was nice to read that we're all assholes some of the time. The acknowledgment of this reality is good, because without it I believe the book would preach an unattainable ideal of asshole-free zones. In other words it's OK to be a temporary asshole; indeed, as the virtues section indicates, sometimes it's necessary to jolt slackers and under-performers out of their stupor.

The section on dealing with an asshole-infested workplace has some solid advice. Since most of us lack the authority (not being the CEO and all) to go on an asshole eradication spree, it's probably the section most useful to most people. In particular I liked ideas like attending meetings with assholes via telephone: if you have a meeting with an asshole in it, at least you can avoid having to be physically in their presence. Similarly, take pleasure in answering their emails slowly, or maybe not even at all. Having a sense of control over the situation, as the author suggests, is indeed a very empowering feeling.

Luckily for me, there are very few assholes I have to contend with these days. Nonetheless, having finished the book there are still a few candidates whose desk I am considering leaving the book on...

Tuesday, November 24, 2009

Agile adoption: Four months in.


Actually I'm not sure if it's four months, it could be a little more, could be a little less. But that's close enough. In some respects I wish I had taken time to blog about this much more frequently, as I think it would have helped me distill my thoughts on how things were going. Possibly even a slim chance someone else may have found it interesting too ;-) But I didn't. I'm going to attribute that to a lack of habit, and maybe I can cultivate that habit more going forward. On the upside, having gone four months or so before posting my thoughts on adopting agile practices, I do feel that I have got a fair bit of experience that has some value to it. Perhaps earlier posts would have comprised vacuous meanderings around the topic of no interest to anyone.

Onwards...

Now that I try to think back, I'm not 100% sure I can entirely recollect what the trigger for the turning point was. The best I can remember, there was some...shall we say, doubt or perhaps concern over whether a particular project we were running was going to be feature complete on time. Let me explain a little bit about where we were at this time:

  • we had a slew of software engineers (by our standards anyway) working away on this rather undefined "refactoring and re-architecting" phase of a major product release
  • somehow, despite some top level goals for this phase being set, there was next to no clarity on what was really going on, nor whether it was worthwhile
  • our customer certainly had no clue what was going on
  • the deadline for delivery had been fuzzy until recently
  • we had some nasty combination of huge horrible "the system shall" type requirements documents and use cases that were more use-less cases
  • nobody had a clue what the QA members of the team were doing besides "regression and exploratory testing"
  • the team was working in two week iterations, although they seemed rather purposeless upon closer inspection; the only thing that they seemed to do was ensure we more or less regularly provided a new build to QA
  • the quality of the builds was low, often emergency "patches" were needed to remedy showstopper defects that somehow made it out to QA
  • nobody was pair programming or doing TDD; the unit tests we did have were...let's just say sub-optimal *cough* not working *cough* or commented out *cough*

Anyway, back to the plot. So in a conversation about the need to critically evaluate whether this project was going to succeed, I may have blurted out something about my distaste for estimating in hours and how we should use points. I had done a little reading on this topic and sensed the potential -- estimating the work left in points and measuring velocity would give us a much better feel for progress I believed. They say you "should be careful what you ask for, as you may just get it" and yeah, I got it. So I went away and read a bit more (should we do S/M/L t-shirt size estimates? Fibonacci sequence?) before probably delivering the most disruptive message ever to my developers the next day. "Yeah this is Jon. I thought it'd be a great idea to estimate what we had left in story points. Have you heard of them? No. Don't worry. We're just gonna give it a go..."

I actually had us doing this wrong of course. Rather than estimating as a team, I "pair estimated" with individual engineers for the features they were responsible for developing. At this stage this was the expedient approach, and looking back I still maintain it was better than doing a "how many hours/days of work have you got left?" exercise. There was no QA involvement in these estimates, but I think that trying to get their involvement would have been too hard at that stage.

We settled on using 1, 2 or 3 as valid values for estimating our stories, mostly because it seemed simplest to get people to think about whether something was "easy, medium or hard" rather than agonize over a more complex set of choices. At this point I hadn't read any good material on what made for a suitable story, so many of our stories were tasks. With hindsight, although clearly not optimal this *still* was a step in the right direction.

It took the best part of a day for me to work with each of the engineers on the team and get their points based estimates for each of the stories (or tasks...) they believed they had left. Once this was done though I felt a distinct sense of progress. It was somehow easier to think in terms of there being about 100 points of work left, and how this therefore implied we needed a velocity of 20 points an iteration (not real numbers, but you get the point...)

With the "stories" and their corresponding estimates in a spreadsheet, and a daily stand up we suddenly started to be able to understand progress like never before. It was euphorically exciting to feel this degree of comprehension and transparency available to us. This feeling was a fleeting thing at first, but as iterations passed by and the velocity stabilized I think we all felt better about predicting success.

Stand up meetings were initially hard. People were late. People rambled. People didn't see the point. The chickens (myself and other managers) interrupted gratuitously to "coach" people on doing it right. We were eager but perhaps ineffective. Stand ups took ages. But, interestingly, we went from weeks slipping by with hidden issues, to impediments surfacing daily. And we'd solve them, or people would agree to work on solving them there and then. This was our second big win after understanding velocity I think.

With a toehold on our agile adoption, we were starting to build momentum. We have done much more since then, which I will write about in follow-up posts to come.

Book review: Bridging the Communication Gap by Gojko Adzic

I just finished reading Gojko Adzic's book, Bridging the Communication Gap. It's subtitled "Specification by example and agile acceptance testing." As a one sentence description, that does a pretty good job of explaining what this book covers.

I was inspired to seek out something like this after previously reading Mike Cohn's User Stories Applied (TODO: my review). It was in Mike's book that I found compelling hope that we could get away from the unwieldy requirements specifications used in my organization. These have traditionally served as the definitive record of what our software should do, yet they have never felt helpful or right to me. Indeed there is much evidence that they don't help at all, but that's another post...

In User Stories Applied, Mike answers the question I think everyone has after getting the quick overview of what a user story is: "Where do the details go then?" After all, given a story such as "As a hiker I want to view a list of nearby trail heads so I can investigate suitable hikes" one is naturally left with many questions. How far away from your current location should the software look for trail heads? Should you be able to vary that distance? What information should be in the list -- trail names? Dog / horse / mountain biking use allowed? etc.

The answer is that these details, rather than being captured in a traditional specification or use case document, can be very beneficially recorded as actual examples and acceptance tests. Understanding this was a real "aha" moment for me. Prior to this, user stories, estimating in story points, measuring velocity and using that to finally have some measure of what a team could accomplish in a fixed length iteration all felt right, but I couldn't quite figure out how we should do requirements.
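
To make that concrete, here's a sketch of how one of the questions above ("how far away should the software look?") could be pinned down as an executable acceptance test instead of a paragraph in a specification. The TrailheadFinder class, its API and the data are all invented for this post; a tiny stand-in implementation is inlined just so the example compiles:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical acceptance test capturing one agreed example for the
    // "view nearby trail heads" story: anything beyond the search distance is excluded.
    public class NearbyTrailheadsAcceptanceTest {

        // Stand-in for the real system under test, inlined only so this compiles.
        static class TrailheadFinder {
            private final Map<String, Double> milesAway = new LinkedHashMap<String, Double>();

            void addTrailhead(String name, double miles) {
                milesAway.put(name, miles);
            }

            List<String> findWithinMiles(double limit) {
                List<String> result = new ArrayList<String>();
                for (Map.Entry<String, Double> entry : milesAway.entrySet()) {
                    if (entry.getValue() <= limit) {
                        result.add(entry.getKey());
                    }
                }
                return result;
            }
        }

        @Test
        public void trailheadsBeyondTheSearchDistanceAreExcluded() {
            TrailheadFinder finder = new TrailheadFinder();
            finder.addTrailhead("Bear Peak", 4.0);       // miles from current location
            finder.addTrailhead("Green Mountain", 12.0);

            assertEquals(Collections.singletonList("Bear Peak"), finder.findWithinMiles(10.0));
        }
    }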

Gojko's book opens by building the argument that traditional approaches to gathering and documenting requirements are flawed. I don't think it would be hard to convince anyone who's spent more than a few years developing software from 100+ page requirements documents that this is the case. What Gojko does, however, is articulate well some of the reasons why. For me though, the real meat of the book is all contained in section II, which provides the concrete specifics of how to go about specifying by example and deriving acceptance tests.

The key points covered that I found valuable are:
  • Confirmation of my own belief in the necessity of customer/domain expert involvement alongside testers and developers. Business analysts should not be a conduit (or worse, filter). There's a particularly good explanation of the role of BAs mentioned in the book drawn from Martin Fowler and Dan North's presentation The Crevasse of Doom.
  • A detailed explanation of what makes good examples.
  • Commentary on the benefits of a project language -- a non technical business oriented vocabulary that enables unambiguous communication for the team.
  • How to go about converting examples to acceptance tests.
  • How to run specification workshops successfully. Besides the information in the book on this topic, I am currently quite enamored with the Nine Boxes approach to interviewing people.

Since reading the book I've had a chance to get a little hands-on with the technique, though not in anger. I took some of the ideas and applied them to something we had already done, where I knew enough about the system to adequately play the roles of customer, tester and developer. I turned 40 pages of crap documentation (no offense intended to the original author) into about three. I'm very much looking forward to seeing us apply this approach to the next release cycles of a couple of our products.

The third section of the book discusses how to go about implementing agile acceptance testing, covering items from how to get people interested in the prospect, to the synergy the technique has with user stories, to the various tools available for automating tests. There is one slightly dubious (but perhaps necessary under adverse conditions) technique mentioned, which is altering one's language to avoid the use of the word "test".

The fourth section is pretty short, and covers how adopting this approach to specifications and acceptance testing alters the work of various roles. This was already covered in briefer terms earlier in the book. This section expands upon that, although to some extent it feels like a bit of a rerun.

All in all the book gets a definite thumbs up from me. My only real complaint would be that the writing style is a little dry, definitely not as readable as some authors. Nonetheless, the effort to read it has been worthwhile from my point of view and I would definitely recommend the book to anyone on a team looking to do requirements better.

Basic Curry

This is my basic tomato-based curry which gives pretty good results, better than something in a jar and better than just using some generic "curry powder". It's pretty flexible and can be the basis for several variations on the theme which are noted at the end.

Ingredients
  • 4 or so chicken breasts, cut into bite sized chunks
  • Large sweet yellow onion
  • Garlic cloves -- 4 or more
  • Root ginger -- an inch or two
  • Chiles - Jalapenos and/or Serranos work well. Use your own judgment depending on how hot you like things. I would usually go with 2 x jalapenos for a middle of the road level of heat.
  • Prego tomato sauce or similar. I know that sounds weird and Italian but it's a great tomato based sauce. Just make sure you use a plain variation, not one full of oregano or mushrooms or something... 
  • Cumin seeds -- 0.5 tsp
  • Cumin/Coriander powder -- 1 tbsp. In Indian grocery stores they sell ground cumin/coriander premixed. If you have separate cumin and coriander powder then just mix them approx 50/50.
  • Garam Masala -- a spice mix. Best value is in Indian grocery stores but I have seen this in regular grocery stores too.
  • 0.5 tsp turmeric powder
  • 1.5 tsp salt
  • Lemon juice -- 1-2 tbsp
  • Cilantro -- about 8 stalks + leaves chopped


How to cook it
Chop the onion, garlic and ginger. Cut the chilies in half - I generally leave the seeds in.

Heat ~3-4 tbsp vegetable oil until hot. Drop in the cumin seeds and they should quickly darken as they sizzle in the oil. Now add the chopped onion and chilies. Fry for 5-10 minutes before adding the ginger and garlic. Cook for a few more minutes.

Add in the cumin/coriander powder, garam masala and turmeric. Stir in well and cook for a minute or so. Now add the chicken and cook until mostly sealed. Add the Prego, cilantro, lemon juice and salt. You'll probably want to add a bit of water to thin out the sauce -- I usually do this by rinsing out the Prego jar with some water and then tossing that in. Stir well and bring up to a simmer. It will probably take about 20 minutes or so for the chicken to be fully cooked.

Taste and adjust as necessary. Might need a bit more salt, lemon juice or cumin/coriander powder to bring things into balance.

You *should* serve this with white basmati rice, and if you do make sure it's not the Californian crap ;-) Get something from the Indian subcontinent. These days I quite often do it with brown rice which takes a little getting used to but is still very nice if not very authentic.

It's also nice to have some raita on the side with this which I make by combining some plain yoghurt, English (hothouse?) cucumber chunks and mint and letting sit in the fridge whilst cooking so the flavors blend. You can also stick some diced chili in that mix if you haven't got enough heat already.

This keeps and freezes pretty well in the event you can stop yourself from eating it all at once.


Variations
  • Consider adding in any of the following (all is nice) up front with the cumin seeds to create a richer flavor: half a dozen cloves, cardamom pods, a cinnamon stick, a star anise, 2 x bay leaves
  • Fenugreek seeds can be added in up front with the cumin seeds to create a very distinct flavor
  • Puree the onion/garlic/ginger and then very roughly cut up a second onion. Start by frying the whole spices then add the chilies and chunks of onion. Fry for a bit before adding the puree to create a "do piaza" (double onion) style curry
  • Cook some potatoes and add these in ~10 mins from the end
  • Substitute the Prego for diced tomatoes
  • Use fresh tomatoes cut up....they reduce down and make a nice sauce usually

Chorizo & Pepper Egg Gratin

Ingredients:
  • Some Spanish (or Spanish-style...) chorizo, i.e. not the stuff commonly referred to in US grocery stores as chorizo. Whole Foods have some on their deli counter which is made in California but a fair approximation of the real thing. See http://en.wikipedia.org/wiki/Chorizo#Spanish_chorizo for more details. I'm gonna be vague on the quantity...just use your skill and judgment. I used about half of what I bought from Whole Foods to make enough for three.
  • A medium onion. Yellow, red...whatever. You can either cut slices or dice it. Recommend not making it too small as you want pieces large enough to stay moderately crisp.
  • A green bell pepper. Cut it into small slices or dice it. Again, not too small...
  • 2 tomatoes -- remove skin using the old hot water trick and then chop
  • 1 or more cloves of garlic, depending on how partial you are to it
  • some eggs
  • some grated cheese (something that melts good, cheddar, gruyere etc.)


You need to remove the "wrapping" of the chorizo, then dice fairly small.

Fry it in your fat of choice (olive oil recommended) over high heat until it starts to brown a little. Push it to the back of the pan and drop in the onion and pepper. Fry for 5 minutes or so then drop in the chopped tomatoes and garlic. Stir everything together with a good seasoning of salt and pepper then let it sizzle as the water from the tomatoes evaporates.

Transfer to an oven-proof dish and then crack as many eggs as you fancy into it. It can help to make a little depression where each egg is going to go to hold it in place. Sprinkle the grated cheese over then bake in the oven at 350F. The original recipe says 12 - 15 minutes should do it. I haven't found this to be the case. Maybe it's an altitude thing, not sure. I've found 20 minutes to be more like it.

Good with ketchup and/or a hot pepper sauce such as Cholula, Tabasco etc.

It should end up looking like this. Being a pretentious git I have sprinkled some paprika over the top :-)


Thursday, July 2, 2009

How would Agile practices help us meet deadlines more reliably?

We've been looking at adopting some Agile practices where I work. I've been doing a fair amount of reading on Agile in general and Scrum and Kanban in particular. Many thanks to some good people I follow on Twitter who have tweeted links to articles and blog posts I've found enlightening and educational.

In an excited but rather ad-hoc fashion I've been talking up all the good and interesting things I've run across to anyone who'll listen. One of my colleagues asked a great question yesterday along the lines of "so how exactly is applying Agile principles going to help us meet the customer's date?"

Try googling for that. I did. Unless things have changed since yesterday you won't turn up any really great articles that answer that very reasonable question directly. This forced me to try and articulate why I thought it would help. Below is my rather stream-of-consciousness reply to him. This could be all bollocks. This could be a gem. I'm a total neophyte here in the very early days of exploring and preparing to try and apply some of these ideas. But right now, this is where my thinking is. In the not too distant future I hope to be able to write from a position of experience. Will be interesting to contrast and compare then...


It has been proven time and again that you cannot reliably take a large set of vague needs and accurately estimate when they will all be implemented and tested. Waterfall methodology says if you fail at this it’s because you are advancing from one phase to the next prematurely, i.e. you needed to have done more requirements analysis and more design before moving to coding, testing etc. This can work if you have absolutely static requirements and a customer prepared to let you spend a hard-to-estimate amount of time doing the Big Up-Front Design™ phase without limit. Even then the phases that follow are still historically proven likely to slip unless significant padding is added to estimates and a hardline approach applied to scope creep (i.e. allow zero).

In most organizations true waterfall with the Big Up-Front Design™ is not feasible because:
  • organizations are not prepared to allow a large but unspecified period of time in order to get to a stage where you can predict the remaining 2/3 of the project time
  • requirements cannot be frozen for such long periods of time
  • the skills are not there to do requirements and design that are rigorous enough to allow a trouble-free implementation and testing phase

Therefore, if most organizations cannot successfully develop software in this way one should consider other approaches. The broad umbrella term “Agile” arose to cover a general approach with a number of specific “brands” of methodology recognized as adhering to Agile principles.

The well known ones are RUP, DSDM, XP, Scrum and Kanban. Many of the differences surround the extent to which a particular methodology is prescriptive. For example, XP specifies things you must have/do/generate that Scrum does not. Scrum specifies that you should have certain things, but fewer than XP. Kanban specifies very little. It is not uncommon to take what you decide are good practices from one methodology and incorporate them in another. For example paired-programming is a technique that originated in XP, isn’t mandated by Scrum or Kanban, but many people employing either of those techniques would still choose to use it.

The general themes that unify Agile thinking can be found expressed in the Agile Manifesto (http://agilemanifesto.org/principles.html) although they are pretty high level.

The concrete things that can be commonly identified as Agile practices include breaking work into fixed length iterations, estimating in story points, paired programming, holding retrospective meetings as a vehicle for continuous improvement, measuring team velocity, eroding barriers between different roles in the team, using daily stand ups to monitor progress etc.

So now to answer your original question, of how do these practices help us meet deadlines. There are two parts to this.

PART ONE
I believe we have to fundamentally alter what a deadline is. Instead of it being something we project after consuming some set of rough requirements, we want to set a fixed regular product release schedule.

Let’s say we deliver tracking every 6 months. Why? Because by doing this we now *know* already when we’re going to deliver it. The game now fundamentally changes from “I need you to get all this poorly specified vague stuff done by August” to the customer working with us to *get the highest value features added before the next release*.

This is a much more favorable mindset to put them in. They become proper stakeholders and team members. They work with us to prioritize what they truly need over what they do not need. It’s Project Portfolio Management on a micro/individual project scale.

The notion that certain people in my organization advocate, that we can’t change deadlines or scope, is not tenable. But they will continue to expect it in a mostly waterfall environment because it’s set up such that we get some needs, we estimate, and then we are held to that. Yet it’s so obvious that this is impossible.

Therefore the game-changing approach of saying we release on a 6 monthly basis, so now work with us to figure out what to get in there, is I feel potentially very potent. Now sure, at the start of 6 months they/we will want to sketch out all the things they would like at the end of 6 months. But they have to be broken into stories and prioritized. And we have to accept that as we advance through the release cycle we *will* have to consider dropping some of the lower priority features. Why? Because we have fixed two of the three variables: budget (resources) and time (fixed date). Therefore you can only manage on that third axis: scope.


PART TWO
If you can achieve buy in that the game changing alteration outlined in part one is the best known way to manage product development then you need a detailed way to operate during those 6 month release cycles.

Those techniques identified above will help. Specifically the really potent one is measuring velocity. Why? Because it will finally give us a way to know an approximation of team output. We have not had that before. If you have no measure of how much work the team can get done then projecting what can get done by when is somewhere between hard and impossible. This is why estimates are crap.

Using velocity involves points-based estimating and seeing how many points we can on average get through per iteration; that average is the velocity. Once you have that information and everyone has established a reasonably reliable pattern for generating points based estimates and burning through X number of points in an iteration, you are able to do several useful things:
  • look at your backlog of prioritized features, tot up their value in points, and see when you can get them done by
  • figure out the impact of vacations more easily (25% of the team on holiday? You can do roughly 25% fewer points this iteration -- see the little forecast sketch after this list)
  • challenge the team to improve their velocity – typically via retrospectives
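
Here's a back-of-the-envelope sketch of the first two bullets. Every number is invented, and a spreadsheet is the natural home for this; it's just to show how little arithmetic is involved once velocity is being measured:

    // Hypothetical forecast: how long the prioritized backlog will take at the
    // team's measured velocity, and what a vacation-hit iteration can absorb.
    public class ReleaseForecastSketch {

        public static void main(String[] args) {
            int backlogPoints = 120;       // prioritized features still to do, in points
            double averageVelocity = 20.0; // points the team typically completes per iteration

            double iterationsToClearBacklog = Math.ceil(backlogPoints / averageVelocity);

            // 25% of the team on holiday next iteration => plan for roughly 25% fewer points.
            double vacationAdjustedCapacity = averageVelocity * 0.75;

            System.out.printf("At full strength: about %.0f iterations to clear the backlog%n",
                    iterationsToClearBacklog);
            System.out.printf("Next iteration, with 25%% of the team away: plan for about %.0f points%n",
                    vacationAdjustedCapacity);
        }
    }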

So the *$64 million question* is how do you get the customer to buy into this? Well I think the answer is to get them to recognize that the alternative doesn’t work and that people are employing Agile very successfully. One definition of insanity is to keep doing things the same way and expecting different results. Let’s not be insane. Let’s change the way we do things and be successful.

Saturday, June 13, 2009

On matters of code style

(Or, “You know where you can shove your curly brace!”)

Source code is written documentation. It is the best, most accurate, in fact perhaps *only* accurate documentation you have. It completely and perfectly describes what your application does. No requirements specification, technical specification, UML diagram or anything else comes close to the comprehensive precision of your source code.

Now lots of written material is read for entertainment. The perhaps-soon-to-be-obsolete newspaper, novels, even for many people, non-fiction. But the first rule of software documentation is nobody reads it unless they have to. Except for a rare few, reading source code is not entertainment. When they are reading software documentation, people are reading it in pursuit of some goal. An emergency bugfix perhaps, or considering how to fit in a change to effect some new functionality. Perhaps looking to improve application performance or lift some existing code as the basis for another project.

Whatever it is, people have an objective when reading software documentation. And like I said, your source code is your *best* documentation. Unlike those written specifications and pretty pictures it is always up to date. It doesn’t suffer from the vagaries and ambiguities of the English language. It doesn’t require updating to reflect the latest design and behavior, it *is* the design and behavior.

Any non-trivial piece of software typically has quite a lengthy shelf life -- often far longer than initially anticipated, and typically many years for business applications. Over its not inconsiderable life, with the ebb and flow of need, bugs will be fixed, features added, code refactored, performance enhanced and third party libraries updated. All of this by a (more than likely) ever changing team.

Now most, if not all, source code is read on screen. Some people might like to print it out for scrutiny and reflection, but not the majority, whether seeking out a bug or peer reviewing a colleague’s work. Reading on screen is hard. Today I stumbled across a copy of Jakob Nielsen’s “Designing Web Usability” when rooting about in my furnace room. I dipped into it as I remembered him having something to say about content being read on screen. According to him people read 25% slower on screen, and in a study he conducted 79% of people scanned or skimmed rather than read word for word. I think it’s pretty fair to say that this is *exactly* how software engineers read code.

Consider then the trio of factors here:
  1. source code is the best form of documentation you have for an application
  2. it lasts a long time and will be read by a variety of people over its lifetime
  3. it will be read on screen, likely skimmed and scanned by busy people on a mission

It seems blindingly clear to me then that the layout and style of your code is important. People need to be able to read, digest and work with source code as quickly as possible. I don't think it's any exaggeration to claim that many hundreds of hours and thousands of dollars can be saved over an application's life if the code is easy to read. A simple back-of-the-envelope calculation demonstrates this for almost all business software.
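
For example, here's one such back-of-the-envelope sketch in Java. Every figure in it (team size, hours spent reading code, hourly cost, the assumed 10% reading speedup) is invented purely to illustrate the shape of the argument, not measured from anything.

```java
// Back-of-the-envelope sketch only. All figures below are invented for illustration.
public class ReadabilitySavingsSketch {
    public static void main(String[] args) {
        int developers = 5;               // hypothetical team size
        double hoursReadingPerDay = 2.0;  // hours each developer spends reading code daily
        int workingDaysPerYear = 230;
        int yearsOfLife = 5;              // assumed shelf life of the application
        double readingSpeedup = 0.10;     // assume readable code is read 10% faster
        double hourlyCost = 75.0;         // assumed fully loaded cost per developer hour

        double hoursSaved = developers * hoursReadingPerDay * workingDaysPerYear
                * yearsOfLife * readingSpeedup;
        double dollarsSaved = hoursSaved * hourlyCost;

        System.out.printf("Hours saved over the application's life: %.0f%n", hoursSaved);
        System.out.printf("Approximate savings: $%,.0f%n", dollarsSaved);
    }
}
```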

So we need to make our code easy to read; we need to give it a layout and style that supports all those subsequent times it will be read, debugged, modified, refactored and reused. This of course raises the question: what characteristics make code easy to read? I believe it can be done by adhering to two simple principles:
  1. write code that is easy to scan
  2. use identifiers that are descriptive

What makes your code easy to scan is going to vary a bit depending upon the language, but I think there are some general things to keep in mind. Here's my list, spawned for the most part from writing Java for the last 7 years or so (a small illustrative sketch follows the list):

  • I want to be able to quickly see the start and end of classes. This is not normally an issue since there's one per file...unless you're using inner classes which can make things a little more exciting.
  • I want to be able to easily see significant independent pieces of code, specifically the start and end of methods, iterative blocks and conditionals
  • Method arguments should be highly scannable
  • Judicious use of alignment and indentation can provide an easier to scan layout
  • Remember, white space is free. Use it to improve readability.
  • Apply your chosen style and idioms habitually. When you're in some code and notice it doesn't adhere to your preferred layout and style, consider changing it. It's an easy, quick improvement that will probably pay dividends.
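
As a small, purely illustrative sketch of those scanning points (the class, method and domain here are all invented, not taken from any real codebase):

```java
import java.util.List;

// Hypothetical "before": the same logic crammed together, hard to scan.
class OrderTotalsBefore {
    double total(List<Double> prices, List<Boolean> taxable) { double t=0; for (int i=0;i<prices.size();i++){ if (taxable.get(i)) { t+=prices.get(i)*1.07; } else { t+=prices.get(i); } } return t; }
}

// Hypothetical "after": one statement per line, with indentation and white space
// making the start and end of the method, the loop and the conditional obvious at a glance.
class OrderTotalsAfter {

    private static final double TAX_MULTIPLIER = 1.07;

    double total(List<Double> prices, List<Boolean> taxable) {
        double runningTotal = 0;

        for (int i = 0; i < prices.size(); i++) {
            double price = prices.get(i);

            if (taxable.get(i)) {
                runningTotal += price * TAX_MULTIPLIER;
            } else {
                runningTotal += price;
            }
        }

        return runningTotal;
    }
}
```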

Descriptive identifiers need a bit of thinking. Sometimes the way something should be named is obvious. Sometimes, however, it's not, and it merits discussion with peers. Sometimes a good name comes to you later, after your first stab at things; this is where the "rename" refactoring in IDEs is really useful. Poor naming is worth fixing; left untouched, people waste mental energy dealing with it that could be better spent. In particular, give due consideration to the following (a small sketch follows the list):
  • Good clear names for packages, classes, methods and variables
  • Exploiting the existence of a domain-specific vocabulary. However, clarity is key; sometimes business users are inconsistent and you need to work hard to identify synonyms or subtle differences in seemingly interchangeable terms
  • Brevity. Overly long unwieldy identifiers are not easily scannable. Similarly strive to reduce code. Less is more and all that...
  • Avoiding redundancy in names, e.g. consider whether you really need to prepend some system identifier to a series of classes, or append 'Servlet' to the name of every servlet that you write...
  • Using plain, direct language. Overly abstract terms are often unhelpful. Complexity is inevitably there in most systems, but aim to simplify terms when reasonable to do so.
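
And here's an equally hypothetical sketch of the naming points; the domain, class names and prefixes are all made up for the sake of illustration:

```java
// Hypothetical "before": abbreviated, redundantly prefixed and vague names.
class AcmePolRenSrvc {
    void proc(int n) {
        // ...renew the policy identified by n...
    }
}

// Hypothetical "after": plain domain vocabulary, no redundant system prefix,
// descriptive but not unwieldy identifiers.
class PolicyRenewalService {
    void renewPolicy(int policyNumber) {
        // ...renew the policy identified by policyNumber...
    }
}
```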

Finally, I was going to round this off with a fuller illustrative example: some real code laid out in gnarly 'before' fashion followed by an 'after' rendering of almost poetic beauty. But since this post has languished in unpublished draft mode for months I've decided I'm never going to get around to that, so the small sketches above will have to do.

Wednesday, May 27, 2009

Chicken Makhani

Ingredients for stage one
  • 1/4 cup vegetable oil
  • 2 onions
  • 1/2 cup chopped garlic
  • 2 cups tomato sauce (not Heinz...the liquidy stuff in tins)
  • 1 cup plain yoghurt (don't use low fat or fat free...it'll all go horribly wrong)
  • 2 tbsp cumin/coriander (i.e. 1 tbsp each)
  • 1/2 tsp garam masala
  • 4 cloves
  • 1 cinnamon stick
  • 1" root ginger, peeled and chopped
  • 3 bay leaves
  • 2 cardamom pods

Ingredients for stage two
  • 1 pint heavy cream
  • 1/2 stick butter
  • 1 tsp paprika
  • 1/2 tsp ground black pepper
  • 1.5 tsp salt
  • 1/2 tsp hot chili powder
You will also need about 6 - 8 chicken breasts cut into bite size pieces.

Puree the onions, garlic and ginger in a blender. If you have a way to grind up the cinnamon stick and cloves do so. If not they can go in whole.

Heat oil till hot but not smoking, add cardamom and bay leaves. If you are not grinding up the cinnamon stick and cloves, add them now. Stir for a few seconds then add the pureed mixture. Cook for 10 - 20 minutes, stirring regularly to prevent it catching and burning. Add everything else from the stage one ingredients list and bring to a boil. The yoghurt may appear to curdle, but if it does don't worry -- everything will right itself shortly.

Now add stage two ingredients and return to a boil. Add the chicken pieces and simmer for 20 minutes or so until chicken is cooked and the sauce has taken on a slightly thicker consistency.

Serve with basmati rice and naan bread.

Naan Bread

  • 1/2 cup plain yoghurt
  • 1/2 cup milk
  • 1/2 tsp baking powder
  • 1 tsp white sugar
  • 2 tsp dry yeast
  • 1/2 tsp salt
  • 1 tsp nigella (aka black onion seeds) 
  • 2 tbsp melted butter
  • 2 eggs, lightly beaten
  • 3 cups plain flour

I've noticed that Indian restaurants in the US don't seem to put nigella in their naan bread, whereas those in the UK do. Having spent most of my life in England I find them to be essential. I've made naan in the past without them and the results are OK, just missing that unique nigella flavor. In my opinion they are well worth seeking out. I found my last batch in an Indian grocery store. Whole Foods used to sell them, but last time I looked they had greatly reduced the variety of spices they carry.

Start off by placing the yoghurt and milk in a large glass bowl and heating for approximately a minute or so in the microwave. Warming this up helps activate the yeast allegedly...

Next add the baking powder, sugar, melted butter, eggs and yeast to the bowl and stir well with balloon whisk to incorporate everything. Finally add the nigella seeds and stir to distribute.

Switch from the whisk to a wooden spoon and begin adding the flour. Add about a cup at a time and then stir it in. When you have added all three cups, if the dough is still too sticky, add up to an extra half cup to make it easier to manage.

Turn out the dough and knead for several minutes until it starts to have that nice sheen and elasticity that well-kneaded dough has. Roll into a ball and cover with the glass bowl. Leave to rise for about an hour at room temperature. The dough should approximately double in size.

Once the dough has risen, divide it into at least four and as many as eight pieces (depending on how many you have to feed and how big you like your naan...)

Take each piece of dough and "encourage" it into the typical naan shape. This is a very approximate thing (for me at least) achieved through stretching, pressing and other general forms of manipulation as needed to get something reasonable looking.  In general the dough should end up pretty thin, a millimeter or two. No idea what fraction of an inch that is but not much...

Melt some more butter (perhaps 2 tbsp or so) and brush each of your naan lightly on the top side. Now place them butter-side down in a large hot skillet/frying pan that you have pre-heated and quickly brush the second side with a little butter. It should only take a couple of minutes before you need to flip the naan over. Use a fish slice (spatula) to check on progress and ultimately turn the bread.

As each naan is finished transfer to a warm plate and keep covered with something (clean tea towel, foil, etc.)

These are great with any curry, but especially those with a lot of sauce.

Monday, May 4, 2009

Why we refactor code

Formulating an explanation in plain English for the non-technical

Late last week I received some “intelligence” that during a product meeting, one of our senior stakeholders exclaimed something along the lines of

“Why haven’t we made more progress on new features? Why have we spent all this time on refactoring? I don’t care about refactoring, it’s just fluff and no use to me.”

Unfortunately I missed this meeting. Or maybe it was fortunate. I don’t know that I’ve ever had to articulate to a non-technical person the purpose and benefits of refactoring code, and may not have done a great job if I’d tried to ad-lib. So over the weekend I mulled it over a bit, looking for a way to explain it that would resonate with someone like this.

The explanation had to be quick to explain and make patently clear the need for, and benefits of, refactoring. That itself got me thinking more deeply than ever before about why we refactor. Below is what I came up with. It’s all expressed in the context of the particular product that gave rise to this challenge, but I think the argument holds up across the board.

Here’s my basic premise: building software is like a game of Jenga™, and if you don’t do an appropriate amount of refactoring over its life the game will end sooner rather than later and everything will come crashing down. If that sounds odd, bear with me and I’ll explain. Let’s start with some background on the product that had given rise to the question:

  • The product is a framework from which we build dozens of in-house web applications. They are all basically CRUD applications which vary a little from the functionality of the main framework (additional fields for example).
  • The framework has had 7 releases over 5 years
  • The first release had just over 1100 classes, the latest release has nearly 1800. That’s a 60% increase.
  • Over its lifetime, at least 18 different developers in three different offices in two different countries have worked on it. There have never been more than five people working on it concurrently, so that's a fair amount of staff turnover.
  • The applications we build from this framework have the expectation that they can upgrade (or migrate as we call it) to newer releases of the framework than that which they were built upon

Now let's build (no pun intended) the Jenga analogy. Consider that each block in the stack is some piece of code, a class perhaps. Now obviously classes are more complex than this – they vary in size and have more complex dependencies – but as a simple model it will work.

Now, each release of the framework involves coding. That often means changing an existing class: pull out a block, "tweak" it, and then of course stack it on top. This represents how modifying existing code can lead to a more fragile situation. Unlike regular Jenga where you just use the existing blocks in the tower, coding involves adding new blocks. So your tower gets bigger over time, potentially making it a little more wobbly too. The other notable difference to help complete the analogy would be that whereas Jenga is played with 54 blocks, my framework here started with over 1100. And of course aggressive deadlines mean one doesn’t always have enough time to carefully consider how to position those blocks to best effect, but that’s how it goes...

After a few releases, our tower is looking pretty wobbly, although it still “works” just fine to the casual observer. After all, it’s still standing.

Business users: “We need you to add this new stuff.”

Engineers: “But those blocks are a completely different shape. Some of them are really oversized, and heavy. Besides, have you seen the foundations?”

Business users: “Yeah but if you could just tack them on somehow, that’d be great.”


See, the problem is that the blocks are all rather precarious now. And some of them we never wrote ourselves; we just use them. But they're old, unsupported and clunky. We need to replace them or update them. We need to shore up the foundations. We've also noticed in the last five years that the business uses them in ways we didn't appreciate when we first assembled them.

We need to reorganize the blocks a bit. We need to get rid of some of them and simplify others. We need to make some new blocks too, better suited to the way we have learned they actually get used. Considering how much we use them, that will save money. In short, we need to refactor.

Monday, April 27, 2009

Stuffed Tomatoes

This is a recipe my wife and I discovered when toying with veganism (which was ultimately a failure, although we did pick up some good additions to our repertoire). Anyway, it was nice enough that it's been kept around and we still make it from time to time. The original recipe has you bake it in the oven, which is good...but to take it to a whole new level I highly recommend doing this over charcoal with some hickory wood chips for extra flavor. If you take the BBQ approach you don't want terribly hot coals, so either cook something else first or don't place the tomatoes over direct heat.

  • 1/2 cup of orzo pasta (the small stuff that looks like rice)
  • 4 large ripe tomatoes (such as beefsteak)
  • 1/3 cup of pine nuts, briefly toasted in hot skillet
  • 2 cloves of garlic, finely chopped
  • 2 tbsp finely chopped flat leaf parsley
  • 2 tbsp roughly chopped basil leaves

Cook the orzo according to instructions. Drain and rinse in cold water till chilled so it will keep without sticking together.

Whilst the orzo is cooking, slice the top off each of the tomatoes and then scoop out the insides using a spoon. Chop the tomato pulp and place it in a mixing bowl. Add the pine nuts, orzo, garlic, parsley and basil. Season with some salt and black pepper then mix well.

Stuff each of the tomatoes with the mixture, top with previously removed slice and drizzle with some olive oil.

If cooking in an oven, have it preheated to 375 degrees F, place on a foil covered baking sheet and bake for 20 minutes. If cooking on the BBQ, use your skill and judgment... :-)

Once cooked, serve with a generous amount of yellow pepper coulis which can be made as follows: fry one chopped onion and 3 chopped yellow bell peppers in a little olive oil. After 5 minutes add half a cup of water and some salt and pepper. Simmer until very soft, perhaps 15 - 20 minutes. Add more water if necessary. When done puree in a blender.

Gazpacho Soup

I really like this soup, the flavors are amazing. Truly one of those things where the whole is greater than the sum of the parts. Also, the fact that everything in it is raw (besides the bread) still strikes me as some kind of magic.
  • About 2lbs very ripe tomatoes
  • 1 large English-style cucumber, peeled and diced
  • Half a baguette - a little stale is considered better. For those avoiding white flour I've used wholewheat bread too and it seems OK.
  • 1 green bell pepper, de-seeded and chopped reasonably small
  • 1 cup (a little under half a UK pint) extra virgin olive oil
  • 6 scallions (spring onions) roughly chopped
  • 1 - 4 cloves garlic (how much do you like garlic?), roughly chopped
  • 3 tbsp white wine vinegar
  • 1 tsp paprika
  • salt and pepper to taste (probably 1/2 - 1 tsp salt is what you want)
  • 1 pint (2 cups) or less water

Tear the bread into small pieces and leave to soak in 1 cup of water. Immerse the tomatoes in just-boiled water briefly and then "encourage" the skins to split after a few seconds with a sharp knife. Pull them out of the water and remove skins. Chop tomatoes, removing seeds first.

Place chopped tomatoes into a large bowl along with the pepper, cucumber, scallions, garlic, bread and olive oil; stir to combine. Add the white wine vinegar, paprika, salt and pepper. Lastly add remaining water (1 cup). Stir again to fully combine.

Now transfer the contents of the bowl to a blender - you may need to do this in two or three batches due to the volume. Puree the mixture.

Obviously you can use more or less water depending on the consistency you prefer.

Once blended, the soup should be chilled in the fridge for at least an hour. This also allows the flavors to really develop.

To serve you can add a garnish of avocado cubes, diced red peppers, sprinkle of paprika etc.