Tuesday, April 24, 2012

Cloudy With Metrics

I’m working on a team that is responsible for a Windows desktop application that allows the display and manipulation of medical images. Medical images are just what you would imagine: x-rays, ultrasound, MRIs, computed tomography etc.

These images are large – hundreds of megabytes (more for ultrasound video) – and therein lies one of our big challenges: high performance for users everywhere. Many of our users are in southern India or Europe, while our images sit in a datacenter on the East Coast of the United States.

Now while we do have the luxury of nice big pipes connecting these locations, they inevitably suffer from fairly high latency (~300ms for India) due to the physical distance between them. That level of latency and TCP do not mix well.

This leads to a very disappointing rate of throughput even when ample bandwidth is available. Improving this situation can be boiled down to the application of just three techniques:
  1. Move the data faster
  2. Move less data
  3. Move the data before you need it
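To put numbers on why latency and TCP mix badly: a single TCP connection can never move data faster than its window size divided by the round-trip time, no matter how fat the pipe. Here's a back-of-the-envelope sketch in C# (the 64 KB figure is the classic default receive window when window scaling isn't in effect; your stack's numbers will vary):

```csharp
using System;

class ThroughputEstimate
{
    static void Main()
    {
        double windowBytes = 64 * 1024;   // classic default TCP receive window (no window scaling)
        double rttSeconds = 0.3;          // ~300ms round trip to India

        // Max throughput of one TCP connection is roughly window / RTT
        double bytesPerSecond = windowBytes / rttSeconds;
        double megabitsPerSecond = bytesPerSecond * 8 / 1e6;

        Console.WriteLine("{0:F1} Mbit/s", megabitsPerSecond);   // ~1.7 Mbit/s, regardless of pipe size
    }
}
```

At 300ms that works out to under 2 Mbit/s per connection, which is why techniques that sidestep TCP's windowing (such as UDP-based transfer) pay off so handsomely on long-haul links.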

For us, the option of moving the data before it was needed was already off the table. Pre-caching image data is an approach we had been using for years with a legacy product. Although it worked respectably well, it was far from transparent to end users. With some regularity, data that people expected to reach remote users had not done so in a timely fashion, and much gnashing of teeth and submitting of helpdesk tickets ensued. Caching was a four letter word, so to speak.

So, we focused on moving the data faster, and moving less of it to boot. Exactly how we did that is not really the focus of this post (but so as not to leave you hanging, scaling of images and UDP were key).

What this post is really about is how we made use of Amazon’s cloud-based SimpleDB as a super-simple means to capture in-the-field metrics on the performance and usage of our application.

In our earliest pilot phase we displayed time to load and throughput in the application’s status bar, and users could relay that information to us (yeah, that was as unsatisfactory as it sounds). It was equally possible to log things to a user’s machine, e.g. with Log4Net or the Windows Event Log, but that approach makes access to, and consolidation of, the captured data more involved than we wanted it to be. So while we could have captured this data without a cloud-based solution, a single, centralized point of capture was clearly the more appealing option.

One way to get a centralized means of capturing this data would have been to build ourselves a small database, front it with a set of web services and use that. But the effort involved in getting that done and deployed is, as with many organizations, non-trivial, and we were looking to slip this functionality in ASAP.

By comparison, Amazon’s SimpleDB is exactly that – simple.

It’s “schema-less”, so there’s no DDL to create tables and so forth. Instead, data is stored in domains (akin to tables; you do need to create these ahead of time) made up of items (just like a row) as a set of attributes (a bit like a column, but better thought of as name-value pairs).

But enough English, here’s some C# to show just how easy it can be to stash some data into SimpleDB. It can in fact be even easier than this, since here I’m using the asynchronous approach to putting data in.

        private void LogMetricToCloud(IMetric metric)
        {
            string domain = metric.GetType().FullName;
            CreateDomainIfNeeded(domain);

            try
            {
                PutAttributesRequest request = new PutAttributesRequest()
                    .WithDomainName(domain)
                    .WithItemName(Guid.NewGuid().ToString());

                foreach (var attribute in metric.Attributes())
                {
                    request.WithAttribute(new ReplaceableAttribute()
                        .WithName(attribute.Name)
                        .WithValue(attribute.Value));
                }

                // Passed through to the callback so a failed put can be logged with context.
                string state = metric.GetType().FullName + ":" + metric.CSVData();
                IAsyncResult asyncResult = _db.BeginPutAttributes(request, new AsyncCallback(SimpleDBCallBack), state);
            }
            catch (Exception e)
            {
                TryLogError(e.Message);
            }
        }

        private void SimpleDBCallBack(IAsyncResult result)
        {
            string state = result.AsyncState as string;
            try
            {
                // If there was an error during the put attributes operation it will be
                // thrown as part of the EndPutAttributes method.
                _db.EndPutAttributes(result);
            }
            catch (Exception e)
            {
                TryLogError(string.Format("Exception: {0}\nFailed to log: {1}", e.Message, state));
            }
        }


I think that what the code does and how is fairly self-evident, but that could just be because I’ve been working with it. Just to make sure it’s clear, here’s a breakdown:
  • I have a number of different classes of metric, all conforming to the IMetric interface
  • I have a domain (think: table) for each different kind of metric
  • As a concrete example, one of my metrics measures feature utilization; features are things like flipping and masking images. For feature utilization I record who, where, when, what project and what feature they used.
  • The LogMetricToCloud method simply iterates through all the attributes of a metric and creates a request to persist this in my SimpleDB instance.
  • The request is made asynchronously, with the creatively named SimpleDBCallBack method being invoked upon completion. The call here to EndPutAttributes will let us know if any problems occurred with the original request.
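For reference, here is roughly the shape of the interface the code above assumes. To be clear, this is a hypothetical reconstruction inferred from the call sites (Attributes() and CSVData()), not our actual production definition:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the interface LogMetricToCloud relies on;
// inferred from its usage above, not the real production code.
public interface IMetric
{
    // The name-value pairs to persist as SimpleDB attributes.
    IEnumerable<MetricAttribute> Attributes();

    // A flat CSV rendering of the metric, used when logging errors locally.
    string CSVData();
}

public class MetricAttribute
{
    public string Name { get; set; }
    public string Value { get; set; }
}
```

Each concrete metric class (feature utilization, load time, throughput and so on) just implements this pair of methods, and the logging code treats them all uniformly.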

All of this seems to be working pretty well, and I hope that in the near future we can start to make even more use of SimpleDB. Next on the list for me is implementing some “feature toggle” type functionality for our application. With that we can entertain some of the ideas described in this great article by Jez Humble about core practices for continuous delivery.



Tuesday, June 7, 2011

DICOM non-technical introduction: mind map

Earlier today I went through the non-technical introduction to DICOM on the RSNA (Radiological Society of North America) website written by Steven C Horii, MD. As I was doing so I compiled a mind map using the excellent SimpleMind tool to help jog my memory on this stuff in the future.

Click the image below for the full size PNG or it's also available here as a PDF.


Monday, June 6, 2011

Understanding progress: on points, velocity and when to add new stories

Building software takes time. Usually enough time that people are interested in monitoring progress and understanding when it will be done. Agile teams often use User Stories as a unit of work. Typically these are estimated in points enabling a team to record their velocity, that is, the number of points completed per sprint.

With this data, teams have a simple means to show progress. Interested parties can follow along quite easily, seeing how each sprint eats away at the features in the backlog and how many points are left to complete the product. Using the team’s average velocity will give a nice indication of how many more sprints are required to complete the features in the backlog: points remaining ÷ average velocity = sprints remaining.
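To make that formula concrete, here's the arithmetic in code (the backlog and velocity numbers are invented for illustration):

```csharp
using System;

class SprintForecast
{
    static void Main()
    {
        int pointsRemaining = 120;      // sum of estimates left in the backlog (made up)
        double averageVelocity = 18;    // average points completed per sprint (made up)

        // points remaining ÷ average velocity = sprints remaining.
        // Round up: a partially used sprint still occupies the calendar.
        int sprintsRemaining = (int)Math.Ceiling(pointsRemaining / averageVelocity);

        Console.WriteLine(sprintsRemaining);   // 120 / 18 = 6.7, so 7 sprints
    }
}
```

With two-week sprints that's roughly three and a half months of work remaining, and the estimate refreshes itself automatically as each sprint updates the average.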

There is one wrinkle to this otherwise simple scheme. As anyone who has developed software can tell you, the devil is in the detail. Something that initially looks simple can end up being more involved than originally estimated. Case in point: my team recently had a story like the following for a desktop client image processing product they are building:
 
     As an imaging assistant
     I want to be able to open JPG images
     So that I can process them

The story seemed to be straightforward and was implemented easily and quickly. All was well until we tried some particularly large JPG files. At this point things blew up with out of memory errors and the like.

So was handling large JPGs a new feature and thus a new story? Or was it obviously part and parcel of the original? In other words, the question is how do you deal with this and still report easily understandable progress?

The short answer is it doesn’t really matter. You can add new stories to the product to cover the new work that you discover. Or you can just do the work that was implied but not necessarily obvious from the original story. The formula for figuring out how many sprints remain still works either way. Taking the former approach your velocity is likely to remain fairly stable. Taking the latter approach you may see it bounce around a little bit more, or see it dip from historic levels if the team had a velocity established through more predictable types of work (e.g. maintenance on a well known product.)

The longer answer is that each approach has its own pros and cons. Depending on your situation one may be better than the other.

Adding a story, pros:
  • Stakeholders can see that the amount of work we originally imagined was involved has grown, and have more chance to comment on the necessity of these items – perhaps the complexities and edge cases discovered aren’t that valuable.
  • It probably shouldn't matter, but velocity remains stable and nobody feels the need to cross-examine the team and ask "Why has your velocity dropped?" Done this way velocity may even serve as a crude indicator of team performance and improvements can be seen in higher velocity.
Adding a story, cons:
  • It probably shouldn't matter, but there's likely a class of stakeholder that will question why all this "extra" work is emerging and ask how come we failed to identify it in the first place.
  • If the team has the pleasure of using a computerized agile lifecycle management tool then each additional story is another thing to enter, estimate, prioritize, track and update status on etc.
  • The number of points to complete the originally envisaged release keeps growing making some people anxious: "We don't know how much work is left to do, how can you ever hope to predict when it will be done?"
Not adding a story, pros:
  • Nothing extra to track (though we probably need to clarify the acceptance criteria of the original story)
  • Number of points to complete original feature set for the release remains the same (unless we discover a need for genuinely new features)
Not adding a story, cons:
  • The potential roasting of the team: "Why is your velocity erratic/dropping?"
  • The team might miss an opportunity to push low value work down the backlog.
Personally I'm most strongly drawn to the idea of not adding in extra stories. I like the simplicity and minimalism of this. The more items there are in the backlog the harder it is to grok the thing as a whole and more busywork goes into managing it all. I don’t think the “pros” of adding in extra stories are powerful enough to make it the preferred approach. And although there is the potential “con” of the team getting questioned about why their velocity is erratic or dropping, I believe this can be explained quickly in simple terms. I also think it’s a lot more straightforward, even comforting, for stakeholders to see that the size of the backlog remains fairly stable unless things they too understand as new work (features) get added.

Wednesday, May 18, 2011

Enemies of Agility: The Dirty Dozen

Developing software is hard. Agile software development represents one way to deal with the complexity and difficulty in a manageable fashion. Arguably the best way we've figured out so far. But it's not easy, and there are many impediments on the road to becoming a highly adept agile team. Below I present the "dirty dozen" impediments, or key enemies of successful agility that I've observed.

1. Management doesn’t understand agile
An individual team or two can get quite a lot of improvement by themselves. But to really scale up agility in an organization requires a deep understanding and commitment to the guiding principles from managers across a range of functional areas. Attaining that in a large corporate environment is surely hard. Even among the engineering or IT groups that might have more capacity or interest to “get” agile at a management level there are challenges. How can people many years distant from the act of actually creating software really understand what changes are occurring on individual delivery teams if they haven’t experienced them first hand themselves? How can they help shape and encourage greater adoption and performance, policies and culture if they are not actively engaged in a deep meaningful way with agile? How can they negotiate effective cross-department agreements when they don’t really understand how their teams are working?

It’s hard. Not impossible though. What it requires is a willingness at the management levels above individual delivery teams to immerse themselves in as much agile as they can. Go to conferences. Get the deep dive that your two day agile orientation course didn’t give you. Read books. Lots of books. Join mailing lists and discussion groups. Read the blog posts people put out there. Follow the thought leaders on Twitter. Participate on a team for an iteration – there are lots of ways for people of different skills to do this. Pair with the PO to groom the backlog. Pair with a coder and/or tester. Do some exploratory testing, etc. etc.

Once managers have experienced being and doing agile they have a much greater chance of extending those principles up and across the organization.

2. Team doesn’t understand agile
Understanding agile requires more than sending one (or more) of the team off for a two day Certified Scrum Master course. And it’s considerably more than saying “go do agile.” I’ve talked with lots of people for whom this is all they’ve had, and it’s clear that the depth of understanding is far shallower than it needs to be to really improve.

From my experience two things are required for a team to really understand agile deeply rather than just go through the motions. Firstly, as suggested above with managers, people on agile teams need to really immerse themselves in the wider agile world. There is an immense volume of great stuff to discover, read, try and learn out there. Harnessing that will help a team understand what other people are doing and being successful with. Secondly, the team needs a coach, at least at the outset. The coach needn’t be an externally hired consultant; it could simply be an experienced scrum master or effective manager. Someone who can help guide the team away from common pitfalls.

3. Team doesn’t understand end users
How can you make intelligent decisions about the software you are building and testing unless you have a deep understanding of the way it is used? Not understanding the product can lead to several problems: PO or some other key figure becomes a bottleneck; requirements become gospel and no intelligent interpretation is allowed; stupid things get done because people misunderstand needs; etc.

Depending on who and where your users are vis-à-vis the team this may or may not be easy to address. Ideally your users are accessible and team members can spend time with them regularly to understand better the world in which they work. Where this isn’t possible some effort should be put into finding ways to make sure the team understands the users as much as possible.

4. (Micro)managers who’re not on the scrum team interfering
This can be a particularly debilitating and frustrating class of dysfunction. How is a team meant to develop the ability to self-organize and self-manage if team members’ managers regularly disrupt matters?

This can manifest in a variety of ways:
  • Dictating how work should be done
  • Dictating priority thus undermining the PO
  • “Refining” or challenging estimates
  • Focusing on velocity over quality and predictability
  • Injecting additional work
  • Making promises to external parties without consulting the team
  • Having conversations that influence technical decisions absent the team
  • Confusing stakeholders by offering an uninformed, contradictory and out-of-date view of status
Managers may indeed have a lot of experience to offer to their teams, but micromanaging interference will sap teams’ engagement. Managers need to realize that they should go about things in a less directive fashion than they may have previously. Rather than push down decisions and work, managers need to harness the talents of a team and present the results up and out to the rest of the organization. Teams typically have (or can develop) regular and agreed upon venues and approaches for managing the how and what and when of their work. Managers who wish to influence things must utilize these to obtain team buy-in and agreement. Failing to do so invites resentment, apathy and underperformance.

5. Inappropriate or suboptimal tools
This is a problem that occurs more in larger organizations where well meaning people want to “standardize” on things. Things like last decade’s monolithic proprietary testing frameworks or agile lifecycle management tools that get slickly pitched to executives offering alluring reports to analyze all their staff but offer limited useful functionality to the actual teams using them day in and day out.

I’m not sure there are any good immediate ways to address this kind of problem. Perhaps when management obtains a deep understanding of agile principles such issues will diminish. In the interim, if the subversive option of simply going your own way, irrespective of corporate mandates is a possibility, I say go for it.

6. Poor technical and problem solving skills
Building anything but the most trivial software is hard. It requires good people. People with excellent technical and problem solving skills. This is as true of traditional approaches to software development as it is to agile of course. Good people will do better with a bad process than bad people with a good process.

Agile software development tends to “out” people who don’t make the grade more so than traditional forms of development where people could go weeks or months without having too much scrutiny applied to their work. The focus on building small, incremental, potentially shippable features in short blocks of time especially needs people with good technical and problem solving skills.

People need to be able to write good code: well designed, well factored, elegant and simple. They need to understand how to effectively use unit tests, to understand the benefits of continuous integration, pair programming, TDD, source code control, automated testing and on and on. They need to be able to approach problems with an engineering mindset, to solve problems scientifically, methodically. A team absent these skills will have problems.

7. Poor written and verbal communications
Having the technical and problem solving skills mentioned above is not enough. People need to be able to communicate effectively too, especially on agile teams.

When people are unable to explain themselves coherently there is a massive drag on the team (and potentially outside the team too). People have to clarify what others are saying or writing. People have to dig for information that is absent or not understandable in the original communication. People have to guess, and may misunderstand. All of this is frustrating and costly.

8. Apathy
“It’s not that I’m lazy, Bob. It’s that I just don’t care.”

Some people are bored. Some people don’t understand what the purpose of their work is. Some people just want to be left alone to hack code and they perceive that agile gets in the way of that. Some people just want to be left alone to cruise Facebook. Whatever the cause, apathy will drag a team down.

Managers have a large stake in combating apathy. Assuming your company is doing something purposeful make sure people understand what it is, why it’s important and how they can help. Make sure they feel empowered to influence the work, the team, the tools. Help create intellectually interesting challenges that marry business needs with opportunities to grow. Support their learning and training needs. Shield them from soul sapping pointless drudgery. In short, make them give a shit.

For those wondering: what if we grow all these people with training, give them an opportunity to develop new skills and master new technologies and then they leave? Consider this: what if we don’t and they stay? Unmotivated, low-skilled, bored people will not a successful product make.

9. Inability to create good user stories
Admittedly one can quite happily apply agile software development principles without user stories. However, they currently sit as the most popular unit of work for agile software development.

A common “starting package” for a new agile team comprises the set of Scrum ideas plus user stories as your unit of work, a product backlog and estimation using story points. For a team to get familiar with this is fairly easy. Scrum prescribes a set of standard meetings: sprint planning, daily scrum, sprint review and retrospective. Estimating in points typically requires a little more explanation and practice, but eventually seems to work nicely for most.

The hardest part though in my experience is to understand and employ user stories well. The five most common problems seem to be:
  • Stories for everything: setting up new servers, writing a summary of findings, having a meeting, etc. etc. Stories are about features the software should have that make sense to users of the product. This is closely related to the misunderstanding of velocity item below.
  • Stories that are tasks: this is similar to the stories for everything problem, except that even when your stories are constrained just to product features it’s still possible to write them as discrete tasks rather than describing a full end-to-end feature.
  • Stories artificially bounded by system architecture: It’ll depend on your users, but for most users of most applications they do not know or care that your product is made up of different components. Devising stories for “the server” and “the client” may well be an indication that you’re not thinking about features from an end user point of view but from a system design perspective.
  • Abuse of the “As a <role> I want <goal> so that <benefit>” formula: for many people a story is simply stating your requirements in this format. This often leads to rather convoluted looking stories. Couple this with the stories for everything or stories that are tasks problems above and you end up with things like “As the scrum master I want some administrivia completed so that I can report it to my boss.” This is pretty far adrift from where you want to be.
  • Poor acceptance criteria: conveying who wants what and why is a good start. But it is also essential to bound the scope of the story with good acceptance criteria. Ambiguity leads to confusion, poor estimates, frustration and potentially disappointment.
There’s no easy cure for this problem. You need someone on the team or someone coaching the team to a point where they have a good understanding of user stories. That or everyone reads Mike Cohn’s User Stories Applied…

10. Lack of automated testing
Whenever you change software there’s always the risk that you unintentionally break something. Sometimes you can make good educated guesses about what may break as a result of your changes. Other times the most innocuous seeming change creates a cascade of fail in the most unlikely of places. To make sure your changes don’t break things that were working before you need to test your software.

If you test it manually then the more your software grows the more time you need to execute your tests. One can imagine how, over time, more and more of an iteration could be taken up with making sure the newly added features haven’t inadvertently broken those that were already there. Developers are left with nothing to do and test staff are run ragged trying to keep up. Clearly a bad situation.

While an automated test suite may take more and more time to run it’s nothing like the days or more that manual testing requires.

You simply cannot hope to incrementally add features, release regularly and maintain quality without automated testing.

11. Misunderstanding the purpose of velocity
Velocity is not a measure of all the work a team has done. It’s a measure of how quickly a team can turn a backlog of product features into working software. When people add stories for training or support or infrastructure maintenance, sometimes retrospectively, ostensibly to “capture the work” or “earn credit for what we’ve done” this pollutes your velocity. It no longer provides an accurate measure of how much a team can advance a product. You might as well record that people work 40 hours a week.

By measuring velocity as just how many product features the team can implement in an iteration, and treating everything else as overhead you are in a much better position to predict how long it might take to burn through the rest of the backlog.

One of the reasons this problem can arise is a simple lack of understanding. That’s easy to correct. The more challenging problem is when teams are being evaluated by velocity: why did your velocity drop during the last iteration? Why isn’t your velocity increasing since we added Bob to your team? That thinking is missing the point. The value of velocity is improving your ability to predict when a product will have a certain set of features completed. It is not a means for measuring productivity.

12. Toxic team member
The toxic team member comes in many guises: antisocial, incompetent, arrogant, slacker, anti-agile skeptic or just general poor team fit. Irrespective of the particular flavor, one bad apple can cause a lot of damage.

I believe everyone deserves a chance to change. Some frank feedback may help. But, ultimately, if someone’s not working out they need to be…redeployed elsewhere.


Wednesday, April 20, 2011

Scrum and Telecommuting -- our experiment

Back in December last year I wrote about a pending experiment at work. Half of our scrum team already worked from home, and we wanted to see if things worked better or worse for us if the other half all worked from home too.

After convincing our management to let us try this we targeted starting this in the first quarter of 2011. We planned to try this one sprint at a time, considering at each retrospective if it was worth continuing and what changes we might need to make to make the next sprint better.

As things transpired, we found ourselves keeping this going for the entire first quarter of 2011, and we’re continuing to do it still. As you might have deduced, we thought it worked out pretty well for us.

At the end of the quarter, I conducted a couple of surveys; one for the team members and one for people outside the team that we interact with. In this post I’m going to share a few of the insights from our experiment and what the surveys revealed.

How was it for the team?
Let’s start with a quick recap of the team composition. There are four people already working from home around the Denver, CO area – three software engineers plus me as scrum master. Then we have three people in Billerica, MA – our product owner and two testers. Finally we had (yes had, but have no more – they moved onto another team during the quarter) two further testers in Hyderabad, India.

There were a couple of areas I polled the team on that I felt would help capture a picture of how things worked. The first was how our meetings ran now that we were all working from home. Here’s a chart of the responses:



As you can see, we mostly felt that the meetings were no worse off than when half the team was office based, and in fact a few people felt they ran better. That they would run better was certainly our hypothesis going into this, based on the idea that communication would suddenly be equitable for everyone. That is to say we would now all be on a phone and there’d be no more side conversations, people failing to talk “to” the phone etc.

The one interesting point that shows up in the chart and is fairly intuitive too is that the sprint review meeting wasn’t universally considered better in a fully decentralized set up. I was actually surprised more people didn’t feel it was suboptimal, and certainly if you skip ahead to the chart where we asked people outside the team how they found things you’ll notice a few people indicated that they didn’t feel meetings were always better. I don’t know for sure, but I suspect the review more than anything might have been the meeting that triggered that feeling as the others seemed to work well.

The second major question I asked was about getting things done. Here’s the chart for that:


You can see here a lot of what you might predict:
  • it really wasn’t that much of a problem getting a hold of people and working with them (huzzah for IM and the telephone)
  • the ability to focus and get work done was better
  • dealing with hardware and software issues was a little trickier for a couple people
I think there are two important messages. Firstly, those who think you need to be in the office to get work done forget how detrimental the distractions and interruptions of cube-ville can be. Secondly, you need to be of the “self-serve” type and/or have a good helpdesk that can work with home based users to get through those times when you need help with hardware and software issues.

In addition to structured questions people could provide freeform comments. Here’s a sample of what people on the team had to say:
"Very good experiment, worth continuing, I can always come to office as needed." 
"Meetings are more efficient: less side conversations and greater team participation." 
"Work life balance was much better in this case."
The final question I asked of the team was whether they would like to carry on with this. Not surprisingly the answer was mostly yes. Those already working from home probably felt less excited about it all – they don’t have a choice, they’re working from home whether they like it or not.

How was it for everybody else?
Moving on to the survey of people outside our team, one of the first questions I asked was about the respondents’ own inclination for working from home. Here are the responses in chart form:


We’re not dealing with a huge number of respondents here, but still you can see that the majority of people would be interested in some kind of arrangement involving working from home, even if it wasn’t all the time.

Of course the main purpose of surveying folks we work with outside our team was to find out if things worked well from their perspective. After all, it’d be no good if our team was thrilled with this but everyone else we interacted with thought it royally sucked.

Here’s a chart showing how others found things:


You can’t please all of the people all of the time. As you can see, for three out of the four questions we had a few folks who felt things weren’t an improvement. But as evidenced by the tall yellow bars, for most people it was just fine. This doesn’t totally surprise me…if you look at communication behavior in an office, people don’t have to be very far from each other before they resort to IM or email. Put another way, although it can be useful to be able to go and talk to someone face to face, often that is not what people do even when they are in the same office.

The Problem
Yesterday I read an article from Fortune magazine about a survey conducted by technology career site Dice.com.

What the Dice survey clearly showed is that there is a lot of talent in the technology field that would happily work from home. From our small sample above this seems to be borne out for my organization too. However, Dice notes that fewer than 1% of the jobs offered on their site offer telecommuting as an option to candidates.

The article went on to point out that, given the competitive marketplace for talent, one way employers could stand out would be to consider offering people the option to telecommute. The problem I think is that so many managers are scared of this option, seeming to believe that people need to be kept in an office so they can be checked up on, managed and so forth.

Consider, for example, the following feedback I got from the second survey:
“This survey is not fair, because I don't have a baseline for how well the team performs working "centralized" versus "non-centralized". I will say, I am not happy with the level of visibility I have into how much actual work is being done. From my perspective.. You could all be at the beach all day... with one person working feverishly the day before something is due.. or you could all be working hard.. but in different directions... My gut is that I would like to see more throughput from the team regarding what is actually released.. but from my vantage it is very difficult to say if working remotely helps or hurts.”
Of course one could argue against this (I leave that as a trivially simple exercise for the enlightened reader) but the fact remains: many people still think this way. Convincing those that do that managing by outcomes rather than direct observation and the occasional beating pep talk is not only possible but preferable, may just be too hard.

This attitude may also help explain one question whose responses I found puzzling. Consider the below, which shows the answers given to the question, "Should our team continue working from home?"


I was surprised by how many people answered "If they feel they must." That seems at odds with the number of people that expressed an interest in working from home themselves, but perhaps, subconsciously, people still worry about the true feasibility of it.

What I can tell you, as someone who's worked from home for over three years, and has now done so with a complete team all at home, is that it can work quite OK. Is it as good as all being in an office together? Probably not. But then many teams don't have the option of all being in the same office anyway. And there are a number of potential benefits to having a team distributed like this: attracting the right talent, providing people an appealing and innovative benefit, a good work/life balance and reduced facilities costs, to name just a few.

Sunday, April 17, 2011

Excuse me, I'd like to...ask you a few questions


I once worked with somebody who had the following pinned up on their wall for easy reference during the many teleconferences in which they participated:
Use questions for clarity:
  • What?
  • Why?
  • How?
  • When?
  • Where?
  • Who?
  • Which?
Not a bad idea that, a little prompt to make sure you really understand a topic. I don’t have a reminder up on my wall but I always like to try and run through these when appropriate. (Well, and when I remember.)

I was thinking about these in the context of agile software development, and how they can help various roles and guide activities. Let’s look at each of them in turn. I’m going to approach this from the perspective of a team following scrum, but I think the general principles apply to all approaches to software development.

Where
Is everyone office based? In the same office or spread across more than one? Where is the customer? And other stakeholders? Some arrangements are better than others. The answer to the WHERE question is definitely important, but usually something few agile software development projects can influence very much. Most folks would agree that the ideal set up is co-location with an onsite customer. If you can get that…awesome. If not, recognize the shortcomings and know how to mitigate the problems they might present as best you can.

Who
The WHO question can apply to a number of things. Who is on the development team? Who is the customer? Who are the users? And so forth. From the scrum perspective it’s really important to have identified who the product owner is. The product owner is such a key role for successful scrum because of their product visioning and feature prioritization work.

Which
I struggled initially with WHICH. It has less obvious utility than the other questions. I think primarily though it can be thought of as your prompt to consider prioritization. Prioritization is absolutely key in scrum. Which projects should we fund? Which themes do we want to target for this release cycle? Which features are of highest value? Which features are risky or are we unsure how to do?

What
Everyone involved in a software development effort should be clear on WHAT they’re building. Initially WHAT is primarily the province of the product owner. They start a release cycle by consolidating input from the customer, other stakeholders, the team, and their own ideas, and providing a statement of what the product needs to do.

Why
The WHY is important, possibly the most important question of all to my mind for a couple of reasons.

Firstly it lets you check that a feature is really needed and worthy of a place in your product backlog. With something akin to the five whys technique you should be able to explain the necessity for every feature in terms of one of the following business values:
  • Protect revenue
  • Increase revenue
  • Manage (reduce?) cost
  • Increase brand value
  • Make the product remarkable
  • Provide more value to your customers
Product owners should be very clear on why anything in the backlog is needed. If you as product owner find you can’t tie a feature to one of the values above then you might want to ask yourself if you’re sure it’s needed.
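To make the idea concrete, here’s a minimal sketch (all names and the sample backlog are hypothetical, not from the post) of what "tie every feature to a business value" could look like as a simple check over a backlog:

```python
# Hypothetical sketch: tag each backlog item with one of the business
# values listed above, and flag any item whose justification doesn't
# map to a real business value.
BUSINESS_VALUES = {
    "protect revenue",
    "increase revenue",
    "manage cost",
    "increase brand value",
    "make the product remarkable",
    "provide more value to customers",
}

def unjustified(backlog):
    """Return names of items whose stated value isn't a business value."""
    return [name for name, value in backlog if value not in BUSINESS_VALUES]

backlog = [
    ("Single sign-on", "protect revenue"),
    ("Animated splash screen", "because it looks cool"),
]

print(unjustified(backlog))  # → ['Animated splash screen']
```

Nothing fancy, but as a product owner exercise the point stands: anything that lands in the flagged list deserves a hard look before it keeps its place in the backlog.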

Secondly, once the whole team understands the WHY of a feature we can harness everybody’s input for some creative thinking. Sometimes people think they need a feature because they assume certain things can’t be changed, or they don’t realize there are other ways to approach a problem they are trying to solve. Perhaps an existing feature can be adapted to meet their needs.

How
The question of HOW is almost as important as WHAT and WHY. Those three together form a kind of virtuous trio.

HOW is a question that cannot be answered by any one single role on a scrum team. Everyone has a contribution to this question. As the saying goes, there’s more than one way to skin a cat. And with software that definitely applies. Figuring out the possibilities and weighing up the pros and cons and trade-offs involved will help lead to the best possible features implemented in the best possible ways.

Beware product owners that dream up the HOW as well as the WHAT and the WHY. Unless they’re from a development background it’s likely that not all of their HOWs are feasible or optimal.

When
Often the WHEN question is the one most businesses, customers and stakeholders obsess over. To some extent I think this is almost impossible to escape, although it is possible to put more emphasis on WHAT by stabilizing the WHEN with a fixed duration release cycle (see http://www.jonarcher.com/2010/10/delayed-gratification.html).

Once the team understands WHAT, WHY and HOW they can estimate the effort involved. If the team has a stable velocity this then allows the product owner to be quite predictive in answering WHEN and they can move features up and down the stack ranked backlog to accommodate the needs of the business as best as possible.
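The arithmetic behind that predictiveness is simple. A rough sketch (function name and numbers are mine, purely illustrative): with a stable velocity, the remaining effort divided by velocity gives the number of sprints to completion, rounded up since a partial sprint is still a sprint.

```python
import math

def sprints_needed(points_remaining, velocity):
    """Forecast sprints to finish a backlog, assuming a stable velocity
    measured in story points per sprint."""
    return math.ceil(points_remaining / velocity)

# e.g. 120 points left in the backlog, team averages 25 points a sprint
print(sprints_needed(120, 25))  # → 5
```

Because the backlog is stack ranked, the product owner can also slide a feature up or down the list and immediately see which release it lands in.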

All righty then
Having reviewed all these question-words there are two key insights for me.
  • Certain roles on a scrum team are better positioned than others to answer certain questions (POs have a natural affinity with WHAT and WHY; developers with HOW)
  • Nobody on a scrum team has the exclusive preserve of any particular question: by working collaboratively and with everyone empowered to ask and answer any of these questions the talents of everyone can be harnessed to make better software.

Wednesday, April 13, 2011

What I got out of Mile High Agile 2011

This is a slightly longer than usual, rambling, somewhat whimsical post. Sorry about that. There is some good stuff toward the end though. Well, I think so anyway.

Last Thursday I attended the inaugural Mile High Agile conference put on by Agile Denver. The day commenced at an unnaturally early hour with me hurtling down the mountain to Golden and then doing battle with I-70 and I-25. Perhaps due to the early travel time, the traffic was lighter than the nightmare I had envisaged, and compared to my regular commute (from bedroom to basement via kitchen) the trip was well worth it for what I got out of the day.

What follows are my thoughts on what I got out of the conference, not all of them directly related to agile.

Earlier in the year when I first heard about Mile High Agile I volunteered to help with a couple of items – setting up the website along with another volunteer and the corresponding registration and payment system. It’d been a while since I’d done any kind of web content creation but we elected to make things really simple by just using WordPress with a suitable theme. Although this obviously meant there were quite a number of restrictions (especially since we weren’t hosting the WordPress blog ourselves) it did mean we could get going easily and quickly. Given the chance to do it again I’d almost certainly stick with WordPress considering the simplicity of our needs, but hosting it ourselves would be well worthwhile. That would likely let us get around the problems we faced, since you cannot use plug-ins when hosting at wordpress.com.

For the registration and payment side of things we used EventBrite. This was a service I hadn’t used before (at least not in the role of a conference organizer) and I have to say it’s a pretty impressive offering. While our needs were simple (a handful of ticket types, a few dozen discount codes, simple reporting and so on) EventBrite just seemed to work without effort. Everything we ever wanted to do (bar one thing) seemed to have already been anticipated and worked well. In addition, I had occasion to use their support services twice; they were very good and incredibly quick to respond.

In the couple months leading up to the conference last week I really enjoyed working with the folks that brought this together. My contribution was pretty small compared to others, but it was fun to be part of it and work with people really intent (passionate is so overused, no?) on making this happen. It was a real joy to see folks pitching in no matter their role offering useful ideas and commentary as things unfolded. I wish I could work with more people like that more often.

At the beginning it was unclear how many sponsors we would secure or how many tickets we could hope to sell. Due to a truly outstanding effort on the part of several people we exceeded all expectations. Just check the number of sponsor logos up on the site and note that we sold nearly 500 tickets. Not bad for such an incredibly tight timeframe and the first ever Mile High Agile conference! If you build it they will come. So long as you get the word out and market it hard too.

There were a few things that struck me after the day of the conference which had nothing to do with agile but were interesting insights to me. I’ve been working from home for quite some time. Although I thought I had quite a rich interaction with people on my team via phone, email and IM etc. it was nice to suddenly be in the fray of lots of tech people and talking with them. I was part of the gang that stood behind the registration desk, which was insanely busy for an hour or so as the best part of 500 people arrived to check in. It reminded me of a part-time job I had way back in time as a teenager, working in a home computer store on Saturdays which were always insanely busy. I loved it. Definitely makes me hanker for more local, in-person interaction.

Another thing that was weird was that driving into Denver didn’t seem too bad. I’ve always found it less than desirable in the past, but after a series of trips into the city over the last year perhaps I’ve finally got a decent sense of how it’s all connected up in my head and feel more at ease with it. For some reason I’m feeling the urge to visit Denver more and more.

The final odd little insight was how nice it was to be surrounded by people who were enthusiastic about various facets of agile software development. Although you might have different views and experiences from others on details, there was this common thread of “getting it” (“it” being agile) and wanting to “get it more.” Perhaps meeting people whose viewpoint reinforces many of your own beliefs is nothing but an ego stroke, but it made me think how cool it would be to work with more people like that.

Turning to the sessions I was able to attend three, all of which were in the technical track. I really enjoyed them. I wish there was more – it’d be great if next year were a multi-day conference :-) It also made me think it would have been nice to present. I’m not entirely certain what on, but I feel like I’ve learned a lot over the last couple of years and it’d be fun to share some of that beyond just writing about it on my blog. Maybe next year.

The first session I attended was Agile the Pivotal Way given by Mike Gehard of Pivotal Labs. I confess I hadn’t really realized that Pivotal Labs did anything other than make Pivotal Tracker. Turns out that really their business is building software for people, often startups, and they seem to really approach work in a way that makes a lot of sense to me.

Mike mentioned that they pair program everything. This reminded me how much I miss pair programming. I’ve had two serious periods of pairing in the past and I’m pretty sure it’s the best approach for a lot of software development work. The first time we didn’t call it pair programming because it was just my 12 year old self and my friend hacking away on a Commodore Vic 20 writing games in BASIC, PEEKing and POKEing the screen to make games that rivaled Pac Man. Almost. But it was great. The other time was much more recent. It’s how I learned Java courtesy of a colleague pairing extensively with me. It’s also how the first release of one third of our product suite got built incredibly quickly, by just me and that other guy. If I’m going to be programming in the future I hope it’s pair programming.

Mike also mentioned how their “teams” were small, and unless their clients insisted there were no independent QA or tester roles and no PMs. They (the pairs of developers) did this stuff themselves. Thinking back again, the best software I’ve ever written was when I was completely immersed with clients and talking to them directly and I had to make sure it worked as it should. No middle man (or woman) interpreting customer needs and nobody else to leave finding bugs to, and since I don’t like my things to fail I sure as heck tested the heck out of my work.

All in all, I wish I knew a lot more about Ruby and Rails because they sound like a totally cool place to work. I’m not sure if Mike’s talk was meant to be a stealth recruitment exercise, but it could certainly work as one.

The second presentation I attended was from Chris Powers of Obtiva. He was talking about test driving front end code, and by front end code he specifically meant Javascript. It’s a few years since I’ve dabbled with JS and I probably won’t be doing any dabbling in the near future, but it was still interesting, and credit is definitely deserved for the live demo and coding activities. The tool Chris showed was Jasmine, and while I have nothing else to compare it to it seemed pretty neat, and I definitely liked the BDD-esque feel and the ability to express both feature and unit tests. If I were to be doing any JS in the future I think I’d be checking it out.

The third and final session I attended was given by Paul Rayner and was entitled Strategic Design using DDD. There was some thought-provoking stuff in this. First of all there was the Purpose Alignment Model by Niel Nickolaisen. It’s a simple idea whereby you can map out work onto a graph with four quadrants like so:



Of particular interest here is the notion that those things which fall into the top right quadrant are the ones you want to put your best effort into – you want to get the best people working on this and you want to really pay attention to the design of this stuff. By contrast, those items that fall into the lower left quadrant do not merit that kind of effort.

If I understood Paul correctly, he was advocating the use of the above to think through just how much effort you put into the design of various components of a system. That seems like a good idea. But what also struck me about this from an agile and scrum perspective was how such a simple thing could be used to facilitate feature prioritization in product development. Now I’ve seen before the idea of having the stories (or themes or features or what have you) up on post-its on a whiteboard, and allowing people to move them around, placing those they see as highest priority at one end and lowest priority at the other. But this would overlay a slightly more rigorous evaluation of things. If, when using the open whiteboard technique, someone places an item over on the high priority side I am not immediately sure why. If the whiteboard was divided into four quadrants like the above I would immediately see if they thought it was mission critical, a market differentiator or both. I think that could be powerful.
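The model itself boils down to two yes/no questions per item. Here’s a minimal sketch of the mapping; the quadrant labels are the ones I understand Nickolaisen’s model to use, and the function name is mine:

```python
def quadrant(mission_critical, market_differentiating):
    """Map a feature onto the Purpose Alignment Model's four quadrants
    from two yes/no answers."""
    if mission_critical and market_differentiating:
        return "differentiating"  # top right: best people, best design
    if mission_critical:
        return "parity"           # do it as well as competitors, no better
    if market_differentiating:
        return "partner"          # differentiating but not core: partner for it
    return "who cares"            # bottom left: minimal effort

print(quadrant(True, True))  # → differentiating
```

On a whiteboard divided this way, placing a post-it is equivalent to answering those two questions, which is exactly why the placement tells you more than an open priority line does.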

The second thing I got out of this session (actually a little afterwards, after Paul had blogged on the topic) was that the popular idea of “emergent design” is great if you can do it well. But how many teams really have the design and refactoring skills to pull this off? How many can spot good and bad directions in which to emerge? Having a framework to first identify which components of a system merit more well-crafted design than others, along with a set of good design skills to apply, would indeed help, I think.

The third thing Paul talked about which stuck in my mind was his "billboard test." The idea here is that if you take something you're working hard on and imagine it up on a billboard is it really the message you want conveyed? Consider his example: "Our logging framework kicks butt!" Frankly, unless your company actually sells logging frameworks realizing that you're putting this much effort into logging ought to give you pause for thought. Like say for example "Our test automation framework kicks butt." Or whatever. (Inside joke)

And that’s it. The trip home was quicker than I thought it would be too. I was fully prepared for the highway to have turned into a parking lot, but things zipped along nicely.