Tuesday, June 7, 2011

DICOM non-technical introduction: mind map

Earlier today I went through the non-technical introduction to DICOM on the RSNA (Radiological Society of North America) website written by Steven C Horii, MD. As I was doing so I compiled a mind map using the excellent SimpleMind tool to help jog my memory on this stuff in the future.

Click the image below for the full-size PNG; it's also available here as a PDF.


Monday, June 6, 2011

Understanding progress: on points, velocity and when to add new stories

Building software takes time. Usually enough time that people are interested in monitoring progress and understanding when it will be done. Agile teams often use User Stories as a unit of work. Typically these are estimated in points enabling a team to record their velocity, that is, the number of points completed per sprint.

With this data, teams have a simple means to show progress. Interested parties can follow along quite easily, seeing how each sprint eats away at the features in the backlog and how many points are left to complete the product. Using the team’s average velocity will give a nice indication of how many more sprints are required to complete the features in the backlog: points remaining ÷ average velocity = sprints remaining.
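As a rough sketch of that arithmetic (the backlog estimates and velocities below are invented for illustration):

```python
import math

# Hypothetical remaining story estimates (in points) and recent sprint velocities
backlog_points = [5, 3, 8, 2, 13, 5]   # unfinished stories in the backlog
recent_velocities = [18, 22, 20]       # points completed in the last few sprints

points_remaining = sum(backlog_points)
average_velocity = sum(recent_velocities) / len(recent_velocities)

# Round up: a partially used sprint still occupies a sprint on the calendar
sprints_remaining = math.ceil(points_remaining / average_velocity)
print(sprints_remaining)  # 2
```

Rounding up rather than reporting a fraction keeps the answer honest: you can't ship mid-sprint.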

There is one wrinkle to this otherwise simple scheme. As anyone who has developed software can tell you, the devil is in the detail. Something that initially looks simple can end up being more involved than originally estimated. Case in point: my team recently had a story like the following for a desktop client image processing product they are building:
 
     As an imaging assistant
     I want to be able to open JPG images
     So that I can process them

The story seemed to be straightforward and was implemented easily and quickly. All was well until we tried some particularly large JPG files. At this point things blew up with out of memory errors and the like.

So was handling large JPGs a new feature and thus a new story? Or was it obviously part and parcel of the original? In other words, the question is how do you deal with this and still report easily understandable progress?

The short answer is it doesn’t really matter. You can add new stories to the product to cover the new work that you discover. Or you can just do the work that was implied but not necessarily obvious from the original story. The formula for figuring out how many sprints remain still works either way. Taking the former approach your velocity is likely to remain fairly stable. Taking the latter approach you may see it bounce around a little bit more, or see it dip from historic levels if the team had a velocity established through more predictable types of work (e.g. maintenance on a well-known product).

The longer answer is that each approach has its own pros and cons. Depending on your situation one may be better than the other.

Adding a story, pros:
  • Stakeholders can see the amount of work we originally imagined was involved has grown and have more chance to comment on the necessity of these items – perhaps the complexities and edge cases discovered aren’t that valuable.
  • It probably shouldn't matter, but velocity remains stable and nobody feels the need to cross-examine the team and ask "Why has your velocity dropped?" Done this way velocity may even serve as a crude indicator of team performance and improvements can be seen in higher velocity.
Adding a story, cons:
  • It probably shouldn't matter, but there's likely a class of stakeholder that will question why all this "extra" work is emerging and ask how come we failed to identify it in the first place.
  • If the team has the pleasure of using a computerized agile lifecycle management tool then each additional story is another thing to enter, estimate, prioritize, track and update status on etc.
  • The number of points to complete the originally envisaged release keeps growing making some people anxious: "We don't know how much work is left to do, how can you ever hope to predict when it will be done?"
Not adding a story, pros:
  • Nothing extra to track (though we probably need to clarify the acceptance criteria of the original story)
  • Number of points to complete original feature set for the release remains the same (unless we discover a need for genuinely new features)
Not adding a story, cons:
  • The potential roasting of the team: "Why is your velocity erratic/dropping?"
  • The team might miss an opportunity to push low value work down the backlog.
Personally I'm most strongly drawn to the idea of not adding in extra stories. I like the simplicity and minimalism of this. The more items there are in the backlog the harder it is to grok the thing as a whole and more busywork goes into managing it all. I don’t think the “pros” of adding in extra stories are powerful enough to make it the preferred approach. And although there is the potential “con” of the team getting questioned about why their velocity is erratic or dropping, I believe this can be explained quickly in simple terms. I also think it’s a lot more straightforward, even comforting, for stakeholders to see that the size of the backlog remains fairly stable unless things they too understand as new work (features) get added.

Wednesday, May 18, 2011

Enemies of Agility: The Dirty Dozen

Developing software is hard. Agile software development represents one way to deal with the complexity and difficulty in a manageable fashion. Arguably the best way we've figured out so far. But it's not easy, and there are many impediments on the road to becoming a highly adept agile team. Below I present the "dirty dozen" impediments, or key enemies of successful agility that I've observed.

1. Management doesn’t understand agile
An individual team or two can get quite a lot of improvement by themselves. But to really scale up agility in an organization requires a deep understanding of, and commitment to, the guiding principles from managers across a range of functional areas. Attaining that in a large corporate environment is surely hard. Even among the engineering or IT groups that might have more capacity or interest to “get” agile at a management level there are challenges. How can people many years distant from the act of actually creating software really understand what changes are occurring on individual delivery teams if they haven’t experienced them first hand? How can they help shape and encourage greater adoption and performance, policies and culture if they are not actively engaged in a deep meaningful way with agile? How can they negotiate effective cross-department agreements when they don’t really understand how their teams are working?

It’s hard. Not impossible though. What it requires is a willingness at the management levels above individual delivery teams to immerse themselves in as much agile as they can. Go to conferences. Get the deep dive that your two day agile orientation course didn’t give you. Read books. Lots of books. Join mailing lists and discussion groups. Read the blog posts people put out there. Follow the thought leaders on Twitter. Participate on a team for an iteration – there are lots of ways for people of different skills to do this. Pair with the PO to groom the backlog. Pair with a coder and/or tester. Do some exploratory testing, etc. etc.

Once managers have experienced being and doing agile they have a much greater chance of extending those principles up and across the organization.

2. Team doesn’t understand agile
Understanding agile requires more than sending one (or more) of the team off for a two day Certified Scrum Master course. And it’s considerably more than saying “go do agile.” I’ve talked with lots of people for whom this is all they’ve had, and it’s clear that the depth of understanding is far shallower than it needs to be to really improve.

From my experience two things are required for a team to really understand agile deeply rather than just go through the motions. Firstly, as suggested above with managers, people on agile teams need to really immerse themselves in the wider agile world. There is an immense volume of great stuff to discover, read, try and learn out there. Harnessing that will help a team understand what other people are doing and being successful with. Secondly, the team needs a coach, at least at the outset. The coach needn’t be an externally hired consultant; it could simply be an experienced scrum master or effective manager. Someone who can help guide the team away from common pitfalls and remind them of the principles when they drift.

3. Team doesn’t understand end users
How can you make intelligent decisions about the software you are building and testing unless you have a deep understanding of the way it is used? Not understanding the product can lead to several problems: PO or some other key figure becomes a bottleneck; requirements become gospel and no intelligent interpretation is allowed; stupid things get done because people misunderstand needs; etc.

Depending on who and where your users are vis-à-vis the team this may or may not be easy to address. Ideally your users are accessible and team members can spend time with them regularly to understand better the world in which they work. Where this isn’t possible some effort should be put into finding ways to make sure the team understands the users as much as possible.

4. (Micro)managers who’re not on the scrum team interfering
This can be a particularly debilitating and frustrating class of dysfunction. How is a team meant to develop the ability to self-organize and self-manage if team members’ managers regularly disrupt matters?

This can manifest in a variety of ways:
  • Dictating how work should be done
  • Dictating priority thus undermining the PO
  • “Refining” or challenging estimates
  • Focusing on velocity over quality and predictability
  • Injecting additional work
  • Making promises to external parties without consulting the team
  • Having conversations that influence technical decisions absent the team
  • Confusing stakeholders by offering an uninformed, contradictory and out of date view of status
Managers may indeed have a lot of experience to offer to their teams, but micromanaging interference will sap teams’ engagement. Managers need to realize that they should go about things in a less directive fashion than they may have previously. Rather than push down decisions and work, managers need to harness the talents of a team and present the results up and out to the rest of the organization. Teams typically have (or can develop) regular and agreed upon venues and approaches for managing the how and what and when of their work. Managers who wish to influence things must utilize these to obtain team buy-in and agreement. Failing to do so invites resentment, apathy and underperformance.

5. Inappropriate or suboptimal tools
This is a problem that occurs more in larger organizations where well meaning people want to “standardize” on things. Things like last decade’s monolithic proprietary testing frameworks or agile lifecycle management tools that get slickly pitched to executives offering alluring reports to analyze all their staff but offer limited useful functionality to the actual teams using them day in and day out.

I’m not sure there are any good immediate ways to address this kind of problem. Perhaps when management obtains a deep understanding of agile principles such issues will diminish. In the interim, if the subversive option of simply going your own way, irrespective of corporate mandates, is a possibility, I say go for it.

6. Poor technical and problem solving skills
Building anything but the most trivial software is hard. It requires good people. People with excellent technical and problem solving skills. This is as true of traditional approaches to software development as it is to agile of course. Good people will do better with a bad process than bad people with a good process.

Agile software development tends to “out” people who don’t make the grade more so than traditional forms of development where people could go weeks or months without having too much scrutiny applied to their work. The focus on building small, incremental, potentially shippable features in short blocks of time especially needs people with good technical and problem solving skills.

People need to be able to write good code: well designed, well factored, elegant and simple. They need to understand how to effectively use unit tests, to understand the benefits of continuous integration, pair programming, TDD, source code control, automated testing and on and on. They need to be able to approach problems with an engineering mindset, to solve problems scientifically, methodically. A team absent these skills will have problems.

7. Poor written and verbal communications
Having the technical and problem solving skills mentioned above is not enough. People need to be able to communicate effectively too, especially on agile teams.

When people are unable to explain themselves coherently there is a massive drag on the team (and potentially outside the team too). People have to clarify what others are saying or writing. People have to dig for more information that is absent or not understandable in the original communication. People have to guess and may misunderstand. All of this is frustrating and costly.

8. Apathy
“It’s not that I’m lazy, Bob. It’s that I just don’t care.”

Some people are bored. Some people don’t understand what the purpose of their work is. Some people just want to be left alone to hack code and they perceive that agile gets in the way of that. Some people just want to be left alone to cruise Facebook. Whatever the cause, apathy will drag a team down.

Managers have a large stake in combating apathy. Assuming your company is doing something purposeful make sure people understand what it is, why it’s important and how they can help. Make sure they feel empowered to influence the work, the team, the tools. Help create intellectually interesting challenges that marry business needs with opportunities to grow. Support their learning and training needs. Shield them from soul sapping pointless drudgery. In short, make them give a shit.

For those wondering: what if we grow all these people with training, give them an opportunity to develop new skills and master new technologies and then they leave? Consider this: what if we don’t and they stay? Unmotivated, low-skilled, bored people will not a successful product make.

9. Inability to create good user stories
Admittedly one can quite happily apply agile software development principles without user stories. However, they currently sit as the most popular unit of work for agile software development.

A common “starting package” for a new agile team comprises the set of Scrum ideas plus user stories as the unit of work, a product backlog and estimation using story points. Getting familiar with this is fairly easy for a team. Scrum prescribes a set of standard meetings: sprint planning, daily scrum, sprint review and retrospective. Estimating in points typically requires a little more explanation and practice but eventually seems to work nicely for most.

The hardest part though in my experience is to understand and employ user stories well. The five most common problems seem to be:
  • Stories for everything: setting up new servers, writing a summary of findings, having a meeting, etc. etc. Stories are about features the software should have that make sense to users of the product. This is closely related to the misunderstanding of velocity item below.
  • Stories that are tasks: this is similar to the stories for everything problem, except that even when your stories are constrained just to product features it’s still possible to write them as discrete tasks rather than describing a full end-to-end feature.
  • Stories artificially bounded by system architecture: It’ll depend on your users, but for most users of most applications they do not know or care that your product is made up of different components. Devising stories for “the server” and “the client” may well be an indication that you’re not thinking about features from an end user point of view but from a system design perspective.
  • Abuse of the “As a <type of user> I want <some goal> so that <some reason>” formula: for many people a story is simply stating your requirements in this format. This often leads to rather convoluted looking stories. Couple this with the stories for everything or stories that are tasks problems above and you end up with things like “As the scrum master I want some administrivia completed so that I can report it to my boss.” This is pretty far adrift from where you want to be.
  • Poor acceptance criteria: conveying who wants what and why is a good start. But it is also essential to bound the scope of the story with good acceptance criteria. Ambiguity leads to confusion, poor estimates, frustration and potentially disappointment.
There’s no easy cure for this problem. You need someone on the team or someone coaching the team to a point where they have a good understanding of user stories. That or everyone reads Mike Cohn’s User Stories Applied…
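By way of contrast, here’s how the earlier JPG story might look with the formula used properly and bounded by acceptance criteria (the criteria themselves are invented for illustration):

     As an imaging assistant
     I want to be able to open JPG images
     So that I can process them

     Acceptance criteria:
       • Images up to 100 MB open without errors
       • Opening a corrupt file shows a clear error message rather than crashing
       • Both baseline and progressive JPG encodings are supported

Criteria like these would have surfaced the large-file question before implementation rather than after.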

10. Lack of automated testing
Whenever you change software there’s always the risk that you unintentionally break something. Sometimes you can make good educated guesses about what may break as a result of your changes. Other times the most innocuous seeming change creates a cascade of fail in the most unlikely of places. To make sure your changes don’t break things that were working before you need to test your software.

If you test it manually then the more your software grows the more time you need to execute your tests. One can imagine how, over time, more and more of an iteration could be taken up with making sure the newly added features haven’t inadvertently broken those that were already there. Developers are left with nothing to do and test staff are run ragged trying to keep up. Clearly a bad situation.

While an automated test suite may take more and more time to run it’s nothing like the days or more that manual testing requires.

You simply cannot hope to incrementally add features, release regularly and maintain quality without automated testing.

11. Misunderstanding the purpose of velocity
Velocity is not a measure of all the work a team has done. It’s a measure of how quickly a team can turn a backlog of product features into working software. When people add stories for training or support or infrastructure maintenance, sometimes retrospectively, ostensibly to “capture the work” or “earn credit for what we’ve done” this pollutes your velocity. It no longer provides an accurate measure of how much a team can advance a product. You might as well record that people work 40 hours a week.

By measuring velocity as just how many product features the team can implement in an iteration, and treating everything else as overhead you are in a much better position to predict how long it might take to burn through the rest of the backlog.
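As a sketch of that distinction (the completed items and point values below are invented), velocity counts only the product-feature stories, with everything else tallied separately as overhead:

```python
# Everything completed in one iteration, tagged by type (invented data)
completed = [
    {"title": "Open JPG images",        "points": 5, "type": "feature"},
    {"title": "Export processed image", "points": 3, "type": "feature"},
    {"title": "Upgrade build server",   "points": 2, "type": "infrastructure"},
    {"title": "New-hire training",      "points": 3, "type": "overhead"},
]

# Velocity counts only product features turned into working software
velocity = sum(item["points"] for item in completed if item["type"] == "feature")
overhead = sum(item["points"] for item in completed if item["type"] != "feature")

print(velocity)  # 8
print(overhead)  # 5
```

Reporting 8, not 13, is what keeps the backlog burn-down prediction honest.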

One of the reasons this problem can arise is a simple lack of understanding. That’s easy to correct. The more challenging problem is when teams are being evaluated by velocity: why did your velocity drop during the last iteration? Why isn’t your velocity increasing since we added Bob to your team? That thinking is missing the point. The value of velocity is improving your ability to predict when a product will have a certain set of features completed. It is not a means for measuring productivity.

12. Toxic team member
The toxic team member comes in many guises: antisocial, incompetent, arrogant, slacker, anti-agile skeptic or just general poor team fit. Irrespective of the particular flavor, one bad apple can cause a lot of damage.

I believe everyone deserves a chance to change. Some frank feedback may help. But, ultimately, if someone’s not working out they need to be…redeployed elsewhere.


Wednesday, April 20, 2011

Scrum and Telecommuting -- our experiment

Back in December last year I wrote about a pending experiment at work. Half of our scrum team already worked from home, and we wanted to see if things worked better or worse for us if the other half all worked from home too.

After convincing our management to let us try this we targeted starting this in the first quarter of 2011. We planned to try this one sprint at a time, considering at each retrospective if it was worth continuing and what changes we might need to make to make the next sprint better.

As things transpired, we found ourselves keeping this going for the entire first quarter of 2011, and we’re continuing to do it still. As you might have deduced, we thought it worked out pretty well for us.

At the end of the quarter, I conducted a couple of surveys; one for the team members and one for people outside the team that we interact with. In this post I’m going to share a few of the insights from our experiment and what the surveys revealed.

How was it for the team?
Let’s start with a quick recap of the team composition. There are four people already working from home around the Denver, CO area – three software engineers plus me as scrum master. Then we have three people in Billerica, MA – our product owner and two testers. Finally we had (yes had, but have no more – they moved onto another team during the quarter) two further testers in Hyderabad, India.

There were a couple areas I polled the team on that I felt would help capture a picture of how things worked. The first was how our meetings ran now we were all working from home. Here’s a chart of the responses:



As you can see, we mostly felt that the meetings were no worse off than when half the team was office based, and in fact a few people felt they ran better. That they would run better was certainly our hypothesis going into this, based on the idea that communication would suddenly be equitable for everyone. That is to say we would now all be on a phone and there’d be no more side conversations, people failing to talk “to” the phone etc.

The one interesting point that shows up in the chart and is fairly intuitive too is that the sprint review meeting wasn’t universally considered better in a fully decentralized set up. I was actually surprised more people didn’t feel it was suboptimal, and certainly if you skip ahead to the chart where we asked people outside the team how they found things you’ll notice a few people indicated that they didn’t feel meetings were always better. I don’t know for sure, but I suspect the review more than anything might have been the meeting that triggered that feeling as the others seemed to work well.

The second major question I asked was about getting things done. Here’s the chart for that:


You can see here a lot of what you might predict:
  • it really wasn’t that much of a problem getting a hold of people and working with them (huzzah for IM and the telephone)
  • the ability to focus and get work done was better
  • dealing with hardware and software issues was a little trickier for a couple people
I think there are two important messages. Firstly, those who think you need to be in the office to get work done forget about how much the distractions and interruptions of cube-ville can be detrimental to work. Secondly, you need to be of the “self-serve” type and/or have a good helpdesk that can work with home based users to get through those times when you need help with hardware and software issues.

In addition to structured questions people could provide freeform comments. Here’s a sample of what people on the team had to say:
"Very good experiment, worth continuing, I can always come to office as needed." 
"Meetings are more efficient: less side conversations and greater team participation." 
"Work life balance was much better in this case."
The final question I asked of the team was whether they would like to carry on with this. Not surprisingly the answer was mostly yes. Those already working from home probably felt less excited about it all – they don’t have a choice, they’re working from home whether they like it or not.

How was it for everybody else?
Moving on to the survey of people outside our team, one of the first questions I asked was about the respondents’ own inclination for working from home. Here are the responses in chart form:


We’re not dealing with a huge number of respondents here, but still you can see that the majority of people would be interested in some kind of arrangement involving working from home, even if it wasn’t all the time.

Of course the main purpose of surveying folks we work with outside our team was to find out if things worked well from their perspective. After all, it’d be no good if our team was thrilled with this but everyone else we interacted with thought it royally sucked.

Here’s a chart showing how others found things:


You can’t please all of the people all of the time. As you can see, for three out of the four questions we had a few folks that felt things weren’t an improvement. But as evidenced by the tall yellow bars, for most people it was just fine. This doesn’t totally surprise me…if you look at the communication behavior in an office, people don’t have to be very far away from each other before they resort to IM or email. Put another way, although it can be useful to be able to go and talk to someone face to face, often that is not what people do even when they are in the same office.

The Problem
Yesterday I read an article from Fortune magazine about a survey conducted by technology career site Dice.com.

What the Dice survey clearly showed is that there is a lot of talent in the technology field that would happily work from home. From our small sample above this seems to be borne out for my organization too. However, Dice notes that fewer than 1% of the jobs offered on their site offer telecommuting as an option to candidates.

The article went on to point out that, given the competitive marketplace for talent, one way employers could stand out would be to consider offering people the option to telecommute. The problem I think is that so many managers are scared of this option, seeming to believe that people need to be kept in an office so they can be checked up on, managed and so forth.

Consider, for example, the following feedback I got from the second survey:
“This survey is not fair, because I don't have a baseline for how well the team performs working "centralized" versus "non-centralized". I will say, I am not happy with the level of visibility I have into how much actual work is being done. From my perspective.. You could all be at the beach all day... with one person working feverishly the day before something is due.. or you could all be working hard.. but in different directions... My gut is that I would like to see more throughput from the team regarding what is actually released.. but from my vantage it is very difficult to say if working remotely helps or hurts.”
Of course one could argue against this (I leave that as a trivially simple exercise for the enlightened reader) but the fact remains: many people still think this way. Convincing those that do that managing by outcomes rather than direct observation and the occasional beating pep talk is not only possible but preferable, may just be too hard.

This attitude may also help explain one question which I found the responses to puzzling. Consider the below, which shows the answers given to the question, "Should our team continue working from home?"


I was surprised by how many people answered "If they feel they must." That seems at odds with the number of people that expressed an interest in working from home themselves, but perhaps, subconsciously, people still worry about the true feasibility of it.

What I can tell you, as someone who's worked from home for over three years, and now have done so with a complete team all at home, is that it can work quite OK. Is it as good as all being in an office together? Probably not. But then many teams don't have the option of all being in the same office anyway. And there's a number of potential benefits to having a team distributed like this: attracting the right talent, providing people an appealing and innovative benefit, a good work/life balance and reduced facilities costs to name just a few.

Sunday, April 17, 2011

Excuse me, I'd like to...ask you a few questions


I once worked with somebody who had the following pinned up on their wall for easy reference during the many teleconferences in which they participated:
Use questions for clarity:
  • What?
  • Why?
  • How?
  • When?
  • Where?
  • Who?
  • Which?
Not a bad idea that, a little prompt to make sure you really understand a topic. I don’t have a reminder up on my wall but I always like to try and run through these when appropriate. (Well, and when I remember.)

I was thinking about these in the context of agile software development, and how they can help various roles and guide activities. Let’s look at each of them in turn. I’m going to approach this from the perspective of a team following scrum, but I think the general principles apply to all approaches to software development.

Where
Is everyone office based? In the same office or spread across more than one? Where is the customer? And other stakeholders? Some arrangements are better than others. The answer to the WHERE question is definitely important, but usually something few agile software development projects can influence very much. Most folks would agree that the ideal set up is collocation with an onsite customer. If you can get that…awesome. If not, recognize the shortcomings and know how to mitigate the problems they might present as best you can.

Who
The WHO question can apply to a number of things. Who is on the development team? Who is the customer? Who are the users? And so forth. From the scrum perspective it’s really important to have identified who the product owner is. The product owner is such a key role for successful scrum because of their product visioning and feature prioritization work.

Which
I struggled initially with WHICH. It has less obvious utility than the other questions. I think primarily though it can be thought of as your prompt to consider prioritization. Prioritization is absolutely key in scrum. Which projects should we fund? Which themes do we want to target for this release cycle? Which features are of highest value? Which features are risky or are we unsure how to do?

What
Everyone involved in a software development effort should be clear on WHAT they’re building. Initially WHAT is primarily the province of the product owner. They start a release cycle by consolidating input from the customer, other stakeholders, the team, their own ideas and provide a statement of what the product needs to do.

Why
The WHY is important, possibly the most important question of all to my mind for a couple of reasons.

Firstly it lets you check that a feature is really needed and worthy of a place in your product backlog. With something akin to the five whys technique you should be able to explain the necessity for every feature in terms of one of the following business values:
  • Protect revenue
  • Increase revenue
  • Manage (reduce?) cost
  • Increase brand value
  • Make the product remarkable
  • Provide more value to your customers
Product owners should be very clear on why anything in the backlog is needed. If you as product owner find you can’t tie a feature to one of the values above then you might want to ask yourself if you’re sure it’s needed.

Secondly, once the whole team understands the WHY of a feature we can harness everybody’s input for some creative thinking. Sometimes people think they need a feature because they assume certain things can’t be changed, or they don’t realize there are other ways to approach a problem they are trying to solve. Perhaps an existing feature can be adapted to meet their needs.

How
The question of HOW is almost as important as WHAT and WHY. Those three together form a kind of virtuous trio.

HOW is a question that cannot be answered by any one single role on a scrum team. Everyone has a contribution to this question. As the saying goes, there’s more than one way to skin a cat. And with software that definitely applies. Figuring out the possibilities and weighing up the pros and cons and trade-offs involved will help lead to the best possible features implemented in the best possible ways.

Beware product owners that dream up the HOW as well as the WHAT and the WHY. Unless they’re from a development background it’s likely that not all of their HOWs are feasible or optimal.

When
Often the WHEN question is the one most businesses, customers and stakeholders obsess over. To some extent I think this is almost impossible to escape, although it is possible to put more emphasis on WHAT by stabilizing the WHEN with a fixed duration release cycle (I wrote about that here: http://www.jonarcher.com/2010/10/delayed-gratification.html).

Once the team understands WHAT, WHY and HOW they can estimate the effort involved. If the team has a stable velocity this then allows the product owner to be quite predictive in answering WHEN and they can move features up and down the stack ranked backlog to accommodate the needs of the business as best as possible.
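As a back-of-the-envelope sketch of that WHEN projection (the numbers are invented purely for illustration), the arithmetic is just points remaining divided by average velocity:

```python
import math

def sprints_remaining(backlog_points, recent_velocities):
    """Project sprints left: points remaining divided by average velocity."""
    average_velocity = sum(recent_velocities) / len(recent_velocities)
    # Round up: a partly used sprint still occupies the calendar.
    return math.ceil(backlog_points / average_velocity)

# 120 points left in the backlog; the last three sprints delivered 18, 22 and 20.
print(sprints_remaining(120, [18, 22, 20]))  # → 6
```

Averaging over recent sprints rather than the whole project history keeps the projection responsive to the team's current pace.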

All righty then
Having reviewed all these question-words there are two key insights for me.
  • Certain roles on a scrum team are better positioned than others to answer certain questions (POs have a natural affinity with WHAT and WHY; developers with HOW)
  • Nobody on a scrum team has the exclusive preserve of any particular question: by working collaboratively and with everyone empowered to ask and answer any of these questions the talents of everyone can be harnessed to make better software.

Wednesday, April 13, 2011

What I got out of Mile High Agile 2011

This is a slightly longer than usual, rambling, somewhat whimsical post. Sorry about that. There is some good stuff toward the end though. Well, I think so anyway.

Last Thursday I attended the inaugural Mile High Agile conference put on by Agile Denver. The day commenced at an unnaturally early hour with me hurtling down the mountain to Golden and then doing battle with I-70 and I-25. Perhaps due to the early travel time the traffic was lighter than the nightmare I had envisaged and well worth it compared to my regular commute (from bedroom to basement via kitchen) for what I got out of the day.

What follows are my thoughts on what I got out of the conference, not all of them directly related to agile.

Earlier in the year when I first heard about Mile High Agile I volunteered to help with a couple of items – setting up the website along with another volunteer and the corresponding registration and payment system. It’d been a while since I’d done any kind of web content creation but we elected to make things really simple by just using WordPress with a suitable theme. Although this obviously meant there were quite a number of restrictions (especially since we weren’t hosting the WordPress blog ourselves) it did mean we could get going easily and quickly. Given the chance to do it again I’d almost certainly stick with WordPress considering the simplicity of our needs, but hosting it ourselves would be well worthwhile: that would likely let us get around the problems we faced by being able to install plug-ins, which you cannot do when hosting at wordpress.com.

For the registration and payment side of things we used EventBrite. This was a service I hadn’t used before (at least not in the role of a conference organizer) and I have to say it’s a pretty impressive offering. While our needs were simple (a handful of ticket types, a few dozen discount codes, simple reporting and so on) EventBrite just seemed to work without effort. Everything we ever wanted to do (bar one thing) seemed to have already been anticipated and worked well. In addition, I had occasion to use their support services twice; they were very good and incredibly quick to respond.

In the couple months leading up to the conference last week I really enjoyed working with the folks that brought this together. My contribution was pretty small compared to others, but it was fun to be part of it and work with people really intent (passionate is so overused, no?) on making this happen. It was a real joy to see folks pitching in no matter their role offering useful ideas and commentary as things unfolded. I wish I could work with more people like that more often.

At the beginning it was unclear how many sponsors we would secure or how many tickets we could hope to sell. Due to a truly outstanding effort on the part of several people we exceeded all expectations. Just check the number of sponsor logos up on the site and note that we sold nearly 500 tickets. Not bad for such an incredibly tight timeframe and the first ever Mile High Agile conference! If you build it they will come. So long as you get the word out and market it hard too.

There were a few things that struck me after the day of the conference which had nothing to do with agile but were interesting insights to me. I’ve been working from home for quite some time. Although I thought I had quite a rich interaction with people on my team via phone, email, IM etc. it was nice to suddenly be in the fray of lots of tech people and talking with them. I was part of the gang that stood behind the registration desk, which was insanely busy for an hour or so as the best part of 500 people arrived to check in. It reminded me of a part-time job I had way back as a teenager, working in a home computer store on Saturdays, which were always insanely busy. I loved it. It definitely makes me hanker for more local, in-person interaction.

Another thing that was weird was that driving into Denver didn’t seem too bad. I’ve always found it less than desirable in the past, but after a series of trips into the city over the last year perhaps I’ve finally got a decent sense of how it’s all connected up in my head and feel more at ease with it. For some reason I’m feeling the urge to visit Denver more and more.

The final odd little insight was how nice it was to be surrounded by people who were enthusiastic about various facets of agile software development. Although you might have different views and experiences from others on details, there was this common thread of “getting it” (“it” being agile) and wanting to “get it more.” Perhaps meeting people whose viewpoint reinforces many of your own beliefs is nothing but an ego stroke, but it made me think how cool it would be to work with more people like that.

Turning to the sessions I was able to attend three, all of which were in the technical track. I really enjoyed them. I wish there was more – it’d be great if next year were a multi-day conference :-) It also made me think it would have been nice to present. I’m not entirely certain what on, but I feel like I’ve learned a lot over the last couple of years and it’d be fun to share some of that beyond just writing about it on my blog. Maybe next year.

The first session I attended was Agile the Pivotal Way given by Mike Gehard of Pivotal Labs. I confess I hadn’t really realized that Pivotal Labs did anything other than make Pivotal Tracker. Turns out that really their business is building software for people, often startups, and they seem to really approach work in a way that makes a lot of sense to me.

Mike mentioned that they pair program everything. This reminded me how much I miss pair programming. I’ve had two serious periods of pairing in the past and I’m pretty sure it’s the best approach for a lot of software development work. The first time we didn’t call it pair programming because it was just my 12 year old self and my friend hacking away on a Commodore Vic 20 writing games in BASIC, PEEKing and POKEing the screen to make games that rivaled Pac Man. Almost. But it was great. The other time was much more recent. It’s how I learned Java courtesy of a colleague pairing extensively with me. It’s also how the first release of one third of our product suite got built incredibly quickly, by just me and that other guy. If I’m going to be programming in the future I hope it’s pair programming.

Mike also mentioned how their “teams” were small, and unless their clients insisted there were no independent QA or tester roles and no PMs. They (the pairs of developers) did this stuff themselves. Thinking back again, the best software I’ve ever written was when I was completely immersed with clients and talking to them directly and I had to make sure it worked as it should. No middle man (or woman) interpreting customer needs and nobody else to leave finding bugs to, and since I don’t like my things to fail I sure as heck tested the heck out of my work.

All in all, I wish I knew a lot more about Ruby and Rails because they sound like a totally cool place to work. I’m not sure if Mike’s talk was meant to be a stealth recruitment exercise, but it could certainly work as one.

The second presentation I attended was from Chris Powers of Obtiva. He was talking about test driving front end code, and by front end code he specifically meant Javascript. It’s a few years since I’ve dabbled with JS and I probably won’t be dabbling in the near future, but it was still interesting, and credit is definitely deserved for the live demo and coding. The tool Chris showed was Jasmine, and while I have nothing else to compare it to it seemed pretty neat; I definitely liked the BDD-esque feel and the ability to express both feature and unit tests. If I were to be doing any JS in the future I think I’d be checking it out.

The third and final session I attended was given by Paul Rayner and was entitled Strategic Design using DDD. There was some thought-provoking stuff in this. First of all there was the Purpose Alignment Model by Niel Nickolaisen. It’s a simple idea whereby you can map out work onto a graph with four quadrants like so:

[Image: the Purpose Alignment Model’s four-quadrant grid, with axes for mission criticality and market differentiation]

Of particular interest here is the notion that those things which fall into the top right quadrant are the ones you want to put your best effort into – you want to get the best people working on this and you want to really pay attention to the design of this stuff. By contrast, those items that fall into the lower left quadrant do not merit that kind of effort.

If I understood Paul correctly, he was advocating the use of the above to think through just how much effort you put into the design of various components of a system. That seems like a good idea. But what also struck me about this from an agile and scrum perspective was how such a simple thing could be used to facilitate feature prioritization in product development. Now, I’ve seen before the idea of having the stories (or themes or features or what have you) up on post-its on a whiteboard, and allowing people to move them around, placing those they see as highest priority at one end and lowest priority at the other. But this would overlay a slightly more rigorous evaluation of things. If, when using the open whiteboard technique, someone places an item over on the high priority side, I am not immediately sure why. If the whiteboard was divided into four quadrants like the above, I would immediately see if they thought it was mission critical, a market differentiator or both. I think that could be powerful.

The second thing I got out of this session (actually a little afterwards, after Paul had blogged on the topic) was that the popular idea of “emergent design” is great if you can do it well. But how many teams really have the design and refactoring skills to pull this off? How many can spot good and bad directions in which to emerge? Having a framework to first identify which components of a system merit more well-crafted design than others, along with a set of good design skills to apply, would indeed help I think.

The third thing Paul talked about which stuck in my mind was his "billboard test." The idea here is that if you take something you're working hard on and imagine it up on a billboard is it really the message you want conveyed? Consider his example: "Our logging framework kicks butt!" Frankly, unless your company actually sells logging frameworks realizing that you're putting this much effort into logging ought to give you pause for thought. Like say for example "Our test automation framework kicks butt." Or whatever. (Inside joke)

And that’s it. The trip home was quicker than I thought it would be too. I was braced for the highway to have turned into a parking lot, but things zipped along nicely.

Thursday, March 17, 2011

"If it ain't broke, don't fix it" vs. "Continuous improvement"

I don’t really remember when I first heard the phrase, “If it ain’t broke, don’t fix it,” but I’ve used it plenty. Upon reflection I’ve mostly used it to avoid doing boring or unappealing work – endeavoring to cast undesirable requests as wasteful and unnecessary changes.

Thinking about it though, there is a fair bit of wisdom in the saying. Researching its origins turns up a surprisingly recent (to me anyway) provenance. Apparently one T. Bert (Thomas Bertram) Lance, Director of the Office of Management and Budget in US president Jimmy Carter’s 1977 administration is often accorded recognition for first use (although there appears to be a reasonable case for it already being in circulation in the Southern United States). He is quoted in a newsletter as follows:
Bert Lance believes he can save Uncle Sam billions if he can get the government to adopt a simple motto: “If it ain’t broke, don’t fix it.” He explains: “That’s the trouble with government: Fixing things that aren’t broken and not fixing things that are broken.”
Hard to argue with that, eh?

Having used (and possibly abused) this phrase for years, I was given pause for thought yesterday when it was tossed my way. It’s not like it’s the first time that’s occurred, but it was one of the less regular occasions where I thought, “OK, maybe what I’m trying to change isn’t broken per se, but it could be better.”

For the last few years my work has involved a lot of change. My organization has transitioned (indeed continues to transition) from a very rigid, waterfall style process of software development to an agile one. When we started out we had a pretty naïve idea of what this entailed. We were already doing work in iterations – kinda – by providing a new build to our QA/testing people every two weeks. We made a series of small changes over a relatively short period of time (I wrote about those here, here and here) and ultimately we settled upon Scrum as our specific flavor of agile.

You don’t have to spend long with Scrum to realize that one large facet of it is pushing for something a bit different to the idea of “If it ain’t broke, don’t fix it” (which henceforth I’m shortening to IIABDFI). It’s pushing continuous improvement, or Kaizen to borrow the rather snazzy Japanese word for it. This notion is baked right into Scrum, with every sprint having a retrospective at the end where the team looks for ways in which they might improve things.

As much as I latched onto and used the IIABDFI idea, I find Kaizen considerably more appealing. IIABDFI is mostly a crutch for me to skirt doing boring work. But Kaizen appeals to my inner perfectionist. Anyone who’s worked with me can probably tell you that I have a tendency (likely somewhat annoying) to always critique the status quo. I can’t seem to help seeing things that we could change that I think would make everything just a little bit better. This characteristic hasn’t served me too badly in the past, and when I’ve worked with people of a similar bent it’s created quite a good dynamic, with everyone interested in continuous improvement bringing in great new ideas and accepting them from others. There can be moments of trouble if two well-meaning people hold strongly differing views on what constitutes “better”, but mostly things are obvious enough as improvements that a little debate is all it takes to agree upon trying them out.

I’d never really thought about these two ideas juxtaposed though, until today. Kaizen really resonates with me. But, as IIABDFI says, changing things that aren’t broken is indeed wasteful. So, how to reconcile the two?

Well, I think it’s all about degree. If there’s a potentially worthwhile payoff, and if it’s reasonably cheap to try, maybe it is well worth changing things that aren’t especially broken. You might learn through trying your new approach that what seemed reasonable before was, if not broken, ripe for substantial improvement. And if it doesn’t pan out? That should be OK. Trying new things involves some risk. Sometimes things don’t work and you need to revert to the original way of doing things. Unless you’re drastically affecting quality or productivity or some other key facet of software development, I’d argue that trying new things should be encouraged.

If, on the other hand you’re thinking of a disruptive or expensive change, and the outcome is uncertain, and what you’re already doing is seemingly OK, perhaps that’s a red flag. Perhaps that’s when you need a slap around the head from IIABDFI to remind you not to get carried away.

What do you think?

Saturday, February 5, 2011

How to completely fail at BDD

Are you interested in introducing BDD to your team? Don’t try to do it like this, under these circumstances. Learn from my failure.

Having experienced ambiguous requirements specifications and inadequate testing of the applications I’ve worked on during my career as a software developer, when I saw BDD I thought I saw the future. In particular the Gherkin based implementations (Cucumber, Specflow) with their Given/When/Then descriptions of scenarios for product features that could lead to executable specifications blew me away as full of potential.

I was incredibly eager to try this stuff out on a real project.

I played around a bit in my spare time with Cucumber. I learned how features were specified in plain text feature files and tests related to them were implemented in step definitions. You match the scenarios you describe in your feature files to step definitions that exercise your application code using regular expressions. I’d never really used regex that much so was initially a bit concerned that this would be a problem. It turned out though that with a small grab bag of regex tricks you could do most things you needed.
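Conceptually the matching is simple enough to sketch. This is an illustration of the idea in Python’s re module rather than Cucumber’s actual Ruby implementation, with an invented step pattern, but the mechanism is the same: a step line from a scenario is matched against a step definition’s regex, and the captured groups become the step’s arguments.

```python
import re

# A step definition's pattern, of the kind you'd write in a step file.
STEP_PATTERN = re.compile(r'^I open the image "([^"]+)"$')

def match_step(step_text):
    """Return the captured arguments if the step matches, else None."""
    m = STEP_PATTERN.match(step_text)
    return m.groups() if m else None

print(match_step('I open the image "scan.jpg"'))  # → ('scan.jpg',)
print(match_step('I close the application'))      # → None
```

The quoted-string capture group (`"([^"]+)"`) is one of those grab bag tricks that covers a surprising number of steps.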

One of my first real problems I ran into was that I was looking to try BDD on an upcoming C# project. That put me in a minority group. I don’t entirely pretend to understand why, but much of the C# ecosystem seems pretty closed off, lacking in the use and acceptance of rich open source projects that I was used to in my prior life working on projects on the Java stack. (As an aside, I even notice differences when I interview C# programmers compared to Java programmers. The way they think. The things they have and haven’t read. They typically haven’t done unit testing, typically haven’t used any open source software. In general, their view of the software development world is quite shuttered. I know this is a gross generalization, but it’s noticeable. It’s a shame, because C# seems like a pretty cool language…)

I quickly found Cuke4Nuke which allows you to implement your step definitions in C# rather than Ruby (the norm for Cucumber) and Cuke4VS which would give you syntax highlighting etc. when editing feature files in Visual Studio. Here I encountered my second couple of problems: Cuke4Nuke wasn’t ready for .NET 4 and Cuke4VS even to this day doesn’t work on VS2010. Of course our project was using .NET 4 and VS2010. Fixing Cuke4Nuke wasn’t too hard; I fumbled my way through that OK. Fixing Cuke4VS though was beyond me. Well, beyond the amount of effort I wanted to put in anyway. I’ve not done much with C# and nothing with Visual Studio add-ins like this. I sorta got it working, but nothing I could push back as a quality patch to the Cuke4VS project.

Then I discovered Specflow.

Specflow has some distinct goodness:
  • It’s Gherkin compatible, which means you describe your requirements in terms of features and corresponding given/when/then scenarios. While I lack the experience that would come from doing substantial real work in this, it intuitively feels “better” to me than the table based approach taken by Fit, Concordion etc.
  • It’s good to go with VS2010 and .NET 4.
  • No Ruby required. With Cucumber, even though you can write your step definitions in C# courtesy of Cuke4Nuke, you still need a Ruby installation to parse your .feature files and call out over the wire to your C# steps. I’m not the first to observe that the .NET/C# community is kind of reluctant to install “weird shit” like Ruby that “doesn’t come from Microsoft”.
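To give a flavor of what those C# step definitions look like, here is a minimal sketch of a SpecFlow binding class. The class name, step text and method bodies are invented for illustration; you need the SpecFlow package referenced for this to compile, and in practice the Then step would assert with your test framework of choice.

```csharp
using TechTalk.SpecFlow;

namespace MyProduct.Specs
{
    [Binding]
    public class ImageSteps
    {
        private bool _imageOpened;

        [Given(@"the application is running")]
        public void GivenTheApplicationIsRunning()
        {
            // Start up or reset the application under test here.
        }

        [When(@"I open the image ""(.*)""")]
        public void WhenIOpenTheImage(string fileName)
        {
            // The regex capture group arrives as the fileName parameter.
            _imageOpened = fileName.EndsWith(".jpg");
        }

        [Then(@"the image is displayed")]
        public void ThenTheImageIsDisplayed()
        {
            // Assert against the application here, e.g. with NUnit.
            if (!_imageOpened)
                throw new System.InvalidOperationException("Image was not opened");
        }
    }
}
```

SpecFlow matches the scenario steps in your .feature files to these attributed methods using the same regex-to-step-definition idea as Cucumber.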
Sadly, at this point in time it also has some challenges:
  • Documentation. The main problem is the documentation, or lack thereof. Pull down the “Specflow guide” PDF and you’ll find a clear work in progress, incomplete and missing sections, editing notes etc.
  • Examples. Github hosts a SpecFlow-Examples project, but it needs some TLC. There are some reasonable examples in there, but quite where they are is not immediately obvious. All the action is inside ASP.NET-MVC/BookShop, not the features folder. Github is also potentially another barrier to entry for people. I don’t know how many C# developers are familiar with Github, but none on my team are.
  • VS silliness. Not Specflow’s fault, but I had hoped to use Visual C# Express for the QA people on my team to create .feature files. Sadly Microsoft has seen fit to not support Add-ins so it doesn’t work. Another barrier to adoption, at least for my team.
  • Stability. I never actually experienced this myself, but two of the engineers on my team said they had stability problems that they attributed directly to Specflow. It “mangled the project file.” Things “wouldn’t compile and it was complaining about Specflow stuff”. I never really got great details on this, so it’s hard to tell how much, if any, was directly down to Specflow, and how much was me with rose-tinted glasses seeing no problem as attributable to Specflow while others, less interested in a new technique, saw every problem as attributable to it.
In general, you’re going to need a lot of patience, a genuine desire in a majority of the team to do something quite radically different to “normal” and you’ll need to draw a lot of information from the Cucumber community and then be comfortable using the mailing list for additional support. I’m actually OK with this, but it can be a real barrier to entry for a lot of people.

There were also some serious non-technical challenges; people skills and organization/political in nature.

Process and regulatory barriers.
My team makes software that is subject to 21 CFR part 11 regulations. That’s stuff that tries to help make sure any pharmaceutical drugs you take are safe. That kind of industry cascades regulations through to any software related to clinical studies to make sure the software is of high quality. Our operation can be (and frequently is) audited by our clients to make sure we comply and can (though rarely is) audited by the FDA. Having devised a “winning formula” that gets us successfully through audits people are naturally reluctant to make changes in anything that would get seen by an auditor. Having your “requirements” in .feature files alongside source code then is a serious brown trouser moment for some people.
  • “They could end up seeing our source code!” Yes, believe it or not, a process designed to make sure the software involved in clinical trials is of high quality doesn’t actually often have people look at the source code. They prefer Word documents, signed in duplicate (or more).
  • “We need to show them a traditional requirements specification with numbered requirements that we can trace through to test cases.” Because that’s the only possible way to do this. Of course.
  • “We have to review and sign off the test scripts before they’re executed, and prove that we have done so.” I can see how this made sense 20 years ago when test scripts were a documented set of steps a manual tester followed. But what does this even mean when your “test script” is a plain text .feature file and a corresponding set of step definitions in C#?
I actually have to commend the internal quality expert I spoke to for being very receptive to new ideas and trusting my assertion that I could probably generate a traditional style requirements specification and traceability matrix that mapped features to test cases from .features files. She was less resistant than some people on the team.

Skills Deficits
The people on the team responsible for testing the product and generating the nice bundle of validation paperwork those who audit us like to see primarily have a background in testing software manually. I’d looked over their manual test scripts before (long Word documents with lots of boilerplate and then long rambling detailed steps with expected outcomes and places to mark discrepancies) and got the impression they could be better. They seemed to meander quite a lot, with a lack of clarity over precisely what one was testing. I do not know how anyone could review them and spot test cases that were missing. I’d even taken a manual test script for a fairly trivial feature that ran to thirteen pages and felt pretty smug about being able to reduce it down to one short .feature file.
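For flavor, here is the kind of short .feature file I mean. The feature and steps are invented for illustration, not the actual contents of that thirteen-page script:

```gherkin
Feature: Open JPG images
  As an imaging assistant
  I want to be able to open JPG images
  So that I can process them

  Scenario: Opening a valid JPG
    Given the application is running
    When I open the image "scan.jpg"
    Then the image is displayed

  Scenario: Opening a corrupt JPG
    Given the application is running
    When I open the image "corrupt.jpg"
    Then I see an error message explaining the file could not be read
```

Each Given/When/Then line is a concrete, reviewable test case; a missing scenario is far easier to spot here than in thirteen pages of prose.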

Some people on the team had asked me whether I thought those folks could use Specflow. Could they write good .feature files and the corresponding step definition classes? It was a fair question, and I was pretty sure some might struggle. That said, we already had a test automation tool called Test Partner that involved folks writing things in VBA, so if they could do that, could this be much harder? And I didn’t see this as a reason to indicate that Specflow and BDD was wrong. If BDD is a good idea it’s a good idea. Sure, maybe we needed to train folks or modify the composition of our team, but that was all.

But oh did I underestimate just how much they would struggle. At this point in time I wasn’t sure if Visual Studio Express would work with Specflow. So I suggested they just try. There then followed quite the flurry of emails. People who ostensibly test things for a living were unable to reliably determine whether they could use Visual Studio Express or a full featured version.

And that was just the installation.

Then we got into writing a few .feature files. The first one wasn’t…very good. This was a little disappointing as I’d circulated a number of links to articles on the web about writing “good” features but, OK, it was a first attempt. A couple of the software engineers very graciously (much more gracious than I’m being right now, but I’m currently enjoying my second Left Hand Brewing Company’s Fade to Black) pointed out some things they might do to improve them. Sadly subsequent attempts at writing .feature files were no better. It was as though the advice offered was ignored or not understood.

We didn’t even get near them implementing a step definition. I really have to wonder what they’ll be putting together in Test Partner in those VBA scripts it uses. Thank goodness we have unit tests.

Incumbent Test Framework
As indicated above, we have available (actually we’re pretty much obliged to use) a product called Test Partner. Of course it’s only for QA people, especially given the per-seat cost. It’s all kinds of horrible so far as I can tell after looking at it. It doesn’t even make it easy to place your test scripts under source code control, preferring instead to keep them inside a database (but of course…) and though you can export them (base 64 encoded, naturally) it’s hardly ideal.

In addition to the original Test Partner investment, several people have spent a not inconsiderable amount of time and money building an additional layer (or framework) on top with functionality for such things as logging and managing collections (*cough* reinventing the wheel *cough*).

Of course the political reality attached to this state of affairs is that any other pretender to the crown has got its work cut out.

Cultural suitability
My love affair with computers started back in the early 80s around age 10. A friend had an Apple IIe computer and we played Bouncing Kamungas and other mad games, we entered epically long listings that sometimes worked. The school had a Commodore PET. I learned to program simple games in BASIC. I got a Vic 20 and wrote more games, even sold some to people at school. Then later I got an Amstrad CPC 6128 complete with its wacky 3" disk drive and after that an Amiga. I worked in an independent computer store and wrote code for their business customers at age 16. I went to college and learned more. I got a “proper” job. It’s waxed and waned ever since. There’s still nothing quite like the fun of programming I think. But the reality of a corporate job can take the shine off that – it’s not the same as sitting there with your best buddy late at night debugging a game you’ve spent all day writing. So throughout my professional career my enthusiasm has varied along with different projects, languages, technologies, jobs and bosses.

Nevertheless, several years back I took a leadership position, got interested in agile and discovered a new found passion for reading about and playing with new technologies and approaches to building software. I joined twitter, found a community of thought leaders and everyday practitioners, started reading books and blog posts on all of this, got excited, started a transition at work. Mostly successful. But not successful enough I guess. I’m still one of the very few people that does that kind of thing. And when you’re perhaps the only one on a team that reads and writes blogs, uses twitter, reads books, wants to try doing things differently it can be tough. You're seeing so much that other people aren't.

The fact is I’ve been having to sell BDD to almost everyone. Maybe one person really “gets” the potential. A few more are fairly excited about it. But it's so far outside their comfort zone.

When I first managed to get some consensus that we should try this I hadn’t done enough with Specflow myself, almost all my experimentation up to this point was with Cucumber. I wasn’t worried though, the guy that was going to try it was an excellent engineer. I was more than a little surprised then when almost every hour a new “issue” was presented on why this couldn’t possibly work. I hadn’t seen that coming.

Culturally, my current team just isn’t ready or interested in something like this. And I have to accept that I can’t necessarily change that. Maybe I should have done a lot more work with Specflow myself first and had good examples to show people. I'm not sure.

Blah, blah whine! WTF am I gonna do about it?
I don’t know yet. Not for sure. I’m thinking maybe I can contribute to the Specflow project and help out with the documentation. This isn’t entirely appealing though if we won’t even be using it. So maybe I’ll use it on the sly, even if we’re not “officially” using it for our project maybe I can keep trying it out. And use that experience to develop something like the documentation that people on my team felt needed to be there.

If you have any suggestions, let me know...

OK, self-indulgent rant over.

Wednesday, January 12, 2011

Shouting is not an agile value

The Agile Manifesto is well known:

We are uncovering better ways of developing
software by doing it and helping others do it.
Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on
the right, we value the items on the left more.

Related, but perhaps less well known, is the idea of Agile Values. Their origin lies in Extreme Programming, with five recognized values as of the second edition of Kent Beck’s Extreme Programming Explained.

In brief the five values are:
  • Communication – amongst all on the team
  • Simplicity – of design, emphasizing ideas such as implementing the simplest thing that will work and the acronym YAGNI (you ain’t gonna need it)
  • Feedback – feedback from automated tests, from the customer and from the team
  • Courage – to keep things simple, to implement only what is needed today, to refactor and throw away code that’s no longer needed
  • Respect – for other team members, for the customer

The fuller definitions can be read on Wikipedia (or of course in Kent's book).

One blindingly obvious thing you’ll notice is that shouting is not an agile value. Never was, never will be.

Well duh!

So why do I mention this? Oh well, you know. It’s not like there aren’t some managers who use that as a “technique” for trying to get what they want. It doesn’t necessarily have anything to do with agile, although regrettably I’ve seen it (and once done it myself) in response to somebody just not getting our new technical practices that came along with the agile adoption.

In my case it was due to extreme frustration with somebody who seemed almost perversely reluctant to commit code to the source control system. He was passing around changes to other colleagues by emailing them Java source code. This wasn’t the first difference of opinion about developing software I’d had with the gentleman in question. He harbored some curious ideas and was not enjoying our changeover to Scrum.

I confess I rather lost the plot as we were there in the office late at night desperately trying to make sure the team made their sprint commitment. I blurted out rather loudly “What the $#@! are you doing?” I did quickly apologize and later, after we’d got the job done, we went to dinner and made up. I’m not proud of that moment and it’s not a “technique” I ever intend to use again.

Nonetheless, there are folks out there of a more strident nature inclined to use it far more frequently. I don’t know if their threshold for frustration is lower than mine or if they just need an anger management class or what the deal is. Suffice to say though, as an approach to trying to coach staff into doing what you want it’s not a winner. Indeed it’s a completely toxic behavior. Even if Mr. or Ms. Shoutypants has great ideas, people are very unlikely to ever “hear” them because they’ve written him or her off as an aggressive, belligerent asshole.

Patience. Now there’s an Agile Value. In fact it’s a key one I believe. Stimulating, inspiring, managing and maintaining the change to agile practices takes effort. It takes leadership. And it takes patience. And so I propose that we add to communication, simplicity, feedback, courage and respect the value of patience.

Measuring the hypotenuse

Consider the following right-angled triangle:

[Figure: a right-angled triangle with legs of length 3 and 4 and the hypotenuse labelled x]

What is x? Not a trick question.

Well it’s 5. Obviously.

But how did you know that? Well maybe for this very simple example you happen to just remember that 3:4:5 are the proportions of a right-angled triangle. But given a non-trivial example with less straightforward lengths for the two known sides you would doubtless have employed Pythagoras’ Theorem. Certainly that’s what most people would expect to see if they presented you with such a problem.
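For the record, the theorem makes it a one-liner. A quick sketch (Python here, but any language would do):

```python
import math

# Pythagoras: x = sqrt(a^2 + b^2), where a and b are the two known sides.
a, b = 3, 4
x = math.sqrt(a**2 + b**2)  # math.hypot(a, b) gives the same result
print(x)  # 5.0
```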

There is of course another way. Assuming the diagram is to scale you could measure x.

But that’s not what people would expect; especially if they’d asked you to use math to solve the problem.

In previous blog posts I’ve described a programming problem I set interview candidates at my organization.

The coding exercise I set people is pretty simple. It draws its inspiration from the old text adventure games the likes of Infocom and Level 9 used to produce. If you’re not familiar with this genre, the basic gist was that there was an unexplored territory you could wander around, with things to find and puzzles to solve. Players interacted with the game using simple text-based instructions such as “go north, get axe and kill troll”.

The exercise requires the candidates to do nothing quite so complicated as that though. They simply get an XML file describing a simple collection of rooms connected in various ways and containing various objects, plus a text file describing their starting location and the objects they need to go find. Their task is to write a program to read the two files and then use some approach to wander through the maze and collect the items.

Just as you would expect someone to use Pythagoras’ Theorem to find x in the problem above, I would expect people to use some kind of algorithm to solve this programming exercise. I don’t care if the algorithm is as dumb as a stumbling drunk, randomly staggering from room to room until every item is found, or if you’re using something like Dijkstra’s shortest-path algorithm, just so long as there’s some kind of intelligent solution for us to discuss.
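Even the stumbling-drunk approach is only a few lines. Here’s a minimal sketch in Python (the room names and layout are made up for illustration; the real exercise reads them from the XML and text files):

```python
import random

random.seed(0)  # fixed seed so the example run is repeatable

# Hypothetical maze: each room maps to the rooms it connects to.
rooms = {
    "scullery": ["hallway"],
    "library": ["hallway"],
    "pond": ["hallway"],
    "hallway": ["scullery", "library", "pond"],
}
# Which item (if any) sits in each room.
items = {"scullery": "lamp", "library": "book", "pond": "fishing rod"}

def drunken_walk(start, wanted, max_steps=10_000):
    """Stagger randomly from room to room until every wanted item is found."""
    here, found, path = start, set(), [start]
    for _ in range(max_steps):
        item = items.get(here)
        if item in wanted:
            found.add(item)
        if found == wanted:
            break
        here = random.choice(rooms[here])  # stagger to a random neighbor
        path.append(here)
    return path, found

path, found = drunken_walk("hallway", {"lamp", "book", "fishing rod"})
print(f"Collected {sorted(found)} in {len(path) - 1} moves")
```

It’s hopelessly inefficient, but it demonstrates the point: there is an actual traversal happening, which gives the interviewer and candidate something concrete to discuss and improve on.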

Up until now nobody had failed to meet that expectation either. Just recently though I got a very novel submission. What the candidate had done, rather than algorithmically solving the problem, was simply to print a series of lines simulating what the output might look like if they had done the exercise for real. In other words their “solution” basically was:

Console.Write("At Sculery: lamp\n At Dining Rome: plate\n At Pond: Fishing Rod\n At Library: book\n At Cave-Entrance: Pickaxe\n At pine Forest: Pine cone");

I hunted around for a bit in case there was more. Alas there was not. When I was trying to explain to the recruiter involved why there was no point proceeding further with the candidate I came up with the hypotenuse measuring analogy which he got completely.

Although I’ve never seen anyone else “solve” the programming exercise in quite this way, regrettably I have seen others who approach problems similarly, trying to find any and all obtuse approaches to avoid actually coding what’s really needed. A common variation on the theme is mangling unit tests in perverse ways to avoid actually testing what you should be.

Since this isn’t an isolated phenomenon I decided I’m coining a new term to describe it. Henceforth (for me at least) programming “solutions” that completely miss the point will be known as a “measuring the hypotenuse” approach.

Sunday, January 9, 2011

Top ten tips for distributed scrum team teleconferences

Distributed software teams using scrum can spend a fair bit of time on the telephone. Even if you talk only for sprint planning, daily stand-ups, reviews and retrospectives, that could easily be 6+ hours in a two-week sprint.

When the team is spread around geographically you inevitably have a mix of people: some feeling tired because it’s early, some feeling tired because it’s late and always some annoyingly peppy ones because it’s the middle of their morning and they’ve had 7 coffees so far.

Add in cultural differences and that English (or whatever your primary language for communicating is) might not be someone’s native tongue and you are sometimes going to have challenges.

Here then, after acting as scrum master for several months on a distributed team with people in six different locations, three different time zones and two different countries, are my top ten tips to help get past those inevitable awkward silences:

1. Use a lot of questions.
Sometimes it’s not clear where to go next. If you can think of lots of different ways to frame questions, that can help. A question that wasn’t clear posed one way may be more accessible put in different terms.

2. Ask specific people for their view.
Asking, “Hey Bob, what are your thoughts on this?” will usually trigger one of two things: either Bob is going to tell you he doesn’t really understand what’s going on, or he shares the thoughts he was sitting on. Sometimes you might do this several times in a row. So after Bob you could be following up with, “And what about you Sally?”

3. Ask the quiet people.
This is a variation on #2. When conversation is flowing well sometimes the quieter members of a team can be left without a chance to contribute. When conversation isn’t flowing at all it’s often tempting to prod the more vocal members of the team. Instead of starting with the “easy” targets seek out other team members to encourage their contribution.

4. Volunteer your own opinion.
Just because the Scrum Master is meant to be a facilitator doesn’t mean he or she can’t have a useful opinion. Myself, I can often hardly keep my mouth shut. I don’t think it’s a bad thing to throw out what you see and what you think should be done, and solicit feedback on that.

5. Volunteer a ridiculous opinion.
This may be a bit Machiavellian for some, but I confess I use it myself: “So how about we use [some absurd technology] to solve [the problem at hand]?” If it’s daft enough, somebody will usually be unable to resist telling you why that won’t work…which is the perfect segue into “Well what could we do instead?”

6. Use some humor.
You don’t need to be a stand-up comedian and you don’t need to aim for belly laughs and tears streaming down people’s faces. But a little comedy can help lighten the mood and reenergize a meeting. Self-deprecating humor is a good way to go; it’s especially easy when, like me, you’ve got a ton of things to laugh at about yourself.

7. Take a break.
Long meetings need breaks. When you’re all sat in a conference room together nobody thinks twice about nipping out to fetch a cup of water. Everyone else can obviously see who’s gone and carry on accordingly or wait for their return if it’s crucial. But when you’re “blind” at the end of a phone I find more structure helps. I either build in a 10-15 minute break midway through a longer meeting or, if I sense things are flagging, ask if people would like to stop for a bit.

8. Switch topics.
There is, as they say, no point flogging a dead horse. If you’re making no headway as a team with something, perhaps there’s some other useful topic you can pursue together. Something more interesting or less ambiguous than what you were struggling with.

9. Do a mini-retrospective.
You don’t have to save retrospectives for the end of a sprint. You can retrospect on a single meeting or even midway through: “Is this working OK? Should we go about this differently?”

10. Note when people are “done.”
Now I don’t mean “done-done” in the definition of done sense. I mean worn out and you’re not going to get anywhere. Sometimes it’s just time to stop. There’s always another day.

Do you have any more tips for keeping teleconference meetings for distributed teams going? If so please share in the comments.