I recently read Robert I. Sutton's book, The No Asshole Rule. Apparently it was born of an article originally published in the Harvard Business Review. I can't claim to have read that original article, but the book is great; I enjoyed it immensely.
Besides the frequent smirks spawned by reading a "serious" work about assholes, there's plenty of great writing, sound advice and lots of "yeah, I agree" moments.
The book is pleasantly short, with just seven exquisitely crafted chapters covering everything from the definition of an asshole in this context (not just someone who pisses you off ;-) through to eradicating them if you have the power, or dealing with them as best you can if you don't. In addition, there's material on the virtues of assholes (there are some) and how to stop your own "inner jerk" getting out too often. It was nice to read that we're all assholes some of the time. The book's acknowledgment of this reality is important; without it, I believe it would preach an unattainable level of asshole-free zones. In other words, it's OK to be a temporary asshole; indeed, as the virtues section indicates, sometimes it's necessary to jolt slackers and under-performers out of their stupor.
The section on dealing with an asshole-infested workplace has some solid advice. Since most of us lack the authority (not being the CEO and all) to go on an asshole eradication spree, it's probably the section most useful to most people. In particular I liked ideas like attending meetings with assholes via telephone: if you have a meeting with an asshole in it, at least you can avoid having to be physically in their presence. Similarly, take pleasure in answering their emails slowly, or maybe not at all. Having a sense of control over the situation, as the author suggests, is indeed a very empowering feeling.
Luckily for me, there are very few assholes I have to contend with these days. Nonetheless, having finished the book, there are still a few candidates whose desks I am considering leaving it on...
Wednesday, November 25, 2009
Tuesday, November 24, 2009
Agile adoption: Four months in.
Actually I'm not sure if it's four months; it could be a little more or a little less, but that's close enough. In some respects I wish I had taken time to blog about this much more frequently, as I think it would have helped me distill my thoughts on how things were going. There's possibly even a slim chance someone else might have found it interesting too ;-) But I didn't. I'm going to attribute that to a lack of habit, and maybe I can cultivate that habit going forward. On the upside, having gone four months or so before posting my thoughts on adopting agile practices, I do feel I have gained a fair bit of experience that has some value to it. Perhaps earlier posts would have been vacuous meanderings around the topic, of no interest to anyone.
Onwards...
Now that I try to think back, I'm not 100% sure I can entirely recollect what the trigger for the turning point was. As best I can remember, there was some...shall we say, doubt, or perhaps concern, over whether a particular project we were running was going to be feature complete on time. Let me explain a little about where we were at this time:
- we had a slew of software engineers (by our standards anyway) working away on this rather undefined "refactoring and re-architecting" phase of a major product release
- somehow, despite some top level goals for this phase being set, there was next to no clarity on what was really going on, nor whether it was worthwhile
- our customer certainly had no clue what was going on
- the deadline for delivery had been fuzzy until recently
- we had some nasty combination of huge, horrible "the system shall" type requirements documents and use cases that were more use-less cases
- nobody had a clue what the QA members of the team were doing besides "regression and exploratory testing"
- the team was working in two week iterations, although upon closer inspection they seemed rather purposeless; the only thing they appeared to accomplish was ensuring we more or less regularly provided a new build to QA
- the quality of the builds was low, often emergency "patches" were needed to remedy showstopper defects that somehow made it out to QA
- nobody was pair programming or doing TDD; the unit tests we did have were...let's just say sub-optimal *cough* not working *cough* or commented out *cough*
Anyway, back to the plot. So in a conversation about the need to critically evaluate whether this project was going to succeed, I may have blurted out something about my distaste for estimating in hours and how we should use points. I had done a little reading on this topic and sensed the potential -- estimating the work left in points and measuring velocity would give us a much better feel for progress I believed. They say you "should be careful what you ask for, as you may just get it" and yeah, I got it. So I went away and read a bit more (should we do S/M/L t-shirt size estimates? Fibonacci sequence?) before probably delivering the most disruptive message ever to my developers the next day. "Yeah this is Jon. I thought it'd be a great idea to estimate what we had left in story points. Have you heard of them? No. Don't worry. We're just gonna give it a go..."
I actually had us doing this wrong, of course. Rather than estimating as a team, I "pair estimated" with individual engineers for the features they were responsible for developing. At that stage this was the expedient approach, and looking back I still maintain it was better than doing a "how many hours/days of work have you got left?" exercise. There was no QA involvement in these estimates, but I think trying to get their involvement would have been too hard at that stage.
We settled on using 1, 2 or 3 as valid values for estimating our stories, mostly because it seemed simplest to get people to think about whether something was "easy, medium or hard" rather than agonize over a more complex set of choices. At this point I hadn't read any good material on what made for a suitable story, so many of our stories were tasks. With hindsight, although clearly not optimal this *still* was a step in the right direction.
It took the best part of a day for me to work with each of the engineers on the team and get their points-based estimates for each of the stories (or tasks...) they believed they had left. Once this was done, though, I felt a distinct sense of progress. It was somehow easier to think in terms of there being about 100 points of work left, and how this therefore implied we needed a velocity of 20 points per iteration (not real numbers, but you get the point...)
With the "stories" and their corresponding estimates in a spreadsheet, and a daily stand up we suddenly started to be able to understand progress like never before. It was euphorically exciting to feel this degree of comprehension and transparency available to us. This feeling was a fleeting thing at first, but as iterations passed by and the velocity stabilized I think we all felt better about predicting success.
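The arithmetic behind that newfound sense of progress is simple enough to sketch. Everything below is illustrative (made-up story names and numbers, not our real data):

```python
# Illustrative sketch of velocity-based progress tracking.
# Story names and point values are invented for the example.
backlog = {
    "import GPS tracks": 3,
    "export to CSV": 1,
    "offline map cache": 3,
    "search by name": 2,
}

points_remaining = sum(backlog.values())   # total estimated work left
iterations_remaining = 5                   # two-week iterations until the deadline

# The velocity (points per iteration) the team must sustain to finish on time.
required_velocity = points_remaining / iterations_remaining

# Compare against what the team has actually averaged so far.
observed_velocities = [2, 3, 2]            # points completed in past iterations
average_velocity = sum(observed_velocities) / len(observed_velocities)

on_track = average_velocity >= required_velocity
print(points_remaining, required_velocity, on_track)
```

This is exactly the kind of calculation a simple spreadsheet of stories and estimates supports, and it's why the approach beats "how many hours do you have left?" answers that can't be summed or trended.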
Stand up meetings were initially hard. People were late. People rambled. People didn't see the point. The chickens (myself and other managers) interrupted gratuitously to "coach" people on doing it right. We were eager but perhaps ineffective. Stand ups took ages. But, interestingly, we went from weeks slipping by with hidden issues, to impediments surfacing daily. And we'd solve them, or people would agree to work on solving them there and then. This was our second big win after understanding velocity I think.
With a toehold on our agile adoption, we were starting to build momentum. We have done much more since then, which I will write about in follow-up posts to come.
Book review: Bridging the Communication Gap by Gojko Adzic
I just finished reading Gojko Adzic's book, Bridging the Communication Gap. It's subtitled "Specification by example and agile acceptance testing." As a one sentence description, that does a pretty good job of explaining what this book covers.
I was inspired to seek out something like this after previously reading Mike Cohn's User Stories Applied (TODO: my review). It was in Mike's book that I found compelling hope that we could get away from the unwieldy requirements specifications used in my organization. These have traditionally served as the definitive record of what our software should do yet, they have never felt helpful or right to me. Indeed there is much evidence that they don't help at all, but that's another post...
In User Stories Applied, Mike answers the question I think everyone has after getting the quick overview of what a user story is: "Where do the details go then?" After all, given a story such as "As a hiker I want to view a list of nearby trail heads so I can investigate suitable hikes" one is naturally left with many questions. How far away from your current location should the software look for trail heads? Should you be able to vary that distance? What information should be in the list -- trail names? Dog / horse / mountain biking use allowed? etc.
The answer is that these details, rather than being captured in a traditional specification or use case documents can be very beneficially recorded as actual examples and acceptance tests. Understanding this was a real "aha" moment for me. Prior to this, user stories, estimating in story points, measuring velocity and using that to finally have some measure of what a team could accomplish in a fixed length iteration all felt right, but I couldn't quite figure out how we should do requirements.
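To make the idea concrete, here's a minimal sketch of what examples for the trail-head story might look like. The trail names, distances and the 25-mile default radius are all invented for illustration, not taken from the book:

```python
# Hypothetical examples pinning down the trail-head story's details.
# Each row is a concrete case the customer, tester and developer agree on.
DEFAULT_SEARCH_RADIUS_MILES = 25

examples = [
    # (trailhead, distance_miles, dogs_allowed, expect_in_default_list)
    ("Bear Creek",   5, True,  True),
    ("Eagle Ridge", 24, False, True),
    ("Lost Lake",   26, True,  False),  # just outside the default radius
]

# The examples double as executable checks of the agreed behavior.
for name, distance, dogs, expected in examples:
    in_list = distance <= DEFAULT_SEARCH_RADIUS_MILES
    assert in_list == expected, name
```

The point is that each open question ("how far should it look?", "what goes in the list?") gets answered by rows of concrete data everyone can read, rather than by paragraphs of prose.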
Gojko's book opens by building the argument that traditional approaches to gathering and documenting requirements are flawed. I don't think it would be hard to convince anyone who's spent more than a few years developing software from the information in 100+ page requirements documents that this is the case. What Gojko does, however, is articulate well some reasons why. For me though, the real meat of the book is in section II, which provides the concrete specifics of how to go about specifying by example and deriving acceptance tests.
The key points covered that I found valuable are:
- Confirmation of my own belief in the necessity of customer/domain expert involvement alongside testers and developers. Business analysts should not be a conduit (or worse, a filter). The book includes a particularly good explanation of the role of BAs, drawn from Martin Fowler and Dan North's presentation The Crevasse of Doom.
- A detailed explanation of what makes good examples.
- Commentary on the benefits of a project language -- a non-technical, business-oriented vocabulary that enables unambiguous communication within the team.
- How to go about converting examples to acceptance tests.
- How to run specification workshops successfully. Besides the information in the book on this topic, I am currently quite enamored with the Nine Boxes approach to interviewing people.
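As a sketch of the examples-to-acceptance-tests step, the same kind of example table can drive automated checks against the system under test. Here `find_trailheads` is a hypothetical stand-in for the real application, using a deliberately toy one-dimensional notion of distance:

```python
# Sketch: turning agreed examples into an automated acceptance test.
# find_trailheads is a stand-in for the real system under test.
def find_trailheads(current_location, radius_miles, trailheads):
    """Return names of trailheads within radius_miles of current_location."""
    return [name for name, position in trailheads
            if abs(position - current_location) <= radius_miles]

# Reference data and example table agreed in a specification workshop
# (all names and numbers are illustrative).
trailheads = [("Bear Creek", 5), ("Eagle Ridge", 24), ("Lost Lake", 40)]
acceptance_examples = [
    # (location, radius, expected_names)
    (0, 25, ["Bear Creek", "Eagle Ridge"]),
    (0, 50, ["Bear Creek", "Eagle Ridge", "Lost Lake"]),
    (30, 10, ["Eagle Ridge", "Lost Lake"]),
]

# Each row of the table becomes one acceptance check.
for location, radius, expected in acceptance_examples:
    assert find_trailheads(location, radius, trailheads) == expected
```

In practice a tool like FitNesse (which the book's tooling section discusses) plays the role of this loop, keeping the table readable by non-programmers while still executing it against the real code.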
Since reading the book I've had a chance to get a little hands-on with the technique, though not in anger. I took some of the ideas and applied them to something we had already done, where I knew enough about the system to adequately play the roles of customer, tester and developer. I turned 40 pages of crap documentation (no offense intended to the original author) into about three. I'm very much looking forward to seeing us apply this approach to the next release cycles of a couple of our products.
The third section of the book discusses how to go about implementing agile acceptance testing, covering items ranging from how to get people interested in the prospect, to the synergy the technique has with user stories, to the various tools available for automating tests. There is one slightly dubious (but perhaps necessary under adverse conditions) technique mentioned: altering one's language to avoid the use of the word "test".
The fourth section is pretty short, and covers how adopting this approach to specifications and acceptance testing alters the work of various roles. This was covered in briefer terms earlier in the book; this section expands upon it, although to some extent it feels like a bit of a rerun.
All in all the book gets a definite thumbs up from me. My only real complaint would be that the writing style is a little dry, definitely not as readable as some authors. Nonetheless, the effort to read it has been worthwhile from my point of view and I would definitely recommend the book to anyone on a team looking to do requirements better.
Basic Curry
This is my basic tomato-based curry which gives pretty good results, better than something in a jar and better than just using some generic "curry powder". It's pretty flexible and can be the basis for several variations on the theme which are noted at the end.
Ingredients
- 4 or so chicken breasts, cut into bite sized chunks
- Large sweet yellow onion
- Garlic cloves -- 4 or more
- Root ginger -- an inch or two
- Chilies -- jalapenos and/or serranos work well. Use your own judgment depending on how hot you like things. I would usually go with 2 jalapenos for a middle-of-the-road level of heat.
- Prego tomato sauce or similar. I know that sounds weird and Italian but it's a great tomato based sauce. Just make sure you use a plain variation, not one full of oregano or mushrooms or something...
- Cumin seeds -- 0.5 tsp
- Cumin/Coriander powder -- 1 tbsp. In Indian grocery stores they sell ground cumin/coriander premixed. If you have separate cumin and coriander powder then just mix them approx 50/50.
- Garam Masala -- a spice mix. Best value is in Indian grocery stores but I have seen this in regular grocery stores too.
- 0.5 tsp turmeric powder
- 1.5 tsp salt
- Lemon juice -- 1-2 tbsp
- Cilantro -- about 8 stalks + leaves chopped
How to cook it
Chop the onion, garlic and ginger. Cut the chilies in half - I generally leave the seeds in.
Heat ~3-4 tbsp vegetable oil until hot. Drop in the cumin seeds and they should quickly darken as they sizzle in the oil. Now add the chopped onion and chilies. Fry for 5-10 minutes before adding the ginger and garlic. Cook for a few more minutes.
Add in the cumin/coriander powder, garam masala and turmeric. Stir in well and cook for a minute or so. Now add the chicken and cook until mostly sealed. Add the Prego, cilantro, lemon juice and salt. You'll probably want to add a bit of water to thin out the sauce -- I usually do this by rinsing out the Prego jar with some water and then tossing that in. Stir well and bring up to a simmer. It will probably take about 20 minutes or so for the chicken to be fully cooked.
Taste and adjust as necessary. Might need a bit more salt, lemon juice or cumin/coriander powder to bring things into balance.
You *should* serve this with white basmati rice, and if you do make sure it's not the Californian crap ;-) Get something from the Indian subcontinent. These days I quite often do it with brown rice which takes a little getting used to but is still very nice if not very authentic.
It's also nice to have some raita on the side with this which I make by combining some plain yoghurt, English (hothouse?) cucumber chunks and mint and letting sit in the fridge whilst cooking so the flavors blend. You can also stick some diced chili in that mix if you haven't got enough heat already.
This keeps and freezes pretty well in the event you can stop yourself from eating it all at once.
Variations
- Consider adding in any of the following (all is nice) up front with the cumin seeds to create a richer flavor: half a dozen cloves, cardamom pods, a cinnamon stick, a star anise, 2 bay leaves
- Fenugreek seeds can be added in up front with the cumin seeds to create a very distinct flavor
- Puree the onion/garlic/ginger and then very roughly cut up a second onion. Start by frying the whole spices then add the chilies and chunks of onion. Fry for a bit before adding the puree to create a "do piaza" (double onion) style curry
- Cook some potatoes and add these in ~10 mins from the end
- Replace the Prego with diced tomatoes
- Use fresh tomatoes cut up....they reduce down and make a nice sauce usually
Chorizo & Pepper Egg Gratin
Ingredients:
- Some Spanish (or Spanish-style...) chorizo, i.e. not the stuff commonly referred to in US grocery stores as chorizo. Whole Foods have some on their deli counter which is made in California but a fair approximation of the real thing. See http://en.wikipedia.org/wiki/Chorizo#Spanish_chorizo for more details. I'm gonna be vague on the quantity...just use your skill and judgment. I used about half of what I bought from Whole Foods to make enough for three.
- A medium onion. Yellow, red...whatever. You can either cut slices or dice it. Recommend not making it too small as you want pieces large enough to stay moderately crisp.
- A green bell pepper. Cut it into small slices or dice it. Again, not too small...
- 2 tomatoes -- remove skin using the old hot water trick and then chop
- 1 or more cloves of garlic, depending on how partial you are to it
- some eggs
- some grated cheese (something that melts good, cheddar, gruyere etc.)
You need to remove the "wrapping" of the chorizo, then dice fairly small.
Fry it in your fat of choice (olive oil recommended) over high heat until it starts to brown a little. Push it to the back of the pan and drop in the onion and pepper. Fry for 5 minutes or so, then drop in the chopped tomatoes and garlic. Stir everything together with a good seasoning of salt and pepper, then let it sizzle as the water from the tomatoes evaporates.
Transfer to an ovenproof dish and then crack as many eggs as you fancy into it. It can help to make a little depression where each egg is going to go, to hold it in place. Sprinkle the grated cheese over, then bake in the oven at 350°F. The original recipe says 12-15 minutes should do it; I haven't found this to be the case. Maybe it's an altitude thing, not sure. I've found 20 minutes to be more like it.
Good with ketchup and/or a hot pepper sauce such as Cholula, Tabasco etc.
It should end up looking like this. Being a pretentious git I have sprinkled some paprika over the top :-)
Thursday, July 2, 2009
How would Agile practices help us meet deadlines more reliably?
We've been looking at adopting some Agile practices where I work. I've been doing a fair amount of reading on Agile in general and Scrum and Kanban in particular. Many thanks to some good people I follow on Twitter who have tweeted links to articles and blog posts I've found enlightening and educational.
In an excited but rather ad-hoc fashion I've been talking up all the good and interesting things I've run across to anyone who'll listen. One of my colleagues asked a great question yesterday along the lines of "so how exactly is applying Agile principles going to help us meet the customer's date?"
Try googling for that. I did. Unless things have changed since yesterday you won't turn up any really great articles that answer that very reasonable question directly. This forced me to try and articulate why I thought it would help. Below is my rather stream-of-consciousness reply to him. This could be all bollocks. This could be a gem. I'm a total neophyte here in the very early days of exploring and preparing to try and apply some of these ideas. But right now, this is where my thinking is. In the not too distant future I hope to be able to write from a position of experience. Will be interesting to contrast and compare then...
It has been proven time and again that you cannot reliably take a large set of vague needs and accurately estimate when they will all be implemented and tested. Waterfall methodology says if you fail at this it’s because you are advancing from one phase to the next prematurely, i.e. you needed to have done more requirements analysis and more design before moving to coding, testing etc. This can work if you have absolutely static requirements and a customer prepared to let you spend a hard-to-estimate amount of time doing the Big Up-Front Design™ phase without limit. Even then the phases that follow are still historically proven likely to slip unless significant padding is added to estimates and a hardline approach applied to scope creep (i.e. allow zero).
In most organizations true waterfall with the Big Up-Front Design™ is not feasible because:
- organizations are not prepared to allow a large but unspecified period of time in order to get to a stage where you can predict the remaining 2/3 of the project time
- requirements cannot be frozen for such long periods of time
- the skills are not there to do requirements and design that are rigorous enough to allow a trouble-free implementation and testing phase
Therefore, if most organizations cannot successfully develop software in this way one should consider other approaches. The broad umbrella term “Agile” arose to cover a general approach with a number of specific “brands” of methodology recognized as adhering to Agile principles.
The well-known ones are RUP, DSDM, XP, Scrum and Kanban. Many of the differences surround the extent to which a particular methodology is prescriptive. For example, XP specifies things you must have/do/generate that Scrum does not. Scrum specifies that you should have certain things, but fewer than XP. Kanban specifies very little. It is not uncommon to take what you decide are good practices from one methodology and incorporate them into another. For example, pair programming is a technique that originated in XP; it isn't mandated by Scrum or Kanban, but many people employing either of those methodologies would still choose to use it.
The general themes that unify Agile thinking can be found expressed in the Agile Manifesto (http://agilemanifesto.org/principles.html) although they are pretty high level.
The concrete things that can commonly be identified as Agile practices include breaking work into fixed-length iterations, estimating in story points, pair programming, holding retrospective meetings as a vehicle for continuous improvement, measuring team velocity, eroding barriers between different roles in the team, using daily stand-ups to monitor progress, etc.
So now to answer your original question: how do these practices help us meet deadlines? There are two parts to this.
PART ONE
I believe we have to fundamentally alter what a deadline is. Instead of it being something we project after consuming some set of rough requirements, we want to set a fixed regular product release schedule.
Let’s say we deliver tracking every 6 months. Why? Because by doing this we now *know* when we’re going to deliver it. The game now fundamentally changes from “I need you to get all this poorly specified, vague stuff done by August” to the customer working with us to *get the highest value features added before the next release*.
This is a much more favorable mindset to put them in. They become proper stakeholders and team members. They work with us to prioritize what they truly need over what they do not need. It’s Project Portfolio Management on a micro/individual project scale.
The notion, advocated by certain people in my organization, that we can’t change deadlines or scope is not tenable. Yet they will continue to expect it in a mostly waterfall environment, because it’s set up such that we get some needs, we estimate, and then we are held to that. It’s so obvious that this is impossible.
Therefore the game-changing approach of saying “we release on a 6-monthly basis, so now work with us to figure out what to get in there” is, I feel, potentially very potent. Now sure, at the start of the 6 months they/we will want to sketch out all the things they would like at the end of it. But those things have to be broken into stories and prioritized. And we have to accept that as we advance through the release cycle we *will* have to consider dropping some of the lower priority features. Why? Because we have fixed two of the three variables: budget (resources) and time (fixed date). Therefore you can only manage on the third axis: scope.
PART TWO
If you can achieve buy-in that the game-changing alteration outlined in part one is the best known way to manage product development, then you need a detailed way to operate during those 6-month release cycles.
Those techniques identified above will help. The really potent one is measuring velocity. Why? Because it will finally give us an approximation of team output. We have not had that before. If you have no measure of how much work the team can get done, then projecting what can get done by when is somewhere between hard and impossible. This is why estimates are crap.
Using velocity involves points-based estimating and seeing how many points we can, on average, get through per iteration -- that average being the velocity. Once you have that information, and everyone has established a reasonably reliable pattern for generating points-based estimates and burning through X points in an iteration, you are able to do several useful things: project how many iterations of work remain, spot early whether the release is on track, and make informed decisions about which lower priority stories to drop.
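As a sketch of the kind of projection this enables -- where the velocity, iteration count and story names are all invented for illustration:

```python
# Sketch: projecting the scope cut from velocity (illustrative numbers only).
velocity = 8           # average points completed per iteration
iterations_left = 3    # iterations remaining in the release cycle
capacity = velocity * iterations_left

# Backlog in priority order: (story, point estimate)
backlog = [("story A", 5), ("story B", 8), ("story C", 8),
           ("story D", 3), ("story E", 5)]

# Walk the prioritized backlog until the projected capacity runs out.
will_make_it, at_risk, used = [], [], 0
for story, points in backlog:
    if used + points <= capacity:
        will_make_it.append(story)
        used += points
    else:
        at_risk.append(story)

print(will_make_it)   # stories expected to fit in the remaining capacity
print(at_risk)        # candidates to drop, defer or renegotiate
```

With fixed time and budget, this is the conversation you can now have with the customer: the line between the two lists is the scope axis, and velocity tells you where it falls.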
So the *$64 million question* is: how do you get the customer to buy into this? Well, I think the answer is to get them to recognize that the alternative doesn’t work and that people are employing Agile very successfully. One definition of insanity is to keep doing things the same way and expecting different results. Let’s not be insane. Let’s change the way we do things and be successful.
In an excited but rather ad-hoc fashion I've been talking up all the good and interesting things I've run across to anyone who'll listen. One of my colleagues asked a great question yesterday along the lines of "so how exactly is applying Agile principles going to help us meet the customer's date?"
Try googling for that. I did. Unless things have changed since yesterday you won't turn up any really great articles that answer that very reasonable question directly. This forced me to try and articulate why I thought it would help. Below is my rather stream-of-consciousness reply to him. This could be all bollocks. This could be a gem. I'm a total neophyte here, in the very early days of exploring and preparing to apply some of these ideas. But right now, this is where my thinking is. In the not too distant future I hope to be able to write from a position of experience. It will be interesting to compare and contrast then...
It has been proven time and again that you cannot reliably take a large set of vague needs and accurately estimate when they will all be implemented and tested. Waterfall methodology says that if you fail at this it’s because you advanced from one phase to the next prematurely, i.e. you needed to do more requirements analysis and more design before moving to coding, testing etc. This can work if you have absolutely static requirements and a customer prepared to let you spend a hard-to-estimate amount of time doing the Big Up-Front Design™ phase without limit. Even then, the phases that follow have historically proven likely to slip unless significant padding is added to estimates and a hard line is taken on scope creep (i.e. allow zero).
In most organizations true waterfall with the Big Up-Front Design™ is not feasible because:
- organizations are not prepared to allow a large but unspecified period of time in order to get to a stage where you can predict the remaining 2/3 of the project time
- requirements cannot be frozen for such long periods of time
- the skills are not there to do requirements and design that are rigorous enough to allow a trouble-free implementation and testing phase
Therefore, if most organizations cannot successfully develop software in this way one should consider other approaches. The broad umbrella term “Agile” arose to cover a general approach with a number of specific “brands” of methodology recognized as adhering to Agile principles.
The well known ones are RUP, DSDM, XP, Scrum and Kanban. Many of the differences surround the extent to which a particular methodology is prescriptive. For example, XP specifies things you must have/do/generate that Scrum does not. Scrum specifies that you should have certain things, but fewer than XP. Kanban specifies very little. It is not uncommon to take what you decide are good practices from one methodology and incorporate them into another. For example pair programming is a technique that originated in XP and isn’t mandated by Scrum or Kanban, but many people employing either of those methodologies would still choose to use it.
The general themes that unify Agile thinking can be found expressed in the Agile Manifesto (http://agilemanifesto.org/principles.html) although they are pretty high level.
The concrete things commonly identified as Agile practices include breaking work into fixed length iterations, estimating in story points, pair programming, holding retrospective meetings as a vehicle for continuous improvement, measuring team velocity, eroding barriers between different roles in the team, using daily stand-ups to monitor progress, and so on.
So now to answer your original question: how do these practices help us meet deadlines? There are two parts to this.
PART ONE
I believe we have to fundamentally alter what a deadline is. Instead of it being something we project after consuming some set of rough requirements, we want to set a fixed regular product release schedule.
Let’s say we deliver tracking every 6 months. Why? Because by doing this we already *know* when we’re going to deliver it. The game now fundamentally changes from “I need you to get all this poorly specified vague stuff done by August” to the customer working with us to *get the highest value features added before the next release*.
This is a much more favorable mindset to put them in. They become proper stakeholders and team members. They work with us to prioritize what they truly need over what they do not need. It’s Project Portfolio Management on a micro/individual project scale.
The notion advocated by certain people in my organization, that we can’t change deadlines or scope, is not tenable. Yet they will continue to expect it in a mostly waterfall environment, because it’s set up so that we get some needs, we estimate, and then we are held to that estimate. It’s so obvious that this is impossible.
Therefore I feel the game-changing approach of saying “we release on a 6 monthly basis, so now work with us to figure out what to get in there” is potentially very potent. Sure, at the start of the 6 months they and we will want to sketch out all the things they would like by the end of it. But those things have to be broken into stories and prioritized. And we have to accept that as we advance through the release cycle we *will* have to consider dropping some of the lower priority features. Why? Because we have fixed two of the three variables: budget (resources) and time (fixed date). That leaves scope as the only axis we can manage on.
PART TWO
If you can achieve buy-in that the game-changing alteration outlined in part one is the best known way to manage product development, then you need a detailed way to operate during those 6 month release cycles.
The techniques identified above will help. The really potent one is measuring velocity. Why? Because it finally gives us an approximation of team output, which we have never had before. If you have no measure of how much work the team can get done, then projecting what can get done by when is somewhere between hard and impossible. This is why estimates are crap.
Using velocity involves points-based estimating and seeing how many points, on average, the team gets through per iteration; that average is the velocity. Once you have that information, and everyone has established a reasonably reliable pattern for generating point estimates and burning through X points per iteration, you are able to do several useful things:
- look at your backlog of prioritized features, tot up their value in points, and project when you can get them done
- figure out the impact of vacations more easily (25% of the team on holiday? You can do 25% fewer points on average this iteration)
- challenge the team to improve their velocity – typically via retrospectives
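As a sketch of how those projections work in practice, here's a minimal Java example. The class and method names here are my own invention for illustration, not part of any Agile tool: velocity is just an average of recent iterations, and the backlog projection and vacation adjustment fall straight out of it.

```java
import java.util.List;

public class VelocityProjection {

    // Velocity: average points completed per iteration over recent history.
    static double velocity(List<Integer> completedPointsPerIteration) {
        return completedPointsPerIteration.stream()
                .mapToInt(Integer::intValue)
                .average()
                .orElse(0.0);
    }

    // How many full iterations are needed to burn down a backlog of points.
    static int iterationsNeeded(int backlogPoints, double velocity) {
        return (int) Math.ceil(backlogPoints / velocity);
    }

    // Vacations reduce capacity: 25% of the team away means roughly
    // 25% fewer points that iteration.
    static double adjustedVelocity(double velocity, double fractionOfTeamAway) {
        return velocity * (1.0 - fractionOfTeamAway);
    }

    public static void main(String[] args) {
        double v = velocity(List.of(18, 22, 20));     // three past iterations
        System.out.println(v);                         // prints 20.0
        System.out.println(iterationsNeeded(130, v));  // prints 7
        System.out.println(adjustedVelocity(v, 0.25)); // prints 15.0
    }
}
```

Crude, of course, but that is rather the point: even this rough arithmetic is more than a waterfall estimate gives you.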
So the *$64 million question* is: how do you get the customer to buy into this? Well, I think the answer is to get them to recognize that the alternative doesn’t work and that people are employing Agile very successfully. One definition of insanity is to keep doing things the same way and expecting different results. Let’s not be insane. Let’s change the way we do things and be successful.
Saturday, June 13, 2009
On matters of code style
(Or, “You know where you can shove your curly brace!”)
Source code is written documentation. It is the best, most accurate, in fact perhaps the *only* accurate documentation you have. It completely and perfectly describes what your application does. No requirements specification, technical specification, UML diagram or anything else comes close to the comprehensive precision of your source code.
Now lots of written material is read for entertainment. The perhaps-soon-to-be-obsolete newspaper, novels, even for many people, non-fiction. But the first rule of software documentation is nobody reads it unless they have to. Except for a rare few, reading source code is not entertainment. When they are reading software documentation, people are reading it in pursuit of some goal. An emergency bugfix perhaps, or considering how to fit in a change to effect some new functionality. Perhaps looking to improve application performance or lift some existing code as the basis for another project.
Whatever it is, people have an objective when reading software documentation. And like I said, your source code is your *best* documentation. Unlike those written specifications and pretty pictures it is always up to date. It doesn’t suffer from the vagaries and ambiguities of the English language. It doesn’t require updating to reflect the latest design and behavior, it *is* the design and behavior.
Any non-trivial piece of software typically has quite a lengthy shelf life, often far longer than initially anticipated, typically many years for business applications. Over its not inconsiderable life, with the ebb and flow of need, bugs will be fixed, features added, code refactored, performance enhanced and third party libraries updated. All of this by a (more than likely) ever changing team.
Now most, if not all, source code is read on screen. Some people might like to print it out for scrutiny and reflection, but not the majority, whether seeking out a bug or peer reviewing a colleague’s work. Reading on screen is hard. Today I stumbled across a copy of Jakob Nielsen’s “Designing Web Usability” when rooting about in my furnace room. I dipped into it as I remembered him having something to say about content being read on screen. According to him people read 25% slower on screen, and in a study he conducted 79% of people scanned or skimmed rather than read word for word. I think it’s pretty fair to say that this is *exactly* how software engineers read code.
Consider then the trio of factors here:
- source code is the best form of documentation you have for an application
- it lasts a long time and will be read by a variety of people over its lifetime
- it will be read on screen, likely skimmed and scanned by busy people on a mission
It seems blindingly clear to me then that the layout and style of your code is important. People need to be able to read, digest and work with source code as quickly as possible. I don’t think it’s any exaggeration to claim that many hundreds of hours of time and thousands of dollars can be saved over an application’s life if the code is easy to read. A simple back of the envelope calculation would demonstrate this for almost all business software.
So we need to make our code easy to read, and ensure it has layout and style that will serve all those subsequent times it will be read, debugged, modified, refactored and reused. This of course raises the question: what characteristics make code easy to read? I believe it can be done by adhering to two simple principles:
- write code that is easy to scan
- use identifiers that are descriptive
What makes your code easy to scan is going to vary a bit; it's going to depend upon the language, but I think there are some general things to keep in mind. Here's my list, spawned for the most part from writing code for the last 7 years or so in Java:
- I want to be able to quickly see the start and end of classes. This is not normally an issue since there's one per file...unless you're using inner classes which can make things a little more exciting.
- I want to be able to easily see significant independent pieces of code, specifically the start and end of methods, iterative blocks and conditionals
- Method arguments should be highly scannable
- Judicious use of alignment and indentation can provide an easier to scan layout
- Remember, white space is free. Use it to improve readability.
- Any application of style and particular idioms should be habitual. When you're in some code and notice that it does not adhere to your preferred layout and style, consider changing it. It's an easy and quick improvement that will probably pay dividends.
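To illustrate a few of the points above, here's a small Java sketch. The domain (invoices) and every name in it are invented purely for the example; the point is the blank lines, indentation and aligned arguments that make the method's shape jump out while skimming.

```java
import java.util.List;

public class InvoiceCalculator {

    // Aligned arguments and blank lines between the independent steps
    // make the method's structure visible at a glance.
    public static int totalAfterDiscount(List<Integer> lineItemCents,
                                         int discountPercent) {
        int total = 0;

        for (int cents : lineItemCents) {
            total += cents;
        }

        // Apply the discount once, at the end, so the intent is obvious.
        return total - (total * discountPercent / 100);
    }

    public static void main(String[] args) {
        System.out.println(totalAfterDiscount(List.of(1000, 2500, 500), 10)); // prints 3600
    }
}
```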
Descriptive identifiers need a bit of thinking. Sometimes the way something should be named is obvious. Sometimes however it's not, and merits discussion with peers. Sometimes a good name comes to you later after your first stab at things. This is where the "rename" refactoring in IDEs is really useful. Poor naming is worth fixing; left untouched people waste mental energy dealing with it that could be better spent. In particular give due consideration to:
- Good clear names for packages, classes, methods and variables
- Exploiting the existence of a domain specific vocabulary. However, clarity is key; sometimes business users are inconsistent and you need to work hard to identify synonyms or subtle differences in seemingly interchangeable terms
- Brevity. Overly long unwieldy identifiers are not easily scannable. Similarly strive to reduce code. Less is more and all that...
- Avoiding redundancy in names, e.g. consider if you really need to prepend some system identifier to a series of classes, or append 'Servlet' to the end of all servlets that you write...
- Using plain, direct language. Overly abstract terms are often unhelpful. Complexity is inevitably there in most systems, but aim to simplify terms when reasonable to do so.
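To make the naming points concrete, here's a tiny before/after sketch in Java. Both identifiers are hypothetical and the "before" is deliberately exaggerated, but the ailments it shows (abbreviation, redundant suffixes, single-letter parameters) are real enough.

```java
public class NamingExamples {

    // Before: abbreviated, suffix-laden, and the parameters say nothing.
    static int calcAmtForCustOrderHelperImpl(int q, int p) {
        return q * p;
    }

    // After: plain domain language, no redundancy, still brief.
    static int orderTotal(int quantity, int unitPrice) {
        return quantity * unitPrice;
    }

    public static void main(String[] args) {
        System.out.println(orderTotal(3, 250)); // prints 750
    }
}
```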
Finally, I was going to provide an illustrative example. Some code laid out in gnarly 'before' fashion followed by an 'after' rendering of almost poetic beauty. But since this post has languished in unpublished draft mode for months I decided I'm never going to get around to that. So you'll just have to use your imagination.