Monthly Archives: November 2010

Eight Innovations For Project Managers

Many aspects of project management are tried and tested; Gantt charting is the most obvious example. Below are some more innovative concepts that are not so broadly used and might be useful to you. These aren’t changes you should introduce wholesale, but rather a list of ideas to experiment with on an ad hoc basis, depending on the needs of the project you’re involved with. Each idea links to the relevant blog post.

  1. Reference class forecasting – looking at similar past performance as the basis for estimates
  2. Pre-mortems – assessing what could cause problems before they happen
  3. Avoiding the Fred Brooks fallacy – adding more people to a project may actually slow down progress in the short term
  4. Aiming for more ideas rather than better ideas – great ideas come from culling and refining a large idea list, not from waiting for a single perfect shot of inspiration
  5. Setting ambitious goals – setting goals high leads to better performance
  6. Focus on the hard conversations – communication is key to any project, and the hard conversations are where the value is
  7. Consider outsourcing tasks – there is no reason that employees of your organization are the best people to execute all the tasks on a project plan
  8. Use burndown charting – a way to predict finish dates on certain projects with greater accuracy

The Value of Burndown Charts

Burndown gives you a robust way to figure out when your project will actually finish. It’s a core part of scrum, but really a lot of project managers can get value from using this technique.

Burndown is valuable because people are generally bad at estimating completion dates, so any technique that contributes an objective estimate should be useful. If you already have a Gantt-based schedule, burndown is likely a complement rather than a substitute: the two methods are fundamentally different, but both help answer the same question – when will we finish?

Burndown hinges on a very simple equation:

Rate of change in work = New work added – work completed

There are two basic outcomes of this equation. If you’re adding more work than you’re completing, then the completion of the project is still some way off. If you’re completing more work than you’re adding, then you are on the way to being done, and burndown charting can give you an estimate of how close the endpoint is.

As with anything apparently simple, there are challenges hiding beneath the surface. Here the main ones are:

1. Ensuring that all work is defined at a similar level (for example work in a software project might be defined as a bug)

2. Ensuring that you actually have a way to measure when new work is added and existing work is completed.

Of the two items above, (1) is actually much harder than (2). With any decent tracking system (2) should be straightforward, but for (1), work items will never be truly equal. I mentioned software bugs as a common unit of measure, but of course not all bugs are equal: some can be fixed in 10 minutes, others may take weeks or longer.

Then, generically, any project has three phases:

  • Phase 1 – planning: work items flat or increasing
  • Phase 2 – core execution: work items increasing
  • Phase 3 – approaching finish: work items decreasing

Plotted over the life of a project, this gives a rise-then-fall curve in the count of outstanding work items, though of course no project is ever quite this pure.

Burndown is most interesting/useful in the final phase (the downward slope) because that’s when the estimates have most forecasting value.

Example 1

50 work items outstanding

1 work item being added each day (on average)

11 work items being done each day (on average)

So, net 10 items are being done each day and the project should be done in 5 working days (i.e. 50/10).

Example 2

200 work items outstanding

10 work items being added each day (on average)

15 work items being done each day (on average)

So, net 5 items are being done each day and the project should be done in 40 working days (i.e. 200/5).

The examples above also imply that burndown only tells you when the existing stock of work will hit zero. If work is continually added to the list, the project is never truly done: the stock may hit zero at the end of one day, but more work will come in tomorrow.
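The arithmetic in the two examples can be sketched as a small function. This is just an illustration of the calculation described above; the function name and parameters are hypothetical, not from any particular tool:

```python
def burndown_forecast(outstanding, added_per_day, completed_per_day):
    """Estimate working days until the outstanding work stock hits zero.

    Returns None when work is being added at least as fast as it is
    completed, since the backlog will never burn down in that case.
    """
    net_rate = completed_per_day - added_per_day
    if net_rate <= 0:
        return None  # backlog flat or growing: no finish date to forecast
    return outstanding / net_rate

# Example 1: 50 items outstanding, +1/day added, 11/day completed
print(burndown_forecast(50, 1, 11))    # 5 working days
# Example 2: 200 items outstanding, +10/day added, 15/day completed
print(burndown_forecast(200, 10, 15))  # 40 working days
```

Note that the forecast is only as good as the averages fed into it, which is why it works best in the final phase, when add and completion rates have settled down.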

So that, in essence, is burndown charting. If you’re not currently using it and have a large number of similar work items on your project (or sub-project) then it’s a useful technique to experiment with.

Building a Project Plan – Key Activities Checklist

In the appendix of a recent report on the Department of Energy, the Government Accountability Office uses the following checklist for assessing project plans, to which I’ve added two broad groupings:
1. Build an accurate plan that reflects the project

  • Capturing key activities
  • Sequencing key activities
  • Establishing the duration of key activities
  • Assigning resources to key activities
  • Integrating key activities horizontally and vertically

2. Manage project risks

  • Establishing a critical path for key activities
  • Identifying the float time between key activities
  • Performing a schedule risk analysis
  • Distributing reserves to high risk activities

Of course, this is a very schedule-centric checklist. There is no mention of talking with your stakeholders, managing partner relationships, or assessing the feasibility of the work to be undertaken. However, as a checklist for building a project plan, I think it’s a good list, and the risk management section is particularly useful because it demonstrates what can be achieved once a solid plan is in place.
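Two items in the risk section above – the critical path and float – can be computed mechanically once activities have been sequenced and given durations. Below is a minimal sketch of the standard forward/backward-pass calculation; the activity names and the `critical_path` helper are hypothetical, for illustration only:

```python
from collections import defaultdict

def critical_path(activities):
    """Compute earliest start and float for each activity.

    `activities` maps name -> (duration, [predecessor names]).
    Returns {name: (earliest_start, float)}; activities with zero
    float form the critical path.
    """
    # Forward pass: earliest start = latest finish among predecessors.
    earliest = {}
    def es(name):
        if name not in earliest:
            _, preds = activities[name]
            earliest[name] = max((es(p) + activities[p][0] for p in preds),
                                 default=0)
        return earliest[name]
    for name in activities:
        es(name)

    project_end = max(es(n) + activities[n][0] for n in activities)

    # Backward pass: latest finish = earliest latest-start among successors.
    successors = defaultdict(list)
    for name, (_, preds) in activities.items():
        for p in preds:
            successors[p].append(name)

    latest_finish = {}
    def lf(name):
        if name not in latest_finish:
            latest_finish[name] = min(
                (lf(s) - activities[s][0] for s in successors[name]),
                default=project_end)
        return latest_finish[name]
    for name in activities:
        lf(name)

    # Float = latest start - earliest start.
    return {n: (earliest[n], latest_finish[n] - activities[n][0] - earliest[n])
            for n in activities}

# Hypothetical four-activity plan: name -> (duration in days, predecessors).
plan = {
    "design": (3, []),
    "build": (5, ["design"]),
    "docs": (2, ["design"]),
    "test": (2, ["build", "docs"]),
}
# "docs" carries 3 days of float; design -> build -> test is the critical path.
for name, (start, slack) in critical_path(plan).items():
    print(name, start, slack)
```

With float in hand, the last checklist item follows naturally: schedule reserves go to the low-float, high-risk activities, since any slip there slips the whole project.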

Off Topic – What People Are Searching For

An implication of the predictive search feature that internet search engines like Google and Bing now use is that you can tell what people often search for by entering the start of a general phrase into the search bar.

You can see a few examples below: some are depressing, some are cute (“how big will my puppy get?”), and others range from the philosophical (“how big is the universe?”) to the existential (“when will I die?”) to the mundane (“when to throw out make up?”).

Project Failure – Channel Tunnel

The Channel Tunnel Rail Link (source: Akanekal, via Flickr)

The Channel Tunnel, or Chunnel, is a 31-mile tunnel running underneath the English Channel that carries Eurostar passenger trains and freight trains between the UK and France.

Construction of the tunnel started in 1988, the project took approximately 20% longer than planned (at 6 years vs. 5 years) and came in 80% overbudget (at 4.6 billion pounds vs. a 2.6 billion pound forecast).

The tunnel wasn’t completely unprecedented: the Seikan Tunnel in Japan had similar length and depth. Nonetheless, as with projects such as NASA’s missions and the Sydney Opera House, it seems part of the reason for the cost overrun was the absence of precedents and the associated experience to base sound estimates on. In fact, the Channel Tunnel has subsequently been listed as one of the engineering wonders of the world, which emphasizes its uniqueness.

The delays resulted from three factors:

  • Changed specifications for the tunnel: air conditioning systems were needed to improve safety but were not included in the initial design.
  • Communication between the British and French teams, who were essentially tunneling from the two different sides and meeting in the middle, could have been improved. These sorts of communication issues are relatively common in delayed projects when tensions rise; Wembley Stadium is an interesting example, where poor communication meant that junior employees were often better informed about project status than senior managers.
  • The contract was bid on by competing firms, a framework that necessarily encourages the ‘winner’s curse’: the successful bidder tends to have the lowest and most optimistic price estimate. Again, the Wembley Stadium project offers another example of the winner’s curse.

Another interesting aspect of the Channel Tunnel’s forecasts was that a lot of revenue was projected to come from driving the existing ferry operators out of business. Of course, these ferry operators were the main way to cross the English Channel before the Channel Tunnel existed. However, this analysis ignored the possibility that the ferries would react to the Channel Tunnel with improved pricing and service, allowing them to retain market share. In addition, the emergence of budget airlines providing cheap air travel between the UK and France was not foreseen. It is a good reminder that when making strategic forecasts of benefits or results, you should bear in mind how competitors will react to the project you are envisioning.

Whilst it is not a project management issue per se, it should be noted that a great deal of the Channel Tunnel’s financial problems were caused by overly optimistic revenue projections on top of the construction cost overruns. Those projections failed to anticipate that the set of options for getting from Paris to London might change, both in reaction to the tunnel and because of innovation elsewhere, such as the development of the budget airline business model.

See more project management case studies on this blog, or follow me on Twitter for blog updates.

Channel Tunnel Drilling Equipment (source: Tony Bradbury)

How Good Is NASA At Project Management?

Source: NASA Ares Project Office

NASA too experiences schedule and cost overruns. The 10 NASA projects that have been in the implementation phase for several years have experienced cost overruns of 18.7% and launch delays of 15 months. Congress only required NASA to provide cost and schedule baselines in 2005, so no longer-term data is available. NASA’s projects are consistently one-of-a-kind and pioneering, so uncertainty is likely to be higher than for other sorts of projects.

These cost and schedule overruns are largely due to the following factors:

External dependencies

The primary external dependencies that cause problems for NASA are weather issues delaying launches and issues with project partners. NASA projects with partners experienced longer delays: 18 months, versus 11 months for projects without partners.

Technological feasibility

As the Government Accountability Office assessment states: “Commitments were made to deliver capability without knowing whether the technologies needed could really work as intended.” This is very often a cause of project failure; see my articles on the Sydney Opera House, the Denver Airport Baggage System and many others for examples of how common it is.

Failure to achieve stable designs at Critical Design Reviews

The Government Accountability Office cites 90% design stability at the Critical Design Review as a goal for successful projects, which is consistent with NASA’s Systems Engineering Handbook. Without this, designs are not sufficiently robust to execute against. It’s clear that NASA takes Critical Design Reviews seriously, but it doesn’t always achieve 90% stability; the exact value varies across projects, but appears to be in the 70–90% range for most. Raising the stability level at these Critical Design Reviews would reduce project risk.

Though long, the Government Accountability Office report contains lots of interesting further detail and can be found here.