Monday, October 10, 2016

Donald Trump : The Lean-Agile Candidate

This article is not an endorsement of Donald J. Trump. Instead, it is an analysis of how well the Trump campaign has utilized many of the techniques and strategies of Lean and Agile to run a very successful campaign under some very trying conditions. The campaign, since its beginning, has had a shoestring budget, a mercurial candidate with no prior public office experience, very little core establishment support, and the lack of a broad, informed policy platform. Despite all these hindrances, Donald Trump has not only beaten out a crowded Republican field but also remained competitive until late in the election cycle against an established political titan, Hillary Clinton.

Lean Startup

Donald Trump's candidacy was considered a joke for quite some time. This gave Trump the ability to take risks to make a mark for himself. In the beginning of his campaign, he made many outlandish statements, many of which would easily have rendered him unelectable had he been taken seriously. His immediate strategic focus, though, was to make some noise and gain notoriety. He was a startup in a field of established players. Doing what everyone else does was not going to help him separate himself in a field of 17 contenders for the Republican nomination. The same applies to new products. Yes, the table stakes (in this case, having a pulse and enough backers to launch a bid) are necessary, but not sufficient for success. You have to stand out, even if it is in an unorthodox manner, to gain market share. Breaking the mold and having a distinctive appeal is critical for any startup.

At this point, it is not just the primary voters who take note of Donald Trump; his outlandish statements start bringing in a lot of media attention. The amount of free airtime that Donald Trump's comments, and Trump himself as a guest, get from the various networks greatly exceeds the paid and free airtime of any of his competitors. This goes a long way in cementing the Trump political brand. Trump knows that the voters in the Republican primary are not fans of the media. Hence, while he feeds soundbites to the media, he also chastises them for unfair coverage. This becomes a consistent theme for the remainder of the Trump campaign. As a new product, it is important to establish a brand with your customers. Use all avenues available to remind your customers of your brand and how it stands out.

This beginning, and most of the rest of the campaign, seems to have been run using Lean Startup principles almost by the book. Trump repeatedly employs the Build-Measure-Learn loop, not just to figure out the right things to say, but also to create and adjust policy positions. The campaign guides its steps by observing customer reactions and understanding them first hand, rather than through pollsters and policy experts.

Limiting WIP

Trump has been, until recently, very focused on the immediate strategic direction. During the primaries, the Trump campaign had multiple hurdles ahead of it. Instead of tackling all 16 of his opponents at once, Trump goes after them one at a time. While others are not taking him seriously, he starts by discrediting the most lucrative target: the establishment heavyweight, Jeb Bush. Trump repeatedly calls him weak and makes sure that Bush is known as the establishment candidate. This is a great strategic direction to take, as it is potentially the easiest and most lucrative. Notably, he does not go after the other candidates at this time. He limits his WIP to one strategic direction at a time. Trump does not have the resources to spread his attacks out. This forces him to be lean. Repeating the same message about one candidate over and over helps him get the best results against that candidate with his customers.

The Trump campaign, in time, shifts focus to Marco Rubio, John Kasich, and eventually Ted Cruz in order to eliminate the competition one at a time. He picks on them one by one and brands them in ways that hurt them with Republican primary voters. This entire time, Trump's focus was on one of his Republican opponents, not Hillary Clinton. He made wildly unpopular statements, but these were unpopular overall, not amongst the voters who would show up to vote in the Republican primaries. Trump proved that limiting your WIP works at all levels, especially at the strategy level in a lean organization.

Feedback Loops

Most political campaigns thrive on feedback loops. They adjust as they get more information through polls and media feedback. The Trump campaign has taken this to the next level. Trump seems to be deliberately creating these feedback loops using rallies and social media. There are elements of Lean UX, Dev-Ops and Continuous Delivery present in the implementation of these feedback loops.

Lean UX

As opposed to other candidates, who spend long hours coming up with sound bites and attack lines, Trump's campaign spends very little time on such analysis. Trump tries out every new sound bite with audiences, both live and on Twitter, until he finds the one that sticks. The campaign is consistently pursuing the "linguistic killshot", as Scott Adams (of Dilbert fame) puts it. Lying Ted, Little Marco, Weak and Low Energy Jeb Bush, and even Nice Ben Carson were all effective brandings of Trump's opponents that hurt their campaigns. These "linguistic killshots" were not the result of hours and days of research. The Trump campaign took multiple ideas straight to the rallies and carried on with the ones that gained traction. The users help shape the experience rather than simply being subjected to it. Instead of spending days analyzing and doing research, Trump was getting ideas out first and getting feedback. When you are trying out new user experiences, the easiest way to validate which ones work is to actually put them in front of your customers.

Linguistic killshot: An engineered set of words that changes an argument or ends it so decisively, I call it a kill shot. One of the ones Donald Trump used was referring to Bush as a "low energy guy" or Carly Fiorina as a "robot" or Ben Carson as "nice." - Scott Adams

Dev-Ops and CD

Much is made of Trump's late night/early morning tweets. These are often the more "fiery" and controversial tweets that Trump puts out. The Trump campaign has realized the power of social media and utilized it a lot more effectively than the Clinton campaign has. Trump has been running his campaign with a lot less operating cash than his opponents. In May, the Trump staff consisted of 69 staffers, as opposed to 685 staffers for Hillary Clinton. This means that Trump has to find new and better strategies for getting the word out. Trump delivers his messages directly to his customers, early and often. Early morning tweets are not just seen by his followers when they wake up but, due to their controversial nature, are re-tweeted and replied to by thousands of folks. What is even more important is that the early morning news and talk shows pick these up and talk about them for hours. Trump is not just the engineer of the messages; he also deploys them out in the field. He delivers early and often. By doing this, he is able to shape the conversation for the day without making the rounds of the news and talk shows.

According to The New York Times, as of March, Trump had received almost 2 billion dollars of free media coverage due to his continuous delivery of unique messages.

A common charge leveled against Trump is that he "takes the bait" and cannot resist responding to every charge leveled against him. This might be a valid charge, and Trump might have little control over his instincts. The tendency, though, still provides all the same benefits. By using multiple rapid-fire responses, the campaign is able to identify which ones resonate with the voters and implement those in detail. Trump being both the engineer and the deployer of these responses cuts down the amount of time it takes to run them by the "end-users". The campaign also does not waste time trying to craft the perfect response; it allows the "customers" to choose which response the campaign spends time on, in order to perfect it.


The Trump campaign (until very recently) has been the definition of anti-fragility.
Antifragility is a property of systems that increase in capability, resilience, or robustness as a result of stressors, shocks, volatility, noise, mistakes, faults, attacks, or failures.
Software developers anticipate volatility and shocks to the system in order to make the system perform better under these forms of stress. A great example of this is Netflix's Chaos Monkey, which turns off services randomly to make sure the system is built to handle the stress. Trump is his own campaign's Chaos Monkey. He is unbridled in his speeches, tweets, and personal interactions. His team has known this from day one. Every campaign manager and media relations person on his team has become an expert in the art of the spin. This means two things. First, it does not matter what Trump actually says. The media and the political elite might sneer at his remarks, but Trump's team has a lot of practice in handling these situations. They turn every remark that would otherwise sink a campaign into a positive for the country. Second, it allows Trump to keep running his experiments with words. The media and the establishment up in arms against him is an anticipated stressor. It makes his image as the anti-establishment outsider more robust with every attack. The Trump campaign is the mythological Hydra, which grows stronger when attacked.

Small Batches

This political cycle has made it hard to concentrate on actual policy matters. While most candidates went into the race with filled-out platforms and explicit positions on all the major issues, Donald Trump did no such thing. He went about making his policy explicit in small batches. He first released his position on immigration in August 2015. Even in this case, most of the plan was kept flexible to account for feedback from the voters. The Trump campaign has shifted positions based on what seems to work with the public at its rallies, rather than what the experts think. This is the exact opposite of a well thought out, heavily analyzed political platform. Rather than having a set-in-stone "product roadmap", the Trump campaign releases information on policy initiatives in small batches so that they can be easily consumed and future initiatives adjusted as needed.

Trump is executing Build-Measure-Learn while everyone else is doing large upfront analysis. Trump's campaign is agile while most others are living in the traditional Waterfall world. Trump is able to make repeated policy shifts without people taking much notice because he makes these shifts in small batches, changing little details, as opposed to staying stuck on a pre-defined platform he has committed to.

The Lean-Agile Candidate

Donald Trump is far from an ideal presidential candidate. He has great flaws that he seems to escape just as they catch up to him. Many times, he comes out stronger than before because these flaws help him show his anti-fragility. The Trump campaign has done a great job of tapping into the voters directly and making an otherwise improbable candidate into a strong presidential contender. The strategies and tactics used by the campaign, whether knowingly or not, bear great resemblance to the Lean and Agile principles that we encourage teams and organizations to adopt. Of late, Trump's old words have come back to haunt him. Such deep-rooted flaws are probably beyond the anti-fragile ability of his campaign. However the race ends, this lean and agile campaign has probably changed the world of political campaigns for years to come. 

Monday, September 12, 2016

Types of Variability and Roger Federer's Serves

What makes top-ranked tennis players so extraordinary? Apart from the pure athleticism and power that they are naturally gifted with, there is the amazing consistency that they have developed in their shot making as well. The following graph from the BBC shows the landing spots for Roger Federer's serves during Wimbledon 2012.

What stands out here is the placement, which is amazingly consistent. The serves land either close to the middle of the court or near the sidelines. The graph shows only successful serves, so some spots are missing, but they would all be close to the existing clusters. You can also see why he aims for certain areas, from the concentration of the "Unreturned" dots. There is some variability in the landing spots of the serves; even the best in the business cannot land it on a dime every time, but he is amazingly accurate. There might also have been some serves, affected by external factors like gusts of wind, that strayed off the court or away from the intended areas represented by the clusters. In effect, Federer might not be able to hit a dime with his serve every time, due either to natural variation or to external factors.

Understanding variation is the key to success in quality and business. - W Edwards Deming

Most systems have two types of variability present in them. First are the "common cause" variations that exist in all systems, as it is almost impossible to have a process that produces work at the exact same pace for every item. The second type is the "special cause" variations that happen once in a while due to external factors or special circumstances that are usually out of the team's control. These are usually the major outliers in terms of cycle time. A scatter plot, as shown in the single items forecasting post (and copied here), is an easy way to visualize these variabilities.

We have talked in earlier posts about understanding uncertainty, but it is not enough to just understand that uncertainty and variability exist in every process. We need to understand the types and sources of these variabilities so that we can react to them appropriately. Let us talk about these in the context of a software development team.
  • Common Cause Variability - This is the inherent variability present in knowledge work. It can be caused by some stories being easier to accomplish than others, internal queues within the team, the team's process policies, holidays, etc. The main attribute of common cause variations is that they are caused by things within the team's control. These are usually natural variations, similar to the clusters of dots on Roger Federer's service map. On the scatter plot of stable teams that have consistent policies, these are easy to spot. For example, in the figure shown above, 95% of the stories are getting done in 14 days or less. The tight grouping and random distribution of the dots under the 2-week line represent the natural variability in the team's process.
  • Special Cause Variability - This is variability that is usually caused by external factors. It can be a result of work getting stuck in external queues, half the team getting sick, emergency production issues, a machine with un-checked-in code dying, etc. These are usually things that come out of left field and cannot be predictably accounted for. These are also the outliers on Roger Federer's service map (some of which are not shown here), which could be caused by wind gusts or a racquet string breaking. Once again, these show up on scatter plots for stable teams (like the one above) as the outliers (see the sketch after this list).
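
As a rough illustration, this separation can even be automated once cycle times are recorded. The sketch below (in Python, with made-up cycle time data) draws a 95th percentile line and flags everything above it for special cause investigation:

    # Illustrative cycle times in days for 40 completed stories (hypothetical data).
    cycle_times = [3, 5, 2, 8, 4, 6, 9, 3, 7, 5,
                   4, 11, 6, 2, 5, 8, 3, 7, 12, 14,
                   6, 4, 9, 10, 3, 5, 7, 8, 2, 6,
                   4, 5, 13, 6, 7, 3, 9, 41, 5, 8]

    def percentile_line(data, pct):
        """Days within which roughly pct percent of the stories finished."""
        ordered = sorted(data)
        return ordered[min(len(ordered) - 1, int(pct / 100 * len(ordered)))]

    # Everything under the 95th percentile line is treated as common cause
    # variability; anything above it is a potential special cause to investigate.
    line = percentile_line(cycle_times, 95)
    special_cause_suspects = [ct for ct in cycle_times if ct > line]
    print(f"95% of stories finished in {line} days or less")      # 14 days here
    print(f"Potential special causes: {special_cause_suspects}")  # [41]
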
As we are fully aware, the big question we are always asked is - "When will it be done?". In answering this question with high confidence, having low variability is the first important step. With a wide variation in story cycle times, the answers are only as good as gut feel. In order to get better at answering the question, we have to adjust our policies to ensure a lower range of variability. Just like professional tennis players, we have to work hard at keeping variability down to the level of natural variation. We know that variability will still exist, but we can definitely work at controlling the amount of variability through consistent and smart practices.

For someone taking up tennis for the first time, it is more important to get the mechanics and placement of a serve right than to hit the serve as fast as possible. Once the basics are in place, we can dial up the speed while trying to keep the natural variability in control. The same applies to teams finishing stories. Very often, teams do the opposite and make the mistake of going for speed before achieving predictability. Our first objective should be to tweak the team's policies in order to finish stories in short cycle times and limit the variability that the team is introducing into the process. Once we have limited this type of variability, we can start applying other adjustments to make our stories flow through faster. Going for speed before achieving predictability can often have the exact opposite effect on the amount of variability in the system.

There are multiple tactics at the team's disposal in order to reduce the common cause variability in their process. Some of these are outlined below -
  • Optimize WIP - The easiest way to reduce the variability in the system is to control the number of things the team is working on. The more things we work on, the longer things take (Little's Law; see the sketch after this list). The longer things take, the greater the range of the number of days your stories take to complete. Limiting WIP also reduces task switching, removing that source of variability as well.
  • Right-Sizing Stories - Your stories don't all need to be the same size, but they do need an upper limit. If a story looks like it will take longer than the team's SLA (you will, of course, need to have one for this), it should be split. The SLA then becomes your upper bound for common cause variations.
  • Sizing Stories In Progress - We very often realize that stories are larger than we initially thought once we start working on them. The inertia of the story often stops us from splitting it when in flight. Getting past that inertia can give us a more predictable flow and also more options for story prioritization.
  • Swarming - Often there will be a story that cannot be split but is large enough to allow multiple team members to work on it together. Swarming can help the work item get done in a predictable timeframe.
  • Eliminating (or Reducing) Queuing Time - Very often, work just sits waiting for handoffs to happen. The time between Analyst-Developer or Developer-Tester handoffs can exceed the time a story is actually worked on. Reducing it as much as possible can bring a lot of stability to the system.
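
The first tactic leans on Little's Law, and a couple of lines of arithmetic (with hypothetical numbers) show why it is the easiest lever to pull - halving WIP, with throughput unchanged, halves the average cycle time:

    # Little's Law for a stable system:
    # average cycle time = average WIP / average throughput.
    avg_wip = 12.0      # stories in progress at any given time (hypothetical)
    throughput = 1.5    # stories finished per day (hypothetical)

    print(avg_wip / throughput)        # 8.0 days average cycle time
    print((avg_wip / 2) / throughput)  # halve WIP, same throughput: 4.0 days
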
These and other similar strategies can help create a system where the variability due to common causes falls within a very small range. Any variability that is due to special causes is then easily visible as outliers on a cycle time scatterplot, as shown below.

This separation between the two can be used to determine which strategies to use for special cause vs. common cause variations. The outliers, as they represent the special causes, would in this case be things out of the team's control. These would often be external dependencies that the team needs assistance with. If a definite pattern starts appearing among the outliers, it might suggest that these are common causes masquerading as special causes. Variabilities that are special cause for teams might be common cause for the organization. These require approaches similar to the ones mentioned above for controlling common cause variability, applied at the organization level. They need to be reviewed by team leads in a common forum in order to detect these patterns and apply solutions that reduce them to common cause levels. Of course, that is not applicable to every special cause issue, but it can go a long way in both becoming predictable and moving faster.

Taking steps to control common cause variability at the team level exposes special cause variations. Special cause variation at the team level can often be common cause variation at the organization level. This often means that simple solutions, similar to the ones that helped at the team level, can be developed and applied at the organization level to reduce the time it takes for work items to finish. Delays at the organization level can be more costly than delays at the team level, and the easiest way to expose them is to get control of common cause variability at the team level, which makes the common cause variability at the organization level evident.

Reference : The scatterplots in this post are from the Analytics tool developed by Dan Vacanti.

Monday, August 22, 2016

Probabilistic Forecasting - Effects Of Uncertainty On Predictions (aka Controlling Variability)

The variability in Monte Carlo predictions, or the range of predictions, is a direct result of the variability in the team's daily throughput. A team with a very consistent throughput will produce a "tighter" Monte Carlo result set. The results of simulations for teams that have a regular daily throughput will not change by much at different confidence levels. The results for teams that have fluctuating daily throughput will show more pronounced changes as we change confidence levels. Let's take the following two hypothetical teams as examples.
Both teams in this case have closed 30 stories over the course of 30 days.
Team A finishes one story almost every day, with a few days where they finish 2 stories. Their historical throughput graph looks like this - 

Here the horizontal axis is a timeline and the vertical axis is the number of stories done on that day. When we run Monte Carlo simulations (10,000 simulations) for this team the following results appear -

The percentage lines on the above graph are levels of confidence that help us interpret the graph. Based on the above results, Team A has a 95% chance of getting at least 24 stories done, 70% chance of 28 or more, and a 50% chance of getting 30 or more stories done over the next 30 days.

Now let us consider Team B, which has a more variable daily closure rate. The team tends to have some days when they close a bunch of stories and other days when they do not close any at all. Their throughput graph looks like this - 

Just like Team A, Team B also completed 30 items over the same time period.

Examining the results from Monte Carlo, we can see that there are many more possibilities and the numbers on the conservative end of the spectrum are much lower. Team B has a 95% chance of getting at least 16 stories done, 70% chance of 21 or more, and a 50% chance of getting 30 or more stories done over the next 30 days.
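
A minimal sketch of how confidence levels like these fall out of the simulations (the daily throughput numbers below are hypothetical stand-ins for Teams A and B; exact outputs will vary slightly from run to run):

    import random

    # Hypothetical daily throughput for the last 30 days; both teams closed 30 stories.
    team_a = [1] * 24 + [2] * 3 + [0] * 3               # steady: roughly a story a day
    team_b = [0] * 20 + [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]  # bursty: feast or famine

    def simulate(history, days=30, runs=10_000):
        """Resample past daily throughput to project totals for the next `days` days."""
        return sorted(sum(random.choices(history, k=days)) for _ in range(runs))

    for name, history in (("Team A", team_a), ("Team B", team_b)):
        results = simulate(history)
        p95 = results[int(0.05 * len(results))]  # exceeded in 95% of simulations
        p50 = results[int(0.50 * len(results))]  # exceeded in half of simulations
        print(f"{name}: 95% confidence of at least {p95}, 50% of at least {p50}")
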
We can see that in both data sets the middle value is 30 stories, but the values at the same high levels of confidence are much lower for the team with higher variability in throughput. We can conclude that in order to make predictions with high confidence that still deliver the best results from our teams, we need to control the variability in our throughput. The question, though, is - How can teams create systems that have lower variability, so that we can make predictions with higher confidence?

Probabilistic Forecasting - Forecasting In The Face Of Uncertainty - Multiple Items (Monte Carlo)

Multiple Item Forecasts

The rate at which stories get done can help us figure out what the capacity of a team is for a given period of time. We need to make sure we do not fall into the Flaw of Averages trap, though. We have to model the inherent uncertainty in our processes in order to make sensible predictions for a team. Apart from how long it takes for things to get done, uncertainty also presents itself in the form of how many things get done on a given day. Using the historical trend of how many things get done on a daily basis, we can model the future, assuming that the team will behave the way it has behaved in the past. This, in essence, is the Monte Carlo method. Monte Carlo uses data from the past to give us the probabilistic capacity of a team. We assume that if, say, in the past 30 days there have been 3 days when the team closed 2 stories, then there is a 10% (3 divided by 30) likelihood that on any random day in the future, the team will close 2 stories.
Monte Carlo Method
The Monte Carlo method (one variation of it), as we use it, runs through the following steps -
  1. We determine a past range to use and a future range to predict.
    1. The range we select is usually on the order of a few weeks.
    2. We use the latest few weeks as we believe that the latest data is the best representation of future performance.
  2. For the first day in the future range that we are trying to predict, we randomly select a day from the past range.
    1. The throughput from the randomly selected day in the past is assigned to the day in the future range we are trying to predict.
  3. Step 2 is repeated for all days in the future that we are trying to predict.
  4. When all the days have been predicted using the past range, the total of all the throughputs assigned to the future days gives us one answer for how many stories the team can get done.
    1. We record this throughput as one possible result, which can answer how much capacity the team has for a given time period.
  5. We repeat steps 2 through 4 a few thousand times and gather the results of each of those simulations.
The results of these simulations can cover a wide range, depending on the variability in the team's process (i.e., fluctuations in the number of stories closed on a daily basis) and the length of time we are trying to predict over. The numerous predictions, though, all represent the possibilities available for us to choose from. Based on the distribution of these possibilities, we can start saying what the probability is that we can get at least x stories done. For example, if 85 percent of our simulations told us that over the next 60 days a team can do 30 stories or more, we can say that we have an 85% confidence that the team can do 30 or more stories. The same set of simulations might have 50 percent of the results at 40 stories or more. This would give us a 50% confidence that the team can do 40 or more stories over the next 60 days. We can now plan according to the amount of risk we are willing to take. If we plan for more stories, at least we are aware of the amount of risk we are taking.
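
A minimal sketch of the steps above, with hypothetical past throughput data (a real implementation would read the team's actual history):

    import random

    # Hypothetical daily throughput (stories closed per day) for the past range.
    past_throughput = [1, 0, 2, 1, 1, 0, 3, 1, 2, 0, 1, 1, 2, 1]  # step 1: past range
    future_days = 60                                              # step 1: future range
    simulations = 10_000

    results = []
    for _ in range(simulations):                     # step 5: repeat a few thousand times
        total = 0
        for _ in range(future_days):                 # step 3: for every future day...
            total += random.choice(past_throughput)  # step 2: ...assign a random past day
        results.append(total)                        # step 4: record one possible answer

    results.sort()
    # If 85% of simulations yield at least N stories, we have 85% confidence in N or more.
    print(f"85% confidence: {results[int(0.15 * len(results))]} or more stories")
    print(f"50% confidence: {results[int(0.50 * len(results))]} or more stories")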

Next : Uncertainty And Predictions

Probabilistic Forecasting - Forecasting In The Face Of Uncertainty - Single Items

Single Item Forecasts

Our ability to estimate how long a single item is going to take is largely overestimated (because we are bad at estimating (smile)). One of the best ways to get a handle on this uncertainty, and a better idea of how long something might take, is to actually start measuring how long items currently take. This is where the metric of Cycle Time comes in. The most common reason to lower cycle time is to increase throughput. If each item gets done faster, more items will get done over a period of time - sounds obvious. But another great reason for lowering cycle time is to lower variability. Going back to the commute example, if we timed our commute every day, we would be able to get an idea of what the distribution of our commute times is. Similarly, if we timed our stories and recorded how long they take to complete, we can start making some sense of the variability. The graph below is the cycle time scatter plot for a team. It shows the days it took for stories that closed in the first quarter of 2015. Along the vertical axis is the number of days a story took, and the horizontal axis is the timeline. Each dot here represents at least one story.

The horizontal lines running through the graph show what percentage of these stories took 12 days or less, 9 days or less, and so on. Based on the above graph, we can say that any story this team starts has an 85% chance of getting done in 12 days or less. Half the stories (50%) get done in 6 days or less. Now, when this team is handed a new story to work on, they can use this data to communicate to the story's customer how many days it will take, with what level of confidence. This is a much better way of forecasting and "estimating" the amount of time a single item will take than taking averages or estimating points.
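
These percentile lines are simple order statistics over the recorded cycle times. A small sketch with made-up data:

    # Recorded cycle times in days for a quarter's worth of stories (hypothetical data).
    cycle_times = [4, 7, 2, 9, 5, 12, 3, 6, 8, 1,
                   5, 13, 6, 4, 10, 2, 15, 5, 3, 6]

    def line_at(data, pct):
        """Days within which roughly pct percent of stories finished."""
        ordered = sorted(data)
        return ordered[min(len(ordered) - 1, int(pct / 100 * len(ordered)))]

    print(f"85% of stories finished in {line_at(cycle_times, 85)} days or less")  # 12 here
    print(f"50% of stories finished in {line_at(cycle_times, 50)} days or less")  # 6 here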

Probabilistic Forecasting - Understanding Uncertainty

The subtitle of the book The Flaw of Averages is "Why We Underestimate Risk In The Face Of Uncertainty". Let us talk a bit about uncertainty. Uncertainty comes in two flavours - single item and multiple item. Single item uncertainty refers to the variation in how long it takes for one thing to get done. An example would be the cycle time of a single story or a single feature. Multiple item uncertainty refers to how long it takes a set of single items to get done. This is equivalent to figuring out the capacity of a team for the length of a release. In other words - how many stories can we get done in a given time frame?

Single Item Uncertainty

For those who commute to work - how long does it take you to get to work? Let us say your answer is 15 minutes. Would you bet your paycheck on getting to work exactly 15 minutes after you leave home? If we started noting down the amount of time it takes every day, we would probably see a very random distribution of times (with a high likelihood that none of them is exactly 15 minutes). The reason for this random distribution is that there are inherent uncertainties in the process of commuting from one place to another. Traffic lights, errant drivers, accidents, heavy traffic, and many other variables come into play. All these variables, compounding on each other, lead to the uncertainty that prevents us from knowing the exact time we will be at work. There are numerous parallels we can draw between commuting and getting a work item (story or feature) through our process and calling it finished. After years of trying, we know that asking a developer to estimate how long a story is going to take is a futile exercise. This is due to the variability built into knowledge work. We do not know what we will find when we try to implement something, much less how long testing it will take. The best we can do is give a range that we are comfortable with. Even beyond that, there is no direct predictability on when a build with the change will become available, or how long a change will sit waiting for someone to pick it up to test. The greatest sources of uncertainty, though, are external dependencies. Whether at the story or feature level, the completion of a work item that cannot be done by one team is very difficult to predict.
The one thing we can say for sure is that the larger a work item is, the more unpredictable it will be. It is again analogous to a trip in the car. A "15 minute" trip can vary by 10 minutes most of the time, but a "4 hour" trip can easily vary by 1-2 hours. When something looks like it will take a long time, it is more susceptible to variability, and its completion time is increasingly uncertain.

Multiple Item Uncertainty

Imagine we were holding a chess competition to figure out the best player among a group of 10 players. The structure of the tournament means that we need to play 20 games in order to crown a champion. Each game can take anywhere from 15 minutes to 3 hours. We have 2 chess boards. How many hours do we need to schedule in order to make sure we have enough time to finish all 20 games? This is the multiple item problem that we try to resolve every day when we try to figure out how long a particular piece of functionality or feature is going to take to develop. Looking at this another way - how many games can we play in a 24-hour schedule? That is the same problem as trying to figure out how many stories we can get done in a release. As you can see, the fact that each game can be of variable length makes it very hard to predict the completion time of all 20 games. We can easily be drawn into flaw-of-averages thinking here. There are also dependencies between games. Earlier-round games have to be played before later-round games, and playoff games have to be played in a particular sequence. This makes our job harder. If players are not available at the times they are scheduled to play, that adds further delay to the tournament.
What makes this a hard problem to solve is that there are multiple answers. The total time for the tournament (just like the car trip) is not just one number, but multiple possible numbers. The trick is to figure out the probability of each of those numbers. The easiest way to do this would be to play the tournament thousands of times and record how long it took each time, so that we can use that data to determine the probabilities of different lengths of time.
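
That is cheap to do in code. The sketch below plays the tournament 10,000 times under a simplified model (game lengths drawn uniformly between 15 minutes and 3 hours, two boards, and the round-sequencing dependencies ignored for brevity):

    import random

    def tournament_minutes(games=20, boards=2):
        """Minutes to finish all games, assigning each game to the first free board."""
        free_at = [0.0] * boards
        for _ in range(games):
            board = free_at.index(min(free_at))        # next board to free up
            free_at[board] += random.uniform(15, 180)  # one game: 15 min to 3 hours
        return max(free_at)

    runs = sorted(tournament_minutes() for _ in range(10_000))
    print(f"50% of tournaments finish within {runs[len(runs) // 2] / 60:.1f} hours")
    print(f"85% of tournaments finish within {runs[int(0.85 * len(runs))] / 60:.1f} hours")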

Next : Forecasting Single Items

Probabilistic Forecasting - The Flaw Of Averages

The Big Question

Whenever we are planning out a release, the first question we try to answer is - What is the team's capacity? Similarly, when we start work on a feature or story, the first question is - How long will this take? We have traditionally tried to answer these questions in multiple ways, with very little attention to the fact that there are multiple possible answers. Each of these answers has its own probability of being correct. We need tools and systems that help us figure out what these answers are and what their probabilities of being correct are.

The Flaw Of Averages

Our traditional approaches to projecting capacity and tracking progress have been fraught with errors. We have in the past used techniques like the average number of stories, the average number of points, or even the number of stories done in the last release to determine what we believe is the capacity of the team for a given release. These approaches suffer from a few major flaws. The greatest of these is that the average, at best, tells you that there is a 50% likelihood of doing the same amount of work or more, and a 50% likelihood of doing less. For a greater level of detail on why "plans based on averages fail on average", please read the fascinating book "The Flaw of Averages".
This calls for a better way to predict and track capacity - a method that gives us not just a prediction in the form of a date or a number, but also the likelihood of that prediction being correct. We need something better than the average, something that lets us know the level of risk we are taking when we say that a team will be able to get X work items completed. Every prediction has the possibility of being wrong; hence, when we make a prediction, we should acknowledge the probability of that prediction being correct and, conversely, the probability of it being wrong.
The Flaw of Averages does not preclude us from using the past as a predictor of the future. Past performance is the best baseline we have for projecting future performance, and there are better ways to use it than gut-feel estimates and averages. A detailed look at our past performance can reveal the level of uncertainty that exists in our system. An understanding of this uncertainty can give us the ability to make better predictions. Modes of forecasting that boil predictions down to a single number are flawed because they don't take the random nature of software development into account. Before we get into modes of prediction, let us understand the nature of uncertainty in software development.
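
One illustrative way to see the flaw in action (a made-up model, not a prescription): a release that needs three parallel workstreams, each averaging 30 days, is almost never done by the "average" 30-day plan, because all three must finish.

    import random

    # Three parallel workstreams, each taking 30 days on average (illustrative model).
    def workstream_days():
        return random.gauss(30, 8)

    trials = 10_000
    on_time = sum(
        max(workstream_days() for _ in range(3)) <= 30  # done only when ALL streams finish
        for _ in range(trials)
    )
    # Each stream alone hits day 30 about half the time; the release almost never does.
    print(f"Release meets the average-based 30-day plan in {on_time / trials:.0%} of trials")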

Next : Understanding Uncertainty