Thursday, June 30, 2016

The Audacity of the Strategy of Hope

Typing the words "Hope Meaning" into Google (since Google knows everything) gives you the following definition of hope -


Whether hope is being used as a noun or a verb, the implication for projects is the same. There is always a degree of hope involved in projects that are either being planned or already active. Projects start with a certain level of confidence in hitting the desired end date. The point at which our confidence in making the date falls below that initial level is when we start employing the "Strategy of Hope".

A soccer team that is up 2-0 at half time operates in the second half with confidence, because it has a very high likelihood of winning the game. The percentage of games won from this position is very high. The losing side at the same point, on the other hand, is operating (amongst other things) under the strategy of hope. According to 5addedminutes.com, "a mere 1.8% of 2-0 leads have been squandered into defeats, whilst 93% have been converted into wins". Despite knowing these percentages, we still see the hope bias, both in sports and in software development. We always believe that we can be the 1.8%, not realizing that only 1.8% can be the 1.8%.

The language used by development teams and managers starts to change the more they rely on hope. Words that represent a high degree of certainty, like "definitely" and "will", are replaced by words that represent more hope than certitude. Phrases containing words like "if", "may" and "hopefully" become the norm. This usually lasts until the last day of the project, when the certainty comes back. This time, though, "definitely not" and "will not" are the phrases most commonly used. Hope turning to despair in the last throes of a software project is all too common an occurrence.

I believe that the traditional application of hope in projects follows the graph shown below. The reliance on hope comes in multiple forms. The most common form, which I have been guilty of as well, sounds a bit like this - "We have been going through a rough patch due to [insert reason here], we expect to get over the hump soon and make up ground". It is usually not till the release date, or the day before it, that we finally admit there was too much ground to make up.


Saying that it looks bleak, but we will definitely get it done, usually just delays the point of despair. Two things commonly happen as we approach that point. First, the team starts working overtime to get the impossible done. Second, the team takes shortcuts to get the work done. Both of these avenues are likely to result in quality issues and the creation of tech debt. In circumstances where the date or scope is negotiable, the information regarding a delayed release or delayed features arrives too late, causing loss of credibility and possibly revenue.

How do we escape the hopeless Strategy of Hope? (Alright, it is not completely hopeless; every once in a while it does work.) Whenever we see a project starting to rely more on hope and less on what the team's actual current performance tells us, it is time to self-correct. Avoiding reliance on hope entirely is not possible; there is always a degree of hope in every endeavour. If you planned the project start and end dates knowing your confidence level of completing the project on time, and that confidence starts getting replaced by hope - that is when you readjust scope or time. Set expectations early and often.

We live in a probabilistic world. Every software professional will tell you they do not really know exactly how long it will take them to fulfill a single request and get it into production. I am not sure why, then, we have the confidence to know exactly how long it will take to get numerous requests into production. Setting end dates using probabilistic forecasting measures, instead of being deterministic about a release date and release contents, is accepting reality. Acknowledging reality and shedding the reliance on hope is not taking the easy way out; it is simply doing business properly. Acknowledge your reality early, and acknowledge it repeatedly, as often as you can.

We use Monte Carlo simulations (more on these in another post) to understand our reality when it comes to releases. We run these simulations every hour, for each team, to find out where the teams stand and what kind of progress they are making. The simulations give us the percentage probability of the team finishing all the items in the release by the release date, based on its recent performance. This means we find out very quickly when our confidence in the completion of the release has dropped below an acceptable level. Invariably, when confidence in the release goes below a desirable point, the Strategy of Hope starts to make its appearance. This is the point where we look for alternatives to hope. We realign expectations as early as possible rather than waiting for the last part of the release. Teams should aim to avoid the disappointment brought about by despair at the end of a release.
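A minimal sketch of this kind of simulation fits in a few lines of Python: resample the team's historical weekly throughput to estimate the odds of finishing the remaining items by the release date. The throughput history and item counts below are made up for illustration.

```python
import random

def completion_probability(weekly_throughput, items_left, weeks_left, trials=10_000):
    """Monte Carlo: resample past weekly throughput to estimate the
    chance of finishing items_left items within weeks_left weeks."""
    successes = 0
    for _ in range(trials):
        done = sum(random.choice(weekly_throughput) for _ in range(weeks_left))
        if done >= items_left:
            successes += 1
    return successes / trials

# Hypothetical history: items finished in each of the last 10 weeks.
history = [3, 5, 2, 6, 4, 3, 5, 4, 2, 6]
print(f"{completion_probability(history, items_left=30, weeks_left=7):.0%}")
```

Run against live data every hour, a falling percentage is exactly the early-warning signal described above.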

Monte Carlo is not the only way to do this, of course. Whatever metric, simulation, or gut feel shows you that hope has become the operative method, use it to detect the point where you readjust expectations. Lower your tipping point for despair, so that you do not have to wait too long to realize that your expectations are not aligned with reality. Whenever your reliance on hope exceeds a certain point, realign expectations if possible. If realigning is not possible, be prepared to deploy emergency fixes immediately after the release and carry the tech debt forward for eternity.

The Strategy of Hope is probably best left to soccer teams that are 2-0 down at the half. They are waiting for a miracle in the second half, carrying the "feeling of expectation and desire for" that miracle. Once in a while, a team that has not scored in the first half will score three goals in the second, but do you really want to bet on that miracle?

Friday, May 20, 2016

Mathematical Boondoggle

I spent the last week in San Diego at the Lean Kanban North America conference, followed closely by the Kanban Leadership Retreat. All the big names and thought leaders in the Kanban community were there, at least all the ones that have not been banned from LeanKanban Inc. events. I had no idea conferences had ban lists. After finding out about this, I firmly believe that no speaker/writer/thought leader on the conference circuit has done enough innovative work until they have gotten themselves banned from a conference. I looked forward to a week of exciting and invigorating conversations, along with some quality team building time with the development managers at Ultimate Software. The trip lived up to the latter for sure; the antics of the Ulti-Managers will have to be a blog post on a different blog. The former, on the other hand, was a bit of a disappointment. There were definitely some bright spots, and some sessions and conversations were enlightening, but a lot of the rest was filled with what I can only call Mathematical Boondoggle.


Look At All The Maths I Have

Since most Kanban implementations are metrics driven, the community has a lot of conversations around metrics. In fact, there seems to be a mathematical arms race amongst the leading voices in the community. Multiple curve-fitting conversations and detailed analyses of Cost of Delay curves are being explored and presented. There is nothing wrong with the concepts, or with theorizing about them, but what is missing is the practicality of their application. Over the past few days I have been scratching my head trying to find the dots that might not be getting connected in my head. Maybe my education in the Kanban practice has been so grounded in practical application and real data that making the data fit a Weibull or a Log-Normal curve in order to do forecasting and sampling makes no sense to me.

Both Troy Magennis and Frank Vega presented forecasting techniques (Monte Carlo) which used real data to generate forecasts and probabilities. That was very heartening and reaffirming to see. They did not try to fit the data into any named statistical curve. They took the data as it was and, using the fact that every individual data point is just as likely to occur again, simulated what the results would be. They made no attempt to figure out the tail of the distribution or the shape factor of a Weibull distribution. I know Troy Magennis has in the past (and probably still does) espoused the use of the Weibull distribution, but his session included no mention of it.

There was a good amount of SAFe bashing (not that I am a fan of SAFe), especially around the way SAFe uses estimated, and hence not real, data in the calculation of Weighted Shortest Job First. This came up in the middle of the Cost of Delay discussion. The irony was strong, and it appeared to be lost on most of the folks in the room: folks ridiculing a formula that uses estimates and makes multiple assumptions, in the middle of a discussion espousing a technique (Cost of Delay calculation) that starts with multiple estimated values and multiple assumptions about value curves.

A group of us continued the conversation at the dinner table, and at least that group agreed that Cost of Delay is almost impossible to calculate because of the inherent problems with estimating the value and its decay over time. The only practical voice in the Cost of Delay conversation came from Klaus Leopold, who said that value conversations should only happen at the feature level, not above or below it. I will have to have an in-depth conversation with Klaus at some point to understand how he calculates the value, though. It does not seem like something that can be easily determined, and determining it is itself a cost.

I See Your Theoretical Math And Raise You Practicality

I left the Leadership Retreat disappointed in one major way. The community that has valued and espoused using real measurements to make decisions, and collecting actual data for analysis, has moved far in the direction of adding layers on top of the data in order to analyze it. The same community is talking in terms of estimates when we have espoused real data since the beginning. It increasingly seems to be mathematical boondoggle coming from the leaders of the Kanban community (at least the ones not banned from the conferences).

Apart from Troy and Frank's presentations and the dinner conversation, I got at least one more reaffirmation from another attendee of the conference and the retreat. We happened to run into each other at the San Diego airport and did a quick sharing of notes. He was frustrated with the theoretical nature of the discussions as well. In his words (and gestures) - "The discussions were up here (raises his hand above his head) and the reality is down here (lowers hand to his knee)".

There is a lot of good work left to be done on the practical reality of software delivery before we go so far into the land of theoretical mathematical boondoggle. Hopefully future gatherings will be more about actually learning which applied concepts work and figuring out which ones do not. It would be great to share experiences and learn from others about what is working and what is not, as opposed to math devoid of practical applications or examples (hence boondoggle). Also, boondoggle has once again become one of my favourite words.

Friday, April 22, 2016

Probability, Predictions and Peyton

Teams, managers, and product owners are often asked to predict the future. What is more surprising is that we actually attempt to do it. We say things like "We will have 40 story points done in 4 weeks" or "we will have 15 issues resolved by the end of this month". We look at our past numbers, figure out some math that we believe to be the most accurate predictor, and respond with the numerical value that we "know" to be an accurate forecast. The single-number forecast is the equivalent of knowing exactly what the Dow Jones index will be at the end of the month. There is a basic flaw in our assumption that we can accurately figure out the single value that represents the future in our context.

Our minds, after years of training and education, have become deterministic thinkers. Most developers and engineers have the basis of their education in mathematics. The code we write and the tests we perform on a daily basis are in essence binary: they either work or they do not. It is when we take a broader look at the world around us that we realize everything is probabilistic. Numerous questions need to be answered with degrees of certainty. That a coin will come up heads can be said with 50% certainty. A six-sided die rolling a 3 is a 16.66% certainty. The only thing we control is that it will roll something between 1 and 6; beyond that it is a game of probability. Let us say that if you hit all the green lights and there is no traffic, it takes you 15 minutes to get to work. Let us also say that, having timed you over the past 20 days, the average is 20 minutes. How long will it take you to get to work tomorrow? What if we rephrase that question: with 90% confidence, how long will it take you to get to work tomorrow? Adding the probability not only makes the question easier to answer (more on that later), but also acknowledges the reality that the answer cannot be deterministic.
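The commute question can be answered straight from the 20 days of data, no distribution fitting required. The sample below is invented for illustration; the idea is simply to pick the smallest past observation that covers the requested fraction of days.

```python
import math

def answer_at_confidence(samples, confidence):
    """Answer "how long will it take?" at a confidence level by picking
    the smallest past observation that covers that fraction of days."""
    ordered = sorted(samples)
    k = math.ceil(confidence * len(ordered)) - 1  # index of the covering observation
    return ordered[k]

# Hypothetical 20 days of commute times in minutes (average about 20).
days = [15, 17, 18, 18, 19, 19, 20, 20, 20, 21,
        21, 21, 22, 22, 23, 23, 24, 25, 27, 35]
answer_at_confidence(days, 0.50)  # a coin-flip answer: 21 minutes
answer_at_confidence(days, 0.90)  # 18 of 20 days were at or under 25 minutes
```

Notice how the 90% answer is higher than the average: the extra minutes are the price of the extra certainty.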



One of my favourite examples of the need for probabilistic predictions is the following question - how many yards is Peyton Manning going to throw for in the next game? The question would make more sense if he were not retired, but Manning gives us a great dataset to work with. If Manning's coach asked him for an exact number of yards he would throw for, he would probably have a hard time coming up with a number. What if the question were phrased as - at least how many yards are you going to throw for with an 80% confidence? Or, what is the least number of yards you can throw for with a 50% confidence? Those might be easier questions to answer if Manning knows his past performances and has a decent idea of the next opponent he is going to face. In fact, he might be able to answer without knowing his opponent at all, just from his past performances and his team, with a different number representing his overall degree of confidence.

What is interesting about the example is that Peyton Manning is (was) one of the most consistent and reliable quarterbacks in the business. He might even, at the time, have been playing in one of the most efficient systems in the business, with all the right strategies and formations to ensure success. Even then, he cannot give you the single-number answer. He could give you a range answer, for example, more than 200 yards, but if you dig in further and ask his level of confidence in it, it would probably be something like 80%. There are too many variables in the game to allow him to answer the question with a single number, or even with a range at 100% confidence. The variables - the defense he is facing, the injuries to his own team, the current form of players, the weather, and others - all affect the outcome. A few of these are determinable and fixed before the game starts, but many change through the course of the game. The same is true for any process that we run. Development processes, whether waterfall, scrum, kanban, or any other type, have too many variables to allow a deterministic prediction. Just as with most things in the world around us, randomness is inherent in our processes. Burndown charts, throughput numbers, velocity etc. are representations of averages that we use to predict the future, and at best they can be used at a 50% confidence.


The problem with predictions based on averages is exactly that: they are likely to succeed only about half the time. Once we accept that every prediction requires a probability component representing our confidence level, we can look at better ways to answer questions about what the future holds. We can use our past patterns to determine how likely we are to accomplish a certain task. Based on stats (http://espn.go.com/nfl/player/gamelog/_/id/1428/peyton-manning) from the last 3 years (2013, 2014, 2015), without knowing the opponent, Peyton Manning should have a 95% confidence of throwing for 150 yards or more. In the 42 regular season games he played over that period, he missed that mark only twice. The likelihood of him missing that mark again is about 5%.
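Going the other direction, starting from a threshold and reading off the confidence, works the same way. The game log below is a stand-in with the same shape as the real data (42 games, only 2 under 150 passing yards), not Manning's actual yardage.

```python
def confidence_of_at_least(samples, threshold):
    """Empirical confidence: the fraction of past games at or above threshold."""
    return sum(1 for s in samples if s >= threshold) / len(samples)

# Stand-in game log: 40 games at 150 yards or more, 2 games under.
yards = [150 + 10 * i for i in range(40)] + [120, 135]
round(confidence_of_at_least(yards, 150), 3)  # 40/42, about 0.952
```

The answer is a probability, not a promise, which is the whole point.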

Software development processes have as many variables as the ones the quarterback faces. These can be brought under control to a good extent, but the randomness and variability in the process are inherent. Last minute PTO, unknown complexity, issues from the field, undiscovered requirements, people quitting etc. all show up at seemingly random times. We cannot rely on making plans and predicting the future without taking the random nature of our world into account. Our confidence might shift based on which of the factors we have been able to control and how much of the randomness we have been able to minimize, but there will always be variables that stop us from making deterministic predictions.

In order to be successful at making predictions about the future, we have to stop thinking in binary pass/fail terms. Adopting not just probabilistic thinking, but also the practice of making predictions at varying degrees of confidence, is what will let us weigh our confidence against the level of risk we are willing to take. More on forecasting techniques and risk profiles in another post. For now, what we can learn from the stats of one of the most consistent and successful quarterbacks in recent history is that predictions cannot be absolute and have to take into account the probability of success. Which of those probabilities we are comfortable relying on for our plans is a separate question.

Tuesday, January 26, 2016

Saying "No" - Lessons For Organizations, As Learned By A Developer


I recently read a post by Alissa Heywood subtitled "Don’t worry about catering to everyone". It was a quick read, but what hit me the most about it was that Alissa had gone through the lifecycle of maintaining products all by herself on Github - a lifecycle very similar to the one most products and apps built by companies go through. There are striking parallels between what Alissa went through and what startups that end up making it go through. Good intentions, a couple of clients, and the desire to say yes to all requests lead to the same feeling of "shadows looming over" in corporations that Alissa felt.

Let us draw the parallels. A startup development team has a product it believes in. The owners hit the road, sell hard, and find a couple of clients that decide to take a chance on the upstarts. There is elation: we have people using our software; it will only get better from here. The development team loves that they were able to work together to create something new and shiny that is being used, and they keep developing new features with enthusiasm. The psyche of the company begins to change, though. We have to keep the clients; we "MUST help" our clients. We are not able to say "No" to the customers for new feature requests, and especially for bug fix requests.

Initially, the team is able to keep up, putting in extra hours to fix the issues while continuing to work on new features. Things change further though. Something similar to what Alissa recorded happens - "...I had numerous problems adding up. I then started just closing issues, telling users to fix it themselves and leaving the issue to rot. This worked until I started gaining issues. The guilt sunk in as the issues unsolved grew into the double digits. And I just left them there to rot.". Replace "I" with "we" and you will see the familiar story of a development team that is having a hard time saying no.

Morale starts to flag. Developers start complaining that they are doing maintenance work instead of actually solving new business problems. The problem feeds itself, as the multiple patches put in often result in even more issues cropping up. People start to burn out working overtime to keep the product afloat. Being a programmer or a developer loses its meaning over time as the entire team spends most of its time on maintenance tasks and bug fixes. The hallways start having the refrain of "Things are not how they used to be" echo through them.

Finally we get to the point where the company hits reboot. After many days of trying to "face the guilt of people expecting me to dedicate time to their specific issues", the company tries to delete history and move on. It hires some people to maintain the old product if it has the money, or just shuts down support for it. In some cases it even launches massive refactoring projects to make the code base maintainable and more modular. Either way, it tries to give the jaded developers what they have wanted for a while - the ability and option to do development activities other than bug fixes and maintenance work.

Alissa sums up the moral of the story really well and it applies at the macro (development organization) level as well - "You will feel guilty at first, but you should realize that trying to cater to everyone is never optimal". Remember that the real power is in saying "No". It will save your developers a lot of heartache and will maintain the startup buzz and culture that helped build great products in the first place.


Wednesday, December 30, 2015

Starbucks and Flow

Starbucks introduced their now much-used app in December 2014 in Portland. As they rolled it out to the rest of the country, it has seen increasing adoption. In August, Starbucks reported that 20% of their orders were paid via mobile payments, and two thirds of those were placed through Mobile Order and Pay. The app has definitely been a hit, and its adoption has probably brought a lot of customers back to being regulars (including myself).

The mobile order takes the waiting-in-line aspect of the Starbucks store out of the equation, and knowing that my order will take about 10 minutes to be ready helps me time my walk from the office to the local Starbucks as I wrap up minor email-checking type activities. There is the added bonus of every 12 drink orders giving me a free reward drink when using the app. It is a great technology solution to a non-technical problem, and Starbucks is explicitly and implicitly encouraging adoption of the app among its clients. Apart from the reward I just mentioned, Starbucks has also asked the baristas to treat mobile orders as expedites. These orders jump ahead of the drinks that have not yet been started, which means everyone in the store who hasn't placed an order drops a spot back in line.

In the world of flow, expedites are almost always bad. They disrupt the system and cause delays to every other work item in progress. There are multiple repercussions to giving expedite requests priority, the major one being that you eventually get flooded with nothing but expedite requests. My assumption is that this is the exact effect Starbucks was counting on. Whether intentionally (most likely) or not, Starbucks is counting on the ratio of online orders to grow. There is a direct benefit in it for them: it takes the entire ordering process out of the hands of the baristas and outsources it to the customers. The baristas can now spend more time making drinks rather than taking orders.

In essence Starbucks is encouraging the exact opposite of the behaviour we coach teams to follow. We tell teams that expedites lead to unpredictable systems. Starbucks is using expedites to add capacity to the system by outsourcing the ordering process, and this seems to be leading to a more stable system. Let's run a quick simulation on a few scenarios. The following assumptions hold for all simulations -

  • There are about 40 orders placed per hour.
  • It takes 30 seconds to take an order and process payment.
  • It takes 2 minutes to prepare the drinks for an order.
  • The simulation is run for 60 minutes.
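A toy version of these scenarios can be sketched in a few lines of Python. Everything here is an assumption for illustration: uniformly random arrivals, a single FIFO line, and mobile orders simply skipping the cashier rather than jumping the barista queue. The exact counts will not match the scenario results that follow, since those came from a different simulation.

```python
import random

def simulate(minutes=60, orders=40, mobile_share=0.0, cashiers=1,
             baristas=1, order_secs=30, drink_secs=120, seed=42):
    """Toy model: walk-ins queue for a cashier, then every paid order
    queues FIFO for a barista. Mobile orders skip the cashier entirely."""
    rng = random.Random(seed)
    horizon = minutes * 60
    arrivals = sorted(rng.uniform(0, horizon) for _ in range(orders))
    cashier_free = [0.0] * cashiers
    barista_free = [0.0] * baristas
    completed, cycle = 0, 0.0
    for t in arrivals:
        if rng.random() < mobile_share:
            paid = t                              # ordered from the phone
        else:
            i = cashier_free.index(min(cashier_free))
            paid = max(t, cashier_free[i]) + order_secs
            cashier_free[i] = paid
        j = barista_free.index(min(barista_free))
        done = max(paid, barista_free[j]) + drink_secs
        barista_free[j] = done
        if done <= horizon:                       # served within the hour
            completed += 1
            cycle += done - t
    return completed, (cycle / completed / 60 if completed else 0.0)
```

simulate() corresponds to scenario 1; simulate(mobile_share=0.15) and simulate(baristas=2) approximate scenarios 2 and 3.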

Scenario 1 - No Mobile Orders, 1 employee taking orders, 1 employee making drinks.
This is the base simulation in a world where there are no mobile orders. All payments and orders are done in person. The simulation saw 42 customers coming in and 15 of those orders successfully getting processed, which means that, of all the customers who came in during the hour, fewer than half were served by the end of the 60 minutes. Those 15 customers spent an average of 1343 seconds, or about 22 minutes, getting in and out of Starbucks. Based on our assumptions, that is 18 to 19 minutes more than the active ordering and drink making time.

Scenario 2 - 15% Mobile Orders, 1 employee taking orders, 1 employee making drinks.
This is similar to the numbers reported by Starbucks in terms of the percentage of orders coming from mobile devices. With a similar setup for arrival rates, 26 customers had their drinks at the end of the hour. These customers spent an average of 490 seconds, or close to 8 minutes, getting in and out of Starbucks, which is about 5 and a half minutes more than the active time.

Scenario 3 - No Mobile Orders, 1 employee taking orders, 2 employees making drinks.
What if we added capacity to the system in the form of a second employee making drinks? With a similar setup for arrival rates, 34 customers had their drinks at the end of the hour. These customers spent an average of 326 seconds, or close to 5 and a half minutes, getting in and out of Starbucks, which is about 3 minutes more than the active time.



                             Scenario 1    Scenario 2    Scenario 3
Total Employees                       2             2             3
Orders Completed                     15            26            34
Average Order Cycle Time        22 mins        8 mins      5.5 mins
Average Wait Time             18.5 mins      5.5 mins        3 mins

The advantages of scenario 2 are numerous. An extension of the scenario, where 80% of the orders are mobile orders and the employee taking orders spends more time making drinks than taking them, actually comes out similar to the scenario 3 results. In other words, by getting a good majority of their users onto the mobile app, Starbucks can have the employees spend most of their time making drinks rather than taking orders, and double the throughput (number of orders satisfied) per hour. The simulation of this scenario showed a 4 minute net "cycle time" as well, most of which would probably be spent by the user walking to the store rather than queuing up in it.

These results are based on simulations, but I can corroborate them with personal experience as well. I knew that the expedited nature (jumping the in-store line) of the app was working in my favour when I used it. I recently switched my phone and have not used the app for the past week, and I have found that the in-store line is taking less time than I remember as well. There could be multiple factors behind this, but I am sure the adoption of the mobile app is a major contributor.

I am not sure if the folks at Starbucks ran similar simulations before rolling out the app (I have a feeling they did), but they are definitely altering the flow of customer orders. The Starbucks app is not just an easy way to pay but a shift in user behaviour. The shift, on the surface, seems to violate some principles that we espouse while teaching flow, especially around expedites. The fact though is, these are not expedites, but a reallocation of the work from the hands of the employees into the hands of the user.


Thursday, August 6, 2015

The Grand Old Men Of Agile (...or what not to do when coaching)

It was a great pleasure to have attended Agile 2015. The sessions were mostly good, but my favourite part of the conference was the "Open Jam" space: an open area where folks could propose topics and a small group of 4 to 6 people would discuss the subject around a small round table. The Open Jams were more interactive than the sessions and were a great way to get to know folks and talk about real problems facing teams. I actually ended up attending more Open Jams than sessions.

There was one open jam on Wednesday that stood out for me, but not in a good way. The session was proposed by a practitioner who was looking for games or a new way of doing retrospectives that she could use to engage the one or two developers who think of retrospectives as a waste of time. As we learned later in the conversation, she was a PA on the team, and a couple of developers on her team would rather be at their desks coding than in a retrospective.

The collection at the table was illustrious. There were two experienced developers who just happened to be there looking for a space to pair program; they left pretty soon. The other two folks at the table were stalwarts of the agile development community. One of them is an original signatory of the Agile Manifesto, and the other's name shows up on the signatories page online (http://www.agilemanifesto.org/authors.html). In hindsight, I should have been more excited to see the two agile luminaries in action, but I was brought there by the topic of the open jam, and that was the focus for the moment.

The session was a little off right from the beginning. The PA looking for coaching had barely finished framing her problem before she was hit with a series of questions, none directly about the problem she was describing. "How involved is your product owner?", "What is the size of your team?", "How often do you release software?" were examples of the questions asked. While these might have indirect bearing on the issue at hand, the connection between the issue and the questions was never made clear to the practitioner being asked. Worse, each of the answers she gave was treated as a problem in itself. I took direct issue with the experts pointing out that the team size of 20 people was an issue. We will come back to that in a bit.

Taking issue with the declaration that team size was the biggest problem gave me the opportunity to interject and spend two minutes talking to her about the actual problem she was bringing up. We (she and I) figured out that the developers' motivations, for example shipping code, needed to be talked about at the retrospective. If the retrospective was centered around the things in the way of shipping software and getting work done, the developers could be a lot more engaged and the retrospectives would be very beneficial for the team. At that point one of the experts agreed and started talking along the same lines. It was good to see some agreement around the table, and it seemed we were making some progress.

Somehow the conversation came back to team size. I had mentioned that I have a 34 person team that delivers software on time and successfully exhibits the right team behaviours. This seemed highly extraordinary to the experts. They were a little taken aback that their 7 +/- 2 team-size theory was being challenged. The lady asking the question had to leave to attend a session, and I hope she got enough answers to make headway with the issue she is facing. Meanwhile, the rest of the table continued the team size discussion. I was asked how big a team should be if I were asked to build one. Apparently my response of "it depends on the project or product to be built" was a "bullshit answer", according to the expert who signed the agile manifesto. I wonder if the same expert would choose a tech stack without knowing what product to build (I did not have the quickness of mind to ask this on the spot). I did explain to him that it was an inadequate question, as the composition of a team without a purpose is irrelevant; we need to define what the team is doing before deciding its composition.

There was an interesting twist, though. After I mentioned that at my company the average team size is 15 to 20 members and we have a thriving, successful agile culture, they noticed that I worked at Ultimate Software, and their demeanour seemed to change. They had both consulted at Ultimate about eight years earlier and heaped praise on the company's culture. They were gracious in their praise and conceded that those team sizes could work in that culture. The name of my company's CTO was dropped in the process as well. They remembered how we used to hand out "Purple Cow" toys, which represented standing out in a crowd and being remarkable. At the end of the conversation, I took my leave and thanked the folks for the fun conversation.

That was the story (at least from my perspective) of the session. I had some obvious issues with it. The biggest was the body language and demeanor of the stalwarts towards the person looking for advice. One of them leaned back in his chair in an "I know better than you" posture and spoke in what sounded to me like a condescending voice while asking follow-up questions, at times even asking them before the person had answered the initial question. He also took obvious pleasure in landing zingers like "so you don't have a product owner" when told that the product owner was not always present for the team. The other stalwart, the more famous one from the agile manifesto, had the same condescending demeanor and was playing a game on his tablet throughout the conversation. He did not even have the decency to look up from the game while asking or answering questions from the folks around the table. It was shocking, to say the least. He showed far more interest in the game than in anything else for the duration of the conversation.

It was just one instance, but if this is how these "Grand Old Men" of agile treat people and talk to individuals, they should stay miles away from coaching people at close quarters. They have done great things in the past and have written books and papers that I and many others have read. They have made practices that changed the industry commonplace. They are great minds who have shaped software development to a great extent, but if they cannot treat the people who come to them with decency, they should probably stop coaching and consulting with people and teams.

I had a great time at Agile 2015, met some great people, and acquired some great ideas. The sessions, both scheduled talks and most open jams, were great. There were some very empathetic coaches and leaders who presented great new ways of helping, inspiring, and bringing teams together. It was one bad experience with a couple of grand old men of agile that was the "Purple Cow", but not in a good way. I hope I just caught them on a bad day at a bad time. If that was not the case, though, they should probably stay away from "trying" to provide help and leave it to the empathetic, bright coaches who are passionate about helping teams and people achieve the best results without coming across as condescending or disinterested. I have to say I met quite a few of those at the conference this year.

Sunday, June 28, 2015

What to call the Scrum Master of a Kanban Team? (Guest Post by Mike Longin)

Prateek and I recently presented an experience report at the Lean Kanban North American conference. As we went through the session, it became clear that one of our biggest challenges was helping our audience understand our deep-seated use of homegrown terminology, specifically surrounding job titles. For example, the title we use to refer to our Kanban "Scrum Masters" at Ultimate Software is "Iteration Manager," or IM.

To be honest, this challenge isn’t new. Prateek and I often find ourselves using a few terms interchangeably – Iteration Manager (IM), Lead Process Engineer (LPE), and in some cases, Scrum Master. We’ve used these three titles to describe one position that has not changed that much since Ultimate Software’s original Scrum implementation in 2005.

In Essential Scrum, Ken Rubin defines the scrum master position like this:
“The Scrum Master helps everyone involved understand and embrace the Scrum values, principles, and practices. She acts as a coach, providing process leadership and helping the Scrum team and the rest of the organization develop their own high-performance, organization-specific Scrum approach.”

A key takeaway here is that a Scrum Master is a leader, not a manager. It is their job to lead the team through the trials and tribulations of product development. To that end, we empowered our Scrum Masters by making them responsible for the product the team is producing. If the need arose, the Scrum Master provided course corrections to keep not only the process, but also the product, on point.

As we transitioned from Scrum to Kanban in 2008, we recognized a need for a similar position: a leader who would empower the team to own the process and product they'd created. Like a Scrum Master, this leader would not have people management responsibilities; however, they would be responsible for their team's products and processes. A new title was created: the Lead Process Engineer (LPE). The title emphasized that our LPEs would make process management a major part of their day-to-day responsibilities. The downside was that it underemphasized their responsibility for the team and its people.

In 2012 we made a process reversal, and roughly 25% of our teams returned to a highly customized version of Scrum. But instead of calling team leaders Scrum Masters, we went with the term Iteration Manager (IM). These teams worked in parallel with our Kanban Lead Process Engineers, but we eventually adopted the IM title to cover all team leads, regardless of the process being used. The name itself was a bit of a misnomer, as the job still derived its responsibilities from leadership, not people management, so the term "manager" remains confusing to those outside the company. And once again, the title highlighted the iteration (process) aspect while downplaying the people and the product.

So what is a better name? After a bit of soul searching, I realized that whenever I spoke about my position to someone outside of the organization, I always used the term Team Lead. The more I thought about this, the more I realized that this term is the best fit for the position. While an agile team is self-empowered, I believe there is always a single person who should hold final responsibility. After all, when everyone is responsible, no one is responsible. That person is not a “master” of the team, but is instead the “lead.”

The Team Lead interfaces with outside parties and helps the team remove impediments. They may or may not lead internal team meetings like standups and retrospectives, but they are the person who ensures the team is making those ceremonies a part of their process. The lead is responsible for the team’s process, people, and product. And while they are not necessarily the team’s manager, they are empowered to help individual team members, and the team as a whole, succeed.

Personally, I also like the title because I believe it speaks more to the position itself than any of the previous names. Where Iteration Manager, Lead Process Engineer, and Scrum Master all highlight the process, Team Lead highlights what I believe is the most important aspect of the position: the team. An interesting question is whether Product Lead would be an even better title, since it highlights the goal of any team, which is to ship a product.

"Lead" also speaks better to the position than "Manager" or "Master", both of which imply the team works for the position rather than the position being one of responsibility. Team Lead highlights what makes this position what it really is. You are neither a manager nor management. Instead, you are the team's leader. You are responsible for leading them to success, which is just not conveyed by Scrum Master, Iteration Manager, or Lead Process Engineer.

Finally, for what it’s worth, I think the title also speaks more to the outside world. When we interface with customers, titles like Iteration Manager, Scrum Master, and Lead Process Engineer do not convey our personal responsibility to that customer. Team Lead, however, speaks to the responsibility we have to that specific customer while also helping the customer recognize who they are speaking to.

So what’s the next step? Obviously new business cards are in order (it may be time to buy stock in VistaPrint to keep up with the demand). Second is to get further buy-in from the Kanban community and Agile community as a whole. While it seems like a small thing, having consistent titles is an important part of helping a methodology mature. It also makes it easier for successful team leads in Kanban implementations to find work outside their company. Take a look on Monster and you’ll notice that there are 1000+ job openings for Scrum Masters. But to be honest, I have no idea what I would even search for to find a position at a company practicing Kanban.

So all of you Team Leads out there, what do you think? Is Team Lead the title you use (either publicly or internally), or do you have something better? Let me know in the comments below, or you can reach me on Twitter at @mlongin.