Friday, May 20, 2016

Mathematical Boondoggle

I spent the last week in San Diego at the Lean Kanban North America conference, followed closely by the Kanban Leadership Retreat. All the big names and thought leaders in the Kanban community were there, or at least all the ones who have not been banned from LeanKanban Inc. events. I had no idea conferences had ban lists. Having found out about this, I firmly believe that no speaker, writer, or thought leader on the conference circuit has done enough innovative work unless they have gotten themselves banned from a conference. I looked forward to a week of exciting and invigorating conversations, along with some quality team-building time with the development managers at Ultimate Software. The trip lived up to the latter for sure; the antics of the Ulti-Managers will have to be a blog post on a different blog. The former, on the other hand, was a bit of a disappointment. There were definitely some bright spots, and some sessions and conversations were enlightening, but a lot of the week was filled with what I can only call Mathematical Boondoggle.


Look At All The Maths I Have

Since most Kanban implementations are metrics driven, the community has a lot of conversations about metrics. In fact, there seems to be a mathematical arms race among the leading voices in the community. Multiple curve-fitting conversations and detailed analyses of Cost of Delay curves are being explored and presented. There is nothing wrong with these concepts, or with theorizing about them, but what is missing is the practicality of their application. Over the past few days I have been scratching my head trying to figure out which dots are not getting connected in my head. Maybe my education in the Kanban practice has been so close to practical application and the use of real data that forcing the data to fit a Weibull or Log-Normal curve in order to do forecasting and sampling makes no sense to me.

Both Troy Magennis and Frank Vega presented forecasting techniques (Monte Carlo) that used real data to generate forecasts and probabilities. That was very heartening and reaffirming to see. They did not try to fit the data to any named statistical curve. They took the data as it was and used the fact that every individual data point is just as likely to occur again to simulate what the results would be. They made no attempt to figure out the tail of the distribution or the shape factor of a Weibull distribution. I know Troy Magennis has in the past espoused (and probably still does espouse) the use of the Weibull distribution, but his session included no mention of it.
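For the curious, the resampling idea described above can be sketched in a few lines of Python. This is my own illustration of the general technique, not Troy's or Frank's actual tooling, and the cycle-time numbers below are made up:

```python
import random

# Hypothetical cycle-time history (days per work item); in practice this
# comes from your board's real, measured data, with no curve fitted to it.
cycle_times = [3, 5, 2, 8, 4, 6, 3, 9, 5, 4, 7, 2]

def forecast_completion(history, items_remaining, trials=10_000, seed=42):
    """Bootstrap Monte Carlo: resample actual cycle times with replacement,
    treating every observed data point as equally likely to occur again."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(history) for _ in range(items_remaining))
        for _ in range(trials)
    )
    # Report percentiles rather than a single-point "estimate".
    return {p: totals[int(trials * p / 100)] for p in (50, 85, 95)}

print(forecast_completion(cycle_times, items_remaining=20))
```

The output is a set of percentile completion times, so the conversation becomes "85% likely to finish within X days" instead of a single number, and no Weibull shape factor ever enters the picture.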

There was a good amount of SAFe bashing (not that I am a fan of SAFe), especially around the way SAFe uses estimated, and hence not real, data in its Weighted Shortest Job First calculation. This came up in the middle of the Cost of Delay discussion. The irony was strong, and it appeared to be lost on most of the folks in the room: they were ridiculing a formula that uses estimates and makes multiple assumptions, in the middle of a discussion espousing a technique (Cost of Delay calculation) that itself starts with multiple estimated values and multiple assumptions about value curves.
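To spell out what was being ridiculed: SAFe's WSJF divides a Cost of Delay score by job size, and every one of the inputs is a relative estimate, not a measurement. A toy sketch, with all the numbers invented for illustration:

```python
# SAFe's Weighted Shortest Job First: Cost of Delay / job size, where
# Cost of Delay is itself the sum of three *estimated* relative scores.
# Every input below is a guess, typically on a modified-Fibonacci scale.
def wsjf(business_value, time_criticality, risk_reduction, job_size):
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

# Two hypothetical features scored by a team:
feature_a = wsjf(business_value=8, time_criticality=5, risk_reduction=3, job_size=5)
feature_b = wsjf(business_value=13, time_criticality=3, risk_reduction=1, job_size=8)
print(feature_a, feature_b)  # a outranks b; nudge one guess and it can flip
```

Four stacked estimates per feature, so the ranking is only as good as the guesses. That critique is fair; the irony is that a hand-drawn Cost of Delay value curve rests on exactly the same kind of guesswork.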

A group of us continued the conversation at the dinner table, and at least that group agreed that Cost of Delay is almost impossible to calculate because of the inherent problems with estimating value and estimating how that value decays over time. The only practical voice in the Cost of Delay conversation came from Klaus Leopold, who said that value conversations should only happen at the feature level, not above or below it. I will have to have an in-depth conversation with Klaus at some point to understand how he calculates value, though. It does not seem like something that can be easily determined, or like something whose determination does not itself add to the cost.

I See Your Theoretical Math And Raise You Practicality

I left the Leadership Retreat disappointed in one major respect. The community that has valued and espoused using real measurements to make decisions, and collecting actual data for analysis, has moved far in the direction of adding layers on top of the data in order to analyze it. The same community is now talking in terms of estimates when we have espoused real data since the beginning. It increasingly seems to be mathematical boondoggle coming from the leaders of the Kanban community (at least the ones not banned from the conferences).

Apart from Troy and Frank's presentations and the dinner conversation, I got at least one more reaffirmation from another attendee of the conference and the retreat. We happened to run into each other at the San Diego airport and quickly compared notes. He was frustrated with the theoretical nature of the discussions as well. In his words (and gestures): "The discussions were up here (raises his hand above his head) and the reality is down here (lowers hand to his knee)."

There is a lot of good work left to be done on the practical reality of software delivery before we go so far into the land of theoretical mathematical boondoggle. Hopefully in the future these gatherings are more about actually learning the applied concepts that work and figuring out which ones do not work. It would be great to be able to share experiences and learn from others about what is working and what is not, as opposed to the math devoid of practical applications or examples(hence boondoggle). Also boondoggle has once again become one of my favourite words.