This article is a supplementary appendix to the post: The Power of Feature Hypotheses
Instead of making up-front decisions based on assumptions and imperfect knowledge, we create hypotheses and run a series of experiments to determine whether each hypothesis is true or false. The results determine what we should stop doing, start doing and continue doing. Given their complexity, uncertainty and rapidly changing environments, most software development projects are “research and development” exercises ideally suited to this hypothesis-based approach.
This provides an effective means to test our assumptions and prove them true or false. A “false” result does not mean that our experiment failed – false results mean that we’ve learnt something new. False results allow us to stop before we’ve spent too much money and/or pivot to an alternative option if needed. The only failed experiment is one where we do not get enough data back to either prove or disprove our hypothesis.
For example, we might think that adding a “Password Reset” feature to our system will save $1 million per month by dropping call centre requests by 90% – but we won’t actually know that this is the result until the functionality is live and users are using (or not using) our new feature. The trick is to find the quickest and cheapest way of determining whether we’ll get the hypothesised benefit or not.
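To make the “Password Reset” example concrete, the comparison of hypothesised versus observed benefit can be sketched in a few lines of Python. All figures, names and the `evaluate_hypothesis` helper below are illustrative assumptions, not from the article – a real experiment would pull baseline and post-launch numbers from call centre logs.

```python
# Hypothetical sketch: did the "Password Reset" feature deliver the
# hypothesised benefit? All numbers are made up for illustration.

def evaluate_hypothesis(baseline_calls, observed_calls, cost_per_call,
                        hypothesised_reduction):
    """Compare the observed drop in call-centre volume to the hypothesis."""
    actual_reduction = (baseline_calls - observed_calls) / baseline_calls
    monthly_saving = (baseline_calls - observed_calls) * cost_per_call
    proven = actual_reduction >= hypothesised_reduction
    return actual_reduction, monthly_saving, proven

# Hypothesis: a 90% drop in password-related calls saves roughly $1M/month.
reduction, saving, proven = evaluate_hypothesis(
    baseline_calls=50_000,      # password-related calls per month before launch
    observed_calls=20_000,      # calls per month after the feature went live
    cost_per_call=22.0,         # assumed fully loaded cost of handling one call
    hypothesised_reduction=0.90,
)
print(f"reduction={reduction:.0%}, saving=${saving:,.0f}, proven={proven}")
# → reduction=60%, saving=$660,000, proven=False
```

Note that a “false” result here is still a win in the article’s terms: we learnt that the feature deflects 60% of calls rather than 90%, and can decide whether to pivot or persevere with real data in hand.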
An Easy Way To Measure Return On Investment (ROI)
Another great benefit of this approach is that it is relatively easy to measure whether we achieve the benefit or not. Return on investment is typically documented up front to secure funding – and usually never looked at again to assess whether we spent our money wisely and achieved the intended benefit. Feature hypotheses provide an easy mechanism for doing so. If we can slice up our features into really small chunks, we can run lots of experiments, have quick learning cycles and can easily assess whether we achieved the intended benefit.
Survival Of The Fittest
Conversely, if a business stakeholder struggles to come up with a valid benefit hypothesis, it will be very difficult for that feature to proceed into build because it is competing against other, stronger features for prioritisation. Benefit hypotheses are easy to scrutinise – making it harder to “game the system”. If I think “Password Reset” will save us $1 million a month, I’d better have the data to back this up (e.g. call centre logs) or my colleagues will tear it apart. In a worst-case scenario where a “pipe dream” feature does get built, the experiment results will show us how far from the mark we were. In this way the system deals with serial offenders.
Want to know more? Read “The Lean Startup” by Eric Ries.