You’ve undoubtedly heard of the ‘fail fast’ philosophy as it applies to software or product development, where the phrase is really a synonym for fast learning.
By contrast, systems design looks at it more literally, with a focus on safety and disaster prevention: monitor for problems in a process and stop as soon as you find one. At first glance, these are quite different. Or are they?
One way to think of a business is as a system that takes investment as input and generates benefits as output, with the ever-present risk that the investment won’t pan out. Since product development is a form of investment (and a process!), we’d like to know sooner rather than later whether the benefits are likely to materialize. Controlled experimentation can do just that. By “controlled” we mean experiments with well-functioning, fast feedback loops that either monitor for unusual events in the operation of our products and services, or gather data we can use to decide what to do next. If, for example, a series of experiments shows little or no traction with a solution, the feedback loops have identified a failure to deliver value, and we should at least pause to protect our long-term investment potential.
Unfortunately, the control and protection that experiments can give us don’t come automatically. We must deliberately engineer them into our product development processes, or risk discovering failure too late to do anything about it, or having to “fix things” at a much higher cost.
Does your approach to product development let you learn quickly and protect against poor investment? Can you think of ways to do things differently in your role so that it does?