Feedback flutter

About 15 years ago, shortly before boarding a trans-Atlantic flight, I was browsing in a bookstore for something to read on the way. My eye caught a cover showing a panda up a ladder, armed with a paintbrush dipped in red paint, correcting the title.

I scanned the title, paused, then read it again. With a grin on my face I turned to the back cover, where I found a joke about the title that explained perfectly what was going on. You see, the panda on the front cover didn’t want to be the panda from the joke. The difference between them? A comma. Well, at least visually. In meaning, that comma turned a peaceful creature into a violent perpetrator, maybe even a murderer. The book was Eats, Shoots & Leaves by Lynne Truss, a lighthearted guide to punctuation. (The current edition seems to have lost the back-cover joke, but you can still find it quite easily on the Net.)

As much as I’d recommend the book to anyone wishing to improve their punctuation in English, punctuation isn’t the point I’d like to make. (Get it?) What actually struck me back then is how sometimes a small detail can make a big difference, for good or bad. It struck me because I was also reading about chaos at the time. In chaos theory, the butterfly effect describes a situation where, within a particular system, one can observe vastly different outcomes due to very small, seemingly unconnected changes. Which brings me to experiments and feedback.

The purpose of an experiment is to generate learning about the nature of a system or environment, based on hypotheses, assumptions, and previous learning. When applied within product development, experiments aim to validate our guesses as to how (or how well) products or services work within our business model. Most of the time there is an underlying, unstated assumption that we’re not dealing with chaotic scenarios. Yet there’s always a possibility that we are. How would we know? We can look at the feedback. Significant variation in results would be an indicator of potentially chaotic behavior, which, in turn, should influence how we proceed with shaping solutions to our customers’ problems.
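To make the butterfly effect concrete, here is a minimal sketch in Python (my own illustration, not from the book) using the logistic map, a textbook example of a chaotic system. Two runs whose starting points differ by one part in a billion end up telling completely different stories.

```python
# Sketch: sensitive dependence on initial conditions (the "butterfly effect")
# using the logistic map x_{n+1} = r * x * (1 - x), a classic chaotic system
# when r is around 3.9.

def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map from the initial value x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)   # baseline run
b = logistic_trajectory(0.200000001)   # differs by roughly one part in 10^9

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (gap {abs(a[n] - b[n]):.6f})")
```

For the first handful of steps the two trajectories look identical; within a few dozen they bear no resemblance to each other. That is the kind of signature to watch for in experimental feedback.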

Consider

Is running your experiment only once enough to draw reliable conclusions? Can you think of small changes in the parameters of an experiment which might produce different outcomes? Would it be worth testing these variations? Could you harness the power of digital simulation to explore faster and cheaper?
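On the last question, one cheap way to probe for chaotic behavior digitally is to rerun a simulated version of the experiment many times with tiny jitters in a parameter and look at the spread of outcomes. The sketch below does this with the logistic map standing in for whatever system your experiment touches; the jitter range and run count are illustrative assumptions, not prescriptions.

```python
# Sketch (illustrative setup): rerun a simulated "experiment" many times with
# tiny variations in one parameter and examine the spread of outcomes.
# The logistic map is a placeholder; swap in a model of your own system.
import random
import statistics

def run_experiment(r, x0=0.2, steps=100):
    """Return the final state after iterating the logistic map."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

random.seed(42)
base_r = 3.9
outcomes = [run_experiment(base_r + random.uniform(-1e-4, 1e-4))
            for _ in range(1000)]

print(f"mean outcome:  {statistics.mean(outcomes):.4f}")
print(f"std deviation: {statistics.stdev(outcomes):.4f}")
print(f"min..max:      {min(outcomes):.4f} .. {max(outcomes):.4f}")
```

A spread that stays tight suggests the system tolerates small perturbations; a wide, erratic spread is a hint that you may be in butterfly territory, and that a single run of the experiment tells you very little.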