Patzer gives a good talk, not that the story needs any added flash. I was especially drawn to how he defied some of today’s prevailing wisdom around putting out a minimum viable product (MVP) and iterating like mad, instead spending months testing and refining the core concepts before writing any code (jump to 8:00 in the video). He claims that when he was validating the idea, he explained the concept of Mint to eighty people and only had one person say that they would use it. And yet three years later the idea that became Mint sold to Intuit for $170 million. It goes to show the expertise and thoughtfulness it takes to parse huge amounts of feedback and turn it into something valuable. How many others would have heard that many “No” responses and kept pushing ahead?
The story also kind of makes you wonder what that one person said to Patzer, doesn’t it? My guess is that he learned more from all the negative feedback, but I know from experience how tempting it is to over-weight outliers that finally give you the answer you’ve been waiting to hear all along. It’s like striking oil when someone finally agrees that your idea is a good one, even if they are the only person who thinks so. Thoughtfully dissecting market and customer feedback so that you can draw implications from it is a topic I’ll return to in its own post, but it also struck me that Patzer had an advantage doing this that those of us working for larger organizations often don’t: he was on his own schedule. Patzer was free to dig into the outliers in his market research without a management team pressuring him to get started. It’s natural for that pressure to exist within an organization, but if you’re not careful it can lead you to hasty conclusions, especially when it comes to evaluating the meaning of outliers.
Outliers -- observations that are few in number but disproportionately affect your analysis -- can create headaches when we face time pressure to make decisions. Maybe it’s the lone beta tester who gives you brutal feedback about a product feature that you no longer have time to change. Or maybe it’s the tiny new cluster of data points that makes your pretty financial model all of a sudden look hideous. Whatever form they take, outlier observations force us to make critical decisions about how much weight to give them. Timing complicates matters further. Sometimes new information contradicts assumptions underlying work that you’ve already done (otherwise known as “oh shit” moments), and other times the outliers keep you from making a decision at all. Ignoring the outliers can be tempting; life is simpler without them. When you’re under pressure to wrap up the research phase and show progress on a project (and who isn’t?), you may even be encouraged to ignore inconvenient observations. “The business case was approved, so get moving.” This is where things get risky.
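To make “disproportionately affect your analysis” concrete, here’s a minimal sketch with made-up numbers (my own illustration, not data from any example in this post) of how a single extreme value can drag a summary statistic around:

```python
# Illustration (hypothetical numbers): one extreme value distorts the
# mean far more than the median.
import statistics

responses = [10, 11, 9, 10, 12]       # e.g., scores from five testers
with_outlier = responses + [100]      # one extreme observation arrives

print(statistics.mean(responses))       # 10.4
print(statistics.median(responses))     # 10
print(statistics.mean(with_outlier))    # ~25.3 -- dragged by one point
print(statistics.median(with_outlier))  # 10.5 -- barely moves
```

One observation out of six moves the mean by roughly 15 points while the median barely budges, which is exactly why a single data point can make an otherwise tidy model “look hideous.”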
Making business decisions based solely on outliers is dangerous, but so too is ignoring them. Many of the outliers you come across doing market research are red herrings, but some are buried treasure. When you’re under time pressure, it’s easy to make bad decisions in the face of insufficient data. There’s no formula for how to approach this – making these decisions is part science, part art. But here is how I recommend approaching two common scenarios in which you might find yourself struggling with how to deal with outliers early in your career.
When an Outlier Jeopardizes Decisions You’ve Already Made
No matter how much I try not to, I can’t help feeling disappointed at times when an idea of mine gets eviscerated by customer feedback. And as sad as it can be to wave good-bye to an idea, the break-up is messier the longer an idea is strung along before running into resistance. A while back, I had an idea for a product feature that I thought would be a home run. I sketched out the idea and started running it by prospective buyers to gauge their interest. The feedback was resoundingly positive, and I got management sign-off to move forward with design before the research phase had even concluded (uh-oh). We had internal resources who were about to free up, so it felt like a win-win all around. I was off to the races with the team, when a couple of months later I was pitching the idea to another prospect and she stopped me cold. In great detail, this woman explained why the feature we were developing was totally irrelevant for her industry. No one else had mentioned the issues she raised, but I could see that her points applied to many other industries as well. It was a great singular piece of feedback; unfortunately, we were already out over our skis with this idea, and now I faced a dilemma.
While the idea wasn’t completely toast, it needed significant changes to be workable, which dramatically altered the ROI. I felt stupid for letting it play out that way, but the fact is that it’s not hard to find yourself in similar circumstances. The desire to both move fast and keep people busy can make it easy for companies to go too far with a half-baked idea, particularly in software, where there is no factory machinery to spin up. A project might have a research or discovery phase scoped into it, but what do you do when you still have more questions than answers at the end of that phase? Few things bug managers as much as idle resources, and that creates pressure to start work while key questions remain unanswered. It’s a kind of perversion of lean startup methodology to move forward with an investment once the lack of clarity passes a certain threshold, but it happens.
What can you do if you find yourself in a situation where some new piece of insight calls key assumptions of your work into question? The best first move is to buy yourself time however you can and validate the new information as quickly as possible. Platforms like Zintro can be great for spinning up fast research conversations on specific topics, but anything that allows you to verify the feedback is better than nothing when you’re under the gun to produce despite significant uncertainty. All bets are off until you can figure out:
- How credible is the source of the feedback?
- Is the new information conventional wisdom to push back against or detailed insider knowledge that invalidates your assumptions?
- How much of the value of your idea/strategy/product is now at risk based on the new information?
- Can you address the potential shortcomings iteratively, or is there no recovering if the first iteration fails?
- Can potential problems be mitigated with great execution?
Get on top of it as fast as you can, however you can, and make your recommendations. Anything you can do is better than nothing if you’re being pushed to move forward despite red flags. You might not be able to salvage a home run, but sometimes preventing a small mistake from becoming a larger one is the best move you can make.
When an Outlier Points to a New Opportunity
This is the more fun case, in which you encounter a novel piece of information that contradicts what you thought you knew and you have time to capitalize on it (known as “holy shit” moments). Rather than worry about the downside, “positive outliers” get you thinking about upside. This can be exhilarating, but the outlier risks are still there in the form of confirmation bias.
You need to be on guard as much for how others react to positive outliers as for your own reactions. People can be incredibly generous when they see positive feedback about a pet project of theirs, and it can make them forget all the negatives. Positive feedback and anecdotal evidence get magnified, and before you know it you can find yourself trying to build a business case around something that only works under narrow circumstances. In statistics, an outlier is generally dropped when an association between two variables exists solely because of it. In the same vein, if a business case depends on an outlier case being widely replicable, you should definitely consider hitting pause and doing more validation, no matter how much backing you have from your leadership.
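The statistical point is easy to see with a toy example (my own illustration, invented numbers): a single extreme point can manufacture a strong correlation out of otherwise unrelated data.

```python
# Illustration with made-up numbers: Pearson correlation before and
# after a single extreme point is appended to both series.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 1.0, 3.0, 1.0, 2.0]            # no relationship to x
print(pearson(x, y))                      # ~0.0 -- uncorrelated

# Append one extreme observation to both series
print(pearson(x + [20.0], y + [20.0]))    # ~0.97 -- driven by one point
```

Five unrelated points plus one extreme observation yield a correlation near 0.97; drop that one point and the association vanishes. A business case that behaves this way deserves the same scrutiny.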
This is a fairly common situation, and the risks of wanting to move fast and keep resources productively engaged are still present. Just remember that all of the agility and rapid iteration in the world won’t save a product that solves a problem people don’t care about.
Statistical Significance Collides with Innovation
We face two opposing notions in business that must constantly be balanced. The first is the cult of data-driven decision making which influences so much of managerial thinking. When we encounter unfamiliar observations, we are trained to look at sample size and the reliability of conclusions based on statistical significance to know how seriously to take something. At the other end, the cult of data is perhaps matched only by the cult of innovation. We elevate great innovators to sometimes almost mythical status, and “innovation” has long been a buzzword among management gurus and consultants. In the long run, companies can only survive if they innovate. And among other things, the innovation mantra holds that you can’t focus group or A/B test your way to breakthrough ideas. It seems like you have to definitely follow one, except when you should follow the other.
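The data-driven lens can at least quantify how little a tiny sample pins down. As a sketch (my own illustration, using the standard Wilson score interval applied to the 1-in-80 result from the Mint anecdote above):

```python
# Illustration: a 95% Wilson score confidence interval for
# "1 yes out of 80", as in the Mint anecdote. The underlying
# rate of interested users is barely constrained by that sample.
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 ~ 95%)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = wilson_interval(1, 80)
print(f"{lo:.3f} to {hi:.3f}")   # roughly 0.002 to 0.067
```

In other words, 1 yes out of 80 is consistent with a true interest rate anywhere from about 0.2% to nearly 7% -- which is precisely why neither blind deference to the data nor blind faith in the idea settles the question on its own.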
One thing you can always count on is that information is your friend. If you’re trying to do something novel you’ll probably have to rely on your gut and take more risk than usual, but there’s no excuse for you (and your managers) not to pressure-test ideas. If you’re resourceful enough, you can always find a way to get a little more clarity before you put all your chips in.
After that, best of luck, partner.