We track key performance indicators (KPIs) like believability, uniqueness, value, and fit to see how they all compare. For example, if a new product idea ranks on par with or below a not-so-loved feature on value or need, that's a good signal to rethink the proposal.
The same approach works for startups: evaluating your new idea against successful incumbents or well-adopted adjacent products will help you understand how it measures up.
Define indicators of volume & impact
Without benchmarks, it's easy to misinterpret usage at a small scale as being enough for impact. Heck, it's common for folks to interpret any usage as success, even when that behavior won't translate to meaningful business impact in practice.
To avoid that, we define success metrics for early-stage prototypes by setting benchmarks for indicators of volume and impact: what actions, and at what volume, do we need to see with hundreds of users that will translate into the volume and actions needed at a large scale, with hundreds of thousands of users, to have a meaningful impact on the business.
For example, when developing our comments product, Product Manager Marley Spector closely followed a 50% "closed loop" metric to indicate the viability of the product. A "closed loop" is a comment that was not only posted but also read, and it's a good predictor of engagement and volume.
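To make the benchmark concrete, here's a minimal sketch of how a closed-loop rate could be computed and checked against a threshold. The data shape and function name are hypothetical, not from the actual product:

```python
# Hypothetical sketch: each comment event records whether it was
# posted and whether it was subsequently read by someone.

def closed_loop_rate(comments):
    """Fraction of posted comments that were also read (a 'closed loop')."""
    posted = [c for c in comments if c["posted"]]
    if not posted:
        return 0.0
    closed = sum(1 for c in posted if c["read"])
    return closed / len(posted)

# Toy data for illustration only.
events = [
    {"posted": True, "read": True},
    {"posted": True, "read": False},
    {"posted": True, "read": True},
    {"posted": True, "read": True},
]

rate = closed_loop_rate(events)
BENCHMARK = 0.50  # the 50% viability threshold described above
print(f"{rate:.0%} closed loop — viable: {rate >= BENCHMARK}")
```

The point isn't the code itself but the discipline: pick the rate you need before launch, then compare what you observe against it rather than celebrating raw usage.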
Your data is inaccurate
Getting reliable data is hard (on my team we call it "maintaining statistical rigor"). It takes planning (deciding up front what you're trying to learn and how to measure it with confidence), time (the actual effort involved in running a study, experiment, or survey), and resources (the people and tools to make it happen). So it's understandable that it can be tempting to ease up on your standards so you can move faster and leaner.
The problem is that with crappy data you’re not going to have good results—I’ve never seen it go any other way. But the good news is that being lean and fast isn’t actually at odds with statistical rigor.
Start with low confidence data and work towards high confidence
It’s okay to start with lean, low confidence methods so long as you work your way toward high confidence. For example, competitive analysis will help you understand market permission but it won’t help you understand the potential impact of your solution. At the same time, it’s a big investment to build prototypes and evaluate business impact, and it doesn’t make sense to do that unless you have some evidence that your idea has potential.