So, it’s the internet and I really can’t see you, but I bet that most product designers would have their hand in the air. The truth is that projecting and delivering the intended business impact is really hard, and rarer than Amazon’s Best Sellers in Technology would have you believe. We don’t talk about this enough: failure is more common than success. And success is even more difficult to achieve when developing new products.

Whether your team is looking to deliver business impact by optimizing an existing product or developing a new product, the goals will be similar: Can we deliver enough value to customers so that our business is positively impacted? But new products have the additional challenge of proving the likelihood of adoption: Will customers change behavior and adopt a new tool?

This is where we often go wrong when developing new products. It’s common to inflate adoption goals (millions of users and dollars!) and then miss them because of unrealistic benchmarks and/or poor hypothesis evaluation during product development. So let’s talk about how to avoid that and set your team up for impact.

There are no benchmarks for success

One of the biggest challenges in building new products is that there’s very little data to work with in the beginning—there’s no existing product to benchmark against. And the problem with not having benchmarks is that it’s easy to blindly prototype, survey, or experiment with users without having defined what a good outcome is. Worse, it leaves the team guessing about whether you’ve succeeded or failed, which is less likely to lead to the impact you’re hoping for.

Benchmark against successful (and unsuccessful) products

At Dropbox, we evaluate new product ideas against successful features, like Smart Sync, and features that didn’t resonate with customers. It’s not a perfect comparison, but these products provide objective baselines of what customers love and use versus what they don’t.

[Image: Benchmark against successful and unsuccessful products to understand how a new idea sizes up.]

We track key performance indicators (KPIs) like believability, uniqueness, value, and fit to see how they compare. For example, if a new product idea ranks on par with or below a not-so-loved feature on value or need, that’s a good signal to rethink the proposal.
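
To make the comparison concrete, here’s a minimal sketch of how scoring a new idea against benchmarks might look. The feature names, KPI scores, and 1–5 scale are all hypothetical; this isn’t Dropbox’s actual scoring system.

```python
# Hypothetical survey scores on a 1-5 scale, for illustration only.
# "loved_feature" and "flop_feature" stand in for a successful and an
# unsuccessful benchmark; all numbers are made up.
BENCHMARKS = {
    "loved_feature": {"believability": 4.2, "uniqueness": 3.8, "value": 4.5, "fit": 4.1},
    "flop_feature":  {"believability": 3.1, "uniqueness": 2.9, "value": 2.7, "fit": 3.0},
}

new_idea = {"believability": 3.9, "uniqueness": 4.0, "value": 2.6, "fit": 3.5}

def flag_weak_kpis(idea, benchmarks):
    """Flag KPIs where the idea scores at or below the unsuccessful benchmark."""
    flop = benchmarks["flop_feature"]
    return [kpi for kpi, score in idea.items() if score <= flop[kpi]]

weak = flag_weak_kpis(new_idea, BENCHMARKS)
if weak:
    print(f"Rethink the proposal: scoring at or below the flop on {weak}")
# -> Rethink the proposal: scoring at or below the flop on ['value']
```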

And something similar would work for startups as well. Evaluating your new idea against successful incumbents or well-adopted adjacent products will help you understand how it sizes up.

Define indicators of volume & impact

Also, without benchmarks, it’s easy to misinterpret usage at a small scale as being enough for impact. Heck, it’s common for folks to interpret any usage as success when in practice that behavior won’t translate to meaningful business impact.

To avoid that, we define success metrics for early-stage prototypes by setting benchmarks for indicators of volume and impact: which actions, and at what volume, do we need to see from hundreds of users to be confident they will translate into the actions and volume needed at a large scale, with hundreds of thousands of users, to meaningfully impact the business?

For example, when developing our comments product, Product Manager Marley Spector closely followed a 50% “closed loop” metric to indicate the viability of the product. A “closed loop” is a comment that was not only posted but also read, and it’s a good predictor of engagement and volume.
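
For illustration, here’s a minimal sketch of how a closed-loop rate might be computed for a small prototype cohort. The comment events are made up; only the 50% target comes from the example above.

```python
# Sketch of tracking a "closed loop" rate in a small prototype cohort.
# The event data below is hypothetical.
posted_comment_ids = {"c1", "c2", "c3", "c4", "c5", "c6"}
read_comment_ids = {"c1", "c2", "c4", "c6"}  # comments that were also read

closed = posted_comment_ids & read_comment_ids  # posted AND read
closed_loop_rate = len(closed) / len(posted_comment_ids)

TARGET = 0.50
print(f"Closed-loop rate: {closed_loop_rate:.0%} (target {TARGET:.0%})")
# -> Closed-loop rate: 67% (target 50%)
if closed_loop_rate < TARGET:
    print("Below target: weak signal that engagement will hold up at scale.")
```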

Your data is inaccurate

Getting reliable data is hard (on my team we call it “maintaining statistical rigor”). It takes planning (deciding what you’re trying to learn and how to measure it with confidence), time (the actual effort of running a study, experiment, or survey), and resources (the people and tools to make it happen). So it’s understandable that it can be tempting to ease up on your standards so you can move faster and leaner.

The problem is that with crappy data you’re not going to have good results—I’ve never seen it go any other way. But the good news is that being lean and fast isn’t actually at odds with statistical rigor.

Start with low confidence data and work towards high confidence

It’s okay to start with lean, low confidence methods so long as you work your way toward high confidence. For example, competitive analysis will help you understand market permission but it won’t help you understand the potential impact of your solution. At the same time, it’s a big investment to build prototypes and evaluate business impact, and it doesn’t make sense to do that unless you have some evidence that your idea has potential.

[Image: Start lean and increase diligence as investment in an idea increases; a handy chart for evaluation.]

When you’re trying to move fast while building confidence in an idea, it makes sense to start with the easy stuff first so you can quickly eliminate areas of lower opportunity. But what’s important is that you’re making informed decisions along the way and that as your investment increases, you're building more confidence with higher quality data.

Be realistic about impact with a projection range

It’s industry practice for the calculation of potential impact to include assumptions based on market and competitor norms (for example, based on research, we believe the addressable market to be $X). But at the end of the day, the ability to deliver on a projection reflects the strength of the evidence that went into it in the first place. If the calculation was mostly based on assumptions, it’s probably a very optimistic projection (millions of users and dollars!) and less likely to hold up than a figure derived from high intent, first-party data.

To improve accuracy and allow for some assumptions, we started using a projection range with a conservative, mid-range, and optimistic estimate.

[Image: A projection range is more realistic about impact. Consider expectations around a range versus a set number.]

The conservative estimate is calculated using only high-certainty data from our target cohort, while the mid-range and optimistic estimates allow for varying degrees of certainty and reasonable assumptions. With a projection range, the team can have a no-frills conversation about the likelihood of impact.
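
As a rough illustration, here’s a minimal sketch of a projection range. Every number (addressable users, reach, conversion, revenue per user) is a hypothetical assumption, not real data; the point is that each scenario makes its assumptions explicit.

```python
# Sketch of a projection range. All inputs are hypothetical: the conservative
# scenario leans on observed conversion from the target cohort only, while the
# mid-range and optimistic scenarios layer in assumptions.
addressable_users = 500_000

scenarios = {
    # (share of the addressable market reached, assumed conversion rate)
    "conservative": (0.20, 0.04),  # proven reach, observed conversion
    "mid-range":    (0.35, 0.06),  # some assumptions about adjacent cohorts
    "optimistic":   (0.50, 0.09),  # market-norm assumptions on top
}

arpu = 60  # assumed annual revenue per converted user

for name, (reach, conversion) in scenarios.items():
    users = addressable_users * reach * conversion
    revenue = users * arpu
    print(f"{name:>12}: {users:>8,.0f} users, ${revenue:>10,.0f}/yr")
# conservative:    4,000 users, $   240,000/yr
#    mid-range:   10,500 users, $   630,000/yr
#   optimistic:   22,500 users, $ 1,350,000/yr
```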

Hypotheses go unchallenged

When the solution is the same at the end of the process as it was in the beginning, it’s generally a sign of failure. It’s likely that the core hypotheses driving the solution weren’t appropriately evaluated and that the team didn’t learn from and calibrate towards customers.

List hypotheses and evaluate them at every step

What are the core business and user goals you think your project will achieve? Increase upgrades by x%? Save customers time on repetitive tasks? Write them down. Find ways to answer these questions in every survey, prototype, experiment, and research session. Log the data, track how you’re doing, and, most importantly, make changes based on what you’re learning.
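
If it helps, here’s a minimal sketch of what a hypothesis log could look like. The hypothesis, studies, and outcomes below are hypothetical; the point is simply to record what each study taught you against each goal.

```python
# Sketch of a hypothesis log; entries below are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    statement: str                      # e.g. "Feature X increases upgrades by 5%"
    evidence: list = field(default_factory=list)

    def log(self, study, outcome, supports):
        """Record what each survey, prototype, or experiment taught us."""
        self.evidence.append({"study": study, "outcome": outcome, "supports": supports})

h = Hypothesis("Commenting saves customers time on repetitive review tasks")
h.log("Concept survey", "7/10 participants described the pain point unprompted", True)
h.log("Prototype test", "Median review time unchanged vs. control", False)

for e in h.evidence:
    status = "supports" if e["supports"] else "challenges"
    print(f"{e['study']}: {status} the hypothesis ({e['outcome']})")
```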

It’s a simple thing that teams often overlook. The concise business and user goals of a product are its North Star. Continuously keeping score on those goals through ongoing testing (even after you ship) will help you home in on what the product is and what it’s not. And that strong POV will be critical when it comes to feature prioritization, product marketing, and, ultimately, delivering value to your target customer.

Evaluate, don’t validate

We have the bad habit of saying we’re going to “validate” things. “We’re going to validate if customers do x” or “let’s validate if that’s true.” The challenge in being able to appropriately gauge our hypotheses and assumptions is that we need to be okay with being wrong (yeah, that whole celebrate failure thing). Constantly saying “validate” sets an expectation for validation or accepting a hypothesis as true and, worse, implies that “invalidating” is the opposite of making progress.

So by simply saying “evaluate” instead of “validate” you can help your team adopt the learning mindset necessary for developing products in a way that leads to impact. Plus, “evaluate” is the correct term for studies and experiments. Science!

How to use this stuff

So I know you may be thinking, Designer, “This sounds like PM material!” But if designers want to have influence over the direction of a product, then we also need to share accountability for its success.

Direction setting starts with establishing good, thorough, accurate goals and ends with delivering on those goals and that vision. Following the tips above will help you do what is deceptively simple: deliver business impact.
