"Worst Case Impact" for Selling Big Bets and Ideas
A Tool for Conservatively Forecasting the Impact of New and Innovative Ideas
Being data-informed is incredibly important when making decisions about your product. The data can help you determine what might happen if you were to change a key flow for your users, or it might help you validate whether your hypothesis has solid ground to stand on.
The problem is that sometimes you don’t have directly comparable data for the opportunities or ideas you feel compelled to pursue. In those scenarios, you can’t rely on a simple impact-vs-effort prioritization to decide for you. Maybe it’s a creative approach to a specific flow with an experience you have never tried before, or it’s integrating a new technology or dynamic. These opportunities have huge potential but are hard to move forward internally due to natural skepticism, a lack of data, and the ever-present fear of the unknown. That’s why I started using a process (framework, if you must) I call “Worst Case Impact.”
I recently mentioned this in a LinkedIn post, and when someone asked me for more information about it, I realized it was just something I had started doing. There are posts around the web that capture some of the steps I use to build a pitch or narrative, but nothing I found went end to end.
Let’s set the stage
With something like this, it helps to connect it to an applicable and real-world scenario or narrative, so here we go:
Imagine you’re eyeing the user experience for gen AI within your application or product. You don’t love this text-based chat experience and how disconnected it feels from your users' core jobs to be done. You also have some user feedback and tangential data to back up your gut feelings that… it just kind of sucks.
Despite massive marketing efforts, engagement is low, and you’re not seeing any correlation between AI usage and features that connect to conversion during the free trial—a big focus for you and your team. You have started exploring ideas with your design team and feel like you’re on to something great with AI being contextually helpful within the product during workflows with high drop-off. You even have positive signals from some customers you keep an open communication thread with.
You want to pitch this as a priority because you’re sure it will have a more significant impact than the current items on your roadmap. Additionally, you know that the way AI is implemented is increasing costs significantly. You have floated the idea with leadership, and they default to “Show me the data.”
Here’s how to use the "Worst Case Impact" framework to back your bet and do precisely that.
Start with the Benchmark
Begin by identifying the baseline metric(s) your work aims to improve—be it engagement, conversion rates, or user satisfaction. For instance, "Current engagement with our AI feature stands at X, with a conversion to paid rate of Y."
Capture a current performance benchmark for these metrics over the last three months and then look at trends for the last 12-24 months. Identify any major changes, as you will use that information to identify other relevant work or experiments to reference as models or stand-ins for your proposed work.
If you haven’t gone through the process of putting a $ value on core or calculated metrics, you should check this out.
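To make the benchmarking step concrete, here’s a minimal sketch of pulling a 3-month baseline and comparing it against the trailing-year trend. The metric name and monthly values are hypothetical placeholders; in practice you’d export these from your analytics platform.

```python
from statistics import mean

# Hypothetical monthly "AI feature engagement rate" for the last 12 months,
# oldest first. Swap in your own analytics export.
monthly_engagement = [0.14, 0.15, 0.13, 0.14, 0.12, 0.13,
                      0.12, 0.11, 0.12, 0.11, 0.10, 0.11]

baseline = mean(monthly_engagement[-3:])   # current benchmark: last 3 months
trailing_year = mean(monthly_engagement)   # 12-month view for trend context
trend = baseline - trailing_year           # negative = metric is declining

print(f"3-month baseline: {baseline:.1%}")
print(f"12-month average: {trailing_year:.1%}")
print(f"Trend vs. trailing year: {trend:+.1%}")
```

A declining trend like this one strengthens the pitch: the status quo isn’t just flat, it’s getting worse.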
Calculate your “Blast Radius”
Once you pin your metrics down, you should get a sense of how many users are currently impacted by the problem you want to solve or would be affected by the work you want to ship.
Every analytics platform I’ve used allows you to get a unique user count for specific events or segments, so use the parameters that make the most sense and work from there. It’s helpful to consider the % of users as well as the frequency/intensity of the problem.
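As a rough sketch of the "blast radius" math, assuming you’ve pulled unique-user counts from your analytics tool. All numbers here are made up for illustration:

```python
# Hypothetical inputs from your analytics platform.
monthly_active_users = 50_000
users_hitting_dropoff = 15_000   # unique users who hit the problem flow
avg_encounters_per_user = 4      # frequency: how often each user hits it

# % of users affected, plus total monthly encounters to capture intensity.
pct_affected = users_hitting_dropoff / monthly_active_users
total_encounters = users_hitting_dropoff * avg_encounters_per_user

print(f"Blast radius: {pct_affected:.0%} of MAU "
      f"({users_hitting_dropoff:,} users, {total_encounters:,} encounters/month)")
```

Reporting both the percentage and the encounter count keeps the frequency/intensity dimension from getting lost in a single headline number.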
Back It Up with Analogous Data
Since direct data on the new feature's impact isn't available, draw parallels from similar improvements made within your company or the industry. Look for improvements or experiments within the same JTBD space in your product, but you can also compare tasks or flows of comparable importance and complexity.
Beyond the hard data, I encourage teams to dig into support requests, customer feedback, and even social media chatter to identify real users who feel the pain.
Factor in the Variables
Acknowledge the variables and unknowns in your calculation, showing that you understand this data isn’t a direct 1:1 match for what you’re proposing. Nothing gets shot down faster than presenting the wrong data as the basis for a big decision or investment.
I always lean towards conservative outcomes in this approach, thus the “worst case” part. Let’s go back to the scenario we started with:
Even if your current data shows that 30% of your new users are trying but failing to generate value from the AI, you shouldn’t assume all of those users will suddenly find success with the first iteration of what you ship.
The more important question is, “What if only 3-5% found success?”
- Does the worst-case reward still justify the effort?
- How does this compare to other initiatives that have shipped?
- What is the cost of NOT doing it?
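The worst-case math from the scenario above can be sketched in a few lines. The user count and the dollar value per conversion are hypothetical; you’d plug in your own benchmark and metric values:

```python
# Conservative "worst case" sketch: even if only 3-5% of the users who are
# currently failing to get value from the AI end up converting, what's the
# impact? All inputs are hypothetical placeholders.
struggling_users = 15_000   # the 30% of new users failing today
paid_plan_value = 240       # hypothetical annual $ value per conversion

for success_rate in (0.03, 0.05):
    converted = struggling_users * success_rate
    revenue = converted * paid_plan_value
    print(f"{success_rate:.0%} success -> {converted:,.0f} conversions, "
          f"${revenue:,.0f}/yr")
```

If even the 3% floor compares favorably against the initiatives already on your roadmap, the bet is easier to defend.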
Tell The Story
Now it’s time to put it all together and get some buy-in to move this thing forward.
Start by focusing on the problem and who it’s impacting. Maybe it’s directly impacting your users, or maybe the problem is that your competitors are doing it better, and that’s directly impacting your business.
Provide validation and confirmation that your users want this problem to be solved.
Shift into painting a picture of what things look like when that problem is solved.
Bring it home by showing your logic: walk from what could be to what you believe will be. Keep your target conservative and meaningful, but let them know your research indicates much higher potential.
I know this is a lot to read without any visuals, so I made a Figjam board for you to use as you see fit. Feel free to comment with suggestions/ideas or play around and share your iterations.