Be Reasonable

Understanding How to Leverage AI Reasoning Models


Reasoning is all about drawing conclusions that make sense. It’s the process of analyzing a problem, weighing evidence, and deciding whether a particular outcome holds up under scrutiny. There are different types of reasoning too. Deductive reasoning guarantees a conclusion if the premises are true (think of it like a well-structured math proof). Inductive reasoning, on the other hand, is more about probabilities—making generalizations from specific examples but without 100% certainty. And then there’s abductive reasoning, which is all about finding the most likely explanation for a set of observations.

At its core, reasoning is about why a conclusion is valid. It’s not just about solving problems but understanding the deeper logic behind them—examining whether the solution fits and what implications might arise. It’s more than just figuring out the next step in a process; it’s evaluating the entire path.

Now, let’s compare that to problem-solving. Problem-solving is task-oriented—it's about applying known techniques to overcome specific challenges. Need to figure out how to fix a bug in your code? That’s problem-solving. It’s often procedural: break down the issue, follow the steps, and get to the answer. Reasoning, though, asks a deeper question: Why is this the right solution?

Humans are great at everyday problem-solving. But when it comes to complex reasoning—where you need to factor in multiple variables, evaluate contradictory evidence, or balance logic with emotional or ethical considerations—that’s where we often struggle. We tend to rely on cognitive shortcuts, or heuristics, that help us make quick decisions but often lead to mistakes when the situation calls for more in-depth thinking. This is why we get tripped up by biases like confirmation bias, where we favor information that supports what we already believe.

Now, here’s the twist: when OpenAI’s new o1-preview reasoning model came out, the first thing a lot of so-called "AI experts" did was run it through tasks designed for general/foundational models — simple tasks that don’t require deep reasoning at all. I saw everything from asking it to identify and label transactions to writing in a specific style or rewriting and improving something generated by another model…

And that’s where the disconnect happened. This new model isn’t just another text generator—it’s designed to think through complex problems, to reason in ways that foundational models weren’t built for.

“Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.”

Einstein - I think?

Is this reasoning model any good?

Here’s an anecdotal use case I was able to test in order to validate its potential:

  • I’ve been working with another very experienced and successful product leader to validate a new opportunity.

  • We have interviewed over a dozen “buyers” and “users” to better understand the problem space and opportunity.

  • We have spent a good amount of time shaping an initial PRD and MVP scope with a fairly clear short-term roadmap.

I gave o1-preview a short summary of the problem that looked like this:

“[Persona] have found [solution] to be incredibly effective when it comes to producing [result A] and [result B], yet they have a hard time getting buy-in on the initial investment because [problem A] and [problem B]. How would you approach building a solution for this problem space? How would you shape a competitive MVP?”

It nailed its response, giving me 80% of the features we discussed in almost the exact same priority order for implementing them. It understood the technology available, the problem space, and what would make a worthy MVP.

My colleague’s initial reaction was something like, “Well shit… is there even a point to creating startups if AI can reason its way to strong solutions this quickly?”

Here’s how I am choosing to think about it:

  • Framing the problem is a major part of the process, and the model doesn’t work nearly as well if you ask it to identify a sharp problem on its own.

  • Where we are in the process right now is where most other companies will stop. We need to outthink our competitors AND the technology available to them: the 20% of the solution that the AI missed needs to be significantly better than the 80% that is “obvious.” If it’s not, we need to spend more time finding the diamond in our coal mine.

  • Reasoning and logic aren’t binary. This is why we have disagreements, political parties, and wars. They are built around information and beliefs, and the AI essentially believes what we tell it to. Human creativity is critical to building something meaningful.

How to leverage o1-preview for better planning, strategy, and artifacts

If you haven’t, it’s worth looking into OpenAI’s 5 levels of AI — its planned path to AGI. o1-preview is a baby step into reasoning, which is only level 2.

For product teams, reasoning models have the potential to help you develop a deeper understanding of your customer’s needs, the problems you can solve for them, and the approach to solving those problems.

Should you use a reasoning model to generate your PRD? No, use ChatPRD, but before you do, use the reasoning model to help you frame up and clearly define the opportunity. Here are a few simple and useful ways to leverage it:

  • Ask it to identify the most meaningful threads or themes across call transcripts and customer feedback — specifying that you are looking for an opportunity to improve [specific metric] (see the sketch after this list).

  • Give it an executive-level summary of the problem you are solving or opportunity you are pursuing and ask it to explain how it would approach it.

  • Ask it to challenge your logic or assumptions around a set of data or collected insights.

  • Get a gut check by asking it to identify any gaps in your logic or approach.

  • Ask it to anticipate challenges you might face in reaching specific goals like activating existing users on a new feature or branching into a new market.

  • Ask it to provide opposing perspectives on critical decisions to better understand (and anticipate) any objections and concerns you might run into.
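
If you want to try that first suggestion programmatically instead of pasting transcripts into the chat window, here is a minimal sketch using the OpenAI Python SDK. The `transcripts/` folder, the plain-text file format, and the trial-to-paid conversion metric are illustrative assumptions on my part, not part of any workflow described above.

```python
"""Hypothetical sketch: ask o1-preview to surface themes across call transcripts."""
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Gather the raw material: each transcript becomes one labeled block of text.
transcripts = [
    f"--- {path.name} ---\n{path.read_text()}"
    for path in sorted(Path("transcripts").glob("*.txt"))  # assumed local folder of .txt notes
]

# Keep the framing simple: name the metric you care about and hand over the evidence.
prompt = (
    "Below are customer call transcripts. Identify the most meaningful themes across them, "
    "focusing on opportunities to improve trial-to-paid conversion (our target metric). "  # hypothetical metric
    "For each theme, note which transcripts support it.\n\n"
    + "\n\n".join(transcripts)
)

# o1-preview takes a plain user message; no system prompt or output-format instructions needed.
response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The point of the sketch is how little scaffolding the prompt needs: state the metric you care about, hand over the evidence, and let the model do the synthesis.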


Prompting for Reasoning Models

If you happened to read my previous deep dive on prompting… you should forget it completely when testing/using a reasoning model like o1-preview. Your goal with this model isn’t to get a specific format or type of output; it’s to help you factor in and consider multiple variables and data points while identifying those you may have missed.

For the best results, you need to present the clearest, simplest framing of your problem/goal. Here are a few prompts that I have tried and been impressed with:

  • I want to [goal description], but [problem A] and [problem B] are preventing me from making progress. How would you navigate these problems to accomplish [goal]?

  • I performed extensive research on [topic] and provided my summarized notes below. What perspectives have I missed or what biases are present in my summary that I should consider correcting?

  • Here is a list of obvious use cases for [product]. Review the documentation [link to docs or marketing page] and identify high potential use cases that are likely to emerge over the next year.

  • Here is information on [your product]. What would you build and launch if your goal was to win 20% market share from this competitor in the next year?

As you can see, you don’t need to give it extensive context or structure your prompts in a specific way. Give it your goal/objective, define the problems to solve, and let it do its thing.

What do I do with the output?

You can either use the output from the reasoning model as part of your prompt to generate what you need, OR you can simply use the information yourself. As foundational models improve and their context windows grow, they will take more guidance. Using the logic from the reasoning model will help ensure the output from the foundational model is aligned with the direction you need it to go.
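
To make that chaining concrete, here is a rough sketch of one way it could look: the reasoning model frames the approach, and a foundational model turns that framing into the artifact. The problem summary, the model names, and the PRD-outline task below are assumptions for illustration, not a prescribed pipeline.

```python
"""Hypothetical sketch: chain a reasoning model's output into a foundational model."""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: let the reasoning model think through the opportunity and the approach.
framing = client.chat.completions.create(
    model="o1-preview",
    messages=[{
        "role": "user",
        "content": (
            "Teams want to cut onboarding time, but manual data migration and unclear "
            "pricing block adoption. How would you approach building a solution for this "
            "problem space, and how would you shape a competitive MVP?"  # hypothetical summary
        ),
    }],
).choices[0].message.content

# Step 2: hand that reasoning to a general model as guidance for the artifact itself.
draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You draft concise, well-structured PRD outlines."},
        {
            "role": "user",
            "content": "Using the reasoning below as direction, draft a one-page PRD outline.\n\n" + framing,
        },
    ],
).choices[0].message.content

print(draft)
```

Whether you hand the framing to another model or to a tool like ChatPRD, the value is the same: the reasoning step keeps the generation step pointed in the direction you actually want.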

As humans, it actually takes a lot of work to break down our thought process and the thinking behind every decision — and that’s before we have to type the whole thing out. Leveraging the reasoning model saves you from having to do that while also helping you identify what is truly unique about your own approach.

If you’ve done something cool with o1-preview, respond and let me know and I will happily highlight it in the next post I send out.