ABOUT THE KEYNOTE

AI vs. Traditional Product Management

Every CEO (and their grandma) wants AI in their product. As the Product Manager, it’s now your job to make it happen. How hard can it be?

In this talk, you’ll learn why building AI and ML products is nothing like traditional product management. Drawing on a PhD in machine learning and 7+ years of hands-on experience, Thomas Brouwer breaks down the 5 key differences that will challenge your assumptions, shake up your roadmap, and change the way you work with stakeholders, engineers, and uncertainty.

Packed with real examples and practical lessons, this talk will help you avoid the common pitfalls and build AI features that actually work for your users and your business.

Type: Virtual Keynote; Onsite Talk
Time: October 10, 2025, 11:00; to be announced
Year: 2025


Summary & key takeaways

AI Product Management Is Expectation Management

Why do 95% of AI projects fail to deliver measurable value?

Most companies already use AI tools, set up evaluation systems, and integrate large models into their workflows, yet the majority still fail to see measurable impact.

In his talk at Just Product 2025, Thomas Brouwer (Product Leader at Blinkist and Go1, formerly Yelp and Apheris) revealed a simple but powerful answer: AI product management is expectation management. The success of AI products depends less on the technology itself and more on how well you set expectations about what to build first, how to measure success, how long things will take, and what kind of innovation you are actually pursuing.

The Core Idea: Manage Expectations Before You Manage Models

When teams hear “AI,” they often jump straight into model building.

Stakeholders expect something complex, automated, and groundbreaking, but that is rarely where success starts.

Thomas shared that most AI projects fail because teams do not manage four key expectations:

  1. MVPs: What is the simplest version worth testing?
  2. Success: What does “working” even mean when results are uncertain?
  3. Timelines: How do you plan when experiments may fail?
  4. Innovation: Are you building hype, solving user problems, or aiming for true breakthroughs?

The rest of the talk explored each of these in detail.

Start Simple: Build an MVP Model

Thomas’s first story came from his time as a Product Manager at Yelp.

A team asked for a machine learning model that could predict whether a user owned a car, in order to improve ad targeting for car insurance.

Instead of spending six months building and training a model, his team started with an MVP model: a simple heuristic that grouped users by shared behavior, such as clicking on car insurance ads. Within two weeks, they learned something crucial: the team did not need perfect precision; they needed a broader audience.

If they had gone straight to a complex model, that insight would have taken months and a lot of wasted effort.

The takeaway: do not start with machine learning; start with learning.
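To make the idea concrete, here is a minimal sketch of what such an MVP "model" could look like in Python. The field names and the car-related signals are illustrative assumptions, not details from the talk:

    # Minimal sketch of an MVP "model": a hand-written heuristic instead of a
    # trained classifier. All field names and signals are hypothetical,
    # chosen for illustration only.

    from dataclasses import dataclass, field

    @dataclass
    class UserActivity:
        user_id: str
        clicked_ad_categories: set[str] = field(default_factory=set)
        searched_categories: set[str] = field(default_factory=set)

    # Behavioral signals we assume correlate with car ownership.
    CAR_SIGNALS = {"car_insurance", "auto_repair", "parking", "car_wash"}

    def likely_car_owner(user: UserActivity) -> bool:
        """Rule-based stand-in for a car-ownership model: flag users who
        interacted with any car-related category."""
        return bool(CAR_SIGNALS & (user.clicked_ad_categories | user.searched_categories))

    def build_audience(users: list[UserActivity]) -> list[str]:
        """Build the MVP targeting audience in a single pass."""
        return [u.user_id for u in users if likely_car_owner(u)]

    if __name__ == "__main__":
        users = [
            UserActivity("u1", clicked_ad_categories={"car_insurance"}),
            UserActivity("u2", searched_categories={"coffee_shops"}),
        ]
        print(build_audience(users))  # ['u1']

The point of a sketch like this is speed: it can ship in days, and whatever it gets wrong tells you what a real model would need to get right before you invest months in one.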

Redefine Success: The Three Horizons Framework

In traditional product management, even a failed experiment leaves you with something visible: a feature, a design change, or at least new code.

In AI, failed experiments often leave nothing but data and lessons.

Thomas recommended using the Three Horizons Framework to align stakeholders on timelines and risk:

  • Horizon 1: Short-term improvements that are almost certain to deliver results.
  • Horizon 2: Medium-term bets with moderate risk and medium payoff.
  • Horizon 3: Long-term innovation projects, the “moonshots” that may fail but could redefine the business if they work.

For typical product teams, most time is spent in Horizon 1.

For AI teams, the balance shifts: roughly 30% short-term, 30% medium-term, and 40% long-term.

By making this portfolio explicit, product leaders can show that not everything is supposed to succeed, and that is okay.

Fix the Timeline Problem: Use Timeboxing

Predicting how long it will take to build an AI product is nearly impossible.

You do not know whether the data will be good enough, whether the model will converge, or whether the results will be usable.

Thomas shared a personal failure from his time at Apheris, where a federated learning demo project dragged on for months and eventually got canceled because timelines and expectations were never clear.

The solution he now uses is timeboxing: give teams a clear problem, one or two months to explore, and the freedom to decide what to include. At the end of that box, review progress and decide whether to continue or pivot.

This approach keeps teams motivated, avoids endless R&D spirals, and gives executives predictable updates without false promises.

Be Clear About the Kind of Innovation You Are Building

Not every AI initiative is meant to change the world, and that is fine.

The problem starts when leaders expect one thing and teams deliver another.

Thomas distinguished three valid types of AI work:

  1. Hype and Marketing: Sometimes you build something mainly to showcase capability or attract partners. For example, his team once built an AI chatbot that did not outperform search for users but helped secure new business.
  2. User Problems: Here, AI is just one of several tools to solve a real customer need, like personalization at Blinkist, where algorithm changes, notifications, and UI tweaks all worked together.
  3. True Innovation: Projects that create new value altogether, such as Yelp Store Visits, which allowed advertisers to measure real-world visits instead of just clicks.

Each type has value, but only if everyone agrees upfront on which one you are building.

The Playbook for AI Product Managers

Thomas summed up his framework for managing AI product expectations in four simple habits:

  1. Start with an MVP model to learn before you build.
  2. Map your initiatives to the Three Horizons so risk and reward are transparent.
  3. Use timeboxing to bring clarity to inherently unpredictable work.
  4. Label your work clearly as hype, user problem, or innovation, and align stakeholders early.

When these expectations are clear, AI teams stay trusted and empowered, even when experiments fail.

The Bigger Message: From 95% Failure to 5% Success

Thomas closed by returning to his opening statistic that 95% of AI projects fail to deliver value.

In his view, this is not a failure of technology but a failure of management.

Success comes from aligning everyone, from engineers to leaders to stakeholders, around the messy reality of AI work. When expectations are realistic and learning is intentional, product teams move from the 95% that fail to the 5% that truly make an impact.

About the Speaker

Thomas Brouwer is a product leader focused on personalization, AI, and data-driven product development. He has worked at Yelp, Apheris, and Blinkist, leading applied machine learning projects that bridge the gap between technology and strategy. His mission is to help teams build AI products that deliver real value, not just hype.

ABOUT THE SPEAKER

Thomas Brouwer

Thomas Brouwer is a product leader based in Hamburg, Germany. After completing his PhD in machine learning at the University of Cambridge, he spent seven years building data and ML products at Yelp, Apheris, Blinkist, and Go1. He enjoys bringing data-heavy products to market and making them a success. Outside of work you can find him bouldering, travelling, and writing (about product management, and more).