💜 PRODUCT ART 💜

The Innovation Paradox: Why Your Metrics Might Be Killing Your Product’s Future | Dopamine-Driven Development: How Addiction Patterns Shape Your Product Backlog Priorities

Issue #220

Destare Foundation, Alex Dziewulska, Sebastian Bukowski, and 3 others ∙ Oct 07, 2025 ∙ Paid

In today's edition, among other things:

💜 The Innovation Paradox: Why Your Metrics Might Be Killing Your Product’s Future (by Alex Dziewulska)

💜 Dopamine-Driven Development: How Addiction Patterns Shape Your Product Backlog Priorities (by Łukasz Domagała)

💪 Interesting opportunities to work in product management

🍪 Product Bites - small portions of product knowledge

🔥 MLA week #30

Join Premium to get access to all content.

It will take you almost an hour to read this issue. Lots of content (or meat)! (For vegans - lots of tofu!).

Grab a notebook 📰 and your favorite beverage 🍵☕.

DeStaRe Foundation

Editor’s Note by Alex 💜

The ‘Full-Stack PM’ Trap: Why Wearing All Hats Makes You Worse at Each One

Here’s the brutal truth your company won’t tell you: Being asked to do more with less isn’t pragmatic leadership—it’s organizational negligence disguised as empowerment.

You’re being told you’re versatile. A “full-stack PM” who can handle UX design in the morning, analyze retention funnels at lunch, write go-to-market strategy in the afternoon, and facilitate stakeholder alignment before dinner. You’re told this makes you valuable—that companies need “T-shaped” professionals who can do it all.

The reality? You’re not becoming a versatile professional. You’re becoming a burned-out generalist producing B-grade work across every dimension while your company congratulates itself on “efficiency.”

The Industry’s Dirty Little Secret: Full-Stack Is Code for Understaffed

Let me decode the euphemisms: When your job description says “full-stack PM,” read it as “we’re too cheap or disorganized to properly staff our product teams.” When you hear “wear multiple hats,” understand it means “we’ll stretch you thin across incompatible roles until something breaks—probably you.”

The product management community has normalized this dysfunction through a brilliant linguistic sleight of hand. We’ve rebranded organizational failure as individual capability. Instead of asking “Why is this person responsible for five distinct professional disciplines?” we celebrate their “versatility.” Instead of questioning why companies can’t build properly specialized teams, we tell PMs they need to “level up” their skillset to include... well, everything.

This isn’t happening because products got more complex. It’s happening because of three converging forces: (1) the 2022-2024 tech layoffs that decimated support functions, (2) the “do more with less” dogma that followed, and (3) a fundamental misunderstanding of how human cognition actually works. The result? Product teams operating at 60% capacity while leadership congratulates itself on lean operations.

What makes this particularly insidious is how it weaponizes the PM’s natural inclination toward ownership. We want to solve problems. We want to fill gaps. So when UX is understaffed, we jump in. When marketing needs help with positioning, we volunteer. When analytics is overwhelmed, we build our own dashboards. Each act feels responsible. Collectively, they’re killing your ability to do actual product management.

The Neuroscience of Why You Can’t Actually Multitask (Even Though You Think You Can)

Here’s where the evidence becomes devastating. University of Washington professor Sophie Leroy spent years studying what happens when people switch between tasks. She discovered a phenomenon called “attention residue”—when you shift from one task to another, part of your cognitive capacity remains stuck on the previous task, impairing your performance on the new one.

In her seminal 2009 research, Leroy found that attention residue doesn’t just happen with major task switches. Even brief interruptions—checking email, answering a Slack message, glancing at a notification—leave cognitive residue that degrades your performance. The effect is cumulative: “If, like most, you rarely go more than 10-15 minutes without a just check, you have effectively put yourself in a persistent state of self-imposed cognitive handicap.”

Now apply this to the “full-stack PM” reality: You’re sketching wireframes for 90 minutes, then jumping into SQL to debug analytics, then pivoting to stakeholder presentations, then reviewing competitive positioning, then back to technical specs. Your brain isn’t smoothly transitioning between these domains—it’s leaving cognitive residue with every switch, systematically degrading your performance across all of them.

But it gets worse. Research on employee interruptions reveals workers are interrupted every six to 12 minutes in the modern workplace. When 40% of workers report being interrupted 10+ times per day, and each interruption creates attention residue, we’re not just context-switching—we’re operating in a permanent state of cognitive fragmentation.

Cal Newport, synthesizing this research, concluded: “The relative cognitive enhancement that would follow by minimizing this effect” is massive. In other words, eliminating constant task-switching doesn’t improve your performance by 10-20%. It transforms it entirely.

The Expertise Paradox: Why Breadth Destroys Depth

The research on skill acquisition makes the full-stack PM model even more absurd. Anders Ericsson spent decades studying expert performance across domains—chess, music, medicine, athletics. His finding? True expertise requires what he called “deliberate practice”—focused, sustained effort on specific skills, with immediate feedback, far beyond your comfort zone.

Ericsson’s research revealed that becoming genuinely skilled at any complex discipline requires thousands of hours of this type of practice. Not just time spent, but deliberate practice with full cognitive engagement. The popularized “10,000-hour rule” (which Ericsson himself hated as an oversimplification) actually understates the requirement for most domains—he estimated elite-level performance often requires 20,000-25,000 hours of deliberate practice.

Now consider what happens when a PM splits time across five disciplines. You’re not building toward expertise in any of them. You’re engaging in what research calls “dabbling”—surface-level exposure that creates the illusion of competence while systematically preventing actual skill development.

Research on specialist versus generalist performance consistently finds that specialists outperform generalists in their domain of expertise. This isn’t controversial. What’s interesting is why: specialists develop what cognitive scientists call “chunking”—the ability to recognize patterns and make high-quality decisions rapidly because they’ve built extensive mental models through deliberate practice.

When you’re a full-stack PM doing a little UX, a little analytics, a little strategy, and a little stakeholder management, you never develop these expert mental models. You remain perpetually at the novice-to-intermediate level across all dimensions—precisely the level where you make the most mistakes, work the slowest, and add the least strategic value.

Warren Bennis, the leadership authority who coined the term “deep generalist,” described professionals who “develop a unique blend of knowledge depth and knowledge breadth.” The key word? Develop. This means building genuine expertise first, then adding complementary knowledge. It doesn’t mean spreading yourself so thin across domains that you develop expertise in none.

The Real Cost: Burnout, Bad Decisions, and Broken Careers

The consequences of the full-stack PM delusion extend far beyond sub-optimal work quality. The human toll is catastrophic.

In 2025, 82% of workers are at risk of burnout—up from 43% just five years ago. Among managers specifically (which includes most product managers), 43% report experiencing burnout, 10 percentage points more than executives. The numbers are worse for younger workers: 59% of workers under 35 face work-related stress daily, compared to 50% of those 35 and older.

Research consistently identifies being overworked as the primary cause of stress for 37% of US workers, with one-third citing lack of work-life balance. When product managers are expected to be full-stack—handling design, analytics, strategy, marketing, and stakeholder management—they’re not just overworked. They’re attempting the cognitively impossible while being measured against standards that assume specialized focus.

LaunchNotes research on product manager burnout specifically identified that PMs experience burnout “as a result of their relentless pursuit of perfection, constant multitasking, and the pressure to deliver results within tight deadlines.” The full-stack PM model takes the already-demanding PM role and multiplies these stressors by requiring competence across five distinct professional domains simultaneously.

But the damage extends beyond individual burnout. When PMs operate in constant cognitive overload from task-switching, their decision quality plummets. Research shows that “people experiencing attention residue after switching tasks are likely to demonstrate poor performance on that next task. The thicker the residue, the worse the performance.”

This manifests as:

  • Strategic decisions made without deep analysis because you’re too busy with tactical UX work

  • Poor prioritization because you lack time for proper customer research

  • Technical debt accumulation because you’re splitting attention between stakeholder management and technical specs

  • Weak product positioning because you’re doing surface-level competitive analysis between meetings

The compounding effect of mediocre decisions made under cognitive load creates products that systematically underperform—not because the PM lacks talent, but because the operating model guarantees suboptimal outcomes.

Perhaps most insidious: the full-stack PM model destroys career development. When you spend years as a jack-of-all-trades, you never develop the deep expertise that makes you genuinely valuable. You can’t compete with specialized UX designers on design thinking. You can’t match data analysts on analytical depth. You can’t rival experienced strategists on competitive positioning. You end up replaceable precisely because you tried to be everything to everyone.

What Must Change: A Prescription for Sanity

Here’s what needs to happen, and it needs to happen now:

For Companies: Stop Calling Understaffing “Agility”

If your product teams need design work, hire designers. If they need analytics, hire analysts. If they need marketing support, hire PMMs. The ROI calculation is simple: one properly specialized professional produces higher-quality output faster than three generalists splitting attention. Research shows specialists outperform generalists in task-specific performance while generalists experience higher burnout rates. Stop pretending thin staffing is sophisticated org design.

Implement what research calls “cognitive load management”: structure teams so individuals can achieve flow state—sustained, focused work on compatible tasks. This means PMs focus on product strategy, user insight, and roadmap decisions. Not everyone does everything.

Create what Gallup research identifies as essential: supportive leadership that reduces burnout by 70%. This doesn’t mean cheerleading. It means giving people the resources, team structure, and reasonable scope to actually succeed at their jobs.

For PMs: Reclaim Your Role Boundaries

You have permission—no, you have a professional obligation—to say no to responsibilities that pull you away from core PM work. When asked to design mockups, respond: “I can define the requirements and success metrics. We need a designer for the visual solution.” When asked to build analytics dashboards, respond: “I can articulate what we need to measure and why. We need an analyst to implement tracking properly.”

This isn’t shirking responsibility. It’s professional boundaries based on cognitive science. Teresa Torres emphasizes that “product management is no longer a siloed discipline. It’s a team sport.” But team sport doesn’t mean everyone plays every position—it means each position has specialized responsibilities that together create success.

Document your actual work allocation. Track how many hours you spend on core PM activities (strategy, discovery, decision-making) versus auxiliary functions (design, analytics, stakeholder appeasement). If you’re spending more than 30% of your time on non-PM work, you’re not being versatile—you’re being misused.

For the Industry: Demand Better Standards

We need prominent product leaders to stop celebrating the full-stack PM mythos and start calling out the organizational dysfunction it represents. When Brian Chesky talks about taking back product control, we should ask: “Why were PMs given scope so broad they became ineffective?” When companies boast about “lean” product teams, we should ask: “Lean compared to what? Functional?”

Thought leaders like Marty Cagan, Teresa Torres, and Melissa Perri need to explicitly address this: the PM role has a specific scope, and diluting it across five disciplines doesn’t make PMs more valuable—it makes them less effective. The industry needs to standardize around what PM actually means, rather than letting companies use the title as a catch-all for “whoever fills this role does whatever we’re too understaffed to handle properly.”

Stop attending conferences that celebrate “how I balanced 8 different roles” and start demanding sessions on “how we built properly specialized product teams.” Vote with your attention and your career choices for companies that understand cognitive science.

The Revolution Starts With You Saying No

The full-stack PM mythology persists because it’s convenient for companies and flattering to PMs who mistake exhaustion for effectiveness. But the science is clear: humans cannot maintain high-quality output across multiple distinct cognitive domains simultaneously. Every hour you spend context-switching is an hour you’re not developing genuine expertise. Every responsibility you accept outside core PM work is a step away from becoming strategically valuable.

The solution isn’t individual heroics—working harder to somehow transcend human cognitive limits. The solution is structural: companies must staff product teams properly, and PMs must defend role boundaries backed by decades of research on attention, expertise, and burnout.

When your manager next suggests you’re being “empowered” to handle design, analytics, marketing, strategy, and stakeholder management, you now have the research to respond: “I appreciate the confidence, but cognitive science shows this approach guarantees mediocre outcomes across all dimensions. Instead, let’s discuss how to build a properly specialized team where I can focus on the strategic product decisions that actually require PM expertise.”

Will saying this be uncomfortable? Absolutely. Will it risk making you seem “not a team player”? Possibly. But here’s what’s more uncomfortable: burning out while producing substandard work across five disciplines because you accepted an organizationally dysfunctional operating model.

The product management community faces a choice: continue celebrating the full-stack delusion while our best people burn out producing mediocre work, or acknowledge what the research proves—that expertise requires focus, quality requires specialization, and human cognition has limits we ignore at our peril.

Your career, your mental health, and the quality of your products depend on which path you choose. Choose wisely.



💪 Product job ads from last week

Do you need support with recruitment, career change, or building your career? Schedule a free coffee chat to talk things over :)

  1. Product Manager - Trans.eu Group

  2. Product Manager - EY

  3. Product Manager - Softserve

  4. Product Manager - aleno

  5. Product Manager - mBank



🍪 Product Bites (3 bites 🍪)

🍪 The Availability Heuristic: When Recent Feedback Drowns Out Real Patterns

Why Product Teams Mistake the Loudest Voice for the Majority Opinion

Last Tuesday’s angry customer email sits at the top of your inbox. It’s detailed, passionate, and demands a feature change. By Wednesday’s sprint planning, that single piece of feedback has somehow become “what our users are asking for.” Sound familiar?

The Availability Heuristic is a cognitive bias where we overweight information that’s readily available in our memory—typically recent, vivid, or emotionally charged—when making decisions. In product management, this mental shortcut transforms the freshest complaint into our strategic north star, often at the expense of actual data patterns that tell a different story.

This isn’t just a minor cognitive quirk. Research by Daniel Kahneman shows that people consistently overestimate the probability of events that are easy to recall, sometimes by as much as 300%. For product teams drowning in feedback channels—Slack messages, support tickets, sales calls, user interviews—the availability heuristic becomes our silent adversary, quietly steering product strategy toward whoever shouted last.

The Echo Chamber in Your Memory

Think of your brain as a newsroom. The availability heuristic is like an overzealous editor who puts every dramatic story on the front page while burying systematic trends in the back section. The problem? Your product decisions are made from today’s headlines, not from the archive of truth.

Here’s how it manifests in product work:

The Recency Trap: The last three user interviews mentioned a missing integration feature. Suddenly, that integration feels like a top priority—even though your analytics show only 4% of users would benefit from it.

The Vividness Distortion: A customer churned with an emotional exit interview explaining exactly why they left. That narrative becomes more memorable than the 300 silent customers who quietly renewed, making churn prevention feel more urgent than it statistically is.

The Frequency Illusion: After hearing about a competitor’s new feature twice in one week, we start believing “everyone is talking about it”—a phenomenon psychologists call the Baader-Meinhof effect. Your roadmap suddenly needs to respond to this “trend.”

Consider Dropbox’s experience in 2014. Their support team was inundated with requests for a native Linux client—vocal, technical users creating dozens of forum threads. The availability heuristic would suggest: “We need Linux support now.” But when Dropbox analyzed their actual user base, Linux users represented less than 1% of potential revenue. The loudest voice was drowning out the silent majority of Windows and Mac users who needed different improvements.

The Three Amplifiers of Availability Bias

Not all available information weighs equally in our minds. Certain types of feedback get cognitively amplified, making them even more dangerous for product decisions:

1. Stakeholder Proximity Bias

Feedback from people we interact with daily carries disproportionate weight. When your CEO mentions a feature idea, or when the sales team shares a recurring objection, these inputs become immediately “available” in every conversation. We’ve seen this at Netflix, where product teams had to actively create systems to ensure that executive opinions didn’t override subscriber behavior data.

2. Emotional Intensity Bias

Angry feedback sticks. A study published in the Journal of Consumer Research found that negative information is processed more thoroughly and weighted more heavily than positive information—sometimes by a factor of three to one. One furious user’s Twitter thread can psychologically outweigh 50 positive support tickets.

3. Narrative Coherence Bias

Stories are more memorable than statistics. When a user tells you exactly how they use your product and what’s missing, that complete narrative creates a cognitive anchor. Meanwhile, aggregate analytics showing different patterns feel abstract and forgettable.

Building Your Availability Antidote System

Knowing about the availability heuristic isn’t enough—we need systematic defenses. Here’s a practical framework for product teams:

The 72-Hour Rule

When new feedback arrives, especially if it’s vivid or emotional, implement a mandatory 72-hour waiting period before adding it to your roadmap discussion. This isn’t about ignoring feedback—it’s about preventing recency from hijacking strategy.

How to implement it:

  • Create a “feedback intake” system where all requests spend three days in triage

  • During these 72 hours, cross-reference the feedback against existing data

  • Tag each piece of feedback with a “freshness” label to remind the team of recency bias

What you’ll discover: About 40% of “urgent” feedback loses its urgency once you’ve had time to contextualize it within broader patterns.
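
To make the triage window concrete, here is a minimal sketch in Python (the field names and example items are hypothetical) of a feedback intake queue that holds items for 72 hours and labels their freshness before they can enter a roadmap discussion.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

TRIAGE_PERIOD = timedelta(hours=72)

@dataclass
class FeedbackItem:
    source: str       # e.g. "support ticket", "sales call"
    summary: str
    received_at: datetime

    @property
    def freshness(self) -> str:
        """Label reminding the team how recent (and therefore how 'available') this item is."""
        age = datetime.now() - self.received_at
        return "fresh - beware recency bias" if age < TRIAGE_PERIOD else "triaged"

def ready_for_roadmap(items: list[FeedbackItem]) -> list[FeedbackItem]:
    """Only items that have sat through the 72-hour triage window may enter roadmap discussion."""
    return [i for i in items if datetime.now() - i.received_at >= TRIAGE_PERIOD]

# Example: yesterday's vivid complaint stays in triage; last week's item is eligible.
inbox = [
    FeedbackItem("support ticket", "Angry email demanding a feature change", datetime.now() - timedelta(days=1)),
    FeedbackItem("user interview", "Request for calendar integration", datetime.now() - timedelta(days=8)),
]
for item in ready_for_roadmap(inbox):
    print(item.summary, "-", item.freshness)
```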

The Evidence Triangle

Before prioritizing any feature based on feedback, require three sources of evidence:

  1. Quantitative validation: Does data support this pattern? (Analytics, surveys, usage metrics)

  2. Qualitative depth: Do multiple users describe similar needs? (Interviews, support tickets)

  3. Strategic alignment: Does this support our documented product vision?

Only when all three sides of the triangle are present should feedback influence prioritization. This framework prevents any single source—no matter how vivid—from dominating decisions.

The Silent Majority Dashboard

Create a standing dashboard that visualizes what your silent users are actually doing. Include metrics like:

  • Top features by daily active usage (not by support requests)

  • Retention drivers from cohort analysis (not from exit interviews)

  • Revenue per feature utilization (not from sales team anecdotes)

At Spotify, product teams maintain “ground truth” dashboards that are reviewed before any roadmap discussion. This makes data-driven patterns as “available” to memory as the latest stakeholder request.
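
As a purely illustrative sketch (the feature names and numbers are invented), a few lines of Python show the contrast such a dashboard makes visible: the features silent users actually rely on versus the features generating the most support noise.

```python
# Hypothetical per-feature metrics: daily active users vs. support requests.
feature_metrics = {
    "search":       {"daily_active_users": 42000, "support_requests": 12},
    "integrations": {"daily_active_users": 1800,  "support_requests": 95},
    "export":       {"daily_active_users": 26000, "support_requests": 7},
}

by_usage = sorted(feature_metrics, key=lambda f: feature_metrics[f]["daily_active_users"], reverse=True)
by_noise = sorted(feature_metrics, key=lambda f: feature_metrics[f]["support_requests"], reverse=True)

print("Top features by silent usage:  ", by_usage)   # what users actually do
print("Top features by support noise: ", by_noise)   # what users shout about
```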

The Counter-Evidence Practice

In every sprint planning or roadmap review, assign one team member the role of “availability auditor.” Their job is to actively search for counter-evidence to proposed priorities:

  • “We heard three users want X, but what do our 10,000 silent users need?”

  • “This feature request came from our biggest customer, but would our typical customer find it valuable?”

  • “Support tickets mention this daily, but is it because of a confusing UX we could fix instead?”

This practice forces the team to confront availability bias before it calcifies into commitments.

The Frequency vs. Importance Matrix

Not everything that’s frequently available is actually important. Here’s a prioritization tool specifically designed to combat availability bias:

Map each piece of feedback on two axes:

  • Vertical Axis (Availability): How often does this feedback surface? How vivid or memorable is it?

  • Horizontal Axis (Strategic Impact): What’s the potential impact on key metrics and product vision?

This creates four quadrants:

  1. High Availability, High Impact: Legitimate priority (but verify the urgency is real, not just recent and vivid)

  2. High Availability, Low Impact: The availability trap—frequent but unimportant

  3. Low Availability, High Impact: Hidden opportunities your silent users need

  4. Low Availability, Low Impact: Rightfully deprioritized

The magic happens in quadrants 2 and 3. Quadrant 2 is where availability bias lives—things that feel urgent but aren’t strategic. Quadrant 3 is where real opportunities hide—important patterns that aren’t shouting at you.
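
For teams that want to make the scoring explicit, here is a minimal sketch (the 0-1 scores and the 0.5 threshold are arbitrary assumptions) that sorts a piece of feedback into one of the four quadrants.

```python
def quadrant(availability: float, impact: float, threshold: float = 0.5) -> str:
    """Classify feedback scored 0-1 on each axis into one of the four quadrants above."""
    if availability >= threshold and impact >= threshold:
        return "1. High availability, high impact: verify, then prioritize"
    if availability >= threshold:
        return "2. High availability, low impact: the availability trap"
    if impact >= threshold:
        return "3. Low availability, high impact: hidden opportunity"
    return "4. Low availability, low impact: rightfully deprioritized"

# Example scores a team might assign during triage (illustrative only).
print(quadrant(availability=0.9, impact=0.2))   # loud but unimportant -> quadrant 2
print(quadrant(availability=0.1, impact=0.8))   # quiet but strategic  -> quadrant 3
```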

When Availability Heuristic Actually Helps

Here’s the nuance: the availability heuristic isn’t always wrong. Sometimes recent, vivid feedback is genuinely important. The key is knowing when to trust it.

Trust availability when:

  • You’re in crisis mode and rapid response is necessary (major outages, security issues)

  • Multiple independent sources suddenly surface the same pattern within days

  • The feedback aligns with early warning indicators you’ve been monitoring

  • You’re in genuine discovery mode and gathering initial signal

Question availability when:

  • It contradicts long-term data trends

  • It comes from a single source, no matter how important that source is

  • The feedback aligns suspiciously well with someone’s pet project

  • The urgency feels manufactured rather than data-driven

The Memory Diversity Strategy

The best defense against availability bias is memory diversity—actively making different types of information equally available to your decision-making process.

Weekly Practice:

  • Start roadmap meetings by reviewing long-term metrics before discussing recent feedback

  • Rotate who presents user research to prevent any single voice from dominating

  • Archive quarterly “what we didn’t build and why” documents to remember past reasoning

  • Create highlight reels of positive usage patterns, not just problem reports

Monthly Practice:

  • Conduct “silent user” research sessions focused entirely on the 80% who never give feedback

  • Review churn reasons alongside renewal reasons (we naturally remember departures more than stays)

  • Audit how much of your roadmap comes from proactive strategy versus reactive feedback

The Bigger Picture: Building Decision Hygiene

The availability heuristic reveals a deeper truth about product management: our minds are powerful but imperfect decision-making instruments. We need cognitive infrastructure—systems, rituals, and frameworks—that compensate for our mental shortcuts.

Think of these practices not as bureaucracy but as decision hygiene. Just as we wouldn’t make medical decisions based solely on the most memorable patient, we shouldn’t make product decisions based solely on the most available feedback.

The best product teams don’t try to eliminate cognitive biases—they design around them. They build systems that make the right information as available as the loudest information. They create cultures where asking “but what does the silent majority need?” is as natural as responding to the latest customer complaint.

Here’s your implementation challenge: This week, identify one product decision your team is considering. Before finalizing it, ask: “Are we responding to what’s available in our memory, or to what’s important in our data?” Then spend 30 minutes actively searching for counter-evidence or silent patterns that might tell a different story.

The loudest voice in the room is rarely the voice of your user base. But the patterns in your data? Those whisper truths worth listening to—if only we make them available enough to hear.



🍪 The Hindsight Bias: Why “I Told You So” Ruins Product Retrospectives

How Our Need to Have Been Right Prevents Us from Learning What Went Wrong

The product launch failed. Revenue missed projections by 60%. In the retrospective, voices rise around the conference table: “I knew the pricing was too high.” “I said we should have launched on mobile first.” “Obviously, we should have done more user testing.” Everyone, it seems, predicted this exact outcome. Except nobody wrote it down beforehand.

Hindsight bias is the psychological tendency to perceive past events as having been more predictable than they actually were at the time. Once we know an outcome, we unconsciously rewrite our memory to believe we “saw it coming all along.” In the psychology literature, it’s sometimes called the “knew-it-all-along effect,” and it’s one of the most destructive forces in product retrospectives and organizational learning.

Research by Baruch Fischhoff, who pioneered hindsight bias studies in the 1970s, found that people consistently overestimate their predictive abilities by 40-60% after learning an outcome. In product teams, this isn’t just an interesting quirk—it’s a learning killer. When everyone believes they knew what would happen, nobody digs deep enough to understand what actually happened.

The Retrospective Rewrite

Imagine your memory as a Wikipedia page that anyone can edit after the fact. Hindsight bias is like a sneaky contributor who goes back and changes all the “uncertain” and “surprising” markers to “obvious” and “predictable” the moment you learn how things turned out.

Here’s what makes hindsight bias particularly insidious in product work:

The Inevitability Illusion: After a feature flops, the warning signs seem obvious. The competitive research that was “interesting but not definitive” becomes “clearly showed market saturation.” The user interview that expressed mild concern becomes “explicitly warned us about this problem.”

The False Confidence Cycle: When team members believe they predicted past outcomes, they become overconfident in future predictions. If you “knew” the last three feature launches would succeed, you trust your instinct more than you should for the next one.

The Learning Prevention: The purpose of retrospectives is to discover what we didn’t know. But hindsight bias convinces us we already knew everything—we just didn’t act on it. This subtle shift transforms “what should we learn?” into “who should we blame?”

Consider Google Wave’s 2010 shutdown. After the product failed, countless post-mortems described how the problems were “obvious from the start”—too complex, no clear use case, tried to replace email without email’s simplicity. Yet Google Wave was developed by brilliant engineers with extensive user research. The issues that seem obvious in hindsight were genuinely uncertain during development. Hindsight bias prevents us from studying why smart teams with good data still made those choices.

The Three Mechanisms of Memory Distortion

Hindsight bias doesn’t just make us think we predicted the past—it actively changes how we remember it. Understanding these mechanisms helps us recognize when we’re rewriting history:

1. Outcome Knowledge Contamination

Once you know what happened, it’s cognitively impossible to “unknow” it when evaluating past decisions. This isn’t a moral failing—it’s how memory works. Studies show that even when people are explicitly warned about hindsight bias and instructed to ignore outcome knowledge, they still can’t fully reconstruct their pre-outcome uncertainty.

When Quibi launched and shut down within six months in 2020, commentary was filled with “everyone knew short-form premium video wouldn’t work.” But before launch? The company raised $1.75 billion from sophisticated investors who clearly didn’t “know” it would fail. Outcome knowledge made the failure seem inevitable.

2. Sense-Making Compression

Our brains are wired to create coherent narratives. When we look back at a product decision, we unconsciously simplify the messy reality—the competing data points, the resource constraints, the organizational politics—into a clean story where the outcome follows logically from the beginning.

This compression erases the actual uncertainty we felt. What was genuinely a 60/40 decision gets remembered as a 90/10 decision once we know which way it went.

3. Ego Protection Mechanism

If the outcome was positive, we remember advocating for that direction. If negative, we remember being skeptical. This isn’t conscious lying—our memory genuinely rewrites itself to protect our self-concept as competent decision-makers.

Spotify’s eventual pivot to podcasting seems obvious now, but early discussions were filled with genuine uncertainty about whether it fit their core music identity. Many team members who initially questioned the strategy now remember being early supporters—not from dishonesty, but from hindsight bias reshaping memory.

Designing Bias-Resistant Retrospectives

Traditional retrospectives—sitting in a room and discussing what happened—are hindsight bias factories. We need systematic interventions to preserve learning:

The Pre-Mortem Practice

Before any major product decision or launch, conduct a structured pre-mortem. This is the most powerful hindsight bias prevention tool available.

How it works:

  1. Assume the product has completely failed six months from now

  2. Give each team member 10 minutes to write privately: “Here’s why it failed...”

  3. Share all predictions without debate

  4. Document everything in a locked file that can’t be edited

Why it works: Pre-mortems exploit hindsight bias in reverse—they force you to imagine failure as inevitable and work backward to causes. This surfaces concerns people are hesitant to voice during optimistic planning phases. When Amazon Web Services launches new features, teams conduct pre-mortems that stay sealed until retrospectives, creating an objective record of pre-launch uncertainty.

Critical detail: The pre-mortem document must be locked and timestamp-verified. If people can edit it later, hindsight bias will creep in through the back door.

The Prediction Registry

Create a simple system where team members record specific predictions before decisions:

Template:

  • Decision: What are we deciding today?

  • My prediction: What do I think will happen? (Include probability estimate)

  • Confidence level: How sure am I? (1-10 scale)

  • Key uncertainty: What could make me wrong?

  • Date: When will we know?

Store these in an append-only log that can be reviewed during retrospectives. At Shopify, product teams maintain prediction registries that are reviewed quarterly, creating an honest record of what people actually thought versus what they remember thinking.
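
One lightweight way to implement such a registry, sketched here under the assumption of a simple append-only JSON Lines file (the file name and field names mirror the template above and are not prescriptive):

```python
import json
from datetime import date, datetime
from pathlib import Path

REGISTRY = Path("prediction_registry.jsonl")  # append-only: never rewritten, only extended

def record_prediction(decision: str, prediction: str, confidence: int,
                      key_uncertainty: str, resolves_on: date) -> None:
    """Append one timestamped prediction; editing past entries is deliberately unsupported."""
    entry = {
        "recorded_at": datetime.now().isoformat(timespec="seconds"),
        "decision": decision,
        "prediction": prediction,
        "confidence_1_to_10": confidence,
        "key_uncertainty": key_uncertainty,
        "resolves_on": resolves_on.isoformat(),
    }
    with REGISTRY.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def read_registry() -> list[dict]:
    """Load the full log for review during a retrospective."""
    if not REGISTRY.exists():
        return []
    return [json.loads(line) for line in REGISTRY.read_text().splitlines() if line.strip()]

# Hypothetical example entry.
record_prediction(
    decision="Launch usage-based pricing in Q3",
    prediction="Conversion improves ~10%, churn unchanged",
    confidence=6,
    key_uncertainty="Enterprise buyers may prefer predictable invoices",
    resolves_on=date(2025, 12, 31),
)
print(read_registry()[-1])
```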

The Role Reversal Retrospective

Instead of asking “What happened?”, assign roles:

  • The Optimist: Argues why every decision we made was reasonable given what we knew

  • The Pessimist: Identifies genuine warning signs we overlooked (not invented after the fact)

  • The Archaeologist: Presents primary evidence from before the outcome (messages, documents, meeting notes)

  • The Fortune Teller: Explains what was genuinely unpredictable

This structure forces the team to distinguish between real foresight and hindsight distortion.

The Surprise Audit

Open retrospectives with: “What surprised us about this outcome?” If nothing surprised the team, hindsight bias is at work. Every product initiative contains some unpredictable elements. If your retrospective lacks surprise, you’re remembering a simplified history, not learning from a complex reality.

The Documentation Defense System

Hindsight bias thrives in the absence of evidence. Combat it with strategic documentation:

Before Launch:

  • Decision logs explaining why you chose option A over option B

  • Risk assessments with probability estimates

  • Team member predictions (sealed and dated)

  • Alternative approaches you considered and rejected

During Execution:

  • Weekly conviction checks: “How confident are we now?” (1-10)

  • Assumption tests: “What would have to be true for this to succeed?”

  • Uncertainty logs: “What don’t we know yet?”

In Retrospectives:

  • Open the sealed pre-mortem first

  • Review decision logs before discussing outcomes

  • Compare initial risk assessments to actual risks encountered

  • Identify which predictions proved accurate and why

When Microsoft launched the Surface tablet, they maintained extensive decision documentation. Years later, when retrospectives examined early model struggles, teams could review authentic uncertainty about enterprise versus consumer positioning—rather than invented narratives about “obvious” strategic errors.

The Language of Uncertainty

Hindsight bias is amplified by how we talk about the past. Shift your retrospective language:

Replace: “It was obvious that...” With: “Given what we know now, we can see that...”

Replace: “We should have known...” With: “With the information we had then, we believed...”

Replace: “I predicted this would happen...” With: “Here’s what I wrote in the pre-mortem: [actual quote]”

Replace: “The data clearly showed...” With: “The data we had was ambiguous—here’s the full picture we were working with...”

This isn’t pedantic language policing—it’s preserving epistemic humility. Every word choice either reinforces or challenges hindsight bias.

When Hindsight Bias Is Actually Useful

Here’s the counterintuitive insight: hindsight bias exists for a reason. It helps us construct causal narratives and identify patterns. The goal isn’t to eliminate it—that’s impossible—but to harness it productively.

Use hindsight constructively:

  • After identifying what “should have been obvious,” ask: “What systems would help us notice this type of signal next time?”

  • Transform “I told you so” into “Here’s what helped me see this coming—can we systematize that intuition?”

  • Use outcome knowledge to refine your prediction models, not to blame past decisions

Example: After Slack’s explosive growth, many said “obviously workplace chat would win.” The useful question isn’t whether it was obvious—it’s “What specific indicators in early traction data predicted massive adoption that we can watch for in future products?”

The Accountability Paradox

Here’s where things get philosophically thorny: organizations need accountability, but hindsight bias makes fair accountability nearly impossible. If we can’t accurately remember what we knew when, how do we evaluate past decisions?

The Solution: Judge decisions by process quality, not outcome quality.

Bad Accountability Question: “Why did you launch a product that failed?”

Good Accountability Question: “Did you follow our decision-making process? Did you test key assumptions? Did you identify and monitor risks?”

Amazon’s principle of “disagree and commit” works partly because it separates decision-making process from outcome. Leaders are accountable for making well-reasoned decisions with available data, not for being omniscient about future outcomes.

Building a Learning Culture Beyond Hindsight

The deepest problem with hindsight bias isn’t that it distorts memory—it’s that it prevents genuine learning. When failure seems obvious in retrospect, we learn the wrong lesson: “We should have been smarter.” The right lesson is usually: “The situation was genuinely uncertain, and here’s how we can make better decisions under uncertainty.”

Cultural shifts that combat hindsight bias:

  1. Celebrate Updated Beliefs: Reward people for changing their minds when evidence emerges, not for “being right all along”

  2. Normalize Uncertainty: Make it safe to say “I don’t know” and “I was wrong” without career consequences

  3. Document Disagreement: When smart people disagree before a decision, record those disagreements. They’re valuable even if one view wins out

  4. Probabilistic Thinking: Talk about decisions in terms of probability ranges, not binary right/wrong outcomes

Stripe’s engineering culture includes “reversal documents”—write-ups when someone changes their technical opinion. These documents celebrate intellectual honesty over stubborn consistency.

Your Implementation Challenge

In your next retrospective, try this exercise:

Before discussing what happened, have each team member privately write: “Here’s what genuinely surprised me about this outcome.” Then compare surprises. If your team has zero collective surprise, you’re not learning—you’re reconstructing history.

Then ask: “What did our pre-launch predictions actually say?” Pull up the real documents. Compare authentic predictions to hindsight-influenced memory.

The gap between what you predicted and what you remember predicting? That’s the tax hindsight bias charges on organizational learning.

The best product teams aren’t those who were always right. They’re teams who remember accurately what they didn’t know, learn genuinely from uncertainty, and resist the seductive comfort of “I told you so.”

Because the only thing more dangerous than making mistakes is believing you never made them at all.



🍪 The Decoy Effect: Using Strategic Pricing Tiers to Guide User Decisions

How Adding an Option Nobody Wants Makes Everyone Choose What You Want

You’re buying a coffee. The menu shows three sizes: Small ($3), Medium ($4), Large ($5). Most customers choose Medium—it feels like the rational middle ground. Now imagine the menu shows: Small ($3), Medium ($6.50), Large ($7). Suddenly, Large looks like incredible value. Same large coffee, same $7 price, but now 80% of customers choose it instead of 30%.

What changed? A strategically placed decoy.

The Decoy Effect (also called asymmetric dominance) is a cognitive bias where introducing a third, strategically inferior option changes the relative attractiveness of the other two options. The decoy isn’t meant to be purchased—it’s meant to make your target option look better by comparison. In behavioral economics, it’s one of the most reliable ways to influence choice without changing the actual options people want.

Dan Ariely’s classic experiment demonstrated this perfectly. When he showed MIT students two subscription options for The Economist—web-only for $59 or print-and-web for $125—only 32% chose print-and-web. But when he added a decoy (print-only for $125), suddenly 84% chose print-and-web. The decoy option made the bundle look like a no-brainer, even though the decoy itself received zero purchases.

For product teams, this isn’t manipulation—it’s architecture. We’re not creating desire where none exists. We’re clarifying value in a world where customers struggle to evaluate abstract software benefits.

The Psychology of Relative Evaluation

Here’s the uncomfortable truth: humans are terrible at absolute valuation but excellent at relative comparison. When you ask someone “Is this product worth $49/month?” they struggle. But when you ask “Which of these three options offers the best value?” they become confident decision-makers.

Think of it like this: your brain isn’t a calculator—it’s a comparison engine. It doesn’t compute absolute worth; it identifies which option dominates which. The decoy effect exploits this by creating a clear dominance relationship.

How it works in the brain:

  1. Comparative Ease: When faced with choices, we look for clearly “better” options where one dominates another on most attributes

  2. Justification Need: We need to rationalize our choices to ourselves and others; decoys provide easy justification

  3. Value Anchoring: The decoy sets a reference point that makes the target option look like a bargain

When Spotify positions its pricing, they’re not just listing plans—they’re architecting a decision environment. Individual ($9.99), Duo ($12.99), Family ($15.99), Student ($4.99). The Duo plan serves partly as a decoy, making Family look like amazing value per person while making Individual seem reasonable for solo users.

The Three Types of Decoys That Shape Product Choices

Not all decoys work the same way. Understanding the mechanics helps you deploy them effectively:

1. The Asymmetrically Dominated Decoy

This is the classic decoy: clearly inferior to your target option but similar in form.

Example Structure:

  • Basic Plan: $10/month, 5 users

  • Professional Plan: $30/month, 15 users, advanced features (target option)

  • Business Plan: $50/month, 20 users, all features

Wait—where’s the decoy? Add this:

  • Team Plan: $45/month, 15 users, advanced features (the decoy)

Team Plan is clearly dominated by Professional (same features, higher price) but makes Business look like only $5 more for significant additional value. Professional becomes more attractive, and Business becomes the “smart upgrade.”
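
The dominance relationship is easy to check mechanically. Below is a minimal sketch using the hypothetical plans above; it flags a plan as dominated when another plan is at least as good on every attribute and strictly better on at least one.

```python
# Each plan: monthly price in USD, seats included, and whether it has advanced features.
plans = {
    "Basic":        {"price": 10, "seats": 5,  "advanced": False},
    "Team":         {"price": 45, "seats": 15, "advanced": True},   # the decoy
    "Professional": {"price": 30, "seats": 15, "advanced": True},   # the target
    "Business":     {"price": 50, "seats": 20, "advanced": True},
}

def dominates(a: dict, b: dict) -> bool:
    """True if plan a is at least as good as plan b everywhere and strictly better somewhere."""
    at_least_as_good = (a["price"] <= b["price"] and a["seats"] >= b["seats"]
                        and a["advanced"] >= b["advanced"])
    strictly_better = (a["price"] < b["price"] or a["seats"] > b["seats"]
                       or a["advanced"] > b["advanced"])
    return at_least_as_good and strictly_better

for name_a, a in plans.items():
    for name_b, b in plans.items():
        if name_a != name_b and dominates(a, b):
            print(f"{name_a} dominates {name_b}")  # expected output: Professional dominates Team
```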

2. The Compromise Decoy

Position your target option as the balanced middle choice between two extremes.

Example Structure:

  • Starter: $15/month, limited features

  • Professional: $49/month, full features (target)

  • Enterprise: $299/month, full features plus dedicated support

The high-priced Enterprise isn’t really meant to sell—it makes Professional look moderate and reasonable. Humans naturally gravitate toward middle options when anchored by extremes. This is why restaurant menus often list one outrageously expensive wine—it makes the second-most-expensive wine look sensible.

3. The Ratio Decoy

Create pricing where the per-unit value changes dramatically between tiers.

Example Structure:

  • Individual: $10/month per user

  • Small Team (5 users): $40/month ($8 per user) (decoy)

  • Team (10 users): $70/month ($7 per user) (target)

The Small Team plan makes per-user costs salient, and Team suddenly looks like you’re “saving” money by buying more seats—even if you don’t need them yet.

The Pricing Tier Architecture Framework

Building effective pricing with strategic decoys requires systematic thinking. Here’s how to architect your tiers:

Step 1: Identify Your Target Tier

Which plan do you want most customers to choose? This isn’t always the most expensive—it’s the one that optimizes for your business model and customer success.

Questions to ask:

  • Which tier has the best unit economics?

  • Which tier drives best long-term retention?

  • Which tier creates the most upgrade momentum?

  • Which tier does your ideal customer profile need?

For SaaS products with high support costs, the target might be the middle tier that provides enough value without requiring extensive hand-holding.

Step 2: Define Clear Value Jumps

Each tier must offer obviously more value than the previous tier. Avoid “mushy middle” where it’s unclear why someone would upgrade.

Good value jumps:

  • Features that unlock new use cases (not just “more” of something)

  • Threshold changes (limited → unlimited, basic → advanced)

  • Access level changes (self-serve → support, single-user → team)

Bad value jumps:

  • Arbitrary number increases (5 projects vs. 8 projects—why 8?)

  • Features customers don’t understand the value of

  • Incremental improvements that don’t change behavior

Step 3: Position Your Decoy

The decoy should be:

  • Similar enough to the target that comparison is natural

  • Obviously inferior on at least one key dimension

  • Slightly more expensive than seems justified by the value

Placement strategies:

For upselling: Place the decoy between your target and the tier above it, making upgrading seem like better value.

For anchoring: Place the decoy tier far above your target, making the target look moderate.

For justification: Make the decoy obviously irrational, so choosing the target feels smart.

Step 4: Test the Decision Flow

Before launching, walk through the customer journey:

  • How do they encounter pricing?

  • What comparison do they make first?

  • Does the decoy create the intended contrast?

  • Can they easily justify their choice to themselves?

HubSpot famously redesigned their pricing page multiple times, A/B testing how decoy positioning affected plan selection. Their ultimate structure uses Enterprise as both a legitimate product and a decoy that makes Professional look like the sweet spot.

Real-World Decoy Engineering

Let’s examine how successful products use decoys:

Apple’s iPhone Storage Tiers

  • 128GB: $799

  • 256GB: $899 (target)

  • 512GB: $1099

The 128GB isn’t really a decoy—it’s entry-level. But notice the jump to 256GB is only $100 for double storage, while 256GB to 512GB is $200 for double storage. The 512GB tier is the decoy, making 256GB look like optimal value-per-gigabyte.

The Economist’s Famous Mistake (Then Correction)

When Ariely exposed their accidental decoy, The Economist didn’t remove it—they refined it. Their pricing now uses digital-only as an intentional low anchor, making the bundle seem premium but accessible.

Slack’s Pricing Evolution

Slack’s early pricing had: Free, Standard ($6.67/user), Plus ($12.50/user). The Plus plan was the decoy, making Standard seem reasonable. As they matured, they added Enterprise Grid (custom pricing), which serves as an expensive anchor making Plus look like a smart upgrade from Standard.

Implementation Tactics and Testing Protocol

Ready to add decoys to your pricing? Here’s your deployment checklist:

Phase 1: Preparation (Week 1)

  • Document current plan selection distribution

  • Interview recent customers about their tier decision

  • Map out which tier you want to optimize for

  • Design 3-5 decoy variations

Phase 2: Design Testing (Week 2-3)

  • Create mockups of pricing pages with different decoy positions

  • Run fake door tests or user testing sessions

  • Ask participants to “think aloud” while choosing

  • Measure: time to decision, confidence level, tier selected

Phase 3: Limited Launch (Week 4-6)

  • A/B test on 20% of traffic

  • Track: conversion rate, plan selection distribution, revenue per customer

  • Monitor support tickets for confusion about pricing

  • Interview customers who chose each tier

Phase 4: Optimization (Week 7+)

  • Iterate based on data

  • Test different decoy positions

  • Adjust feature packaging if needed

  • Monitor long-term retention by tier

Critical metrics to watch:

  • Overall conversion rate (are you losing people in analysis paralysis?)

  • Target tier selection rate (is the decoy working?)

  • Revenue per customer (are people upgrading or downgrading?)

  • Time to decision (faster usually means clearer value)
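
For illustration only, here is a minimal sketch (with invented signup records and visitor counts) of computing conversion rate, plan selection distribution, and revenue per customer for each test variant.

```python
from collections import Counter

# Hypothetical signups observed during the 20% A/B test traffic.
signups = [
    {"variant": "with_decoy", "plan": "Professional", "monthly_value": 30},
    {"variant": "with_decoy", "plan": "Business",     "monthly_value": 50},
    {"variant": "control",    "plan": "Basic",        "monthly_value": 10},
    {"variant": "control",    "plan": "Professional", "monthly_value": 30},
]
visitors = {"with_decoy": 180, "control": 175}  # pricing-page visitors per variant

for variant, seen in visitors.items():
    converted = [s for s in signups if s["variant"] == variant]
    conversion_rate = len(converted) / seen
    plan_mix = Counter(s["plan"] for s in converted)
    revenue_per_customer = sum(s["monthly_value"] for s in converted) / max(len(converted), 1)
    print(f"{variant}: conversion {conversion_rate:.1%}, mix {dict(plan_mix)}, "
          f"revenue/customer ${revenue_per_customer:.2f}")
```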

The Ethical Boundaries of Decoy Pricing

Let’s address the elephant: is using decoys manipulative?

It’s ethical when:

  • All options provide genuine value at fair prices

  • The decoy exists as a real option some customers might legitimately want

  • You’re clarifying value, not obscuring it

  • Customers end up with a product that serves their needs

It crosses the line when:

  • The decoy is intentionally confusing to generate bad decisions

  • You’re hiding information customers need

  • The “value” you’re highlighting is fabricated or misleading

  • You’re exploiting cognitive biases to sell people products they don’t need

Think of decoys like architectural lighting in a museum. The lighting guides visitors’ attention to certain pieces, but the art itself is authentic. You’re curating the decision experience, not manipulating the decision.

Netflix’s testing showed that when they removed all decoys and just listed plans by features, decision time increased by 40% and abandonment increased. The decoys weren’t tricking people—they were helping people make decisions in the overwhelming abundance of choices.

Common Decoy Design Mistakes

Even with good intentions, teams often misapply the decoy effect:

Mistake #1: Too Many Decoys. Having 5-7 pricing tiers usually just creates confusion. Stick to 3-4 tiers with one strategic decoy.

Mistake #2: Obvious Uselessness. If the decoy is so bad nobody would ever consider it even for a second, it won’t create useful contrast. The decoy should be almost-reasonable.

Mistake #3: Wrong Domination Relationship. The decoy must be clearly inferior to your target option but not to all options. Get the asymmetry wrong and you just confuse people.

Mistake #4: Changing Too Frequently. Constant pricing changes train customers to wait for better deals. Decoys should be stable architectural elements, not promotional tactics.

Mistake #5: Ignoring Customer Segments. Different segments may perceive decoys differently. What works as a decoy for startups might be the target tier for enterprises.

Beyond Pricing: Decoys in Product Decisions

The decoy effect isn’t just about pricing—it shapes how users evaluate features, plans, and product options throughout their journey.

Feature Adoption: When rolling out a new feature, position it between a familiar feature and a more complex feature. The familiar feature makes it feel accessible; the complex feature makes it feel sophisticated.

Plan Upgrades: When prompting upgrades, show three options: stay on current plan, mid-tier upgrade (decoy), premium upgrade (target). The decoy makes the premium upgrade feel like “going all the way.”

Onboarding Paths: Offer three setup options: Quick (limited), Standard (decoy), Custom (target). The decoy makes Custom feel thorough rather than overwhelming.

The Meta-Principle: Choice Architecture Matters

Here’s what the decoy effect teaches us about product design: there’s no such thing as “neutral” presentation of options. Every choice architecture—even one that tries to be neutral—shapes decisions.

The question isn’t whether to influence choice; it’s whether to influence choice thoughtfully and ethically. Random pricing structures still create decoys and anchors—they’re just accidental ones that don’t serve anyone well.

Your implementation challenge: Review your current pricing tiers. For each tier, ask: “What is this tier’s job?” If the answer is “to be purchased,” that’s fine. But you probably need one tier whose job is “to make another tier look better.”

Then test it. But don’t just track which tier sells most—track whether customers feel confident in their decision. The best decoy makes choosing feel easy, not tricky.

Because at its heart, the decoy effect isn’t about tricking people into spending more. It’s about removing the paralysis of abstract valuation and replacing it with the clarity of concrete comparison.

Sometimes, the kindest thing we can do is make the choice obvious.



🔥 MLA week #30

The Minimum Lovable Action (MLA) is a tiny, actionable step you can take this week to move your product team forward—no overhauls, no waiting for perfect conditions. Fix a bug, tweak a survey, or act on one piece of feedback.

Why does it matter? Culture isn’t built overnight. It’s the sum of consistent, small actions. MLA creates momentum—one small win at a time—and turns those wins into lasting change. Small actions, big impact.

MLA: Lunch & Learn with Engineering

Why This Matters:

Product managers make better decisions when they understand the technical landscape—not just what’s possible, but why certain approaches are faster, more scalable, or more maintainable. Too often, PMs and engineers speak different languages, leading to misaligned expectations, frustration, and suboptimal solutions. A casual lunch where engineers share technical concepts in a no-pressure environment builds mutual understanding, reduces friction in future technical discussions, and demonstrates that you value their expertise beyond just “can we ship this by Friday?”

How to Do It:

Choose the Right Topic:

Select a technical concept that impacts product decisions but often feels like a black box:

  • Technical debt and its real impact on velocity

  • System architecture and why refactoring matters

  • Performance optimization fundamentals

  • Security considerations that affect features

  • Testing strategies and quality assurance

  • The CI/CD pipeline and deployment process

Pick the Right Engineer(s):

Look for someone who:

  • Is passionate about the topic and enjoys explaining things

  • Has credibility on the team

  • Can translate technical concepts to non-technical audiences

  • Might not typically get spotlighted in product discussions

Frame the Invitation:

Make it informal and curiosity-driven:

“I’d love to learn more about [topic] over lunch—nothing formal, just help me understand it better so I can make smarter product decisions. Can you walk me through the basics and answer my questions?”

Set the Right Environment:

  • Keep it small (3-6 people max) to encourage questions

  • Provide lunch—make it feel like an appreciative gesture, not another meeting

  • Choose a relaxed setting (break room, outdoor space, or casual conference room)

  • No presentations required—whiteboard discussions are perfect

  • Block 45-60 minutes

Prepare Your Mindset:

  • Come with genuine curiosity, not an agenda

  • Ask “why” and “how” questions, not “when can we...”

  • Take notes—show you’re learning

  • Acknowledge what you don’t understand

  • Share how this knowledge will influence product thinking

During the Session:

  • Let the engineer lead the conversation

  • Ask clarifying questions: “Can you give me an example?” or “What would happen if...?”

  • Connect technical concepts to product decisions: “So when we prioritize X, we’re actually choosing Y trade-off?”

  • Invite other attendees to share their perspectives

Follow Up:

After the lunch:

  • Send a thank you message highlighting what you learned

  • Share one specific insight with the broader team (crediting the engineer)

  • Apply the knowledge in an upcoming product discussion and reference what you learned

  • Ask if they’d be open to doing another session on a different topic

Expected Benefits:

Immediate Wins:

  • Gain practical knowledge that improves your next technical discussion

  • Build rapport with engineering team members outside of sprint planning

  • Identify constraints and opportunities you didn’t previously understand

Relationship/Cultural Improvements:

  • Engineers feel valued for their expertise, not just their output

  • Create psychological safety for “I don’t understand” conversations

  • Break down the “PM vs. Engineering” dynamic into collaborative partnership

  • Model continuous learning for your team

Long-Term Organizational Alignment:

  • Make more informed product decisions that respect technical realities

  • Reduce back-and-forth during technical estimation and planning

  • Create a culture where cross-functional learning is normalized

  • Build trust that pays dividends during difficult prioritization conversations


Let us know how it went and what technical mysteries you unlocked! Use the hashtag #MLAChallenge to share your story. Let’s inspire each other to build bridges across disciplines!



📝 The Innovation Paradox: Why Your Metrics Might Be Killing Your Product’s Future

Let me tell you about a conversation that changed how I think about product management forever.

I was sitting across from a brilliant PM at a major tech company—let’s call her Ania—who looked absolutely exhausted. Not the “I pulled an all-nighter” kind of tired. The deeper kind. The “I’m questioning my entire career” exhaustion.

“I know exactly what we should build,” she told me, staring into her coffee. “Our customers are screaming for it. The data’s crystal clear. But I can’t touch it.”

“Why not?” I asked.

“Because it won’t move my quarterly revenue number.”

And there it was. The invisible cage that’s trapping some of the smartest product people in tech.

The Seductive Trap of “Strategic” Accountability

Here’s what sounds perfectly reasonable on paper: Product Managers should own
