💜 PRODUCT ART 💜

The Comfort Trap: Why Being Valued at Your Company Doesn’t Mean You’re Valuable in the Market | The Psychology Behind B2B Buying: What Kahneman, Cialdini, and Pink Teach Us About Personas

Issue #226

Destare Foundation, Alex Dziewulska, Sebastian Bukowski, and 3 others
Nov 18, 2025

In today's edition, among other things:

💜 The Comfort Trap: Why Being Valued at Your Company Doesn’t Mean You’re Valuable in the Market - Editor’s note (by Alex Dziewulska)

💜 The Psychology Behind B2B Buying: What Kahneman, Cialdini, and Pink Teach Us About Personas (by Alex Dziewulska)

💪 Interesting opportunities to work in product management

🍪 Product Bites - small portions of product knowledge

📚 Monthly Book Club for Product Managers

🔥 MLA week #33

Join Premium to get access to all content.

It will take you almost an hour to read this issue. Lots of content (or meat)! (For vegans - lots of tofu!).

Grab a notebook 📰 and your favorite beverage 🍵☕.

DeStaRe Foundation

Editor’s Note by Alex 💜

The Comfort Trap: Why Being Valued at Your Company Doesn’t Mean You’re Valuable in the Market

I need to tell you about something that’s been keeping me up at night.

Over the past few years, I’ve had the privilege of mentoring some genuinely exceptional product managers. Smart people. Talented people. The kind of PMs you’d want on your team in a heartbeat.

And I’ve watched several of them get absolutely blindsided by layoffs.

Not because they weren’t good. They were great—at least according to their companies. Promotions, raises, glowing reviews, the whole package. But when the layoffs came and they suddenly found themselves on the job market? They couldn’t land a single interview.

That’s when I realized we have a massive problem in our profession that nobody’s talking about.

Let me introduce you to someone I’ll call Ewa. Eight years at the same company. Three promotions. Led major product initiatives. Leadership loved her. Her team respected her. By every metric that mattered inside her company, she was crushing it.

Then came the layoffs. Not performance-based—just the usual “we need to make the numbers work for investors” situation. Ewa figured she’d land somewhere new pretty quickly. I mean, look at her track record, right?

Three months later, she was still job hunting. Six months in, she was starting to panic.

What happened?

Here’s what I discovered when we dug into it together: Ewa had become incredibly good at succeeding in one very specific environment. She knew her company’s culture inside and out. She could navigate the politics. She understood exactly how to get things approved and shipped.

But when interviewers started asking about continuous discovery, or how she validated assumptions before building, or her approach to outcome-based roadmapping... crickets.

Her company didn’t do any of that. They talked about being “product-led,” but really? They were a software delivery shop with product managers coordinating backlogs and managing stakeholder expectations.

Ewa had optimized perfectly for success in that environment. The problem? That environment was the only place where those optimizations mattered.

This is where it gets really interesting—and a bit uncomfortable.

Your brain is literally designed to keep you in situations that feel safe, even when those situations are slowly killing your career. Daniel Kahneman’s research on loss aversion shows we’re wired to prefer the certainty of what we know over potentially better alternatives. That comfortable role you’re in? Your brain treats it as the default, and it takes overwhelming evidence of danger before you’ll seriously consider leaving.

Think about it: every year you stay, you accumulate more company-specific knowledge. Every promotion raises the stakes. Every salary bump makes leaving feel like accepting a loss. And because we feel losses about 2.5 times more intensely than equivalent gains, staying put always feels safer than it actually is.

Carol Dweck’s work on growth mindset reveals another trap. When your company consistently rewards you, your brain can slip into thinking your current capabilities are sufficient. Growth becomes optional. You start believing your value is inherent rather than developed.

Meanwhile, out in the market? Everything’s evolving. New frameworks are emerging. Best practices are shifting. And you’re still perfecting skills that only matter within your company’s walls.

Here’s a sobering stat: professional skills have a half-life of roughly 5 years. Half of what you know becomes obsolete or less relevant every five years. For PMs in fast-moving tech? That timeline’s even shorter.
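
Taken at face value, that half-life claim implies exponential decay of skill relevance. A toy sketch (the five-year figure is the article's; the formula is just the standard half-life curve, not a real model of careers):

```python
def skill_relevance(years: float, half_life: float = 5.0) -> float:
    """Fraction of today's skills still market-relevant after `years`,
    assuming a five-year half-life and simple exponential decay."""
    return 0.5 ** (years / half_life)

# After 5 years: 50% remains; after 10 years: only 25%.
for t in (2, 5, 10):
    print(f"After {t} years: {skill_relevance(t):.0%} of your skills still current")
```

The point of the curve: the decay never pauses, so a year without deliberate learning is a year of silent depreciation.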

If you’re not actively developing new capabilities, you’re not standing still. You’re falling behind.

I want to talk about something organizational behavior researchers call “competency traps”—because I see this pattern everywhere.

You become the expert in your company’s specific way of doing things. You know the politics, the processes, the unwritten rules. That expertise is genuinely valuable—inside those walls.

Strip away that context though? Suddenly you’re not an expert. You’re someone who knows one way of doing things, sitting across from interviewers who want breadth, adaptability, and modern best practices.

I’ve sat in on mock interviews where this plays out painfully:

Interviewer: “Walk me through your discovery process.”

PM: “Well, we gather requirements from stakeholders, analyze what competitors are doing, and document features in our backlog...”

Interviewer: “How do you validate you’re solving the right problem before building?”

PM: [Long pause] “We... we get approval from leadership.”

That PM wasn’t incompetent. They were executing exactly what their company rewarded. But the market had moved on.

Amy Edmondson’s research on psychological safety reveals an unexpected irony here: environments that make you feel safe and valued can inadvertently prevent the discomfort necessary for growth. When you’re comfortable, praised, promoted—your brain interprets that as confirmation you’re doing everything right.

The learning stops. The growth stalls. And you don’t even notice because all the internal signals say you’re succeeding.

The 2022-2024 tech layoffs have been brutal. But they’ve also been revealing in a way previous downturns weren’t.

These weren’t performance-based cuts. Companies eliminated entire teams, high performers included. The logic was purely financial: appease investors, adjust to new realities, correct for over-hiring.

What this means for you? Performance doesn’t protect you anymore. Being good at your job doesn’t guarantee your job exists tomorrow.

I’ve had variations of this conversation more times than I can count:

Week 1: “I can’t believe they cut me. I was just promoted. My reviews were excellent.”

Week 2: “I’ll land something quickly. My track record speaks for itself.”

Week 3-8: “Why aren’t my interviews converting? The feedback’s so vague. These job requirements seem impossibly broad compared to what I actually do...”

Week 9+: “I’m starting to realize... I don’t know how to do continuous discovery. I’ve never run a proper A/B test. I can’t speak credibly about OKRs because we never used them. Everything in my portfolio is internal work I can’t share details about.”

The PMs who bounce back quickly? They’ve been learning continuously whether or not their company required it. They can demonstrate modern practices even if their company didn’t use them. They have external validation through writing, speaking, community involvement. They’ve stayed market-aware.

The ones who struggle? They optimized perfectly for one specific environment. And that environment just disappeared.

Look, I know what you’re thinking. “Work isn’t my life. I have a job. They pay me. I don’t owe them my nights and weekends learning new frameworks.”

You’re absolutely right.

And you’re also completely missing the point.

You don’t owe your company your growth. But you owe yourself your survival.

When you invest in continuous learning, you’re not doing your company a favor. You’re building insurance against disruption. You’re creating options. You’re making sure that when—not if—your current situation changes, you’re not scrambling to catch up on five years of industry evolution.

Think about it differently: your company is temporary. Your skills are portable. Your current role will end eventually—through layoffs, company failure, restructuring, or your own choice to leave.

The only question is whether you’ll be ready when that moment comes.

This isn’t about working 80-hour weeks or sacrificing your life for your career. It’s about intentionality.

Let me share what the PMs I’ve mentored who’ve built real career resilience actually do:

They treat themselves as a portfolio, not an employee

Stop thinking of yourself as “Acme Corp’s Senior PM.” Start thinking of yourself as a product professional who happens to currently work at Acme Corp. This mental shift changes everything.

Ask yourself quarterly:

  • What am I learning that’s valuable beyond this company?

  • Could I explain my impact to someone who doesn’t know our internal systems?

  • Am I developing skills that represent current market best practices?

They establish small, consistent learning rituals

You don’t need 20 hours a week. You need consistency. James Clear’s work on habits shows that small, regular behaviors compound far more effectively than sporadic bursts of intense effort.

The successful PMs I know dedicate roughly 2-3 hours per week. That’s it. But they do it every week:

  • Weekly: Read one article from a product thought leader and try one technique from it

  • Bi-weekly: Attend a product meetup or webinar, even virtual

  • Monthly: Coffee chat with a PM from a different company—not networking, learning

  • Quarterly: Complete one focused course in a skill gap they’ve identified

They practice modern frameworks even when their company doesn’t use them

This is the game-changer. You can learn continuous discovery, OKRs, hypothesis-driven development—regardless of what your company practices.

One PM I worked with started documenting her projects using opportunity solution trees privately. Just for herself. Her company didn’t use them, but when she interviewed elsewhere six months later, she could speak fluently about the framework and show examples. It made all the difference.

Another started running tiny experiments within his current role. Nothing formal. Just testing assumptions before building. Built the muscle memory. When he needed to interview, he had real stories to tell.

They maintain market awareness through regular reality checks

The PMs who get blindsided are the ones who haven’t looked at job postings, interviewed elsewhere, or talked to recruiters in years. They have no idea what the market values versus what their company values.

Try this:

  • Review 10 PM job postings monthly to see what skills keep appearing

  • Take one exploratory interview per year, even when you’re happy (treat it as market research)

  • Join PM communities to understand what others are working on

  • Track which skills appear in job descriptions versus which you actually use

This isn’t disloyalty. It’s being informed.

They build external credibility independent of their company

When you’re laid off, you can’t use your internal reputation. You need external proof:

  • Write Medium posts or LinkedIn articles about what you’re learning

  • Speak at local meetups or webinars

  • Contribute to open-source or nonprofit projects

  • Document case studies of your approach (anonymized if needed)

None of this requires your company’s permission. All of it makes you more valuable.

I’m not trying to scare you. Well, maybe a little. But mostly I’m trying to wake you up.

The product management profession is going through something right now. We’re seeing massive layoffs alongside desperate hiring for PMs with specific modern skills. There’s no shortage of PMs. There’s a shortage of PMs who’ve kept their skills current.

Companies will keep laying off teams for financial reasons unrelated to performance. The market will keep evolving faster than any single company’s practices. AI will keep disrupting how we work.

The only constant is change.

I’ve watched exceptional PMs—people I genuinely admire—struggle for months to find roles because they got too comfortable. And I’ve watched good-but-not-great PMs land quickly because they never stopped learning.

You have a choice right now. Keep optimizing for success in your current environment, hoping stability lasts. Or start building genuine career resilience.

Your company might love you today. The market doesn’t care about your company’s opinion. It only cares about what you can demonstrably do.

What I Want You to Do This Week

Not next month. This week.

Pick one thing:

  1. Look at five PM job postings and identify one skill that keeps appearing that you’re weak in

  2. Read one article by Teresa Torres or Marty Cagan and try applying one concept to your current work

  3. Join one PM community

  4. Reach out to one PM at a different company and ask them about how they approach discovery

  5. Start documenting one of your current projects using a modern framework, even if just for yourself

Just one. This week.

Because comfort feels like success right up until the moment it becomes failure.

The PMs who thrive through the next decade won’t be the most talented. They’ll be the most adaptable. The most curious. The ones who treated their career development as seriously as their current job performance.

Which kind of PM will you be?


PS. Here’s the thing about continuous learning—it’s not something you have to do alone. And honestly? It shouldn’t be.

I’ve been thinking a lot about how we can support each other as a community. How we can create spaces where learning modern frameworks doesn’t feel like homework, but like hanging out with people who get it. Who understand what you’re going through.

That’s why I’m offering free training on product discovery using the Lean Inception method for our community. It’s happening on December 12th, 2025, and I’ll share the details soon.

Why Lean Inception? Because it’s one of those frameworks that bridges the gap between where many teams are (requirements gathering, feature planning) and where they need to be (collaborative discovery, validated learning). It’s practical. It’s something you can try at work the next day. And honestly? It’s the kind of skill that shows up in those job postings we talked about earlier.

But more than that, it’s a chance to practice together. To ask questions. To work through what these modern approaches actually look like in practice, not just in theory.

Because that’s what I’ve learned from mentoring PMs like Ewa—the ones who bounce back fastest aren’t necessarily the smartest or most experienced. They’re the ones who stayed connected to a learning community. Who kept showing up. Who weren’t afraid to admit they didn’t know something and wanted to learn it.



💪 Product job ads from last week

Do you need support with recruitment, career change, or building your career? Schedule a free coffee chat to talk things over :)

  1. Product Manager - PZU

  2. Senior Product Manager - InPost

  3. Product Manager - Vonage

  4. Senior Product Manager - Sigma Software Group

  5. Product Manager - mBank



🍪 Product Bites (3 bites 🍪)

🍪 The Spotlight Effect: Why Product Teams Overestimate User Attention

When Your Most Important Feature Becomes Everyone’s Blind Spot

Have you ever launched a feature you were certain would revolutionize your product, only to discover weeks later that 80% of your users hadn’t even noticed it existed? You’re not alone. The Spotlight Effect—a cognitive bias where we overestimate how much attention others pay to our appearance and behavior—doesn’t just affect teenagers worried about a bad hair day. It’s systematically destroying product teams’ ability to understand how users actually experience their products.

In product development, the Spotlight Effect manifests as a dangerous assumption: we believe users care about our product as much as we do. We imagine them studying every pixel, exploring every feature, and appreciating every clever interaction we’ve designed. The reality? Your users spend less time thinking about your product in a month than you do before your morning standup.

The Anatomy of Product Blindness

The Spotlight Effect in product development operates through three interconnected mechanisms that amplify each other in destructive ways.

The Curse of Knowledge compounds the problem. You’ve spent six months designing that new dashboard. You know every widget, every data point, every interaction pattern by heart. Your brain literally cannot simulate what it’s like to encounter this dashboard for the first time. You’ve lost the ability to see your product through fresh eyes, and the Spotlight Effect convinces you that users will somehow intuit all the context you’ve accumulated.

The Attention Asymmetry reveals the brutal truth about user engagement. While your product team obsesses over every detail, your users are juggling 47 browser tabs, three messaging apps, and a conference call while trying to accomplish one specific task in your product. They’re not admiring your elegant information architecture—they’re frantically searching for the “Submit” button so they can move on to their next task.

The Feature Inflation Paradox emerges from this disconnect. Teams interpret low feature adoption not as evidence that users don’t notice features, but as proof that the features aren’t prominent enough. So they make them bigger, add more notifications, create elaborate onboarding tours—anything to force users to pay attention. This only creates more noise, making it even harder for users to find what they actually need.

The Hidden Cost of Assumed Attention

Slack learned this lesson the hard way in 2019 when they redesigned their sidebar navigation. The product team had spent months crafting what they believed was a more intuitive information hierarchy. They were so confident in the design’s clarity that they provided minimal migration guidance, assuming users would naturally understand the improvements.

The backlash was immediate and fierce. Users couldn’t find basic functions they’d relied on for years. What seemed like obvious improvements to the product team were incomprehensible changes to users who had developed muscle memory around the old design. Slack had fallen victim to the Spotlight Effect, assuming users would invest the cognitive effort to understand and appreciate their design decisions.

Netflix, on the other hand, has built an empire on accepting that users pay minimal attention. Their “Skip Intro” button appears exactly when users want it—not because they conducted extensive user research, but because they accepted a simple truth: users don’t care about opening credits the way content creators do. They measured actual behavior, not imagined attention, and discovered that 15% of viewing time was being spent on content users actively wanted to skip.

The data on user attention is sobering. Eye-tracking studies show that users spend an average of 10-20 seconds on a homepage before deciding whether to stay or leave. During feature onboarding, completion rates drop by 25% with each additional step. Microsoft found that 90% of features in their Office suite had never been used by 95% of users—not because the features were bad, but because users never noticed they existed.

Breaking Free from the Spotlight

Overcoming the Spotlight Effect requires fundamental changes in how we design, communicate, and measure product success. Here’s the FOCUS framework for building products that work with, not against, limited user attention:

Find the Natural Path. Instead of designing the ideal user journey, observe the actual paths users take. Amplitude discovered that only 12% of users followed their intended onboarding flow. By placing key features along the paths users were already taking—rather than trying to redirect them—they increased feature adoption by 43%.

Optimize for Scan-ability. Users don’t read; they scan. They don’t explore; they hunt. Design your product like a highway sign system—clear, hierarchical, and instantly comprehensible at 70 mph. Spotify’s “Made for You” section succeeds because it requires zero cognitive effort to understand its purpose.

Communicate Through Behavior. Stop writing feature announcements that no one reads. Instead, reveal functionality through user actions. When Notion detects you typing “/”, it shows you available commands. When you start dragging a block, it shows you where you can drop it. The feature education happens in the moment of need, not in a tutorial users will skip.

Use Progressive Disclosure. Accept that users will only ever use 20% of your features—but different users need different 20%s. Start with the essential, reveal the advanced only when behavior indicates readiness. Adobe Creative Cloud initially overwhelmed users with hundreds of features. By hiding advanced tools until users demonstrated proficiency with basics, they increased new user retention by 35%.

Simplify Relentlessly. Every element in your interface is competing for the user’s limited attention. If something isn’t earning its keep, it’s actively harming the experience. When Airbnb reduced their search filters from 17 to 5 (with advanced options hidden), search-to-booking conversion increased by 12%.

The Attention Reality Check

Implementing the FOCUS framework starts with brutal honesty about your product’s attention footprint. Create an Attention Audit by tracking three metrics:

The Notice Rate: What percentage of users ever interact with a feature? If it’s below 10% after three months, the feature might as well not exist.

The Time to Discovery: How long does it take new users to find critical features? If it’s more than 30 seconds, you’re already losing them.

The Cognitive Load Score: Count the number of decisions users must make to complete core tasks. Each decision is an opportunity for abandonment.
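
The first two audit metrics can be computed from any raw product event log. A hypothetical sketch (the event names, `Event` shape, and thresholds are invented for illustration, not a real analytics API):

```python
from dataclasses import dataclass

@dataclass
class Event:
    user_id: str
    name: str                    # e.g. "feature_x_used"
    seconds_since_signup: float

def notice_rate(events: list[Event], feature_event: str, total_users: int) -> float:
    """Notice Rate: share of all users who ever triggered the feature's event."""
    users = {e.user_id for e in events if e.name == feature_event}
    return len(users) / total_users

def median_time_to_discovery(events: list[Event], feature_event: str) -> float:
    """Time to Discovery: median seconds from signup to first use."""
    firsts: dict[str, float] = {}
    for e in sorted(events, key=lambda e: e.seconds_since_signup):
        if e.name == feature_event and e.user_id not in firsts:
            firsts[e.user_id] = e.seconds_since_signup
    times = sorted(firsts.values())
    return times[len(times) // 2] if times else float("inf")
```

Run against the thresholds above: a notice rate under 10% after three months, or a discovery time over 30 seconds, flags a feature that effectively does not exist for your users.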

Figma revolutionized design tools not through more features, but through radical attention efficiency. Their multiplayer cursors solve the “where are we looking?” problem without requiring any user education. Their component system makes design systems discoverable at the point of use. They succeeded by accepting that designers are focused on designing, not on learning tools.

The Spotlight Paradox

Here’s the uncomfortable truth: the more you care about your product, the worse you become at designing it. Your passion creates a spotlight that blinds you to the reality of user indifference. The most successful product teams have learned to design for users who barely care, who won’t read instructions, who will abandon your product the moment it requires conscious thought.

This isn’t cynicism—it’s empathy. Your users aren’t lazy or stupid. They’re busy, stressed, and trying to accomplish real goals in which your product is merely a tool, not a destination. When you stop expecting them to bask in your product’s spotlight and start designing for their scattered attention, you create products that feel effortless, intuitive, and indispensable.

The next time you’re designing a feature, ask yourself: Would this still work if the user only gave it 5% of their attention? If the answer is no, you’re not designing for reality—you’re designing for the spotlight that exists only in your mind. And in the brutal attention economy of modern software, products designed for imaginary attention inevitably lose to those designed for actual behavior.

Remember: Your users aren’t watching you. They’re barely watching at all. Design accordingly.



🍪 The Product Hierarchy of Effects: From Awareness to Advocacy

Engineering the Complete Journey from Stranger to Evangelist

Every product manager dreams of viral growth—users so delighted they can’t stop telling others about your product. Yet most products die in obscurity, not because they lack quality, but because teams optimize random touchpoints without understanding the psychological progression that transforms strangers into advocates. The Product Hierarchy of Effects reveals this hidden architecture of user transformation, showing why 70% of products fail not at the feature level, but at the transition level.

Borrowed from advertising psychology and reimagined for digital products, the Hierarchy of Effects maps the cognitive and emotional stages users must traverse to become true product advocates. Unlike traditional funnels that measure actions, this framework illuminates the mental transformations required at each stage. It explains why having the best features means nothing if users never reach the mental state where those features matter.

The Seven Stages of Product Transformation

The journey from stranger to evangelist isn’t linear—it’s hierarchical. Each stage builds on the previous one, and weakness at any level cascades upward, capping your product’s potential regardless of investment in higher stages.

Stage 1: Awareness - The user knows your product exists. This isn’t just brand recognition; it’s category understanding. Notion struggled for years because users aware of the product didn’t understand what category it belonged to. Was it a note-taking app? A database? A wiki? Only when they positioned it as “your all-in-one workspace” did awareness translate to comprehension.

Stage 2: Comprehension - The user understands what your product does. This is where most products fail their first test. You have 8 seconds on a landing page to communicate your value. Zoom succeeded here with brilliant simplicity: “Video conferencing that doesn’t suck.” No feature lists, no technical specifications—just instant comprehension of the problem solved.

Stage 3: Interest - The user sees potential personal relevance. Interest isn’t curiosity—it’s the spark of “this might solve MY problem.” Spotify’s “Made for You” playlists generate interest by demonstrating the product already understands you, before you’ve even started using it properly.

Stage 4: Consideration - The user actively evaluates your product against alternatives and the status quo. This is where the mental cost-benefit calculation happens. Stripe wins at consideration not through features but through developer experience—their documentation makes the switching cost feel manageable.

Stage 5: Trial - The user attempts to accomplish their first real goal. This isn’t onboarding—it’s the moment of truth where promised value meets reality. Canva’s template gallery ensures users succeed at creating something beautiful within minutes, validating the consideration stage decision.

Stage 6: Adoption - The product becomes part of the user’s regular workflow. This requires habit formation, not just satisfaction. Grammarly embeds itself into adoption through browser extensions and app integrations, becoming present wherever users write, making the transition from trial to adoption almost unconscious.

Stage 7: Advocacy - Users actively recommend your product to others. True advocacy requires emotional investment beyond functional satisfaction. Apple doesn’t create advocates through specifications but through identity—users recommend iPhones because doing so reinforces their own self-image as discerning, creative individuals.

The Cascade Failure Pattern

The hierarchy’s power lies in revealing why traditional product metrics mislead us. You can’t fix poor advocacy with a referral program if users are stuck at the consideration stage. You can’t improve adoption with better features if users fail at comprehension.

Peloton initially focused heavily on the advocacy stage, building elaborate social features and referral programs. But they discovered their real problem was at the consideration stage—the $2,000+ price point created an insurmountable mental barrier for most potential users. Only when they introduced payment plans and the digital-only option did consideration rates improve, unlocking all the advocacy investments they’d already made.

Meanwhile, Clubhouse achieved massive awareness during the pandemic but failed at comprehension. Users knew it existed but couldn’t understand what it was for. Was it podcasting? Conference calls? Social networking? By the time they clarified their value proposition, users had already moved through the hierarchy with competing products like Twitter Spaces.

The BRIDGE Framework for Hierarchy Optimization

Moving users up the hierarchy requires different strategies at each stage. The BRIDGE framework provides specific tactics for each transition:

Build Awareness through Category Creation. Don’t compete for attention in existing categories—create new ones. Calendly didn’t market itself as “scheduling software” but created the category of “scheduling links,” making awareness immediately actionable.

Reduce Complexity for Comprehension. Every additional concept required to understand your product decreases comprehension by 20%. WhatsApp’s genius was requiring zero explanation: “Messaging that uses the internet instead of SMS.” No features, no complications, instant comprehension.

Ignite Interest through Personalization. Show users themselves in your product before they’ve even signed up. LinkedIn’s public profiles create interest by showing users what their professional presence could look like. Pinterest’s onboarding asks for interests first, then immediately shows relevant content, making the product instantly personally relevant.

Demonstrate Value during Consideration. Free trials fail when they’re just time-limited access. Successful consideration strategies prove value immediately. Loom’s instant video creation—no download required—demonstrates value faster than users can list objections.

Guarantee Success in Trial. Amazon Prime’s genius wasn’t free shipping—it was making the first purchase feel like a win. They initially lost money on early Prime members but understood that successful trial experiences create lifetime customers. Your trial phase should be engineered for success, not evaluation.

Embed into Workflows for Adoption. Products that require behavior change fail at adoption. Products that enhance existing behaviors succeed. Superhuman doesn’t replace email; it makes existing email behavior feel superior. They literally study user workflows and customize the product to match, ensuring adoption feels like enhancement, not change.

Measuring Hierarchy Health

Traditional funnel metrics hide hierarchy problems. Instead, measure transition rates between mental states:

Awareness → Comprehension Rate: Of those who know you exist, what percentage understand what you do? Measure this through landing page time-to-understanding tests. If it takes more than 10 seconds, you have a comprehension problem.

Comprehension → Interest Rate: Of those who understand, what percentage see personal relevance? Track this through engagement with personalized content versus generic features.

Interest → Consideration Rate: Of interested users, what percentage actively evaluate? Measure through comparison page visits, pricing page engagement, and competitor comparison searches.

Consideration → Trial Rate: Of those evaluating, what percentage attempt real use? This isn’t sign-ups—it’s meaningful first actions.

Trial → Adoption Rate: Of trial users, what percentage integrate into routine use? Measure through frequency patterns, not just retention.

Adoption → Advocacy Rate: Of regular users, what percentage actively recommend? Track unsolicited mentions, not just referral program participation.
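
The six transition rates above can be made concrete with a few lines of code. A minimal sketch (stage names from the hierarchy; the user counts are invented for illustration):

```python
# Stage-to-stage transition rates for the Product Hierarchy of Effects.
STAGES = ["awareness", "comprehension", "interest", "consideration",
          "trial", "adoption", "advocacy"]

def transition_rates(counts: dict[str, int]) -> dict[str, float]:
    """Conversion rate between each consecutive pair of mental states."""
    return {
        f"{prev} -> {nxt}": (counts[nxt] / counts[prev] if counts[prev] else 0.0)
        for prev, nxt in zip(STAGES, STAGES[1:])
    }

# Invented example data: 10,000 aware users funnel down to 40 advocates.
counts = {"awareness": 10_000, "comprehension": 4_000, "interest": 2_000,
          "consideration": 800, "trial": 400, "adoption": 200, "advocacy": 40}
rates = transition_rates(counts)

# The weakest transition is the bottleneck to fix first,
# regardless of how healthy the stages above it look.
bottleneck = min(rates, key=rates.get)  # "adoption -> advocacy" in this data
```

With this view, the cascade-failure pattern becomes visible in one line: you invest where `bottleneck` points, not where the roadmap happens to be.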

Linear succeeded by identifying their hierarchy bottleneck at the consideration stage. Engineers loved the product (high trial→adoption rates) but couldn’t justify switching from Jira (low consideration→trial rates). Instead of adding features, they built a Jira importer that made the consideration stage feel risk-free. Consideration→trial rates increased by 300%.

The Compound Effect of Hierarchy Optimization

The hierarchy’s stages compound multiplicatively, not additively. A 10% improvement at each of the seven stages doesn’t create a 70% improvement; it compounds to roughly a 95% improvement in overall conversion from awareness to advocacy.
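The arithmetic behind that claim is worth seeing directly:

```python
# The multiplicative claim, checked numerically: a 10% lift at each of
# the seven hierarchy stages compounds to roughly a 95% overall lift.
stages = 7
lift_per_stage = 0.10

additive = stages * lift_per_stage             # the naive expectation: 0.70
compound = (1 + lift_per_stage) ** stages - 1  # the actual effect: ~0.95
```

This is why modest, unglamorous improvements spread across every stage can outperform one heroic optimization at a single stage.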

Duolingo mastered this multiplication effect. They optimized awareness through meme-worthy social content. They simplified comprehension to “5 minutes a day to learn a language.” They generated interest through streaks and social competition. They reduced consideration friction with free access. They guaranteed trial success with lessons impossible to fail. They embedded adoption through notifications that feel helpful, not pushy. They triggered advocacy through social proof of streaks and progress.

Each optimization was modest. Combined, they created a language-learning juggernaut with more users than the entire population of Spain.

The Hierarchy Reality

Here’s what most product teams get wrong: they try to skip stages. They want users to go straight from awareness to advocacy. They build referral programs before ensuring comprehension. They optimize adoption features while users are stuck at consideration. They pour resources into the wrong stage of the hierarchy, like renovating the penthouse while the foundation crumbles.

The Product Hierarchy of Effects isn’t just a framework—it’s a physics equation for user transformation. You can’t violate its laws any more than you can ignore gravity. Users must climb each rung sequentially. Your job isn’t to build a better product at the top of the hierarchy; it’s to build better bridges between each stage.

The next time you’re planning product investments, map them to hierarchy stages. Are you solving actual transition problems, or are you optimizing stages users never reach? Remember: the best advocacy features in the world are worthless if users never make it past consideration. The most elegant onboarding flow means nothing if users don’t comprehend your value proposition.

Your users aren’t failing to become advocates because your product lacks features. They’re stuck at a specific stage of the hierarchy, unable to make the next mental transition. Find that stage. Fix that transition. Watch your entire hierarchy unlock.



🍪 The Decoy Effect in Feature Packaging: Strategic Pricing Psychology

How Adding the “Wrong” Option Makes Everything Right

You’ve crafted two perfect pricing tiers. Basic at $10 for casual users. Premium at $25 for power users. Clear value proposition, logical progression, simple choice. Six months later, 85% of users are stuck on Basic, your revenue has flatlined, and you’re wondering why users don’t see the Premium value that seems so obvious to you. Welcome to the paradox of rational pricing in an irrational world—where adding a “bad” option can double your revenue overnight.

The Decoy Effect—also called the asymmetric dominance effect—reveals how our brains make comparative rather than absolute value judgments. When presented with two options, we struggle to assess value. Add a third, strategically inferior option, and suddenly the choice becomes obvious. This isn’t manipulation; it’s providing the cognitive anchor that helps users understand and appreciate your product’s true value.

The Architecture of Irrational Choice

The Decoy Effect operates through three psychological mechanisms that transform confusion into clarity, hesitation into action.

Relative Value Processing is how our brains actually evaluate options. We don’t assess a $25 price tag in isolation—we assess it relative to alternatives. Without a reference point, Premium feels expensive. With a $20 decoy that offers less value than the $25 Premium, Premium suddenly feels like a bargain. The Economist famously offered digital-only for $59, print-only for $125, and digital+print for $125. The print-only option was the decoy—nobody wanted it, but its presence made digital+print look like an incredible deal. Subscriptions increased by 30%.

The Compromise Effect exploits our tendency to avoid extremes. Given three options, we gravitate toward the middle, seeing it as the safe, balanced choice. This is why restaurant wine lists price the second-cheapest wine with the highest markup—it’s where uncertainty drives decisions. Netflix originally offered Basic ($8.99), Standard ($13.99), and Premium ($17.99). Standard wasn’t just the most popular—it was designed to be, positioned as the compromise between “too basic” and “unnecessarily premium.”

Cognitive Load Reduction is the hidden benefit of decoys. Comparing two different feature sets requires mental effort. Comparing three where one is clearly inferior? The decision makes itself. Our brains love shortcuts, and decoys provide the perfect heuristic: “If Option C is worse than B but costs almost the same, B must be valuable.”

The Decoy Playbook in Product Design

Implementing decoys isn’t about tricking users—it’s about helping them recognize value they already want but can’t quite justify. Here’s how successful products engineer choice architecture:

The Feature Decoy Strategy. Zoom’s tiers include Basic (free), Pro ($14.99), and Business ($19.99). The Pro tier is the decoy—it lacks critical features like cloud recording and admin controls that most paying customers need. It exists to make Business feel essential and reasonably priced. Result: 73% of paying customers choose Business over Pro.

The Quantity Decoy Method. Dropbox offers 2GB free, 2TB for $9.99, and 3TB for $16.58. The 3TB plan is the decoy—most users don’t need 3TB, but its existence makes 2TB feel generous rather than restrictive. The pricing per GB makes 2TB appear optimal, driving users away from free toward paid plans.

The Time-Limited Decoy. Adobe Creative Cloud offers monthly at $79.99, annual paid monthly at $59.99, and annual prepaid at $659.88. The monthly plan is the decoy—so expensive it makes annual commitment feel sensible, while the prepaid option makes monthly billing feel flexible. 82% choose annual paid monthly, Adobe’s preferred option for predictable revenue.

The PRICE Framework for Strategic Packaging

Creating effective decoys requires careful calibration. The PRICE framework ensures your packaging psychology drives revenue without destroying trust:

Position Your Target. Decide which tier you want most users to choose—usually your middle option with the best margin and feature balance. This becomes your target; everything else is designed to make it shine.

Reference Point Creation. Your decoy should be 75-90% of your target’s price but offer 40-60% of its value. Too cheap and it becomes attractive; too expensive and it seems ridiculous. The sweet spot makes your target feel like incredible value. When Spotify introduced their Duo plan ($12.99) between Individual ($9.99) and Family ($15.99), Individual subscribers upgrading to Duo increased by 35%, but more importantly, Duo-to-Family upgrades increased by 45%.

Inferior But Believable. Your decoy must seem like someone might want it, even if few actually do. Complete garbage options trigger suspicion. Microsoft Office 365 Personal ($69.99) versus Family ($99.99) for up to 6 users—Personal is the decoy, believable for singles but making Family irresistible for anyone who might share.

Clear Differentiation. The superiority of your target over your decoy must be immediately obvious. Don’t make users do math or read feature matrices. Visual hierarchy should guide the eye to your target. Slack’s tier visualization literally highlights “Most Popular” on Pro, with Plus sitting awkwardly close in price but clearly inferior in capability.

Ethical Implementation. Decoys should help users find appropriate value, not trick them into overspending. Your target tier should genuinely serve most users best. The decoy clarifies value; it doesn’t manufacture false need.
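The “Reference Point Creation” calibration band above (75–90% of the target’s price, 40–60% of its value) lends itself to a simple sanity check. This is a minimal sketch under that rule; the tier prices and the “value score” (e.g. weighted feature points) are hypothetical:

```python
# Sketch of the "Reference Point Creation" calibration rule: a decoy
# should sit at 75-90% of the target tier's price while delivering only
# 40-60% of its value. All numbers below are hypothetical.
def is_well_calibrated_decoy(decoy_price, target_price,
                             decoy_value, target_value):
    """True if the decoy falls inside the PRICE calibration band."""
    price_ratio = decoy_price / target_price
    value_ratio = decoy_value / target_value
    return 0.75 <= price_ratio <= 0.90 and 0.40 <= value_ratio <= 0.60

# Target tier: $25 with a value score of 100. A $20 decoy delivering
# half the value sits inside the band (price ratio 0.80, value ratio 0.50).
ok = is_well_calibrated_decoy(decoy_price=20, target_price=25,
                              decoy_value=50, target_value=100)

# A $12 decoy is too cheap (price ratio 0.48): it becomes genuinely
# attractive in its own right and stops working as a decoy.
too_cheap = is_well_calibrated_decoy(12, 25, 50, 100)
```

A check like this is no substitute for testing real choice behavior, but it keeps proposed tiers honest against the band before they ever reach an A/B test.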

The Decoy Metrics That Matter

Traditional conversion metrics miss decoy dynamics. Track these instead:

Choice Distribution Shift: Before and after decoy introduction, how does tier selection change? Successful decoys should push 20-40% of users from lower to target tiers.

Decision Velocity: Time from pricing page visit to purchase. Effective decoys reduce decision time by 30-50% by eliminating analysis paralysis.

Cognitive Load Indicators: Support tickets about pricing, feature comparison questions, and pricing page bounce rates. Good decoys reduce all three.

Value Perception Score: Survey users about whether they feel they’re getting good value. Counterintuitively, well-designed decoys increase satisfaction with chosen tiers.
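The first of these metrics, Choice Distribution Shift, reduces to a before/after comparison of tier shares. Here is a hypothetical sketch with invented numbers, assuming the $25 Premium tier from the opening example is the target:

```python
# Hypothetical sketch of the "Choice Distribution Shift" metric:
# compare tier selection before and after introducing a decoy.
# All counts are made up for illustration.
def distribution(selections):
    """Convert raw selection counts into fractional shares."""
    total = sum(selections.values())
    return {tier: n / total for tier, n in selections.items()}

before = {"basic": 850, "premium": 150}                # two-tier baseline
after = {"basic": 550, "decoy": 50, "premium": 400}    # with decoy added

# How much of the user base moved to the target tier?
shift = distribution(after)["premium"] - distribution(before)["premium"]
# Premium's share rises from 15% to 40% -- a 25-point shift, inside the
# 20-40% band the article describes for a successful decoy.
```

Note that the decoy itself capturing few users (5% here) is expected; its job is to move the distribution, not to sell.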

Evernote discovered their Plus tier ($34.99) was cannibalizing Premium ($69.99) because the feature difference wasn’t clear. They repositioned Plus as a decoy by removing key features and adding a Personal tier ($64.99) just below Premium. Premium subscriptions increased by 127%, and customer satisfaction improved because users felt confident in their choice.

The Multi-Dimensional Decoy

Advanced decoy strategies go beyond simple pricing to create multi-dimensional comparisons:

Feature Complexity Decoys: Basecamp offers Personal (3 projects), Business ($99, 500 projects), and Enterprise (unlimited, custom pricing). Business is actually the decoy—500 projects is overwhelming for most teams, making Personal perfect for small teams and Enterprise essential for large ones. The middle tier exists to segment the market without requiring users to predict their growth.

Support Level Decoys: Many SaaS products offer email support, priority support, and dedicated support tiers. Priority support is often the decoy—expensive enough to make email feel reasonable for small teams, but limited enough to push serious businesses to dedicated support.

User Limit Decoys: Figma’s pricing includes Starter (free, 3 files), Professional ($12/user), and Organization ($45/user). Professional is partially a decoy—its user minimum and feature set push serious teams to Organization while keeping hobbyists on Starter.

The Decoy Effect in Feature Development

Beyond pricing, decoys influence feature perception and adoption:

The Feature Anchor: Google One’s paid storage tiers (100GB for $1.99, 200GB for $2.99, 2TB for $9.99) make 2TB feel essential even though 92% of users never exceed 100GB. The middle tier exists to anchor value perception.

The Complexity Decoy: Notion’s template gallery includes simple, intermediate, and advanced templates. The advanced ones are decoys—so complex they make intermediate templates feel approachable, increasing adoption among new users who might otherwise feel overwhelmed.

The Integration Decoy: Zapier’s plan limits on “Zaps” include 100, 750, and 2000. The 750 tier is the decoy—just limited enough to frustrate growing businesses into the 2000 tier, but substantial enough to make the jump feel justified.

The Psychology of Justified Decisions

The Decoy Effect works because it solves a fundamental human problem: we need to justify our decisions to ourselves and others. Without context, choosing the expensive option feels indulgent. With a decoy, it feels smart.

This is why enterprise software always has three tiers, why restaurants offer small/medium/large (with medium having the best margins), and why SaaS companies discovered that two options convert worse than three. We don’t just want good value—we want to feel clever for finding it.

HubSpot mastered this with their Starter ($45), Professional ($1,600), and Enterprise ($3,200) tiers. Professional is priced to feel expensive compared to Starter but reasonable compared to Enterprise. It’s not manipulation—it’s providing the context that helps buyers justify the investment they already know they need to make.

The Ethical Imperative of Strategic Pricing

Here’s the counterintuitive truth: decoys can increase customer satisfaction. When users clearly understand why they chose their tier, they experience less buyer’s remorse. When the comparison makes their choice feel smart, they’re more likely to succeed with the product.

The alternative—two tiers with no context—creates anxiety. Users on Basic wonder if they’re missing out. Users on Premium wonder if they’re overpaying. Add a strategic decoy, and both groups feel confident in their choice.

The Decoy Effect isn’t about manipulation—it’s about clarity. In a world of infinite options and limited attention, helping users quickly identify appropriate value is a service, not a scheme. Your users aren’t purely rational economic actors, and pretending they are doesn’t make your pricing more “honest”—it makes it less helpful.

The next time you’re designing pricing or packaging, remember: users don’t evaluate options in isolation. They need context, anchors, and comparisons to understand value. Give them a decoy—a reference point that makes the right choice obvious. Because in the psychology of pricing, the “wrong” option isn’t wrong at all—it’s the key to helping users find what’s right for them.



🔥 MLA week #33

The Minimum Lovable Action (MLA) is a tiny, actionable step you can take this week to move your product team forward—no overhauls, no waiting for perfect conditions. Fix a bug, tweak a survey, or act on one piece of feedback.

Why does it matter? Culture isn’t built overnight. It’s the sum of consistent, small actions. MLA creates momentum—one small win at a time—and turns those wins into lasting change. Small actions, big impact.

MLA: Decision Transparency Hour

Challenge Area: Communication & alignment in product decision-making
Action: Spend 30–60 minutes documenting ONE recent important decision and sharing the reasoning publicly (Slack, wiki, etc.)


Why This Matters

Most people see decisions as headlines, not as stories.

They hear: “We’re sunsetting feature X”, “We’re pivoting to segment Y”, “We’re delaying this launch” — but they don’t see the tradeoffs, constraints, or signals behind them.

When reasoning stays in a few people’s heads:

  • Teams feel decisions are arbitrary or political

  • People re-ask the same “why?” in DMs and side meetings

  • New joiners and adjacent teams repeat old mistakes

  • Leaders burn time re-explaining the same context

By making one important decision fully transparent, you:

  • Model how to think, not just what to do

  • Show respect for people’s need for context

  • Create an artifact others can learn from and build on

This is a small action with big ripple effects: one hour that can permanently upgrade how your team understands decisions.


How to Execute

1. Choose the Decision & Context

Pick one recent decision (last 2–4 weeks) that:

  • Affects multiple people or teams

  • Involved real tradeoffs (you said “no” to something)

  • People are still asking about, or might in the future

Examples: prioritizing one bet over another, pausing a project, changing pricing, shifting a roadmap, redefining a KPI.

You are not trying to find the “perfect” decision — just one that is meaningful and representative.


2. Select Timing & Setting

Block 30–60 minutes this week in your calendar:

  • Title: “Decision Transparency Hour – [Decision Name] – #MLAChallenge”

  • Choose format:

    • Async-first: write a short doc and share in Slack / wiki with a thread for questions

    • Live-first: 20–30 min live session + written summary shared afterwards

Pick whatever fits your team’s habits — the key is that the reasoning becomes visible and discoverable.


3. Frame the Action Properly (Invite & Messaging)

Let people know what you’re doing and why:

“I’m running a Decision Transparency Hour as part of the #MLAChallenge.
I’ll walk through how we made the recent decision about [X] — the options we considered, why we chose this direction, and what risks we’re taking.
Goal: give everyone more context and make it easier to learn from our decisions, not to defend them.”

Key framing:

  • This is about learning and clarity, not blame

  • Questions are welcome; witch-hunts are not

  • The decision is not automatically “up for renegotiation” — you’re sharing the why, not reopening the vote (unless you explicitly choose to)


4. Prepare a Simple Decision Doc (15–30 minutes)

Create a lightweight doc or page with these headings:

  • Decision:

  • Owner(s):

  • Date:

  • Problem we were trying to solve:

  • Options we considered:

  • Why we chose this option:

  • What we explicitly did not choose (and why):

  • Key assumptions & risks:

  • How we’ll know if this was a good decision (signals / metrics / timeframe):

Link any supporting docs (research, experiments, financial models), but keep this main page short and readable in 5–7 minutes.


5. Execute with Intention

If live:

  1. Spend 10–15 minutes walking through the doc.

  2. Invite questions like:

    • “What feels unclear?”

    • “What risks do you see that we didn’t mention?”

    • “What context from your side might change how we monitor this?”

  3. Capture key questions and clarifications directly into the doc.

If async:

  1. Post the doc in a visible channel.

  2. Ask people to react to 2–3 specific prompts, e.g.:

    • “What did you learn that you didn’t know about this decision?”

    • “Is there anyone else who really needs this context?”

  3. Reply to questions in-thread and fold the important ones back into the doc.

Stay curious, not defensive. You’re modelling the behavior you want others to copy.


6. Follow Up & Reinforce Learning

Within 1–2 weeks:

  • Add a short section at the bottom of the doc:

    • “What happened since:”

      • Have we seen early signals (positive or negative)?

      • Did we adjust anything based on feedback?

  • Share a brief update:

    • in Slack / all-hands / team meeting,

    • linking back to the doc.

Then, when the decision comes up again, you have a single source of truth instead of re-litigating it from scratch.


Expected Benefits

Immediate Wins

  • Fewer one-off “why did we do this?” pings

  • Shared context for everyone affected by the decision

  • A concrete example of “what good decision documentation looks like”

Relationship / Cultural Improvements

  • Increased trust: people see you’re not hiding the reasoning

  • More constructive questions (“Help me understand…”) instead of cynical commentary

  • Psychological safety to talk about tradeoffs, risks, and uncertainty

Long-Term Organizational Alignment

  • A growing library of past decisions and their logic

  • Stronger habit of documenting reasoning, not just outcomes

  • Clearer, more consistent standards for how big decisions are made and communicated


Call to Action

Run your Decision Transparency Hour this week.

Then share:

  • Which decision you chose

  • One question that surprised you

  • One thing you’ll do differently next time you make a decision

Use the #MLAChallenge hashtag so others can learn from your experience and start their own transparency experiments.

🎯 Remember: The goal isn’t to make every decision perfect — it’s to make the reasoning visible so everyone can learn, align, and improve together.



📚 Monthly Book Club for Product Managers

The Art of Thinking Clearly by Rolf Dobelli


A Product Manager’s Guide to 99 Cognitive Errors

About the Author

Rolf Dobelli isn’t your typical business guru – and that’s precisely what makes his perspective invaluable. A Swiss entrepreneur and novelist with a PhD in philosophy, Dobelli founded getAbstract, the world’s largest library of business book summaries, giving him unique insight into the patterns of business thinking across thousands of works. His background spans the corporate world (including time at Swissair), the startup ecosystem, and academia. This multifaceted experience allows him to dissect cognitive errors not as an outside observer, but as someone who has fallen prey to them and learned from the experience.

What’s particularly refreshing about Dobelli’s approach is his humility. He doesn’t position himself as someone who has transcended these biases – instead, he writes as a fellow traveler who has mapped the terrain of human error through both research and painful personal experience. His writing style reflects his novelist background: each bias is presented as a short, engaging story rather than a dry academic treatise. Having spent years analyzing what makes business books effective through getAbstract, he knows how to make complex psychological concepts stick.

Why This Book Matters for Product Managers

In product management, every decision counts. From feature prioritization to user research interpretation, from stakeholder negotiations to strategic planning – we’re constantly making choices that shape our products’ futures. Rolf Dobelli’s “The Art of Thinking Clearly” serves as an essential field guide to the minefield of cognitive biases that can derail even the most experienced product managers.

What makes this book particularly valuable isn’t just its comprehensive catalog of 99 cognitive errors – it’s the practical, accessible way Dobelli presents each bias with real-world examples that immediately resonate with anyone who’s sat through a sprint planning meeting or defended a product decision to executives.

The book’s structure – short, digestible chapters each focused on a single bias – mirrors the way product managers actually consume information: in sprints between meetings, during commutes, in those rare moments of quiet reflection. You can read about the “narrative fallacy” in five minutes and immediately recognize it in the story you told stakeholders about user behavior last week.

Moreover, Dobelli understands that in the business world, being right isn’t enough – you need to be persuasive. By understanding the cognitive biases that affect not just your own thinking but also your stakeholders’, users’, and team members’ thinking, you gain a strategic advantage in communication and influence. This dual application – improving both your decision-making and your ability to guide others’ decisions – makes the book doubly valuable for product leaders.

Key Concepts Every Product Manager Should Master

1. The Survivorship Bias

What it is: We systematically overestimate success probabilities because we only see the winners, not the failures.

PM Application: When benchmarking against successful products, we often ignore the graveyard of failed features and startups that attempted similar approaches. That sleek onboarding flow from Spotify? Hundreds of companies tried similar patterns and failed. Understanding survivorship bias helps us evaluate competitor strategies more realistically and avoid cargo-cult product development.

Real-world example: Consider how we analyze successful marketplace businesses like Airbnb or Uber. We study their growth tactics, their early features, their marketing strategies. But we don’t see the hundreds of marketplace startups that used identical strategies and failed. When Airbnb’s founders talk about their cereal box fundraising, it becomes a charming origin story. But how many failed startups also tried creative fundraising stunts? The difference between Airbnb and failure might have been timing, luck, or a hundred other factors we can’t see by only studying the survivor.

Practical tip: Always ask “How many attempts failed before this succeeded?” when analyzing successful features or products. Create a “failure library” documenting unsuccessful experiments in your industry to balance your perspective.

2. The Sunk Cost Fallacy

What it is: We continue investing in failing projects because we’ve already invested so much.

PM Application: That feature you’ve been building for six months that user testing reveals nobody wants? The temptation to “just polish it a bit more” rather than kill it is the sunk cost fallacy in action. Dobelli reminds us that past investments shouldn’t influence future decisions – only future value should.

The psychology behind it: Dobelli explains that this bias stems from our deep-seated loss aversion and our desire to appear consistent. Admitting a project should be killed feels like admitting we were wrong, which triggers ego-protective mechanisms. In organizational contexts, this is amplified by career concerns – nobody wants to be the PM who “wasted” six months of engineering time.

Real-world example: Consider Google+, which consumed enormous resources for years despite clear signals it wouldn’t succeed against Facebook. The more Google invested, the harder it became to walk away. Each pivot and relaunch represented throwing good money after bad, driven by the psychological weight of previous investments rather than realistic future prospects.

Practical tip: Institute regular “kill decision” checkpoints where the question isn’t “How much have we invested?” but “Would we start this project today knowing what we know now?” Create a “sunset celebration” culture where killing failed experiments is seen as learning, not failure.

3. The Confirmation Bias

What it is: We seek information that confirms our existing beliefs and ignore contradicting evidence.

PM Application: During user research, we unconsciously give more weight to feedback that supports our product vision while downplaying criticism. This bias can turn user interviews into expensive validation theater rather than genuine discovery.

The insidious nature: What makes confirmation bias particularly dangerous in product management is how subtle it can be. It’s not that we consciously ignore negative feedback – we just happen to remember the positive comments more vividly, we unconsciously ask leading questions, we interpret ambiguous responses optimistically. Dobelli shows how even the way we Google information (“benefits of X” vs. “problems with X”) reflects this bias.

Real-world example: A PM developing a new social feature might interpret users saying “that’s interesting” as validation, while dismissing concerns about privacy as edge cases. They might conduct five user interviews, and if three are positive, focus on those while explaining away the negative two as “not our target users” or “they didn’t understand the concept fully.”

Practical tip: Assign a devil’s advocate in research sessions whose job is to actively seek disconfirming evidence. Rotate this role to prevent it from becoming dismissible. Use structured research protocols that force you to document both supporting and contradicting evidence equally.

4. The Availability Heuristic

What it is: We overweight easily recalled information when making decisions.

PM Application: That one angry customer email can overshadow hundreds of satisfied users. The recent production incident makes us over-index on stability at the expense of innovation. Dobelli shows how our most recent or memorable experiences disproportionately influence our product decisions.

The media amplification effect: In the digital age, this bias is supercharged by social media and news cycles. A single viral tweet complaining about your product can feel like a crisis, even if it represents 0.001% of your user base. Dobelli warns that vivid, emotionally charged events create mental “scars” that distort our risk assessment for years.

Real-world example: After the infamous United Airlines passenger dragging incident, airlines industry-wide revised policies and training programs. While the incident was horrific, the statistical likelihood of such an event was infinitesimal. Yet its vivid, viral nature made it seem like a common risk requiring massive systematic change.

Practical tip: Maintain a decision log with actual data. Before making decisions based on “what we’re hearing from users,” check the actual numbers. Create a “signal strength” metric that weights feedback by statistical significance, not emotional impact.

5. The Feature-Positive Effect

What it is: We give more weight to what is present than what is absent.

PM Application: We notice the features competitors have that we don’t, but fail to recognize the complexity they’ve avoided that we’ve embraced. We celebrate shipped features but don’t track the value of what we deliberately didn’t build.

The complexity creep: This bias is particularly insidious in product development because it compounds over time. Each feature that’s present demands attention, maintenance, and consideration in future decisions. But the absence of features – the simplicity, the focused user experience, the reduced cognitive load – is invisible and thus undervalued.

Real-world example: Compare early Instagram (just photo sharing with filters) to the feature-rich Facebook app. Instagram’s success came partly from what it didn’t have: no games, no marketplace, no groups, no events. But when we analyze Instagram’s success, we focus on what it had (great filters, simple sharing) not on what it deliberately excluded.

Practical tip: Maintain a “features we said no to” list with reasoning. Regularly review it to reinforce the value of simplicity and focus. Calculate the “complexity cost” of new features in terms of maintenance, user education, and future flexibility.

6. The Halo Effect

What it is: One positive attribute causes us to assume other positive attributes exist.

PM Application: A beautifully designed prototype can make us overestimate its functional value. A feature request from a high-profile customer seems more valid than identical requests from unknown users. A team member who excels at presentations might be given more weight in technical discussions where they have less expertise.

The design halo: Dobelli particularly warns about the danger of aesthetic bias in decision-making. A polished presentation can make a weak strategy seem strong. A beautiful UI can mask poor information architecture. A charismatic leader’s product vision might go unchallenged despite logical flaws.

Real-world example: Consider how Apple’s reputation for design excellence creates a halo that affects perception of their products’ functionality. When Apple Maps launched with significant problems, many users initially defended it because Apple’s design halo made them assume the functional issues must be minor or would be quickly fixed.

Practical tip: Evaluate different aspects of products, people, and proposals independently. Use structured scoring rubrics that separate aesthetic, functional, and strategic considerations. Deliberately seek input from people who aren’t swayed by the same halos you are.

7. The Narrative Fallacy

What it is: We create coherent stories to explain random events, finding patterns where none exist.

PM Application: We construct elaborate narratives about why products succeeded or failed, ignoring the role of randomness. “We succeeded because we pivoted at exactly the right moment” might really mean “we got lucky with timing.” These narratives become organizational myths that guide future decisions despite being built on false causation.

The post-hoc rationalization: After a product succeeds, we backward-engineer a story that makes success seem inevitable. Dobelli shows how these stories become more refined and causally clear over time, even as they become less accurate. The messy reality of luck, timing, and chaos gets smoothed into a clean narrative arc.

Real-world example: The story of Twitter’s founding has been refined over the years into a narrative about solving the problem of status updates. But the early reality was much messier – a side project at a podcasting company, multiple pivots, near-death experiences, and a lot of fortunate accidents. The clean narrative obscures the actual lessons about experimentation and persistence.

Practical tip: When analyzing successes or failures, explicitly acknowledge the role of factors outside your control. Document decisions in real-time with the actual reasoning, not retrospective narratives. Use probabilistic thinking: “This increased our chances of success by 20%” rather than “This caused our success.”

8. The Planning Fallacy

What it is: We consistently underestimate the time and resources required to complete tasks.

PM Application: Every product manager has promised a feature in “just two sprints” that took six months. Dobelli reveals this isn’t just optimism – it’s a systematic bias where we imagine ideal scenarios and ignore the myriad ways things typically go wrong.

The compound effect in product development: In product management, the planning fallacy is particularly devastating because delays cascade. A two-week delay in the design phase pushes back development, which pushes back testing, which pushes back launch. Marketing campaigns are mistimed, seasonal opportunities are missed, and competitor advantages are lost.

Real-world example: The initial iPhone was announced in January 2007 but didn’t ship until June, and even then with significant features missing (like copy-paste). Despite Apple’s resources and experience, they fell victim to the planning fallacy. Steve Jobs later admitted they barely made the delayed deadline and considered delaying further.

Practical tip: Use “reference class forecasting” – look at how long similar projects actually took, not how long you hope this one will take. Add a “chaos buffer” of 50% to initial estimates. Track your estimation accuracy over time to calibrate your personal planning fallacy factor.
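That calibration loop can be sketched in a few lines. Everything here is an illustrative assumption, not something Dobelli prescribes: the median-ratio rule, the 50% buffer default, and the sample numbers are all invented for the example.

```python
# Illustrative sketch: derive a personal "planning fallacy factor" from
# past (estimated, actual) durations, then apply it plus a 50% chaos
# buffer to a new estimate. The median-ratio rule and the sample
# numbers are assumptions for illustration.
from statistics import median

def planning_factor(history):
    """Median ratio of actual to estimated duration across past projects."""
    return median(actual / estimated for estimated, actual in history)

def calibrated_estimate(raw_estimate, history, chaos_buffer=0.5):
    """Scale a raw estimate by your historical bias, then add a buffer."""
    return raw_estimate * planning_factor(history) * (1 + chaos_buffer)

# Past projects: (estimated weeks, actual weeks)
history = [(2, 4), (6, 9), (4, 6), (3, 3)]

print(planning_factor(history))         # 1.5 -- you typically run 50% over
print(calibrated_estimate(4, history))  # 9.0 -- a "4-week" project, calibrated
```

The point isn't the arithmetic; it's that the correction factor comes from your own reference class of past projects, not from how this project feels.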

9. The Social Proof Bias

What it is: We assume something is correct if many others are doing it.

PM Application: “Everyone is adding a Stories feature” becomes a reason to add Stories to your product, regardless of whether it makes strategic sense. Industry “best practices” get adopted wholesale without considering context. Popular frameworks get implemented because other successful companies use them, not because they fit your specific needs.

The benchmark trap: Dobelli warns that social proof is particularly dangerous in competitive industries because it leads to convergence. Everyone copies everyone else, leading to a sea of sameness where differentiation disappears. The very act of following social proof eliminates the advantage you might have gained from thinking independently.

Real-world example: After Clubhouse’s initial success, dozens of major platforms added audio room features – Twitter Spaces, Facebook Live Audio Rooms, Spotify Greenroom, Discord Stage Channels. Most of these features saw minimal adoption because the companies were following social proof rather than actual user needs.

Practical tip: For every industry trend or competitor feature, ask “What specific problem does this solve for OUR users?” Create a “contrarian thesis” for each major product decision – what would happen if you did the opposite of industry consensus?

The Compound Effect of Biases in Product Development

Dobelli masterfully illustrates how these biases don’t operate in isolation – they compound and interact in dangerous ways. Consider a typical product development scenario:

You notice a competitor’s new feature getting press coverage (availability heuristic), interpret early positive tweets as widespread demand (confirmation bias), continue investing despite poor initial metrics because you’ve already committed resources (sunk cost fallacy), and benchmark success only against products that survived long enough to be noticed (survivorship bias).

This cascade of errors can transform a minor misjudgment into a major product failure. But it gets worse: these biases create feedback loops. The narrative fallacy causes us to create a compelling story about why this feature is essential, which triggers social proof as team members align around the narrative, which strengthens confirmation bias as we collectively seek supporting evidence.

The Bias Stack in Action: Imagine a product team deciding to add blockchain features to their app in 2021. The availability heuristic makes blockchain seem essential (constant media coverage). Social proof kicks in (everyone’s doing it). The halo effect from successful crypto companies makes it seem like a sure win. Confirmation bias leads to cherry-picking success stories. The narrative fallacy creates a compelling story about being “pioneers in Web3.” Once development starts, sunk cost fallacy prevents abandoning the project even as user interest wanes. The result? Resources wasted on features users never wanted, while core product improvements languish.

Practical Frameworks for Bias-Resistant Decision Making

The Pre-Mortem Technique

Before launching a feature, imagine it has failed spectacularly. Work backward to identify what went wrong. This technique, which Dobelli endorses, helps surface hidden assumptions and biases before they become costly mistakes.

How to run an effective pre-mortem:

  1. Gather the team and announce: “It’s six months from now. This feature has failed catastrophically. What happened?”

  2. Have everyone write individual narratives of failure (prevents groupthink)

  3. Share and cluster the failure modes

  4. For each failure mode, identify early warning signs

  5. Build monitoring and checkpoints for these warning signs into your launch plan

The pre-mortem is particularly powerful because it legitimizes pessimism in a culture that often punishes it. It gives permission to voice doubts that confirmation bias would normally suppress.

The Outside View

Force yourself to look at your product decisions as if you were an external consultant. What would someone with no emotional investment in the current roadmap recommend? This perspective shift helps neutralize ego-protective biases.

Techniques for achieving the outside view:

  • Write a brief as if hiring a consultant, listing only facts, not interpretations

  • Ask someone from a different team to review your product strategy

  • Use the “board of advisors” mental model: What would Warren Buffett/Elon Musk/your favorite PM think?

  • Apply the “competitor test”: If your biggest competitor was considering this exact decision, what would you advise them?

The Decision Journal

Document your predictions and reasoning when making significant product decisions. Review these regularly to calibrate your judgment and identify recurring biases. Dobelli emphasizes that without feedback loops, we never improve our decision-making.

What to track in your decision journal:

  • The decision and date

  • Your confidence level (0-100%)

  • Key assumptions you’re making

  • What evidence you’re relying on

  • What evidence you might be ignoring

  • Which biases might be affecting you

  • Predicted outcomes and timelines

  • Actual outcomes (reviewed quarterly)

Over time, patterns emerge. Maybe you’re consistently overconfident about technical complexity (planning fallacy) or underestimate competitor responses (availability heuristic). This self-knowledge becomes a personal bias correction factor.
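As a minimal sketch of what such a journal might look like in practice — the entry fields mirror the checklist above, but the review logic (comparing hit rate against average stated confidence) is an assumed convention, and all sample decisions are invented:

```python
# Illustrative sketch of a decision-journal entry and a quarterly review.
# Field names mirror the checklist above; the review rule and sample
# entries are assumptions for illustration.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DecisionEntry:
    decision: str
    date: str
    confidence: int                         # 0-100, stated at decision time
    assumptions: list = field(default_factory=list)
    outcome_correct: Optional[bool] = None  # filled in at quarterly review

def review(journal):
    """Hit rate vs. average stated confidence among reviewed entries."""
    reviewed = [e for e in journal if e.outcome_correct is not None]
    if not reviewed:
        return None
    hit_rate = 100 * sum(e.outcome_correct for e in reviewed) / len(reviewed)
    avg_conf = sum(e.confidence for e in reviewed) / len(reviewed)
    return hit_rate, avg_conf

journal = [
    DecisionEntry("Ship onboarding redesign", "2025-01-10", 80,
                  ["New flow lifts activation"], outcome_correct=True),
    DecisionEntry("Add AI summaries", "2025-02-03", 90,
                  ["Users want summaries"], outcome_correct=False),
]

hit_rate, avg_conf = review(journal)
print(f"Right {hit_rate:.0f}% of the time at {avg_conf:.0f}% average confidence")
```

A spreadsheet works just as well; what matters is that confidence is recorded before the outcome is known, so the gap between the two numbers can't be retrofitted by the narrative fallacy.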

The Reference Class Forecasting Method

Instead of thinking about your specific situation, look at the reference class – similar situations across many cases. This technique, which Dobelli borrowed from Daniel Kahneman, is particularly powerful for combating the planning fallacy and optimism bias.

Application example: Instead of asking “How long will our mobile app redesign take?” ask “How long do mobile app redesigns typically take for companies of our size?” Instead of “Will users adopt our new social feature?” ask “What percentage of social features added to non-social products achieve meaningful adoption?”

Biases in Team Dynamics

The book brilliantly extends beyond individual biases to team dynamics – crucial for product managers who facilitate group decisions:

Social Proof in Sprint Planning: Teams often converge on solutions too quickly because everyone assumes others have done the analysis. Dobelli calls this “pluralistic ignorance” – everyone privately doubts but publicly agrees because they think everyone else truly believes.

Example: A team discussing a complex technical architecture. Nobody fully understands the proposed solution, but because the senior engineer seems confident, everyone nods along. Each person assumes others understand it better, so they don’t ask clarifying questions. The result is a team committed to a plan nobody fully comprehends.

Authority Bias in Stakeholder Management: We overweight opinions from senior leaders even when junior team members have more relevant expertise. Dobelli notes this bias is particularly strong in hierarchical organizations where disagreeing with authority carries career risk.

Example: The CEO suggests adding a feature they saw at a conference. Despite user research showing no demand and the engineering team warning about technical debt, the feature gets prioritized. The PM, caught between authority bias and evidence, often sides with authority.

Groupthink in Product Strategy: The desire for harmony can suppress critical evaluation of product direction. Dobelli identifies several conditions that foster groupthink: cohesive groups, insulation from outside opinions, directive leadership, and high stress.

Example: A close-knit product team that’s been together for years develops blind spots. They finish each other’s sentences, share inside jokes, and have developed a shorthand that excludes new perspectives. Their product strategy sessions become echo chambers where challenging the consensus feels like betrayal.

The Responsibility Diffusion Effect: In group settings, individual responsibility dissolves. Everyone assumes someone else has done the deep thinking, the user research, the competitive analysis. Dobelli warns this is particularly dangerous in product committees where decisions happen by consensus.

Example: A product steering committee of eight people reviews feature proposals. Each member assumes others have scrutinized the details, so they focus on high-level strategy. Result: Features get approved that nobody has thoroughly evaluated.

The Organizational Amplification of Biases

Dobelli’s insights become even more powerful when we consider how organizations amplify individual biases:

The Cascade Effect: When a respected team member falls prey to a bias, it cascades through the organization. Their confidence triggers social proof, their seniority triggers authority bias, and soon the entire organization is aligned around a biased decision.

The Documentation Bias: We document successes more thoroughly than failures, creating an organizational memory that suffers from survivorship bias. Post-mortems for failures are often brief and blame-focused, while success stories become detailed case studies that ignore the role of luck.

The Promotion Bias: People who were right by luck get promoted, embedding their biased decision-making patterns into organizational culture. Dobelli notes that in environments with long feedback loops (like product development), it’s nearly impossible to distinguish luck from skill in the short term.

The Metrics Theater: Organizations often measure what’s easy rather than what’s important (streetlight effect). This creates a false sense of data-driven decision-making while actually reinforcing biases. We have data, so we must be objective – except the data itself is biased by what we chose to measure.

Biases in Different PM Contexts

During User Research

  • The Framing Effect: How you ask questions determines the answers. “What frustrates you about X?” yields different insights than “How could X be better?”

  • The Recency Effect: Users overweight recent experiences. That bug they hit yesterday looms larger than months of smooth usage.

  • The Social Desirability Bias: Users tell you what they think you want to hear, especially in face-to-face interviews.

During Prioritization

  • The Urgency Effect: We prioritize urgent over important, tactical over strategic.

  • The Completion Bias: We favor finishing nearly-done projects over starting more valuable ones.

  • The Not-Invented-Here Bias: We undervalue solutions we didn’t create ourselves.

During Launches

  • The Overconfidence Effect: We underestimate risks and overestimate our preparation.

  • The Illusion of Control: We think we can manage more variables than we actually can.

  • The Fundamental Attribution Error: If the launch succeeds, it’s our brilliant strategy. If it fails, it’s market conditions.

The Paradox of Awareness

Dobelli honestly addresses a crucial limitation: knowing about biases doesn’t make you immune to them. Like optical illusions that still trick your eyes even when you know they’re illusions, cognitive biases continue to influence us despite our awareness.

This creates what Dobelli calls “meta-biases” – biases about our biases:

  • The Bias Blind Spot: We see biases in others but not ourselves

  • The Sophistication Effect: Because we know about biases, we think we’re less susceptible

  • The Overcorrection Bias: Sometimes we overcompensate, creating new errors in the opposite direction

The value isn’t in achieving bias-free thinking (impossible) but in creating systems and processes that account for these predictable errors. This is where product management frameworks like jobs-to-be-done, design sprints, and A/B testing shine – they’re essentially bias-mitigation systems disguised as productivity tools.

Why awareness alone isn’t enough: Dobelli uses the analogy of optical illusions. Even when you know the two lines are the same length, your brain still sees them as different. Similarly, even when you know about confirmation bias, your brain still preferentially notices confirming evidence. The solution isn’t to trust your corrected perception but to use a ruler – objective measurement tools that bypass perception altogether.

Critiques and Limitations

While invaluable, the book has limitations product managers should consider:

  1. Overwhelming Catalog: 99 biases can feel paralyzing. Focus on the 10-15 most relevant to your current challenges rather than trying to remember all of them. Dobelli himself acknowledges this challenge and suggests treating the book as a reference manual rather than trying to memorize everything.

  2. Limited Tech Context: Because the book is written from a general business perspective, you’ll need to translate its examples to digital product contexts. Dobelli’s examples often come from finance, journalism, and traditional business, requiring mental translation to apply them to software products, user behavior, and digital ecosystems.

  3. Individual vs. Systematic: The book focuses on individual decision-making but could explore more deeply the organizational structures that amplify or mitigate biases. While Dobelli touches on group dynamics, a product manager might want more on how to design organizational processes that systematically reduce bias.

  4. The Action Gap: Knowing about biases is one thing; Dobelli could provide more concrete tools and checklists for daily application. The book is stronger on diagnosis than treatment, though this may be intentional – forcing readers to develop their own contextual solutions.

  5. Cultural Context: Most examples come from Western, particularly European and American, business contexts. Biases may manifest differently in other cultural contexts, and some mitigation strategies may not translate across cultures.

  6. The Cynicism Risk: After reading about 99 ways your brain fails you, it’s easy to become paralyzed by doubt or cynical about all decision-making. Dobelli could better balance highlighting biases with celebrating human judgment when properly channeled.

How to Apply This Book

Week 1-2: Awareness Building

  • Read the book with a highlighter, marking biases you recognize from your own experience

  • Share top 3 most relevant biases with your team

  • Start a “bias of the day” Slack channel where team members can share examples they spot

Week 3-4: Bias Spotting

  • During meetings, privately note which biases might be at play

  • Don’t call them out yet – just observe and document

  • Review your last three major product decisions through the lens of the biases

Week 5-6: Process Integration

  • Choose 2-3 critical biases for your team’s current challenges

  • Design specific countermeasures (e.g., anonymous voting to counter authority bias)

  • Add bias checks to existing ceremonies (retrospectives, decision reviews)

  • Create a “bias checkpoint” template for major decisions

Week 7-8: Tool Development

  • Build a decision-making template that explicitly addresses top biases

  • Create a “red team” rotation for important product decisions

  • Develop bias-aware user research protocols

  • Design metrics dashboards that counter availability heuristic

Month 3: Cultural Evolution

  • Rotate “bias of the month” discussions in team meetings

  • Celebrate caught biases as learning moments, not failures

  • Build a team vocabulary around cognitive errors

  • Share bias post-mortems to normalize discussing mental errors

Ongoing: Personal Practice

  • Keep a decision journal for your top 3 decisions each week

  • Partner with another PM for monthly bias reviews

  • Build personal checklists for your most common biases

  • Track prediction accuracy to calibrate confidence levels

The Meta-Learning

Perhaps the book’s greatest gift isn’t the specific biases but the meta-learning it provides: the humble recognition that our brains, despite being remarkable pattern-recognition machines, are riddled with predictable flaws. For product managers, whose job is essentially to be right about the future more often than wrong, this humility is both sobering and liberating.

It’s sobering because it reveals how many ways our judgment can fail. Every feature prioritization, every user research interpretation, every strategic decision is vulnerable to multiple simultaneous biases. The confident PM who “trusts their gut” is revealed to be trusting a gut full of cognitive errors.

It’s liberating because it explains why smart people make bad product decisions – it’s not incompetence but human nature. That failed feature wasn’t necessarily a personal failure; it might have been the predictable result of unchecked biases. And unlike incompetence, we can build systems to manage human nature.

This meta-learning extends to how we view others’ decisions. Understanding biases builds empathy. When stakeholders make seemingly irrational demands, when users behave in puzzling ways, when team members reach bizarre conclusions – often, there’s a bias at work. Understanding the bias helps us address the root cause rather than the symptom.

Building a Bias-Aware Product Culture

The long-term value of Dobelli’s work isn’t in individual bias awareness but in building bias-aware product cultures. This means:

Psychological Safety: Teams must feel safe admitting when biases affected their judgment. If admitting bias leads to punishment, biases go underground but don’t disappear.

Systematic Processes: Build bias checks into your product development lifecycle:

  • Research protocols that explicitly counter confirmation bias

  • Prioritization frameworks that address urgency effect and sunk cost

  • Launch procedures that account for overconfidence and planning fallacy

  • Retrospectives that identify biases, not just outcomes

Language and Vocabulary: Give your team the language to discuss biases without judgment. “I think we might be falling into sunk cost fallacy here” should be as normal as “We need more user research.”

Celebrating Bias Detection: When someone spots a bias before it causes damage, celebrate it. Create a “Bias Catcher of the Month” award. Make bias detection a valued skill, not a form of criticism.

External Perspectives: Regularly bring in outside voices – advisors, consultants, new team members – who aren’t subject to your organization’s accumulated biases. Fresh eyes see what familiar ones miss.

The Competitive Advantage of Bias Awareness

In product management, small advantages compound into market dominance. A product team that makes 10% better decisions, 10% more often, will dramatically outperform competitors over time. Bias awareness provides exactly this kind of systematic advantage.

Consider two product teams:

  • Team A relies on intuition, experience, and traditional decision-making

  • Team B uses the same inputs but adds bias-aware processes and frameworks

Team B will:

  • Kill failing projects faster (no sunk cost fallacy)

  • Learn from failures more effectively (no narrative fallacy)

  • Spot real user needs through the noise (no availability heuristic)

  • Make more accurate timeline predictions (no planning fallacy)

  • Avoid feature bloat (understanding feature-positive effect)

Over months and years, these advantages compound. Team B ships more valuable features, wastes fewer resources, and responds more accurately to market signals. Their product gradually pulls ahead, not through any single brilliant decision but through consistently better judgment.

Personal Reflection Exercises

Dobelli’s framework becomes most powerful when internalized through personal reflection:

The Bias Autobiography: Write about three major product decisions you’ve made. For each, identify which biases might have influenced you. How might the outcome have differed with bias-aware processes?

The Prediction Audit: Review predictions you made six months ago. Which came true? Which didn’t? Can you identify biases that led to incorrect predictions?

The Devil’s Dictionary: For each of your current product beliefs, write the strongest possible counter-argument. Which beliefs survive this scrutiny? Which reveal themselves as bias-driven assumptions?

The Outside Observer: Imagine a consultant reviewing your product strategy with complete objectivity. What would puzzle them? What would seem obviously biased? What sacred cows would they question?

Five Immediately Actionable Takeaways

  1. The 10-10-10 Rule: Before major decisions, ask how you’ll feel about this in 10 minutes, 10 months, and 10 years. This counters multiple biases including hyperbolic discounting and emotional reasoning. Set a reminder to actually check your feelings at these intervals.

  2. The Stranger Test: Would you recommend this course of action to a stranger in the same situation? This simple question cuts through ego-protective biases and sunk cost fallacies. Even better: Actually ask a stranger (or someone from a different team).

  3. The Base Rate Check: Before getting excited about a new opportunity, ask “What percentage of similar attempts have succeeded?” Ground your optimism in statistical reality. Build a database of base rates for common product decisions.

  4. The Inversion Principle: Instead of asking “What could go right?” systematically explore “What could go wrong?” Our brains naturally focus on positive outcomes; deliberate inversion balances this tendency. Make “How could this fail?” a standard question in every product review.

  5. The Confidence Calibration: When making predictions, assign confidence levels (e.g., 60% confident this feature will increase retention). Track accuracy over time to calibrate your judgment. Most PMs discover they’re overconfident and need to adjust their certainty downward.
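One hedged way to operationalize that calibration tracking is a Brier score over your logged predictions. The scoring rule is standard, but applying it to PM predictions this way is my illustration, and the sample numbers are invented:

```python
# Illustrative sketch: score prediction calibration with a Brier score.
# Lower is better; always guessing 50% scores 0.25. Sample predictions
# are invented for illustration.

def brier_score(predictions):
    """predictions: list of (stated probability, outcome as 0 or 1)."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# "60% confident this feature lifts retention" -> (0.6, 1 if it did)
preds = [(0.9, 1), (0.8, 0), (0.6, 1), (0.7, 0)]
print(round(brier_score(preds), 3))  # 0.325 -- worse than coin-flip guessing
```

A score creeping above 0.25 is a concrete signal that your stated confidence is running ahead of your accuracy and needs adjusting downward.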

Advanced Applications for Senior PMs

For experienced product managers, Dobelli’s framework enables advanced applications:

Bias-Based User Segmentation: Understand that different user segments fall prey to different biases. Power users might suffer from feature-positive effect (wanting every possible feature), while new users might be swayed by social proof (what their friends use).

Competitive Bias Analysis: Analyze competitors not just for features but for the biases driving their product decisions. Are they falling for social proof by copying others? Are they stuck in sunk cost with legacy features?

Stakeholder Bias Mapping: Map which biases each stakeholder is most susceptible to. The sales team might overweight availability heuristic (the last lost deal), while engineering might fall for technical sophistication bias (choosing complex solutions).

Bias-Aware Roadmapping: Design roadmaps that explicitly account for planning fallacy, overconfidence, and optimism bias. Build in buffer time, kill criteria, and checkpoint reviews.

Market Timing Through Bias: Sometimes the key to product success is understanding market biases. Launching when the market is in the grip of social proof (everyone needs X) or availability heuristic (recent event makes Y seem critical) can accelerate adoption.

The Philosophical Implications

Beyond practical applications, Dobelli’s work raises profound questions about product management and human judgment:

If our brains are so flawed, should we trust them at all? The answer isn’t to abandon human judgment but to augment it with systems that compensate for known flaws. Like a pilot who trusts instruments over intuition in fog, product managers need objective frameworks for navigating uncertainty.

The book also challenges the “visionary founder” mythology. If even brilliant minds fall prey to predictable biases, perhaps successful products come less from genius insights and more from systematic processes that reduce error rates. This democratizes product management – you don’t need to be a visionary if you can build bias-resistant systems.

Final Verdict: Essential Reading

“The Art of Thinking Clearly” should be required reading for every product manager, not because it will make you a perfect decision-maker (nothing will), but because it provides the vocabulary and framework to discuss, recognize, and mitigate the cognitive errors that plague product development.

In a field where the difference between success and failure often comes down to a few critical decisions, understanding these 99 cognitive errors isn’t just intellectual curiosity – it’s professional survival. The book won’t eliminate your biases, but it will make you a more thoughtful, systematic, and ultimately successful product manager.

The next time you’re in a heated roadmap discussion, user research session, or strategy meeting, you’ll find yourself recognizing these patterns in real-time. And that moment of recognition – that pause before the bias takes hold – is where better products are born.

More importantly, Dobelli’s work fosters intellectual humility. In an industry full of confident proclamations and bold visions, the ability to say “I might be wrong because of X bias” is refreshing and valuable. This humility, paradoxically, leads to more confident decision-making because it’s grounded in systematic thinking rather than gut feelings.

Rating: 9/10 for Product Managers

Who should read this: Every PM, but especially those with 2+ years of experience who have enough context to recognize these biases in action. New PMs might find it overwhelming without real-world context.

Read this before: Making your next big product decision, conducting user research, or facilitating team strategy sessions. Also invaluable before performance reviews, reorganizations, or any situation where clear thinking matters.

Pair this with:

  • “Thinking, Fast and Slow” by Daniel Kahneman for deeper theory

  • “Influence” by Robert Cialdini for persuasion biases

  • “Nudge” by Thaler & Sunstein for behavioral design applications

  • “Superforecasting” by Philip Tetlock for prediction improvement

  • “The Black Swan” by Nassim Taleb for understanding randomness

The One Sentence That Changes Everything: “The good news is that we can learn to recognize and compensate for these biases – the bad news is that it requires constant vigilance and systematic processes, not just good intentions.”



📝 The Psychology Behind B2B Buying: What Kahneman, Cialdini, and Pink Teach Us About Personas

The $2.3 Million Decision Made in 200 Milliseconds

Tomasz, a VP of Operations at a mid-size manufacturing company, had spent three months evaluating enterprise resource planning systems. He’d created spreadsheets comparing 47 features across 8 vendors. He’d attended demos, read analyst reports, and built detailed ROI models.

Then he sat through a presentation from the final two contenders.

Vendor A had better features, lower total cost of ownership, and stronger references. Their presentation was comprehensive—packed with data, specifications, and implementation timelines.

Vendor B had a good product but wasn’t objectively superior. However, their sales
