💜 PRODUCT ART 💜

Why Product Roadmaps Are Destroying Strategic Thinking | Decision Fatigue: How to Protect Your Team from Cognitive Burnout

Issue #222

Destare Foundation, Alex Dziewulska, Sebastian Bukowski, and 3 others

Oct 21, 2025 ∙ Paid

In today's edition, among other things:

💜 Editor’s Note - Why Product Roadmaps Are Destroying Strategic Thinking (by Alex Dziewulska)

💜 Decision Fatigue: How to Protect Your Team from Cognitive Burnout (by Ɓukasz DomagaƂa)

đŸ’Ș Interesting opportunities to work in product management

đŸȘ Product Bites - small portions of product knowledge

📚 Monthly Book Club for Product Managers

đŸ”„ MLA week #31

Join Premium to get access to all content.

It will take you almost an hour to read this issue. Lots of content (or meat)! (For vegans - lots of tofu!).

Grab a notebook 📰 and your favorite beverage đŸ”â˜•.

DeStaRe Foundation

Editor’s Note by Alex 💜

Why Product Roadmaps Are Destroying Strategic Thinking


Here comes my fav season - no, it’s not x-mas. It’s yearly strategy meetings and roadmap planning. We will all gather in our conference rooms and we will systematically tell a lie. We will dress up Gantt charts, Excel rows, and our plans as strategy. And roadmaps. We will waste two months, only to be back to putting out fires in February. I’m tired of this drama.

And I know you are too.

Every year, we perform this ritual. We convince ourselves that this time will be different. This time our estimates will be accurate. This time stakeholders will understand that dates are tentative. This time we’ll actually follow the plan. But here’s what fifteen years in product has taught me: our roadmap isn’t a strategy—it’s a psychological security blanket that’s suffocating innovation.

Your product roadmap is lying to you. Not maliciously—it genuinely believes its own fiction. But every feature date you commit to, every quarterly plan you present with confidence, every stakeholder you appease with a timeline—you’re participating in theater that systematically destroys your ability to think strategically. The entire product management profession has convinced itself that detailed planning equals strategic thinking, when research across organizational behavior, behavioral economics, and strategic management proves the opposite: traditional roadmaps are the single greatest obstacle to innovation in modern product organizations.

We’ve turned product management into a commitment factory. Every quarter, thousands of product managers sit in conference rooms, presenting Gantt charts disguised as strategy, making promises about features they haven’t validated to stakeholders who mistake certainty for competence. MIT Sloan research found that conventional planning systems actively disrupt learning within strategic experiments. A systematic analysis examining startups identified premature scaling—switching to growth mode before achieving product/market fit—as responsible for 70% of startup failures. Yet here we are, still playing this game, because admitting the truth would mean acknowledging we’ve been performing strategic theater instead of doing strategic work.

Think about your last roadmap review. How much time did you spend justifying dates versus understanding customer problems? How many features on that roadmap came from actual discovery versus stakeholder appeasement? Recent industry research reveals troubling patterns: when senior executives influence roadmaps, teams focus overwhelmingly on outputs over outcomes. Product managers report dramatically lower confidence when ideas come from leadership rather than discovery—a five-fold difference in some studies.

Daniel Kahneman’s Nobel Prize-winning research on cognitive biases explains why roadmaps feel essential while being destructive. The planning fallacy—our systematic tendency to underestimate time, costs, and risks—makes every roadmap a work of fiction. In one classic planning-fallacy study by Roger Buehler and colleagues, when students gave 99% confidence intervals for thesis completion, only 45% finished within those timeframes. We’re not just bad at estimating; we’re predictably, systematically, catastrophically overconfident about our ability to predict the future.

The illusion of control, first documented by psychologist Ellen Langer in 1975, reveals something more insidious. Simply being assigned to a management role leads to an illusory sense of personal control over outcomes that are actually beyond reach. Creating a roadmap triggers this bias through six psychological factors: personal action (you made the plan), familiarity (planning feels known), advance knowledge (defining desired outcomes), success attribution (taking credit when things work), positive mood (optimism about the future), and personal involvement (deep investment in the plan).

Barry Staw’s foundational 1976 research on escalation of commitment shows why roadmaps become prisons. Once we’ve committed to a plan publicly, we continue pursuing it despite mounting evidence of failure. The sunk cost fallacy compounds this—the more we invest in a roadmap, the harder it becomes to abandon. Real-world disasters follow this pattern: Denver International Airport went $2 billion over budget, Berlin Brandenburg Airport hit €6.5 billion over budget and opened 10 years late. In software development contexts, research indicates that a significant percentage of projects experience escalation, with managers who initiate projects being least likely to perceive them as failing.

Goodhart’s Law delivers the killing blow: “When a measure becomes a target, it ceases to be a good measure.” Once we measure success by roadmap adherence, teams optimize for feature delivery over customer value, on-time shipping over building the right thing, hitting milestones over learning and adaptation. The metric corrupts the very behavior it’s meant to measure.

The companies dominating their markets have already abandoned traditional roadmaps—they just don’t advertise it. Spotify’s “Think It, Build It, Ship It, Tweak It” model explicitly rejects delivery dates. Henrik Kniberg explains their philosophy: “We don’t launch on date, we launch on quality.” Product ideas have no deadline in the Think It stage because they’re “not worth building until we can show a compelling narrative and runnable prototype.” This approach enabled viral growth from 0 to 1 million US paying subscribers in approximately one year.

Netflix’s Strategy/Metrics/Tactics framework, documented by former VP of Product Gibson Biddle, replaces roadmaps with outcome-driven strategy. They separate high-level product hypotheses from proxy metrics measuring success and experiments testing those strategies. Biddle’s philosophy cuts through the mythology: “Roadmaps are a prototype for your strategy—not commitments.” This enabled Netflix to improve monthly churn from 10% to 2%, survive the dot-com bubble, and expand successfully from DVDs to streaming to original content to gaming.

Amazon’s Working Backwards approach starts with a customer press release written before any development begins. If the team can’t write a compelling press release explaining why customers should care, they don’t build the product. No roadmap, no timeline—just relentless focus on customer value. This framework built everything from AWS to Alexa.

Teresa Torres, who’s trained over 17,000 product managers globally, advocates continuous discovery with weekly customer touchpoints at a minimum. Harvard Business School research shows that 95% of products fail, primarily because they don’t address real customer needs. The solution isn’t better roadmaps—it’s replacing roadmap planning with continuous learning. Companies using her approach report eliminating feature factory dynamics, reducing cognitive biases through continuous feedback, achieving faster learning, and maintaining fresh insights as markets evolve.

I’ve watched brilliant product teams turn into feature-copying machines, spending days analyzing competitor roadmaps instead of understanding what makes their company uniquely powerful. Melissa Perri arrived at one job to find 20 features on a whiteboard from the previous year’s roadmap, many written into client contracts but never delivered. Teams were “crunching to finish these features and ship them to customers” regardless of whether they remained relevant.

The contract mentality transforms roadmaps from strategic tools into political documents. Industry research shows that roadmap presentations become interrogations where “everyone has seen the deck already” and you’re “fielding a barrage of questions under what feels like a massive microscope.” The presentation isn’t about sharing information—it’s about “evangelizing your product strategy and persuading stakeholders.”

This political theater has devastating consequences. Paul Brown captures it perfectly: “Discovery dies: Teams stop asking questions because the roadmap already has the answers. And when the promised results don’t materialize, discovery gets blamed as ‘wasted time.’” Early commitments shut down better paths that emerge later. Once a roadmap locks you in, those alternatives evaporate. You stop comparing; you just comply.

Marty Cagan’s assessment after coaching hundreds of teams is damning: “Weak teams plod through the roadmap they have been assigned, month after month” while strong teams focus on achieving outcomes. His two inconvenient truths about product development—that at least half of ideas won’t work and even good ideas require several iterations—expose roadmaps as fundamentally incompatible with reality.

The alternative frameworks aren’t theoretical—they’re battle-tested at scale. Outcome-based planning focuses on business results and customer value rather than features and dates. Teams receive clear objectives and key results, then determine solutions through discovery and experimentation. The Now/Next/Later framework, used by over 7,000 product teams, organizes work into time horizons without fixed deadlines. Theme-based roadmaps organize around strategic problems rather than solution commitments. OKR-based planning makes objectives the roadmap itself, with teams determining how to achieve them.

Research consistently shows these approaches deliver superior results. Companies using agile, outcome-based approaches demonstrate significantly faster revenue growth and higher profits than traditional planning organizations. The evidence spans academic research, industry analysis, and real-world results from companies like Google, Intel, Spotify, and thousands of others who’ve made the shift.

The implementation path is clear. Start by flipping your resource allocation—spend 60-70% of strategic analysis time on customer insight and capability development, not competitive intelligence. Transform your meetings from competitive review sessions to capability development workshops. Change your metrics from feature delivery to customer problem depth. Build bias countermeasures into planning: devil’s advocacy, blind analysis, minimum customer contact requirements. Create learning systems that share customer insights, not just competitive intelligence.

I’m not asking you to abandon all structure—I’m challenging you to abandon the illusion that detailed feature roadmaps create strategic clarity. Every hour you spend crafting beautiful roadmap slides is an hour not spent understanding customers. Every commitment you make to a feature date is a door you close to better solutions. Every stakeholder you appease with false certainty is trust you’ll lose when reality intrudes.

The most successful companies have already made this shift. They maintain strategic focus through vision and objectives while preserving tactical flexibility through continuous discovery. They measure success by outcomes achieved, not features shipped. They treat uncertainty as reality, not something to hide behind confident roadmaps.

Your organization has a choice. Continue the comfortable mediocrity of roadmap theater, where everyone pretends that planning equals strategy, where political appeasement matters more than customer value, where the appearance of control substitutes for actual learning. Or embrace the productive discomfort of genuine strategic thinking—where you admit you don’t know all the answers, where you learn through experimentation, where you measure success by impact not output.

The evidence is overwhelming. The business case is compelling. The only question is whether you have the courage to stop performing strategic theater and start doing strategic work. Will you continue participating in planning rituals that actively prevent innovation? Or will you lead the transformation from feature factories to learning organizations?

Your next roadmap review is coming. Will you present another fictional timeline of uncommitted features? Or will you stand up and say what every product manager knows but fears to admit: “We don’t know what we’ll build three months from now—and that’s exactly how it should be.”



đŸ’Ș Product job ads from last week

Do you need support with recruitment, career change, or building your career? Schedule a free coffee chat to talk things over :)

  1. Senior Product Manager - booksy

  2. Product Manager - ƻabka Group

  3. Senior Product Manager - Dealfront

  4. Product Manager - 12Go

  5. Product Manager - wayo.tech



đŸȘ Product Bites (3 bites đŸȘ)

đŸȘ The Endowment Effect: Why Users Overvalue What They Already Have

How Ownership Psychology Shapes Product Strategy and Feature Adoption

Your competitor just launched a feature that’s objectively better than yours. Side-by-side comparisons show it’s faster, more intuitive, and cheaper. Yet your users aren’t switching. They’re not even trying the alternative. When you ask why, they say your solution “works fine” and switching “isn’t worth the hassle”—even though the competitor offers free migration.

Welcome to the endowment effect in action.

The Endowment Effect is a cognitive bias where people ascribe more value to things merely because they own them. Once something becomes “mine,” its perceived worth increases dramatically—often by 2-3x compared to identical items we don’t own. In behavioral economics, this is one of the most powerful forces shaping human decision-making, and in product management, it’s both your greatest asset and your most formidable competitor.

Daniel Kahneman, Jack Knetsch, and Richard Thaler’s groundbreaking research in 1990 demonstrated this perfectly. They gave half of their study participants a coffee mug and asked them to set a selling price. The other half didn’t receive a mug but were asked how much they’d pay to buy one. The result? Mug owners demanded twice as much to sell their mugs as non-owners were willing to pay. Same mug, same people—but ownership doubled perceived value.

For product teams, this insight is transformative. It means that getting users to adopt your product—to feel ownership—creates a moat that competitors struggle to cross. But it also means displacing incumbent solutions requires far more than marginal improvements. You’re not just competing with features; you’re competing with the psychological weight of ownership.

The Psychology of “Mine”

Think of your brain as having two different pricing systems. One evaluates things you don’t own (buyer mode), and the other evaluates things you do own (seller mode). These systems use wildly different math.

In buyer mode, we’re critical and cautious. We focus on what’s missing, what could go wrong, and whether we really need this. We anchor on price and look for reasons to save money.

In seller mode, we’re defensive and optimistic. We focus on benefits, past investments, and unique qualities. We anchor on value and look for reasons to hold on.

The endowment effect is the gap between these two modes. And it’s not small—research consistently shows 2-3x valuation differences for identical items.

Why this happens in our brains:

  1. Loss Aversion: Losing something we own feels roughly twice as painful as gaining something new feels good. Kahneman’s prospect theory shows that losses loom larger than gains, making us irrationally protective of the status quo.

  2. Effort Justification: We’ve invested time learning our current solution. That sunk cost creates psychological ownership—we justify past effort by overvaluing what we learned to use.

  3. Identity Fusion: Products become part of how we see ourselves. Apple users don’t just own iPhones—being an “iPhone person” becomes part of their identity, making switching feel like betraying themselves.

When Evernote users resisted migrating to Notion despite Notion’s superior features, it wasn’t stubbornness—it was the endowment effect. Years of notes, organizational systems, and workflows had created deep psychological ownership. Notion wasn’t competing against Evernote’s features; it was competing against users’ identities as “Evernote people.”

The Three Manifestations in Product Management

The endowment effect shows up differently depending on whether you’re defending an incumbent position or challenging one. Understanding these manifestations helps you strategize accordingly.

1. The Incumbent’s Moat: Why Users Don’t Leave

If you’re the established solution, the endowment effect is your secret weapon. Users overvalue your product simply because they already use it.

How it protects you:

  • Users tolerate more bugs in tools they already own than in tools they’re evaluating

  • Feature gaps that would disqualify you during evaluation get excused after adoption

  • Competitors need 10x improvements, not 2x, to overcome ownership psychology

  • Switching costs feel larger than they actually are (psychological barrier exceeds functional barrier)

Real-world example: Microsoft Office dominated for decades despite Google Workspace offering free, cloud-based collaboration. The endowment effect made Office’s installed base remarkably sticky—users owned their workflows, keyboard shortcuts, and muscle memory. Google needed radical advantages (real-time collaboration, zero local storage) to overcome ownership inertia.

Strategic implication: If you’re an incumbent, your job isn’t just to add features—it’s to deepen ownership. More customization, more invested time, more personalization. Every additional element of ownership strengthens your moat.

2. The Challenger’s Burden: Why 10x Better Isn’t Enough

If you’re challenging an incumbent, the endowment effect is your primary adversary. You’re not competing on features alone—you’re asking users to give up something they psychologically own.

Why it blocks you:

  • Users evaluate your product in buyer mode (critical, skeptical) but evaluate incumbents in seller mode (generous, forgiving)

  • Your feature advantages need to overcome ownership attachment, not just match functionality

  • Even free products face resistance because switching costs include psychological loss

  • Users irrationally fear change more than they rationally desire improvement

Real-world example: Slack faced this when targeting Microsoft Teams users. Even though many enterprises found Slack superior, Teams’ integration with Microsoft 365 created deep ownership—Teams was already “theirs.” Slack couldn’t just be better; they needed to be worth the psychological pain of switching.

Strategic implication: If you’re a challenger, marginal improvements fail. You need either 10x better experiences, zero switching costs (seamless migration), or fundamentally different value propositions that make comparison irrelevant.

3. The Feature Adoption Trap: Why Users Ignore Your New Features

The endowment effect doesn’t just apply to products—it applies to workflows within products. Users own their current way of doing things, making new features surprisingly hard to adopt even when objectively superior.

Why it happens:

  • Users have already invested in learning existing features—new features compete with that investment

  • Current workflows feel comfortable and “theirs”—new workflows feel foreign and risky

  • Status quo bias makes “keep doing what works” feel safer than “try something potentially better”

Real-world example: Adobe Photoshop users notoriously ignore newer, more efficient features because they’ve mastered older workflows. The ownership of their current methodology outweighs the potential efficiency gains of new tools.

Strategic implication: Feature adoption isn’t just about building great features—it’s about helping users let go of what they already own.

Designing for Ownership: The Incumbent Strategy

If you’re building a product that users already adopt, your strategic goal is to amplify the endowment effect. Here’s how to deepen psychological ownership:

1. Maximize Customization and Personalization

The more users customize your product, the more it becomes uniquely “theirs.” Every personalization choice increases ownership attachment.

Tactical implementation:

  • Let users customize interfaces, themes, and layouts (visual ownership)

  • Enable workflow customization (behavioral ownership)

  • Allow naming, tagging, and organizing systems (cognitive ownership)

  • Support plugins, extensions, or integrations (ecosystem ownership)

Example: Notion’s blank-canvas approach creates extreme ownership. Every workspace is unique to its creator. Users invest hours building their perfect systems, making Notion almost impossible to leave—they’d be abandoning their creation, not just a tool.

Measurement: Track customization depth. Users who customize 3+ elements have 4x better retention than default-configuration users.
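
If you log customization events, this is straightforward to operationalize. A minimal sketch in Python, assuming a per-user event stream (the event names are hypothetical, not from any specific analytics tool):

```python
from collections import defaultdict

# Hypothetical customization events pulled from an analytics store;
# the names are illustrative only.
CUSTOMIZATION_EVENTS = {"theme_changed", "layout_saved", "workflow_created", "plugin_installed"}

def customization_depth(events):
    """Count the distinct customization elements each user has touched."""
    depth = defaultdict(set)
    for user_id, event_name in events:
        if event_name in CUSTOMIZATION_EVENTS:
            depth[user_id].add(event_name)
    return {user: len(kinds) for user, kinds in depth.items()}

def retention_by_depth(depth_by_user, retained_users):
    """Compare retention for deeply customized (3+ elements) vs. default-config users."""
    deep = {u for u, d in depth_by_user.items() if d >= 3}
    shallow = set(depth_by_user) - deep

    def rate(group):
        return len(group & retained_users) / len(group) if group else 0.0

    return {"customized_3plus": rate(deep), "default_config": rate(shallow)}
```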

2. Increase Invested Effort Over Time

The more effort users invest, the more they’ll value the product. This isn’t about creating friction—it’s about creating meaningful investment opportunities.

Tactical implementation:

  • Gamification and progression systems (achievement ownership)

  • Content creation features (creative ownership)

  • Historical data and archives (temporal ownership—“years of history here”)

  • Relationships and networks built within the product (social ownership)

Example: Spotify’s carefully curated playlists and “Liked Songs” history create massive ownership. Users don’t just subscribe to music—they own a decade of musical identity. Switching to Apple Music means losing that curated self.

Measurement: Correlate time invested with retention. Find the “ownership threshold”—the point where users have invested enough that churn drops dramatically.

3. Make Data Portable But Migration Painful

This seems contradictory, but it’s strategic brilliance. Offer data export (ethical, builds trust) while ensuring that migration still means losing something valuable.

What can’t be exported:

  • Workflow configurations and customizations

  • Collaborative histories and comments

  • Integration connections and automations

  • Learned preferences and AI personalization

Example: Google Photos offers data export, but migrating means losing face recognition tags, automatic albums, search functionality, and years of organizational metadata. The data is portable; the context and intelligence aren’t.

Ethical boundary: Never hold data hostage. Always enable export. But recognize that data alone isn’t what users own—they own the experience layer built on top of data.

4. Create Identity Associations

When your product becomes part of users’ identity, the endowment effect amplifies. Users don’t just own the product—they own being “the type of person who uses this product.”

Tactical implementation:

  • Build community around your product (social identity)

  • Enable public sharing and profiles (reputation ownership)

  • Create distinctive terminology and culture (tribal identity)

  • Support certification and expertise development (professional identity)

Example: Figma users don’t just use design software—they’re part of the “Figma community.” Conference talks, plugins, design systems shared publicly—all reinforce identity ownership that transcends features.

Overcoming Ownership: The Challenger Strategy

If you’re trying to displace an incumbent, you need strategies specifically designed to overcome endowment effect resistance:

1. The Seamless Migration Strategy

Make switching so effortless that users lose nothing they currently own. Import everything—data, structure, workflows, even muscle memory if possible.

Tactical implementation:

  • One-click imports that preserve structure, not just data

  • Automatic recreation of workflows and customizations

  • Keyboard shortcut compatibility with incumbents

  • Visual similarity during transition period (reduce foreign-ness)

Example: When Superhuman launched, they studied Gmail power users’ keyboard shortcuts and replicated them. Users switching from Gmail didn’t have to abandon their muscle memory—they could transfer ownership of their workflow shortcuts.

Measurement: Track migration completion rates and time-to-first-value post-migration. Success means users feeling they “own” your product within hours, not weeks.
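
Here is one way you might compute both numbers, as a sketch assuming you timestamp migration start, migration completion, and the user's first meaningful action afterward (the field names are invented for illustration):

```python
def migration_metrics(migrations):
    """
    migrations: list of dicts per migrating user with datetime values for
    'started_at', 'completed_at' (None if abandoned), and 'first_value_at'
    (None if the user never reached a meaningful action post-migration).
    """
    started = len(migrations)
    completed = [m for m in migrations if m["completed_at"] is not None]
    reached_value = [m for m in completed if m["first_value_at"] is not None]

    completion_rate = len(completed) / started if started else 0.0
    # Hours from finishing migration to the first meaningful action in the new product.
    hours_to_value = sorted(
        (m["first_value_at"] - m["completed_at"]).total_seconds() / 3600
        for m in reached_value
    )
    median_hours = hours_to_value[len(hours_to_value) // 2] if hours_to_value else None
    return {"completion_rate": completion_rate, "median_hours_to_first_value": median_hours}
```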

2. The 10x Differentiation Strategy

Don’t compete on the incumbent’s terms. Offer something so fundamentally different that comparison becomes irrelevant—you’re not asking users to replace what they own; you’re offering something new to own.

Tactical implementation:

  • Identify jobs the incumbent can’t do (new value, not replacement value)

  • Position as complementary initially, then gradually replace incumbent

  • Focus on new user behaviors, not better versions of old behaviors

  • Create new metrics of success that incumbents don’t measure

Example: Notion didn’t directly compete with Evernote on note-taking. They competed on “building your own workspace”—a fundamentally different value proposition. Users didn’t replace Evernote; they eventually stopped needing it because Notion solved broader problems.

Strategic insight: If you’re 2x better at what incumbents do, you’ll lose to endowment effect. If you’re 10x better at something incumbents don’t do, you win.

3. The Trojan Horse Strategy

Enter organizations through new users who don’t own the incumbent. Build ownership with them first, then let network effects challenge incumbent users.

Tactical implementation:

  • Target new team members who haven’t invested in incumbent workflows

  • Focus on departments the incumbent doesn’t serve well

  • Build viral loops so new users bring existing users

  • Create collaborative features that require others to at least try your product

Example: Slack entered enterprises through small teams and startups, not by displacing Microsoft Lync in Fortune 500 IT departments. By the time large companies noticed Slack, grassroots adoption had created ownership in enough users to overcome corporate incumbent bias.

4. The Gradual Ownership Transfer Strategy

Help users slowly build ownership in your product while still using the incumbent. Don’t force an immediate switch—let ownership transfer naturally.

Tactical implementation:

  • Freemium models that require no commitment

  • Side-by-side usage periods (try us while keeping incumbent)

  • Progressive feature adoption (start with one use case, expand over time)

  • Psychological “trial ownership” (30 days to feel ownership before paying)

Example: Airtable positions as “start for just this one project.” Users don’t abandon their incumbent spreadsheet system—they just try Airtable for one use case. As that use case succeeds, ownership grows, and incumbent dependence shrinks.

The Feature Launch Paradox: Fighting Ownership Within Your Own Product

Here’s where it gets meta: even within your product, users develop endowment of their existing workflows. Launching new, better features means asking users to give up workflows they already own.

Why Feature Adoption Fails Despite Obvious Value

The Current Workflow Endowment:

  • Users own their existing process (even if inefficient)

  • Learning new features means admitting time invested in old way was wasted

  • Change feels like loss, not gain

  • Status quo bias favors “good enough” over “potentially better”

Common mistake: Product teams assume that obviously superior features will naturally get adopted. They don’t. Endowment effect protects existing workflows just as it protects incumbent products.

Strategies for Feature Adoption Against Endowment Effect

1. Make New Features the Default for New Users: New users have no workflow ownership yet. Make them default into better features, then let success stories convert existing users.

2. Gradual Deprecation with Emotional Sensitivity: Don’t kill old features abruptly. Give users time to build ownership of new features before losing old ones. Provide transition paths, not forced migrations.

3. Show Concrete Loss Metrics: Help users see what their current workflow costs them. “You could save 2 hours per week” is more compelling than “this new feature is cool.” Make the endowment cost visible.

4. Enable Hybrid Periods: Let users run old and new workflows simultaneously. Ownership transfers gradually, not instantly. Once the new workflow proves itself, users naturally let go of the old one.

Example: When Gmail introduced tabs (Primary, Social, Promotions), they didn’t force users to adopt them. They enabled them by default for new users, offered easy toggle for existing users, and let positive word-of-mouth gradually convert skeptics. Ownership transferred naturally over time.
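
To make the first strategy concrete, here is a minimal feature-flag sketch, assuming you can distinguish new sign-ups from existing accounts (the flag names and launch date are hypothetical):

```python
from datetime import datetime, timezone

# Hypothetical launch date for the new workflow; purely illustrative.
NEW_WORKFLOW_LAUNCH = datetime(2025, 10, 1, tzinfo=timezone.utc)

def default_flags(user):
    """New users default into the new workflow; existing users keep the workflow
    they already own until they opt in (or until gradual deprecation begins)."""
    is_new_user = user["created_at"] >= NEW_WORKFLOW_LAUNCH
    return {
        "new_workflow_enabled": is_new_user or user.get("opted_in", False),
        "legacy_workflow_visible": not is_new_user,  # hybrid period for existing users
    }
```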

The Ethics of Ownership Psychology

Let’s address the uncomfortable question: Is exploiting the endowment effect manipulative?

It’s ethical when:

  • You’re genuinely delivering value users want to keep

  • Switching costs are real (learning, migration, customization) not artificially inflated

  • You enable data portability and don’t hold users hostage

  • Ownership deepening reflects genuine product improvement

It crosses the line when:

  • You deliberately create unnecessary switching costs to trap users

  • You prevent data export or make it functionally useless

  • You deepen ownership through dark patterns rather than genuine value

  • You exploit sunk cost psychology to keep users in objectively bad experiences

The test: Would your users thank you for the ownership they feel, or resent you for the lock-in you’ve created?

Notion users feel grateful for their customized workspaces (ethical ownership deepening). Users trapped in legacy enterprise software with terrible UX but impossible migration costs feel resentful (unethical lock-in).

The endowment effect should reinforce value, not replace it.

Measuring Ownership Depth

How do you know if users actually “own” your product versus just using it? Track these leading indicators:

Ownership Metrics:

  • Customization rate: % of users who personalize settings, themes, layouts

  • Content creation: Amount of user-generated content, workflows, or configurations

  • Time invested: Hours spent building, organizing, or optimizing

  • Integration depth: Number of connected tools or workflows dependent on your product

  • Emotional language: Support tickets saying “my workspace,” “my system,” “my data” (possessive pronouns signal ownership)

Ownership Threshold Analysis: Identify the point where users transition from trial to ownership. At Dropbox, users who added 1GB+ of data had 10x better retention—that was their ownership threshold. Find yours.
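
One rough way to find your own threshold: sweep candidate cut-offs over whatever investment measure you track and look for where the retention gap widens sharply. A sketch, assuming per-user investment and retention data:

```python
def ownership_threshold_sweep(users, thresholds):
    """
    users: list of (investment, retained) pairs, where investment is whatever
    you measure (GB stored, items created, hours spent) and retained is a bool.
    Returns retention above vs. below each candidate threshold so you can spot
    where the gap widens sharply: that point is your ownership threshold.
    """
    def rate(group):
        return sum(group) / len(group) if group else 0.0

    results = []
    for t in thresholds:
        above = [retained for investment, retained in users if investment >= t]
        below = [retained for investment, retained in users if investment < t]
        results.append({"threshold": t, "retention_above": rate(above), "retention_below": rate(below)})
    return results

# e.g. ownership_threshold_sweep(user_data, thresholds=[0.1, 0.5, 1, 2, 5])  # GB, as in the Dropbox example
```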

Switching Cost Perception Survey: Periodically ask users: “If you had to switch to [competitor], what would you lose?” The longer and more emotional the list, the deeper the ownership.

The Long Game: Ownership as Product Strategy

The endowment effect teaches us that product strategy isn’t just about features—it’s about cultivating ownership over time. The best products become irreplaceable not because they’re technically superior, but because users can’t imagine their lives without them.

For incumbents: Your moat isn’t your code—it’s your users’ sense of ownership. Deepen it continuously. Every feature should ask: “Does this make users feel like this product is more uniquely theirs?”

For challengers: Your enemy isn’t the incumbent’s features—it’s users’ attachment to them. Your strategy must either make ownership transfer seamless, offer 10x different value, or grow new ownership in parallel until it surpasses incumbent attachment.

For feature adoption: Your new feature isn’t competing with old features—it’s competing with users’ ownership of old workflows. Respect that ownership while creating paths to new, better ownership.

Your implementation challenge: Look at your current product. Ask: “What do users actually own here?” Not what they use—what they own. The customizations, the history, the relationships, the identity. Then ask: “Are we deepening ownership, or just adding features?”

Because in the end, the products that win aren’t always the best products. They’re the products users feel they own.

And what we own, we don’t easily let go.



đŸȘ Dual-Track Agile: Running Discovery and Delivery in Parallel

Why Building the Right Thing Matters More Than Building Things Right

Your engineering team is a well-oiled machine. Sprint velocity is high, code quality is solid, and features ship on schedule. There’s just one problem: three months after launch, usage data shows that 80% of users never even try your carefully crafted features. You built the wrong things, brilliantly.

This is the classic failure mode of traditional Agile. Teams become experts at delivery—turning requirements into working software—but terrible at discovery—figuring out which requirements actually matter. We optimize execution while ignoring direction.

Dual-Track Agile is a product development approach where discovery work (learning what to build) and delivery work (building it) run in parallel, continuous tracks. Instead of a linear “discover first, build second” process, both activities happen simultaneously, feeding insights into each other. Discovery stays one or two sprints ahead of delivery, ensuring that by the time engineers start coding, we’ve already validated that we’re solving real problems for real users.

Marty Cagan and Jeff Patton pioneered this approach after observing a painful pattern: Agile teams were shipping faster than ever, but building the wrong things faster than ever. The problem wasn’t execution methodology—it was the absence of continuous learning. Dual-Track Agile solves this by making discovery a first-class citizen in the development process, not a phase that happens once and disappears.

The Single-Track Trap

Think of traditional Agile as a factory production line. Raw materials (requirements) enter one end, and finished products (features) exit the other. The line is optimized for throughput, quality, and speed. Perfect—except nobody’s checking if we’re manufacturing products anyone wants.

Here’s what single-track Agile typically looks like:

Phase 1 (Discovery - happens once, upfront):

  • Product manager writes requirements document

  • Designers create mockups

  • Stakeholders review and approve

  • Stories get written and added to backlog

Phase 2 (Delivery - happens continuously):

  • Engineers pull stories from backlog

  • Features get built, tested, and shipped

  • Team celebrates velocity and sprint completion

  • Repeat forever, assuming Phase 1 got everything right

The fatal flaws:

  1. Discovery becomes a phase, not a practice: Once initial discovery is “done,” teams stop learning. But user needs evolve, markets shift, and initial assumptions prove wrong.

  2. Long feedback loops: By the time you learn a feature doesn’t work, you’ve built it, shipped it, and moved on to the next feature. Course correction is expensive and demoralizing.

  3. Requirement handoff disease: Product managers “throw requirements over the wall” to designers, who throw designs over the wall to engineers. Nobody owns the outcome—everyone owns their phase.

  4. False confidence in certainty: When discovery happens upfront, teams mistake guesses for facts. Requirements feel validated because they’re written down, not because they’re actually tested.

Spotify experienced this painfully in their early years. Teams would spend weeks building features based on upfront requirements, only to discover post-launch that users didn’t care. High delivery velocity just meant building the wrong things faster.

The Parallel Tracks Model

Imagine instead of a single production line, you have two parallel conveyor belts, offset so that one runs a step ahead of the other. The discovery track stays one or two sprints ahead of the delivery track, constantly learning and validating before code gets written.

Discovery Track (continuous, ongoing):

  • Week 1-2: Research and validate problem for Feature A

  • Week 3-4: Research and validate problem for Feature B

  • Week 5-6: Research and validate problem for Feature C

  • (continues indefinitely)

Delivery Track (continuous, ongoing):

  • Week 3-4: Build Feature A (validated in weeks 1-2)

  • Week 5-6: Build Feature B (validated in weeks 3-4)

  • Week 7-8: Build Feature C (validated in weeks 5-6)

  • (continues indefinitely)

The key principle: By the time engineers start building Feature A, the product team has already validated that it solves a real problem. Discovery isn’t done—it’s just moved on to Feature B while delivery works on Feature A.
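
The staggering is easiest to see as a toy schedule. A small sketch (the feature names and sprint counts are illustrative) that keeps discovery one sprint ahead of delivery:

```python
def dual_track_schedule(opportunities, lead_sprints=1):
    """Lay out which opportunity each track works on per sprint,
    keeping discovery `lead_sprints` ahead of delivery."""
    schedule = []
    for sprint in range(len(opportunities) + lead_sprints):
        discovery = opportunities[sprint] if sprint < len(opportunities) else None
        delivery_index = sprint - lead_sprints
        delivery = opportunities[delivery_index] if delivery_index >= 0 else None
        schedule.append((sprint + 1, discovery, delivery))
    return schedule

for sprint, discovery, delivery in dual_track_schedule(["Feature A", "Feature B", "Feature C"]):
    print(f"Sprint {sprint}: discovery -> {discovery}, delivery -> {delivery}")
# Sprint 1: discovery -> Feature A, delivery -> None
# Sprint 2: discovery -> Feature B, delivery -> Feature A
# Sprint 3: discovery -> Feature C, delivery -> Feature B
# Sprint 4: discovery -> None, delivery -> Feature C
```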

This creates a continuous learning loop where insights from delivery feed back into discovery. When Feature A launches and you learn it’s missing something, that learning informs Feature B’s discovery work. The tracks connect and inform each other.

Why this works:

  • Reduced waste: You don’t build unvalidated features

  • Faster learning: Feedback loops shrink from months to weeks

  • Shared ownership: The whole team participates in both discovery and delivery

  • Risk reduction: You validate before investing significant engineering time

  • Maintained velocity: Delivery track runs uninterrupted because discovery feeds it validated work

Amazon Web Services runs dual-track processes across hundreds of teams. Their “working backwards” documents go through extensive discovery validation before engineering begins building, but discovery never stops—it continues exploring the next set of problems while delivery executes on validated solutions.

The Discovery Track: What Actually Happens

Discovery isn’t a vague “figure things out” activity. In dual-track Agile, discovery has specific practices and outputs. Here’s what high-performing teams do in the discovery track:

Week 1: Problem Validation

Goal: Confirm the problem is real and worth solving.

Activities:

  • User interviews (5-8 users experiencing the problem)

  • Data analysis (how many users encounter this? how often?)

  • Support ticket review (what are users saying about this?)

  • Competitive analysis (how do others solve this?)

Output: Problem brief documenting: who has this problem, how often they encounter it, current workarounds, and impact if solved.

Go/no-go decision: Is this problem significant enough to warrant building a solution? If not, move to next opportunity.

Week 2: Solution Exploration

Goal: Identify potential solutions and validate which approach resonates with users.

Activities:

  • Design studio sessions (team generates multiple solution approaches)

  • Low-fidelity prototypes (sketches, wireframes, clickable prototypes)

  • Solution testing with users (5-8 concept tests)

  • Technical feasibility assessment (can we actually build this?)

Output: Validated solution direction with user feedback, technical constraints identified, and rough effort estimate.

Go/no-go decision: Do we have a solution users want that we can build? If not, iterate or pivot.

Handoff to Delivery Track

Once discovery validates both problem and solution, the work transitions to delivery. But here’s the critical part: discovery doesn’t “hand off and disappear.” Discovery stays engaged as delivery progresses, ready to answer questions and adjust based on new learnings.

What gets handed off:

  • Validated problem brief (why we’re building this)

  • Tested solution design (what we’re building)

  • User feedback and insights (what we learned)

  • Success metrics (how we’ll know it works)

  • Open questions and risks (what we’re still uncertain about)

What discovery does during delivery:

  • Monitors delivery progress and answers questions

  • Conducts additional mini-tests if needed

  • Starts discovery for the next feature

  • Prepares for post-launch learning (what to measure, how to iterate)

Intercom runs exemplary dual-track processes. Their discovery team stays involved throughout delivery, running additional concept tests if designs evolve, and preparing launch measurement plans while engineers build.

The Delivery Track: Building Validated Solutions

The delivery track in dual-track Agile looks similar to traditional Agile, with one crucial difference: work entering the delivery track has already been validated. This changes everything.

Sprint Planning with Validated Backlog

Traditional Agile sprint planning:

  • Product manager presents stories based on requirements

  • Team estimates complexity

  • Team commits to sprint goals

  • Team starts building, discovering problems mid-sprint

Dual-track Agile sprint planning:

  • Product manager presents validated opportunities with user evidence

  • Team reviews discovery findings and designs

  • Team estimates with better information (less uncertainty)

  • Team commits to sprint goals knowing the problem is real

  • Team builds with confidence, minimal mid-sprint surprises

Impact: Estimation accuracy improves by 40-60% because teams aren’t guessing about vague requirements—they’re estimating validated solutions.

Ongoing Discovery Support During Delivery

Discovery doesn’t disappear during delivery sprints. Discovery team members remain available for:

Design clarifications: When edge cases emerge, designers can quickly test solutions rather than guess.

Requirement questions: When engineers need clarification, product managers reference actual user research, not assumptions.

Scope negotiations: When time constraints arise, the team can intelligently cut features based on which elements users validated as most important.

Example: During delivery, an engineer might discover a technical constraint that makes the validated design difficult. Instead of guessing a workaround, the designer can quickly test an alternative with 3-5 users and return with validated feedback—all within a day or two.

Continuous Deployment with Learning

Because dual-track Agile emphasizes validated solutions, teams can deploy more confidently. But deployment isn’t the end—it’s the beginning of the next learning cycle.

Post-deployment discovery activities:

  • Usage analytics review (are users adopting the feature?)

  • User feedback collection (what’s their experience?)

  • Success metric tracking (is it solving the problem we validated?)

  • Iteration planning (what should we improve?)

These learnings feed directly back into the discovery track, either for feature iterations or for future opportunities.

Stripe exemplifies this beautifully. Their payments features go through rigorous discovery before development, but post-launch, they immediately begin discovery for the next iteration—based on real usage data and user feedback.

Building the Dual-Track Team Structure

Dual-track Agile requires intentional team structure. Here’s how to organize for success:

The Core Team Roles

Product Manager (splits time across both tracks):

  • 60% in discovery: Leading problem validation, prioritization, solution direction

  • 40% in delivery: Answering questions, adjusting scope, planning launches

Designer (splits time across both tracks):

  • 60% in discovery: Creating prototypes, running concept tests, exploring solutions

  • 40% in delivery: Refining designs, supporting engineering, handling edge cases

Engineers (primarily in delivery, participating in discovery):

  • 80% in delivery: Building validated features

  • 20% in discovery: Assessing technical feasibility, advising on constraints, participating in solution brainstorms

User Researcher (if you have one - primarily in discovery):

  • 90% in discovery: Conducting interviews, running usability tests, synthesizing insights

  • 10% in delivery: Supporting post-launch measurement and learning

The Weekly Rhythm

Successful dual-track teams establish consistent rituals:

Monday:

  • Discovery showcase: Discovery track shares last week’s learnings with delivery team

  • Delivery planning: Delivery track plans the week’s development work

Wednesday:

  • Mid-sprint check-in: Delivery track surfaces blockers, discovery track provides support

  • Research sessions: Discovery track runs user interviews or tests

Friday:

  • Sprint demo: Delivery track shows completed work

  • Discovery planning: Discovery track plans next week’s research activities

  • Cross-track sync: Both tracks discuss how learnings are informing each other

The critical meeting: Discovery Showcase

This is where discovery track shares validated opportunities with the entire team. It’s not a handoff meeting—it’s a collaborative session where engineers and designers engage with the problem and solution before sprint planning.

Agenda:

  1. Problem evidence (user interviews, data, support tickets)

  2. Solution validation (prototype tests, user feedback)

  3. Technical considerations (feasibility discussion)

  4. Success metrics (how we’ll measure impact)

  5. Q&A and refinement

Atlassian’s teams run weekly discovery showcases where product and design present validated opportunities to the entire squad. Engineers actively participate, suggesting technical alternatives and identifying implementation risks before work enters sprints.

Common Pitfalls and How to Avoid Them

Even with good intentions, teams struggle with dual-track implementation. Here are the most common failure modes:

Pitfall #1: Discovery Becomes a Bottleneck

What happens: Discovery track can’t keep up with delivery velocity. Delivery team runs out of validated work and starts pulling unvalidated stories from backlog.

Why it happens: Too few people doing discovery, or discovery trying to validate everything perfectly.

Solution:

  • Time-box discovery activities (2 weeks max per opportunity)

  • Use “good enough” validation, not perfect certainty

  • Build discovery capacity (entire team participates, not just PM)

  • Maintain a buffer of 2-3 validated opportunities ahead of delivery

Pitfall #2: Discovery and Delivery Stop Communicating

What happens: Discovery validates solutions, hands them off, and disappears. Delivery builds in isolation. By launch, the solution has diverged from validation.

Why it happens: Teams treat tracks as separate teams instead of one team with two activities.

Solution:

  • Daily standups include both track updates

  • Discovery team members attend sprint planning and reviews

  • Delivery team members participate in key discovery activities

  • Shared accountability for outcomes, not separate track metrics

Pitfall #3: Discovery Lacks Rigor

What happens: “Discovery” becomes product manager opinions dressed up as validation. No real user testing occurs.

Why it happens: Pressure to feed delivery track leads to shortcuts.

Solution:

  • Define minimum validation criteria (e.g., “interviewed 8 users, tested with 5”)

  • Review discovery outputs in team showcases (transparency creates accountability)

  • Track discovery quality metrics (how often do validated features succeed post-launch?)

  • Celebrate learning, even when it invalidates ideas

Pitfall #4: Delivery Ignores Discovery Findings

What happens: Engineers build what’s in the spec, ignoring nuances from discovery. The solution technically matches requirements but misses user needs.

Why it happens: Discovery findings don’t make it into actionable engineering stories.

Solution:

  • Include “why this matters” context in every story

  • Link stories to original user research

  • Engineers participate in at least some user testing

  • Retrospectives explicitly review: “Did we build what discovery validated?”

Measuring Dual-Track Success

How do you know if dual-track Agile is working? Track these leading and lagging indicators:

Discovery Track Metrics

Leading indicators:

  • Number of opportunities validated per month

  • Discovery cycle time (days from idea to validated solution)

  • Percentage of opportunities that pass validation (should be 40-60%; if higher, you’re not rigorous enough)

  • Team participation in discovery activities

Lagging indicators:

  • Feature adoption rate post-launch (validated features should have 3-5x higher adoption)

  • Time to first value (validated solutions should reach users faster)

  • Feature satisfaction scores (validated features should score higher)

Delivery Track Metrics

Leading indicators:

  • Sprint predictability (% of committed work completed)

  • Time from story grooming to deployment

  • Number of mid-sprint scope changes (should decrease with better discovery)

Lagging indicators:

  • Feature success rate (% of shipped features that meet success criteria)

  • Engineering rework (should decrease when building validated solutions)

  • Time to product-market fit for new products

Cross-Track Health Metrics

The most important metric: Percentage of delivered features that were validated through discovery before development. Target: 80%+.

If this drops below 60%, your delivery track is outpacing discovery, and you’re likely building unvalidated features.

The feedback loop metric: Average time from feature launch to discovery incorporating learnings into next opportunity. Target: Under 2 weeks.

This measures whether your tracks are actually informing each other or operating independently.
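
Both health metrics are easy to compute if each shipped feature records whether it went through discovery and when its learnings fed back into the next opportunity. A sketch with hypothetical field names:

```python
from datetime import timedelta

def cross_track_health(features):
    """
    features: list of dicts per shipped feature with 'validated' (bool: went
    through discovery before development), 'launched_at' (datetime), and
    'learnings_incorporated_at' (datetime, or None if learnings never fed back).
    """
    shipped = len(features)
    validated_share = sum(f["validated"] for f in features) / shipped if shipped else 0.0

    loop_times = [
        f["learnings_incorporated_at"] - f["launched_at"]
        for f in features
        if f["learnings_incorporated_at"] is not None
    ]
    avg_feedback_loop = sum(loop_times, timedelta()) / len(loop_times) if loop_times else None
    return {
        "validated_share": validated_share,      # target: 0.8 or higher
        "avg_feedback_loop": avg_feedback_loop,  # target: under two weeks
    }
```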

Scaling Dual-Track Across Multiple Teams

Dual-track Agile becomes more complex but more valuable as organizations scale. Here’s how to maintain effectiveness with multiple teams:

The Portfolio Discovery Function

Challenge: Each team can’t discover in isolation—they need coordinated discovery across related products.

Solution: Create a portfolio discovery practice that:

  • Conducts cross-product user research

  • Identifies opportunities that span multiple teams

  • Validates platform-level solutions

  • Shares insights across all product teams

Example: Shopify runs a centralized UX research team that conducts discovery work shared across dozens of product teams, preventing redundant research and ensuring consistent user understanding.

The Dependency Coordination Problem

Challenge: Team A’s delivery depends on Team B’s delivery, but their discovery tracks aren’t aligned.

Solution:

  • Synchronize discovery tracks for dependent teams (run discovery simultaneously)

  • Create shared discovery showcases across teams

  • Use roadmap planning to align which opportunities get validated when

  • Build discovery-level APIs (Team B validates the interface Team A needs, even if internal implementation isn’t ready)

The Discovery Capacity Scaling

Challenge: As you add delivery teams, you need proportionally more discovery capacity.

Solution:

  • Train everyone in discovery practices (don’t centralize discovery in PM/design only)

  • Use discovery rotations (engineers and others take turns leading discovery work)

  • Leverage research ops (tools and processes that make discovery more efficient)

  • Hire for discovery skills, not just delivery skills

The Cultural Shift: From Certainty to Learning

The hardest part of dual-track Agile isn’t the process—it’s the cultural transformation. Traditional Agile optimizes for execution certainty. Dual-track Agile embraces learning uncertainty.

Old mindset: “We know what to build. Let’s execute flawlessly.”
New mindset: “We have hypotheses about what to build. Let’s learn quickly and build what’s validated.”

Old question: “Are we on schedule?”
New question: “Are we learning fast enough?”

Old success: Shipped all planned features on time.
New success: Shipped validated features that users adopted and succeeded with.

Old failure: Missed sprint commitments.
New failure: Built features nobody uses.

This shift is uncomfortable. It means acknowledging uncertainty, admitting we don’t have all the answers, and being willing to invalidate our own ideas. But it’s also liberating—because we’re optimizing for outcomes, not output.

Your implementation challenge: Start small. Pick one team and one product area. Run dual-track for one quarter. Don’t try to transform the entire organization overnight.

In sprint 1, while delivery works on the current roadmap, start discovery for the next opportunity. Validate the problem with 5-8 users. Test solution concepts. By sprint 3, you’ll deliver your first fully-validated feature.

Then measure: Did users adopt it faster? Did it meet success criteria better? Did the team feel more confident building it?

If yes, expand. If no, iterate on your discovery practices.

Because in the end, dual-track Agile isn’t about running two tracks. It’s about finally connecting what we build to what users actually need.

And that connection—between learning and building, between discovery and delivery—is what transforms good teams into great products.



đŸȘ The Spotlight Effect: Why Users Think Everyone Notices Their Mistakes (And Your Bugs)

How Overestimating Social Attention Shapes Product Design and Error Handling

You’re in a user testing session. The participant clicks the wrong button, realizes their mistake immediately, and their face flushes red. “I’m so stupid,” they mutter. “Everyone’s going to think I don’t know what I’m doing.” You glance at the observation room—nobody’s judging. Most aren’t even watching closely. But the user is convinced they’re under a spotlight, being scrutinized by an imaginary audience.

This is the spotlight effect in action, and it shapes how users interact with your product in ways you’ve probably never considered.

The Spotlight Effect is a cognitive bias where people dramatically overestimate how much others notice and remember their appearance, actions, and mistakes. We believe we’re center stage in other people’s attention when, in reality, everyone else is too busy being center stage in their own mental spotlight to notice us much at all.

Thomas Gilovich and colleagues at Cornell University first documented this phenomenon in 2000. In their famous t-shirt study, participants wearing embarrassing shirts estimated that 50% of people in a room noticed the shirt. In reality, only 23% did. We consistently overestimate social attention by a factor of 2-3x.

For product teams, this insight is transformative. Users aren’t just navigating your interface—they’re navigating their anxiety about being watched, judged, and found incompetent. Understanding the spotlight effect helps us design products that reduce social anxiety, normalize mistakes, and build experiences where users feel safe to explore and learn.

The Invisible Audience in Your Product

Imagine your user’s brain contains a mental theater. In this theater, they’re always on stage, performing for an audience of everyone they know—and many they don’t. Every action, every mistake, every confused moment feels like it’s being broadcast to this watchful crowd.

The problem? That audience doesn’t actually exist. Nobody’s paying that much attention.

How the spotlight effect manifests in product usage:

The Mistake Magnification Effect: A user makes a small error—clicks the wrong tab, misspells a search query, can’t find a feature. To them, this feels like a massive public failure that everyone can see. In reality, most mistakes are private, invisible, and completely normal.

The Competence Performance Anxiety: Users believe that struggling with your product signals their incompetence to others. This creates hesitation to try new features, ask questions, or explore unfamiliar paths—not because the product is hard, but because struggling feels socially risky.

The Permanence Illusion: Users overestimate how long others remember their mistakes. They’ll avoid features where they previously struggled, believing everyone remembers that time they couldn’t figure out how to export a file, when in truth, nobody noticed or cares.

Real-world impact: When Duolingo analyzed why users abandoned lessons, they discovered many left after making mistakes—not because the content was too hard, but because they felt embarrassed by errors they believed the app was “judging” them for. The spotlight effect was causing unnecessary churn.

The Social Dimension of Solo Products

Here’s what makes the spotlight effect particularly insidious in product design: users experience social anxiety even in ostensibly solo activities. You might think, “My product is single-player—users work alone, so social pressure doesn’t apply.” Wrong.

The imagined audience is always present:

  1. Future Self as Audience: Users worry their past mistakes will be visible to their future self, creating anxiety about leaving evidence of incompetence in their own work history.

  2. Potential Collaborators as Audience: Even if users work alone now, they imagine future scenarios where colleagues might see their work, judging their process and competence.

  3. The Platform as Judge: Users anthropomorphize products, believing the app itself is watching and judging them. Every error message feels like criticism from a teacher, not helpful guidance.

  4. The Abstract “Everyone”: Users imagine a generalized audience of competent users who would never make these mistakes, creating comparison anxiety even without specific individuals watching.

When Notion users work in private workspaces, they still organize meticulously and hide messy work-in-progress pages—not for any real audience, but for imagined future viewers. The spotlight effect operates independently of actual social presence.

Designing for the Spotlight Effect: Error Handling That Reduces Shame

Traditional error handling often amplifies spotlight effect anxiety. Here’s how to design errors that normalize mistakes instead of magnifying embarrassment:

1. Normalize Errors Through Language

Traditional error message: “Error: Invalid input. Please try again.”

Spotlight-conscious error message: “Hmm, that format didn’t work. Let’s try: [example].”

Why it works: The shift from “you failed” to “we’re problem-solving together” removes judgment. “Let’s” implies collaboration, not performance evaluation.

Implementation examples:

  • Grammarly: Instead of “You made a mistake,” they frame suggestions as “Grammarly found 3 ways to strengthen your writing.” The user isn’t wrong—they’re being supported.

  • Stripe: When API calls fail, error messages say “This happened because...” and “Here’s how to fix it,” never “You did this wrong.” Technical errors are framed as normal, expected events, not user incompetence.
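If you want this tone to survive beyond one conscientious copywriter, centralize it. Below is a minimal Python sketch of one way to do that; the error codes and the copy are hypothetical, and the only real point is the pattern: every failure pairs a blame-free explanation with a concrete next step.

```python
# Hypothetical error codes and copy; the pattern is what matters:
# a blame-free explanation plus a concrete next step, never "you failed".

SPOTLIGHT_CONSCIOUS_ERRORS = {
    "invalid_date": (
        "Hmm, that date format didn't work.",
        "Let's try it like this: 2025-10-21.",
    ),
    "file_too_large": (
        "That file is a bit too big for us.",
        "Files under 25 MB upload reliably, and compressing usually does the trick.",
    ),
}

def render_error(code: str) -> str:
    """Turn an internal error code into collaborative, actionable copy."""
    what_happened, next_step = SPOTLIGHT_CONSCIOUS_ERRORS.get(
        code,
        ("Something didn't go as planned.", "Let's try that again together."),
    )
    return f"{what_happened} {next_step}"

print(render_error("invalid_date"))
# -> Hmm, that date format didn't work. Let's try it like this: 2025-10-21.
```

A central table also makes tone reviewable: a PM or UX writer can audit every message for judgment-laden language in one place.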

2. Make Mistakes Invisible to Imagined Audiences

Users worry less about errors if they know others can’t see them. Design privacy into mistake-making.

Private draft states: Let users work in draft mode where mistakes are invisible to others until they explicitly publish.

Undo everything: Make all actions reversible. If users know mistakes can be erased without a trace, anxiety decreases dramatically (a minimal sketch of the mechanics appears below).

No public error history: Don’t display persistent records of user errors. Failed login attempts, incorrect form submissions, deleted items—let these disappear from history.

Example: Google Docs autosaves continuously, but revision history stays tucked away unless someone deliberately opens it. Users can work messily, make mistakes, and revise freely, knowing colleagues will mostly encounter the polished final version rather than the process.
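The mechanics behind “undo everything” are simpler than teams expect. Here is a bare-bones sketch, assuming each action can describe its own inverse; the class and names are illustrative:

```python
# A bare-bones undo stack: each action is stored with its own inverse,
# so any mistake can be reversed without leaving a visible trace.

class UndoStack:
    def __init__(self):
        self._inverses = []

    def perform(self, do, undo):
        """Run an action and remember how to reverse it."""
        do()
        self._inverses.append(undo)

    def undo(self):
        """Reverse the most recent action, leaving no residue behind."""
        if self._inverses:
            self._inverses.pop()()

doc = []
stack = UndoStack()
stack.perform(lambda: doc.append("clumsy first draft"), lambda: doc.pop())
stack.undo()
print(doc)  # [] - nothing left for an imagined audience to find
```

The design choice that matters for spotlight anxiety is the last line: undo leaves no residue, no “edited” badge, no persistent log entry for an imagined audience to find.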

3. Use Frequency Signals to Normalize Mistakes

Tell users how common their experience is. This immediately reduces spotlight effect anxiety.

Traditional approach: User makes mistake silently, assumes they’re the only one struggling.

Spotlight-conscious approach: “Don’t worry—this trips up most people at first. Here’s the trick...”

Implementation tactics:

  • “Common confusion” indicators: When users hit a frequently problematic feature, show: “90% of new users find this confusing at first. Here’s a quick guide.”

  • Aggregate struggle signals: “2,000 people searched for this today—let us help you find it faster.”

  • Normalized learning paths: “Most users take 3-4 tries to get this right. You’re on attempt 2—you’re doing great.”

Example: Figma’s onboarding includes messages like “Most designers discover this feature in week 3” when users find advanced capabilities. This normalizes not knowing everything immediately, reducing competence anxiety.
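One caution before shipping this pattern: the numbers have to be true. A rough sketch of keeping yourself honest by generating the copy from actual struggle data (the thresholds, counts, and logging assumed here are all illustrative):

```python
from typing import Optional

def struggle_message(feature: str, users_struggled: int, users_total: int) -> Optional[str]:
    """Return normalizing microcopy only when a struggle is demonstrably common."""
    if users_total < 100:  # too little data to make a credible claim
        return None
    share = users_struggled / users_total
    if share >= 0.5:
        return f"Don't worry, most people find {feature} confusing at first. Here's a quick guide."
    if share >= 0.2:
        return f"You're not alone: {share:.0%} of new users needed a second try at {feature}."
    return None  # rare struggles shouldn't be advertised as common

print(struggle_message("CSV import", users_struggled=640, users_total=1000))
# -> Don't worry, most people find CSV import confusing at first. Here's a quick guide.
```

Telling a user “most people struggle with this” when almost nobody does is exactly the hollow reassurance that amplifies anxiety instead of reducing it.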

4. Celebrate Learning, Not Perfection

Reframe mistakes as progress, not failures. Users under spotlight effect pressure believe mistakes signal incompetence. Show them mistakes signal learning.

Learning progress indicators:

  • “You’ve tried 5 different approaches—that’s how you master this!”

  • “First time using this feature? Most users experiment a few times before finding their workflow.”

  • Achievement unlocks for trying new things, not just succeeding

Example: Codecademy celebrates “attempts,” not just correct answers. Their interface shows “You’re learning! Try adjusting line 3” rather than “Wrong. Try again.” This reframes errors as productive experimentation.

Social Features and the Spotlight Amplifier

When products include actual social components—sharing, collaboration, public profiles—the spotlight effect intensifies dramatically. Design social features with spotlight anxiety in mind:

1. Granular Privacy Controls

Let users control exactly who sees what. The more control they have, the less spotlight anxiety they experience.

Bad social design: Everything is public by default, users must opt into privacy.

Good social design: Everything is private by default, users opt into sharing when comfortable.

Best social design: Granular sharing controls that let users share different things with different audiences at different times.

Example: LinkedIn lets users control whether profile edits are announced, whether they appear in search results, and who sees their activity. This granularity reduces anxiety about being watched constantly.

2. Normalize Low Activity

Users worry that low engagement—few posts, few likes, little activity—signals their irrelevance to others. Design social features that normalize varied engagement levels.

Tactics:

  • Don’t display “last active” timestamps that create pressure to be constantly present

  • Avoid public metrics that make low numbers shameful (follower counts, post engagement)

  • Celebrate lurking as legitimate participation: “4,000 people read this without commenting—and that’s great!”

Example: Slack doesn’t show public “most active user” leaderboards because they recognized this created performance anxiety. Users contribute when they have value to add, not to maintain the appearance of engagement.

3. Reduce Permanence Anxiety

Users fear their past mistakes, clumsy early work, or outdated opinions will permanently damage their reputation. Give them escape hatches.

Implementation:

  • Edit and delete capabilities without “edited” stamps (or make stamps optional)

  • Auto-archive old content that no longer represents the user

  • Ephemeral content options (posts that disappear after time period)

  ‱ Easy bulk deletion (“delete all posts before 2022”)

Example: Instagram now lets users archive posts instead of deleting them. Users can remove old, embarrassing content from public view without losing it entirely—reducing anxiety about past self being judged.

Onboarding: When Spotlight Effect Anxiety Peaks

New user onboarding is when spotlight effect anxiety is highest. Users feel maximally incompetent and maximally watched. Design onboarding that explicitly addresses this:

1. Normalize Not Knowing

First-time user experience should say:

  • “Everyone starts here—you’re exactly where you should be”

  • “This looks complex, but we’ll take it step by step”

  • “Most users take 15 minutes to feel comfortable. Take your time.”

Anti-pattern: Assuming users should already know things, creating anxiety about appearing stupid for needing help.

2. Private Practice Spaces

Give users sandbox environments where they can experiment without consequences or visibility.

Examples:

  • Canva: Provides unlimited design drafts that are private by default

  • Salesforce: Offers sandbox instances for learning without affecting real data

  • Adobe: Includes tutorial projects separate from real work

This lets users make mistakes freely, knowing nobody will see their clumsy early attempts.

3. Progressive Disclosure of Social Features

Don’t throw users into social spaces immediately. Let them build competence privately first.

Onboarding sequence:

  1. Private workspace: Learn basic features alone

  2. Small group collaboration: Share with 1-2 trusted people

  3. Team spaces: Engage with broader group

  4. Public sharing: Share beyond organization (optional)

This progression builds confidence before exposing users to larger audiences.

The Positive Spotlight: Recognition Without Pressure

While the spotlight effect often creates anxiety, it can also be harnessed positively. Users do want recognition—just not for mistakes.

Strategic Spotlight Design Principles

1. Let users control their spotlight moment: Don’t force recognition. Offer it, let users accept or decline.

Bad: “Your achievement has been shared with your team!”

Good: “Great work! Want to share this with your team?”

2. Recognize effort, not just outcomes: Spotlight moments shouldn’t require perfection.

Examples:

  • “You’ve been consistent—15 days in a row!”

  • “You explored 5 new features this week—nice!”

  • “You helped 3 teammates yesterday”

These celebrate engagement without requiring flawless performance.

3. Make recognition feel earned, not hollow: Empty praise amplifies spotlight anxiety (users fear others think they don’t deserve recognition).

Earned recognition: “You’re in the top 10% of power users” (objective, specific)

Hollow recognition: “You’re amazing!” (subjective, vague)

Measuring Spotlight Effect Impact

How do you know if spotlight anxiety is affecting your product? Track these indicators:

Behavioral Signals

Hesitation patterns:

  ‱ Users hovering over buttons without clicking (anxiety about making the wrong choice)

  • High rates of undo actions (immediate regret/embarrassment)

  • Low exploration of new features (fear of incompetence)

  • Preference for private/solo modes over collaborative features

Avoidance behaviors:

  • Users who stop using features where they previously struggled

  • Low social feature adoption despite high product usage

  • Reluctance to ask for help or engage with support
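Most of these signals can be pulled from instrumentation you probably already have, with no new surveys. As a toy illustration, here is how long hovers that never resolve into a click might be extracted from a client event log; the event names, sample data, and 3-second threshold are all assumptions:

```python
from datetime import datetime, timedelta

# Hypothetical client events: (user, event, target, timestamp)
events = [
    ("u1", "hover",     "delete_btn", datetime(2025, 10, 21, 10, 0, 0)),
    ("u1", "hover_end", "delete_btn", datetime(2025, 10, 21, 10, 0, 6)),
    ("u1", "click",     "cancel_btn", datetime(2025, 10, 21, 10, 0, 7)),
]

def hesitation_episodes(events, min_hover=timedelta(seconds=3)):
    """Collect long hovers that never turned into a click on the same target."""
    episodes, open_hovers = [], {}
    for user, name, target, ts in events:
        if name == "hover":
            open_hovers[(user, target)] = ts
        elif name == "hover_end":
            start = open_hovers.pop((user, target), None)
            if start and ts - start >= min_hover:
                episodes.append((user, target, ts - start))
        elif name == "click":
            open_hovers.pop((user, target), None)  # hover resolved into action
    return episodes

print(hesitation_episodes(events))
# -> [('u1', 'delete_btn', datetime.timedelta(seconds=6))]
```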

Direct Feedback Analysis

Review support tickets and user interviews for spotlight effect language:

Red flags:

  • “I felt stupid when...”

  • “I didn’t want to look incompetent...”

  • “I was embarrassed that...”

  • “I didn’t want anyone to see me struggling with...”

When these phrases appear, spotlight effect anxiety is actively harming user experience.

Experimental Validation

A/B test spotlight-conscious design:

Test A (control): Standard error message: “Error: Invalid format”

Test B (spotlight-conscious): “This format is tricky—here’s an example”

Measure:

  • Error recovery rate (do users try again?)

  • Feature abandonment (do users give up?)

  • Time to success (do users persist longer?)

Teams that run these tests consistently report 20-40% improvements in success rates with spotlight-conscious messaging.
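As a concrete illustration of the arithmetic, here is a tiny sketch with made-up counts, where “recovered” means the user retried after the error and eventually succeeded:

```python
def recovery_rate(errors_shown: int, recovered: int) -> float:
    return recovered / errors_shown

control = recovery_rate(errors_shown=1200, recovered=540)  # standard message
variant = recovery_rate(errors_shown=1180, recovered=720)  # spotlight-conscious

lift = (variant - control) / control
print(f"control {control:.1%}, variant {variant:.1%}, relative lift {lift:.1%}")
# -> control 45.0%, variant 61.0%, relative lift 35.6%
# Before celebrating, check that the sample is large enough for the gap to be real.
```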

The Cultural Antidote: Creating Safe-to-Fail Environments

Beyond individual design decisions, the most powerful way to reduce spotlight effect anxiety is creating a product culture where mistakes are not just tolerated but expected.

Cultural Signals to Embed

1. Lead with humanity: Your product should feel made by humans who also make mistakes.

Tactics:

  • Show behind-the-scenes of how your team works (including mistakes and iterations)

  • Use conversational, imperfect language in your product

  • Share your own product improvement journey (here’s what we’re fixing)

Example: Basecamp’s updates often say “We screwed this up, here’s how we’re fixing it.” This vulnerability normalizes imperfection.

2. Redefine success metrics: If you measure only perfect execution, users will fear anything less.

Expanded success metrics:

  • Learning velocity (how quickly do users try new things?)

  • Experimentation rate (how often do users explore?)

  • Recovery rate (how well do users bounce back from errors?)

When you optimize for learning over perfection, users feel permission to be imperfect learners.

3. Build community around shared struggles: Let users see they’re not alone in finding things hard.

Implementation:

  • User forums where people discuss challenges openly

  • “Common questions” sections that normalize confusion

  • User stories featuring learning journeys, not just success stories

Example: Stack Overflow’s entire model is built on normalizing not knowing things. Asking questions isn’t shameful—it’s how the community functions.

The Bigger Picture: Designing for Human Psychology

The spotlight effect reminds us that users bring their full human psychology to every interaction with our products. They’re not just completing tasks—they’re managing their sense of self, protecting their ego, and navigating social anxiety.

The best products don’t just solve functional problems. They solve psychological problems.

Traditional UX asks: “Can users complete this task efficiently?”

Psychology-informed UX asks: “Can users complete this task while feeling competent, confident, and psychologically safe?”

The difference is profound. A feature might be technically usable but psychologically hostile—efficient to complete but anxiety-inducing to attempt.

Your implementation challenge: Pick one error message, one onboarding screen, or one social feature in your product. Rewrite it through the lens of the spotlight effect.

Ask yourself:

  • What does a user under an imaginary spotlight fear when they see this?

  • How can we normalize their experience?

  • How can we make mistakes feel safe?

  • How can we reduce social performance anxiety?

Then test it. Not just for task completion, but for how it makes users feel.

Because ultimately, users will forgive many technical shortcomings if your product makes them feel competent and safe.

But they’ll abandon technically perfect products that make them feel watched, judged, and incompetent.

The spotlight isn’t real. But the anxiety it creates absolutely is.

And the products that win are those that turn that imaginary spotlight into a warm, supportive light—illuminating the path forward, not exposing every stumble along the way.



đŸ”„ MLA week #31

The Minimum Lovable Action (MLA) is a tiny, actionable step you can take this week to move your product team forward—no overhauls, no waiting for perfect conditions. Fix a bug, tweak a survey, or act on one piece of feedback.

Why does it matter? Culture isn’t built overnight. It’s the sum of consistent, small actions. MLA creates momentum—one small win at a time—and turns those wins into lasting change. Small actions, big impact.

MLA: Cross-Team Shadow Day

Product Management Challenge Area: Cross-Team Collaboration and Empathy Building

Why This Matters

In most organizations, teams operate in silos, leading to:

  • Miscommunication

  • Misaligned goals

  • Lack of mutual understanding

  • Reduced overall organizational effectiveness

The One-Day Challenge: Interdepartmental Shadowing

Challenge Objective

Invite a colleague from another department to shadow your team for one meeting, breaking down barriers and fostering cross-functional understanding.

Implementation Steps

  1. Choose the Right Participant

    • Select someone from a different department who has an indirect impact on your product

    • Potential candidates:

      • Marketing team member responsible for product positioning

      • Customer support representative

      • Sales team member

      • Financial or operations team member

  2. Select the Appropriate Meeting

    • Choose a meeting that provides insight into your team’s decision-making process:

      • Sprint planning

      • Brainstorming session

      • Product review

      • Team retrospective

  3. Prepare the Invitation

    • Craft a welcoming, learning-focused message: “We’d love to invite you to observe one of our team meetings. This is an opportunity to gain insights into how we approach [specific aspect of work]. We’re also eager to hear your unique perspective.”

  4. Prepare Your Team

    • Inform team members in advance about the visitor

    • Encourage explanation of:

      • Technical jargon

      • Specific processes

      • How their work connects to the visitor’s role

  5. During the Meeting

    • Create space for the visitor to:

      • Ask questions

      • Observe team dynamics

      • Share insights

    • Ensure an open, non-defensive environment

  6. Follow-Up

    • Debrief with the visitor:

      • What surprised them?

      • What did they learn?

      • Any suggestions for improved collaboration?

Expected Benefits

Immediate Wins

  • Fresh perspective on team processes

  • Breaking down initial communication barriers

  • Increased mutual understanding

Relationship/Cultural Improvements

  • Build empathy across departments

  • Create informal communication channels

  • Reduce interdepartmental friction

Long-Term Organizational Alignment

  • Develop a more holistic view of product development

  • Identify potential collaboration opportunities

  • Create a culture of openness and continuous learning

Call to Action

Completed the challenge? Share your experience!

  • Use hashtag: #MLAChallenge

  • What did you discover?

  • How did it change your perspective?



📚 Monthly Book Club for Product Managers


Blindspot: Hidden Biases of Good People by Mahzarin Banaji & Anthony Greenwald

Confronting the Unconscious Biases That Shape Product Decisions

Mahzarin Banaji and Anthony Greenwald’s “Blindspot: Hidden Biases of Good People” presents uncomfortable truths about the mental shortcuts our brains take without our conscious awareness. For product managers, understanding these hidden biases is crucial—not just for creating inclusive products, but for making better decisions, conducting unbiased user research, and building diverse teams that drive innovation.


The Uncomfortable Truth About Our Minds

We like to think of ourselves as rational, fair-minded decision-makers. Product managers especially pride themselves on data-driven thinking and objective analysis. But Banaji and Greenwald, two of the world’s leading social psychologists, present compelling evidence that our minds operate on two levels: the conscious thoughts we’re aware of and the automatic associations that happen beneath our awareness.

These automatic associations—what the authors call “mindbugs”—are mental shortcuts that help our brains process the overwhelming amount of information we encounter daily. While these shortcuts served our ancestors well in survival situations, they create systematic biases in modern contexts, affecting everything from hiring decisions to product design choices to how we interpret user research data.

The book’s central revelation is discomforting: even people who consciously reject stereotypes and discrimination harbor implicit biases that can influence their behavior and decisions. For product managers, this means that despite our best intentions to build products for everyone, our unconscious biases may be systematically excluding certain user groups or overlooking critical user needs.

The Science of Implicit Bias

The Implicit Association Test: Measuring the Unmeasurable

At the heart of “Blindspot” is the Implicit Association Test (IAT), a research tool developed by the authors that has been taken by millions of people worldwide. The IAT measures the strength of associations between concepts (like race, gender, or age) and evaluations (like good or bad) or stereotypes (like career-oriented or family-oriented).

The test works by measuring reaction times when people categorize words and images. The premise is simple: if you hold a strong automatic association between two concepts, you’ll respond faster when they’re paired together than when they’re paired with opposing concepts. The results have been remarkably consistent and often surprising: most people show implicit biases that contradict their explicit beliefs.
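To make the mechanism concrete, here is a deliberately simplified sketch of that reaction-time logic. This is not the published IAT scoring procedure, and the latencies are invented; it only shows the core idea of standardizing the latency gap between “easy” and “hard” pairings:

```python
from statistics import mean, pstdev

# Invented reaction times in milliseconds for one hypothetical respondent.
congruent = [610, 640, 590, 655, 620, 600]    # pairings that feel "easy"
incongruent = [780, 820, 760, 805, 790, 770]  # pairings that feel "hard"

# The spirit of the measure: how big is the latency gap,
# scaled by the respondent's overall variability?
pooled_sd = pstdev(congruent + incongruent)
effect = (mean(incongruent) - mean(congruent)) / pooled_sd

print(f"standardized latency gap: {effect:.2f}")
# Larger values suggest a stronger automatic association.
```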

For product managers, the IAT reveals an important truth: what users say in interviews or surveys may differ significantly from their automatic associations and behaviors. This disconnect between explicit attitudes and implicit associations has profound implications for user research methodologies and product validation.

How Blindspots Develop: The Automaticity of Prejudice

Banaji and Greenwald explain that implicit biases aren’t character flaws or moral failings—they’re the result of how our brains learn patterns from our cultural environment. From childhood, we absorb associations from media, social interactions, and cultural narratives. These patterns become automatic neural pathways that activate without conscious intention.

The authors demonstrate through decades of research that these biases are:

  • Universal: Nearly everyone has them, regardless of their conscious beliefs

  • Automatic: They activate quickly and unconsciously

  • Malleable: They can change based on context and exposure

  • Consequential: They affect real-world decisions and behaviors

Understanding this psychological reality helps product managers recognize that building inclusive products requires more than good intentions—it demands systematic approaches to counteracting unconscious bias at every stage of product development.

The Six Blindspots: A Framework for Product Managers

The authors identify six major categories of implicit bias, each with direct implications for product management:

1. The In-Group Favoritism Blindspot

Humans naturally favor people who are similar to themselves—whether by race, gender, age, education, or shared experiences. In product development, this manifests when teams unconsciously design for “people like us,” assuming their own needs, behaviors, and contexts are universal.

Consider how early fitness trackers were calibrated primarily for male physiology, failing to accurately track women’s health metrics. Or how voice recognition systems were trained predominantly on male voices, leading to significantly worse performance for female users. These weren’t malicious decisions—they were blindspots created by teams designing for their own in-group without recognizing they were doing so.

For product managers, combating in-group favoritism requires actively seeking out perspectives and use cases that differ from the team’s default assumptions. This means not just diverse hiring, but also diverse user research panels, advisory boards, and beta testing groups that challenge the team’s blindspots.

2. The Association Blindspot: Stereotypes We Don’t Endorse

We hold automatic associations between groups and characteristics even when we consciously reject those stereotypes. A product manager might explicitly believe that elderly users are capable of learning new technology while simultaneously designing onboarding flows that assume older users need excessive hand-holding or simplified interfaces.

These stereotype-based associations affect product decisions in subtle ways: choosing stock photos for marketing materials, writing microcopy that assumes certain user capabilities, or prioritizing features based on unconscious assumptions about who the “real” users are.

The authors provide compelling evidence that these associations affect not just how we perceive others, but how we interpret ambiguous information. When reviewing user research data, confirmation bias combines with stereotype associations to create powerful blindspots—we see what we expect to see and overlook contradictory evidence.

3. The Attribution Blindspot: Different Standards for Different Groups

We tend to attribute success and failure differently depending on whether someone belongs to our in-group or out-group. When an in-group member succeeds, we attribute it to ability and hard work. When they fail, we blame external circumstances. For out-group members, we reverse this pattern.

In product management, this blindspot affects how we interpret user behavior. When a user from our imagined target demographic struggles with our product, we might attribute it to confusing design. When a user outside our primary demographic struggles, we might attribute it to their lack of technical sophistication or effort.

This attribution blindspot is particularly dangerous during user testing and research synthesis. Product teams may dismiss feedback from certain user segments as outliers or edge cases, while treating similar feedback from preferred user segments as critical insights requiring immediate action.

4. The Outsider Blindspot: Not Recognizing Our Own Biases

Perhaps the most insidious blindspot is our inability to recognize our own biases while readily identifying them in others. Most people rate themselves as less biased than average—a statistical impossibility that reveals how poorly calibrated we are at assessing our own fairness.

Product managers often fall into this trap when conducting competitive analysis or evaluating other products. We can easily spot when competitor products exhibit bias or exclusionary design, yet remain blind to similar issues in our own products. This outsider blindspot makes it difficult to implement effective bias-reduction strategies because we don’t believe we need them.

The authors emphasize that recognizing this blindspot is the first step toward addressing it. Once we accept that we all have biases we cannot directly perceive, we can implement systematic processes to counteract them rather than relying on our flawed self-assessment.

5. The Preference for “Merit” That Isn’t Merit-Based

We believe we evaluate people and ideas based purely on merit, but research shows that our judgments of merit are heavily influenced by group membership. Studies demonstrate that identical resumes receive different ratings depending on whether they have traditionally male or female names. The same product pitch receives different evaluations depending on the presenter’s demographic characteristics.

For product managers, this blindspot affects prioritization decisions, feature request evaluation, and how stakeholder feedback gets incorporated. An idea suggested by a senior engineer might receive more weight than the same idea from a junior designer, not because the engineer’s reasoning is stronger, but because of implicit associations about whose ideas carry more “merit.”

This bias becomes particularly problematic in data-driven organizations that pride themselves on objective decision-making. When we believe our processes are purely merit-based, we become less vigilant about bias, allowing it to operate unchecked beneath the surface of “objective” metrics and frameworks.

6. The Disability Blindspot: Invisible Users

While not explicitly named as a separate category by the authors, their research on outgroup neglect has profound implications for disability inclusion in product design. People without disabilities often fail to consider accessibility needs, not out of malice but because these needs aren’t salient in their automatic thinking.

Product managers frequently treat accessibility as an afterthought or a compliance checkbox rather than a core user need. This blindspot leads to products that work beautifully for able-bodied users while creating insurmountable barriers for users with disabilities—barriers that could have been avoided with inclusive design from the start.

The economic argument is compelling: the CDC estimates that one in four adults in the US lives with a disability, representing a massive user base that products systematically underserve due to this blindspot.

Practical Applications for Product Managers

Reimagining User Research Through the Bias Lens

Understanding implicit bias fundamentally changes how product managers should conduct and interpret user research. The authors’ work reveals several critical considerations:

Diversify Research Participants Systematically: Don’t rely on convenience sampling or social networks that mirror the product team’s demographics. Actively recruit participants across age, race, gender, socioeconomic status, ability, and geographic location. Build recruiting processes that counteract natural in-group bias.

Question Your Interpretations: When analyzing user research, explicitly ask: “Would I interpret this behavior differently if the user had different demographic characteristics?” Document and challenge assumptions about why users behave as they do.

Design Research Protocols to Minimize Bias: Use structured interview guides and consistent evaluation rubrics to reduce the impact of implicit associations. When possible, blind reviewers to demographic information when analyzing research data.

Recognize the Limits of Self-Reported Data: Users’ explicit statements about their preferences, behaviors, and needs may not align with their actual usage patterns due to their own implicit biases and social desirability bias. Combine stated preferences with behavioral data.

Building Bias-Resistant Decision-Making Processes

Banaji and Greenwald emphasize that awareness alone doesn’t eliminate bias—we need systematic processes that counteract automatic associations. For product management, this means:

Implement Structured Decision Frameworks: Use consistent criteria and scoring systems for prioritization decisions, feature evaluation, and resource allocation. Make implicit criteria explicit and documented.

Diverse Decision-Making Teams: Include people with different backgrounds, experiences, and perspectives in key product decisions. Research shows that diverse teams make better decisions and catch blindspots that homogeneous teams miss.

Pre-Mortems for Bias: Before launching features or making major product decisions, conduct a “bias pre-mortem” where the team explicitly asks: “What user groups might we be overlooking? What assumptions are we making about user needs or capabilities? How might our own backgrounds be limiting our perspective?”

Data Disaggregation: Break down product metrics by user demographics whenever possible. Overall satisfaction scores might mask serious problems for specific user segments. Aggregate data can hide exclusionary design.
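Disaggregation is usually a few lines of analysis, not a project. A minimal sketch, assuming a pandas DataFrame with hypothetical column names and toy data:

```python
import pandas as pd

df = pd.DataFrame({
    "segment": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "csat":    [4.6, 4.4, 4.2, 4.3, 2.1, 2.4],
})

# The aggregate looks tolerable; the breakdown shows who the product is failing.
print(f"overall CSAT: {df['csat'].mean():.2f}")  # -> overall CSAT: 3.67
print(df.groupby("segment")["csat"].mean())      # 55+ averages 2.25
```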

Inclusive Design as Bias Mitigation

The authors’ research provides a psychological foundation for inclusive design practices. When product teams design for edge cases and diverse needs from the outset, they create better products for everyone—a principle known as the “curb-cut effect.”

Start with Extremes: Instead of designing for the “average user” (who doesn’t exist), design for users with the most constraints. If your product works for a user with limited hand mobility, it likely works better for everyone.

Challenge Default Assumptions: Every product embeds assumptions about users—their technical literacy, language skills, access to resources, physical abilities, and cultural contexts. Make these assumptions explicit and question them systematically.

Test at the Margins: Include users with disabilities, older adults, people with limited internet access, and other marginalized groups in all phases of user testing, not just during accessibility audits.

The Neuroscience Behind Better Products

One of the book’s strengths is explaining the cognitive mechanisms that create bias, which helps product managers understand not just what biases exist but why they persist despite our best efforts to overcome them.

The Two Systems of Thinking in Product Context

Building on dual-process theories of cognition, the authors explain how automatic (System 1) and controlled (System 2) thinking interact. Product managers make hundreds of decisions daily, and most of those decisions necessarily run on automatic thinking; there simply isn’t time for anything else. The problem is that automatic thinking is where implicit biases operate most strongly.

Understanding this cognitive architecture helps product managers identify which decisions warrant the extra cognitive effort of controlled, deliberate thinking. Major product decisions, user research interpretation, and prioritization frameworks deserve System 2 thinking with explicit bias checks. Minor UI decisions might rely more on established design systems and patterns that have been vetted for inclusion.

Neuroplasticity and Bias Reduction

Encouragingly, Banaji and Greenwald present evidence that implicit biases can change through targeted intervention and exposure. The brain’s neuroplasticity means that associations can be weakened and new patterns can be formed.

For product organizations, this suggests several strategies:

Exposure to Counter-Stereotypical Examples: Regularly engage with users, leaders, and experts who contradict stereotypical associations. If your team implicitly associates “tech-savvy user” with young males, intentionally showcase older women who are power users.

Mindfulness in Decision Contexts: Creating moments of reflection before key decisions can activate controlled thinking that counteracts automatic bias. Simple prompts like “Have I considered diverse user perspectives?” can interrupt automatic patterns.

Environmental Design: Just as Norman’s “Design of Everyday Things” shows how physical environments shape behavior, Banaji and Greenwald demonstrate that social and informational environments shape automatic associations. Diversifying the imagery, voices, and perspectives your team encounters regularly can gradually shift implicit associations.

Limitations and Criticisms: What Product Managers Should Know

While “Blindspot” offers crucial insights, product managers should be aware of ongoing debates about implicit bias research:

The IAT Controversy: Some researchers question whether IAT scores predict discriminatory behavior as strongly as originally claimed. The correlation between IAT results and real-world behavior appears weaker than early research suggested.

However, this limitation doesn’t undermine the book’s core value for product managers. Even if implicit associations don’t perfectly predict individual behavior, the extensive evidence for systematic bias in aggregate is overwhelming. Product managers should focus on the patterns, not individual predictions.

Context Dependency: Implicit biases are highly context-dependent and can vary significantly based on immediate environmental cues. This means that bias-reduction interventions may need to be continuous and embedded in work processes rather than one-time training sessions.

The Action Gap: Knowing about bias doesn’t automatically translate to reducing it. Product managers need concrete processes and accountability mechanisms, not just awareness. The book could be stronger in providing specific implementation guidance.

Key Takeaways for Product Managers

  1. Bias is automatic, not intentional: Good intentions don’t prevent implicit bias from affecting product decisions. Build systematic processes to counteract unconscious associations.

  2. Diversify everything: User research participants, product teams, beta testers, and advisory boards should reflect the diversity of your potential user base—and beyond.

  3. Question your interpretations: When analyzing user data or making product decisions, explicitly ask how implicit bias might be shaping your conclusions.

  4. Design for the margins: Building products that work for users with the most constraints typically creates better products for everyone.

  5. Make the implicit explicit: Document assumptions about users, create structured decision frameworks, and disaggregate data by user demographics.

  6. Continuous exposure matters: Regularly engaging with diverse users and perspectives can gradually shift automatic associations.

  7. Process over awareness: Knowing about bias isn’t enough—implement systematic checks, diverse decision-making, and inclusive design practices.

Conclusion: Building Products for Everyone Requires Seeing Our Blindspots

“Blindspot” challenges product managers to confront uncomfortable truths about how our minds work. The unconscious biases we all carry don’t make us bad people, but left unexamined, they lead to products that systematically fail certain user groups while working beautifully for others.

The book’s greatest contribution to product management is providing a scientific foundation for why inclusive design and diverse teams aren’t just ethical imperatives—they’re practical necessities for building products that serve real human needs. Our blindspots prevent us from seeing opportunities, understanding users, and creating truly innovative solutions.

In an increasingly diverse global marketplace, products that reflect the biases of homogeneous teams will struggle to compete with products built by teams that actively counteract their blindspots. Understanding implicit bias isn’t just about avoiding harm—it’s about unlocking the full potential of your product by seeing users your automatic associations might otherwise overlook.

For product managers committed to building better products, “Blindspot” offers both a wake-up call and a roadmap. The journey toward reducing bias is ongoing and requires constant vigilance, systematic processes, and humility about our cognitive limitations. But the reward—products that truly serve diverse human needs—makes the uncomfortable work of examining our blindspots worth the effort.

As Banaji and Greenwald demonstrate, we can’t eliminate our automatic associations entirely, but we can build products and processes that work despite them. The first step is acknowledging that even good people have hidden biases—and that with awareness and systematic action, we can prevent those biases from limiting what we build and who we serve.



📝 Decision Fatigue: How to Protect Your Team from Cognitive Burnout

The Day the Team Stopped Deciding

Tuesday, 10:47 AM. Daily standup. Microphones on, cameras too. I ask the standard questions: “What are you planning today? Any blockers?”

Silence.

Not the “I’m still thinking” kind of silence, but the “I have no energy left” kind. I can see it in their eyes. The developer stares at the screen like it’s a void. The Product Owner opens their mouth, closes it, opens it again. Yet the sprint was going OK.
