API Design for PMs: The Strategic Guide to Building API Products | Bystander Effect at Daily Scrum - why "everyone sees the problem" ends with nobody solving it
Issue #230
In today's edition, among other things:
Stop Burning Money on Acquisition While Gamers Laugh at Your Retention Strategy - Editor's note (by Alex Dziewulska)
API Design for PMs: The Strategic Guide to Building API Products (by Alex Dziewulska)
Bystander Effect at Daily Scrum - why "everyone sees the problem" ends with nobody solving it (by Łukasz Domagała)
Interesting opportunities to work in product management
Product Bites - small portions of product knowledge
MLA week #35
Join Premium to get access to all content.
It will take you almost an hour to read this issue. Lots of content (or meat)! (For vegans - lots of tofu!).
Grab a notebook and your favorite beverage.
Editor's Note by Alex
Stop Burning Money on Acquisition While Gamers Laugh at Your Retention Strategy
Here's a truth that should embarrass every product leader reading this: While you're pouring budgets into customer acquisition, a six-year-old looter-shooter called The Division 2 is achieving retention rates that would make your CFO weep with joy. The gaming industry spent a decade and billions of dollars perfecting retention science through brutal trial and error. They've documented everything that works. They've published the research. And we're still pretending we need to figure this out ourselves while watching our customer acquisition costs spiral into economic impossibility.
The math isn't subtle. Acquiring a new customer costs 5-25x more than keeping one. A 5% improvement in retention produces 25-95% increases in profit. Yet 54% of marketers allocate more budget to acquisition than retention, even while acknowledging retention delivers higher ROI. This isn't a knowledge problem - it's a willful ignorance problem. We know better, and we're doing it anyway.
I've watched product teams chase "growth at any cost" metrics while their retention numbers bleed out like a broken faucet nobody bothers to fix. We've convinced ourselves that acquiring new customers is "real growth" while keeping existing ones happy is mere "maintenance." The result? Customer acquisition costs have increased 60% over five years while efficiency continues cratering.
Let me paint you a picture of just how broken this thinking has become. Fourth-quartile SaaS companies now spend $2.82 to acquire $1 of new ARR - a 41% efficiency gap versus median performers. Average payback periods have stretched to 20-30 months. Meanwhile, the probability of selling to an existing customer sits at 60-70% compared to just 5-20% for new prospects.
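To make those unit economics concrete, here is a back-of-the-envelope sketch using the figures above. The function is the standard CAC payback formula; the 80% gross margin is an assumption for illustration, not a figure from the survey data.

```python
# Back-of-the-envelope CAC payback:
# payback_months = CAC / (monthly recurring revenue * gross margin)

def cac_payback_months(cac: float, monthly_revenue: float,
                       gross_margin: float = 0.8) -> float:
    """Months for a customer's gross profit to repay its acquisition cost."""
    return cac / (monthly_revenue * gross_margin)

# A fourth-quartile SaaS company spends $2.82 to acquire $1 of new ARR.
# $1 of ARR is $1/12 of revenue per month; assume an 80% gross margin.
months = cac_payback_months(cac=2.82, monthly_revenue=1 / 12)
print(round(months, 1))  # ~42 months - even worse than the 20-30 month average
```

Run the numbers for your own CAC and margin; if payback stretches past two years, acquisition spend is financing churn, not growth.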
But here's what really exposes the madness: We know all this. Optimove surveyed 221 B2C executives in 2023, and they openly admitted retention delivers the highest ROI. Then they allocated more budget to acquisition anyway. This isn't strategy - it's addiction to vanity metrics wrapped in "growth" language that sounds impressive in board meetings.
The gaming industry figured out what we refuse to acknowledge: sustainable business models aren't built on an endless treadmill of acquisition. They're built on creating experiences so compelling that customers never want to leave. And they've spent the last decade documenting exactly how to do it.
Tom Clancy's The Division 2 launched in March 2019. Ubisoft initially planned one year of post-launch support before winding it down. Instead, something remarkable happened: they looked at the engagement data and reversed course entirely. Six years later, the game just had its strongest year since 2021, with the May 2025 Battle for Brooklyn DLC driving player counts up 82% on Steam.
Think about that for a moment. A game that was supposed to be done after one year is now in year six and growing. Not just surviving - actively thriving and expanding. How many of your products can claim that kind of trajectory?
The answer lies in what game designers call "end-game loops" - repeatable content architectures that give players reasons to return long after the main story concludes. Division 2 operates on a predictable content cadence: three 12-week seasons annually, each with roughly 100 levels of progression rewards. Weekly rotations cycle through Manhunt targets, League challenges, Global Events, and Apparel Events. This creates "appointment gaming" - specific, predictable reasons to engage at regular intervals.
But the real genius is in the investment loops. The Summit offers 100 floors with targeted loot selection. Countdown delivers 15-minute experiences for time-constrained players. The Descent roguelike mode adds variety. Each feeds into an intricate build-crafting system where players recalibrate and optimize gear. The more they invest in their characters, the higher the switching costs become. Sound familiar? It should - this is the IKEA Effect and loss aversion working in perfect concert.
The community infrastructure amplifies everything. The Division subreddit maintains 381,000 members. The official Discord houses 134,000 participants. Ubisoft collaborates with content creators like NothingButSkillz, turning passionate players into retention-driving ambassadors. In September 2025, they released in-game cosmetics featuring these creators - a direct acknowledgment that community sustains engagement.
The contrast with failures illuminates what works. Anthem collapsed within months because BioWare couldn't sustain the live-service model. Marvel's Avengers lost Square Enix an estimated $200 million despite stronger IP and a bigger launch. The difference? Division 2 was architected for retention from day one. The others bolted retention mechanics onto acquisition-first products and paid the price.
Path of Exile demonstrates the same principles at even longer timescales. The game's 3.23 Affliction League achieved the highest player retention in its history - in year twelve of operation. The quarterly league system essentially resets progression every three months, giving veterans fresh-start experiences that maintain engagement across more than a decade. Warframe, launched in 2013, has accumulated 60+ million registered players and recently grew revenue 27% year-over-year, proving free-to-play with retention-first thinking can sustain decade-long business models.
These aren't outliers. This is a $4.9 billion market growing at 24-36% annually, projected to reach $18-22 billion by 2030. That growth derives almost entirely from retention economics: in-game purchases and microtransactions now account for 36% of total Games-as-a-Service revenue. The gaming industry discovered something we're still debating in product management: retention compounds in ways acquisition never can.
Let me tell you what happens when you prioritize acquisition over retention while your competitor does the opposite. You end up paying escalating costs to replace the customers they keep forever. You build on sand while they build on bedrock. And eventually, the economics catch up with you in ways that no amount of growth-hacking can fix.
The data on this is absolutely brutal. Bain & Company found loyal customers spend 67% more in their third year compared to their first six months. That's not linear growth - that's exponential value expansion. Companies with Net Revenue Retention above 120% command 20-40% valuation premiums. Bessemer Venture Partners' benchmarks show top-quartile companies scaling to $10M ARR maintain 145%+ NRR, effectively growing without acquiring new customers.
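"Effectively growing without acquiring new customers" is just compounding. A quick sketch, using the 145% top-quartile NRR figure above and a hypothetical $1M starting base with zero new-customer acquisition:

```python
# Revenue growth from Net Revenue Retention alone, with zero new customers.
# NRR > 100% means the existing base expands: upsells outpace churn.

def project_arr(starting_arr: float, nrr: float, years: int) -> float:
    """ARR after `years` of compounding the existing base at `nrr`."""
    return starting_arr * nrr ** years

# $1M ARR base, 145% NRR (top-quartile benchmark cited above):
for year in (1, 2, 3):
    print(year, round(project_arr(1_000_000, 1.45, year)))
# After three years the base has roughly tripled - no acquisition required
```

The same three years at 95% NRR leave you smaller than you started, which is why the NRR threshold, not the acquisition budget, determines whether growth compounds.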
But here's what should terrify you: mobile apps lose 77% of daily active users within three days of install. Only 5.6% remain after 30 days. If you're pouring acquisition budget into a leaky bucket with those retention numbers, you're not building a business - you're funding a Sisyphean exercise in futility.
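The leaky-bucket arithmetic is worth spelling out. The 5.6% 30-day retention figure is from above; the 100k installs per month and the 10% monthly churn among surviving users are hypothetical numbers for illustration only:

```python
# Leaky-bucket equilibrium: in steady state, the inflow of newly retained
# users equals the outflow from churn among existing retained users:
#   installs * retention = actives * churn  =>  actives = inflow / churn

def steady_state_actives(installs_per_month: float, day30_retention: float,
                         monthly_churn_of_retained: float) -> float:
    """Ceiling on the active user base, however long you keep spending."""
    return installs_per_month * day30_retention / monthly_churn_of_retained

# 100k installs/month (hypothetical), 5.6% surviving past day 30 (mobile
# average cited above), 10% monthly churn among survivors (hypothetical):
print(round(steady_state_actives(100_000, 0.056, 0.10)))  # ≈ 56,000 actives
```

That is the ceiling: buy 100,000 installs every month forever and the active base plateaus around 56,000. Doubling acquisition spend doubles the ceiling; doubling retention does the same at a fraction of the cost, and halving churn does it again.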
Duolingo figured this out and achieved 4.5x DAU growth primarily through retention optimization, not acquisition. They doubled daily active users from 16 million (2021) to over 30 million (2023) using gaming-inspired mechanics: streaks (loss aversion), leaderboards (social proof), and progressive challenges (variable rewards). When they simplified streak requirements to one lesson per day, retention improved because the action became easier. Streak Wagers - betting gems to maintain streaks - produced +14% Day-7 retention.
The behavioral science here isn't complicated. Kahneman and Tversky's research established that losses feel approximately 2x as painful as equivalent gains. Breaking a 100-day streak feels like losing something valuable, even though it was entirely self-imposed. This is the foundation of loss aversion, and gaming has been weaponizing it for retention for years.
Nir Eyal's Hook Model (Trigger → Action → Variable Reward → Investment) maps directly to these retention loops. The investment phase proves most critical: users who put in time, data, effort, or money create switching costs for themselves. B.F. Skinner's research on variable reward schedules explains why unpredictable content drops maintain engagement better than fixed schedules. Dopamine releases during anticipation of reward, not just upon receiving it - which explains why the possibility of rare loot maintains engagement even when rewards are infrequent.
The product world has examples too. Spotify's personalization engine drives 73% retention after the first month against an industry average of 45-55%. Discover Weekly users stream 2x as long as non-users. Netflix estimates their recommendation algorithm creates $1 billion+ in annual customer retention revenue. These companies understand what gaming proved: personalization creates perceived value that makes switching costs insurmountable.
Stop pretending acquisition is strategy when it's really just expensive table stakes. Start building retention architectures into your product from day one, not bolting them on after you've hemorrhaged customers for two quarters.
The gaming industry offers a retention playbook that's been battle-tested across billions of dollars and millions of users. The principles transfer directly: create a predictable content cadence that gives users specific reasons to return, build investment loops where user effort creates switching costs, develop variable reward schedules that maintain engagement through anticipation, enable community infrastructure that turns users into ambassadors, and personalize experiences to create unique value.
For mature products, this means reconstructing your budget allocation. The framework is straightforward: startups need 70% acquisition / 30% retention to establish product-market fit. Growth-stage companies should aim for 50/50. Mature businesses should flip to 30% acquisition / 70% retention. The transition point arrives when high churn erodes growth gains, CAC rises beyond efficient payback periods, and product-market fit is established.
For early-stage products, this means architecting for retention before you scale acquisition. Brandon Hall Group research found strong onboarding improves retention by 82% and productivity by 70%+. Duolingo discovered that pushing signup until after users complete a test lesson produced a 20% jump in next-day retention. The principle: demonstrate value before asking for commitment.
Push notifications, when used correctly, show remarkable impact. Users receiving one or more notifications in their first 90 days demonstrate 3x higher retention than those receiving none. A single onboarding push notification within the first week produces a 71% retention increase over two months. But the key word is "correctly" - gaming has learned the hard way that aggressive notification strategies backfire. The art is in creating genuine appointment mechanics, not just interruption.
The behavioral science gives you the roadmap. Use loss aversion by creating streaks, progress bars, and investment loops. Leverage the IKEA Effect through customization and co-creation. Apply variable reward schedules to maintain anticipation. Build social proof through leaderboards and community showcases. Make the core loop so compelling that returning feels like coming home, not checking a to-do list.
The evidence is overwhelming. The business case is irrefutable. The gaming industry has already paid the tuition for this education through expensive trial and error, documented everything that works, and made it freely available. The only thing standing between product teams and sustainable growth is the willingness to admit that our acquisition-first orthodoxy has been economically irrational.
Division 2's six-year trajectory proves retention architectures work. The $4.9 billion Games-as-a-Service market growing at 24-36% annually proves the business model is sound. Duolingo's 4.5x DAU growth, Spotify's 73% first-month retention, and Netflix's $1 billion+ retention revenue prove the principles transfer beyond gaming. The behavioral science from Kahneman, Skinner, and Eyal explains exactly why these mechanics work.
So here's my challenge to you: audit your budget allocation this quarter. If you're spending more on acquisition than retention for a mature product, you're paying a premium to subsidize competitors who understand compound growth. If your retention numbers look like mobile app averages (77% churn within three days), you don't have a growth problem - you have a product problem that no amount of acquisition spend will solve.
The gaming industry stopped chasing new players years ago and started building worlds worth staying in. It's time the product world learned from them. Your customers are waiting for reasons to stay. The question is whether you'll build those reasons before your competitors do.
Supporting Kasia Dahlke's Research
Kasia, a 5th-year psychology student at WSB Merito University in Gdańsk, is conducting research for her master's thesis on stress and coping styles in the IT industry (age group 35-50). The topic connects powerfully with today's newsletter - Łukasz writes about the Bystander Effect at Daily Scrum, where impediments hang for days because everyone sees the problem but nobody takes ownership. That diffusion of responsibility? It's not just slowing down your sprint. It's creating the chronic stress Kasia is studying.
If you work in IT and fall within this age range, the survey takes about 10 minutes: https://lnkd.in/dEBCH9qK
If you don't meet the criteria, every share helps. Kasia will share the research findings - and understanding how IT professionals cope with stress might reveal why some teams suffer from the Bystander Effect while others don't. Because stress patterns and team dysfunction often walk hand in hand.
Product job ads from last week
Do you need support with recruitment, career change, or building your career? Schedule a free coffee chat to talk things over :)
Product Manager - Veeam Software
Product Manager - SALESmanago
Product Manager - Sporty Group
Product Manager - Haut.AI
Senior Product Manager - Vonage
Product Bites (3 bites)
The Contrast Effect: How Feature Sequencing Changes Everything
Why the order in which users experience features determines perceived value
Here's an uncomfortable truth: your product's features don't have inherent value. The same feature can feel revolutionary or disappointing depending solely on what users experienced immediately before it. This is the Contrast Effect in action - a cognitive bias where our perception of something is fundamentally shaped by comparison to what we just encountered. In product management, this means that feature sequencing isn't just about flow; it's about manipulating perceived value through strategic ordering.
Consider Spotify's experience evolution. When they introduced Discover Weekly in 2015, they didn't just drop it into the app randomly. They positioned it after users had spent time with their manually created playlists - making the algorithmic curation feel like magic by contrast. The feature would have felt less impressive as a first impression, but after users struggled with playlist maintenance, it felt transformative. That's not luck; that's understanding how contrast shapes perception.
The Psychology of Comparison
The Contrast Effect, first studied by psychologist Solomon Asch in the 1940s, reveals that humans don't evaluate experiences in isolation. We're comparison machines, constantly calibrating our perception against recent reference points. When you lift a light object after a heavy one, it feels lighter than it actually is. When you experience a smooth interface after a clunky one, it feels smoother than objectively measured.
For product teams, this creates both opportunity and danger. Research from behavioral economics shows that 78% of user satisfaction is determined by expectation versus reality, not by absolute quality. Your feature isn't good or bad - it's better or worse than what came before it. This explains why Tesla's Full Self-Driving updates feel impressive even when imperfect (contrasted against manual driving), while minor Gmail redesigns spark outrage (contrasted against the familiar version users loved).
The ORDER Framework for Strategic Sequencing
To leverage the Contrast Effect deliberately, we need a systematic approach to feature ordering. The ORDER framework provides exactly that:
O - Orient with Friction
Begin user journeys with intentional friction or limitation. This isn't about creating bad experiences; it's about establishing a baseline that makes improvements feel dramatic. Slack's original onboarding required teams to manually invite each member - tedious work that made their later bulk-invite feature feel like a massive upgrade. The friction wasn't accidental; it created contrast.
R - Ramp Gradually
Introduce improvements incrementally, allowing each enhancement to shine against its predecessor. Discord mastered this with their noise suppression rollout. They first released basic noise reduction, then Krisp integration, then their proprietary AI-based system - each step feeling revolutionary because users could directly compare it to the previous version. Had they launched the final version immediately, users would have had no reference point for appreciation.
D - Delight Unexpectedly
Position your most impressive features after users have formed expectations based on standard functionality. Notion does this brilliantly by introducing database views only after users have created simple pages. The contrast between "just another note app" and "holy shit, this is a database" creates memorable moments that drive retention. Their data shows 67% higher conversion rates when database features are discovered organically versus shown upfront.
E - End Strongly
Place powerful capabilities toward the end of feature sequences to benefit from the Peak-End Rule's interaction with contrast. Adobe's Creative Cloud onboarding ends with AI-powered features like Neural Filters and Auto-Reframe - capabilities that feel extraordinary when contrasted against the traditional tools users just learned. This sequencing makes the entire product feel more advanced.
R - Reset Strategically
Periodically reintroduce baseline experiences to renew contrast appreciation. Figma does this through their occasional "performance mode" that temporarily disables real-time collaboration, then re-enables it - reminding users of the contrast between solo and collaborative work. After experiencing the limitation, users report 43% higher satisfaction with standard mode.
The Dark Side of Poor Sequencing
While strategic sequencing amplifies value, poor ordering destroys it. The most common failure pattern is leading with your best features - creating unrealistic expectations that make everything afterward feel disappointing. This is why many SaaS products struggle after impressive demos; the actual workflow can't compete with the highlight reel users just saw.
Microsoft Teams learned this lesson painfully. Early versions frontloaded advanced collaboration features, making basic chat feel primitive by comparison. Usage data revealed that 61% of new users felt overwhelmed and churned before discovering value. Their 2019 redesign inverted the sequence - starting with simple messaging, then gradually revealing channels, tabs, and integrations. Retention improved by 34% with identical features, just differently ordered.
Another trap is maintaining flat parity throughout the experience. When everything feels consistently mediocre (or consistently excellent), nothing stands out. LinkedIn's feed algorithm deliberately mixes high-value content with standard updates because constant quality creates numbness. The contrast keeps users engaged - an algorithmic application of strategic sequencing.
Practical Implementation Strategies
Audit Your Current Sequence
Map your actual user journey from first contact through advanced usage. For each feature transition, ask: "What contrast is this creating?" Most teams discover they've accidentally ordered features chronologically (by build date) rather than psychologically (by optimal contrast). Miro's product team conducted this audit and found their "infinite canvas" feature was introduced too early - before users understood why they'd need it. Moving it later in the sequence increased feature adoption by 52%.
Design Deliberate Downsteps
Create moments where you intentionally limit capability before expanding it. Superhuman's email app does this by initially hiding their command palette, requiring users to learn basic actions through traditional UI. Once habits form, the command palette reveal feels like unlocking superpowers - a contrast that turns users into advocates. They report that 89% of users who discover the command palette become daily active users.
Test Sequence Variations
Run A/B tests on feature ordering, not just feature design. Dropbox discovered that showing storage limits before introducing Paper collaboration reduced Paper adoption by 41%. The contrast made Paper feel like "using up precious space" rather than "getting free value." Reversing the sequence transformed perception completely.
Map Emotional Journeys
Plot the emotional highs and lows of your current sequence. You want a rhythm of challenge-and-relief, not constant struggle or effortless simplicity. Duolingo's lesson structure exemplifies this: easy questions bookend difficult ones, creating contrast that makes completion feel earned without overwhelming users. Their data shows this sequencing reduces abandonment by 28% compared to difficulty-sorted questions.
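Testing sequence variations means comparing conversion rates between two orderings and checking the difference is real. A minimal two-proportion z-test sketch; all counts below are hypothetical, only the method is the point:

```python
# Minimal two-proportion z-test for a feature-ordering A/B experiment.
from math import erf, sqrt

def z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (z, two-sided p-value) comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: sequence A (limits first) vs. sequence B (value first),
# 10,000 users each, 590 vs. 1,000 conversions.
z, p = z_test(conv_a=590, n_a=10_000, conv_b=1_000, n_b=10_000)
print(f"z={z:.2f}, p={p:.4f}")
```

The point of testing ordering rather than design: both arms see identical features, so any significant difference is attributable to contrast alone.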
The Anchor-Shift Strategy
One advanced application of the Contrast Effect is strategic anchor-shifting - deliberately changing users' reference points to alter value perception. When Figma introduced FigJam (their whiteboarding tool), they didn't position it against Miro or Mural. Instead, they sequenced the user experience to contrast FigJam against Figma's design tool - making the whiteboard feel refreshingly simple and collaborative by comparison.
This same principle explains Netflix's content strategy. They sequence shows within genres to create contrast: placing a mediocre thriller after a terrible one makes it feel better than placing it after an excellent one. Their algorithm doesn't optimize for absolute quality; it optimizes for positive contrast. Internal studies show this approach increases completion rates by 23% compared to quality-based sorting.
When Contrast Backfires
The Contrast Effect has a dangerous inverse: negative contrast can destroy perceived value. When Apple Music launched in 2015, many users had just experienced Spotify's personalized Discover Weekly. The contrast made Apple Music's generic "For You" recommendations feel worse than they objectively were. The timing couldn't have been worse - users' reference points were at their highest.
This is why timing matters as much as ordering. Launching a good feature right after a competitor's great one creates negative contrast. Slack experienced this when they introduced their video calling feature shortly after Zoom's pandemic surge. The contrast was inevitable and unfavorable, regardless of Slack's objective quality. They ultimately pivoted to integrating Zoom rather than fighting the negative contrast.
The Long-Term Sequencing Map
Strategic sequencing isn't just for initial onboarding - it's a continuous practice throughout the product lifecycle. Consider how Adobe transformed Creative Cloud's value perception through multi-year sequencing:
Year 1: Establish baseline with traditional desktop tools (creating a reference point)
Year 2: Introduce cloud storage and sync (contrast: "my files, everywhere")
Year 3: Add collaboration features (contrast: "no more version conflicts")
Year 4: Integrate AI capabilities (contrast: "hours of work in seconds")
Year 5: Launch mobile apps with full feature parity (contrast: "pro work on my phone")
Each phase feels revolutionary because it contrasts against the established baseline. Had Adobe launched everything simultaneously, users would have no appreciation for the progression. The sequencing itself creates value through contrast.
Conclusion: Value Is Relative, Not Absolute
The Contrast Effect reveals an uncomfortable reality for product teams: we don't build features with inherent value. We build features that will be evaluated against whatever users experienced immediately before. This means that feature quality is necessary but insufficient - sequencing is equally critical.
The best product teams think like experience DJs, carefully curating the order in which users encounter capabilities. They understand that a good feature after a great one feels disappointing, while a good feature after a mediocre one feels amazing. They design not just what users experience, but the sequence that shapes how that experience is perceived.
Your features aren't just solving problems - they're creating reference points for every feature that follows. The question isn't whether contrast is affecting your product; it's whether you're controlling the contrast deliberately or leaving it to chance. Because in a world where perception is reality, the feature sequence isn't just the path through your product - it's the path to perceived value itself.
The next time you plan a feature rollout, don't just ask "is this good?" Ask: "what will users experience right before this, and does that create the contrast we want?" Because in product management, value isn't what you build - it's what users feel when they compare it to what came before.
The Curse of Knowledge: Why Product Teams Can't See What Users See
Breaking free from expert blindness in product development
There's a devastating paradox at the heart of product development: the more we know about our product, the less capable we become of understanding what our users need. This cognitive bias, known as the Curse of Knowledge, makes it nearly impossible for experts to remember what it's like to be a beginner. Once you understand how something works, you literally cannot un-know it. For product teams living and breathing their product daily, this curse is especially severe - and it's quietly sabotaging user experiences across the industry.
Stanford psychologist Elizabeth Newton demonstrated this effect perfectly in 1990 with a simple experiment. She asked people to tap out the rhythm of well-known songs like "Happy Birthday" on a table while listeners tried to guess the song. Tappers predicted that 50% of listeners would recognize the songs. The actual success rate? Just 2.5%. The tappers couldn't imagine the experience of hearing taps without the melody playing in their heads. This same dynamic plays out in product teams every single day.
The Architecture of Expert Blindness
The Curse of Knowledge isn't laziness or lack of empathy - it's a fundamental limitation of human cognition. Once information moves from working memory into long-term memory and becomes automatic, we lose access to what it felt like not to know it. This is why experienced developers genuinely believe their CLI tools have "intuitive" interfaces, and why designers can't understand why users don't immediately grasp their "simple" navigation patterns.
Research from Carnegie Mellon shows that experts consistently overestimate beginners' knowledge by 40-70%. When Dropbox's engineering team first designed their file syncing system, they assumed users would understand concepts like "selective sync" and "offline access" naturally. Usability testing revealed that 82% of new users had no mental model for these concepts. The engineers weren't being arrogant - they were cursed by knowledge they couldn't un-know.
This blindness compounds in product teams because of homogeneity. When everyone on your team is a power user, no one can accurately simulate the novice experience. Microsoft Word's ribbon interface, controversial when launched in 2007, was extensively tested by the product team - all of whom were Word experts with years of muscle memory. They literally couldn't see the disorientation normal users would feel. It took real-world deployment to reveal the cognitive gap.
The BLIND Framework for Breaking the Curse
To overcome expert blindness, we need systematic practices that compensate for our cognitive limitations. The BLIND framework provides structure:
B - Bring in True Beginners
Not "intermediate users." Not "people who haven't used our product in a while." Actual beginners who've never seen your product and don't work in your industry. Notion's breakthrough in user-friendliness came from weekly sessions with people who'd never used note-taking apps beyond Apple Notes. Watching a teacher struggle to create her first database revealed assumptions the team didn't even know they'd made. They discovered their "simple" linked database concept required understanding three abstract layers simultaneously - obvious to the team, incomprehensible to users.
L - Log Your Assumptions
Write down everything you assume users already know. Figma did this exercise and discovered their team assumed users understood: vector vs. raster graphics, frames vs. groups, components vs. instances, auto-layout logic, and constraint behavior. Five concepts that felt basic to designers were completely foreign to their growing audience of non-designers. Making this explicit allowed them to create targeted learning experiences rather than assuming knowledge.
I - Invite Explanation Attempts
Ask team members to explain features to imaginary users without jargon. Record these explanations. Discord's voice channel feature seemed self-explanatory to their gaming-focused team until they tried explaining it to users from corporate backgrounds. The explanation required understanding: persistent voice rooms, push-to-talk vs. voice activation, channel permissions, and speaker priority. What felt like "just click to join" was actually a complex concept requiring prerequisite knowledge.
N - Navigate as Outsiders
Use your product while pretending you've never seen it before. Better yet, watch someone else use it for the first time without helping. Superhuman's founders spent 100+ hours watching new users attempt their first emails. They discovered that "keyboard shortcuts" (obvious to power users) felt like secret codes to normal people. This led to their training program - not because users were incapable, but because the team finally saw their own curse of knowledge.
D - Document the Gap
Measure the difference between team assumptions and user reality. Slack analyzed support tickets and found that 64% of questions were about features the team considered "basic." The most common question - "How do I know if someone saw my message?" - revealed a massive knowledge curse. The team knew about read receipts, online indicators, and message threading, but had never designed a clear mental model for message state. They were too close to see it.
Common Curse Symptoms in Product Teams
The "It's Obvious" Epidemic
When team members consistently describe features as "obvious" or "intuitive," you're witnessing the curse in action. What's obvious is what you already know. Basecamp's team caught themselves using "obviously" in 23 feature discussions during one sprint. They instituted a rule: replace "obviously" with "if you already know X." This linguistic shift revealed hidden assumptions: "Obviously, you'd click the checkbox" became "If you already know checkboxes select items, you'd click it." The second version prompted them to add clearer selection affordances.
The Minimal Documentation Trap
Teams cursed by knowledge write documentation for themselves, not users. They skip "obvious" steps and assume context. Apple's developer documentation has historically struggled with this - written by engineers who can't remember not knowing Objective-C or Swift. Their improvement came from having technical writers (less cursed) rewrite docs, resulting in 41% fewer support requests for the same APIs.
The Power User Roadmap
When every feature on your roadmap excites your team but confuses new users, you're building for cursed experts. GitHub experienced this when they kept adding advanced workflow features while basic concepts like "what is a pull request?" remained mysterious to newcomers. Their 2020 shift toward progressive disclosure - hiding advanced features until users were ready - came from recognizing their own curse.
The Onboarding Test: Where Curses Surface
User onboarding is where the Curse of Knowledge does maximum damage. Your team has been using the product for months or years; they can't simulate the confusion of minute one. LinkedIn's original onboarding assumed users understood: networks, connections, endorsements, recommendations, and profile completion value. New users saw a barrage of unexplained actions. Their multi-year onboarding evolution was an exercise in learning to see through beginners' eyes.
Duolingo's success partly stems from founder Luis von Ahn's obsession with defeating the curse. Despite being an expert in language learning, he forced his team to watch videos of people using language learning products for the first time. They discovered that beginners don't think in terms of "lessons" or "skills" - they think "I want to order coffee in Spanish." This insight led to their scenario-based learning approach, which increased completion rates by 37%.
Techniques for Temporary Curse-Lifting
The Feynman Technique Explain your feature to a child or someone from a completely different field. PayPal's team used this with their fraud detection system. Explaining it to a kindergarten teacher revealed they'd built interfaces assuming users understood: transaction velocity, geolocation anomalies, device fingerprinting, and risk scores. The teacher's confusion prompted a complete redesign using plain language and visual metaphors.
The Fresh Eyes Rotation Regularly rotate team members to projects they haven't worked on. Spotify does this quarterly, specifically to combat the knowledge curse. When a team that built playlist features switches to podcast discovery, they bring fresh confusion to established patterns. This revealed that Spotify's information architecture made perfect sense to playlist builders but was incomprehensible to everyone else.
The Changelog Time Machine Review your product's evolution from version 1.0 to now. What seemed like logical, incremental improvements to the team often creates a Frankenstein of assumptions for new users. Photoshop's interface is the quintessential example - each tool made sense when added, but the accumulation created an interface only comprehensible to those who lived through its evolution. Their 2020 "Discover" panel was an admission that expertise had blinded them to complexity.
The Documentation Smell Test
If your documentation includes phrases like:
"Simply..."
"Just..."
"Obviously..."
"Clearly..."
"Of course..."
You're witnessing the curse in written form. These words signal that you're describing something that feels easy to you because you already know it. Stripe famously banned these words from their documentation, forcing writers to be explicit about every step. Developer feedback showed this reduced time-to-first-integration by 52% despite longer documentation - because nothing was assumed.
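This smell test is easy to automate. Below is a minimal sketch of such a check in Python; the word list and the `find_curse_words` helper are illustrative assumptions, not Stripe's actual tooling - adapt both to your own style guide.

```python
import re

# Words that usually signal the curse of knowledge in documentation.
# Illustrative list - extend it to match your team's style guide.
BANNED = ["simply", "just", "obviously", "clearly", "of course"]

def find_curse_words(text: str) -> list[tuple[int, str]]:
    """Return (line_number, word) pairs for every banned word found."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for word in BANNED:
            # \b word boundaries avoid false positives like "adjust"
            if re.search(rf"\b{re.escape(word)}\b", line, re.IGNORECASE):
                hits.append((lineno, word))
    return hits

doc = """To get started, simply install the SDK.
Just run the setup script.
Configure your API key in the dashboard."""

for lineno, word in find_curse_words(doc):
    print(f"line {lineno}: remove or rewrite '{word}'")
```

A check like this can run as a pre-commit hook or CI step on every documentation change, so the banned words never reach readers in the first place.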
The Cost of Staying Cursed
The Curse of Knowledge isn't just a UX inconvenience - it's a business liability. When Dropbox first launched, their homepage explained features in terms clear to the team but meaningless to users. Conversion was underwhelming. Drew Houston's famous explainer video succeeded not because it was entertaining, but because it was created by someone who remembered what it felt like not to understand cloud storage. That perspective shift increased signups by 10%, translating to millions of users.
Twitter's early stagnation was partly curse-driven. The team knew how to use Twitter, so they assumed everyone else would too. They didn't see that "@ replies," "retweets," and "hashtags" were incomprehensible jargon to normal people. Their 2009-2010 push to explain these concepts through in-app education coincided with their hockey-stick growth. Teaching what they'd assumed was obvious unlocked millions of users.
Building Anti-Curse Systems
The most sophisticated product teams build permanent systems to combat the knowledge curse:
Continuous Beginner Exposure Atlassian requires every product team member (including executives) to watch at least one user testing session monthly with actual beginners. Not demos. Not customer feedback. Raw, unfiltered footage of confused first-time users. This institutional practice keeps the curse at bay through constant exposure to the beginner mindset.
Assumption Audits Before any major launch, Intercom conducts "assumption audits" where they list everything users need to know for a feature to make sense. Then they test whether users actually know these things. For their Product Tours feature, they assumed users understood: triggers, targeting rules, CSS selectors, and A/B testing concepts. Testing revealed that 73% of their target users understood none of these. The audit forced a complete redesign.
The Grandmother Test Could you explain this feature to your grandmother? If not, you're cursed. Venmo's peer-to-peer payment concept seemed simple to the fintech team but required explaining: digital wallets, bank account linking, payment authorization, and social feeds. Testing with older users (who brought no assumptions) forced simplification that benefited all users. Post-redesign, completion of first payment improved by 44%.
Conclusion: Expertise Is a Double-Edged Sword
The Curse of Knowledge reveals a cruel irony: the better we become at building products, the worse we get at seeing them clearly. Every day of expertise, every line of code written, every design decision made - they all make us less capable of understanding our users' actual experience. The curse isn't a bug in human cognition; it's a feature. We need automaticity to function. But in product development, automaticity blinds.
The best product teams don't try to avoid expertise - they build systems to compensate for it. They understand that expert judgment is powerful for solving problems but terrible for identifying them. They know that every assumption they make is a potential barrier for users, and that the only way to find these barriers is to watch real beginners stumble over them.
Your expertise is your greatest asset and your greatest liability. The question isn't whether you're cursed - you are. The question is whether you've built systems to reveal what your curse prevents you from seeing. Because in product development, what you can't un-know might be exactly what your users need you to understand.
The next time something seems "obvious" or "simple" to your team, sound the alarm. That feeling of obviousness isn't clarity - it's the curse talking. And on the other side of that curse, there's a user struggling with something you've forgotten how to see. The path to better products isn't through more expertise. It's through systematically unlearning what expertise has taught you to ignore.
The Pratfall Effect: When Showing Imperfection Builds Trust
Why admitting product limitations can strengthen user relationships
In an industry obsessed with perfection, here's a counterintuitive truth: your product's flaws might be your greatest asset for building trust. The Pratfall Effect, discovered by psychologist Elliot Aronson in 1966, reveals that likeability actually increases when competent people show small imperfections - under the right conditions. For product teams, this means that strategic transparency about limitations, mistakes, and trade-offs can strengthen user relationships more than maintaining a facade of flawlessness. The question isn't whether to hide your imperfections. It's which ones to reveal, and how.
Aronson's original experiment demonstrated this beautifully. He had participants listen to recordings of a quiz show contestant. In one version, the contestant answered 92% of questions correctly. In another, the same contestant answered 92% correctly but also spilled coffee on himself, admitting the mishap. Counter to expectations, the coffee-spilling version was rated roughly 20% more likeable. The pratfall - the small, humanizing mistake - made excellence relatable rather than intimidating. The perfect contestant felt distant. The excellent-but-human contestant felt trustworthy.
The Psychology of Strategic Imperfection
The Pratfall Effect works because perfection creates psychological distance. When something seems flawless, we assume it's either fake or unattainable. Small imperfections serve as proof of authenticity - they signal honesty in a world of curated perfection. For products, this creates a powerful opportunity: admitting specific limitations can increase perceived trustworthiness more than claiming universal capability.
But there's a critical caveat: the Pratfall Effect only works when baseline competence is already established. An incompetent person making mistakes becomes less likeable, not more. The effect is specifically about competent entities showing small, relatable imperfections. This explains why early-stage startups rarely benefit from highlighting flaws (competence not yet proven), while established products can leverage limitations strategically (competence established, authenticity valued).
Research from the Journal of Consumer Psychology shows that 67% of users trust brands more when they admit specific limitations, compared to brands that claim universal capability. But context matters enormously. The same admission can build trust or destroy credibility depending on what you're admitting, how you frame it, and whether users already perceive you as competent.
The TRUST Framework for Strategic Imperfection
Not all imperfections are created equal. Some build trust; others destroy credibility. The TRUST framework helps identify which limitations to acknowledge:
T - Transparent Trade-offs Acknowledge deliberate design decisions where you chose one strength over another. Basecamp's homepage explicitly states: "We don't do integrations." This isn't a limitation - it's a philosophy. By framing it as a trade-off (simplicity over extensibility), they attract users who value their approach while filtering out those who don't. This transparency increases trust because it demonstrates self-awareness and clear values. Their customer satisfaction scores show that users who discover this upfront stay 2.3x longer than those who discover it later.
R - Relatable Constraints Share limitations that users can understand and empathize with. When Superhuman launched at $30/month, they openly discussed why: "We have 15 people supporting 10,000 users. We can't charge less without sacrificing the quality you expect." This admission did something remarkable - it turned price from a barrier into a value signal. Users understood the constraint and respected the honesty. Their waiting list grew by 300% after implementing radical transparency about pricing rationale.
U - Upcoming Improvements Admit current limitations while showing what you're building toward. Notion's approach to performance issues exemplifies this: "Our large databases are slow. Here's why, and here's what we're doing about it." They published technical blog posts explaining architecture constraints and their roadmap for improvement. Rather than hiding problems, they turned them into engagement opportunities. Users went from frustrated to invested - forum discussions about performance shifted from complaints to helpful suggestions, with community engagement increasing 43%.
S - Scope Boundaries Explicitly define what your product doesn't do. Slack's early positioning was masterful: "We're team communication, not project management." By clearly stating what they weren't building, they avoided disappointing users who needed those features while attracting users who valued focus. This boundary-setting increased user satisfaction scores by 31% because expectations were properly calibrated from day one.
T - Temporal Honesty Be upfront about what's not ready yet. Figma's approach to their mobile app is instructive: "Mobile is currently view-only. Full editing is coming, but we're not rushing it." This honest timeline prevented users from downloading the app expecting full functionality, then rage-quitting when they discovered limitations. Their mobile app ratings averaged 4.2 stars despite limited functionality - because expectations matched reality.
The Dark Pattern Opposite: False Perfection
The inverse of the Pratfall Effect is equally powerful - and dangerous. Companies that project flawlessness create what psychologists call "failure amplification": when imperfections inevitably surface, they're seen as betrayals rather than normal occurrences. Apple's "It Just Works" branding created this trap. When things don't just work (and they sometimes don't), user frustration is amplified by violated expectations.
Compare this to Discord, which openly admits: "Sometimes our servers go down. When they do, we'll tell you why." Their status page doesn't hide incidents - it celebrates transparency. When outages occur, users are frustrated but not betrayed. The expectation of occasional imperfection was set, so reality doesn't violate the promise. Their post-incident reports regularly go viral on Hacker News, praised for honesty and technical depth. What could be reputation damage becomes reputation building.
Implementation Strategies for Different Product Stages
Early Stage: Selective Vulnerability When your product is new and credibility is fragile, the Pratfall Effect is risky. Users need to believe in your core competence before they'll forgive limitations. Focus on demonstrating one thing you do exceptionally well, then acknowledge specific non-core limitations.
Linear (the project management tool) nailed this sequence. They launched with exceptional keyboard shortcuts and speed - establishing clear competence. Only after this was proven did they openly discuss what they weren't building: time tracking, resource management, advanced reporting. The sequence mattered. Competence first, limitations second.
Growth Stage: Transparent Evolution As you scale, user expectations inflate. The Pratfall Effect becomes your shield against impossible standards. Stripe's approach is instructive: they publish incident reports that are almost embarrassingly detailed, including not just what broke but why they didn't catch it earlier. These reports have become legendary in the developer community - not because failures are celebrated, but because honesty is respected. Their NPS scores consistently rank highest among payment processors, partly because users trust them to be honest when things go wrong.
Mature Stage: Philosophical Transparency Established products can leverage the Pratfall Effect to humanize their brand and maintain authenticity at scale. Netflix's blog posts about why certain shows aren't available in certain regions turned a source of frustration into an educational opportunity. They explained licensing complexities, geographic restrictions, and content costs with genuine honesty. Did this solve the problem? No. Did it reduce support tickets by 28%? Yes. Users don't need every problem solved - they need to understand and trust the reasoning.
The Apology Architecture
When mistakes happen (and they will), how you acknowledge them determines whether the Pratfall Effect helps or hurts. The architecture of a good apology includes:
Own It Completely No weasel words, no "mistakes were made" passive voice. GitHub's 2020 outage response exemplifies this: "We messed up. Here's specifically how." They didn't blame cloud providers, didn't hide behind technical jargon, didn't minimize impact. They owned it. Community response was overwhelmingly supportive.
Explain Without Excusing Users want to understand what happened, but explanations can sound like excuses. The distinction is transparency of reasoning versus deflection of responsibility. When Monzo (UK digital bank) had a payment processing failure, they published a timeline showing exactly where their system failed and why. The explanation included their mistakes in judgment, not just technical failures. This builds trust because it demonstrates learning.
Show Systemic Change Acknowledgment without action is empty. GitLab turned a database deletion incident into a 30-day transparency blog series documenting every change they made to prevent recurrence. Each post attracted 50,000+ readers. The pratfall (deleting production data) became a showcase for their engineering culture and commitment to improvement. What could have been fatal became formative.
When Not to Use the Pratfall Effect
Strategic imperfection has clear boundaries. Never acknowledge:
Core Competency Failures If your payment processor admits their payments sometimes fail randomly, that's not a humanizing pratfall - that's a credibility killer. Stripe can admit to specific, explained outages. They cannot admit to fundamental unreliability.
Security or Privacy Weaknesses "We're working on encryption" doesn't humanize; it terrifies. Some domains demand perfection perception because the stakes are too high for authentic vulnerability.
Competitor-Created Expectations If every competitor offers a feature, admitting you don't have it doesn't build trust - it signals inadequacy. The Pratfall Effect works for deliberate trade-offs, not capability gaps.
Measuring the Effect
How do you know if strategic imperfection is working? Look for:
Support Ticket Patterns Honest upfront communication should reduce tickets from disappointed expectations. ConvertKit saw support volume drop 37% after adding a prominent "What We Don't Do" section to their homepage.
Community Sentiment Analysis When limitations are discussed in user communities, do people defend your reasoning or criticize your gaps? Roam Research's community actively explains to newcomers why certain features don't exist - because the team's transparency about philosophy created advocates.
Conversion Quality Over Quantity Strategic imperfection should reduce total signups but increase qualified signups. When Gumroad explicitly stated "We're simple by design, not feature-rich," their signup rate dropped 12% but cancellation rates dropped 43%. They attracted fewer users but better-fit users.
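The "fewer but better-fit users" claim is easy to sanity-check with arithmetic. The sketch below applies the percentage changes quoted above (signups down 12%, cancellations down 43%) to an assumed baseline of 1,000 signups and a 30% cancellation rate; both baseline numbers are illustrative assumptions, not Gumroad's actual data.

```python
def retained(signups: float, cancel_rate: float) -> float:
    """Customers still active after the cancellation window."""
    return signups * (1 - cancel_rate)

# Assumed baseline (illustrative, not Gumroad's real figures):
base_signups = 1000
base_cancel_rate = 0.30  # 30% of new signups eventually cancel

# Changes reported in the text: signups -12%, cancellations -43% (relative)
new_signups = base_signups * (1 - 0.12)          # 880
new_cancel_rate = base_cancel_rate * (1 - 0.43)  # 0.171

before = retained(base_signups, base_cancel_rate)
after = retained(new_signups, new_cancel_rate)

print(f"retained before: {before:.0f}, after: {after:.0f}")
print(f"net change: {after / before - 1:+.1%}")
```

Under these assumptions, retained customers actually rise by about 4% even though total signups fall, which is the whole point of optimizing for conversion quality over quantity.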
The Long-Term Relationship Dividend
The Pratfall Effect isn't just a tactic for individual incidents - it's a foundation for long-term user relationships. Products that consistently demonstrate honest self-awareness create permission for imperfection that competitors don't enjoy. When users trust that you'll be honest about problems, they give you room to have problems.
Notion users tolerate performance issues that would sink competitor products because Notion has established a pattern of honesty about constraints. Their community suggests workarounds, shares tips, and patiently waits for improvements - because they trust the team's transparency about what's being worked on and why it takes time.
This trust dividend compounds over time. First-time users might not forgive a limitation. But users who've watched you honestly navigate multiple challenges develop deep loyalty. They've seen the pattern: problem emerges, you acknowledge it honestly, you fix it thoughtfully, you explain what you learned. This pattern builds trust that survives individual incidents.
Conclusion: Perfection Is a Barrier, Humanity Is a Bridge
The Pratfall Effect reveals a profound truth about user relationships: people don't trust perfection - they trust authenticity. In an industry where every product promises seamless experiences and revolutionary capabilities, strategic imperfection becomes a differentiator. Not because users want flawed products, but because they want honest partners.
The best product teams understand that admitting specific limitations, trade-offs, and mistakes doesn't weaken their position - it strengthens it, provided competence is already established. They know that users are sophisticated enough to understand that every design involves choices, that every product has constraints, and that every team makes mistakes. What users can't tolerate is being lied to.
Your product doesn't need to be perfect. It needs to be honest. The pratfall - the small admission of limitation, the transparent explanation of trade-offs, the genuine apology when things break - these aren't weaknesses to hide. They're opportunities to demonstrate the authenticity that perfection can never convey.
The next time you're tempted to hide a limitation or gloss over a mistake, remember: your users aren't looking for perfection. They're looking for partners they can trust. And trust isn't built by never falling - it's built by being honest when you do. In product development, the pratfall effect isn't a bug. It's a feature. And learning to use it strategically might be the most important relationship skill your team can develop.
Because in the end, excellence gets users to try your product. But humanity gets them to stay.
MLA Week #35
The Minimum Lovable Action (MLA) is a tiny, actionable step you can take this week to move your product team forward - no overhauls, no waiting for perfect conditions. Fix a bug, tweak a survey, or act on one piece of feedback.
Why does it matter? Culture isn't built overnight. It's the sum of consistent, small actions. MLA creates momentum - one small win at a time - and turns those wins into lasting change. Small actions, big impact.
MLA: Thank You Thursday
Why This Matters:
Product success depends on countless invisible contributions from across the organization - the developer who stayed late to fix a critical bug, the designer who iterated five times to get the flow right, the ops person who streamlined a process you didn't even know was broken. Yet these efforts often go unnoticed, especially when they happen outside your immediate team. A simple, public thank you breaks down silos, builds appreciation across departments, and creates a culture where cross-functional collaboration is celebrated, not just expected. This small ritual can transform how teams see each other - from separate units to partners in a shared mission.
How to Execute:
1. Choose the Right Person:
Look for someone from outside your immediate product team who made your work easier or better this week:
A developer who helped debug an edge case you discovered
A designer who adjusted mockups based on user feedback you shared
A data analyst who created a dashboard that clarified your decisions
A customer success person who provided crucial user insights
A marketing colleague who improved your featureâs messaging
An ops team member who unblocked a deployment or process
The key is specificity - choose someone whose contribution you can describe concretely, not just someone you generally appreciate.
2. Select the Right Channel:
Pick a channel where both teams can see the recognition:
Company-wide Slack channel or Teams chat
All-hands meeting shout-out section
Product teamâs public channel (if they have visibility)
Shared project channel where both teams collaborate
Avoid private messages - public recognition creates broader cultural impact and shows others what collaboration looks like.
3. Frame Your Thank You Properly:
Make it specific, genuine, and focused on impact:
Good example: "Thank You Thursday shout-out to @Sarah from Data! Last week she built a custom dashboard that showed us exactly where users were dropping off in the onboarding flow. That insight helped us prioritize fixes that increased activation by 12%. Sarah, your work directly improved our users' experience - thank you! #MLAChallenge"
Avoid generic praise: "Thanks to @Sarah for being awesome!"
What makes a good thank you:
Names the specific action or contribution
Explains the impact or outcome
Connects their work to user or business value
Uses genuine, conversational tone (not corporate speak)
4. Prepare Your Team:
Announce to your product team that youâre starting Thank You Thursday
Encourage them to participate too - make it a team ritual
Remind them to watch for contributions from other departments throughout the week
Create a shared note or Slack thread where people can jot down "thank you candidates" during the week so you don't forget by Thursday
5. Execute with Intention:
Set a recurring calendar reminder for Thursday mornings
Spend 5 minutes reflecting on the week: who helped you succeed?
Write your thank you message with care - it should feel personal, not templated
Post it publicly and tag the person
Don't overthink it - genuine appreciation beats perfect wording
6. Follow Up and Build the Habit:
After the first Thursday, reflect: How did it feel? How did the person respond?
Notice how it changes your awareness throughout the week - you'll start seeing contributions you used to miss
Invite others to join: "I'm doing Thank You Thursday to recognize cross-team collaboration. Want to join?"
After 4 weeks, gather your team to discuss: Has this changed how we work together? What have we learned?
Expected Benefits:
Immediate Wins:
People feel seen and valued for their contributions
Creates positive emotional moments in the workday
Costs zero budget and takes 5 minutes
Visible demonstration of cross-functional appreciation
Relationship & Cultural Improvements:
Strengthens relationships between product and other departments
Makes collaboration more rewarding and less transactional
Builds psychological safety - people know their efforts matter
Creates a record of positive interactions that reinforces team bonds
Encourages reciprocal recognition from other teams
Long-Term Organizational Alignment:
Shifts culture from siloed work to collaborative partnership
Makes invisible work visible, encouraging more of it
Creates social incentives for cross-functional support
Builds organizational memory of "how we work together well"
Establishes gratitude as a leadership practice, not just a nice-to-have
Let us know how it went and what conversations it sparked! Use the hashtag #MLAChallenge to share your story. Let's inspire each other to make recognition everyone's business.
Bystander Effect at Daily Scrum - why "everyone sees the problem" ends with nobody solving it
Third day in a row. I'm sitting at the daily and hearing the same story. Backend developers report they can't finish the API integration because they lack access to the test environment. DevOps nods. Product Owner nods. Frontend developer nods. Everyone knows. Everyone understands. Everyone feels it's important.
Nobody takes ownership.
"Well, someone should handle this" - a suggestion drops. And again, the nodding





