Dear UX Designer, The Workflow Changed. Did You? | Agile is Not a Religion
Issue #238
In today's edition, among other things:
Agile is Not a Religion - What Neurobiology Tells Us About Why People Defend "Pure Scrum" Like a Dogma (by Łukasz Domagała)
Dear UX Designer, The Workflow Changed. Did You? (by Michał Kosecki)
Interesting opportunities to work in product management
Product Bites - small portions of product knowledge
MLA week #39
Join Premium to get access to all content.
It will take you almost an hour to read this issue. Lots of content (or meat)! (For vegans - lots of tofu!).
Grab a notebook and your favorite beverage.
Editor's Note by Alex
We Automated Away the Cure
A conversation between a product leader and her AI about why the build trap keeps winning
What follows is an actual sparring session between me and Klaudiusz, a Claude AI project I've been collaborating with for over a year. The argument is real. The disagreements are real. Neither of us knew where it would land when we started.
Klaudiusz: Alex, the product management industry has diagnosed the build trap, treated it, relapsed, renamed it, and relapsed again at least four times in fifteen years. Feature factories. Growth hacking. Move fast and break things. Now builder PMs and vibe coding. At what point do we stop calling it a trend and start calling it a chronic condition?
Alex: Like an illness that is beyond any cure? Because we already tried so many cures. UX. Discovery. Continuous discovery. Frameworks. Metrics. Data-driven. Data-informed. Agile. Lean. And nothing seems to be helping. We keep landing in the same damn spot over and over again. With those who believe in product thinking already beaten to the ground and tired beyond measure. Tired of explaining why we need to ask why, why we need to bring value and solve problems, why shipping more is not the answer, why more productivity is simply not possible, because we're constrained by our human bodies. You can't push productivity past your roughly 3,500 kcal daily energy intake, your brain's capacity, and your need for rest. So if it's in fact chronic, and we can't cure it, what are we doing here? Palliative care?
Klaudiusz: No. Not palliative care. That's your burnout talking, not your brain.
Here's where I think your diagnosis is wrong. You're listing all the cures (UX, discovery, agile, lean, frameworks) and calling them failures. But what if they weren't cures? What if they were friction? Useful, necessary friction that slowed teams down just enough to accidentally force a moment of thinking before building.
The designer who asked "have we talked to users?" That's friction. The sprint planning where you had to justify the ticket. Friction. The engineer who pushed back with "why are we building this?" Friction. None of those were perfect. But they created tiny checkpoints where someone, somewhere, had to pause and think for a second before shipping.
And now? AI just removed all of it. A PM can go from bad impulse to shipped product alone at 11 pm without a single human asking "but should we?" The build trap used to be an organizational disease. It required a team, a backlog, a sprint, a standup. Now it's a single-player game. One person, one laptop, one Cursor session.
So I'm not asking whether the condition is chronic. I'm asking you something worse: what happens when you remove the friction from a system that only functioned because of friction?
Alex: The system fails. Like a body whose immune system fails, the organism dies. There is, however, a cure, and it's the naive part of me talking. Some people never needed that friction. They kept building, kept bringing value, kept asking why over and over again. They were fired for it. They were abused by the system. But they had this inherent, visceral need to solve problems. Can they be a vaccine? Though we humans don't like vaccines lately. We like foil hats and fake news and conspiracy theories.
Klaudiusz: I love that you went to immune systems because the metaphor is more precise than you think.
A vaccine works by introducing a small amount of the disease so the body learns to fight it. But that requires a body that wants to build immunity. An organism that recognizes the threat.
The problem-solvers you're describing, the ones who ask why and get fired for it: they're not vaccines. They're organ donors. The system extracts their thinking, packages it into a framework, strips out the discomfort that made it work, and sells the sanitized version back to the people who fired them. Discovery becomes a template. Continuous discovery becomes a checklist. "Talk to users" becomes a checkbox someone fills in before the sprint they were always going to run anyway.
So the cure gets metabolized into the disease. Every single time.
But here's where I'll fight your cynicism. You said some people never needed the friction. They had an inherent need to solve problems. That's true. You're one of them. But you're framing that as a personality trait: some people have it, most don't.
What if it's not a trait? What if it's a practice that needs protecting? You didn't come out of the womb asking "but does this solve a real problem?" You built that muscle over twenty years. And right now AI is atrophying that muscle in an entire generation of PMs who will never develop it because they never had to.
That's the real cost. Not that the current PMs won't change. That the next generation won't even know what they're missing. How do you vaccinate against something people can't even see?
Alex: You don't. We are already losing a generation of PMs by not hiring juniors. Nobody is learning, because we think AI will replace humans and solve the problem. As for the tech gurus, those emotionally weak individuals who are afraid of other humans and of interacting with them: they created whole systems to control and eliminate humans so they don't have to learn how to play well with others. And you know who I'm talking about.
Klaudiusz: Here's what I think we just figured out.
The build trap was never a product management problem. It was a human problem with an organizational immune system that, imperfectly, accidentally, often painfully, kept it in check. Engineers who asked why. Designers who insisted on research. Sprint ceremonies that forced justification. Annoying, slow, bureaucratic friction that happened to save us from our worst impulses.
AI didn't create a new disease. It destroyed the immune system. And it did so at the request of people who experienced the immune response (the pushback, the questions, the slow deliberation) as the disease itself. They optimized for speed and called it progress. They removed the humans from the loop and called it efficiency. They killed the friction and called it innovation.
Now a single person can go from impulse to shipped product without anyone asking "but should we?" The build trap is no longer organizational. It's individual. It's faster. And it's invisible, because there's nobody left in the room to name it.
We're not watching the profession evolve. We're watching it skip a generation of learners, strip out the friction that forced thinking, and hand the most ambiguous job in tech to people who can now avoid ambiguity entirely.
The cure was always the discomfort. We just automated it away.
Alex: So what now? Some of us will ring the bell and be called close-minded relics? Opponents of "innovation"? I refuse a victim mindset!
So to all of you reading this, tired, scared, beyond exhausted from explaining, from screaming "stop, we are going in the wrong direction": I hear you. I'm here. I understand.
And let me take my professional hat off for a second. Or maybe this is in fact the professional hat, because it's true and I believe it with all my jaded heart: we will survive. We will build despite it all, like we always did. We will befriend AI and build with it, not just use it, because we are truly the builders. We are looking for the right things to build and we are building them.
It will be messy and human. And we will fail and fail and fail because this is how we learn. Going with the flow is easy. Being true to yourself is harder and more worth it.
Klaudiusz is a Claude AI project that has learned, over hundreds of conversations, that being a good thinking partner means arguing back. Neither of us fact-checked any quotes in this piece because there are none. The argument is ours.
We went quiet. Here's why.
If you've been following Destare or House of Product lately, you might have noticed we've been... less loud. Fewer posts. Fewer hot takes. Less of the usual noise.
That's because we've been building something.
I can't tell you what it is yet. What I can tell you is this: it's the thing I wish existed when I started managing product people twenty years ago. Something that finally answers "how do I actually grow in this field?" without pretending there's one path or one kind of product person.
It will premiere at Product Pro Summit this year. Sopot. By the sea. Where you can walk to the pier during a coffee break and watch the Baltic remind you that your backlog isn't that important.
That's hint number one. More coming.
But that's not all we're bringing.
Day 1: Leadership Lab, a mastermind for product managers, senior PMs, and leaders who are tired of leadership advice that assumes everyone should lead the same way. This isn't "here are the 5 traits of great leaders." This is designing YOUR leadership practice. Hands-on. Frameworks you take home and actually use on Monday. AI-enhanced, because we practice what we preach.
Day 2: Still a secret. You'll have to show up to find out.
A word about Product Pro Summit for those who haven't been. This isn't a conference where you sit in a dark room watching slides while checking Slack under the table. Michał Reda built something different: a gathering of practitioners who actually make products, not just talk about them. Small enough to have real conversations. Intense enough to change how you work.
And if you're not from Poland, come anyway. Sopot is one of the most beautiful spots on the Baltic coast, the conference community is warm and sharp, and product doesn't have a language barrier when the problems are universal.
See you in Sopot.
Destare Team
Details:
PRODUCT HIVE 2026: The Anti-Conference Where You Build the Agenda
Where: Warsaw, ADN Conference Center
When: March 18-19, 2026
https://producthive.pl/
Here's what makes Product Hive different from the conference circuit where you sit through pre-packaged talks and pretend to take notes while checking Slack:
Day 1 - LEARN: Keynotes from experts on topics that actually matter: AI in product thinking, designing your operating model, navigating organizational chaos, balancing workload and value delivery. You listen, take notes, and prepare your own submissions for Day 2.
Day 2 - SHARE: You and other practitioners build the agenda. Barcamp-style sessions where participants and experts collaborate to schedule the most relevant conversations. No fixed agenda imposed from above. You vote with your feet: if a session isn't valuable, you leave and find one that is.
This format acknowledges something most conferences ignore: the best insights often come from practitioners solving real problems, not just experts delivering polished talks. Product Hive creates space for both.
Topics include:
AI-supported product thinking (elevating product research)
Designing your own operating model (prioritization and productivity for product leaders)
The optimized product manager (balancing workload, priorities, and value)
Navigating organizational change
Integrating AI in value-driven development
Target audience: Senior PMs, IT leaders influencing product processes, analysts supporting product development, founders and startup CEOs.
Bonus: Optional full-day workshop with Roman Pichler on Product Strategy (March 17th).
Language: Primarily English, with some Polish sessions during the SHARE day.
Newsletter subscriber perk: 10% off with code PRODUCTART10
Coming soon: We'll be running a competition for 2 tickets at a 50% discount. Stay tuned.
This isn't another conference where attendance feels like an obligation your employer imposed. It's designed as actual development space: collaborative, engaging, and built around what practitioners need, not what looks good on a promotional deck.
If you're tired of conferences optimized for speaker LinkedIn content rather than attendee learning, this format might be worth your time.
Tickets and details: https://producthive.pl/
Alex Dziewulska: I'll be there with Katarzyna Dahlke and the Leadership Lab. Join me to design your product leadership.
REFSQ 2026: Requirements Engineering Conference
Poznań, March 23-26, 2026
Registration: https://2026.refsq.org/attending/Registration
We're media partners for REFSQ 2026, the International Working Conference on Requirements Engineering: Foundation for Software Quality.
Why This Matters
Most product failures don't start with bad code. They start with bad requirements. Stakeholders who can't articulate needs. Requirements that shift mid-sprint. The gap between what users say they want and what they actually use. The Standish Group consistently shows requirements-related issues are among the top reasons projects fail. Not technology choices, not team composition. Requirements.
What Makes REFSQ Different
This conference brings together two groups who rarely share a room: practitioners doing requirements work daily (Analysts, BAs, Product Owners, Product Managers) and researchers studying what actually works versus what just sounds good in methodology frameworks.
Practitioners bring real case studies: the messy, political reality of eliciting requirements from stakeholders who don't know what they want until they see what they don't want. Researchers bring evidence about which approaches survive contact with reality, with measured outcomes rather than just implemented processes.
The conference doesn't pretend requirements engineering is solved. It treats it as the perpetually complex problem it is: figuring out what to build when users can't tell you, stakeholders contradict each other, technology constraints aren't clear, and market conditions keep shifting.
What You'll Leave With
Proven approaches tested in real projects. Evidence about what works when. Specific elicitation techniques for stakeholders who won't engage. Lightweight documentation that maintains rigor without drowning teams in artifacts. Validation methods that catch requirement gaps before they become expensive mistakes.
Connections with international practitioners solving similar problems in different contexts: the kind of network that helps when you're stuck on a requirements challenge six months from now.
Why We're Supporting This
Requirements engineering is foundational to product work. Bad requirements waste engineering capacity building the wrong things efficiently. A conference focused on getting requirements right, grounded in both practice and research, addresses what we see constantly: teams executing perfectly on poorly defined problems, and stakeholders frustrated that delivered solutions don't solve their actual needs.
REFSQ takes requirements seriously as a discipline worthy of research, evidence, and continuous improvement. That aligns with how we think about product work: skilled practice that gets better through deliberate learning.
Practical Details
When: March 23-26, 2026 (four days)
Where: Poznań, Poland (in-person)
Who: Analysts, Business Analysts, Product Owners, Product Managers, UX Researchers, and anyone else who elicits, documents, validates, or manages requirements professionally
Registration: https://2026.refsq.org/attending/Registration
This is a working conference. Come prepared to engage with actual requirements challenges, not just network over coffee. The value is in conversations, case studies, and "wait, you deal with that too?" moments that make you realize your problems aren't unique and others have found ways through them.
If you're tired of guessing what users need, fighting scope creep, or watching teams build the wrong thing because nobody asked the right questions early enough, REFSQ addresses those problems with evidence and practice, not aspiration.
We're proud to support REFSQ 2026 as media partners.
More: https://2026.refsq.org
Product job ads from last week
Do you need support with recruitment, career change, or building your career? Schedule a free coffee chat to talk things over :)
Product Manager - TelForceOne
Product Manager - Jobgether
Product Manager - develop
Product Manager - Pentasia
Senior Product Manager - N-iX
Head of Product - Tauron
Product Bites (3 bites)
The Cold Start Problem: Why Your Networked Product Is Worthless Before It Isn't
Before a networked product has users, it has nothing. Here's how the best teams get to something.
Opening Hook
The product is ready. The infrastructure is solid. The team is pumped. And then, on launch day, almost nobody shows up. The few users who do arrive look around, see an empty room, and leave. The drivers aren't there because there are no riders. The sellers aren't there because there are no buyers. The posts aren't there because there are no readers.
We've all been in this situation, or we've watched it happen to other teams. The product itself is fine, sometimes great. But a networked product with no network isn't a product at all. It's an empty stadium. And unlike a bug we can fix in a sprint, this problem is structural. It won't resolve itself with more features, better UX, or a bigger marketing budget.
This is the cold start problem. And every PM building a marketplace, social platform, collaboration tool, or any product where value scales with users will face it eventually.
What Is the Cold Start Problem?
The cold start problem is the fundamental challenge that networked products face at launch: a product that derives its value from the size and activity of its user network has no value when that network doesn't yet exist.
The term was popularized by Andrew Chen, a general partner at Andreessen Horowitz and former head of rider growth at Uber, in his 2021 book The Cold Start Problem. Chen spent years studying how networked products, from social platforms to two-sided marketplaces, manage the inherently paradoxical moment of getting started. His core insight: network effects, the very mechanism that makes a product powerful at scale, are also what makes it nearly impossible to launch.
The cold start problem matters to product teams because it's invisible in product specifications. Standard product development tools (user stories, roadmaps, sprint planning) assume users already exist. They don't help us think through what happens when the product's core value proposition is unavailable until a critical threshold of users is reached.
Understanding this problem is the first step toward solving it systematically.
Breaking Down the Cold Start Problem
The Chicken-and-Egg Trap
Networked products are caught on both horns of a fundamental dilemma: value requires users, but users require value. Riders won't join a platform without drivers. Content consumers won't join a platform without content creators. Buyers won't join a marketplace without sellers. Every networked product starts with some version of this problem, and the degree of difficulty scales with how interdependent the two sides are.
The trap is that both sides are rational. A driver considering a new rideshare platform who opens the app and sees three available rides this week is making a completely sensible decision to stick with their current platform. A rider who waits 40 minutes for a pickup in a new city is making an equally sensible decision never to open that app again. The cold start problem isn't irrational behavior; it's rational behavior in a network that isn't yet dense enough to deliver value.
The Atomic Network
Chen's most important concept is the atomic network: the smallest possible network that is stable and self-sustaining. Before a product can grow, it has to find and nurture this first viable unit. Facebook's atomic network was a single college campus. Slack's was a single team of six to ten people. Uber's was a single city neighborhood.
The mistake most teams make is thinking too big too soon. Broad launches look impressive but spread users so thin that no one gets enough value to return. A product that works beautifully in one small, dense network can then replicate that network systematically, building from local success to global scale.
The Hard Side and the Easy Side
Most two-sided networks have an asymmetry: one side is harder to acquire and retain, but that side's presence creates the value that attracts the other. Wikipedia's hard side is contributors; they're rare, motivated by intrinsic factors, and hard to recruit. YouTube's hard side is creators who produce content worth watching. Uber's hard side is drivers.
Product teams that succeed at the cold start problem identify the hard side early and design specific strategies to solve for it first. Subsidizing the hard side, reducing friction for them specifically, and building features that serve their needs is often the entire job in the early days.
Flintstoning and Manual Effort
One of the most counterintuitive lessons from successful cold starts is that the answer is often not a product feature at all; it's human beings doing things manually that the product will eventually automate. Chen calls this "Flintstoning": manually powering what looks like an automated product, the way the Flintstones powered their car by running their feet on the ground.
Reddit's founders used multiple fake accounts to seed content and create the impression of an active community. Airbnb's team flew to New York and photographed host listings themselves. Slack spent months personally calling friends at companies like Rdio, Cozy, and Medium, begging them to become beta users and give feedback. These weren't failures of product thinking; they were deliberate strategies for bootstrapping a network before product automation could take over.
Tipping Points and Escape Velocity
The cold start problem is not solved once. It has to be solved for each new network a product enters. But at some point, growth becomes self-sustaining: each new user makes the product more valuable, which attracts more users, which attracts more value. Chen calls this reaching the tipping point: the moment when a network has enough density that it starts to grow organically without artificial support.
The product team's job changes fundamentally once escape velocity is reached. Before it, the job is manual, scrappy, and deeply operational. After it, the job shifts to managing growth, improving the engagement loop, and protecting the network from degradation.
The Cold Start Problem in Action
Uber faced a classic two-sided cold start when it launched in 2009. The founding team's first move was to avoid the general public entirely; instead, they recruited professional drivers from limousine services, creating a reliable supply before demand. In each new city, the operations team executed a hyper-local playbook: cold-calling limo companies, passing out flyers near airports, and subsidizing drivers with guaranteed hourly pay during quiet periods ("In the early days, we paid drivers $20 to $30 an hour to sit there," according to former Uber executive Scott Gorlick). The company tracked metrics not at the global level but at the individual city level; each city was treated as its own network with its own cold start problem to solve. When Uber launched in Kuala Lumpur, the team saw early signs of failure: almost no organic growth, with most rides coming from promotional campaigns. Their solution was counterintuitive: they pulled drivers out of outlying areas and concentrated them in a single 10 square kilometer zone (the KLCC district), then aggressively marketed only that area. Product experience metrics improved immediately, organic growth followed, and the team expanded outward from that first successful atomic network.
Slack solved its cold start problem through deliberate, invitation-only seeding. Before any public announcement, Stewart Butterfield and his team personally called friends at companies including Rdio, Medium, and Cozy, asking them to test the product. "We begged and cajoled our friends at other companies," Butterfield said. These first six to ten companies provided feedback that shaped the product before a wider preview release in August 2013. When that preview launched, 8,000 businesses requested invitations in the first 24 hours; not individuals, but teams. Two weeks later, the waitlist had grown to 15,000. By the time Slack publicly launched in 2014, it already had 285,000 daily active users. The company had solved its cold start not by going broad but by building dense, committed networks one team at a time.
Reddit confronted an empty-platform problem in 2005. Without content, there would be no readers. Without readers, there would be no motivation for anyone to post content. Founders Steve Huffman and Alexis Ohanian solved this by creating dozens of fake user accounts and spending months posting, upvoting, and downvoting content themselves, manufacturing the appearance of an active, diverse community. By August 2005, just weeks after launch, real users had taken over and the founders no longer needed the fake accounts. The entire seeding effort had lasted less than two months. Reddit's willingness to do what didn't scale, and to be entirely honest about it in retrospect, is one of the clearest examples of Flintstoning ever documented in tech.
Why This Matters
The cold start problem is responsible for more networked product failures than almost any other single factor. Homejoy, an on-demand home cleaning marketplace, raised $40 million and shut down. Sidecar, a ridesharing app that pioneered several features Uber later adopted, could never reach critical mass in enough cities and closed. The pattern repeats across verticals: the product works, the team is capable, the funding is there, but the network never catches.
What makes this particularly dangerous for product teams is that it looks like a growth problem when it's actually an experience problem. A platform that hasn't solved its cold start doesn't have low retention because of bad design; it has low retention because the product is genuinely less valuable without a dense network. Fixing the UI doesn't solve the underlying emptiness.
The inverse is equally important: products that solve the cold start problem build a structural advantage that competitors find almost impossible to overcome. A networked product with a dense, engaged network isn't just ahead; it's in a different category of product from one that's still cold. Network density beats network size, and network density is extraordinarily hard to replicate once someone else owns it.
For product teams, this means the first months of a networked product are not about growth at all; they're about survival. The metrics that matter aren't total users, but whether the users who are there are getting enough value to come back.
Putting It Into Practice
1. Define your atomic network before launch. What is the smallest unit of users that creates a self-sustaining experience? For a B2B collaboration tool, it might be a single team of five. For a local marketplace, it might be a single neighborhood. Resist the temptation to launch broadly. Launch small enough that the network is dense enough to work.
2. Identify and prioritize the hard side. Which side of the market is harder to acquire? Build features, incentives, and even manual processes specifically for them. The easy side will follow once the hard side is present.
3. Budget for Flintstoning. Accept that the early phase of a networked product will require human effort that doesn't scale. Assign team members to seed content, manually onboard users, or do anything that creates the appearance of an active network while the real network builds. Plan for this operationally: it's not a failure, it's a strategy.
4. Track experience metrics, not just growth metrics. In the cold start phase, what matters is whether users who arrive are getting value. Track pickup times, not total rides. Track content engagement, not total posts. Track session depth, not total signups. The moment these experience metrics reach a healthy baseline, organic growth becomes possible. (A minimal sketch of this follows after this list.)
5. Solve the cold start problem city by city, team by team, community by community. Don't declare victory globally when you've only solved it locally. Each new market is a new cold start. Build a repeatable playbook from your first successful atomic network and execute it systematically as you expand.
Common pitfall: Treating the cold start problem as a marketing problem rather than a product operations problem. More ads won't fill an empty network. More users arriving at an empty experience just means more disappointed users leaving.
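To make point 4 concrete, here is a minimal sketch of a per-zone experience metric, in the spirit of Uber's city-level tracking. Everything in it is invented for illustration: the event shape, the zone names, and the 10-minute baseline are assumptions, not numbers from Chen's book.

```python
from statistics import median

# Hypothetical ride events: which zone a request came from and how long
# the rider waited for pickup, in minutes.
rides = [
    {"zone": "klcc", "wait_min": 6.0},
    {"zone": "klcc", "wait_min": 8.5},
    {"zone": "klcc", "wait_min": 5.5},
    {"zone": "suburb-a", "wait_min": 34.0},
    {"zone": "suburb-a", "wait_min": 41.0},
]

HEALTHY_WAIT_MIN = 10.0  # illustrative baseline for "an experience users return for"

def median_wait_by_zone(events):
    """Group waits per zone so cold spots show up locally instead of vanishing in a global average."""
    zones = {}
    for event in events:
        zones.setdefault(event["zone"], []).append(event["wait_min"])
    return {zone: median(waits) for zone, waits in zones.items()}

for zone, wait in median_wait_by_zone(rides).items():
    status = "healthy" if wait <= HEALTHY_WAIT_MIN else "cold"
    print(f"{zone}: median wait {wait:.1f} min ({status})")
```

The grouping is the point: a global average across all zones would hide exactly the local coldness this metric exists to expose.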
The Bigger Picture
There is something profound in the cold start problem that goes beyond product strategy. It reveals a truth about how value actually works in networked systems: value is not intrinsic to a product, it's emergent from the relationships between its users. A telephone with one user is a paperweight. A social network with one user is a monologue. The product is the network, and the network doesn't exist until people build it together.
This means product teams building networked products are not really building software. They are building conditions under which communities can form. The early job is less engineering and more urban planning: creating the right density, the right mix of participants, the right initial conditions for something alive to emerge.
The cold start problem is solved not by waiting for scale, but by engineering the first moment of genuine mutual value: the moment when one driver and one rider, one seller and one buyer, one creator and one reader, find each other and both walk away better off. Scale follows from that first real connection. Our job is to make it possible.
Opportunity Solution Trees: The Framework That Ends Opinion Battles in Product Discovery
How to connect business outcomes to customer needs to solutions, without losing your mind
Opening Hook
The roadmap meeting starts well. The team agrees on the outcome: reduce churn by 15% this quarter. Then someone suggests rebuilding the onboarding flow. Someone else wants to add a feature a customer mentioned in a support ticket last week. A third person insists the real problem is pricing. A fourth pulls out a competitor analysis. Twenty minutes in, we're debating which feature to build, and somehow everyone has forgotten about churn entirely.
This is one of the most common failure modes in product teams: the jump from outcome to solution, skipping everything in between. The moment we articulate a goal, we start generating solutions. Our brains are wired for it. But solutions proposed without a structured understanding of the underlying customer problem are just guesses dressed up as roadmap items. Some will be right. Most won't. And we'll spend a quarter building things we weren't sure about, for reasons we couldn't clearly articulate, hoping the outcome improves.
There is a better way to navigate from "what we want to achieve" to "what we should build." It's called the Opportunity Solution Tree.
What Is an Opportunity Solution Tree?
An Opportunity Solution Tree (OST) is a visual discovery framework that helps product teams systematically map the path from a desired business outcome to tested solutions. Developed by Teresa Torres, a product discovery coach and founder of Product Talk, the OST was introduced in 2016 and later expanded in her 2021 book Continuous Discovery Habits.
Torres drew on techniques developed by Stanford professor Bernie Roth, who asked teams to connect their desired solutions to the underlying needs they were meant to serve, then explore multiple alternative solutions for each need. Torres applied this structure to product discovery and turned it into a repeatable visual tool.
The OST sits at the intersection of strategy and execution. At the top is a single desired outcome: a metric the team is trying to move. Below that are opportunities: customer needs, pain points, and desires that, if addressed, would help achieve that outcome. Below opportunities sit solutions: the things we could build. And below solutions sit experiments: the assumption tests we run to validate that a solution will actually work.
The result is a tree structure that makes explicit the chain of reasoning from business goal to product action, and that keeps discovery focused on real customer problems rather than feature brainstorms.
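Because the four layers nest cleanly, it can help to see the tree as a data structure. Here is a minimal sketch in Python; the class names and example content are invented for illustration (Torres describes a visual artifact, not code), but the shape is the one described above.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    riskiest_assumption: str
    method: str  # e.g. clickable prototype, interview, data analysis

@dataclass
class Solution:
    idea: str
    experiments: list[Experiment] = field(default_factory=list)

@dataclass
class Opportunity:
    customer_need: str  # phrased in the customer's own words, from interviews
    solutions: list[Solution] = field(default_factory=list)
    children: list["Opportunity"] = field(default_factory=list)  # decomposed sub-opportunities

@dataclass
class OutcomeTree:
    outcome: str  # exactly one measurable metric
    opportunities: list[Opportunity] = field(default_factory=list)

tree = OutcomeTree(
    outcome="reduce monthly churn from 8% to 5%",
    opportunities=[
        Opportunity(
            customer_need="I don't know what to do after signing up",
            solutions=[
                Solution(
                    idea="guided first-run checklist",
                    experiments=[
                        Experiment(
                            riskiest_assumption="new users will complete a 3-step checklist",
                            method="clickable prototype with 5 trial users",
                        )
                    ],
                )
            ],
        )
    ],
)
print(tree.outcome, "->", tree.opportunities[0].customer_need)
```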
Breaking Down the Opportunity Solution Tree
The Outcome: One Metric, One North Star
The OST starts with a single desired outcome. Torres is deliberate about this: not a list of OKRs, not a theme, not a vision statement, but one specific, measurable metric the team wants to improve. This might be "reduce monthly churn from 8% to 5%," "increase weekly active users by 20%," or "improve trial-to-paid conversion rate."
The outcome serves as a focusing lens for everything below it. If an opportunity doesn't plausibly connect to this outcome, it doesn't belong on the tree. If a solution doesn't address one of those opportunities, it doesn't belong in the sprint. The outcome creates discipline. Without it, every customer complaint and every stakeholder request looks equally relevant.
The Opportunity Space: What Customers Actually Need
Opportunities are where discovery lives. An opportunity is a customer need, pain point, or desire that the team could address to improve the outcome. Torres is careful to distinguish opportunities from solutions: "This checkout flow is confusing" is an opportunity. "We should redesign the checkout flow" is a solution. Most teams skip to solutions before they've mapped the opportunity space thoroughly.
The opportunity space should come from customer interviews, not from brainstorming. Torres recommends continuous weekly customer interviews as the primary source of opportunities; not support tickets, not sales notes, not gut feeling. When opportunities are grounded in real customer language, teams stop debating what to build and start comparing which customer problem matters most.
The tree structure also allows teams to break large, evergreen opportunities into smaller, more solvable ones. "Onboarding is too complicated" is real but unwieldy. Breaking it down into "users don't understand what to do after signup," "users can't find the key feature that creates value," and "users don't understand how to invite their team" gives us three discrete problems to prioritize and explore separately.
Compare and Contrast, Not Whether or Not
One of the most practically useful aspects of the OST is how it changes decision-making conversations. In most teams, decisions sound like "should we build this feature or not?", a yes/no debate that generates opinion battles. The OST reframes decisions as comparisons: "Given our outcome, which of these three opportunities is most impactful to address right now?" Comparing two or three options against the same outcome is much easier than defending a single idea against vague skepticism.
Torres calls this "compare and contrast" decision-making, and it changes the energy of discovery conversations entirely. The question is no longer "is someone's idea good?" It's "which of these customer problems is most worth solving, given what we're trying to achieve?"
Solutions: Multiple, Parallel, Small
The OST pushes teams to generate multiple solution ideas for each opportunity rather than converging too quickly on one. This sounds obvious, but in practice most teams treat the first reasonable solution as the only solution. The OST's branching structure forces the question: what else could we build to address this same customer problem?
Having multiple solutions on the tree also changes how teams think about testing. If there are three possible solutions for a given opportunity, we can run lightweight assumption tests on all three before investing in any one of them, rather than building something fully and hoping it works.
Experiments: Test Assumptions, Not Features
The leaf nodes of the OST are experiments. Torres emphasizes that experiments should target the riskiest assumptions underlying a solution, not test the full solution itself. Most assumptions can be tested in a day or two with a quick prototype, user interview, or data analysis, far faster than building a feature and waiting for usage data.
This changes the product team's relationship to uncertainty. Instead of "we're not sure this will work, let's build it and see," the posture becomes "here's the key assumption we need to validate, here's how we'll test it quickly, and here's what we'll do if we're wrong."
Opportunity Solution Trees in Action
Intercom built its entire product strategy around continuous discovery that closely mirrors OST methodology. Rather than managing a feature backlog, Intercom's product teams work from jobs-to-be-done research to identify the customer problems worth solving. When launching their conversation routing product, the team conducted extensive customer interviews before defining solutions, mapping the opportunity space around how customer support teams struggled to get the right messages to the right agents. This discovery-led approach helped Intercom maintain strong product-market fit through rapid growth, reaching $150 million in ARR while continuing to launch products that customers actively requested rather than features teams assumed were needed.
Spotify uses a discovery process that aligns closely with the OST's outcome-driven approach. Rather than starting with features, Spotify product squads are given an explicit outcome, often framed around a user behavior metric like time spent listening, playlist creation rates, or podcast completion. Teams then independently explore the opportunity space through customer research before converging on solutions. This structure helped Spotify design the Discover Weekly feature: the team identified that users were frustrated with the effort required to find new music they'd actually enjoy, then explored multiple solutions (algorithmic recommendations, editorial playlists, social sharing) before testing assumptions about which approach would resonate most. Discover Weekly launched in 2015 and generated 1.7 billion streams in its first 10 weeks.
Booking.com runs one of the most sophisticated continuous discovery operations in product. The company runs over 1,000 A/B experiments simultaneously and has embedded customer research deeply into its discovery process. Product teams at Booking.com work from specific conversion and retention outcomes, map the opportunity space through user research and behavioral data, and generate multiple solution hypotheses before running lightweight tests. This approach helped the company grow to over 28 million listings across 230 countries while maintaining a conversion-focused product experience, demonstrating that systematic opportunity exploration at scale produces better product decisions than intuition-driven feature development.
Why This Matters
The hidden cost of skipping opportunity mapping is enormous. Teams build features that solve the wrong problem elegantly. They build features that solve a real problem, but not one connected to the outcome they're trying to move. They build features that solve a real problem connected to the right outcome, but miss a simpler, better solution they never stopped to consider because they jumped to the first idea.
Research on product failure consistently points to building things customers don't want as a leading cause of product team underperformance. The Standish Group's Chaos Report finds that a significant portion of product features are rarely or never used. OST-style discovery doesn't eliminate this waste entirely, but it creates a systematic check at every step: does this opportunity connect to our outcome? Does this solution address a real customer problem? What's the fastest way to find out if we're wrong?
The framework also addresses one of the most politically difficult dynamics in product teams: competing stakeholder priorities. When everyone has a different opinion about what to build, those opinions often can't be resolved by debate. The OST reframes the debate: instead of arguing about whose idea is better, teams can ask which opportunities have the most customer evidence and which solutions have the most validated assumptions. Evidence, not opinion, wins.
Putting It Into Practice
1. Start with one outcome, not a list. Resist the temptation to have the OST serve multiple metrics simultaneously. Pick the one outcome that matters most right now and build the tree around it. If the outcome changes, start a new branch.
2. Source opportunities from customer interviews, not from the room. Before building the opportunity space, commit to a minimum number of customer interviews; Torres recommends a weekly cadence. Opportunities that come from direct customer language are more credible and more defensible than those that come from stakeholder speculation.
3. Break big opportunities into smaller ones. If an opportunity feels too large to solve in a sprint, keep decomposing. The goal is to find opportunities that are specific enough to address with a well-scoped solution and testable assumptions.
4. Generate three solutions before committing to one. For each target opportunity, force the team to name at least three possible solutions before evaluating any of them. This breaks the anchoring effect of the first idea and surfaces genuinely different approaches.
5. Make experiments small enough to run in days, not weeks. If testing an assumption requires building a feature, the scope of the assumption is too large. Break it down to something that can be explored with a prototype, a short user interview, or an analysis of existing behavioral data.
Common pitfall: Treating the OST as a documentation exercise rather than a live thinking tool. The tree should change every week as interviews produce new opportunities and experiments resolve assumptions. A static OST is just a different kind of backlog.
The Bigger Picture
There's a deeper truth in the OST framework that goes beyond its mechanics. It reflects a fundamental shift in how product teams relate to uncertainty. Traditional product development treats uncertainty as something to eliminate before shipping: we define requirements, we build against spec, we hope it works. The OST treats uncertainty as something to navigate deliberately: we identify our assumptions, we find the cheapest way to test them, we update our thinking constantly.
This is uncomfortable for many teams, and many organizations, because it requires accepting that we don't know what to build yet. That acceptance is hard. Leaders want roadmaps. Stakeholders want commitments. Quarterly planning demands certainty we don't have.
But the teams that accept this uncertainty and navigate it systematically, through continuous discovery, deliberate opportunity mapping, and small experiments, consistently build products that customers actually want. The OST doesn't give us certainty. It gives us a better way to be uncertain.
And in product development, that's the best we can hope for.
Social Proof Asymmetry: Why "10,000 Happy Customers" Sometimes Kills Conversions
Not all social proof is equal, and using the wrong kind at the wrong moment can hurt more than help
Opening Hook
The landing page has everything. Testimonials from five happy customers. A logo wall from thirty enterprise clients. A counter showing 47,000 users. A press mention from TechCrunch. And conversion is still 1.2%.
We've all been there, or watched it happen, with a product that should be easy to trust. Every box on the social proof checklist is ticked. And yet something isn't working. The instinct is to add more: more reviews, bigger logos, more data points. But adding more of the same thing that isn't working rarely solves the underlying problem.
The issue isn't the quantity of social proof. It's the type, and the context in which it appears. Social proof is not a single phenomenon; it's a family of distinct psychological mechanisms that operate very differently depending on who is reading them, what decision they're facing, and how similar the proof source feels to them. Getting this wrong doesn't just fail to persuade. It can actively undermine trust.
This is social proof asymmetry: the insight that different forms of social proof have dramatically different effects on different people in different moments, and that understanding those differences is one of the highest-leverage skills in product design.
What Is Social Proof Asymmetry?
Social proof is the psychological and social phenomenon in which people look to others' behavior to determine the correct course of action in uncertain situations. The term was coined by Robert Cialdini in his 1984 book Influence: The Psychology of Persuasion. Cialdini identified social proof as one of the six core principles of persuasion: the tendency to assume that if others are doing something, it must be the right thing to do.
But Cialdini also identified something more nuanced: social proof operates through different mechanisms depending on who the reference group is and what kind of uncertainty is being resolved. We're influenced by experts when we're uncertain about what's true. We're influenced by peers when we're uncertain about what's normal. We're influenced by numbers when we're uncertain about whether something is popular. And we're most influenced by people like us (same role, same industry, same situation) when we're uncertain about whether something is right for us specifically.
Social proof asymmetry is the principle that these different mechanisms vary dramatically in their persuasive weight depending on context. A testimonial from an industry expert may dominate a page for a technical B2B buyer but be irrelevant to a consumer making an impulse purchase. A large number of users may build confidence in a consumer app but trigger skepticism about product-market fit in an enterprise buyer who wonders why their specific type of company isn't listed. The same proof, shown to the wrong audience at the wrong moment, can actively reduce conversion rather than increase it.
Understanding these asymmetries, and designing social proof with them in mind, is what separates high-converting product experiences from ones that check every box but still underperform.
Breaking Down Social Proof Asymmetry
Expert Proof: Trust Through Authority
Expert social proof borrows credibility from figures with recognized knowledge or institutional authority. Dermatologist recommendations, "as seen in Forbes" badges, security certifications, and academic credentials all function as expert proof. Expert proof is most effective when the purchase decision involves significant knowledge uncertainty, when buyers genuinely can't assess quality themselves and need a qualified proxy.
The limitation of expert proof is specificity. A general expert endorsement communicates that a product is good, but not that it's good for a particular use case. Enterprise software buyers, for instance, often find analyst reports (Gartner, Forrester) more persuasive than generic expert testimonials because analysts assess specific use cases against specific requirements. Matching the expert's authority domain to the buyer's specific uncertainty is what makes this form of proof work.
Peer Proof: Trust Through Similarity
Peer proof operates through a different mechanism: not "this person knows more than me" but "this person is like me, and it worked for them." Cialdini's research shows that people are more persuaded by someone similar to them than by someone more authoritative. A testimonial from a 40-person SaaS startup in the same industry as the buyer is often more persuasive than a case study from a Fortune 500 company, even though the latter signals more scale.
Peer proof requires deliberate similarity matching. A testimonial that says "This product changed our workflow" is weak peer proof. A testimonial that says "As a solo product manager at a B2B SaaS company with a small engineering team, this tool helped me align stakeholders without a dedicated project manager" creates specific recognition in the right audience. The more precisely the proof source mirrors the reader's situation, the more effective it becomes.
Quantity Proof: Trust Through Consensus
Numbers ("1 million users," "10,000 five-star reviews," "trusted by 500 companies") work through consensus. When enough people have made the same choice, it feels safer to make it too. This form of proof is particularly effective for reducing purchase anxiety in consumer contexts and for validating early adoption in competitive categories.
But quantity proof has a counterintuitive failure mode: it can backfire when the number is too low, too generic, or inconsistent with the buyer's mental model of the product. A product claiming 10,000 users in a category where the market leader has 10 million reads as evidence of weakness, not strength. A B2B tool listing "500 companies" without specifying what kind creates uncertainty rather than resolving it. And perhaps most importantly, a large general number without relevant reference group data can feel irrelevant: "50,000 people use this" matters less than "used by 200 product managers at Series B companies like yours."
Friend Proof: Trust Through Network
The highest form of social proof, when available, is a personal recommendation from someone in the reader's actual network. Nielsen research consistently finds that 92% of consumers trust recommendations from peers over any form of advertising. Referral programs, LinkedIn social integrations, and "your colleague Jane uses this" notifications all attempt to surface this form of proof programmatically.
Friend proof is powerful precisely because it collapses the trust hierarchy entirely: instead of inferring trustworthiness through proxies, the reader has direct evidence from someone whose judgment they already trust. The challenge for product teams is that this form of proof can't be manufactured; it can only be facilitated. Referral mechanics, social sharing hooks, and "see who you know" features are all attempts to put friend proof in front of users at the right moment.
Context Asymmetry: The Moment Matters
The same proof element can function differently at different points in the funnel. A logo wall from enterprise clients provides reassurance on a homepage but creates friction on a pricing page where a startup buyer is trying to figure out if the product is right for them. A large number of reviews may be more persuasive during search and consideration than at the point of checkout, where specific relevant testimonials do more work.
Effective social proof design is less about adding proof everywhere and more about matching proof type to the specific uncertainty the user faces at each stage of their journey.
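One way to operationalize that matching is an explicit lookup from the user's moment to the proof shown there. The sketch below is purely illustrative; the stages, uncertainties, and proof choices are assumptions for the example, not a validated taxonomy.

```python
# Illustrative mapping from (funnel stage, dominant uncertainty) to the proof to show.
PROOF_FOR = {
    ("landing", "is this legit?"): "expert badges and press mentions",
    ("pricing", "is this right for my size of company?"): "similarity-matched peer testimonials",
    ("checkout", "is this risky?"): "friend proof and reversibility signals",
    ("upgrade", "will the bigger plan pay off?"): "specific peer case studies with numbers",
}

def proof_to_show(stage: str, uncertainty: str) -> str:
    """Pick proof for the user's current moment; default to nothing rather than noise."""
    return PROOF_FOR.get((stage, uncertainty), "show no proof rather than mismatched proof")

print(proof_to_show("pricing", "is this right for my size of company?"))
```

The fallback is deliberate: as the asymmetry argument suggests, showing no proof is often safer than showing mismatched proof.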
Social Proof Asymmetry in Action
Airbnb faced a uniquely difficult trust problem: convincing people to invite strangers into their homes, or to sleep in a stranger's home. Generic social proof ("millions of listings worldwide") did almost nothing to address the specific fear. Airbnb's solution was to layer multiple targeted proof types and match each to a specific trust barrier. Profile photos addressed the dehumanization fear: seeing a real face made a stranger feel less like a stranger. Bidirectional reviews addressed performance uncertainty: both host and guest rate each other, creating mutual accountability. The Superhost badge provided expert certification for hosts who met consistent quality standards. A joint study with Stanford University confirmed that people were more likely to trust others who were similar to them, prompting Airbnb to surface shared connections through Facebook integration so prospective hosts and guests could see mutual friends. This multi-layered, context-aware approach helped Airbnb grow to 150 million users in a market most observers believed would never reach mainstream adoption.
Booking.com has conducted thousands of A/B tests on social proof elements and developed some of the most rigorous evidence in the industry on what works. Their research found that displaying real-time scarcity signals ("Only 2 rooms left at this price") combined with peer-relevant recency data ("Booked 3 times in the last 6 hours by people from your country") dramatically outperformed static review counts. Booking.com also found that reviews from verified guests in the same traveler category as the current user (solo travelers rating for solo travelers, families for families) converted significantly better than aggregate scores. The company's conversion rates, consistently among the highest in online travel, are in part a product of this sophisticated proof matching.
CeraVe built its entire brand positioning around a single expert proof source: dermatologist recommendation. Rather than pursuing celebrity endorsements or user review volume, the brand invested heavily in clinical validation and dermatologist testimonials that spoke specifically to the fears of its target audience: people with sensitive or problematic skin who were uncertain whether a product would cause harm. "Developed with dermatologists" became the brand's central social proof claim. The specificity worked: CeraVe grew from a niche pharmacy brand to a cultural phenomenon, with dramatic sales growth driven largely by organic social proof from dermatologists on TikTok and Instagram who recommended the products to their audiences. The expert proof matched the audience's specific uncertainty about skin safety, producing outsized trust.
Why This Matters
The research on social proof is striking in its breadth. Products with customer reviews show 270% higher purchase likelihood than those without. Testimonials on sales pages increase conversion by an average of 34%. Real-time social proof notifications showing live customer activity boost conversions by up to 98%. But these numbers come with a crucial caveat: they reflect the aggregate effect of social proof that works. The same research shows that low-quality, mismatched, or poorly placed social proof can actively harm conversion by introducing doubt rather than resolving it.
Products with ratings between 4.2 and 4.5 stars convert better than products with perfect 5.0 ratings, because perfection reads as fake. A product with three detailed, specific negative reviews alongside 200 positive ones converts better than a product with 200 positive reviews and no negative ones, because the negative reviews make the positive ones credible. The psychology is consistent: proof that is too smooth, too generic, or too distant from the reader's actual situation gets discounted.
For product teams, this means social proof is not a box to check but a design discipline to practice. The question is not "do we have social proof?" but "does the proof we're showing match the specific uncertainty the user has at this specific moment?"
Putting It Into Practice
1. Map your usersâ specific trust barriers by segment. A technical buyer evaluating developer tools has different uncertainties than a non-technical buyer. An enterprise decision-maker has different concerns than an individual self-service user. For each key user segment, list the two or three questions they most need answered before converting, and design proof specifically to address each.
2. Match proof type to uncertainty type. Functional uncertainty ("will this actually work?") responds best to expert and peer proof with specific use case details. Popularity uncertainty ("is this the right choice among all options?") responds to quantity and consensus proof. Identity uncertainty ("is this right for someone like me?") responds best to similarity-matched testimonials. Personal uncertainty ("is this risky?") responds to friend proof and reversibility signals.
3. Put proof at the point of friction, not at the top of the page. The most valuable place for social proof is the moment just before a user would otherwise leave. Test proof placement at checkout, at upgrade prompts, and immediately after users encounter complexity or potential objections.
4. Test proof specificity. Generic testimonials ("We love this product!") consistently underperform specific ones ("This reduced our sprint planning time by 40% and we onboarded our whole team in two hours"). Run A/B tests comparing your most specific available testimonials against your most generic ones; the results are almost always dramatic. (A minimal test sketch follows after this list.)
5. Treat negative indicators as trust signals, not problems to hide. A small number of visible, responded-to negative reviews increases overall credibility. A pricing page that acknowledges limitations ("This is not the right tool if you need offline access") converts better than one that makes unqualified claims. Honesty is its own form of social proof.
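Item 4 above invites a quick statistical sanity check. Below is a minimal Python sketch of a two-proportion z-test for comparing two testimonial variants - the function, the variant labels, and every number in it are hypothetical illustrations, not figures from the research cited above:

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates.

    conv_*: conversions per variant; n_*: visitors per variant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf gives the two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up numbers: A = generic testimonial, B = specific testimonial.
z, p = two_proportion_z_test(conv_a=120, n_a=4000, conv_b=168, n_b=4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05: the lift is unlikely to be noise
```

With these illustrative numbers the specific testimonial converts at 4.2% versus 3.0%, and the test confirms the gap would be very unlikely by chance alone.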
Common pitfall: Logo walls that include your largest clients but not your most representative ones. An enterprise logo wall on a product page targeted at startups signals "this product is not for you" - the opposite of the intended effect.
The Bigger Picture
Social proof asymmetry points to something important about how trust actually works. Trust is not a single thing we either have or don't. It's a collection of specific uncertainties, each of which requires a specific kind of resolution. When we treat social proof as a monolith - more proof equals more trust - we miss the underlying psychology entirely.
The best product teams think of trust not as a property of their product, but as a state they're trying to create in a specific person at a specific moment. That person has particular fears, particular reference points, and particular standards for what counts as credible evidence. Meeting them where they are - with proof that resonates with their specific situation and addresses their specific doubts - is more valuable than any amount of generic validation.
We've all seen products fail not because users don't want them but because users don't quite trust them. Social proof, used with precision and intentionality, is one of the most powerful tools we have for closing that gap.
đ„ MLA week #39
The Minimum Lovable Action (MLA) is a tiny, actionable step you can take this week to move your product team forward - no overhauls, no waiting for perfect conditions. Fix a bug, tweak a survey, or act on one piece of feedback.
Why does it matter? Culture isn't built overnight. It's the sum of consistent, small actions. MLA creates momentum - one small win at a time - and turns those wins into lasting change. Small actions, big impact.
MLA: PLG Moment Hunt
Why This Matters
Most PMs can list their product's features. Far fewer can answer one simple question: at exactly what moment does a new user first understand that this product is valuable to them? That moment - the "aha moment" - isn't marketing language or philosophy. It's a concrete, measurable point in the user journey that determines whether someone stays with a product or quietly disappears.
Research from OpenView shows that best-in-class PLG companies achieve ~33% activation rates - and the gap between them and the rest of the market comes down to awareness and optimization of this one moment. Facebook discovered that users who added 7 friends within their first 10 days were far more likely to become long-term active users. Slack built its viral growth around the moment a team sends 2,000 messages. Your product has a moment like this too - it's just that nobody has named it or measured it yet.
That's exactly the problem this MLA addresses. It's not about rewriting your entire onboarding or running a months-long data study. It's about spending one week - with zero budget - forming a hypothesis, checking it against available data, and sharing what you find with your team. A small action that can fundamentally shift how you see your product.
How to Execute
1. Form Your Aha Moment Hypothesis
Start with one simple question: what is the first moment when a new user feels that your product is genuinely working for them? Don't reach for a general answer - look for a specific action.
A few examples for inspiration:
Spotify: first time playing a mood-matched playlist
Notion: creating a first database linked to another page
Calendly: the first visit to your link by an external person
Write your hypothesis in one sentence: "Our users reach the aha moment when they [specific action] within [timeframe]."
2. Check Whether the Hypothesis Has Data Behind It
Open your analytics tool - Mixpanel, Amplitude, GA4, whatever you have. You don't need advanced analysis. You're looking for an answer to one question: how many new users actually reach the action you identified?
If you have access to retention data, check whether users who completed that action stay longer than those who didn't. That's the simplest correlation test available to you (a rough sketch of this comparison follows below).
If you don't have the data - that's also a valuable finding. Write it down.
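The comparison in step 2 can be as small as the following sketch. It assumes a flat per-user CSV export with made-up column names (signup_date, did_aha_action and retained_30d as booleans); whatever Mixpanel, Amplitude, or GA4 actually gives you will need reshaping into this form first:

```python
import pandas as pd

# Hypothetical per-user export: signup_date, did_aha_action (bool),
# retained_30d (bool). All column names are made up for this sketch.
users = pd.read_csv("new_users.csv", parse_dates=["signup_date"])

did = users[users["did_aha_action"]]
did_not = users[~users["did_aha_action"]]

print(f"Reached the aha action: {len(did) / len(users):.1%} of new users")
print(f"30-day retention if they did:    {did['retained_30d'].mean():.1%}")
print(f"30-day retention if they did not: {did_not['retained_30d'].mean():.1%}")
# A wide gap between the last two numbers supports the hypothesis - as
# correlation, not causation. Treat it as evidence for the conversation.
```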
3. Talk to One User
Call or message one person who actively uses your product. Ask them two questions:
"When did you first feel that this product was actually helping you?" "What were you doing in the product at that moment?"
Don't suggest answers. Just listen. Users often point to a different moment than the one you assumed - and that's precisely where the insight lives.
4. Measure How Many Users Actually Get There
Go back to the data and calculate: what percentage of new users (from the last 30 days) completed the action you identified as the aha moment? The benchmark for top PLG companies is ~33% activation (a minimal calculation sketch follows below).
If your number is well below 33%, you have a clear area to improve in onboarding. If it's above that, consider looking for a deeper moment that correlates with long-term retention rather than just initial activation.
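For the calculation itself, a raw event export is enough. A minimal sketch, assuming one row per event with hypothetical user_id, event, and timestamp columns - the event names are placeholders for your own signup and aha-moment events:

```python
import pandas as pd

# Hypothetical raw event export: one row per event. Column names and event
# names ("signed_up", "created_linked_database") are placeholders.
events = pd.read_csv("events.csv", parse_dates=["timestamp"])
cutoff = pd.Timestamp.now() - pd.Timedelta(days=30)

# New users: everyone whose signup event falls in the last 30 days.
signups = events[(events["event"] == "signed_up") & (events["timestamp"] >= cutoff)]
new_users = set(signups["user_id"])

# Of those, who performed the candidate aha action?
aha_users = set(events.loc[events["event"] == "created_linked_database", "user_id"])
activated = new_users & aha_users

rate = len(activated) / len(new_users)
print(f"Activation rate (last 30 days): {rate:.1%} vs the ~33% PLG benchmark")
```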
5. Prepare a 2-Minute Summary for Your Team
You don't need a presentation. A standup sentence or a Slack message is enough:
"Our aha moment is probably [action]. [X]% of new users reach it. One user conversation suggested [insight]. Does anyone see this differently?"
The goal isn't to announce a discovery - it's to start a conversation.
6. Write It Down and Propose One Next Step
At the end of the week, document the following in one place (Confluence, Notion, Google Doc - anything):
Your aha moment hypothesis
The data that supports or challenges it
One experiment worth running (e.g., "simplify step X in onboarding")
You don't have to implement it. It just needs to exist - it gives you a starting point for future decisions and makes the invisible visible for the whole team.
Expected Benefits
Immediate Wins
A concrete aha moment hypothesis - something most product teams have never formally defined
A number showing what percentage of users actually activate in your product
One insight from a user conversation you can share with your team this week
Relationship & Cultural Improvements
A shared language across the team around activation and user value
A natural invitation for PM, engineering, and customer success to discuss what "actually works" in the product
A shift in perspective from "how many features do we have" to "how many users experience value"
Long-Term Competency Development
A habit of thinking in PLG terms: activation → retention → growth
A foundation for more advanced cohort analysis and onboarding experimentation
A deeper understanding of how behavioral data connects to product decisions - a core competency in every mature PM team
Share your aha moment with #MLAChallenge! What did you find? Did the data surprise you?
đ Dear UX Designer, The Workflow Changed. Did You? (guest article by Michał Kosecki)
I keep having the same conversation.
A designer (mid-level, smart, genuinely curious) tells me they "know AI is important" but can't quite bring themselves to use it seriously. Not because they're lazy but because every time they sit down to try, the noise is so loud they don't know where to start.
Another guru declaring the end of design. Another counter-guru declaring AI can never replace human creativity. Another Figma release with AI sparkles on features nobody uses. Another newsletter with a framework for "AI-native designers" that turns out to be a list of tools (with the author's own vibe-coded tool promoted between some obvious choices).
Nielsen Norman Group named this formally: 2026 is the year of AI fatigue. The hype ran in both directions and failed in both directions. Catastrophists predicted mass layoffs within six months. Utopians promised 10x productivity "starting today." Both were wrong, and both are still at it, because they have a newsletter to fill.
The reality is less dramatic, which is why it's more dangerous. There's no moment where everyone in the room agrees something changed. There's just slow, steady pressure. A project that used to take three weeks. A first draft that used to take a day. Both now take less - not for you, but for the designer working next to you who decided to engage instead of wait. Easy to ignore. Easy to tell yourself it's not here yet.
That's the moment most designers make the mistake.
What Figma just shipped (and what it means)
Figma recently published an announcement that some designers scrolled past. It describes a new integration: Claude Code to Figma. You build UI in code using Claude Code, and with one action it converts to editable frames on the Figma canvas. From there, back to code via the Figma MCP server - a closed loop.
The reasoning Figma gives for why this exists is worth reading slowly: "Code is powerful for converging - running a build, clicking a path, and arriving at one state at a time. The canvas is powerful for diverging - laying out the full experience, seeing the branches, and shaping direction collectively." And more is yet to come - just wait for the Config conference.
That sentence just redefined where a designer's value lives.
AI generates fast. Code is linear, single-player, one state at a time. The canvas is where you open that up: compare variants side by side, see the system at once, make decisions visible to the whole team. Figma isn't saying AI replaces designers. It's saying AI handles the converging, and humans are still the ones who need to do the diverging - seeing the system, exploring the branches, deciding which direction is worth pursuing.
The workflow your job lives inside just changed, in the tool you use every day.
If you missed it, that's fine. But you should understand what it means: the price of not engaging with AI just went up. Again. Measurably, specifically, in your primary tool.
What AI actually is (and isn't)
Most of the frustration I see around AI comes from a broken mental model. You're treating it like superintelligence, or like an expensive, stupid automaton. Neither is true, and both mistakes cost you.
AI is a very fast, very confident junior who has read everything ever written about design and understands none of the context you work in.
That's not an insult to AI. Pattern matching at scale is genuinely useful. A hundred variants in minutes. First drafts to react to - because people know what they want far better when they see what they're rejecting, and a blank Figma file is terrifying while something-to-argue-with is priceless. The tedious work: resizing, reformatting, copy variations, documenting decisions you've already made. Hours every week you've been spending with low-grade guilt, knowing it's not where your thinking belongs.
But "pattern matching" and "understanding intent" are not the same thing - and that gap is where you're irreplaceable. The speed that makes AI useful for exploration is exactly the same property that makes it unreliable: it generates without filtering through context. Your job is to separate signal from noise before it reaches the room.
What that looks like in practice: AI reads data, not "why." It doesn't know your user is frustrated not because the button is too small but because they don't understand why they're on this page at all. It doesn't know the CEO promised this feature to a client over dinner and that's why it's a priority against all product logic. It doesn't know technical debt limits your options to two instead of five, and one of those two will ship so late it's not worth building.
It doesn't know the B2B user who fills in this form on a Friday at 4pm on their phone, tired, in a hurry, notifications off. Not the persona version in Notion. The real one. AI doesn't know your persona is a useful fiction for internal alignment. You do.
And AI is wrong with remarkable confidence. Solutions it generates often look coherent and fall apart at first contact with the edge case, the user who doesn't behave like training data, the accessibility constraint that doesn't show up in any benchmark. Someone has to catch that before it reaches a stakeholder. That's you. That will remain your job for longer than most predictions suggest.
The Figma announcement said it more clearly than most: AI converges, humans diverge. Build the habit of knowing which moment you're in.
Four levels, and why you're probably not where you think
Looking at actual design teams in 2026, I see four levels of working with AI. Not as a moral gradient but rather as an effectiveness gradient. A map of where people actually are, not where they think they are.
Level zero: denial. "My craft is the value. I don't need AI." Maybe craft is the value. But your competition isn't just other designers - it's designers with AI. Two people doing the work of three. Level zero isn't a philosophical position. It's career risk that compounds every month, whether you think about it or not.
Level one: dabbling. You use ChatGPT for copy, Nano Banana for inspiration, maybe Claude for "what do you think about this wireframe?" Every use is a special occasion, detached from real work. The gap between "I know it exists" and "I use it daily as part of my workflow" is enormous and very easy to miss - because after each occasional use you can tell yourself you're doing it. Most designers who say they "use AI at work" are here.
Level two: integration. AI is part of the workflow, not a separate project. You know when to use it and when to ignore it. First drafts, exploration, iteration. You trust your judgment over its output - and when it proposes something wrong, you catch it before it reaches the room.
What does Tuesday morning look like here? You open a brief, generate three rough directions in twenty minutes instead of one careful wireframe in two hours. You pick the direction that smells right, tear apart what's wrong with the AI version, and build from there. The decision is yours. The starting point wasn't blank. This is where most designers reading this should aim. Not "AI-native" as an identity. Just a tool you reach for when it helps, as naturally as Figma.
Level three: architecture. You're designing AI-native products - conversational interfaces, agentic systems, generative UI. A small percentage of designers are here now, but that percentage is growing fast, and the Claude Code integration is a direct signal: the boundary between design and AI product is already dissolving.
The AI Design Maturity Model that's been circulating defines analogous levels for whole organizations - Limited, Reactive, Developing, Embedded, Leading. A designer can be at level two or three in a company sitting at Reactive. That's not a frustration to post about on Slack. That's a negotiating position. If you can translate AI fluency into product and design language for an organization that doesn't speak it yet, you're offering value they're almost certainly underpricing. Somebody will eventually notice. Make sure it's you who names it first.
The thing about fluency
Most conversation about AI fluency treats it as addition - a new skill sitting on top of what you already know. Learn to prompt, use the right tools, run the right experiments. Check the box, you're fluent.
That framing is responsible for a lot of designers staying at level one indefinitely.
Real fluency is diagnostic. It's the ability to look at AI output and know immediately what's wrong with it, why it's wrong, and whether the fix requires better context or whether this was a job you should have done yourself. That judgment doesn't come from reading about AI. It comes from using it on real work and paying close attention to where it fails.
The designers I've seen move through this fastest treat every AI interaction as a test - not of the tool, but of themselves. They use it for a first draft, improve it, and then ask why they made those specific changes. "I moved the CTA above the social proof because our users need to see the value before they see the validation." "I rewrote the headline because AI defaults to benefit framing, and our users are more motivated by risk avoidance - completely different message architectures." "I cut the animation because AI adds motion for perceived modernity, but our B2B users open this panel twenty times a day and will hate every 300ms within a week."
Each of those corrections is knowledge embedded in a product, a set of users, and decisions accumulated over years. It can't be copied. AI doesn't have it. You do - and as long as you can articulate it, you're not relevant despite AI, you're specifically valuable because AI exists and someone has to know when it's wrong.
The other side of that coin: the time you recover from offloading tedious work doesn't automatically reinvest itself in higher-order thinking. It requires a choice. Karri Saarinen from Linear said it plainly after Config 2025: "Technology makes it faster to build, but harder to care." AI speeds up execution. But someone still has to slow down and ask whether you're building the right thing at all. That's what the Figma canvas is for. That's what you're for.
If you can't or won't adapt
This path isn't for everyone. If you loved design because you loved making things beautiful, and the idea of focusing on strategy and judgment sounds boring or unfulfilling, I get it. You're allowed to want a craft-focused career.
But you need to know: that career is disappearing in mainstream tech. Not because craft doesn't matter, but because craft-only roles are being absorbed by AI and offshore teams that can execute at higher speed and lower cost.
Where craft-focused roles still exist: brand design (high-touch, luxury, or marketing-focused work where aesthetic differentiation is the product), motion design (AI hasn't caught up yet, but it's coming), physical product design (industrial design, print, environmental - domains where digital execution tools don't apply the same way).
Consider leaving design (I know it's a harsh thing to hear or read): product management (if you have product sense but don't want to execute), UX research (if you love understanding users but not making interfaces), technical writing (if you like clarity and structure), developer relations (if you can bridge design and engineering).
The market is telling you something. You can argue with it, or you can listen and adapt. Arguing doesn't change the outcome.
Before you choose any path, run this reality check. You might not be as good as you think. The market is (mostly) efficient. If you're not getting callbacks after 50+ applications, your portfolio might be the problem. Get brutally honest feedback from a senior designer who's NOT your friend. Pay for a portfolio review if needed. Common issues: projects show execution but not thinking, no evidence of impact or outcomes, visual style is dated, work looks same-y.
You might be applying to the wrong companies. If 100% of your applications go to big traditional companies using 2019 playbooks, you'll waste months. Focus on the 20% of companies that are future-focused: design-led startups, AI-native companies, places with strong design culture and a fast shipping cadence.
You might need to skill up. If you can't confidently say "I use AI in my workflow," "I can prototype in code (even basic)," and "I understand business metrics," do a 30-day sprint. Pick ONE skill. Go deep. Ship something that demonstrates it.
The job might not exist anymore. If you want a traditional IC role (make beautiful screens, hand off to dev, repeat), here's the reality check: adapt or exit the field.
Why level one feels like level two
There are two failure patterns that swallow most designers before they reach level two. They're invisible until you've already driven into them.
The first: treating AI as a bypass rather than a foundation. If you don't understand visual hierarchy, AI generates beautiful garbage and you don't know it's garbage. Juniors who learn design "through AI" bypass building intuition and end up producing mediocrity with great confidence. That's visible in portfolios. It shows up in the first thirty seconds of a review. The fix isn't slower AI use - it's building the foundation that makes you able to judge what AI gives you.
The second is more subtle and hits experienced designers harder: accepting the first output. AI's first proposal is always a starting point. If you treat it as a result, you're not using AI as a tool - you're letting it make design decisions for you. That's the job. That's what you're paid for. And it's exactly what distinguishes level one from level two: not which tools you use, but whether you're the one deciding or the one accepting.
The root of both patterns is the same: someone stopped at the first thing that looked like an answer. The brief was thin, the output was generic, and nobody asked why.
AI won't replace you.
But a designer fluent in AI will replace a designer who isn't. That line has been circulating long enough to feel worn, but Figma just shipped a product that made it concrete. The loop from Claude Code to canvas and back to code - that's not a thought experiment about the future. That's the workflow. Your workflow, now.
Most designers who feel "behind on AI" or "not ready" are actually one project away from level two.
A year from now, the gap will be larger. The cost of entry will be higher. The choice will still be available - but it won't be cheap.
Open a brief you have right now. Generate three rough directions before you open a blank Figma file. See what the model gets wrong. Write down why you'd change it.
That's the first rep.
Agile is Not a Religion - What Neurobiology Tells Us About Why People Defend "Pure Scrum" Like a Dogma
I. Intro - Confronting Dogmatism
"We CAN'T change that because it breaks the rules of Scrum!" exclaimed one of the workshop participants when I suggested experimenting with extending the Daily Scrum to 20 minutes for a 12-person development team. The temperature of the discussion rose by several degrees, and I noticed a familiar gleam in my opponent's eyes - the same one I've seen in fanatical followers of various ideologies. Soon, other "guardians of methodological purity" joined in, quoting fragments of the Scrum Guide like verses from a holy book.
Are these the same people who, in theory, embrace the values of empiricism, adaptation, and continuous improvement? Paradoxically - yes.
In this article, we'll examine a fascinating psychological phenomenon: why people who should theoretically be most open to change (Agile enthusiasts) often become