The Emotional PM: How Your Feelings Shape Team Performance More Than Your Frameworks | Dear UX Designer, your craft just became table stakes
Issue #234
In today's edition, among other things:
The Emotional PM: How Your Feelings Shape Team Performance More Than Your Frameworks (by Alex Dziewulska)
Dear UX Designer, your craft just became table stakes (by Michał Kosecki)
Interesting opportunities to work in product management
Product Bites - small portions of product knowledge
MLA week#37
Join Premium to get access to all content.
It will take you almost an hour to read this issue. Lots of content (or meat)! (For vegans - lots of tofu!).
Grab a notebook and your favorite beverage.
Editor's Note by Alex
The Velocity Delusion
The shrew is back. Stinking tongue at your service.
2025 tried to take me out. Depression does that sometimes. But it's retreating now, back where it belongs, and 2026 brought good energy I wasn't expecting.
I missed this. I missed you. Let's go.
Your fastest teams are learning the slowest.
There. I said it.
I sit with product teams who deploy twelve times a day. Impressive pipelines. Beautiful dashboards. I ask one question: why did your last three features succeed or fail?
Silence.
They can tell me deploy counts. Cycle times. Lead time to production. They cannot tell me if any of it mattered. If shipping is learning, more shipping should equal more learning.
That math doesn't work.
The velocity gospel has never been more entrenched. DORA metrics everywhere. Accelerate principles tattooed on engineering managers. LinkedIn flooded with hundred-deploys-a-week humble-brags. We measure teams in cycle time like it means something.
Pendo's research found 80% of features in the average software product are rarely or never used. Eighty percent. We're shipping faster than ever. We're understanding less than ever.
Let me tell you how this ends. I've watched it for twenty years.
High velocity doesn't create learning. It destroys the conditions learning requires.
We got here through a misreading of Toyota. The DevOps movement borrowed lean manufacturing: smaller batches, faster feedback, continuous improvement. Shorter cycles should mean faster learning. Airtight logic.
Except manufacturing has something software doesn't: immediate, unambiguous feedback. When a car door doesn't fit, you know instantly. When a feature ships, you might not know if it worked for months. Or ever. Nobody's measuring.
Forsgren, Humble, and Kim showed high-performing teams tend to have higher deployment frequency. Correlation. Not causation. The 2023 DORA team explicitly warned against using these metrics for team comparisons.
We took nuanced research and turned it into a cargo cult. Stop worshipping the freaking deploy counter and look at what you're actually building.
Here's what's happening inside high-velocity environments.
Sophie Leroy's research on attention residue is brutal. When you switch tasks, part of your cognitive capacity stays stuck on the previous work. The more rushed, the stronger the residue. High-velocity environments create perpetual attention residue. You're never fully present. You're always partially somewhere else.
Bar-Eli studied 286 penalty kicks. Goalkeepers jump left or right nearly every time, even though staying center would statistically improve their odds. Why? Missing while standing still feels worse than missing while diving.
We'd rather be wrong in motion than wrong while waiting. Organizations amplify this through incentives. You get promoted for shipping. Nobody ever got fired for deploying fast. Nobody ever got promoted for killing a bad idea before it wasted six months of engineering time.
But the real damage is what velocity crowds out.
Francesca Gino and Bradley Staats ran a study at Wipro's call center. One group spent 15 minutes at the end of each day reflecting on what they'd learned. The other group kept working.
The reflection group performed 20% better.
Fifteen minutes. That's it. That's what it costs to actually learn from your work. But high-velocity teams can't spare fifteen minutes. There's always another deployment waiting.
This isn't magic. It's mechanics.
Anders Ericsson spent his career studying expertise. Deliberate practice requires four things: a well-defined goal, motivation to improve, immediate feedback, and opportunities for repetition with refinement.
Notice what's not on that list. Speed.
Ericsson found that practice without feedback structure produces nothing. Golfers who play for thirty years without structured feedback don't improve. They repeat their mistakes faster.
That's what most high-velocity teams are doing. Not iterating. Recurring.
I want to be honest about the strongest case for velocity. In uncertain environments, you learn by doing. Shipping gets work to customers faster. Smaller batches reduce risk.
All true.
The fatal flaw is assuming shipping equals learning. It doesn't. Learning requires time to observe. Space to analyze. Capacity to change behavior based on what you found.
Most high-velocity teams ship into a void. No instrumentation. No outcome reviews. No time budgeted for analysis. They deploy constantly and learn nothing, then celebrate their cycle time metrics while competitors quietly figure out what customers actually need.
I've watched this pattern across organizations, continents, seniority levels. The teams that build the right things look different. They ship less. They learn more per shipment. They ask "what would tell us this worked?" before asking "when can we ship it?"
I'm not going to give you a tidy prescription. That would be dishonest given how deep the velocity cult runs.
But I refuse to pretend speed and learning are the same thing. They're not.
The teams that will dominate the next decade won't be the fastest. They'll be the ones who figured out that learning requires something velocity keeps stealing.
That's not a productivity problem. That's architecture.
The race doesn't go to the swift if the swift are running in circles.
Help Shape PRODUCT PRO SUMMIT 2026
Product Pro Summit organizers are asking for your input, and we're passing that invitation to our community. They want to design sessions, workshops, and topics that deliver real value, not just another conference with forgettable framework talks.
Here's where you come in: What product management topics actually keep you up at night? What skills do you wish you'd developed three years ago? What conversations would make traveling to a conference genuinely worth it?
The organizers are asking now because they'd rather design something practitioners need than promote something they think looks good. Share your thoughts: the problems you're facing, the gaps in current conferences, the workshops that would actually move your practice forward.
Tell them what matters to you, and they'll make magic happen at the summit.
This isn't crowdsourcing for the sake of engagement. This is conference organizers acknowledging that the best content comes from understanding what the community actually needs, not what looks impressive on an agenda.
Share your ideas: Link
The Summit happens in 2026. The conversation starts now.
PRODUCT HIVE 2026 - The Anti-Conference Where You Build the Agenda
Location: Warsaw, ADN Conference Center
Date: March 18-19, 2026
Website: https://producthive.pl/
Here's what makes Product Hive different from the conference circuit where you sit through pre-packaged talks and pretend to take notes while checking Slack:
Day 1 - LEARN: Keynotes from experts on topics that actually matter: AI in product thinking, designing your operating model, navigating organizational chaos, balancing workload and value delivery. You listen, take notes, and prepare your own submissions for Day 2.
Day 2 - SHARE: You and other practitioners build the agenda. Barcamp-style sessions where participants and experts collaborate to schedule the most relevant conversations. No fixed agenda imposed from above. You vote with your feet: if a session isn't valuable, you leave and find one that is.
This format acknowledges something most conferences ignore: the best insights often come from practitioners solving real problems, not just experts delivering polished talks. Product Hive creates space for both.
Topics include:
AI-supported product thinking (elevating product research)
Designing your own operating model (prioritization and productivity for product leaders)
The optimized product manager (balancing workload, priorities, and value)
Navigating organizational change
Integrating AI in value-driven development
Target audience: Senior PMs, IT leaders influencing product processes, analysts supporting product development, founders and startup CEOs.
Bonus: Optional full-day workshop with Roman Pichler on Product Strategy (March 17th).
Language: Primarily English, with some Polish sessions during the SHARE day.
Newsletter subscriber perk: 10% off with code PRODUCTART10
Coming soon: We'll be running a competition for 2 tickets with a 50% discount. Stay tuned.
This isn't another conference where attendance feels like an obligation your employer imposed. It's designed as actual development space: collaborative, engaging, and built around what practitioners need, not what looks good on a promotional deck.
If you're tired of conferences optimized for speaker LinkedIn content rather than attendee learning, this format might be worth your time.
Tickets and details: https://producthive.pl/
Alex Dziewulska: I will be there with Katarzyna Dahlke and Leadership Lab. Join me to design your product leadership.
Product job ads from last week
Do you need support with recruitment, career change, or building your career? Schedule a free coffee chat to talk things over :)
Product Manager - Global Payments
Product Manager - Mastercard
Product Manager - Sygnity
Product Manager - Luxoft
Product Manager - Allegro
Product Bites (3 bites)
The Cobra Effect: When Your Solution Breeds the Problem
Why well-intentioned product fixes often amplify the very issues they're designed to solve
We've all been there. A metric is tanking, leadership is concerned, and the team rallies to implement a fix. Incentives are realigned, processes are redesigned, and everyone celebrates the clever solution. Six months later, the problem is somehow worse than before, and a new set of problems has emerged. The team looks around, bewildered: How did trying to fix this make it worse?
This is the Cobra Effect in action, and it haunts product teams more often than we'd like to admit.
What Is the Cobra Effect?
The Cobra Effect describes a phenomenon where an intervention intended to solve a problem inadvertently makes the problem worse, typically because the incentive structure encourages behaviors that amplify rather than reduce the issue.
German economist Horst Siebert coined the term in his 2001 book on economic policy, drawing from an apocryphal story from British colonial India. According to the tale, British authorities in Delhi, alarmed by venomous cobras, offered a bounty for every dead snake. Initially successful, the program soon backfired: enterprising locals began breeding cobras specifically to collect the bounty. When officials discovered the scheme and cancelled the program, breeders released their now-worthless snakes into the wild, leaving Delhi with more cobras than before.
Whether historically accurate or not, the parable captures a universal truth about incentive design: people respond to the incentives you create, not the outcomes you intend.
Breaking Down the Cobra Effect
The Cobra Effect manifests through several interrelated mechanisms that product teams should recognize:
The Measure-Target Collapse
When we turn a metric into a target, people optimize for the metric rather than the outcome it's meant to represent. British economist Charles Goodhart captured this elegantly: "When a measure becomes a target, it ceases to be a good measure." We want fewer support tickets, so we incentivize ticket closures, and suddenly tickets are being closed without resolution, pushed to other queues, or discouraged from being filed at all.
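To make the measure-target collapse concrete, here is a toy sketch with entirely invented numbers: the proxy metric (tickets closed) improves every week, while the outcome it was supposed to stand for (tickets actually resolved) deteriorates.

```python
# Toy illustration of Goodhart's law with invented support-ticket data.
# The proxy (tickets closed per week) climbs while the outcome the metric
# was meant to represent (share of tickets genuinely resolved) falls.

weeks = [
    # (tickets closed, of which actually resolved)
    (40, 36),  # before closures became a KPI
    (55, 38),  # target announced: closures rise...
    (70, 35),  # ...but resolutions barely move
    (90, 30),  # agents close-and-reopen to hit the number
]

for closed, resolved in weeks:
    print(f"closed={closed:3d}  resolution_rate={resolved / closed:.0%}")
```

On a closures-only dashboard this team looks better every week; the resolution rate tells the opposite story, which is exactly why the proxy alone is a bad target.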
Rational Actors, Irrational Systems
The cobra breeders weren't villains; they were rational actors responding logically to the incentive structure presented to them. In product organizations, employees rarely game the system out of malice. They're simply doing what the system rewards. When sprint velocity becomes a KPI, story point inflation becomes inevitable. When review speed is measured, thoroughness suffers. The system creates the behavior.
Second-Order Blindness
We're remarkably good at predicting first-order effects ("if we reward X, we'll get more X") and remarkably poor at predicting second-order effects ("but people will figure out how to get X without actually doing Y"). This blindness is exacerbated under deadline pressure, when we need solutions fast and don't have time to think through unintended consequences.
The Feedback Loop Delay
Cobra Effects often take time to materialize. The bounty program initially worked: dead cobras piled up, officials congratulated themselves. Only later did the breeding farms emerge. In product development, a misaligned incentive might boost short-term metrics while slowly eroding the foundation it stands on. By the time we notice, the damage is entrenched.
The Cobra Effect in Action
Wells Fargo's Account Scandal: In one of the most striking examples of the Cobra Effect in modern business, Wells Fargo's aggressive cross-selling targets created precisely the problem they were meant to address. Management wanted deeper customer relationships, so they set quotas for new accounts per customer, famously captured in the slogan "eight is great." Employees, facing intense pressure and potential termination for missing targets, began opening accounts without customer authorization. By 2016, regulators discovered that staff had created approximately 3.5 million unauthorized accounts. The bank paid over $3 billion in fines and settlements, fired 5,300 employees, and suffered incalculable reputational damage. The incentive meant to deepen customer relationships had destroyed customer trust entirely.
The Hanoi Rat Massacre: The French colonial government in Vietnam replicated the cobra mistake almost exactly. Facing a rat infestation in Hanoi, officials offered a bounty for rat tails, reasoning that tails would prove the rats were killed. Soon, officials noticed rats running through the streets without tails: hunters were catching rats, cutting off their tails for the bounty, and releasing them to breed more rats. Rat farms emerged on the outskirts of the city. The program was quietly cancelled, but the rat population had grown.
Bug Bounty Backfires: Software teams have encountered their own cobra effects with internal bug-tracking incentives. When teams reward finding bugs without equally rewarding preventing them, some engineers learn to leave vulnerabilities in code they can "discover" later. When QA bonuses are tied to bugs found, the incentive shifts from quality assurance to fault-finding, and potentially from collaboration to competition with developers.
Why This Matters
The Cobra Effect matters because product teams are constantly designing incentive systems: for users, for employees, for partners. Every gamification element, every KPI, every performance review structure is an incentive system in disguise. And every one of them can backfire.
Research from Forrester suggests that misaligned incentives contribute to up to 70% of project failures related to user adoption. The problem isn't that we lack good intentions; it's that we underestimate the creativity humans bring to optimizing for whatever target we set. People are remarkably ingenious at finding the shortest path to the reward, even when that path undermines the reward's purpose.
The danger is particularly acute when stakes are high and measurement is easy. "What gets measured gets managed" sounds like wisdom until we realize it also means "what gets measured gets manipulated." The more we tie consequences to specific metrics, the more energy flows toward gaming those metrics rather than achieving actual outcomes.
Putting It Into Practice
Pre-mortems, Not Post-mortems: Before launching any incentive structure, run a pre-mortem. Gather the team and ask: "It's six months from now and this system has backfired spectacularly. What happened?" Force people to imagine gaming strategies, loopholes, and unintended consequences. You won't catch everything, but you'll catch the obvious ones, which are often the ones that cause the most damage.
Measure the System, Not Just the Target: If you're incentivizing ticket closures, also measure reopened tickets, customer satisfaction post-close, and escalation rates. If you're rewarding feature output, also track feature usage and removal rates. Triangulating multiple metrics makes gaming harder and surfaces manipulation faster.
Design for the Rational Gamer: Assume that some percentage of people will respond to exactly what you measure, not what you mean. Design your incentives as if you're creating rules for a game where players are trying to win, because you are, and they will.
Build in Feedback Loops: Create mechanisms to detect when solutions are backfiring, and commit in advance to changing course. Many cobra effects persist not because they're invisible but because admitting failure is politically costly. Establish review points and define in advance what "this isn't working" would look like.
Question Simple Solutions to Complex Problems: The Cobra Effect thrives on oversimplification. When a multifaceted problem is reduced to a single metric, gaming is almost guaranteed. Complex problems require complex, or at least multi-dimensional, solutions.
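The "measure the system" and "feedback loop" practices above can be sketched in a few lines: pair the target metric with counter-metrics and pre-agreed thresholds, so gaming surfaces as a warning instead of a success story. The metric names and thresholds here are invented for illustration, not a recommended standard.

```python
# Sketch: a target metric paired with counter-metrics that expose gaming.
# All metric names and thresholds are invented for illustration.

def system_health(metrics: dict) -> list:
    """Warn when the target looks good but its counter-metrics degrade."""
    warnings = []
    if metrics["tickets_closed_per_week"] >= 60:   # target is being hit...
        if metrics["reopen_rate"] > 0.15:          # ...but closures bounce back
            warnings.append("high reopen rate: closures may be gamed")
        if metrics["csat_post_close"] < 3.5:       # ...but customers are unhappy
            warnings.append("low post-close CSAT: resolution quality suspect")
    return warnings

# A week that would look great on a closures-only dashboard:
print(system_health({
    "tickets_closed_per_week": 90,
    "reopen_rate": 0.28,
    "csat_post_close": 3.1,
}))
```

The design choice is deliberate: the counter-metric checks fire only when the target is being hit, because that is exactly when a single-number dashboard stops asking questions.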
The Bigger Picture
The deeper lesson of the Cobra Effect isn't just about incentive design; it's about humility. Systems fight back. People are creative. The future resists our attempts to control it.
In product development, we're often under pressure to show quick wins, to demonstrate that our interventions are working. This pressure pushes us toward simple, measurable solutions that can backfire precisely because they're simple and measurable. The most robust solutions are often the ones that resist easy quantification.
The cobra breeders weren't the problem. The problem was a system designed without understanding how people would respond to it. Every time we create an incentive, for users, employees, or ourselves, we're running the same experiment. The question isn't whether people will optimize for our metrics. They will. The question is whether optimizing for our metrics actually achieves what we intended.
Before you implement your next clever solution, pause and ask: Am I designing a bounty program, or am I breeding cobras?
The Focusing Illusion: Why Users Lie About What Matters
How the psychology of attention systematically distorts feature requests and user research
The interview went perfectly. Users were enthusiastic, nodding along as we described the feature. "Yes, absolutely," they said. "That would be a game-changer." We built it. We shipped it. And then... nothing. The feature sits untouched in our analytics dashboard, a monument to confident misunderstanding.
Sound familiar? We've all experienced the painful gap between what users say they want and what they actually use. What if this gap isn't random noise, but a predictable psychological pattern we can learn to navigate?
What Is the Focusing Illusion?
The Focusing Illusion is a cognitive bias identified by Nobel laureate Daniel Kahneman and described with devastating simplicity: "Nothing in life is as important as you think it is while you are thinking about it."
When we focus our attention on any factor, whether a feature, a problem, or a purchase, that factor temporarily expands in importance. The very act of thinking about something inflates our perception of how much it matters. This isn't deception; it's how human cognition works. Our brains aren't equipped to simultaneously weigh all the factors that affect our decisions. We consider what's in front of us, and what's in front of us always seems more important than it actually is.
Kahneman illustrated this with a famous study on happiness and geography. When asked whether Californians are happier than Midwesterners, most people, including Californians themselves, say yes. The pleasant weather looms large when we think about it. But when researchers actually measured life satisfaction, residents of both regions reported virtually identical levels of happiness. Why? Because 99% of life, the relationships, work, health, and meaning, is the same everywhere. Weather matters, but not nearly as much as we think it does when we're thinking about it.
For product teams, the implications are profound. Every user interview, every feature request, every prioritization conversation is distorted by this bias.
Breaking Down the Focusing Illusion
The Interview Trap
When we sit down with a user and ask about a specific problem, we're essentially shining a spotlight on that problem. The user's attention focuses on it, and in that moment, the problem genuinely feels critical to them. They're not lying; they're experiencing an attention-inflated version of their reality. Three months later, when we ship the solution, they've long since stopped thinking about it. Other problems have claimed the spotlight.
The Priority Paradox
Users can accurately report their problems. What they can't accurately report is how those problems rank against everything else in their lives. When a customer says "I really need feature X," what they mean is "Feature X seems important right now, in this conversation, while I'm thinking about it." They haven't mentally stacked it against the other 47 things competing for their time and attention. When forced to actually prioritize, by allocating their time, attention, or money, the feature often falls far down the list.
Adaptation Blindness
The Focusing Illusion is amplified by our failure to anticipate adaptation. We imagine how good we'll feel when we get the new feature, the new car, the new job. What we don't imagine is that we'll stop noticing it. Humans adapt to positive changes remarkably quickly, a phenomenon hedonic psychologists call the "hedonic treadmill." The feature that seems transformative in an interview will feel like furniture within weeks of adoption.
Context Collapse
User research typically happens outside the context where the product is actually used. We ask people to reconstruct their past experiences or imagine future ones, but both exercises are subject to focusing effects. When I sit in a conference room discussing my workflow, I'm not actually experiencing my workflow; I'm thinking about selected parts of it, which necessarily exaggerates those parts.
The Focusing Illusion in Action
Microsoft's 70% Rule: Microsoft's extensive research on product usage revealed a startling finding: approximately 70% of features in complex software products are rarely or never used. Many of these features began as user requests that seemed urgent in research settings but proved inconsequential in practice. Users asked for them, validated them, and then ignored them, not because they were poorly implemented, but because the importance users felt during research didn't persist into daily use.
The Build-Measure-Abandon Cycle: A B2B product team conducted thorough customer interviews about a requested integration. Customers were emphatic: this integration would unlock significant value and would definitely be implemented if built. The team prioritized, built, and launched. When they followed up with the same customers, many hadn't implemented the integration and couldn't remember asking for it. Other operational challenges, ones they hadn't mentioned in interviews because no one asked, had consumed their attention.
Spotify's Discovery Problem: Early Spotify research consistently showed users wanted more control over music discovery: more filters, more categories, more customization. When implemented, these features saw limited engagement. What users actually responded to were algorithmically generated playlists like Discover Weekly, something users couldn't have requested because they didn't know to want it. Users focused on articulated problems; the real opportunity was in problems they couldn't name.
Why This Matters
The Focusing Illusion matters because product teams rely heavily on direct user input, and direct user input is systematically biased toward whatever users happen to be thinking about. This creates several failure modes.
We overbuild for stated needs. Features that emerged from enthusiastic interview feedback often get premium development attention, even when behavioral data suggests lukewarm adoption of similar features. The enthusiasm was real, but real enthusiasm triggered by focused attention doesn't predict real behavior triggered by distributed attention.
We underbuild for unstated needs. The problems users don't mention in interviews aren't necessarily less important; they might simply be less top-of-mind. The most transformative product improvements sometimes come from observing what users struggle with rather than asking what they want.
We mistake certainty for priority. When users express strong preferences, we interpret strength as importance. But the Focusing Illusion means that any preference feels strong in the moment of expression. The certainty tells us about the psychology of focus, not the hierarchy of needs.
Putting It Into Practice
Stack Rank Problems, Not Features: When users identify a problem, don't just note it; ask them to describe all their challenges and rank them. This forces the problem out of isolation and into competition with reality. The feature request that seemed critical might rank fifth or sixth when stacked against actual priorities. This technique, sometimes called Customer Problem Stack Ranking, surfaces true priorities that survive beyond the interview context.
Observe Before You Ask: Behavioral observation is less susceptible to focusing effects than self-report. Before asking users what they want, watch what they do. The gap between stated and revealed preferences often contains the most valuable insights.
De-focus Your Research: Instead of drilling into specific problems, start broad. Ask about goals, workflows, and frustrations without priming particular solutions. Let users' attention wander to what actually matters, rather than directing it where you've already decided to look.
Test Commitment, Not Agreement: Agreement is cheap; commitment is expensive. When users say they want a feature, probe for commitment signals: Would they pay for it? Would they switch products for it? Would they invest time learning it? Hypothetical agreement means little. Demonstrated willingness to sacrifice means everything.
Apply a Temporal Discount: Treat research enthusiasm the way you'd treat any inflated number: discount it. If five out of ten users express strong interest, assume two or three actually have a persistent need. This isn't cynicism; it's calibrating for a known bias.
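The "test commitment" and "temporal discount" practices above can be combined into a back-of-the-envelope calibration. The weights below are invented placeholders, not empirical values; in practice you would calibrate them against your own history of stated interest versus actual adoption.

```python
# Back-of-the-envelope discounting of interview enthusiasm.
# Signals that cost the user more to send are discounted less.
# Weights are illustrative placeholders, not empirical values.

COMMITMENT_WEIGHTS = {
    "said_interested": 0.3,   # cheap agreement in an interview
    "joined_beta": 0.6,       # invested time
    "prepaid": 0.9,           # invested money
}

def expected_adopters(signals: dict) -> float:
    """Estimate persistent demand by discounting each signal by its cost."""
    return sum(COMMITMENT_WEIGHTS[kind] * count for kind, count in signals.items())

# 10 interviewees said yes, 3 joined a beta, 1 prepaid:
estimate = expected_adopters({"said_interested": 10, "joined_beta": 3, "prepaid": 1})
print(round(estimate, 1))  # far fewer than the 14 raw "yes" signals
```

The point is not the specific numbers but the shape: cheap agreement gets the steepest discount, demonstrated sacrifice the smallest.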
The Bigger Picture
The Focusing Illusion reveals an uncomfortable truth about user research: we cannot trust what users tell us, at least not at face value. This isn't because users are dishonest; it's because the act of asking distorts the answer. The interview room is a magnifying glass, and everything under it looks larger than life.
This doesn't mean user research is useless. It means user research is one input among many, subject to known biases that we can partially correct for. The teams that build products users actually love are the ones who triangulate, combining stated preferences with behavioral data, observation, and experimentation.
Perhaps the most valuable lesson of the Focusing Illusion is epistemological humility. We're not as good at knowing what we want as we think we are. Neither are our users. The goal isn't to find the perfect research method that bypasses human bias; it doesn't exist. The goal is to be appropriately skeptical of any single signal, including the loud and confident ones.
When a user tells you something is essential, hear it as "this seems essential right now, in this moment, while I'm focused on it." That's still valuable information. It's just not the same as "this will still seem essential in three months when I'm focused on something else entirely."
The MAYA Principle: Most Advanced Yet Acceptable
Why innovation must walk the narrow corridor between boring and terrifying
We've all seen it happen. A startup builds something genuinely revolutionary, technologically brilliant and years ahead of its time, and the market shrugs. Meanwhile, a competitor launches something less impressive but more approachable, and users flock to it. The better technology lost. It happens with unsettling regularity.
The common explanation is "bad timing" or "poor marketing." But what if the real explanation is more fundamental? What if there's a predictable zone of acceptance that every successful innovation must navigate, and the brilliant products that fail are the ones that overshoot it?
What Is the MAYA Principle?
MAYA (Most Advanced Yet Acceptable) is a design philosophy developed by Raymond Loewy, often called the father of industrial design. Working from the 1930s through the 1970s, Loewy created some of the most iconic designs of the 20th century: the Coca-Cola bottle, the Shell Oil logo, the Greyhound bus, the S1 locomotive, the interiors of NASA spacecraft, and Air Force One.
Loewy's prolific success across radically different domains wasn't luck; it emerged from a deep understanding of how people respond to novelty. He observed that humans are pulled by two opposing forces: neophilia, the attraction to new things, and neophobia, the fear of anything too new. Successful design, he argued, must satisfy both impulses simultaneously.
As Loewy put it: "The adult public's taste is not necessarily ready to accept the logical solutions to their requirements if the solution implies too vast a departure from what they have been conditioned into accepting as the norm."
In other words: being right isn't enough. Being right in a way people can accept is the challenge.
Breaking Down the MAYA Principle
The Zone of Acceptance
Imagine a spectrum running from completely familiar to completely novel. At the familiar end, products blend into the background; users ignore them because they offer nothing new. At the novel end, products trigger resistance; users reject them because they require too much cognitive or behavioral change. MAYA occupies the sweet spot between these extremes: advanced enough to capture attention, familiar enough to feel safe.
Gradual Evolution, Not Revolution
Loewy advocated for designing for the future but delivering the future gradually. Rather than introducing radical change all at once, successful products move users incrementally toward new paradigms. Each iteration pushes slightly past current comfort zones while maintaining enough continuity with the previous version to feel recognizable. Users don't adapt to the future in one leap; they adapt through a series of small steps.
The Familiarity-Novelty Balance
Derek Thompson, author of "Hit Makers," synthesized Loewy's insight this way: "To sell something familiar, you must make it surprising. To sell something surprising, you must make it familiar." This dual imperative explains why breakthrough products often succeed not by being the most innovative, but by being innovative in a package users already understand.
The Moving Target
What counts as "acceptable" isn't fixed; it shifts over time as users adapt. Yesterday's radical is today's familiar. This means successful product teams don't just find the MAYA zone once; they continuously recalibrate as user expectations evolve. The iPhone of 2007 would feel primitive today, but it was precisely calibrated to what users could accept in 2007.
The MAYA Principle in Action
Apple's iPod-to-iPhone Pipeline: Apple didn't invent the smartphone - predecessors like Palm Pilots and Windows Mobile devices existed for years. But those devices violated MAYA: they were advanced but not acceptable, requiring users to adopt entirely new mental models. Apple's approach was different. The iPod, launched in 2001, gradually evolved from a device with physical buttons to a touch-sensitive scroll wheel and eventually a full touchscreen. By the time the iPhone launched in 2007, users had been gently guided toward touch-based interaction for six years. The iPhone itself was introduced not as a handheld computer but as "a phone" - a familiar category that made the radical leap acceptable. Each subsequent iPhone iteration changed incrementally, never shocking users with too much novelty at once.
Google Glass's MAYA Failure: Google Glass represented the opposite trajectory. When it launched to the public in 2014, the technology was genuinely advanced - a head-mounted display with a voice-activated interface. But it was nowhere near acceptable. The form factor was socially awkward, earning wearers the nickname "Glassholes." The interaction model was foreign - there was no familiar reference point for how to use it. Glass was too far ahead of what users could comfortably adopt, and despite significant hype, it failed to achieve mainstream acceptance. Nearly a decade later, consumer AR wearables still haven't reached mass adoption - suggesting Glass wasn't just ahead of its time, but outside the corridor of acceptable change entirely.
Spotify's Invisible Innovation: Spotify exemplifies MAYA in software. Under the hood, Spotify uses extraordinarily sophisticated machine learning to generate personalized recommendations. But users never interact with the complexity directly. Instead, they see playlists, album covers, and shuffle buttons - interface elements borrowed from decades of physical and digital music consumption. The innovation happens backstage; the user experience remains anchored in the familiar. Users get cutting-edge recommendation engines wrapped in metaphors they already understand.
Tesla's Familiar Revolution: Electric vehicles represent a fundamental shift in automotive technology. Tesla's approach to MAYA was instructive: despite the revolutionary drivetrain, Teslas look like cars, are purchased like cars, and are driven like cars. The innovation is masked by a conventional form factor and interaction model. Compare this to earlier EV experiments that looked alien and signaled "different" at every touchpoint - and struggled to gain adoption despite environmental enthusiasm.
Why This Matters
MAYA matters because product teams often confuse technical superiority with market success. The most advanced solution is not necessarily the most successful solution. Adoption depends not just on what a product does, but on whether users can bridge the gap between their current mental models and what the product requires.
This creates a strategic paradox. We want to build products that are ahead of the market - that's where competitive advantage lives. But being too far ahead means building products nobody will use. The gap between "innovative" and "acceptable" is where promising products go to die.
For product managers, MAYA provides a framework for evaluating feature decisions and product direction. It's not enough to ask "Is this better?" We must also ask "Is this different in ways users can absorb?" The answers aren't always the same.
Putting It Into Practice
Map Your Novelty Budget: Every product has a limited budget for novelty - the total amount of new behavior you can ask users to adopt before triggering rejection. Spend it wisely. If your core value proposition requires significant behavioral change, minimize novelty everywhere else. Radical functionality wrapped in conventional UI often outperforms moderate functionality wrapped in radical UI.
Find Your Familiarity Anchors: Identify the mental models, metaphors, and interaction patterns your users already understand. Root your innovation in these anchors. The iPhone was a "phone." Slack was "email, but better." The most successful new products often describe themselves in terms of old products, then gradually reveal their true nature.
Stage Your Innovations: Rather than shipping all your advances at once, consider sequencing them over releases. Each release pushes slightly past the current acceptable threshold, moving users incrementally toward your vision. Apple removed the iPod's buttons over six years, not in one release.
Prototype for Unfamiliarity: When testing new concepts, watch specifically for confusion, hesitation, and workarounds. These signals indicate you've crossed out of the acceptable zone. The goal isn't to eliminate novelty; it's to calibrate novelty to what users can process.
Differentiate by Context: MAYA tolerance varies by user segment, product category, and cultural context. Early adopters accept more novelty than mainstream users. Consumer products typically require more familiarity than enterprise tools. What's acceptable in Tokyo may not be acceptable in Tulsa. Know your audience's specific threshold.
The Bigger Picture
The MAYA principle is fundamentally about empathy: understanding that users live in their present, not in our imagined future. We can see where the technology should go. They're seeing where their habits already are. The bridge between these perspectives is the work of product design.
There's a temptation to believe that truly great products are so obviously superior that users will adapt to them regardless of unfamiliarity. History suggests otherwise. The technology graveyard is full of better solutions that users couldn't accept. Betamax was arguably superior to VHS. The Segway was revolutionary transportation technology. Google Wave was a more powerful collaboration tool than what replaced it. Being better is only half the battle.
Loewy's insight, decades before cognitive science had the vocabulary for it, was that the human capacity for change is limited and must be respected. We don't experience products in isolation; we experience them against the backdrop of everything we already know and do. Innovation that ignores this context isn't innovation; it's wish fulfillment.
The goal isn't to dampen ambition. It's to sequence ambition strategically. The most transformative products don't ask users to change everything at once. They guide users, step by step, from familiar ground into the future. Each step is Most Advanced Yet Acceptable. And through this patient progression, what was once radical becomes the new normal - the foundation for the next step forward.
The future doesn't arrive all at once. It's adopted one acceptable increment at a time.
MLA Week #37
The Minimum Lovable Action (MLA) is a tiny, actionable step you can take this week to move your product team forward - no overhauls, no waiting for perfect conditions. Fix a bug, tweak a survey, or act on one piece of feedback.
Why does it matter? Culture isn't built overnight; it's the sum of consistent, small actions. MLA creates momentum - one small win at a time - and turns those wins into lasting change. Small actions, big impact.
MLA: Decision Diary
Why This Matters:
Product teams make dozens of decisions every week - which feature to prioritize, which user segment to target, whether to refactor or ship, how to balance quality with speed. But these decisions often happen behind closed doors, leaving other teams wondering "why did they choose that?" or, worse, making their own assumptions about the reasoning. When decision-making is opaque, trust erodes, misalignment grows, and the organization loses the opportunity to learn from its own choices. By documenting one significant product decision transparently - the context, the options you considered, the trade-offs you weighed, and why you ultimately chose what you did - you create a learning artifact that benefits everyone. This practice transforms decisions from mysterious black boxes into shared organizational knowledge, building both trust and collective product thinking.
How to Execute:
1. Choose the Right Decision to Document:
Select a decision that meets these criteria:
Significant enough to matter: Not trivial ("which color for the button") but not so strategic it's confidential
Recently made: Ideally from this week or last week, while the reasoning is fresh
Interesting to others: Other teams would benefit from understanding your thinking
You're confident enough to share: You believe it was the right call, even if time will tell
Good candidates:
Prioritizing Feature A over Feature B for the next sprint
Choosing to delay a release to fix quality issues vs. shipping on time
Deciding to serve User Segment X before User Segment Y
Selecting a technical approach (build vs. buy, microservices vs. monolith)
Choosing to sunset a feature or product
Deciding how to respond to competitive pressure
Determining what metrics to optimize for in an experiment
Avoid:
Personnel decisions or anything HR-related
Decisions with legal or competitive sensitivity
Decisions still under debate or not finalized
Purely tactical execution details with no broader learning
2. Select the Right Format and Channel:
Choose where to share based on your organizationâs culture:
Options:
Dedicated decision log: Create a shared document or wiki page titled "Product Decision Diary" where you add entries
Public Slack/Teams channel: Post in a product or company-wide channel
Email to stakeholders: Send to cross-functional partners who care about product direction
Team meeting share-out: Present briefly at an all-hands or product review
Notion/Confluence page: Add to existing product documentation
Key principle: Make it accessible to people outside your immediate team. The point is transparency, not just team record-keeping.
3. Structure Your Decision Documentation:
Use this template to ensure completeness:
DECISION DIARY ENTRY
Date: [When the decision was made]
Decision: [One clear sentence stating what you decided] Example: "We decided to prioritize the mobile app redesign over adding new integrations for Q1."
Context: [What circumstances led to this decision?]
What problem were you trying to solve?
What constraints were you operating under? (time, resources, strategic goals)
What external factors influenced this? (market, user feedback, business pressure)
Options Considered: [What alternatives did you evaluate?] List 2-4 options you seriously considered:
Option A: [Brief description] - Pros/Cons
Option B: [Brief description] - Pros/Cons
Option C: [Brief description] - Pros/Cons
Trade-offs Weighed: [What did you have to give up or accept?]
What are the downsides of your chosen path?
What are you explicitly NOT doing as a result?
What risks are you taking on?
Why We Chose This: [Your reasoning]
What factors tipped the scales?
What values or principles guided you? (user value, speed to market, technical debt reduction, etc.)
What data or insights informed the decision?
What assumptions are you making?
Success Criteria: [How will you know if this was the right call?]
What metrics or outcomes will you track?
What timeframe for evaluation?
Questions or Doubts: [What are you uncertain about?] (Optional but powerful)
What could prove this wrong?
What would make you reconsider?
4. Write with Clarity and Honesty:
Be specific, not vague:
✗ "We decided to focus on improving user experience"
✓ "We decided to redesign the onboarding flow to reduce drop-off from 60% to 40% before adding new features"
Be honest about trade-offs:
✗ "This is the best approach"
✓ "This approach prioritizes speed over perfection - we're accepting some technical debt to validate demand faster"
Be humble about uncertainty:
✗ "We're confident this will succeed"
✓ "We believe this is the right bet based on current data, but we're watching user feedback closely in case we need to pivot"
Use plain language:
Avoid jargon when possible
Explain acronyms or technical terms
Write like youâre explaining to a smart colleague from another department
5. Share and Invite Perspective:
When you publish your decision diary entry:
Frame it as learning, not defending: "I documented our decision to [X] this week. Sharing transparently so others can learn from our thinking - and so you can spot any blind spots we might have missed."
Explicitly invite feedback: "What questions does this raise? What did we miss? Would love perspective from [relevant teams]."
Tag relevant stakeholders: If this decision impacts marketing, finance, customer success, etc., tag them so they see it.
Don't make it formal or heavy: This should feel like a thoughtful memo, not a legal document. A conversational tone is fine.
6. Follow Up and Build the Habit:
Immediate follow-up:
If people ask questions in comments, respond thoughtfully within 24 hours
Thank people who offer perspectives you hadnât considered
If you learn something that changes your thinking, acknowledge it publicly
After 2-4 weeks:
Revisit the decision: How's it going?
Share a brief update: "Update on our decision to [X]: Here's what we've learned so far..."
This shows you take the documentation seriously and reinforces the learning loop
Build the habit:
Start with one decision per week or every two weeks
After 4-6 entries, you'll have a valuable archive others can reference
Encourage other team members to contribute their own decision entries
Consider a monthly review where the team reflects on documented decisions
Expected Benefits:
Immediate Wins:
Creates institutional memory - decisions don't get lost or forgotten
Takes 20-30 minutes to document, saves hours of explanation later
Demonstrates thoughtful decision-making to stakeholders
Reduces "why did they do that?" confusion across teams
Makes implicit reasoning explicit and shareable
Relationship & Cultural Improvements:
Builds trust through transparency - people see you're not making decisions carelessly
Invites others into your thinking process, making them feel valued
Creates opportunities for cross-functional input before decisions are set in stone
Reduces organizational politics - reasoning is visible, not hidden
Models good decision-making practices for junior team members
Normalizes discussing trade-offs and uncertainty honestly
Long-Term Organizational Alignment:
Creates a searchable library of "how we think about product decisions"
New team members can read decision history to understand product philosophy
Patterns emerge over time - you see what values consistently guide choices
Prevents repeating the same debates: "we already considered that, here's why we didn't do it"
Builds organizational muscle for strategic thinking and principled decision-making
Establishes a culture of learning from decisions, not just making them
Makes it easier to course-correct when assumptions prove wrong - the context is already documented
Let us know how it went and what insights emerged from sharing your decisions! Use the hashtag #MLAChallenge to share your story. Let's inspire each other to make decision-making a learning opportunity for everyone.
Dear UX Designer, your craft just became table stakes
Michał Kosecki specializes in identifying structural chaos at the intersection of strategy, technology, and design - particularly in large, regulated organizations where real change requires understanding that technical architecture often reflects organizational silos rather than actual user needs. With 15 years of experience scaling organizations, he consistently takes on high-risk transformations requiring navigation through regulations, politics, and legacy systems. He believes in transparency as a foundation and in respecting human cognitive limits.
You learned Figma. You mastered components. You spent years perfecting your eye for spacing, typography, the subtle weight of a shadow. You can make anything pixel-perfect.
And now, none of that matters as much as you thought it would.
The work you invested years mastering has become table stakes, the baseline, the price of entry.
Nielsen Norman Group said it clearly: "UI is no longer a differentiator." If you're just slapping together components from a design system, you're already replaceable by AI.
This isn't the end of design. But it is the end of design as primarily an execution discipline.
What actually happened
Five forces converged in the last 24 months and fundamentally shifted where value lives in design work. I'm going to walk through them, because vague anxiety doesn't help anyone, and you need to see the whole picture to understand why your job search is so brutal right now.
First: Design systems succeeded - maybe too well. Nobody needs to redesign the same button 300 times anymore. We built Figma libraries, documented tokens, and convinced leadership to invest. But when execution becomes systematized, it becomes cheaper. When it becomes cheaper, it becomes less differentiating. Think about the last five SaaS products you used. Can you tell them apart by their UI? Same patterns, same 8px grid, same components. These days, flip a coin and odds are it's built on shadcn. Efficiency killed variety.
Second: AI crossed the execution threshold. Google's Gemini 3 Pro matched expert design 44% of the time 18 months ago, and the models double in capability every seven months. First-draft quality from AI is solid now. The kind of work that used to take half a day now takes 30 seconds and a decent prompt. Meanwhile, the median human designer's skill is declining as the field expands faster than seniors develop. Do the math.
Third: Users are moving away from traditional interfaces. Gartner predicts a 25% decrease in mobile app usage by 2027 as users delegate to agents instead of navigating interfaces. "Book me a flight to LA next Tuesday under $400" becomes the interaction. Your carefully crafted booking flow? Bypassed. The interface still matters (it's what agents use), but users spend less time looking at your pixels.
Fourth: The labor market validated the shift. The World Economic Forum's 2025 Future of Jobs report confirms what we're seeing on the ground. By 2030, employers will value analytical thinking, AI fluency, creative thinking, and technological literacy. Skills declining in importance: manual dexterity, endurance, precision, and sensory-processing abilities. Physical execution skills are moving out of focus while judgment and adaptation become core. Design is just experiencing this transformation first, because our work became digital before most fields.
Fifth: Interview processes evolved and exposed the gap. Tom Scott, who sits in actual hiring rooms at top tech companies, reports that interviews now include craft deep dives where interviewers scrutinize typography decisions, iconography choices, tone, metaphor, and rhythm. But they also reject candidates whose work took 4-6 months for features that should have shipped in weeks. They want deeper craft AND faster execution. Both at the same time.
Candidates fail because they can't explain why a problem matters, what insight drove the idea, or what trade-offs they considered. Their portfolios show "boxes in boxes" systems design and overly safe flows with inconsistent quality. They're tied to old playbooks: lengthy discovery phases, research-heavy processes, traditional handoffs. They show work "selected by the team" but not led by them. They have no examples of working with metrics or iterative cycles. They're not prototyping with AI tools at the pace these companies operate.
Depth of craft went up. Speed of execution went up, too. If you can't deliver both, you're not competitive.
Why you can't get hired (even though "demand is high")
You've been job hunting for 8 months. Your portfolio is solid. You're talented. You've adapted your skills. So why the fuck can't you get hired?
You keep hearing "demand for designers has never been higher." Tech leaders are making huge design leadership hires. Two-person startups are investing in brand and founding designers earlier than ever. And yet here you are, sending applications into the void.
It's a skills mismatch at industrial scale.
Companies want a new type of designer (what Tom Scott calls the "AI-native builder" or "Super IC"): someone who uses AI as infrastructure, ships prototypes fast and tests them with real users, thinks in systems but delivers in pixels, has taste-led curation skills, and makes impact through tangible work.
But they're hiring with old playbooks. Six to eight interview rounds. Portfolio reviews that judge visual polish over strategic thinking. Questions that don't map to the actual job. Budgets for mid-level specialists when they want unicorn generalists. They want $180-250K talent but budget $120-150K. Nobody acknowledges the gap, so everyone wastes time.
Tom Scott said it directly: companies "went to market without clear view of what they actually wanted, so they wasted time interviewing people. Tried to hire new type of designer with old playbook." Some designers get multiple offers and constant inbound. Other great people? Out of work for 6-12 months.
If you're in Poland (or any constrained market), the numbers make it worse: 400-600 UX graduates annually, UX representing roughly 2.3% of Poland's IT market, and maybe 500-600 open positions in boom years. You're not failing - the market structure is broken. But the mismatch between what companies say they want and how they actually hire exists everywhere. Poland just makes it visible faster.
You've heard the advice: network your way in. And yes, it works, but the math is brutal. From my conversations with designers who successfully networked into roles: an average of 23 substantive conversations to land one offer. "Substantive" means a 30+ minute call with someone who can influence hiring. Not LinkedIn messages. Not coffee chats with people who "might know someone." Actual conversations with hiring managers, team leads, founders.
It works. But it's not fast, and it's not easy. And if you're doing it while unemployed and running out of savings, the pressure makes it harder.
It's okay to feel angry ("I did everything right"), to bargain ("maybe if I learn another tool"), to grieve ("I loved making things beautiful"). But you can sit in that grief, or you can move. Your call.
Where value moved
Jennifer Darmour, design strategist and VP of Oracle Health Design, captured this shift: "We used to measure our success by what we produced: the screens, the flows, the features. Now the work lives beyond the artifact. The product is no longer the interface. It's the relationship between humans and the intelligent systems that learn from them."
This is the market speaking, not philosophy. Value didn't disappear - it just migrated.
From execution to judgment. What AI can't do yet: curated taste, research-informed understanding, critical thinking, strategic judgment. As Darmour notes: "AI can replicate style in seconds, but it cannot create with soul. It doesn't understand why a color feels honest, or why a sentence lands with care. That remains our domain: the realm of judgment, intuition, and intent."
AI generates 100 button variants in 30 seconds. But your job isn't making the button. It's deciding which variant serves user needs and business goals, and articulating why. Anyone can make something. Knowing what should be made, why, for whom, and with what trade-offs? That takes judgment AI doesn't have.
From artifacts to outcomes. Companies don't want deliverables - they want solved problems. "Design theater" (going through the process without producing results) is dying. Nobody cares about your polished deck if the product didn't improve. Instead of 6 weeks on high-fidelity mockups, spend 3 days on a working prototype. Test it. Learn what's wrong. Iterate. Then polish.
From solo craft to orchestration. You're directing AI, developers, and stakeholders toward a coherent vision. Systems thinking over pixel thinking. Your craft becomes knowing what consistency means, when to break it, how to maintain it at scale, and how to evolve it without breaking trust.
What to do this week
So what do you actually do? I'm going to give you concrete actions, not aspirational bullshit. Do these this week or next, not next quarter.
Audit your value. What percentage of your work is execution versus judgment? Be honest. If more than 70% of your time is spent making screens, tweaking spacing, and choosing colors, you're in the danger zone (and again: it's not necessarily your fault). Track one week: hours making and refining screens; hours in research, strategy, and stakeholder alignment; hours prototyping and testing solutions. If the first number dominates, your value is at risk.
Shift one project. Use AI for first-draft execution. Pick one project and let Claude or ChatGPT or v0 generate the first version. Open Cursor or Antigravity and don't freak out. Then spend your time on what actually matters: user research (what problem are we solving?), strategic thinking (why this solution over alternatives?), stakeholder alignment (how does this serve business goals?), and iteration based on testing (not opinion). Notice whether output quality suffers or improves. Hint: it usually improves, because you're spending time on the things that matter.
Build taste deliberately. Taste isn't mystical. It's pattern recognition plus context. Start a swipe file today: 20 examples in your domain, 20 examples adjacent to your domain, 10 examples completely outside design. For each one, write 2-3 sentences on why it's exceptional. Do this publicly if you want. Write it down. Post the critique. Taste that lives only in your head doesn't count.
Learn to articulate judgment. Take one design decision you made recently. Write three paragraphs explaining why you made this choice, what alternatives you considered, what trade-offs it involves, and how you'd measure whether it worked. If you can't do this, your judgment isn't legible to others. And if it's not legible, it's not valuable. Your value is now explanation, not just creation.
Run the diagnostic. This is not perfect, but it's what actually gets checked in interviews now, so bear with me. Score yourself honestly on these dimensions:
Can I do a craft deep dive on my own work (typography choices, iconography rationale, rhythm, contrast, metaphor)?
Is my work dated, generic, or overly dependent on design systems?
Do I show real product work, shipped and owned end-to-end?
Can I clearly articulate my specific contribution versus the team's?
Can I explain why this problem matters, what insight drove the solution, and what trade-offs I considered?
Do I have examples of working with metrics, customers, and iterative cycles?
Am I prototyping in new tools (AI-assisted, code-based)?
Does my work take 4-6 months when it should take weeks?
Can I walk through a project with clarity (context, what I did, impact, result)?
If you scored poorly on three or more, your skills aren't legible to the market as it exists in 2026. Adapt, or continue getting rejected from opportunities you actually deserve.
Start the networking math. If you're job hunting, you need 20-25 substantive conversations with decision-makers. This week: identify 10 people who fit that profile. Reach out to 3 of them (hell, if you want, just reach out to me). Expect 1-2 to respond. Make your outreach specific: "I saw you're hiring for this role. I'm not applying yet, but I'd love 15 minutes to understand what 'AI-native designer' means to your team." You're asking for information, not a job - and demonstrating that you're someone who thinks strategically about the same problems they face.
Rewrite your positioning. Old: "Product Designer with 6 years of experience in Figma, Sketch, Adobe Suite." New: "AI-native Product Designer who ships prototypes fast, makes strategic bets on user needs, and curates taste-led experiences. Reduced time-to-validation by 60% using AI-assisted prototyping while maintaining design quality." The formula: AI fluency, core strength, outcome focus, evidence.
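If you want to see the networking math end to end, here is a tiny illustrative sketch. The constants (a ~40% reply rate, ~23 substantive conversations per offer) are assumptions drawn loosely from the averages quoted above, not guarantees:

```python
import math

# Rough job-search funnel sketch. Illustrative numbers only:
# ~23 substantive conversations per offer, and roughly 1-2
# replies per 3 outreach messages (~40% reply rate).

def people_to_contact(conversations_per_offer: int = 23,
                      reply_rate: float = 0.4) -> int:
    """Estimate total outreach needed to land one offer."""
    return math.ceil(conversations_per_offer / reply_rate)

if __name__ == "__main__":
    print(people_to_contact())        # 58 people at a 40% reply rate
    print(people_to_contact(23, 0.5)) # 46 people if half reply
```

Even with optimistic reply rates the total lands in the dozens, which is why this is weeks of steady outreach, not a quick fix.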
If you can't or won't adapt
This path isn't for everyone. If you loved design because you loved making things beautiful, and the idea of focusing on strategy and judgment sounds boring or unfulfilling, I get it. You're allowed to want a craft-focused career.
But you need to know: that career is disappearing in mainstream tech. Not because craft doesn't matter, but because craft-only roles are being absorbed by AI and offshore teams that can execute at higher speed and lower cost.
Where craft-focused roles still exist: brand design (high-touch, luxury, or marketing-focused work where aesthetic differentiation is the product), motion design (AI hasn't caught up yet, but it's coming), and physical product design (industrial design, print, environmental - domains where digital execution tools don't apply the same way).
Consider leaving design (I know it's a harsh thing to hear or read): product management (if you have product sense but don't want to execute), UX research (if you love understanding users but not making interfaces), technical writing (if you like clarity and structure), developer relations (if you can bridge design and engineering).
The market is telling you something. You can argue with it, or you can listen and adapt. Arguing doesn't change the outcome.
Before you choose any path, run this reality check. You might not be as good as you think. The market is efficient (mostly). If you're not getting callbacks after 50+ applications, your portfolio might be the problem. Get brutally honest feedback from a senior designer who is NOT your friend. Pay for a portfolio review if needed. Common issues: projects show execution but not thinking, no evidence of impact or outcomes, a dated visual style, work that looks same-y.
You might be applying to the wrong companies. If 100% of your applications go to big traditional companies using 2019 playbooks, you'll waste months. Focus on the 20% of companies that are future-focused: design-led startups, AI-native companies, places with a strong design culture and a fast shipping cadence.
You might need to skill up. If you can't confidently say "I use AI in my workflow," "I can prototype in code (even basic)," and "I understand business metrics," do a 30-day sprint. Pick ONE skill. Go deep. Ship something that demonstrates it.
The job might not exist anymore. If you want a traditional IC role (make beautiful screens, hand off to dev, repeat), reality check: adapt or exit the field.
The craft paradox
Design leaders will tell you this is "an opportunity to redefine what design means in an age of intelligence." They're right. This is a pivot point for the discipline.
But let's be honest: you're not excited about "shaping intelligent systems" when you're worried about paying rent next month. The aspirational narrative is real, sure - but so is the rent, and that gap matters.
You can resist this shift. Keep optimizing for execution. Keep competing with AI on speed and polish. Keep hoping the market will value craft the way it used to. You'll spend the next 5 years watching your market value decline while telling yourself "craft will come back." It won't.
Or you can adapt. Shift your time toward judgment. Build taste deliberately. Learn to articulate why your decisions matter. Position yourself as a strategic contributor, not just a maker of beautiful artifacts. This path is harder. It requires you to admit that what got you here won't get you there. It requires learning new skills, having harder conversations, and accepting that your identity as "the person who makes things beautiful" is no longer enough.
But it's also more interesting, more strategic, more valuable.
The craft isn't dead. It's actually more important than ever - but only when you can deliver it at AI speed.
Companies now do craft deep dives in interviews, scrutinizing your typography, rhythm, and contrast decisions with more rigor than they did in 2019. But they also expect you to move 10x faster than you did in 2023. Craft and speed. Both. Together.
The designers who thrive won't be those who execute fastest. They'll be the ones who know what to execute, why it matters, how to measure if it worked, and how to articulate the reasoning behind every decision. Judgment. Curation. Systems thinking.
And here's the thing: that's actually more interesting work. You get to focus on problems that matter instead of pixel-pushing the same button for the 47th time. You get to see your decisions ripple across products and platforms. You get to operate at the altitude where impact happens.
But it requires letting go of the identity you built around execution. It requires accepting that the tools you mastered are now just tools, not the work itself. It requires humility to admit that AI can do some things better than you, and confidence to claim the things it can't.
The craft isn't dead. It's table stakes.
And if you can deliver deep craft at AI speed, while articulating the strategic reasoning behind every decision, you're not just relevant. You're invaluable.
Start today. In 60 days, you'll be having different conversations.
Now go do the work.
đ The Emotional PM: How Your Feelings Shape Team Performance More Than Your Frameworks
The best product management frameworks in the world cannot save a team led by someone who walks into retrospectives defensive, brings anxiety into discovery sessions, or unconsciously signals disappointment when engineers share bad news. You can master Teresa Torres’s Opportunity Solution Trees, implement Marty Cagan’s empowered team model perfectly, and still watch your team underperform - because you never learned to manage the most powerful force in any room: emotional contagion.
Product managers operate in emotionally charged environments. Conflicting stakeholder demands create tension. Missed deadlines generate pressure. Failed experiments produce disappointment. Difficult trade-offs spark conflict. Yet PM training focuses almost exclusively on cognitive frameworks - prioritization matrices, discovery techniques, roadmap communication - while ignoring the emotional dynamics that determine whether those frameworks actually work in practice.
This isn’t about “soft skills” or being “nice to work with.” It’s neuroscience. Research by Hatfield and colleagues (1994) established that emotions spread through groups via unconscious mimicry: we automatically copy others’ facial expressions and postures, which then influence our own emotional states. Your team is literally catching your emotions before you say a word. A PM’s emotional state isn’t a personal matter contained within their own experience - it’s a team performance variable that shapes psychological safety, decision quality, creativity, and willingness to surface problems.
This guide synthesizes research from psychology, neuroscience, behavioral economics, and organizational behavior to explain exactly how emotional dynamics affect product team performance - and what you can do about it. You’ll learn the mechanisms behind emotional contagion, how emotional states shape supposedly “rational” product decisions, why traditional advice to “stay calm” fails without understanding deeper principles, and specific practices for emotional regulation in PM contexts.
You’ll walk away with four immediately usable tools: an Emotional Intelligence Self-Assessment designed specifically for PMs, a Pre-Meeting Emotional Regulation Ritual, an Emotionally-Aware Retrospective Facilitation Guide, and a Team Emotional Weather Report practice for building collective emotional intelligence.
The Emotional Landscape of Product Management
Product management sits at a peculiar intersection of organizational dynamics. PMs must influence without authority, navigate between competing stakeholder interests, and frequently deliver unwelcome news - all while maintaining the energy and optimism needed to lead teams through uncertainty. This creates what Marty Cagan describes in “Empowered” as one of the most challenging roles in any organization: responsible for outcomes without direct control over the people and resources needed to achieve them.
Consider the emotional weight of typical PM activities. Discovery sessions require