💜 PRODUCT ART 💜

The Emotional PM: How Your Feelings Shape Team Performance More Than Your Frameworks | Dear UX Designer, your craft just became table stakes

Issue #234

Destare Foundation, Alex Dziewulska, Sebastian Bukowski, and 3 others
Jan 27, 2026
∙ Paid

In today's edition, among other things:

💜 The Emotional PM: How Your Feelings Shape Team Performance More Than Your Frameworks (by Alex Dziewulska)

💜 Dear UX Designer, your craft just became table stakes (by MichaƂ Kosecki)

đŸ’Ș Interesting opportunities to work in product management

đŸȘ Product Bites - small portions of product knowledge

đŸ”„ MLA week #37

Join Premium to get access to all content.

It will take you almost an hour to read this issue. Lots of content (or meat)! (For vegans - lots of tofu!).

Grab a notebook 📰 and your favorite beverage đŸ”â˜•.

DeStaRe Foundation

Editor’s Note by Alex 💜

The Velocity Delusion

The shrew is back. Stinging tongue at your service.

2025 tried to take me out. Depression does that sometimes. But it’s retreating now — back where it belongs — and 2026 brought good energy I wasn’t expecting.

I missed this. I missed you. Let’s go 💜


Your fastest teams are learning the slowest.

There. I said it.

I sit with product teams who deploy twelve times a day. Impressive pipelines. Beautiful dashboards. I ask one question: why did your last three features succeed or fail?

Silence.

They can tell me deploy counts. Cycle times. Lead time to production. They cannot tell me if any of it mattered. If shipping is learning, more shipping should equal more learning.

That math doesn’t work.

The velocity gospel has never been more entrenched. DORA metrics everywhere. Accelerate principles tattooed on engineering managers. LinkedIn flooded with hundred-deploys-a-week humble-brags. We measure teams in cycle time like it means something.

Pendo’s research found 80% of features in the average software product are rarely or never used. Eighty percent. We’re shipping faster than ever. We’re understanding less than ever.

Let me tell you how this ends. I’ve watched it for twenty years.

High velocity doesn’t create learning. It destroys the conditions learning requires.

We got here through a misreading of Toyota. The DevOps movement borrowed lean manufacturing — smaller batches, faster feedback, continuous improvement. Shorter cycles should mean faster learning. Airtight logic.

Except manufacturing has something software doesn’t: immediate, unambiguous feedback. When a car door doesn’t fit, you know instantly. When a feature ships, you might not know if it worked for months. Or ever. Nobody’s measuring.

Forsgren, Humble, and Kim showed high-performing teams tend to have higher deployment frequency. Correlation. Not causation. The 2023 DORA team explicitly warned against using these metrics for team comparisons.

We took nuanced research and turned it into a cargo cult. Stop worshipping the freaking deploy counter and look at what you’re actually building.

Here’s what’s happening inside high-velocity environments.

Sophie Leroy’s research on attention residue is brutal. When you switch tasks, part of your cognitive capacity stays stuck on the previous work. The more rushed, the stronger the residue. High-velocity environments create perpetual attention residue. You’re never fully present. You’re always partially somewhere else.

Bar-Eli studied 286 penalty kicks. Goalkeepers jump left or right nearly every time — even though staying center would statistically improve their odds. Why? Missing while standing still feels worse than missing while diving.

We’d rather be wrong in motion than wrong while waiting. Organizations amplify this through incentives. You get promoted for shipping. Nobody ever got fired for deploying fast. Nobody ever got promoted for killing a bad idea before it wasted six months of engineering time.

But the real damage is what velocity crowds out.

Francesca Gino and Bradley Staats ran a study at Wipro’s call center. One group spent 15 minutes at the end of each day reflecting on what they’d learned. The other group kept working.

The reflection group performed 20% better.

Fifteen minutes. That’s it. That’s what it costs to actually learn from your work. But high-velocity teams can’t spare fifteen minutes. There’s always another deployment waiting.

This isn’t magic. It’s mechanics.

Anders Ericsson spent his career studying expertise. Deliberate practice requires four things: a well-defined goal, motivation to improve, immediate feedback, and opportunities for repetition with refinement.

Notice what’s not on that list. Speed.

Ericsson found that practice without feedback structure produces nothing. Golfers who play for thirty years without structured feedback don’t improve. They repeat their mistakes faster.

That’s what most high-velocity teams are doing. Not iterating. Recurring.

I want to be honest about the strongest case for velocity. In uncertain environments, you learn by doing. Shipping gets work to customers faster. Smaller batches reduce risk.

All true.

The fatal flaw is assuming shipping equals learning. It doesn’t. Learning requires time to observe. Space to analyze. Capacity to change behavior based on what you found.

Most high-velocity teams ship into a void. No instrumentation. No outcome reviews. No time budgeted for analysis. They deploy constantly and learn nothing — then celebrate their cycle time metrics while competitors quietly figure out what customers actually need.

I’ve watched this pattern across organizations, continents, seniority levels. The teams that build the right things look different. They ship less. They learn more per shipment. They ask “what would tell us this worked?” before asking “when can we ship it?”

I’m not going to give you a tidy prescription. That would be dishonest given how deep the velocity cult runs.

But I refuse to pretend speed and learning are the same thing. They’re not.

The teams that will dominate the next decade won’t be the fastest. They’ll be the ones who figured out that learning requires something velocity keeps stealing.

That’s not a productivity problem. That’s architecture.

The race doesn’t go to the swift if the swift are running in circles.



Help Shape PRODUCT PRO SUMMIT 2026

Product Pro Summit organizers are asking for your input—and we’re passing that invitation to our community. They want to design sessions, workshops, and topics that deliver real value, not just another conference with forgettable framework talks.

Here’s where you come in: What product management topics actually keep you up at night? What skills do you wish you’d developed three years ago? What conversations would make traveling to a conference genuinely worth it?

The organizers are asking now because they’d rather design something practitioners need than promote something they think looks good. Share your thoughts—the problems you’re facing, the gaps in current conferences, the workshops that would actually move your practice forward.

Tell them what matters to you, and they’ll make magic happen at the summit.

This isn’t crowdsourcing for the sake of engagement. This is conference organizers acknowledging that the best content comes from understanding what the community actually needs—not what looks impressive on an agenda.

Share your ideas: Link

The Summit happens in 2026. The conversation starts now.


Product Hive 2026

PRODUCT HIVE 2026 – The Anti-Conference Where You Build the Agenda

📍 Warsaw, ADN Conference Center

📅 March 18-19, 2026

🌐 https://producthive.pl/

Here’s what makes Product Hive different from the conference circuit where you sit through pre-packaged talks and pretend to take notes while checking Slack:

Day 1 - LEARN: Keynotes from experts on topics that actually matter—AI in product thinking, designing your operating model, navigating organizational chaos, balancing workload and value delivery. You listen, take notes, prepare your own submissions for Day 2.

Day 2 - SHARE: You and other practitioners build the agenda. Barcamp-style sessions where participants and experts collaborate to schedule the most relevant conversations. No fixed agenda imposed from above. You vote with your feet—if a session isn’t valuable, you leave and find one that is.

This format acknowledges something most conferences ignore: the best insights often come from practitioners solving real problems, not just experts delivering polished talks. Product Hive creates space for both.

Topics include:

  • AI-supported product thinking (elevating product research)

  • Designing your own operating model (prioritization and productivity for product leaders)

  • The optimized product manager (balancing workload, priorities, and value)

  • Navigating organizational change

  • Integrating AI in value-driven development

Target audience: Senior PMs, IT leaders influencing product processes, analysts supporting product development, founders and startup CEOs.

Bonus: Optional full-day workshop with Roman Pichler on Product Strategy (March 17th).

Language: Primarily English, with some Polish sessions during the SHARE day.

Newsletter subscriber perk: 10% off with code PRODUCTART10

Coming soon: We’ll be running a competition for two tickets at a 50% discount. Stay tuned.

This isn’t another conference where attendance feels like an obligation your employer imposed. It’s designed as actual development space—collaborative, engaging, and built around what practitioners need, not what looks good on a promotional deck.

If you’re tired of conferences optimized for speaker LinkedIn content rather than attendee learning, this format might be worth your time.

Tickets and details: https://producthive.pl/

Alex Dziewulska: I’ll be there with Katarzyna Dahlke and Leadership Lab. Join me to design your product leadership.


đŸ’Ș Product job ads from last week

Do you need support with recruitment, career change, or building your career? Schedule a free coffee chat to talk things over :)

  1. Product Manager - Global Payments

  2. Product Manager - Mastercard

  3. Product Manager - Sygnity

  4. Product Manager - Luxoft

  5. Product Manager - Allegro



đŸȘ Product Bites (3 bites đŸȘ)

đŸȘ The Cobra Effect 🐍: When Your Solution Breeds the Problem

Why well-intentioned product fixes often amplify the very issues they’re designed to solve


We’ve all been there. A metric is tanking, leadership is concerned, and the team rallies to implement a fix. Incentives are realigned, processes are redesigned, and everyone celebrates the clever solution. Six months later, the problem is somehow worse than before—and a new set of problems has emerged. The team looks around, bewildered: How did trying to fix this make it worse?

This is the Cobra Effect in action, and it haunts product teams more often than we’d like to admit.


What Is the Cobra Effect?

The Cobra Effect describes a phenomenon where an intervention intended to solve a problem inadvertently makes the problem worse—typically because the incentive structure encourages behaviors that amplify rather than reduce the issue.

German economist Horst Siebert coined the term in his 2001 book on economic policy, drawing from an apocryphal story from British colonial India. According to the tale, British authorities in Delhi, alarmed by venomous cobras, offered a bounty for every dead snake. Initially successful, the program soon backfired: enterprising locals began breeding cobras specifically to collect the bounty. When officials discovered the scheme and cancelled the program, breeders released their now-worthless snakes into the wild—leaving Delhi with more cobras than before.

Whether historically accurate or not, the parable captures a universal truth about incentive design: people respond to the incentives you create, not the outcomes you intend.


Breaking Down the Cobra Effect

The Cobra Effect manifests through several interrelated mechanisms that product teams should recognize:

The Measure-Target Collapse

When we turn a metric into a target, people optimize for the metric rather than the outcome it’s meant to represent. Goodhart’s law, named for British economist Charles Goodhart, captures this elegantly: “When a measure becomes a target, it ceases to be a good measure.” We want fewer support tickets, so we incentivize ticket closures—and suddenly tickets are being closed without resolution, pushed to other queues, or discouraged from being filed at all.

Rational Actors, Irrational Systems

The cobra breeders weren’t villains—they were rational actors responding logically to the incentive structure presented to them. In product organizations, employees rarely game the system out of malice. They’re simply doing what the system rewards. When sprint velocity becomes a KPI, story point inflation becomes inevitable. When review speed is measured, thoroughness suffers. The system creates the behavior.

Second-Order Blindness

We’re remarkably good at predicting first-order effects (“if we reward X, we’ll get more X”) and remarkably poor at predicting second-order effects (“but people will figure out how to get X without actually doing Y”). This blindness is exacerbated under deadline pressure, when we need solutions fast and don’t have time to think through unintended consequences.

The Feedback Loop Delay

Cobra Effects often take time to materialize. The bounty program initially worked—dead cobras piled up, officials congratulated themselves. Only later did the breeding farms emerge. In product development, a misaligned incentive might boost short-term metrics while slowly eroding the foundation it stands on. By the time we notice, the damage is entrenched.


The Cobra Effect in Action

Wells Fargo’s Account Scandal: In one of the most striking examples of the Cobra Effect in modern business, Wells Fargo’s aggressive cross-selling targets created precisely the problem they were meant to address. Management wanted deeper customer relationships, so they set quotas for new accounts per customer—famously captured in the slogan “eight is great.” Employees, facing intense pressure and potential termination for missing targets, began opening accounts without customer authorization. By 2016, regulators discovered that staff had created approximately 3.5 million unauthorized accounts. The bank paid over $3 billion in fines and settlements, fired 5,300 employees, and suffered incalculable reputational damage. The incentive meant to deepen customer relationships had destroyed customer trust entirely.

The Hanoi Rat Massacre: The French colonial government in Vietnam replicated the cobra mistake almost exactly. Facing a rat infestation in Hanoi, officials offered a bounty for rat tails—reasoning that tails would prove the rats were killed. Soon, officials noticed rats running through the streets without tails: hunters were catching rats, cutting off their tails for the bounty, and releasing them to breed more rats. Rat farms emerged on the outskirts of the city. The program was quietly cancelled, but the rat population had grown.

Bug Bounty Backfires: Software teams have encountered their own cobra effects with internal bug-tracking incentives. When teams reward finding bugs without equally rewarding preventing them, some engineers learn to leave vulnerabilities in code they can “discover” later. When QA bonuses are tied to bugs found, the incentive shifts from quality assurance to fault-finding—and potentially from collaboration to competition with developers.


Why This Matters

The Cobra Effect matters because product teams are constantly designing incentive systems—for users, for employees, for partners. Every gamification element, every KPI, every performance review structure is an incentive system in disguise. And every one of them can backfire.

Research from Forrester suggests that misaligned incentives contribute to up to 70% of project failures related to user adoption. The problem isn’t that we lack good intentions—it’s that we underestimate the creativity humans bring to optimizing for whatever target we set. People are remarkably ingenious at finding the shortest path to the reward, even when that path undermines the reward’s purpose.

The danger is particularly acute when stakes are high and measurement is easy. “What gets measured gets managed” sounds like wisdom until we realize it also means “what gets measured gets manipulated.” The more we tie consequences to specific metrics, the more energy flows toward gaming those metrics rather than achieving actual outcomes.


Putting It Into Practice

Pre-mortems, Not Post-mortems: Before launching any incentive structure, run a pre-mortem. Gather the team and ask: “It’s six months from now and this system has backfired spectacularly. What happened?” Force people to imagine gaming strategies, loopholes, and unintended consequences. You won’t catch everything, but you’ll catch the obvious ones—which are often the ones that cause the most damage.

Measure the System, Not Just the Target: If you’re incentivizing ticket closures, also measure reopened tickets, customer satisfaction post-close, and escalation rates. If you’re rewarding feature output, also track feature usage and removal rates. Triangulating multiple metrics makes gaming harder and surfaces manipulation faster.
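To make the triangulation concrete, here is a minimal sketch in Python. The metric names and thresholds are hypothetical illustrations, not values from any cited research — the point is simply that a headline target (closures) is checked against the counter-metrics that would reveal gaming:

```python
from dataclasses import dataclass

@dataclass
class SupportMetrics:
    tickets_closed: int      # the headline target
    tickets_reopened: int    # gaming signal: closed without resolution
    csat_post_close: float   # customer satisfaction after close, 0-5 scale
    escalations: int         # gaming signal: work pushed to other queues

def health_check(m: SupportMetrics) -> list[str]:
    """Flag patterns suggesting the target metric is being gamed.

    Thresholds below are illustrative placeholders, not calibrated values.
    """
    flags = []
    if m.tickets_closed and m.tickets_reopened / m.tickets_closed > 0.15:
        flags.append("High reopen rate: closures may not be resolutions")
    if m.csat_post_close < 3.5:
        flags.append("Low post-close CSAT: speed may be crowding out quality")
    if m.tickets_closed and m.escalations / m.tickets_closed > 0.20:
        flags.append("High escalation rate: work may be shifting queues")
    return flags
```

A team celebrating 100 closures would see all three flags fire if 30 of them reopened, post-close satisfaction sat at 3.0, and 25 were escalations in disguise. The design choice is the triangulation itself: no single number is trusted until its counter-metrics agree.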

Design for the Rational Gamer: Assume that some percentage of people will respond to exactly what you measure, not what you mean. Design your incentives as if you’re creating rules for a game where players are trying to win—because you are, and they will.

Build in Feedback Loops: Create mechanisms to detect when solutions are backfiring, and commit in advance to changing course. Many cobra effects persist not because they’re invisible but because admitting failure is politically costly. Establish review points and define in advance what “this isn’t working” would look like.

Question Simple Solutions to Complex Problems: The Cobra Effect thrives on oversimplification. When a multifaceted problem is reduced to a single metric, gaming is almost guaranteed. Complex problems require complex—or at least multi-dimensional—solutions.


The Bigger Picture

The deeper lesson of the Cobra Effect isn’t just about incentive design—it’s about humility. Systems fight back. People are creative. The future resists our attempts to control it.

In product development, we’re often under pressure to show quick wins, to demonstrate that our interventions are working. This pressure pushes us toward simple, measurable solutions that can backfire precisely because they’re simple and measurable. The most robust solutions are often the ones that resist easy quantification.

The cobra breeders weren’t the problem. The problem was a system designed without understanding how people would respond to it. Every time we create an incentive—for users, employees, or ourselves—we’re running the same experiment. The question isn’t whether people will optimize for our metrics. They will. The question is whether optimizing for our metrics actually achieves what we intended.

Before you implement your next clever solution, pause and ask: Am I designing a bounty program, or am I breeding cobras?



đŸȘ The Focusing Illusion: Why Users Lie About What Matters

How the psychology of attention systematically distorts feature requests and user research


The interview went perfectly. Users were enthusiastic, nodding along as we described the feature. “Yes, absolutely,” they said. “That would be a game-changer.” We built it. We shipped it. And then... nothing. The feature sits untouched in our analytics dashboard, a monument to confident misunderstanding.

Sound familiar? We’ve all experienced the painful gap between what users say they want and what they actually use. What if this gap isn’t random noise, but a predictable psychological pattern we can learn to navigate?


What Is the Focusing Illusion?

The Focusing Illusion is a cognitive bias identified by Nobel laureate Daniel Kahneman and described with devastating simplicity: “Nothing in life is as important as you think it is while you are thinking about it.”

When we focus our attention on any factor—a feature, a problem, a purchase—that factor temporarily expands in importance. The very act of thinking about something inflates our perception of how much it matters. This isn’t deception; it’s how human cognition works. Our brains aren’t equipped to simultaneously weigh all the factors that affect our decisions. We consider what’s in front of us, and what’s in front of us always seems more important than it actually is.

Kahneman illustrated this with a famous study on happiness and geography. When asked whether Californians are happier than Midwesterners, most people—including Californians themselves—say yes. The pleasant weather looms large when we think about it. But when researchers actually measured life satisfaction, residents of both regions reported virtually identical levels of happiness. Why? Because 99% of life—relationships, work, health, meaning—is the same everywhere. Weather matters, but not nearly as much as we think it does when we’re thinking about it.

For product teams, the implications are profound. Every user interview, every feature request, every prioritization conversation is distorted by this bias.


Breaking Down the Focusing Illusion

The Interview Trap

When we sit down with a user and ask about a specific problem, we’re essentially shining a spotlight on that problem. The user’s attention focuses on it, and in that moment, the problem genuinely feels critical to them. They’re not lying—they’re experiencing an attention-inflated version of their reality. Three months later, when we ship the solution, they’ve long since stopped thinking about it. Other problems have claimed the spotlight.

The Priority Paradox

Users can accurately report their problems. What they can’t accurately report is how those problems rank against everything else in their lives. When a customer says “I really need feature X,” what they mean is “Feature X seems important right now, in this conversation, while I’m thinking about it.” They haven’t mentally stacked it against the other 47 things competing for their time and attention. When forced to actually prioritize—by allocating their time, attention, or money—the feature often falls far down the list.

Adaptation Blindness

The Focusing Illusion is amplified by our failure to anticipate adaptation. We imagine how good we’ll feel when we get the new feature, the new car, the new job. What we don’t imagine is that we’ll stop noticing it. Humans adapt to positive changes remarkably quickly—a phenomenon hedonic psychologists call the “hedonic treadmill.” The feature that seems transformative in an interview will feel like furniture within weeks of adoption.

Context Collapse

User research typically happens outside the context where the product is actually used. We ask people to reconstruct their past experiences or imagine future ones, but both exercises are subject to focusing effects. When I sit in a conference room discussing my workflow, I’m not actually experiencing my workflow—I’m thinking about selected parts of it, which necessarily exaggerates those parts.


The Focusing Illusion in Action

Microsoft’s 70% Rule: Microsoft’s extensive research on product usage revealed a startling finding: approximately 70% of features in complex software products are rarely or never used. Many of these features began as user requests that seemed urgent in research settings but proved inconsequential in practice. Users asked for them, validated them, and then ignored them—not because they were poorly implemented, but because the importance users felt during research didn’t persist into daily use.

The Build-Measure-Abandon Cycle: A B2B product team conducted thorough customer interviews about a requested integration. Customers were emphatic: this integration would unlock significant value and would definitely be implemented if built. The team prioritized, built, and launched. When they followed up with the same customers, many hadn’t implemented the integration and couldn’t remember asking for it. Other operational challenges—ones they hadn’t mentioned in interviews because no one asked—had consumed their attention.

Spotify’s Discovery Problem: Early Spotify research consistently showed users wanted more control over music discovery—more filters, more categories, more customization. When implemented, these features saw limited engagement. What users actually responded to were algorithmically-generated playlists like Discover Weekly—something users couldn’t have requested because they didn’t know to want it. Users focused on articulated problems; the real opportunity was in problems they couldn’t name.


Why This Matters

The Focusing Illusion matters because product teams rely heavily on direct user input, and direct user input is systematically biased toward whatever users happen to be thinking about. This creates several failure modes.

We overbuild for stated needs. Features that emerged from enthusiastic interview feedback often get premium development attention, even when behavioral data suggests lukewarm adoption of similar features. The enthusiasm was real—but real enthusiasm triggered by focused attention doesn’t predict real behavior triggered by distributed attention.

We underbuild for unstated needs. The problems users don’t mention in interviews aren’t necessarily less important—they might simply be less top-of-mind. The most transformative product improvements sometimes come from observing what users struggle with rather than asking what they want.

We mistake certainty for priority. When users express strong preferences, we interpret strength as importance. But the Focusing Illusion means that any preference feels strong in the moment of expression. The certainty tells us about the psychology of focus, not the hierarchy of needs.


Putting It Into Practice

Stack Rank Problems, Not Features: When users identify a problem, don’t just note it—ask them to describe all their challenges and rank them. This forces the problem out of isolation and into competition with reality. The feature request that seemed critical might rank fifth or sixth when stacked against actual priorities. This technique, sometimes called Customer Problem Stack Ranking, surfaces true priorities that survive beyond the interview context.

Observe Before You Ask: Behavioral observation is less susceptible to focusing effects than self-report. Before asking users what they want, watch what they do. The gap between stated and revealed preferences often contains the most valuable insights.

De-focus Your Research: Instead of drilling into specific problems, start broad. Ask about goals, workflows, and frustrations without priming particular solutions. Let users’ attention wander to what actually matters, rather than directing it where you’ve already decided to look.

Test Commitment, Not Agreement: Agreement is cheap; commitment is expensive. When users say they want a feature, probe for commitment signals: Would they pay for it? Would they switch products for it? Would they invest time learning it? Hypothetical agreement means little. Demonstrated willingness to sacrifice means everything.

Apply a Temporal Discount: Treat research enthusiasm the way you’d treat any inflated number—discount it. If five out of ten users express strong interest, assume two or three actually have a persistent need. This isn’t cynicism; it’s calibrating for a known bias.
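The discount above is simple arithmetic, and it can be sketched in a few lines. The 0.5 factor matches the rule of thumb in the text (five of ten stated interests yielding two or three persistent needs); it is an illustrative calibration, not an empirical constant:

```python
import math

def discounted_interest(users_interested: int, users_asked: int,
                        discount: float = 0.5) -> int:
    """Discount stated research enthusiasm to estimate persistent need.

    `discount` is an illustrative calibration factor, not a measured value.
    """
    stated_rate = users_interested / users_asked
    return math.floor(users_asked * stated_rate * discount)
```

Calling `discounted_interest(5, 10)` plans for two persistent needs out of five enthusiastic interviewees. Teams that track stated interest against later adoption can replace the default factor with their own observed ratio.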


The Bigger Picture

The Focusing Illusion reveals an uncomfortable truth about user research: we cannot trust what users tell us, at least not at face value. This isn’t because users are dishonest—it’s because the act of asking distorts the answer. The interview room is a magnifying glass, and everything under it looks larger than life.

This doesn’t mean user research is useless. It means user research is one input among many, subject to known biases that we can partially correct for. The teams that build products users actually love are the ones who triangulate—combining stated preferences with behavioral data with observation with experimentation.

Perhaps the most valuable lesson of the Focusing Illusion is epistemological humility. We’re not as good at knowing what we want as we think we are. Neither are our users. The goal isn’t to find the perfect research method that bypasses human bias—it doesn’t exist. The goal is to be appropriately skeptical of any single signal, including the loud and confident ones.

When a user tells you something is essential, hear it as “this seems essential right now, in this moment, while I’m focused on it.” That’s still valuable information. It’s just not the same as “this will still seem essential in three months when I’m focused on something else entirely.”



đŸȘ The MAYA Principle: Most Advanced Yet Acceptable

Why innovation must walk the narrow corridor between boring and terrifying


We’ve all seen it happen. A startup builds something genuinely revolutionary—technologically brilliant, years ahead of its time—and the market shrugs. Meanwhile, a competitor launches something less impressive but more approachable, and users flock to it. The better technology lost. It happens with unsettling regularity.

The common explanation is “bad timing” or “poor marketing.” But what if the real explanation is more fundamental? What if there’s a predictable zone of acceptance that every successful innovation must navigate, and the brilliant products that fail are the ones that overshoot it?


What Is the MAYA Principle?

MAYA—Most Advanced Yet Acceptable—is a design philosophy developed by Raymond Loewy, often called the father of industrial design. Working from the 1930s through the 1970s, Loewy created some of the most iconic designs of the 20th century: the Coca-Cola bottle, the Shell Oil logo, the Greyhound bus, the S1 locomotive, the interiors of NASA spacecraft, and Air Force One.

Loewy’s prolific success across radically different domains wasn’t luck—it emerged from a deep understanding of how people respond to novelty. He observed that humans are pulled by two opposing forces: neophilia, the attraction to new things, and neophobia, the fear of anything too new. Successful design, he argued, must satisfy both impulses simultaneously.

As Loewy put it: “The adult public’s taste is not necessarily ready to accept the logical solutions to their requirements if the solution implies too vast a departure from what they have been conditioned into accepting as the norm.”

In other words: being right isn’t enough. Being right in a way people can accept—that’s the challenge.


Breaking Down the MAYA Principle

The Zone of Acceptance

Imagine a spectrum running from completely familiar to completely novel. At the familiar end, products blend into the background—users ignore them because they offer nothing new. At the novel end, products trigger resistance—users reject them because they require too much cognitive or behavioral change. MAYA occupies the sweet spot between these extremes: advanced enough to capture attention, familiar enough to feel safe.

Gradual Evolution, Not Revolution

Loewy advocated for designing for the future but delivering the future gradually. Rather than introducing radical change all at once, successful products move users incrementally toward new paradigms. Each iteration pushes slightly past current comfort zones while maintaining enough continuity with the previous version to feel recognizable. Users don’t adapt to the future in one leap; they adapt through a series of small steps.

The Familiarity-Novelty Balance

Derek Thompson, author of “Hit Makers,” synthesized Loewy’s insight this way: “To sell something familiar, you must make it surprising. To sell something surprising, you must make it familiar.” This dual imperative explains why breakthrough products often succeed not by being the most innovative, but by being innovative in a package users already understand.

The Moving Target

What counts as “acceptable” isn’t fixed—it shifts over time as users adapt. Yesterday’s radical is today’s familiar. This means successful product teams don’t just find the MAYA zone once; they continuously recalibrate as user expectations evolve. The iPhone of 2007 would feel primitive today, but it was precisely calibrated to what users could accept in 2007.


The MAYA Principle in Action

Apple’s iPod-to-iPhone Pipeline: Apple didn’t invent the smartphone—predecessors like Palm Pilots and Windows Mobile devices existed for years. But those devices violated MAYA: they were advanced but not acceptable, requiring users to adopt entirely new mental models. Apple’s approach was different. The iPod, launched in 2001, gradually evolved from a device with physical buttons to touch-sensitive scroll wheels to eventually full touchscreens. By the time the iPhone launched in 2007, users had been gently guided toward touchscreen interaction for six years. The iPhone itself was introduced not as a handheld computer but as “a phone”—a familiar category that made the radical leap acceptable. Each subsequent iPhone iteration changed incrementally, never shocking users with too much novelty at once.

Google Glass’s MAYA Failure: Google Glass represented the opposite trajectory. When it launched in 2014, the technology was genuinely advanced—a head-mounted display with a voice-activated interface. But it was nowhere near acceptable. The form factor was socially awkward, earning wearers the nickname “Glassholes.” The interaction model was foreign—no familiar reference point for how to use it. Glass was too far ahead of what users could comfortably adopt, and despite significant hype, it failed to achieve mainstream acceptance. More than a decade later, consumer AR wearables still haven’t reached mass adoption—suggesting Glass wasn’t just ahead of its time, but outside the corridor of acceptable change entirely.

Spotify’s Invisible Innovation: Spotify exemplifies MAYA in software. Under the hood, Spotify uses extraordinarily sophisticated machine learning to generate personalized recommendations. But users never interact with the complexity directly. Instead, they see playlists, album covers, and shuffle buttons—interface elements borrowed from decades of physical and digital music consumption. The innovation happens backstage; the user experience remains anchored in the familiar. Users get cutting-edge recommendation engines wrapped in metaphors they already understand.

Tesla’s Familiar Revolution: Electric vehicles represent a fundamental shift in automotive technology. Tesla’s approach to MAYA was instructive: despite the revolutionary drivetrain, Teslas look like cars, are purchased like cars, and are driven like cars. The innovation is masked by conventional form factor and interaction models. Compare this to earlier EV experiments that looked alien and signaled “different” at every touchpoint—and struggled to gain adoption despite environmental enthusiasm.


Why This Matters

MAYA matters because product teams often confuse technical superiority with market success. The most advanced solution is not necessarily the most successful solution. Adoption depends not just on what a product does, but on whether users can bridge the gap between their current mental models and what the product requires.

This creates a strategic paradox. We want to build products that are ahead of the market—that’s where competitive advantage lives. But being too far ahead means building products nobody will use. The gap between “innovative” and “acceptable” is where promising products go to die.

For product managers, MAYA provides a framework for evaluating feature decisions and product direction. It’s not enough to ask “Is this better?” We must also ask “Is this different in ways users can absorb?” The answers aren’t always the same.


Putting It Into Practice

Map Your Novelty Budget: Every product has a limited budget for novelty—the total amount of new behavior you can ask users to adopt before triggering rejection. Spend it wisely. If your core value proposition requires significant behavioral change, minimize novelty everywhere else. Radical functionality wrapped in conventional UI often outperforms moderate functionality wrapped in radical UI.

Find Your Familiarity Anchors: Identify the mental models, metaphors, and interaction patterns your users already understand. Root your innovation in these anchors. The iPhone was a “phone.” Slack was “email, but better.” The most successful new products often describe themselves in terms of old products, then gradually reveal their true nature.

Stage Your Innovations: Rather than shipping all your advances at once, consider sequencing them over releases. Each release pushes slightly past the current acceptable threshold, moving users incrementally toward your vision. Apple removed the iPod’s buttons over six years—not in one release.

Prototype for Unfamiliarity: When testing new concepts, watch specifically for confusion, hesitation, and workarounds. These signals indicate you’ve crossed out of the acceptable zone. The goal isn’t to eliminate novelty—it’s to calibrate novelty to what users can process.

Differentiate by Context: MAYA tolerance varies by user segment, product category, and cultural context. Early adopters accept more novelty than mainstream users. Consumer products typically require more familiarity than enterprise tools. What’s acceptable in Tokyo may not be acceptable in Tulsa. Know your audience’s specific threshold.


The Bigger Picture

The MAYA principle is fundamentally about empathy—understanding that users live in their present, not in our imagined future. We can see where the technology should go. They’re seeing where their habits already are. The bridge between these perspectives is the work of product design.

There’s a temptation to believe that truly great products are so obviously superior that users will adapt to them regardless of unfamiliarity. History suggests otherwise. The technology graveyard is full of better solutions that users couldn’t accept. Betamax was arguably superior to VHS. The Segway was revolutionary transportation technology. Google Wave was a more powerful collaboration tool than what replaced it. Being better is only half the battle.

Loewy’s insight, decades before cognitive science had the vocabulary for it, was that human capacity for change is limited and must be respected. We don’t experience products in isolation—we experience them against the backdrop of everything we already know and do. Innovation that ignores this context isn’t innovation; it’s wish fulfillment.

The goal isn’t to dampen ambition. It’s to sequence ambition strategically. The most transformative products don’t ask users to change everything at once. They guide users, step by step, from familiar ground into the future. Each step is Most Advanced Yet Acceptable. And through this patient progression, what was once radical becomes the new normal—the foundation for the next step forward.

The future doesn’t arrive all at once. It’s adopted one acceptable increment at a time.



đŸ”„ MLA week #37

The Minimum Lovable Action (MLA) is a tiny, actionable step you can take this week to move your product team forward—no overhauls, no waiting for perfect conditions. Fix a bug, tweak a survey, or act on one piece of feedback.

Why does it matter? Because culture isn’t built overnight. It’s the sum of consistent, small actions. MLA creates momentum—one small win at a time—and turns those wins into lasting change. Small actions, big impact.

MLA: Decision Diary

Why This Matters:

Product teams make dozens of decisions every week—which feature to prioritize, which user segment to target, whether to refactor or ship, how to balance quality with speed. But these decisions often happen behind closed doors, leaving other teams wondering “why did they choose that?” or worse, making their own assumptions about the reasoning. When decision-making is opaque, trust erodes, misalignment grows, and the organization loses the opportunity to learn from its own choices. By documenting one significant product decision transparently—the context, the options you considered, the trade-offs you weighed, and why you ultimately chose what you did—you create a learning artifact that benefits everyone. This practice transforms decisions from mysterious black boxes into shared organizational knowledge, building both trust and collective product thinking.

How to Execute:

1. Choose the Right Decision to Document:

Select a decision that meets these criteria:

  ‱ Significant enough to matter: Not trivial (“which color for the button”) but not so strategic it’s confidential

  • Recently made: Ideally from this week or last week, while the reasoning is fresh

  • Interesting to others: Other teams would benefit from understanding your thinking

  • You’re confident enough to share: You believe it was the right call, even if time will tell

Good candidates:

  • Prioritizing Feature A over Feature B for the next sprint

  • Choosing to delay a release to fix quality issues vs. shipping on time

  • Deciding to serve User Segment X before User Segment Y

  • Selecting a technical approach (build vs. buy, microservices vs. monolith)

  • Choosing to sunset a feature or product

  • Deciding how to respond to competitive pressure

  • Determining what metrics to optimize for in an experiment

Avoid:

  • Personnel decisions or anything HR-related

  • Decisions with legal or competitive sensitivity

  • Decisions still under debate or not finalized

  • Purely tactical execution details with no broader learning

2. Select the Right Format and Channel:

Choose where to share based on your organization’s culture:

Options:

  • Dedicated decision log: Create a shared document or wiki page titled “Product Decision Diary” where you add entries

  • Public Slack/Teams channel: Post in a product or company-wide channel

  • Email to stakeholders: Send to cross-functional partners who care about product direction

  • Team meeting share-out: Present briefly at an all-hands or product review

  • Notion/Confluence page: Add to existing product documentation

Key principle: Make it accessible to people outside your immediate team. The point is transparency, not just team record-keeping.

3. Structure Your Decision Documentation:

Use this template to ensure completeness:


📋 DECISION DIARY ENTRY

Date: [When the decision was made]

Decision: [One clear sentence stating what you decided] Example: “We decided to prioritize the mobile app redesign over adding new integrations for Q1.”

Context: [What circumstances led to this decision?]

  • What problem were you trying to solve?

  • What constraints were you operating under? (time, resources, strategic goals)

  • What external factors influenced this? (market, user feedback, business pressure)

Options Considered: [What alternatives did you evaluate?] List 2-4 options you seriously considered:

  1. Option A: [Brief description] - Pros/Cons

  2. Option B: [Brief description] - Pros/Cons

  3. Option C: [Brief description] - Pros/Cons

Trade-offs Weighed: [What did you have to give up or accept?]

  • What are the downsides of your chosen path?

  • What are you explicitly NOT doing as a result?

  • What risks are you taking on?

Why We Chose This: [Your reasoning]

  • What factors tipped the scales?

  • What values or principles guided you? (user value, speed to market, technical debt reduction, etc.)

  • What data or insights informed the decision?

  • What assumptions are you making?

Success Criteria: [How will you know if this was the right call?]

  • What metrics or outcomes will you track?

  • What timeframe for evaluation?

Questions or Doubts: [What are you uncertain about?] (Optional but powerful)

  • What could prove this wrong?

  • What would make you reconsider?


4. Write with Clarity and Honesty:

Be specific, not vague: ❌ “We decided to focus on improving user experience” ✅ “We decided to redesign the onboarding flow to reduce drop-off from 60% to 40% before adding new features”

Be honest about trade-offs: ❌ “This is the best approach” ✅ “This approach prioritizes speed over perfection—we’re accepting some technical debt to validate demand faster”

Be humble about uncertainty: ❌ “We’re confident this will succeed” ✅ “We believe this is the right bet based on current data, but we’re watching user feedback closely in case we need to pivot”

Use plain language:

  • Avoid jargon when possible

  • Explain acronyms or technical terms

  • Write like you’re explaining to a smart colleague from another department

5. Share and Invite Perspective:

When you publish your decision diary entry:

Frame it as learning, not defending: “I documented our decision to [X] this week. Sharing transparently so others can learn from our thinking—and so you can spot any blind spots we might have missed.”

Explicitly invite feedback: “What questions does this raise? What did we miss? Would love perspective from [relevant teams].”

Tag relevant stakeholders: If this decision impacts marketing, finance, customer success, etc., tag them so they see it.

Don’t make it formal or heavy: This should feel like a thoughtful memo, not a legal document. Conversational tone is fine.

6. Follow Up and Build the Habit:

Immediate follow-up:

  • If people ask questions in comments, respond thoughtfully within 24 hours

  • Thank people who offer perspectives you hadn’t considered

  • If you learn something that changes your thinking, acknowledge it publicly

After 2-4 weeks:

  • Revisit the decision: How’s it going?

  • Share a brief update: “Update on our decision to [X]: Here’s what we’ve learned so far...”

  • This shows you take the documentation seriously and reinforces the learning loop

Build the habit:

  • Start with one decision per week or every two weeks

  • After 4-6 entries, you’ll have a valuable archive others can reference

  • Encourage other team members to contribute their own decision entries

  • Consider a monthly review where the team reflects on documented decisions

Expected Benefits:

Immediate Wins:

  • Creates institutional memory—decisions don’t get lost or forgotten

  • Takes 20-30 minutes to document, saves hours of explanation later

  • Demonstrates thoughtful decision-making to stakeholders

  • Reduces “why did they do that?” confusion across teams

  • Makes implicit reasoning explicit and shareable

Relationship & Cultural Improvements:

  • Builds trust through transparency—people see you’re not making decisions carelessly

  • Invites others into your thinking process, making them feel valued

  • Creates opportunities for cross-functional input before decisions are set in stone

  • Reduces organizational politics—reasoning is visible, not hidden

  • Models good decision-making practices for junior team members

  • Normalizes discussing trade-offs and uncertainty honestly

Long-Term Organizational Alignment:

  • Creates a searchable library of “how we think about product decisions”

  • New team members can read decision history to understand product philosophy

  • Patterns emerge over time—you see what values consistently guide choices

  ‱ Prevents repeating the same debates—“we already considered that, here’s why we didn’t do it”

  • Builds organizational muscle for strategic thinking and principled decision-making

  • Establishes culture of learning from decisions, not just making them

  • Makes it easier to course-correct when assumptions prove wrong—context is already documented


Let us know how it went and what insights emerged from sharing your decisions! Use the hashtag #MLAChallenge to share your story. Let’s inspire each other to make decision-making everyone’s opportunity to learn.



📝 Dear UX Designer, your craft just became table stakes

MichaƂ Kosecki specializes in identifying structural chaos at the intersection of strategy, technology, and design – particularly in large, regulated organizations where real change requires understanding that technical architecture often reflects organizational silos rather than actual user needs. With 15 years of experience scaling organizations, he consistently takes on high-risk transformations requiring navigation through regulations, politics, and legacy systems. He believes in transparency as a foundation and respecting human cognitive limits.


You learned Figma. You mastered components. You spent years perfecting your eye for spacing, typography, the subtle weight of a shadow. You can make anything pixel-perfect.

And now, none of that matters as much as you thought it would.

The work you invested years mastering has become table stakes, the baseline, the price of entry.

Nielsen Norman Group said it clearly: “UI is no longer a differentiator.” If you’re just slapping together components from a design system, you’re already replaceable by AI.

This isn’t the end of design. But it is the end of design as primarily an execution discipline.


What actually happened

Five forces converged in the last 24 months and fundamentally shifted where value lives in design work. I’m going to walk through them because vague anxiety doesn’t help anyone, and you need to see the whole picture to understand why your job search is brutal right now.

First: Design systems succeeded. Maybe too well. Nobody needs to redesign the same button 300 times anymore. We built Figma libraries, documented tokens, convinced leadership to invest. But when execution becomes systematized, it becomes cheaper. When it becomes cheaper, it becomes less differentiating. Think about the last five SaaS products you used. Can you tell them apart by their UI? Same patterns, same 8px grid, same components. Flip a coin on whether it’s built with shadcn nowadays. Efficiency killed variety.

Second: AI crossed the execution threshold. Google’s Gemini 3 Pro matched expert design 44% of the time 18 months ago. The models double in capability every seven months. First-draft quality from AI is solid now. The kind of work that used to take half a day now takes 30 seconds and a decent prompt. And the skill of the median designer is declining as the field expands faster than seniors develop. Do the math.

Third: Users are moving away from traditional interfaces. Gartner predicts a 25% decrease in mobile app usage by 2027. Users will delegate to agents instead of navigating interfaces. “Book me a flight to LA next Tuesday under $400” becomes the interaction. Your carefully crafted booking flow? Bypassed. The interface still matters (it’s what agents use), but users spend less time looking at your pixels.

Fourth: The labor market validated the shift. The World Economic Forum’s 2025 Future of Jobs report confirms what we’re seeing on the ground. By 2030, employers will value analytical thinking, AI fluency, creative thinking, technological literacy. Skills declining in importance: manual dexterity, endurance, precision, sensory-processing abilities. Physical execution skills are moving out of focus while judgment and adaptation become core. Design is just experiencing this transformation first because our work became digital before most fields.

Fifth: Interview processes evolved and exposed the gap. Tom Scott, who sits in actual hiring rooms at top tech companies, reports that interviews now include craft deep dives where interviewers scrutinize typography decisions, iconography choices, tone, metaphor, rhythm. But they also reject candidates whose work took 4-6 months for features that should’ve shipped in weeks. They want deeper craft AND faster execution. Both at the same time.

Candidates fail because they can’t explain why a problem matters, what insight drove the idea, what trade-offs they considered. Their portfolios show “boxes in boxes” systems design, overly safe flows with inconsistent quality. They’re tied to old playbooks: lengthy discovery phases, research-heavy processes, traditional handoffs. They show work “selected by the team” but not led by them. They have no examples of working with metrics or iterative cycles. They’re not prototyping with AI tools at the pace these companies operate.

Depth of craft went up. Speed of execution went up, too. If you can’t deliver both, you’re not competitive.


Why you can’t get hired (even though “demand is high”)

You’ve been job hunting for 8 months. Your portfolio is solid. You’re talented. You’ve adapted your skills. So why the fuck can’t you get hired?

You keep hearing “demand for designers has never been higher.” Tech leaders are making huge design leadership hires. Two-person startups are investing in brand and founding designers earlier than ever. And yet here you are, sending applications into the void.

It’s a skills mismatch at industrial scale.

Companies want a new type of designer (what Tom Scott calls the “AI-native builder” or “Super IC”): someone who uses AI as infrastructure, ships prototypes fast and tests them with real users, thinks in systems but delivers in pixels, has taste-led curation skills, makes impact through tangible work.

But they’re hiring with old playbooks. Six to eight interview rounds. Portfolio reviews that judge visual polish over strategic thinking. Questions that don’t map to the actual job. Budgets for mid-level specialists when they want unicorn generalists. They want $180-250K talent but budget $120-150K. Nobody acknowledges the gap, so everyone wastes time.

Tom Scott said it directly: companies “went to market without clear view of what they actually wanted, so they wasted time interviewing people. Tried to hire new type of designer with old playbook.” Some designers get multiple offers and constant inbound. Other great people? Out of work for 6-12 months.

If you’re in Poland (or any constrained market), the numbers make it worse. 400-600 UX graduates annually. UX represents roughly 2.3% of Poland’s IT market. Maybe 500-600 open positions in boom years. You’re not failing - the market structure is broken. But the mismatch between what companies say they want and how they actually hire exists everywhere. Poland just makes it visible faster.

You’ve heard the advice: network your way in. And yes, it works, but the math is brutal. From my conversations with designers who successfully networked into roles: average of 23 substantive conversations to land one offer. “Substantive” means 30+ minute call with someone who can influence hiring. Not LinkedIn messages. Not coffee chats with people who “might know someone.” Actual conversations with hiring managers, team leads, founders.

It works. But it’s not fast, and it’s not easy. And if you’re doing it while unemployed and running out of savings, the pressure makes it harder.

It’s okay to feel angry (“I did everything right”), to bargain (“maybe if I learn another tool”), to grieve (“I loved making things beautiful”). But you can sit in that grief, or you can move. Your call.


Where value moved

Jennifer Darmour, design strategist and VP of Oracle Health Design, captured this shift: “We used to measure our success by what we produced: the screens, the flows, the features. Now the work lives beyond the artifact. The product is no longer the interface. It’s the relationship between humans and the intelligent systems that learn from them.”

This is the market speaking, not philosophy. Value didn’t disappear - it just migrated.

From execution to judgment. What AI can’t do yet: curated taste, research-informed understanding, critical thinking, strategic judgment. As Darmour notes: “AI can replicate style in seconds, but it cannot create with soul. It doesn’t understand why a color feels honest, or why a sentence lands with care. That remains our domain: the realm of judgment, intuition, and intent.”

AI generates 100 button variants in 30 seconds. And your job isn’t making the button. It’s deciding which variant serves user needs and business goals, and articulating why. Anyone can make something. Knowing what should be made, why, for whom, with what trade-offs? That takes judgment AI doesn’t have.

From artifacts to outcomes. Companies don’t want deliverables - they want solved problems. “Design theater” (going through process without producing results) is dying. Nobody cares about your polished deck if the product didn’t improve. Instead of 6 weeks on high-fidelity mockups, spend 3 days on a working prototype. Test it. Learn what’s wrong. Iterate. Then polish.

From solo craft to orchestration. You’re directing AI, developers, and stakeholders toward coherent vision. Systems thinking over pixel thinking. Your craft becomes knowing what consistency means, when to break it, how to maintain it at scale, how to evolve it without breaking trust.


What to do this week

So what do you actually do? I’m going to give you concrete actions, not aspirational bullshit. Do these this week or next, not next quarter.

Audit your value. What percentage of your work is execution versus judgment? Be honest. If more than 70% of your time is spent making screens, tweaking spacing, choosing colors, you’re in the danger zone (and again: it’s not necessarily your fault). Track one week: hours making/refining screens, hours in research and strategy and stakeholder alignment, hours prototyping and testing solutions. If the first number dominates, your value is at risk.

Shift one project. Use AI for first-draft execution. Pick one project and let Claude or ChatGPT or v0 generate the first version. Open Cursor or Antigravity and don’t freak out. Then spend your time on what actually matters: user research (what problem are we solving?), strategic thinking (why this solution over alternatives?), stakeholder alignment (how does this serve business goals?), iteration based on testing (not opinion). Notice whether output quality suffers or improves. Hint: it usually improves, because you’re spending time on the things that matter.

Build taste deliberately. Taste isn’t mystical. It’s pattern recognition plus context. Start a swipe file today: 20 examples in your domain, 20 examples adjacent to your domain, 10 examples completely outside design. For each one, write 2-3 sentences on why it’s exceptional. Do this publicly if you want. Write it down. Post the critique. Taste that lives only in your head doesn’t count.

Learn to articulate judgment. Take one design decision you made recently. Write 3 paragraphs explaining why you made this choice, what alternatives you considered, what trade-offs this involves, how you’d measure if it worked. If you can’t do this, your judgment isn’t legible to others. And if it’s not legible, it’s not valuable. Your value is now explanation, not just creation.

Run the diagnostic. This is not perfect, but it’s what actually gets checked in interviews now, so bear with me. Score yourself honestly on these dimensions:

  ‱ Can I do a craft deep dive on my own work (typography choices, iconography rationale, rhythm, contrast, metaphor)?

  ‱ Is my work dated, generic, or overly dependent on design systems?

  ‱ Do I show real product work, shipped and owned end-to-end?

  ‱ Can I clearly articulate my specific contribution versus the team’s?

  ‱ Can I explain why this problem matters, what insight drove the solution, what trade-offs I considered?

  ‱ Do I have examples of working with metrics, customers, iterative cycles?

  ‱ Am I prototyping in new tools (AI-assisted, code-based)?

  ‱ Does my work take 4-6 months when it should take weeks?

  ‱ Can I walk through a project with clarity (context, what I did, impact, result)?

If you scored poorly on three or more, your skills aren’t legible to the market as it exists in 2026. Adapt, or continue getting rejected from opportunities you actually deserve.

Start the networking math. If you’re job hunting, you need 20-25 substantive conversations with decision-makers. This week: identify 10 people who fit that profile. Reach out to 3 of them (hell, if you want, just reach out to me). Expect 1-2 to respond. Make your outreach specific: “I saw you’re hiring for this role. I’m not applying yet, but I’d love 15 minutes to understand what ‘AI-native designer’ means to your team.” You’re asking for information, not a job. And demonstrating that you’re someone who thinks strategically about the same problems they face.

Rewrite your positioning. Old: “Product Designer with 6 years experience in Figma, Sketch, Adobe Suite.” New: “AI-native Product Designer who ships prototypes fast, makes strategic bets on user needs, and curates taste-led experiences. Reduced time-to-validation by 60% using AI-assisted prototyping while maintaining design quality.” The formula: AI fluency, core strength, outcome focus, evidence.


If you can’t or won’t adapt

This path isn’t for everyone. If you loved design because you loved making things beautiful, and the idea of focusing on strategy and judgment sounds boring or unfulfilling, I get it. You’re allowed to want a craft-focused career.

But you need to know: that career is disappearing in mainstream tech. Not because craft doesn’t matter, but because craft-only roles are being absorbed by AI and offshore teams that can execute at higher speed and lower cost.

Where craft-focused roles still exist: brand design (high-touch, luxury, or marketing-focused work where aesthetic differentiation is the product), motion design (AI hasn’t caught up yet, but it’s coming), physical product design (industrial design, print, environmental, domains where digital execution tools don’t apply the same way).

Consider leaving design (I know it’s a harsh thing to hear or read): product management (if you have product sense but don’t want to execute), UX research (if you love understanding users but not making interfaces), technical writing (if you like clarity and structure), developer relations (if you can bridge design and engineering).

The market is telling you something. You can argue with it, or you can listen and adapt. Arguing doesn’t change the outcome.

Before you choose any path, run this reality check. You might not be as good as you think. The market is efficient (mostly). If you’re not getting callbacks after 50+ applications, your portfolio might be the problem. Get brutally honest feedback from a senior designer who’s NOT your friend. Pay for a portfolio review if needed. Common issues: projects show execution but not thinking, no evidence of impact or outcomes, visual style is dated, work looks same-y.

You might be applying to wrong companies. If 100% of your applications are to big traditional companies using 2019 playbooks, you’ll waste months. Focus on the 20% of companies that are future-focused: design-led startups, AI-native companies, places with strong design culture and fast shipping cadence.

You might need to skill up. If you can’t confidently say “I use AI in my workflow,” “I can prototype in code (even basic),” “I understand business metrics,” do a 30-day sprint. Pick ONE skill. Go deep. Ship something that demonstrates it.

The job might not exist anymore. If you want a traditional IC role (make beautiful screens, hand off to dev, repeat), reality check: adapt or exit the field.


The craft paradox

Design leaders will tell you this is “an opportunity to redefine what design means in an age of intelligence.” They’re right. This is a pivot point for the discipline.

But let’s be honest: you’re not excited about “shaping intelligent systems” when you’re worried about paying rent next month. The aspirational narrative is real, sure - but so is the rent - and that gap matters.

You can resist this shift. Keep optimizing for execution. Keep competing with AI on speed and polish. Keep hoping the market will value craft the way it used to. You’ll spend the next 5 years watching your market value decline while telling yourself “craft will come back.” It won’t.

Or you can adapt. Shift your time toward judgment. Build taste deliberately. Learn to articulate why your decisions matter. Position yourself as a strategic contributor, not just a maker of beautiful artifacts. This path is harder. It requires you to admit that what got you here won’t get you there. It requires learning new skills, having harder conversations, accepting that your identity as “the person who makes things beautiful” is no longer enough.

But it’s also more interesting, more strategic, more valuable.

The craft isn’t dead. It’s actually more important than ever, but only when you can deliver it at AI speed.

Companies now do craft deep dives in interviews, scrutinizing your typography, rhythm, and contrast decisions with more rigor than they did in 2019. But they also expect you to move 10x faster than you did in 2023. Craft and speed. Both. Together.

The designers who thrive won’t be those who execute fastest. They’ll be the ones who know what to execute, why it matters, how to measure if it worked, and how to articulate the reasoning behind every decision. Judgment. Curation. Systems thinking.

And here’s the thing: that’s actually more interesting work. You get to focus on problems that matter instead of pixel-pushing the same button for the 47th time. You get to see your decisions ripple across products and platforms. You get to operate at the altitude where impact happens.

But it requires letting go of the identity you built around execution. It requires accepting that the tools you mastered are now just tools, not the work itself. It requires humility to admit that AI can do some things better than you, and confidence to claim the things it can’t.

The craft isn’t dead. It’s table stakes.

And if you can deliver deep craft at AI speed, while articulating the strategic reasoning behind every decision, you’re not just relevant. You’re invaluable.

Start today. In 60 days, you’ll be having different conversations.

Now go do the work.



📝 The Emotional PM: How Your Feelings Shape Team Performance More Than Your Frameworks

The best product management frameworks in the world cannot save a team led by someone who walks into retrospectives defensive, brings anxiety into discovery sessions, or unconsciously signals disappointment when engineers share bad news. You can master Teresa Torres’s Opportunity Solution Trees, implement Marty Cagan’s empowered team model perfectly, and still watch your team underperform—because you never learned to manage the most powerful force in any room: emotional contagion.

Product managers operate in emotionally charged environments. Conflicting stakeholder demands create tension. Missed deadlines generate pressure. Failed experiments produce disappointment. Difficult trade-offs spark conflict. Yet PM training focuses almost exclusively on cognitive frameworks—prioritization matrices, discovery techniques, roadmap communication—while ignoring the emotional dynamics that determine whether those frameworks actually work in practice.

This isn’t about “soft skills” or being “nice to work with.” It’s neuroscience. Research by Hatfield and colleagues (1994) established that emotions spread through groups via unconscious mimicry—we automatically copy others’ facial expressions and postures, which then influence our own emotional states. Your team is literally catching your emotions before you say a word. A PM’s emotional state isn’t a personal matter contained within their own experience—it’s a team performance variable that shapes psychological safety, decision quality, creativity, and willingness to surface problems.

This guide synthesizes research from psychology, neuroscience, behavioral economics, and organizational behavior to explain exactly how emotional dynamics affect product team performance—and what you can do about it. You’ll learn the mechanisms behind emotional contagion, how emotional states shape supposedly “rational” product decisions, why traditional advice to “stay calm” fails without understanding deeper principles, and specific practices for emotional regulation in PM contexts.

You’ll walk away with four immediately usable tools: an Emotional Intelligence Self-Assessment designed specifically for PMs, a Pre-Meeting Emotional Regulation Ritual, an Emotionally-Aware Retrospective Facilitation Guide, and a Team Emotional Weather Report practice for building collective emotional intelligence.

The Emotional Landscape of Product Management

Product management sits at a peculiar intersection of organizational dynamics. PMs must influence without authority, navigate between competing stakeholder interests, and frequently deliver unwelcome news—all while maintaining the energy and optimism needed to lead teams through uncertainty. This creates what Marty Cagan describes in “Empowered” as one of the most challenging roles in any organization: responsible for outcomes without direct control over the people and resources needed to achieve them.

Consider the emotional weight of typical PM activities. Discovery sessions require
