💜 PRODUCT ART 💜

Product Operating Model Series: Deployment Infrastructure

Issue #229

Destare Foundation, Alex Dziewulska, Katarzyna Dahlke, and 3 others
Dec 09, 2025

In today's edition, among other things:

💜 Editor’s Note: Your Spotify Wrapped Isn’t Reflection
💜 Product Operating Model Series: Deployment Infrastructure

It will take you almost an hour to read this issue. Lots of content (or meat)! (For vegans - lots of tofu!).

Grab a notebook 📰 and your favorite beverage 🍔☕.

DeStaRe Foundation

Editor’s Note by Alex 💜

Your Spotify Wrapped Isn’t Reflection—It’s a Dopamine Trap (And You’re Building the Same Thing)

Here’s what nobody in product management wants to admit: We’ve turned year-end reflection into an engagement metric.

Those automated summaries flooding your feeds right now—Spotify Wrapped, Instagram Year in Review, Strava’s year-end stats—they’re not helping users understand their year. They’re optimizing for screenshots, shareability, and next year’s retention. And behavioral science proves they’re doing more psychological harm than good.

This matters because you’re probably building the same thing. Or worse, you’re spending these final weeks of December chasing your own metrics-driven year-end review instead of actually processing what you learned, what worked, and what needs to change.

I’ve watched product teams spend December building elaborate dashboards showing team velocity, features shipped, and OKR completion percentages—then wonder why everyone starts January burned out instead of energized. The data tells a devastating story about why this approach fails, and what we should be doing instead.

Daniel Kahneman calculated something haunting: if the psychological present lasts about three seconds, we experience roughly 600 million moments in a lifetime. Most vanish completely. Only a fraction get encoded into the narrative we call “memory.”
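Kahneman's figure is back-of-the-envelope arithmetic, and it's easy to check. A quick sketch (the inputs here—a 3-second psychological present, 16 waking hours a day, an 85-year life—are my assumptions, not Kahneman's exact numbers) lands in the same ballpark:

```python
# Rough reconstruction of Kahneman's "~600 million moments" estimate.
# Assumptions (mine): 3-second psychological present, 16 waking hours
# per day, an 85-year lifespan.
SECONDS_PER_MOMENT = 3
WAKING_SECONDS_PER_DAY = 16 * 3600

moments_per_day = WAKING_SECONDS_PER_DAY // SECONDS_PER_MOMENT  # 19,200
moments_per_lifetime = moments_per_day * 365 * 85

print(f"{moments_per_lifetime:,}")  # ≈ 596 million
```

Whatever inputs you pick, the order of magnitude is the point: hundreds of millions of moments, almost all of them unrecorded by memory.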

The distinction between what Kahneman calls the “experiencing self” and the “remembering self” reveals why automated summaries fundamentally miss the point. Your experiencing self lives through those 600 million moments—the actual texture of your days, the small frustrations and quiet satisfactions that make up the bulk of human experience. Your remembering self constructs stories from fragments, heavily influenced by peaks and endings.

Here’s where it gets dangerous for product teams: Spotify Wrapped and its descendants operate exclusively in service of the remembering self. Worse, they algorithmically choose which moments become your peaks. “Your Top 1% Artist” manufactures an artificial highlight that may not reflect anything meaningful about your actual listening experience. By arriving precisely at year’s end, these summaries become the definitive “ending” that will disproportionately shape how you evaluate your entire year.

A 2022 meta-analysis of 174 studies found a peak-end effect size of r = 0.581 (a large effect), while the effect of duration on retrospective evaluations was “essentially nil.” Translation: how your year ends matters far more than the 51 weeks that preceded it. Automated summaries exploit this bias, not to help you understand your year, but to create shareable moments that drive next year’s engagement.

The neuroscience of genuine insight reveals an even sharper contrast. Real “Aha!” moments involve robust activity in the bilateral thalamus, hippocampus, and dopaminergic midbrain alongside cortical engagement. There’s a specific neural reward signal: a gamma-band burst over the prefrontal cortex about 500 ms before the solution arrives, followed by an orbitofrontal cortex burst associated with pleasure.

This reward emerges through active problem-solving engagement, not through receiving pre-digested information. When Spotify tells you what you listened to most, you’re consuming data, not creating understanding. Your Default Mode Network—the brain regions that support self-referential processing, meaning-making, and insight—never fully engages.

Jordan Etkin’s research at Duke delivers a knockout punch to quantification culture. Across six experiments examining coloring, walking, and reading, Etkin found that measurement increases output but simultaneously reduces enjoyment. The mechanism: “By drawing attention to output, measurement can make enjoyable activities feel more like work.”

The effect occurred even when participants chose to be tracked voluntarily. Those who selected to wear pedometers walked more but enjoyed walking less. Readers who tracked pages read more but found reading less pleasurable. The very people who most enjoy an activity are the ones most likely to spoil it by quantifying it.

For product teams, this finding demolishes the assumption that giving users “insights” about their behavior creates value. It often destroys it. Self-Determination Theory explains why: engagement-contingent rewards undermine intrinsic motivation (d = -0.40 across 128 studies). When enjoyable activities become tied to external metrics—minutes listened, steps walked, books finished—the shift from intrinsic to extrinsic motivation reduces genuine enjoyment.

Think about what we’re building: features that transform listening to music, exercising, reading, or any other activity people do for joy into optimization problems. We’re literally making life less enjoyable in pursuit of engagement metrics.

And then we turn around and do the same thing to ourselves. How many product managers end December reviewing feature velocity dashboards instead of asking “What did I actually learn this year about our customers?” How many teams measure success by story points completed rather than customer problems genuinely solved?

Shareable summaries aren’t just insufficient for reflection—they’re designed to trigger social comparison. When clinical psychologist Jordan Parmenter says comparison “can lead to feelings of inadequacy or pressure to appear unique,” he’s describing the intended outcome, not a bug.

Research on Fear of Missing Out links it to sleep disturbances, social anxiety, clinical depression, and overall productivity decline. A 2018 study found that limiting social media to 30 minutes daily produced significant reductions in loneliness and depression. Yet we build features explicitly designed to maximize social media sharing.

A 2021 meta-analysis found that passive usage—scrolling and observing rather than actively engaging—is more strongly linked to increased anxiety and depression than active participation. Year-end summaries primarily trigger passive comparison: viewing others’ curated highlights without meaningful interaction. The shareable design optimizes for engagement at the expense of psychological well-being.

For product teams, this gets personal. When you spend December comparing your shipped features to other teams, your OKR completion rates to industry benchmarks, your promotion timeline to your peers—you’re not reflecting. You’re running the comparison engine that makes you less satisfied with perfectly good work.

Shoshana Zuboff’s surveillance capitalism framework exposes the business model: year-end summaries serve as data collection mechanisms for behavioral prediction, engagement tools that create viral user-generated content (free marketing), behavioral modification instruments that influence future patterns, and FOMO triggers that drive continued platform engagement.

As Zuboff argues, platforms don’t just predict behavior—they shape it. “Surveillance capitalists now develop ‘economies of action,’ as they learn to tune, herd, and condition our behavior with subtle and subliminal cues, rewards, and punishments.”

The quantified self movement suffers from what critical scholars call “data fetishism”—users become enticed by the satisfaction numerical data offer, regardless of whether those numbers represent anything meaningful. One Quantified Self community member articulated it perfectly: “Tracking isn’t additive—it’s subtractive. You work on some question about yourself in relation to this machine-produced thing [data] and, afterward, you’re left with a narrower range of attributions you can make about your behavior or your feelings.”

For product leaders, the uncomfortable question is: Are we building tools that genuinely help people, or are we building engagement mechanisms that narrow their self-understanding while extracting behavioral data? And are we applying the same extractive mindset to our own teams?

James Pennebaker’s expressive writing research provides the strongest evidence for what genuine reflection looks like. His foundational 1986 study found that students randomly assigned to write about traumas for 4 days, 15 minutes per day, visited the student health center over the next six months at about half the rate of control participants.

The overall effect size across over 100 studies averages d = 0.16—modest but consistent. More importantly, the mechanism reveals why writing works differently from consuming summaries. People who improved used more cognitive words—“realize,” “think,” “consider,” “because,” “reason.” These words signal the construction of coherent narratives, experiencing insights, and finding paths forward.

Benefits come from the act of constructing meaning, not from having information presented. The therapeutic effect stems from organizing thoughts into coherent structure, creating meaning from experiences, and integrating experiences into one’s unified sense of self.

Critically, Pennebaker’s approach works because “participants wrote to and for themselves.” The writing was confidential; people could destroy it afterward. Research comparing private versus public disclosure found that private sharing resulted in more social support received (75% vs. 66%). The shareable design of automated summaries fundamentally conflicts with the private, honest processing that produces psychological benefits.

For product teams, this means:

Individual Level:

  • Twenty minutes of private writing about meaningful experiences beats any automated summary

  • Focus on growth-oriented questions: “What did I learn? How did I change? What challenged my assumptions?”

  • Active gratitude practices (writing and delivering letters, reflecting on why good things happened) produce large effect sizes in well-being research

  • The goal is narrative construction, not data aggregation

Team Level:

  • Structured debriefs improve team effectiveness by approximately 25% (meta-analytic effect size d = 0.67)—but only with genuine psychological safety

  • Amy Edmondson’s research shows high-performing hospital teams reported MORE errors than low-performing teams because psychological safety enabled honest reporting

  • Without psychological safety, year-end reviews become “organizational theater”—responses that resemble job interview weakness answers rather than genuine reflection

  • The prerequisite isn’t a facilitation technique—it’s a year of building trust

Organizational Level:

  • Alex Soojung-Kim Pang’s research demonstrates that creative workers experience peak productivity for approximately four hours daily before diminishing returns

  • Sonnentag’s recovery research identifies four experiences that protect against burnout: psychological detachment, relaxation, mastery experiences, and control over leisure time

  • A 1-year Finnish study found employees with high stable levels of all four recovery experiences had the least job burnout and sleep problems

  • With 83% of software developers reporting burnout, this isn’t optional

Here’s your challenge for these final weeks of December:

Turn off the automated summaries. Don’t share the Spotify Wrapped. Don’t post the Instagram year in review. Definitely don’t build a team dashboard showing velocity metrics as your “year-end retrospective.”

Instead:

For yourself:

  • Spend 20 minutes for 4 consecutive days writing about the most significant experiences of your year—what you learned, how you changed, what challenged your assumptions

  • Write privately. For yourself. Don’t share it.

  • Ask growth-oriented questions, not performance metrics: “When did I notice my fixed mindset getting triggered? What made me shift?”

  • Practice active gratitude: write letters to people who helped you, reflecting on why their actions mattered

For your team:

  • If you haven’t built psychological safety all year, don’t expect honest reflection in December

  • Replace metrics reviews with structured questions: “What did we learn? What would we do differently? What capabilities did we build?”

  • Frame it as a learning problem, not an execution problem

  • Leaders go first with vulnerability: share your own mistakes and uncertainties

For your product:

  • Question whether your “insights” features actually create insight or just generate engagement

  • Ask: Does this measurement enhance intrinsic motivation or undermine it?

  • Consider: Are we helping people understand themselves, or are we narrowing their self-perception to what’s easily quantifiable?

The evidence is overwhelming. Automated year-end summaries satisfy our hunger for certainty and control while systematically undermining the effortful meaning-making that genuine reflection requires. They exploit cognitive biases, trigger social comparison, reduce enjoyment through quantification, and serve surveillance capitalism more than human flourishing.

Every product team reading this faces a choice: keep building dopamine-chasing engagement features that leave users feeling empty, or build tools that genuinely serve human understanding and growth.

The harder choice is also the more valuable one. And it starts with how you spend these final weeks of December—whether you’ll chase the metrics or do the difficult work of actually reflecting on what matters.

Your experiencing self lived through millions of moments this year. Most are gone forever. The question isn’t what algorithm can tell you about them—it’s what meaning you’ll actively construct from the fragments that remain.

That meaning-making can’t be automated, gamified, or turned into a shareable story. It requires time, privacy, effort, and the courage to confront what you’d rather avoid.

The algorithms will be waiting when you’re done. But genuine understanding? That has to be earned.


Speaking of genuine growth over metrics theater—we’re wrapping up our Product Operating Model cycle as we close this year. The research and frameworks we’ve explored together have challenged how product teams actually operate versus how we pretend to operate in slide decks.

Next year, we’re launching two new cycles that dig deeper into what it actually takes to excel in this field: Product Leadership and Product Competences. Not the LinkedIn-friendly kind with inspirational quotes and five-step frameworks. The real kind—the messy, difficult work of developing the strategic thinking, behavioral science fluency, and human judgment that separates product theater from genuine product excellence.

Because here’s what I’ve learned watching hundreds of product people navigate their careers: the ones who thrive aren’t optimizing for the next promotion or the perfect roadmap. They’re building deep competences and developing authentic leadership capabilities that compound over years, not quarters. They’re doing the uncomfortable work of genuine skill development while others chase the dopamine hits of shipping features and hitting velocity targets.


A final request to support Kasia Dahlke’s research

Kasia, a 5th-year psychology student at WSB Merito University in Gdańsk, is conducting research for her master’s thesis on stress and coping styles in the IT industry (age group 35-50). The topic connects deeply with today’s editorial about how we’ve turned reflection into metrics and engagement—stress in IT isn’t just about deployment frequency or velocity dashboards, it’s about the human cost of chasing those numbers.

If you work in IT and fall within this age range, the survey takes about 10 minutes: https://lnkd.in/dEBCH9qK

If you don’t meet the criteria, every share helps. Kasia promises to share the research findings—and who knows, they might reveal something that helps us all better understand the psychological reality behind the product metrics we obsess over.

📝 Product Operating Model Series

Deployment Infrastructure: Quick Reference Guide

Core Principle

Deployment infrastructure provides the systems and capabilities to deploy features safely, measure their impact accurately, control their visibility strategically, and respond quickly to problems—enabling teams to prove they’re delivering value, not just shipping features.
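The “control their visibility strategically” piece is typically implemented with feature flags and percentage rollouts. Here’s a minimal sketch (the flag name, user ID, and helper are hypothetical illustrations, not a specific vendor’s API) of the key property such systems need—deterministic bucketing, so a given user sees the same experience on every request and your impact measurements compare stable cohorts:

```python
import hashlib

def is_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing flag + user_id keeps each user's assignment stable
    across requests, so cohorts stay consistent for measurement.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # bucket in 0..99
    return bucket < rollout_pct

# Example: gate a hypothetical "new-checkout" feature at a 10% rollout.
if is_enabled("new-checkout", "user-42", 10):
    pass  # serve the new experience
else:
    pass  # serve the existing experience
```

Separating deployment (the code is live) from release (the flag decides who sees it) is what makes it possible to measure impact and roll back instantly without redeploying.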

Why This Matters

Every new capability has three possible outcomes:

  1. Customers love it and start using it immediately (what we hope for)

© 2026 PRODUCT ART · Privacy ∙ Terms ∙ Collection notice