💜 PRODUCT ART 💜

Product Operating Model Series: Test Ideas Responsibly

Issue #215

Destare Foundation, Alex Dziewulska, Katarzyna Dahlke, Sebastian Bukowski, and 2 others

Sep 02, 2025 ∙ Paid

In today's edition, among other things:

💜 Editor’s Note: The High Priests of Silicon Valley
💜 Product Operating Model Series: Test Ideas Responsibly

Join Premium to get access to all content.

It will take you almost an hour to read this issue. Lots of content (or meat)! (For vegans - lots of tofu!).

Grab a notebook 📰 and your favorite beverage 🍵☕.

DeStaRe Foundation

Editor’s Note by Alex 💜

The High Priests of Silicon Valley

We're not living through a technology revolution—we're witnessing the birth of a secular religion, complete with prophets, salvation promises, and believers who've confused statistical models with divine intervention.

I've spent the last few months diving deep into what's really driving the AI frenzy, and what I discovered will make you rethink everything you thought you knew about our supposedly rational approach to artificial intelligence. The AI revolution isn't primarily about technology; it's about our species' ancient psychological need for salvation narratives, and about how tech leaders have become unwitting high priests of a secular religion that promises technological transcendence while delivering human disappointment.

Here's the uncomfortable fact: We're hardwired to seek technological gods, and AI has become our latest object of worship.

Let me start with what's happening inside our heads when we encounter AI. When Geoffrey Hinton, the "Godfather of AI," warns about artificial intelligence dangers, your brain doesn't process this as technical analysis—it processes it as prophetic revelation. When Sam Altman promises AI will create "unimaginable prosperity," you're not hearing business projections. You're hearing salvation narratives that trigger the same neural pathways our ancestors used to evaluate supernatural agents.

Nobel Prize-winning psychologist Daniel Kahneman's research reveals why AI feels magical: it activates what he calls "System 1 thinking"—the fast, intuitive, emotional processing system that relies on heuristics rather than careful analysis. Your System 1 brain treats AI outputs as coming from an intelligent, intentional agent rather than statistical pattern matching. This is the fundamental attribution error at work: we attribute agency and consciousness to algorithmic outputs, treating GPT-4 like a wise oracle rather than an extremely sophisticated autocomplete function.

But there's something deeper happening here. System Justification Theory, developed by psychologists John Jost and Mahzarin Banaji, explains why AI appeals to us psychologically: it promises to fix problems without requiring us to change the underlying systems that create these problems. AI allows us to maintain existing power structures while appearing to address inequality, bias, and systemic dysfunction. It's the perfect solution for people who want progress without disruption, justice without redistribution, and transformation without change.

The money trail reveals just how irrational our AI obsession has become. Robert Shiller's "narrative economics" framework shows how stories drive market behavior regardless of underlying value. The AI narrative has achieved what he calls "epidemic" status—spreading through financial markets like a contagion.

Consider this: 98% of business leaders want to adopt AI, but only 10% have models in production. Meanwhile, AI capital expenditures now contribute more to US GDP growth than all consumer spending combined. We're witnessing classic behavioral economics biases playing out at unprecedented scale: loss aversion creating AI FOMO, social proof cascades driving herd adoption, and sunk cost fallacies perpetuating failing initiatives.

The real beneficiaries aren't the organizations deploying AI—they're the companies selling the dream. McKinsey projects $2.6-4.4 trillion in AI gains (roughly the UK's entire GDP), while venture capital firm Sequoia recently admitted "the AI bubble is reaching a tipping point" after asking "Where is all the revenue?"

Here's the brutal irony: AI projects fail at rates exceeding 80%, yet organizations resolve this cognitive dissonance by blaming "implementation challenges" rather than questioning whether they're solving the right problems in the first place.

The anthropological lens reveals the most fascinating pattern: organizational AI adoption mirrors cargo cult behavior. Just as Pacific Islanders built replica airstrips believing this would bring cargo planes, companies adopt AI terminology and processes while continuing traditional operations, believing that mimicking AI's external forms will automatically deliver promised benefits.

Religious studies professor Robert Geraci identifies three elements characterizing both early Christian apocalypticism and contemporary AI rhetoric: alienation within the current world, desire for a purified new world, and transformation of human beings for life in better conditions. Tech leaders have become modern prophets, with Max Tegmark calling the major AI CEOs "modern-day prophets with four different versions of the Gospel."

Ray Kurzweil promises "The Singularity" where we'll upload consciousness for immortality—a technological resurrection. Sam Altman writes about "The Merge" between humans and AI. Anthony Levandowski literally founded a church called "Way of the Future" dedicated to "worship of a Godhead based on Artificial Intelligence."

This isn't metaphorical. We're witnessing genuine religious behavior wrapped in secular technological language.

Here's where it gets personal: I've consulted with dozens of organizations, and the pattern is always the same. When faced with problems requiring cultural change, leadership development, or addressing systemic dysfunction, they invariably ask, "Can AI solve this instead?"

The organizational behavior research is damning. Companies exhibit what Harvard Business Review calls "learned helplessness" about human factors—convincing themselves that cultural problems are unsolvable while technology problems are manageable. Only 37% of organizations invest significantly in change management for their people, preferring AI implementations that let them avoid the messy, uncertain work of building psychological safety, developing leaders, and addressing power dynamics.

Amy Edmondson's research on psychological safety reveals why: creating trust requires leaders to acknowledge fallibility and model vulnerability. AI promises to bypass these uncomfortable human requirements. It's easier to deploy a chatbot than build authentic relationships. It's simpler to implement an algorithm than address systemic bias. It's less threatening to optimize data flows than challenge organizational hierarchies.

But here's the devastating truth: AI systems amplify existing organizational dysfunction rather than solving it. When companies use AI to eliminate hiring bias, the algorithms reflect and magnify existing cultural biases. When healthcare systems deploy AI for patient care, the technology fails because physician burnout and communication breakdowns remain unaddressed.

Before you dismiss all this as cynical skepticism, consider the counter-evidence. History is littered with "salvation mythologies" that seemed ridiculous but delivered transformative benefits. The railroad "bubble" of 1870-1890 built critical infrastructure despite individual investor losses. The Apollo program faced skepticism (public support never exceeded 53%), yet generated technologies we still use today, from advances in integrated circuits to HACCP food safety systems.

Current AI achievements are genuinely unprecedented: Nobel Prizes were awarded in 2024 for AI applications in physics and chemistry. AI systems now decode ancient texts, predict weather with revolutionary accuracy, and construct mathematical proofs at an expert human level. Google's Willow quantum chip performed calculations that would take classical computers 10 septillion years.

Research on innovation economics suggests that "irrational exuberance" actually benefits society by funding breakthrough discoveries that rational decision-makers would avoid. Perhaps we need technological optimism—even when it borders on religious fervor—to mobilize resources for genuine breakthroughs.

Navigating the salvation industrial complex

So what do we do with this knowledge? How do we harness AI's genuine potential while avoiding the psychological traps of salvation mythology?

First, recognize the religious dimension of your AI decisions. When you feel excitement about an AI solution, ask yourself: "Am I evaluating technology or seeking salvation?" When organizations propose AI initiatives, examine whether they're solving technical problems or avoiding human work.

Second, demand specific, measurable outcomes before implementation. The hallmark of salvation mythology is vague promises of transformation. Genuine AI applications should solve clearly defined problems with quantifiable benefits.

Third, invest as much in human systems as in technological systems. For every dollar spent on AI implementation, match it with investment in change management, leadership development, and cultural transformation. The most successful AI deployments combine technological capability with organizational readiness.

Finally, maintain epistemic humility. Both AI evangelists and AI skeptics exhibit predictable cognitive biases. The truth likely lies in the nuanced middle: AI will deliver genuine breakthroughs in specific domains while falling short of salvation mythology promises in others.

The AI revolution is real, but it's not the revolution we think we're having. We're not just building better technology—we're revealing the deepest patterns of human psychology, our eternal search for transcendence, and our persistent belief that the next big thing will finally deliver us from the fundamental challenges of being human.

The question isn't whether AI will save us. The question is whether we can save ourselves from our need to be saved.

Your move: the next time someone pitches an AI solution, ask them what human problem they're avoiding, and whether technology is truly the appropriate intervention—or just the most psychologically comfortable one.


Previously in the Product Operating Model series (Destare Foundation, Alex Dziewulska, and 4 others):

• Product Operating Model Guide - The Focus Principle - The lost art of focus: why saying "no" drives the biggest product innovations (May 20)
• Product Operating Model: Powered by insights (Jun 3)
• Product Operating Model: The Transparency Principle - Quick Reference Guide (Jun 17)
• Product Operating Model: Placing Bets - A quick guide to the product operating model (Jul 1)
• Product Operating Model: A Guide to Product Risk Assessment (Jul 15)
• Product Operating Model: Rapid Experimentation (Aug 19)

Product Operating Model

Product Operating Model - Test Ideas Responsibly - Quick Reference Guide

Core Principle

Ensuring experiments don't harm customers, revenue, company reputation, or colleagues while maintaining innovation velocity


Cagan's Original Definition

"Test ideas responsibly – prototypes and experiments should be handled in ways that protect the company and its customers"

Cagan's Core Approach (from Transformed):

• Opt-in Customers: Use customers who have "volunteered to be willing test subjects" rather than experimenting on all users (see the sketch below)
• Discovery vs. Production: Keep
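To make the opt-in idea concrete, here is a minimal sketch, assuming a Python codebase and hypothetical names (Customer, opted_into_beta, enroll_in_experiment) that are ours for illustration, not Cagan's and not from any particular experimentation tool. It shows only the gating step: customers who have not explicitly volunteered are never enrolled in discovery-stage experiments.

```python
from dataclasses import dataclass

# Illustrative sketch only. The names below are hypothetical and not taken
# from Transformed or from any specific experimentation platform.

@dataclass
class Customer:
    customer_id: str
    opted_into_beta: bool  # explicit, revocable consent to act as a test subject


def enroll_in_experiment(customer: Customer, experiment_id: str) -> bool:
    """Enroll a customer in a discovery-stage experiment only if they volunteered."""
    if not customer.opted_into_beta:
        # Non-volunteers never see prototypes or unfinished ideas.
        return False
    # Variant assignment, logging, guardrail metrics, etc. would go here.
    print(f"{customer.customer_id} enrolled in {experiment_id}")
    return True


if __name__ == "__main__":
    customers = [Customer("c-001", True), Customer("c-002", False)]
    for c in customers:
        enroll_in_experiment(c, "checkout-prototype-a")
```

The design choice mirrors the principle: the opt-in check sits in front of every experiment path, so protecting customers does not depend on each team remembering to exclude them.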

Keep reading with a 7-day free trial

Subscribe to 💜 PRODUCT ART 💜 to keep reading this post and get 7 days of free access to the full post archives.
