Your AI Is Making Your Team Faster. It's Also Making Them Interchangeable.
The 'median pull' is the AI adoption cost nobody is measuring. Here's how to spot it in your own organization.
- AI tools cause users' outputs to converge toward a common median — making teams faster but less differentiated.
- The Nature study led by James Evans found that scientists using AI produced more work, but work that converged on similar questions and conclusions.
- The median pull is an adoption cost nobody is measuring: individual productivity up, collective originality down.
- Product teams can counter it by measuring output diversity, not just output volume, as an AI adoption metric.
By Brittany Hobbs · Co-host, Product Impact Podcast
Published April 9, 2026 · 6 min read
I've been sitting with a conversation I had two weeks ago with a head of product at a 300-person B2B SaaS company, and I haven't been able to let it go.
We were talking about the AI tooling rollout she'd led over the past year — Claude for research synthesis, a custom GPT for customer interview analysis, Cursor for the engineers, Notion AI for the whole product org. By her team's own reporting, productivity was up across every metric she cared about: fewer hours per research insight, faster PR drafts, more interview coverage, higher shipping velocity. Her CEO was happy. Her board was happy.
And then, almost under her breath, she said something that's been stuck in my head ever since:
"I can't tell anymore whether my team is actually good. I just know they're fast."
That sentence is a warning about something most AI adoption dashboards aren't measuring, and something most product leaders I talk to aren't prepared to see.
The median pull
The framework for what she was describing came together for me last month during a long conversation with Helen Edwards from the Artificiality Institute for Episode 5 of the Product Impact Podcast. Helen and her co-founder Dave Edwards have spent a decade studying what AI actually does to human cognition — not what it allows us to do, but what it does to us in the process.
One of the findings Helen keeps returning to is grounded in a paper by James Evans at the University of Chicago, published in Nature. The study looked at scientists using AI assistance in their research workflows. The findings were mixed in a way that should stop every product leader in their tracks.
AI-using scientists published more. Their citation rates went up by 26%. Their throughput accelerated across every measurable dimension.
But their work also began to converge. Scientists in the same field, using the same AI tools, started reaching similar conclusions. Their methodological choices narrowed. The range of research questions they pursued contracted. Their writing homogenized. The measurable diversity of their thinking dropped even as their measurable productivity rose.
Helen calls this the median pull — the observable effect where AI-using groups get more productive at the cost of becoming less distinctive from each other.
It isn't a theoretical finding anymore. I've been watching it happen in the teams I talk to.
What the median pull looks like in a product org
Think about what "team productivity" actually consists of in a product organization. It's not just output volume. It's the quality of judgment calls made in ambiguity. It's the distinctive insights that come out of customer research. It's the strategic framing that makes one product's approach different from three competitors'. It's the institutional taste that accumulates in the people who've been in the room long enough to know which tradeoffs matter.
The median pull attacks all of it.
Here's the pattern I've been noticing in conversations with product leaders over the past six months:
- Strategy decks start sounding alike. The language, the framing, even the section headings. A PM at one company told me she could tell which deck had been built with AI assistance and which hadn't — not because the AI decks were worse, but because they all had the same "shape."
- Customer research synthesis converges on the same conclusions. Multiple PMs told me their AI-assisted synthesis started generating themes that felt almost pre-written — the same "key insights" across different sets of interviews. When they went back and re-read the transcripts by hand, they found things the AI had systematically missed. Distinctive observations. Unusual phrasings. Tensions the model smoothed over because smoothing is what models are trained to do.
- Product pitches have the same structure. A founder I spoke with said her team's AI-assisted pitch drafts started feeling like "Mad Libs" — the same beats, the same arc, the same transitions, with different words plugged in.
- Even the language people use to describe their own work is converging. This one is subtle and unsettling. PMs are starting to describe their products using the same vocabulary, the same metaphors, the same framing. It's as if the model's patterns are becoming the language of the profession.
If your competitive advantage was ever that your team thought differently from your competitors' teams, the median pull is the thing eating your moat. Not your product roadmap. Not your pricing. Your organization's capacity for distinctive judgment.
The metrics most product leaders are tracking are wrong
Most AI adoption dashboards I see inside client orgs track the same things. Percent of workflows augmented. Tools deployed. Prompt volume. Maybe a vague "satisfaction" score. Occasionally an "impact" metric loosely tied to business outcomes.
None of those metrics can detect the median pull. By the time you can see its effects in your business metrics, you're already 12–18 months into convergence with no baseline to measure the drift against.
Here's what I'd tell a product leader to track instead.
Divergence. When you ask two or three people on your team to analyze the same data, do they come back with different reads, or versions of the same read? If you asked the same question six months ago, was the answer more varied or less? This is hard to quantify. You have to do it qualitatively, and you have to do it on purpose. Most orgs won't do this because it requires admitting that the team's productivity gains might be coming at a cost they're not willing to name.
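If you want to put even a rough number on divergence, one option is to embed each person's write-up and measure how far apart the reads sit. What follows is a minimal sketch, not a validated instrument: it assumes you already have one embedding vector per write-up from whatever sentence-embedding model you use, and every name in it is hypothetical.

```python
import numpy as np

def diversity_score(embeddings: np.ndarray) -> float:
    """Mean pairwise cosine distance across write-ups of the same data.

    Rows are write-up embeddings. Higher means more varied reads; a
    score that falls across repeated exercises is one rough signal of
    the median pull.
    """
    # Normalize each write-up's embedding to unit length.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                    # pairwise cosine similarities
    upper = np.triu_indices(len(normed), k=1)   # each unique pair, once
    return float(np.mean(1.0 - sims[upper]))    # distance = 1 - similarity
```

Run the same exercise quarterly (three people, same transcripts, independent write-ups) and watch the trend. The absolute number means little on its own; the direction is the point.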
Rejection rate. When your team gets an AI-assisted work product — a synthesis, a draft, an analysis — how often do they push back? How often do they override it? How often do they use it verbatim? A team that's been captured by the median pull will have a very high use-verbatim rate and a very low override rate. They'll tell you this is good, because it means the AI is "working."
It isn't. It means the AI is running the thinking, and the humans are running the output.
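The verbatim-use side is easier to approximate. Here's a rough sketch using only the Python standard library; the 0.95 threshold is arbitrary, and ai_draft and shipped are hypothetical fields you'd pull from wherever your drafts and final artifacts actually live.

```python
from difflib import SequenceMatcher

def retention(ai_draft: str, shipped: str) -> float:
    """How much of the shipped artifact survives unchanged from the AI
    draft: 0.0 is a full rewrite, 1.0 is verbatim use."""
    return SequenceMatcher(None, ai_draft, shipped).ratio()

def verbatim_rate(pairs: list[tuple[str, str]], threshold: float = 0.95) -> float:
    """Share of artifacts shipped essentially as the model produced them."""
    kept = sum(1 for draft, final in pairs if retention(draft, final) >= threshold)
    return kept / len(pairs)
```

A verbatim rate that climbs quarter over quarter is the quantitative version of that sentence.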
Three things product leaders should be doing
Based on deployments that seem to be resisting the pull — with the honest caveat that these observations are early and not comprehensive — three practices keep coming up.
1. Protect time for human-only thinking. Not "no AI tools allowed" as a blanket policy. Specific, intentional blocks where AI tools aren't used. One product team I spoke with runs a two-hour window every Monday where the entire org works without any AI assistance on their hardest problem of the week. They call it "thinking alone together." The engineers hate it. It's reportedly where the best insights of the week happen.
2. Reward distinctive outputs, not efficient ones. If your performance metrics reward output volume, your team will optimize for that and the median pull will accelerate. If they reward distinctive thinking — "what did you see that nobody else saw" — the pull can partially reverse. This requires a different kind of review process, and most managers aren't trained to do it.
3. Track the baseline before you scale the tool. Before rolling out a new AI tool to your team, measure the current state of their output: the range of their thinking, the variance in their conclusions, the distinctiveness of their framing. You can only detect drift if you have a baseline. Nobody I've talked to has done this before a rollout. Everyone I've talked to wishes they had.
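For what it's worth, the baseline itself doesn't need to be elaborate. Here's a sketch of the minimum worth recording per exercise, with hypothetical field names, reusing the diversity_score idea from earlier:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OutputBaseline:
    """Pre-rollout snapshot of one repeatable analysis exercise."""
    captured_on: date   # taken before the tool ships to the team
    exercise: str       # e.g., "independent synthesis of the same five interviews"
    n_authors: int      # how many people did it independently
    diversity: float    # e.g., diversity_score at baseline

def drift(baseline: OutputBaseline, current_diversity: float) -> float:
    """Relative drop in diversity since the pre-rollout snapshot."""
    return (baseline.diversity - current_diversity) / baseline.diversity
```

Rerun the exercise six months after rollout, feed the new score into drift, and you have the number nobody in my interviews had.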
What this actually costs you
Here's the part that keeps me up at night.
The head of product I was talking to at the start of this piece isn't unusual. Every product leader I've talked to in the last three months has a version of the same feeling — the team is faster, the metrics look better, and somewhere underneath they can't quite trust that they're building the right things anymore.
Speed without distinctiveness is the most expensive thing a product org can produce. It's expensive because it feels cheap. Your team is more productive, your shipping velocity is up, your stakeholders are happy, and you're losing the thing that made your product worth building in the first place.
The median pull isn't the cost of AI adoption. The median pull is what AI adoption looks like when no one is watching for it.
That's why I can't stop thinking about the conversation.
About the author: Brittany Hobbs is co-host of the Product Impact Podcast. She writes about the human and organizational layer of AI adoption — the part most metrics miss.
Related coverage on Product Impact:
- Podcast: Episode 5: The Human Impact of AI We Need to Measure, with Helen & Dave Edwards
- Field Guide: Cognitive Sovereignty — The Framework Explained
- Research referenced: James Evans et al., Nature (2026) — AI use and scientific convergence
- Related: The Artificiality Institute · The Artificiality Summit 2026
The conversations referenced in this piece are drawn from interviews conducted under the Chatham House Rule as part of Product Impact Podcast research. Identifying details have been omitted.
Hosted by Arpy Dragffy and Brittany Hobbs. Arpy runs PH1 Research, a product adoption research firm, and leads AI Value Acceleration, an enterprise AI consulting practice.