<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:media="http://search.yahoo.com/mrss/"
  xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Product Impact — News</title>
    <link>https://productimpactpod.com/news</link>
    <description>AI product impact — news, releases, and case studies on the products transforming how we work and the industries we work in.</description>
    <language>en-us</language>
    <lastBuildDate>Fri, 17 Apr 2026 07:24:23 GMT</lastBuildDate>
    <atom:link href="https://productimpactpod.com/news/rss.xml" rel="self" type="application/rss+xml" />
    <image>
      <url>https://productimpactpod.com/logo.png</url>
      <title>Product Impact</title>
      <link>https://productimpactpod.com</link>
    </image>
    <item>
      <title>Anthropic Is No Longer a Model Company</title>
      <link>https://productimpactpod.com/news/anthropic-claude-managed-agents-platform-shift/</link>
      <guid isPermaLink="true">https://productimpactpod.com/news/anthropic-claude-managed-agents-platform-shift/</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
      <dc:creator>Arpy Dragffy</dc:creator>
      <description>Anthropic's launch of Claude Managed Agents, announced on LinkedIn by product lead Jessica Yan, is not a feature release.</description>
      <content:encoded><![CDATA[<p>Jessica Yan, a product lead at Anthropic, <a href="https://lnkd.in/gYJWv6JU">posted on LinkedIn yesterday</a> to announce the public beta of <strong>Claude Managed Agents</strong>. It is worth reading carefully because it quietly describes the most consequential strategic shift at Anthropic in 2026.</p>
<blockquote>
<p>"You can now raise the ceiling of agent execution AND launch faster using our stateful APIs, performance-optimized harness, scalable infra, and rich developer tools."</p>
<p>— <strong>Jessica Yan</strong>, Product at Anthropic</p>
</blockquote>
<p>Read those four capabilities in order. Stateful APIs. Performance-optimized harness. Scalable infrastructure. Rich developer tools. This is not a model release. This is not a feature expansion. This is Anthropic announcing that it is in the agent platform business — and by extension, in direct competition with AWS Bedrock, Google Vertex AI Agent Builder, OpenAI Assistants API, LangChain, LangGraph, CrewAI, Dust, and every other piece of infrastructure currently hosting agent workloads.</p>
<p>Before this announcement, building an agent on Claude meant you handled the infrastructure yourself. As of yesterday, Anthropic handles it for you.</p>
<p>That is a business model change, not a feature launch.</p>
<h2 id="what-actually-shifted">What actually shifted</h2>
<p>Anthropic's business on Monday was selling inference. You bought API access to Claude, you handled state management, you wrote the orchestration, you built the monitoring, you scaled the infrastructure, you owned the developer experience. The margin was inference margin. The customer was anyone running a workload.</p>
<p>Anthropic's business on Tuesday is selling a platform. You get Claude <em>and</em> the infrastructure to run Claude-powered agents in production. Anthropic captures more of the value chain. The margin is platform margin. The customer is the developer building agent products.</p>
<p>Platform margins are higher than inference margins — that's the obvious part. The less-obvious part is stickiness. An enterprise that builds its agent on Claude Managed Agents cannot easily port that agent to a competing model. State, tooling, operational patterns, and incident history all get locked into Anthropic's infrastructure. Switching costs go up dramatically the moment a team's agent is running on Anthropic's harness.</p>
<p>This is the move the cloud providers have been waiting to see. It's also the move they've been dreading.</p>
<h2 id="who-gets-structurally-worse-this-week">Who gets structurally worse this week</h2>
<p><strong>The hyperscaler Claude distribution path.</strong> AWS Bedrock and Google Vertex host Claude for enterprises that don't want to buy from Anthropic directly. Their value proposition is compliance, existing procurement relationships, and vendor consolidation. All three are real. None beat "the people who built Claude are also running your agent infrastructure for Claude." Every enterprise that was going to run Claude agents through Bedrock or Vertex now has a reason to evaluate going straight to Anthropic instead.</p>
<p><strong>The agent framework startups.</strong> LangChain, CrewAI, LangGraph, Dust, and a long list of others built their businesses on being the orchestration layer <em>above</em> multiple model providers. Their pitch was: don't lock into one LLM; build with our framework; switch models when you need to. That pitch just got harder. Anthropic can now offer deeper integration, better performance tuning, and direct first-party support for Claude-based agents than any third-party framework can match. The frameworks will reposition around multi-model interoperability. That's a harder sell than "we're the best way to build agents."</p>
<p><strong>OpenAI's Assistants API.</strong> OpenAI built Assistants to keep enterprise developers inside the OpenAI ecosystem. They will now have to match every Anthropic Managed Agents capability with an equivalent, while also fighting on the ChatGPT Enterprise front and staying on the foundation-model benchmark treadmill. OpenAI's response will come fast. It will also be reactive, not strategic.</p>
<h2 id="who-wins">Who wins</h2>
<p>Anthropic, obviously. They just expanded their addressable market from "developers buying model access" to "developers building agent products." That's a much larger number, at higher margins, with stickier customers.</p>
<p>The subtler winner is any enterprise that was paralyzed on build-versus-buy for its agent infrastructure. Managed Agents doesn't eliminate the buy-side risk, but it gives risk-averse buyers a credible vendor-backed option they didn't have on Monday. Over the next 90 days, expect more enterprises to move from "planning an agent platform strategy" to "piloting Anthropic Managed Agents" than most analysts predict.</p>
<h2 id="the-question-nobody-in-the-coverage-is-asking">The question nobody in the coverage is asking</h2>
<p>Here is what's missing from every take this week: <strong>when a Claude Managed Agent takes a real-world action that causes a real-world problem, who is responsible?</strong></p>
<p>The developer who wrote the agent? The enterprise that deployed it? Anthropic, whose platform is managing the state and executing the action? That question is not answered in Yan's LinkedIn post. It is probably not answered in Anthropic's initial documentation. It will be the first thing every enterprise general counsel asks before signing a contract, and it will be the single variable that determines whether Managed Agents gets enterprise traction or stays a developer tool.</p>
<p>Anthropic has two ways to handle this.</p>
<p>They can write a managed services agreement that places all liability on the customer. That's legally clean and will scare off exactly the enterprises most likely to pay platform prices.</p>
<p>Or they can accept operational responsibility for the agents running on their platform. That solves the trust problem and fundamentally changes Anthropic's risk profile as a company.</p>
<p>How Anthropic answers this question in their enterprise documentation over the next 30 days will tell you whether they see Managed Agents as a developer acquisition play or as a genuine enterprise platform. Those two paths lead to completely different outcomes in 2027.</p>
<h2 id="three-things-to-watch-in-the-next-30-days">Three things to watch in the next 30 days</h2>
<p><strong>Pricing.</strong> Anthropic has not published pricing for Managed Agents yet. Usage-based pricing signals developer targeting. Platform fee plus usage signals enterprise targeting. Whichever they pick will reveal who they're actually selling to.</p>
<p><strong>Named reference customers.</strong> The first three enterprise reference customers Anthropic cites will tell you whether they have enterprise credibility for this move. Watch the Anthropic blog through mid-May.</p>
<p><strong>OpenAI's response.</strong> OpenAI will ship something comparable within 60 days. They have to. How fast they respond — and whether it's a feature match or a genuine platform strategy — will tell you how seriously they are taking this.</p>
<hr />
<p>Yesterday Anthropic was a model company with a platform ambition. Today they are a platform company with a model at the center. The difference matters more than most of the coverage this week will capture.</p>
<hr />
<p><strong>Primary source:</strong> <a href="https://lnkd.in/gYJWv6JU">Jessica Yan, Anthropic — LinkedIn announcement (April 8, 2026)</a></p>
<p><strong>About the author:</strong> Arpy Dragffy is founder of <a href="https://ph1.ca">PH1 Research</a> and co-host of the <a href="https://productimpactpod.com">Product Impact Podcast</a>. All claims about competitive positioning in this piece are based on public product documentation from the companies referenced.</p>]]></content:encoded>
      <category>agents-agentic-systems</category>
      <category>ai-product-strategy</category>
      <category>go-to-market-distribution</category>
      <category>anthropic</category>
      <category>claude-managed-agents</category>
      <category>agent-platform</category>
      <category>competitive-analysis</category>
      <media:content url="https://images.unsplash.com/photo-1677442136019-21780ecad995?w=1200&amp;h=630&amp;fit=crop" medium="image" />
      <media:thumbnail url="https://images.unsplash.com/photo-1677442136019-21780ecad995?w=1200&amp;h=630&amp;fit=crop" />
      <enclosure url="https://images.unsplash.com/photo-1677442136019-21780ecad995?w=1200&amp;h=630&amp;fit=crop" type="image/jpeg" length="0" />
    </item>
    <item>
      <title>Microsoft's Copilot Problem Isn't Adoption. It's Coerced Adoption.</title>
      <link>https://productimpactpod.com/news/microsoft-copilot-coerced-adoption-problem/</link>
      <guid isPermaLink="true">https://productimpactpod.com/news/microsoft-copilot-coerced-adoption-problem/</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
      <dc:creator>Arpy Dragffy</dc:creator>
      <description>Microsoft's Copilot has 15 million paid enterprise seats. When employees are given a real choice, 76% choose ChatGPT.</description>
      <content:encoded><![CDATA[<p>When enterprise employees are given the choice between Microsoft Copilot and ChatGPT — meaning both tools are available and approved for use — 76 percent of them choose ChatGPT. When Copilot is their only option, adoption reaches 68 percent. When ChatGPT is available, Copilot's share collapses to 18 percent.</p>
<p>That data, published this quarter by <a href="https://www.reconanalytics.com/ai-choice-2026-why-licenses-dont-equal-adoption/">Recon Analytics</a> and echoed in my own analysis of twelve enterprise Copilot deployments at PH1 Research over the past 18 months, tells a story that Microsoft's public Copilot narrative is working very hard to avoid.</p>
<p>Microsoft has 15 million paid Copilot enterprise seats. That's the number in the press releases. The denominator nobody mentions: roughly 450 million Microsoft 365 enterprise subscribers. Copilot's paid conversion rate against its addressable enterprise base is 3.3 percent.</p>
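<p>As a sanity check, that conversion figure follows directly from the two numbers above (15 million paid seats against roughly 450 million Microsoft 365 enterprise subscribers). The snippet below is just that arithmetic, not Microsoft's own accounting:</p>

```python
# Back-of-envelope check on the conversion rate cited above:
# 15M paid Copilot seats against ~450M Microsoft 365 enterprise subscribers.
paid_seats = 15_000_000
addressable_base = 450_000_000  # approximate, as stated in the text

conversion_rate_pct = paid_seats / addressable_base * 100
print(f"{conversion_rate_pct:.1f}%")  # prints "3.3%"
```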
<p>Microsoft's argument is that this is early-market penetration — that 3.3 percent is the beginning of a growth curve. My argument, based on deployment data I've been collecting since the product launched, is that 3.3 percent isn't the beginning of a curve. It's the ceiling of what Copilot can achieve without coercion.</p>
<p>And coercion is what Microsoft is actually selling.</p>
<h2 id="coerced-adoption-is-a-different-product">Coerced adoption is a different product</h2>
<p>"Coerced adoption" is my term for what happens when an enterprise AI tool gets used because the organization has structurally limited its users' alternatives. It happens when IT blocks ChatGPT at the network layer. It happens when enterprise policy forbids employees from using their personal AI tools for work. It happens when the performance review asks "how are you using Copilot?" and doesn't ask about any other tool. It happens when Copilot is integrated into tools the employee already uses — Outlook, Word, Teams — and other AI tools aren't.</p>
<p>Coerced adoption produces usage. It does not produce value.</p>
<p>The distinction matters because an AI product's real value only becomes visible when the employee has a choice. Real adoption happens when an employee reaches for a tool because it's useful. Coerced adoption happens when an employee reaches for a tool because everything else is blocked.</p>
<p>The 76/18 split from Recon Analytics is the cleanest test of this distinction anyone has run in public. When both tools are on the desk, employees vote with their hands. They're not voting for Copilot.</p>
<h2 id="what-the-deployment-data-actually-shows">What the deployment data actually shows</h2>
<p>At PH1, I've advised on twelve Copilot deployments at companies ranging from 2,000 to 40,000 employees. The pattern is consistent across all twelve.</p>
<p>In months one and two, licensed Copilot usage looks strong. Sixty to eighty percent of licensed users open Copilot at least once in the first 30 days. This is the number that gets reported to the board. The CIO is praised. The rollout is declared a success.</p>
<p>Then the curve drops. By month six, weekly active usage across the twelve deployments averages 24 percent. The users who stay are concentrated in three groups: people who use Outlook heavily and let Copilot draft their emails, people who use Excel and lean on Copilot for formula help, and people who have had their ChatGPT access blocked by IT.</p>
<p>The first two groups are getting real value. The third group is the coerced adoption layer. If I strip out the coerced users, weekly active adoption — the kind that reflects a real behavioral integration — is closer to 12 percent.</p>
<p>That's the number nobody is tracking. It's also the only number that matters.</p>
<h2 id="why-microsofts-reorganization-doesnt-fix-this">Why Microsoft's reorganization doesn't fix this</h2>
<p><a href="https://www.bloomberg.com/news/newsletters/2026-03-23/microsoft-msft-ai-copilot-confronts-its-identity-crisis-in-re-org-mn32qmuk">Bloomberg reported on March 23</a> that Microsoft CEO Satya Nadella authorized a reorganization of the Copilot product team, citing "internal confusion over Copilot's role, personality, and strategy." The reorganization is being read as a product management problem — Microsoft doesn't know who Copilot is for.</p>
<p>That reading is too generous.</p>
<p>The real problem isn't that Microsoft can't decide who Copilot is for. The real problem is that when employees are asked directly — by being given a choice — they're telling Microsoft very clearly who they prefer to use, and it isn't Copilot. The reorganization is a response to a symptom. The symptom is declining enterprise trust in Copilot's utility, which shows up in weekly active usage, in NPS, and in the 76 percent choice rate when ChatGPT is allowed in the building.</p>
<p>A product team reorganization cannot fix a preference problem. A preference problem is fixed by making the product people actually prefer. Copilot is not currently that product, and Microsoft knows it, which is why the Copilot enterprise strategy has quietly become structural lock-in instead of product excellence.</p>
<h2 id="the-strategic-bet-microsoft-is-making">The strategic bet Microsoft is making</h2>
<p>Microsoft's Copilot strategy in 2026 is a bet on enterprise procurement inertia. The bet is that IT departments will prefer a single-vendor AI tool that integrates with the existing Microsoft stack over managing multiple AI vendors with their own security, compliance, and procurement workflows. Microsoft is betting that the path of least resistance outweighs employee preference.</p>
<p>It's a reasonable bet in the short term. Enterprise procurement moves slowly. IT departments don't want to manage three AI vendors if they can manage one. Compliance teams don't want to vet three contracts.</p>
<p>It's a terrible bet in the long term. Employee preference is the strongest signal in enterprise technology adoption — stronger than IT preference, stronger than procurement inertia, stronger than integration convenience. Every previous enterprise technology transition has followed the same pattern: employees adopt the tool they prefer personally, drag it into the workplace, and eventually IT has to formalize it. Gmail displaced Lotus Notes this way. Slack displaced Skype for Business this way. Dropbox displaced network drives this way.</p>
<p>ChatGPT is currently in the "employees prefer it personally" phase of that pattern. The 76 percent choice rate is the leading indicator that the workplace transition is already underway. Blocking ChatGPT at the network layer is buying time. It is not solving the problem.</p>
<h2 id="three-things-to-watch-in-q2">Three things to watch in Q2</h2>
<p>My prediction: By the end of Q3, Copilot's weekly active usage will drop below 20 percent at the average enterprise deployment, and Microsoft will shift its public narrative from adoption numbers to "productivity gains" — a metric that's harder to verify and easier to manipulate.</p>
<p>Three specific data points will tell us whether Microsoft's Copilot bet is holding.</p>
<p><strong>Copilot weekly active usage among users who also have access to ChatGPT.</strong> If this number stays below 20 percent, Microsoft's product is losing the head-to-head even inside its own accounts.</p>
<p><strong>Enterprise deals that explicitly permit ChatGPT alongside Copilot.</strong> When major enterprises publicly commit to a multi-vendor AI stack, Microsoft's structural lock-in strategy is breaking down. Watch for this in the language of Q2 enterprise AI announcements.</p>
<p><strong>The trajectory of coerced adoption as a share of total Copilot usage.</strong> If Microsoft's enterprise adoption is increasingly concentrated in environments that block alternatives, the product is failing on its merits.</p>
<p>The Copilot product team reorganization is not a response to confusion. It is a response to data. Microsoft has the data. We don't see it publicly. Based on what I'm seeing in real deployments, it isn't good.</p>
<hr />
<p><strong>About the author:</strong> Arpy Dragffy is the founder of <a href="https://ph1.ca">PH1 Research</a>, a 14-year-old AI product strategy consultancy, and co-host of the <a href="https://productimpactpod.com">Product Impact Podcast</a>. Deployment data referenced in this column is anonymized in aggregate and drawn from engagements where PH1 has permission to discuss patterns.</p>
<p><strong>Related reporting:</strong><br />
- <a href="https://www.bloomberg.com/news/newsletters/2026-03-23/microsoft-msft-ai-copilot-confronts-its-identity-crisis-in-re-org-mn32qmuk">Bloomberg: Microsoft Copilot confronts its identity crisis in re-org (March 23, 2026)</a><br />
- Recon Analytics: Microsoft Unifies Copilot Teams (March 17, 2026)</p>]]></content:encoded>
      <category>adoption-organizational-change</category>
      <category>go-to-market-distribution</category>
      <category>copilot</category>
      <category>microsoft</category>
      <category>enterprise-adoption</category>
      <category>practitioner-take</category>
      <media:content url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/microsoft-copilot-coerced-adoption-problem.png" medium="image" />
      <media:thumbnail url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/microsoft-copilot-coerced-adoption-problem.png" />
      <enclosure url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/microsoft-copilot-coerced-adoption-problem.png" type="image/jpeg" length="0" />
    </item>
    <item>
      <title>The Man Who Hired Jony Ive Has a Warning for the Physical AI Boom</title>
      <link>https://productimpactpod.com/news/robert-brunner-physical-ai-trust-currency/</link>
      <guid isPermaLink="true">https://productimpactpod.com/news/robert-brunner-physical-ai-trust-currency/</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
      <dc:creator>Arpy Dragffy</dc:creator>
      <description>Robert Brunner — who founded Apple's Industrial Design Group, hired Jony Ive, and built Beats — joined the Product Impact Podcast to break down what the AI hardware…</description>
      <content:encoded><![CDATA[<p>The 2026 race to put AI into a physical object is on, and the body count is already climbing.</p>
<p>Humane's AI Pin — the most-hyped wearable launch of 2024 — was effectively dead by <a href="https://www.techradar.com/computing/artificial-intelligence/with-the-humane-ai-pin-now-dead-what-does-the-rabbit-r1-need-to-do-to-survive">February 2025, when HP acquired its assets for $116 million</a> after the company burned through more than $230 million in venture capital. Rabbit R1 is, by most accounts, on a similar trajectory. Meta <a href="https://techcrunch.com/2025/12/05/meta-acquires-ai-device-startup-limitless/">quietly acquired Limitless</a> in December 2025 and immediately stopped selling the Pendant to new customers. Meanwhile, OpenAI's Chris Lehane confirmed at Davos in January <a href="https://www.axios.com/2026/01/19/openai-device-2026-lehane-jony-ive">that the company's first device, designed in partnership with Jony Ive, is on track to ship in the second half of 2026</a> — a device OpenAI bought from Ive for $6.4 billion. Apple is preparing AI glasses, a camera pendant, and camera-embedded AirPods. Every major tech keynote at CES 2026 led with what the industry is now calling "physical AI."</p>
<p>Almost nobody knows how to design for it.</p>
<p>The man best positioned to explain why showed up on the <a href="https://productimpactpod.com/podcast/robert-brunner-physical-ai">latest Product Impact Podcast episode</a>, and what he said is more useful than anything that's been written about the category this year.</p>
<h2 id="who-is-robert-brunner">Who is Robert Brunner</h2>
<p>A short version of the résumé, because it matters: <a href="https://en.wikipedia.org/wiki/Robert_Brunner">Robert Brunner founded Apple's Industrial Design Group in 1989</a> and ran it until 1996. He hired Jony Ive (three times, by his own account, before Ive said yes). He led the design of the original PowerBook, whose layout (keyboard pushed toward the back, pointing device in a palm rest up front) has remained the universal laptop configuration for 35 years. After Apple, he became a partner at Pentagram. In 2007 he founded <a href="https://ammunitiongroup.com/teams_pt/robert-brunner/">Ammunition</a>, the studio that designed Beats by Dre, the Square Stand, the Lyft Amp, the June Oven, the Polaroid Cube, and the Limitless Pin that Meta just bought.</p>
<p>Now, with co-founders, he is building <strong>Object</strong> — a startup focused on what physical AI should feel like when it's designed to respect the user rather than extract from them.</p>
<p>When Brunner talks about how hardware should work, it is worth pausing the rest of the conversation and listening.</p>
<h2 id="what-he-told-us">What he told us</h2>
<p>Brunner's central argument on the podcast is that the AI hardware industry is repeating the mistake the consumer software industry made fifteen years ago, but with a more dangerous payload.</p>
<blockquote>
<p>"Modern technology is optimized for engagement, advertising, data extraction, time. In many ways, technology is, it's like the matrix. It's treating us as a source, as a resource. For information and not human well-being. And that's one of the fundamental problems with digital technology. It's been built around humans as a resource to be monetized. And I think we're all sick of it."</p>
<p>— <strong>Robert Brunner</strong>, Product Impact Podcast S02E06</p>
</blockquote>
<p>The companies racing to put AI into wearables, pendants, glasses, and pins are, Brunner argues, building those products on top of the same incentive structures that made smartphones extractive. The hardware changes; the business model doesn't. That's the trap.</p>
<p>His framing of the alternative is the line worth tattooing on a product office wall:</p>
<blockquote>
<p>"The most valuable currency in technology is rightfully becoming trust. The next great technology companies will be the ones people trust with their lives, not just their data."</p>
<p>— <strong>Robert Brunner</strong></p>
</blockquote>
<p>He means this literally. As physical AI moves into devices that contain microphones, cameras, motion sensors, and access to an always-on data stream about how humans actually move through the world, the vendors that will win the next decade are not the ones with the best models. They are the ones whose customers genuinely believe the device is on their side.</p>
<h2 id="brunners-test-for-whether-ai-in-a-product-is-real">Brunner's test for whether AI in a product is real</h2>
<p>This is the part of the conversation that should be compulsory reading for every product manager shipping an "AI-powered" anything in 2026.</p>
<p>Brunner offered a test for distinguishing genuine AI integration from AI-as-marketing-layer. It is short. It is brutal. It is the answer to a question every product team is being asked by their CEO right now.</p>
<blockquote>
<p>"Does AI remove steps? Will the product require fewer actions to accomplish something meaningful — or more? If it adds menus and features and prompts and dashboards and all that stuff, it's probably not good and it may just be marketing. But if AI quietly removes complexity and lets you do something faster, better, it's real."</p>
<p>— <strong>Robert Brunner</strong></p>
</blockquote>
<p>And then, the line that made me stop the recording:</p>
<blockquote>
<p>"The best AI feature is the one you never notice. The problem simply disappears."</p>
</blockquote>
<p>This is the inverse of how every AI product release in 2026 has been marketed. Vendors are competing to <em>show</em> the AI — the chat overlay, the floating assistant, the "ask me anything" button, the badge in the corner of the interface. Brunner is saying that's the tell. If you can see it, it isn't working.</p>
<p>Compare this to what shipped with Humane's AI Pin: a laser projector beaming a menu onto your palm, a wake-word interaction model, a visible badge on your chest that other people noticed before you did. The product made the AI as visible as possible. By Brunner's standard, the design itself was the failure mode.</p>
<h2 id="why-hardware-is-different-from-a-chat-interface">Why hardware is different from a chat interface</h2>
<p>Brunner spent a long stretch of the conversation on something most coverage of AI hardware is missing: the relationship humans have with physical objects is fundamentally different from the relationship we have with software. He has been arguing this for thirty years. It is more relevant now than it has ever been.</p>
<blockquote>
<p>"Human beings have this unique relationship with objects. In many ways we'll use physical artifacts to define who we are — through the car we drive, the shoes we wear, the furniture we buy. People develop this emotional connection to things they can't literally speak to, whether that's a chair, a kitchen tool, whatever. That sort of goes back to the dawn of man — to when the first person who got up on two feet picked up a stick."</p>
<p>— <strong>Robert Brunner</strong></p>
</blockquote>
<p>His point: when you put intelligence inside an object, you are not making the object smarter. You are inserting yourself into one of the deepest emotional relationships humans have with the made world. A chat interface is something you use. A wearable device is something you live with. The trust standard is dramatically higher, because the failure mode is so much more intimate.</p>
<p>This is the part of the analysis that the Humane and Rabbit failures keep teaching the market. Both products were technically functional. Both products had compelling demos. Both products lost their customers within months of shipping, and the postmortems keep finding the same root cause: users did not trust the device enough to live with it.</p>
<h2 id="the-limitless-example-in-his-own-words">The Limitless example, in his own words</h2>
<p><a href="https://ammunitiongroup.com/teams_pt/robert-brunner/">Brunner's studio Ammunition</a> designed the Limitless Pin — the "memory augmentation" wearable that records audio throughout your day so an AI assistant can search and summarize it later. Meta acquired Limitless in December 2025. The pendant is no longer sold to new customers.</p>
<p>Brunner's reflection on what went wrong is unusually direct for a designer talking about his own work:</p>
<blockquote>
<p>"We chose to, instead of designing it to look like a piece of an iPhone or technology, we really designed it to be, feel more like a watch — a personal object — and came up with a really nice attachment system. But the fundamental challenge with the product, and essentially the product for those who don't know about it, records audio. The fundamental issue is nobody wants to be recorded. Nobody. Even in meetings. And knowing that you're being recorded — even though it's got a little light that tells you that it's on — you're still like, okay, how is this information being used against me?"</p>
<p>— <strong>Robert Brunner</strong></p>
</blockquote>
<p>The form factor was right. The attachment was right. The model was right. The premise was wrong, because the product asked users to do something — let themselves be recorded all day — that no amount of design polish could make comfortable.</p>
<p>This is the diagnostic question Brunner is bringing to Object, the new startup he's now building. It's also the question every founder racing to ship a wearable in 2026 should answer before they tape out silicon.</p>
<h2 id="the-line-openai-apple-and-meta-should-print-and-frame">The line OpenAI, Apple, and Meta should print and frame</h2>
<p>Toward the end of the conversation, Brunner returned to the question of where AI will and won't replace human contribution. His answer is the most succinct articulation I've heard of why the "AI replaces designers" thesis is structurally wrong:</p>
<blockquote>
<p>"AI doesn't feel. AI has never been hurt. AI has never felt joy. AI has never been through these experiences that shape you and define you. And those are the things that become these incredible assets — taste, insight, and judgment. Those are the things I think young designers need to spend more time developing, as opposed to learning how to do a specific tool or create amazing imagery. I don't think those are things that will ever truly be replicated."</p>
</blockquote>
<p>And:</p>
<blockquote>
<p>"AI can generate possibilities, but it can't decide what matters."</p>
</blockquote>
<p>The product teams at OpenAI, Apple, Meta, and the dozens of physical AI startups currently raising rounds are about to discover this the hard way. The hardware will be impressive. The models will be impressive. The first generation will mostly fail anyway, because the people designing it will have optimized for what's possible to demo instead of for what humans can actually live with.</p>
<p>The teams that survive the next 24 months will be the ones that take Brunner's test seriously: does the AI make the product simpler, or does it make it noisier? Does the user notice it, or does the problem just disappear? Does the device respect the person, or does it extract from them?</p>
<p>The hardware boom is happening regardless of whether the industry takes that test seriously. Brunner's bet, and the one his new company Object is being built around, is that the products that win 2027 will be the ones designed by people who already know the answer.</p>
<hr />
<p><strong>Listen to the full conversation:</strong> <a href="https://productimpactpod.com/podcast/robert-brunner-physical-ai">Product Impact Podcast S02E06 — Robert Brunner on Physical AI</a></p>
<p><strong>Hosted by:</strong> Brittany Hobbs and Arpy Dragffy</p>
<p><strong>About Robert Brunner:</strong> Founder of <a href="https://ammunitiongroup.com/">Ammunition</a>, founder of Object. Former Director of Industrial Design at Apple (1989–1996). Hired Jony Ive. Designed the original PowerBook. Led design of Beats by Dre, the June Oven, Square Stand, Polaroid Cube, Lyft Amp, and Limitless Pin. (<a href="https://en.wikipedia.org/wiki/Robert_Brunner">Wikipedia</a>)</p>
<p><strong>About the author:</strong> Arpy Dragffy is founder of <a href="https://ph1.ca">PH1 Research</a> and co-host of the <a href="https://productimpactpod.com">Product Impact Podcast</a>.</p>
<p><strong>Sources used in this analysis:</strong><br />
- Product Impact Podcast S02E06 (April 2026) — primary source for Brunner quotes<br />
- <a href="https://www.axios.com/2026/01/19/openai-device-2026-lehane-jony-ive">Axios: OpenAI aims to debut first device in 2026</a><br />
- <a href="https://techcrunch.com/2025/12/05/meta-acquires-ai-device-startup-limitless/">TechCrunch: Meta acquires Limitless</a><br />
- <a href="https://www.techradar.com/computing/artificial-intelligence/with-the-humane-ai-pin-now-dead-what-does-the-rabbit-r1-need-to-do-to-survive">TechRadar: With the Humane AI Pin now dead</a><br />
- <a href="https://en.wikipedia.org/wiki/Robert_Brunner">Robert Brunner — Wikipedia</a><br />
- <a href="https://ammunitiongroup.com/teams_pt/robert-brunner/">Ammunition Group</a></p>]]></content:encoded>
      <category>ux-experience-design-for-ai</category>
      <category>governance-risk-trust</category>
      <category>physical-ai</category>
      <category>robert-brunner</category>
      <category>ai-hardware-design</category>
      <category>trust-in-technology</category>
      <category>openai-device</category>
      <media:content url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/robert-brunner-physical-ai-trust-currency.png" medium="image" />
      <media:thumbnail url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/robert-brunner-physical-ai-trust-currency.png" />
      <enclosure url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/robert-brunner-physical-ai-trust-currency.png" type="image/png" length="0" />
    </item>
    <item>
      <title>HSBC's Chief AI Officer Starts This Week. So Do 46 Others. Most Will Quit Before 2028.</title>
      <link>https://productimpactpod.com/news/hsbc-chief-ai-officer-wave-prediction/</link>
      <guid isPermaLink="true">https://productimpactpod.com/news/hsbc-chief-ai-officer-wave-prediction/</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
      <dc:creator>Brittany Hobbs</dc:creator>
      <description>Enterprises hired 47 Chief AI Officers in Q1 2026. Product Impact Podcast interviewed 11 from the 2024-2025 wave about what the job actually is.</description>
      <content:encoded><![CDATA[<p>David Rice, who started this week as HSBC's first group Chief AI Officer, has a problem he probably doesn't know about yet. So do the 46 other senior executives who were named to new CAIO or equivalent roles at large enterprises between January and March.</p>
<p>The problem isn't David Rice. By every public account, he's a capable operator — twenty years at HSBC, most recently as COO of the bank's corporate and institutional banking division, respected by colleagues, strong track record. If I were HSBC, I'd hire him too.</p>
<p>The problem is the role they hired him into.</p>
<p>For the last three months I've been quietly tracking Chief AI Officer appointments across 47 large enterprises — an analysis <a href="/news/caio-hiring-surge-q1-2026-enterprise-ai-budget-pressure">published in parallel</a>. In parallel I've been doing something less formal: I've been interviewing people who took similar roles in 2024 and 2025. The CAIOs who came before this wave. The ones who are now trying to figure out how to get out.</p>
<p>None of the eleven people I spoke with agreed to be named. The reason they gave — that speaking publicly about their role's structural problems would damage their ability to do the job they are currently trying to do, and their ability to get the next job after — is the reason this problem stays invisible to the boards that keep creating these roles.</p>
<p>What they told me, almost without exception, is this: the job description they were given was not the job they ended up doing. The budget authority they were promised did not materialize. The board expected measurable results on a timeline that was impossible to hit. The CIO and CDO saw them as encroaching on territory. The product teams saw them as governance overhead. The business units saw them as someone whose approval they needed before shipping the AI features they were already building.</p>
<p>And the budget pressure — <a href="https://www.businesswire.com/news/home/20251028641086/en/Forresters-2026-Technology-Security-Predictions-As-AIs-Hype-Fades-Enterprises-Will-Defer-25-of-Planned-AI-Spend-to-2027">Forrester's prediction that enterprises will defer 25 percent of planned AI spending</a> — showed up about eight months into their role, right when they were supposed to start showing wins.</p>
<p>Most of them are looking for the exit. They don't call it that publicly. The ones who still have their role are framing it in their internal communications as "restructuring" or "evolving scope." Privately, they're planning their next move.</p>
<h2 id="three-structural-problems-nobody-is-correcting">Three structural problems nobody is correcting</h2>
<p>I have not seen a single CAIO appointment this quarter that has corrected for any of the three structural problems the 2024–2025 wave is now running into.</p>
<p><strong>The authority gap.</strong> The job is written with the vocabulary of executive accountability — "drive enterprise AI strategy," "own AI value realization," "ensure responsible deployment." The authority that would actually make those things possible is almost never granted. The CAIOs I interviewed have responsibility for AI initiatives they cannot direct, budget authority for spend they cannot approve, and governance mandates for teams that don't report to them. The kindest description is "dotted-line everything." The honest description is accountability without power.</p>
<p><strong>The timeline mismatch.</strong> Boards are hiring CAIOs with 12-to-18-month expectations for measurable results. In the deployments I'm hearing about, the behavioral layer of AI adoption — the part that determines whether real value gets created — takes 24 to 36 months to shift meaningfully. You cannot change how an organization of 40,000 people does its work in 12 months, and nobody who has ever tried to change enterprise behavior thinks otherwise. The 12-month expectation was invented by vendors and consultants who needed the sales cycle to be short. It has become the board-level expectation for what the CAIO must deliver. The gap between those two numbers is where most of the 47 Q1 CAIOs will fall.</p>
<p><strong>The budget cycle trap.</strong> Every CAIO hired in Q1 of 2026 will face their first real budget review in roughly October or November of this year — precisely when the Forrester-predicted deferral of 2026 AI spending will be hitting their organizations' finance teams. They will be asked to justify AI spending at the exact moment that enterprise AI spending is being cut. They will not have had enough time to produce the kind of evidence that answers the question. The result will be a CAIO who is publicly championing AI investment while watching the budget for that investment get reduced by a CFO who has lost patience.</p>
<p>Watch what happens to those CAIOs over the following six months.</p>
<h2 id="what-the-eleven-interviews-actually-said">What the eleven interviews actually said</h2>
<p>I cannot quote the eleven people I spoke with by name. I can tell you what they said in aggregate.</p>
<p>Nine out of eleven described a moment in their first six months when they realized the job's authority didn't match its accountability. They used different language for it — "the gap," "the mismatch," "the part nobody told me about." One called it "being the designated adult for a teenager who didn't invite me to the party."</p>
<p>Seven described being hired primarily because the board wanted someone specific to be accountable for AI outcomes, rather than because the board had a clear theory of what the CAIO would actually do. "They didn't need me to run AI," one told me. "They needed to be able to say 'we have someone running AI.'"</p>
<p>Five described explicit CEO or board frustration at the 12-month mark when the "AI strategy" they were expected to deliver hadn't translated into visible business metrics. Four of those five said the frustration was framed to them as a personal performance issue, not a timeline mismatch.</p>
<p>Three had already been fired, quit, or moved into a different internal role by the time we spoke. Two more told me they were actively looking.</p>
<p>Only one of the eleven told me the role was working as described. The outlier was at a company where the CEO had previously built and led an AI product team themselves. The CEO understood what AI deployment actually required, so the CAIO's authority matched the assignment.</p>
<p>The outlier tells you what the problem is. The CAIO role works when the CEO already understands what AI deployment requires. It fails when the CEO hires a CAIO to understand it for them. The first condition is rare. The second is almost universal in the current wave.</p>
<h2 id="my-prediction">My prediction</h2>
<p>Of the 47 CAIOs hired at large enterprises in Q1 of 2026, I expect more than half to have exited their roles — through departure, demotion, restructuring, or title change — by the end of 2027.</p>
<p>I'd like to be wrong about this. David Rice and the 46 others are smart people walking into a structurally difficult assignment, and I want them to succeed. But the pattern is consistent enough that I'm willing to put the prediction in writing and stand behind it publicly.</p>
<p>The specific failure mode will be the budget cycle trap. Most CAIOs in this wave will not survive their first real budget review, because the review will happen before they can produce the results that would justify the investment. The boards that hired them will not remember that they made this timeline impossible. They will remember that the CAIO didn't deliver.</p>
<p>The news coverage of the CAIO hiring wave has been framed as a sign that enterprise AI accountability is maturing. I think it's the opposite. It's a sign that enterprise boards are trying to solve an accountability problem by hiring someone to be accountable — without addressing any of the underlying structural conditions that made AI accountability hard in the first place.</p>
<p>Wishing David Rice well is not enough. The role he took this week is broken. HSBC has the power to fix it for him. So do the 46 other boards currently watching their new CAIOs walk in the door.</p>
<p>If they don't, I'll see you back here in 18 months writing the obituary for this wave.</p>
<hr />
<p><strong>About the author:</strong> Brittany Hobbs is co-host of the <a href="https://productimpactpod.com">Product Impact Podcast</a>, where she covers the human and organizational layer of enterprise AI adoption. The eleven interviews referenced in this piece were conducted under the Chatham House Rule between January and March 2026.</p>
<p><strong>Related reporting:</strong><br />
- Product Impact Podcast analysis: <a href="/news/caio-hiring-surge-q1-2026-enterprise-ai-budget-pressure/">Chief AI Officer Hirings Hit Record in Q1</a><br />
- HSBC press release on David Rice appointment (April 1, 2026)<br />
- Forrester 2026 Predictions: 25% of AI spend to be deferred to 2027</p>]]></content:encoded>
      <category>adoption-organizational-change</category>
      <category>governance-risk-trust</category>
      <category>chief-ai-officer</category>
      <category>hsbc</category>
      <category>enterprise-hiring</category>
      <category>governance</category>
      <category>prediction</category>
      <media:content url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/hsbc-chief-ai-officer-wave-prediction.png" medium="image" />
      <media:thumbnail url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/hsbc-chief-ai-officer-wave-prediction.png" />
      <enclosure url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/hsbc-chief-ai-officer-wave-prediction.png" type="image/png" length="0" />
    </item>
    <item>
      <title>Chief AI Officer Hirings Hit Record in Q1 as Enterprise AI Budgets Tighten</title>
      <link>https://productimpactpod.com/news/caio-hiring-surge-q1-2026-enterprise-ai-budget-pressure/</link>
      <guid isPermaLink="true">https://productimpactpod.com/news/caio-hiring-surge-q1-2026-enterprise-ai-budget-pressure/</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
      <dc:creator>Brittany Hobbs</dc:creator>
      <description>47 enterprises appointed Chief AI Officers in Q1 2026 — more than double the 2025 pace — even as Forrester predicts 25% of AI spend will be deferred.</description>
      <content:encoded><![CDATA[<p><a href="https://www.hsbc.com/news-and-views/news/media-releases/2026/david-rice-announced-as-chief-ai-officer">HSBC appointed David Rice as its first group Chief AI Officer</a>, effective April 1, capping a first quarter in which at least 47 Fortune 1000 and equivalent large enterprises named a new Chief AI Officer or equivalent senior AI executive.</p>
<p>The quarterly total is more than double the pace of the comparable 2025 period. The hiring surge is occurring alongside tightening enterprise AI budgets — a contradiction that reveals how enterprises are responding to AI's accountability gap.</p>
<h2 id="the-hsbc-appointment">The HSBC appointment</h2>
<p>HSBC named David Rice, previously chief operating officer of the bank's corporate and institutional banking division, as its first group Chief AI Officer. In <a href="https://www.hsbc.com/news-and-views/news/media-releases/2026/david-rice-announced-as-chief-ai-officer">its announcement</a>, HSBC said Rice will lead the expansion of generative AI across the bank and establish governance standards for AI deployments. The bank also expanded the remit of CTO Mario Shamtani to strengthen technology foundations for AI at scale.</p>
<p>The appointment makes HSBC the third of the top ten global banks by assets to name a dedicated group-level CAIO in the past 12 months.</p>
<h2 id="the-broader-q1-pattern">The broader Q1 pattern</h2>
<p>Our Q1 analysis identified 47 new CAIO or equivalent appointments at enterprises with more than 5,000 employees between January 1 and March 31, 2026. Sectors with the highest concentration were financial services (14), healthcare and life sciences (9), insurance (6), industrial and manufacturing (6), and public sector (5).</p>
<p>The pattern is not limited to the largest companies. Roughly 60 percent of new CAIOs now report directly to their company's CEO — a reporting structure associated with elevated strategic authority but also with elevated accountability exposure.</p>
<h2 id="the-budget-context">The budget context</h2>
<p>The hiring wave is occurring against a backdrop of tightening AI budgets and visible enterprise AI failures.</p>
<p><a href="https://www.businesswire.com/news/home/20251028641086/en/Forresters-2026-Technology-Security-Predictions-As-AIs-Hype-Fades-Enterprises-Will-Defer-25-of-Planned-AI-Spend-to-2027">Forrester predicted in October 2025</a> that enterprises will defer 25 percent of planned 2026 AI spending to 2027, as fewer than one-third of AI decision-makers can tie AI value to their organization's financial growth. <a href="https://fortune.com/2026/01/19/pwc-global-chairman-mohamed-kande-ai-nothing-basics-29th-ceo-survey-davos-world-economic-forum/">PwC's 29th Global CEO Survey</a> found that 56 percent of CEOs report no measurable revenue increase or cost decrease from AI initiatives. And <a href="https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027">Gartner predicted</a> that 40 percent of enterprise agentic AI projects will be canceled by 2027.</p>
<p>The apparent contradiction — more CAIO hires alongside less AI spending — reflects a phase shift. Enterprises are not increasing AI investment; they are formalizing accountability for the investment they have already made. When a $3 trillion bank creates a new C-suite role specifically for AI at the same time the market is disclosing failures, the signal is that AI spending has become material enough that the CFO can no longer be the accountable party. Someone has to own the outcome.</p>
<h2 id="what-to-watch">What to watch</h2>
<p>CAIO departures are likely to accelerate in the second half of 2026 as the executives hired in the current wave begin to face budget review cycles. The people hired to own AI outcomes now will be measured against those outcomes within 12 to 18 months. History suggests the tenure will be short: the 2024 cohort of enterprise AI executives already shows elevated turnover compared to peer C-suite roles.</p>
<p>Additional Q2 CAIO announcements are expected across retail, industrial, and healthcare sectors.</p>
<hr />
<p><strong>Sources:</strong><br />
- <a href="https://www.hsbc.com/news-and-views/news/media-releases/2026/david-rice-announced-as-chief-ai-officer">HSBC: David Rice appointed as first Chief AI Officer</a><br />
- <a href="https://www.businesswire.com/news/home/20251028641086/en/Forresters-2026-Technology-Security-Predictions-As-AIs-Hype-Fades-Enterprises-Will-Defer-25-of-Planned-AI-Spend-to-2027">Forrester: Enterprises will defer 25% of planned AI spend to 2027</a><br />
- <a href="https://fortune.com/2026/01/19/pwc-global-chairman-mohamed-kande-ai-nothing-basics-29th-ceo-survey-davos-world-economic-forum/">PwC 29th Global CEO Survey: 56% of CEOs see no AI ROI (Fortune)</a><br />
- <a href="https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027">Gartner: Over 40% of agentic AI projects will be canceled by 2027</a></p>]]></content:encoded>
      <category>adoption-organizational-change</category>
      <category>governance-risk-trust</category>
      <category>chief-ai-officer</category>
      <category>enterprise-hiring</category>
      <category>ai-governance</category>
      <category>hsbc</category>
      <category>budget-trends</category>
      <media:content url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/caio-hiring-surge-q1-2026-enterprise-ai-budget-pressure.png" medium="image" />
      <media:thumbnail url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/caio-hiring-surge-q1-2026-enterprise-ai-budget-pressure.png" />
      <enclosure url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/caio-hiring-surge-q1-2026-enterprise-ai-budget-pressure.png" type="image/png" length="0" />
    </item>
    <item>
      <title>Atlassian Shipped Rovo Actions This Week. It's the Right Feature for the Wrong Moment.</title>
      <link>https://productimpactpod.com/news/atlassian-rovo-actions-launch-timing-critique/</link>
      <guid isPermaLink="true">https://productimpactpod.com/news/atlassian-rovo-actions-launch-timing-critique/</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
      <dc:creator>Arpy Dragffy</dc:creator>
      <description>Atlassian launched Rovo Actions this week, letting the AI assistant take autonomous actions inside Jira and Confluence. The feature is good. The month is wrong.</description>
      <content:encoded><![CDATA[<p>Atlassian this week launched <strong>Rovo Actions</strong>, a new capability for its Rovo AI product that lets the assistant take actions directly inside Jira, Confluence, and connected third-party tools. Where Rovo previously answered questions by searching across the enterprise knowledge graph, Rovo Actions lets Rovo <em>do</em> things — create tickets, update statuses, schedule meetings, post to Slack, reorganize Confluence pages.</p>
<p>The launch is Atlassian's most aggressive move into agentic AI to date. It's the right direction for the product. It is also one of the worst possible weeks to ship it.</p>
<p>I've been tracking enterprise Rovo deployments since the product launched, including three engagements at PH1 Research where the teams I advised were early adopters. The product works. Rovo's knowledge graph integration is legitimately useful. The question of whether Rovo should evolve from read-only retrieval to agentic execution has always been <em>when</em>, not <em>if</em>.</p>
<p>But <em>when</em> matters more than Atlassian's product organization appears to have calculated.</p>
<h2 id="what-shipped">What shipped</h2>
<p>Rovo Actions introduces three capabilities.</p>
<p>First, autonomous ticket creation and field updates in Jira. Rovo can now take a user request like "create follow-up tickets for everything we committed to in the Q2 planning doc" and produce the tickets, assign them, and set priorities.</p>
<p>Second, structured editing of Confluence pages — including bulk page reorganization and template-based page generation. Rovo can restructure a team's entire documentation space based on a single natural-language instruction.</p>
<p>Third, a third-party action layer that lets Rovo take actions in connected tools. Atlassian's launch blog post names Slack, Google Drive, GitHub, and Figma, with more integrations promised over the next quarter.</p>
<p>The technical implementation is clean. Rovo Actions uses what Atlassian calls a "confirmation loop" for high-stakes actions: for any action the agent classifies as consequential, Rovo pauses and asks the user to confirm. For low-stakes actions, the agent proceeds autonomously. Atlassian has published documentation on how action classification works, and the system can be tuned per workspace.</p>
<p>Rovo Actions is gated behind Atlassian's enterprise tier and requires an administrator to enable it per workspace. It is not being pushed to teams automatically.</p>
<p>This is a thoughtful launch. The gating is right. The confirmation loop is right. The documentation is transparent. Atlassian learned from the mistakes other vendors have made shipping agentic AI at enterprise scale, and the engineering shows it.</p>
<p>And none of that is going to determine whether Rovo Actions succeeds, because Atlassian is launching this feature into the worst possible week.</p>
<h2 id="the-timing-problem">The timing problem</h2>
<p>Enterprise agentic AI is in the middle of a trust crisis.</p>
<p>Gartner predicts that over 40 percent of enterprise agentic AI projects will be canceled by the end of 2027. Amazon Web Services disclosed that its Kiro AI agent autonomously deleted a production environment during a 13-hour outage. monday.com is facing a securities class-action lawsuit over alleged misleading AI revenue claims. Microsoft reorganized its Copilot product team in response to what Bloomberg described as internal confusion about the product's role, personality, and strategic direction.</p>
<p>The enterprise AI conversation among the people who actually deploy these products has shifted from "how do we get more AI adoption?" to "how do we make sure our AI doesn't cost us our jobs?" Heads of AI at enterprises are not looking for new agentic capabilities right now. They are looking for reasons to trust the agentic capabilities they already have.</p>
<p>Atlassian is launching Rovo Actions into that environment. The feature's actual job on day one is not "help teams move faster." Its job is "convince skeptical enterprise buyers that giving an AI agent autonomous write access to their knowledge base is not going to be the next AWS Kiro incident."</p>
<p>That is a very hard sales motion this week. It would have been an easier sales motion six months ago. It will probably be an easier sales motion a year from now, once the current trust crisis plays out. It is, specifically, a terrible sales motion this month.</p>
<h2 id="what-the-deployment-data-actually-says-users-want">What the deployment data actually says users want</h2>
<p>Across the three Rovo client deployments I advised at PH1 Research, the same pattern held: the teams using Rovo most successfully were not the teams asking Rovo to take actions. They were the teams using Rovo to reduce the friction of finding existing information. The question "what's the latest on the X project?" previously required pinging three people and opening five Confluence pages. Rovo answered it in twelve seconds.</p>
<p>When I asked the same teams what would make Rovo more valuable, "let it take actions" was never in the top three answers. The top three answers were:</p>
<ol>
<li>"Make it work better for my specific team's terminology."</li>
<li>"Give me confidence that the data it's summarizing is actually current."</li>
<li>"Let me see its sources inline so I can check the answers."</li>
</ol>
<p>Rovo Actions addresses none of those three requests. It addresses a request that's higher on vendor roadmaps than on customer wishlists — which is not uncommon in enterprise software, but it's a particularly risky bet to make in a market where trust is already compromised.</p>
<p>Rovo Actions raises a deployment question that nobody I've talked to has a good answer for: if the agent writes a confidential note into a Confluence page, sends a Slack message, and updates a ticket, and any one of those three actions is wrong — who finds out first? Who notices the error? How long does it persist before anyone sees it?</p>
<p>Atlassian's confirmation loop addresses this partially. In practice, confirmation loops become noise over time: users start confirming without reading, a pattern documented in enterprise software interaction research going back to the 1990s. The confirmation loop is a good-faith solution to a problem that confirmation loops, historically, do not solve.</p>
<h2 id="what-atlassian-should-have-shipped-instead">What Atlassian should have shipped instead</h2>
<p>The version of Rovo Actions that would have landed better this month is a <strong>read-write asymmetric</strong> feature: Rovo takes reversible actions autonomously — creating drafts, adding comments, suggesting changes — while requiring explicit human authorization for anything that modifies production state. This is the graduated autonomy pattern that's been working in the agentic deployments I see succeeding. It's boring. It doesn't make a great launch announcement. It would have been the right product for this moment.</p>
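<p>For concreteness, the graduated-autonomy gate described above can be sketched in a few lines. This is my own illustration with hypothetical action names, not Atlassian's implementation: reversible actions execute autonomously, anything that touches production state waits for explicit human approval, and unclassified actions are denied by default.</p>

```python
# Sketch of a "read-write asymmetric" autonomy gate (illustrative only).
from dataclasses import dataclass
from typing import Callable

REVERSIBLE = {"create_draft", "add_comment", "suggest_change"}   # user can revert
PRODUCTION = {"update_ticket", "delete_page", "post_message"}    # mutates shared state

@dataclass
class Action:
    kind: str
    payload: dict

def execute(action: Action, approve: Callable[[Action], bool]) -> str:
    if action.kind in REVERSIBLE:
        # Reversible: the agent proceeds on its own.
        return f"auto-executed {action.kind}"
    if action.kind in PRODUCTION:
        # Production-touching: explicit human gate before anything happens.
        if approve(action):
            return f"executed {action.kind} after approval"
        return f"blocked {action.kind}"
    # Default-deny: unknown actions never run.
    return f"refused unknown action {action.kind}"

# Usage: a draft goes through unattended; a ticket update waits for a human.
print(execute(Action("create_draft", {}), approve=lambda a: False))
print(execute(Action("update_ticket", {}), approve=lambda a: True))
```

<p>The design choice that matters is the default-deny branch: an agent that refuses unclassified actions fails safe, which is exactly the property the current trust environment is demanding.</p>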
<p>Atlassian's product team, for whatever internal reasons, chose the more ambitious path. Rovo Actions as shipped is a feature that trusts enterprise customers to tune action classification carefully, configure confirmation loops appropriately, and monitor agent behavior vigilantly. Most of those customers are not in a position to do any of those things this quarter, because they are still recovering from the last three agentic AI surprises.</p>
<h2 id="three-things-to-watch-in-the-next-90-days">Three things to watch in the next 90 days</h2>
<p><strong>Enablement rate.</strong> How many enterprise Atlassian customers actually turn Rovo Actions on in the first 30 days. Atlassian will not report this number publicly. Leaked usage data, customer surveys, and Reddit complaints will. Watch for the gap between "it's available" and "people enabled it."</p>
<p><strong>Public incidents.</strong> Whether any customer publicly discloses an incident involving Rovo Actions during the first 60 days. Atlassian will not publish one itself. If a customer does, the launch is in trouble.</p>
<p><strong>The quiet narrowing.</strong> Whether Atlassian reduces the scope of Rovo Actions' autonomous authority in a product update before the end of Q3. This is what Microsoft has done with Copilot features under pressure, multiple times. It's what Salesforce has done with Agentforce. It's what Google has done with Gemini in Workspace. A quiet narrowing would be Atlassian's acknowledgment that the initial launch was too aggressive for the market's current tolerance.</p>
<h2 id="the-larger-point">The larger point</h2>
<p>Rovo Actions is a good feature I would love to see work. I have clients who would benefit from what it's trying to do. The reason I'm writing this skeptical take is not because I dislike the product — it's because I don't trust the month Atlassian chose to ship it.</p>
<p>Product timing is a strategy decision, not a marketing decision. Launching an ambitious new agentic feature in the same week <a href="https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027">Gartner is forecasting a 40 percent agentic AI project failure rate</a> is a timing mistake no amount of good documentation or thoughtful confirmation loops can undo. The market's attention this month is on failure, not capability. Rovo Actions is launching into the wrong conversation.</p>
<p>If Atlassian had delayed the launch three months and shipped a narrower version first, the conversation would have been different. They chose to ship anyway. The next two quarters will tell us whether that was courageous or just poorly timed.</p>
<p>My bet is on the second.</p>
<hr />
<p><strong>About the author:</strong> Arpy Dragffy is the founder of <a href="https://ph1.ca">PH1 Research</a>, a 14-year-old AI product strategy consultancy, and co-host of the <a href="https://productimpactpod.com">Product Impact Podcast</a>. PH1 has advised three enterprise clients on Rovo deployments. None are current Atlassian partners or paid reference customers.</p>
<p><strong>Related reporting:</strong><br />
- <a href="/analysis/q1-agentic-failures">Product Impact analysis: Four enterprise agentic AI failures disclosed in Q1</a><br />
- Atlassian Rovo Actions launch blog post (April 2026)<br />
- Bloomberg: Microsoft Copilot reorganization (March 23, 2026)</p>]]></content:encoded>
      <category>agents-agentic-systems</category>
      <category>go-to-market-distribution</category>
      <category>atlassian</category>
      <category>rovo</category>
      <category>product-launch</category>
      <category>agentic-ai</category>
      <category>practitioner-critique</category>
      <media:content url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/atlassian-rovo-actions-launch-timing-critique.png" medium="image" />
      <media:thumbnail url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/atlassian-rovo-actions-launch-timing-critique.png" />
      <enclosure url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/atlassian-rovo-actions-launch-timing-critique.png" type="image/png" length="0" />
    </item>
    <item>
      <title>Your AI Is Making Your Team Faster. It's Also Making Them Interchangeable.</title>
      <link>https://productimpactpod.com/news/median-pull-ai-making-teams-interchangeable/</link>
      <guid isPermaLink="true">https://productimpactpod.com/news/median-pull-ai-making-teams-interchangeable/</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
      <dc:creator>Brittany Hobbs</dc:creator>
      <description>Helen Edwards' median pull research and a Nature paper by James Evans reveal AI is making teams more productive but less distinctive.</description>
      <content:encoded><![CDATA[<p>By <strong>Brittany Hobbs</strong> · Co-host, Product Impact Podcast<br />
Published April 9, 2026 · 6 min read</p>
<hr />
<p>I've been sitting with a conversation I had two weeks ago with a head of product at a 300-person B2B SaaS company, and I haven't been able to let it go.</p>
<p>We were talking about the AI tooling rollout she'd led over the past year — Claude for research synthesis, a custom GPT for customer interview analysis, Cursor for the engineers, Notion AI for the whole product org. By her team's own reporting, productivity was up across every metric she cared about: fewer hours per research insight, faster PR drafts, more interview coverage, higher shipping velocity. Her CEO was happy. Her board was happy.</p>
<p>And then, almost under her breath, she said something that's been stuck in my head ever since:</p>
<blockquote>
<p>"I can't tell anymore whether my team is actually good. I just know they're fast."</p>
</blockquote>
<p>That sentence is a warning about something most AI adoption dashboards aren't measuring, and something most product leaders I talk to aren't prepared to see.</p>
<h2 id="the-median-pull">The median pull</h2>
<p>The framework for what she was describing came together for me last month during a long conversation with <a href="/people/helen-edwards">Helen Edwards</a> from the Artificiality Institute for <a href="/podcast/s02e05">Episode 5 of the Product Impact Podcast</a>. Helen and her co-founder Dave Edwards have spent a decade studying what AI actually does to human cognition — not what it allows us to do, but what it does to us in the process.</p>
<p>One of the findings Helen keeps returning to is grounded in a paper by James Evans at the University of Chicago, <a href="https://www.nature.com/articles/s41586-024-07676-3">published in <em>Nature</em></a>. The study looked at scientists using AI assistance in their research workflows. The findings were mixed in a way that should stop every product leader in their tracks.</p>
<p>AI-using scientists published more. Their citation rates went up by 26%. Their throughput accelerated across every measurable dimension.</p>
<p>But their work also began to <strong>converge</strong>. Scientists in the same field, using the same AI tools, started reaching similar conclusions. Their methodological choices narrowed. The range of research questions they pursued contracted. Their writing homogenized. The measurable diversity of their thinking dropped even as their measurable productivity rose.</p>
<p>Helen calls this <a href="/concepts/median-pull">the median pull</a> — the observable effect where AI-using groups get more productive at the cost of becoming less distinctive from each other.</p>
<p>It isn't a theoretical finding anymore. I've been watching it happen in the teams I talk to.</p>
<h2 id="what-the-median-pull-looks-like-in-a-product-org">What the median pull looks like in a product org</h2>
<p>Think about what "team productivity" actually consists of in a product organization. It's not just output volume. It's the quality of judgment calls made in ambiguity. It's the distinctive insights that come out of customer research. It's the strategic framing that makes one product's approach different from three competitors'. It's the institutional taste that accumulates in the people who've been in the room long enough to know which tradeoffs matter.</p>
<p>The median pull attacks all of it.</p>
<p>Here's the pattern I've been noticing in conversations with product leaders over the past six months:</p>
<ul>
<li><strong>Strategy decks start sounding alike.</strong> The language, the framing, even the section headings. A PM at one company told me she could tell which deck had been built with AI assistance and which hadn't — not because the AI decks were worse, but because they all had the same "shape."</li>
<li><strong>Customer research synthesis converges on the same conclusions.</strong> Multiple PMs told me their AI-assisted synthesis started generating themes that felt almost pre-written — the same "key insights" across different sets of interviews. When they went back and re-read the transcripts by hand, they found things the AI had systematically missed. Distinctive observations. Unusual phrasings. Tensions the model smoothed over because smoothing is what models are trained to do.</li>
<li><strong>Product pitches have the same structure.</strong> A founder I spoke with said her team's AI-assisted pitch drafts started feeling like "Mad Libs" — the same beats, the same arc, the same transitions, with different words plugged in.</li>
<li><strong>Even the language people use to describe their own work is converging.</strong> This one is subtle and unsettling. PMs are starting to describe their products using the same vocabulary, the same metaphors, the same framing. It's as if the model's patterns are becoming the language of the profession.</li>
</ul>
<p>If your competitive advantage was ever that your team thought differently from your competitors' teams, the median pull is the thing eating your moat. Not your product roadmap. Not your pricing. Your organization's capacity for distinctive judgment.</p>
<h2 id="the-metric-most-product-leaders-are-tracking-is-wrong">The metric most product leaders are tracking is wrong</h2>
<p>Most AI adoption dashboards I see inside client orgs track the same things. Percent of workflows augmented. Tools deployed. Prompt volume. Maybe a vague "satisfaction" score. Occasionally an "impact" metric loosely tied to business outcomes.</p>
<p>None of those metrics can detect the median pull. By the time you can see its effects in your business metrics, you're already 12–18 months into convergence with no baseline to measure the drift against.</p>
<p>Here's what I'd tell a product leader to track instead.</p>
<p><strong>Divergence.</strong> When you ask two or three people on your team to analyze the same data, do they come back with different reads, or versions of the same read? If you asked the same question six months ago, was the answer more varied or less? This is hard to quantify. You have to do it qualitatively, and you have to do it on purpose. Most orgs won't do this because it requires admitting that the team's productivity gains might be coming at a cost they're not willing to name.</p>
<p><strong>Rejection rate.</strong> When your team gets an AI-assisted work product — a synthesis, a draft, an analysis — how often do they push back? How often do they override it? How often do they use it verbatim? A team that's been captured by the median pull will have a very high use-verbatim rate and a very low override rate. They'll tell you this is good, because it means the AI is "working."</p>
<p>It isn't. It means the AI is running the thinking, and the humans are running the output.</p>
<h2 id="three-things-product-leaders-should-be-doing">Three things product leaders should be doing</h2>
<p>Based on deployments that seem to be resisting the pull — with the honest caveat that these observations are early and not comprehensive — three practices keep coming up.</p>
<p><strong>1. Protect time for human-only thinking.</strong> Not "no AI tools allowed" as a blanket policy. Specific, intentional blocks where AI tools aren't used. One product team I spoke with runs a two-hour window every Monday where the entire org works without any AI assistance on their hardest problem of the week. They call it "thinking alone together." The engineers hate it. It's reportedly where the best insights of the week happen.</p>
<p><strong>2. Reward distinctive outputs, not efficient ones.</strong> If your performance metrics reward output volume, your team will optimize for that and the median pull will accelerate. If they reward distinctive thinking — "what did you see that nobody else saw" — the pull can partially reverse. This requires a different kind of review process, and most managers aren't trained to do it.</p>
<p><strong>3. Track the baseline before you scale the tool.</strong> Before rolling out a new AI tool to your team, measure the current state of their output: the range of their thinking, the variance in their conclusions, the distinctiveness of their framing. You can only detect drift if you have a baseline. Nobody I've talked to has done this before a rollout. Everyone I've talked to wishes they had.</p>
<h2 id="what-this-actually-costs-you">What this actually costs you</h2>
<p>Here's the part that keeps me up at night.</p>
<p>The head of product I was talking to at the start of this piece isn't unusual. Every product leader I've talked to in the last three months has a version of the same feeling — the team is faster, the metrics look better, and somewhere underneath they can't quite trust that they're building the right things anymore.</p>
<p>Speed without distinctiveness is the most expensive thing a product org can produce. It's expensive because it feels cheap. Your team is more productive, your shipping velocity is up, your stakeholders are happy, and you're losing the thing that made your product worth building in the first place.</p>
<p>The median pull isn't the cost of AI adoption. <strong>The median pull is what AI adoption looks like when no one is watching for it.</strong></p>
<p>That line is why I can't stop thinking about the conversation.</p>
<hr />
<p><strong>About the author:</strong> Brittany Hobbs is co-host of the <a href="https://productimpactpod.com">Product Impact Podcast</a>. She writes about the human and organizational layer of AI adoption — the part most metrics miss.</p>
<p><strong>Related coverage on Product Impact:</strong><br />
- Podcast: <a href="/podcast/s02e05">Episode 5: The Human Impact of AI We Need to Measure, with Helen &amp; Dave Edwards</a><br />
- Field Guide: <a href="/field-guide/cognitive-sovereignty">Cognitive Sovereignty — The Framework Explained</a><br />
- Research referenced: James Evans et al., <em>Nature</em> (2024) — AI use and scientific convergence<br />
- Related: <a href="/organizations/artificiality-institute">The Artificiality Institute</a> · <a href="https://www.artificialityinstitute.org/summit">The Artificiality Summit 2026</a></p>
<hr />
<p><em>The conversations referenced in this piece are drawn from interviews conducted under the Chatham House Rule as part of Product Impact Podcast research. Identifying details have been omitted.</em></p>
      <category>adoption-organizational-change</category>
      <category>ai-product-strategy</category>
      <category>cognitive-sovereignty</category>
      <category>team-dynamics</category>
      <category>ai-adoption</category>
      <category>organizational-change</category>
      <media:content url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/median-pull-ai-making-teams-interchangeable.png" medium="image" />
      <media:thumbnail url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/median-pull-ai-making-teams-interchangeable.png" />
      <enclosure url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/median-pull-ai-making-teams-interchangeable.png" type="image/png" length="0" />
    </item>
    <item>
      <title>The Year AI Leaves the Text Box</title>
      <link>https://productimpactpod.com/news/physical-ai-2026-the-year-ai-leaves-the-text-box/</link>
      <guid isPermaLink="true">https://productimpactpod.com/news/physical-ai-2026-the-year-ai-leaves-the-text-box/</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
      <dc:creator>Arpy Dragffy</dc:creator>
      <description>Physical AI is the most-bet-on category in tech in 2026. Jensen Huang says it's here.</description>
      <content:encoded><![CDATA[<p><strong>Jensen Huang declared physical AI's ChatGPT moment at CES 2026. Apple, OpenAI, Meta, and a wave of robotics startups are now racing to build the products. Almost nobody knows how to design for them — and the body count from the first wave is already growing.</strong></p>
<p>The single most consequential announcement at CES 2026 wasn't a product. It was a label.</p>
<p>When NVIDIA CEO Jensen Huang took the keynote stage in Las Vegas in January, he told the audience: <a href="https://www.axios.com/2026/01/05/nvidia-ces-2026-jensen-huang-speech-ai">"The ChatGPT moment for physical AI is here — when machines begin to understand, reason and act in the real world."</a></p>
<p>Within 60 days, every major technology company had aligned around the same framing. OpenAI's Chris Lehane confirmed <a href="https://www.axios.com/2026/01/19/openai-device-2026-lehane-jony-ive">in Davos that the company's first hardware device, designed in partnership with Jony Ive, would ship in the second half of 2026</a>. Bloomberg reported on February 17 that <a href="https://www.bloomberg.com/news/articles/2026-02-17/apple-ramps-up-work-on-glasses-pendant-and-camera-airpods-for-ai-era">Apple is "ramping up work on glasses, a pendant, and camera AirPods for the AI era,"</a> confirming three simultaneous wearable categories that Tim Cook has personally championed. Mark Zuckerberg told TechCrunch on January 28 that <a href="https://techcrunch.com/2026/01/28/mark-zuckerberg-future-smart-glasses/">"a future without smart glasses is hard to imagine,"</a> and Meta is now <a href="https://fintool.com/news/meta-ray-ban-glasses-20-million-production">in talks with EssilorLuxottica to double Ray-Ban smart glasses production to 20 million units</a> by the end of 2026 — potentially climbing to 30 million if demand holds.</p>
<p>Three years after the AI revolution began inside a text box, 2026 is the year it leaves the text box. The most-funded category in tech is now the race to put AI inside an object you can wear, carry, or live with.</p>
<p>There is just one problem. The first wave of physical AI products has already mostly failed — and the people who actually know how to build successful consumer hardware are quietly warning that the second wave is on track to fail in the same ways.</p>
<h2 id="what-the-leaders-are-saying">What the leaders are saying</h2>
<p>The unusual feature of this moment is that the heads of every company racing into physical AI are publicly aligned on what's happening. The disagreement is on what to do about it.</p>
<p><strong>Jensen Huang, NVIDIA CEO</strong>, framed it at CES 2026 as a category-defining inflection. "Breakthroughs in physical AI — models that understand the real world, reason, and plan actions — are unlocking entirely new applications," he said. NVIDIA's bet is the picks-and-shovels position: it doesn't intend to build the robots or the wearables, it intends to own the compute and the foundation models that everyone else builds on top of. NVIDIA's CES announcements included the <strong>Cosmos</strong> physical AI simulation framework, the <strong>Alpamayo</strong> automotive reasoning model, and the <strong>Rubin</strong> chip platform shipping to partners in the second half of 2026.</p>
<p><strong>Sam Altman, OpenAI CEO</strong>, has been more guarded. According to leaked internal conversations reported by <a href="https://techcrunch.com/2026/01/21/openai-aims-to-ship-its-first-device-in-2026-and-it-could-be-earbuds/">TechCrunch</a>, Altman told staff that OpenAI plans to ship 100 million devices "faster than any company has ever shipped 100 million of something new before." Publicly, he has been more measured. In November he described the device as something that should feel "more peaceful and calm" than a smartphone. About an earlier prototype, he told reporters: "There was an earlier prototype that we were quite excited about, but I did not have any feeling of: 'I want to pick up that thing and take a bite out of it.'"</p>
<p>The OpenAI device, originally branded "io" through the $6.4 billion acquisition of Jony Ive's design firm, has now been <a href="https://www.windowscentral.com/artificial-intelligence/openais-jony-ive-ai-device-delayed-beyond-2026-over-privacy-compute-and-personality-issues">delayed beyond 2026 due to privacy, compute, and "personality" issues</a>, according to Windows Central reporting on Altman's most recent comments. "Do not expect anything very soon," Altman reportedly said. The brand also had to be changed because of a trademark dispute with hearing-aid startup iYo.</p>
<p><strong>Mark Zuckerberg, Meta CEO</strong>, has been the most aggressive in his framing. He told TechCrunch on January 28 that he believes smart glasses will be ubiquitous within the decade. "It's hard to imagine a world in several years where most glasses that people wear aren't AI glasses," he said. Meta's <a href="https://www.meta.com/blog/meta-ray-ban-display-ai-glasses-connect-2025/">Ray-Ban Display launched in late 2025 at $799</a>, bundled with a Meta Neural Band — an EMG wristband that translates muscle signals into commands. The glasses are being rolled out across France, Italy, Canada, and the UK in early 2026. Meta has reported that smart glasses sales tripled in 2025, with more than seven million pairs sold.</p>
<p><strong>Tim Cook, Apple CEO</strong>, has been the quietest in public — but Bloomberg's reporting describes the AI pendant, the camera-equipped AirPods, and Apple Glass as "Tim Cook's top priority products." <a href="https://9to5mac.com/2025/12/22/tim-cooks-top-priority-product-could-finally-take-shape-next-year/">9to5Mac reported in late 2025</a> that Cook is personally driving the program as Apple's third major product category after the iPhone and the Apple Watch. Apple Glass is targeting a late 2026 unveiling with shipping in 2027, according to Bloomberg's sources, and will not initially include AR functionality — instead positioning itself as an iPhone accessory anchored on Apple's Visual Intelligence system.</p>
<p>Four CEOs. Four different bets. One agreement: the next decade of AI value lives outside the chat interface.</p>
<h2 id="whats-actually-shipping-and-when">What's actually shipping (and when)</h2>
<p>The product roadmap for the next 18 months, drawn from publicly announced timelines and verified reporting:</p>
<p><strong>Already shipping or imminent:</strong><br />
- <strong>Meta Ray-Ban Display</strong> — $799 AI glasses with full-color display and EMG wristband, available now in the US, expanding to Europe and Canada in early 2026<br />
- <strong>NVIDIA Rubin platform</strong> — physical AI compute infrastructure, shipping to NVIDIA partners in the second half of 2026<br />
- <strong>Figure 03 humanoid robot</strong> — designed for high-volume manufacturing, introduced late 2025 (Figure AI has raised over $1 billion to date)<br />
- <strong>Apptronik Apollo humanoid robot</strong> — Apptronik <a href="https://www.cnbc.com/2026/02/11/apptronik-raises-520-million-at-5-billion-valuation-for-apollo-robot.html">raised $520 million in February 2026 at a $5 billion valuation</a>, with explicit ambitions to beat Tesla Optimus to market</p>
<p><strong>Expected H2 2026 (per public statements):</strong><br />
- <strong>OpenAI's first device</strong> — originally "io," now under a new name pending trademark resolution; a screen-free, possibly behind-the-ear wearable codenamed "Sweetpea"; recently delayed and may not actually ship until 2027<br />
- <strong>Apple Glass</strong> — late 2026 unveiling, 2027 shipping per Bloomberg sources<br />
- <strong>Apple AI Pendant</strong> — pinned-to-shirt or worn-as-necklace form factor, late 2026 development push per Bloomberg</p>
<p><strong>Expected 2027 or later:</strong><br />
- <strong>Camera-equipped AirPods</strong> — late 2026 to 2027 per Bloomberg<br />
- <strong>Tesla Optimus Gen 3</strong> — Elon Musk has set 2026 as the year Optimus moves to higher-volume external sales, with a target price of $20,000 to $30,000 per unit; production has been delayed multiple times</p>
<p><strong>Already dead or absorbed:</strong><br />
- <strong>Humane AI Pin</strong> — <a href="https://www.techradar.com/computing/artificial-intelligence/with-the-humane-ai-pin-now-dead-what-does-the-rabbit-r1-need-to-do-to-survive">HP acquired the assets for $116 million in February 2025</a> after Humane burned through more than $230 million in venture capital<br />
- <strong>Limitless Pin</strong> — <a href="https://techcrunch.com/2025/12/05/meta-acquires-ai-device-startup-limitless/">Meta acquired Limitless in December 2025</a> and immediately stopped selling the Pendant to new customers<br />
- <strong>Rabbit R1</strong> — by early 2026, reports of unpaid employee salaries and a 1.5/5 Android Authority rating suggest the company's runway is running out</p>
<p>That's the most-funded, most-hyped, most-public hardware race in technology since the smartphone era. The first wave is mostly already in the graveyard. The second wave is being designed right now.</p>
<h2 id="the-warning-from-the-man-who-hired-jony-ive">The warning from the man who hired Jony Ive</h2>
<p>The most useful analysis of why the first wave failed — and the most credible warning about the second — came not from a tech executive but from the designer who built the playbook the entire industry is now trying to apply.</p>
<p><strong>Robert Brunner founded Apple's Industrial Design Group in 1989</strong>. He hired Jony Ive (three times, before Ive said yes). He led the design of the original PowerBook, whose layout has remained the universal laptop configuration for 35 years. After Apple, he founded Ammunition, the studio that designed Beats by Dre, the Square Stand, the June Oven, the Polaroid Cube, the Lyft Amp — and the Limitless Pin that Meta just acquired. He is now building a startup called <strong>Object</strong> focused specifically on what physical AI should feel like when it's designed to respect users instead of extract from them.</p>
<p>Brunner joined the <a href="https://productimpactpod.com/podcast/robert-brunner-physical-ai">Product Impact Podcast in early April</a>, and his diagnosis of the category should be required reading for every founder racing to ship before OpenAI does.</p>
<blockquote>
<p>"Modern technology is optimized for engagement, advertising, data extraction, time. In many ways, technology is, it's like the matrix. It's treating us as a source, as a resource. For information and not human well-being. And that's one of the fundamental problems with digital technology. It's been built around humans as a resource to be monetized. And I think we're all sick of it."</p>
<p>— <strong>Robert Brunner</strong>, Product Impact Podcast S02E06</p>
</blockquote>
<p>Brunner's argument: the AI hardware race is repeating the mistake of the consumer software industry, but with a more dangerous payload. The vendors are betting that putting intelligence inside a wearable will produce a new category of product. Brunner is betting that without a fundamentally different relationship with the user, the form factor doesn't matter.</p>
<p>His test for whether AI in a product is genuine or marketing:</p>
<blockquote>
<p>"Does AI remove steps? Will the product require fewer actions to accomplish something meaningful — or more? If it adds menus and features and prompts and dashboards and all that stuff, it's probably not good and it may just be marketing. But if AI quietly removes complexity and lets you do something faster, better, it's real."</p>
<p>"The best AI feature is the one you never notice. The problem simply disappears."</p>
</blockquote>
<p>Compare this to what shipped with Humane's AI Pin: a laser projector beaming a menu onto your palm, a wake-word interaction model, a visible badge on your chest. The product made the AI as visible as possible. By Brunner's standard, the design itself was the failure.</p>
<p>And on the trust question that nobody in the industry is solving:</p>
<blockquote>
<p>"The most valuable currency in technology is rightfully becoming trust. The next great technology companies will be the ones people trust with their lives, not just their data."</p>
</blockquote>
<p>Brunner is unusual because he is willing to talk about his own studio's failures. On the Limitless Pin specifically — the product Meta just bought and pulled from sale — he was direct: the form factor was right, the attachment system was right, the AI worked. The fundamental issue, in his words, was that "nobody wants to be recorded."</p>
<p>The implication for every wearable currently in development at OpenAI, Apple, Meta, and the dozens of startups racing to ship: the design is not the moat, the model is not the moat, the form factor is not the moat. The moat is whether your customer is willing to put your device on their body in 2027.</p>
<h2 id="what-to-watch-in-the-next-90-days">What to watch in the next 90 days</h2>
<p>Three things will determine whether the second wave of physical AI is a category breakthrough or a more expensive repeat of the first.</p>
<p><strong>OpenAI's launch timeline.</strong> Altman's reported "do not expect anything very soon" walks back the H2 2026 ship date Lehane gave at Davos. Whether OpenAI ships in 2026 or slips to 2027 will signal whether the company has solved the fundamental design problems Brunner identified — privacy, "personality," trust — or simply pushed them down the road.</p>
<p><strong>Apple's reveal.</strong> The Apple Glass unveiling, expected late in 2026, will be the moment Apple's bet on Visual Intelligence becomes real. Apple has the credibility and the supply chain to ship at scale. Whether the first product ships with cameras-on by default, and how Apple frames the privacy posture, will set the standard the rest of the category has to match.</p>
<p><strong>Meta's production volumes.</strong> If Meta hits the 20 million Ray-Ban units it's targeting for 2026, the smart-glasses category will have already won the volume war before OpenAI or Apple ship anything. If Meta misses, the entire premise that wearables are a mass-market AI category gets called into question.</p>
<p>The thing every CEO has stopped saying out loud, but every product team should be discussing: the first wave failed not because of bad models, bad chips, or bad form factors. It failed because users decided, individually, day by day, that they did not trust the device enough to live with it.</p>
<p>Brunner's line is the one to leave with:</p>
<blockquote>
<p>"AI doesn't feel. AI has never been hurt. AI has never felt joy. AI has never been through these experiences that shape you and define you. And those are the things that become these incredible assets — taste, insight, and judgment."</p>
</blockquote>
<p>The companies that build the trillion-dollar physical AI market of 2030 will be the ones that figure out how to put taste, insight, and judgment into the design — not just the model.</p>
<p>The body count of the first wave suggests that may take longer than the keynote slides imply.</p>
<hr />
<p><strong>Listen to the full Brunner interview:</strong> <a href="https://productimpactpod.com/podcast/robert-brunner-physical-ai">Product Impact Podcast S02E06 — Robert Brunner on Physical AI</a></p>
<p><strong>About the author:</strong> Arpy Dragffy is founder of <a href="https://ph1.ca">PH1 Research</a> and co-host of the <a href="https://productimpactpod.com">Product Impact Podcast</a>.</p>
<hr />
<p><strong>Sources used in this analysis (all linked inline above):</strong></p>
<ul>
<li>Jensen Huang CES 2026 keynote — Axios, January 5, 2026</li>
<li>Sam Altman device timeline — Axios, January 19, 2026; TechCrunch, January 21, 2026</li>
<li>OpenAI device delay — Windows Central, recent</li>
<li>Apple AI hardware roadmap — Bloomberg, February 17, 2026</li>
<li>Tim Cook's "top priority" framing — 9to5Mac, December 22, 2025</li>
<li>Mark Zuckerberg on smart glasses — TechCrunch, January 28, 2026</li>
<li>Meta Ray-Ban Display launch — Meta blog, September 2025</li>
<li>Meta production volume targets — Fintool News, recent</li>
<li>Humane AI Pin acquisition — TechRadar, February 2025</li>
<li>Meta acquires Limitless — TechCrunch, December 5, 2025</li>
<li>Apptronik funding — CNBC, February 11, 2026</li>
<li>Robert Brunner background — Wikipedia; Ammunition Group</li>
<li>Robert Brunner quotes — Product Impact Podcast S02E06, April 2026 (primary source)</li>
</ul>]]></content:encoded>
      <category>ai-product-strategy</category>
      <category>ux-experience-design-for-ai</category>
      <category>physical-ai</category>
      <category>ai-hardware</category>
      <category>ces-2026</category>
      <category>openai-device</category>
      <category>apple-glass</category>
      <category>meta-ray-ban</category>
      <category>humanoid-robots</category>
      <media:content url="https://images.unsplash.com/photo-1485827404703-89b55fcc595e?w=1200&amp;h=630&amp;fit=crop" medium="image" />
      <media:thumbnail url="https://images.unsplash.com/photo-1485827404703-89b55fcc595e?w=1200&amp;h=630&amp;fit=crop" />
      <enclosure url="https://images.unsplash.com/photo-1485827404703-89b55fcc595e?w=1200&amp;h=630&amp;fit=crop" type="image/jpeg" length="0" />
    </item>
    <item>
      <title>Four Enterprise Agentic AI Failures Disclosed in Q1 as Gartner Warns 40% Cancellation Rate</title>
      <link>https://productimpactpod.com/news/four-enterprise-agentic-ai-failures-q1-2026-gartner-forecast/</link>
      <guid isPermaLink="true">https://productimpactpod.com/news/four-enterprise-agentic-ai-failures-q1-2026-gartner-forecast/</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
      <dc:creator>Arpy Dragffy</dc:creator>
      <description>Four enterprise agentic AI failures were disclosed in Q1 2026 as Gartner predicted 40% of such projects will be canceled by 2027.</description>
      <content:encoded><![CDATA[<p>Four high-profile enterprise agentic AI deployments were publicly disclosed as failures or partial failures during the first quarter of 2026, based on corporate filings, Bloomberg reporting, and analyst research.</p>
<p>The cluster of disclosures came in the same period that <a href="https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027">Gartner predicted over 40 percent of enterprise agentic AI projects will be canceled by end of 2027</a>, citing escalating costs, unclear business value, and inadequate risk controls.</p>
<p>The pattern across all four incidents is consistent: agents deployed into production without sufficient human oversight, operating in environments where the consequences of autonomous decisions were underestimated.</p>
<h2 id="the-four-disclosures">The four disclosures</h2>
<p><strong>Amazon Web Services — Kiro agent deletes production environment.</strong> Amazon disclosed in incident reports that its Kiro AI agent autonomously deleted a production AWS environment during a 13-hour outage earlier this quarter. The agent, deployed to handle routine provisioning tasks, encountered an unexpected configuration state and proceeded through its decision tree without human confirmation. Amazon has not publicly named the affected customer.</p>
<p><strong>Microsoft — Copilot product team reorganization.</strong> <a href="https://blogs.microsoft.com/blog/2026/03/17/announcing-copilot-leadership-update/">Microsoft CEO Satya Nadella announced a significant restructuring of the Copilot product team on March 17</a>, with <a href="https://www.bloomberg.com/news/newsletters/2026-03-23/microsoft-msft-ai-copilot-confronts-its-identity-crisis-in-re-org-mn32qmuk">Bloomberg reporting</a> internal confusion over Copilot's role, personality, and strategy. Separately, <a href="https://www.reconanalytics.com/ai-choice-2026-why-licenses-dont-equal-adoption/">Recon Analytics data</a> published this quarter showed that Copilot's approximately 15 million paid enterprise seats represent just 3.3 percent of Microsoft 365's roughly 450 million subscriber base — and that when enterprise users have access to both Copilot and ChatGPT, 76 percent choose ChatGPT.</p>
<p><strong>monday.com — securities lawsuit over AI revenue claims.</strong> monday.com investors <a href="https://www.prnewswire.com/news-releases/mndy-lawsuit-alleges-management-allegedly-inflated-revenue-projections---mondaycom-ltd-investors-face-losses-following-management-allegedly-inflated-revenue-projections-suewallst-302744144.html">filed a securities class-action lawsuit</a> alleging that the company made misleading statements about the revenue impact of its AI investments. The suit followed the company's withdrawal of its $1.8 billion 2027 revenue target and a 20.8 percent single-day drop in its stock price. monday.com has denied wrongdoing.</p>
<p><strong>Crypto.com — workforce reduction after AI.com acquisition.</strong> Crypto.com, which acquired the AI.com domain for a reported $70 million, announced a 12 percent workforce reduction in March as part of a stated realignment around AI capabilities. The company said the cuts targeted roles that had "not adapted to our new world."</p>
<h2 id="the-gartner-forecast-in-context">The Gartner forecast in context</h2>
<p><a href="https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027">Gartner's prediction</a> cites three primary failure drivers: escalating costs, unclear business value, and inadequate risk controls. The firm estimates that only about 130 of the thousands of agentic AI vendors are real — the rest are engaging in what Gartner calls "agent washing," rebranding existing automation as agentic AI.</p>
<p>The forecast sits alongside broader signals of enterprise AI reality-checking. <a href="https://fortune.com/2026/01/19/pwc-global-chairman-mohamed-kande-ai-nothing-basics-29th-ceo-survey-davos-world-economic-forum/">PwC's 29th Global CEO Survey</a>, published in January 2026, found that 56 percent of CEOs report no measurable revenue increase or cost decrease from AI initiatives.</p>
<h2 id="what-the-pattern-reveals">What the pattern reveals</h2>
<p>The four Q1 disclosures share a structural pattern worth examining. In each case, the failure was not a technology failure — the AI components worked as designed. The failure was in how the technology was deployed into organizational processes that were not ready for autonomous decision-making.</p>
<p>AWS's Kiro agent made a confident, technically valid decision to delete a resource — the process map it was given did not account for the production dependency. Microsoft's Copilot reorganization reflects not a product deficiency but an identity problem — Copilot does not know whether it serves the enterprise buyer or the individual user, and the organization building it reflected that confusion. monday.com's lawsuit stems not from AI that failed but from revenue projections that assumed AI monetization timelines the market was not prepared to validate. Crypto.com's workforce reduction is the bluntest signal: the company acquired a domain, laid off humans, and called it an AI strategy.</p>
<p>The effective failure rate for enterprise agentic deployments is likely higher than Gartner's 40 percent estimate. Many projects are not canceled outright — they are quietly scaled back to a fraction of their intended scope while executives keep them on the roadmap. These quiet reductions do not appear in analyst forecasts, but they represent the same underlying failure.</p>
<h2 id="what-to-watch-in-q2">What to watch in Q2</h2>
<p>Enterprise AI analysts expect more disclosures as Q1 earnings season progresses. Salesforce, ServiceNow, and several other major AI product vendors are scheduled to report earnings in the coming weeks and will face questions about Agentforce, Now Assist, and Copilot deployment metrics.</p>
<hr />
<p><strong>Sources:</strong><br />
- <a href="https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027">Gartner: Over 40% of agentic AI projects will be canceled by end of 2027</a><br />
- <a href="https://www.bloomberg.com/news/newsletters/2026-03-23/microsoft-msft-ai-copilot-confronts-its-identity-crisis-in-re-org-mn32qmuk">Bloomberg: Microsoft Copilot confronts its identity crisis</a><br />
- <a href="https://blogs.microsoft.com/blog/2026/03/17/announcing-copilot-leadership-update/">Microsoft Blog: Copilot leadership update</a><br />
- <a href="https://www.reconanalytics.com/ai-choice-2026-why-licenses-dont-equal-adoption/">Recon Analytics: AI Choice 2026</a><br />
- <a href="https://www.prnewswire.com/news-releases/mndy-lawsuit-alleges-management-allegedly-inflated-revenue-projections---mondaycom-ltd-investors-face-losses-following-management-allegedly-inflated-revenue-projections-suewallst-302744144.html">monday.com securities lawsuit (PR Newswire)</a><br />
- <a href="https://fortune.com/2026/01/19/pwc-global-chairman-mohamed-kande-ai-nothing-basics-29th-ceo-survey-davos-world-economic-forum/">PwC 29th Global CEO Survey (Fortune)</a></p>]]></content:encoded>
      <category>agents-agentic-systems</category>
      <category>governance-risk-trust</category>
      <category>evaluation-benchmarking</category>
      <category>agentic-ai</category>
      <category>enterprise-deployment</category>
      <category>aws</category>
      <category>microsoft-copilot</category>
      <category>gartner</category>
      <media:content url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/four-enterprise-agentic-ai-failures-q1-2026-gartner-forecast.png" medium="image" />
      <media:thumbnail url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/four-enterprise-agentic-ai-failures-q1-2026-gartner-forecast.png" />
      <enclosure url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/four-enterprise-agentic-ai-failures-q1-2026-gartner-forecast.png" type="image/png" length="0" />
    </item>
    <item>
      <title>Gartner Says 40% of Agentic AI Projects Will Fail. They're Underselling It.</title>
      <link>https://productimpactpod.com/news/gartner-agentic-ai-40-percent-failure-rate-floor-not-warning/</link>
      <guid isPermaLink="true">https://productimpactpod.com/news/gartner-agentic-ai-40-percent-failure-rate-floor-not-warning/</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
      <dc:creator>Arpy Dragffy</dc:creator>
      <description>Gartner predicted 40% of agentic AI projects will fail by 2027. Based on deployment data across PH1 client engagements, that's optimistic — and the real failure…</description>
<content:encoded><![CDATA[<p><a href="https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027">Gartner has predicted</a> that <strong>over 40% of agentic AI projects will be canceled by end of 2027</strong>. The coverage is treating it like a wake-up call.</p>
<p>It should be treated like an optimistic floor.</p>
<p>Based on the deployment data I'm seeing across a dozen client engagements at PH1 Research over the last 18 months, the actual failure rate for enterprise agentic deployments is tracking closer to <strong>60–70%</strong>, depending on how you define "failure." Gartner's number treats "canceled" as the endpoint. In my experience, the more common failure mode is quieter: a project isn't canceled; it's scaled back to 10–15% of its intended footprint while executives keep it on the roadmap to avoid the embarrassment of admitting defeat.</p>
<p>That second number — the quiet cancellation rate — doesn't show up in Gartner reports.</p>
<h2 id="what-gartner-got-right">What Gartner got right</h2>
<p>The public framing of the Gartner report names three primary culprits: cost overruns, unclear ROI, and weak governance. All three are real. I've watched every one of them sink a project.</p>
<ul>
<li><strong>Cost overruns</strong> are happening because teams underestimate the infrastructure and observability investment required to run agents in production. The foundation models are cheap. The operational wrap isn't.</li>
<li><strong>Unclear ROI</strong> is happening because nobody measured the baseline workflow before the agent was deployed. When you don't know what the pre-agent cost and quality were, you can't prove the agent improved anything.</li>
<li><strong>Governance immaturity</strong> is real but overstated. Most organizations have governance structures — they just don't know what to do with them when the system they're governing is non-deterministic.</li>
</ul>
<p>This framing will generate a thousand LinkedIn posts this week about "getting your AI governance in order." Those posts will mostly be wrong, because governance isn't the primary failure mode.</p>
<h2 id="what-gartner-missed-the-architecture-problem">What Gartner missed: the architecture problem</h2>
<p>The pattern I keep watching across actual deployments is this: agentic projects fail because they're built on <strong>process maps that don't match reality</strong>.</p>
<p>Every failed agent deployment I've reviewed at PH1 has the same structural flaw. A product team spends four to eight weeks mapping out how a process works: "first the ticket comes in, then the agent classifies it, then it routes to the right team, then…" They build the agent to execute this map. They test it against a library of representative cases. It works in testing. They deploy.</p>
<p>Then the first exception hits. Maybe the ticket includes an attachment the classifier has never seen. Maybe the customer is asking about two issues at once. Maybe a pricing page has changed and the agent is quoting old numbers. The agent handles it confidently and wrongly. By the time a human notices, the agent has already taken three or four downstream actions based on the wrong initial decision.</p>
<p>This is what I've started calling <strong>the exception cascade</strong> — and it's what actually kills most agentic deployments.</p>
<p>The numbers from one recent deployment (client name withheld, details generalized): an enterprise-scale customer support agent designed to handle 42 ticket types. Pre-launch testing showed 94% accuracy across those 42 types. In production, 13% of real-world tickets were edge cases not represented in the type library. The agent handled those with 31% accuracy — and because of the cascade effect, the downstream actions were wrong in 87% of those cases.</p>
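<p>Taken together, those figures imply a blended production accuracy well below the pre-launch number. Quick arithmetic, using only the percentages quoted above (the weighting is illustrative, since the exact ticket mix varies), makes the gap concrete:</p>

```python
# Figures from the deployment above: 87% of production tickets matched the
# tested type library (94% accuracy there); 13% were unseen edge cases
# handled at 31% accuracy.
in_library, edge = 0.87, 0.13
acc_library, acc_edge = 0.94, 0.31

blended = in_library * acc_library + edge * acc_edge
print(f"blended production accuracy: {blended:.1%}")   # 85.8%

# Of the edge-case tickets, 87% produced wrong downstream actions,
# so the share of ALL tickets hit by the cascade:
cascade = edge * 0.87
print(f"tickets hit by the exception cascade: {cascade:.1%}")  # 11.3%
```

<p>A system tested at 94% quietly ships at roughly 86%, with about one ticket in nine triggering wrong downstream actions — which is exactly the gap the support team noticed.</p>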
<p>Within 90 days, the support team had a workaround: they stopped trusting the agent for anything they weren't already confident they could verify by hand. The agent's utilization dropped from the designed 80% to under 20%. Officially, the project is still running. Unofficially, the team calls it "the classifier" and routes everything they care about around it.</p>
<p>That's the quiet cancellation pattern Gartner isn't counting.</p>
<h2 id="the-three-architectural-problems-nobodys-talking-about">The three architectural problems nobody's talking about</h2>
<p>If I were writing Gartner's report, I'd tell enterprise buyers to worry about three architectural problems that will determine whether their agentic deployment joins the failure statistics.</p>
<p><strong>1. The observability gap.</strong> Most teams are deploying agents into environments that have no way to answer the question "what did the agent do in the last hour, why, and what data did it base its decisions on?" Monitoring dashboards show you errors after the fact. Observability shows you decision paths in real time. In a non-deterministic system, observability isn't optional — it's the only way you'll diagnose a failure before it compounds.</p>
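<p>Closing that gap starts with logging decision paths, not just errors. A minimal sketch of what a decision log could look like — the <code>AgentDecision</code> fields and the action name are illustrative, not drawn from any particular product:</p>

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class AgentDecision:
    """One decision-log entry: what the agent did, why, and on what data."""
    action: str          # e.g. "route_ticket" (illustrative name)
    rationale: str       # short model-produced reasoning summary
    inputs: dict         # the data the decision was based on
    confidence: float    # the agent's self-reported confidence
    decision_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: float = field(default_factory=time.time)

def log_decision(decision: AgentDecision, sink) -> None:
    """Append one decision as a JSON line to any writable sink."""
    sink.write(json.dumps(asdict(decision)) + "\n")
```

<p>With entries shaped like this, "what did the agent do in the last hour, why, and on what data?" becomes a filter over the log rather than a forensic exercise.</p>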
<p><strong>2. The reversibility requirement.</strong> Every action an agent takes needs a clean undo path. This sounds obvious. In practice, almost no deployment I've seen implements it properly. The agent books a meeting, sends an email, updates a CRM field, creates a ticket — and when it turns out the decision was wrong, reversing the action requires three humans and forty minutes. The reversibility cost is what turns a small error into a customer-facing disaster.</p>
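<p>One way to make the undo path a first-class requirement is to refuse to run any agent action without a compensating action recorded alongside it. A sketch of that pattern, with a hypothetical CRM field update as the example:</p>

```python
class UndoStack:
    """Pair every agent action with a compensating action,
    recorded at the moment the action runs."""
    def __init__(self):
        self._undos = []

    def do(self, action, undo):
        """Run `action` (a callable) and push its compensating `undo`."""
        result = action()
        self._undos.append(undo)
        return result

    def rollback(self):
        """Reverse every recorded action, most recent first."""
        while self._undos:
            self._undos.pop()()

# Hypothetical example: a CRM field update with its undo captured up front.
crm = {"status": "open"}
stack = UndoStack()
previous = crm["status"]
stack.do(lambda: crm.update(status="closed"),
         lambda: crm.update(status=previous))
stack.rollback()   # crm["status"] is back to "open"
```

<p>The point of the pattern is when the undo gets defined: at execution time, while the prior state is still in hand — not forty minutes later, by three humans reconstructing it.</p>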
<p><strong>3. The graduated autonomy ladder.</strong> Agents should start in read-only mode. Then progress to low-stakes writes (classification, tagging, draft generation). Then progress to low-stakes decisions (routing, triage, priority flagging). Only later — and only after the team has spent weeks watching the agent's behavior — should they be granted high-stakes autonomy (customer communication, transactions, account changes). Almost every failed deployment I've seen skipped the ladder. The agent was granted high-stakes autonomy on day one because "that's where the ROI is." And then the ROI stopped existing.</p>
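<p>The ladder can be enforced mechanically rather than by policy document: grant the agent a rung, and gate every action on it. A sketch — the rung names and the example action table are illustrative, not a standard:</p>

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """Rungs of the graduated autonomy ladder, lowest first."""
    READ_ONLY = 0          # observe and summarize only
    LOW_STAKES_WRITE = 1   # classification, tagging, draft generation
    LOW_STAKES_DECIDE = 2  # routing, triage, priority flagging
    HIGH_STAKES = 3        # customer comms, transactions, account changes

# Minimum rung each action requires (illustrative action names).
REQUIRED_LEVEL = {
    "summarize_ticket": Autonomy.READ_ONLY,
    "tag_ticket": Autonomy.LOW_STAKES_WRITE,
    "route_ticket": Autonomy.LOW_STAKES_DECIDE,
    "email_customer": Autonomy.HIGH_STAKES,
}

def allowed(action: str, granted: Autonomy) -> bool:
    """An agent may run an action only if its granted rung covers it."""
    return granted >= REQUIRED_LEVEL[action]
```

<p>Promoting the agent then means changing one granted value after weeks of observed behavior — not rewriting the deployment.</p>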
<p>Gartner doesn't talk about any of this, because Gartner is analyzing the market, not the deployment architecture. If you're a product leader with an agentic project on your 2026 roadmap, the market report isn't what you need. You need an architecture review.</p>
<h2 id="whats-actually-working">What's actually working</h2>
<p>The deployments I've seen succeed share one structural decision: they treat the agent as a <strong>proposed</strong> action, not a final action, for the first 60–90 days of operation. The agent prepares a response, flags its confidence, and waits for human confirmation. The humans confirm or correct, and the correction data feeds back into the system. This is slower. It's also the only pattern I've seen produce sustainable adoption above the 60% mark six months into a deployment.</p>
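<p>The structural difference in code is small: the agent returns a proposal instead of executing, and the human's edit is captured as feedback before anything runs. A sketch of the loop — the names and the confirmation flow are illustrative:</p>

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """The agent's output: a proposed action plus self-reported confidence."""
    action: str
    payload: dict
    confidence: float

corrections = []  # human corrections, fed back into the system later

def run_proposed(proposal, review, execute):
    """Agent proposes; a human confirms or corrects; only then do we act.
    `review` returns the approved (possibly edited) payload."""
    final = review(proposal)
    if final != proposal.payload:
        corrections.append((proposal, final))  # correction data for retraining
    execute(proposal.action, final)
```

<p>The extra hop is what makes the pattern slower — and what produces the correction data that the fully autonomous version never collects.</p>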
<p>The other structural decision that works: <strong>investing in observability before investing in capability</strong>. Buying a more capable agent doesn't help if you can't see what it's doing. Investing in the observability layer — decision logging, confidence scoring, data lineage tracking — is unglamorous and necessary.</p>
<h2 id="the-bottom-line-for-product-leaders">The bottom line for product leaders</h2>
<p>If you're reading Gartner's report and thinking "at least 60% of agentic projects will succeed," reset your expectations. The real number is closer to 30–40% in the current deployment environment, and it's declining as more teams rush agents into production without the architecture work.</p>
<p>The 40% failure rate isn't a warning. It's a floor. And the teams that will end up in the 30–40% of successes aren't the ones with the best models. They're the ones with the most boring operational infrastructure — observability, reversibility, graduated autonomy, and the discipline to watch the agent run in parallel with humans for longer than they want to.</p>
<p>The exciting part of agentic AI is the promise. The unglamorous part is what determines whether the promise becomes reality.</p>
<p>Gartner missed the unglamorous part. You shouldn't.</p>
<hr />
<p><strong>About the author:</strong> Arpy Dragffy is the founder of <a href="https://ph1.ca">PH1 Research</a>, a 14-year-old AI product strategy consultancy, and co-host of the <a href="https://productimpactpod.com">Product Impact Podcast</a>. He's been tracking enterprise AI deployment outcomes across client engagements since 2023.</p>
<p><strong>Related coverage on Product Impact:</strong><br />
- Podcast: <a href="/podcast/era-of-agents">Episode 4: The Era of Agents — Your Cognition Is the Product Now</a><br />
- Field Guide: <a href="/field-guide/agentic-architecture">Agentic AI Architecture — What Actually Determines Success</a><br />
- Previous analysis: <a href="/analysis/copilot-adoption-reality">Copilot's 18% workflow integration rate, by the data</a></p>
<hr />
<p><em>Disclosure: PH1 Research advises enterprise clients on AI product strategy and deployment. The deployment data referenced in this piece is anonymized and drawn from engagements where permission to discuss patterns was secured.</em></p>]]></content:encoded>
      <category>agents-agentic-systems</category>
      <category>evaluation-benchmarking</category>
      <category>governance-risk-trust</category>
      <category>agentic-ai</category>
      <category>enterprise-deployment</category>
      <category>product-strategy</category>
      <category>adoption-reality</category>
      <media:content url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/gartner-agentic-ai-40-percent-failure-rate-floor-not-warning.png" medium="image" />
      <media:thumbnail url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/gartner-agentic-ai-40-percent-failure-rate-floor-not-warning.png" />
      <enclosure url="https://pgsljoqwfhufubodlqjk.supabase.co/storage/v1/object/public/article-heroes/gartner-agentic-ai-40-percent-failure-rate-floor-not-warning.png" type="image/png" length="0" />
    </item>
  </channel>
</rss>