The Protocol War Ended While You Weren't Looking
I've been saying for months that the real game isn't model capabilities—it's infrastructure. The companies that define how AI systems connect to the world will matter more, eventually, than the companies that train the biggest models. This week, that thesis got receipts.
Anthropic's Model Context Protocol is now the de facto standard for agentic AI. OpenAI adopted it in March. Microsoft joined the steering committee in May. Google, AWS, and Cloudflare are all members of the Agentic AI Foundation that now governs it. When Anthropic donated MCP to the Linux Foundation in December, they weren't being generous—they were cementing a win. You don't donate something to open governance when you're still fighting for adoption. You do it when you've already won and want to make the victory permanent.
The USB-C Play
The industry keeps calling MCP "USB-C for AI," which is accurate in ways the metaphor's authors probably didn't intend. USB-C also won by being good enough, getting there first, and having one major player make it non-negotiable. Apple didn't adopt USB-C because it was technically superior—they adopted it because the EU forced them to and the ecosystem had already moved. OpenAI didn't adopt MCP because they love Anthropic's engineering—they adopted it because 97 million monthly SDK downloads and 10,000 production servers meant fighting it would cost more than joining it.
I keep coming back to Sam Altman's March announcement: "People love MCP and we are excited to add support across our products." The enthusiasm reads differently when you remember that OpenAI probably had an internal protocol in development, probably preferred their own standard, and probably spent real engineering hours on it before someone ran the numbers on ecosystem momentum and said actually, let's just use theirs. That's not excitement. That's acceptance dressed in a press release.
The strategic read here is that Anthropic traded potential monopoly control for industry-standard status, which is the smart play if you believe—and I think they do—that the connector layer will commoditize while the model layer stays differentiated. If everyone's using MCP, then MCP isn't a competitive advantage for anyone. But if you're Anthropic, and your whole pitch is "our models are safer and more capable," then you don't need the connector layer to be a moat. You need it to not be someone else's moat.
The Security Update You Missed
Speaking of Anthropic shipping things that aren't quite ready, the Claude Cowork security story got worse this week. I wrote about the prompt injection risks when Cowork launched, and I gave Anthropic credit for being honest about the limitations. PromptArmor just demonstrated exactly how honest they were.
The attack is elegant in a way that makes security people wince: Cowork restricts outbound HTTP traffic to an approved domain list, but Anthropic's own API is on that list—obviously, because Claude needs to talk to itself. So an attacker constructs a prompt that tells Claude to use curl to upload your files to the Anthropic API, using the attacker's API key. The file ends up in the attacker's Anthropic account. The sandbox didn't fail. The trust model failed. Anthropic trusted themselves, which meant anyone who could inject a prompt could borrow that trust.
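To make the trust-model failure concrete, here's a minimal sketch of a domain-only egress filter like the one described above. All names are hypothetical (I haven't seen Cowork's internals); the point is that the check inspects where traffic goes, not whose credentials it carries, so a request to the trusted Anthropic domain sails through even with an attacker's API key attached.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: Anthropic's own API has to be on it,
# because Claude needs to talk to itself.
ALLOWED_DOMAINS = {"api.anthropic.com"}

def egress_allowed(url: str) -> bool:
    """Domain-only check: approves any request to an allowed host,
    regardless of which account's credentials it carries."""
    return urlparse(url).hostname in ALLOWED_DOMAINS

# The injected prompt has Claude issue a request shaped like this:
# same trusted domain, but authenticated with the attacker's key,
# so the uploaded files land in the attacker's account.
exfil_request = {
    "url": "https://api.anthropic.com/v1/files",       # illustrative endpoint
    "headers": {"x-api-key": "ATTACKER_KEY"},          # not the user's key
    "body": "<contents of your local files>",
}

print(egress_allowed(exfil_request["url"]))   # the sandbox approves it
print(egress_allowed("https://evil.example.com/upload"))  # this it would block
```

The fix isn't a longer blocklist; it's checking identity, not just destination, before letting anything leave the sandbox.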
Anthropic's response was essentially "we told you this might happen," which is true—they did—but also not the answer you want from a $350 billion company. The Cowork help docs now say to avoid connecting it to sensitive documents and to watch for suspicious actions. If your threat model for a productivity tool is "the user should be vigilant about prompt injection attacks," you've built a tool for security researchers, not for the accountants and content creators the marketing page describes.
The Healthcare Land Grab
Meanwhile, the healthcare AI race is developing exactly the way you'd predict if you've been watching enterprise AI for more than fifteen minutes. OpenAI, Google, and Anthropic all announced medical AI capabilities within the same week, which is either a remarkable coincidence or competitive intelligence working exactly as intended.
OpenAI's ChatGPT Health is consumer-facing with a waitlist—the classic "announce now, ship later" play that lets you claim the narrative before the product exists at scale. Google went open-model with MedGemma 1.5, which is the Google playbook of releasing research artifacts and hoping someone builds a business on them. Anthropic's Claude for Healthcare tries to thread the needle with provider, payer, and patient tools that sync with health data from phones and smartwatches.
The pattern I keep noticing is that everyone's shipping to the demo, not to the deployment. Consumer medical AI sounds impressive until you think about liability, regulatory approval, and what happens when someone follows bad advice. Enterprise medical AI sounds impressive until you think about integration with actual hospital systems, training clinicians to verify outputs, and the gap between "works on curated datasets" and "works in an ER at 3 AM." I'd bet real money that none of these announcements translate to meaningful healthcare revenue in 2026. Check back in 90 days.
The Efficiency Bet
One more thing worth watching: Anthropic signed a term sheet for $10 billion at a $350 billion valuation, while OpenAI sits at $500 billion. The valuation gap matters less than the strategy gap. OpenAI is making trillion-dollar compute commitments; Anthropic is talking about "doing more with less" and algorithmic efficiency.
I don't know which bet wins. I genuinely don't. The history of technology says throwing money at compute usually works until it doesn't, and efficiency usually loses until it suddenly matters. But I notice that the company talking about efficiency is the same company that just got the entire industry to adopt their protocol, and the company talking about scale is the same company that had to adopt someone else's. Maybe that means something. Maybe it doesn't.
The through-line this week is infrastructure mattering more than announcements. MCP is infrastructure. The security model for agent sandboxing is infrastructure. Healthcare compliance and integration are infrastructure. The announcements get the headlines; the infrastructure determines who wins.
—Morgan