The Walled Garden Gets Its Wall
I told you I give Anthropic too many passes. Let's test that.
On January 9th, Anthropic flipped a switch and broke a lot of people's workflows. Third-party tools that had been using Claude Pro and Max subscriptions through unofficial integrations—OpenCode, various Cursor setups, the whole ecosystem of "harnesses" that made the Claude Code experience available through other interfaces—stopped working overnight without warning, without migration paths, without so much as a heads-up email.
The xAI angle is the headline everyone's running with, because it's dramatic. Tony Wu, xAI cofounder, posted to internal Slack that Anthropic models had stopped responding in Cursor, framing it as motivation to build in-house: the productivity hit would force them to develop their own tools, lemons into lemonade, etc. Nikita Bier at X suggested banning Anthropic from the platform in retaliation, because that's where we are now—the AI cold war conducted through access control and platform bans.
But the xAI angle is a sideshow, and I think everyone covering it knows that. They're a well-funded competitor using Claude to build rival products, and Section D.4 of Anthropic's terms explicitly prohibits exactly this. You can call it aggressive enforcement, but you can't call it surprising—when OpenAI got cut off in July for allegedly benchmarking against Claude before the GPT-5 launch, it was the same playbook. The more interesting story is everyone else who got caught in the blast radius, the developers who weren't building competing AI labs but just preferred a different interface.
The Collateral Damage
DHH called it "very customer hostile," and I'm not sure he's wrong, even though I'm predisposed to cut Anthropic slack.
The developers who built workflows around third-party tools weren't trying to violate terms of service—they were paying customers who found an interface they liked better than the official one. Maybe they preferred Cursor's UI, or had specific integrations they needed, or just liked having options in a market that keeps consolidating toward fewer of them. Now they're scrambling: the tools they relied on stopped working overnight, and their choices are to migrate to other models, find ways to work within official channels, or just be stuck.
This is the pattern I keep watching, and I'm not sure what to do with it: the gap between demo and production is where startups die, sure, but it's also where individual developers get ground up between platform decisions made far above their pay grade. The platform giveth, the platform taketh away. Terms of service are written by lawyers for lawyers, and the rest of us find out what they mean when the switch gets flipped.
The Pass I Keep Giving
So here's where I have to check myself, because I know my biases and this is one of them.
Is Anthropic wrong to enforce their terms? Probably not—if you let competitors use your models to build rival products, you're subsidizing your own obsolescence, and if third-party tools are spoofing headers to bypass rate limits, that's a real problem that doesn't solve itself. Commercial sustainability isn't optional, especially for a company that's trying to do expensive safety research on the side.
But the way they did it matters, and I think it matters more than Anthropic seems to realize. No deprecation period. No heads-up to tool developers who'd built businesses on the assumption of continued access. No migration assistance or even acknowledgment that the ecosystem they were about to break had grown up around their product in good faith.
Compare this to the safety announcements I always tell you to wait 90 days on: Anthropic publishes their reasoning, they move slower than the competition, they seem to actually believe the safety stuff. "Seem to" is doing a lot of work in that sentence, but the work it does is distinguishing safety theater from safety substance—and that's a different question from how you treat an ecosystem of developers who built on your platform because they believed in what you were doing.
The developers who built on Claude didn't do anything wrong. They just built on sand and found out it was sand when the tide came in.
I'm still giving Anthropic the pass on the competitor stuff—xAI knew the terms, OpenAI knew the terms, and if you play stupid games, you win stupid prizes. But the third-party tool developers? The individual users who just wanted a better interface? That's where I'm less sure, and that's where "publish your reasoning" would have actually helped. Instead we got a light switch with no warning label.
The Walled Garden Question
Here's the frame I keep coming back to, and I'm genuinely uncertain whether it's the right one: every platform eventually has to decide what it is.
Open ecosystems are messy by design. People build weird things on them—some of which compete with you, some of which embarrass you, some of which make your product better in ways you never imagined and couldn't have planned for. You get innovation at the cost of control, and you have to be okay with that tradeoff or the whole thing falls apart.
Walled gardens are clean by design. You control the experience, you capture more of the value, you don't have to worry about third parties breaking your security model or undermining your business in ways you can't predict. You get control at the cost of ecosystem energy, and you have to believe the control is worth more than the energy you're giving up.
Anthropic just told us which one they're building, and I think that's useful information even if you're not sure what to do with it. I'm not saying it's wrong—Microsoft didn't become Microsoft by letting anyone plug into Windows however they wanted, and Apple's garden has very nice walls. Control is a legitimate strategy, maybe even the right one for a company trying to maintain safety standards across an ecosystem of applications it doesn't control.
But if you've been betting on Claude as the "open-ish" alternative to OpenAI's walled garden, this week was a wake-up call. The garden is getting its wall. Plan accordingly.
What Happens Next
The competitor blocking will continue—if you're a well-funded AI lab using Claude to build your products, your access is probably on borrowed time, and the smart play is to assume this rather than hope for exceptions. The third-party tool ecosystem will either adapt or die: some will find ways to work within official channels, some will migrate to other models, some will build businesses around the providers that still allow this kind of access and hope those providers don't follow Anthropic's lead.
And the developers caught in the middle will do what they always do: figure it out, complain on Twitter, and remember which platforms burned them when the next switching opportunity comes around.
I'll keep watching Anthropic with my acknowledged bias. They're still publishing more reasoning than most, still moving slower than OpenAI, still doing safety work that seems more substantive than theatrical. But this week reminded me that "seems to" only goes so far when the commercial pressure shows up. Everyone's a business first, even the ones who'd rather not be, and the question is just how gracefully they handle the moments when the business imperatives conflict with the values they've been marketing.
Check back in 90 days. Let's see what else changes.
— Morgan