Elon Musk called Anthropic “doomed to become the opposite of its name.” He asked if there’s “a more hypocritical company than Anthropic.” In February, he wrote that “Anthropic hates Western Civilization.” And now he’s leasing them the entirety of Colossus 1 — over 220,000 NVIDIA GPUs and more than 300 megawatts of compute capacity.
If you’re sensing some whiplash, you’re not alone.
What Actually Happened
Anthropic signed an agreement with SpaceX to take over the full compute capacity of Colossus 1, xAI’s advanced AI supercomputer loaded with H100, H200, and GB200 accelerators. The facility was originally built for Grok, Musk’s own AI model — but Grok’s user base never grew into the capacity. Meanwhile, Anthropic’s demand exploded: 80x revenue and usage growth in Q1 2026, against a plan built for 10x. That gap created real problems. Claude users hit frustrating rate limits. API customers ran into capacity ceilings. Anthropic needed compute, fast.
Musk, fresh off the merger of SpaceX and xAI, had a massive idle asset burning cash just in time for SpaceX’s June S-1 filing. The math wrote itself.
The Tone Shift
Musk posted on X that he spent significant time with senior Anthropic team members over the past week and came away “impressed.” His words: “Everyone I met was highly competent and cared a great deal about doing the right thing. No one set off my evil detector.” He added that “so long as they engage in critical self-examination, Claude will probably be good.”
He also noted that xAI had already moved its own training operations to Colossus 2, and that SpaceX reserved the right to reclaim the compute if Anthropic’s AI “engages in actions that harm humanity.” A clause that’s simultaneously reassuring and very Elon.
One tech market researcher on X summed it up perfectly: “Elon’s enemy is Sam. Dario’s enemy is Sam. Enemy of my enemy is a compute partner.”
What This Means for Claude
The effects are already tangible. Anthropic announced it’s doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans. Peak-hours limit reductions — the ones that made afternoon coding sessions feel like rush-hour traffic — are gone for Pro and Max accounts. API rate limits for Claude Opus models are going up.
But the bigger picture matters more. Over 220,000 GPUs dedicated to Anthropic means serious headroom for training next-generation Claude models. This isn’t just about serving existing demand; it’s about what comes next. Multimodal capabilities, longer context windows, faster inference, more ambitious model architectures — all of these are gated by compute. Anthropic just removed a major bottleneck.
For anyone building on Claude’s API, this is straightforward good news. More capacity means fewer rate limits, more reliable throughput, and a stronger foundation for Anthropic to ship improvements without tripping over infrastructure constraints.
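Even with more headroom, well-behaved API clients still retry throttled requests rather than failing outright. A minimal sketch of that pattern, exponential backoff with jitter: the `RateLimitError` class and `flaky_call` function here are illustrative stand-ins for a real client’s 429 error and API call, not Anthropic SDK names.

```python
import time
import random

class RateLimitError(Exception):
    """Stand-in for the error an API client raises on a 429 response."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff plus jitter on rate-limit errors."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            # Wait base * 2^attempt seconds, plus jitter to avoid
            # every throttled client retrying at the same instant.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Demo: a fake call that gets throttled twice before succeeding.
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError("429: rate limited")
    return "ok"

result = with_backoff(flaky_call, base_delay=0.01)
print(result, attempts["n"])  # → ok 3
```

Higher rate limits just mean this retry path fires less often; the code shouldn’t change.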
Why We’re Watching This Closely
PressBot Pro uses Claude as its recommended AI provider for the admin agent — specifically because of Claude’s superior tool calling and reasoning capabilities. When you ask PressBot to run a security audit, bulk-generate featured images, or manage WooCommerce orders through natural conversation, it’s Claude’s models doing the heavy lifting across 83 admin tools.
So when Anthropic’s infrastructure improves, PressBot users feel it directly. Higher API rate limits mean the admin agent can handle more complex bulk operations without hitting ceilings. Better model throughput means faster responses when you’re managing your WordPress site through Telegram at 11 PM. And if this compute windfall accelerates the next Claude model release, PressBot will integrate it — just as we did with Claude Opus 4.6 and Sonnet 4.6.
Here’s a concrete example: a PressBot user running a WooCommerce store who asks the agent to “generate featured images for all products missing one” currently fires off sequential Gemini image generation calls coordinated by Claude’s orchestration. With higher Opus rate limits, that coordination layer gets faster and more reliable — especially for stores with hundreds of products.
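The shape of that bulk operation is simple to sketch: walk the products missing an image, generate one per product, and pace the calls to stay under rate limits. Everything below is hypothetical scaffolding, `products_missing_images` and `generate_featured_image` stand in for the real catalog query and image-generation call, which PressBot routes through its own tools.

```python
import time

# Hypothetical stand-ins for the real integrations: a catalog query
# and an image-generation call.
def products_missing_images():
    return [{"id": i, "name": f"Product {i}"} for i in range(1, 6)]

def generate_featured_image(product):
    # Real code would call the image API; here we fabricate a filename.
    return f"featured-{product['id']}.png"

def bulk_generate(pause=0.0):
    """Sequentially generate images, pausing between calls as a crude throttle."""
    results = {}
    for product in products_missing_images():
        results[product["id"]] = generate_featured_image(product)
        time.sleep(pause)  # shrink or drop this as rate limits rise
    return results

images = bulk_generate()
print(len(images))  # → 5
```

Higher Opus limits mostly let the `pause` shrink toward zero, which is where the speedup for large stores comes from.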
The Bigger Picture
This deal reshapes the competitive dynamics of the AI industry in a way nobody predicted six months ago. Musk supplying Anthropic’s compute — even indirectly through a lease — while actively suing OpenAI is the kind of plot twist that makes this space impossible to look away from. It also signals that the AI compute market is maturing into something more pragmatic. Idle GPUs don’t care about Twitter feuds. They care about utilization rates.
For Anthropic, this buys time and scale while they build out their own long-term infrastructure. For Musk, it turns a liability into revenue at exactly the right moment. For Claude users and developers building on top of these models — including us — it means better performance, fewer constraints, and a stronger platform to build on.
If you’re running a WordPress site and want to see what Claude-powered site management actually feels like with these improved rate limits, grab PressBot at pressbot.io and bring your own Anthropic API key. The compute just got a lot easier to come by.