Anthropic announced a partnership with SpaceX under which it will use the full compute capacity of SpaceX’s Colossus 1 data center. The deal gives Anthropic access to more than 300 megawatts of new capacity, representing over 220,000 Nvidia GPUs, available within the month. xAI confirmed the partnership separately, describing Colossus 1 as one of the world’s largest and fastest-deployed AI supercomputers.
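A quick back-of-the-envelope check of the announced figures: 300 MW spread across 220,000 GPUs works out to roughly 1.4 kW per GPU. Note that this is facility power per GPU, which folds in cooling, networking, and other data-center overhead the announcement does not break out:

```python
# Sanity check on the announced figures:
# >300 MW of new capacity across >220,000 Nvidia GPUs.
capacity_watts = 300e6   # 300 megawatts
gpu_count = 220_000

watts_per_gpu = capacity_watts / gpu_count
print(f"{watts_per_gpu:.0f} W per GPU")  # ≈ 1364 W, including facility overhead
```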

The immediate effect is a capacity expansion for Claude’s agent infrastructure. Anthropic doubled Claude Code’s five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans, removed the peak-hours limit reductions on Claude Code for Pro and Max accounts, and raised API rate limits for the Claude Opus models, all effective the day of the announcement.

The Compute Stack

Colossus 1 was originally built by xAI to power its own Grok models. As Tom’s Hardware reported, the entire first-generation cluster now powers one of xAI’s direct rivals while the company focuses on building Colossus 2. Elon Musk reportedly said “No one set off my evil detector” regarding the Anthropic deal.

The SpaceX agreement is one component of a broader compute-acquisition strategy. According to Anthropic’s announcement, the company has also secured an agreement with Amazon for up to 5 gigawatts (including nearly 1 GW of new capacity by the end of 2026), a 5 GW agreement with Google and Broadcom coming online in 2027, a strategic partnership with Microsoft and Nvidia that includes $30 billion of Azure capacity, and a $50 billion investment in American AI infrastructure with Fluidstack.

HotHardware noted that Amazon committed an additional $5 billion in investment, with up to $20 billion more possible in the future, as part of the broader compute arrangement.

Orbital Compute and Agent Capacity

The most forward-looking detail in Anthropic’s announcement: “As part of this agreement, we have also expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity.” No timeline or technical details were provided, but the mention signals that frontier AI labs are already planning compute infrastructure beyond terrestrial data centers.

For agent builders, the capacity expansion has immediate practical impact. Claude Code is the primary tool developers use to build, test, and deploy agent systems on Claude. The doubled rate limits and the removal of peak-hour throttling directly affect how quickly teams can iterate on agent architectures. The Opus API rate limit increases affect production agent deployments that depend on Claude’s most capable model for reasoning-intensive tasks.
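Even with higher limits, production agent deployments still have to handle the occasional rate-limit response gracefully. The sketch below shows the standard exponential-backoff pattern such a client might use; `RateLimitError` and `call_with_backoff` are illustrative names for this example, not part of Anthropic’s SDK, and the request function is a stand-in for an actual API call:

```python
import time


class RateLimitError(Exception):
    """Stand-in for an HTTP 429 response from a model API."""


def backoff_delays(max_retries=5, base=1.0, cap=60.0):
    """Yield exponential backoff delays: base * 2**attempt, capped at `cap` seconds."""
    for attempt in range(max_retries):
        yield min(cap, base * (2 ** attempt))


def call_with_backoff(request_fn, max_retries=5, base=1.0, cap=60.0):
    """Call request_fn, sleeping with exponential backoff after each rate-limit error."""
    last_exc = None
    for delay in backoff_delays(max_retries, base=base, cap=cap):
        try:
            return request_fn()
        except RateLimitError as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

Real clients typically add random jitter to the delay and respect any `retry-after` hint the server returns; the fixed schedule here keeps the pattern easy to see.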

Anthropic framed the international expansion component through the lens of regulated industries. Enterprise customers in financial services, healthcare, and government need in-region infrastructure for compliance and data residency requirements. The Amazon collaboration includes additional inference capacity in Asia and Europe, and Anthropic said it is “intentional about where we’ll add capacity,” partnering only with democratic countries whose legal frameworks support investments at this scale.

The deal arrives one week after Anthropic released ten new agent plugins for financial services and insurance organizations and announced a $1.5 billion enterprise AI services company with Blackstone, Goldman Sachs, and Hellman & Friedman. The compute expansion, agent tooling, and enterprise services announcements form a coordinated push: Anthropic is building the full stack from silicon to deployment services, with agents as the delivery mechanism.