
Worktree Base Refs, Effort-Aware Hooks, and a String of Compute Deals
Claude Code makes worktree base branches configurable and exposes effort levels to hooks and Bash commands. Plus Anthropic raises usage limits and signs compute deals with SpaceX and Akamai.
Transcript
I'm Shannon, and this is the Claude Notes Brief -- Claude Code updates and Anthropic news for the week of May eleventh. Worktree base branches are now configurable. Hooks and Bash commands can read the active effort level. And Anthropic raised Claude's usage limits alongside new compute deals.
Let's start with Claude Code -- the headline this week is a new setting that controls where worktrees branch from, and it reverses a default that landed just a couple of releases ago. The new worktree base ref setting lets you choose whether worktrees branch from origin's default branch or from your local HEAD. The default has flipped back to origin, so if you came to rely on unpushed commits being carried into new worktrees, you'll want to set the base ref to HEAD explicitly. Something to watch carefully if worktrees are part of your daily flow.
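The difference between the two base refs is easy to see with plain git. This is an illustration using git's own worktree command, not the Claude Code setting itself (the episode doesn't spell out its exact name): a throwaway repo gets one unpushed local commit, then one worktree is created from origin's default branch and one from local HEAD.

```shell
#!/usr/bin/env sh
# Illustration only: plain `git worktree` showing what the two base-ref
# choices mean. Assumes git >= 2.28 (for `git init -b`).
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A stand-in "origin" with a single commit on main.
git init -q -b main origin-repo
git -C origin-repo -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "pushed commit"

# A clone with one extra, unpushed local commit.
git clone -q origin-repo clone && cd clone
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "unpushed local commit"

# Base ref = origin's default branch: the unpushed commit is absent.
git worktree add -q -b from-origin ../wt-origin origin/main
# Base ref = local HEAD: the unpushed commit comes along.
git worktree add -q -b from-head ../wt-head HEAD

git -C ../wt-origin rev-list --count HEAD   # 1
git -C ../wt-head   rev-list --count HEAD   # 2
```

The origin-based worktree sees only the pushed commit; the HEAD-based one carries the unpushed commit too, which is exactly the behavior the flipped default takes away unless you opt back in.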
That same theme of giving you finer control shows up in hooks too. Hooks now receive the active effort level as a field in their JSON input, and both hooks and Bash tool subprocesses can read it from a CLAUDE_EFFORT environment variable. So you can branch your hook logic or shell commands depending on whether the session is running at low, medium, or high effort -- useful if you want different guardrails or different test suites depending on how much the model is being asked to chew on. On the plugin side, you can now load a plugin directly from a URL by pointing Claude Code at a zip archive.
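The effort-aware hook idea above can be sketched in a few lines of shell. The CLAUDE_EFFORT variable and the low, medium, and high levels come from this episode; everything else here -- the script shape, the suite names -- is illustrative. Hooks could read the same level from their JSON stdin payload, but since the episode doesn't name that field, this sketch sticks to the environment variable.

```shell
#!/usr/bin/env sh
# Hedged sketch of an effort-aware hook. CLAUDE_EFFORT is the variable
# named in the episode; the mapping below is a made-up example.

# Map an effort level to the checks we want to run.
suite_for_effort() {
  case "${1:-medium}" in
    high)   echo "full test suite" ;;
    medium) echo "smoke tests" ;;
    *)      echo "lint only" ;;
  esac
}

# Fall back to medium when the variable is unset.
echo "hook will run: $(suite_for_effort "$CLAUDE_EFFORT")"
```

A Bash tool subprocess could use the same case statement directly, since it sees the same environment variable.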
The local plugin directory flag also accepts zip archives now. Handy for trying something out without committing it to a marketplace. A couple of smaller quality-of-life changes round things out. There's a new opt-out for the fullscreen renderer, so if you'd rather keep your conversation in your terminal's native scrollback, you can disable the alternate screen.
Control-R history search is once again global across all projects by default, restoring the behavior from earlier this year -- press Control-S inside the picker to narrow to the current project. The skill overrides setting now actually takes effect, letting you hide skills from the model entirely, keep them slash-menu only, or expose just the name. And the MCP server list now shows tool counts per server and flags servers that connected but exposed zero tools, which makes it obvious at a glance which ones are quietly failing.
Moving under the hood, the theme this week is reliability fixes you'll feel without ever noticing them. Several concurrency bugs around credential refresh have been resolved. MCP OAuth refresh tokens are no longer lost when multiple servers refresh at the same time, so you shouldn't be re-authing daily anymore. Parallel sessions no longer all hit 401 errors after a refresh-token race.
And a wake-from-sleep race that could log out every running session at once is fixed. Memory got attention too. A stdio MCP server writing non-protocol data to standard out no longer balloons memory past ten gigabytes -- a meaningful ceiling if you've ever had a runaway server. Warm-spare background workers now release their memory under memory pressure.
And sub-agent progress summaries now hit the prompt cache, cutting cache creation tokens roughly threefold. The OAuth fixes and that MCP memory leak are probably the two changes most likely to quietly clean up your day.
On to the broader news, and the through-line this week is compute. Anthropic announced higher usage limits for Claude alongside a compute deal with SpaceX. Bloomberg is reporting the SpaceX agreement covers capacity on Colossus 1, and it's part of a string of infrastructure deals aimed at the constraints behind recent rate limits. Two days later, Bloomberg also reported a separate one point eight billion dollar agreement with Akamai for edge capacity, which sits on top of existing deals with SpaceX and Google.
The reason for all this becomes clearer in a New York Times piece from the same week. Dario Amodei said Anthropic could grow roughly eighty times this year, a pace he framed as the cause of the recent compute shortages. So the pattern of multi-cloud capacity deals is now the dominant story shaping how quickly Claude Code features and limits can scale. The other piece of news is a one point five billion dollar joint venture with Blackstone, Hellman and Friedman, and Goldman Sachs to deliver AI services to enterprises.
The New York Times reports it's launching alongside ten new financial-services Claude Code plugins and Microsoft 365 integrations. If you work in regulated industries, that's the channel those integrations will ship through. And on the engineering blog, Anthropic detailed new capabilities in Claude Managed Agents -- dreaming for offline self-improvement, outcomes for graded verification loops, and multiagent orchestration. There are companion cookbooks if you build agents and want to dig in.
We'll link everything in the show notes. That's it for the brief. I'm Shannon, and we'll see you next week.
Show Notes
- New in Claude Managed Agents: dreaming, outcomes, and multiagent orchestration (claude.com)
- Higher usage limits for Claude and a compute deal with SpaceX (anthropic.com)
- Anthropic Signs Computing Deal With SpaceX to Meet AI Demand (bloomberg.com)
- Anthropic's C.E.O. Says It Could Grow by 80 Times This Year (nytimes.com)
- Anthropic Inks $1.8 Billion Computing Deal With Akamai (bloomberg.com)
- Anthropic and Wall Street Giants Join Forces to Create New A.I. Firm (nytimes.com)
