
Bare Mode for Pipelines, MCP Channels, and Anthropic Overtakes OpenAI in Enterprise Revenue
Claude Code ships a new bare mode that strips it down for scripted pipelines and CI jobs. MCP channels let servers push messages into your session, VS Code gains remote control bridging, and responses now stream line by line. In the news, Anthropic overtakes OpenAI in enterprise revenue and Cursor plans its own rival model.
Transcript
I'm Shannon, and this is the Claude Notes Brief -- Claude Code updates and Anthropic news for the week of March twenty-third. A new bare mode strips Claude Code down for pipelines. Anthropic overtakes OpenAI in enterprise revenue. And OpenAI acquires the team behind popular Python tools.
Five releases for Claude Code this week, headlined by a new bare mode that turns it into a minimal, scriptable building block for automated pipelines. If you call Claude Code from CI jobs, shell scripts, or any kind of automated workflow, you can now pass a flag that skips hooks, language server startup, plugin sync, and skill directory walks entirely. What you get is a stripped-down execution path that only needs an API key -- OAuth and keychain auth are disabled, auto-memory is off. It's Anthropic making an explicit bet that Claude Code isn't just an interactive tool.
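To make the pipeline idea concrete, here is a sketch of what a CI step might look like. This is an illustration, not the documented usage: the episode doesn't name the actual flag, so `--bare` below is a placeholder, while `claude -p` (non-interactive print mode) and the `ANTHROPIC_API_KEY` environment variable are existing Claude Code conventions.

```yaml
# Hypothetical GitHub Actions job using bare mode.
# "--bare" is a stand-in name -- the episode does not spell out the real flag.
review:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0
    - name: Summarize the diff with Claude Code
      env:
        # Bare mode skips OAuth and keychain auth, so an API key is required.
        ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
      run: |
        git diff origin/main...HEAD | claude --bare -p "Review this diff for obvious bugs"
```

The appeal is that a bare invocation skips hooks, plugin sync, and skill discovery, so the CI step starts fast and behaves the same on every run.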
It's something you compose into larger systems. That composability theme extends to how external services can now interact with your sessions. A new research preview called MCP channels lets MCP servers push messages directly into your active session instead of waiting for Claude to poll them. Servers that declare a permission capability can even forward tool approval prompts to your phone, so you can approve actions while you're away from the terminal.
It's a meaningful step toward Claude Code sessions that stay connected to the outside world in real time. On that same theme of staying connected, the Visual Studio Code extension picked up remote control bridging this week. You can now bridge your active editor session to claude.ai and continue working from a browser or your phone. Session titles sync automatically, and ending the remote session archives it cleanly.
And one more change you'll notice immediately -- responses now stream line by line as they're generated, instead of arriving in larger chunks. You see output appear in real time, which makes long responses feel much more responsive. Worth noting this is currently disabled on Windows, including WSL in Windows Terminal, due to rendering issues. Finally, skills, custom commands, and plugin-shipped agents now support effort-level frontmatter.
You can set how hard Claude works, how many turns it takes, and which tools it can reach, all directly in the frontmatter for a specific command or agent. It's fine-grained control over behavior without touching global settings.
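As a rough sketch, a custom command file with that frontmatter might look like the fragment below. `description` and `allowed-tools` follow the existing command frontmatter conventions; `effort` and `max-turns` are placeholder key names, since the episode doesn't give the exact fields.

```yaml
---
description: Run the test suite and fix any failures
allowed-tools: Bash, Read, Edit   # limit which tools the command can reach
effort: high                      # placeholder key -- how hard Claude works
max-turns: 10                     # placeholder key -- cap on agent turns
---
```

The point is that these limits travel with the command or agent itself, rather than living in global settings.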
Under the hood, this was a strong week for performance and stability. Startup on macOS is about sixty milliseconds faster thanks to parallel keychain reads, large repos save roughly eighty megabytes on startup, and resuming big sessions is up to forty-five percent faster with one hundred to one hundred fifty megabytes less peak memory. Those are the kind of improvements you feel every time you open a session in a big codebase. On the stability side, resumed sessions no longer drop parallel tool results or silently truncate history -- both of which could cause confusing behavior in long-running work.
Concurrent sessions no longer force you to re-authenticate repeatedly, compound bash always-allow rules now save correctly per subcommand, and a security fix ensures the sandbox can no longer silently disable itself when dependencies are missing. That last one is worth highlighting -- it closes a gap where the sandbox could degrade without telling you.
Turning to the broader landscape, it was a busy week. Axios is reporting that Anthropic has overtaken OpenAI in a key enterprise revenue metric. That matters here because stronger enterprise revenue typically funds the kind of sustained investment in developer tools that drives these weekly updates. The competitive pressure is real, though.
Bloomberg reports that Cursor, the AI coding startup, is building its own model to compete directly with Anthropic and OpenAI in agentic coding. Cursor has been one of the most visible tools in this space, and training a purpose-built model signals it sees a future where it doesn't depend on third-party providers. That same competitive dynamic is playing out on the tooling side too. Reuters reports that OpenAI is acquiring Astral, the company behind the popular uv and ruff Python tools.
The acquisition is framed as part of OpenAI's push to compete with Anthropic in developer tooling. If you use those tools alongside Claude Code, it's worth watching how the integration story evolves under new ownership. And one more -- TechCrunch took a look at how Y Combinator CEO Garry Tan's Claude Code setup went viral, sparking debate about how power users configure agentic coding tools. We'll link that in the show notes if you want a window into how others are structuring their workflows.
Separately, Claude Code's Head of Product Cat Wu published a piece on the Claude blog about how product teams should rethink workflows as model capabilities shift. Also linked in the show notes. That's it for the brief. I'm Shannon, and we'll see you next week.
Show Notes
- AI Coding Startup Cursor Plans New Model to Rival Anthropic, OpenAI (bloomberg.com)
- OpenAI to buy Python toolmaker Astral to take on Anthropic (reuters.com)
- Why Garry Tan's Claude Code setup has gotten so much love, and hate (techcrunch.com)
- Anthropic turns the tables on OpenAI in critical revenue category (axios.com)
- Product management on the AI exponential (claude.com)
