
Friday, May 8, 2026

Daily picks

20 articles scored

#1 GOLD · Release · Claude Code Releases

v2.1.133

  • New `worktree.baseRef` setting (`fresh` | `head`) controls whether `--worktree` and `EnterWorktree` branch from `origin/<default>` or the local `HEAD`. **Important**: the default (`fresh`) reverts `EnterWorktree` to branching from `origin/<default>`, undoing the behavior change from v2.1.128; set `worktree.baseRef: "head"` to keep unpushed commits in new worktrees (see the settings sketch after this list)
  • Hooks now receive the active effort level via `effort.level` in their JSON input and via the `$CLAUDE_EFFORT` env var, so hook scripts can behave differently depending on budget/normal/high/xhigh mode (see the hook sketch after this list)
  • Fixed: parallel sessions all hitting 401 after a refresh-token race that wiped shared credentials — multi-session workflows no longer dead-end after token refresh
  • Fixed: `HTTP(S)_PROXY` / `NO_PROXY` / mTLS now respected for the entire MCP OAuth flow including discovery and token exchange — enterprise proxy users can finally use MCP servers requiring OAuth
  • Fixed: `/effort` in one session no longer bleeds into other concurrent sessions
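
The opt-out, as a minimal `settings.json` sketch. Assumption: the dotted setting name `worktree.baseRef` maps to a nested object; if the CLI expects a flat `"worktree.baseRef"` key instead, adjust accordingly:

```json
{
  "worktree": {
    "baseRef": "head"
  }
}
```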
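
And a sketch of an effort-aware hook. This assumes only the stdin-JSON hook contract plus the two fields named in the notes; the budget-mode policy is hypothetical:

```ts
#!/usr/bin/env -S npx tsx
// Effort-aware Claude Code hook (sketch): read the hook's JSON input from
// stdin, prefer `effort.level`, fall back to the $CLAUDE_EFFORT env var.
import { stdin, env, exit } from "node:process";

const chunks: Buffer[] = [];
for await (const chunk of stdin) chunks.push(chunk as Buffer);
const input = JSON.parse(Buffer.concat(chunks).toString("utf8"));

const effort: string = input?.effort?.level ?? env.CLAUDE_EFFORT ?? "normal";

if (effort === "budget") {
  exit(0); // hypothetical: skip an expensive validation pass in budget mode
}
// ...otherwise run the full check here.
```
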
#2 SILVER · Release · GitHub anthropics/claude-agent-sdk-typescript

[Release] anthropics/claude-agent-sdk-typescript: v0.2.133

  • The unstable V2 session API (`unstable_v2_createSession`, `unstable_v2_resumeSession`, `unstable_v2_prompt`) is now deprecated; migrate to `query()` before these methods are removed (migration sketch after this list)
  • Passing `'Skill'` in `allowedTools` is deprecated — use the `skills` option on `ClaudeAgentOptions` instead, which gives per-skill control rather than an all-or-nothing toggle
  • Synced to parity with Claude Code v2.1.133
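
A migration sketch, assuming the v0.2.x shape of `query()` (a prompt plus `options`, yielding an async iterable of messages that ends with a result message); the commented `skills` line marks where the new per-skill option goes, since the notes don't show its exact shape:

```ts
import { query } from "@anthropic-ai/claude-agent-sdk";

// Before: unstable_v2_createSession / unstable_v2_prompt session plumbing.
// After: one query() call that streams messages until the final result.
for await (const message of query({
  prompt: "Summarize the changes in this repo",
  options: {
    // skills: ..., // per-skill control replacing 'Skill' in allowedTools;
    // for the exact shape, see ClaudeAgentOptions in v0.2.133
  },
})) {
  if (message.type === "result") console.log(message);
}
```
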
#3 BRONZE · Announcement · Anthropic Research

Natural Language Autoencoders: Turning Claude's thoughts into text

  • Anthropic trained Claude to translate its own internal activations (the numbers computed between input and output) into human-readable sentences — not summaries of the output, but descriptions of what the model is "thinking" mid-computation
  • In one demo, Opus 4.6 was asked to finish a couplet; the method read its activations and found it had already chosen the rhyme word before generating the second line — the internal state said "I'll end this with 'rabbit'" before the token appeared
  • The trick: three copies of the model, one frozen (the target), one that turns activations into text, and one that reconstructs activations from text, trained against round-trip fidelity rather than human labels (schematic objective below)
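
Schematically, and as an assumption about the loss form rather than the paper's exact objective: with frozen target activations `a`, a verbalizer `V` producing text `t = V(a)`, and a reconstructor `R` producing `â = R(t)`, training minimizes a round-trip error along the lines of `‖R(V(a)) − a‖²`. The text is graded on how faithfully it lets the activations be rebuilt, with no human labels in the loop (`V` and `R` are names introduced here for the sketch).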

Made with passive-aggressive love by manoga.digital. Powered by Claude.