[{"data":1,"prerenderedAt":48},["ShallowReactive",2],{"$ftmBNf8Q3QIgR8MGgu8Zhu4NoVqKlZMvTidzYzzT909o":3},{"date":4,"generated_at":5,"picks":6,"candidates_scanned":47,"candidates_scored":43},"2026-05-08","2026-05-08T06:00:00.000000+00:00",[7,21,34],{"rank":8,"title":9,"source":10,"url":11,"category":12,"tldr":13,"score":14,"scores":15,"why":20},1,"v2.1.133","Claude Code Releases","https://github.com/anthropics/claude-code/releases/tag/v2.1.133","Release","- New `worktree.baseRef` setting (`fresh` | `head`) controls whether `--worktree` and `EnterWorktree` branch from `origin/\u003Cdefault>` or local `HEAD` — **important**: the default reverts `EnterWorktree` back to `origin/\u003Cdefault>`, undoing the behavior change from v2.1.128; set `worktree.baseRef: \"head\"` to keep unpushed commits in new worktrees\n- Hooks now receive the active effort level via `effort.level` in their JSON input and via `$CLAUDE_EFFORT` env var — hook scripts can now behave differently based on whether you're in budget/normal/high/xhigh mode\n- Fixed: parallel sessions all hitting 401 after a refresh-token race that wiped shared credentials — multi-session workflows no longer dead-end after token refresh\n- Fixed: `HTTP(S)_PROXY` / `NO_PROXY` / mTLS now respected for the entire MCP OAuth flow including discovery and token exchange — enterprise proxy users can finally use MCP servers requiring OAuth\n- Fixed: `/effort` in one session no longer bleeds into other concurrent sessions",75,{"direct_claude_relevance":16,"practical_utility":17,"novelty":18,"source_credibility":19},32,22,6,15,"The `worktree.baseRef` change deserves immediate attention: it reverts `EnterWorktree`'s default branching back to `origin/\u003Cdefault>`, which means anyone relying on the v2.1.128 behavior (local HEAD) needs to explicitly set `worktree.baseRef: \"head\"` or their unpushed commits will again be invisible in new worktrees. The hooks effort level is a genuinely new capability — hook scripts can now adapt their behavior based on whether Claude is running in budget or xhigh mode. Novelty is penalized since this is the fourth consecutive Claude Code release featured this week, but the worktree revert and the 401 parallel session race fix address real footguns.",{"rank":22,"title":23,"source":24,"url":25,"category":12,"tldr":26,"score":27,"scores":28,"why":33},2,"[Release] anthropics/claude-agent-sdk-typescript: v0.2.133","GitHub anthropics/claude-agent-sdk-typescript","https://github.com/anthropics/claude-agent-sdk-typescript/releases/tag/v0.2.133","- The unstable V2 session API (`unstable_v2_createSession`, `unstable_v2_resumeSession`, `unstable_v2_prompt`) is now deprecated — migrate to `query()` before these methods are removed\n- Passing `'Skill'` in `allowedTools` is deprecated — use the `skills` option on `ClaudeAgentOptions` instead, which gives per-skill control rather than an all-or-nothing toggle\n- Synced to parity with Claude Code v2.1.133",60,{"direct_claude_relevance":29,"practical_utility":30,"novelty":31,"source_credibility":32},25,14,8,13,"If your TypeScript agent code calls any of the `unstable_v2_*` session methods, this release is your formal notice to migrate to `query()` before the V2 API is removed. The `skills` option replacing the `'Skill'` string in `allowedTools` provides more granular control — it's worth updating now rather than being forced to later. This release ships the same deprecations as the Python SDK v0.1.77 released earlier this week, completing the migration signal across both SDK languages.",{"rank":35,"title":36,"source":37,"url":38,"category":39,"tldr":40,"score":41,"scores":42,"why":46},3,"Natural Language Autoencoders: Turning Claude's thoughts into text","Anthropic Research","https://www.anthropic.com/research/natural-language-autoencoders","Announcement","- Anthropic trained Claude to translate its own internal activations (the numbers computed between input and output) into human-readable sentences — not summaries of the output, but descriptions of what the model is \"thinking\" mid-computation\n- In one demo, Opus 4.6 was asked to finish a couplet; the method read its activations and found it had already chosen the rhyme word before generating the second line — the internal state said \"I'll end this with 'rabbit'\" before the token appeared\n- The trick: three copies of the model — one frozen (the target), one that turns activations into text, one that reconstructs activations from text — trained against round-trip fidelity rather than human labels",53,{"direct_claude_relevance":43,"practical_utility":44,"novelty":45,"source_credibility":32},20,4,16,"Natural Language Autoencoders are a meaningful step in AI interpretability: instead of requiring human-labeled training data to decode what a model is computing, the reconstruction quality itself is the training signal. The result is that Claude's mid-computation state becomes legible as sentences a human can read — not just feature activations or attribution graphs. This is pure research with no immediate API surface, but it's the most genuinely novel Anthropic publication this week and has direct implications for understanding and auditing Claude's reasoning.",38,1778475926108]