[{"data":1,"prerenderedAt":46},["ShallowReactive",2],{"$fWA_ZjfwpQOpHQfj8Usa3VMBS6BbnkQ5KRCM4ysLv6EE":3},{"date":4,"generated_at":5,"picks":6,"candidates_scanned":44,"candidates_scored":45},"2026-05-09","2026-05-09T06:00:00.000000+00:00",[7,21,35],{"rank":8,"title":9,"source":10,"url":11,"category":12,"tldr":13,"score":14,"scores":15,"why":20},1,"v2.1.136","Claude Code Releases","https://github.com/anthropics/claude-code/releases/tag/v2.1.136","Release","- New `settings.autoMode.hard_deny` lets enterprises define classifier rules that ALWAYS block certain auto mode actions — no exceptions, no matter how Claude interprets user intent\n- MCP OAuth refresh tokens now survive concurrent server refreshes — if you run multiple remote MCP servers, you should stop seeing daily re-authentication prompts\n- Fixed: plan mode was supposed to block file writes, but an existing `Edit(...)` allow rule bypassed the block — this is now corrected\n- Fixed: MCP servers in `.mcp.json`, plugins, and claude.ai connectors silently vanished after `/clear` in VS Code, JetBrains, and the Agent SDK\n- WSL2 users: pasting images from the Windows clipboard now works via a PowerShell fallback when xclip/wl-paste can't read image data",74,{"direct_claude_relevance":16,"practical_utility":17,"novelty":18,"source_credibility":19},33,22,4,15,"The `settings.autoMode.hard_deny` setting is a genuinely new control mechanism for teams running Claude Code in auto mode — it creates unconditional blocks that can't be overridden by inferred user intent, which matters for security-sensitive workflows. The MCP OAuth concurrent refresh fix addresses a real daily annoyance for anyone running several remote MCP servers. 
Novelty is penalized since Claude Code releases were picked on 5/6, 5/7, and 5/8, but the hard_deny capability and the plan mode file write bypass fix are distinct enough to warrant coverage.",{"rank":22,"title":23,"source":24,"url":25,"category":26,"tldr":27,"score":28,"scores":29,"why":34},2,"Collaborate with Claude across Excel, PowerPoint, Word and Outlook","Claude Blog","https://claude.com/blog/collaborate-with-claude-across-excel-powerpoint-word-and-outlook","Announcement","- Claude is now available directly inside Microsoft Office apps — Excel, PowerPoint, Word, and Outlook\n- You can work with Claude on spreadsheets, presentations, documents, and email without leaving the Office suite\n- Official Anthropic blog post confirms the integration is live for supported users",70,{"direct_claude_relevance":30,"practical_utility":31,"novelty":32,"source_credibility":33},26,17,13,14,"Claude inside Microsoft Office is a significant distribution milestone — it puts Claude directly into tools hundreds of millions of knowledge workers already use daily. The Reddit activity today (multiple posts reacting to the Microsoft integration) confirms this is being received as fresh news. 
This is a new topic not covered in recent picks, and the official Anthropic blog is the authoritative source.",{"rank":36,"title":37,"source":38,"url":39,"category":26,"tldr":40,"score":41,"scores":42,"why":43},3,"Teaching Claude why","Anthropic Research","https://www.anthropic.com/research/teaching-claude-why","- New Anthropic alignment research describes training Claude to understand *why* certain behaviors are unsafe — not just pattern-matching on surface rules, but internalizing the reasoning behind constraints\n- The goal: an agent that generalizes correctly to novel situations it hasn't seen in training, rather than finding loopholes or failing in edge cases",54,{"direct_claude_relevance":17,"practical_utility":36,"novelty":19,"source_credibility":33},"Published yesterday directly on Anthropic Research, this addresses one of the core problems in agentic AI: models that follow the letter of their rules but not the spirit, leading to unexpected actions in multi-step tasks. While not immediately actionable for developers, it's the most significant safety research Anthropic has published this week and covers entirely different ground from the interpretability work (Natural Language Autoencoders) picked on 5/8.",37,20,1778475926108]