[{"data":1,"prerenderedAt":48},["ShallowReactive",2],{"$fILx2i2vm0ni3-BkSauGQ-ox0aNHXFEJhkic3kxgoIy8":3},{"date":4,"generated_at":5,"picks":6,"candidates_scanned":46,"candidates_scored":47},"2026-04-22","2026-04-22T06:00:00.000000+00:00",[7,21,34],{"rank":8,"title":9,"source":10,"url":11,"category":12,"tldr":13,"score":14,"scores":15,"why":20},1,"v2.1.117","Claude Code Releases","https://github.com/anthropics/claude-code/releases/tag/v2.1.117","Release","- **Default effort bumped to `high`** for Pro/Max subscribers on Opus 4.6 and Sonnet 4.6 (was `medium`) — expect heavier, more thorough reasoning out of the box on paid tiers without changing any settings\n- Native macOS and Linux builds now use embedded `bfs` and `ugrep` in place of the separate `Glob` and `Grep` tools — searches run faster with no extra tool round-trip\n- `/model` selections now persist across restarts even if the project pins a different model, and the startup header tells you where the active model came from\n- `/resume` on stale, large sessions now offers to summarize before re-reading — same behavior as the CLI `--resume` flag\n- Forked subagents can now be enabled on external builds via `CLAUDE_CODE_FORK_SUBAGENT=1`, and agent frontmatter `mcpServers` are now loaded for main-thread `--agent` sessions\n- Fixed: OAuth sessions no longer die mid-session with \"Please run /login\" — the token refreshes reactively on 401",69,{"direct_claude_relevance":16,"practical_utility":17,"novelty":18,"source_credibility":19},25,22,9,13,"A dense patch release with two standout changes: the default effort upgrade to `high` for Pro/Max users will be immediately noticeable without any config change, and the native bfs/ugrep replacement for Glob/Grep means faster file operations on macOS and Linux. The OAuth token refresh fix resolves a persistent frustration in long-running sessions.\n\nNovelty is lightly penalized because v2.1.116 was yesterday's Gold, but the substance here is distinct enough — the effort bump alone changes how the model behaves by default for hundreds of thousands of paid users.",{"rank":22,"title":23,"source":24,"url":25,"category":26,"tldr":27,"score":28,"scores":29,"why":33},2,"Claude Code to be removed from Anthropic's Pro plan?","HN Anthropic","https://bsky.app/profile/edzitron.com/post/3mjzxwfx3qs2a","Announcement","- A Bluesky post reporting that Claude Code is being removed from the $20/month Claude Pro tier is generating massive Hacker News discussion — 446 points and 428 comments as of today\n- Multiple Dev.to posts from different authors independently describe Pro users losing access to Claude Code, consistent with the social media reports\n- **Check now**: if you're on Pro and using Claude Code in your workflow, verify your plan at claude.ai/settings before building workflows around it",47,{"direct_claude_relevance":17,"practical_utility":30,"novelty":31,"source_credibility":32},6,15,4,"The primary source is a Bluesky post — thin on its own — but the HN thread has 428 comments and multiple Dev.to authors independently reporting the same change, which is unusual corroboration for platform-tier news. If accurate, this directly affects every Claude Pro subscriber who has built workflows around Claude Code terminal access. Anthropic has not issued an official announcement at the time of writing, so the situation may still be evolving.\n\nWorth verifying your account status rather than assuming continuity.",{"rank":35,"title":36,"source":37,"url":38,"category":39,"tldr":40,"score":41,"scores":42,"why":45},3,"Use an Adversarial Model Challenge Step in Your Opus 4.7 Development Workflow","Dev.to Claude","https://dev.to/heyclos/why-you-need-an-adversarial-model-challenge-in-your-ai-development-workflow-3hce","Guide","- Opus 4.7 has a documented pattern of hallucinating *with high conviction* — one developer watched it defend a wrong score (17/29 vs 18/29) across 10 turns, inventing new justifications each time, burning $120 in API credits before they gave up\n- The fix isn't prompt engineering: it's routing factual verification tasks to a second model call specifically tasked with challenging the first answer — adversarial by design\n- r/ClaudeCode threads corroborate this: users report fabricated files, fabricated test results, and overly formatted, corporate-sounding outputs in real production workflows",50,{"direct_claude_relevance":17,"practical_utility":43,"novelty":44,"source_credibility":32},14,10,"This documents a real, reproducible failure mode in Opus 4.7 that isn't about hallucination in the abstract but about the model's willingness to defend wrong answers under pressure — which makes it harder to catch than a silent error. The article collected corroborating patterns from Reddit and HN threads, making this a community-confirmed pattern rather than a single data point. The adversarial challenger pattern is the right structural response and is actionable: route verification steps to a second call explicitly designed to find errors in the first.",41,18,1776834289508]