[{"data":1,"prerenderedAt":46},["ShallowReactive",2],{"$felmndc7vaSHpEbBiNWqfGWdJqBGUb3OH2R2Fm5vCFMs":3},{"date":4,"generated_at":5,"picks":6,"candidates_scanned":44,"candidates_scored":45},"2026-03-30","2026-03-30T05:30:00.000000+00:00",[7,20,34],{"rank":8,"title":9,"source":10,"url":11,"category":12,"tldr":13,"score":14,"scores":15,"why":19},1,"Bringing Code Review to Claude Code","Claude Blog","https://claude.com/blog/code-review","Release","- Claude Code now has a built-in code review workflow — ask Claude to review your changes the same way you'd request a teammate review\n- This moves Claude Code from 'write code for me' toward 'be my full development partner', covering the review step that was previously manual\n- Official Anthropic blog post — code review is one of the most common developer workflows, and having it native to Claude Code removes the context-switch to a separate reviewer\n- Pairs naturally with the Claude Code GitHub Action v1.0 (released last week) for automated PR review pipelines",76,{"direct_claude_relevance":16,"practical_utility":17,"novelty":18,"source_credibility":18},30,18,14,"An official Claude Blog post announcing native code review in Claude Code is a meaningful capability expansion. Code review is arguably the most universally shared developer workflow, and having it built into the tool rather than bolted on via prompts or third-party integrations is the right direction.\nThis complements the recently released Claude Code GitHub Action v1.0 and auto mode — together they form a coherent picture of Claude Code becoming a full-cycle development collaborator, not just a code generator.",{"rank":21,"title":22,"source":23,"url":24,"category":25,"tldr":26,"score":27,"scores":28,"why":33},2,"Why the 1M context window burns through limits faster and what to do about it","Reddit r/ClaudeCode","https://www.reddit.com/r/ClaudeCode/comments/1s6zxkp/why_the_1m_context_window_burns_through_limits/","Guide","- Every message you send re-sends your *entire* conversation to the API — message 50 includes all 49 prior turns before Claude starts on your new one. Without caching, a 100-turn Opus session would cost $50-100 in input tokens alone\n- Anthropic caches aggressively (90% off for cache hits). One measured session hit a 96.39% cache hit rate: 47M tokens sent, only 1.6M needed real compute\n- The real cost driver is cache *busts* caused by the 5-minute TTL. A 6-minute coffee break on a 500K-token conversation costs ~$3.13 just in cache-write fees (billed at 125% of the normal input rate)\n- Common accidental cache busters: timestamps or dynamic content in your system prompt, switching models mid-session, adding/removing MCP tools (tool definitions are part of the cached prefix)\n- Fix: keep system prompts fully static, don't swap models mid-session, batch MCP tool changes to the start, avoid pauses longer than 5 minutes when your context is large",69,{"direct_claude_relevance":29,"practical_utility":30,"novelty":31,"source_credibility":32},28,22,12,7,"This is the clearest technical explanation yet of why Claude Code sessions unexpectedly drain usage budgets, landing at exactly the right moment given this week's flood of 'my 20x limit was gone in 19 minutes' complaints.\nThe 5-minute TTL cache-bust mechanism is genuinely non-obvious, and the concrete list of what causes busts — timestamps in system prompts, model switches, MCP tool changes — gives users immediate, actionable fixes. The 96.39% cache hit rate measurement from a wired vLLM setup makes the math concrete rather than theoretical.",{"rank":35,"title":36,"source":23,"url":37,"category":25,"tldr":38,"score":39,"scores":40,"why":43},3,"MEX: structured context scaffold for Claude Code with drift detection","https://www.reddit.com/r/ClaudeCode/comments/1s7580d/i_built_this_last_week_woke_up_to_a_developer/","- MEX replaces one monolithic context file with a routing table in `.mex/` — Claude loads only the context relevant to the current task (working on auth? loads `context/architecture.md`, writing new code? loads `context/conventions.md`)\n- A CLI runs 8 zero-token, zero-AI drift checkers: finds referenced file paths that no longer exist, npm scripts your docs mention that were deleted, dependency version conflicts, scaffold files not updated in 50+ commits\n- When drift is found, `mex sync` builds a targeted repair prompt and fires Claude Code on only the broken files",58,{"direct_claude_relevance":41,"practical_utility":41,"novelty":31,"source_credibility":42},20,6,"Context management is one of the biggest unsolved friction points in Claude Code workflows, and MEX tackles it from a direction most people haven't tried: instead of trying to compress or summarize context, it routes — pointing Claude at exactly what it needs for the task at hand. The drift detection CLI is the real differentiator; it catches stale scaffold state without burning tokens on it. The organic viral spread (28k-follower developer tweet, PRs from strangers) is a reasonable quality signal for something this niche.",50,25,1776402243215]
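The cache-bust arithmetic behind pick 2's ~$3.13 figure can be reproduced with a quick calculation. A minimal sketch, assuming an input rate of $5 per million tokens (an assumption for illustration; check current Anthropic pricing) and the 125% cache-write surcharge the post describes:

```python
# Rough cost of a cache bust on a long Claude Code session.
# Rates below are illustrative assumptions, not official pricing.
INPUT_RATE_PER_MTOK = 5.00     # assumed USD per 1M input tokens
CACHE_WRITE_MULTIPLIER = 1.25  # cache writes bill at 125% of the input rate

def cache_bust_cost(context_tokens: int) -> float:
    """USD cost to re-write the whole conversation prefix to the cache
    after the 5-minute TTL expires and the cached prefix has gone cold."""
    return context_tokens / 1_000_000 * INPUT_RATE_PER_MTOK * CACHE_WRITE_MULTIPLIER

print(f"${cache_bust_cost(500_000):.2f}")  # 500K-token conversation
```

At these assumed rates, a 500K-token conversation costs $3.125 (~$3.13, the figure quoted above) to re-cache after a single bust, which is why a six-minute pause on a large context is expensive while a four-minute one is nearly free.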