[{"data":1,"prerenderedAt":43},["ShallowReactive",2],{"$fypWpOzFtx2aH68OCRk0ltd_xkLTqkGrCDRrShCvWFGA":3},{"date":4,"generated_at":5,"picks":6,"candidates_scanned":42,"candidates_scored":30},"2026-04-11","2026-04-11T06:00:00.000000+00:00",[7,21,33],{"rank":8,"title":9,"source":10,"url":11,"category":12,"tldr":13,"score":14,"scores":15,"why":20},1,"Claude Code v2.1.101 — /team-onboarding, OS CA cert trust, command injection fix, memory leak, 20+ fixes","Claude Code Releases","https://github.com/anthropics/claude-code/releases/tag/v2.1.101","Release","- New `/team-onboarding` command: generates a teammate ramp-up guide from your own local Claude Code usage history — useful for async onboarding without writing docs manually\n- OS CA certificate store is now trusted by default, so enterprise TLS proxies work without extra setup; opt out with `CLAUDE_CODE_CERT_STORE=bundled` if you need the old behavior\n- Security fix: command injection vulnerability in the POSIX `which` fallback used by LSP binary detection — patched\n- Memory leak fix: long sessions were retaining dozens of historical copies of the full message list in the virtual scroller — now cleaned up\n- `/ultraplan` and remote-session features no longer require web setup first — they auto-create a default cloud environment\n- Multiple `--resume` fixes: context loss on large sessions, crash when a persisted Edit/Write result was missing `file_path`, and cross-subagent chain bridging into the wrong conversation",70,{"direct_claude_relevance":16,"practical_utility":17,"novelty":18,"source_credibility":19},32,20,5,13,"The command injection fix in LSP binary detection is a security-class issue worth patching even if you're not affected yet. The OS CA cert trust change is a meaningful enterprise usability win — teams behind TLS inspection proxies no longer need manual workarounds. 
The `/team-onboarding` command is genuinely novel: instead of writing a CLAUDE.md from scratch for new teammates, you can generate one from your actual usage patterns. Novelty is penalized because this is the fourth consecutive day covering a Claude Code release, but the security and onboarding content is distinct enough to merit the pick.",{"rank":22,"title":23,"source":24,"url":25,"category":26,"tldr":27,"score":28,"scores":29,"why":32},2,"Stop Putting Best Practices in Skills","Dev.to Claude","https://dev.to/edysilva/stop-putting-best-practices-in-skills-3pof","Guide","- Skills in Claude Code only get invoked 6–66% of the time; CLAUDE.md is always in context — so anything you want Claude to *always* do should live in CLAUDE.md, not a skill\n- The author ran 51 multi-turn evals across 4 configurations, replicated Vercel's single-shot experiment in realistic sessions, and read Claude Code's source to confirm: skills and CLAUDE.md are both just prompts, but skills depend on a chain of decisions that frequently fails\n- Rule of thumb: CLAUDE.md for guidelines, coding standards, and non-negotiable behavior; skills for on-demand recipes ('run this audit', 'generate this template', 'open a PR')",61,{"direct_claude_relevance":30,"practical_utility":30,"novelty":31,"source_credibility":18},22,12,"This is the kind of empirical research that reshapes how you configure your Claude Code setup. The activation gap — skills only triggering 6–66% of the time in multi-turn sessions — isn't intuitive, and the author backs it up with 51 evals and source code analysis rather than vibes. The practical takeaway is immediately actionable: audit your skills, move any 'always do this' guidelines to CLAUDE.md, keep skills for explicit invocation patterns. 
Lower source credibility (Dev.to personal blog), but the methodology holds up.",{"rank":34,"title":35,"source":24,"url":36,"category":26,"tldr":37,"score":38,"scores":39,"why":41},3,"Most of your Claude Code agents don't need Sonnet","https://dev.to/edwardkubiak/most-of-your-claude-code-agents-dont-need-sonnet-4587","- Haiku costs $0.25/1M input tokens vs Sonnet's $3/1M — a 12x difference — and for mechanical tasks (commit messages, code review, docs, CI config, test running) Haiku is plenty\n- The author runs 50 agent calls/day; only 8 hit Sonnet — 68% run Haiku, with the rest handled by 2 local Ollama models at zero API cost\n- Sonnet stays on tasks where wrong answers are expensive: planning, multi-file debugging, security review, complex implementation",57,{"direct_claude_relevance":17,"practical_utility":30,"novelty":40,"source_credibility":18},10,"The 12x price gap between Haiku and Sonnet is well-known in the abstract, but most people don't actually split their agent configs by task tier. This post gives a concrete breakdown of which task categories belong in each tier, with real numbers from a 50-calls-per-day workflow. Immediately actionable: pick any subagent config you run regularly, ask whether it actually needs deep reasoning, and drop it to Haiku if it doesn't. Two Dev.to picks in one day is a mild source diversity concern, but both earn their slots on content quality.",40,1776402243046]