Friday, March 20, 2026
Daily picks
31 articles scored
#1 [GOLD] [Release] Claude Code Releases
Claude Code v2.1.80
- The big headline: `--channels` (research preview) lets MCP servers push messages directly into your Claude Code session — meaning you can control Claude Code from Telegram or Discord, right from your phone
- Rate limit visibility is finally here: a new `rate_limits` field in statusline scripts shows your Claude.ai usage percentage and reset times for both 5-hour and 7-day windows — no more guessing when you'll run out
- You can now set `effort` in skill/slash command frontmatter to override the model effort level when that command runs — useful for making lightweight commands cheaper without touching global settings
- Memory usage drops by ~80MB on large repositories (250k+ files) — if you work on huge codebases this is a noticeable improvement on startup
- Bug fixes that matter: `--resume` no longer drops parallel tool results (sessions with parallel tool calls now restore correctly), voice mode WebSocket failures from Cloudflare bot detection are fixed, and plugin installs are now a single `/plugin install` command instead of a two-step flow
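The new `rate_limits` statusline field could be consumed with a small script like the sketch below. A real statusline script reads a JSON payload from stdin; the key names inside `rate_limits` (`five_hour`, `seven_day`, `utilization`, `resets_at`) are assumptions here — check the payload your Claude Code version actually emits and adjust.

```python
def format_rate_limits(payload: dict) -> str:
    """Build a one-line statusline summary from the rate_limits field.
    NOTE: the key names inside rate_limits are guesses, not a documented
    schema — inspect the real stdin JSON before relying on them."""
    rl = payload.get("rate_limits") or {}
    parts = []
    for window in ("five_hour", "seven_day"):  # assumed window keys
        info = rl.get(window)
        if not info:
            continue
        pct = info.get("utilization", 0)
        resets = info.get("resets_at", "?")
        parts.append(f"{window}: {pct}% (resets {resets})")
    return " | ".join(parts) if parts else "rate limits: n/a"

# In a real statusline script you'd do: payload = json.load(sys.stdin)
sample = {
    "model": {"display_name": "Sonnet"},
    "rate_limits": {
        "five_hour": {"utilization": 42, "resets_at": "14:00"},
        "seven_day": {"utilization": 63, "resets_at": "Mon"},
    },
}
print(format_rate_limits(sample))
# → five_hour: 42% (resets 14:00) | seven_day: 63% (resets Mon)
```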
#2 [SILVER] [Guide] Reddit r/ClaudeCode
From Zero to Fleet: The Claude Code Progression Ladder
- Someone who built a 668,000-line platform with autonomous Claude agents maps out 5 distinct levels of Claude Code mastery: raw prompting → CLAUDE.md → Skills → Hooks → Orchestration
- The key insight: you don't level up by deciding to — you get pushed up when something breaks. The fix is always more infrastructure, not more effort
- CLAUDE.md has a real ceiling at ~100 lines (the author's crept to 190 with 40% redundancy); Skills are the right place for deep expertise — the author keeps 40 skills totaling 10,800 lines that cost zero tokens when not in use
- Level 5 (Orchestration) means parallel agents in isolated worktrees, 198 agents across 109 waves — don't try to skip to it before having solid hooks in place
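The "Skills cost zero tokens when not in use" point is what makes them the escape hatch from a bloated CLAUDE.md: the frontmatter is what Claude matches against, and the body only loads when relevant. A minimal sketch of a SKILL.md, with a hypothetical skill name and description:

```markdown
---
name: db-migrations
description: Conventions for writing and reviewing schema migrations
---

Put the deep, rarely-needed expertise here. Claude only loads this body
when the task matches the description above, so these lines cost nothing
the rest of the time — unlike CLAUDE.md, which is always in context.
```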
#3 [BRONZE] [Tutorial] Reddit r/ClaudeAI
How I use Haiku as a gatekeeper before Sonnet to save ~80% on API costs
- Simple two-stage pipeline: send everything to Haiku first with a yes/no prompt ("does this contain a real complaint/need?"), then forward only the ~15% that clear the gate to Sonnet for the real work
- Result: running Sonnet on 15% of input instead of 100% — the cost difference at scale is massive, and Haiku is surprisingly good at the gate job with few false negatives
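The gate-then-process flow can be sketched with pluggable functions. In production the `gate` would wrap a Haiku API call with the yes/no prompt and `process` a Sonnet call; the stubs, ticket strings, and keyword check below are illustrative stand-ins so the flow runs without API access.

```python
from typing import Callable, Iterable


def gatekeeper_pipeline(
    items: Iterable[str],
    gate: Callable[[str], bool],
    process: Callable[[str], str],
) -> list[str]:
    """Run the cheap gate on every item; only items it passes
    reach the expensive processor. Returns the processed results."""
    return [process(item) for item in items if gate(item)]


def haiku_gate_stub(text: str) -> bool:
    # Stand-in for asking Haiku: "does this contain a real
    # complaint/need? Answer yes or no."
    return "broken" in text or "need" in text


def sonnet_process_stub(text: str) -> str:
    # Stand-in for the expensive Sonnet call that does the real work.
    return f"handled: {text}"


tickets = ["love the app!", "login is broken", "just saying hi", "need a refund"]
print(gatekeeper_pipeline(tickets, haiku_gate_stub, sonnet_process_stub))
# → ['handled: login is broken', 'handled: need a refund']
```

Because the gate runs on 100% of input but costs a fraction of Sonnet per call, total spend tracks the pass rate — at a ~15% pass rate, Sonnet only ever sees that slice.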
