✦ How Claude Code Channels Works — Architecture Under the Hood
With hands-on reviews and deep-dive coverage arriving a day after launch, the architecture behind Claude Code Channels became clearer. MCP is the spine of the whole system: a Channels plugin on the Claude Code side registers as an MCP server, while a lightweight bot process holds the Telegram or Discord connection. Messages cross the bridge bidirectionally — instructions come in as MCP tool calls, and Claude Code's responses (diffs, terminal output, file contents) are formatted and sent back through the same bot thread.
The message flow, step by step
- You send a message in Telegram or Discord to the bot
- Bot → MCP bridge translates the message into a tool call targeting the active Claude Code session
- Claude Code receives the tool call, executes the task with full local access (filesystem, git, shell), and produces a result
- MCP bridge → Bot formats the result (truncating very long output, wrapping code in fenced blocks) and sends it back to your chat thread
- The Claude Code session maintains full continuity — it's the same context window throughout, not a new session per message
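The translation and formatting steps above can be sketched as two plain functions. This is a minimal illustration of the flow, not the actual plugin API: the tool name `claude_code.run`, the helper names, and the 4,000-character truncation limit are all assumptions for the sketch.

```python
MAX_CHARS = 4000  # assumed limit; chat platforms cap message length around this size

def to_tool_call(chat_message: str, session_id: str) -> dict:
    """Bot -> MCP: wrap an incoming chat message as a tool call
    targeting the active Claude Code session."""
    return {
        "tool": "claude_code.run",  # hypothetical tool name
        "arguments": {"session": session_id, "instruction": chat_message},
    }

def format_result(result: str) -> str:
    """MCP -> Bot: truncate very long output and wrap it in a
    fenced block before sending it back to the chat thread."""
    if len(result) > MAX_CHARS:
        result = result[:MAX_CHARS] + "\n... [truncated]"
    return f"```\n{result}\n```"

# One round trip: chat message in, formatted reply out
call = to_tool_call("run the test suite", session_id="abc123")
reply = format_result("collected 42 items\n42 passed")
```

Because the session identifier travels with every tool call, the bridge can route each message to the same long-lived session rather than spawning a fresh one per message, which is what preserves the single context window described above.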
Security note: The MCP bridge listens only on localhost by default. Your bot token authenticates the messaging-platform side; no external port is exposed. Review your bot token permissions and restrict to the minimum required channels before deploying on shared machines.
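The localhost-only default comes down to which interface the bridge's socket is bound to. A quick sketch of the safe binding (illustrative only; the bridge's real port and transport are not documented here, so the sketch asks the OS for a free port):

```python
import socket

# Bind explicitly to the loopback interface so the listener is never
# reachable from other hosts on the network. Port 0 lets the OS pick
# a free port, since the bridge's actual default port is an unknown here.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
host, port = srv.getsockname()
srv.close()
```

Binding to `127.0.0.1` rather than `0.0.0.0` is what keeps the bridge off external interfaces; on a shared machine, that plus a minimally-scoped bot token is the whole exposure surface.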
Claude Code
MCP
architecture
developer tools
security
✦ Cowork Persistent Agent Threads — Now Available on Pro & Max Plans
Anthropic began rolling out persistent agent threads in Cowork to Max plan subscribers on 22 March, with Pro plan access following within two days. Where standard Claude conversations end when you close the window, a Cowork persistent thread keeps the agent alive and retains full context across sessions. You can return hours later — via Claude Desktop, iOS, or Android — and the agent picks up exactly where it left off, with memory of every file it has seen and every tool it has called.
What makes this different from a regular conversation
- Persistent context: the thread doesn't reset between sessions — files opened, code written, and decisions made all remain in scope
- Cross-device continuity: start a task on desktop, check progress on mobile, approve an action, come back to desktop for review
- Long-horizon tasks: suited for multi-day projects — refactoring a large module, researching a topic over several sessions, managing a backlog
- Distinct from the /loop skill (which runs on an interval); this is a persistent context environment, not a scheduled re-invocation
- Availability: Max plan from 22 March, Pro plan rolling out over the following 48 hours
Best use cases: large codebase refactors, ongoing research projects, and any task where you need the agent to remember context accumulated over multiple working sessions rather than starting fresh each time.
Cowork
agents
productivity
Claude Desktop
mobile
✦ The Policy Debate — How Anthropic Framed Its "Freely Available Information" Update
The day after Anthropic's usage policy update, commentary from AI safety researchers, policy advocates, and competitors arrived in force. Critics argued that "freely available online" sets too low a bar and risks normalising Claude as a research tool for bad actors who simply couldn't be bothered to search. Supporters countered that over-restriction had already driven users to less-safe alternatives and that Anthropic's hard limits on novel synthesis and operational uplift are the meaningful safety surface — not gatekeeping information from Wikipedia.
Anthropic's published rationale (key points)
- The distinction is between informational access (discussing how something works) and operational uplift (providing a step-by-step plan that meaningfully helps someone cause harm)
- Hard limits remain on unpublished synthesis routes, attack planning, and anything calibrated to mass-casualty scenarios
- Over-refusal has measurable costs: user trust, utility for educators and journalists, and competitive pressure from less-careful alternatives
- Anthropic published a detailed rationale document alongside the policy — a transparency move that itself drew positive comment from several policy researchers
Takeaway for developers: If you build Claude-powered products, review the updated usage policy against your use case. Applications in education, journalism, and research may find Claude more useful in areas where it previously over-refused. Applications in sensitive sectors may want to add their own system-prompt guardrails on top of Anthropic's policy floor.
safety
policy
ethics
responsible AI
public debate