2026-03-22

Channels Architecture Deep Dive, Cowork Persistent Threads & Policy Debate


How Claude Code Channels Works — Architecture Under the Hood

A day after launch, hands-on reviews and deep-dive coverage made the architecture behind Claude Code Channels clearer. MCP is the spine of the whole system: a Channels plugin on the Claude Code side registers as an MCP server, while a lightweight bot process holds the Telegram or Discord connection. Messages cross the bridge bidirectionally — instructions come in as MCP tool calls, and Claude Code's responses (diffs, terminal output, file contents) are formatted and sent back through the same bot thread.
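To make the bidirectional flow concrete, here is a toy model of the bridge in Python. All names (`ChatMessage`, `ToolCall`, `run_instruction`) are invented for illustration and are not Anthropic's actual Channels implementation; two in-process queues stand in for the localhost bridge between the bot process and the MCP server.

```python
import queue
from dataclasses import dataclass

# Illustrative sketch only: models the flow described above, where chat
# messages arrive as MCP tool calls and Claude Code's output travels back
# through the same bot connection.

@dataclass
class ChatMessage:
    channel_id: str
    text: str

@dataclass
class ToolCall:  # what the MCP server side sees
    name: str
    arguments: dict

def bot_to_mcp(msg: ChatMessage) -> ToolCall:
    """Bot process: wrap an incoming chat message as an MCP tool call."""
    return ToolCall(name="run_instruction",
                    arguments={"channel": msg.channel_id, "text": msg.text})

def mcp_to_bot(result: str, channel_id: str) -> ChatMessage:
    """MCP side: format Claude Code's output for the originating thread."""
    return ChatMessage(channel_id=channel_id, text=f"```\n{result}\n```")

# Two queues stand in for the localhost bridge, one per direction.
inbound: "queue.Queue[ToolCall]" = queue.Queue()
outbound: "queue.Queue[ChatMessage]" = queue.Queue()

inbound.put(bot_to_mcp(ChatMessage("dev-chat", "run the test suite")))
call = inbound.get()
outbound.put(mcp_to_bot("3 passed, 0 failed", call.arguments["channel"]))
print(outbound.get().text)
```

The key design point the sketch captures: the bot process never runs code itself — it only translates between chat messages and tool calls, which keeps the messaging-platform credentials and the code-execution side in separate processes.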

The message flow, step by step

Security note: The MCP bridge listens only on localhost by default. Your bot token authenticates the messaging-platform side; no external port is exposed. Review your bot token permissions and restrict to the minimum required channels before deploying on shared machines.
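The loopback-only behaviour the security note describes can be sketched in a few lines, assuming the bridge opens a TCP port (the port number and function name here are arbitrary, for illustration):

```python
import socket

def open_local_only(port: int = 0) -> socket.socket:
    """Bind a listener on the loopback interface only.

    Binding to 127.0.0.1 rather than 0.0.0.0 means the port is
    unreachable from other machines; port 0 lets the OS pick one.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))  # loopback only; nothing exposed externally
    srv.listen()
    return srv

srv = open_local_only()
host, port = srv.getsockname()
print(host, port)  # 127.0.0.1 and the OS-assigned port
srv.close()
```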

Tags: Claude Code · MCP · architecture · developer tools · security

Cowork Persistent Agent Threads — Now Available on Pro & Max Plans

Anthropic began rolling out persistent agent threads in Cowork to Max plan subscribers on 22 March, with Pro plan access following within two days. Where standard Claude conversations end when you close the window, a Cowork persistent thread keeps the agent alive and retains full context across sessions. You can return hours later — via Claude Desktop, iOS, or Android — and the agent picks up exactly where it left off, with memory of every file it has seen and every tool it has called.
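From the outside, "persistent thread" behaviour amounts to serializing the agent's accumulated state between sessions. Anthropic has not published Cowork's internals, so the following is a purely illustrative toy — class and field names are invented — that shows the shape of the feature: save context at the end of one session, reload it hours later from any device:

```python
import json
from dataclasses import dataclass, field, asdict
from pathlib import Path
import tempfile

# Toy model of session-spanning agent state; NOT Anthropic's implementation.
@dataclass
class ThreadState:
    thread_id: str
    messages: list = field(default_factory=list)
    files_seen: list = field(default_factory=list)
    tool_calls: list = field(default_factory=list)

    def save(self, path: Path) -> None:
        path.write_text(json.dumps(asdict(self)))

    @classmethod
    def load(cls, path: Path) -> "ThreadState":
        return cls(**json.loads(path.read_text()))

store = Path(tempfile.mkdtemp()) / "thread.json"

# Session 1: accumulate context, then persist it.
state = ThreadState("refactor-auth")
state.files_seen.append("src/auth.py")
state.tool_calls.append({"tool": "read_file", "path": "src/auth.py"})
state.save(store)

# Session 2 (later, possibly another device): resume with context intact.
resumed = ThreadState.load(store)
print(resumed.files_seen)  # ['src/auth.py']
```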

What makes this different from a regular conversation

Best use cases: large codebase refactors, ongoing research projects, and any task where you need the agent to remember context accumulated over multiple working sessions rather than starting fresh each time.

Tags: Cowork · agents · productivity · Claude Desktop · mobile

The Policy Debate — How Anthropic Framed Its "Freely Available Information" Update

The day after Anthropic's usage policy update, commentary from AI safety researchers, policy advocates, and competitors arrived in force. Critics argued that "freely available online" sets too low a bar and risks normalising Claude as a research tool for bad actors who simply couldn't be bothered to search. Supporters countered that over-restriction had already driven users to less-safe alternatives and that Anthropic's hard limits on novel synthesis and operational uplift are the meaningful safety surface — not gatekeeping information from Wikipedia.

Anthropic's published rationale (key points)

Takeaway for developers: If you build Claude-powered products, review the updated usage policy against your use case. Applications in education, journalism, and research may find Claude more useful in areas where it previously over-refused. Applications in sensitive sectors may want to add their own system-prompt guardrails on top of Anthropic's policy floor.

Tags: safety · policy · ethics · responsible AI · public debate