✦ Claude Dispatch — Control Your Desktop Agent from Your Phone
Launched on 18 March 2026 as a research preview for
Pro and Max plan subscribers, Claude Dispatch bridges the
Claude mobile app and the Claude Desktop app into a single, persistent agent thread. Instead
of each Cowork session starting fresh, Claude now retains context from previous tasks across
devices — you can hand off work to your desktop, step away, and check in or redirect Claude
from your phone whenever you like.
How Dispatch works
- Your phone runs the Claude mobile app — this is your remote control.
- Your desktop (awake, with Claude Desktop open) does the actual compute
work: file edits, web searches, code execution.
- A single persistent thread connects both ends — Claude remembers the
task it was working on when you send a follow-up message from your phone.
- You can send new instructions, ask for a status update, or course-correct mid-task
without sitting at your desk.
Practical use cases
- Start a long research task before leaving the office, then check Claude's progress on
the commute home.
- Trigger a code refactor in Cowork, go to a meeting, and confirm the result on mobile
before merging.
- Queue up a batch of document summaries from your phone and find them ready when you
sit down again.
Tip: Dispatch requires both the Claude Desktop app and the Claude
mobile app to be installed and signed into the same account. Your desktop must remain
awake and connected for Claude to continue working on tasks in the background.
Research preview: Dispatch is rolling out to Max plans first, with
Pro plan access following over the subsequent days. Behaviour may change before
general availability.
dispatch
cowork
cross-device
persistent-agent
Pro
Max
✦ Claude Code Voice Mode — Push-to-Talk in the Terminal
Voice mode for Claude Code started rolling out on 3 March 2026 and is
continuing its progressive rollout through mid-March. Available on Pro, Max, Team,
and Enterprise plans, it lets you dictate prompts to Claude Code using a
push-to-talk mechanism — no typing required. Speech is transcribed locally (no audio sent
to Anthropic's servers), and the transcript appears in the terminal before Claude acts on it.
Activating voice mode
/voice # toggle voice mode on/off in any Claude Code session
Once active, hold the spacebar to speak, release to send. Claude Code
processes the transcript identically to a typed prompt — all the same tools, file access,
and reasoning capabilities apply.
Key design decisions
- Push-to-talk, not always-on — no persistent microphone listening means
no accidental commands and no privacy concerns about your terminal monitoring background
conversation.
- Terminal-native — voice mode operates at the CLI layer, so it works
regardless of which editor or IDE you have open behind it.
- Local STT — speech-to-text processing happens on-device; only the
resulting text transcript is sent to Claude.
- Customisable keybinding — the default push-to-talk key is spacebar,
but you can remap it in keybindings.json (key: voice:pushToTalk), e.g. to meta+k.
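Based on the key name given above, a remap entry might look like the following sketch. The exact file schema is an assumption here; check the keybindings.json generated on your machine for the actual structure:

```json
{
  "voice:pushToTalk": "meta+k"
}
```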
Tip: Voice mode is especially useful for long, descriptive prompts —
architecture explanations, bug descriptions, or instructions that would take a paragraph
to type. For short, precise commands (file paths, function names), typing is still faster
and less error-prone.
Availability: Voice mode is not available on the free tier. It is
rolling out gradually — if you do not see /voice yet, check again in the
coming days as coverage expands.
claude-code
voice-mode
push-to-talk
productivity
accessibility
✦ Claude Partner Network — $100M Enterprise Adoption Push
On 12 March 2026, Anthropic launched the Claude Partner
Network — a formalised channel programme backed by a $100 million
commitment for 2026 (with more signalled for subsequent years). The programme
connects large enterprises to a vetted network of consulting and professional services
firms — including anchor partners Accenture, Deloitte, Cognizant, and Infosys — that
specialise in deploying Claude at scale.
What the programme offers
- Training & enablement — joint go-to-market investment, sales
enablement, and co-marketing support.
- Technical certification — the new Claude Certified Architect:
Foundations credential is available immediately for solution architects building
production Claude applications; additional certifications for sellers and developers
are planned for later in 2026.
- Code Modernisation starter kit — a pre-built asset helping partners
migrate legacy codebases and address technical debt for enterprise clients.
- Dedicated Applied AI engineers — Anthropic is scaling its
partner-facing headcount fivefold, adding technical architects for complex deployments
and localised go-to-market support across international markets.
Why it matters for developers
The partner programme accelerates enterprise Claude adoption, which in turn drives demand
for developers with Claude API skills. The free-to-join certification track means individual
developers can now earn a verifiable Anthropic credential — useful differentiation for
consultants and agencies building Claude-powered products.
partner-network
enterprise
certification
go-to-market
Anthropic
✦ Models API Upgrade — Query Capabilities Programmatically
The Claude Developer Platform's Models API received a significant
upgrade in mid-March 2026: the GET /v1/models and
GET /v1/models/{model_id} endpoints now return a richer response that
includes max_input_tokens, max_tokens (max output), and a
structured capabilities object for every available model. This removes
the need to hard-code model limits in your application and makes it straightforward to
build model-agnostic pipelines that adapt automatically as new models roll out.
New response fields
- max_input_tokens — the model's context window size (e.g.
1000000 for Opus 4.6 and Sonnet 4.6).
- max_tokens — maximum output tokens the model can generate
(e.g. 128000 for Opus 4.6, 64000 for Sonnet 4.6).
- capabilities — a structured object listing supported features such as
extended thinking, vision, tool use, streaming, and structured outputs.
Example — dynamic model selection
import anthropic

client = anthropic.Anthropic()

# List all available models with their capabilities
models = client.models.list()
for model in models.data:
    caps = model.capabilities
    if caps.get("extended_thinking") and model.max_input_tokens >= 1_000_000:
        print(f"{model.id}: 1M context + thinking ✓")

# Or fetch a single model's limits
m = client.models.retrieve("claude-sonnet-4-6")
print(f"Max input: {m.max_input_tokens:,} tokens")
print(f"Max output: {m.max_tokens:,} tokens")
Best practices
- Don't hard-code limits — fetch them at startup or cache them with a
short TTL so your app automatically picks up when a model's capabilities change.
- Gate features on capabilities — check capabilities.extended_thinking
before setting thinking={"type": "adaptive"} to avoid errors on models
that don't support it.
- Use max_input_tokens to chunk — if your input may exceed
the context window, divide it by max_input_tokens to determine your
chunk strategy dynamically.
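The gating and chunking practices above can be sketched as plain helper functions. This is an illustrative sketch, not Anthropic code: the model metadata is represented here as a hypothetical dict mirroring the response fields described earlier (in production you would fetch it via client.models.retrieve() and cache it with a short TTL), and the helper names are invented for this example.

```python
def build_request_kwargs(model_info: dict, prompt: str) -> dict:
    """Assemble request kwargs, enabling extended thinking only when
    the model's capabilities object says it is supported."""
    kwargs = {
        "model": model_info["id"],
        "max_tokens": model_info["max_tokens"],
        "messages": [{"role": "user", "content": prompt}],
    }
    # Gate the feature on the capabilities object, not a static list.
    if model_info["capabilities"].get("extended_thinking"):
        kwargs["thinking"] = {"type": "adaptive"}
    return kwargs


def chunk_count(input_tokens: int, model_info: dict) -> int:
    """How many chunks are needed to fit within the context window."""
    limit = model_info["max_input_tokens"]
    return -(-input_tokens // limit)  # ceiling division


# Hypothetical metadata mirroring the response shape described above.
sonnet = {
    "id": "claude-sonnet-4-6",
    "max_input_tokens": 1_000_000,
    "max_tokens": 64_000,
    "capabilities": {"extended_thinking": True, "vision": True},
}

kwargs = build_request_kwargs(sonnet, "Summarise this document.")
print("thinking" in kwargs)            # True: the model supports it
print(chunk_count(2_500_000, sonnet))  # 3 chunks for 2.5M input tokens
```

Because both helpers read limits and capabilities from the metadata rather than from constants, they keep working unchanged when a new model with different limits appears in the list.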
Tip: The capabilities object is the authoritative source of truth for
what a model can do — more reliable than documentation, which may lag behind model
releases. Build your feature-gating logic against the API response, not a static
reference list.
models-API
capabilities
API
developer-platform
best-practices