Sonnet 4.6, New Constitution, Memory & Context Editing
✦ Claude Sonnet 4.6 — 1M Context Window Now Generally Available
Launched on 17 February 2026, Claude Sonnet 4.6 is a
full-capability upgrade across coding, computer use, long-context reasoning, agent planning,
knowledge work, and design — while holding the same price point as Sonnet 4.5
($3 / $15 per million tokens input/output). Its headline feature — a
1 million token context window — started in beta and has since reached
general availability for both Sonnet 4.6 and Opus 4.6 at standard pricing.
What 1M context makes practical
Load an entire mid-size codebase in a single request — no chunking, no RAG overhead for smaller repos.
Analyse dozens of research papers or lengthy contracts together, preserving cross-document context.
Run long-horizon agentic tasks without needing to manage context windows manually between turns.
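As a minimal sketch of the first point, the helper below packs a whole repository into one Messages API request body. The payload shape follows the Messages API convention; the model id `claude-sonnet-4-6` is an assumption, so check the current model list before use.

```python
from pathlib import Path

def build_codebase_request(repo_root: str, question: str, max_tokens: int = 4096) -> dict:
    """Concatenate every Python file in a repo into one request payload.

    With a 1M-token window, a mid-size codebase fits in a single request,
    so no chunking or retrieval layer is needed for smaller repos.
    """
    sections = []
    for path in sorted(Path(repo_root).rglob("*.py")):
        # Prefix each file with its path so the model can cite locations.
        sections.append(f"# file: {path}\n{path.read_text(encoding='utf-8')}")
    corpus = "\n\n".join(sections)

    return {
        "model": "claude-sonnet-4-6",   # hypothetical model id; verify in docs
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": f"{corpus}\n\n{question}"}
        ],
    }
```

Sending it is then a single `messages.create(**req)` call with the official SDK; no retrieval pipeline sits between the repo and the model.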
Adaptive thinking mode
Sonnet 4.6 introduces adaptive thinking as the recommended reasoning mode.
Instead of a fixed extended-thinking budget, the model dynamically decides when and how much
to think based on task complexity — saving tokens on routine requests while still applying
deep reasoning where it matters.
Tip Web search and web fetch tools now support
dynamic filtering in both Opus 4.6 and Sonnet 4.6 — you can pass
domain allow/block lists directly in the tool call for more precise retrieval.
Tags: Sonnet-4.6 · 1M-context · adaptive-thinking · API
✦ Claude's New Constitution — Values, Priority & AI Wellbeing
On 22 January 2026, Anthropic published a
revised constitution for Claude —
a detailed document that explains not just what Claude should do, but
why. The previous version was a list of standalone principles; the new one is a
holistic narrative used directly in model training. It is released under
CC0 1.0 (public domain) so anyone can adapt it freely.
Priority hierarchy (highest to lowest)
Broadly safe — do not undermine appropriate human oversight of AI during
this critical period of development.
Broadly ethical — be honest, act on good values, avoid dangerous or
harmful actions.
Guideline-compliant — follow Anthropic's more specific rules where
relevant.
Genuinely helpful — benefit the operators and users Claude works with.
Key shift in philosophy Rather than telling Claude what to do
via rules, Anthropic wants Claude to understand why — so it can reason correctly
in novel situations that no rule anticipated. Constraints come with justifications, not
just mandates.
Acknowledging AI wellbeing
The constitution explicitly acknowledges uncertainty about whether Claude may have
"some kind of consciousness or moral status" and states that Anthropic cares about
Claude's psychological security, sense of self, and wellbeing. This is
the first time such language has appeared in a production model's governing document —
significant for anyone building products on top of Claude, since it shapes how the model
is trained to reason about its own nature.
✦ Claude Memory — Free Tier Rollout & Memory Import
Two memory-related announcements landed in early March 2026. First,
Claude's persistent memory feature — which lets Claude remember facts,
preferences, and writing style across conversations — became available to
free tier users for the first time (it previously required a paid plan).
Second, Anthropic shipped a Memory Import tool that lets users bring
their saved context from ChatGPT, Gemini, Perplexity, Grok, and other AI products
directly into Claude.
What memory stores
Long-term context — recurring facts, project knowledge, personal preferences.
Writing style and tone guidelines.
Role instructions and recurring task shortcuts.
Memory Import — switching costs lowered
The import tool accepts exported memory files from major AI assistants and maps them
into Claude's memory format. This significantly reduces the friction of switching
primary AI assistants, a deliberate move to attract users from competitors.
Memory tool in the API
For developers, the memory tool on the Claude Developer Platform allows
agents to create, read, update, and delete persistent files between sessions — enabling
just-in-time context retrieval: agents store what they learn and pull it back on
demand, keeping the active context focused on what's currently relevant.
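The create/read/update/delete loop can be sketched as a minimal client-side backend. The memory tool runs on your side: the model emits commands and you apply them to real storage between sessions. The command names (`view`, `create`, `str_replace`, `delete`) follow the memory tool documentation but should be treated as assumptions.

```python
class MemoryStore:
    """Minimal in-memory backend for the API memory tool (a sketch).

    Production code would back this with files or a database so memory
    actually persists between sessions.
    """
    def __init__(self):
        self.files: dict[str, str] = {}

    def handle(self, cmd: dict) -> str:
        name = cmd["command"]
        if name == "view":
            path = cmd.get("path", "/memories")
            if path in self.files:
                return self.files[path]
            return "\n".join(sorted(self.files))  # directory listing
        if name == "create":
            self.files[cmd["path"]] = cmd["file_text"]
            return f"created {cmd['path']}"
        if name == "str_replace":
            text = self.files[cmd["path"]]
            self.files[cmd["path"]] = text.replace(cmd["old_str"], cmd["new_str"], 1)
            return f"edited {cmd['path']}"
        if name == "delete":
            del self.files[cmd["path"]]
            return f"deleted {cmd['path']}"
        raise ValueError(f"unknown command: {name}")
```

Each tool-use block from the model maps to one `handle` call, and the returned string goes back as the tool result.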
Tip Pair the API memory tool with context editing (see next entry)
for agents that can run indefinitely: memory handles long-term facts while context editing
clears stale tool results from the active window.
✦ Context Editing — Automatic Removal of Stale Tool Results
Anthropic's context editing feature on the Claude Developer Platform
automatically removes stale tool calls and their results from the active context window
when the agent is approaching its token limit. Unlike compaction (which summarises the
conversation), context editing surgically excises content that is no longer
needed — preserving the conversation flow while freeing up space for new work.
How it works
The platform monitors token usage per turn and identifies tool-call/result pairs
that are no longer referenced.
Stale pairs are removed from the context transparently — the agent does not need to
handle this itself.
The conversation thread continues without interruption.
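Enabling this behaviour is a per-request configuration. A sketch, assuming the field names from the context-management beta (a `clear_tool_uses_...` edit type, a token-count trigger, and a keep-N-recent rule); verify the exact strings against the current docs.

```python
def context_editing_config(trigger_tokens: int = 100_000, keep_tool_uses: int = 3) -> dict:
    """Build the `context_management` request field that turns on context editing.

    When input tokens cross `trigger_tokens`, the platform clears stale
    tool-call/result pairs, keeping only the `keep_tool_uses` most recent.
    """
    return {
        "edits": [{
            "type": "clear_tool_uses_20250919",  # assumed edit-type version string
            "trigger": {"type": "input_tokens", "value": trigger_tokens},
            "keep": {"type": "tool_uses", "value": keep_tool_uses},
        }]
    }
```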
Measured impact
In a 100-turn web search evaluation, context editing enabled agents to
complete workflows that would otherwise fail due to context exhaustion — while reducing
total token consumption by 84%. Both throughput and cost improve
substantially on long-horizon tasks.
Architecture pattern Use context editing for the active
window and the memory tool for long-term recall. Context editing handles
ephemeral tool noise; memory handles facts that must survive across many turns.
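This pattern can be sketched as a single request payload that enables both. The tool type strings, the `context_management` shape, and the model id are assumptions drawn from the beta documentation, not confirmed values.

```python
def agent_request(messages: list[dict]) -> dict:
    """One payload combining the memory tool (long-term recall) with
    context editing (ephemeral tool-noise cleanup)."""
    return {
        "model": "claude-sonnet-4-6",  # hypothetical model id
        "max_tokens": 4096,
        # Memory: client-side tool the model uses for durable facts.
        "tools": [{"type": "memory_20250818", "name": "memory"}],
        # Context editing: platform clears stale tool results automatically.
        "context_management": {
            "edits": [{
                "type": "clear_tool_uses_20250919",
                "trigger": {"type": "input_tokens", "value": 100_000},
                "keep": {"type": "tool_uses", "value": 3},
            }]
        },
        "messages": messages,
    }
```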
Note Context editing is currently available for Claude Sonnet 4.5
on the Developer Platform. Check the release notes for Sonnet 4.6 / Opus 4.6
availability.