Anthropic XML-Tagged
Anthropic's official Claude format — XML tags that lock in instruction adherence on long prompts.
Anthropic's prompt engineering guide explicitly recommends XML tags as the primary structural device for Claude. The model was trained on a large volume of XML-structured text, so tagged prompts match a pattern it already knows. Anthropic's own documentation and Claude Cookbook examples consistently show markedly better instruction-following on long prompts when sections are tagged.
Every Anthropic XML-Tagged prompt has these sections.
<role>
Identity and expertise. Claude treats this as the highest-priority frame for every reply.
<context>
Background, stakes, what's been tried, and what's off-limits. Claude uses this section to decide what matters and what to set aside.
<instructions>
The numbered task list. Tagged separately so Claude does not blend it into context.
<examples>
Few-shot input/output pairs. Wrapped in tags so Claude pattern-matches structure, not surface words.
<constraints>
Hard 'must not' rules. Isolating them in their own tag leads Claude to weight them more heavily.
<output_format>
The exact deliverable schema. Claude follows tagged format specs noticeably more reliably than untagged ones.
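The six sections above are just concatenated tag blocks, so assembling one programmatically is straightforward. A minimal sketch (the function name `build_claude_prompt` and the sample section text are illustrative, not part of any SDK):

```python
def build_claude_prompt(role, context, instructions,
                        examples, constraints, output_format):
    """Assemble the six sections into an XML-tagged Claude prompt."""
    sections = [
        ("role", role),
        ("context", context),
        ("instructions", instructions),
        ("examples", examples),
        ("constraints", constraints),
        ("output_format", output_format),
    ]
    parts = []
    for tag, body in sections:
        if body:  # skip empty sections rather than emit empty tags
            parts.append(f"<{tag}>\n{body.strip()}\n</{tag}>")
    return "\n\n".join(parts)

prompt = build_claude_prompt(
    role="You are a staff engineer reviewing pull requests.",
    context="Every PR must disclose customer impact and a rollback plan.",
    instructions="1. List missing disclosures.\n2. Rate overall clarity.",
    examples="",
    constraints="Do not rewrite the PR description yourself.",
    output_format="A bullet list of gaps, then a one-line verdict.",
)
print(prompt.splitlines()[0])  # → <role>
```

The resulting string goes into the user (or system) message as-is; dropping empty sections keeps the prompt free of hollow tags.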
Use it for:
- Any prompt running on Claude Sonnet, Opus, or Haiku.
- Long prompts (>1,500 tokens) where instruction drift is a real risk.
- Multi-document analysis where each document needs its own tag.
- Agentic workflows where Claude has to keep tools, context, and constraints separate.

Skip it for:
- Short prompts on GPT-family models (XML tags aren't harmful there, just unnecessary noise).
- Simple completions where the structural overhead isn't worth it.
From one sentence to an Anthropic XML-Tagged section.
"Review this PR description and tell me what's missing."
<role>You are a staff engineer reviewing a pull request description for clarity, completeness, and risk disclosure.</role> <context>Our team requires every PR to disclose: customer impact, rollback plan, observability hooks…</context>
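Because the format is a fixed set of tag pairs, it's easy to lint a draft prompt before sending it. A hedged sketch (the `missing_sections` helper is hypothetical; a plain regex is used rather than an XML parser, since section bodies are free text, not strict XML):

```python
import re

REQUIRED_TAGS = ["role", "context", "instructions",
                 "examples", "constraints", "output_format"]

def missing_sections(prompt):
    """Return the required tags that don't appear as a matched <tag>...</tag> pair."""
    missing = []
    for tag in REQUIRED_TAGS:
        # DOTALL lets section bodies span multiple lines
        if not re.search(rf"<{tag}>.*?</{tag}>", prompt, re.DOTALL):
            missing.append(tag)
    return missing

snippet = ("<role>Staff engineer reviewing a PR.</role> "
           "<context>PRs must disclose customer impact.</context>")
print(missing_sections(snippet))
# → ['instructions', 'examples', 'constraints', 'output_format']
```

Running this on the PR-review fragment above flags the four sections the one-liner never supplied, which is exactly the gap a full Anthropic XML-Tagged prompt fills in.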
Ready to write your first Anthropic XML-Tagged prompt?
Drop a sentence. The Forge analyzes your task, asks the right clarifying questions, then generates a complete enterprise meta prompt structured exactly to Anthropic XML-Tagged.