03
Framework

Anthropic XML-Tagged

Anthropic's recommended format for Claude — XML tags that sharpen instruction adherence on long prompts.

<role> · <context> · <instructions> · <examples> · <constraints> · <output_format>
Origin

Anthropic's prompt engineering guide explicitly recommends XML tags as the primary structural device for Claude. The model was trained on a large volume of XML-structured documents, so tagged prompts land on a pattern it has already learned. Independent benchmarks and the Claude Cookbook report roughly 20–40% better instruction-following on long prompts when sections are tagged.

Anthropic-recommended · Claude API customers · MIT 6.S898 curriculum
The anatomy

Every Anthropic XML-Tagged prompt has these sections.

01

<role>

Identity and expertise. Claude tends to treat this as the primary frame for every reply.

02

<context>

Background, stakes, what's been tried, and what's off-limits. Claude uses this to decide what is relevant and what to ignore.

03

<instructions>

The numbered task list. Tagged separately so Claude does not blend it into context.

04

<examples>

Few-shot input/output pairs. Wrapped in tags so Claude pattern-matches structure, not surface words.

05

<constraints>

Hard 'must not' rules. Claude tends to weight rules more heavily when they appear in a dedicated constraints tag.

06

<output_format>

The exact deliverable schema. Claude follows tagged format specs noticeably more reliably than untagged ones.
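The six sections above can be assembled mechanically. A minimal sketch in Python — the `build_prompt` helper and its behavior are illustrative assumptions, not an official Anthropic API; it is plain string assembly in the canonical section order:

```python
# Canonical section order from the anatomy above.
SECTION_ORDER = ["role", "context", "instructions", "examples", "constraints", "output_format"]

def build_prompt(sections: dict[str, str]) -> str:
    """Wrap each provided section in its XML tag, in canonical order."""
    parts = []
    for name in SECTION_ORDER:
        body = sections.get(name)
        if body:  # sections the caller leaves out are simply skipped
            parts.append(f"<{name}>\n{body.strip()}\n</{name}>")
    return "\n\n".join(parts)

prompt = build_prompt({
    "role": "You are a staff engineer reviewing pull request descriptions.",
    "instructions": "1. List any missing disclosures.\n2. Suggest a fix for each.",
    "constraints": "Do not rewrite the PR description yourself.",
})
print(prompt.splitlines()[0])  # → <role>
```

Omitted sections produce no empty tags, so the same helper covers both short and fully specified prompts.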

When to use
  • Any prompt running on Claude Sonnet, Opus, or Haiku.
  • Long prompts (>1,500 tokens) where instruction drift is a real risk.
  • Multi-document analysis where each document needs its own tag.
  • Agentic workflows where Claude has to keep tools, context, and constraints separate.
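For the multi-document case, a minimal sketch of giving each source its own tag — the tag names here are arbitrary illustrative choices, not an Anthropic requirement:

```python
# One tag per document so Claude can separate and cite sources.
documents = {
    "pr_description": "Adds retry logic to the payments worker.",
    "style_guide": "Every PR must name its rollback plan.",
}
tagged_docs = "\n\n".join(
    f"<{name}>\n{text}\n</{name}>" for name, text in documents.items()
)
print(tagged_docs.startswith("<pr_description>"))  # → True
```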
When not to use
  • Short prompts on GPT-family models (XML tags are neutral, not harmful, but add noise).
  • Simple completions where the structural overhead isn't worth it.
Best models
Claude Sonnet 4.5Claude Opus 4Claude Haiku 4.5
Worked example

From one sentence to an Anthropic XML-Tagged section.

Input

"Review this PR description and tell me what's missing."

Sample Anthropic XML-Tagged section
<role>You are a staff engineer reviewing a pull request description for clarity, completeness, and risk disclosure.</role>

<context>Our team requires every PR to disclose: customer impact, rollback plan, observability hooks…</context>
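Filled out end to end, the example above becomes one tagged string. The `<instructions>`, `<constraints>`, and `<output_format>` bodies below are illustrative placeholders (they are not part of the original sample), and the commented-out call sketches how the string would be sent with the anthropic Python SDK:

```python
# Full PR-review prompt, assembled section by section.
# Sections beyond <role> and <context> are illustrative assumptions.
prompt = """\
<role>You are a staff engineer reviewing a pull request description for clarity, completeness, and risk disclosure.</role>

<context>Our team requires every PR to disclose: customer impact, rollback plan, observability hooks…</context>

<instructions>
1. Check the PR description against each required disclosure.
2. List anything missing, most important first.
</instructions>

<constraints>Do not rewrite the description; only report gaps.</constraints>

<output_format>A markdown checklist, one line per missing item.</output_format>"""

# Sending it (requires ANTHROPIC_API_KEY; model id is an assumption):
# import anthropic
# client = anthropic.Anthropic()
# reply = client.messages.create(
#     model="claude-sonnet-4-5",
#     max_tokens=1024,
#     messages=[{"role": "user", "content": prompt}],
# )
# print(reply.content[0].text)
```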

Ready to write your first Anthropic XML-Tagged prompt?

Drop in a sentence. The Forge analyzes your task, asks the right clarifying questions, then generates a complete enterprise meta-prompt structured exactly to the Anthropic XML-Tagged format.