
OpenClaw Preserves Anthropic Thinking Block Order

OpenClaw · Anthropic · Extended Thinking · March 23, 2026 · Vincent Koc

About the Author

Vincent Koc is a prolific OpenClaw contributor and core maintainer. His work spans security hardening, multi-provider support, and gateway reliability — making OpenClaw production-ready for enterprise deployments.

What Changed

A fix (#52961) ensures that Anthropic's extended thinking blocks maintain their original order when processed through OpenClaw's agent pipeline.

When Claude uses extended thinking (the reasoning traces visible in models like Claude Opus 4), the response contains multiple content blocks: thinking blocks interleaved with regular text. The order of these blocks is semantically meaningful — the thinking should appear where it occurred in the reasoning process.

Why This Matters

Extended thinking is increasingly important for complex agentic tasks. When an AI assistant reasons through a multi-step problem, users (and downstream systems) benefit from seeing the reasoning in the order it occurred.

The Ordering Problem
When gateway middleware processes streaming responses, it's tempting to collect all thinking blocks together and all text blocks together. This breaks the semantic relationship between a piece of reasoning and the action it produced.
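A minimal Python sketch of the failure mode. The block shapes follow Anthropic's content-block format; both functions are illustrative, not OpenClaw's actual middleware code:

```python
# Illustrative content blocks in the order the API returned them.
blocks = [
    {"type": "thinking", "thinking": "Let me consider the options..."},
    {"type": "text", "text": "Based on the analysis, I recommend..."},
    {"type": "thinking", "thinking": "Wait, I should also check..."},
    {"type": "text", "text": "Actually, there's one more consideration..."},
]

def group_by_type(blocks):
    """The tempting-but-wrong approach: all thinking first, then all text."""
    return ([b for b in blocks if b["type"] == "thinking"]
            + [b for b in blocks if b["type"] == "text"])

def preserve_order(blocks):
    """The fix, conceptually: pass blocks through in their original sequence."""
    return list(blocks)

print([b["type"] for b in group_by_type(blocks)])
# ['thinking', 'thinking', 'text', 'text'] -- interleaving lost
print([b["type"] for b in preserve_order(blocks)])
# ['thinking', 'text', 'thinking', 'text'] -- order intact
```

Once the blocks are grouped, there is no way to recover which piece of reasoning led to which piece of output, which is why the fix preserves the sequence end to end.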

Technical Details

Anthropic's API returns content blocks like:

[
  { "type": "thinking", "thinking": "Let me consider the options..." },
  { "type": "text", "text": "Based on the analysis, I recommend..." },
  { "type": "thinking", "thinking": "Wait, I should also check..." },
  { "type": "text", "text": "Actually, there's one more consideration..." }
]

The fix ensures OpenClaw preserves this exact order rather than grouping blocks by type, so each piece of reasoning stays adjacent to the output it produced.
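Order preservation matters most under streaming, where blocks arrive as incremental events rather than a finished list. Here is a hedged sketch of reassembling an ordered content list from Anthropic-style streaming events; the event and delta names follow Anthropic's documented streaming format, but the accumulator itself is illustrative, not the actual patch:

```python
def accumulate(events):
    """Rebuild content blocks from streaming events, keyed by the block
    index the API supplies, then emit them in index order so the original
    interleaving of thinking and text survives."""
    blocks = {}
    for ev in events:
        if ev["type"] == "content_block_start":
            # A new block begins at a given position in the response.
            blocks[ev["index"]] = dict(ev["content_block"])
        elif ev["type"] == "content_block_delta":
            # Append incremental content to the block at that position.
            block = blocks[ev["index"]]
            delta = ev["delta"]
            if delta["type"] == "thinking_delta":
                block["thinking"] = block.get("thinking", "") + delta["thinking"]
            elif delta["type"] == "text_delta":
                block["text"] = block.get("text", "") + delta["text"]
    # Sorting by index, not by type, is what preserves the order.
    return [blocks[i] for i in sorted(blocks)]

# Illustrative event stream: one thinking block followed by one text block.
events = [
    {"type": "content_block_start", "index": 0,
     "content_block": {"type": "thinking", "thinking": ""}},
    {"type": "content_block_delta", "index": 0,
     "delta": {"type": "thinking_delta", "thinking": "Let me consider..."}},
    {"type": "content_block_start", "index": 1,
     "content_block": {"type": "text", "text": ""}},
    {"type": "content_block_delta", "index": 1,
     "delta": {"type": "text_delta", "text": "I recommend..."}},
]

print([b["type"] for b in accumulate(events)])
# ['thinking', 'text']
```

Keying on the API-supplied index means the middleware never has to guess where a block belongs, even if it buffers or transforms blocks along the way.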

Broader Context

This fix is part of a busy day for OpenClaw releases — alongside macOS publishing automation, plugin bundling improvements, and session transcript fixes. The project continues maturing its multi-provider support as different LLM providers expose different response structures.

Extended thinking support across providers remains an area of active development. As more models adopt chain-of-thought reasoning, maintaining semantic structure through middleware layers becomes increasingly important.

← Back to Repo Pulse