Peter Steinberger's commit 8da3a9a automatically enables OpenAI's server-side response compaction when using the Responses API — a feature that intelligently summarizes previous conversation turns to preserve context window space while maintaining conversation coherence.
This addresses a longstanding operational challenge: as AI conversations grow longer, they consume increasingly expensive context window tokens. Without compaction, users either hit context limits (causing conversation breaks) or pay premium prices for extended context models.
Peter Steinberger is a core maintainer of OpenClaw and has been leading the project's infrastructure evolution. His recent work has focused on security hardening (path canonicalization, exec approvals) and operational improvements. This commit continues his pattern of making OpenClaw's defaults smarter without requiring user configuration.
The implementation leverages OpenAI's Responses API compaction feature. Client-side compaction requires the AI assistant to summarize its own context, a meta-task that itself consumes tokens and can introduce errors. Server-side compaction offloads this to OpenAI's infrastructure, which can optimize more aggressively because it has access to model internals.
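To make the trade-off concrete, here is a minimal sketch of what a client-side compaction pass looks like, and why it needs an extra model call that server-side compaction avoids. All names here are hypothetical illustrations, not OpenClaw's actual API.

```typescript
interface Turn {
  role: "user" | "assistant" | "system";
  content: string;
}

// Stand-in for the extra model call that client-side compaction requires.
// In a real client this would be a completion request, which consumes its
// own tokens and can itself mis-summarize.
function summarize(turns: Turn[]): string {
  return `Summary of ${turns.length} earlier turn(s).`;
}

// Keep the most recent `keep` turns verbatim; fold everything older into a
// single synthetic system turn. Server-side compaction makes this client
// logic unnecessary.
function compactClientSide(history: Turn[], keep: number): Turn[] {
  if (history.length <= keep) return history;
  const older = history.slice(0, history.length - keep);
  const recent = history.slice(history.length - keep);
  return [{ role: "system", content: summarize(older) }, ...recent];
}
```

The point of the sketch is the `summarize` call: every client-side pass pays for it, per compaction, on top of the normal request.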
This change has three significant implications:
Long-running sessions with AI assistants can accumulate substantial context. By compacting older turns, users pay for fewer tokens while maintaining conversation continuity. For power users running always-on assistants, this directly reduces operational costs.
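A back-of-envelope calculation shows why resending fewer input tokens matters. The price and token counts below are made-up placeholders for illustration, not real OpenAI rates.

```typescript
// Cost of the input side of one request, given a price per million tokens.
function inputCostUSD(tokens: number, pricePerMTokUSD: number): number {
  return (tokens / 1_000_000) * pricePerMTokUSD;
}

const pricePerMTok = 2.5;    // hypothetical $/1M input tokens
const fullHistory = 120_000; // tokens resent every turn without compaction
const compacted = 20_000;    // tokens after older turns are summarized

// Savings on a single turn; multiply by turns per day for a running session.
const savedPerTurn =
  inputCostUSD(fullHistory, pricePerMTok) - inputCostUSD(compacted, pricePerMTok);
```

Because the full history is resent on every turn, the saving compounds linearly with conversation length, which is why always-on assistants benefit most.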
Previously, hitting context limits meant either starting fresh (losing conversation history) or manually truncating context (risking lost information). Auto-compaction enables genuinely long-running assistant relationships without artificial breaks.
OpenClaw previously relied on client-side strategies for context management. Delegating this to OpenAI's API reduces code complexity and removes a source of potential bugs in OpenClaw's context handling logic.
The companion commit from Rodrigo Uroz (0fe6cf0) addresses a subtle but important issue: when conversations are compacted, summaries must preserve opaque identifiers (like file references, session IDs, or tool call IDs) that appear in the original conversation.
Without this fix, a compacted conversation might lose reference to a file the user uploaded or a tool the assistant invoked — breaking continuity in ways that are difficult to debug.
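The invariant the companion fix protects can be checked mechanically: every opaque identifier present in the original turns must survive in the compacted summary. The ID pattern below is a hypothetical example format, not OpenClaw's actual one.

```typescript
// Hypothetical opaque-identifier shapes: tool call, file, and session IDs.
const ID_PATTERN = /\b(?:call|file|sess)_[A-Za-z0-9]+\b/g;

function extractIds(text: string): Set<string> {
  return new Set(text.match(ID_PATTERN) ?? []);
}

// Returns the identifiers a candidate summary dropped; an empty result
// means the summary is safe to substitute for the original turns.
function droppedIds(original: string, summary: string): string[] {
  const kept = extractIds(summary);
  return [...extractIds(original)].filter((id) => !kept.has(id));
}
```

A guard like this turns the hard-to-debug failure mode ("the assistant forgot a file existed") into an explicit, testable condition at compaction time.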
This auto-enablement applies specifically to OpenAI Responses API usage. Other providers may require different strategies: