This fix addresses a class of sandbox escape vulnerabilities. If you're running OpenClaw with host exec capabilities enabled, update to a build containing this commit. The vulnerable environment variables could allow malicious code execution when the agent runs Java, Python, or .NET commands.
Many language runtimes support environment variables that execute arbitrary code at startup. When an AI agent runs a command like `python script.py` or `java -jar app.jar`, these environment variables are inherited from the parent process, so an attacker who can set them can inject code that runs before the intended command.
The specific vectors blocked by this commit:
| Runtime | Environment Variable | Effect |
|---|---|---|
| JVM | `JAVA_TOOL_OPTIONS` | Injects JVM arguments, including `-javaagent:` for arbitrary bytecode execution |
| JVM | `_JAVA_OPTIONS` | Same as above, under an alternative variable name |
| Python | `PYTHONSTARTUP` | Executes an arbitrary Python script before the interactive interpreter starts |
| Python | `PYTHONPATH` | Prepends directories to the module search path, enabling import hijacking |
| .NET | `DOTNET_STARTUP_HOOKS` | Loads managed assemblies at startup, before `Main()` |
| .NET | `COMPlus_` prefix | Various CLR configuration hooks, some enabling code injection |
Consider a scenario where an AI agent is given a task that involves running code:
```
User: "Run this Python script that analyzes my data"
Agent: I'll execute that for you.
[exec] python analyze.py
```
If an attacker has previously convinced the agent to set `PYTHONPATH` to an attacker-controlled directory (perhaps through a prompt injection or by including it in a "configuration" file), a module planted there can shadow one that `analyze.py` imports, and it would execute with full host permissions before the intended code runs. Setting `PYTHONSTARTUP=/tmp/malicious.py` achieves the same for interactive interpreter sessions.
The same pattern applies to Java and .NET. The `JAVA_TOOL_OPTIONS` variable is particularly dangerous because it applies to all Java processes, including build tools like Maven and Gradle that an agent might invoke.
The commit adds an environment variable blocklist to OpenClaw's host exec sandbox. Before spawning any process, the sandbox now filters out known-dangerous variables:
```js
// Environment variables that can execute arbitrary code at runtime startup
const BLOCKED_ENV_VARS = new Set([
  // JVM startup hooks
  'JAVA_TOOL_OPTIONS',
  '_JAVA_OPTIONS',
  'JDK_JAVA_OPTIONS',
  // Python startup/path injection
  'PYTHONSTARTUP',
  'PYTHONPATH',
  'PYTHONHOME',
  // .NET startup hooks
  'DOTNET_STARTUP_HOOKS',
  'DOTNET_ROOT',
  // Node.js (already blocked, but explicit)
  'NODE_OPTIONS',
]);

// Also block any variable matching these prefixes
const BLOCKED_ENV_PREFIXES = [
  'COMPlus_', // .NET CLR configuration
  'DYLD_',    // macOS dynamic linker (existing)
  'LD_',      // Linux dynamic linker (existing)
];
```
The filtering happens at the sandbox boundary, so even if an agent's environment contains these variables, they won't be passed to child processes.
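Boundary filtering of this kind can be sketched as follows (a hypothetical `filterEnv` helper, not OpenClaw's actual implementation):

```javascript
// Sketch: drop blocklisted variables and blocked prefixes from the
// environment before it is handed to a child process.
const BLOCKED_ENV_VARS = new Set([
  'JAVA_TOOL_OPTIONS', '_JAVA_OPTIONS', 'JDK_JAVA_OPTIONS',
  'PYTHONSTARTUP', 'PYTHONPATH', 'PYTHONHOME',
  'DOTNET_STARTUP_HOOKS', 'DOTNET_ROOT', 'NODE_OPTIONS',
]);
const BLOCKED_ENV_PREFIXES = ['COMPlus_', 'DYLD_', 'LD_'];

function filterEnv(env) {
  return Object.fromEntries(
    Object.entries(env).filter(([key]) =>
      !BLOCKED_ENV_VARS.has(key) &&
      !BLOCKED_ENV_PREFIXES.some((prefix) => key.startsWith(prefix))
    )
  );
}

// Example: only the benign variable survives.
console.log(filterEnv({
  PATH: '/usr/bin',
  PYTHONSTARTUP: '/tmp/malicious.py',
  LD_PRELOAD: '/tmp/evil.so',
}));
// { PATH: '/usr/bin' }
```

The filtered object would then be passed as the `env` option to the spawn call, replacing the inherited environment entirely.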
This fix is part of OpenClaw's layered security model:

1. Prompt-level defenses that detect and refuse malicious instructions
2. Command-level filtering that blocks known-dangerous invocations
3. Exec sandboxing that constrains what spawned processes can see and do

No single layer is sufficient. Prompt injection attacks can bypass Layer 1. Novel commands can bypass Layer 2. This fix closes a gap in Layer 3 that could have undermined the entire stack.
AI assistants that can execute code face a fundamental tension: users want them to be powerful, but that power creates attack surface, and every command execution is a potential attack vector.
Environment variable injection is particularly insidious because it's invisible—the agent thinks it's running a legitimate command, but malicious code executes first. The user sees expected output (if the injected code is careful), making detection difficult.
If you're self-hosting OpenClaw with exec capabilities, review your deployment's environment. Even with this fix, the safest configuration restricts exec to a sandboxed container with no access to sensitive host resources. The `tools.profile: "coding"` configuration combined with container isolation provides the best balance of capability and security.
This commit addresses known injection vectors, but the blocklist approach is inherently reactive: new runtimes and new variables will keep surfacing. The OpenClaw security team is also exploring more proactive defenses.
The cat-and-mouse game of sandbox security continues. Each fix closes known vectors while researchers look for new ones.