OpenClaw Adds Custom Provider Onboarding: Local AI Gets First-Class Support
A new configuration flow in OpenClaw makes it dramatically easier to connect self-hosted and custom AI providers. For privacy-conscious users and enterprises running local models, this removes a significant barrier to adoption.
About the Contributors
Blossom contributed the custom/local API configuration flow, and Gustavo Madeira Santana followed up with UX improvements, renaming "Custom API Endpoint" to the clearer "Custom Provider." Together, these changes represent a significant investment in supporting diverse AI infrastructure.
Why This Matters
OpenClaw has grown to nearly 179,000 stars by making AI assistants accessible across platforms. But until now, connecting to anything beyond the major cloud providers (Anthropic, OpenAI, etc.) required manually editing configuration files, a barrier for many users.
This PR introduces a guided onboarding flow for custom providers, bringing the same polish that exists for Anthropic or OpenAI to self-hosted setups like:
- Ollama — Local model inference
- LM Studio — Desktop model runner
- vLLM — High-performance inference server
- Text Generation Inference (TGI) — Hugging Face's inference solution
- LocalAI — OpenAI-compatible local inference
- Enterprise deployments — Internal AI services behind corporate firewalls
Technical Implementation
The Configuration Flow
The new onboarding flow guides users through:
- Provider selection — Choose "Custom Provider" from the provider list
- Endpoint configuration — Enter the base URL for your API (e.g., http://localhost:11434 for Ollama)
- Authentication — Optional API key if your provider requires it
- Model selection — List available models or manually specify model names
- Validation — Test the connection before saving
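Once the flow completes, the collected values presumably end up in a provider entry along these lines. The field names below are illustrative only, not OpenClaw's actual configuration schema:

```json
{
  "providers": [
    {
      "type": "custom",
      "name": "local-ollama",
      "baseUrl": "http://localhost:11434",
      "apiKey": null,
      "models": ["llama3", "mistral"]
    }
  ]
}
```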
Connection Validation
The flow includes automatic endpoint testing, catching common issues like incorrect ports, missing authentication, or unreachable hosts before users save their configuration.
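As a rough sketch of what such a pre-save check might look like, the snippet below probes an OpenAI-compatible endpoint's model list. It assumes a `/v1/models` route and `Bearer` auth, which most of the servers listed above expose; OpenClaw's actual validation code is not shown here:

```python
import json
import urllib.error
import urllib.request


def validate_endpoint(base_url, api_key=None, timeout=5.0):
    """Probe an OpenAI-compatible endpoint before saving its config.

    Returns (True, [model ids]) on success, or (False, reason) for the
    common failure modes: wrong port/host, missing auth, bad path.
    """
    url = base_url.rstrip("/") + "/v1/models"
    req = urllib.request.Request(url)
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            data = json.load(resp)
        # OpenAI-style responses list models under a "data" key
        models = [m.get("id", "?") for m in data.get("data", [])]
        return True, models
    except urllib.error.HTTPError as e:
        # Server reachable but rejected the request (e.g. 401: missing key)
        return False, f"HTTP {e.code} from {url}"
    except urllib.error.URLError as e:
        # Connection refused, DNS failure, timeout: host/port problem
        return False, f"unreachable: {e.reason}"
```

Surfacing the failure reason, rather than a bare pass/fail, is what lets the flow distinguish "wrong port" from "missing API key" before the user saves.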
Terminology Refinement
The follow-up commit from Gustavo renames "Custom API Endpoint" to "Custom Provider" throughout the UI. This subtle change clarifies that users are adding a provider (like Anthropic or OpenAI), not just an endpoint—reinforcing that custom providers are first-class citizens in OpenClaw.
Enterprise Implications
For enterprise deployments, this feature is particularly significant:
- Data sovereignty — Companies can use local models without data leaving their network
- Compliance — Regulated industries can maintain AI capabilities while meeting data residency requirements
- Cost control — Self-hosted inference can be more economical at scale
- Air-gapped environments — Disconnected networks can still run AI assistants
Combined with the GitHub Enterprise Cloud support for the Copilot provider, also merged today, OpenClaw is clearly prioritizing enterprise deployment scenarios.
The Broader Context
This change reflects a maturing understanding of how AI assistants will be deployed. The early assumption—that everyone would call cloud APIs—is giving way to a more nuanced reality:
- Privacy-conscious individuals prefer local inference
- Developers want to test against local models before incurring API costs
- Enterprises need flexible deployment options
- Researchers require custom model configurations
By making custom providers a first-class experience, OpenClaw positions itself as infrastructure-agnostic—a key differentiator as the AI assistant market matures.
What's Next
- For self-hosters: Upgrade to get the new onboarding flow; expect a smoother setup experience
- For enterprises: Evaluate whether this simplifies your internal AI assistant deployments
- Watch for: Possible expansion of provider-specific optimizations (e.g., Ollama-specific features)