OpenClaw Self-Hosted Deployment - Running It On Your Own Infrastructure
Deploy OpenClaw for Your Business
Secure deployment in 48 hours. Choose personal setup or fully managed.
One of the first questions I hear from technical teams evaluating OpenClaw is "where does it run?" The answer is: wherever you want it to run. Your laptop, a VPS, a Raspberry Pi, a server in your office, or a Docker container in your existing infrastructure. OpenClaw is designed to be self-hosted, and the deployment patterns are flexible enough to fit most environments.
Here's what I've learned deploying it across different setups.
Deployment Options
OpenClaw runs anywhere you can run Node.js 22. The gateway process is a single long-lived daemon that manages all messaging channels, agent connections, and client interfaces. The main deployment options are:
Direct install on a server or VM. Install Node.js, run the OpenClaw installer, start the gateway as a daemon. This is the simplest path and works well for teams that don't want to deal with container orchestration.
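If you go the direct-install route, a process supervisor keeps the gateway daemon running across crashes and reboots. A minimal systemd unit might look like the following sketch; the unit name, service user, and `openclaw gateway` invocation are assumptions here, so check the installer's output for the exact daemon command on your system:

```ini
# /etc/systemd/system/openclaw-gateway.service (hypothetical unit)
[Unit]
Description=OpenClaw gateway
After=network-online.target
Wants=network-online.target

[Service]
# Run as a dedicated unprivileged user (assumed to exist)
User=openclaw
ExecStart=/usr/bin/env openclaw gateway
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now openclaw-gateway` so the gateway comes back automatically after a reboot.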
Docker. Build the OpenClaw image using the provided setup script, run it with Docker Compose. This is what we recommend for production deployments because it gives you clean isolation, reproducible builds, and straightforward updates.
Cloud VPS providers. OpenClaw has specific deployment guides for Fly.io, Railway, Render, Northflank, DigitalOcean, Oracle Cloud, Hetzner, and GCP. Each guide covers the platform-specific configuration needed.
On-premise hardware. We've seen OpenClaw running on everything from dedicated servers in data centres to Raspberry Pis. As long as it can run Node 22 and has enough memory for your workload, it works.
Docker Deployment in Detail
For most production setups, Docker is the way to go:
./docker-setup.sh
This handles the image build, runs the onboarding wizard inside the container, starts the gateway via Docker Compose, and generates authentication tokens.
Key environment variables for Docker:
- OPENCLAW_IMAGE to use a pre-built remote image
- OPENCLAW_SANDBOX to enable Docker-based sandboxing for agent tools
- OPENCLAW_EXTRA_MOUNTS to add additional host directories
- OPENCLAW_HOME_VOLUME to persist the home directory in a named Docker volume
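Putting those together, a setup invocation that pins a pre-built image and persists the home directory might look like this. The image path, mount paths, and volume name below are placeholders, not real values:

```shell
# Illustrative values only — substitute your own image tag, paths, and volume name.
export OPENCLAW_IMAGE="registry.example.com/openclaw:latest"  # pre-built remote image
export OPENCLAW_SANDBOX="true"                                # Docker-based tool sandboxing
export OPENCLAW_EXTRA_MOUNTS="/srv/data:/data"                # extra host directory to mount
export OPENCLAW_HOME_VOLUME="openclaw-home"                   # named volume for the home dir

# Then run the setup script with these in the environment:
# ./docker-setup.sh
```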
Storage works through bind mounts. Config and workspace directories mount to the host for persistence. Sandbox containers use tmpfs for temporary data. The main growth hotspots to watch are media folders, session files, and rolling logs.
The image runs as a non-root node user (UID 1000), which is the right default for production.
Remote Access Patterns
The gateway binds to loopback (127.0.0.1:18789) by default. Keeping it loopback-only is the safest configuration. To access it remotely, you have several options:
Tailscale is the recommended approach. Add both your gateway machine and your client devices to a tailnet, and they can communicate directly without exposing anything to the public internet. If you use Tailscale Serve with identity headers, you can enable tokenless access for the Control UI on trusted networks.
SSH tunnelling works well for ad-hoc access:
ssh -N -L 18789:127.0.0.1:18789 user@gateway-host
This forwards your local port 18789 to the gateway's port on the remote machine. CLI commands like openclaw health and openclaw status --deep work transparently through the tunnel.
VPN is the enterprise option. If your organisation already has a VPN, the gateway is just another internal service.
The documentation is clear about this: don't bind the gateway to a public interface unless you're absolutely sure you need to. And if you do, authentication tokens are mandatory.
Always-On Gateway Setup
The most common production pattern is an always-on gateway running on a VPS or dedicated server. Your laptop or workstation connects remotely when you need to interact with the Control UI or CLI.
This solves the main problem with running the gateway on a laptop: it keeps working when your laptop is closed, asleep, or off the network. Your WhatsApp agent keeps responding to customers, your Slack agent keeps answering questions, and your Discord agent keeps moderating your community.
The macOS app supports a "Remote over SSH" mode that automatically manages tunnelling to a remote gateway. Connect your laptop, work with the Control UI, disconnect, and the gateway keeps running.
On-Premise Deployment Considerations
For businesses with on-premise requirements (common in healthcare, financial services, and government), there are a few extra things to think about:
Local AI models. If data cannot leave your network, configure OpenClaw to use Ollama or vLLM for local model inference. This means no external API calls at all. The trade-off is that you need hardware capable of running the models, but for organisations with existing GPU infrastructure, this is a clean solution.
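The exact configuration shape varies by OpenClaw version, but pointing the model provider at a local Ollama endpoint looks roughly like this. The key names are illustrative; the only hard facts assumed are that Ollama listens on port 11434 by default and serves named models:

```json
{
  "model": {
    "provider": "ollama",
    "baseUrl": "http://127.0.0.1:11434",
    "model": "llama3.1:8b"
  }
}
```

With everything resolving to localhost, you can verify at the firewall that the gateway makes no outbound model calls.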
Secrets management. Use the SecretRef system to pull credentials from your existing secrets infrastructure. OpenClaw integrates with 1Password CLI, HashiCorp Vault, and sops out of the box. No API keys stored in plaintext config files.
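As an illustration, a 1Password-backed credential reference might look like the fragment below. The config key is hypothetical; the `op://vault/item/field` URI is 1Password's standard secret reference syntax, resolved by the 1Password CLI at read time rather than stored in the file:

```json
{
  "anthropicApiKey": { "secretRef": "op://Infra/Anthropic/credential" }
}
```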
Network isolation. The gateway only makes outbound connections to AI model providers and messaging platform APIs. If you're behind a restrictive firewall, you'll need to allow those specific endpoints. For fully air-gapped deployments with local models, no outbound connections are needed at all.
Monitoring. The gateway exposes health check endpoints that you can integrate with your existing monitoring stack. Use openclaw health for basic status and openclaw status --deep for detailed diagnostics.
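For a quick cron-driven probe, a small wrapper that turns any command's exit status into a parseable OK/FAIL line is enough to feed most alerting setups. This sketch assumes `openclaw health` exits non-zero when the gateway is unhealthy:

```shell
# Turn a command's exit status into an OK/FAIL line for log-based alerting.
check() {
  if "$@" >/dev/null 2>&1; then
    echo "OK: $*"
  else
    echo "FAIL: $*"
  fi
}

# Example usage on the gateway host:
# check openclaw health
# check openclaw status --deep
```

Pipe the output to your log shipper, or swap the `echo` for a pushgateway or webhook call to match your existing stack.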
Multi-Gateway Deployments
For larger organisations, OpenClaw supports running multiple gateway instances. Each gateway manages its own set of channels, agents, and connections. This is useful when:
- Different departments need completely separate deployments
- You want geographic distribution (an Australian gateway and a European gateway)
- Compliance requires strict separation between production environments
Each gateway operates independently. There's no built-in clustering or synchronisation between gateways, so this is really about running separate instances for separate purposes rather than load balancing.
Updates and Maintenance
Keeping OpenClaw updated is straightforward:
openclaw update
For Docker deployments, rebuild the image with the latest version and restart the container. For production environments, we recommend testing updates in a staging environment first, especially if you're running custom skills that might be affected by changes.
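The Docker update flow typically runs along these lines; the commands are shown as comments because the checkout location and Compose service names depend on your setup:

```shell
# Typical Docker update flow (adjust paths and service names to your deployment):
# git pull                 # fetch the latest OpenClaw sources
# ./docker-setup.sh        # rebuild the image at the new version
# docker compose up -d     # restart the gateway on the new image
```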
The project releases frequently, and keeping up with updates is worth it for security fixes and new features.
Getting Help with Deployment
If you're evaluating self-hosted deployment options, we can help you figure out the right architecture for your situation. Through our OpenClaw managed service, we handle the entire deployment lifecycle: infrastructure setup, security hardening, channel configuration, agent workspace setup, and ongoing maintenance.
Reach out if you want to discuss your deployment requirements. The right setup depends on your infrastructure, security requirements, and how many agents and channels you're planning to run.