OpenClaw Onboarding - Setting Up Your Gateway for AI Agents
Deploy OpenClaw for Your Business
Secure deployment in 48 hours. Choose personal setup or fully managed.
Getting an AI agent framework running locally shouldn't take more than a few minutes. But "a few minutes" assumes you know which options to pick, which provider credentials to have ready, and whether you want a local gateway or a remote one. OpenClaw's onboard command tries to make that decision tree manageable, and for the most part, it does a decent job.
We've been setting up OpenClaw deployments for Australian organisations as part of our AI agent development work, and the onboarding process is one of the smoother parts of the experience. But there are a few things worth knowing before you run that first command.
The full OpenClaw onboard documentation covers every flag and option. Here I want to focus on the practical decision points - what to pick when, and how to set things up for different scenarios.
What the Onboard Command Actually Does
openclaw onboard is the first thing you run after installing OpenClaw. It does three things:
- Configures your gateway (local or remote)
- Sets up your LLM provider credentials
- Optionally installs a background daemon to keep the gateway running
The gateway is the bit that sits between your AI agents and the outside world - it handles routing, authentication, and model selection. Think of it as a local proxy that your agents talk to instead of hitting LLM APIs directly.
You can run it interactively (answer prompts as they come up) or non-interactively (pass everything as command-line flags). Interactive is fine for getting started. Non-interactive is what you want for any kind of automated deployment.
Quickstart vs Manual - Which Flow to Pick
OpenClaw gives you two onboarding flows:
openclaw onboard --flow quickstart
openclaw onboard --flow manual
Quickstart asks minimal questions and auto-generates a gateway token. It's genuinely quick - maybe 90 seconds if you have your API key ready. Pick this if you're evaluating OpenClaw locally, doing a proof of concept, or just want to get something running to poke at.
Manual (also called "advanced") gives you control over port binding, authentication method, and other gateway configuration. Pick this if you're setting up for a team, deploying to a server, or have specific networking requirements.
For most of our client work, we start with quickstart for the initial evaluation, then move to manual configuration when we're setting up the actual deployment environment.
Provider Setup - Picking Your LLM Backend
This is where you tell OpenClaw which LLM to use. The options cover pretty much every provider you'd want:
Cloud providers - OpenAI, Anthropic, Mistral, and others work with straightforward API key authentication. You pass the key during onboarding and OpenClaw stores it.
Local models - Ollama and LM Studio are supported for teams that need to keep data on-premises. The Ollama setup is particularly smooth:
openclaw onboard --non-interactive \
--auth-choice ollama \
--custom-base-url "http://localhost:11434" \
--custom-model-id "qwen3.5:27b" \
--accept-risk
The --accept-risk flag is required for local model setups because OpenClaw can't verify the security posture of your local installation. Fair enough.
Custom endpoints - Any OpenAI-compatible or Anthropic-compatible API works as a custom provider. This covers hosted models, private deployments, and niche providers that aren't in OpenClaw's built-in list.
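A custom-endpoint setup follows the same shape as the Ollama example above, reusing the --custom-base-url and --custom-model-id flags. The --auth-choice value for a generic OpenAI-compatible endpoint is an assumption here, as are the URL and model name - check openclaw onboard --help for the exact value your version expects:

```shell
# Sketch: point OpenClaw at a private OpenAI-compatible deployment.
# The --auth-choice value, URL, and model ID below are illustrative;
# confirm the auth-choice value against `openclaw onboard --help`.
openclaw onboard --non-interactive \
  --auth-choice custom \
  --custom-base-url "https://llm.internal.example.com/v1" \
  --custom-model-id "my-private-model" \
  --accept-risk
```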
One thing I appreciate about the provider setup: OpenClaw auto-detects compatibility mode for custom endpoints. You tell it the base URL and model ID, and it works out whether to speak the OpenAI or Anthropic protocol. If it can't tell, it falls back to "Unknown" and leaves it to auto-detection, which usually works.
Secret Management - Don't Skip This
OpenClaw gives you two ways to store API keys, and the difference matters.
Plaintext stores the key directly in your config. Simple, works immediately, and is fine for local development on your own machine.
Reference mode stores a pointer to an environment variable instead of the actual key:
openclaw onboard --non-interactive \
--auth-choice openai-api-key \
--secret-input-mode ref \
--accept-risk
With --secret-input-mode ref, OpenClaw writes an env-backed reference. The key itself lives in your environment, not in the config file. This is what you want for anything beyond personal use - shared machines, CI/CD pipelines, or any situation where config files might be committed to version control.
The interactive version lets you choose between environment variables and configured secret providers (file-based or exec-based). The exec provider is useful if you're pulling keys from a vault or key management service at runtime.
Whichever method you pick, OpenClaw runs a preflight validation to confirm the key works before saving the reference. If validation fails, you get a chance to retry rather than ending up with a broken config.
Gateway Authentication
The gateway itself needs authentication - you don't want random processes on your network sending requests through it. Two options here:
Token auth is the simpler choice. You either let OpenClaw generate a token (quickstart does this automatically) or provide your own:
export OPENCLAW_GATEWAY_TOKEN="your-token"
openclaw onboard --non-interactive \
--mode local \
--auth-choice skip \
--gateway-auth token \
--gateway-token-ref-env OPENCLAW_GATEWAY_TOKEN \
--accept-risk
Notice the --gateway-token-ref-env flag - this stores the token as a reference to an environment variable, consistent with the secret management approach above.
Password auth is the other option, though token auth is more common in the setups we deploy.
One thing to watch out for: if you configure both token and password auth but don't explicitly set gateway.auth.mode, the onboarding process will block until you pick one. This catches people who copy-paste config snippets without cleaning up conflicting settings.
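If you do end up with both configured, the fix is to set the mode explicitly. The gateway.auth.mode path comes from the blocking prompt itself; the JSON-style layout below is a sketch, so adapt it to wherever your OpenClaw config actually lives:

```json
{
  "gateway": {
    "auth": {
      "mode": "token"
    }
  }
}
```

Setting "mode": "password" instead would resolve the conflict the other way; the point is that one of the two must be named explicitly.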
Non-Interactive Setup for Automation
This is where OpenClaw's onboarding really shines. Every interactive prompt has a corresponding command-line flag, which means you can fully script the setup. For agentic automation deployments where you're spinning up environments programmatically, this is exactly what you need.
A typical automated setup looks like:
openclaw onboard --non-interactive \
--mode local \
--auth-choice openai-api-key \
--secret-input-mode ref \
--gateway-auth token \
--gateway-token-ref-env OPENCLAW_GATEWAY_TOKEN \
--install-daemon \
--accept-risk
The --install-daemon flag starts a managed gateway process. On macOS and Linux, this creates a system service. On Windows, it tries Scheduled Tasks first and falls back to a Startup folder item.
If you only need the configuration written without actually starting a gateway - say you're running this in a CI pipeline where the gateway runs separately - add --skip-health to skip the reachability check.
One gotcha with daemon installation: if you're using a SecretRef for your gateway token and the referenced environment variable isn't set at install time, the onboarding process fails. It won't silently start a gateway with no authentication. Good behaviour, but it catches people who set env vars in their shell profile but run the installer in a clean environment.
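One way to catch this before the installer does is a guard in your provisioning script. This is plain POSIX shell - the ${VAR:?message} expansion aborts with a message when the variable is unset or empty - and the openclaw invocation it protects is the one from the example above:

```shell
#!/bin/sh
# Guard helper: fail fast if the env var behind --gateway-token-ref-env
# is missing, e.g. when the installer runs in a clean shell that never
# sourced the profile where the token was exported.
require_gateway_token() {
  # ${VAR:?msg} exits non-zero with msg when VAR is unset or empty.
  : "${OPENCLAW_GATEWAY_TOKEN:?OPENCLAW_GATEWAY_TOKEN must be set before --install-daemon}"
}

# Usage before the real install step:
# require_gateway_token && openclaw onboard --non-interactive --install-daemon ...
```

Running the guard in the same shell (or CI step) as the installer is the important part - checking it in your login shell tells you nothing about the environment the daemon installer actually sees.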
Local vs Remote Gateways
Local is the default and what most people start with. The gateway runs on the same machine as your agents. Simple, no network complexity, everything talks over localhost.
Remote connects to a gateway running elsewhere:
openclaw onboard --mode remote --remote-url wss://gateway-host:18789
Remote gateways are the pattern we use for shared team environments. Run one gateway on a server, have everyone's agents connect to it. This centralises model access, token management, and usage tracking.
For remote connections over private networks where TLS isn't configured, you can use ws:// instead of wss:// by setting OPENCLAW_ALLOW_INSECURE_PRIVATE_WS=1. Only do this on networks you fully control - this isn't for anything internet-facing.
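Concretely, the insecure-private-network variant looks like this - the environment variable, flags, and port are the ones documented above, while the host address is a hypothetical placeholder:

```shell
# Only on a network you fully control: permit plaintext WebSocket (ws://)
# to a private gateway address instead of wss://.
export OPENCLAW_ALLOW_INSECURE_PRIVATE_WS=1
openclaw onboard --mode remote --remote-url ws://10.0.0.5:18789   # hypothetical host
```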
What Comes After Onboarding
Once openclaw onboard finishes, your gateway is running and configured. The next steps are usually:
openclaw configure # Adjust model defaults, allowlists
openclaw agents add <name> # Create your first agent
Or if you want the fastest path to seeing something work, openclaw dashboard opens the Control UI where you can start chatting immediately without setting up channels.
From our experience deploying OpenClaw across different organisations, the onboarding itself is rarely where teams get stuck. The harder decisions come later - which models to allow, how to structure agent permissions, how to handle cost management when multiple teams share a gateway. But those are problems for after you've got the basics running.
If you're evaluating OpenClaw for your organisation's AI agent infrastructure, or you've already started and need help with the production architecture around it, our AI agent builders team has been through these deployments across multiple industries. Happy to share what works.