Connecting OpenClaw AI Agents to Mattermost - Setup and Configuration
Deploy OpenClaw for Your Business
Secure deployment in 48 hours. Choose personal setup or fully managed.
Mattermost occupies a specific niche that matters a lot to certain Australian organisations. It's the self-hosted team messaging platform that gives you Slack-like functionality without sending your conversations through someone else's servers. Government agencies, defence contractors, financial services firms, healthcare providers - if your compliance requirements say "data stays on our infrastructure," Mattermost is probably on your shortlist.
The question we keep getting is: can we run AI agents inside Mattermost the same way people run them in Slack or Teams? With OpenClaw, yes. And the setup is more straightforward than you might expect.
OpenClaw's Mattermost integration ships as a plugin. It connects your AI agents to Mattermost via bot tokens and WebSocket events, supporting channels, groups, and direct messages. Here's how to get it running and what to think about when configuring it for a real team.
Installing the Plugin
OpenClaw doesn't bundle the Mattermost channel by default - you install it as a plugin. If you're running OpenClaw from npm, it's one command:
openclaw plugins install @openclaw/mattermost
If you're running from a git checkout (common in development or air-gapped environments), point it at the local extension directory:
openclaw plugins install ./extensions/mattermost
A nice touch: if you're setting up OpenClaw for the first time and it detects a git checkout, the setup wizard will offer the local install path automatically. Small detail, but it saves a trip to the docs.
The Minimum Viable Configuration
You need two things from your Mattermost instance: a bot token and the server URL.
Create a bot account in Mattermost's System Console under Integrations. Give it a sensible name (something like "AI Assistant" rather than "bot1"). Copy the token. Then your OpenClaw config looks like this:
{
  channels: {
    mattermost: {
      enabled: true,
      botToken: "your-mm-token",
      baseUrl: "https://chat.yourcompany.com",
      dmPolicy: "pairing",
    },
  },
}
That's enough to get going. The bot will respond to direct messages (with pairing-based access control) and can be @mentioned in channels.
Alternatively, if you prefer environment variables over config files:
MATTERMOST_BOT_TOKEN=your-mm-token
MATTERMOST_URL=https://chat.yourcompany.com
Environment variables only work for the default account. If you're running multiple Mattermost connections (which is unusual but supported), you'll need the config file for the additional accounts.
Chat Modes - Controlling When the Bot Talks
This is where you make the difference between an AI assistant that's helpful and one that's annoying. OpenClaw gives you three chat modes for channel behaviour:
oncall (default): The bot only responds when someone @mentions it. This is what most teams want. The AI sits quietly in the channel until someone explicitly asks for help. No noise, no unsolicited opinions.
onmessage: The bot responds to every message in the channel. Sounds chaotic, and for general-purpose channels it absolutely would be. But for dedicated channels - like a "data-questions" or "incident-response" channel where every message is implicitly a request for help - this can work well.
onchar: The bot responds when a message starts with a specific prefix character. Think of it like a command prefix. You could set > as the trigger, so > what's the status of the Melbourne deployment? gets a response but normal conversation doesn't.
{
  channels: {
    mattermost: {
      chatmode: "onchar",
      oncharPrefixes: [">", "!"],
    },
  },
}
Our recommendation: start with oncall. Let the team get used to having an AI agent in their channels. If they find themselves @mentioning it constantly, consider switching specific channels to onmessage or onchar.
Direct messages always get a response regardless of chat mode, which makes sense - if someone DMs the bot, they obviously want to talk to it.
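The gating rules above can be sketched as a small decision function. This is an illustrative sketch, not the plugin's actual internals; the `Msg` shape and `shouldRespond` name are assumptions made for the example.

```typescript
// Hypothetical sketch of the chat-mode gating described above.
type ChatMode = "oncall" | "onmessage" | "onchar";

interface Msg {
  text: string;
  isDirect: boolean;    // a DM to the bot
  mentionsBot: boolean; // message contains an @mention of the bot
}

function shouldRespond(
  msg: Msg,
  mode: ChatMode,
  prefixes: string[] = [">"],
): boolean {
  if (msg.isDirect) return true; // DMs always get a response
  switch (mode) {
    case "onmessage":
      return true; // respond to everything in the channel
    case "onchar":
      return prefixes.some((p) => msg.text.startsWith(p));
    case "oncall":
    default:
      return msg.mentionsBot; // only when explicitly @mentioned
  }
}
```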
Threading - Keep Conversations Tidy
One thing that drives people mad in team chat is when bot responses clutter the main channel. OpenClaw handles this with the replyToMode setting:
off (default): Replies go to a thread only if the triggering message was already in a thread. Otherwise, the response appears in the main channel.
first / all: For top-level channel messages, OpenClaw starts a new thread under the triggering post and routes the entire conversation there. This keeps the main channel clean. Follow-up messages and media continue in the same thread automatically.
{
  channels: {
    mattermost: {
      replyToMode: "all",
    },
  },
}
For most team setups, we suggest replyToMode: "all". It keeps AI conversations contained in threads so they don't dominate the main channel feed. People who want to read the AI's answer can expand the thread. Everyone else sees a clean channel.
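The thread-routing behaviour can be summed up in one rule: a message already in a thread stays in that thread, and otherwise the mode decides whether a new thread is started. A minimal sketch, assuming a simplified post shape (the real plugin's internals may differ, and "first" and "all" are treated the same here):

```typescript
// Hypothetical sketch of the replyToMode routing described above.
type ReplyToMode = "off" | "first" | "all";

interface Post {
  id: string;
  rootId?: string; // set when the post is already inside a thread
}

// Returns the thread root the reply should attach to,
// or undefined for a plain main-channel reply.
function replyRootId(mode: ReplyToMode, post: Post): string | undefined {
  if (post.rootId) return post.rootId; // always continue an existing thread
  if (mode === "first" || mode === "all") return post.id; // start a thread under the post
  return undefined; // "off": reply in the main channel
}
```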
Access Control - Who Gets to Talk to the Bot
This is the part that security-conscious organisations (which, if you're running Mattermost, you probably are) care most about.
DM access control uses a pairing system by default. When an unknown user messages the bot, they get a pairing code. An administrator then approves or denies access:
openclaw pairing list mattermost
openclaw pairing approve mattermost <CODE>
This is friction by design. You don't want every contractor and temp with a Mattermost account automatically getting access to your AI agent, especially if that agent can query internal systems or access sensitive data.
For less sensitive deployments, you can open DMs up:
{
  channels: {
    mattermost: {
      dmPolicy: "open",
      allowFrom: ["*"],
    },
  },
}
Channel (group) access control defaults to allowlist mode with mention gating. You specify which users can trigger the bot in channels using groupAllowFrom. User IDs are recommended over usernames because usernames can change.
{
  channels: {
    mattermost: {
      groupPolicy: "allowlist",
      groupAllowFrom: ["user-id-1", "user-id-2"],
    },
  },
}
There's a dangerouslyAllowNameMatching flag that lets you match by @username instead of user ID. The "dangerously" prefix is earned - usernames are mutable, so someone could potentially change their username to match an allowed user. Stick with user IDs unless you have a strong reason not to.
Native Slash Commands
OpenClaw can register native slash commands in Mattermost (the /oc_* variety). This lets users interact with the AI through Mattermost's built-in command interface rather than just @mentions.
{
  channels: {
    mattermost: {
      commands: {
        native: true,
        nativeSkills: true,
        callbackPath: "/api/channels/mattermost/command",
        callbackUrl: "https://gateway.yourcompany.com/api/channels/mattermost/command",
      },
    },
  },
}
The setup here requires that your Mattermost server can reach the OpenClaw gateway's callback endpoint. This sounds obvious but causes more setup headaches than anything else. A few things to check:
- The callbackUrl must be reachable from the Mattermost server, not just from your laptop.
- If you're running on a private network or tailnet, add the gateway host to Mattermost's AllowedUntrustedInternalConnections setting. Use the hostname, not the full URL.
- Quick connectivity test: curl https://your-gateway/api/channels/mattermost/command should return 405 Method Not Allowed (the endpoint exists but doesn't accept GET requests).
Outbound Messages
OpenClaw can also send messages proactively - for scheduled updates, webhook-triggered notifications, or cron jobs. Target formats:
- channel:<id> for a channel post
- user:<id> for a DM
- @username for a DM (resolved via the Mattermost API)
One gotcha: bare IDs (without the channel: or user: prefix) are ambiguous. OpenClaw resolves them user-first - it checks if the ID is a user, and if so, sends a DM. If not, it treats it as a channel ID. Use the explicit prefixes to avoid surprises.
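The user-first resolution can be sketched as follows. The `resolveTarget` function and the `isUser` lookup callback are hypothetical names for this example; in practice the lookup would be backed by a Mattermost API call:

```typescript
// Hypothetical sketch of the target resolution order described above.
type Target =
  | { kind: "dm"; userId: string }
  | { kind: "channel"; channelId: string };

function resolveTarget(
  raw: string,
  isUser: (id: string) => boolean, // stand-in for a Mattermost user lookup
): Target {
  if (raw.startsWith("user:")) return { kind: "dm", userId: raw.slice(5) };
  if (raw.startsWith("channel:")) return { kind: "channel", channelId: raw.slice(8) };
  if (raw.startsWith("@")) return { kind: "dm", userId: raw }; // username, resolved via API
  // Bare ID: checked as a user first, then treated as a channel ID.
  return isUser(raw) ? { kind: "dm", userId: raw } : { kind: "channel", channelId: raw };
}
```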
Real-World Deployment Patterns
We've deployed OpenClaw with Mattermost for several AI agent projects, and a few patterns have emerged:
The internal helpdesk bot. A dedicated channel where employees ask IT, HR, or facilities questions. The bot is set to onmessage mode and connected to internal knowledge bases. Threading is set to all so each question becomes its own self-contained thread. Access control is open because the channel itself is restricted.
The incident response assistant. Sits in the incident channel on oncall mode. Team members @mention it to query logs, look up runbook procedures, or summarise the incident timeline. The pairing system restricts DM access to the on-call rotation.
The data analyst companion. Lives in a data team channel. Team members use the > prefix (onchar mode) to ask questions about datasets, get SQL help, or interpret query results. Direct messages are used for longer, multi-step analysis conversations.
Configuration Tips From Experience
A few things we've learned that aren't in the docs:
Start with tight access control and loosen as needed. It's much easier to approve additional users than to revoke access after a security review flags your wide-open bot.
Test threading behaviour before rolling out to the whole team. Set up a test channel, send a few messages with different threading configs, and make sure the conversation flow feels natural.
Monitor the bot's response patterns. If it's generating long responses in channels, consider adjusting your AI agent's system prompt to keep channel responses concise and suggest DMs for detailed discussions.
Log everything. Mattermost already gives you message audit trails, but make sure OpenClaw's gateway logging is configured too. When something goes wrong (and something always goes wrong), having both sides of the conversation logged makes debugging much faster.
For organisations running self-hosted infrastructure who want AI agents integrated into their team communication, the OpenClaw plus Mattermost combination is solid. The plugin architecture means you're not committing to a monolithic platform - you can start with Mattermost and add other channels later if your needs change.
If you're looking at deploying AI agents across your organisation's communication channels, get in touch with our team to discuss what configuration makes sense for your environment.