OpenClaw Broadcast Groups - Running Multiple AI Agents in One WhatsApp Chat

March 29, 2026 · 7 min read · Michael Ridland

Deploy OpenClaw for Your Business

Secure deployment in 48 hours. Choose personal setup or fully managed.

One of the patterns we keep seeing in AI agent deployments is the need for multiple agents to work on the same conversation. Not a single do-everything agent, but a team of focused agents that each handle their own area of expertise.

OpenClaw just released broadcast groups (as of version 2026.1.9), and it solves this problem cleanly. You can now have multiple agents process and respond to the same message in a WhatsApp group or DM, all running through a single phone number.

We've been testing this with clients and the results are genuinely useful. Let me walk through how it works and where we see it fitting.

What Broadcast Groups Actually Do

The concept is straightforward. Normally, when a message comes into an OpenClaw-managed WhatsApp group, one agent handles it. With broadcast groups, you configure a list of agents for a specific chat, and all of them process every eligible message independently.

Each agent maintains its own session, conversation history, workspace, and tool access. They don't see each other's responses. They process the same incoming message but operate in complete isolation otherwise.

Think of it like CC'ing multiple specialists on an email. They all see the same question, they all respond from their own expertise, but they're not collaborating in real time. They're working independently.

This runs on WhatsApp only right now, with Telegram, Discord, and Slack support on the roadmap.

Where This Gets Interesting

The obvious use case is specialist teams. Say you have a development-focused WhatsApp group where your team shares code snippets and asks questions. You could set up:

  • A code reviewer agent that analyses code quality and suggests improvements
  • A security auditor that checks for vulnerabilities
  • A documentation agent that generates docs from code
  • A test generator that suggests test cases

Drop a code snippet in the group and all four agents respond with their perspective. The code reviewer flags naming conventions, the security auditor catches an SQL injection risk, the docs agent produces a function description, and the test agent suggests three test cases.

Nobody had to tag four different bots or send the same message to four different chats. One message, four specialised responses.

Another pattern we've been exploring is quality assurance for customer support. You set up a primary support agent that answers customer questions, and a QA agent that reviews the support agent's response quality. The QA agent might only respond when it detects an issue - an inaccurate answer, a missed question, or a tone problem. In practice this creates a lightweight review layer that catches problems before they become customer complaints.

Multi-language support is another natural fit. One message comes in, and agents configured for English, German, and Spanish each respond in their language. The user gets their answer in their preferred language without any routing logic.
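Using the same broadcast schema the configuration section below documents, a multi-language setup might look like the following sketch. The group JID and agent IDs are made-up examples; each agent's target language would come from its own instructions, not from anything in this mapping:

```json
{
  "broadcast": {
    "123456789-987654@g.us": ["support-en", "support-de", "support-es"]
  }
}
```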

Setting It Up

The configuration lives in your OpenClaw config file as a top-level broadcast section. You map WhatsApp peer IDs to lists of agent IDs:

{
  "broadcast": {
    "123456789-987654@g.us": ["code-reviewer", "security-auditor", "docs-generator"],
    "+15551234567": ["assistant", "logger"]
  }
}

For group chats, you use the group JID. For DMs, you use the E.164 phone number. The agents listed need to exist in your agents.list configuration.

By default, agents process messages in parallel - all at the same time. If you need sequential processing (maybe one agent's work should finish before the next starts), you can set the strategy:

{
  "broadcast": {
    "strategy": "sequential",
    "123456789-987654@g.us": ["formatter", "reviewer"]
  }
}

Sequential makes sense when there's a logical order - format the code first, then review it - though keep in mind the agents don't actually see each other's output. The sequencing just controls timing.

An important detail: broadcast groups don't bypass your existing channel allowlists or group activation rules. If your group is set to only respond on mentions, broadcast only kicks in when a message triggers the mention rule. It changes which agents run, not when they run.

Broadcast also takes priority over normal bindings. If a chat has both a binding and a broadcast configuration, the broadcast wins.

Session Isolation in Practice

Each agent in a broadcast group is fully isolated. This is a deliberate design choice and it matters more than you might think.

Agent A has its own session key, its own conversation history (it only sees the user's messages and its own previous responses), its own workspace and sandbox, its own tool permissions, and its own personality and instructions.

Agent B has all of the same, completely separately.

This means you can give different agents different levels of access. Your code reviewer gets read and execute permissions. Your code fixer gets read, write, edit, and execute. Your logger gets read-only. Each agent only has the tools it needs, which limits blast radius if something goes wrong.

You can also run different models for different agents. Put your complex reasoning tasks on Opus and your simpler classification tasks on Sonnet or Haiku. That's cost optimisation at the agent level, and it adds up when you're processing a lot of messages.
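As a rough sketch of what per-agent permissions and models could look like in the agent definitions: the `model` and `tools` field names here are assumptions for illustration, not confirmed OpenClaw schema (the post only confirms that an agents.list configuration exists):

```json
{
  "agents": {
    "list": [
      {
        "id": "code-reviewer",
        "model": "opus",
        "tools": ["read", "execute"]
      },
      {
        "id": "logger",
        "model": "haiku",
        "tools": ["read"]
      }
    ]
  }
}
```

The point of the sketch is the shape, not the field names: each broadcast agent is a separate entry with its own model choice and its own minimal tool set.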

The one shared element is the group context buffer. All broadcast agents see the same recent group messages for context. This makes sense - they need to understand what's being discussed to give useful responses.

Practical Tips From Our Testing

Keep agent count reasonable. We found that 3-5 agents per broadcast group works well. Beyond that, the chat gets noisy and users start ignoring responses. If you need more than 5 specialised perspectives on every message, you probably need a different architecture - maybe a coordinator agent that delegates to specialists rather than broadcasting to all of them.

Give agents clear, focused jobs. A code reviewer should review code. It shouldn't also try to generate documentation and suggest tests. The whole point of broadcast groups is specialisation. One job per agent, done well.

Use the agent's name to signal what it does. When four agents respond to a message, users need to quickly identify which response came from which specialist. "Code Reviewer", "Security Auditor" - these are immediately clear. "Assistant 3" is not.

Think about when agents should stay quiet. Not every agent needs to respond to every message. Configure your agents' system prompts so they know when to skip. A security auditor should only speak up when there's something security-related to say. If someone asks about lunch plans, the security auditor should stay silent.
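One way to encode this is directly in the agent's instructions. A minimal sketch, assuming agent entries accept a free-text instructions field (the field name is an assumption, not confirmed schema):

```json
{
  "id": "security-auditor",
  "instructions": "You review messages only for security issues. If a message contains no code and raises no security concern, respond with nothing at all. Never comment on style, tests, or off-topic conversation."
}
```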

Monitor for failures independently. Agents fail independently in broadcast groups. If one agent errors out, the others still respond. That's good for reliability but it means you need to watch logs for individual agent failures. A broadcast group where one agent has been silently failing for a week is easy to miss.

You can check for issues with:

tail -f ~/.openclaw/logs/gateway.log | grep broadcast

When Not to Use Broadcast Groups

Broadcast groups aren't the right tool for every multi-agent scenario.

If your agents need to collaborate - one agent's output feeds into another's reasoning - broadcast groups won't work since the agents can't see each other. You need an orchestration pattern for that.

If you want exactly one agent to respond based on the message content, that's routing, not broadcasting. Use bindings and pattern matching instead.

If your agents need to have a back-and-forth conversation with each other about the user's message, that's a different architecture entirely. Broadcast groups are parallel independent processing, not collaborative reasoning.

The Bigger Picture

Broadcast groups are a good example of a trend we're seeing across AI agent platforms: moving from single monolithic agents toward teams of specialised agents. A single agent that tries to do everything tends to be mediocre at most things. A team of focused agents, each excellent at their specific task, tends to produce better results overall.

We're building these kinds of multi-agent architectures for clients across different platforms and use cases. If you're interested in deploying AI agents for your business - whether through OpenClaw or custom-built solutions - our AI agent builders team can help you design the right architecture.

For businesses looking at AI agents for customer-facing channels specifically, we offer OpenClaw managed services that handle setup, monitoring, and ongoing optimisation.

You can read the full broadcast groups documentation at OpenClaw's docs.

The takeaway: if you've been trying to make one AI agent do too many things in a single conversation, broadcast groups give you a clean way to split that into specialised agents without making your users manage multiple chat threads. It's not the right pattern for every situation, but when it fits, it works well.
