Adding Web Search to AI Agents with OpenClaw and Brave Search
Deploy OpenClaw for Your Business
Secure deployment in 48 hours. Choose personal setup or fully managed.
One of the most common requests we get when building AI agents for clients is "can it search the web?" Fair question. Most business tasks that involve research - checking competitor pricing, finding regulatory updates, pulling in market data - require access to current information that isn't sitting in the organisation's internal systems.
The problem is that giving an AI agent unrestricted web access is like giving an intern a credit card and telling them to "go research things." You need some structure around what gets searched, how results are returned, and what it costs. That's where a managed search integration comes in.
We've been working with OpenClaw as an AI gateway layer, and their Brave Search integration is one of the cleaner implementations we've seen for adding web search to agent workflows.
Why Brave Search Specifically?
There are a few web search APIs out there - Google Custom Search, Bing Web Search, SerpAPI, Tavily. We've tried most of them at various points. Brave Search has a few things going for it.
First, the pricing is straightforward. $5 per 1,000 queries on the Search plan, and they give you $5/month in free credit that renews. So you get 1,000 queries a month at no cost, which is enough for development and light production use. Compare that to Google Custom Search, which charges $5 per 1,000 queries but with no free tier beyond the first 100 per day.
Second, the Brave Search plan includes AI inference rights, which matters if you're processing search results through an LLM. Some search APIs have terms of service that restrict using their results for AI training or inference. Brave is explicit that the Search plan covers this use case.
Third, the API is just clean. The parameters are sensible - query, count, country, language, freshness filters, date ranges. No over-engineered abstraction layers. You ask for search results, you get search results.
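The pricing arithmetic above is easy to sanity-check. A small sketch, using the $5 per 1,000 queries rate and the $5/month renewing credit described earlier (the function name is ours, purely illustrative):

```javascript
// Brave Search plan: $5 per 1,000 queries, $5/month renewing free credit.
const RATE_PER_QUERY = 5 / 1000; // $0.005 per query
const MONTHLY_CREDIT = 5;        // USD, renews each month

// Estimated monthly bill for a given query volume, after the free credit.
function monthlyCost(queries) {
  const gross = queries * RATE_PER_QUERY;
  return Math.max(0, gross - MONTHLY_CREDIT);
}

console.log(monthlyCost(1000));  // 0  -- fully covered by the credit
console.log(monthlyCost(10000)); // 45 -- $50 gross minus the $5 credit
```

At 1,000 queries a month you pay nothing, which is why that tier is comfortable for development and light production use.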
Setting It Up in OpenClaw
The configuration in OpenClaw is minimal, which is how it should be. You add a web tool configuration with your Brave API key:
```json
{
  "tools": {
    "web": {
      "search": {
        "provider": "brave",
        "apiKey": "YOUR_BRAVE_API_KEY",
        "maxResults": 5,
        "timeoutSeconds": 30
      }
    }
  }
}
```
That's it. Your agents now have a web_search tool they can call. The maxResults parameter controls how many results come back per query (1-10), and the timeout prevents a slow search from hanging your agent's execution.
OpenClaw handles the caching layer too - results are cached for 15 minutes by default, which you can configure with cacheTtlMinutes. This is worth paying attention to. If your agent runs the same query multiple times in a conversation (which happens more often than you'd think), caching means you're not paying for duplicate API calls.
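The idea behind that cache layer is simple enough to sketch. This is not OpenClaw's implementation, just a minimal in-memory TTL cache showing why a duplicate query inside a conversation becomes free:

```javascript
// Minimal in-memory TTL cache keyed on the search query.
// Illustrative sketch only -- OpenClaw's actual cache is internal to the gateway.
const TTL_MS = 15 * 60 * 1000; // mirror the 15-minute default
const cache = new Map();       // query -> { results, expiresAt }

async function cachedSearch(query, doSearch) {
  const hit = cache.get(query);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.results; // cache hit: no API call, no cost
  }
  const results = await doSearch(query); // cache miss: one billable call
  cache.set(query, { results, expiresAt: Date.now() + TTL_MS });
  return results;
}
```

Run the same query twice within the TTL and the second call never reaches the API, which is exactly the behaviour you want when an agent repeats itself mid-conversation.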
Practical Usage Patterns
Here's where it gets interesting. Giving an agent web search is easy. Making it use web search well takes some thought.
Country and Language Filtering
If you're building agents for Australian businesses, you almost certainly want to filter results by country:
```javascript
await web_search({
  query: "GST compliance requirements",
  country: "AU",
  language: "en"
});
```
Without the country filter, you'll get results about US sales tax, UK VAT, or Indian GST. All technically relevant to the query, none useful for your Australian client.
We set country: "AU" as the default for most of our agent deployments and only override it when the use case specifically needs international results.
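One way to bake that default in is a thin wrapper around the tool call. The wrapper is our own convention, not an OpenClaw feature, and `webSearch` here stands in for whatever function actually invokes the tool:

```javascript
// Wrapper that applies Australian defaults unless the caller overrides them.
// `webSearch` is a stand-in for the underlying web_search tool call.
function makeLocalisedSearch(webSearch, defaults = { country: "AU", language: "en" }) {
  return (params) => webSearch({ ...defaults, ...params });
}

// Identity stub for illustration: defaults applied, explicit values still win.
const search = makeLocalisedSearch((p) => p);
console.log(search({ query: "GST compliance requirements" }).country); // "AU"
console.log(search({ query: "UK VAT rates", country: "GB" }).country); // "GB"
```

Because the caller's parameters are spread last, an explicit `country` always overrides the default, which keeps the international use cases simple.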
Freshness Controls
The freshness parameter is something we lean on heavily. When an agent is checking for recent regulatory changes or news, you don't want results from 2019 cluttering the response:
```javascript
await web_search({
  query: "ASIC regulatory updates fintech",
  freshness: "month"
});
```
You can also use explicit date ranges with date_after and date_before for more precise windows. This is particularly useful for agents that need to find information from a specific period - quarterly reporting data, for instance.
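For quarterly windows, a small helper keeps the date arithmetic out of your prompts. The helper and its quarter naming are our own convention; only the `date_after` / `date_before` parameter names come from the search options described above:

```javascript
// Maps an Australian financial-year quarter to an explicit date window
// for date_after / date_before. The AU financial year runs 1 July
// (fyEndYear - 1) to 30 June (fyEndYear).
function fyQuarterWindow(fyEndYear, quarter) {
  const windows = {
    Q1: [`${fyEndYear - 1}-07-01`, `${fyEndYear - 1}-09-30`],
    Q2: [`${fyEndYear - 1}-10-01`, `${fyEndYear - 1}-12-31`],
    Q3: [`${fyEndYear}-01-01`, `${fyEndYear}-03-31`],
    Q4: [`${fyEndYear}-04-01`, `${fyEndYear}-06-30`],
  };
  const [date_after, date_before] = windows[quarter];
  return { date_after, date_before };
}

console.log(fyQuarterWindow(2025, "Q1"));
// date_after: "2024-07-01", date_before: "2024-09-30"
```

An agent looking for FY2025 Q1 reporting data can then spread the returned window straight into its search parameters.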
Cost Management
Here's something that catches people off guard: AI agents love to search. If you give an agent unrestricted web search access, it'll search for every piece of information it's even slightly unsure about. That adds up.
A few things we do to keep costs reasonable:
Set usage limits in the Brave dashboard. Brave lets you cap your monthly spending. Do it. We've seen runaway costs from agents that got stuck in loops, running the same search with slightly different wording dozens of times.
Be specific in your agent prompts about when to search. Rather than "you can search the web for any information you need," try "search the web only when you need current market data, regulatory updates, or information not available in the provided documents." This reduces unnecessary searches significantly.
Use caching aggressively. If your agents handle similar queries across different users (like "what are the current ATO tax rates"), a longer cache TTL means one API call serves many requests.
Limit maxResults. Five results per query is usually enough. Your LLM can synthesise information from five sources. Ten results means more tokens processed (more cost on the LLM side) without proportionally better answers.
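Several of those controls can live in one thin wrapper. Here is a sketch of a per-conversation search budget that combines a hard cap with query deduplication; this is our own guardrail pattern layered on top of the tool call, not an OpenClaw feature:

```javascript
// Per-conversation guardrail: hard cap on billable searches, plus dedup
// of repeated queries (loops with identical wording hit the cache instead).
function makeBudgetedSearch(webSearch, maxCalls = 10) {
  let calls = 0;
  const seen = new Map(); // normalised query -> results

  return async (params) => {
    const key = params.query.trim().toLowerCase();
    if (seen.has(key)) return seen.get(key); // free: already answered
    if (calls >= maxCalls) {
      throw new Error("Search budget exhausted for this conversation");
    }
    calls++;
    const results = await webSearch(params);
    seen.set(key, results);
    return results;
  };
}
```

The cap turns a runaway loop into a single clear error your orchestration layer can handle, instead of a surprise on the monthly invoice.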
When Web Search Doesn't Make Sense
Let's be honest about the limitations. Web search is great for certain types of information retrieval, but it's not always the right tool.
For internal company data - don't use web search. Use RAG with your own document store. Your internal policies, procedures, and business data should be retrieved from your own vector database or knowledge base, not from a Google-indexed version of your intranet.
For structured queries - if you need specific data points (stock prices, exchange rates, weather), a dedicated API is better than a web search. Web search returns pages, not data. You're asking an LLM to extract a number from a web page when you could just call an API that returns the number directly.
For anything requiring authority - web search results aren't verified. If your agent is giving legal or medical advice, the results from a web search could be outdated, wrong, or from a dodgy source. Human review is still essential for high-stakes information.
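The structured-query point is worth making concrete. With a dedicated API the value arrives as data, with no LLM extraction step in between; the endpoint URL below is invented purely for illustration:

```javascript
// Hypothetical contrast: a structured endpoint returns the number directly,
// whereas web search would return pages an LLM must then parse.
// The URL is a made-up example, not a real service.
async function audUsdRate(fetchJson) {
  const data = await fetchJson("https://api.example.com/fx?base=AUD&quote=USD");
  return data.rate; // a number, no extraction step needed
}
```

The same principle applies to stock prices and weather: if a field exists in an API response, read the field rather than asking a model to find it on a page.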
We build a lot of AI agents for Australian businesses, and web search is one tool in the toolkit - not the whole toolkit. The best agent architectures combine web search for current external information, RAG for internal knowledge, structured APIs for specific data, and clear guardrails about which source to use when.
The Bigger Picture
Adding web search to an AI agent is one small piece of building agents that actually do useful work. The harder problems are usually around orchestration (how do agents decide what to do next?), memory (how do they maintain context across interactions?), and reliability (what happens when a tool call fails?).
If you're exploring how AI agents could work for your organisation - whether that's customer service bots that can look up current information, research assistants that pull from multiple sources, or operational agents that monitor external data - we'd be happy to talk through the architecture. Our agentic automations practice is where we do most of this work, and we've built enough of these systems to have strong opinions about what works in production versus what only works in a demo.
Web search is one of those capabilities that sounds simple but has real implications for cost, accuracy, and security. Getting the configuration right upfront - the right provider, sensible defaults, proper cost controls - saves you from problems down the line.