
Claude MCP Connector - Connecting to Remote MCP Servers From the API

March 27, 2026 · 7 min read · Michael Ridland

One of the friction points we kept hitting when building AI agent systems was the MCP client layer. The Model Context Protocol is excellent - it gives AI agents a standard way to interact with external tools and data sources. But setting up and managing an MCP client to broker the connection between Claude and your MCP servers added a layer of infrastructure that felt unnecessary for simpler use cases.

Anthropic's MCP connector removes that layer. It lets you connect to remote MCP servers directly from the Messages API, without running your own MCP client. You define the server in your API request, and Claude handles the connection, tool discovery, and tool execution internally.

That's a meaningful simplification for a lot of agent architectures. Let me walk through how it works and where it fits.

What the MCP Connector Does

The MCP connector adds two components to the Messages API:

mcp_servers array - defines the remote MCP servers you want to connect to. Each entry includes a URL, a name, and optionally an authentication token.

mcp_toolset in the tools array - tells Claude which tools from which MCP server to enable. You can enable all tools from a server, allowlist specific ones, or denylist tools you don't want available.

When you make an API request with these components, Claude connects to the specified MCP server, discovers what tools are available, and can call those tools during the conversation. The connection happens server-side within Anthropic's infrastructure, so your application doesn't need to maintain an MCP client or manage persistent streaming connections to the server.

Here's a minimal example in Python:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-opus-4-6",
    max_tokens=1000,
    messages=[{"role": "user", "content": "What tools do you have available?"}],
    # Remote MCP servers Claude should connect to (handled server-side)
    mcp_servers=[
        {
            "type": "url",
            "url": "https://your-mcp-server.example.com/sse",
            "name": "my-tools",
            "authorization_token": "YOUR_TOKEN",
        }
    ],
    # Enable the tools exposed by the named server
    tools=[{"type": "mcp_toolset", "mcp_server_name": "my-tools"}],
    # Required beta flag for the MCP connector
    betas=["mcp-client-2025-11-20"],
)

Note the betas parameter - this is a beta feature and requires the mcp-client-2025-11-20 header. The previous version (mcp-client-2025-04-04) is deprecated, so make sure you're using the current one.

When This Makes Sense

Not every agent architecture benefits from the MCP connector. Here's where we've found it particularly useful.

Prototyping and early development. When you're building a new agent and want to test against an MCP server quickly, spinning up a full MCP client just to experiment adds friction. The MCP connector lets you point Claude at your server and start testing immediately.

Simple tool integrations. If your agent needs access to a handful of tools exposed via a single MCP server, the connector is often all you need. No client to deploy, no connection management, no process to keep running.

Multi-server scenarios. You can connect to multiple MCP servers in a single request. Each server gets a name, and each toolset references a server by name. This makes it straightforward to compose an agent from tools spread across different services without building a unified client that manages all those connections.

Serverless and ephemeral workloads. If your agent runs in a Lambda function or similar serverless environment, maintaining a persistent MCP client is awkward. The MCP connector handles everything within the API request lifecycle, which maps naturally to serverless execution.
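To make the serverless shape concrete, here's a minimal sketch of a handler-friendly helper that assembles the request kwargs from environment variables. The env var names, server URL, and server name are placeholders, not part of any official convention:

```python
import os


def build_mcp_request_kwargs(prompt: str) -> dict:
    """Assemble Messages API kwargs for an ephemeral (e.g. Lambda) handler.

    MCP_SERVER_URL and MCP_TOKEN are illustrative env var names;
    substitute your own configuration source.
    """
    return {
        "model": "claude-opus-4-6",
        "max_tokens": 1000,
        "messages": [{"role": "user", "content": prompt}],
        "mcp_servers": [
            {
                "type": "url",
                "url": os.environ.get(
                    "MCP_SERVER_URL", "https://your-mcp-server.example.com/sse"
                ),
                "name": "my-tools",
                "authorization_token": os.environ.get("MCP_TOKEN", ""),
            }
        ],
        "tools": [{"type": "mcp_toolset", "mcp_server_name": "my-tools"}],
        "betas": ["mcp-client-2025-11-20"],
    }


# Inside the handler, the call itself stays a one-liner:
# client.beta.messages.create(**build_mcp_request_kwargs(event["prompt"]))
```

Because there's no client process to keep alive, the whole MCP lifecycle fits inside a single invocation.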

Where It Doesn't Fit

A few limitations to be aware of:

Only HTTP-accessible servers. The MCP connector requires your MCP server to be publicly accessible over HTTP, supporting either Streamable HTTP or SSE transports. Local STDIO-based servers - the kind you might run on a developer's machine or in a local process - can't be connected directly. For those, you still need a local MCP client.

Tool calls only. The MCP specification includes more than just tool calls - there are resources, prompts, and other primitives. The connector currently only supports the tool call portion. If your MCP server exposes resources that Claude should be able to read, you'll need a full MCP client to access those.

No Bedrock or Vertex support. If you're running Claude through Amazon Bedrock or Google Vertex AI rather than the direct Anthropic API, the MCP connector isn't available. This is worth knowing if your infrastructure is built on one of those platforms.

Data retention considerations. The MCP connector feature is not eligible for Zero Data Retention (ZDR). Data is retained according to Anthropic's standard retention policy. If your data governance requirements mandate zero retention, this feature may not be appropriate for handling sensitive data.

Controlling Tool Access

One of the better-designed aspects of the connector is the tool filtering. You're not limited to an all-or-nothing approach. The mcp_toolset configuration lets you:

Enable all tools from a server (the default if you just specify the server name).

Allowlist specific tools by listing only the tools you want Claude to have access to.

Denylist specific tools by listing tools you want to exclude.

Configure individual tools with custom settings like descriptions or parameter overrides.

In practice, we almost always use an allowlist rather than enabling everything. MCP servers can expose a lot of tools, and giving Claude access to tools it doesn't need for a specific task increases the chance of unexpected behaviour. If your agent is supposed to look up customer information, don't also give it access to the server's admin tools. This is basic principle-of-least-privilege thinking applied to AI tool access.
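As a sketch of the allowlist pattern: the helper below builds a toolset entry restricted to named tools. Note that the exact field name for allowlisting in the current beta is an assumption here (allowed_tools), and the tool names are hypothetical; check the official API reference for the precise schema:

```python
def allowlisted_toolset(server_name: str, allowed: list) -> dict:
    """Build an mcp_toolset entry restricted to an explicit tool allowlist.

    ASSUMPTION: "allowed_tools" is used as the allowlist field name here;
    verify the field name against the current API reference.
    """
    return {
        "type": "mcp_toolset",
        "mcp_server_name": server_name,
        "allowed_tools": allowed,
    }


# Hypothetical tool names for a customer-lookup agent:
crm_tools = allowlisted_toolset("crm", ["lookup_customer", "list_orders"])
```

The point is structural: the agent's tool surface is declared explicitly in the request, so a reviewer can see exactly what Claude can touch.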

Multi-Server Configuration

Connecting to multiple servers is where the architecture starts to get interesting. You might have one MCP server for your CRM, another for your internal knowledge base, and a third for your project management tool. Each gets its own entry in mcp_servers and its own mcp_toolset in tools:

mcp_servers=[
    {
        "type": "url",
        "url": "https://crm-mcp.example.com/sse",
        "name": "crm",
        "authorization_token": "CRM_TOKEN",
    },
    {
        "type": "url",
        "url": "https://kb-mcp.example.com/sse",
        "name": "knowledge-base",
        "authorization_token": "KB_TOKEN",
    }
],
tools=[
    {"type": "mcp_toolset", "mcp_server_name": "crm"},
    {"type": "mcp_toolset", "mcp_server_name": "knowledge-base"},
]

Claude sees all the tools from both servers and can use them together in a single conversation. Want to find a customer in the CRM and then search the knowledge base for relevant documentation about their product? That's a single prompt with tools from two different servers.

This pattern maps well to enterprise environments where data and functionality are spread across many systems. Rather than building a monolithic integration layer, each system exposes its own MCP server, and the connector brings them together at the API level.
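When the set of servers grows, building the two parallel lists by hand gets repetitive. A small helper can derive both from one registry; the server names, URLs, and tokens below are placeholders from the example above:

```python
def build_mcp_config(servers: dict):
    """Turn a {name: (url, token)} registry into matching
    mcp_servers and tools lists for the Messages API."""
    mcp_servers = [
        {"type": "url", "url": url, "name": name, "authorization_token": token}
        for name, (url, token) in servers.items()
    ]
    # One toolset entry per server, referenced by name
    tools = [{"type": "mcp_toolset", "mcp_server_name": name} for name in servers]
    return mcp_servers, tools


servers, tools = build_mcp_config({
    "crm": ("https://crm-mcp.example.com/sse", "CRM_TOKEN"),
    "knowledge-base": ("https://kb-mcp.example.com/sse", "KB_TOKEN"),
})
```

Keeping the registry in one place also makes it easy to vary the server set per environment or per agent role.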

Authentication

The connector supports OAuth Bearer tokens via the authorization_token field. This is straightforward for servers that use token-based auth. For servers behind more complex authentication (SAML, mutual TLS, custom auth headers), you'd need to handle auth at the MCP server level or use an API gateway in front of it.

A practical note: don't hardcode tokens in your application code. Pull them from environment variables or a secrets manager, rotate them regularly, and use scoped tokens with the minimum necessary permissions. Standard API security hygiene, but worth stating explicitly because MCP servers often have broad access to internal systems.
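A minimal sketch of that hygiene: pull the token from the environment at request-build time and fail loudly if it's missing, rather than embedding it in source. The env var name here is illustrative:

```python
import os


def mcp_server_entry(name: str, url: str, token_env: str) -> dict:
    """Build an mcp_servers entry with the token read from the environment.

    Raises rather than silently sending an empty token, so a missing
    secret fails at startup instead of as an opaque auth error.
    """
    token = os.environ.get(token_env)
    if not token:
        raise RuntimeError(f"Missing required secret: {token_env}")
    return {
        "type": "url",
        "url": url,
        "name": name,
        "authorization_token": token,
    }
```

Swapping os.environ for a secrets-manager client keeps the call sites unchanged.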

Building Agent Systems With the MCP Connector

The MCP connector is one piece of a larger architecture. For production agent systems, you'll typically combine it with:

  • System prompts that define the agent's role and behaviour
  • Conversation management to maintain context across turns
  • Error handling for when MCP servers are unavailable or tool calls fail
  • Observability to track which tools are being called and how the agent is performing

We've been building agent systems using Claude's SDK and MCP extensively, and the connector has simplified several patterns we use regularly. For organisations building their first AI agents, it lowers the barrier to getting a working prototype. For teams with existing agent infrastructure, it's a useful option for specific scenarios where a full MCP client is overkill.

If you're exploring AI agent development and want to understand how MCP, the Claude API, and agent architectures fit together, our AI agent development team works with Australian organisations to design and build these systems. We're also active with the Claude Agent SDK and can help you evaluate whether the MCP connector pattern fits your specific use case.

For organisations already running MCP servers and looking to simplify their agent architecture, the connector is worth testing. For those just starting out, it's a good way to get MCP tool access without the infrastructure overhead of running a full client. Either way, the fact that Anthropic is investing in making MCP easier to use from the API is a positive signal for the protocol's maturity and longevity.

Check the official documentation for the full API reference, and reach out to our team if you want help putting it into practice.