
OpenAI MCP and Connectors - What This Means for AI Agent Development

April 2, 2026 · 8 min read · Michael Ridland


The Model Context Protocol (MCP) has been gaining momentum across the AI ecosystem, and OpenAI's adoption of it in their agents platform is a significant step. For anyone building AI agents that need to interact with external tools and data sources, this changes the integration story considerably.

I'll be honest - when MCP first emerged, I wasn't sure it would get enough traction to matter. There have been plenty of "standard protocols" in AI that went nowhere. But with Anthropic pushing it, OpenAI adopting it, and a growing ecosystem of MCP servers, it's looking like this one is going to stick around.

What MCP Actually Is

MCP - the Model Context Protocol - is a standardised way for AI models to interact with external tools and data sources. Think of it as a universal adapter between AI agents and the outside world.

Before MCP, if you wanted an AI agent to query a database, you'd write a custom function, define the schema, handle the authentication, parse the response, and format it for the model. If you wanted the same agent to also interact with a CRM, that's another custom integration. And a third system? Another one. Each integration was bespoke.

MCP standardises this. A tool provider publishes an MCP server that describes what capabilities it offers - what functions are available, what parameters they accept, what data they return. The AI agent connects to that MCP server using the standard protocol and can immediately use those capabilities. No custom integration code needed for each new tool.
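To make that concrete, here's an illustrative sketch of the kind of tool description an MCP server publishes during discovery. The tool name, fields, and helper function are hypothetical, but the shape - a name, a description, and a JSON-Schema-style parameter spec - matches what the protocol exchanges.

```python
# A hypothetical sketch of the tool description an MCP server publishes
# at discovery time: name, purpose, and a JSON-Schema-style parameter
# spec the model reads to decide how to call the tool.
search_invoices_tool = {
    "name": "search_invoices",
    "description": "Search the finance system for invoices matching a query.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Free-text search terms"},
            "limit": {"type": "integer", "description": "Max results", "default": 10},
        },
        "required": ["query"],
    },
}

def describe(tool: dict) -> str:
    """Render a one-line summary an agent might log at discovery time."""
    params = ", ".join(tool["inputSchema"]["properties"])
    return f'{tool["name"]}({params}): {tool["description"]}'
```

The point is that the agent never needs bespoke knowledge of the finance system - everything it needs to call the tool correctly is in the published schema.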

The analogy I use with clients is USB. Before USB, every peripheral had its own connector type. USB didn't make peripherals better - it just made them all use the same plug. MCP is doing the same thing for AI agent tool integrations.

How OpenAI Has Implemented It

OpenAI's implementation brings MCP support directly into their agents and tools framework. You can configure MCP servers as tool sources for your agents, and the agent can discover and use the tools that those servers expose.

The practical workflow looks like this:

  1. You point your agent at one or more MCP server endpoints
  2. The agent queries each server to discover available tools
  3. When the agent decides it needs to use a tool, it makes a standardised call through the MCP protocol
  4. The response comes back in a standardised format that the agent can interpret
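The four steps above can be sketched in plain Python. `FakeMCPServer` is a stand-in, not a real MCP endpoint - the actual protocol runs over JSON-RPC - but the shape of the exchange (discover, call, standardised result) is the same.

```python
# A minimal, dependency-free sketch of the four-step workflow above.
# FakeMCPServer stands in for a real MCP endpoint.
class FakeMCPServer:
    def __init__(self, tools):
        self._tools = tools  # name -> (description, handler)

    def list_tools(self):
        # Step 2: discovery - describe what this server offers.
        return [{"name": n, "description": d} for n, (d, _) in self._tools.items()]

    def call_tool(self, name, arguments):
        # Steps 3-4: standardised call in, standardised result out.
        _description, handler = self._tools[name]
        try:
            return {"isError": False, "content": handler(**arguments)}
        except Exception as exc:
            return {"isError": True, "content": str(exc)}

server = FakeMCPServer({
    "add": ("Add two numbers", lambda a, b: a + b),
})

tools = server.list_tools()                         # step 2
result = server.call_tool("add", {"a": 2, "b": 3})  # steps 3-4
```

Note that even the failure path comes back in the same envelope - that uniformity is what the next sections lean on.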

What makes this particularly useful is the connector ecosystem. OpenAI has introduced connectors - pre-built integrations for common services - that use MCP under the hood. Instead of building a custom integration every time you need your agent to talk to a new system, you configure a connector and the agent can use it immediately.

Why This Matters for Real Projects

On paper, this all sounds like plumbing. Let me explain why it actually matters when you're building production agent systems.

Integration speed. On a recent project, we needed an AI agent that could search a client's SharePoint, query their Dynamics 365 data, and check their Azure DevOps board. Without MCP, each of those integrations would have taken days to build, test, and debug. With MCP connectors, the integration work drops from days to hours because someone has already built and tested the MCP server for each service.

Standardised error handling. One of the most painful parts of custom tool integrations is handling failures gracefully. What happens when the API is down? What about rate limiting? Authentication expiry? With MCP, the error handling patterns are standardised. The agent knows what a "tool unavailable" response looks like regardless of which tool failed.
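Because every failure arrives in the same envelope, the agent side can use a single retry policy for all tools. Here's a minimal sketch of that idea - the `isError` result flag mirrors the shape MCP uses, while the retry function itself is our own illustrative wrapper, not part of the protocol.

```python
# Sketch: one retry/backoff policy covering every tool, because every
# tool failure arrives in the same standardised envelope.
import time

def call_with_retry(call, max_attempts=3, base_delay=0.0):
    """call() returns an MCP-style result dict with an 'isError' flag."""
    result = {"isError": True, "content": "no attempts made"}
    for attempt in range(1, max_attempts + 1):
        result = call()
        if not result.get("isError"):
            return result
        if attempt < max_attempts:
            time.sleep(base_delay * attempt)  # simple linear backoff
    return result  # surface the standardised error after the final attempt
```

In a custom-integration world you'd write a variant of this for every API's idiosyncratic error format; here it's written once.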

Composability. This is the big one. Because every tool speaks the same protocol, you can mix and match tools freely. An agent that starts with access to three tools can have a fourth added without any code changes - just add the MCP server endpoint to its configuration. You can also share MCP servers across multiple agents, which means building a tool integration once and reusing it everywhere.
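A rough illustration of what "no code changes" means in practice - the agent's tool sources live in configuration, so adding a fourth is one entry. The config shape and endpoint URLs here are placeholders, not OpenAI's actual configuration format.

```python
# Sketch: adding a tool source is a configuration change, not a code
# change. The config shape and endpoint URLs are illustrative only.
agent_config = {
    "name": "ops-assistant",
    "mcp_servers": [
        "https://mcp.example.com/sharepoint",
        "https://mcp.example.com/dynamics",
        "https://mcp.example.com/devops",
    ],
}

def add_tool_source(config: dict, endpoint: str) -> dict:
    """Return a copy of the config with one more MCP server endpoint."""
    updated = dict(config)
    updated["mcp_servers"] = config["mcp_servers"] + [endpoint]
    return updated

expanded = add_tool_source(agent_config, "https://mcp.example.com/crm")
```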

The Connectors Ecosystem

OpenAI's connectors are essentially managed MCP servers for popular services. Rather than running your own MCP server for something like web search or code execution, you configure a connector and OpenAI handles the infrastructure.

The connector model works well for common integrations. Web search, code interpretation, file retrieval - these are things almost every agent needs and they benefit from being managed centrally.

For custom or proprietary systems, you'll still need to build your own MCP servers. But the protocol gives you a clear specification to build against, which is a lot better than inventing your own tool-calling convention for each project.

Building Your Own MCP Servers

This is where things get interesting for organisations with proprietary systems. If you have internal APIs, databases, or services that your agents need to access, building an MCP server for them means any agent in your organisation can use those capabilities.

We've been building MCP servers for clients' internal systems as part of our agentic automation work. The pattern is consistent:

  • Identify the capabilities the system exposes (search, create, update, query, etc.)
  • Define the tool schemas with clear parameter descriptions so the model knows how to use them
  • Implement the tool handlers that translate MCP calls into your system's API calls
  • Handle authentication and authorisation properly
  • Add error handling and rate limiting

The investment pays off because once the MCP server exists, every agent you build can use it. A customer service agent, an internal ops agent, and an analytics agent can all access the same CRM data through the same MCP server. Build it once, use it everywhere.

What Works Well and What Doesn't

After working with MCP across several projects, here's our honest assessment.

What works well:

The discovery mechanism is genuinely useful. Agents can query an MCP server to understand what tools are available and how to use them, without any hardcoded knowledge. This means you can update a tool's capabilities on the server side and agents automatically get access to the new functionality.

The standardisation of tool schemas means less ambiguity for the model. When every tool describes itself using the same format, the model gets better at understanding when and how to use each tool. We've seen fewer "wrong tool selection" errors since moving to MCP compared to our custom function-calling implementations.

Composability works in practice, not just in theory. We've had agents running with five or six MCP tool sources and they handle the routing between them well. Adding a new tool source doesn't degrade performance on the existing ones.

What's still rough:

Latency. Every MCP call involves a network round-trip to the MCP server, which then makes its own call to the underlying service. For tools that need to be fast - like real-time data lookups during a conversation - the added latency is noticeable. It's fine for background tasks but can feel sluggish for interactive use cases.

Debugging. When something goes wrong in an MCP call chain, tracing the issue through the standardised protocol layer to the underlying tool can be frustrating. The abstraction that makes MCP useful also makes it harder to debug. Good logging at the MCP server level is non-negotiable.

Authentication management. Each MCP server might need its own credentials to the underlying system. Managing those credentials - rotation, scoping, revocation - adds operational overhead. It's solvable, but you need to think about it upfront rather than bolting it on later.

Practical Recommendations

If you're building AI agents and evaluating MCP integration, here's what I'd suggest:

Start with the managed connectors. For common capabilities like web search and code execution, use what's already available. Don't build custom MCP servers for things that already have good connectors.

Build MCP servers for your core systems early. If your agents need access to your CRM, your ERP, or your product database, invest in building proper MCP servers for those systems. The earlier you standardise on MCP, the more agents benefit from each integration you build.

Plan for observability. Add structured logging and metrics to your MCP servers from day one. Track which tools are being called, how often, with what parameters, and how long they take. This data is invaluable for debugging and for understanding how your agents actually use their tools.
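One way to get that data is a thin wrapper around each tool handler that records the call before passing the result through. This is a minimal sketch of the idea, not a real MCP SDK feature - in production you'd emit these records to your logging or metrics pipeline.

```python
# Sketch: instrument each tool handler to record which tool was called,
# with what argument names, how long it took, and whether it failed.
import time

class ToolMetrics:
    def __init__(self):
        self.records = []

    def instrument(self, name, handler):
        def wrapped(**arguments):
            start = time.perf_counter()
            result = handler(**arguments)
            self.records.append({
                "tool": name,
                "argument_names": sorted(arguments),
                "duration_s": time.perf_counter() - start,
                "is_error": bool(result.get("isError")),
            })
            return result
        return wrapped

metrics = ToolMetrics()
echo = metrics.instrument("echo", lambda **kw: {"isError": False, "content": kw})
echo(message="hi")
```

Recording argument names rather than values is a deliberate choice here - tool arguments often contain customer data you don't want sitting in logs.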

Don't over-expose. Just because you can expose every API endpoint through MCP doesn't mean you should. Give agents access to the specific capabilities they need. A customer-facing agent probably shouldn't have access to admin-level system tools, even if the MCP server technically supports them.
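The simplest enforcement mechanism is an explicit allowlist per agent, applied when the discovered tool list is handed to the agent. A sketch, with hypothetical tool names:

```python
# Sketch: expose only an explicit allowlist of tools per agent, so a
# customer-facing agent never sees admin-level capabilities.
def visible_tools(all_tools, allowlist):
    """Filter a server's discovered tool list to what this agent may use."""
    allowed = set(allowlist)
    return [t for t in all_tools if t["name"] in allowed]

server_tools = [
    {"name": "search_orders"},
    {"name": "refund_order"},
    {"name": "delete_customer"},  # admin-only, never shown to this agent
]
customer_agent_tools = visible_tools(
    server_tools, ["search_orders", "refund_order"]
)
```

An allowlist fails safe: a new tool added to the server stays invisible to every agent until someone deliberately grants it.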

Where This Is Heading

MCP adoption is accelerating. More AI platforms are adding support, more tool providers are publishing MCP servers, and the protocol itself continues to mature. For organisations investing in AI agent development, standardising on MCP now means your tool integrations will work across platforms as the ecosystem evolves.

The convergence around a standard protocol is good for everyone. It means less time writing integration code and more time building the actual agent logic that delivers value.

If you're planning an AI agents project and want to talk through the architecture - including how MCP fits into your existing systems - our AI development team has been working with these patterns across multiple client engagements. Get in touch and we can walk through your specific use case.

For the full technical details on OpenAI's MCP and connectors implementation, check out the OpenAI documentation.