agents · tools · open-source · deployment

MCP Is Winning the Agent Tool Protocol War

MCP (Model Context Protocol) has emerged as the de facto standard for connecting AI agents to external tools and data sources. With support from Anthropic, OpenAI, Google, and the open-source community, MCP servers now cover databases, APIs, dev tools, and enterprise systems. Builders should standardize on MCP now.

Digiteria Labs · 15 min read

Key Signals

  • Anthropic launched the Model Context Protocol (MCP) as an open specification in November 2024. Fifteen months later, the community registry lists over 2,000 MCP servers spanning databases, developer tools, SaaS APIs, and enterprise connectors.
  • OpenAI added native MCP support to the Agents SDK and ChatGPT desktop app in March 2025, marking the moment MCP crossed from Anthropic ecosystem tool to industry-wide default.
  • Google DeepMind confirmed MCP client support in Gemini and Vertex AI Agent Builder in late 2025, eliminating the last major holdout and cementing MCP as the only protocol with backing from all three frontier labs.
  • The MCP spec has evolved through three major revisions — adding Streamable HTTP transport, OAuth 2.1 authorization, and structured tool annotations — transforming it from a local-only developer protocol into a production-grade, remotely-deployable integration layer.
  • Agent frameworks including LangChain, CrewAI, AutoGen, and Vercel AI SDK now ship with MCP-first tool discovery, meaning any tool exposed as an MCP server is automatically available to agents built on these platforms without custom glue code.

What Happened

I've been tracking MCP since Anthropic open-sourced it in November 2024, and I'll be honest — I didn't expect it to win this cleanly. The pitch was straightforward: a JSON-RPC 2.0 based specification for connecting AI models to external tools and data sources. One protocol instead of every agent framework inventing its own tool-calling interface. A universal adapter. We've seen this movie before with container orchestration (Kubernetes), package management (npm), and hardware interfaces (USB). The question was always whether any single protocol could win fast enough to prevent fragmentation.

It won fast. Surprisingly fast. Within six months of launch, MCP had reference implementations in TypeScript and Python, a growing registry of community-built servers, and — this is the part that caught me off guard — adoption from Anthropic's direct competitors. OpenAI integrated MCP client support into the Agents SDK in March 2025. Google followed in late 2025. By early 2026, every major agent framework had standardized on MCP as the default tool integration layer. The protocol war that many anticipated between OpenAI's function calling format, Google's Vertex Extensions, and Anthropic's MCP? It never materialized. And here's the thing most people miss about why: MCP's advantage was not technical superiority. It was that Anthropic gave it away as a fully open specification with an Apache 2.0 licensed SDK, while competitors had tied their tool interfaces to proprietary platforms. Open beats proprietary when the ecosystem wants a shared standard. Every time.

The spec itself has matured considerably since launch. The initial release supported only stdio-based local communication between a host application and MCP servers running as child processes — basically a toy for developer workstations. By February 2026, the protocol supports three transport mechanisms (stdio, Server-Sent Events over HTTP, and the newer Streamable HTTP transport), includes OAuth 2.1 for remote server authentication, and has formalized tool annotations that let servers declare whether a tool is read-only or has side effects. This is no longer a toy for local developer workflows. It is production infrastructure. That shift happened faster than I expected.

Note: I keep coming back to the Kubernetes parallel. MCP's adoption curve mirrors Kubernetes in 2016-2017 almost exactly. The technical details mattered less than the ecosystem dynamics: an open spec, backed by a major player willing to cede control, adopted by competitors who preferred a shared standard to building their own. The lesson is the same — in infrastructure protocol wars, open + good enough beats proprietary + technically superior every time. I think we'll say the same thing about MCP in three years.

The Protocol Architecture

Before I get into what this means for builders, it's worth understanding what MCP actually standardizes. The protocol defines three core primitives that an MCP server can expose to an AI agent:

Tools are the most widely used primitive — and the one you'll interact with first. A tool is a function with a name, a JSON Schema description of its parameters, and optional annotations describing its behavior (read-only vs. destructive, idempotent vs. not). When an agent decides to use a tool, the MCP client sends a tools/call request to the server, which executes the function and returns the result. This is the mechanism that lets an agent query a database, create a GitHub issue, or send a Slack message. Simple concept. Surprisingly powerful in practice.

Resources provide structured data access. Unlike tools, which are model-controlled (the agent decides when to invoke them), resources are application-controlled — the host application decides which resources to include in context. Think of resources as the protocol's way of handling file contents, database schemas, API documentation, or any reference data the agent needs to reason about. I think resources are underappreciated right now — more on that later.

Prompts are reusable prompt templates that servers can expose. These are less commonly used than tools and resources, but they allow MCP servers to package domain-specific prompting strategies — for example, a database MCP server might expose a "query_optimization" prompt template that instructs the agent how to write efficient SQL for that specific database engine. (Honestly, I haven't seen many production deployments leaning on this primitive yet. But it's there.)
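To make the tools primitive concrete, here is the JSON-RPC 2.0 shape of a tools/call exchange as the spec defines it. The tool name and arguments below are illustrative, not from a real server:

```python
import json

# JSON-RPC 2.0 request an MCP client sends to invoke a tool
# (method and params shape per the MCP spec's tools/call)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query",
        "arguments": {"sql": "SELECT name FROM users LIMIT 5"},
    },
}

# Typical server response: a result carrying content blocks,
# plus an isError flag for tool-level failures
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": '[{"name": "alice"}]'}],
        "isError": False,
    },
}

wire = json.dumps(request)  # what actually crosses the transport
```

Every transport — stdio, SSE, Streamable HTTP — carries exactly these messages; only the framing differs.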

Note: I want to be honest about the biggest gap in the MCP ecosystem right now: security. There is minimal server verification or supply chain security. Anyone can publish an MCP server, and many community servers have not been audited. A malicious MCP server has full access to whatever capabilities the host application grants it — file system reads, network requests, database writes. Treat MCP servers like npm packages: vet them before you install them, pin versions, and run untrusted servers in sandboxed environments. The spec now includes tool annotations for declaring side effects, but these are self-reported by the server and not enforced by the protocol. That last part worries me. Self-reported safety metadata is not safety.

Builder Breakdown

Building an MCP Server

Let me walk through what it actually takes to build and deploy an MCP server. The fastest path to production is the official TypeScript or Python SDK. An MCP server is a process that speaks JSON-RPC 2.0 over one of the supported transports and exposes tools, resources, or prompts (or any combination of the three). It's simpler than it sounds.

TypeScript Example — A Database Query Server:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import Database from "better-sqlite3";

const server = new McpServer({
  name: "sqlite-query",
  version: "1.0.0",
});

const db = new Database("/path/to/your.db", { readonly: true });

// Expose a tool for running read-only SQL queries
server.tool(
  "query",
  "Execute a read-only SQL query against the database",
  {
    sql: z.string().describe("The SQL SELECT query to execute"),
  },
  async ({ sql }) => {
    // The read-only connection is the hard guard; this prefix check just
    // rejects obviously non-read input early (WITH covers read-only CTEs)
    const normalized = sql.trim().toUpperCase();
    if (!normalized.startsWith("SELECT") && !normalized.startsWith("WITH")) {
      return {
        content: [{ type: "text", text: "Error: Only SELECT queries allowed" }],
        isError: true,
      };
    }
    const rows = db.prepare(sql).all();
    return {
      content: [{ type: "text", text: JSON.stringify(rows, null, 2) }],
    };
  }
);

// Expose the database schema as a resource
server.resource(
  "schema",
  "db://schema",
  { description: "The database schema for all tables" },
  async (uri) => {
    const tables = db
      .prepare("SELECT sql FROM sqlite_master WHERE type='table'")
      .all();
    return {
      contents: [{
        uri: uri.href,
        mimeType: "text/plain",
        text: tables.map((t: any) => t.sql).join("\n\n"),
      }],
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);

Python Example — A Web API Connector:

import os

from mcp.server.fastmcp import FastMCP
import httpx

mcp = FastMCP("github-issues")

@mcp.tool()
async def list_issues(
    repo: str,
    state: str = "open",
    limit: int = 10,
) -> str:
    """List GitHub issues for a repository.

    Args:
        repo: Repository in owner/name format (e.g. 'anthropics/mcp')
        state: Filter by state — open, closed, or all
        limit: Maximum number of issues to return
    """
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"https://api.github.com/repos/{repo}/issues",
            params={"state": state, "per_page": limit},
            headers={"Accept": "application/vnd.github.v3+json"},
        )
        resp.raise_for_status()
        issues = resp.json()

    lines = []
    for issue in issues:
        lines.append(f"#{issue['number']} [{issue['state']}] {issue['title']}")
    return "\n".join(lines) if lines else "No issues found."

@mcp.tool()
async def create_issue(
    repo: str,
    title: str,
    body: str = "",
) -> str:
    """Create a new GitHub issue.

    Args:
        repo: Repository in owner/name format
        title: Issue title
        body: Issue body in markdown
    """
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            f"https://api.github.com/repos/{repo}/issues",
            json={"title": title, "body": body},
            headers={
                "Accept": "application/vnd.github.v3+json",
                "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            },
        )
        resp.raise_for_status()
        issue = resp.json()
    return f"Created issue #{issue['number']}: {issue['html_url']}"

if __name__ == "__main__":
    mcp.run(transport="stdio")

Transport Mechanisms

Let me break down the three transports, because choosing the right one determines how your server deploys — and I've seen teams make expensive mistakes here:

stdio is the original transport. The MCP client launches the server as a child process and communicates over standard input/output. Best for: local development tools, CLI integrations, desktop applications. Claude Desktop, Claude Code, and Cursor all use stdio for local MCP servers. Zero network configuration required. If you're just getting started, this is your friend.
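Host applications discover stdio servers through a configuration file. In Claude Desktop, for instance, the SQLite server from earlier would be registered in `claude_desktop_config.json` roughly like this (the command and path are illustrative):

```json
{
  "mcpServers": {
    "sqlite-query": {
      "command": "node",
      "args": ["/path/to/build/index.js"]
    }
  }
}
```

The host launches the listed command as a child process on startup and speaks JSON-RPC to it over stdin/stdout.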

Streamable HTTP is the recommended transport for remote deployments (introduced in the 2025-03-26 spec revision). The client sends JSON-RPC requests via HTTP POST to a single endpoint. The server can respond with a single JSON response or upgrade to a Server-Sent Events stream for progress updates and multi-part results. This replaced the older SSE transport and is now the preferred production mechanism. If you're building anything that needs to serve multiple clients or run behind a load balancer, this is the one.

// Remote MCP server using Streamable HTTP transport
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from
  "@modelcontextprotocol/sdk/server/streamableHttp.js";
import express from "express";

const app = express();
app.use(express.json());

const server = new McpServer({ name: "remote-api", version: "1.0.0" });

// ... register tools, resources, prompts ...

// Stateless mode: no session tracking; each POST carries one JSON-RPC message
const transport = new StreamableHTTPServerTransport({
  sessionIdGenerator: undefined,
});
await server.connect(transport);

app.post("/mcp", async (req, res) => {
  await transport.handleRequest(req, res, req.body);
});

app.listen(3001);

SSE (deprecated) was the original remote transport. It used one endpoint for client-to-server messages (POST) and another for server-to-client events (GET with SSE). Still supported for backward compatibility, but don't build new servers on it. You'll just end up migrating later.

Authentication for Remote Servers

The spec now includes an OAuth 2.1 authorization flow for remote MCP servers — and this is the change that, in my view, made MCP production-ready. When a client connects to a remote server that requires auth, the server responds with a 401 and an RFC 8414 authorization server metadata URL. The client then performs a standard OAuth flow — authorization code with PKCE — and attaches the resulting Bearer token to subsequent requests. This means your MCP server can integrate with existing identity providers (Auth0, Okta, Cognito) without inventing a custom auth scheme. No more bespoke token-passing hacks.
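The PKCE step of that flow is worth seeing concretely. Here is a minimal sketch of generating the code verifier and S256 challenge per RFC 7636, which is what an MCP client does before redirecting the user to the authorization server (the function name is mine, not from any SDK):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier / code_challenge pair (RFC 7636, S256)."""
    # 32 random bytes -> 43-char base64url string, no padding
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` with the authorization request, then proves
# possession of `verifier` when exchanging the code for a Bearer token.
```

Because the challenge is a one-way hash of the verifier, an attacker who intercepts the authorization code still cannot redeem it — which is exactly why OAuth 2.1 mandates PKCE for public clients like MCP hosts.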

Key Tool Categories in the Ecosystem

The 2,000+ MCP servers in the ecosystem cluster into predictable categories. Here's what I'm seeing:

  • Databases: PostgreSQL, MySQL, SQLite, MongoDB, Redis, Elasticsearch — each exposing query, schema inspection, and (optionally) write tools.
  • Developer tools: GitHub, GitLab, Jira, Linear, Sentry — issue management, code search, deployment status.
  • Web and search: Brave Search, Google Search, web scraping (Firecrawl, Browserbase), URL fetching.
  • Communication: Slack, Discord, email (via SMTP or Resend/SendGrid APIs), calendar management.
  • Cloud infrastructure: AWS, GCP, Cloudflare, Vercel, Kubernetes — resource management, log queries, deployment triggers.
  • Enterprise systems: Salesforce, HubSpot, Notion, Confluence, Google Workspace — CRM data, document access, knowledge bases.
  • File and storage: Local filesystem, S3, Google Drive, Dropbox — file read/write, search, and organization.

Economic Analysis

Platform Dynamics

Here's why that 2,000-server number matters more than it looks: MCP has created a two-sided marketplace dynamic. On one side, tool and API providers who ship MCP servers make their products instantly accessible to every AI agent. On the other side, agent frameworks that support MCP as a client get access to the entire tool ecosystem without bilateral integration work. This is the classic platform flywheel: more servers attract more clients, which attract more servers. And once a flywheel like this gets going, it's nearly impossible to stop. (Ask anyone who tried to build a Kubernetes alternative in 2018.)

Winners:

  • Anthropic gains ecosystem influence without direct revenue — and I think this was the entire strategy from day one. By controlling the spec (even as an open standard), Anthropic shapes the conventions that every agent builder follows. Claude's tool-use capabilities were designed around MCP's primitives, giving it a subtle home-field advantage in how tools are described and invoked. Clever move.
  • SaaS companies that shipped MCP servers early. If your product has an MCP server in the community registry, every Claude, ChatGPT, and Gemini user can connect to it by adding a single config block. This is a new distribution channel — tool discovery via agent, not via app store. Companies like Stripe, Sentry, and Cloudflare that published official MCP servers in 2025 are seeing organic agent-driven API usage they never marketed for. The headline says "2,000 community servers" but if you look at the actual numbers, the official first-party servers from major SaaS companies are the ones driving real production volume.
  • Agent framework developers. LangChain, CrewAI, and similar frameworks become more valuable as the MCP ecosystem grows — they are the middleware layer that connects arbitrary agents to arbitrary tools. Good position to be in.
  • Enterprise integration platforms. Companies building managed MCP server hosting and governance (server registries, access controls, audit logging) are filling a real gap. The protocol defines the wire format; it does not define the management plane. My take: this is the biggest greenfield opportunity in the MCP ecosystem right now.

Losers:

  • Proprietary tool-calling ecosystems. OpenAI's custom GPT Actions, Google's Vertex Extensions, and similar proprietary integration formats are being deprioritized by builders in favor of "build once with MCP, deploy everywhere." I've talked to several teams that started with GPT Actions and are now migrating. The writing is on the wall.
  • Traditional iPaaS platforms (Zapier, Make, Workato) face what I'd call an existential strategy question. MCP allows agents to call APIs directly with structured, schema-aware tool interfaces — exactly the problem iPaaS platforms solve for human-in-the-loop workflows. The integration layer is moving from human-configured workflows to agent-discovered tools. I'm not sure iPaaS dies (there's a long tail of non-technical users), but the developer market is gone.
  • Tool providers that ignore MCP. The integration tax is real and growing. If a developer must write custom tool definitions to connect your API to an agent — instead of pointing at your MCP server — they will choose a competitor that publishes one. This is the same dynamic that punished SaaS companies without REST APIs in the 2010s. History doesn't repeat, but it rhymes.

The Open Question: Who Governs the Spec?

Here's what worries me about the long-term picture. MCP is currently maintained by Anthropic with community input via GitHub. There is no independent standards body. This mirrors the early days of Docker (company-controlled spec) before the Open Container Initiative formed. As MCP adoption deepens and enterprise budgets depend on it, expect pressure for either a formal foundation or multi-vendor governance structure. The spec's evolution — particularly around security, auth, and server verification — will increasingly have financial consequences for the ecosystem. One company controlling the spec that everyone depends on is fine when adoption is early. It gets uncomfortable fast.

"MCP is not a tool-calling format. It is a network protocol for the agent economy — the TCP/IP layer that lets any agent talk to any tool. The companies that build MCP servers today are the ones that will have distribution when agents become the primary interface for software."

The Integration Pattern Shift

I want to spend a moment on something that I think is the most important practical consequence of MCP winning, because it changes how you architect agent-powered products. Before MCP, building an agent that could interact with external systems required writing custom tool definitions for every integration — describing the function signature, parsing the response, handling auth, managing errors. Each agent framework had its own format for these definitions. Adding a new tool meant writing new code. It was tedious, fragile, and it scaled terribly.

With MCP, the pattern inverts. Instead of the agent application defining tools, tools define themselves. An MCP server advertises its capabilities through the tools/list and resources/list endpoints. The agent client discovers available tools at runtime, reads their schemas, and invokes them through a standardized protocol. Adding a new integration to your agent means adding a line to your MCP server configuration, not writing new application code. That's it. One line.

This is the architectural shift that matters. It decouples agent logic from tool implementation. Your agent does not need to know how to query PostgreSQL or create a Jira ticket — it needs to know how to speak MCP. The implementation details live in the MCP server, which can be maintained independently, versioned separately, and shared across every agent in your organization. If you've worked with microservices, this should feel familiar. If you haven't — well, this is why people got excited about microservices.
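The inversion is easier to see in code. This sketch models runtime discovery and dispatch with plain dicts rather than the real SDK — the entries mirror what a tools/list response advertises, but every name and handler here is illustrative:

```python
def discover_tools(advertised: list[dict]) -> dict:
    """Index the tools a connected MCP server advertises, keyed by name."""
    return {tool["name"]: tool for tool in advertised}

# What a server might advertise via tools/list (toy stand-ins)
catalog = [
    {"name": "query", "description": "Run a read-only SQL query",
     "handler": lambda args: f"rows for: {args['sql']}"},
    {"name": "create_issue", "description": "Open a GitHub issue",
     "handler": lambda args: f"created: {args['title']}"},
]

tools = discover_tools(catalog)

def dispatch(name: str, arguments: dict) -> str:
    """The agent selects a tool by name at runtime; no per-tool glue code."""
    return tools[name]["handler"](arguments)

print(dispatch("query", {"sql": "SELECT 1"}))  # rows for: SELECT 1
```

Adding a tool means the server advertises one more catalog entry; the agent's dispatch path never changes. That is the decoupling in miniature.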

Note: The most underappreciated MCP feature — and I keep coming back to this — is resource subscriptions. An MCP client can subscribe to resource changes and receive notifications when underlying data updates. This enables reactive agent architectures — agents that respond to database changes, file modifications, or API state transitions in real time, rather than polling. Most production implementations have not adopted this pattern yet, but I think it will define the next generation of agent workflows. The data here is thin (I haven't found good numbers on subscription adoption), but architecturally, this is where things get really interesting.
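A toy model of that subscription pattern, for flavor. The method and notification names follow the spec; the in-memory hub and everything else is illustrative plumbing, not SDK code:

```python
class ResourceHub:
    """Toy stand-in for an MCP server's resource-subscription bookkeeping."""

    def __init__(self):
        self.subscribers: dict[str, list] = {}

    def subscribe(self, uri: str, callback) -> None:
        # Client side: a resources/subscribe request for a given URI
        self.subscribers.setdefault(uri, []).append(callback)

    def resource_changed(self, uri: str) -> None:
        # Server side: push notifications/resources/updated to subscribers
        for callback in self.subscribers.get(uri, []):
            callback({"method": "notifications/resources/updated",
                      "params": {"uri": uri}})

events = []
hub = ResourceHub()
hub.subscribe("db://schema", events.append)  # the agent reacts, never polls
hub.resource_changed("db://schema")
print(events[0]["params"]["uri"])  # db://schema
```

The agent's callback fires the moment the resource changes, which is the whole point: no polling loop burning tokens on unchanged data.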

Recommendation

What I'd Do

If you're a CTO: Make MCP the standard integration interface for your product's API. If you offer a developer platform or SaaS product with an API, publish an official MCP server alongside your SDK. This is not optional for 2026 — it is table stakes. Internally, audit your agent-powered features and migrate any custom tool-calling implementations to MCP clients. The goal is a single integration protocol across your entire agent stack, reducing maintenance burden and enabling tool reuse across products. The teams I've talked to who did this early are already seeing compound benefits.

If you're a founder: The MCP ecosystem is your distribution strategy. Full stop. Ship an MCP server and get listed in the community registry. Every developer using Claude Code, Claude Desktop, Cursor, Windsurf, or any MCP-compatible agent framework becomes a potential user without you spending a dollar on marketing. If you're building agents (not tools), standardize on MCP-first tool discovery — it gives your agents access to the broadest set of integrations with the least engineering effort. Whatever you do, do not build proprietary tool interfaces. That path leads to integration maintenance debt that will slow your iteration speed. I've watched two startups learn this the hard way in the last six months.

If you're an infra lead: Deploy a centralized MCP gateway for your organization. Instead of every developer running local MCP servers with their own credentials (and yes, I've seen production API keys in plaintext MCP configs — it's ugly out there), stand up shared remote MCP servers using Streamable HTTP transport and OAuth 2.1 that enforce access controls, audit tool invocations, and manage secrets centrally. Treat MCP servers like microservices: containerize them, run them behind your service mesh, monitor their latency and error rates. The operational maturity of your MCP infrastructure will determine how reliably your agents perform in production. Start with your three highest-value integrations — likely your primary database, your issue tracker, and your internal knowledge base — and expand from there. You don't need to boil the ocean on day one.

Sources

  1. "Model Context Protocol Specification," modelcontextprotocol.io/specification (2025-03-26 revision)
  2. "Introducing the Model Context Protocol," Anthropic Blog, anthropic.com/news/model-context-protocol (November 2024)
  3. "OpenAI Agents SDK — MCP Support," OpenAI Documentation, platform.openai.com/docs/agents (March 2025)
  4. MCP Community Servers Registry, github.com/modelcontextprotocol/servers (accessed February 2026)
  5. "MCP: The Protocol Powering Agent-Tool Integration," Sequoia Capital AI Infrastructure Report, sequoiacap.com/article/mcp-agent-tool-protocol (January 2026)
  6. "Building Remote MCP Servers with OAuth 2.1," Anthropic Developer Blog, docs.anthropic.com/en/docs/agents-and-tools/mcp (2025)

Need help implementing AI infrastructure for your organization? We help enterprises build, deploy, and optimize production AI systems. Learn about our AI consulting services.
