MCP Kills REST API: The Last Year of Classical Integrations

How we replaced 6 REST scripts with 1 MCP server — and built a 90-day migration playbook for your team.

By Alexandr Balas (CEO & Chief System Architect, dlab.md) | March 2026

I need to confess something. Three months ago, one of our Python scripts was connecting to our ERP with SSL verification disabled. verify=False, right there in the codebase. A script that managed published content on our production website. I discovered it on a Tuesday, during a routine code review I had been postponing for weeks.

That script was one of six. Each had its own way of authenticating to Odoo. Each quietly did its job. Each was a small, silent liability.

We deleted all six. Replaced them with a single MCP server. Nobody noticed — because everything just kept working. But better, and without the security debt.

This is a story about that migration, about why the 957 applications in the average enterprise cannot keep talking to each other through hand-built point-to-point integrations forever, and about what comes next. Also: about when you should absolutely not follow our example.

We Deleted 6 Scripts and Nobody Noticed


Here is what we were living with in February 2026: six Python scripts, built over eighteen months by different people at different times, each solving the same problem slightly differently — get data into and out of Odoo 18.

Script #1 published blog posts. Script #2 managed SEO metadata. Script #3 uploaded cover images. Script #4 ran content audits. You get the idea. They all connected to Odoo via XML-RPC. And each one reinvented authentication, error handling, and, in one memorable case, SSL verification.

The problem wasn't that they didn't work. They worked fine. The problem was threefold:

  1. Six separate attack surfaces. One compromised credential set per script. One verify=False. Zero centralized audit trail.
  2. Zero AI compatibility. When we started using AI agents for content operations, none of these scripts were discoverable. The agent literally could not find them.
  3. Maintenance multiplication. When Odoo changed an API behavior, we had to patch six scripts. When we added a new language, six scripts. When we wanted dry-run capability, we just never added it.

The migration to MCP took two weeks. Here's the before-and-after:

| Metric | Before (6 Scripts) | After (1 MCP Server) |
| --- | --- | --- |
| Codebase units | 6 separate files | 1 unified server |
| Auth implementations | 6 (each handcrafted) | 1 (centralized, env-based) |
| SSL workarounds | 1 (verify=False) | 0 (proper cert chain) |
| AI agent compatibility | None | Native tool discovery |
| Dry-run capability | 0 of 6 | 11 of 11 tools |
| Content audit | Manual, one post at a time | Automated, 27 posts in <3 seconds |
| SEO audit | Non-existent | Schema.org + meta validation across all posts |

The part that surprised me most: nobody on the team asked where the old scripts went. The MCP server did everything they did — plus content auditing, SEO validation, pSEO page management, and direct Odoo record manipulation — through a single interface. That's not a sales line. It's literally what happened.

If your current integration estate looks similar, the root issue is usually not protocol choice alone. It is governance. We covered that from the security side in Zero-Trust IT Audit: How to Secure Business Processes Before Entering European Markets. MCP helped us reduce the number of moving parts, but the real gain came from forcing one consistent control plane.

The Bigger Picture: 4 Servers, 87+ Tools

Now zoom out a bit. People hear "we built an MCP server for our blog" and assume this is a niche content workflow. It isn't. That was one migration out of four.

Here is the dlab.md MCP infrastructure running in production as of March 2026:

| MCP Server | Tools | What It Actually Does |
| --- | --- | --- |
| dlab-mcp | 11 | Blog publishing, SEO audits, pSEO page generation, Odoo record CRUD |
| ql-mcp | 60+ | RSOC domain management, campaign creation, keyword research, revenue reporting, batch operations |
| rsoc-bot | 10+ | P&L reporting, SQL queries on production DB, Prophet-based revenue forecasting, pipeline monitoring alerts |
| telegram | 6 | Alert delivery to operators, formatted reports, command queue processing, user management |

Total: 4 production MCP servers, 87+ tools, operated daily by AI agents. Every write operation defaults to dry_run=True. Every tool invocation is logged. The AI agent discovers the full tool registry at connection time — no Swagger files, no separate API docs to maintain, no custom wrapper per script.
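What does that discovery look like in practice? Below is a simplified sketch of the registry an agent receives at connection time, using the name/description/inputSchema entry shape from the MCP specification (the tool shown is one of ours; the registry literal itself is illustrative, not SDK output):

```python
# Simplified sketch of the tool registry an agent receives at connection
# time. Real MCP clients get this via a tools/list request; each entry's
# shape (name, description, inputSchema) follows the MCP specification.
tool_registry = [
    {
        "name": "publish_article",
        "description": "Compile markdown from content/blog/ and publish to Odoo.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "article_num": {"type": "integer"},
                "lang": {"type": "string", "default": "all"},
                "dry_run": {"type": "boolean", "default": True},
            },
            "required": ["article_num"],
        },
    },
]

# The agent can reason about capabilities without any Swagger file,
# e.g. which tools carry a dry_run safety switch:
writable = [t["name"] for t in tool_registry
            if "dry_run" in t["inputSchema"]["properties"]]
print(writable)  # ['publish_article']
```

No separate documentation portal is involved: the schema travels with the connection, so a new tool becomes visible to every agent the moment the server registers it.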

A two-person engineering team runs this. That is not a boast. It is a statement about leverage.

                              [ AI Agent ]
                                   |
       +----------------+---------+----------+-----------------+
       |                |                    |                 |
     (MCP)            (MCP)                (MCP)             (MCP)
       |                |                    |                 |
 [ dlab-mcp ]      [ ql-mcp ]         [ rsoc-bot ]       [ telegram ]
       |                |                    |                 |
  [ Odoo 18 ]  [ Lead Platform ]     [ PostgreSQL ]    [ Telegram API ]

Figure 1: dlab.md MCP infrastructure — 4 servers, 87+ tools. Each server wraps a business backend behind a consistent tool interface. The AI agent discovers available tools at connection time.

Why REST Was Never Built for This


Let me be precise here, because this topic attracts too many lazy takes.

REST was designed in 2000 by Roy Fielding. It solved a real problem: giving human developers a predictable way to build integrations between web services. Stateless requests, uniform interface, resource-oriented design — elegant, battle-tested, and correct for its original purpose.

But Fielding's dissertation did not anticipate what happened in 2025 and 2026: the primary consumer of enterprise integration contracts stopped being a developer with Postman and became an AI agent that needs to discover capabilities on its own, maintain context across a session, and orchestrate multi-step workflows across multiple systems.

That creates five structural mismatches:

| REST Design Choice | Why It Breaks for AI Agents |
| --- | --- |
| Stateless | AI agents need context across calls. With REST, you end up managing state outside the API, and that becomes its own engineering project. |
| No native capability discovery | An AI agent cannot reliably work from a human-written Swagger page. It needs machine-readable tool descriptions available at connection time. |
| Request-response only | AI workflows often need progress updates, partial results, or cancellation during long-running operations. |
| Human-oriented documentation | REST docs are written for engineers. AI agents need typed schemas, parameter descriptions, and deterministic return structures. |
| Per-integration custom code | Each API still needs a custom client, auth handling, retries, and error mapping. At enterprise scale, that multiplication becomes the real cost. |

MCP addresses those gaps with stateful sessions, native tool discovery, JSON-RPC 2.0 messaging, typed schemas, and a standard contract between agent and server.
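Concretely, an MCP tool invocation travels as a JSON-RPC 2.0 envelope. The sketch below shows the shape; the method name tools/call comes from the MCP specification, while the specific tool and result text are examples from this article:

```python
# Sketch of the JSON-RPC 2.0 envelope MCP uses for tool invocation.
# The "tools/call" method and envelope fields follow the MCP spec;
# the tool name and result text are examples from this article.
import json

request = {
    "jsonrpc": "2.0",
    "id": 7,                      # correlates request and response
    "method": "tools/call",
    "params": {
        "name": "run_seo_audit",
        "arguments": {},
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 7,                      # same id, so calls can run concurrently
    "result": {
        "content": [{"type": "text", "text": "27 posts checked, 0 failures"}],
    },
}

wire = json.dumps(request)        # what actually crosses HTTP/SSE or stdio
assert json.loads(wire)["method"] == "tools/call"
```

The id correlation is what makes stateful, multi-call sessions tractable: the agent can have several tool calls in flight and match each result to the step that requested it.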

That does not mean REST is obsolete. It means REST is no longer enough as the top-level orchestration contract for AI-driven workflows.

If you want the security angle behind that shift, especially for PII and financial data, read Data Protection by Design: Why Your Backend Scripts Are a €20M Liability. The protocol discussion matters, but the liability discussion matters more.

MCP vs the Alternatives: An Honest Comparison


If you are reading this with an engineering mindset, your next question is the right one: why MCP specifically? What about Google's A2A protocol? LangChain tools? Auto-generated tool definitions from OpenAPI?

Fair. These are the questions we asked before moving production workflows.

Here is our assessment, based on running these patterns in production rather than discussing them in slides:

| Capability | REST + OpenAPI | LangChain Tools | Google A2A | MCP |
| --- | --- | --- | --- | --- |
| AI-native discovery | ❌ Manual | ⚠️ Code-defined | ✅ Agent cards | ✅ Native |
| Multi-agent orchestration | ⚠️ Custom code | | ✅ Primary design goal | ⚠️ Server-level |
| Bidirectional streaming | ❌ Request-response only | | | ✅ |
| Enterprise governance | ✅ Mature | ❌ No built-in | 🟡 Early stage | 🟡 Growing quickly |
| Ecosystem size | ✅ Massive | ✅ Large | 🟡 Early | ✅ Fast-growing |
| Standards body | ✅ OpenAPI Initiative | ❌ Vendor-specific | ✅ Google-backed | ✅ Linux Foundation |

The key point most comparisons miss is simple: MCP and A2A are not direct competitors.

  • MCP standardizes how an agent connects to tools and data.
  • A2A standardizes how agents communicate with other agents.

That is vertical integration versus horizontal collaboration. In a serious enterprise deployment, you may need both.

LangChain tool-calling is fine for prototypes, but your tools remain framework-bound and invisible outside that runtime. OpenAPI-to-tools generation is useful, but it does not solve stateful context, streaming, or a standard discovery handshake.

We chose MCP because it hit the practical balance: open standard, usable Python and TypeScript SDKs, broad vendor support, and a client/server model that maps cleanly to how enterprise systems are actually built.

If you want a deeper implementation view, Connecting AI Agents to Internal CRM: An MCP Architecture Breakdown goes into the server-side pattern in more detail. And if you are still converting old Python automation into something an AI system can safely call, From Script-Kiddie to Enterprise: Re-architecting Python Scraping Tools into Scalable FastMCP Backends is the more practical companion piece.

[MCP]
AI Agent ---> MCP Server ---> CRM / ERP / DB / CMS


[A2A]
Planning Agent <----> Execution Agent <----> Review Agent


Relationship:
MCP = agent-to-tools
A2A = agent-to-agent

Figure 2: MCP handles agent-to-tools communication. A2A handles agent-to-agent coordination. In production, they are usually complementary.

When NOT to Migrate


This is the section most articles avoid. It is also the section that matters most.

I could spend another thousand words explaining why MCP is useful. More important is knowing when not to use it.

1. Simple CRUD endpoints. If your API serves GET /users/{id} to a mobile app, and no AI agent will ever call it, MCP adds overhead for no real gain. REST is still the cleaner fit for trivial deterministic reads.

2. Debugging is harder — today. Let me be blunt: debugging an MCP server in production is harder than debugging a REST API. The ecosystem has fewer monitoring tools, less operational history, and fewer battle-tested patterns. We hit a concurrency issue in our first dlab_mcp.py deployment. FastMCP was single-threaded by default, and under roughly 100 concurrent sessions response times degraded from about 85 ms to more than 2 seconds. The fix took six hours. Finding the root cause took longer than it should have.
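The shape of that failure mode generalizes. The sketch below is not our FastMCP fix; it is a stdlib illustration of why a single-threaded server's latency grows with load while pooled (or async) handling stays flat. The sleep duration and request counts are arbitrary:

```python
# Illustration of why a single-threaded server degrades under load:
# blocking backend calls queue up, so total latency grows linearly with
# concurrency. Offloading to a thread pool (or using async tools) keeps
# it roughly flat while the backend call dominates.
import time
from concurrent.futures import ThreadPoolExecutor

def slow_backend_call():
    time.sleep(0.05)              # stand-in for an Odoo XML-RPC round trip

def serve_single_threaded(n_requests: int) -> float:
    start = time.monotonic()
    for _ in range(n_requests):
        slow_backend_call()       # each request waits behind all earlier ones
    return time.monotonic() - start

def serve_pooled(n_requests: int, workers: int = 10) -> float:
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda _: slow_backend_call(), range(n_requests)))
    return time.monotonic() - start

if __name__ == "__main__":
    print(f"serial: {serve_single_threaded(20):.2f}s")   # ~1.0s
    print(f"pooled: {serve_pooled(20):.2f}s")            # ~0.1s
```

The lesson is protocol-agnostic: whatever MCP framework you use, check how it schedules concurrent sessions before you let 100 of them hit a blocking backend.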

3. Non-AI integration surfaces. Not everything needs to be AI-accessible. Your Kubernetes liveness probe, your Prometheus exporter, your nightly ETL batch — wrapping these in MCP is over-engineering. The rule is simple: migrate what AI agents actually call in production workflows.

4. Protocol maturity risk. MCP is still young. Linux Foundation governance reduces single-vendor risk, which matters. But the specification is still evolving: transport options, auth patterns, and capability negotiation are changing fast enough that you should keep business logic separate from protocol glue.

So no, do not migrate everything.

Migrate the surfaces where AI needs deterministic access to tools. Leave the rest alone.

REST Is Not Dead — It Is Demoted


The headline is provocative. The actual thesis is more useful.

REST is not dying. It is being demoted from orchestration layer to transport layer.

We have seen this pattern before. TCP did not disappear when HTTP became the dominant application contract. It moved down the stack. The same thing is happening here: MCP servers often communicate over HTTP/SSE or stdio and frequently wrap existing REST or XML-RPC integrations underneath.

So your REST APIs still matter. They still serve data. They still enforce authentication. They still carry payloads. But they are no longer the interface an AI agent should have to negotiate with directly.

For enterprises, this is good news. Your existing API investment is not wasted. It becomes the substrate that MCP wraps and exposes in a more usable form for agent workflows.

That is not destruction. It is architectural repositioning.
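In code, the demotion looks like this. A minimal sketch: fetch_user_rest stands in for an existing REST call (simulated here with canned data rather than a live HTTP GET), and the tool decorator stands in for an MCP SDK registration such as FastMCP's @mcp.tool(). All names are illustrative:

```python
# Sketch of "REST demoted to transport": the existing endpoint keeps
# serving data unchanged; the MCP tool layer adds the agent-facing
# contract (typed parameters, docstring, discoverability) on top.
import json

def fetch_user_rest(user_id: int) -> dict:
    """Simulates the existing REST endpoint (in reality an HTTP GET)."""
    return {"id": user_id, "name": "Ada"}

def tool(fn):
    """Stand-in for an MCP SDK decorator like FastMCP's @mcp.tool()."""
    fn._is_tool = True            # marks the function as agent-discoverable
    return fn

@tool
def get_user(user_id: int) -> str:
    """Get a user profile. Exposed to agents; REST stays underneath."""
    return json.dumps(fetch_user_rest(user_id))

print(get_user(7))  # {"id": 7, "name": "Ada"}
```

Nothing about the underlying endpoint changed; what changed is who can find it and how safely it can be called.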

The 90-Day Migration Playbook


We do not like vague advice. If you have read this far and want to test MCP, here is the playbook we would actually use again.

| Days | Action | Deliverable |
| --- | --- | --- |
| 1–5 | Inventory all endpoints and scripts. Tag each P0 (critical), P1 (important), P2 (low-risk). | api_inventory.csv |
| 6–10 | Pick 3 P2 candidates: low-risk, high-frequency, internal-only. | Migration candidates document |
| 11–20 | Build an MCP server wrapping those 3 functions. Use FastMCP or the official TypeScript SDK. | Working mcp_server.py with dry_run=True on all writes |
| 21–30 | Connect your agent client. Verify discovery, invocation, and failure handling. | Demo recording and session logs |
| 31–45 | Add authentication and access control. Prefer scoped tokens or OAuth 2.1 where possible. | Security configuration and test results |
| 46–60 | Deploy to production with monitoring: latency, error rate, tool frequency, failed writes. | Grafana dashboard and alert rules |
| 61–75 | Expand to P1 workflows. Add medium-risk, business-critical operations. | v2 MCP server |
| 76–90 | Run a real production workflow with human approval gates. Measure time saved, error reduction, and rollback quality. | ROI report |
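The day 1–5 inventory can start as a trivially small script. A sketch, with column names and endpoints that are our own illustration rather than a standard format:

```python
# Sketch of the day 1-5 deliverable: a tagged endpoint inventory.
# Priorities: P0 critical, P1 important, P2 low-risk. P2 rows become
# the migration candidates for days 6-10.
import csv
import io

endpoints = [
    ("POST /blog/publish",   "script #1",   "P2"),
    ("PATCH /seo/meta",      "script #2",   "P2"),
    ("POST /billing/charge", "billing-svc", "P0"),
]

buf = io.StringIO()               # in reality: open("api_inventory.csv", "w")
writer = csv.writer(buf)
writer.writerow(["endpoint", "owner", "priority"])
writer.writerows(endpoints)

candidates = [e for e, _, p in endpoints if p == "P2"]
print(candidates)
```

The format matters less than the discipline: every endpoint gets an owner and a priority before anything gets wrapped.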

Day 90 checkpoint: if you cannot show measurable improvement in integration speed, error reduction, or developer productivity, MCP is not solving your problem. Scale down. That is a valid outcome.

If the numbers are positive, then expand to P0 workflows in the next quarter.

One more point here: if your estate still includes 1C or SAP-era integration logic, do not evaluate MCP in isolation. Migration sequencing matters. Migrating from Legacy Systems (1C, SAP) to Odoo 19: Risk Assessment and Roadmap covers the dependency side of that decision.

What Our MCP Server Actually Looks Like


This is not a conceptual diagram. This is the production tool registry extracted from dlab_mcp.py — the file that replaced our six scripts:

# dlab_mcp.py — Production MCP Server (FastMCP)
# Source: https://github.com/dlab-md/mcp (internal, architecture open for audit)
from mcp.server.fastmcp import FastMCP


mcp = FastMCP("DLab Odoo Agent")


# ═══ Blog Tools ═══════════════════════════════════════════════
@mcp.tool()
def list_blog_posts(lang: str = "all") -> str:
    """List all published blog posts with SEO compliance status."""


@mcp.tool()
def get_blog_post(post_id: int) -> str:
    """Get full blog post details including content preview and SEO fields."""


@mcp.tool()
def publish_article(article_num: int, lang: str = "all", dry_run: bool = True) -> str:
    """Compile markdown from content/blog/ and publish to Odoo."""


@mcp.tool()
def update_seo_meta(post_id: int, meta_title: str = "", ..., dry_run: bool = True) -> str:
    """Update SEO metadata for a blog post."""


@mcp.tool()
def upload_cover_image(post_id: int, image_path: str, dry_run: bool = True) -> str:
    """Upload a local image as cover for a blog post."""


# ═══ pSEO Tools ═══════════════════════════════════════════════
@mcp.tool()
def list_pseo_pages(published_only: bool = True) -> str:
    """List all pSEO comparison pages."""


@mcp.tool()
def list_pseo_templates() -> str:
    """List pSEO templates with AdSense/AFS configuration."""


@mcp.tool()
def create_pseo_page(template_id: int, slug: str, ..., dry_run: bool = True) -> str:
    """Create a new pSEO comparison page."""


# ═══ Audit Tools ══════════════════════════════════════════════
@mcp.tool()
def run_seo_audit() -> str:
    """Run a full SEO audit on all published blog posts.
    Checks: schema.org, meta title/description/keywords, seo_name."""


@mcp.tool()
def run_content_audit() -> str:
    """Audit SSOT (content/blog/) vs live Odoo posts.
    Checks: all markdown files have odoo_id, content hash matches."""


# ═══ Generic Odoo ═════════════════════════════════════════════
@mcp.tool()
def read_odoo_records(model: str, domain_json: str = "[]", ...) -> str:
    """Read records from any Odoo model."""


@mcp.tool()
def write_odoo_records(model: str, record_ids_json: str, ..., dry_run: bool = True) -> str:
    """Write/update records in any Odoo model."""

Two things matter here.

First, every tool has typed parameters and a docstring. That docstring becomes part of the AI-readable contract. The agent reads it during discovery and can reason about what the tool does without a separate documentation portal.
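That contract can be derived mechanically from the function itself. Here is a simplified sketch of what an SDK does during registration; real SDKs such as FastMCP generate richer JSON Schema, and tool_schema below is our own toy helper, not their API:

```python
# How typed parameters plus a docstring become a machine-readable
# contract: derive a minimal tool schema from the function signature.
import inspect

def publish_article(article_num: int, lang: str = "all",
                    dry_run: bool = True) -> str:
    """Compile markdown from content/blog/ and publish to Odoo."""

def tool_schema(fn) -> dict:
    sig = inspect.signature(fn)
    props = {
        name: {"type": p.annotation.__name__,
               **({} if p.default is p.empty else {"default": p.default})}
        for name, p in sig.parameters.items()
    }
    return {"name": fn.__name__, "description": fn.__doc__, "properties": props}

schema = tool_schema(publish_article)
print(schema["properties"]["dry_run"])  # {'type': 'bool', 'default': True}
```

This is why the docstring is load-bearing: it is the only human-language channel the agent gets, so a vague docstring means a confused agent.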

Second, every write operation defaults to dry_run=True. The agent cannot silently mutate production data unless a human or an explicit workflow step approves it. That is not a convenience feature. It is a control mechanism.
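A minimal sketch of that guard, reduced to its essence (this is the pattern, not our production code; write_records and its return strings are illustrative):

```python
# Dry-run guard pattern: writes preview their effect by default and
# mutate only when dry_run is explicitly disabled.
def write_records(model: str, values: dict, dry_run: bool = True) -> str:
    if dry_run:
        # describe the would-be change; nothing touches production
        return f"[DRY RUN] would update {model} fields {sorted(values)}"
    # real mutation would happen here (e.g. an Odoo XML-RPC write)
    return f"updated {model}"

print(write_records("blog.post", {"meta_title": "New"}))
# [DRY RUN] would update blog.post fields ['meta_title']
```

Because the safe behavior is the default, an agent has to pass dry_run=False explicitly, which is exactly the step a human approval gate can sit in front of.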

For systems touching customer data, finance, or regulated workflows, I would go further:

  • enforce role-scoped tool access,
  • log every invocation with caller identity and payload hash,
  • keep rollback procedures documented and tested,
  • isolate high-risk tools behind approval gates,
  • and air-gap anything that processes sensitive exports unless there is a strong reason not to.

That matters for GDPR Article 32, and it will matter even more as AI-assisted business workflows start falling under stricter governance expectations in the EU AI Act Compliance 2026: A Technical Guide for Developers and Integrators.
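Of those bullets, invocation logging is the cheapest to implement. A sketch, with the field names and in-memory log sink as our own illustrative choices; hashing the payload lets you prove what was sent without storing potentially sensitive values:

```python
# Audit-log pattern: record caller identity, tool name, and a payload
# hash for every invocation, without persisting the raw payload itself.
import functools
import hashlib
import json

AUDIT_LOG: list[dict] = []        # in production: an append-only store

def audited(caller: str):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(**kwargs):
            digest = hashlib.sha256(
                json.dumps(kwargs, sort_keys=True).encode()
            ).hexdigest()
            AUDIT_LOG.append({"caller": caller, "tool": fn.__name__,
                              "payload_sha256": digest})
            return fn(**kwargs)
        return wrapper
    return decorator

@audited(caller="agent:content-ops")
def update_seo_meta(post_id: int, meta_title: str = "") -> str:
    return f"ok: {post_id}"

update_seo_meta(post_id=7, meta_title="Hello")
print(AUDIT_LOG[0]["tool"])  # update_seo_meta
```

The hash is enough to answer "did the agent send exactly this payload?" during an incident review, which is the question GDPR Article 32 auditors actually ask.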

Frequently Asked Questions


Is MCP really replacing REST API? No — and the headline is deliberately provocative. MCP is moving REST down the stack, not erasing it. REST APIs continue to serve data reliably, but they are no longer the best top-level interface for AI agent integration. MCP servers frequently wrap existing REST endpoints, adding capability discovery, session context, and structured tool contracts.

What is the Model Context Protocol (MCP)? MCP is an open standard, originally introduced by Anthropic in November 2024 and now governed through the Linux Foundation ecosystem, for exposing tools and data to AI agents through a structured client/server model. In practice, it gives agents a standard way to discover tools, understand parameters, and invoke operations over transports such as HTTP/SSE or stdio. Industry adoption is accelerating: Gartner projects (as a forward-looking estimate, not a confirmed figure) that 75% of API gateway vendors will incorporate MCP features by end of 2026.

How long does it take to migrate from REST to MCP? For a focused team of 2–4 engineers, the first three endpoints or scripts can usually be wrapped in 2–3 weeks if the underlying business logic is already stable. Our own migration of six scripts to one MCP server with 11 tools took two weeks, including testing, dry-run controls, and production rollout.

Is MCP secure enough for enterprise production? It can be, if you implement it like an enterprise system rather than a demo. That means scoped authentication, full audit logs, approval gates for writes, rollback procedures, and zero-trust assumptions around every tool call. The protocol itself is not the whole security model. Your implementation is.

Can MCP work alongside existing REST APIs? Yes. That is the normal migration pattern. MCP servers wrap existing APIs and expose them through a standard tool interface while the underlying REST endpoints continue to operate unchanged. Start with a few low-risk workflows, measure the result, and expand only where the architecture proves its value.