By Alexandr Balas (CEO & Chief System Architect, dlab.md) | Updated: March 2026
In 2026, the Model Context Protocol (MCP) has become a practical requirement for enterprise AI deployments. As organizations move from isolated LLM chatbots to context-aware AI agents, the architectural focus shifts to secure, scalable, and compliant data access. This article looks at that shift from fragile, bespoke REST integrations to standardized MCP server architectures, with a specific focus on configuring Anthropic Claude 3.5 for controlled access to internal systems under strict compliance constraints.
The Core Problem: Why Bespoke API Integrations for LLMs Fail at Scale
Over the last two years, many enterprise IT teams have tried to connect internal data lakes and generative AI endpoints using custom REST APIs, LangChain wrappers, and ad-hoc Python middleware. It works for a prototype. It usually fails in production.
The common failure points are predictable:
- Schema Volatility: LLM provider schema changes break custom integrations and force repeated refactoring.
- No Consistent Tool-Calling Model: Each wrapper implements its own invocation logic, so agent behavior becomes difficult to test and audit.
- Latency and Throughput Constraints: Large enterprise payloads trigger bottlenecks and timeouts, especially over XML-RPC or REST.
- Security Gaps: Weak authentication, broad service accounts, and unreviewed internal traffic paths increase exfiltration risk.
Always use asynchronous `queue_job` patterns for payloads exceeding 500k rows to avoid XML-RPC timeouts.
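The core idea behind that advice is to never move a huge result set in one synchronous call. As a minimal sketch of the chunking half of the pattern (the `fetch_page` callable and chunk size are illustrative; in Odoo you would back it with `models.execute_kw(..., 'search_read', ...)` using `limit` and `offset`, and enqueue each chunk via the OCA `queue_job` module):

```python
# Illustrative sketch: page through a large result set in fixed-size chunks
# instead of one monolithic XML-RPC call. A stub fetcher stands in for the
# real Odoo search_read call so the chunking logic runs standalone.

def fetch_in_chunks(fetch_page, chunk_size=5000):
    """Yield records page by page until the source is exhausted."""
    offset = 0
    while True:
        page = fetch_page(offset, chunk_size)
        if not page:
            break
        yield from page
        offset += chunk_size

# Stub standing in for models.execute_kw(db, uid, key, model, 'search_read', ...)
def fake_page(offset, limit, total=12):
    return list(range(offset, min(offset + limit, total)))

records = list(fetch_in_chunks(fake_page, chunk_size=5))
```

Each chunk can then be handed to an asynchronous job rather than processed inline, which keeps individual XML-RPC calls well under the transport timeout.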
These issues become more serious once the AI agent is allowed to touch ERP, CRM, or financial data. If you are still assessing your internal exposure before opening these systems to AI tooling, our Zero-Trust IT Audit: How to Secure Business Processes Before Entering European Markets is the right companion read.
Context Window Limitations vs. Targeted Data Fetching
A recurring mistake in legacy RAG pipelines is treating the LLM context window like a data warehouse. Teams push large, unstructured JSON payloads into the prompt and then wonder why quality drops, token costs rise, and outputs become inconsistent.
A better pattern is simple: keep the prompt small, and let the agent fetch only the records it actually needs. In practice, that means structured tool calls against controlled data sources instead of dumping entire datasets into the model.
This is where MCP starts to matter.
Model Context Protocol: A Practical Standard for AI Agent Integration
The Model Context Protocol (MCP) addresses the integration problem by defining a standard way for AI clients to discover tools, request resources, and execute actions. Instead of rebuilding the same glue layer for every model provider, teams can separate internal data logic from the LLM vendor interface.
That separation matters operationally. Claude may change, and your internal systems certainly will, but your MCP layer can remain stable if it is designed correctly.
MCP and JSON-RPC 2.0: Predictable, Auditable Communication
MCP is built on JSON-RPC 2.0, which gives client-server exchanges a strict and auditable structure. When an AI agent needs a customer balance, invoice status, or ERP record, the MCP client issues a tools/call request to the MCP server, and the server returns only the scoped result.
In regulated environments, that predictability is more important than convenience. You need to know what was requested, by whom, and under which service account.
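To make the wire format concrete, here is a sketch of a JSON-RPC 2.0 envelope for an MCP `tools/call` request; the tool name and arguments are hypothetical examples, not part of any specific server:

```python
import json

# Illustrative JSON-RPC 2.0 envelope for an MCP tools/call request.
# "fetch_erp_record" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "fetch_erp_record",
        "arguments": {"model": "account.move", "record_id": 1017},
    },
}
wire = json.dumps(request)
```

Because every exchange carries an `id`, a `method`, and explicit `params`, each request can be logged, replayed, and attributed to a service account, which is exactly the audit property regulated environments need.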
Case Study: dlab.md XML-RPC Payload Construction
The following sanitized extract shows how our MCP client routes XML-RPC requests into Odoo 18 using a dedicated API token:
```python
import os
import xmlrpc.client
import ssl

# Zero-Trust Environment Params
ODOO_URL = os.environ.get("ODOO_URL", "https://dlab.md")
ODOO_DB = os.environ.get("ODOO_DB", "bitnami_odoo")
ODOO_USER = os.environ.get("ODOO_USER", "agent_publisher@dlab.md")
ODOO_API_KEY = os.environ.get("ODOO_API_KEY")  # Strict Token Requirement

def _get_odoo_models():
    """Authenticate via API Token and return the models proxy."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    common = xmlrpc.client.ServerProxy(f'{ODOO_URL}/xmlrpc/2/common', context=ctx)
    uid = common.authenticate(ODOO_DB, ODOO_USER, ODOO_API_KEY, {})
    if not uid:
        raise Exception(f"Failed to authenticate Agent ({ODOO_USER}) with generated API Token.")
    models = xmlrpc.client.ServerProxy(f'{ODOO_URL}/xmlrpc/2/object', context=ctx)
    return models, uid
```

The integration pattern is sound, but there is an important caveat here: disabling certificate verification with `ssl.CERT_NONE` is acceptable only in tightly controlled local lab scenarios. In production, especially where financial or PII data is involved, TLS verification should remain enabled and certificates should be pinned or validated through your internal trust chain. That aligns directly with GDPR Article 32, which requires appropriate technical and organizational measures for protecting personal data.
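A hardened variant of that SSL context might look like the following sketch; the `INTERNAL_CA_BUNDLE` environment variable is an assumption for illustration, standing in for however your organization distributes its internal trust chain:

```python
import os
import ssl

# Production-oriented sketch: keep TLS verification ON and, optionally,
# trust only an internal CA bundle instead of the system default store.
# INTERNAL_CA_BUNDLE is an assumed environment variable, not an Odoo setting.
ca_bundle = os.environ.get("INTERNAL_CA_BUNDLE")  # path to a PEM file, optional

ctx = ssl.create_default_context(cafile=ca_bundle)
ctx.check_hostname = True            # reject certificates for the wrong host
ctx.verify_mode = ssl.CERT_REQUIRED  # fail closed on untrusted chains

# common = xmlrpc.client.ServerProxy(f"{ODOO_URL}/xmlrpc/2/common", context=ctx)
```

With this context, a misconfigured or intercepted endpoint fails loudly at connection time instead of silently exposing traffic.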
If you want a broader implementation view beyond Odoo, see Connecting AI Agents to Internal CRM: An MCP Architecture Breakdown.
Technical Architecture: Securely Connecting Claude 3.5 to Internal Databases
Deploying Claude 3.5 Sonnet as an enterprise agent requires a clean handshake between the MCP client and server. In most production setups, the lifecycle looks like this:
- Protocol Initialization: The MCP client discovers and authenticates the MCP server.
- Capability Negotiation: The server advertises available resources, prompts, and tools such as `execute_sql_query` or `fetch_erp_record`.
- Transport Selection: Local deployments typically use `stdio` for low-latency execution. Distributed services often use HTTP-based transports such as SSE when asynchronous communication is required.
That transport choice is not cosmetic. It affects latency, observability, and your attack surface.
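One way to keep that decision explicit rather than implicit is a small selection helper. The sketch below is illustrative; the transport names mirror the options exposed by the MCP Python SDK's `FastMCP.run(transport=...)`, but the deployment profiles are hypothetical:

```python
# Sketch: map a deployment profile to an MCP transport name, so the choice
# is a reviewed configuration decision rather than a hard-coded default.

def select_transport(deployment: str) -> str:
    """Return the MCP transport appropriate for a deployment profile."""
    if deployment == "local-desktop":
        return "stdio"  # no network listener, lowest latency
    if deployment == "shared-service":
        return "sse"    # HTTP-based; requires auth boundaries and logging
    raise ValueError(f"unknown deployment profile: {deployment}")

transport = select_transport("local-desktop")
# later: mcp.run(transport=transport)
```

Forcing the transport through a named profile also gives security reviewers a single place to see which deployments expose a network listener.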
Execution Log: FastMCP Stdio Binding
Local AI agents often need low-latency transport with minimal moving parts. The following snippet shows a FastMCP server bound to `stdio` for Claude Desktop execution:
```python
from mcp.server.fastmcp import FastMCP

# Initialize the MCP Server with strict namespace
mcp = FastMCP("DLab Odoo Web Editor")

@mcp.tool()
def search_odoo_views(name: str) -> str:
    """
    Search for Odoo QWeb views by name (e.g. 'bridge', 'homepage').
    Returns a formatted list of IDs and Names to help you target the right page.
    """
    # ... [XML-RPC Logic] ...

if __name__ == "__main__":
    # Bind to stdio transport for local Agent execution
    mcp.run()
```

A topic-specific architectural note is worth adding here: for Claude Desktop or other local MCP clients, `stdio` is usually the safest starting point because it avoids exposing an HTTP listener on the network. Once you move to shared services or multi-agent orchestration, you need explicit authentication boundaries, request logging, and rollback procedures before switching to remote transports.
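For completeness, a local `stdio` server of this shape is typically registered in Claude Desktop's `claude_desktop_config.json` under `mcpServers`; the server name and command path below are hypothetical placeholders:

```json
{
  "mcpServers": {
    "dlab-odoo-editor": {
      "command": "python",
      "args": ["/opt/mcp/odoo_editor_server.py"],
      "env": { "ODOO_API_KEY": "…" }
    }
  }
}
```

Claude Desktop launches the listed command as a child process and speaks MCP over its stdin/stdout, which is precisely why no network port ever needs to be opened.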
This architecture is especially relevant when the agent interacts with systems subject to RO e-Factura, SAF-T, or internal accounting controls. In those cases, the real risk is not just model output quality. It is unauthorized data access, silent write operations, and poor auditability.
Security Controls That Actually Matter
The main concern for CISOs is straightforward: if Claude can reach internal systems through MCP, what stops it from seeing too much or doing too much?
A raw MCP-to-database connection without enterprise controls is a liability. In practice, the following controls matter far more than generic “AI security” language:
- Dedicated Agent Service Accounts: AI agents should authenticate only through single-purpose users such as `agent_publisher@dlab.md`, with revocable API tokens and narrow permissions.
- Machine-to-Machine Authentication: Where remote transport is used, the MCP server should validate signed machine credentials before any tool call is executed.
- Scoped Data Access: Queries should inherit the row-level and model-level restrictions of the service account. If the account cannot read payroll tables manually, the agent should not read them either.
- Read-Only Execution by Default: MCP servers should run in isolated containers with read-only database access unless a specific write workflow has been approved and logged.
- Rollback Protocols: If an agent is allowed to trigger business actions, every write path needs a rollback plan. This is non-negotiable for finance, inventory, and customer master data.
- Air-Gapping for High-Risk Flows: For highly sensitive PII or regulated financial datasets, keep the MCP execution path isolated from public-facing inference layers whenever possible.
For financial records or PII, treat every agent tool as if it were a privileged integration user. Start read-only, log everything, and require an explicit rollback path before enabling writes.
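The "read-only by default" control can be enforced in a few lines at the MCP server boundary. The sketch below is a default-deny guard under assumed names; the model list, method sets, and `authorize` helper are all illustrative, not part of MCP or Odoo:

```python
# Sketch of a default-deny guard for agent tool calls: only allow-listed
# (model, method) pairs execute, and writes are refused unless the pair
# appears in an explicitly approved write set. All names are illustrative.

READ_METHODS = {"search", "read", "search_read", "search_count"}
APPROVED_WRITES: set[tuple[str, str]] = set()  # empty set = read-only deployment
ALLOWED_MODELS = {"res.partner", "account.move"}

def authorize(model: str, method: str) -> bool:
    """Return True only for scoped read calls or pre-approved write paths."""
    if model not in ALLOWED_MODELS:
        return False
    if method in READ_METHODS:
        return True
    return (model, method) in APPROVED_WRITES

assert authorize("res.partner", "search_read")
assert not authorize("res.partner", "write")     # write path not approved
assert not authorize("hr.payslip", "read")       # model outside agent scope
```

Because the default answer is "no", enabling a write path becomes a deliberate, reviewable change to `APPROVED_WRITES` rather than an accident of a broad service account.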
This is also where many teams underestimate their exposure. A small helper script that bypasses access controls can create more legal and operational risk than the model itself. We covered that in more detail in Data Protection by Design: Why Your Backend Scripts Are a €20M Liability.
Compliance: MCP Does Not Remove Your Regulatory Obligations
MCP gives you a cleaner integration layer. It does not exempt you from compliance.
If Claude 3.5 is used to process personal data, GDPR Article 32 still applies. If the system influences business decisions or operates in a regulated workflow, the governance requirements under the EU AI Act may also become relevant depending on the use case and risk classification. And if the agent touches accounting or tax workflows, local obligations around SAF-T and RO e-Factura remain in force regardless of how elegant the MCP implementation looks.
This is why we usually advise clients to separate three concerns early:
- model access,
- business tool access,
- compliance logging.
When those are mixed together in one Python service, audits become painful and incident response becomes slower than it should be.
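Keeping compliance logging as its own layer can be as simple as a thin wrapper around each tool. The decorator and in-memory log below are an illustrative sketch; in production the sink would be an append-only audit store, and the stub business call is hypothetical:

```python
import time

# Sketch: an audit decorator that records who invoked which tool with what
# arguments, kept separate from both model access and business logic.
AUDIT_LOG: list[dict] = []

def audited(service_account: str):
    def wrap(tool):
        def inner(*args, **kwargs):
            AUDIT_LOG.append({
                "ts": time.time(),
                "actor": service_account,
                "tool": tool.__name__,
                "args": repr(args),
            })
            return tool(*args, **kwargs)
        return inner
    return wrap

@audited("agent_publisher@dlab.md")
def fetch_invoice_status(invoice_id: int) -> str:
    return "posted"  # stub standing in for the real business call

status = fetch_invoice_status(1017)
```

Because the audit entry is written before the tool runs, even a failed or aborted call leaves a trace, which is what incident response actually needs.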
For a deeper compliance-focused view, see EU AI Act Compliance 2026: A Technical Guide for Developers and Integrators.
Why Enterprise MCP Integration Requires Proven Expertise
The era of ad-hoc Python scripts for LLM integration is ending. Not because Python is the problem, but because unmanaged integration patterns do not survive real enterprise constraints.
Once Claude is connected to ERP, CRM, document stores, or internal SQL systems, the work stops being “AI experimentation” and becomes architecture. You need transport discipline, authentication boundaries, observability, failure handling, and a clear understanding of what the agent is allowed to do when upstream systems are slow, unavailable, or inconsistent.
A common example: a prototype works well against a staging database with 50,000 records, then fails against production because the first broad query returns millions of rows, XML-RPC calls start timing out, and the agent retries the same action without proper idempotency controls. That is not a model problem. It is an integration design problem.
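The retry half of that failure is fixable with an idempotency key. The sketch below uses an in-memory cache and hypothetical names purely for illustration; a production version would persist keys in a durable store shared across workers:

```python
# Sketch: an idempotency cache so a retried agent action does not execute
# the same write twice; the second call replays the stored result instead.

_completed: dict[str, object] = {}

def run_once(idempotency_key: str, action):
    """Execute `action` at most once per key; replay the cached result after."""
    if idempotency_key in _completed:
        return _completed[idempotency_key]
    result = action()
    _completed[idempotency_key] = result
    return result

calls = []
def post_invoice():
    calls.append(1)               # side effect we must not repeat
    return "posted"

first = run_once("invoice:1017:post", post_invoice)
retry = run_once("invoice:1017:post", post_invoice)  # retried call is a no-op
```

Deriving the key from the business intent (record, action, and a client-supplied token) rather than the raw request payload is what makes agent retries safe.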
If your organization is already modernizing ERP and backend processes, this transition should be planned as part of the wider systems roadmap, not as an isolated AI side project. Our article on Migrating from Legacy Systems (1C, SAP) to Odoo 19: Risk Assessment and Roadmap is useful here because the same migration discipline applies to MCP-enabled AI layers as well.
This article discusses secure MCP patterns for connecting Claude 3.5 to internal business systems. It is not a substitute for a formal security review, DPIA, or legal assessment under GDPR, the EU AI Act, or local fiscal reporting rules before exposing ERP, CRM, or database tools to an AI agent.