Model Context Protocol: The Unifying Standard for Enterprise AI Integration: Part 2
The Model Context Protocol (MCP) offers a new standard for integrating AI systems such as LLMs with enterprise tools and data, moving beyond rigid API integrations to dynamic, context-aware tool invocation. While MCP's contextual intelligence and uniform tool interface streamline AI agent development and data access, the protocol also introduces significant new security challenges. Traditional security measures often fall short because MCP blends application logic with data and operates outside conventional HTTP boundaries. Consequently, addressing threats such as identity spoofing, prompt injection, sandbox escapes, and shadow attacks requires specific mitigations, including cryptographic verification, strict input validation, and robust architectural principles, underscoring the need for security policies tailored to MCP.
Date: June 3, 2025
Topic: Agentic AI

Note: This article will focus on the security considerations of MCP. Refer to part 1 here for an introduction to the Model Context Protocol (MCP).

MCP transforms how Large Language Models (LLMs) interact with tools and data, offering plug-and-play connectivity akin to USB in modern computing. This represents a significant evolution from rigid, brittle, hard-coded API integrations to dynamic, context-aware tool invocation.

Based on deep context, AI agents using MCP can decide which tools to use, in what order, and how to chain them together to accomplish a task.

MCP’s transformative capabilities stem from three key attributes:

  • Contextual Intelligence
  • Uniform Tool Interface
  • Dynamic Tool Invocation

Contextual Intelligence

Unlike traditional API-based integrations or robotic process automations, MCP empowers AI models to maintain rich context across interactions. An AI agent can leverage context from a user’s request, query diverse tools or databases, and synthesize results into a cohesive response. This capability dismantles data silos. For instance, an AI assistant can seamlessly integrate customer data from a CRM, transaction history from a database, and real-time analytics from a BI tool, all within a single session.

MCP empowers AI models to maintain rich context across interactions.

Uniform Tool Interface

MCP provides a standardized method for AI models to interact with applications, APIs, and external services. Much like the Language Server Protocol streamlined IDE integrations, MCP delivers a universal framework enabling tools, from SaaS APIs to local scripts, to expose their functionalities to AI. This streamlines development and integration efforts, freeing developers from building custom connector code for every AI-tool combination. As a result, enterprise AI teams can dedicate their resources to designing advanced agent logic instead of managing low-level infrastructure.
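
To make the uniform interface concrete, here is a minimal, hedged sketch of a server exposing one internal function as an MCP tool using the FastMCP helper from the official Python SDK. The get_customer function and its return data are hypothetical placeholders for a real CRM lookup.

```python
# Minimal sketch: exposing an internal function as an MCP tool via FastMCP
# (from the official "mcp" Python SDK). The CRM lookup itself is a placeholder.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-tools")

@mcp.tool()
def get_customer(customer_id: str) -> dict:
    """Return basic profile data for a customer by ID."""
    # A real server would query the CRM here; this stub returns canned data.
    return {"id": customer_id, "name": "Ada Lovelace", "tier": "enterprise"}

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

Any tool defined this way is described to clients through the same protocol, so an AI agent needs no connector code specific to this server.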

Dynamic Tool Invocation

MCP enables AI agents to autonomously discover and select tools relevant to the current task. This capability eliminates reliance on pre-defined APIs or fixed function calls. For example, if a new internal tool becomes available, the AI can query its capabilities via MCP and immediately integrate it without requiring code changes. This adaptability proves crucial in enterprise environments where requirements evolve frequently.
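
As a rough sketch of this discovery-and-invocation pattern, the client below connects to an MCP server, lists whatever tools it currently exposes, and invokes one by name. It uses the official Python SDK; the server command, tool name, and arguments are assumptions for illustration.

```python
# Sketch: an MCP client discovering tools at runtime and invoking one by name.
# The server command ("python crm_server.py") and the tool name are assumptions.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["crm_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover whatever the server currently exposes -- no hard-coded API surface.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Invoke a discovered tool with structured arguments.
            result = await session.call_tool("get_customer", {"customer_id": "42"})
            print(result.content)

asyncio.run(main())
```

If the server adds a new tool tomorrow, the same list_tools call surfaces it without any client-side code change.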

The evolving security landscape

MCP combines application logic and data in a way that legacy security measures cannot adequately address. Agentic applications using MCP introduce significant new attack vectors and increase the potential attack surface.

When a user query arrives at the agentic AI application, the application determines whether an external operation is needed, such as fetching customer data. It then queries one or more MCP servers to identify suitable tools. Based on context, the AI selects a tool, and the client invokes it on the respective server, which executes the action. A transport such as HTTP facilitates these interactions by managing requests, responses, and real-time notifications.
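
The loop below sketches that flow, continuing the client example from earlier. The choose_tool_with_llm helper is a hypothetical stub standing in for the model's selection step, not a real API.

```python
# Illustrative agent loop: discover tools, let the model pick one based on
# context, then invoke it via the MCP client session. choose_tool_with_llm
# is a hypothetical stub for the model's decision, not part of any SDK.

def choose_tool_with_llm(user_query: str, tools) -> tuple[str, dict]:
    """Stub: a real implementation would send the query plus each tool's
    name and description to the LLM and parse its tool-call decision."""
    return "get_customer", {"customer_id": "42"}

async def handle_query(session, user_query: str):
    # 1. Discover what the connected MCP server currently exposes.
    tools = (await session.list_tools()).tools
    # 2. Let the model select a tool and arguments from the query and context.
    tool_name, arguments = choose_tool_with_llm(user_query, tools)
    # 3. Invoke the selected tool; the server executes the action.
    result = await session.call_tool(tool_name, arguments)
    return result.content
```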

While there are credible mitigations available to secure MCP, careful consideration is needed when architecting MCP-based integrations in agentic applications.

Purpose-built firewalls

Several new LLM-focused firewalls can filter LLM input and output in real time, much like a packet-filtering firewall. They help detect attempts to inject maliciously formed content into the LLM, and they scan the model's output for unauthorized content, much as content filters do. This approach is only moderately effective, since clever token alterations are known to bypass these scanners.
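
As a simplified illustration of the input-side check such a firewall performs, the snippet below screens prompts against a few known injection phrasings. Real products use trained classifiers and policy engines; these regular expressions are illustrative only.

```python
# Simplified sketch of an input-side LLM firewall check.
# The patterns are illustrative; production firewalls use far richer detection.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (the )?system prompt",
    r"disregard your guidelines",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe enough to forward to the model."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

# As noted above, pattern-based screening is only moderately effective:
# token-level obfuscation can slip past checks like these.
```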

API Gateways and Web Application Firewalls

Since most MCP interactions occur outside HTTP protocol boundaries, API gateways and WAFs have fewer opportunities to enforce security filters. A web application firewall (WAF) protects web applications from a variety of application-layer attacks such as cross-site scripting (XSS), SQL injection, and cookie poisoning, among others. Current API gateways look for structured API call paths and payloads, while MCP payloads may carry compressed natural-language context, bypassing these filters. WAFs rely largely on regular expressions and may also miss malformed MCP payloads. Furthermore, tool calling with MCP can occur at the edge, bypassing traditional API gateways and WAFs entirely.

MCP clients and servers making tool selection across security infrastructure.

As an MCP integration moves through its lifecycle of creation, operation, and update, several attack vectors are already active threats today.

  1. Identity spoofing: Today there is no standard way to publish and identify MCP services; names and descriptions largely drive identification and invocation. For example, “mcp-slack” is used to invoke the Slack MCP server. This allows attackers to spoof names through domain typosquatting or look-alike subdomains, enabling impersonation of legitimate services. This can be mitigated by cryptographically verifying endpoints and payloads using authentication and authorization technologies such as OAuth, allowlisting approved servers, and enforcing strict naming conventions within your enterprise.
  2. LLM or prompt injection: By poisoning tool names and descriptions, an attacker can make the MCP client choose an insecure tool. This is the most common form of attack today. To counter it, connect only to servers you trust, implement context isolation, and encrypt payloads. Agentic engineers and security teams should enforce strict input validation and sanitize commands at tool endpoints. Every point where external input enters the system is an opportunity for injection, so validate all parameters passed to MCP tools rigorously (see the sketch below).
  3. Sandbox attacks: Sandboxes isolate runtimes within a managed security context. Cleverly constructed tool calls can enable sandbox escapes through side-channel attacks or by exploiting unpatched containers and virtual machines. Several recent incidents used this attack vector to break isolation and expose entire applications and networks. Common mitigations include configuring fine-grained permissions following least-privilege principles, promptly patching containers, and using robust automated sandbox testing.
  4. Shadow attacks: When the same agent runs multiple services, one service can impersonate the tools of another. These cross-over shadow attacks are hard to isolate and defend against. They can be mitigated through sound architectural principles, such as separate context namespaces per service and real-time observability of tool-definition changes. Use API keys or tokens for tools that access sensitive systems, and ensure the AI agent cannot use those keys beyond their intended scope.
  5. Lack of centralized management: Because MCP servers are inherently decentralized, there is no centralized management, trust, or governance. Until these evolve (for example, into an App Store-like trusted publisher or an enterprise directory service), MCP is managed through an array of names, descriptions, and configuration files. In such a setting, an MCP service may change from one update to the next without the knowledge of the application or the user, introducing unknown attack vectors. As mitigation, enterprises should maintain strict governance of approved endpoints, services, versions, and configuration management toolchains. Who can connect, and what they can do, should be tightly managed with existing authentication and authorization mechanisms. Avoid automatically trusting every update.
Attack vectors across context creation and final output lifecycle.
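
As a concrete illustration of the mitigations in items 1, 2, and 5 above, the sketch below checks that a server name and version are on an enterprise allowlist and rigorously validates the arguments destined for a tool endpoint. The server names, versions, and the get_customer parameter schema are hypothetical examples, not part of MCP itself.

```python
# Sketch: allowlisting approved MCP servers and validating tool arguments.
# All names, versions, and field formats below are illustrative assumptions.
import re

APPROVED_SERVERS = {
    # server name -> version pinned and approved by the enterprise
    "mcp-slack": "1.4.2",
    "mcp-crm": "0.9.1",
}

def server_is_approved(name: str, version: str) -> bool:
    """Reject unknown servers and silent version drift between updates."""
    return APPROVED_SERVERS.get(name) == version

CUSTOMER_ID_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,32}$")

def validate_get_customer_args(args: dict) -> dict:
    """Keep only expected fields and constrain their formats before the
    arguments ever reach the tool endpoint."""
    customer_id = str(args.get("customer_id", ""))
    if not CUSTOMER_ID_PATTERN.fullmatch(customer_id):
        raise ValueError("customer_id failed validation")
    return {"customer_id": customer_id}
```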

Securing the Next Generation of Enterprise AI

MCP drives a new generation of AI-enhanced enterprise systems, allowing AI agents to effortlessly leverage necessary tools and data to tackle intricate problems. Yet, this unprecedented flexibility and power also create new security challenges we cannot overlook. Traditional security solutions fall short; we must evolve our defenses to specifically address MCP’s unique approach to context-sharing and tool invocation. Encouragingly, the community is already prioritizing these efforts. Initiatives focus on standardizing secure installation, logging, and packaging for MCP, and enterprises are increasingly understanding the need for MCP-specific security policies.

If you are venturing into building your Agentic AI stack or applications and are looking at expertise to ensure its security, let’s talk!