Introduction

Welcome back, intrepid AI architects! In our previous chapters, we’ve explored the foundational concepts of the Model Context Protocol (MCP), from its purpose as a universal language for AI tool interaction to the intricate details of defining and registering tools using robust JSON Schemas. You’ve learned how tools declare their capabilities, making them discoverable by AI agents.

But how does an AI agent actually use a tool once it’s discovered? How does a request travel from the agent, through the MCP system, to the correct tool, and then return a meaningful response? That’s precisely what we’ll unravel in this chapter: the fascinating world of Execution Pipelines and Request Routing within MCP.

Understanding these mechanisms is crucial for building robust, scalable, and reliable AI agent systems. It ensures that tool calls are processed efficiently, securely, and correctly, forming the backbone of effective agent-tool collaboration. By the end of this chapter, you’ll have a clear picture of the journey a tool request takes and how you, as a developer, can leverage the anticipated TypeScript SDK to orchestrate these interactions.

Before we dive in, make sure you’re comfortable with the core MCP concepts covered so far, especially tool schema definition and registration. Let’s get started!

Core Concepts: The Journey of a Tool Call

When an AI agent decides to use a tool, it’s not just a simple function call. Behind the scenes, a carefully orchestrated sequence of steps, known as an Execution Pipeline, springs into action. This pipeline ensures that the request is understood, authorized, validated, and ultimately executed by the correct tool. Coupled with this is Request Routing, which is responsible for guiding that request to the appropriate MCP server and tool implementation.

Let’s break down these core concepts.

What is an Execution Pipeline?

Imagine a sophisticated factory assembly line, but instead of building cars, it’s processing requests to use AI tools. An MCP execution pipeline is a conceptual sequence of stages that an MCP server follows to fulfill an AI agent’s request to invoke a registered tool. Each stage performs a specific check or action, ensuring the integrity and successful execution of the tool call.

Here are the typical stages of an MCP execution pipeline:

  1. Request Parsing: The MCP server first receives the agent’s request, usually a structured message (like JSON). This stage involves parsing the message to extract the intended toolId and the arguments for the tool.
  2. Tool Discovery and Selection: Using the extracted toolId, the server looks up the corresponding tool definition from its registry. This confirms the tool exists and retrieves its schema and other metadata.
  3. Permission and Authorization Check: This is a critical security step. The server verifies if the requesting AI agent (or the user on whose behalf the agent is acting) has the necessary permissions to invoke this specific tool. We’ll delve deeper into permissions in the next chapter, but for now, know it’s a vital gate.
  4. Input Validation: Before executing the tool, the server validates the provided arguments against the tool’s defined input schema. This prevents malformed data from reaching the tool’s backend, ensuring data integrity and preventing errors.
  5. Tool Invocation: If all previous stages pass, the MCP server invokes the actual backend implementation of the tool. This could be a call to a REST API, a microservice, a database function, or any other external system.
  6. Response Handling: Once the tool’s backend completes its task, it returns a response to the MCP server. This stage involves receiving and potentially transforming that raw response.
  7. Output Formatting: Finally, the MCP server formats the tool’s response according to the tool’s defined output schema (if applicable) and prepares it for the AI agent, often as a structured JSON object.
  8. Error Management: At any stage, if an error occurs (e.g., tool not found, permission denied, invalid input, tool backend failure), the pipeline should gracefully handle it, log the issue, and return an informative error message to the AI agent.

This structured flow ensures consistency, security, and reliability for every tool interaction.
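The stages above can be sketched as a sequence of async steps. This is a conceptual illustration only — the type and function names below are invented for the example and are not part of any official MCP SDK:

```typescript
// Hypothetical sketch of an MCP-style execution pipeline.
// All names here are illustrative, not an official API.
type ToolRequest = { toolId: string; args: Record<string, unknown> };
type ToolResponse = { ok: boolean; result?: unknown; error?: string };

// Each stage (parse, discover, authorize, validate) throws on failure.
type Stage = (req: ToolRequest) => Promise<void>;

async function runPipeline(
    req: ToolRequest,
    stages: Stage[],
    invoke: (req: ToolRequest) => Promise<unknown> // the tool backend (stage 5)
): Promise<ToolResponse> {
    try {
        for (const stage of stages) {
            await stage(req); // stages 1-4 run in order; any failure aborts
        }
        const result = await invoke(req); // stages 5-7: invoke and format
        return { ok: true, result };
    } catch (err) {
        // Stage 8: error management — return a structured error to the agent
        // instead of letting the failure propagate unhandled.
        return { ok: false, error: err instanceof Error ? err.message : String(err) };
    }
}
```

Notice how stage 8 is not a separate step in the list but a property of the whole pipeline: any stage can short-circuit into the error path.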

What is Request Routing?

While the execution pipeline handles what happens when a tool is called, Request Routing dictates where that call goes. In complex AI systems, an AI agent might not interact with a single, monolithic MCP server. Tools might be distributed across different services, servers, or even organizations. Routing is the mechanism that directs the agent’s tool invocation request to the correct MCP server instance or tool backend.

Consider these routing scenarios:

  • Direct Routing: In a simple setup, an AI agent might directly communicate with a single MCP server that hosts all registered tools. The routing is straightforward: the server simply looks up the tool locally.
  • Federated Routing: For larger systems, multiple MCP servers might exist, each managing a set of tools or a specific domain. An agent might send a request to a “router” MCP server, which then forwards it to the appropriate specialized MCP server based on the toolId or a namespace within it.
  • Load-Balanced Routing: If a particular tool is highly utilized, its backend might be deployed across multiple instances. Routing can involve load balancers to distribute requests efficiently among these instances, ensuring high availability and performance.

The toolId itself often plays a crucial role in routing. It can contain namespaces or identifiers that hint at which server or service is responsible for that tool.
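For illustration, a federated router might map the reverse-DNS prefix of a toolId to a server endpoint. This routing scheme and the endpoints below are assumptions for the sketch, not something defined by the MCP specification:

```typescript
// Hypothetical namespace-based routing table. The toolId convention
// ("com.example.food.orderBurger") and the endpoints are illustrative.
const routes: Record<string, string> = {
    'com.example.food': 'https://food-tools.example.com/mcp',
    'com.example.travel': 'https://travel-tools.example.com/mcp',
};

function resolveServer(toolId: string): string {
    // Strip the final segment (the tool name) to get the namespace:
    // "com.example.food.orderBurger" -> "com.example.food"
    const namespace = toolId.split('.').slice(0, -1).join('.');
    const endpoint = routes[namespace];
    if (!endpoint) {
        throw new Error(`No MCP server registered for namespace '${namespace}'.`);
    }
    return endpoint;
}
```

A real router would likely also handle versioning, fallbacks, and health checks, but the core idea — the toolId carries enough structure to pick a destination — stays the same.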

The Agent’s Perspective

From the AI agent’s point of view, the execution pipeline and routing are largely abstracted away. The agent’s primary responsibility is to:

  1. Determine Intent: Decide which tool is needed based on the user’s request.
  2. Formulate Request: Construct a valid tool invocation request, specifying the toolId and the necessary arguments that conform to the tool’s input schema.
  3. Process Response: Receive the structured response from the MCP server and integrate it into its reasoning or generate a user-facing output.

The beauty of MCP is that it provides a standardized interface, allowing agents to interact with a vast ecosystem of tools without needing to understand the underlying complexities of each tool’s implementation or where it lives.

Visualizing the Pipeline

Let’s visualize the execution pipeline with a simple Mermaid flowchart. This diagram illustrates the flow from an AI agent’s request through the MCP server’s processing stages.

flowchart TD
    Agent_Request[AI Agent Request: Invoke Tool X] --> MCP_Server["MCP Server"]
    subgraph Execution_Pipeline["MCP Execution Pipeline"]
        MCP_Server --> Parse_Request[1. Parse Request]
        Parse_Request --> Discover_Tool[2. Discover Tool]
        Discover_Tool --> Check_Permissions[3. Check Permissions]
        Check_Permissions --> Validate_Input[4. Validate Input]
        Validate_Input --> Invoke_Tool[5. Invoke Tool Backend]
        Invoke_Tool --> Handle_Response[6. Handle Tool Response]
        Handle_Response --> Format_Output[7. Format Output for Agent]
    end
    Format_Output --> Agent_Receives[AI Agent Receives Response]
    Check_Permissions --x Denied[Error: Permission Denied]
    Validate_Input --x Invalid[Error: Invalid Input]
    Invoke_Tool --x Tool_Error[Error: Tool Execution Failed]

This diagram clearly shows the sequential steps, including potential error points, highlighting the robustness of the MCP approach.

Step-by-Step Implementation: Invoking a Tool from an Agent

Now that we understand the theory, let’s look at how an AI agent, using the anticipated TypeScript SDK (v2, Q1 2026), would initiate a tool call. We won’t be building a full MCP server here, but we’ll simulate the agent’s side of the interaction, assuming an MCP server is available and our “Burger Ordering” tool from a previous chapter is registered.

Let’s recall our orderBurger tool definition:

{
  "toolId": "com.example.food.orderBurger",
  "name": "Order Burger",
  "description": "Orders a burger with specified ingredients.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "burgerType": {
        "type": "string",
        "description": "Type of burger, e.g., 'cheeseburger', 'veggie burger'",
        "enum": ["cheeseburger", "veggie burger", "chicken sandwich"]
      },
      "quantity": {
        "type": "integer",
        "description": "Number of burgers to order",
        "minimum": 1
      },
      "extras": {
        "type": "array",
        "items": {
          "type": "string",
          "enum": ["fries", "drink", "extra cheese"]
        },
        "description": "Optional extras like fries or a drink"
      }
    },
    "required": ["burgerType", "quantity"]
  },
  "outputSchema": {
    "type": "object",
    "properties": {
      "orderId": {
        "type": "string",
        "description": "Unique identifier for the placed order"
      },
      "estimatedDeliveryTime": {
        "type": "string",
        "format": "date-time",
        "description": "Estimated time of delivery"
      },
      "totalPrice": {
        "type": "number",
        "description": "Total cost of the order"
      }
    },
    "required": ["orderId", "estimatedDeliveryTime", "totalPrice"]
  }
}
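When working in TypeScript, it helps to mirror this schema as interfaces so the compiler catches argument mistakes before the server's validation stage does. The types below are hand-written to match the JSON Schema above (in practice you might generate them with a tool such as json-schema-to-typescript):

```typescript
// Hand-written TypeScript mirrors of the orderBurger schemas above.
interface OrderBurgerInput {
    burgerType: 'cheeseburger' | 'veggie burger' | 'chicken sandwich';
    quantity: number; // JSON Schema says integer, minimum 1
    extras?: Array<'fries' | 'drink' | 'extra cheese'>;
}

interface OrderBurgerOutput {
    orderId: string;
    estimatedDeliveryTime: string; // ISO 8601 date-time string
    totalPrice: number;
}
```

Note that TypeScript types are erased at compile time, so they complement — but do not replace — the runtime input validation stage of the pipeline.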

Now, let’s write some TypeScript code for an agent to invoke this tool.

Step 1: Set up your project

First, ensure you have a Node.js project. If not, create one:

mkdir mcp-agent-invoke
cd mcp-agent-invoke
npm init -y
npm install typescript @modelcontextprotocol/typescript-sdk@^2.0.0 # Anticipated v2 release
npm install -D ts-node # For running TypeScript directly

The @modelcontextprotocol/typescript-sdk@^2.0.0 is an anticipated stable release for Q1 2026. We’re using it here to represent the modern way an agent would interact.

Create a file named agent.ts.

Step 2: Simulate MCP Server Interaction

Since we’re not running a full MCP server, we’ll simulate the interaction using a placeholder McpClient class from the SDK. In a real scenario, this client would communicate with a live MCP server.

Add the following code to agent.ts:

// agent.ts

import { McpClient, McpToolInvocation } from '@modelcontextprotocol/typescript-sdk';

// This is a placeholder for a real MCP server client.
// In a production environment, this client would connect to a live MCP server endpoint.
class MockMcpClient implements McpClient {
    async invokeTool<TInput extends Record<string, any>, TOutput extends Record<string, any>>(
        invocation: McpToolInvocation<TInput>
    ): Promise<TOutput> {
        console.log(`\n--- Agent attempting to invoke tool: ${invocation.toolId} ---`);
        console.log('Arguments:', JSON.stringify(invocation.args, null, 2));

        // Simulate the MCP server's execution pipeline and routing
        // based on the toolId.
        if (invocation.toolId === 'com.example.food.orderBurger') {
            const { burgerType, quantity, extras } = invocation.args;

            // Mimic the "Input Validation" stage: check required fields,
            // the burgerType enum, and the quantity minimum from the schema.
            const validBurgerTypes = ['cheeseburger', 'veggie burger', 'chicken sandwich'];
            if (!burgerType || !quantity) {
                throw new Error("Missing required arguments for orderBurger: burgerType and quantity.");
            }
            if (!validBurgerTypes.includes(burgerType)) {
                throw new Error(`Invalid burgerType '${burgerType}'. Must be one of: ${validBurgerTypes.join(', ')}.`);
            }
            if (typeof quantity !== 'number' || quantity < 1) {
                throw new Error("Quantity must be a positive number.");
            }

            console.log(`Simulating order for ${quantity}x ${burgerType} with extras: ${extras?.join(', ')}`);

            // Simulate a successful response
            const orderId = `BURGER-${Date.now()}`;
            const estimatedDeliveryTime = new Date(Date.now() + 30 * 60 * 1000).toISOString(); // 30 mins from now
            const totalPrice = (burgerType === 'cheeseburger' ? 10.99 : 9.99) * quantity + (extras?.length || 0) * 2.50;

            console.log(`Tool 'orderBurger' executed successfully.`);
            return {
                orderId,
                estimatedDeliveryTime,
                totalPrice: parseFloat(totalPrice.toFixed(2))
            } as TOutput;
        } else if (invocation.toolId === 'com.example.food.checkOrderStatus') {
            const { orderId } = invocation.args;
            if (!orderId) {
                throw new Error("Missing required argument for checkOrderStatus: orderId.");
            }
            console.log(`Simulating status check for order: ${orderId}`);
            // Simulate a response for a different tool
            return {
                orderId,
                status: "preparing",
                estimatedCompletion: new Date(Date.now() + 10 * 60 * 1000).toISOString()
            } as TOutput;
        }

        throw new Error(`Tool '${invocation.toolId}' not found or not supported by this mock client.`);
    }
}

// Instantiate our mock client
const mcpClient = new MockMcpClient();

Explanation:

  • We import McpClient and McpToolInvocation from the anticipated TypeScript SDK.
  • MockMcpClient is a stand-in for a real MCP client. Its invokeTool method simulates the entire MCP server’s pipeline conceptually:
    • It logs the invocation attempt.
    • It performs basic argument validation (mimicking the “Input Validation” stage).
    • It “executes” the tool by generating a mock response.
    • It handles a second tool, checkOrderStatus, to demonstrate routing to different functionalities.
  • This setup allows us to focus on how the agent constructs and handles tool calls without needing a live MCP server.

Step 3: Implement the AI Agent’s Logic

Now, let’s add the agent’s logic to agent.ts. This agent will decide to call orderBurger based on some predefined input.

Append the following to agent.ts:

// ... (previous code for MockMcpClient) ...

// Simple agent function to decide and invoke a tool
async function runAgent() {
    console.log("AI Agent starting...");

    // Scenario 1: Order a cheeseburger
    try {
        console.log("\n--- Agent's Decision: User wants to order a cheeseburger ---");
        const orderBurgerInvocation: McpToolInvocation<{ burgerType: string; quantity: number; extras?: string[] }> = {
            toolId: 'com.example.food.orderBurger',
            args: {
                burgerType: 'cheeseburger',
                quantity: 2,
                extras: ['fries']
            }
        };

        const orderResult = await mcpClient.invokeTool(orderBurgerInvocation);
        console.log("\nAgent received order confirmation:");
        console.log(JSON.stringify(orderResult, null, 2));
    } catch (error: any) {
        console.error("\nAgent encountered an error during burger order:", error.message);
    }

    // Scenario 2: Attempt to order an invalid burger type (input validation test)
    try {
        console.log("\n--- Agent's Decision: User tries to order a 'pizza' (invalid burger type) ---");
        const invalidOrderInvocation: McpToolInvocation<{ burgerType: string; quantity: number }> = {
            toolId: 'com.example.food.orderBurger',
            args: {
                burgerType: 'pizza', // This should fail validation
                quantity: 1
            }
        };

        const invalidOrderResult = await mcpClient.invokeTool(invalidOrderInvocation);
        console.log("\nAgent received invalid order confirmation (should not happen):");
        console.log(JSON.stringify(invalidOrderResult, null, 2));
    } catch (error: any) {
        console.error("\nAgent correctly caught error for invalid burger type:", error.message);
    }

    // Scenario 3: Call a different tool: checkOrderStatus
    try {
        console.log("\n--- Agent's Decision: User wants to check status of order 'BURGER-12345' ---");
        const checkStatusInvocation: McpToolInvocation<{ orderId: string }> = {
            toolId: 'com.example.food.checkOrderStatus',
            args: {
                orderId: 'BURGER-12345'
            }
        };

        const statusResult = await mcpClient.invokeTool(checkStatusInvocation);
        console.log("\nAgent received order status:");
        console.log(JSON.stringify(statusResult, null, 2));
    } catch (error: any) {
        console.error("\nAgent encountered an error during status check:", error.message);
    }

    console.log("\nAI Agent finished.");
}

// Run the agent
runAgent();

Explanation:

  • The runAgent function simulates an AI agent’s decision-making process.
  • It constructs an McpToolInvocation object, which specifies the toolId and the args (arguments) for the tool. Notice how args directly maps to the inputSchema of our orderBurger tool.
  • The agent then calls mcpClient.invokeTool(), passing the invocation object. This is the crucial step where the agent hands off the request to the MCP system.
  • It handles potential errors using try...catch blocks, demonstrating how an agent would react to failures in the execution pipeline (e.g., input validation errors).
  • Scenario 3 explicitly shows how the agent can seamlessly switch to invoking a different tool (checkOrderStatus) by simply changing the toolId and providing the appropriate arguments. This highlights the power of routing within MCP – the agent just specifies the toolId, and the system handles directing the request.

Step 4: Run the Agent

Execute your agent.ts file using ts-node:

npx ts-node agent.ts

You should see output similar to this, demonstrating the simulated tool invocations and error handling:

AI Agent starting...

--- Agent's Decision: User wants to order a cheeseburger ---
--- Agent attempting to invoke tool: com.example.food.orderBurger ---
Arguments: {
  "burgerType": "cheeseburger",
  "quantity": 2,
  "extras": [
    "fries"
  ]
}
Simulating order for 2x cheeseburger with extras: fries
Tool 'orderBurger' executed successfully.

Agent received order confirmation:
{
  "orderId": "BURGER-1710979200000",
  "estimatedDeliveryTime": "2026-03-20T17:00:00.000Z",
  "totalPrice": 26.98
}

--- Agent's Decision: User tries to order a 'pizza' (invalid burger type) ---
--- Agent attempting to invoke tool: com.example.food.orderBurger ---
Arguments: {
  "burgerType": "pizza",
  "quantity": 1
}
Agent correctly caught error for invalid burger type: Invalid burgerType 'pizza'. Must be one of: cheeseburger, veggie burger, chicken sandwich.

--- Agent's Decision: User wants to check status of order 'BURGER-12345' ---
--- Agent attempting to invoke tool: com.example.food.checkOrderStatus ---
Arguments: {
  "orderId": "BURGER-12345"
}
Simulating status check for order: BURGER-12345

Agent received order status:
{
  "orderId": "BURGER-12345",
  "status": "preparing",
  "estimatedCompletion": "2026-03-20T16:40:00.000Z"
}

AI Agent finished.

This output clearly shows the simulated execution pipeline stages and how the agent gracefully handles both successful tool invocations and validation failures. The ability to invoke different tools by simply changing the toolId demonstrates the routing capabilities of MCP.

Mini-Challenge: Extend Your Agent with a New Tool

Let’s make this more interactive!

Challenge: Add a new tool to our MockMcpClient called com.example.food.cancelOrder. This tool should take an orderId as input and return a confirmationMessage and a status (e.g., “cancelled”). Then, modify your runAgent function to simulate a scenario where the agent decides to cancel an order.

Hint:

  1. Add another else if block inside the invokeTool method of MockMcpClient to handle the cancelOrder toolId.
  2. Implement basic validation for the orderId (e.g., check if it’s a string).
  3. Simulate a response for cancellation.
  4. Add a new try...catch block in runAgent to construct and invoke the cancelOrder tool.

What to Observe/Learn: You’ll observe how easily the MCP framework allows you to extend agent capabilities by adding new tool implementations and how the agent can dynamically route requests to these new tools without significant changes to its core logic. This reinforces the modularity and extensibility that MCP provides.

Common Pitfalls & Troubleshooting

Even with a well-designed protocol like MCP, things can sometimes go awry. Understanding common pitfalls can save you hours of debugging.

  1. Incorrect Tool ID or Arguments:

    • Pitfall: The AI agent attempts to invoke a toolId that isn’t registered or passes arguments that don’t conform to the tool’s inputSchema.
    • Symptom: The MCP server returns an error like “Tool not found” or “Invalid input payload.”
    • Troubleshooting:
      • Verify toolId: Double-check the exact toolId string used by the agent against the registered tool definitions. Typos are common!
      • Review inputSchema: Compare the arguments the agent is sending with the tool’s inputSchema. Ensure data types, required fields, and enum values are all correctly matched. Use a JSON Schema validator if necessary.
      • Check logs: The MCP server’s logs (or our MockMcpClient’s console output) will usually provide specific details about the validation failure.
  2. Routing Failures:

    • Pitfall: The MCP server cannot reach the actual backend implementation of the tool, or the agent cannot reach the MCP server itself.
    • Symptom: Network errors, timeouts, or “Service unavailable” messages.
    • Troubleshooting:
      • Network Connectivity: Ensure the agent can reach the MCP server, and the MCP server can reach the tool’s backend. Check firewalls, proxy settings, and DNS resolution.
      • Server Status: Verify that both the MCP server and the tool’s backend service are running and healthy.
    • Configuration: Check how the MCP server is configured to locate and invoke tool backends. This might involve environment variables, configuration files, or service discovery mechanisms.
  3. Permission Denied:

    • Pitfall: The AI agent (or the user it represents) is not authorized to use a specific tool, even if the tool exists and the arguments are valid.
    • Symptom: The MCP server returns an “Unauthorized” or “Permission Denied” error.
    • Troubleshooting:
      • Review Permissions: Consult the MCP server’s access control configuration. Verify that the agent’s identity (or its assigned roles/scopes) has been granted explicit permission to invoke the target toolId.
      • Authentication: Ensure the agent is correctly authenticating with the MCP server, if required. An unauthenticated agent might default to having no permissions.

These common issues often stem from misconfigurations or mismatches between agent expectations and tool definitions or server policies. A systematic approach to checking configurations and logs is usually the fastest way to resolve them.
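One systematic check for the input-validation pitfall is to pre-validate arguments on the agent side before sending them, so mismatches surface immediately with a clear message. The helper below is a hand-rolled sketch covering only the `required` and `enum` keywords (a full JSON Schema validator such as Ajv would cover the rest):

```typescript
// Minimal pre-flight check of tool arguments against a partial JSON Schema.
// Only 'required' and 'enum' are handled — a sketch, not a full validator.
interface PartialSchema {
    required?: string[];
    properties?: Record<string, { enum?: unknown[] }>;
}

function preflightErrors(schema: PartialSchema, args: Record<string, unknown>): string[] {
    const errors: string[] = [];
    for (const field of schema.required ?? []) {
        if (args[field] === undefined) {
            errors.push(`Missing required field '${field}'.`);
        }
    }
    for (const [field, prop] of Object.entries(schema.properties ?? {})) {
        if (args[field] !== undefined && prop.enum && !prop.enum.includes(args[field])) {
            errors.push(`Field '${field}' must be one of: ${prop.enum.join(', ')}.`);
        }
    }
    return errors;
}
```

Running this against the orderBurger schema would flag a `burgerType` of `'pizza'` before the request ever leaves the agent, turning a round-trip server error into an instant local diagnostic.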

Summary

Phew! We’ve covered a lot of ground in this chapter, delving into the critical operational aspects of the Model Context Protocol.

Here’s a quick recap of our key takeaways:

  • Execution Pipelines are the structured sequence of steps an MCP server takes to process an AI agent’s tool invocation request, ensuring proper parsing, discovery, authorization, validation, invocation, and response handling.
  • Request Routing is the mechanism that directs tool invocation requests to the correct MCP server or tool backend, enabling distributed and scalable AI agent systems.
  • From an AI agent’s perspective, invoking a tool primarily involves formulating a McpToolInvocation object with the correct toolId and args, then sending it to the MCP client.
  • The anticipated TypeScript SDK v2 simplifies this interaction by providing client utilities for constructing and sending tool invocation requests.
  • We explored common pitfalls such as incorrect toolIds, invalid arguments, routing failures, and permission issues, along with practical troubleshooting steps.

Understanding these concepts is foundational to building reliable and performant AI agent applications that can seamlessly integrate with a diverse range of external tools and services.

What’s Next?

Now that we understand how tool calls are executed and routed, a crucial question remains: How do we ensure that only authorized agents can access specific tools and that sensitive operations are protected? In our next chapter, we’ll dive deep into Permissions and Authorization in MCP, exploring how the protocol addresses security and access control, which is paramount for any robust AI system.

