Building and integrating Model Context Protocol servers and clients using Python
Core Idea: Python offers robust libraries for implementing both MCP servers that expose tools and MCP clients that consume those tools, enabling developers to create standardized, interoperable AI tool ecosystems with minimal code.
Key Elements
Python MCP SDK Overview
- Official Python SDK (the `mcp` package), created by Anthropic and maintained under the modelcontextprotocol GitHub organization
- Supports both server and client implementations
- Handles JSON-RPC communication details
- Provides decorators and helper functions for easy tool definition
Server Implementation
Setup and Installation
# Install the official SDK (published on PyPI as "mcp")
pip install "mcp[cli]"
# Import the high-level server API
from mcp.server.fastmcp import FastMCP
Creating a Basic Server
from mcp.server.fastmcp import FastMCP

# Initialize server
mcp = FastMCP("my-server")

# Define a tool with the decorator; the parameter schema is inferred
# from the type hints, and the docstring becomes the tool description
@mcp.tool()
async def add(x: float, y: float) -> float:
    """Add two numbers together."""
    return x + y

# Start the server (stdio transport by default; pass transport="sse"
# to serve over HTTP instead)
if __name__ == "__main__":
    mcp.run()
Advanced Server Features
- Resources: Exposing data to LLMs
# Resources expose read-only data, addressed by a URI
@mcp.resource("docs://documentation", mime_type="text/markdown")
async def get_docs() -> str:
    """API documentation"""
    with open("docs.md", "r") as f:
        return f.read()
- Prompts: Creating templated workflows
# Prompts are reusable message templates the client can request;
# as with tools, parameters come from the function signature
@mcp.prompt()
async def research_template(topic: str) -> str:
    """Template for research tasks"""
    return f"""
# Research Plan for {topic}
1. Define key terms
2. Identify main research questions
3. Collect relevant sources
4. Analyze findings
5. Synthesize conclusions
"""
Client Implementation
Setting Up a Client
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server script as a subprocess and connect over stdio
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

            # Call a tool; results come back as typed content blocks
            result = await session.call_tool("add", arguments={"x": 5, "y": 10})
            print(result.content[0].text)  # e.g. "15.0"

asyncio.run(main())
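Resources and prompts defined on the server are fetched through the same session. A minimal sketch, assuming the server from the earlier examples is connected:
# Read a resource by its URI
res = await session.read_resource("docs://documentation")

# Render a prompt template with arguments
prompt = await session.get_prompt("research_template", arguments={"topic": "MCP"})
print(prompt.messages)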
Integrating with LLM Frameworks
The SDK does not ship a converter for OpenAI's function-calling format, but the translation is mechanical, since each MCP tool already carries a JSON Schema:
from openai import OpenAI

def to_openai_tools(list_result):
    """Translate MCP tool metadata into OpenAI's function-calling format."""
    return [
        {"type": "function", "function": {
            "name": t.name,
            "description": t.description or "",
            "parameters": t.inputSchema,  # MCP already stores a JSON Schema here
        }}
        for t in list_result.tools
    ]

# Inside an active ClientSession:
tools = to_openai_tools(await session.list_tools())

# Use with OpenAI
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Calculate 27 + 15"}],
    tools=tools,
    tool_choice="auto",
)
Implementation Patterns
Local Development Workflow
- Create a server script with tool definitions
- Run the server locally with python server.py, or use mcp dev server.py to test it interactively in the MCP Inspector
- Connect client applications over stdio, or over HTTP/SSE when the server runs with transport="sse" (see the sketch after this list)
- Test tool functionality through the client interface
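A minimal sketch of the HTTP path, assuming the server was started with mcp.run(transport="sse") and uses FastMCP's default port and endpoint:
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # FastMCP's SSE transport defaults to http://localhost:8000/sse
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            print(await session.list_tools())

asyncio.run(main())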
Production Deployment Options
- Docker Containers: Package the server into a container and serve over SSE (see the sketch after this list)
- Serverless Functions: Deploy as serverless endpoints (with adaptations, since SSE assumes a long-lived connection)
- Traditional Hosting: Run on application servers with proper process management
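For the container route, a minimal sketch, assuming FastMCP's host/port settings are accepted as constructor keyword arguments (they configure the underlying web server):
from mcp.server.fastmcp import FastMCP

# Bind to all interfaces so the published container port is reachable
mcp = FastMCP("my-server", host="0.0.0.0", port=8080)

if __name__ == "__main__":
    mcp.run(transport="sse")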
Helper Functions for Framework Integration
Pydantic AI Integration
def convert_mcp_to_pydantic(list_result):
    """Convert MCP tool metadata into plain tool definitions.

    Each MCP tool already carries a complete JSON Schema in its
    inputSchema field, so no schema reconstruction is needed.
    """
    return [
        {
            "name": tool.name,
            "description": tool.description or "",
            "parameters_json_schema": tool.inputSchema,
        }
        for tool in list_result.tools
    ]
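Recent Pydantic AI releases also ship native MCP support, which makes manual conversion unnecessary. A hedged sketch; the class names and the run_mcp_servers context manager match the pydantic-ai docs at the time of writing and may change between versions:
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

# Spawn the example server over stdio and expose its tools to the agent
server = MCPServerStdio("python", args=["server.py"])
agent = Agent("openai:gpt-4o", mcp_servers=[server])

async def run():
    # The context manager starts and stops the MCP server subprocess
    async with agent.run_mcp_servers():
        result = await agent.run("Calculate 27 + 15")
        print(result.output)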
LangChain Integration
from langchain_core.tools import BaseTool
from mcp import ClientSession

class MCPToolWrapper(BaseTool):
    """Wrapper to use MCP tools in LangChain"""
    name: str
    description: str
    session: ClientSession
    tool_name: str

    # ClientSession is not a pydantic model, so allow arbitrary types
    model_config = {"arbitrary_types_allowed": True}

    def _run(self, **kwargs):
        raise NotImplementedError("MCP calls are async; use the async path")

    async def _arun(self, **kwargs):
        # call_tool is a coroutine, so the wrapper must be async
        result = await self.session.call_tool(self.tool_name, arguments=kwargs)
        return result.content
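Alternatively, the langchain-mcp-adapters package automates this conversion; a sketch, assuming that package is installed and an active session exists:
# pip install langchain-mcp-adapters
from langchain_mcp_adapters.tools import load_mcp_tools

# Inside an active ClientSession: returns LangChain-compatible tool objects
tools = await load_mcp_tools(session)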
Best Practices
Error Handling
try:
    # Assumes the connected server exposes a "divide" tool
    result = await session.call_tool("divide", arguments={"x": 10, "y": 0})
    if result.isError:
        # Tool-level failures (e.g. a ZeroDivisionError inside the tool)
        # are reported in-band on the result rather than raised
        print(f"Tool returned an error: {result.content}")
except Exception as e:
    # Protocol or transport failures do raise exceptions
    print(f"Tool call failed: {e}")
    # Implement fallback logic here
Async/Await Usage
- The SDK is async-first; use async/await throughout
- Fan out independent tool calls concurrently (see the sketch after this list)
- Process multiple responses with Python's async primitives such as asyncio.gather
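A minimal sketch of concurrent calls over a single session (assuming an active session from the client examples above); each JSON-RPC request carries its own id, so independent calls can be awaited together:
import asyncio

# Fan out independent tool calls and gather the results
results = await asyncio.gather(
    session.call_tool("add", arguments={"x": 1, "y": 2}),
    session.call_tool("add", arguments={"x": 3, "y": 4}),
)
for r in results:
    print(r.content[0].text)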
Testing MCP Implementations
import pytest
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Spawns the example server from server.py as a subprocess;
# requires the pytest-asyncio plugin for the async test function
@pytest.mark.asyncio
async def test_add_tool():
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("add", arguments={"x": 5, "y": 10})
            # Tool results arrive as text content blocks
            assert result.content[0].text == "15.0"
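For faster tests, the SDK's shared test utilities include an in-memory transport. This is a semi-internal helper: the import path and the _mcp_server attribute are assumptions that may change between SDK versions.
import pytest
from mcp.shared.memory import create_connected_server_and_client_session
from server import mcp as server_app  # the FastMCP instance from server.py

@pytest.mark.asyncio
async def test_add_in_memory():
    # Connects a client session to the low-level server in-process,
    # with no subprocess or network involved
    async with create_connected_server_and_client_session(
        server_app._mcp_server
    ) as session:
        result = await session.call_tool("add", arguments={"x": 2, "y": 3})
        assert result.content[0].text == "5.0"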
Connections
- Protocol Components: Model Context Protocol, MCP Architecture, MCP Servers, MCP Clients
- Implementation Examples: Building MCP Servers, MCP vs Traditional AI Tools
- Applications: MCP in AI Agents, Pydantic AI Integration, LangChain with MCP
- Python Ecosystem: Python Async Programming, JSON-RPC in Python
References
- Python MCP SDK: github.com/modelcontextprotocol/python-sdk
- MCP Specification and Documentation: modelcontextprotocol.io
- Example MCP Servers: github.com/modelcontextprotocol/servers
#MCP #Python #MCPImplementation #AITools #ServerDevelopment #ClientDevelopment #PythonSDK