📌 Level: Intermediate (basic Python + understanding of AI agent concepts) ⏱️ Reading time: ~13 minutes 🛠️ After reading this: You’ll understand what MCP is and be able to build your first MCP server in Python within 30 minutes
In the last post we covered the 4 core components of AI agents. Remember the Tool Layer?
The interface that lets agents search the web, query databases, and call APIs — the one that makes all “actions” possible.
Before MCP, every one of these connections was one-off. A Claude plugin, a ChatGPT plugin, a Cursor extension — the same capability had to be built three separate times.
MCP ended that.
“Running an MCP server has become almost as popular as running a web server.” — The New Stack, 2026
## Why MCP — The Numbers Say It All
- Announced by Anthropic as an open standard in November 2024
- OpenAI, Google DeepMind, Microsoft, AWS — all adopted it
- Now governed by the Linux Foundation’s Agentic AI Foundation
- 97 million monthly SDK downloads (March 2026)
- 13,000+ public MCP servers on GitHub
- FastMCP: 1 million PyPI downloads per day
## 📊 Table of Contents
- What MCP Is — “USB-C for AI”
- Before vs After MCP — What Actually Changed
- MCP Architecture — 3 Components
- Core Concepts — Tools, Resources, Prompts
- Real Code — Build Your First MCP Server in 30 Minutes
- Connecting MCP to Claude Desktop / Cursor / LangChain
- Deployment — From Local to the Cloud
- Security & Production Checklist
## 1. What MCP Is — “USB-C for AI”
The most famous analogy for MCP:
MCP is USB-C for AI applications.
Think about what life was like before USB-C. Laptop chargers, phone chargers, external drive cables — all different. USB-C gave us one cable that connects everything.
Before MCP:
```
Claude ──custom integration──▶ Notion
Claude ──custom integration──▶ GitHub
Claude ──custom integration──▶ Slack
GPT-4  ──different custom───▶ Notion
GPT-4  ──different custom───▶ GitHub   (same service, built twice)
```
After MCP:
```
Notion MCP Server ◀──MCP standard──▶ Claude
                  ◀──MCP standard──▶ GPT-4
                  ◀──MCP standard──▶ Cursor
                  ◀──MCP standard──▶ Any AI client
```
Build an MCP server once, and it works with every AI client that supports MCP.
## 2. Before vs After MCP — What Actually Changed
### The Problem: N×M Integration Hell
```
[Before MCP]
AI clients:     Claude, GPT-4, Gemini, Cursor    (4 clients)
External tools: DB, GitHub, Slack, Figma, Gmail  (5 services)
Custom integrations needed: 4 × 5 = 20
```
Every company, every AI, every service needed its own connector — a maintenance nightmare.
### The Fix: Standardization Reduces It to N+M
```
[After MCP]
Each service builds one MCP server
MCP servers: DB, GitHub, Slack, Figma, Gmail  (5 servers)
AI clients connect via MCP                    (4 clients)
Total integrations needed: 5 + 4 = 9
```
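The arithmetic above generalizes: with N clients and M services, custom connectors grow multiplicatively while MCP grows additively. A quick sketch of how the gap widens as the ecosystem scales:

```python
def integrations(n_clients: int, m_services: int) -> tuple[int, int]:
    """Connectors needed: custom point-to-point (N*M) vs MCP (N+M)."""
    return n_clients * m_services, n_clients + m_services

for n, m in [(4, 5), (10, 50), (100, 1000)]:
    before, after = integrations(n, m)
    print(f"{n} clients x {m} services: {before} custom integrations vs {after} with MCP")
```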
## 3. MCP Architecture — 3 Components
```
┌─────────────────────────────────────────────────────┐
│                    MCP Ecosystem                    │
│                                                     │
│  ┌──────────────┐   MCP Protocol    ┌────────────┐  │
│  │  MCP Client  │◄─────────────────▶│ MCP Server │  │
│  │              │  (JSON-RPC 2.0)   │            │  │
│  │  Claude      │                   │ DB server  │  │
│  │  Cursor      │                   │ GitHub srv │  │
│  │  My Agent    │                   │ My API     │  │
│  └──────────────┘                   └─────┬──────┘  │
│                                           │         │
│                                     ┌─────▼───────┐ │
│                                     │External Svc │ │
│                                     │(DB, APIs...)│ │
│                                     └─────────────┘ │
└─────────────────────────────────────────────────────┘
```
- **MCP Client**: An app with an AI model inside (Claude Desktop, Cursor, your own agent)
- **MCP Server**: A server that exposes specific functions/data to AI — what you’ll build today
- **MCP Protocol**: How clients and servers talk (JSON-RPC 2.0 based)
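Under the hood, every exchange is a JSON-RPC 2.0 message. As an illustration (the field values here are representative examples, not a captured trace), a client invoking a tool sends a `tools/call` request and receives a result:

```json
// Client → Server: invoke a tool
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Tokyo" }
  }
}

// Server → Client: tool result
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "Tokyo: Cloudy, 22°C" }]
  }
}
```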
## 4. Core Concepts — Tools, Resources, Prompts
An MCP server can offer AI three things.
### ① Tools — Functions the AI Can Execute
```python
# Tool = a function the AI calls
# Examples: web search, DB query, file save

@mcp.tool()
def get_weather(city: str) -> str:
    """Returns the current weather for a given city."""
    # Real implementation: call a weather API
    return f"{city}: Sunny, 22°C"
```
### ② Resources — Data the AI Can Read
```python
# Resource = data the AI reads as context
# Examples: file contents, DB records, config info

@mcp.resource("config://app-settings")
def get_app_config() -> str:
    """Returns application configuration."""
    return json.dumps({"version": "1.0", "env": "production"})
```
### ③ Prompts — Reusable Prompt Templates
````python
# Prompt = frequently used prompts stored on the server
# Examples: code review template, report writing format

@mcp.prompt()
def code_review_prompt(language: str, code: str) -> str:
    """Generates a prompt for code review."""
    return f"""Please review the following {language} code:

```{language}
{code}
```

Review criteria: bugs, security vulnerabilities, performance, readability"""
````
**When to use what:**
| | Tool | Resource | Prompt |
|-|------|----------|--------|
| Purpose | Perform action | Read data | Reusable template |
| Examples | Send email, write to DB | Read file, get config | Review form, report format |
| AI perspective | "Do this" | "Read this" | "Think in this format" |
---
## 5. Real Code — Build Your First MCP Server in 30 Minutes
### Install
```bash
# FastMCP (most popular Python MCP framework in 2026)
pip install fastmcp
# Or use the official SDK directly
pip install mcp
```

### Build a Simple MCP Server
```python
# my_mcp_server.py
from fastmcp import FastMCP
import json
import os
from datetime import datetime

# Initialize MCP server
mcp = FastMCP(
    name="My First MCP Server",
    instructions="MCP server providing weather checks, calculations, and memo storage"
)

# ── Tool 1: Weather Lookup ───────────────────────────
@mcp.tool()
async def get_weather(city: str) -> str:
    """
    Look up current weather for a city.
    Examples: get_weather("Tokyo"), get_weather("New York")
    """
    # Production: wire to OpenWeather API or similar
    # Demo data for this tutorial
    weather_data = {
        "seoul": {"temp": 18, "condition": "Sunny", "humidity": 55},
        "tokyo": {"temp": 22, "condition": "Cloudy", "humidity": 70},
        "new york": {"temp": 15, "condition": "Rainy", "humidity": 85},
        "london": {"temp": 12, "condition": "Foggy", "humidity": 90},
    }
    city_lower = city.lower()
    if city_lower in weather_data:
        w = weather_data[city_lower]
        return f"{city} weather: {w['condition']}, {w['temp']}°C, humidity {w['humidity']}%"
    return f"No weather data found for '{city}'."

# ── Tool 2: Calculator ───────────────────────────────
@mcp.tool()
def calculate(expression: str) -> str:
    """
    Evaluate a math expression.
    Examples: "2 + 2 * 10", "sqrt(144)", "(100 - 20) / 4"
    """
    import math
    safe_env = {
        "__builtins__": {},
        "sqrt": math.sqrt, "pow": math.pow,
        "abs": abs, "round": round,
        "pi": math.pi, "e": math.e,
    }
    try:
        result = eval(expression, safe_env)
        return f"{expression} = {result}"
    except Exception as e:
        return f"Calculation error: {str(e)}"

# ── Tool 3: Save / Retrieve Memos ────────────────────
MEMO_FILE = "memos.json"

def _load_memos() -> dict:
    if os.path.exists(MEMO_FILE):
        with open(MEMO_FILE, "r") as f:
            return json.load(f)
    return {}

def _save_memos(memos: dict):
    with open(MEMO_FILE, "w") as f:
        json.dump(memos, f, indent=2)

@mcp.tool()
def save_memo(title: str, content: str) -> str:
    """Save a memo with a title and content."""
    memos = _load_memos()
    memos[title] = {
        "content": content,
        "created_at": datetime.now().isoformat()
    }
    _save_memos(memos)
    return f"✅ Memo saved: '{title}'"

@mcp.tool()
def get_memo(title: str) -> str:
    """Retrieve a saved memo by title."""
    memos = _load_memos()
    if title in memos:
        memo = memos[title]
        return f"📝 [{title}]\n{memo['content']}\n\nSaved at: {memo['created_at']}"
    return f"No memo found for '{title}'. Existing memos: {list(memos.keys())}"

@mcp.tool()
def list_memos() -> str:
    """List all saved memos."""
    memos = _load_memos()
    if not memos:
        return "No memos saved yet."
    items = [f"- {title} ({m['created_at'][:10]})" for title, m in memos.items()]
    return "📋 Saved memos:\n" + "\n".join(items)

# ── Resource: Server Status ──────────────────────────
@mcp.resource("server://status")
def get_server_status() -> str:
    """Returns current server status and available tools."""
    return json.dumps({
        "status": "running",
        "version": "1.0.0",
        "tools": ["get_weather", "calculate", "save_memo", "get_memo", "list_memos"],
        "timestamp": datetime.now().isoformat()
    })

# ── Prompt: Weather Report Template ──────────────────
@mcp.prompt()
def weather_report_prompt(cities: str) -> str:
    """Prompt template for generating a multi-city weather comparison report."""
    return f"""Check the weather for these cities and write a comparison report: {cities}

Report format:
1. Current weather status for each city
2. Best weather city recommendation
3. Travel / outdoor activity suitability rating

Use the available tools to fetch real weather data before writing."""

# ── Run the Server ───────────────────────────────────
if __name__ == "__main__":
    # stdio mode: connects locally to Claude Desktop, Cursor, etc.
    mcp.run()
    # HTTP mode (for remote deployment):
    # mcp.run(transport="streamable-http", host="0.0.0.0", port=8080)
```
### Run and Test
```bash
# Start the server (stdio mode)
python my_mcp_server.py

# Test with MCP Inspector (separate terminal)
npx @modelcontextprotocol/inspector python my_mcp_server.py
```
Open the MCP Inspector in your browser to test tools interactively.
## 6. Connecting MCP to Claude Desktop / Cursor / LangChain
Wire the server into a real AI tool.
### Claude Desktop
```json
// ~/Library/Application Support/Claude/claude_desktop_config.json (macOS)
// %APPDATA%/Claude/claude_desktop_config.json (Windows)
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["/absolute/path/to/my_mcp_server.py"],
      "env": { "PYTHONPATH": "/absolute/path" }
    }
  }
}
```
Restart Claude Desktop and your tools will appear in the chat interface.
### Cursor
```json
// .cursor/mcp.json (project root)
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["my_mcp_server.py"]
    }
  }
}
```
### Connecting to a LangChain Agent
```python
# langchain_with_mcp.py
import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_anthropic import ChatAnthropic
from langgraph.prebuilt import create_react_agent

async def main():
    # Initialize MCP client
    async with MultiServerMCPClient(
        {
            "my-server": {
                "command": "python",
                "args": ["my_mcp_server.py"],
                "transport": "stdio",
            }
        }
    ) as client:
        # MCP tools are auto-converted to LangChain tools
        tools = await client.get_tools()
        print(f"✅ {len(tools)} MCP tools loaded")
        for tool in tools:
            print(f"  - {tool.name}: {tool.description[:50]}...")

        # Wire MCP tools into a LangGraph agent
        llm = ChatAnthropic(model="claude-sonnet-4-20250514")
        agent = create_react_agent(llm, tools)

        # Run the agent
        result = await agent.ainvoke({
            "messages": [("user", "Check the weather in Tokyo and New York, then save a memo comparing them.")]
        })
        print("\n📌 Agent response:")
        print(result["messages"][-1].content)

asyncio.run(main())
```
Install:
```bash
pip install langchain-mcp-adapters langchain-anthropic langgraph
```
## 7. Deployment — From Local to the Cloud
### Choosing a Transport
| Transport | Characteristics | When to Use |
|---|---|---|
| stdio | Local inter-process communication | Claude Desktop, Cursor local connections |
| Streamable HTTP | HTTP-based remote connections | Cloud deployment, remote agents |
| ~~SSE~~ | ~~Legacy approach~~ | ~~Pre-2025~~ |
In 2026: stdio for local, Streamable HTTP for the cloud — that’s the standard.
### Convert to Streamable HTTP
```python
# Cloud deployment (HTTP mode)
from fastmcp import FastMCP
import os

mcp = FastMCP("My Cloud MCP Server")

# ... tool definitions are identical ...

if __name__ == "__main__":
    mcp.run(
        transport="streamable-http",
        host="0.0.0.0",
        port=int(os.getenv("PORT", 8080))
    )
```
### Deploy to Railway
```dockerfile
# Dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "my_mcp_server.py"]
```
```text
# requirements.txt
fastmcp>=3.0
httpx
python-dotenv
```
```bash
# Deploy
railway up

# Connect to Claude Desktop via HTTP after deploying:
# claude_desktop_config.json:
# {
#   "mcpServers": {
#     "my-cloud-server": {
#       "url": "https://your-app.up.railway.app/mcp",
#       "transport": "streamable-http"
#     }
#   }
# }
```
## 8. Security & Production Checklist
An MCP server is the gateway through which AI touches your real systems. Security is critical.
### Security Principles

**Prefer read-only access**
```python
# ❌ Dangerous: write access for untrusted agents
@mcp.tool()
def execute_sql(query: str) -> str:
    """Execute an SQL query."""
    return db.execute(query)  # DELETE, DROP possible!

# ✅ Safe: read only
@mcp.tool()
def search_data(keyword: str) -> str:
    """Search data by keyword (read-only)."""
    return db.execute(
        "SELECT * FROM data WHERE content LIKE ? LIMIT 10",
        [f"%{keyword}%"]
    )
```
**Validate inputs**
```python
@mcp.tool()
def save_file(filename: str, content: str) -> str:
    """Save content to a file."""
    # Prevent path traversal
    if ".." in filename or "/" in filename or "\\" in filename:
        return "Error: invalid filename."
    # Enforce file size limit
    if len(content) > 1_000_000:  # 1MB
        return "Error: file too large (max 1MB)."
    safe_path = os.path.join("./allowed_dir", os.path.basename(filename))
    with open(safe_path, "w") as f:
        f.write(content)
    return f"✅ Saved: {safe_path}"
```
**Manage secrets properly**
```python
# ❌ Never do this
API_KEY = "sk-1234567890"  # hardcoded in source

# ✅ Inject via environment variable
import os
API_KEY = os.getenv("MY_API_KEY")
if not API_KEY:
    raise ValueError("MY_API_KEY environment variable is not set.")
```
### Production Checklist

**Functionality**
- [ ] Every tool has a clear, descriptive name and description?
- [ ] Error messages are clear enough for the AI to understand?
- [ ] Timeouts are handled on all external calls?
**Security**
- [ ] Input validation applied to every tool?
- [ ] No sensitive data returned directly in tool responses?
- [ ] API keys managed only via environment variables?
- [ ] Principle of least privilege applied?
**Operations**

- [ ] `/health` endpoint exists?
- [ ] Logging configured appropriately?
- [ ] Rate limiting in place?
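For the timeout item in the checklist above, one lightweight pattern is to wrap every external call in a hard deadline so the agent receives a clear error string instead of hanging. A stdlib-only sketch (`with_timeout` and `slow_api_call` are illustrative names, not part of any MCP SDK):

```python
import asyncio

async def with_timeout(coro, seconds: float, label: str) -> str:
    """Run an awaitable with a hard deadline; return a clear error string on timeout."""
    try:
        return await asyncio.wait_for(coro, timeout=seconds)
    except asyncio.TimeoutError:
        return f"Error: {label} timed out after {seconds}s."

async def slow_api_call() -> str:
    await asyncio.sleep(5)  # stands in for a slow external API
    return "ok"

async def main():
    # The tool returns promptly with a message the AI can understand and relay.
    result = await with_timeout(slow_api_call(), 0.1, "weather API")
    print(result)

asyncio.run(main())
```

Returning an error string (rather than raising) matters here: the model sees the message as tool output and can retry or explain the failure to the user.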
## Wrapping Up — MCP Is No Longer Optional, It’s Foundational
In 2026, MCP is not a trend. It’s infrastructure.
OpenAI, Google, Microsoft, and AWS have all adopted it. The Linux Foundation now governs it. Over 13,000 public servers already exist. 97 million monthly SDK downloads show just how fast this ecosystem is growing.
Building AI agents without knowing MCP is becoming like doing web development without knowing REST APIs.
Copy the code in this post and run it. Swap the demo weather data for a real OpenWeather API call. Connect the memo storage to an actual database. That hands-on experience is where MCP’s power becomes obvious.
## 🔖 AI Agent Development Series
- Previous: The Complete AI Agent Development Guide — From Concepts to Production Architecture
- Current: The Complete MCP Guide — The Standard Protocol That Gives Your AI Agent Hands and Feet
- Coming next: AI Agent Observability — Tracing Agent Behavior with LangSmith
Tags: #MCP #ModelContextProtocol #AIAgents #FastMCP #Python #LangChain #AgentDevelopment #2026 #DevTutorial #Anthropic
Sources: Wikipedia MCP · Particula Tech MCP Developer Guide · Apigene MCP Build Guide · The New Stack AI Engineering Trends 2026 · FastMCP GitHub