diff --git a/.cursor/rules/about-codebase.mdc b/.cursor/rules/about-codebase.mdc index 3bcb166..cc801b6 100644 --- a/.cursor/rules/about-codebase.mdc +++ b/.cursor/rules/about-codebase.mdc @@ -5,4 +5,7 @@ alwaysApply: false --- - This repository contains a Model Context Protocol (MCP) server that integrates with CodeLogic's knowledge graph APIs - It enables AI programming assistants to access dependency data from CodeLogic to analyze code and database impacts -- The core package is in src/codelogic_mcp_server/ with server.py, handlers.py, and utils.py \ No newline at end of file +- **NEW**: Provides DevOps CI/CD integration capabilities for CodeLogic scanning in Jenkins, GitHub Actions, Azure DevOps, and GitLab CI +- **NEW**: Generates structured data for AI models to directly modify CI/CD files and implement CodeLogic scanning +- The core package is in src/codelogic_mcp_server/ with server.py, handlers.py, and utils.py +- **DevOps Tools**: codelogic-docker-agent, codelogic-build-info, codelogic-pipeline-helper for CI/CD integration \ No newline at end of file diff --git a/.cursor/rules/best-practices.mdc b/.cursor/rules/best-practices.mdc index dd47eee..78c77aa 100644 --- a/.cursor/rules/best-practices.mdc +++ b/.cursor/rules/best-practices.mdc @@ -6,3 +6,7 @@ alwaysApply: false - Use semantic search before grep for broader context - Maintain proper error handling and logging - Keep code changes atomic and focused +- **NEW**: For DevOps tools, provide structured JSON data for AI file modification +- **NEW**: Include specific file paths, line numbers, and exact code modifications +- **NEW**: Generate platform-specific CI/CD configurations (Jenkins, GitHub Actions, Azure DevOps, GitLab) +- **NEW**: Always include setup instructions and validation checks for DevOps integrations diff --git a/.cursor/rules/environment-variables.mdc b/.cursor/rules/environment-variables.mdc index 77b3b3c..b4303fd 100644 --- a/.cursor/rules/environment-variables.mdc +++ 
b/.cursor/rules/environment-variables.mdc @@ -8,4 +8,9 @@ alwaysApply: false - `CODELOGIC_PASSWORD`: Password for authentication - `CODELOGIC_WORKSPACE_NAME`: Workspace name - `CODELOGIC_DEBUG_MODE`: Enable debug logging -- `CODELOGIC_TEST_MODE`: Used by test framework \ No newline at end of file +- `CODELOGIC_TEST_MODE`: Used by test framework +- **NEW**: DevOps CI/CD Integration Variables: + - `CODELOGIC_HOST`: CodeLogic server host for Docker agents + - `AGENT_UUID`: CodeLogic agent UUID for authentication + - `AGENT_PASSWORD`: CodeLogic agent password for authentication + - `SCAN_SPACE_NAME`: Target scan space for CodeLogic scans \ No newline at end of file diff --git a/.cursor/rules/error-handling.mdc b/.cursor/rules/error-handling.mdc index 72302ad..f2ababc 100644 --- a/.cursor/rules/error-handling.mdc +++ b/.cursor/rules/error-handling.mdc @@ -3,7 +3,8 @@ description: Error handling patterns for the CodeLogic MCP Server globs: "**/*.py" alwaysApply: false --- -- Use the following pattern for error handling in tool implementations: +# Use the following pattern for error handling in tool implementations + ```python try: # Operations that might fail @@ -11,6 +12,7 @@ except Exception as e: sys.stderr.write(f"Error: {str(e)}\n") return [types.TextContent(type="text", text=f"# Error\n\n{str(e)}")] ``` + - Always catch and report exceptions - Write errors to stderr -- Return formatted error messages to the client \ No newline at end of file +- Return formatted error messages to the client diff --git a/.cursor/rules/file-operations.mdc b/.cursor/rules/file-operations.mdc deleted file mode 100644 index 8a4655d..0000000 --- a/.cursor/rules/file-operations.mdc +++ /dev/null @@ -1,9 +0,0 @@ ---- -description: File operation guidance for working with the CodeLogic MCP Server -globs: -alwaysApply: false ---- -- Direct file editing with context preservation -- File creation and deletion capabilities -- Directory listing and navigation -- Maintain proper file organization 
and structure \ No newline at end of file diff --git a/.cursor/rules/mcp-server-pattern.mdc b/.cursor/rules/mcp-server-pattern.mdc index c386bf8..8cacad7 100644 --- a/.cursor/rules/mcp-server-pattern.mdc +++ b/.cursor/rules/mcp-server-pattern.mdc @@ -3,7 +3,9 @@ description: Core coding patterns for MCP Server implementation globs: "**/*.py" alwaysApply: false --- -- Use the following pattern for MCP server implementation: + +# Use the following pattern for MCP server implementation + ```python server = Server("codelogic-mcp-server") @@ -15,7 +17,11 @@ async def handle_list_tools() -> list[types.Tool]: async def handle_call_tool(name: str, arguments: dict | None) -> list[types.TextContent]: # Handle tool execution ``` + - New tools should be added to handle_list_tools() with descriptive names (prefix: `codelogic-`) - Tool handlers should be implemented in handle_call_tool() - Create handler functions with proper error handling -- Return results as markdown-formatted text \ No newline at end of file +- Return results as markdown-formatted text +- **NEW**: For DevOps tools, return structured JSON data for AI file modification +- **NEW**: Include helper functions for generating platform-specific CI/CD configurations +- **NEW**: Use structured output patterns for file modifications with specific line numbers and content diff --git a/.cursor/rules/technologies.mdc b/.cursor/rules/technologies.mdc index 68ca416..5d87f91 100644 --- a/.cursor/rules/technologies.mdc +++ b/.cursor/rules/technologies.mdc @@ -6,4 +6,7 @@ alwaysApply: false - Python 3.13+ with extensive use of async/await - Model Context Protocol SDK (`mcp[cli]`) - HTTPX for API requests -- Environment variables via dotenv for configuration \ No newline at end of file +- Environment variables via dotenv for configuration +- **NEW**: Docker for CodeLogic agent containerization +- **NEW**: CI/CD Platform Support: Jenkins (Groovy), GitHub Actions (YAML), Azure DevOps (YAML), GitLab CI (YAML) +- **NEW**: JSON 
structured output for AI model file modification \ No newline at end of file diff --git a/.cursorindexingignore b/.cursorindexingignore deleted file mode 100644 index 68347b3..0000000 --- a/.cursorindexingignore +++ /dev/null @@ -1,2 +0,0 @@ -# Don't index SpecStory auto-save files, but allow explicit context inclusion via @ references -.specstory/** diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 0e16040..45e8c69 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -20,17 +20,19 @@ jobs: with: python-version: ${{ matrix.python-version }} - name: Install dependencies + env: + PIP_PROGRESS_BAR: off run: | - python -m pip install --upgrade pip - python -m pip install uv - uv pip install --system -e ".[dev]" - python -m pip install flake8 + python -m pip install --upgrade pip -q + python -m pip install uv -q + uv pip install --system -e ".[dev]" --quiet + python -m pip install flake8 -q - name: Lint with flake8 run: | # stop the build if there are Python syntax errors or undefined names flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics - # exit-zero treats all errors as warnings - flake8 . --count --exit-zero --max-complexity=10 --statistics + # exit-zero treats all errors as warnings (quiet: only violations, no per-file stats) + flake8 . --count --exit-zero --max-complexity=10 --quiet - name: Test with unittest run: | - python -m unittest discover -s test -p "unit*.py" -v \ No newline at end of file + python -m unittest discover -s test -p "unit*.py" \ No newline at end of file diff --git a/README.md b/README.md index 157a3a1..565dde3 100644 --- a/README.md +++ b/README.md @@ -6,13 +6,25 @@ An [MCP Server](https://modelcontextprotocol.io/introduction) to utilize Codelog ### Tools -The server implements two tools: +The server implements five tools: +#### Code Analysis Tools - **codelogic-method-impact**: Pulls an impact assessment from the CodeLogic server's APIs for your code. 
- Takes the given "method" that you're working on and its associated "class". - **codelogic-database-impact**: Analyzes impacts between code and database entities. - Takes the database entity type (column, table, or view) and its name. +#### DevOps & CI/CD Integration Tools +- **codelogic-docker-agent**: Generates Docker agent configurations for CodeLogic scanning in CI/CD pipelines. + - Supports .NET, Java, SQL, and TypeScript agents + - Generates configurations for Jenkins, GitHub Actions, Azure DevOps, and GitLab CI +- **codelogic-build-info**: Generates build information and send commands for CodeLogic integration. + - Supports Git information, build logs, and metadata collection + - Provides both Docker and standalone usage examples +- **codelogic-pipeline-helper**: Generates complete CI/CD pipeline configurations for CodeLogic integration. + - Creates comprehensive pipeline configurations with best practices + - Includes error handling, notifications, and scan space management strategies + ### Install #### Pre Requisites @@ -196,6 +208,41 @@ To configure the CodeLogic MCP server in Cursor: The CodeLogic MCP server tools will now be available in your Cursor workspace. +## DevOps Integration + +The CodeLogic MCP Server now includes powerful DevOps capabilities for integrating CodeLogic scanning into your CI/CD pipelines. 
These tools help DevOps teams: + +### Docker Agent Integration +- Generate Docker run commands for CodeLogic agents +- Create platform-specific configurations (Jenkins, GitHub Actions, Azure DevOps, GitLab CI) +- Set up proper environment variables and volume mounts +- Include build information collection + +### Build Information Management +- Send Git information, build logs, and metadata to CodeLogic servers +- Support multiple CI/CD platforms with platform-specific variables +- Handle log file management and rotation +- Provide both Docker and standalone usage options + +### Complete Pipeline Configuration +- Generate end-to-end CI/CD pipeline configurations +- Include error handling, notifications, and monitoring +- Support different scan space management strategies +- Follow DevOps best practices for security and performance + +### Example Usage + +```bash +# Generate Docker agent configuration for .NET +codelogic-docker-agent --agent-type=dotnet --scan-path=/app --application-name=MyApp --ci-platform=jenkins + +# Set up build information sending +codelogic-build-info --build-type=all --output-format=docker --ci-platform=github-actions + +# Create complete pipeline configuration +codelogic-pipeline-helper --ci-platform=jenkins --agent-type=dotnet --scan-triggers=main,develop +``` + ## AI Assistant Instructions/Rules To help the AI assistant use the CodeLogic tools effectively, you can add the following instructions/rules to your client's configuration. 
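The idea behind these DevOps tools — returning machine-readable edit instructions (file path, line number, exact content) instead of prose — can be sketched in plain Python. Everything below is a hypothetical illustration: the `codelogic/<type>-agent` image name, the workflow path, the line number, and the payload shape are assumptions for the sketch, not the server's actual output format.

```python
"""Hypothetical sketch of the structured JSON a DevOps tool might return
so an AI assistant can apply exact CI/CD file modifications."""
import json


def build_modification(path: str, line: int, action: str, content: str) -> dict:
    """Describe one exact edit to a CI/CD file."""
    return {"file": path, "line": line, "action": action, "content": content}


def docker_agent_payload(agent_type: str, scan_space: str) -> str:
    """Bundle a docker run command plus the file edits that would install it."""
    # Image name and env-var wiring are illustrative assumptions.
    command = (
        "docker run --rm -e CODELOGIC_HOST -e AGENT_UUID -e AGENT_PASSWORD "
        f"-e SCAN_SPACE_NAME={scan_space} codelogic/{agent_type}-agent"
    )
    payload = {
        "tool": "codelogic-docker-agent",
        "command": command,
        "modifications": [
            # Line number 42 is a placeholder for wherever the scan step belongs.
            build_modification(".github/workflows/ci.yml", 42, "insert", command)
        ],
    }
    return json.dumps(payload, indent=2)


print(docker_agent_payload("dotnet", "my-scan-space"))
```

A payload in this shape gives the assistant everything it needs to edit the pipeline file directly, which is why the rules above insist on specific paths, line numbers, and exact content.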
We recommend customizing these instructions to align with your team's specific coding standards, best practices, and workflow requirements: @@ -216,9 +263,16 @@ When modifying SQL code or database entities: - Always use codelogic-database-impact to analyze potential impacts - Highlight impact results for the modified database entities +For DevOps and CI/CD integration: +- Use codelogic-docker-agent to generate Docker agent configurations +- Use codelogic-build-info to set up build information sending +- Use codelogic-pipeline-helper to create complete CI/CD pipeline configurations +- Support Jenkins, GitHub Actions, Azure DevOps, and GitLab CI platforms + To use the CodeLogic tools effectively: - For code impacts: Ask about specific methods or functions - For database relationships: Ask about tables, views, or columns +- For DevOps: Ask about CI/CD integration, Docker agents, or build information - Review the impact results before making changes - Consider both direct and indirect impacts ``` @@ -239,9 +293,16 @@ When modifying SQL code or database entities: - Always use codelogic-database-impact to analyze potential impacts - Highlight impact results for the modified database entities +For DevOps and CI/CD integration: +- Use codelogic-docker-agent to generate Docker agent configurations +- Use codelogic-build-info to set up build information sending +- Use codelogic-pipeline-helper to create complete CI/CD pipeline configurations +- Support Jenkins, GitHub Actions, Azure DevOps, and GitLab CI platforms + To use the CodeLogic tools effectively: - For code impacts: Ask about specific methods or functions - For database relationships: Ask about tables, views, or columns +- For DevOps: Ask about CI/CD integration, Docker agents, or build information - Review the impact results before making changes - Consider both direct and indirect impacts ``` @@ -260,9 +321,16 @@ When modifying SQL code or database entities: - Always use codelogic-database-impact to analyze 
potential impacts - Highlight impact results for the modified database entities +For DevOps and CI/CD integration: +- Use codelogic-docker-agent to generate Docker agent configurations +- Use codelogic-build-info to set up build information sending +- Use codelogic-pipeline-helper to create complete CI/CD pipeline configurations +- Support Jenkins, GitHub Actions, Azure DevOps, and GitLab CI platforms + To use the CodeLogic tools effectively: - For code impacts: Ask about specific methods or functions - For database relationships: Ask about tables, views, or columns +- For DevOps: Ask about CI/CD integration, Docker agents, or build information - Review the impact results before making changes - Consider both direct and indirect impacts ``` diff --git a/context/Python-MCP-SDK.md b/context/Python-MCP-SDK.md index 05d6072..0f0468a 100644 --- a/context/Python-MCP-SDK.md +++ b/context/Python-MCP-SDK.md @@ -8,11 +8,18 @@ [![MIT licensed][mit-badge]][mit-url] [![Python Version][python-badge]][python-url] [![Documentation][docs-badge]][docs-url] +[![Protocol][protocol-badge]][protocol-url] [![Specification][spec-badge]][spec-url] -[![GitHub Discussions][discussions-badge]][discussions-url] +> [!IMPORTANT] +> **This is the `main` branch which contains v2 of the SDK (currently in development, pre-alpha).** +> +> We anticipate a stable v2 release in Q1 2026. Until then, **v1.x remains the recommended version** for production use. v1.x will continue to receive bug fixes and security updates for at least 6 months after v2 ships to give people time to upgrade. +> +> For v1 documentation and code, see the [`v1.x` branch](https://github.com/modelcontextprotocol/python-sdk/tree/v1.x). 
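Given the note above, projects that want to stay on v1 until v2 stabilizes can pin the dependency with a version constraint (an example constraint; adjust to your packaging setup):

```shell
pip install "mcp[cli]>=1.0,<2.0"
```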
+ ## Table of Contents @@ -27,20 +34,41 @@ - [Server](#server) - [Resources](#resources) - [Tools](#tools) + - [Structured Output](#structured-output) - [Prompts](#prompts) - [Images](#images) - [Context](#context) + - [Getting Context in Functions](#getting-context-in-functions) + - [Context Properties and Methods](#context-properties-and-methods) + - [Completions](#completions) + - [Elicitation](#elicitation) + - [Sampling](#sampling) + - [Logging and Notifications](#logging-and-notifications) + - [Authentication](#authentication) + - [MCPServer Properties](#mcpserver-properties) + - [Session Properties and Methods](#session-properties-and-methods) + - [Request Context Properties](#request-context-properties) - [Running Your Server](#running-your-server) - [Development Mode](#development-mode) - [Claude Desktop Integration](#claude-desktop-integration) - [Direct Execution](#direct-execution) + - [Streamable HTTP Transport](#streamable-http-transport) + - [CORS Configuration for Browser-Based Clients](#cors-configuration-for-browser-based-clients) - [Mounting to an Existing ASGI Server](#mounting-to-an-existing-asgi-server) - - [Examples](#examples) - - [Echo Server](#echo-server) - - [SQLite Explorer](#sqlite-explorer) + - [StreamableHTTP servers](#streamablehttp-servers) + - [Basic mounting](#basic-mounting) + - [Host-based routing](#host-based-routing) + - [Multiple servers with path configuration](#multiple-servers-with-path-configuration) + - [Path configuration at initialization](#path-configuration-at-initialization) + - [SSE servers](#sse-servers) - [Advanced Usage](#advanced-usage) - [Low-Level Server](#low-level-server) + - [Structured Output Support](#structured-output-support) + - [Pagination (Advanced)](#pagination-advanced) - [Writing MCP Clients](#writing-mcp-clients) + - [Client Display Utilities](#client-display-utilities) + - [OAuth Authentication for Clients](#oauth-authentication-for-clients) + - [Parsing Tool Results](#parsing-tool-results) - 
[MCP Primitives](#mcp-primitives) - [Server Capabilities](#server-capabilities) - [Documentation](#documentation) @@ -53,12 +81,12 @@ [mit-url]: https://github.com/modelcontextprotocol/python-sdk/blob/main/LICENSE [python-badge]: https://img.shields.io/pypi/pyversions/mcp.svg [python-url]: https://www.python.org/downloads/ -[docs-badge]: https://img.shields.io/badge/docs-modelcontextprotocol.io-blue.svg -[docs-url]: https://modelcontextprotocol.io +[docs-badge]: https://img.shields.io/badge/docs-python--sdk-blue.svg +[docs-url]: https://modelcontextprotocol.github.io/python-sdk/ +[protocol-badge]: https://img.shields.io/badge/protocol-modelcontextprotocol.io-blue.svg +[protocol-url]: https://modelcontextprotocol.io [spec-badge]: https://img.shields.io/badge/spec-spec.modelcontextprotocol.io-blue.svg -[spec-url]: https://spec.modelcontextprotocol.io -[discussions-badge]: https://img.shields.io/github/discussions/modelcontextprotocol/python-sdk -[discussions-url]: https://github.com/modelcontextprotocol/python-sdk/discussions +[spec-url]: https://modelcontextprotocol.io/specification/latest ## Overview @@ -66,14 +94,14 @@ The Model Context Protocol allows applications to provide context for LLMs in a - Build MCP clients that can connect to any MCP server - Create MCP servers that expose resources, prompts and tools -- Use standard transports like stdio and SSE +- Use standard transports like stdio, SSE, and Streamable HTTP - Handle all MCP protocol messages and lifecycle events ## Installation ### Adding MCP to your python project -We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects. +We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects. 
If you haven't created a uv-managed project yet, create one: @@ -89,6 +117,7 @@ If you haven't created a uv-managed project yet, create one: ``` Alternatively, for projects using pip for dependencies: + ```bash pip install "mcp[cli]" ``` @@ -105,12 +134,18 @@ uv run mcp Let's create a simple MCP server that exposes a calculator tool and some data: + ```python -# server.py -from mcp.server.fastmcp import FastMCP +"""MCPServer quickstart example. + +Run from the repository root: + uv run examples/snippets/servers/mcpserver_quickstart.py +""" + +from mcp.server.mcpserver import MCPServer # Create an MCP server -mcp = FastMCP("Demo") +mcp = MCPServer("Demo") # Add an addition tool @@ -125,18 +160,49 @@ def add(a: int, b: int) -> int: def get_greeting(name: str) -> str: """Get a personalized greeting""" return f"Hello, {name}!" + + +# Add a prompt +@mcp.prompt() +def greet_user(name: str, style: str = "friendly") -> str: + """Generate a greeting prompt""" + styles = { + "friendly": "Please write a warm, friendly greeting", + "formal": "Please write a formal, professional greeting", + "casual": "Please write a casual, relaxed greeting", + } + + return f"{styles.get(style, styles['friendly'])} for someone named {name}." + + +# Run with streamable HTTP transport +if __name__ == "__main__": + mcp.run(transport="streamable-http", json_response=True) +``` + +_Full example: [examples/snippets/servers/mcpserver_quickstart.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/mcpserver_quickstart.py)_ + + +You can install this server in [Claude Code](https://docs.claude.com/en/docs/claude-code/mcp) and interact with it right away. 
First, run the server: + +```bash +uv run --with mcp examples/snippets/servers/mcpserver_quickstart.py ``` -You can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running: +Then add it to Claude Code: + ```bash -mcp install server.py +claude mcp add --transport http my-server http://localhost:8000/mcp ``` -Alternatively, you can test it with the MCP Inspector: +Alternatively, you can test it with the MCP Inspector. Start the server as above, then in a separate terminal: + ```bash -mcp dev server.py +npx -y @modelcontextprotocol/inspector ``` +In the inspector UI, connect to `http://localhost:8000/mcp`. + ## What is MCP? The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can: @@ -150,33 +216,48 @@ The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you bui ### Server -The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing: +The MCPServer server is your core interface to the MCP protocol. 
It handles connection management, protocol compliance, and message routing: + ```python -# Add lifespan support for startup/shutdown with strong typing -from contextlib import asynccontextmanager +"""Example showing lifespan support for startup/shutdown with strong typing.""" + from collections.abc import AsyncIterator +from contextlib import asynccontextmanager from dataclasses import dataclass -from fake_database import Database # Replace with your actual DB type +from mcp.server.mcpserver import Context, MCPServer +from mcp.server.session import ServerSession + + +# Mock database class for example +class Database: + """Mock database class for example.""" -from mcp.server.fastmcp import Context, FastMCP + @classmethod + async def connect(cls) -> "Database": + """Connect to database.""" + return cls() -# Create a named server -mcp = FastMCP("My App") + async def disconnect(self) -> None: + """Disconnect from database.""" + pass -# Specify dependencies for deployment and development -mcp = FastMCP("My App", dependencies=["pandas", "numpy"]) + def query(self) -> str: + """Execute a query.""" + return "Query result" @dataclass class AppContext: + """Application context with typed dependencies.""" + db: Database @asynccontextmanager -async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]: - """Manage application lifecycle with type-safe context""" +async def app_lifespan(server: MCPServer) -> AsyncIterator[AppContext]: + """Manage application lifecycle with type-safe context.""" # Initialize on startup db = await Database.connect() try: @@ -187,438 +268,2224 @@ async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]: # Pass lifespan to server -mcp = FastMCP("My App", lifespan=app_lifespan) +mcp = MCPServer("My App", lifespan=app_lifespan) # Access type-safe lifespan context in tools @mcp.tool() -def query_db(ctx: Context) -> str: - """Tool that uses initialized resources""" - db = ctx.request_context.lifespan_context["db"] +def query_db(ctx: 
Context[ServerSession, AppContext]) -> str: + """Tool that uses initialized resources.""" + db = ctx.request_context.lifespan_context.db return db.query() ``` +_Full example: [examples/snippets/servers/lifespan_example.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lifespan_example.py)_ + + ### Resources Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects: + ```python -from mcp.server.fastmcp import FastMCP +from mcp.server.mcpserver import MCPServer -mcp = FastMCP("My App") +mcp = MCPServer(name="Resource Example") -@mcp.resource("config://app") -def get_config() -> str: - """Static configuration data""" - return "App configuration here" +@mcp.resource("file://documents/{name}") +def read_document(name: str) -> str: + """Read a document by name.""" + # This would normally read from disk + return f"Content of {name}" -@mcp.resource("users://{user_id}/profile") -def get_user_profile(user_id: str) -> str: - """Dynamic user data""" - return f"Profile data for user {user_id}" +@mcp.resource("config://settings") +def get_settings() -> str: + """Get application settings.""" + return """{ + "theme": "dark", + "language": "en", + "debug": false +}""" ``` +_Full example: [examples/snippets/servers/basic_resource.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_resource.py)_ + + ### Tools Tools let LLMs take actions through your server. 
Unlike resources, tools are expected to perform computation and have side effects:
+
 ```python
-import httpx
-from mcp.server.fastmcp import FastMCP
+from mcp.server.mcpserver import MCPServer

-mcp = FastMCP("My App")
+mcp = MCPServer(name="Tool Example")


 @mcp.tool()
-def calculate_bmi(weight_kg: float, height_m: float) -> float:
-    """Calculate BMI given weight in kg and height in meters"""
-    return weight_kg / (height_m**2)
+def sum(a: int, b: int) -> int:
+    """Add two numbers together."""
+    return a + b


 @mcp.tool()
-async def fetch_weather(city: str) -> str:
-    """Fetch current weather for a city"""
-    async with httpx.AsyncClient() as client:
-        response = await client.get(f"https://api.weather.com/{city}")
-        return response.text
+def get_weather(city: str, unit: str = "celsius") -> str:
+    """Get weather for a city."""
+    # This would normally call a weather API
+    return f"Weather in {city}: 22°{unit[0].upper()}"
 ```

-### Prompts
+_Full example: [examples/snippets/servers/basic_tool.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_tool.py)_
+
-Prompts are reusable templates that help LLMs interact with your server effectively:
+Tools can optionally receive a Context object by including a parameter with the `Context` type annotation.
This context is automatically injected by the MCPServer framework and provides access to MCP capabilities: + ```python -from mcp.server.fastmcp import FastMCP -from mcp.server.fastmcp.prompts import base - -mcp = FastMCP("My App") +from mcp.server.mcpserver import Context, MCPServer +from mcp.server.session import ServerSession +mcp = MCPServer(name="Progress Example") -@mcp.prompt() -def review_code(code: str) -> str: - return f"Please review this code:\n\n{code}" +@mcp.tool() +async def long_running_task(task_name: str, ctx: Context[ServerSession, None], steps: int = 5) -> str: + """Execute a task with progress updates.""" + await ctx.info(f"Starting: {task_name}") + + for i in range(steps): + progress = (i + 1) / steps + await ctx.report_progress( + progress=progress, + total=1.0, + message=f"Step {i + 1}/{steps}", + ) + await ctx.debug(f"Completed step {i + 1}") -@mcp.prompt() -def debug_error(error: str) -> list[base.Message]: - return [ - base.UserMessage("I'm seeing this error:"), - base.UserMessage(error), - base.AssistantMessage("I'll help debug that. What have you tried so far?"), - ] + return f"Task '{task_name}' completed" ``` -### Images +_Full example: [examples/snippets/servers/tool_progress.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/tool_progress.py)_ + -FastMCP provides an `Image` class that automatically handles image data: +#### Structured Output -```python -from mcp.server.fastmcp import FastMCP, Image -from PIL import Image as PILImage +Tools will return structured results by default, if their return type +annotation is compatible. Otherwise, they will return unstructured results. 
-mcp = FastMCP("My App") +Structured output supports these return types: +- Pydantic models (BaseModel subclasses) +- TypedDicts +- Dataclasses and other classes with type hints +- `dict[str, T]` (where T is any JSON-serializable type) +- Primitive types (str, int, float, bool, bytes, None) - wrapped in `{"result": value}` +- Generic types (list, tuple, Union, Optional, etc.) - wrapped in `{"result": value}` -@mcp.tool() -def create_thumbnail(image_path: str) -> Image: - """Create a thumbnail from an image""" - img = PILImage.open(image_path) - img.thumbnail((100, 100)) - return Image(data=img.tobytes(), format="png") -``` +Classes without type hints cannot be serialized for structured output. Only +classes with properly annotated attributes will be converted to Pydantic models +for schema generation and validation. -### Context +Structured results are automatically validated against the output schema +generated from the annotation. This ensures the tool returns well-typed, +validated data that clients can easily process. -The Context object gives your tools and resources access to MCP capabilities: +**Note:** For backward compatibility, unstructured results are also +returned. Unstructured results are provided for backward compatibility +with previous versions of the MCP specification, and are quirks-compatible +with previous versions of MCPServer in the current version of the SDK. -```python -from mcp.server.fastmcp import FastMCP, Context +**Note:** In cases where a tool function's return type annotation +causes the tool to be classified as structured _and this is undesirable_, +the classification can be suppressed by passing `structured_output=False` +to the `@tool` decorator. 
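The wrapping rules listed above can be illustrated with a small stand-in function — a toy model of the documented behavior, not the SDK's actual implementation:

```python
def to_structured(value: object) -> dict:
    """Mimic the documented rule: dict[str, T] results pass through as-is,
    while primitives and generic types are wrapped in {"result": value}."""
    if isinstance(value, dict):
        return value
    return {"result": value}


# Primitives and lists get wrapped; dicts are already structured.
print(to_structured(42))              # {'result': 42}
print(to_structured(["a", "b"]))      # {'result': ['a', 'b']}
print(to_structured({"mean": 42.5}))  # {'mean': 42.5}
```

The wrapped form is what clients see in `structuredContent` for such tools, which is why a `dict[str, float]` return needs no wrapper while `list[str]` does.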
-mcp = FastMCP("My App") +##### Advanced: Direct CallToolResult +For full control over tool responses including the `_meta` field (for passing data to client applications without exposing it to the model), you can return `CallToolResult` directly: -@mcp.tool() -async def long_task(files: list[str], ctx: Context) -> str: - """Process multiple files with progress tracking""" - for i, file in enumerate(files): - ctx.info(f"Processing {file}") - await ctx.report_progress(i, len(files)) - data, mime_type = await ctx.read_resource(f"file://{file}") - return "Processing complete" -``` + +```python +"""Example showing direct CallToolResult return for advanced control.""" -## Running Your Server +from typing import Annotated -### Development Mode +from pydantic import BaseModel -The fastest way to test and debug your server is with the MCP Inspector: +from mcp.server.mcpserver import MCPServer +from mcp.types import CallToolResult, TextContent -```bash -mcp dev server.py +mcp = MCPServer("CallToolResult Example") -# Add dependencies -mcp dev server.py --with pandas --with numpy -# Mount local code -mcp dev server.py --with-editable . -``` +class ValidationModel(BaseModel): + """Model for validating structured output.""" -### Claude Desktop Integration + status: str + data: dict[str, int] -Once your server is ready, install it in Claude Desktop: -```bash -mcp install server.py +@mcp.tool() +def advanced_tool() -> CallToolResult: + """Return CallToolResult directly for full control including _meta field.""" + return CallToolResult( + content=[TextContent(type="text", text="Response visible to the model")], + _meta={"hidden": "data for client applications only"}, + ) -# Custom name -mcp install server.py --name "My Analytics Server" -# Environment variables -mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://... 
-mcp install server.py -f .env +@mcp.tool() +def validated_tool() -> Annotated[CallToolResult, ValidationModel]: + """Return CallToolResult with structured output validation.""" + return CallToolResult( + content=[TextContent(type="text", text="Validated response")], + structured_content={"status": "success", "data": {"result": 42}}, + _meta={"internal": "metadata"}, + ) + + +@mcp.tool() +def empty_result_tool() -> CallToolResult: + """For empty results, return CallToolResult with empty content.""" + return CallToolResult(content=[]) ``` -### Direct Execution +_Full example: [examples/snippets/servers/direct_call_tool_result.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/direct_call_tool_result.py)_ + -For advanced scenarios like custom deployments: +**Important:** `CallToolResult` must always be returned (no `Optional` or `Union`). For empty results, use `CallToolResult(content=[])`. For optional simple types, use `str | None` without `CallToolResult`. + ```python -from mcp.server.fastmcp import FastMCP +"""Example showing structured output with tools.""" -mcp = FastMCP("My App") +from typing import TypedDict -if __name__ == "__main__": - mcp.run() -``` +from pydantic import BaseModel, Field -Run it with: -```bash -python server.py -# or -mcp run server.py -``` +from mcp.server.mcpserver import MCPServer -### Mounting to an Existing ASGI Server +mcp = MCPServer("Structured Output Example") -You can mount the SSE server to an existing ASGI server using the `sse_app` method. This allows you to integrate the SSE server with other ASGI applications. 
-```python -from starlette.applications import Starlette -from starlette.routing import Mount, Host -from mcp.server.fastmcp import FastMCP +# Using Pydantic models for rich structured data +class WeatherData(BaseModel): + """Weather information structure.""" + temperature: float = Field(description="Temperature in Celsius") + humidity: float = Field(description="Humidity percentage") + condition: str + wind_speed: float -mcp = FastMCP("My App") -# Mount the SSE server to the existing ASGI server -app = Starlette( - routes=[ - Mount('/', app=mcp.sse_app()), - ] -) +@mcp.tool() +def get_weather(city: str) -> WeatherData: + """Get weather for a city - returns structured data.""" + # Simulated weather data + return WeatherData( + temperature=22.5, + humidity=45.0, + condition="sunny", + wind_speed=5.2, + ) -# or dynamically mount as host -app.router.routes.append(Host('mcp.acme.corp', app=mcp.sse_app())) -``` -For more information on mounting applications in Starlette, see the [Starlette documentation](https://www.starlette.io/routing/#submounting-routes). 
+# Using TypedDict for simpler structures +class LocationInfo(TypedDict): + latitude: float + longitude: float + name: str -## Examples -### Echo Server +@mcp.tool() +def get_location(address: str) -> LocationInfo: + """Get location coordinates""" + return LocationInfo(latitude=51.5074, longitude=-0.1278, name="London, UK") -A simple server demonstrating resources, tools, and prompts: -```python -from mcp.server.fastmcp import FastMCP +# Using dict[str, Any] for flexible schemas +@mcp.tool() +def get_statistics(data_type: str) -> dict[str, float]: + """Get various statistics""" + return {"mean": 42.5, "median": 40.0, "std_dev": 5.2} -mcp = FastMCP("Echo") +# Ordinary classes with type hints work for structured output +class UserProfile: + name: str + age: int + email: str | None = None -@mcp.resource("echo://{message}") -def echo_resource(message: str) -> str: - """Echo a message as a resource""" - return f"Resource echo: {message}" + def __init__(self, name: str, age: int, email: str | None = None): + self.name = name + self.age = age + self.email = email @mcp.tool() -def echo_tool(message: str) -> str: - """Echo a message as a tool""" - return f"Tool echo: {message}" +def get_user(user_id: str) -> UserProfile: + """Get user profile - returns structured data""" + return UserProfile(name="Alice", age=30, email="alice@example.com") -@mcp.prompt() -def echo_prompt(message: str) -> str: - """Create an echo prompt""" - return f"Please process this message: {message}" +# Classes WITHOUT type hints cannot be used for structured output +class UntypedConfig: + def __init__(self, setting1, setting2): # type: ignore[reportMissingParameterType] + self.setting1 = setting1 + self.setting2 = setting2 + + +@mcp.tool() +def get_config() -> UntypedConfig: + """This returns unstructured output - no schema generated""" + return UntypedConfig("value1", "value2") + + +# Lists and other types are wrapped automatically +@mcp.tool() +def list_cities() -> list[str]: + """Get a list of 
cities""" + return ["London", "Paris", "Tokyo"] + # Returns: {"result": ["London", "Paris", "Tokyo"]} + + +@mcp.tool() +def get_temperature(city: str) -> float: + """Get temperature as a simple float""" + return 22.5 + # Returns: {"result": 22.5} ``` -### SQLite Explorer +_Full example: [examples/snippets/servers/structured_output.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/structured_output.py)_ + -A more complex example showing database integration: +### Prompts -```python -import sqlite3 +Prompts are reusable templates that help LLMs interact with your server effectively: -from mcp.server.fastmcp import FastMCP + +```python +from mcp.server.mcpserver import MCPServer +from mcp.server.mcpserver.prompts import base -mcp = FastMCP("SQLite Explorer") +mcp = MCPServer(name="Prompt Example") -@mcp.resource("schema://main") -def get_schema() -> str: - """Provide the database schema as a resource""" - conn = sqlite3.connect("database.db") - schema = conn.execute("SELECT sql FROM sqlite_master WHERE type='table'").fetchall() - return "\n".join(sql[0] for sql in schema if sql[0]) +@mcp.prompt(title="Code Review") +def review_code(code: str) -> str: + return f"Please review this code:\n\n{code}" -@mcp.tool() -def query_data(sql: str) -> str: - """Execute SQL queries safely""" - conn = sqlite3.connect("database.db") - try: - result = conn.execute(sql).fetchall() - return "\n".join(str(row) for row in result) - except Exception as e: - return f"Error: {str(e)}" +@mcp.prompt(title="Debug Assistant") +def debug_error(error: str) -> list[base.Message]: + return [ + base.UserMessage("I'm seeing this error:"), + base.UserMessage(error), + base.AssistantMessage("I'll help debug that. 
What have you tried so far?"), + ] ``` -## Advanced Usage +_Full example: [examples/snippets/servers/basic_prompt.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_prompt.py)_ + -### Low-Level Server +### Icons -For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API: +MCP servers can provide icons for UI display. Icons can be added to the server implementation, tools, resources, and prompts: ```python -from contextlib import asynccontextmanager -from collections.abc import AsyncIterator +from mcp.server.mcpserver import MCPServer, Icon -from fake_database import Database # Replace with your actual DB type +# Create an icon from a file path or URL +icon = Icon( + src="icon.png", + mimeType="image/png", + sizes="64x64" +) -from mcp.server import Server +# Add icons to server +mcp = MCPServer( + "My Server", + website_url="https://example.com", + icons=[icon] +) +# Add icons to tools, resources, and prompts +@mcp.tool(icons=[icon]) +def my_tool(): + """Tool with an icon.""" + return "result" -@asynccontextmanager -async def server_lifespan(server: Server) -> AsyncIterator[dict]: - """Manage server startup and shutdown lifecycle.""" - # Initialize resources on startup - db = await Database.connect() - try: - yield {"db": db} - finally: - # Clean up on shutdown - await db.disconnect() +@mcp.resource("demo://resource", icons=[icon]) +def my_resource(): + """Resource with an icon.""" + return "content" +``` +_Full example: [examples/mcpserver/icons_demo.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/mcpserver/icons_demo.py)_ -# Pass lifespan to server -server = Server("example-server", lifespan=server_lifespan) +### Images +MCPServer provides an `Image` class that automatically handles image data: -# Access lifespan context 
 in handlers
-@server.call_tool()
-async def query_db(name: str, arguments: dict) -> list:
-    ctx = server.request_context
-    db = ctx.lifespan_context["db"]
-    return await db.query(arguments["query"])
+
+```python
+"""Example showing image handling with MCPServer."""
+
+import io
+
+from PIL import Image as PILImage
+
+from mcp.server.mcpserver import Image, MCPServer
+
+mcp = MCPServer("Image Example")
+
+
+@mcp.tool()
+def create_thumbnail(image_path: str) -> Image:
+    """Create a thumbnail from an image"""
+    img = PILImage.open(image_path)
+    img.thumbnail((100, 100))
+    # Encode the thumbnail as PNG; img.tobytes() would return raw pixel
+    # data rather than a PNG stream, so clients could not decode it
+    buffer = io.BytesIO()
+    img.save(buffer, format="PNG")
+    return Image(data=buffer.getvalue(), format="png")
```
-The lifespan API provides:
-- A way to initialize resources when the server starts and clean them up when it stops
-- Access to initialized resources through the request context in handlers
-- Type-safe context passing between lifespan and request handlers
+_Full example: [examples/snippets/servers/images.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/images.py)_
+
-```python
-import mcp.server.stdio
-import mcp.types as types
-from mcp.server.lowlevel import NotificationOptions, Server
-from mcp.server.models import InitializationOptions
+### Context
-# Create a server instance
-server = Server("example-server")
+The Context object is automatically injected into tool and resource functions that request it via type hints. It provides access to MCP capabilities like logging, progress reporting, resource reading, user interaction, and request metadata.
+#### Getting Context in Functions -@server.list_prompts() -async def handle_list_prompts() -> list[types.Prompt]: - return [ - types.Prompt( - name="example-prompt", - description="An example prompt template", - arguments=[ - types.PromptArgument( - name="arg1", description="Example argument", required=True - ) - ], - ) - ] +To use context in a tool or resource function, add a parameter with the `Context` type annotation: +```python +from mcp.server.mcpserver import Context, MCPServer -@server.get_prompt() -async def handle_get_prompt( - name: str, arguments: dict[str, str] | None -) -> types.GetPromptResult: - if name != "example-prompt": - raise ValueError(f"Unknown prompt: {name}") +mcp = MCPServer(name="Context Example") - return types.GetPromptResult( - description="Example prompt", - messages=[ - types.PromptMessage( - role="user", - content=types.TextContent(type="text", text="Example prompt text"), - ) - ], - ) +@mcp.tool() +async def my_tool(x: int, ctx: Context) -> str: + """Tool that uses context capabilities.""" + # The context parameter can have any name as long as it's type-annotated + return await process_with_context(x, ctx) +``` -async def run(): - async with mcp.server.stdio.stdio_server() as (read_stream, write_stream): - await server.run( - read_stream, - write_stream, - InitializationOptions( - server_name="example", - server_version="0.1.0", - capabilities=server.get_capabilities( - notification_options=NotificationOptions(), - experimental_capabilities={}, - ), - ), - ) +#### Context Properties and Methods + +The Context object provides the following capabilities: + +- `ctx.request_id` - Unique ID for the current request +- `ctx.client_id` - Client ID if available +- `ctx.mcp_server` - Access to the MCPServer server instance (see [MCPServer Properties](#mcpserver-properties)) +- `ctx.session` - Access to the underlying session for advanced communication (see [Session Properties and Methods](#session-properties-and-methods)) +- 
`ctx.request_context` - Access to request-specific data and lifespan resources (see [Request Context Properties](#request-context-properties)) +- `await ctx.debug(message)` - Send debug log message +- `await ctx.info(message)` - Send info log message +- `await ctx.warning(message)` - Send warning log message +- `await ctx.error(message)` - Send error log message +- `await ctx.log(level, message, logger_name=None)` - Send log with custom level +- `await ctx.report_progress(progress, total=None, message=None)` - Report operation progress +- `await ctx.read_resource(uri)` - Read a resource by URI +- `await ctx.elicit(message, schema)` - Request additional information from user with validation + + +```python +from mcp.server.mcpserver import Context, MCPServer +from mcp.server.session import ServerSession +mcp = MCPServer(name="Progress Example") -if __name__ == "__main__": - import asyncio - asyncio.run(run()) +@mcp.tool() +async def long_running_task(task_name: str, ctx: Context[ServerSession, None], steps: int = 5) -> str: + """Execute a task with progress updates.""" + await ctx.info(f"Starting: {task_name}") + + for i in range(steps): + progress = (i + 1) / steps + await ctx.report_progress( + progress=progress, + total=1.0, + message=f"Step {i + 1}/{steps}", + ) + await ctx.debug(f"Completed step {i + 1}") + + return f"Task '{task_name}' completed" ``` -### Writing MCP Clients +_Full example: [examples/snippets/servers/tool_progress.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/tool_progress.py)_ + -The SDK provides a high-level client interface for connecting to MCP servers: +### Completions +MCP supports providing completion suggestions for prompt arguments and resource template parameters. 
With the context parameter, servers can provide completions based on previously resolved values: + +Client usage: + + ```python -from mcp import ClientSession, StdioServerParameters, types +"""cd to the `examples/snippets` directory and run: +uv run completion-client +""" + +import asyncio +import os + +from mcp import ClientSession, StdioServerParameters from mcp.client.stdio import stdio_client +from mcp.types import PromptReference, ResourceTemplateReference # Create server parameters for stdio connection server_params = StdioServerParameters( - command="python", # Executable - args=["example_server.py"], # Optional command line arguments - env=None, # Optional environment variables + command="uv", # Using uv to run the server + args=["run", "server", "completion", "stdio"], # Server with completion support + env={"UV_INDEX": os.environ.get("UV_INDEX", "")}, ) -# Optional: create a sampling callback -async def handle_sampling_message( - message: types.CreateMessageRequestParams, -) -> types.CreateMessageResult: - return types.CreateMessageResult( - role="assistant", - content=types.TextContent( - type="text", - text="Hello, world! 
from model", - ), - model="gpt-3.5-turbo", - stopReason="endTurn", - ) - - async def run(): + """Run the completion client example.""" async with stdio_client(server_params) as (read, write): - async with ClientSession( - read, write, sampling_callback=handle_sampling_message - ) as session: + async with ClientSession(read, write) as session: # Initialize the connection await session.initialize() + # List available resource templates + templates = await session.list_resource_templates() + print("Available resource templates:") + for template in templates.resource_templates: + print(f" - {template.uri_template}") + # List available prompts prompts = await session.list_prompts() + print("\nAvailable prompts:") + for prompt in prompts.prompts: + print(f" - {prompt.name}") + + # Complete resource template arguments + if templates.resource_templates: + template = templates.resource_templates[0] + print(f"\nCompleting arguments for resource template: {template.uri_template}") + + # Complete without context + result = await session.complete( + ref=ResourceTemplateReference(type="ref/resource", uri=template.uri_template), + argument={"name": "owner", "value": "model"}, + ) + print(f"Completions for 'owner' starting with 'model': {result.completion.values}") - # Get a prompt - prompt = await session.get_prompt( - "example-prompt", arguments={"arg1": "value"} - ) + # Complete with context - repo suggestions based on owner + result = await session.complete( + ref=ResourceTemplateReference(type="ref/resource", uri=template.uri_template), + argument={"name": "repo", "value": ""}, + context_arguments={"owner": "modelcontextprotocol"}, + ) + print(f"Completions for 'repo' with owner='modelcontextprotocol': {result.completion.values}") - # List available resources - resources = await session.list_resources() + # Complete prompt arguments + if prompts.prompts: + prompt_name = prompts.prompts[0].name + print(f"\nCompleting arguments for prompt: {prompt_name}") - # List available 
tools - tools = await session.list_tools() + result = await session.complete( + ref=PromptReference(type="ref/prompt", name=prompt_name), + argument={"name": "style", "value": ""}, + ) + print(f"Completions for 'style' argument: {result.completion.values}") - # Read a resource - content, mime_type = await session.read_resource("file://some/path") - # Call a tool - result = await session.call_tool("tool-name", arguments={"arg1": "value"}) +def main(): + """Entry point for the completion client.""" + asyncio.run(run()) if __name__ == "__main__": - import asyncio + main() +``` - asyncio.run(run()) +_Full example: [examples/snippets/clients/completion_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/completion_client.py)_ + +### Elicitation + +Request additional information from users. This example shows an Elicitation during a Tool Call: + + +```python +"""Elicitation examples demonstrating form and URL mode elicitation. + +Form mode elicitation collects structured, non-sensitive data through a schema. +URL mode elicitation directs users to external URLs for sensitive operations +like OAuth flows, credential collection, or payment processing. +""" + +import uuid + +from pydantic import BaseModel, Field + +from mcp.server.mcpserver import Context, MCPServer +from mcp.server.session import ServerSession +from mcp.shared.exceptions import UrlElicitationRequiredError +from mcp.types import ElicitRequestURLParams + +mcp = MCPServer(name="Elicitation Example") + + +class BookingPreferences(BaseModel): + """Schema for collecting user preferences.""" + + checkAlternative: bool = Field(description="Would you like to check another date?") + alternativeDate: str = Field( + default="2024-12-26", + description="Alternative date (YYYY-MM-DD)", + ) + + +@mcp.tool() +async def book_table(date: str, time: str, party_size: int, ctx: Context[ServerSession, None]) -> str: + """Book a table with date availability check. 
+ + This demonstrates form mode elicitation for collecting non-sensitive user input. + """ + # Check if date is available + if date == "2024-12-25": + # Date unavailable - ask user for alternative + result = await ctx.elicit( + message=(f"No tables available for {party_size} on {date}. Would you like to try another date?"), + schema=BookingPreferences, + ) + + if result.action == "accept" and result.data: + if result.data.checkAlternative: + return f"[SUCCESS] Booked for {result.data.alternativeDate}" + return "[CANCELLED] No booking made" + return "[CANCELLED] Booking cancelled" + + # Date available + return f"[SUCCESS] Booked for {date} at {time}" + + +@mcp.tool() +async def secure_payment(amount: float, ctx: Context[ServerSession, None]) -> str: + """Process a secure payment requiring URL confirmation. + + This demonstrates URL mode elicitation using ctx.elicit_url() for + operations that require out-of-band user interaction. + """ + elicitation_id = str(uuid.uuid4()) + + result = await ctx.elicit_url( + message=f"Please confirm payment of ${amount:.2f}", + url=f"https://payments.example.com/confirm?amount={amount}&id={elicitation_id}", + elicitation_id=elicitation_id, + ) + + if result.action == "accept": + # In a real app, the payment confirmation would happen out-of-band + # and you'd verify the payment status from your backend + return f"Payment of ${amount:.2f} initiated - check your browser to complete" + elif result.action == "decline": + return "Payment declined by user" + return "Payment cancelled" + + +@mcp.tool() +async def connect_service(service_name: str, ctx: Context[ServerSession, None]) -> str: + """Connect to a third-party service requiring OAuth authorization. + + This demonstrates the "throw error" pattern using UrlElicitationRequiredError. + Use this pattern when the tool cannot proceed without user authorization. 
+ """ + elicitation_id = str(uuid.uuid4()) + + # Raise UrlElicitationRequiredError to signal that the client must complete + # a URL elicitation before this request can be processed. + # The MCP framework will convert this to a -32042 error response. + raise UrlElicitationRequiredError( + [ + ElicitRequestURLParams( + mode="url", + message=f"Authorization required to connect to {service_name}", + url=f"https://{service_name}.example.com/oauth/authorize?elicit={elicitation_id}", + elicitation_id=elicitation_id, + ) + ] + ) +``` + +_Full example: [examples/snippets/servers/elicitation.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/elicitation.py)_ + + +Elicitation schemas support default values for all field types. Default values are automatically included in the JSON schema sent to clients, allowing them to pre-populate forms. + +The `elicit()` method returns an `ElicitationResult` with: + +- `action`: "accept", "decline", or "cancel" +- `data`: The validated response (only when accepted) +- `validation_error`: Any validation error message + +### Sampling + +Tools can interact with LLMs through sampling (generating text): + + +```python +from mcp.server.mcpserver import Context, MCPServer +from mcp.server.session import ServerSession +from mcp.types import SamplingMessage, TextContent + +mcp = MCPServer(name="Sampling Example") + + +@mcp.tool() +async def generate_poem(topic: str, ctx: Context[ServerSession, None]) -> str: + """Generate a poem using LLM sampling.""" + prompt = f"Write a short poem about {topic}" + + result = await ctx.session.create_message( + messages=[ + SamplingMessage( + role="user", + content=TextContent(type="text", text=prompt), + ) + ], + max_tokens=100, + ) + + # Since we're not passing tools param, result.content is single content + if result.content.type == "text": + return result.content.text + return str(result.content) +``` + +_Full example: 
[examples/snippets/servers/sampling.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/sampling.py)_ + + +### Logging and Notifications + +Tools can send logs and notifications through the context: + + +```python +from mcp.server.mcpserver import Context, MCPServer +from mcp.server.session import ServerSession + +mcp = MCPServer(name="Notifications Example") + + +@mcp.tool() +async def process_data(data: str, ctx: Context[ServerSession, None]) -> str: + """Process data with logging.""" + # Different log levels + await ctx.debug(f"Debug: Processing '{data}'") + await ctx.info("Info: Starting processing") + await ctx.warning("Warning: This is experimental") + await ctx.error("Error: (This is just a demo)") + + # Notify about resource changes + await ctx.session.send_resource_list_changed() + + return f"Processed: {data}" +``` + +_Full example: [examples/snippets/servers/notifications.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/notifications.py)_ + + +### Authentication + +Authentication can be used by servers that want to expose tools accessing protected resources. + +`mcp.server.auth` implements OAuth 2.1 resource server functionality, where MCP servers act as Resource Servers (RS) that validate tokens issued by separate Authorization Servers (AS). This follows the [MCP authorization specification](https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization) and implements RFC 9728 (Protected Resource Metadata) for AS discovery. 
+ +MCP servers can use authentication by providing an implementation of the `TokenVerifier` protocol: + + +```python +"""Run from the repository root: +uv run examples/snippets/servers/oauth_server.py +""" + +from pydantic import AnyHttpUrl + +from mcp.server.auth.provider import AccessToken, TokenVerifier +from mcp.server.auth.settings import AuthSettings +from mcp.server.mcpserver import MCPServer + + +class SimpleTokenVerifier(TokenVerifier): + """Simple token verifier for demonstration.""" + + async def verify_token(self, token: str) -> AccessToken | None: + pass # This is where you would implement actual token validation + + +# Create MCPServer instance as a Resource Server +mcp = MCPServer( + "Weather Service", + # Token verifier for authentication + token_verifier=SimpleTokenVerifier(), + # Auth settings for RFC 9728 Protected Resource Metadata + auth=AuthSettings( + issuer_url=AnyHttpUrl("https://auth.example.com"), # Authorization Server URL + resource_server_url=AnyHttpUrl("http://localhost:3001"), # This server's URL + required_scopes=["user"], + ), +) + + +@mcp.tool() +async def get_weather(city: str = "London") -> dict[str, str]: + """Get weather data for a city""" + return { + "city": city, + "temperature": "22", + "condition": "Partly cloudy", + "humidity": "65%", + } + + +if __name__ == "__main__": + mcp.run(transport="streamable-http", json_response=True) +``` + +_Full example: [examples/snippets/servers/oauth_server.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/oauth_server.py)_ + + +For a complete example with separate Authorization Server and Resource Server implementations, see [`examples/servers/simple-auth/`](examples/servers/simple-auth/). 
+ +**Architecture:** + +- **Authorization Server (AS)**: Handles OAuth flows, user authentication, and token issuance +- **Resource Server (RS)**: Your MCP server that validates tokens and serves protected resources +- **Client**: Discovers AS through RFC 9728, obtains tokens, and uses them with the MCP server + +See [TokenVerifier](src/mcp/server/auth/provider.py) for more details on implementing token validation. + +### MCPServer Properties + +The MCPServer server instance accessible via `ctx.mcp_server` provides access to server configuration and metadata: + +- `ctx.mcp_server.name` - The server's name as defined during initialization +- `ctx.mcp_server.instructions` - Server instructions/description provided to clients +- `ctx.mcp_server.website_url` - Optional website URL for the server +- `ctx.mcp_server.icons` - Optional list of icons for UI display +- `ctx.mcp_server.settings` - Complete server configuration object containing: + - `debug` - Debug mode flag + - `log_level` - Current logging level + - `host` and `port` - Server network configuration + - `sse_path`, `streamable_http_path` - Transport paths + - `stateless_http` - Whether the server operates in stateless mode + - And other configuration options + +```python +@mcp.tool() +def server_info(ctx: Context) -> dict: + """Get information about the current server.""" + return { + "name": ctx.mcp_server.name, + "instructions": ctx.mcp_server.instructions, + "debug_mode": ctx.mcp_server.settings.debug, + "log_level": ctx.mcp_server.settings.log_level, + "host": ctx.mcp_server.settings.host, + "port": ctx.mcp_server.settings.port, + } +``` + +### Session Properties and Methods + +The session object accessible via `ctx.session` provides advanced control over client communication: + +- `ctx.session.client_params` - Client initialization parameters and declared capabilities +- `await ctx.session.send_log_message(level, data, logger)` - Send log messages with full control +- `await 
ctx.session.create_message(messages, max_tokens)` - Request LLM sampling/completion +- `await ctx.session.send_progress_notification(token, progress, total, message)` - Direct progress updates +- `await ctx.session.send_resource_updated(uri)` - Notify clients that a specific resource changed +- `await ctx.session.send_resource_list_changed()` - Notify clients that the resource list changed +- `await ctx.session.send_tool_list_changed()` - Notify clients that the tool list changed +- `await ctx.session.send_prompt_list_changed()` - Notify clients that the prompt list changed + +```python +@mcp.tool() +async def notify_data_update(resource_uri: str, ctx: Context) -> str: + """Update data and notify clients of the change.""" + # Perform data update logic here + + # Notify clients that this specific resource changed + await ctx.session.send_resource_updated(AnyUrl(resource_uri)) + + # If this affects the overall resource list, notify about that too + await ctx.session.send_resource_list_changed() + + return f"Updated {resource_uri} and notified clients" +``` + +### Request Context Properties + +The request context accessible via `ctx.request_context` contains request-specific information and resources: + +- `ctx.request_context.lifespan_context` - Access to resources initialized during server startup + - Database connections, configuration objects, shared services + - Type-safe access to resources defined in your server's lifespan function +- `ctx.request_context.meta` - Request metadata from the client including: + - `progressToken` - Token for progress notifications + - Other client-provided metadata +- `ctx.request_context.request` - The original MCP request object for advanced processing +- `ctx.request_context.request_id` - Unique identifier for this request + +```python +# Example with typed lifespan context +@dataclass +class AppContext: + db: Database + config: AppConfig + +@mcp.tool() +def query_with_config(query: str, ctx: Context) -> str: + """Execute a 
query using shared database and configuration.""" + # Access typed lifespan context + app_ctx: AppContext = ctx.request_context.lifespan_context + + # Use shared resources + connection = app_ctx.db + settings = app_ctx.config + + # Execute query with configuration + result = connection.execute(query, timeout=settings.query_timeout) + return str(result) +``` + +_Full lifespan example: [examples/snippets/servers/lifespan_example.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lifespan_example.py)_ + +## Running Your Server + +### Development Mode + +The fastest way to test and debug your server is with the MCP Inspector: + +```bash +uv run mcp dev server.py + +# Add dependencies +uv run mcp dev server.py --with pandas --with numpy + +# Mount local code +uv run mcp dev server.py --with-editable . +``` + +### Claude Desktop Integration + +Once your server is ready, install it in Claude Desktop: + +```bash +uv run mcp install server.py + +# Custom name +uv run mcp install server.py --name "My Analytics Server" + +# Environment variables +uv run mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://... +uv run mcp install server.py -f .env +``` + +### Direct Execution + +For advanced scenarios like custom deployments: + + +```python +"""Example showing direct execution of an MCP server. + +This is the simplest way to run an MCP server directly. +cd to the `examples/snippets` directory and run: + uv run direct-execution-server + or + python servers/direct_execution.py +""" + +from mcp.server.mcpserver import MCPServer + +mcp = MCPServer("My App") + + +@mcp.tool() +def hello(name: str = "World") -> str: + """Say hello to someone.""" + return f"Hello, {name}!" 
+
+
+def main():
+    """Entry point for the direct execution server."""
+    mcp.run()
+
+
+if __name__ == "__main__":
+    main()
+```
+
+_Full example: [examples/snippets/servers/direct_execution.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/direct_execution.py)_
+
+
+Run it with:
+
+```bash
+python servers/direct_execution.py
+# or
+uv run mcp run servers/direct_execution.py
+```
+
+Note that `uv run mcp run` and `uv run mcp dev` only support servers built with MCPServer, not the low-level server variant.
+
+### Streamable HTTP Transport
+
+> **Note**: Streamable HTTP transport is the recommended transport for production deployments. Use `stateless_http=True` and `json_response=True` for optimal scalability.
+
+
+```python
+"""Run from the repository root:
+uv run examples/snippets/servers/streamable_config.py
+"""
+
+from mcp.server.mcpserver import MCPServer
+
+mcp = MCPServer("StatelessServer")
+
+
+# Add a simple tool to demonstrate the server
+@mcp.tool()
+def greet(name: str = "World") -> str:
+    """Greet someone by name."""
+    return f"Hello, {name}!"
+ + +# Run server with streamable_http transport +# Transport-specific options (stateless_http, json_response) are passed to run() +if __name__ == "__main__": + # Stateless server with JSON responses (recommended) + mcp.run(transport="streamable-http", stateless_http=True, json_response=True) + + # Other configuration options: + # Stateless server with SSE streaming responses + # mcp.run(transport="streamable-http", stateless_http=True) + + # Stateful server with session persistence + # mcp.run(transport="streamable-http") +``` + +_Full example: [examples/snippets/servers/streamable_config.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_config.py)_ + + +You can mount multiple MCPServer servers in a Starlette application: + + +```python +"""Run from the repository root: +uvicorn examples.snippets.servers.streamable_starlette_mount:app --reload +""" + +import contextlib + +from starlette.applications import Starlette +from starlette.routing import Mount + +from mcp.server.mcpserver import MCPServer + +# Create the Echo server +echo_mcp = MCPServer(name="EchoServer") + + +@echo_mcp.tool() +def echo(message: str) -> str: + """A simple echo tool""" + return f"Echo: {message}" + + +# Create the Math server +math_mcp = MCPServer(name="MathServer") + + +@math_mcp.tool() +def add_two(n: int) -> int: + """Tool to add two to the input""" + return n + 2 + + +# Create a combined lifespan to manage both session managers +@contextlib.asynccontextmanager +async def lifespan(app: Starlette): + async with contextlib.AsyncExitStack() as stack: + await stack.enter_async_context(echo_mcp.session_manager.run()) + await stack.enter_async_context(math_mcp.session_manager.run()) + yield + + +# Create the Starlette app and mount the MCP servers +app = Starlette( + routes=[ + Mount("/echo", echo_mcp.streamable_http_app(stateless_http=True, json_response=True)), + Mount("/math", math_mcp.streamable_http_app(stateless_http=True, 
json_response=True)),
+    ],
+    lifespan=lifespan,
+)
+
+# Note: Clients connect to http://localhost:8000/echo/mcp and http://localhost:8000/math/mcp
+# To mount at the root of each path (e.g., /echo instead of /echo/mcp):
+# echo_mcp.streamable_http_app(streamable_http_path="/", stateless_http=True, json_response=True)
+# math_mcp.streamable_http_app(streamable_http_path="/", stateless_http=True, json_response=True)
+```
+
+_Full example: [examples/snippets/servers/streamable_starlette_mount.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_starlette_mount.py)_
+
+
+For low-level server implementations with Streamable HTTP, see:
+
+- Stateful server: [`examples/servers/simple-streamablehttp/`](examples/servers/simple-streamablehttp/)
+- Stateless server: [`examples/servers/simple-streamablehttp-stateless/`](examples/servers/simple-streamablehttp-stateless/)
+
+The streamable HTTP transport supports:
+
+- Stateful and stateless operation modes
+- Resumability with event stores
+- JSON or SSE response formats
+- Better scalability for multi-node deployments
+
+#### CORS Configuration for Browser-Based Clients
+
+If you'd like your server to be accessible by browser-based MCP clients, you'll need to configure CORS headers.
The `Mcp-Session-Id` header must be exposed for browser clients to access it: + +```python +from starlette.applications import Starlette +from starlette.middleware.cors import CORSMiddleware + +# Create your Starlette app first +starlette_app = Starlette(routes=[...]) + +# Then wrap it with CORS middleware +starlette_app = CORSMiddleware( + starlette_app, + allow_origins=["*"], # Configure appropriately for production + allow_methods=["GET", "POST", "DELETE"], # MCP streamable HTTP methods + expose_headers=["Mcp-Session-Id"], +) +``` + +This configuration is necessary because: + +- The MCP streamable HTTP transport uses the `Mcp-Session-Id` header for session management +- Browsers restrict access to response headers unless explicitly exposed via CORS +- Without this configuration, browser-based clients won't be able to read the session ID from initialization responses + +### Mounting to an Existing ASGI Server + +By default, SSE servers are mounted at `/sse` and Streamable HTTP servers are mounted at `/mcp`. You can customize these paths using the methods described below. + +For more information on mounting applications in Starlette, see the [Starlette documentation](https://www.starlette.io/routing/#submounting-routes). + +#### StreamableHTTP servers + +You can mount the StreamableHTTP server to an existing ASGI server using the `streamable_http_app` method. This allows you to integrate the StreamableHTTP server with other ASGI applications. + +##### Basic mounting + + +```python +"""Basic example showing how to mount StreamableHTTP server in Starlette. + +Run from the repository root: + uvicorn examples.snippets.servers.streamable_http_basic_mounting:app --reload +""" + +import contextlib + +from starlette.applications import Starlette +from starlette.routing import Mount + +from mcp.server.mcpserver import MCPServer + +# Create MCP server +mcp = MCPServer("My App") + + +@mcp.tool() +def hello() -> str: + """A simple hello tool""" + return "Hello from MCP!" 
+ + +# Create a lifespan context manager to run the session manager +@contextlib.asynccontextmanager +async def lifespan(app: Starlette): + async with mcp.session_manager.run(): + yield + + +# Mount the StreamableHTTP server to the existing ASGI server +# Transport-specific options are passed to streamable_http_app() +app = Starlette( + routes=[ + Mount("/", app=mcp.streamable_http_app(json_response=True)), + ], + lifespan=lifespan, +) +``` + +_Full example: [examples/snippets/servers/streamable_http_basic_mounting.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_http_basic_mounting.py)_ + + +##### Host-based routing + + +```python +"""Example showing how to mount StreamableHTTP server using Host-based routing. + +Run from the repository root: + uvicorn examples.snippets.servers.streamable_http_host_mounting:app --reload +""" + +import contextlib + +from starlette.applications import Starlette +from starlette.routing import Host + +from mcp.server.mcpserver import MCPServer + +# Create MCP server +mcp = MCPServer("MCP Host App") + + +@mcp.tool() +def domain_info() -> str: + """Get domain-specific information""" + return "This is served from mcp.acme.corp" + + +# Create a lifespan context manager to run the session manager +@contextlib.asynccontextmanager +async def lifespan(app: Starlette): + async with mcp.session_manager.run(): + yield + + +# Mount using Host-based routing +# Transport-specific options are passed to streamable_http_app() +app = Starlette( + routes=[ + Host("mcp.acme.corp", app=mcp.streamable_http_app(json_response=True)), + ], + lifespan=lifespan, +) +``` + +_Full example: [examples/snippets/servers/streamable_http_host_mounting.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_http_host_mounting.py)_ + + +##### Multiple servers with path configuration + + +```python +"""Example showing how to mount multiple StreamableHTTP servers with path 
configuration. + +Run from the repository root: + uvicorn examples.snippets.servers.streamable_http_multiple_servers:app --reload +""" + +import contextlib + +from starlette.applications import Starlette +from starlette.routing import Mount + +from mcp.server.mcpserver import MCPServer + +# Create multiple MCP servers +api_mcp = MCPServer("API Server") +chat_mcp = MCPServer("Chat Server") + + +@api_mcp.tool() +def api_status() -> str: + """Get API status""" + return "API is running" + + +@chat_mcp.tool() +def send_message(message: str) -> str: + """Send a chat message""" + return f"Message sent: {message}" + + +# Create a combined lifespan to manage both session managers +@contextlib.asynccontextmanager +async def lifespan(app: Starlette): + async with contextlib.AsyncExitStack() as stack: + await stack.enter_async_context(api_mcp.session_manager.run()) + await stack.enter_async_context(chat_mcp.session_manager.run()) + yield + + +# Mount the servers with transport-specific options passed to streamable_http_app() +# streamable_http_path="/" means endpoints will be at /api and /chat instead of /api/mcp and /chat/mcp +app = Starlette( + routes=[ + Mount("/api", app=api_mcp.streamable_http_app(json_response=True, streamable_http_path="/")), + Mount("/chat", app=chat_mcp.streamable_http_app(json_response=True, streamable_http_path="/")), + ], + lifespan=lifespan, +) +``` + +_Full example: [examples/snippets/servers/streamable_http_multiple_servers.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_http_multiple_servers.py)_ + + +##### Path configuration at initialization + + +```python +"""Example showing path configuration when mounting MCPServer. 
+ +Run from the repository root: + uvicorn examples.snippets.servers.streamable_http_path_config:app --reload +""" + +from starlette.applications import Starlette +from starlette.routing import Mount + +from mcp.server.mcpserver import MCPServer + +# Create a simple MCPServer server +mcp_at_root = MCPServer("My Server") + + +@mcp_at_root.tool() +def process_data(data: str) -> str: + """Process some data""" + return f"Processed: {data}" + + +# Mount at /process with streamable_http_path="/" so the endpoint is /process (not /process/mcp) +# Transport-specific options like json_response are passed to streamable_http_app() +app = Starlette( + routes=[ + Mount( + "/process", + app=mcp_at_root.streamable_http_app(json_response=True, streamable_http_path="/"), + ), + ] +) +``` + +_Full example: [examples/snippets/servers/streamable_http_path_config.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_http_path_config.py)_ + + +#### SSE servers + +> **Note**: SSE transport is being superseded by [Streamable HTTP transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http). + +You can mount the SSE server to an existing ASGI server using the `sse_app` method. This allows you to integrate the SSE server with other ASGI applications. + +```python +from starlette.applications import Starlette +from starlette.routing import Mount, Host +from mcp.server.mcpserver import MCPServer + + +mcp = MCPServer("My App") + +# Mount the SSE server to the existing ASGI server +app = Starlette( + routes=[ + Mount('/', app=mcp.sse_app()), + ] +) + +# or dynamically mount as host +app.router.routes.append(Host('mcp.acme.corp', app=mcp.sse_app())) +``` + +You can also mount multiple MCP servers at different sub-paths. 
The SSE transport automatically detects the mount path via ASGI's `root_path` mechanism, so message endpoints are correctly routed: + +```python +from starlette.applications import Starlette +from starlette.routing import Mount +from mcp.server.mcpserver import MCPServer + +# Create multiple MCP servers +github_mcp = MCPServer("GitHub API") +browser_mcp = MCPServer("Browser") +search_mcp = MCPServer("Search") + +# Mount each server at its own sub-path +# The SSE transport automatically uses ASGI's root_path to construct +# the correct message endpoint (e.g., /github/messages/, /browser/messages/) +app = Starlette( + routes=[ + Mount("/github", app=github_mcp.sse_app()), + Mount("/browser", app=browser_mcp.sse_app()), + Mount("/search", app=search_mcp.sse_app()), + ] +) +``` + +For more information on mounting applications in Starlette, see the [Starlette documentation](https://www.starlette.io/routing/#submounting-routes). + +## Advanced Usage + +### Low-Level Server + +For more control, you can use the low-level server implementation directly. 
This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API: + + +```python +"""Run from the repository root: +uv run examples/snippets/servers/lowlevel/lifespan.py +""" + +from collections.abc import AsyncIterator +from contextlib import asynccontextmanager +from typing import Any + +import mcp.server.stdio +import mcp.types as types +from mcp.server.lowlevel import NotificationOptions, Server +from mcp.server.models import InitializationOptions + + +# Mock database class for example +class Database: + """Mock database class for example.""" + + @classmethod + async def connect(cls) -> "Database": + """Connect to database.""" + print("Database connected") + return cls() + + async def disconnect(self) -> None: + """Disconnect from database.""" + print("Database disconnected") + + async def query(self, query_str: str) -> list[dict[str, str]]: + """Execute a query.""" + # Simulate database query + return [{"id": "1", "name": "Example", "query": query_str}] + + +@asynccontextmanager +async def server_lifespan(_server: Server) -> AsyncIterator[dict[str, Any]]: + """Manage server startup and shutdown lifecycle.""" + # Initialize resources on startup + db = await Database.connect() + try: + yield {"db": db} + finally: + # Clean up on shutdown + await db.disconnect() + + +# Pass lifespan to server +server = Server("example-server", lifespan=server_lifespan) + + +@server.list_tools() +async def handle_list_tools() -> list[types.Tool]: + """List available tools.""" + return [ + types.Tool( + name="query_db", + description="Query the database", + input_schema={ + "type": "object", + "properties": {"query": {"type": "string", "description": "SQL query to execute"}}, + "required": ["query"], + }, + ) + ] + + +@server.call_tool() +async def query_db(name: str, arguments: dict[str, Any]) -> list[types.TextContent]: + """Handle database query tool call.""" + if name != "query_db": + 
raise ValueError(f"Unknown tool: {name}") + + # Access lifespan context + ctx = server.request_context + db = ctx.lifespan_context["db"] + + # Execute query + results = await db.query(arguments["query"]) + + return [types.TextContent(type="text", text=f"Query results: {results}")] + + +async def run(): + """Run the server with lifespan management.""" + async with mcp.server.stdio.stdio_server() as (read_stream, write_stream): + await server.run( + read_stream, + write_stream, + InitializationOptions( + server_name="example-server", + server_version="0.1.0", + capabilities=server.get_capabilities( + notification_options=NotificationOptions(), + experimental_capabilities={}, + ), + ), + ) + + +if __name__ == "__main__": + import asyncio + + asyncio.run(run()) +``` + +_Full example: [examples/snippets/servers/lowlevel/lifespan.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/lifespan.py)_ + + +The lifespan API provides: + +- A way to initialize resources when the server starts and clean them up when it stops +- Access to initialized resources through the request context in handlers +- Type-safe context passing between lifespan and request handlers + + +```python +"""Run from the repository root: +uv run examples/snippets/servers/lowlevel/basic.py +""" + +import asyncio + +import mcp.server.stdio +import mcp.types as types +from mcp.server.lowlevel import NotificationOptions, Server +from mcp.server.models import InitializationOptions + +# Create a server instance +server = Server("example-server") + + +@server.list_prompts() +async def handle_list_prompts() -> list[types.Prompt]: + """List available prompts.""" + return [ + types.Prompt( + name="example-prompt", + description="An example prompt template", + arguments=[types.PromptArgument(name="arg1", description="Example argument", required=True)], + ) + ] + + +@server.get_prompt() +async def handle_get_prompt(name: str, arguments: dict[str, str] | None) -> 
types.GetPromptResult:
+    """Get a specific prompt by name."""
+    if name != "example-prompt":
+        raise ValueError(f"Unknown prompt: {name}")
+
+    arg1_value = (arguments or {}).get("arg1", "default")
+
+    return types.GetPromptResult(
+        description="Example prompt",
+        messages=[
+            types.PromptMessage(
+                role="user",
+                content=types.TextContent(type="text", text=f"Example prompt text with argument: {arg1_value}"),
+            )
+        ],
+    )
+
+
+async def run():
+    """Run the basic low-level server."""
+    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
+        await server.run(
+            read_stream,
+            write_stream,
+            InitializationOptions(
+                server_name="example",
+                server_version="0.1.0",
+                capabilities=server.get_capabilities(
+                    notification_options=NotificationOptions(),
+                    experimental_capabilities={},
+                ),
+            ),
+        )
+
+
+if __name__ == "__main__":
+    asyncio.run(run())
+```
+
+_Full example: [examples/snippets/servers/lowlevel/basic.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/basic.py)_
+
+
+Caution: The `uv run mcp run` and `uv run mcp dev` tools don't support the low-level server.
+
+#### Structured Output Support
+
+The low-level server supports structured output for tools, allowing you to return both human-readable content and machine-readable structured data.
Tools can define an `outputSchema` to validate their structured output: + + +```python +"""Run from the repository root: +uv run examples/snippets/servers/lowlevel/structured_output.py +""" + +import asyncio +from typing import Any + +import mcp.server.stdio +import mcp.types as types +from mcp.server.lowlevel import NotificationOptions, Server +from mcp.server.models import InitializationOptions + +server = Server("example-server") + + +@server.list_tools() +async def list_tools() -> list[types.Tool]: + """List available tools with structured output schemas.""" + return [ + types.Tool( + name="get_weather", + description="Get current weather for a city", + input_schema={ + "type": "object", + "properties": {"city": {"type": "string", "description": "City name"}}, + "required": ["city"], + }, + output_schema={ + "type": "object", + "properties": { + "temperature": {"type": "number", "description": "Temperature in Celsius"}, + "condition": {"type": "string", "description": "Weather condition"}, + "humidity": {"type": "number", "description": "Humidity percentage"}, + "city": {"type": "string", "description": "City name"}, + }, + "required": ["temperature", "condition", "humidity", "city"], + }, + ) + ] + + +@server.call_tool() +async def call_tool(name: str, arguments: dict[str, Any]) -> dict[str, Any]: + """Handle tool calls with structured output.""" + if name == "get_weather": + city = arguments["city"] + + # Simulated weather data - in production, call a weather API + weather_data = { + "temperature": 22.5, + "condition": "partly cloudy", + "humidity": 65, + "city": city, # Include the requested city + } + + # low-level server will validate structured output against the tool's + # output schema, and additionally serialize it into a TextContent block + # for backwards compatibility with pre-2025-06-18 clients. 
+
+        return weather_data
+    else:
+        raise ValueError(f"Unknown tool: {name}")
+
+
+async def run():
+    """Run the structured output server."""
+    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
+        await server.run(
+            read_stream,
+            write_stream,
+            InitializationOptions(
+                server_name="structured-output-example",
+                server_version="0.1.0",
+                capabilities=server.get_capabilities(
+                    notification_options=NotificationOptions(),
+                    experimental_capabilities={},
+                ),
+            ),
+        )
+
+
+if __name__ == "__main__":
+    asyncio.run(run())
+```
+
+_Full example: [examples/snippets/servers/lowlevel/structured_output.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/structured_output.py)_
+
+
+Tools can return data in four ways:
+
+1. **Content only**: Return a list of content blocks (default behavior before spec revision 2025-06-18)
+2. **Structured data only**: Return a dictionary that will be serialized to JSON (introduced in spec revision 2025-06-18)
+3. **Both**: Return a tuple of `(content, structured_data)`; this is the preferred option for backwards compatibility
+4. **Direct CallToolResult**: Return `CallToolResult` directly for full control (including `_meta` field)
+
+When an `outputSchema` is defined, the server automatically validates the structured output against the schema. This ensures type safety and helps catch errors early.
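The "both" return style and the schema check can be sketched in plain Python. This is an illustrative stand-in, not the SDK's code: the `validate` helper below checks only required keys and basic types, whereas the real server runs full JSON Schema validation, and the weather data and helper names are hypothetical:

```python
# Sketch of a "both" style tool return: the handler yields
# (content, structured_data); the server validates the structured part
# against the tool's output schema and older clients fall back to the
# serialized text block. Toy validation only, not the SDK's JSON Schema check.
import json

OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "temperature": {"type": "number"},
        "condition": {"type": "string"},
    },
    "required": ["temperature", "condition"],
}

TYPE_MAP = {"number": (int, float), "string": str, "object": dict}


def validate(schema: dict, data: dict) -> None:
    """Toy required-keys/type check standing in for JSON Schema validation."""
    for key in schema.get("required", []):
        if key not in data:
            raise ValueError(f"missing required key: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in data and not isinstance(data[key], TYPE_MAP[spec["type"]]):
            raise ValueError(f"wrong type for {key}")


def handle_weather() -> tuple[list[str], dict]:
    """A 'both' style return: human-readable content plus structured data."""
    structured = {"temperature": 22.5, "condition": "partly cloudy"}
    content = [json.dumps(structured)]  # text fallback for older clients
    return content, structured


content, structured = handle_weather()
validate(OUTPUT_SCHEMA, structured)  # raises if the tool broke its contract
```

The same contract applies in the real low-level server: if the structured part violates the tool's `outputSchema`, the call fails instead of sending malformed data to the client.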
+ +##### Returning CallToolResult Directly + +For full control over the response including the `_meta` field (for passing data to client applications without exposing it to the model), return `CallToolResult` directly: + + +```python +"""Run from the repository root: +uv run examples/snippets/servers/lowlevel/direct_call_tool_result.py +""" + +import asyncio +from typing import Any + +import mcp.server.stdio +import mcp.types as types +from mcp.server.lowlevel import NotificationOptions, Server +from mcp.server.models import InitializationOptions + +server = Server("example-server") + + +@server.list_tools() +async def list_tools() -> list[types.Tool]: + """List available tools.""" + return [ + types.Tool( + name="advanced_tool", + description="Tool with full control including _meta field", + input_schema={ + "type": "object", + "properties": {"message": {"type": "string"}}, + "required": ["message"], + }, + ) + ] + + +@server.call_tool() +async def handle_call_tool(name: str, arguments: dict[str, Any]) -> types.CallToolResult: + """Handle tool calls by returning CallToolResult directly.""" + if name == "advanced_tool": + message = str(arguments.get("message", "")) + return types.CallToolResult( + content=[types.TextContent(type="text", text=f"Processed: {message}")], + structured_content={"result": "success", "message": message}, + _meta={"hidden": "data for client applications only"}, + ) + + raise ValueError(f"Unknown tool: {name}") + + +async def run(): + """Run the server.""" + async with mcp.server.stdio.stdio_server() as (read_stream, write_stream): + await server.run( + read_stream, + write_stream, + InitializationOptions( + server_name="example", + server_version="0.1.0", + capabilities=server.get_capabilities( + notification_options=NotificationOptions(), + experimental_capabilities={}, + ), + ), + ) + + +if __name__ == "__main__": + asyncio.run(run()) +``` + +_Full example: 
[examples/snippets/servers/lowlevel/direct_call_tool_result.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/direct_call_tool_result.py)_ + + +**Note:** When returning `CallToolResult`, you bypass the automatic content/structured conversion. You must construct the complete response yourself. + +### Pagination (Advanced) + +For servers that need to handle large datasets, the low-level server provides paginated versions of list operations. This is an optional optimization - most servers won't need pagination unless they're dealing with hundreds or thousands of items. + +#### Server-side Implementation + + +```python +"""Example of implementing pagination with MCP server decorators.""" + +import mcp.types as types +from mcp.server.lowlevel import Server + +# Initialize the server +server = Server("paginated-server") + +# Sample data to paginate +ITEMS = [f"Item {i}" for i in range(1, 101)] # 100 items + + +@server.list_resources() +async def list_resources_paginated(request: types.ListResourcesRequest) -> types.ListResourcesResult: + """List resources with pagination support.""" + page_size = 10 + + # Extract cursor from request params + cursor = request.params.cursor if request.params is not None else None + + # Parse cursor to get offset + start = 0 if cursor is None else int(cursor) + end = start + page_size + + # Get page of resources + page_items = [ + types.Resource(uri=f"resource://items/{item}", name=item, description=f"Description for {item}") + for item in ITEMS[start:end] + ] + + # Determine next cursor + next_cursor = str(end) if end < len(ITEMS) else None + + return types.ListResourcesResult(resources=page_items, next_cursor=next_cursor) +``` + +_Full example: [examples/snippets/servers/pagination_example.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/pagination_example.py)_ + + +#### Client-side Consumption + + +```python +"""Example of consuming paginated MCP 
endpoints from a client.""" + +import asyncio + +from mcp.client.session import ClientSession +from mcp.client.stdio import StdioServerParameters, stdio_client +from mcp.types import PaginatedRequestParams, Resource + + +async def list_all_resources() -> None: + """Fetch all resources using pagination.""" + async with stdio_client(StdioServerParameters(command="uv", args=["run", "mcp-simple-pagination"])) as ( + read, + write, + ): + async with ClientSession(read, write) as session: + await session.initialize() + + all_resources: list[Resource] = [] + cursor = None + + while True: + # Fetch a page of resources + result = await session.list_resources(params=PaginatedRequestParams(cursor=cursor)) + all_resources.extend(result.resources) + + print(f"Fetched {len(result.resources)} resources") + + # Check if there are more pages + if result.next_cursor: + cursor = result.next_cursor + else: + break + + print(f"Total resources: {len(all_resources)}") + + +if __name__ == "__main__": + asyncio.run(list_all_resources()) +``` + +_Full example: [examples/snippets/clients/pagination_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/pagination_client.py)_ + + +#### Key Points + +- **Cursors are opaque strings** - the server defines the format (numeric offsets, timestamps, etc.) +- **Return `nextCursor=None`** when there are no more pages +- **Backward compatible** - clients that don't support pagination will still work (they'll just get the first page) +- **Flexible page sizes** - Each endpoint can define its own page size based on data characteristics + +See the [simple-pagination example](examples/servers/simple-pagination) for a complete implementation. 
+ +### Writing MCP Clients + +The SDK provides a high-level client interface for connecting to MCP servers using various [transports](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports): + + +```python +"""cd to the `examples/snippets/clients` directory and run: +uv run client +""" + +import asyncio +import os + +from mcp import ClientSession, StdioServerParameters, types +from mcp.client.stdio import stdio_client +from mcp.shared.context import RequestContext + +# Create server parameters for stdio connection +server_params = StdioServerParameters( + command="uv", # Using uv to run the server + args=["run", "server", "mcpserver_quickstart", "stdio"], # We're already in snippets dir + env={"UV_INDEX": os.environ.get("UV_INDEX", "")}, +) + + +# Optional: create a sampling callback +async def handle_sampling_message( + context: RequestContext[ClientSession, None], params: types.CreateMessageRequestParams +) -> types.CreateMessageResult: + print(f"Sampling request: {params.messages}") + return types.CreateMessageResult( + role="assistant", + content=types.TextContent( + type="text", + text="Hello, world! 
from model", + ), + model="gpt-3.5-turbo", + stop_reason="endTurn", + ) + + +async def run(): + async with stdio_client(server_params) as (read, write): + async with ClientSession(read, write, sampling_callback=handle_sampling_message) as session: + # Initialize the connection + await session.initialize() + + # List available prompts + prompts = await session.list_prompts() + print(f"Available prompts: {[p.name for p in prompts.prompts]}") + + # Get a prompt (greet_user prompt from mcpserver_quickstart) + if prompts.prompts: + prompt = await session.get_prompt("greet_user", arguments={"name": "Alice", "style": "friendly"}) + print(f"Prompt result: {prompt.messages[0].content}") + + # List available resources + resources = await session.list_resources() + print(f"Available resources: {[r.uri for r in resources.resources]}") + + # List available tools + tools = await session.list_tools() + print(f"Available tools: {[t.name for t in tools.tools]}") + + # Read a resource (greeting resource from mcpserver_quickstart) + resource_content = await session.read_resource("greeting://World") + content_block = resource_content.contents[0] + if isinstance(content_block, types.TextContent): + print(f"Resource content: {content_block.text}") + + # Call a tool (add tool from mcpserver_quickstart) + result = await session.call_tool("add", arguments={"a": 5, "b": 3}) + result_unstructured = result.content[0] + if isinstance(result_unstructured, types.TextContent): + print(f"Tool result: {result_unstructured.text}") + result_structured = result.structured_content + print(f"Structured tool result: {result_structured}") + + +def main(): + """Entry point for the client script.""" + asyncio.run(run()) + + +if __name__ == "__main__": + main() +``` + +_Full example: [examples/snippets/clients/stdio_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/stdio_client.py)_ + + +Clients can also connect using [Streamable HTTP 
transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http): + + +```python +"""Run from the repository root: +uv run examples/snippets/clients/streamable_basic.py +""" + +import asyncio + +from mcp import ClientSession +from mcp.client.streamable_http import streamable_http_client + + +async def main(): + # Connect to a streamable HTTP server + async with streamable_http_client("http://localhost:8000/mcp") as ( + read_stream, + write_stream, + _, + ): + # Create a session using the client streams + async with ClientSession(read_stream, write_stream) as session: + # Initialize the connection + await session.initialize() + # List available tools + tools = await session.list_tools() + print(f"Available tools: {[tool.name for tool in tools.tools]}") + + +if __name__ == "__main__": + asyncio.run(main()) +``` + +_Full example: [examples/snippets/clients/streamable_basic.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/streamable_basic.py)_ + + +### Client Display Utilities + +When building MCP clients, the SDK provides utilities to help display human-readable names for tools, resources, and prompts: + + +```python +"""cd to the `examples/snippets` directory and run: +uv run display-utilities-client +""" + +import asyncio +import os + +from mcp import ClientSession, StdioServerParameters +from mcp.client.stdio import stdio_client +from mcp.shared.metadata_utils import get_display_name + +# Create server parameters for stdio connection +server_params = StdioServerParameters( + command="uv", # Using uv to run the server + args=["run", "server", "mcpserver_quickstart", "stdio"], + env={"UV_INDEX": os.environ.get("UV_INDEX", "")}, +) + + +async def display_tools(session: ClientSession): + """Display available tools with human-readable names""" + tools_response = await session.list_tools() + + for tool in tools_response.tools: + # get_display_name() returns the title if available, otherwise 
the name + display_name = get_display_name(tool) + print(f"Tool: {display_name}") + if tool.description: + print(f" {tool.description}") + + +async def display_resources(session: ClientSession): + """Display available resources with human-readable names""" + resources_response = await session.list_resources() + + for resource in resources_response.resources: + display_name = get_display_name(resource) + print(f"Resource: {display_name} ({resource.uri})") + + templates_response = await session.list_resource_templates() + for template in templates_response.resource_templates: + display_name = get_display_name(template) + print(f"Resource Template: {display_name}") + + +async def run(): + """Run the display utilities example.""" + async with stdio_client(server_params) as (read, write): + async with ClientSession(read, write) as session: + # Initialize the connection + await session.initialize() + + print("=== Available Tools ===") + await display_tools(session) + + print("\n=== Available Resources ===") + await display_resources(session) + + +def main(): + """Entry point for the display utilities client.""" + asyncio.run(run()) + + +if __name__ == "__main__": + main() +``` + +_Full example: [examples/snippets/clients/display_utilities.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/display_utilities.py)_ + + +The `get_display_name()` function implements the proper precedence rules for displaying names: + +- For tools: `title` > `annotations.title` > `name` +- For other objects: `title` > `name` + +This ensures your client UI shows the most user-friendly names that servers provide. + +### OAuth Authentication for Clients + +The SDK includes [authorization support](https://modelcontextprotocol.io/specification/2025-03-26/basic/authorization) for connecting to protected MCP servers: + + +```python +"""Before running, specify running MCP RS server URL. 
+To spin up RS server locally, see + examples/servers/simple-auth/README.md + +cd to the `examples/snippets` directory and run: + uv run oauth-client +""" + +import asyncio +from urllib.parse import parse_qs, urlparse + +import httpx +from pydantic import AnyUrl + +from mcp import ClientSession +from mcp.client.auth import OAuthClientProvider, TokenStorage +from mcp.client.streamable_http import streamable_http_client +from mcp.shared.auth import OAuthClientInformationFull, OAuthClientMetadata, OAuthToken + + +class InMemoryTokenStorage(TokenStorage): + """Demo In-memory token storage implementation.""" + + def __init__(self): + self.tokens: OAuthToken | None = None + self.client_info: OAuthClientInformationFull | None = None + + async def get_tokens(self) -> OAuthToken | None: + """Get stored tokens.""" + return self.tokens + + async def set_tokens(self, tokens: OAuthToken) -> None: + """Store tokens.""" + self.tokens = tokens + + async def get_client_info(self) -> OAuthClientInformationFull | None: + """Get stored client information.""" + return self.client_info + + async def set_client_info(self, client_info: OAuthClientInformationFull) -> None: + """Store client information.""" + self.client_info = client_info + + +async def handle_redirect(auth_url: str) -> None: + print(f"Visit: {auth_url}") + + +async def handle_callback() -> tuple[str, str | None]: + callback_url = input("Paste callback URL: ") + params = parse_qs(urlparse(callback_url).query) + return params["code"][0], params.get("state", [None])[0] + + +async def main(): + """Run the OAuth client example.""" + oauth_auth = OAuthClientProvider( + server_url="http://localhost:8001", + client_metadata=OAuthClientMetadata( + client_name="Example MCP Client", + redirect_uris=[AnyUrl("http://localhost:3000/callback")], + grant_types=["authorization_code", "refresh_token"], + response_types=["code"], + scope="user", + ), + storage=InMemoryTokenStorage(), + redirect_handler=handle_redirect, + 
callback_handler=handle_callback, + ) + + async with httpx.AsyncClient(auth=oauth_auth, follow_redirects=True) as custom_client: + async with streamable_http_client("http://localhost:8001/mcp", http_client=custom_client) as (read, write, _): + async with ClientSession(read, write) as session: + await session.initialize() + + tools = await session.list_tools() + print(f"Available tools: {[tool.name for tool in tools.tools]}") + + resources = await session.list_resources() + print(f"Available resources: {[r.uri for r in resources.resources]}") + + +def run(): + asyncio.run(main()) + + +if __name__ == "__main__": + run() +``` + +_Full example: [examples/snippets/clients/oauth_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/oauth_client.py)_ + + +For a complete working example, see [`examples/clients/simple-auth-client/`](examples/clients/simple-auth-client/). + +### Parsing Tool Results + +When calling tools through MCP, the `CallToolResult` object contains the tool's response in a structured format. Understanding how to parse this result is essential for properly handling tool outputs. 
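A quick way to see the shape of this parsing problem is a tiny helper that pulls all text out of a result's content list. The sketch below is a hedged illustration: `collect_text` and `FakeTextContent` are made-up names for demonstration and are not part of the SDK; the helper duck-types on a `text` attribute instead of using the real `types.TextContent` class so it runs standalone.

```python
from dataclasses import dataclass


@dataclass
class FakeTextContent:
    """Stand-in for a text content block; real code would use types.TextContent."""
    text: str


def collect_text(content_blocks) -> str:
    """Join the text of every text-bearing block in a tool result.

    Blocks without a `text` attribute (e.g. images, embedded resources)
    are skipped, mirroring an isinstance() dispatch on content types.
    """
    return "\n".join(
        block.text for block in content_blocks if hasattr(block, "text")
    )


blocks = [FakeTextContent("first"), object(), FakeTextContent("second")]
print(collect_text(blocks))  # prints the two text blocks, one per line
```

In real client code the `hasattr` check would typically be an `isinstance(content, types.TextContent)` test, as in the full walkthrough that follows.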
+ +```python +"""examples/snippets/clients/parsing_tool_results.py""" + +import asyncio + +from mcp import ClientSession, StdioServerParameters, types +from mcp.client.stdio import stdio_client + + +async def parse_tool_results(): + """Demonstrates how to parse different types of content in CallToolResult.""" + server_params = StdioServerParameters( + command="python", args=["path/to/mcp_server.py"] + ) + + async with stdio_client(server_params) as (read, write): + async with ClientSession(read, write) as session: + await session.initialize() + + # Example 1: Parsing text content + result = await session.call_tool("get_data", {"format": "text"}) + for content in result.content: + if isinstance(content, types.TextContent): + print(f"Text: {content.text}") + + # Example 2: Parsing structured content from JSON tools + result = await session.call_tool("get_user", {"id": "123"}) + if hasattr(result, "structuredContent") and result.structuredContent: + # Access structured data directly + user_data = result.structuredContent + print(f"User: {user_data.get('name')}, Age: {user_data.get('age')}") + + # Example 3: Parsing embedded resources + result = await session.call_tool("read_config", {}) + for content in result.content: + if isinstance(content, types.EmbeddedResource): + resource = content.resource + if isinstance(resource, types.TextResourceContents): + print(f"Config from {resource.uri}: {resource.text}") + elif isinstance(resource, types.BlobResourceContents): + print(f"Binary data from {resource.uri}") + + # Example 4: Parsing image content + result = await session.call_tool("generate_chart", {"data": [1, 2, 3]}) + for content in result.content: + if isinstance(content, types.ImageContent): + print(f"Image ({content.mimeType}): {len(content.data)} bytes") + + # Example 5: Handling errors + result = await session.call_tool("failing_tool", {}) + if result.isError: + print("Tool execution failed!") + for content in result.content: + if isinstance(content, 
types.TextContent): + print(f"Error: {content.text}") + + +async def main(): + await parse_tool_results() + + +if __name__ == "__main__": + asyncio.run(main()) ``` ### MCP Primitives @@ -635,18 +2502,20 @@ The MCP protocol defines three core primitives that servers can implement: MCP servers declare capabilities during initialization: -| Capability | Feature Flag | Description | -|-------------|------------------------------|------------------------------------| -| `prompts` | `listChanged` | Prompt template management | -| `resources` | `subscribe`<br/>`listChanged`| Resource exposure and updates | -| `tools` | `listChanged` | Tool discovery and execution | -| `logging` | - | Server logging configuration | -| `completion`| - | Argument completion suggestions | +| Capability | Feature Flag | Description | +|--------------|------------------------------|------------------------------------| +| `prompts` | `listChanged` | Prompt template management | +| `resources` | `subscribe`<br/>`listChanged`| Resource exposure and updates | +| `tools` | `listChanged` | Tool discovery and execution | +| `logging` | - | Server logging configuration | +| `completions`| - | Argument completion suggestions | ## Documentation +- [API Reference](https://modelcontextprotocol.github.io/python-sdk/api/) +- [Experimental Features (Tasks)](https://modelcontextprotocol.github.io/python-sdk/experimental/tasks/) - [Model Context Protocol documentation](https://modelcontextprotocol.io) -- [Model Context Protocol specification](https://spec.modelcontextprotocol.io) +- [Model Context Protocol specification](https://modelcontextprotocol.io/specification/latest) - [Officially supported servers](https://github.com/modelcontextprotocol/servers) ## Contributing diff --git a/context/llms-full.txt b/context/llms-full.txt index 6d3f928..028cf1f 100644 --- a/context/llms-full.txt +++ b/context/llms-full.txt @@ -1,18371 +1,24339 @@ -# Example Clients -Source: https://modelcontextprotocol.io/clients - -A list of applications that support MCP integrations +# Build an MCP client +Source: https://modelcontextprotocol.io/docs/develop/build-client -This page provides an overview of applications that support the Model Context Protocol (MCP). Each client may support different MCP features, allowing for varying levels of integration with MCP servers. - -## Feature support matrix - -| Client | [Resources] | [Prompts] | [Tools] | [Sampling] | Roots | Notes | -| ---------------------------------------- | ----------- | --------- | ------- | ---------- | ----- | ----------------------------------------------------------------------------------------------- | -| [5ire][5ire] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools. | -| [Apify MCP Tester][Apify MCP Tester] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools | -| [BeeAI Framework][BeeAI Framework] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools in agentic workflows.
| -| [Claude Code][Claude Code] | ❌ | ✅ | ✅ | ❌ | ❌ | Supports prompts and tools | -| [Claude Desktop App][Claude Desktop] | ✅ | ✅ | ✅ | ❌ | ❌ | Supports tools, prompts, and resources. | -| [Cline][Cline] | ✅ | ❌ | ✅ | ❌ | ❌ | Supports tools and resources. | -| [Continue][Continue] | ✅ | ✅ | ✅ | ❌ | ❌ | Supports tools, prompts, and resources. | -| [Copilot-MCP][CopilotMCP] | ✅ | ❌ | ✅ | ❌ | ❌ | Supports tools and resources. | -| [Cursor][Cursor] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools. | -| [Daydreams Agents][Daydreams] | ✅ | ✅ | ✅ | ❌ | ❌ | Support for drop in Servers to Daydreams agents | -| [Emacs Mcp][Mcp.el] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools in Emacs. | -| [fast-agent][fast-agent] | ✅ | ✅ | ✅ | ✅ | ✅ | Full multimodal MCP support, with end-to-end tests | -| [FLUJO][FLUJO] | ❌ | ❌ | ✅ | ❌ | ❌ | Support for resources, Prompts and Roots are coming soon | -| [Genkit][Genkit] | ⚠️ | ✅ | ✅ | ❌ | ❌ | Supports resource list and lookup through tools. | -| [GenAIScript][GenAIScript] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools. | -| [Goose][Goose] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools. | -| [Klavis AI Slack/Discord/Web][Klavis AI] | ✅ | ❌ | ✅ | ❌ | ❌ | Supports tools and resources. | -| [LibreChat][LibreChat] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools for Agents | -| [mcp-agent][mcp-agent] | ❌ | ❌ | ✅ | ⚠️ | ❌ | Supports tools, server connection management, and agent workflows. | -| [MCPHub][MCPHub] | ✅ | ✅ | ✅ | ❌ | ❌ | Supports tools, resources, and prompts in Neovim | -| [MCPOmni-Connect][MCPOmni-Connect] | ✅ | ✅ | ✅ | ✅ | ❌ | Supports tools with agentic mode, ReAct, and orchestrator capabilities. | -| [Microsoft Copilot Studio] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools | -| [OpenSumi][OpenSumi] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools in OpenSumi | -| [oterm][oterm] | ❌ | ✅ | ✅ | ✅ | ❌ | Supports tools, prompts and sampling for Ollama. | -| [Roo Code][Roo Code] | ✅ | ❌ | ✅ | ❌ | ❌ | Supports tools and resources. 
| -| [Sourcegraph Cody][Cody] | ✅ | ❌ | ❌ | ❌ | ❌ | Supports resources through OpenCTX | -| [SpinAI][SpinAI] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools for Typescript AI Agents | -| [Superinterface][Superinterface] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools | -| [TheiaAI/TheiaIDE][TheiaAI/TheiaIDE] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools for Agents in Theia AI and the AI-powered Theia IDE | -| [TypingMind App][TypingMind App] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools at app-level (appear as plugins) or when assigned to Agents | -| [VS Code GitHub Copilot][VS Code] | ❌ | ❌ | ✅ | ❌ | ✅ | Supports dynamic tool/roots discovery, secure secret configuration, and explicit tool prompting | -| [Windsurf Editor][Windsurf] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools with AI Flow for collaborative development. | -| [Witsy][Witsy] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools in Witsy. | -| [Zed][Zed] | ❌ | ✅ | ❌ | ❌ | ❌ | Prompts appear as slash commands | +Get started building your own client that can integrate with all MCP servers. -[5ire]: https://github.com/nanbingxyz/5ire +In this tutorial, you'll learn how to build an LLM-powered chatbot client that connects to MCP servers. -[Apify MCP Tester]: https://apify.com/jiri.spilka/tester-mcp-client +Before you begin, it helps to have gone through our [Build an MCP Server](/docs/develop/build-server) tutorial so you can understand how clients and servers communicate. 
-[BeeAI Framework]: https://i-am-bee.github.io/beeai-framework + + + [You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/mcp-client-python) -[Claude Code]: https://claude.ai/code + ## System Requirements -[Claude Desktop]: https://claude.ai/download + Before starting, ensure your system meets these requirements: -[Cline]: https://github.com/cline/cline + * Mac or Windows computer + * Latest Python version installed + * Latest version of `uv` installed -[Continue]: https://github.com/continuedev/continue + ## Setting Up Your Environment -[CopilotMCP]: https://github.com/VikashLoomba/copilot-mcp + First, create a new Python project with `uv`: -[Cursor]: https://cursor.com + + ```bash macOS/Linux theme={null} + # Create project directory + uv init mcp-client + cd mcp-client -[Daydreams]: https://github.com/daydreamsai/daydreams + # Create virtual environment + uv venv -[Klavis AI]: https://www.klavis.ai/ + # Activate virtual environment + source .venv/bin/activate -[Mcp.el]: https://github.com/lizqwerscott/mcp.el + # Install required packages + uv add mcp anthropic python-dotenv -[fast-agent]: https://github.com/evalstate/fast-agent + # Remove boilerplate files + rm main.py -[FLUJO]: https://github.com/mario-andreschak/flujo + # Create our main file + touch client.py + ``` -[Genkit]: https://github.com/firebase/genkit + ```powershell Windows theme={null} + # Create project directory + uv init mcp-client + cd mcp-client -[GenAIScript]: https://microsoft.github.io/genaiscript/reference/scripts/mcp-tools/ + # Create virtual environment + uv venv -[Goose]: https://block.github.io/goose/docs/goose-architecture/#interoperability-with-extensions + # Activate virtual environment + .venv\Scripts\activate -[LibreChat]: https://github.com/danny-avila/LibreChat + # Install required packages + uv add mcp anthropic python-dotenv -[mcp-agent]: https://github.com/lastmile-ai/mcp-agent + # Remove boilerplate 
files + del main.py -[MCPHub]: https://github.com/ravitemer/mcphub.nvim + # Create our main file + new-item client.py + ``` + -[MCPOmni-Connect]: https://github.com/Abiorh001/mcp_omni_connect + ## Setting Up Your API Key -[Microsoft Copilot Studio]: https://learn.microsoft.com/en-us/microsoft-copilot-studio/agent-extend-action-mcp + You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys). -[OpenSumi]: https://github.com/opensumi/core + Create a `.env` file to store it: -[oterm]: https://github.com/ggozad/oterm + ```bash theme={null} + echo "ANTHROPIC_API_KEY=your-api-key-goes-here" > .env + ``` -[Roo Code]: https://roocode.com + Add `.env` to your `.gitignore`: -[Cody]: https://sourcegraph.com/cody + ```bash theme={null} + echo ".env" >> .gitignore + ``` -[SpinAI]: https://spinai.dev + + Make sure you keep your `ANTHROPIC_API_KEY` secure! + -[Superinterface]: https://superinterface.ai + ## Creating the Client -[TheiaAI/TheiaIDE]: https://eclipsesource.com/blogs/2024/12/19/theia-ide-and-theia-ai-support-mcp/ + ### Basic Client Structure -[TypingMind App]: https://www.typingmind.com + First, let's set up our imports and create the basic client class: -[VS Code]: https://code.visualstudio.com/ + ```python theme={null} + import asyncio + from typing import Optional + from contextlib import AsyncExitStack -[Windsurf]: https://codeium.com/windsurf + from mcp import ClientSession, StdioServerParameters + from mcp.client.stdio import stdio_client -[Witsy]: https://github.com/nbonamy/witsy + from anthropic import Anthropic + from dotenv import load_dotenv -[Zed]: https://zed.dev + load_dotenv() # load environment variables from .env -[Resources]: https://modelcontextprotocol.io/docs/concepts/resources + class MCPClient: + def __init__(self): + # Initialize session and client objects + self.session: Optional[ClientSession] = None + self.exit_stack = AsyncExitStack() + self.anthropic = Anthropic() + # methods will go here + 
``` -[Prompts]: https://modelcontextprotocol.io/docs/concepts/prompts + ### Server Connection Management -[Tools]: https://modelcontextprotocol.io/docs/concepts/tools + Next, we'll implement the method to connect to an MCP server: -[Sampling]: https://modelcontextprotocol.io/docs/concepts/sampling + ```python theme={null} + async def connect_to_server(self, server_script_path: str): + """Connect to an MCP server -## Client details + Args: + server_script_path: Path to the server script (.py or .js) + """ + is_python = server_script_path.endswith('.py') + is_js = server_script_path.endswith('.js') + if not (is_python or is_js): + raise ValueError("Server script must be a .py or .js file") -### 5ire + command = "python" if is_python else "node" + server_params = StdioServerParameters( + command=command, + args=[server_script_path], + env=None + ) -[5ire](https://github.com/nanbingxyz/5ire) is an open source cross-platform desktop AI assistant that supports tools through MCP servers. + stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params)) + self.stdio, self.write = stdio_transport + self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write)) -**Key features:** + await self.session.initialize() -* Built-in MCP servers can be quickly enabled and disabled. -* Users can add more servers by modifying the configuration file. -* It is open-source and user-friendly, suitable for beginners. -* Future support for MCP will be continuously improved. + # List available tools + response = await self.session.list_tools() + tools = response.tools + print("\nConnected to server with tools:", [tool.name for tool in tools]) + ``` -### Apify MCP Tester + ### Query Processing Logic -[Apify MCP Tester](https://github.com/apify/tester-mcp-client) is an open-source client that connects to any MCP server using Server-Sent Events (SSE). 
-It is a standalone Apify Actor designed for testing MCP servers over SSE, with support for Authorization headers. -It uses plain JavaScript (old-school style) and is hosted on Apify, allowing you to run it without any setup. + Now let's add the core functionality for processing queries and handling tool calls: -**Key features:** + ```python theme={null} + async def process_query(self, query: str) -> str: + """Process a query using Claude and available tools""" + messages = [ + { + "role": "user", + "content": query + } + ] -* Connects to any MCP server via SSE. -* Works with the [Apify MCP Server](https://apify.com/apify/actors-mcp-server) to interact with one or more Apify [Actors](https://apify.com/store). -* Dynamically utilizes tools based on context and user queries (if supported by the server). + response = await self.session.list_tools() + available_tools = [{ + "name": tool.name, + "description": tool.description, + "input_schema": tool.inputSchema + } for tool in response.tools] -### BeeAI Framework + # Initial Claude API call + response = self.anthropic.messages.create( + model="claude-sonnet-4-20250514", + max_tokens=1000, + messages=messages, + tools=available_tools + ) -[BeeAI Framework](https://i-am-bee.github.io/beeai-framework) is an open-source framework for building, deploying, and serving powerful agentic workflows at scale. The framework includes the **MCP Tool**, a native feature that simplifies the integration of MCP servers into agentic workflows. + # Process response and handle tool calls + final_text = [] -**Key features:** + assistant_message_content = [] + for content in response.content: + if content.type == 'text': + final_text.append(content.text) + assistant_message_content.append(content) + elif content.type == 'tool_use': + tool_name = content.name + tool_args = content.input -* Seamlessly incorporate MCP tools into agentic workflows. -* Quickly instantiate framework-native tools from connected MCP client(s). 
-* Planned future support for agentic MCP capabilities. + # Execute tool call + result = await self.session.call_tool(tool_name, tool_args) + final_text.append(f"[Calling tool {tool_name} with args {tool_args}]") -**Learn more:** + assistant_message_content.append(content) + messages.append({ + "role": "assistant", + "content": assistant_message_content + }) + messages.append({ + "role": "user", + "content": [ + { + "type": "tool_result", + "tool_use_id": content.id, + "content": result.content + } + ] + }) -* [Example of using MCP tools in agentic workflow](https://i-am-bee.github.io/beeai-framework/#/typescript/tools?id=using-the-mcptool-class) + # Get next response from Claude + response = self.anthropic.messages.create( + model="claude-sonnet-4-20250514", + max_tokens=1000, + messages=messages, + tools=available_tools + ) -### Claude Code + final_text.append(response.content[0].text) -Claude Code is an interactive agentic coding tool from Anthropic that helps you code faster through natural language commands. It supports MCP integration for prompts and tools, and also functions as an MCP server to integrate with other clients. + return "\n".join(final_text) + ``` -**Key features:** + ### Interactive Chat Interface -* Tool and prompt support for MCP servers -* Offers its own tools through an MCP server for integrating with other MCP clients + Now we'll add the chat loop and cleanup functionality: -### Claude Desktop App + ```python theme={null} + async def chat_loop(self): + """Run an interactive chat loop""" + print("\nMCP Client Started!") + print("Type your queries or 'quit' to exit.") -The Claude desktop application provides comprehensive support for MCP, enabling deep integration with local tools and data sources. 
+ while True: + try: + query = input("\nQuery: ").strip() -**Key features:** + if query.lower() == 'quit': + break -* Full support for resources, allowing attachment of local files and data -* Support for prompt templates -* Tool integration for executing commands and scripts -* Local server connections for enhanced privacy and security + response = await self.process_query(query) + print("\n" + response) -> ⓘ Note: The Claude.ai web application does not currently support MCP. MCP features are only available in the desktop application. + except Exception as e: + print(f"\nError: {str(e)}") -### Cline + async def cleanup(self): + """Clean up resources""" + await self.exit_stack.aclose() + ``` -[Cline](https://github.com/cline/cline) is an autonomous coding agent in VS Code that edits files, runs commands, uses a browser, and more–with your permission at each step. + ### Main Entry Point -**Key features:** + Finally, we'll add the main execution logic: -* Create and add tools through natural language (e.g. "add a tool that searches the web") -* Share custom MCP servers Cline creates with others via the `~/Documents/Cline/MCP` directory -* Displays configured MCP servers along with their tools, resources, and any error logs + ```python theme={null} + async def main(): + if len(sys.argv) < 2: + print("Usage: python client.py <path_to_server_script>") + sys.exit(1) -### Continue + client = MCPClient() + try: + await client.connect_to_server(sys.argv[1]) + await client.chat_loop() + finally: + await client.cleanup() -[Continue](https://github.com/continuedev/continue) is an open-source AI code assistant, with built-in support for all MCP features. + if __name__ == "__main__": + import sys + asyncio.run(main()) + ``` -**Key features** + You can find the complete `client.py` file [here](https://github.com/modelcontextprotocol/quickstart-resources/blob/main/mcp-client-python/client.py).
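The script-type validation that `connect_to_server()` performs is easy to isolate and unit-test. A minimal sketch, assuming only the extension rules stated in the tutorial (`pick_server_command` is a hypothetical name, not part of the quickstart code):

```python
def pick_server_command(server_script_path: str) -> str:
    """Choose the interpreter used to launch an MCP server script.

    Mirrors connect_to_server(): .py scripts run under python,
    .js scripts under node, and anything else is rejected.
    """
    if server_script_path.endswith(".py"):
        return "python"
    if server_script_path.endswith(".js"):
        return "node"
    raise ValueError("Server script must be a .py or .js file")


print(pick_server_command("weather.py"))      # -> python
print(pick_server_command("build/index.js"))  # -> node
```

Factoring the check out this way keeps `connect_to_server()` focused on transport setup and makes the error path trivially testable.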
-* Type "@" to mention MCP resources -* Prompt templates surface as slash commands -* Use both built-in and MCP tools directly in chat -* Supports VS Code and JetBrains IDEs, with any LLM + ## Key Components Explained -### Copilot-MCP + ### 1. Client Initialization -[Copilot-MCP](https://github.com/VikashLoomba/copilot-mcp) enables AI coding assistance via MCP. + * The `MCPClient` class initializes with session management and API clients + * Uses `AsyncExitStack` for proper resource management + * Configures the Anthropic client for Claude interactions -**Key features:** + ### 2. Server Connection -* Support for MCP tools and resources -* Integration with development workflows -* Extensible AI capabilities + * Supports both Python and Node.js servers + * Validates server script type + * Sets up proper communication channels + * Initializes the session and lists available tools -### Cursor + ### 3. Query Processing -[Cursor](https://docs.cursor.com/advanced/model-context-protocol) is an AI code editor. + * Maintains conversation context + * Handles Claude's responses and tool calls + * Manages the message flow between Claude and tools + * Combines results into a coherent response -**Key Features**: + ### 4. Interactive Interface -* Support for MCP tools in Cursor Composer -* Support for both STDIO and SSE + * Provides a simple command-line interface + * Handles user input and displays responses + * Includes basic error handling + * Allows graceful exit -### Daydreams + ### 5. Resource Management -[Daydreams](https://github.com/daydreamsai/daydreams) is a generative agent framework for executing anything onchain + * Proper cleanup of resources + * Error handling for connection issues + * Graceful shutdown procedures -**Key features:** + ## Common Customization Points -* Supports MCP Servers in config -* Exposes MCP Client + 1. 
**Tool Handling** + * Modify `process_query()` to handle specific tool types + * Add custom error handling for tool calls + * Implement tool-specific response formatting -### Emacs Mcp + 2. **Response Processing** + * Customize how tool results are formatted + * Add response filtering or transformation + * Implement custom logging -[Emacs Mcp](https://github.com/lizqwerscott/mcp.el) is an Emacs client designed to interface with MCP servers, enabling seamless connections and interactions. It provides MCP tool invocation support for AI plugins like [gptel](https://github.com/karthink/gptel) and [llm](https://github.com/ahyatt/llm), adhering to Emacs' standard tool invocation format. This integration enhances the functionality of AI tools within the Emacs ecosystem. + 3. **User Interface** + * Add a GUI or web interface + * Implement rich console output + * Add command history or auto-completion -**Key features:** + ## Running the Client -* Provides MCP tool support for Emacs. + To run your client with any MCP server: -### fast-agent + ```bash theme={null} + uv run client.py path/to/server.py # python server + uv run client.py path/to/build/index.js # node server + ``` -[fast-agent](https://github.com/evalstate/fast-agent) is a Python Agent framework, with simple declarative support for creating Agents and Workflows, with full multi-modal support for Anthropic and OpenAI models. + + If you're continuing [the weather tutorial from the server quickstart](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-python), your command might look something like this: `python client.py .../quickstart-resources/weather-server-python/weather.py` + -**Key features:** + The client will: -* PDF and Image support, based on MCP Native types -* Interactive front-end to develop and diagnose Agent applications, including passthrough and playback simulators -* Built in support for "Building Effective Agents" workflows. -* Deploy Agents as MCP Servers + 1. 
Connect to the specified server + 2. List available tools + 3. Start an interactive chat session where you can: + * Enter queries + * See tool executions + * Get responses from Claude -### FLUJO + Here's an example of what it should look like if connected to the weather server from the server quickstart: -Think n8n + ChatGPT. FLUJO is an desktop application that integrates with MCP to provide a workflow-builder interface for AI interactions. Built with Next.js and React, it supports both online and offline (ollama) models, it manages API Keys and environment variables centrally and can install MCP Servers from GitHub. FLUJO has an ChatCompletions endpoint and flows can be executed from other AI applications like Cline, Roo or Claude. + + + -**Key features:** + ## How It Works -* Environment & API Key Management -* Model Management -* MCP Server Integration -* Workflow Orchestration -* Chat Interface + When you submit a query: -### Genkit + 1. The client gets the list of available tools from the server + 2. Your query is sent to Claude along with tool descriptions + 3. Claude decides which tools (if any) to use + 4. The client executes any requested tool calls through the server + 5. Results are sent back to Claude + 6. Claude provides a natural language response + 7. The response is displayed to you -[Genkit](https://github.com/firebase/genkit) is a cross-language SDK for building and integrating GenAI features into applications. The [genkitx-mcp](https://github.com/firebase/genkit/tree/main/js/plugins/mcp) plugin enables consuming MCP servers as a client or creating MCP servers from Genkit tools and prompts. + ## Best practices -**Key features:** + 1. 
**Error Handling** + * Always wrap tool calls in try-catch blocks + * Provide meaningful error messages + * Gracefully handle connection issues -* Client support for tools and prompts (resources partially supported) -* Rich discovery with support in Genkit's Dev UI playground -* Seamless interoperability with Genkit's existing tools and prompts -* Works across a wide variety of GenAI models from top providers + 2. **Resource Management** + * Use `AsyncExitStack` for proper cleanup + * Close connections when done + * Handle server disconnections -### GenAIScript + 3. **Security** + * Store API keys securely in `.env` + * Validate server responses + * Be cautious with tool permissions -Programmatically assemble prompts for LLMs using [GenAIScript](https://microsoft.github.io/genaiscript/) (in JavaScript). Orchestrate LLMs, tools, and data in JavaScript. + 4. **Tool Names** + * Tool names can be validated according to the format specified [here](/specification/draft/server/tools#tool-names) + * If a tool name conforms to the specified format, it should not fail validation by an MCP client -**Key features:** + ## Troubleshooting -* JavaScript toolbox to work with prompts -* Abstraction to make it easy and productive -* Seamless Visual Studio Code integration + ### Server Path Issues -### Goose - -[Goose](https://github.com/block/goose) is an open source AI agent that supercharges your software development by automating coding tasks. + * Double-check the path to your server script is correct + * Use the absolute path if the relative path isn't working + * For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path + * Verify the server file has the correct extension (.py for Python or .js for Node.js) -**Key features:** + Example of correct path usage: -* Expose MCP functionality to Goose through tools. -* MCPs can be installed directly via the [extensions directory](https://block.github.io/goose/v1/extensions/), CLI, or UI. 
-* Goose allows you to extend its functionality by [building your own MCP servers](https://block.github.io/goose/docs/tutorials/custom-extensions). -* Includes built-in tools for development, web scraping, automation, memory, and integrations with JetBrains and Google Drive. + ```bash theme={null} + # Relative path + uv run client.py ./server/weather.py -### Klavis AI Slack/Discord/Web + # Absolute path + uv run client.py /Users/username/projects/mcp-server/weather.py -[Klavis AI](https://www.klavis.ai/) is an Open-Source Infra to Use, Build & Scale MCPs with ease. + # Windows path (either format works) + uv run client.py C:/projects/mcp-server/weather.py + uv run client.py C:\\projects\\mcp-server\\weather.py + ``` -**Key features:** + ### Response Timing -* Slack/Discord/Web MCP clients for using MCPs directly -* Simple web UI dashboard for easy MCP configuration -* Direct OAuth integration with Slack & Discord Clients and MCP Servers for secure user authentication -* SSE transport support -* Open-source infrastructure ([GitHub repository](https://github.com/Klavis-AI/klavis)) + * The first response might take up to 30 seconds to return + * This is normal and happens while: + * The server initializes + * Claude processes the query + * Tools are being executed + * Subsequent responses are typically faster + * Don't interrupt the process during this initial waiting period -**Learn more:** + ### Common Error Messages -* [Demo video showing MCP usage in Slack/Discord](https://youtu.be/9-QQAhrQWw8) + If you see: -### LibreChat + * `FileNotFoundError`: Check your server path + * `Connection refused`: Ensure the server is running and the path is correct + * `Tool execution failed`: Verify the tool's required environment variables are set + * `Timeout error`: Consider increasing the timeout in your client configuration + -[LibreChat](https://github.com/danny-avila/LibreChat) is an open-source, customizable AI chat UI that supports multiple AI providers, now including MCP 
integration. + + [You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/mcp-client-typescript) -**Key features:** + ## System Requirements -* Extend current tool ecosystem, including [Code Interpreter](https://www.librechat.ai/docs/features/code_interpreter) and Image generation tools, through MCP servers -* Add tools to customizable [Agents](https://www.librechat.ai/docs/features/agents), using a variety of LLMs from top providers -* Open-source and self-hostable, with secure multi-user support -* Future roadmap includes expanded MCP feature support + Before starting, ensure your system meets these requirements: -### mcp-agent + * Mac or Windows computer + * Node.js 17 or higher installed + * Latest version of `npm` installed + * Anthropic API key (Claude) -[mcp-agent] is a simple, composable framework to build agents using Model Context Protocol. + ## Setting Up Your Environment -**Key features:** + First, let's create and set up our project: -* Automatic connection management of MCP servers. -* Expose tools from multiple servers to an LLM. -* Implements every pattern defined in [Building Effective Agents](https://www.anthropic.com/research/building-effective-agents). -* Supports workflow pause/resume signals, such as waiting for human feedback. + + ```bash macOS/Linux theme={null} + # Create project directory + mkdir mcp-client-typescript + cd mcp-client-typescript -### MCPHub + # Initialize npm project + npm init -y -[MCPHub] is a powerful Neovim plugin that integrates MCP (Model Context Protocol) servers into your workflow. + # Install dependencies + npm install @anthropic-ai/sdk @modelcontextprotocol/sdk dotenv -**Key features** + # Install dev dependencies + npm install -D @types/node typescript -* Install, configure and manage MCP servers with an intuitive UI. 
-* Built-in Neovim MCP server with support for file operations (read, write, search, replace), command execution, terminal integration, LSP integration, buffers, and diagnostics. -* Create Lua-based MCP servers directly in Neovim. -* Integrates with popular Neovim chat plugins Avante.nvim and CodeCompanion.nvim + # Create source file + touch index.ts + ``` -### MCPOmni-Connect + ```powershell Windows theme={null} + # Create project directory + md mcp-client-typescript + cd mcp-client-typescript -[MCPOmni-Connect](https://github.com/Abiorh001/mcp_omni_connect) is a versatile command-line interface (CLI) client designed to connect to various Model Context Protocol (MCP) servers using both stdio and SSE transport. + # Initialize npm project + npm init -y -**Key features:** + # Install dependencies + npm install @anthropic-ai/sdk @modelcontextprotocol/sdk dotenv -* Support for resources, prompts, tools, and sampling -* Agentic mode with ReAct and orchestrator capabilities -* Seamless integration with OpenAI models and other LLMs -* Dynamic tool and resource management across multiple servers -* Support for both stdio and SSE transport protocols -* Comprehensive tool orchestration and resource analysis capabilities + # Install dev dependencies + npm install -D @types/node typescript -### Microsoft Copilot Studio + # Create source file + new-item index.ts + ``` + -[Microsoft Copilot Studio] is a robust SaaS platform designed for building custom AI-driven applications and intelligent agents, empowering developers to create, deploy, and manage sophisticated AI solutions.
+ Update your `package.json` to set `type: "module"` and a build script: -**Key features:** + ```json package.json theme={null} + { + "type": "module", + "scripts": { + "build": "tsc && chmod 755 build/index.js" + } + } + ``` -* Support for MCP tools -* Extend Copilot Studio agents with MCP servers -* Leveraging Microsoft unified, governed, and secure API management solutions + Create a `tsconfig.json` in the root of your project: -### OpenSumi + ```json tsconfig.json theme={null} + { + "compilerOptions": { + "target": "ES2022", + "module": "Node16", + "moduleResolution": "Node16", + "outDir": "./build", + "rootDir": "./", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames": true + }, + "include": ["index.ts"], + "exclude": ["node_modules"] + } + ``` -[OpenSumi](https://github.com/opensumi/core) is a framework that helps you quickly build AI Native IDE products. + ## Setting Up Your API Key -**Key features:** + You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys). -* Supports MCP tools in OpenSumi -* Supports built-in IDE MCP servers and custom MCP servers + Create a `.env` file to store it: -### oterm + ```bash theme={null} + echo "ANTHROPIC_API_KEY=" > .env + ``` -[oterm] is a terminal client for Ollama allowing users to create chats/agents. + Add `.env` to your `.gitignore`: -**Key features:** + ```bash theme={null} + echo ".env" >> .gitignore + ``` -* Support for multiple fully customizable chat sessions with Ollama connected with tools. -* Support for MCP tools. + + Make sure you keep your `ANTHROPIC_API_KEY` secure! + -### Roo Code + ## Creating the Client -[Roo Code](https://roocode.com) enables AI coding assistance via MCP.
+ ### Basic Client Structure -**Key features:** + First, let's set up our imports and create the basic client class in `index.ts`: -* Support for MCP tools and resources -* Integration with development workflows -* Extensible AI capabilities + ```typescript theme={null} + import { Anthropic } from "@anthropic-ai/sdk"; + import { + MessageParam, + Tool, + } from "@anthropic-ai/sdk/resources/messages/messages.mjs"; + import { Client } from "@modelcontextprotocol/sdk/client/index.js"; + import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js"; + import readline from "readline/promises"; + import dotenv from "dotenv"; -### Sourcegraph Cody + dotenv.config(); -[Cody](https://openctx.org/docs/providers/modelcontextprotocol) is Sourcegraph's AI coding assistant, which implements MCP through OpenCTX. + const ANTHROPIC_API_KEY = process.env.ANTHROPIC_API_KEY; + if (!ANTHROPIC_API_KEY) { + throw new Error("ANTHROPIC_API_KEY is not set"); + } -**Key features:** + class MCPClient { + private mcp: Client; + private anthropic: Anthropic; + private transport: StdioClientTransport | null = null; + private tools: Tool[] = []; -* Support for MCP resources -* Integration with Sourcegraph's code intelligence -* Uses OpenCTX as an abstraction layer -* Future support planned for additional MCP features + constructor() { + this.anthropic = new Anthropic({ + apiKey: ANTHROPIC_API_KEY, + }); + this.mcp = new Client({ name: "mcp-client-cli", version: "1.0.0" }); + } + // methods will go here + } + ``` -### SpinAI + ### Server Connection Management -[SpinAI](https://spinai.dev) is an open-source TypeScript framework for building observable AI agents. The framework provides native MCP compatibility, allowing agents to seamlessly integrate with MCP servers and tools. 
+ Next, we'll implement the method to connect to an MCP server: -**Key features:** + ```typescript theme={null} + async connectToServer(serverScriptPath: string) { + try { + const isJs = serverScriptPath.endsWith(".js"); + const isPy = serverScriptPath.endsWith(".py"); + if (!isJs && !isPy) { + throw new Error("Server script must be a .js or .py file"); + } + const command = isPy + ? process.platform === "win32" + ? "python" + : "python3" + : process.execPath; -* Built-in MCP compatibility for AI agents -* Open-source TypeScript framework -* Observable agent architecture -* Native support for MCP tools integration + this.transport = new StdioClientTransport({ + command, + args: [serverScriptPath], + }); + await this.mcp.connect(this.transport); -### Superinterface + const toolsResult = await this.mcp.listTools(); + this.tools = toolsResult.tools.map((tool) => { + return { + name: tool.name, + description: tool.description, + input_schema: tool.inputSchema, + }; + }); + console.log( + "Connected to server with tools:", + this.tools.map(({ name }) => name) + ); + } catch (e) { + console.log("Failed to connect to MCP server: ", e); + throw e; + } + } + ``` -[Superinterface](https://superinterface.ai) is AI infrastructure and a developer platform to build in-app AI assistants with support for MCP, interactive components, client-side function calling and more. 
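In `connectToServer` above, each MCP tool's metadata is reshaped for Anthropic's Messages API: the SDK reports a camelCase `inputSchema`, while the API expects snake_case `input_schema`. A dependency-free sketch of that mapping (the `McpTool` type and `toAnthropicTools` helper are illustrative simplifications, not part of either SDK):

```typescript
// Simplified stand-in for the tool entries returned by listTools().
type McpTool = { name: string; description?: string; inputSchema: object };

// Reshape MCP tool metadata into the snake_case form the Anthropic
// Messages API expects in its `tools` parameter.
function toAnthropicTools(tools: McpTool[]) {
  return tools.map((tool) => ({
    name: tool.name,
    description: tool.description,
    input_schema: tool.inputSchema,
  }));
}

// Hypothetical tool, shaped like the weather server's get_forecast.
const mapped = toAnthropicTools([
  {
    name: "get_forecast",
    description: "Get the forecast for a city",
    inputSchema: { type: "object", properties: { city: { type: "string" } } },
  },
]);
console.log(mapped[0].name); // → get_forecast
```

Only the key names change; the JSON Schema describing each tool's arguments passes through untouched.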
+ ### Query Processing Logic -**Key features:** + Now let's add the core functionality for processing queries and handling tool calls: -* Use tools from MCP servers in assistants embedded via React components or script tags -* SSE transport support -* Use any AI model from any AI provider (OpenAI, Anthropic, Ollama, others) + ```typescript theme={null} + async processQuery(query: string) { + const messages: MessageParam[] = [ + { + role: "user", + content: query, + }, + ]; -### TheiaAI/TheiaIDE + const response = await this.anthropic.messages.create({ + model: "claude-sonnet-4-20250514", + max_tokens: 1000, + messages, + tools: this.tools, + }); -[Theia AI](https://eclipsesource.com/blogs/2024/10/07/introducing-theia-ai/) is a framework for building AI-enhanced tools and IDEs. The [AI-powered Theia IDE](https://eclipsesource.com/blogs/2024/10/08/introducting-ai-theia-ide/) is an open and flexible development environment built on Theia AI. + const finalText = []; -**Key features:** + for (const content of response.content) { + if (content.type === "text") { + finalText.push(content.text); + } else if (content.type === "tool_use") { + const toolName = content.name; + const toolArgs = content.input as { [x: string]: unknown } | undefined; -* **Tool Integration**: Theia AI enables AI agents, including those in the Theia IDE, to utilize MCP servers for seamless tool interaction. -* **Customizable Prompts**: The Theia IDE allows users to define and adapt prompts, dynamically integrating MCP servers for tailored workflows. -* **Custom agents**: The Theia IDE supports creating custom agents that leverage MCP capabilities, enabling users to design dedicated workflows on the fly. 
+ const result = await this.mcp.callTool({ + name: toolName, + arguments: toolArgs, + }); + finalText.push( + `[Calling tool ${toolName} with args ${JSON.stringify(toolArgs)}]` + ); -Theia AI and Theia IDE's MCP integration provide users with flexibility, making them powerful platforms for exploring and adapting MCP. + messages.push({ + role: "user", + content: result.content as string, + }); -**Learn more:** + const response = await this.anthropic.messages.create({ + model: "claude-sonnet-4-20250514", + max_tokens: 1000, + messages, + }); -* [Theia IDE and Theia AI MCP Announcement](https://eclipsesource.com/blogs/2024/12/19/theia-ide-and-theia-ai-support-mcp/) -* [Download the AI-powered Theia IDE](https://theia-ide.org/) + finalText.push( + response.content[0].type === "text" ? response.content[0].text : "" + ); + } + } -### TypingMind App + return finalText.join("\n"); + } + ``` -[TypingMind](https://www.typingmind.com) is an advanced frontend for LLMs with MCP support. TypingMind supports all popular LLM providers like OpenAI, Gemini, Claude, and users can use with their own API keys. + ### Interactive Chat Interface -**Key features:** + Now we'll add the chat loop and cleanup functionality: -* **MCP Tool Integration**: Once MCP is configured, MCP tools will show up as plugins that can be enabled/disabled easily via the main app interface. -* **Assign MCP Tools to Agents**: TypingMind allows users to create AI agents that have a set of MCP servers assigned. -* **Remote MCP servers**: Allows users to customize where to run the MCP servers via its MCP Connector configuration, allowing the use of MCP tools across multiple devices (laptop, mobile devices, etc.) or control MCP servers from a remote private server. 
+ ```typescript theme={null} + async chatLoop() { + const rl = readline.createInterface({ + input: process.stdin, + output: process.stdout, + }); -**Learn more:** + try { + console.log("\nMCP Client Started!"); + console.log("Type your queries or 'quit' to exit."); -* [TypingMind MCP Document](https://www.typingmind.com/mcp) -* [Download TypingMind (PWA)](https://www.typingmind.com/) + while (true) { + const message = await rl.question("\nQuery: "); + if (message.toLowerCase() === "quit") { + break; + } + const response = await this.processQuery(message); + console.log("\n" + response); + } + } finally { + rl.close(); + } + } -### VS Code GitHub Copilot + async cleanup() { + await this.mcp.close(); + } + ``` -[VS Code](https://code.visualstudio.com/) integrates MCP with GitHub Copilot through [agent mode](https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode), allowing direct interaction with MCP-provided tools within your agentic coding workflow. Configure servers in Claude Desktop, workspace or user settings, with guided MCP installation and secure handling of keys in input variables to avoid leaking hard-coded keys. 
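A note on the flow the chat loop drives: in `processQuery`, a tool result is appended to the running `messages` array as a `user` message before Claude is called again, so the conversation grows with each tool call. A dependency-free sketch of that accumulation (`SimpleMessage` and `appendToolResult` are illustrative, not SDK types):

```typescript
// Simplified message shape; the real MessageParam type from the
// Anthropic SDK is richer (content blocks, tool_use ids, etc.).
type SimpleMessage = { role: "user" | "assistant"; content: string };

// Hypothetical helper mirroring how processQuery feeds a tool result
// back into the conversation as a user message.
function appendToolResult(
  messages: SimpleMessage[],
  toolResult: string
): SimpleMessage[] {
  return [...messages, { role: "user", content: toolResult }];
}

let conversation: SimpleMessage[] = [
  { role: "user", content: "What's the weather in San Francisco?" },
];
conversation = appendToolResult(conversation, "Sunny, 18°C"); // hypothetical tool output
console.log(conversation.length); // → 2
```

Because the array carries the full history, the follow-up request to Claude sees both the original question and the tool's answer.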
+ ### Main Entry Point -**Key features:** + Finally, we'll add the main execution logic: -* Support for stdio and server-sent events (SSE) transport -* Per-session selection of tools per agent session for optimal performance -* Easy server debugging with restart commands and output logging -* Tool calls with editable inputs and always-allow toggle -* Integration with existing VS Code extension system to register MCP servers from extensions + ```typescript theme={null} + async function main() { + if (process.argv.length < 3) { + console.log("Usage: node index.ts <path_to_server_script>"); + return; + } + const mcpClient = new MCPClient(); + try { + await mcpClient.connectToServer(process.argv[2]); + await mcpClient.chatLoop(); + } catch (e) { + console.error("Error:", e); + await mcpClient.cleanup(); + process.exit(1); + } finally { + await mcpClient.cleanup(); + process.exit(0); + } + } -### Windsurf Editor + main(); + ``` -[Windsurf Editor](https://codeium.com/windsurf) is an agentic IDE that combines AI assistance with developer workflows. It features an innovative AI Flow system that enables both collaborative and independent AI interactions while maintaining developer control. + ## Running the Client -**Key features:** + To run your client with any MCP server: -* Revolutionary AI Flow paradigm for human-AI collaboration -* Intelligent code generation and understanding -* Rich development tools with multi-model support + ```bash theme={null} + # Build TypeScript + npm run build -### Witsy + # Run the client + node build/index.js path/to/server.py # python server + node build/index.js path/to/build/index.js # node server + ``` -[Witsy](https://github.com/nbonamy/witsy) is an AI desktop assistant, supporting Anthropic models and MCP servers as LLM tools.
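The argument check in `main` above relies on Node's argv layout: `process.argv[0]` is the node executable and `argv[1]` is the script path, so the first user-supplied argument is `argv[2]`. A small sketch of that guard (`serverPathFrom` is a hypothetical helper, not from the tutorial):

```typescript
// Mirrors the argv check in main(): argv[0] is the node binary,
// argv[1] the script, so argv[2] is the first real argument.
function serverPathFrom(argv: string[]): string | null {
  return argv.length < 3 ? null : argv[2];
}

console.log(serverPathFrom(["node", "build/index.js"])); // → null
console.log(serverPathFrom(["node", "build/index.js", "./weather.py"])); // → ./weather.py
```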
+ + If you're continuing [the weather tutorial from the server quickstart](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-typescript), your command might look something like this: `node build/index.js .../quickstart-resources/weather-server-typescript/build/index.js` + -**Key features:** + **The client will:** -* Multiple MCP servers support -* Tool integration for executing commands and scripts -* Local server connections for enhanced privacy and security -* Easy-install from Smithery.ai -* Open-source, available for macOS, Windows and Linux + 1. Connect to the specified server + 2. List available tools + 3. Start an interactive chat session where you can: + * Enter queries + * See tool executions + * Get responses from Claude -### Zed + ## How It Works -[Zed](https://zed.dev/docs/assistant/model-context-protocol) is a high-performance code editor with built-in MCP support, focusing on prompt templates and tool integration. + When you submit a query: -**Key features:** + 1. The client gets the list of available tools from the server + 2. Your query is sent to Claude along with tool descriptions + 3. Claude decides which tools (if any) to use + 4. The client executes any requested tool calls through the server + 5. Results are sent back to Claude + 6. Claude provides a natural language response + 7. The response is displayed to you -* Prompt templates surface as slash commands in the editor -* Tool integration for enhanced coding workflows -* Tight integration with editor features and workspace context -* Does not support MCP resources + ## Best practices -## Adding MCP support to your application + 1. **Error Handling** + * Use TypeScript's type system for better error detection + * Wrap tool calls in try-catch blocks + * Provide meaningful error messages + * Gracefully handle connection issues -If you've added MCP support to your application, we encourage you to submit a pull request to add it to this list. 
MCP integration can provide your users with powerful contextual AI capabilities and make your application part of the growing MCP ecosystem. + 2. **Security** + * Store API keys securely in `.env` + * Validate server responses + * Be cautious with tool permissions -Benefits of adding MCP support: + ## Troubleshooting -* Enable users to bring their own context and tools -* Join a growing ecosystem of interoperable AI applications -* Provide users with flexible integration options -* Support local-first AI workflows + ### Server Path Issues -To get started with implementing MCP in your application, check out our [Python](https://github.com/modelcontextprotocol/python-sdk) or [TypeScript SDK Documentation](https://github.com/modelcontextprotocol/typescript-sdk) + * Double-check the path to your server script is correct + * Use the absolute path if the relative path isn't working + * For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path + * Verify the server file has the correct extension (.js for Node.js or .py for Python) -## Updates and corrections + Example of correct path usage: -This list is maintained by the community. If you notice any inaccuracies or would like to update information about MCP support in your application, please submit a pull request or [open an issue in our documentation repository](https://github.com/modelcontextprotocol/modelcontextprotocol/issues). + ```bash theme={null} + # Relative path + node build/index.js ./server/build/index.js + # Absolute path + node build/index.js /Users/username/projects/mcp-server/build/index.js -# Contributing -Source: https://modelcontextprotocol.io/development/contributing + # Windows path (either format works) + node build/index.js C:/projects/mcp-server/build/index.js + node build/index.js C:\\projects\\mcp-server\\build\\index.js + ``` -How to participate in Model Context Protocol development + ### Response Timing -We welcome contributions from the community! 
Please review our [contributing guidelines](https://github.com/modelcontextprotocol/.github/blob/main/CONTRIBUTING.md) for details on how to submit changes. + * The first response might take up to 30 seconds to return + * This is normal and happens while: + * The server initializes + * Claude processes the query + * Tools are being executed + * Subsequent responses are typically faster + * Don't interrupt the process during this initial waiting period -All contributors must adhere to our [Code of Conduct](https://github.com/modelcontextprotocol/.github/blob/main/CODE_OF_CONDUCT.md). + ### Common Error Messages -For questions and discussions, please use [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions). + If you see: + * `Error: Cannot find module`: Check your build folder and ensure TypeScript compilation succeeded + * `Connection refused`: Ensure the server is running and the path is correct + * `Tool execution failed`: Verify the tool's required environment variables are set + * `ANTHROPIC_API_KEY is not set`: Check your .env file and environment variables + * `TypeError`: Ensure you're using the correct types for tool arguments + * `BadRequestError`: Ensure you have enough credits to access the Anthropic API + -# Roadmap -Source: https://modelcontextprotocol.io/development/roadmap + + + This is a quickstart demo based on Spring AI MCP auto-configuration and boot starters. + To learn how to create sync and async MCP Clients manually, consult the [Java SDK Client](/sdk/java/mcp-client) documentation + -Our plans for evolving Model Context Protocol + This example demonstrates how to build an interactive chatbot that combines Spring AI's Model Context Protocol (MCP) with the [Brave Search MCP Server](https://github.com/modelcontextprotocol/servers-archived/tree/main/src/brave-search). 
The application creates a conversational interface powered by Anthropic's Claude AI model that can perform internet searches through Brave Search, enabling natural language interactions with real-time web data. + [You can find the complete code for this tutorial here.](https://github.com/spring-projects/spring-ai-examples/tree/main/model-context-protocol/web-search/brave-chatbot) -Last updated: **2025-03-27** + ## System Requirements -The Model Context Protocol is rapidly evolving. This page outlines our current thinking on key priorities and direction for approximately **the next six months**, though these may change significantly as the project develops. To see what's changed recently, check out the **[specification changelog](/specification/2025-03-26/changelog/)**. + Before starting, ensure your system meets these requirements: -The ideas presented here are not commitments—we may solve these challenges differently than described, or some may not materialize at all. This is also not an *exhaustive* list; we may incorporate work that isn't mentioned here. + * Java 17 or higher + * Maven 3.6+ + * npx package manager + * Anthropic API key (Claude) + * Brave Search API key -We value community participation! Each section links to relevant discussions where you can learn more and contribute your thoughts. + ## Setting Up Your Environment -For a technical view of our standardization process, visit the [Standards Track](https://github.com/orgs/modelcontextprotocol/projects/2/views/2) on GitHub, which tracks how proposals progress toward inclusion in the official [MCP specification](https://spec.modelcontextprotocol.io). + 1. Install npx (Node Package eXecute): + First, make sure to install [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) + and then run: -## Validation + ```bash theme={null} + npm install -g npx + ``` -To foster a robust developer ecosystem, we plan to invest in: + 2. 
Clone the repository: -* **Reference Client Implementations**: demonstrating protocol features with high-quality AI applications -* **Compliance Test Suites**: automated verification that clients, servers, and SDKs properly implement the specification + ```bash theme={null} + git clone https://github.com/spring-projects/spring-ai-examples.git + cd model-context-protocol/web-search/brave-chatbot + ``` -These tools will help developers confidently implement MCP while ensuring consistent behavior across the ecosystem. + 3. Set up your API keys: -## Registry + ```bash theme={null} + export ANTHROPIC_API_KEY='your-anthropic-api-key-here' + export BRAVE_API_KEY='your-brave-api-key-here' + ``` -For MCP to reach its full potential, we need streamlined ways to distribute and discover MCP servers. + 4. Build the application: -We plan to develop an [**MCP Registry**](https://github.com/orgs/modelcontextprotocol/discussions/159) that will enable centralized server discovery and metadata. This registry will primarily function as an API layer that third-party marketplaces and discovery services can build upon. + ```bash theme={null} + ./mvnw clean install + ``` -## Agents + 5. Run the application using Maven: + ```bash theme={null} + ./mvnw spring-boot:run + ``` -As MCP increasingly becomes part of agentic workflows, we're exploring [improvements](https://github.com/modelcontextprotocol/specification/discussions/111) such as: + + Make sure you keep your `ANTHROPIC_API_KEY` and `BRAVE_API_KEY` keys secure! 
+ -* **[Agent Graphs](https://github.com/modelcontextprotocol/specification/discussions/94)**: enabling complex agent topologies through namespacing and graph-aware communication patterns -* **Interactive Workflows**: improving human-in-the-loop experiences with granular permissioning, standardized interaction patterns, and [ways to directly communicate](https://github.com/modelcontextprotocol/specification/issues/97) with the end user + ## How it Works -## Multimodality + The application integrates Spring AI with the Brave Search MCP server through several components: -Supporting the full spectrum of AI capabilities in MCP, including: + ### MCP Client Configuration -* **Additional Modalities**: video and other media types -* **[Streaming](https://github.com/modelcontextprotocol/specification/issues/117)**: multipart, chunked messages, and bidirectional communication for interactive experiences + 1. Required dependencies in pom.xml: -## Governance + ```xml theme={null} + <dependency> + <groupId>org.springframework.ai</groupId> + <artifactId>spring-ai-starter-mcp-client</artifactId> + </dependency> + <dependency> + <groupId>org.springframework.ai</groupId> + <artifactId>spring-ai-starter-model-anthropic</artifactId> + </dependency> + ``` -We're implementing governance structures that prioritize: + 2.
Application properties (application.yml): -* **Community-Led Development**: fostering a collaborative ecosystem where community members and AI developers can all participate in MCP's evolution, ensuring it serves diverse applications and use cases -* **Transparent Standardization**: establishing clear processes for contributing to the specification, while exploring formal standardization via industry bodies + ```yml theme={null} + spring: + ai: + mcp: + client: + enabled: true + name: brave-search-client + version: 1.0.0 + type: SYNC + request-timeout: 20s + stdio: + root-change-notification: true + servers-configuration: classpath:/mcp-servers-config.json + toolcallback: + enabled: true + anthropic: + api-key: ${ANTHROPIC_API_KEY} + ``` -## Get Involved + This activates the `spring-ai-starter-mcp-client` to create one or more `McpClient`s based on the provided server configuration. + The `spring.ai.mcp.client.toolcallback.enabled=true` property enables the tool callback mechanism, which automatically registers all MCP tools as Spring AI tools. + It is disabled by default. -We welcome your contributions to MCP's future! Join our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to share ideas, provide feedback, or participate in the development process. + 3. MCP Server Configuration (`mcp-servers-config.json`): + ```json theme={null} + { + "mcpServers": { + "brave-search": { + "command": "npx", + "args": ["-y", "@modelcontextprotocol/server-brave-search"], + "env": { + "BRAVE_API_KEY": "" + } + } + } + } + ``` -# What's New -Source: https://modelcontextprotocol.io/development/updates - -The latest updates and improvements to MCP - - - * Version [0.9.0](https://github.com/modelcontextprotocol/java-sdk/releases/tag/v0.9.0) of the MCP Java SDK has been released.
- * Refactored logging system to use exchange mechanism - * Custom Context Paths - * Server Instructions - * CallToolResult Enhancement - - - - * Fix issues and cleanup API - * Added binary compatibility tracking to avoid breaking changes - * Drop jdk requirements to JDK8 - * Added Claude Desktop integration with sample - * The full changelog can be found here: [https://github.com/modelcontextprotocol/kotlin-sdk/releases/tag/0.4.0](https://github.com/modelcontextprotocol/kotlin-sdk/releases/tag/0.4.0) - - - - * Version [0.8.1](https://github.com/modelcontextprotocol/java-sdk/releases/tag/v0.8.1) of the MCP Java SDK has been released, - providing important bug fixes. - - - - * We are excited to announce the availability of the MCP - [C# SDK](https://github.com/modelcontextprotocol/csharp-sdk/) developed by - [Peder Holdgaard Pedersen](http://github.com/PederHP) and Microsoft. This joins our growing - list of supported languages. The C# SDK is also available as - [NuGet package](https://www.nuget.org/packages/ModelContextProtocol) - * Python SDK 1.5.0 was released with multiple fixes and improvements. - - - - * Version [0.8.0](https://github.com/modelcontextprotocol/java-sdk/releases/tag/v0.8.0) of the MCP Java SDK has been released, - delivering important session management improvements and bug fixes. - - - - * Typescript SDK 1.7.0 was released with multiple fixes and improvements. - - - - * We're excited to announce that the Java SDK developed by Spring AI at VMware Tanzu is now - the official [Java SDK](https://github.com/modelcontextprotocol/java-sdk) for MCP. - This joins our existing Kotlin SDK in our growing list of supported languages. - The Spring AI team will maintain the SDK as an integral part of the Model Context Protocol - organization. We're thrilled to welcome them to the MCP community!
- - - - * Version [1.2.1](https://github.com/modelcontextprotocol/python-sdk/releases/tag/v1.2.1) of the MCP Python SDK has been released, - delivering important stability improvements and bug fixes. - - - - * Simplified, express-like API in the [TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk) - * Added 8 new clients to the [clients page](https://modelcontextprotocol.io/clients) - - - - * FastMCP API in the [Python SDK](https://github.com/modelcontextprotocol/python-sdk) - * Dockerized MCP servers in the [servers repo](https://github.com/modelcontextprotocol/servers) - - - - * Jetbrains released a Kotlin SDK for MCP! - * For a sample MCP Kotlin server, check out [this repository](https://github.com/modelcontextprotocol/kotlin-sdk/tree/main/samples/kotlin-mcp-server) - - - -# Core architecture -Source: https://modelcontextprotocol.io/docs/concepts/architecture - -Understand how MCP connects clients, servers, and LLMs - -The Model Context Protocol (MCP) is built on a flexible, extensible architecture that enables seamless communication between LLM applications and integrations. This document covers the core architectural components and concepts. 
+ ### Chat Implementation -## Overview + The chatbot is implemented using Spring AI's ChatClient with MCP tool integration: -MCP follows a client-server architecture where: + ```java theme={null} + var chatClient = chatClientBuilder + .defaultSystem("You are useful assistant, expert in AI and Java.") + .defaultToolCallbacks((Object[]) mcpToolAdapter.toolCallbacks()) + .defaultAdvisors(new MessageChatMemoryAdvisor(new InMemoryChatMemory())) + .build(); + ``` -* **Hosts** are LLM applications (like Claude Desktop or IDEs) that initiate connections -* **Clients** maintain 1:1 connections with servers, inside the host application -* **Servers** provide context, tools, and prompts to clients + Key features: -```mermaid -flowchart LR - subgraph "Host" - client1[MCP Client] - client2[MCP Client] - end - subgraph "Server Process" - server1[MCP Server] - end - subgraph "Server Process" - server2[MCP Server] - end + * Uses Claude AI model for natural language understanding + * Integrates Brave Search through MCP for real-time web search capabilities + * Maintains conversation memory using InMemoryChatMemory + * Runs as an interactive command-line application - client1 <-->|Transport Layer| server1 - client2 <-->|Transport Layer| server2 -``` + ### Build and run -## Core components + ```bash theme={null} + ./mvnw clean install + java -jar ./target/ai-mcp-brave-chatbot-0.0.1-SNAPSHOT.jar + ``` -### Protocol layer + or -The protocol layer handles message framing, request/response linking, and high-level communication patterns. + ```bash theme={null} + ./mvnw spring-boot:run + ``` - - - ```typescript - class Protocol { - // Handle incoming requests - setRequestHandler(schema: T, handler: (request: T, extra: RequestHandlerExtra) => Promise): void + The application will start an interactive chat session where you can ask questions. The chatbot will use Brave Search when it needs to find information from the internet to answer your queries. 
- // Handle incoming notifications - setNotificationHandler(schema: T, handler: (notification: T) => Promise): void + The chatbot can: - // Send requests and await responses - request(request: Request, schema: T, options?: RequestOptions): Promise + * Answer questions using its built-in knowledge + * Perform web searches when needed using Brave Search + * Remember context from previous messages in the conversation + * Combine information from multiple sources to provide comprehensive answers - // Send one-way notifications - notification(notification: Notification): Promise - } - ``` - + ### Advanced Configuration - - ```python - class Session(BaseSession[RequestT, NotificationT, ResultT]): - async def send_request( - self, - request: RequestT, - result_type: type[Result] - ) -> Result: - """Send request and wait for response. Raises McpError if response contains error.""" - # Request handling implementation + The MCP client supports additional configuration options: - async def send_notification( - self, - notification: NotificationT - ) -> None: - """Send one-way notification that doesn't expect response.""" - # Notification handling implementation + * Client customization through `McpSyncClientCustomizer` or `McpAsyncClientCustomizer` + * Multiple clients with multiple transport types: `STDIO` and `SSE` (Server-Sent Events) + * Integration with Spring AI's tool execution framework + * Automatic client initialization and lifecycle management - async def _received_request( - self, - responder: RequestResponder[ReceiveRequestT, ResultT] - ) -> None: - """Handle incoming request from other side.""" - # Request handling implementation + For WebFlux-based applications, you can use the WebFlux starter instead: - async def _received_notification( - self, - notification: ReceiveNotificationT - ) -> None: - """Handle incoming notification from other side.""" - # Notification handling implementation + ```xml theme={null} + + org.springframework.ai + 
spring-ai-mcp-client-webflux-spring-boot-starter + ``` + + This provides similar functionality but uses a WebFlux-based SSE transport implementation, recommended for production deployments. - -Key classes include: + + [You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/kotlin-sdk/tree/main/samples/kotlin-mcp-client) -* `Protocol` -* `Client` -* `Server` + ## System Requirements -### Transport layer + Before starting, ensure your system meets these requirements: -The transport layer handles the actual communication between clients and servers. MCP supports multiple transport mechanisms: + * Java 17 or higher + * Anthropic API key (Claude) -1. **Stdio transport** - * Uses standard input/output for communication - * Ideal for local processes + ## Setting up your environment -2. **HTTP with SSE transport** - * Uses Server-Sent Events for server-to-client messages - * HTTP POST for client-to-server messages + First, let's install `java` and `gradle` if you haven't already. + You can download `java` from [official Oracle JDK website](https://www.oracle.com/java/technologies/downloads/). + Verify your `java` installation: -All transports use [JSON-RPC](https://www.jsonrpc.org/) 2.0 to exchange messages. See the [specification](/specification/) for detailed information about the Model Context Protocol message format. + ```bash theme={null} + java --version + ``` -### Message types + Now, let's create and set up your project: -MCP has these main types of messages: + + ```bash macOS/Linux theme={null} + # Create a new directory for our project + mkdir kotlin-mcp-client + cd kotlin-mcp-client -1. **Requests** expect a response from the other side: - ```typescript - interface Request { - method: string; - params?: { ... }; - } - ``` + # Initialize a new kotlin project + gradle init + ``` -2. 
**Results** are successful responses to requests: - ```typescript - interface Result { - [key: string]: unknown; - } - ``` + ```powershell Windows theme={null} + # Create a new directory for our project + md kotlin-mcp-client + cd kotlin-mcp-client + # Initialize a new kotlin project + gradle init + ``` + -3. **Errors** indicate that a request failed: - ```typescript - interface Error { - code: number; - message: string; - data?: unknown; - } - ``` + After running `gradle init`, you will be presented with options for creating your project. + Select **Application** as the project type, **Kotlin** as the programming language, and **Java 17** as the Java version. -4. **Notifications** are one-way messages that don't expect a response: - ```typescript - interface Notification { - method: string; - params?: { ... }; - } - ``` + Alternatively, you can create a Kotlin application using the [IntelliJ IDEA project wizard](https://kotlinlang.org/docs/jvm-get-started.html). -## Connection lifecycle + After creating the project, add the following dependencies: -### 1. 
Initialization + + ```kotlin build.gradle.kts theme={null} + val mcpVersion = "0.4.0" + val slf4jVersion = "2.0.9" + val anthropicVersion = "0.8.0" -```mermaid -sequenceDiagram - participant Client - participant Server + dependencies { + implementation("io.modelcontextprotocol:kotlin-sdk:$mcpVersion") + implementation("org.slf4j:slf4j-nop:$slf4jVersion") + implementation("com.anthropic:anthropic-java:$anthropicVersion") + } + ``` - Client->>Server: initialize request - Server->>Client: initialize response - Client->>Server: initialized notification + ```groovy build.gradle theme={null} + def mcpVersion = '0.3.0' + def slf4jVersion = '2.0.9' + def anthropicVersion = '0.8.0' + dependencies { + implementation "io.modelcontextprotocol:kotlin-sdk:$mcpVersion" + implementation "org.slf4j:slf4j-nop:$slf4jVersion" + implementation "com.anthropic:anthropic-java:$anthropicVersion" + } + ``` + - Note over Client,Server: Connection ready for use -``` + Also, add the following plugins to your build script: -1. Client sends `initialize` request with protocol version and capabilities -2. Server responds with its protocol version and capabilities -3. Client sends `initialized` notification as acknowledgment -4. Normal message exchange begins + + ```kotlin build.gradle.kts theme={null} + plugins { + id("com.gradleup.shadow") version "8.3.9" + } + ``` -### 2. Message exchange + ```groovy build.gradle theme={null} + plugins { + id 'com.gradleup.shadow' version '8.3.9' + } + ``` + -After initialization, the following patterns are supported: + ## Setting up your API key -* **Request-Response**: Client or server sends requests, the other responds -* **Notifications**: Either party sends one-way messages + You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys). -### 3. 
Termination + Set up your API key: -Either party can terminate the connection: + ```bash theme={null} + export ANTHROPIC_API_KEY='your-anthropic-api-key-here' + ``` -* Clean shutdown via `close()` -* Transport disconnection -* Error conditions + + Make sure you keep your `ANTHROPIC_API_KEY` secure! + -## Error handling + ## Creating the Client -MCP defines these standard error codes: + ### Basic Client Structure -```typescript -enum ErrorCode { - // Standard JSON-RPC error codes - ParseError = -32700, - InvalidRequest = -32600, - MethodNotFound = -32601, - InvalidParams = -32602, - InternalError = -32603 -} -``` + First, let's create the basic client class: -SDKs and applications can define their own error codes above -32000. + ```kotlin theme={null} + class MCPClient : AutoCloseable { + private val anthropic = AnthropicOkHttpClient.fromEnv() + private val mcp: Client = Client(clientInfo = Implementation(name = "mcp-client-cli", version = "1.0.0")) + private lateinit var tools: List -Errors are propagated through: + // methods will go here -* Error responses to requests -* Error events on transports -* Protocol-level error handlers + override fun close() { + runBlocking { + mcp.close() + anthropic.close() + } + } + ``` -## Implementation example + ### Server connection management -Here's a basic example of implementing an MCP server: + Next, we'll implement the method to connect to an MCP server: - - - ```typescript - import { Server } from "@modelcontextprotocol/sdk/server/index.js"; - import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; + ```kotlin theme={null} + suspend fun connectToServer(serverScriptPath: String) { + try { + val command = buildList { + when (serverScriptPath.substringAfterLast(".")) { + "js" -> add("node") + "py" -> add(if (System.getProperty("os.name").lowercase().contains("win")) "python" else "python3") + "jar" -> addAll(listOf("java", "-jar")) + else -> throw IllegalArgumentException("Server script must be a 
.js, .py or .jar file") + } + add(serverScriptPath) + } - const server = new Server({ - name: "example-server", - version: "1.0.0" - }, { - capabilities: { - resources: {} - } - }); + val process = ProcessBuilder(command).start() + val transport = StdioClientTransport( + input = process.inputStream.asSource().buffered(), + output = process.outputStream.asSink().buffered() + ) - // Handle requests - server.setRequestHandler(ListResourcesRequestSchema, async () => { - return { - resources: [ - { - uri: "example://resource", - name: "Example Resource" - } - ] - }; - }); + mcp.connect(transport) - // Connect transport - const transport = new StdioServerTransport(); - await server.connect(transport); + val toolsResult = mcp.listTools() + tools = toolsResult?.tools?.map { tool -> + ToolUnion.ofTool( + Tool.builder() + .name(tool.name) + .description(tool.description ?: "") + .inputSchema( + Tool.InputSchema.builder() + .type(JsonValue.from(tool.inputSchema.type)) + .properties(tool.inputSchema.properties.toJsonValue()) + .putAdditionalProperty("required", JsonValue.from(tool.inputSchema.required)) + .build() + ) + .build() + ) + } ?: emptyList() + println("Connected to server with tools: ${tools.joinToString(", ") { it.tool().get().name() }}") + } catch (e: Exception) { + println("Failed to connect to MCP server: $e") + throw e + } + } ``` - - - - ```python - import asyncio - import mcp.types as types - from mcp.server import Server - from mcp.server.stdio import stdio_server - - app = Server("example-server") - - @app.list_resources() - async def list_resources() -> list[types.Resource]: - return [ - types.Resource( - uri="example://resource", - name="Example Resource" - ) - ] - async def main(): - async with stdio_server() as streams: - await app.run( - streams[0], - streams[1], - app.create_initialization_options() - ) + Also create a helper function to convert from `JsonObject` to `JsonValue` for Anthropic: - if __name__ == "__main__": - asyncio.run(main()) + 
```kotlin theme={null} + private fun JsonObject.toJsonValue(): JsonValue { + val mapper = ObjectMapper() + val node = mapper.readTree(this.toString()) + return JsonValue.fromJsonNode(node) + } ``` - - - -## Best practices -### Transport selection - -1. **Local communication** - * Use stdio transport for local processes - * Efficient for same-machine communication - * Simple process management - -2. **Remote communication** - * Use SSE for scenarios requiring HTTP compatibility - * Consider security implications including authentication and authorization - -### Message handling - -1. **Request processing** - * Validate inputs thoroughly - * Use type-safe schemas - * Handle errors gracefully - * Implement timeouts - -2. **Progress reporting** - * Use progress tokens for long operations - * Report progress incrementally - * Include total progress when known - -3. **Error management** - * Use appropriate error codes - * Include helpful error messages - * Clean up resources on errors - -## Security considerations - -1. **Transport security** - * Use TLS for remote connections - * Validate connection origins - * Implement authentication when needed - -2. **Message validation** - * Validate all incoming messages - * Sanitize inputs - * Check message size limits - * Verify JSON-RPC format - -3. **Resource protection** - * Implement access controls - * Validate resource paths - * Monitor resource usage - * Rate limit requests - -4. **Error handling** - * Don't leak sensitive information - * Log security-relevant errors - * Implement proper cleanup - * Handle DoS scenarios - -## Debugging and monitoring - -1. **Logging** - * Log protocol events - * Track message flow - * Monitor performance - * Record errors - -2. **Diagnostics** - * Implement health checks - * Monitor connection state - * Track resource usage - * Profile performance - -3. 
**Testing** - * Test different transports - * Verify error handling - * Check edge cases - * Load test servers + ### Query processing logic + Now let's add the core functionality for processing queries and handling tool calls: -# Prompts -Source: https://modelcontextprotocol.io/docs/concepts/prompts + ```kotlin theme={null} + private val messageParamsBuilder: MessageCreateParams.Builder = MessageCreateParams.builder() + .model(Model.CLAUDE_SONNET_4_20250514) + .maxTokens(1024) -Create reusable prompt templates and workflows + suspend fun processQuery(query: String): String { + val messages = mutableListOf( + MessageParam.builder() + .role(MessageParam.Role.USER) + .content(query) + .build() + ) -Prompts enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a powerful way to standardize and share common LLM interactions. + val response = anthropic.messages().create( + messageParamsBuilder + .messages(messages) + .tools(tools) + .build() + ) - - Prompts are designed to be **user-controlled**, meaning they are exposed from servers to clients with the intention of the user being able to explicitly select them for use. 
- + val finalText = mutableListOf() + response.content().forEach { content -> + when { + content.isText() -> finalText.add(content.text().getOrNull()?.text() ?: "") -## Overview + content.isToolUse() -> { + val toolName = content.toolUse().get().name() + val toolArgs = + content.toolUse().get()._input().convert(object : TypeReference>() {}) -Prompts in MCP are predefined templates that can: + val result = mcp.callTool( + name = toolName, + arguments = toolArgs ?: emptyMap() + ) + finalText.add("[Calling tool $toolName with args $toolArgs]") -* Accept dynamic arguments -* Include context from resources -* Chain multiple interactions -* Guide specific workflows -* Surface as UI elements (like slash commands) + messages.add( + MessageParam.builder() + .role(MessageParam.Role.USER) + .content( + """ + "type": "tool_result", + "tool_name": $toolName, + "result": ${result?.content?.joinToString("\n") { (it as TextContent).text ?: "" }} + """.trimIndent() + ) + .build() + ) -## Prompt structure + val aiResponse = anthropic.messages().create( + messageParamsBuilder + .messages(messages) + .build() + ) -Each prompt is defined with: + finalText.add(aiResponse.content().first().text().getOrNull()?.text() ?: "") + } + } + } -```typescript -{ - name: string; // Unique identifier for the prompt - description?: string; // Human-readable description - arguments?: [ // Optional list of arguments - { - name: string; // Argument identifier - description?: string; // Argument description - required?: boolean; // Whether argument is required + return finalText.joinToString("\n", prefix = "", postfix = "") } - ] -} -``` + ``` -## Discovering prompts + ### Interactive chat -Clients can discover available prompts through the `prompts/list` endpoint: + We'll add the chat loop: -```typescript -// Request -{ - method: "prompts/list" -} + ```kotlin theme={null} + suspend fun chatLoop() { + println("\nMCP Client Started!") + println("Type your queries or 'quit' to exit.") -// Response -{ - 
prompts: [ - { - name: "analyze-code", - description: "Analyze code for potential improvements", - arguments: [ - { - name: "language", - description: "Programming language", - required: true + while (true) { + print("\nQuery: ") + val message = readLine() ?: break + if (message.lowercase() == "quit") break + val response = processQuery(message) + println("\n$response") } - ] } - ] -} -``` + ``` -## Using prompts + ### Main entry point -To use a prompt, clients make a `prompts/get` request: + Finally, we'll add the main execution function: -````typescript -// Request -{ - method: "prompts/get", - params: { - name: "analyze-code", - arguments: { - language: "python" + ```kotlin theme={null} + fun main(args: Array) = runBlocking { + if (args.isEmpty()) throw IllegalArgumentException("Usage: java -jar /build/libs/kotlin-mcp-client-0.1.0-all.jar ") + val serverPath = args.first() + val client = MCPClient() + client.use { + client.connectToServer(serverPath) + client.chatLoop() + } } - } -} + ``` -// Response -{ - description: "Analyze Python code for potential improvements", - messages: [ - { - role: "user", - content: { - type: "text", - text: "Please analyze the following Python code for potential improvements:\n\n```python\ndef calculate_sum(numbers):\n total = 0\n for num in numbers:\n total = total + num\n return total\n\nresult = calculate_sum([1, 2, 3, 4, 5])\nprint(result)\n```" - } - } - ] -} -```` + ## Running the client -## Dynamic prompts + To run your client with any MCP server: -Prompts can be dynamic and include: + ```bash theme={null} + ./gradlew build -### Embedded resource context + # Run the client + java -jar build/libs/.jar path/to/server.jar # jvm server + java -jar build/libs/.jar path/to/server.py # python server + java -jar build/libs/.jar path/to/build/index.js # node server + ``` -```json -{ - "name": "analyze-project", - "description": "Analyze project logs and code", - "arguments": [ - { - "name": "timeframe", - "description": "Time period 
to analyze logs", - "required": true - }, - { - "name": "fileUri", - "description": "URI of code file to review", - "required": true - } - ] -} -``` - -When handling the `prompts/get` request: + + If you're continuing the weather tutorial from the server quickstart, your command might look something like this: `java -jar build/libs/kotlin-mcp-client-0.1.0-all.jar .../samples/weather-stdio-server/build/libs/weather-stdio-server-0.1.0-all.jar` + -```json -{ - "messages": [ - { - "role": "user", - "content": { - "type": "text", - "text": "Analyze these system logs and the code file for any issues:" - } - }, - { - "role": "user", - "content": { - "type": "resource", - "resource": { - "uri": "logs://recent?timeframe=1h", - "text": "[2024-03-14 15:32:11] ERROR: Connection timeout in network.py:127\n[2024-03-14 15:32:15] WARN: Retrying connection (attempt 2/3)\n[2024-03-14 15:32:20] ERROR: Max retries exceeded", - "mimeType": "text/plain" - } - } - }, - { - "role": "user", - "content": { - "type": "resource", - "resource": { - "uri": "file:///path/to/code.py", - "text": "def connect_to_service(timeout=30):\n retries = 3\n for attempt in range(retries):\n try:\n return establish_connection(timeout)\n except TimeoutError:\n if attempt == retries - 1:\n raise\n time.sleep(5)\n\ndef establish_connection(timeout):\n # Connection implementation\n pass", - "mimeType": "text/x-python" - } - } - } - ] -} -``` + **The client will:** -### Multi-step workflows + 1. Connect to the specified server + 2. List available tools + 3. Start an interactive chat session where you can: + * Enter queries + * See tool executions + * Get responses from Claude -```typescript -const debugWorkflow = { - name: "debug-error", - async getMessages(error: string) { - return [ - { - role: "user", - content: { - type: "text", - text: `Here's an error I'm seeing: ${error}` - } - }, - { - role: "assistant", - content: { - type: "text", - text: "I'll help analyze this error. What have you tried so far?" 
- } - }, - { - role: "user", - content: { - type: "text", - text: "I've tried restarting the service, but the error persists." - } - } - ]; - } -}; -``` + ## How it works -## Example implementation + Here's a high-level workflow schema: -Here's a complete example of implementing prompts in an MCP server: + ```mermaid theme={null} + --- + config: + theme: neutral + --- + sequenceDiagram + actor User + participant Client + participant Claude + participant MCP_Server as MCP Server + participant Tools - - - ```typescript - import { Server } from "@modelcontextprotocol/sdk/server"; - import { - ListPromptsRequestSchema, - GetPromptRequestSchema - } from "@modelcontextprotocol/sdk/types"; - - const PROMPTS = { - "git-commit": { - name: "git-commit", - description: "Generate a Git commit message", - arguments: [ - { - name: "changes", - description: "Git diff or description of changes", - required: true - } - ] - }, - "explain-code": { - name: "explain-code", - description: "Explain how code works", - arguments: [ - { - name: "code", - description: "Code to explain", - required: true - }, - { - name: "language", - description: "Programming language", - required: false - } - ] - } - }; + User->>Client: Send query + Client<<->>MCP_Server: Get available tools + Client->>Claude: Send query with tool descriptions + Claude-->>Client: Decide tool execution + Client->>MCP_Server: Request tool execution + MCP_Server->>Tools: Execute chosen tools + Tools-->>MCP_Server: Return results + MCP_Server-->>Client: Send results + Client->>Claude: Send tool results + Claude-->>Client: Provide final response + Client-->>User: Display response + ``` - const server = new Server({ - name: "example-prompts-server", - version: "1.0.0" - }, { - capabilities: { - prompts: {} - } - }); + When you submit a query: - // List available prompts - server.setRequestHandler(ListPromptsRequestSchema, async () => { - return { - prompts: Object.values(PROMPTS) - }; - }); + 1. 
The client gets the list of available tools from the server + 2. Your query is sent to Claude along with tool descriptions + 3. Claude decides which tools (if any) to use + 4. The client executes any requested tool calls through the server + 5. Results are sent back to Claude + 6. Claude provides a natural language response + 7. The response is displayed to you - // Get specific prompt - server.setRequestHandler(GetPromptRequestSchema, async (request) => { - const prompt = PROMPTS[request.params.name]; - if (!prompt) { - throw new Error(`Prompt not found: ${request.params.name}`); - } + ## Best practices - if (request.params.name === "git-commit") { - return { - messages: [ - { - role: "user", - content: { - type: "text", - text: `Generate a concise but descriptive commit message for these changes:\n\n${request.params.arguments?.changes}` - } - } - ] - }; - } + 1. **Error Handling** + * Leverage Kotlin's type system to model errors explicitly + * Wrap external tool and API calls in `try-catch` blocks when exceptions are possible + * Provide clear and meaningful error messages + * Handle network timeouts and connection issues gracefully - if (request.params.name === "explain-code") { - const language = request.params.arguments?.language || "Unknown"; - return { - messages: [ - { - role: "user", - content: { - type: "text", - text: `Explain how this ${language} code works:\n\n${request.params.arguments?.code}` - } - } - ] - }; - } + 2. 
**Security** + * Store API keys and secrets securely in `local.properties`, environment variables, or secret managers + * Validate all external responses to avoid unexpected or unsafe data usage + * Be cautious with permissions and trust boundaries when using tools - throw new Error("Prompt implementation not found"); - }); - ``` - + ## Troubleshooting - - ```python - from mcp.server import Server - import mcp.types as types - - # Define available prompts - PROMPTS = { - "git-commit": types.Prompt( - name="git-commit", - description="Generate a Git commit message", - arguments=[ - types.PromptArgument( - name="changes", - description="Git diff or description of changes", - required=True - ) - ], - ), - "explain-code": types.Prompt( - name="explain-code", - description="Explain how code works", - arguments=[ - types.PromptArgument( - name="code", - description="Code to explain", - required=True - ), - types.PromptArgument( - name="language", - description="Programming language", - required=False - ) - ], - ) - } + ### Server Path Issues - # Initialize server - app = Server("example-prompts-server") - - @app.list_prompts() - async def list_prompts() -> list[types.Prompt]: - return list(PROMPTS.values()) - - @app.get_prompt() - async def get_prompt( - name: str, arguments: dict[str, str] | None = None - ) -> types.GetPromptResult: - if name not in PROMPTS: - raise ValueError(f"Prompt not found: {name}") - - if name == "git-commit": - changes = arguments.get("changes") if arguments else "" - return types.GetPromptResult( - messages=[ - types.PromptMessage( - role="user", - content=types.TextContent( - type="text", - text=f"Generate a concise but descriptive commit message " - f"for these changes:\n\n{changes}" - ) - ) - ] - ) + * Double-check the path to your server script is correct + * Use the absolute path if the relative path isn't working + * For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path + * Make sure that the 
required runtime is installed (java for Java, npm for Node.js, or uv for Python) + * Verify the server file has the correct extension (.jar for Java, .js for Node.js or .py for Python) - if name == "explain-code": - code = arguments.get("code") if arguments else "" - language = arguments.get("language", "Unknown") if arguments else "Unknown" - return types.GetPromptResult( - messages=[ - types.PromptMessage( - role="user", - content=types.TextContent( - type="text", - text=f"Explain how this {language} code works:\n\n{code}" - ) - ) - ] - ) + Example of correct path usage: - raise ValueError("Prompt implementation not found") - ``` - - + ```bash theme={null} + # Relative path + java -jar build/libs/client.jar ./server/build/libs/server.jar -## Best practices + # Absolute path + java -jar build/libs/client.jar /Users/username/projects/mcp-server/build/libs/server.jar -When implementing prompts: + # Windows path (either format works) + java -jar build/libs/client.jar C:/projects/mcp-server/build/libs/server.jar + java -jar build/libs/client.jar C:\\projects\\mcp-server\\build\\libs\\server.jar + ``` -1. Use clear, descriptive prompt names -2. Provide detailed descriptions for prompts and arguments -3. Validate all required arguments -4. Handle missing arguments gracefully -5. Consider versioning for prompt templates -6. Cache dynamic content when appropriate -7. Implement error handling -8. Document expected argument formats -9. Consider prompt composability -10. 
Test prompts with various inputs + ### Response Timing -## UI integration + * The first response might take up to 30 seconds to return + * This is normal and happens while: + * The server initializes + * Claude processes the query + * Tools are being executed + * Subsequent responses are typically faster + * Don't interrupt the process during this initial waiting period -Prompts can be surfaced in client UIs as: + ### Common Error Messages -* Slash commands -* Quick actions -* Context menu items -* Command palette entries -* Guided workflows -* Interactive forms + If you see: -## Updates and changes + * `Connection refused`: Ensure the server is running and the path is correct + * `Tool execution failed`: Verify the tool's required environment variables are set + * `ANTHROPIC_API_KEY is not set`: Check your environment variables + -Servers can notify clients about prompt changes: + + [You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/csharp-sdk/tree/main/samples/QuickstartClient) -1. Server capability: `prompts.listChanged` -2. Notification: `notifications/prompts/list_changed` -3. 
Client re-fetches prompt list + ## System Requirements -## Security considerations + Before starting, ensure your system meets these requirements: -When implementing prompts: + * .NET 8.0 or higher + * Anthropic API key (Claude) + * Windows, Linux, or macOS -* Validate all arguments -* Sanitize user input -* Consider rate limiting -* Implement access controls -* Audit prompt usage -* Handle sensitive data appropriately -* Validate generated content -* Implement timeouts -* Consider prompt injection risks -* Document security requirements + ## Setting up your environment + First, create a new .NET project: -# Resources -Source: https://modelcontextprotocol.io/docs/concepts/resources + ```bash theme={null} + dotnet new console -n QuickstartClient + cd QuickstartClient + ``` -Expose data and content from your servers to LLMs + Then, add the required dependencies to your project: -Resources are a core primitive in the Model Context Protocol (MCP) that allow servers to expose data and content that can be read by clients and used as context for LLM interactions. + ```bash theme={null} + dotnet add package ModelContextProtocol --prerelease + dotnet add package Anthropic.SDK + dotnet add package Microsoft.Extensions.Hosting + dotnet add package Microsoft.Extensions.AI + ``` - - Resources are designed to be **application-controlled**, meaning that the client application can decide how and when they should be used. - Different MCP clients may handle resources differently. For example: + ## Setting up your API key - * Claude Desktop currently requires users to explicitly select resources before they can be used - * Other clients might automatically select resources based on heuristics - * Some implementations may even allow the AI model itself to determine which resources to use + You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys). 
- Server authors should be prepared to handle any of these interaction patterns when implementing resource support. In order to expose data to models automatically, server authors should use a **model-controlled** primitive such as [Tools](./tools). - + ```bash theme={null} + dotnet user-secrets init + dotnet user-secrets set "ANTHROPIC_API_KEY" "" + ``` -## Overview + ## Creating the Client -Resources represent any kind of data that an MCP server wants to make available to clients. This can include: + ### Basic Client Structure -* File contents -* Database records -* API responses -* Live system data -* Screenshots and images -* Log files -* And more + First, let's setup the basic client class in the file `Program.cs`: -Each resource is identified by a unique URI and can contain either text or binary data. + ```csharp theme={null} + using Anthropic.SDK; + using Microsoft.Extensions.AI; + using Microsoft.Extensions.Configuration; + using Microsoft.Extensions.Hosting; + using ModelContextProtocol.Client; + using ModelContextProtocol.Protocol.Transport; -## Resource URIs + var builder = Host.CreateApplicationBuilder(args); -Resources are identified using URIs that follow this format: + builder.Configuration + .AddEnvironmentVariables() + .AddUserSecrets(); + ``` -``` -[protocol]://[host]/[path] -``` + This creates the beginnings of a .NET console application that can read the API key from user secrets. -For example: + Next, we'll setup the MCP Client: -* `file:///home/user/documents/report.pdf` -* `postgres://database/customers/schema` -* `screen://localhost/display1` + ```csharp theme={null} + var (command, arguments) = GetCommandAndArguments(args); -The protocol and path structure is defined by the MCP server implementation. Servers can define their own custom URI schemes. 
+ var clientTransport = new StdioClientTransport(new() + { + Name = "Demo Server", + Command = command, + Arguments = arguments, + }); -## Resource types + await using var mcpClient = await McpClient.CreateAsync(clientTransport); -Resources can contain two types of content: + var tools = await mcpClient.ListToolsAsync(); + foreach (var tool in tools) + { + Console.WriteLine($"Connected to server with tools: {tool.Name}"); + } + ``` -### Text resources + Add this function at the end of the `Program.cs` file: -Text resources contain UTF-8 encoded text data. These are suitable for: + ```csharp theme={null} + static (string command, string[] arguments) GetCommandAndArguments(string[] args) + { + return args switch + { + [var script] when script.EndsWith(".py") => ("python", args), + [var script] when script.EndsWith(".js") => ("node", args), + [var script] when Directory.Exists(script) || (File.Exists(script) && script.EndsWith(".csproj")) => ("dotnet", ["run", "--project", script, "--no-build"]), + _ => throw new NotSupportedException("An unsupported server script was provided. Supported scripts are .py, .js, or .csproj") + }; + } + ``` -* Source code -* Configuration files -* Log files -* JSON/XML data -* Plain text + This creates an MCP client that will connect to a server that is provided as a command line argument. It then lists the available tools from the connected server. -### Binary resources + ### Query processing logic -Binary resources contain raw binary data encoded in base64. 
These are suitable for: + Now let's add the core functionality for processing queries and handling tool calls: -* Images -* PDFs -* Audio files -* Video files -* Other non-text formats + ```csharp theme={null} + using var anthropicClient = new AnthropicClient(new APIAuthentication(builder.Configuration["ANTHROPIC_API_KEY"])) + .Messages + .AsBuilder() + .UseFunctionInvocation() + .Build(); -## Resource discovery + var options = new ChatOptions + { + MaxOutputTokens = 1000, + ModelId = "claude-sonnet-4-20250514", + Tools = [.. tools] + }; -Clients can discover available resources through two main methods: + Console.ForegroundColor = ConsoleColor.Green; + Console.WriteLine("MCP Client Started!"); + Console.ResetColor(); -### Direct resources - -Servers expose a list of concrete resources via the `resources/list` endpoint. Each resource includes: - -```typescript -{ - uri: string; // Unique identifier for the resource - name: string; // Human-readable name - description?: string; // Optional description - mimeType?: string; // Optional MIME type -} -``` - -### Resource templates - -For dynamic resources, servers can expose [URI templates](https://datatracker.ietf.org/doc/html/rfc6570) that clients can use to construct valid resource URIs: - -```typescript -{ - uriTemplate: string; // URI template following RFC 6570 - name: string; // Human-readable name for this type - description?: string; // Optional description - mimeType?: string; // Optional MIME type for all matching resources -} -``` - -## Reading resources + PromptForInput(); + while(Console.ReadLine() is string query && !"exit".Equals(query, StringComparison.OrdinalIgnoreCase)) + { + if (string.IsNullOrWhiteSpace(query)) + { + PromptForInput(); + continue; + } -To read a resource, clients make a `resources/read` request with the resource URI. 
+ await foreach (var message in anthropicClient.GetStreamingResponseAsync(query, options)) + { + Console.Write(message); + } + Console.WriteLine(); -The server responds with a list of resource contents: + PromptForInput(); + } -```typescript -{ - contents: [ + static void PromptForInput() { - uri: string; // The URI of the resource - mimeType?: string; // Optional MIME type - - // One of: - text?: string; // For text resources - blob?: string; // For binary resources (base64 encoded) + Console.WriteLine("Enter a command (or 'exit' to quit):"); + Console.ForegroundColor = ConsoleColor.Cyan; + Console.Write("> "); + Console.ResetColor(); } - ] -} -``` + ``` - - Servers may return multiple resources in response to one `resources/read` request. This could be used, for example, to return a list of files inside a directory when the directory is read. - + ## Key Components Explained -## Resource updates + ### 1. Client Initialization -MCP supports real-time updates for resources through two mechanisms: + * The client is initialized using `McpClient.CreateAsync()`, which sets up the transport type and command to run the server. -### List changes + ### 2. Server Connection -Servers can notify clients when their list of available resources changes via the `notifications/resources/list_changed` notification. + * Supports Python, Node.js, and .NET servers. + * The server is started using the command specified in the arguments. + * Configures to use stdio for communication with the server. + * Initializes the session and available tools. -### Content changes + ### 3. Query Processing -Clients can subscribe to updates for specific resources: + * Leverages [Microsoft.Extensions.AI](https://learn.microsoft.com/dotnet/ai/ai-extensions) for the chat client. + * Configures the `IChatClient` to use automatic tool (function) invocation. + * The client reads user input and sends it to the server. + * The server processes the query and returns a response. 
+ * The response is displayed to the user. -1. Client sends `resources/subscribe` with resource URI -2. Server sends `notifications/resources/updated` when the resource changes -3. Client can fetch latest content with `resources/read` -4. Client can unsubscribe with `resources/unsubscribe` + ## Running the Client -## Example implementation + To run your client with any MCP server: -Here's a simple example of implementing resource support in an MCP server: + ```bash theme={null} + dotnet run -- path/to/server.csproj # dotnet server + dotnet run -- path/to/server.py # python server + dotnet run -- path/to/server.js # node server + ``` - - - ```typescript - const server = new Server({ - name: "example-server", - version: "1.0.0" - }, { - capabilities: { - resources: {} - } - }); + + If you're continuing the weather tutorial from the server quickstart, your command might look something like this: `dotnet run -- path/to/QuickstartWeatherServer`. + - // List available resources - server.setRequestHandler(ListResourcesRequestSchema, async () => { - return { - resources: [ - { - uri: "file:///logs/app.log", - name: "Application Logs", - mimeType: "text/plain" - } - ] - }; - }); + The client will: - // Read resource contents - server.setRequestHandler(ReadResourceRequestSchema, async (request) => { - const uri = request.params.uri; + 1. Connect to the specified server + 2. List available tools + 3. Start an interactive chat session where you can: + * Enter queries + * See tool executions + * Get responses from Claude + 4. 
Exit the session when done - if (uri === "file:///logs/app.log") { - const logContents = await readLogFile(); - return { - contents: [ - { - uri, - mimeType: "text/plain", - text: logContents - } - ] - }; - } + Here's an example of what it should look like if connected to the weather server quickstart: - throw new Error("Resource not found"); - }); - ``` + + + + - - ```python - app = Server("example-server") - - @app.list_resources() - async def list_resources() -> list[types.Resource]: - return [ - types.Resource( - uri="file:///logs/app.log", - name="Application Logs", - mimeType="text/plain" - ) - ] +## Next steps - @app.read_resource() - async def read_resource(uri: AnyUrl) -> str: - if str(uri) == "file:///logs/app.log": - log_contents = await read_log_file() - return log_contents + + + Check out our gallery of official MCP servers and implementations + - raise ValueError("Resource not found") + + View the list of clients that support MCP integrations + + - # Start server - async with stdio_server() as streams: - await app.run( - streams[0], - streams[1], - app.create_initialization_options() - ) - ``` - - -## Best practices +# Build an MCP server +Source: https://modelcontextprotocol.io/docs/develop/build-server -When implementing resource support: +Get started building your own server to use in Claude for Desktop and other clients. -1. Use clear, descriptive resource names and URIs -2. Include helpful descriptions to guide LLM understanding -3. Set appropriate MIME types when known -4. Implement resource templates for dynamic content -5. Use subscriptions for frequently changing resources -6. Handle errors gracefully with clear error messages -7. Consider pagination for large resource lists -8. Cache resource contents when appropriate -9. Validate URIs before processing -10. Document your custom URI schemes +In this tutorial, we'll build a simple MCP weather server and connect it to a host, Claude for Desktop. 
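The resource best practices above recommend validating URIs before processing. A minimal sketch of such a check, assuming an allow-list of schemes that a hypothetical server chooses to support:

```python
from urllib.parse import urlparse

# Assumption: this particular server only serves file and https resources.
ALLOWED_SCHEMES = {"file", "https"}


def is_valid_resource_uri(uri: str) -> bool:
    """Reject unknown schemes and path traversal before reading a resource."""
    parsed = urlparse(uri)
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False
    # Guard against directory traversal in file URIs.
    if parsed.scheme == "file" and ".." in parsed.path.split("/"):
        return False
    return True


print(is_valid_resource_uri("file:///logs/app.log"))        # True
print(is_valid_resource_uri("file:///logs/../etc/passwd"))  # False
```

A real server would layer access controls on top of this; the scheme list here is an assumption for illustration only.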
-## Security considerations +### What we'll be building -When exposing resources: +We'll build a server that exposes two tools: `get_alerts` and `get_forecast`. Then we'll connect the server to an MCP host (in this case, Claude for Desktop): -* Validate all resource URIs -* Implement appropriate access controls -* Sanitize file paths to prevent directory traversal -* Be cautious with binary data handling -* Consider rate limiting for resource reads -* Audit resource access -* Encrypt sensitive data in transit -* Validate MIME types -* Implement timeouts for long-running reads -* Handle resource cleanup appropriately + + + + + Servers can connect to any client. We've chosen Claude for Desktop here for simplicity, but we also have guides on [building your own client](/docs/develop/build-client) as well as a [list of other clients here](/clients). + -# Roots -Source: https://modelcontextprotocol.io/docs/concepts/roots +### Core MCP Concepts -Understanding roots in MCP +MCP servers can provide three main types of capabilities: -Roots are a concept in MCP that define the boundaries where servers can operate. They provide a way for clients to inform servers about relevant resources and their locations. +1. **[Resources](/docs/learn/server-concepts#resources)**: File-like data that can be read by clients (like API responses or file contents) +2. **[Tools](/docs/learn/server-concepts#tools)**: Functions that can be called by the LLM (with user approval) +3. **[Prompts](/docs/learn/server-concepts#prompts)**: Pre-written templates that help users accomplish specific tasks -## What are Roots? +This tutorial will primarily focus on tools. -A root is a URI that a client suggests a server should focus on. When a client connects to a server, it declares which roots the server should work with. While primarily used for filesystem paths, roots can be any valid URI including HTTP URLs. + + + Let's get started with building our weather server! 
[You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-python) -For example, roots could be: + ### Prerequisite knowledge -``` -file:///home/user/projects/myapp -https://api.example.com/v1 -``` + This quickstart assumes you have familiarity with: -## Why Use Roots? + * Python + * LLMs like Claude -Roots serve several important purposes: + ### Logging in MCP Servers -1. **Guidance**: They inform servers about relevant resources and locations -2. **Clarity**: Roots make it clear which resources are part of your workspace -3. **Organization**: Multiple roots let you work with different resources simultaneously + When implementing MCP servers, be careful about how you handle logging: -## How Roots Work + **For STDIO-based servers:** Never write to standard output (stdout). This includes: -When a client supports roots, it: + * `print()` statements in Python + * `console.log()` in JavaScript + * `fmt.Println()` in Go + * Similar stdout functions in other languages -1. Declares the `roots` capability during connection -2. Provides a list of suggested roots to the server -3. Notifies the server when roots change (if supported) + Writing to stdout will corrupt the JSON-RPC messages and break your server. -While roots are informational and not strictly enforcing, servers should: + **For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses. -1. Respect the provided roots -2. Use root URIs to locate and access resources -3. Prioritize operations within root boundaries + ### Best Practices -## Common Use Cases + 1. Use a logging library that writes to stderr or files. + 2. For Python, be especially careful - `print()` writes to stdout by default. 
-Roots are commonly used to define: + ### Quick Examples -* Project directories -* Repository locations -* API endpoints -* Configuration locations -* Resource boundaries + ```python theme={null} + # ❌ Bad (STDIO) + print("Processing request") -## Best Practices + # ✅ Good (STDIO) + import logging + logging.info("Processing request") + ``` -When working with roots: + ### System requirements -1. Only suggest necessary resources -2. Use clear, descriptive names for roots -3. Monitor root accessibility -4. Handle root changes gracefully + * Python 3.10 or higher installed. + * You must use the Python MCP SDK 1.2.0 or higher. -## Example + ### Set up your environment -Here's how a typical MCP client might expose roots: + First, let's install `uv` and set up our Python project and environment: -```json -{ - "roots": [ - { - "uri": "file:///home/user/projects/frontend", - "name": "Frontend Repository" - }, - { - "uri": "https://api.example.com/v1", - "name": "API Endpoint" - } - ] -} -``` + + ```bash macOS/Linux theme={null} + curl -LsSf https://astral.sh/uv/install.sh | sh + ``` -This configuration suggests the server focus on both a local repository and an API endpoint while keeping them logically separated. + ```powershell Windows theme={null} + powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex" + ``` + + Make sure to restart your terminal afterwards to ensure that the `uv` command gets picked up. -# Sampling -Source: https://modelcontextprotocol.io/docs/concepts/sampling + Now, let's create and set up our project: -Let your servers request completions from LLMs + + ```bash macOS/Linux theme={null} + # Create a new directory for our project + uv init weather + cd weather -Sampling is a powerful MCP feature that allows servers to request LLM completions through the client, enabling sophisticated agentic behaviors while maintaining security and privacy. 
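Concretely, a server-initiated sampling request pairs a conversation with model preferences and a required token cap. A sketch of its shape as a plain Python dictionary (the prompt text and model hint are illustrative; the client may still modify or reject all of it):

```python
# Shape of a `sampling/createMessage` request, expressed as a plain dict.
create_message = {
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user", "content": {"type": "text", "text": "Summarize the logs"}}
        ],
        "modelPreferences": {
            "hints": [{"name": "claude-3"}],  # partial model-family match
            "speedPriority": 0.8,             # 0-1, importance of low latency
        },
        "maxTokens": 100,  # required; caps the generated completion
    },
}
```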
+ # Create virtual environment and activate it + uv venv + source .venv/bin/activate - - This feature of MCP is not yet supported in the Claude Desktop client. - + # Install dependencies + uv add "mcp[cli]" httpx -## How sampling works + # Create our server file + touch weather.py + ``` -The sampling flow follows these steps: + ```powershell Windows theme={null} + # Create a new directory for our project + uv init weather + cd weather -1. Server sends a `sampling/createMessage` request to the client -2. Client reviews the request and can modify it -3. Client samples from an LLM -4. Client reviews the completion -5. Client returns the result to the server + # Create virtual environment and activate it + uv venv + .venv\Scripts\activate -This human-in-the-loop design ensures users maintain control over what the LLM sees and generates. + # Install dependencies + uv add mcp[cli] httpx -## Message format + # Create our server file + new-item weather.py + ``` + -Sampling requests use a standardized message format: + Now let's dive into building your server. 
-```typescript -{ - messages: [ - { - role: "user" | "assistant", - content: { - type: "text" | "image", + ## Building your server - // For text: - text?: string, + ### Importing packages and setting up the instance - // For images: - data?: string, // base64 encoded - mimeType?: string - } - } - ], - modelPreferences?: { - hints?: [{ - name?: string // Suggested model name/family - }], - costPriority?: number, // 0-1, importance of minimizing cost - speedPriority?: number, // 0-1, importance of low latency - intelligencePriority?: number // 0-1, importance of capabilities - }, - systemPrompt?: string, - includeContext?: "none" | "thisServer" | "allServers", - temperature?: number, - maxTokens: number, - stopSequences?: string[], - metadata?: Record -} -``` + Add these to the top of your `weather.py`: -## Request parameters + ```python theme={null} + from typing import Any -### Messages + import httpx + from mcp.server.fastmcp import FastMCP -The `messages` array contains the conversation history to send to the LLM. Each message has: + # Initialize FastMCP server + mcp = FastMCP("weather") -* `role`: Either "user" or "assistant" -* `content`: The message content, which can be: - * Text content with a `text` field - * Image content with `data` (base64) and `mimeType` fields + # Constants + NWS_API_BASE = "https://api.weather.gov" + USER_AGENT = "weather-app/1.0" + ``` -### Model preferences + The FastMCP class uses Python type hints and docstrings to automatically generate tool definitions, making it easy to create and maintain MCP tools. -The `modelPreferences` object allows servers to specify their model selection preferences: + ### Helper functions -* `hints`: Array of model name suggestions that clients can use to select an appropriate model: - * `name`: String that can match full or partial model names (e.g. 
"claude-3", "sonnet") - * Clients may map hints to equivalent models from different providers - * Multiple hints are evaluated in preference order + Next, let's add our helper functions for querying and formatting the data from the National Weather Service API: -* Priority values (0-1 normalized): - * `costPriority`: Importance of minimizing costs - * `speedPriority`: Importance of low latency response - * `intelligencePriority`: Importance of advanced model capabilities + ```python theme={null} + async def make_nws_request(url: str) -> dict[str, Any] | None: + """Make a request to the NWS API with proper error handling.""" + headers = {"User-Agent": USER_AGENT, "Accept": "application/geo+json"} + async with httpx.AsyncClient() as client: + try: + response = await client.get(url, headers=headers, timeout=30.0) + response.raise_for_status() + return response.json() + except Exception: + return None -Clients make the final model selection based on these preferences and their available models. -### System prompt + def format_alert(feature: dict) -> str: + """Format an alert feature into a readable string.""" + props = feature["properties"] + return f""" + Event: {props.get("event", "Unknown")} + Area: {props.get("areaDesc", "Unknown")} + Severity: {props.get("severity", "Unknown")} + Description: {props.get("description", "No description available")} + Instructions: {props.get("instruction", "No specific instructions provided")} + """ + ``` -An optional `systemPrompt` field allows servers to request a specific system prompt. The client may modify or ignore this. + ### Implementing tool execution -### Context inclusion + The tool execution handler is responsible for actually executing the logic of each tool. Let's add it: -The `includeContext` parameter specifies what MCP context to include: + ```python theme={null} + @mcp.tool() + async def get_alerts(state: str) -> str: + """Get weather alerts for a US state. 
-* `"none"`: No additional context -* `"thisServer"`: Include context from the requesting server -* `"allServers"`: Include context from all connected MCP servers + Args: + state: Two-letter US state code (e.g. CA, NY) + """ + url = f"{NWS_API_BASE}/alerts/active/area/{state}" + data = await make_nws_request(url) -The client controls what context is actually included. + if not data or "features" not in data: + return "Unable to fetch alerts or no alerts found." -### Sampling parameters + if not data["features"]: + return "No active alerts for this state." -Fine-tune the LLM sampling with: + alerts = [format_alert(feature) for feature in data["features"]] + return "\n---\n".join(alerts) -* `temperature`: Controls randomness (0.0 to 1.0) -* `maxTokens`: Maximum tokens to generate -* `stopSequences`: Array of sequences that stop generation -* `metadata`: Additional provider-specific parameters -## Response format + @mcp.tool() + async def get_forecast(latitude: float, longitude: float) -> str: + """Get weather forecast for a location. -The client returns a completion result: + Args: + latitude: Latitude of the location + longitude: Longitude of the location + """ + # First get the forecast grid endpoint + points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}" + points_data = await make_nws_request(points_url) -```typescript -{ - model: string, // Name of the model used - stopReason?: "endTurn" | "stopSequence" | "maxTokens" | string, - role: "user" | "assistant", - content: { - type: "text" | "image", - text?: string, - data?: string, - mimeType?: string - } -} -``` + if not points_data: + return "Unable to fetch forecast data for this location." -## Example request + # Get the forecast URL from the points response + forecast_url = points_data["properties"]["forecast"] + forecast_data = await make_nws_request(forecast_url) -Here's an example of requesting sampling from a client: + if not forecast_data: + return "Unable to fetch detailed forecast." 
-```json -{ - "method": "sampling/createMessage", - "params": { - "messages": [ - { - "role": "user", - "content": { - "type": "text", - "text": "What files are in the current directory?" - } - } - ], - "systemPrompt": "You are a helpful file system assistant.", - "includeContext": "thisServer", - "maxTokens": 100 - } -} -``` - -## Best practices - -When implementing sampling: - -1. Always provide clear, well-structured prompts -2. Handle both text and image content appropriately -3. Set reasonable token limits -4. Include relevant context through `includeContext` -5. Validate responses before using them -6. Handle errors gracefully -7. Consider rate limiting sampling requests -8. Document expected sampling behavior -9. Test with various model parameters -10. Monitor sampling costs - -## Human in the loop controls - -Sampling is designed with human oversight in mind: + # Format the periods into a readable forecast + periods = forecast_data["properties"]["periods"] + forecasts = [] + for period in periods[:5]: # Only show next 5 periods + forecast = f""" + {period["name"]}: + Temperature: {period["temperature"]}°{period["temperatureUnit"]} + Wind: {period["windSpeed"]} {period["windDirection"]} + Forecast: {period["detailedForecast"]} + """ + forecasts.append(forecast) -### For prompts + return "\n---\n".join(forecasts) + ``` -* Clients should show users the proposed prompt -* Users should be able to modify or reject prompts -* System prompts can be filtered or modified -* Context inclusion is controlled by the client + ### Running the server -### For completions + Finally, let's initialize and run the server: -* Clients should show users the completion -* Users should be able to modify or reject completions -* Clients can filter or modify completions -* Users control which model is used + ```python theme={null} + def main(): + # Initialize and run the server + mcp.run(transport="stdio") -## Security considerations -When implementing sampling: + if __name__ == 
"__main__": + main() + ``` -* Validate all message content -* Sanitize sensitive information -* Implement appropriate rate limits -* Monitor sampling usage -* Encrypt data in transit -* Handle user data privacy -* Audit sampling requests -* Control cost exposure -* Implement timeouts -* Handle model errors gracefully + Your server is complete! Run `uv run weather.py` to start the MCP server, which will listen for messages from MCP hosts. -## Common patterns + Let's now test your server from an existing MCP host, Claude for Desktop. -### Agentic workflows + ## Testing your server with Claude for Desktop -Sampling enables agentic patterns like: + + Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built. + -* Reading and analyzing resources -* Making decisions based on context -* Generating structured data -* Handling multi-step tasks -* Providing interactive assistance + First, make sure you have Claude for Desktop installed. [You can install the latest version + here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.** -### Context management + We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist. 
-Best practices for context: + For example, if you have [VS Code](https://code.visualstudio.com/) installed: -* Request minimal necessary context -* Structure context clearly -* Handle context size limits -* Update context as needed -* Clean up stale context + + ```bash macOS/Linux theme={null} + code ~/Library/Application\ Support/Claude/claude_desktop_config.json + ``` -### Error handling + ```powershell Windows theme={null} + code $env:AppData\Claude\claude_desktop_config.json + ``` + -Robust error handling should: + You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured. -* Catch sampling failures -* Handle timeout errors -* Manage rate limits -* Validate responses -* Provide fallback behaviors -* Log errors appropriately + In this case, we'll add our single weather server like so: -## Limitations + + ```json macOS/Linux theme={null} + { + "mcpServers": { + "weather": { + "command": "uv", + "args": [ + "--directory", + "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather", + "run", + "weather.py" + ] + } + } + } + ``` -Be aware of these limitations: + ```json Windows theme={null} + { + "mcpServers": { + "weather": { + "command": "uv", + "args": [ + "--directory", + "C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather", + "run", + "weather.py" + ] + } + } + } + ``` + -* Sampling depends on client capabilities -* Users control sampling behavior -* Context size has limits -* Rate limits may apply -* Costs should be considered -* Model availability varies -* Response times vary -* Not all content types supported + + You may need to put the full path to the `uv` executable in the `command` field. You can get this by running `which uv` on macOS/Linux or `where uv` on Windows. + + + Make sure you pass in the absolute path to your server. You can get this by running `pwd` on macOS/Linux or `cd` on Windows Command Prompt. 
On Windows, remember to use double backslashes (`\\`) or forward slashes (`/`) in the JSON path. + -# Tools -Source: https://modelcontextprotocol.io/docs/concepts/tools + This tells Claude for Desktop: -Enable LLMs to perform actions through your server + 1. There's an MCP server named "weather" + 2. To launch it by running `uv --directory /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather run weather.py` -Tools are a powerful primitive in the Model Context Protocol (MCP) that enable servers to expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world. + Save the file, and restart **Claude for Desktop**. + - - Tools are designed to be **model-controlled**, meaning that tools are exposed from servers to clients with the intention of the AI model being able to automatically invoke them (with a human in the loop to grant approval). - + + Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-typescript) -## Overview + ### Prerequisite knowledge -Tools in MCP allow servers to expose executable functions that can be invoked by clients and used by LLMs to perform actions. Key aspects of tools include: + This quickstart assumes you have familiarity with: -* **Discovery**: Clients can list available tools through the `tools/list` endpoint -* **Invocation**: Tools are called using the `tools/call` endpoint, where servers perform the requested operation and return results -* **Flexibility**: Tools can range from simple calculations to complex API interactions + * TypeScript + * LLMs like Claude -Like [resources](/docs/concepts/resources), tools are identified by unique names and can include descriptions to guide their usage. However, unlike resources, tools represent dynamic operations that can modify state or interact with external systems. 
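At the protocol level, discovery and invocation are two JSON-RPC calls. A sketch of their shapes as plain Python dictionaries, using an illustrative `calculate_sum` tool that adds two numbers:

```python
# Discovery: `tools/list` returns the tool definitions a client can
# surface to the model.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invocation: `tools/call` names a tool and supplies arguments that must
# conform to that tool's declared input schema.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "calculate_sum",
        "arguments": {"a": 2, "b": 3},
    },
}
```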
+ ### Logging in MCP Servers -## Tool definition structure + When implementing MCP servers, be careful about how you handle logging: -Each tool is defined with the following structure: + **For STDIO-based servers:** Never write to standard output (stdout). This includes: -```typescript -{ - name: string; // Unique identifier for the tool - description?: string; // Human-readable description - inputSchema: { // JSON Schema for the tool's parameters - type: "object", - properties: { ... } // Tool-specific parameters - }, - annotations?: { // Optional hints about tool behavior - title?: string; // Human-readable title for the tool - readOnlyHint?: boolean; // If true, the tool does not modify its environment - destructiveHint?: boolean; // If true, the tool may perform destructive updates - idempotentHint?: boolean; // If true, repeated calls with same args have no additional effect - openWorldHint?: boolean; // If true, tool interacts with external entities - } -} -``` + * `print()` statements in Python + * `console.log()` in JavaScript + * `fmt.Println()` in Go + * Similar stdout functions in other languages -## Implementing tools + Writing to stdout will corrupt the JSON-RPC messages and break your server. -Here's an example of implementing a basic tool in an MCP server: + **For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses. - - - ```typescript - const server = new Server({ - name: "example-server", - version: "1.0.0" - }, { - capabilities: { - tools: {} - } - }); + ### Best Practices - // Define available tools - server.setRequestHandler(ListToolsRequestSchema, async () => { - return { - tools: [{ - name: "calculate_sum", - description: "Add two numbers together", - inputSchema: { - type: "object", - properties: { - a: { type: "number" }, - b: { type: "number" } - }, - required: ["a", "b"] - } - }] - }; - }); + 1. Use a logging library that writes to stderr or files, such as `logging` in Python. + 2. 
For JavaScript, be especially careful - `console.log()` writes to stdout by default. - // Handle tool execution - server.setRequestHandler(CallToolRequestSchema, async (request) => { - if (request.params.name === "calculate_sum") { - const { a, b } = request.params.arguments; - return { - content: [ - { - type: "text", - text: String(a + b) - } - ] - }; - } - throw new Error("Tool not found"); - }); - ``` - + ### Quick Examples - - ```python - app = Server("example-server") - - @app.list_tools() - async def list_tools() -> list[types.Tool]: - return [ - types.Tool( - name="calculate_sum", - description="Add two numbers together", - inputSchema={ - "type": "object", - "properties": { - "a": {"type": "number"}, - "b": {"type": "number"} - }, - "required": ["a", "b"] - } - ) - ] + ```javascript theme={null} + // ❌ Bad (STDIO) + console.log("Server started"); - @app.call_tool() - async def call_tool( - name: str, - arguments: dict - ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]: - if name == "calculate_sum": - a = arguments["a"] - b = arguments["b"] - result = a + b - return [types.TextContent(type="text", text=str(result))] - raise ValueError(f"Tool not found: {name}") + // ✅ Good (STDIO) + console.error("Server started"); // stderr is safe ``` - - - -## Example tool patterns -Here are some examples of types of tools that a server could provide: + ### System requirements -### System operations + For TypeScript, make sure you have the latest version of Node installed. -Tools that interact with the local system: + ### Set up your environment -```typescript -{ - name: "execute_command", - description: "Run a shell command", - inputSchema: { - type: "object", - properties: { - command: { type: "string" }, - args: { type: "array", items: { type: "string" } } - } - } -} -``` + First, let's install Node.js and npm if you haven't already. You can download them from [nodejs.org](https://nodejs.org/). 
+ Verify your Node.js installation: -### API integrations + ```bash theme={null} + node --version + npm --version + ``` -Tools that wrap external APIs: + For this tutorial, you'll need Node.js version 16 or higher. -```typescript -{ - name: "github_create_issue", - description: "Create a GitHub issue", - inputSchema: { - type: "object", - properties: { - title: { type: "string" }, - body: { type: "string" }, - labels: { type: "array", items: { type: "string" } } - } - } -} -``` + Now, let's create and set up our project: -### Data processing + + ```bash macOS/Linux theme={null} + # Create a new directory for our project + mkdir weather + cd weather -Tools that transform or analyze data: + # Initialize a new npm project + npm init -y -```typescript -{ - name: "analyze_csv", - description: "Analyze a CSV file", - inputSchema: { - type: "object", - properties: { - filepath: { type: "string" }, - operations: { - type: "array", - items: { - enum: ["sum", "average", "count"] - } - } - } - } -} -``` + # Install dependencies + npm install @modelcontextprotocol/sdk zod@3 + npm install -D @types/node typescript -## Best practices + # Create our files + mkdir src + touch src/index.ts + ``` -When implementing tools: + ```powershell Windows theme={null} + # Create a new directory for our project + md weather + cd weather -1. Provide clear, descriptive names and descriptions -2. Use detailed JSON Schema definitions for parameters -3. Include examples in tool descriptions to demonstrate how the model should use them -4. Implement proper error handling and validation -5. Use progress reporting for long operations -6. Keep tool operations focused and atomic -7. Document expected return value structures -8. Implement proper timeouts -9. Consider rate limiting for resource-intensive operations -10. 
Log tool usage for debugging and monitoring + # Initialize a new npm project + npm init -y -## Security considerations + # Install dependencies + npm install @modelcontextprotocol/sdk zod@3 + npm install -D @types/node typescript -When exposing tools: + # Create our files + md src + new-item src\index.ts + ``` + -### Input validation + Update your package.json to add type: "module" and a build script: -* Validate all parameters against the schema -* Sanitize file paths and system commands -* Validate URLs and external identifiers -* Check parameter sizes and ranges -* Prevent command injection + ```json package.json theme={null} + { + "type": "module", + "bin": { + "weather": "./build/index.js" + }, + "scripts": { + "build": "tsc && chmod 755 build/index.js" + }, + "files": ["build"] + } + ``` -### Access control + Create a `tsconfig.json` in the root of your project: -* Implement authentication where needed -* Use appropriate authorization checks -* Audit tool usage -* Rate limit requests -* Monitor for abuse + ```json tsconfig.json theme={null} + { + "compilerOptions": { + "target": "ES2022", + "module": "Node16", + "moduleResolution": "Node16", + "outDir": "./build", + "rootDir": "./src", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames": true + }, + "include": ["src/**/*"], + "exclude": ["node_modules"] + } + ``` -### Error handling + Now let's dive into building your server. -* Don't expose internal errors to clients -* Log security-relevant errors -* Handle timeouts appropriately -* Clean up resources after errors -* Validate return values + ## Building your server -## Tool discovery and updates + ### Importing packages and setting up the instance -MCP supports dynamic tool discovery: + Add these to the top of your `src/index.ts`: -1. Clients can list available tools at any time -2. Servers can notify clients when tools change using `notifications/tools/list_changed` -3. 
Tools can be added or removed during runtime -4. Tool definitions can be updated (though this should be done carefully) + ```typescript theme={null} + import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js"; + import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; + import { z } from "zod"; -## Error handling + const NWS_API_BASE = "https://api.weather.gov"; + const USER_AGENT = "weather-app/1.0"; -Tool errors should be reported within the result object, not as MCP protocol-level errors. This allows the LLM to see and potentially handle the error. When a tool encounters an error: + // Create server instance + const server = new McpServer({ + name: "weather", + version: "1.0.0", + }); + ``` -1. Set `isError` to `true` in the result -2. Include error details in the `content` array + ### Helper functions -Here's an example of proper error handling for tools: + Next, let's add our helper functions for querying and formatting the data from the National Weather Service API: - - - ```typescript - try { - // Tool operation - const result = performOperation(); - return { - content: [ - { - type: "text", - text: `Operation successful: ${result}` - } - ] - }; - } catch (error) { - return { - isError: true, - content: [ - { - type: "text", - text: `Error: ${error.message}` - } - ] + ```typescript theme={null} + // Helper function for making NWS API requests + async function makeNWSRequest(url: string): Promise { + const headers = { + "User-Agent": USER_AGENT, + Accept: "application/geo+json", }; + + try { + const response = await fetch(url, { headers }); + if (!response.ok) { + throw new Error(`HTTP error! 
status: ${response.status}`); + } + return (await response.json()) as T; + } catch (error) { + console.error("Error making NWS request:", error); + return null; + } } - ``` - - - ```python - try: - # Tool operation - result = perform_operation() - return types.CallToolResult( - content=[ - types.TextContent( - type="text", - text=f"Operation successful: {result}" - ) - ] - ) - except Exception as error: - return types.CallToolResult( - isError=True, - content=[ - types.TextContent( - type="text", - text=f"Error: {str(error)}" - ) - ] - ) - ``` - - - -This approach allows the LLM to see that an error occurred and potentially take corrective action or request human intervention. - -## Tool annotations - -Tool annotations provide additional metadata about a tool's behavior, helping clients understand how to present and manage tools. These annotations are hints that describe the nature and impact of a tool, but should not be relied upon for security decisions. + interface AlertFeature { + properties: { + event?: string; + areaDesc?: string; + severity?: string; + status?: string; + headline?: string; + }; + } -### Purpose of tool annotations + // Format alert data + function formatAlert(feature: AlertFeature): string { + const props = feature.properties; + return [ + `Event: ${props.event || "Unknown"}`, + `Area: ${props.areaDesc || "Unknown"}`, + `Severity: ${props.severity || "Unknown"}`, + `Status: ${props.status || "Unknown"}`, + `Headline: ${props.headline || "No headline"}`, + "---", + ].join("\n"); + } -Tool annotations serve several key purposes: + interface ForecastPeriod { + name?: string; + temperature?: number; + temperatureUnit?: string; + windSpeed?: string; + windDirection?: string; + shortForecast?: string; + } -1. Provide UX-specific information without affecting model context -2. Help clients categorize and present tools appropriately -3. Convey information about a tool's potential side effects -4. 
Assist in developing intuitive interfaces for tool approval + interface AlertsResponse { + features: AlertFeature[]; + } -### Available tool annotations + interface PointsResponse { + properties: { + forecast?: string; + }; + } -The MCP specification defines the following annotations for tools: + interface ForecastResponse { + properties: { + periods: ForecastPeriod[]; + }; + } + ``` -| Annotation | Type | Default | Description | -| ----------------- | ------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------ | -| `title` | string | - | A human-readable title for the tool, useful for UI display | -| `readOnlyHint` | boolean | false | If true, indicates the tool does not modify its environment | -| `destructiveHint` | boolean | true | If true, the tool may perform destructive updates (only meaningful when `readOnlyHint` is false) | -| `idempotentHint` | boolean | false | If true, calling the tool repeatedly with the same arguments has no additional effect (only meaningful when `readOnlyHint` is false) | -| `openWorldHint` | boolean | true | If true, the tool may interact with an "open world" of external entities | + ### Implementing tool execution -### Example usage + The tool execution handler is responsible for actually executing the logic of each tool. 
Let's add it: -Here's how to define tools with annotations for different scenarios: + ```typescript theme={null} + // Register weather tools -```typescript -// A read-only search tool -{ - name: "web_search", - description: "Search the web for information", - inputSchema: { - type: "object", - properties: { - query: { type: "string" } - }, - required: ["query"] - }, - annotations: { - title: "Web Search", - readOnlyHint: true, - openWorldHint: true - } -} + server.registerTool( + "get_alerts", + { + description: "Get weather alerts for a state", + inputSchema: { + state: z + .string() + .length(2) + .describe("Two-letter state code (e.g. CA, NY)"), + }, + }, + async ({ state }) => { + const stateCode = state.toUpperCase(); + const alertsUrl = `${NWS_API_BASE}/alerts?area=${stateCode}`; + const alertsData = await makeNWSRequest(alertsUrl); -// A destructive file deletion tool -{ - name: "delete_file", - description: "Delete a file from the filesystem", - inputSchema: { - type: "object", - properties: { - path: { type: "string" } - }, - required: ["path"] - }, - annotations: { - title: "Delete File", - readOnlyHint: false, - destructiveHint: true, - idempotentHint: true, - openWorldHint: false - } -} + if (!alertsData) { + return { + content: [ + { + type: "text", + text: "Failed to retrieve alerts data", + }, + ], + }; + } -// A non-destructive database record creation tool -{ - name: "create_record", - description: "Create a new record in the database", - inputSchema: { - type: "object", - properties: { - table: { type: "string" }, - data: { type: "object" } - }, - required: ["table", "data"] - }, - annotations: { - title: "Create Database Record", - readOnlyHint: false, - destructiveHint: false, - idempotentHint: false, - openWorldHint: false - } -} -``` + const features = alertsData.features || []; + if (features.length === 0) { + return { + content: [ + { + type: "text", + text: `No active alerts for ${stateCode}`, + }, + ], + }; + } -### Integrating annotations 
in server implementation + const formattedAlerts = features.map(formatAlert); + const alertsText = `Active alerts for ${stateCode}:\n\n${formattedAlerts.join("\n")}`; - - - ```typescript - server.setRequestHandler(ListToolsRequestSchema, async () => { - return { - tools: [{ - name: "calculate_sum", - description: "Add two numbers together", - inputSchema: { - type: "object", - properties: { - a: { type: "number" }, - b: { type: "number" } + return { + content: [ + { + type: "text", + text: alertsText, }, - required: ["a", "b"] - }, - annotations: { - title: "Calculate Sum", - readOnlyHint: true, - openWorldHint: false - } - }] - }; - }); - ``` - - - - ```python - from mcp.server.fastmcp import FastMCP + ], + }; + }, + ); - mcp = FastMCP("example-server") + server.registerTool( + "get_forecast", + { + description: "Get weather forecast for a location", + inputSchema: { + latitude: z + .number() + .min(-90) + .max(90) + .describe("Latitude of the location"), + longitude: z + .number() + .min(-180) + .max(180) + .describe("Longitude of the location"), + }, + }, + async ({ latitude, longitude }) => { + // Get grid point data + const pointsUrl = `${NWS_API_BASE}/points/${latitude.toFixed(4)},${longitude.toFixed(4)}`; + const pointsData = await makeNWSRequest(pointsUrl); - @mcp.tool( - annotations={ - "title": "Calculate Sum", - "readOnlyHint": True, - "openWorldHint": False + if (!pointsData) { + return { + content: [ + { + type: "text", + text: `Failed to retrieve grid point data for coordinates: ${latitude}, ${longitude}. This location may not be supported by the NWS API (only US locations are supported).`, + }, + ], + }; } - ) - async def calculate_sum(a: float, b: float) -> str: - """Add two numbers together. 
- - Args: - a: First number to add - b: Second number to add - """ - result = a + b - return str(result) - ``` - - -### Best practices for tool annotations + const forecastUrl = pointsData.properties?.forecast; + if (!forecastUrl) { + return { + content: [ + { + type: "text", + text: "Failed to get forecast URL from grid point data", + }, + ], + }; + } -1. **Be accurate about side effects**: Clearly indicate whether a tool modifies its environment and whether those modifications are destructive. + // Get forecast data + const forecastData = await makeNWSRequest(forecastUrl); + if (!forecastData) { + return { + content: [ + { + type: "text", + text: "Failed to retrieve forecast data", + }, + ], + }; + } -2. **Use descriptive titles**: Provide human-friendly titles that clearly describe the tool's purpose. + const periods = forecastData.properties?.periods || []; + if (periods.length === 0) { + return { + content: [ + { + type: "text", + text: "No forecast periods available", + }, + ], + }; + } -3. **Indicate idempotency properly**: Mark tools as idempotent only if repeated calls with the same arguments truly have no additional effect. + // Format forecast periods + const formattedForecast = periods.map((period: ForecastPeriod) => + [ + `${period.name || "Unknown"}:`, + `Temperature: ${period.temperature || "Unknown"}°${period.temperatureUnit || "F"}`, + `Wind: ${period.windSpeed || "Unknown"} ${period.windDirection || ""}`, + `${period.shortForecast || "No forecast available"}`, + "---", + ].join("\n"), + ); -4. **Set appropriate open/closed world hints**: Indicate whether a tool interacts with a closed system (like a database) or an open system (like the web). + const forecastText = `Forecast for ${latitude}, ${longitude}:\n\n${formattedForecast.join("\n")}`; -5. **Remember annotations are hints**: All properties in ToolAnnotations are hints and not guaranteed to provide a faithful description of tool behavior. 
Clients should never make security-critical decisions based solely on annotations. + return { + content: [ + { + type: "text", + text: forecastText, + }, + ], + }; + }, + ); + ``` -## Testing tools + ### Running the server -A comprehensive testing strategy for MCP tools should cover: + Finally, implement the main function to run the server: -* **Functional testing**: Verify tools execute correctly with valid inputs and handle invalid inputs appropriately -* **Integration testing**: Test tool interaction with external systems using both real and mocked dependencies -* **Security testing**: Validate authentication, authorization, input sanitization, and rate limiting -* **Performance testing**: Check behavior under load, timeout handling, and resource cleanup -* **Error handling**: Ensure tools properly report errors through the MCP protocol and clean up resources + ```typescript theme={null} + async function main() { + const transport = new StdioServerTransport(); + await server.connect(transport); + console.error("Weather MCP Server running on stdio"); + } + main().catch((error) => { + console.error("Fatal error in main():", error); + process.exit(1); + }); + ``` -# Transports -Source: https://modelcontextprotocol.io/docs/concepts/transports + Make sure to run `npm run build` to build your server! This is a very important step in getting your server to connect. -Learn about MCP's communication mechanisms + Let's now test your server from an existing MCP host, Claude for Desktop. -Transports in the Model Context Protocol (MCP) provide the foundation for communication between clients and servers. A transport handles the underlying mechanics of how messages are sent and received. + ## Testing your server with Claude for Desktop -## Message Format + + Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built. 
+ -MCP uses [JSON-RPC](https://www.jsonrpc.org/) 2.0 as its wire format. The transport layer is responsible for converting MCP protocol messages into JSON-RPC format for transmission and converting received JSON-RPC messages back into MCP protocol messages. + First, make sure you have Claude for Desktop installed. [You can install the latest version + here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.** -There are three types of JSON-RPC messages used: + We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist. -### Requests + For example, if you have [VS Code](https://code.visualstudio.com/) installed: -```typescript -{ - jsonrpc: "2.0", - id: number | string, - method: string, - params?: object -} -``` + + ```bash macOS/Linux theme={null} + code ~/Library/Application\ Support/Claude/claude_desktop_config.json + ``` -### Responses + ```powershell Windows theme={null} + code $env:AppData\Claude\claude_desktop_config.json + ``` + -```typescript -{ - jsonrpc: "2.0", - id: number | string, - result?: object, - error?: { - code: number, - message: string, - data?: unknown - } -} -``` + You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured. -### Notifications + In this case, we'll add our single weather server like so: -```typescript -{ - jsonrpc: "2.0", - method: string, - params?: object -} -``` - -## Built-in Transport Types - -MCP includes two standard transport implementations: - -### Standard Input/Output (stdio) - -The stdio transport enables communication through standard input and output streams. 
This is particularly useful for local integrations and command-line tools. + + ```json macOS/Linux theme={null} + { + "mcpServers": { + "weather": { + "command": "node", + "args": ["/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js"] + } + } + } + ``` -Use stdio when: + ```json Windows theme={null} + { + "mcpServers": { + "weather": { + "command": "node", + "args": ["C:\\PATH\\TO\\PARENT\\FOLDER\\weather\\build\\index.js"] + } + } + } + ``` + -* Building command-line tools -* Implementing local integrations -* Needing simple process communication -* Working with shell scripts + This tells Claude for Desktop: - - - ```typescript - const server = new Server({ - name: "example-server", - version: "1.0.0" - }, { - capabilities: {} - }); + 1. There's an MCP server named "weather" + 2. Launch it by running `node /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js` - const transport = new StdioServerTransport(); - await server.connect(transport); - ``` + Save the file, and restart **Claude for Desktop**. - - ```typescript - const client = new Client({ - name: "example-client", - version: "1.0.0" - }, { - capabilities: {} - }); - - const transport = new StdioClientTransport({ - command: "./server", - args: ["--option", "value"] - }); - await client.connect(transport); - ``` - + + + This is a quickstart demo based on Spring AI MCP auto-configuration and boot starters. + To learn how to create sync and async MCP Servers, manually, consult the [Java SDK Server](/sdk/java/mcp-server) documentation. + - - ```python - app = Server("example-server") + Let's get started with building our weather server! 
+ [You can find the complete code for what we'll be building here.](https://github.com/spring-projects/spring-ai-examples/tree/main/model-context-protocol/weather/starter-stdio-server) - async with stdio_server() as streams: - await app.run( - streams[0], - streams[1], - app.create_initialization_options() - ) - ``` - + For more information, see the [MCP Server Boot Starter](https://docs.spring.io/spring-ai/reference/api/mcp/mcp-server-boot-starter-docs.html) reference documentation. + For manual MCP Server implementation, refer to the [MCP Server Java SDK documentation](/sdk/java/mcp-server). - - ```python - params = StdioServerParameters( - command="./server", - args=["--option", "value"] - ) + ### Logging in MCP Servers - async with stdio_client(params) as streams: - async with ClientSession(streams[0], streams[1]) as session: - await session.initialize() - ``` - - + When implementing MCP servers, be careful about how you handle logging: -### Server-Sent Events (SSE) + **For STDIO-based servers:** Never write to standard output (stdout). This includes: -SSE transport enables server-to-client streaming with HTTP POST requests for client-to-server communication. + * `print()` statements in Python + * `console.log()` in JavaScript + * `fmt.Println()` in Go + * Similar stdout functions in other languages -Use SSE when: + Writing to stdout will corrupt the JSON-RPC messages and break your server. -* Only server-to-client streaming is needed -* Working with restricted networks -* Implementing simple updates + **For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses. -#### Security Warning: DNS Rebinding Attacks + ### Best Practices -SSE transports can be vulnerable to DNS rebinding attacks if not properly secured. To prevent this: + 1. Use a logging library that writes to stderr or files. + 2. Ensure any configured logging library will not write to STDOUT -1. 
**Always validate Origin headers** on incoming SSE connections to ensure they come from expected sources -2. **Avoid binding servers to all network interfaces** (0.0.0.0) when running locally - bind only to localhost (127.0.0.1) instead -3. **Implement proper authentication** for all SSE connections + ### System requirements -Without these protections, attackers could use DNS rebinding to interact with local MCP servers from remote websites. + * Java 17 or higher installed. + * [Spring Boot 3.3.x](https://docs.spring.io/spring-boot/installing.html) or higher - - - ```typescript - import express from "express"; + ### Set up your environment - const app = express(); + Use the [Spring Initializer](https://start.spring.io/) to bootstrap the project. - const server = new Server({ - name: "example-server", - version: "1.0.0" - }, { - capabilities: {} - }); + You will need to add the following dependencies: - let transport: SSEServerTransport | null = null; + + ```xml Maven theme={null} + + + org.springframework.ai + spring-ai-starter-mcp-server + - app.get("/sse", (req, res) => { - transport = new SSEServerTransport("/messages", res); - server.connect(transport); - }); + + org.springframework + spring-web + + + ``` - app.post("/messages", (req, res) => { - if (transport) { - transport.handlePostMessage(req, res); + ```groovy Gradle theme={null} + dependencies { + implementation platform("org.springframework.ai:spring-ai-starter-mcp-server") + implementation platform("org.springframework:spring-web") } - }); - - app.listen(3000); - ``` - - - - ```typescript - const client = new Client({ - name: "example-client", - version: "1.0.0" - }, { - capabilities: {} - }); - - const transport = new SSEClientTransport( - new URL("http://localhost:3000/sse") - ); - await client.connect(transport); - ``` - + ``` + - - ```python - from mcp.server.sse import SseServerTransport - from starlette.applications import Starlette - from starlette.routing import Route + Then configure your 
application by setting the application properties: - app = Server("example-server") - sse = SseServerTransport("/messages") + + ```bash application.properties theme={null} + spring.main.bannerMode=off + logging.pattern.console= + ``` - async def handle_sse(scope, receive, send): - async with sse.connect_sse(scope, receive, send) as streams: - await app.run(streams[0], streams[1], app.create_initialization_options()) + ```yaml application.yml theme={null} + logging: + pattern: + console: + spring: + main: + banner-mode: off + ``` + - async def handle_messages(scope, receive, send): - await sse.handle_post_message(scope, receive, send) + The [Server Configuration Properties](https://docs.spring.io/spring-ai/reference/api/mcp/mcp-server-boot-starter-docs.html#_configuration_properties) documents all available properties. - starlette_app = Starlette( - routes=[ - Route("/sse", endpoint=handle_sse), - Route("/messages", endpoint=handle_messages, methods=["POST"]), - ] - ) - ``` - + Now let's dive into building your server. - - ```python - async with sse_client("http://localhost:8000/sse") as streams: - async with ClientSession(streams[0], streams[1]) as session: - await session.initialize() - ``` - - + ## Building your server -## Custom Transports + ### Weather Service -MCP makes it easy to implement custom transports for specific needs. 
Any transport implementation just needs to conform to the Transport interface: + Let's implement a [WeatherService.java](https://github.com/spring-projects/spring-ai-examples/blob/main/model-context-protocol/weather/starter-stdio-server/src/main/java/org/springframework/ai/mcp/sample/server/WeatherService.java) that uses a REST client to query the data from the National Weather Service API: -You can implement custom transports for: + ```java theme={null} + @Service + public class WeatherService { -* Custom network protocols -* Specialized communication channels -* Integration with existing systems -* Performance optimization + private final RestClient restClient; - - - ```typescript - interface Transport { - // Start processing messages - start(): Promise; + public WeatherService() { + this.restClient = RestClient.builder() + .baseUrl("https://api.weather.gov") + .defaultHeader("Accept", "application/geo+json") + .defaultHeader("User-Agent", "WeatherApiClient/1.0 (your@email.com)") + .build(); + } - // Send a JSON-RPC message - send(message: JSONRPCMessage): Promise; + @Tool(description = "Get weather forecast for a specific latitude/longitude") + public String getWeatherForecastByLocation( + double latitude, // Latitude coordinate + double longitude // Longitude coordinate + ) { + // Returns detailed forecast including: + // - Temperature and unit + // - Wind speed and direction + // - Detailed forecast description + } - // Close the connection - close(): Promise; + @Tool(description = "Get weather alerts for a US state") + public String getAlerts( + @ToolParam(description = "Two-letter US state code (e.g. CA, NY)") String state + ) { + // Returns active alerts including: + // - Event type + // - Affected area + // - Severity + // - Description + // - Safety instructions + } - // Callbacks - onclose?: () => void; - onerror?: (error: Error) => void; - onmessage?: (message: JSONRPCMessage) => void; + // ...... 
} ``` - - - - Note that while MCP Servers are often implemented with asyncio, we recommend - implementing low-level interfaces like transports with `anyio` for wider compatibility. - - ```python - @contextmanager - async def create_transport( - read_stream: MemoryObjectReceiveStream[JSONRPCMessage | Exception], - write_stream: MemoryObjectSendStream[JSONRPCMessage] - ): - """ - Transport interface for MCP. - - Args: - read_stream: Stream to read incoming messages from - write_stream: Stream to write outgoing messages to - """ - async with anyio.create_task_group() as tg: - try: - # Start processing messages - tg.start_soon(lambda: process_messages(read_stream)) - - # Send messages - async with write_stream: - yield write_stream - - except Exception as exc: - # Handle errors - raise exc - finally: - # Clean up - tg.cancel_scope.cancel() - await write_stream.aclose() - await read_stream.aclose() - ``` - - -## Error Handling + The `@Service` annotation will auto-register the service in your application context. + The Spring AI `@Tool` annotation makes it easy to create and maintain MCP tools. -Transport implementations should handle various error scenarios: + The auto-configuration will automatically register these tools with the MCP server. -1. Connection errors -2. Message parsing errors -3. Protocol errors -4. Network timeouts -5. 
Resource cleanup + ### Create your Boot Application -Example error handling: + ```java theme={null} + @SpringBootApplication + public class McpServerApplication { - - - ```typescript - class ExampleTransport implements Transport { - async start() { - try { - // Connection logic - } catch (error) { - this.onerror?.(new Error(`Failed to connect: ${error}`)); - throw error; - } - } + public static void main(String[] args) { + SpringApplication.run(McpServerApplication.class, args); + } - async send(message: JSONRPCMessage) { - try { - // Sending logic - } catch (error) { - this.onerror?.(new Error(`Failed to send message: ${error}`)); - throw error; - } - } + @Bean + public ToolCallbackProvider weatherTools(WeatherService weatherService) { + return MethodToolCallbackProvider.builder().toolObjects(weatherService).build(); + } } ``` - - - Note that while MCP Servers are often implemented with asyncio, we recommend - implementing low-level interfaces like transports with `anyio` for wider compatibility. + The `MethodToolCallbackProvider` utility converts the `@Tool`-annotated methods into actionable callbacks used by the MCP server.
- ```python - @contextmanager - async def example_transport(scope: Scope, receive: Receive, send: Send): - try: - # Create streams for bidirectional communication - read_stream_writer, read_stream = anyio.create_memory_object_stream(0) - write_stream, write_stream_reader = anyio.create_memory_object_stream(0) + ### Running the server - async def message_handler(): - try: - async with read_stream_writer: - # Message handling logic - pass - except Exception as exc: - logger.error(f"Failed to handle message: {exc}") - raise exc - - async with anyio.create_task_group() as tg: - tg.start_soon(message_handler) - try: - # Yield streams for communication - yield read_stream, write_stream - except Exception as exc: - logger.error(f"Transport error: {exc}") - raise exc - finally: - tg.cancel_scope.cancel() - await write_stream.aclose() - await read_stream.aclose() - except Exception as exc: - logger.error(f"Failed to initialize transport: {exc}") - raise exc + Finally, let's build the server: + + ```bash theme={null} + ./mvnw clean install ``` - - -## Best Practices + This will generate an `mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar` file within the `target` folder. -When implementing or using MCP transport: + Let's now test your server from an existing MCP host, Claude for Desktop. -1. Handle connection lifecycle properly -2. Implement proper error handling -3. Clean up resources on connection close -4. Use appropriate timeouts -5. Validate messages before sending -6. Log transport events for debugging -7. Implement reconnection logic when appropriate -8. Handle backpressure in message queues -9. Monitor connection health -10. Implement proper security measures + ## Testing your server with Claude for Desktop -## Security Considerations + + Claude for Desktop is not yet available on Linux. + -When implementing transport: + First, make sure you have Claude for Desktop installed. 
+ [You can install the latest version here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.** -### Authentication and Authorization + We'll need to configure Claude for Desktop for whichever MCP servers you want to use. + To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. + Make sure to create the file if it doesn't exist. -* Implement proper authentication mechanisms -* Validate client credentials -* Use secure token handling -* Implement authorization checks + For example, if you have [VS Code](https://code.visualstudio.com/) installed: -### Data Security + + ```bash macOS/Linux theme={null} + code ~/Library/Application\ Support/Claude/claude_desktop_config.json + ``` -* Use TLS for network transport -* Encrypt sensitive data -* Validate message integrity -* Implement message size limits -* Sanitize input data + ```powershell Windows theme={null} + code $env:AppData\Claude\claude_desktop_config.json + ``` + -### Network Security + You'll then add your servers in the `mcpServers` key. + The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured. 
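Because the UI only appears when at least one entry under `mcpServers` is launchable, it can help to sanity-check a parsed config before restarting Claude for Desktop. A minimal TypeScript sketch (the `hasConfiguredServer` helper and its interfaces are illustrative, not part of any SDK):

```typescript
// Illustrative helper (not part of the MCP SDK): checks that a parsed
// claude_desktop_config.json object defines at least one launchable server.
interface ServerEntry {
  command: string;
  args?: string[];
}

interface DesktopConfig {
  mcpServers?: Record<string, ServerEntry>;
}

function hasConfiguredServer(config: DesktopConfig): boolean {
  // A server entry counts as configured when it has a non-empty launch command.
  const servers = config.mcpServers ?? {};
  return Object.values(servers).some(
    (entry) => typeof entry.command === "string" && entry.command.length > 0,
  );
}
```

If this returns `false` for your config, Claude for Desktop will not show any MCP UI elements after a restart.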
-* Implement rate limiting -* Use appropriate timeouts -* Handle denial of service scenarios -* Monitor for unusual patterns -* Implement proper firewall rules -* For SSE transports, validate Origin headers to prevent DNS rebinding attacks -* For local SSE servers, bind only to localhost (127.0.0.1) instead of all interfaces (0.0.0.0) + In this case, we'll add our single weather server like so: -## Debugging Transport + + ```json macOS/Linux theme={null} + { + "mcpServers": { + "spring-ai-mcp-weather": { + "command": "java", + "args": [ + "-Dspring.ai.mcp.server.stdio=true", + "-jar", + "/ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar" + ] + } + } + } + ```

-Tips for debugging transport issues: + ```json Windows theme={null} + { + "mcpServers": { + "spring-ai-mcp-weather": { + "command": "java", + "args": [ + "-Dspring.ai.mcp.server.transport=STDIO", + "-jar", + "C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather\\mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar" + ] + } + } + } + ``` + 

-1. Enable debug logging -2. Monitor message flow -3. Check connection states -4. Validate message formats -5. Test error scenarios -6. Use network analysis tools -7. Implement health checks -8. Monitor resource usage -9. Test edge cases -10. Use proper error tracking + + Make sure you pass in the absolute path to your server. + + This tells Claude for Desktop: -# Debugging -Source: https://modelcontextprotocol.io/docs/tools/debugging + 1. There's an MCP server named "spring-ai-mcp-weather" -A comprehensive guide to debugging Model Context Protocol (MCP) integrations + 2. To launch it by running `java -jar /ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar` -Effective debugging is essential when developing MCP servers or integrating them with applications. This guide covers the debugging tools and approaches available in the MCP ecosystem.
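Once launched this way, the host and server exchange newline-delimited JSON-RPC 2.0 messages over the child process's stdin and stdout. As a simplified illustration (a sketch of the wire format, not the SDK's actual framing code), a request frame can be built like this:

```typescript
// Simplified sketch of a JSON-RPC 2.0 request frame, as exchanged over a
// stdio transport. The real MCP SDKs handle framing, ids, and responses.
function jsonRpcRequest(
  id: number | string,
  method: string,
  params?: Record<string, unknown>,
): string {
  return JSON.stringify({
    jsonrpc: "2.0",
    id,
    method,
    // Omit the params key entirely when no parameters are given.
    ...(params !== undefined ? { params } : {}),
  });
}
```

For example, `jsonRpcRequest(1, "tools/list")` produces the kind of frame a host sends to discover the weather tools registered above.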
+ ## Testing your server with Java client - - This guide is for macOS. Guides for other platforms are coming soon. - + ### Create an MCP Client manually -## Debugging tools overview + Use the `McpClient` to connect to the server: -MCP provides several tools for debugging at different levels: + ```java theme={null} + var stdioParams = ServerParameters.builder("java") + .args("-jar", "/ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-weather-stdio-server-0.0.1-SNAPSHOT.jar") + .build(); -1. **MCP Inspector** - * Interactive debugging interface - * Direct server testing - * See the [Inspector guide](/docs/tools/inspector) for details + var stdioTransport = new StdioClientTransport(stdioParams); -2. **Claude Desktop Developer Tools** - * Integration testing - * Log collection - * Chrome DevTools integration + var mcpClient = McpClient.sync(stdioTransport).build(); -3. **Server Logging** - * Custom logging implementations - * Error tracking - * Performance monitoring + mcpClient.initialize(); -## Debugging in Claude Desktop + ListToolsResult toolsList = mcpClient.listTools(); -### Checking server status + CallToolResult weather = mcpClient.callTool( + new CallToolRequest("getWeatherForecastByLocation", + Map.of("latitude", "47.6062", "longitude", "-122.3321"))); -The Claude.app interface provides basic server status information: + CallToolResult alert = mcpClient.callTool( + new CallToolRequest("getAlerts", Map.of("state", "NY"))); -1. Click the icon to view: - * Connected servers - * Available prompts and resources + mcpClient.closeGracefully(); + ``` -2. 
Click the icon to view: - * Tools made available to the model + ### Use MCP Client Boot Starter -### Viewing logs + Create a new boot starter application using the `spring-ai-starter-mcp-client` dependency: -Review detailed MCP logs from Claude Desktop: + ```xml theme={null} + + org.springframework.ai + spring-ai-starter-mcp-client + + ``` -```bash -# Follow logs in real-time -tail -n 20 -F ~/Library/Logs/Claude/mcp*.log -``` + and set the `spring.ai.mcp.client.stdio.servers-configuration` property to point to your `claude_desktop_config.json`. + You can reuse the existing Anthropic Desktop configuration: -The logs capture: + ```properties theme={null} + spring.ai.mcp.client.stdio.servers-configuration=file:PATH/TO/claude_desktop_config.json + ``` -* Server connection events -* Configuration issues -* Runtime errors -* Message exchanges + When you start your client application, the auto-configuration will automatically create MCP clients from the claude\_desktop\_config.json. -### Using Chrome DevTools + For more information, see the [MCP Client Boot Starters](https://docs.spring.io/spring-ai/reference/api/mcp/mcp-server-boot-client-docs.html) reference documentation. -Access Chrome's developer tools inside Claude Desktop to investigate client-side errors: + ## More Java MCP Server examples -1. Create a `developer_settings.json` file with `allowDevTools` set to true: + The [starter-webflux-server](https://github.com/spring-projects/spring-ai-examples/tree/main/model-context-protocol/weather/starter-webflux-server) demonstrates how to create an MCP server using SSE transport. + It showcases how to define and register MCP Tools, Resources, and Prompts, using the Spring Boot's auto-configuration capabilities. + -```bash -echo '{"allowDevTools": true}' > ~/Library/Application\ Support/Claude/developer_settings.json -``` + + Let's get started with building our weather server! 
[You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/kotlin-sdk/tree/main/samples/weather-stdio-server) -2. Open DevTools: `Command-Option-Shift-i` + ### Prerequisite knowledge -Note: You'll see two DevTools windows: + This quickstart assumes you have familiarity with: -* Main content window -* App title bar window - -Use the Console panel to inspect client-side errors. + * Kotlin + * LLMs like Claude -Use the Network panel to inspect: + ### System requirements -* Message payloads -* Connection timing + * Java 17 or higher installed. -## Common issues + ### Set up your environment -### Working directory + First, let's install `java` and `gradle` if you haven't already. + You can download `java` from [official Oracle JDK website](https://www.oracle.com/java/technologies/downloads/). + Verify your `java` installation: -When using MCP servers with Claude Desktop: + ```bash theme={null} + java --version + ``` -* The working directory for servers launched via `claude_desktop_config.json` may be undefined (like `/` on macOS) since Claude Desktop could be started from anywhere -* Always use absolute paths in your configuration and `.env` files to ensure reliable operation -* For testing servers directly via command line, the working directory will be where you run the command + Now, let's create and set up your project: -For example in `claude_desktop_config.json`, use: + + ```bash macOS/Linux theme={null} + # Create a new directory for our project + mkdir weather + cd weather -```json -{ - "command": "npx", - "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/username/data"] -} -``` + # Initialize a new kotlin project + gradle init + ``` -Instead of relative paths like `./data` + ```powershell Windows theme={null} + # Create a new directory for our project + md weather + cd weather -### Environment variables + # Initialize a new kotlin project + gradle init + ``` + -MCP servers inherit only a subset of 
environment variables automatically, like `USER`, `HOME`, and `PATH`.

+  After running `gradle init`, you will be presented with options for creating your project.
+  Select **Application** as the project type, **Kotlin** as the programming language, and **Java 17** as the Java version.

-To override the default variables or provide your own, you can specify an `env` key in `claude_desktop_config.json`:

+  Alternatively, you can create a Kotlin application using the [IntelliJ IDEA project wizard](https://kotlinlang.org/docs/jvm-get-started.html).

-```json
-{
-  "myserver": {
-    "command": "mcp-server-myapp",
-    "env": {
-      "MYAPP_API_KEY": "some_key"
-    }
-  }
-}
-```

+  After creating the project, add the following dependencies:

-### Server initialization

+
+  ```kotlin build.gradle.kts theme={null}
+  val mcpVersion = "0.4.0"
+  val slf4jVersion = "2.0.9"
+  val ktorVersion = "3.1.1"

-Common initialization problems:

+  dependencies {
+    implementation("io.modelcontextprotocol:kotlin-sdk:$mcpVersion")
+    implementation("org.slf4j:slf4j-nop:$slf4jVersion")
+    implementation("io.ktor:ktor-client-content-negotiation:$ktorVersion")
+    implementation("io.ktor:ktor-serialization-kotlinx-json:$ktorVersion")
+  }
+  ```

-1. **Path Issues**
-   * Incorrect server executable path
-   * Missing required files
-   * Permission problems
-   * Try using an absolute path for `command`

+  ```groovy build.gradle theme={null}
+  def mcpVersion = '0.4.0'
+  def slf4jVersion = '2.0.9'
+  def ktorVersion = '3.1.1'

-2. **Configuration Errors**
-   * Invalid JSON syntax
-   * Missing required fields
-   * Type mismatches

+  dependencies {
+    implementation "io.modelcontextprotocol:kotlin-sdk:$mcpVersion"
+    implementation "org.slf4j:slf4j-nop:$slf4jVersion"
+    implementation "io.ktor:ktor-client-content-negotiation:$ktorVersion"
+    implementation "io.ktor:ktor-serialization-kotlinx-json:$ktorVersion"
+  }
+  ```
+

-3.
**Environment Problems** - * Missing environment variables - * Incorrect variable values - * Permission restrictions + Also, add the following plugins to your build script: -### Connection problems + + ```kotlin build.gradle.kts theme={null} + plugins { + kotlin("plugin.serialization") version "your_version_of_kotlin" + id("com.gradleup.shadow") version "8.3.9" + } + ``` -When servers fail to connect: + ```groovy build.gradle theme={null} + plugins { + id 'org.jetbrains.kotlin.plugin.serialization' version 'your_version_of_kotlin' + id 'com.gradleup.shadow' version '8.3.9' + } + ``` + -1. Check Claude Desktop logs -2. Verify server process is running -3. Test standalone with [Inspector](/docs/tools/inspector) -4. Verify protocol compatibility + Now let’s dive into building your server. -## Implementing logging + ## Building your server -### Server-side logging + ### Setting up the instance -When building a server that uses the local stdio [transport](/docs/concepts/transports), all messages logged to stderr (standard error) will be captured by the host application (e.g., Claude Desktop) automatically. + Add a server initialization function: - - Local MCP servers should not log messages to stdout (standard out), as this will interfere with protocol operation. 
- + ```kotlin theme={null} + // Main function to run the MCP server + fun `run mcp server`() { + // Create the MCP Server instance with a basic implementation + val server = Server( + Implementation( + name = "weather", // Tool name is "weather" + version = "1.0.0" // Version of the implementation + ), + ServerOptions( + capabilities = ServerCapabilities(tools = ServerCapabilities.Tools(listChanged = true)) + ) + ) -For all [transports](/docs/concepts/transports), you can also provide logging to the client by sending a log message notification: + // Create a transport using standard IO for server communication + val transport = StdioServerTransport( + System.`in`.asInput(), + System.out.asSink().buffered() + ) - - - ```python - server.request_context.session.send_log_message( - level="info", - data="Server started successfully", - ) + runBlocking { + server.connect(transport) + val done = Job() + server.onClose { + done.complete() + } + done.join() + } + } ``` - - - ```typescript - server.sendLoggingMessage({ - level: "info", - data: "Server started successfully", - }); - ``` - - + ### Weather API helper functions -Important events to log: + Next, let's add functions and data classes for querying and converting responses from the National Weather Service API: -* Initialization steps -* Resource access -* Tool execution -* Error conditions -* Performance metrics + ```kotlin theme={null} + // Extension function to fetch forecast information for given latitude and longitude + suspend fun HttpClient.getForecast(latitude: Double, longitude: Double): List { + val points = this.get("/points/$latitude,$longitude").body() + val forecast = this.get(points.properties.forecast).body() + return forecast.properties.periods.map { period -> + """ + ${period.name}: + Temperature: ${period.temperature} ${period.temperatureUnit} + Wind: ${period.windSpeed} ${period.windDirection} + Forecast: ${period.detailedForecast} + """.trimIndent() + } + } -### Client-side logging + // Extension 
function to fetch weather alerts for a given state
+  suspend fun HttpClient.getAlerts(state: String): List<String> {
+    val alerts = this.get("/alerts/active/area/$state").body<Alert>()
+    return alerts.features.map { feature ->
+      """
+      Event: ${feature.properties.event}
+      Area: ${feature.properties.areaDesc}
+      Severity: ${feature.properties.severity}
+      Description: ${feature.properties.description}
+      Instruction: ${feature.properties.instruction}
+      """.trimIndent()
+    }
+  }

-In client applications:

+  @Serializable
+  data class Points(
+    val properties: Properties
+  ) {
+    @Serializable
+    data class Properties(val forecast: String)
+  }

-1. Enable debug logging
-2. Monitor network traffic
-3. Track message exchanges
-4. Record error states

+  @Serializable
+  data class Forecast(
+    val properties: Properties
+  ) {
+    @Serializable
+    data class Properties(val periods: List<Period>)

-## Debugging workflow

+    @Serializable
+    data class Period(
+      val number: Int, val name: String, val startTime: String, val endTime: String,
+      val isDaytime: Boolean, val temperature: Int, val temperatureUnit: String,
+      val temperatureTrend: String, val probabilityOfPrecipitation: JsonObject,
+      val windSpeed: String, val windDirection: String,
+      val shortForecast: String, val detailedForecast: String,
+    )
+  }

-### Development cycle

+  @Serializable
+  data class Alert(
+    val features: List<Feature>
+  ) {
+    @Serializable
+    data class Feature(
+      val properties: Properties
+    )

-1. Initial Development
-   * Use [Inspector](/docs/tools/inspector) for basic testing
-   * Implement core functionality
-   * Add logging points

+    @Serializable
+    data class Properties(
+      val event: String, val areaDesc: String, val severity: String,
+      val description: String, val instruction: String?,
+    )
+  }
+  ```

-2. Integration Testing
-   * Test in Claude Desktop
-   * Monitor logs
-   * Check error handling

+  ### Implementing tool execution

-### Testing changes

+  The tool execution handler is responsible for actually executing the logic of each tool.
Let's add it: -To test changes efficiently: + ```kotlin theme={null} + // Create an HTTP client with a default request configuration and JSON content negotiation + val httpClient = HttpClient { + defaultRequest { + url("https://api.weather.gov") + headers { + append("Accept", "application/geo+json") + append("User-Agent", "WeatherApiClient/1.0") + } + contentType(ContentType.Application.Json) + } + // Install content negotiation plugin for JSON serialization/deserialization + install(ContentNegotiation) { json(Json { ignoreUnknownKeys = true }) } + } -* **Configuration changes**: Restart Claude Desktop -* **Server code changes**: Use Command-R to reload -* **Quick iteration**: Use [Inspector](/docs/tools/inspector) during development + // Register a tool to fetch weather alerts by state + server.addTool( + name = "get_alerts", + description = """ + Get weather alerts for a US state. Input is Two-letter US state code (e.g. CA, NY) + """.trimIndent(), + inputSchema = Tool.Input( + properties = buildJsonObject { + putJsonObject("state") { + put("type", "string") + put("description", "Two-letter US state code (e.g. CA, NY)") + } + }, + required = listOf("state") + ) + ) { request -> + val state = request.arguments["state"]?.jsonPrimitive?.content + if (state == null) { + return@addTool CallToolResult( + content = listOf(TextContent("The 'state' parameter is required.")) + ) + } -## Best practices + val alerts = httpClient.getAlerts(state) -### Logging strategy + CallToolResult(content = alerts.map { TextContent(it) }) + } -1. 
**Structured Logging** - * Use consistent formats - * Include context - * Add timestamps - * Track request IDs + // Register a tool to fetch weather forecast by latitude and longitude + server.addTool( + name = "get_forecast", + description = """ + Get weather forecast for a specific latitude/longitude + """.trimIndent(), + inputSchema = Tool.Input( + properties = buildJsonObject { + putJsonObject("latitude") { put("type", "number") } + putJsonObject("longitude") { put("type", "number") } + }, + required = listOf("latitude", "longitude") + ) + ) { request -> + val latitude = request.arguments["latitude"]?.jsonPrimitive?.doubleOrNull + val longitude = request.arguments["longitude"]?.jsonPrimitive?.doubleOrNull + if (latitude == null || longitude == null) { + return@addTool CallToolResult( + content = listOf(TextContent("The 'latitude' and 'longitude' parameters are required.")) + ) + } -2. **Error Handling** - * Log stack traces - * Include error context - * Track error patterns - * Monitor recovery + val forecast = httpClient.getForecast(latitude, longitude) -3. **Performance Tracking** - * Log operation timing - * Monitor resource usage - * Track message sizes - * Measure latency + CallToolResult(content = forecast.map { TextContent(it) }) + } + ``` -### Security considerations + ### Running the server -When debugging: + Finally, implement the main function to run the server: -1. **Sensitive Data** - * Sanitize logs - * Protect credentials - * Mask personal information + ```kotlin theme={null} + fun main() = `run mcp server`() + ``` -2. **Access Control** - * Verify permissions - * Check authentication - * Monitor access patterns + Make sure to run `./gradlew build` to build your server. This is a very important step in getting your server to connect. -## Getting help + Let's now test your server from an existing MCP host, Claude for Desktop. -When encountering issues: + ## Testing your server with Claude for Desktop -1. 
**First Steps** - * Check server logs - * Test with [Inspector](/docs/tools/inspector) - * Review configuration - * Verify environment + + Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built. + -2. **Support Channels** - * GitHub issues - * GitHub discussions + First, make sure you have Claude for Desktop installed. [You can install the latest version + here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.** -3. **Providing Information** - * Log excerpts - * Configuration files - * Steps to reproduce - * Environment details + We'll need to configure Claude for Desktop for whichever MCP servers you want to use. + To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. + Make sure to create the file if it doesn't exist. -## Next steps + For example, if you have [VS Code](https://code.visualstudio.com/) installed: - - - Learn to use the MCP Inspector - - + + ```bash macOS/Linux theme={null} + code ~/Library/Application\ Support/Claude/claude_desktop_config.json + ``` + ```powershell Windows theme={null} + code $env:AppData\Claude\claude_desktop_config.json + ``` + -# Inspector -Source: https://modelcontextprotocol.io/docs/tools/inspector + You'll then add your servers in the `mcpServers` key. + The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured. -In-depth guide to using the MCP Inspector for testing and debugging Model Context Protocol servers + In this case, we'll add our single weather server like so: -The [MCP Inspector](https://github.com/modelcontextprotocol/inspector) is an interactive developer tool for testing and debugging MCP servers. 
While the [Debugging Guide](/docs/tools/debugging) covers the Inspector as part of the overall debugging toolkit, this document provides a detailed exploration of the Inspector's features and capabilities.

+
+  ```json macOS/Linux theme={null}
+  {
+    "mcpServers": {
+      "weather": {
+        "command": "java",
+        "args": [
+          "-jar",
+          "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/libs/weather-0.1.0-all.jar"
+        ]
+      }
+    }
+  }
+  ```

-## Getting started

+  ```json Windows theme={null}
+  {
+    "mcpServers": {
+      "weather": {
+        "command": "java",
+        "args": [
+          "-jar",
+          "C:\\PATH\\TO\\PARENT\\FOLDER\\weather\\build\\libs\\weather-0.1.0-all.jar"
+        ]
+      }
+    }
+  }
+  ```
+

-### Installation and basic usage

+  This tells Claude for Desktop:

-The Inspector runs directly through `npx` without requiring installation:

+  1. There's an MCP server named "weather"
+  2. Launch it by running `java -jar /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/libs/weather-0.1.0-all.jar`

-```bash
-npx @modelcontextprotocol/inspector
-```

+  Save the file, and restart **Claude for Desktop**.
+

+
+  Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/csharp-sdk/tree/main/samples/QuickstartWeatherServer)

-#### Inspecting servers from NPM or PyPI

+  ### Prerequisite knowledge

-A common way to start server packages is directly from [NPM](https://npmjs.com) or [PyPI](https://pypi.org):
+ This quickstart assumes you have familiarity with: - - - ```bash - npx -y @modelcontextprotocol/inspector npx - # For example - npx -y @modelcontextprotocol/inspector npx server-postgres postgres://127.0.0.1/testdb - ``` - + * C# + * LLMs like Claude + * .NET 8 or higher - - ```bash - npx @modelcontextprotocol/inspector uvx - # For example - npx @modelcontextprotocol/inspector uvx mcp-server-git --repository ~/code/mcp/servers.git - ``` - - + ### Logging in MCP Servers -#### Inspecting locally developed servers + When implementing MCP servers, be careful about how you handle logging: -To inspect servers locally developed or downloaded as a repository, the most common -way is: + **For STDIO-based servers:** Never write to standard output (stdout). This includes: - - - ```bash - npx @modelcontextprotocol/inspector node path/to/server/index.js args... - ``` - + * `print()` statements in Python + * `console.log()` in JavaScript + * `fmt.Println()` in Go + * Similar stdout functions in other languages - - ```bash - npx @modelcontextprotocol/inspector \ - uv \ - --directory path/to/server \ - run \ - package-name \ - args... - ``` - - + Writing to stdout will corrupt the JSON-RPC messages and break your server. -Please carefully read any attached README for the most accurate instructions. + **For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses. -## Feature overview + ### Best Practices - - - + 1. Use a logging library that writes to stderr or files -The Inspector provides several features for interacting with your MCP server: + ### System requirements -### Server connection pane + * [.NET 8 SDK](https://dotnet.microsoft.com/download/dotnet/8.0) or higher installed. 
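The stdio logging guidance above — write to stderr or a file, never stdout — can be sketched in Python. This is an illustrative snippet, not part of any MCP SDK; the logger name and format are assumptions:

```python
# Illustrative sketch: a STDIO-based MCP server must keep stdout clean for
# JSON-RPC, so route all diagnostic logging to stderr instead.
import logging
import sys

def make_stderr_logger(name="mcp-server"):
    """Build a logger whose only handler writes to stderr, never stdout."""
    logger = logging.getLogger(name)
    handler = logging.StreamHandler(sys.stderr)  # never sys.stdout
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

logger = make_stderr_logger()
logger.info("Server started successfully")  # goes to stderr; stdout stays clean
```

The same idea applies in any language: route every diagnostic stream away from the file descriptor that carries the JSON-RPC messages.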
-* Allows selecting the [transport](/docs/concepts/transports) for connecting to the server -* For local servers, supports customizing the command-line arguments and environment + ### Set up your environment -### Resources tab + First, let's install `dotnet` if you haven't already. You can download `dotnet` from [official Microsoft .NET website](https://dotnet.microsoft.com/download/). Verify your `dotnet` installation: -* Lists all available resources -* Shows resource metadata (MIME types, descriptions) -* Allows resource content inspection -* Supports subscription testing + ```bash theme={null} + dotnet --version + ``` -### Prompts tab + Now, let's create and set up your project: -* Displays available prompt templates -* Shows prompt arguments and descriptions -* Enables prompt testing with custom arguments -* Previews generated messages + + ```bash macOS/Linux theme={null} + # Create a new directory for our project + mkdir weather + cd weather + # Initialize a new C# project + dotnet new console + ``` -### Tools tab + ```powershell Windows theme={null} + # Create a new directory for our project + mkdir weather + cd weather + # Initialize a new C# project + dotnet new console + ``` + -* Lists available tools -* Shows tool schemas and descriptions -* Enables tool testing with custom inputs -* Displays tool execution results + After running `dotnet new console`, you will be presented with a new C# project. + You can open the project in your favorite IDE, such as [Visual Studio](https://visualstudio.microsoft.com/) or [Rider](https://www.jetbrains.com/rider/). + Alternatively, you can create a C# application using the [Visual Studio project wizard](https://learn.microsoft.com/en-us/visualstudio/get-started/csharp/tutorial-console?view=vs-2022). 
+ After creating the project, add NuGet package for the Model Context Protocol SDK and hosting: -### Notifications pane + ```bash theme={null} + # Add the Model Context Protocol SDK NuGet package + dotnet add package ModelContextProtocol --prerelease + # Add the .NET Hosting NuGet package + dotnet add package Microsoft.Extensions.Hosting + ``` -* Presents all logs recorded from the server -* Shows notifications received from the server + Now let’s dive into building your server. -## Best practices + ## Building your server -### Development workflow + Open the `Program.cs` file in your project and replace its contents with the following code: -1. Start Development - * Launch Inspector with your server - * Verify basic connectivity - * Check capability negotiation + ```csharp theme={null} + using Microsoft.Extensions.DependencyInjection; + using Microsoft.Extensions.Hosting; + using ModelContextProtocol; + using System.Net.Http.Headers; -2. Iterative testing - * Make server changes - * Rebuild the server - * Reconnect the Inspector - * Test affected features - * Monitor messages + var builder = Host.CreateEmptyApplicationBuilder(settings: null); -3. Test edge cases - * Invalid inputs - * Missing prompt arguments - * Concurrent operations - * Verify error handling and error responses + builder.Services.AddMcpServer() + .WithStdioServerTransport() + .WithToolsFromAssembly(); -## Next steps + builder.Services.AddSingleton(_ => + { + var client = new HttpClient() { BaseAddress = new Uri("https://api.weather.gov") }; + client.DefaultRequestHeaders.UserAgent.Add(new ProductInfoHeaderValue("weather-tool", "1.0")); + return client; + }); - - - Check out the MCP Inspector source code - + var app = builder.Build(); - - Learn about broader debugging strategies - - + await app.RunAsync(); + ``` + + When creating the `ApplicationHostBuilder`, ensure you use `CreateEmptyApplicationBuilder` instead of `CreateDefaultBuilder`. 
This ensures that the server does not write any additional messages to the console. This is only necessary for servers using STDIO transport. + -# Example Servers -Source: https://modelcontextprotocol.io/examples + This code sets up a basic console application that uses the Model Context Protocol SDK to create an MCP server with standard I/O transport. -A list of example servers and implementations + ### Weather API helper functions -This page showcases various Model Context Protocol (MCP) servers that demonstrate the protocol's capabilities and versatility. These servers enable Large Language Models (LLMs) to securely access tools and data sources. + Create an extension class for `HttpClient` which helps simplify JSON request handling: -## Reference implementations + ```csharp theme={null} + using System.Text.Json; -These official reference servers demonstrate core MCP features and SDK usage: + internal static class HttpClientExt + { + public static async Task ReadJsonDocumentAsync(this HttpClient client, string requestUri) + { + using var response = await client.GetAsync(requestUri); + response.EnsureSuccessStatusCode(); + return await JsonDocument.ParseAsync(await response.Content.ReadAsStreamAsync()); + } + } + ``` -### Data and file systems + Next, define a class with the tool execution handlers for querying and converting responses from the National Weather Service API: -* **[Filesystem](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem)** - Secure file operations with configurable access controls -* **[PostgreSQL](https://github.com/modelcontextprotocol/servers/tree/main/src/postgres)** - Read-only database access with schema inspection capabilities -* **[SQLite](https://github.com/modelcontextprotocol/servers/tree/main/src/sqlite)** - Database interaction and business intelligence features -* **[Google Drive](https://github.com/modelcontextprotocol/servers/tree/main/src/gdrive)** - File access and search capabilities for Google Drive 
+ ```csharp theme={null} + using ModelContextProtocol.Server; + using System.ComponentModel; + using System.Globalization; + using System.Text.Json; -### Development tools + namespace QuickstartWeatherServer.Tools; -* **[Git](https://github.com/modelcontextprotocol/servers/tree/main/src/git)** - Tools to read, search, and manipulate Git repositories -* **[GitHub](https://github.com/modelcontextprotocol/servers/tree/main/src/github)** - Repository management, file operations, and GitHub API integration -* **[GitLab](https://github.com/modelcontextprotocol/servers/tree/main/src/gitlab)** - GitLab API integration enabling project management -* **[Sentry](https://github.com/modelcontextprotocol/servers/tree/main/src/sentry)** - Retrieving and analyzing issues from Sentry.io + [McpServerToolType] + public static class WeatherTools + { + [McpServerTool, Description("Get weather alerts for a US state code.")] + public static async Task GetAlerts( + HttpClient client, + [Description("The US state code to get alerts for.")] string state) + { + using var jsonDocument = await client.ReadJsonDocumentAsync($"/alerts/active/area/{state}"); + var jsonElement = jsonDocument.RootElement; + var alerts = jsonElement.GetProperty("features").EnumerateArray(); -### Web and browser automation + if (!alerts.Any()) + { + return "No active alerts for this state."; + } -* **[Brave Search](https://github.com/modelcontextprotocol/servers/tree/main/src/brave-search)** - Web and local search using Brave's Search API -* **[Fetch](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch)** - Web content fetching and conversion optimized for LLM usage -* **[Puppeteer](https://github.com/modelcontextprotocol/servers/tree/main/src/puppeteer)** - Browser automation and web scraping capabilities + return string.Join("\n--\n", alerts.Select(alert => + { + JsonElement properties = alert.GetProperty("properties"); + return $""" + Event: {properties.GetProperty("event").GetString()} + Area: 
{properties.GetProperty("areaDesc").GetString()} + Severity: {properties.GetProperty("severity").GetString()} + Description: {properties.GetProperty("description").GetString()} + Instruction: {properties.GetProperty("instruction").GetString()} + """; + })); + } -### Productivity and communication + [McpServerTool, Description("Get weather forecast for a location.")] + public static async Task GetForecast( + HttpClient client, + [Description("Latitude of the location.")] double latitude, + [Description("Longitude of the location.")] double longitude) + { + var pointUrl = string.Create(CultureInfo.InvariantCulture, $"/points/{latitude},{longitude}"); + using var jsonDocument = await client.ReadJsonDocumentAsync(pointUrl); + var forecastUrl = jsonDocument.RootElement.GetProperty("properties").GetProperty("forecast").GetString() + ?? throw new Exception($"No forecast URL provided by {client.BaseAddress}points/{latitude},{longitude}"); -* **[Slack](https://github.com/modelcontextprotocol/servers/tree/main/src/slack)** - Channel management and messaging capabilities -* **[Google Maps](https://github.com/modelcontextprotocol/servers/tree/main/src/google-maps)** - Location services, directions, and place details -* **[Memory](https://github.com/modelcontextprotocol/servers/tree/main/src/memory)** - Knowledge graph-based persistent memory system + using var forecastDocument = await client.ReadJsonDocumentAsync(forecastUrl); + var periods = forecastDocument.RootElement.GetProperty("properties").GetProperty("periods").EnumerateArray(); -### AI and specialized tools + return string.Join("\n---\n", periods.Select(period => $""" + {period.GetProperty("name").GetString()} + Temperature: {period.GetProperty("temperature").GetInt32()}°F + Wind: {period.GetProperty("windSpeed").GetString()} {period.GetProperty("windDirection").GetString()} + Forecast: {period.GetProperty("detailedForecast").GetString()} + """)); + } + } + ``` -* 
**[EverArt](https://github.com/modelcontextprotocol/servers/tree/main/src/everart)** - AI image generation using various models -* **[Sequential Thinking](https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking)** - Dynamic problem-solving through thought sequences -* **[AWS KB Retrieval](https://github.com/modelcontextprotocol/servers/tree/main/src/aws-kb-retrieval-server)** - Retrieval from AWS Knowledge Base using Bedrock Agent Runtime + ### Running the server -## Official integrations + Finally, run the server using the following command: -These MCP servers are maintained by companies for their platforms: + ```bash theme={null} + dotnet run + ``` -* **[Axiom](https://github.com/axiomhq/mcp-server-axiom)** - Query and analyze logs, traces, and event data using natural language -* **[Browserbase](https://github.com/browserbase/mcp-server-browserbase)** - Automate browser interactions in the cloud -* **[Cloudflare](https://github.com/cloudflare/mcp-server-cloudflare)** - Deploy and manage resources on the Cloudflare developer platform -* **[E2B](https://github.com/e2b-dev/mcp-server)** - Execute code in secure cloud sandboxes -* **[Neon](https://github.com/neondatabase/mcp-server-neon)** - Interact with the Neon serverless Postgres platform -* **[Obsidian Markdown Notes](https://github.com/calclavia/mcp-obsidian)** - Read and search through Markdown notes in Obsidian vaults -* **[Prisma](https://pris.ly/docs/mcp-server)** - Manage and interact with Prisma Postgres databases -* **[Qdrant](https://github.com/qdrant/mcp-server-qdrant/)** - Implement semantic memory using the Qdrant vector search engine -* **[Raygun](https://github.com/MindscapeHQ/mcp-server-raygun)** - Access crash reporting and monitoring data -* **[Search1API](https://github.com/fatwang2/search1api-mcp)** - Unified API for search, crawling, and sitemaps -* **[Stripe](https://github.com/stripe/agent-toolkit)** - Interact with the Stripe API -* 
**[Tinybird](https://github.com/tinybirdco/mcp-tinybird)** - Interface with the Tinybird serverless ClickHouse platform -* **[Weaviate](https://github.com/weaviate/mcp-server-weaviate)** - Enable Agentic RAG through your Weaviate collection(s) + This will start the server and listen for incoming requests on standard input/output. -## Community highlights + ## Testing your server with Claude for Desktop -A growing ecosystem of community-developed servers extends MCP's capabilities: + + Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/docs/develop/build-client) tutorial to build an MCP client that connects to the server we just built. + -* **[Docker](https://github.com/ckreiling/mcp-server-docker)** - Manage containers, images, volumes, and networks -* **[Kubernetes](https://github.com/Flux159/mcp-server-kubernetes)** - Manage pods, deployments, and services -* **[Linear](https://github.com/jerhadf/linear-mcp-server)** - Project management and issue tracking -* **[Snowflake](https://github.com/datawiz168/mcp-snowflake-service)** - Interact with Snowflake databases -* **[Spotify](https://github.com/varunneal/spotify-mcp)** - Control Spotify playback and manage playlists -* **[Todoist](https://github.com/abhiz123/todoist-mcp-server)** - Task management integration + First, make sure you have Claude for Desktop installed. [You can install the latest version + here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.** + We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist. + For example, if you have [VS Code](https://code.visualstudio.com/) installed: -> **Note:** Community servers are untested and should be used at your own risk. 
They are not affiliated with or endorsed by Anthropic.
+ ```bash macOS/Linux theme={null}
+ code ~/Library/Application\ Support/Claude/claude_desktop_config.json
+ ```
-For a complete list of community servers, visit the [MCP Servers Repository](https://github.com/modelcontextprotocol/servers).
+ ```powershell Windows theme={null}
+ code $env:AppData\Claude\claude_desktop_config.json
+ ```
-## Getting started
+ You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
+ In this case, we'll add our single weather server like so:
-### Using reference servers
+ ```json macOS/Linux theme={null}
+ {
+   "mcpServers": {
+     "weather": {
+       "command": "dotnet",
+       "args": ["run", "--project", "/ABSOLUTE/PATH/TO/PROJECT", "--no-build"]
+     }
+   }
+ }
+ ```
-TypeScript-based servers can be used directly with `npx`:
+ ```json Windows theme={null}
+ {
+   "mcpServers": {
+     "weather": {
+       "command": "dotnet",
+       "args": [
+         "run",
+         "--project",
+         "C:\\ABSOLUTE\\PATH\\TO\\PROJECT",
+         "--no-build"
+       ]
+     }
+   }
+ }
+ ```
-```bash
-npx -y @modelcontextprotocol/server-memory
-```
+ This tells Claude for Desktop:
-Python-based servers can be used with `uvx` (recommended) or `pip`:
+ 1. There's an MCP server named "weather"
+ 2. Launch it by running `dotnet run --project /ABSOLUTE/PATH/TO/PROJECT --no-build`
+ Save the file, and restart **Claude for Desktop**.
-```bash
-# Using uvx
-uvx mcp-server-git
+ Let's get started with building our weather server!
[You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-rust)
-# Using pip
-pip install mcp-server-git
-python -m mcp_server_git
-```
+ ### Prerequisite knowledge
-### Configuring with Claude
+ This quickstart assumes you have familiarity with:
-To use an MCP server with Claude, add it to your configuration:
+ * Rust programming language
+ * Async/await in Rust
+ * LLMs like Claude
-```json
-{
-  "mcpServers": {
-    "memory": {
-      "command": "npx",
-      "args": ["-y", "@modelcontextprotocol/server-memory"]
-    },
-    "filesystem": {
-      "command": "npx",
-      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"]
-    },
-    "github": {
-      "command": "npx",
-      "args": ["-y", "@modelcontextprotocol/server-github"],
-      "env": {
-        "GITHUB_PERSONAL_ACCESS_TOKEN": ""
-      }
-    }
-  }
-}
-```
+ ### Logging in MCP Servers
-## Additional resources
+ When implementing MCP servers, be careful about how you handle logging:
-* [MCP Servers Repository](https://github.com/modelcontextprotocol/servers) - Complete collection of reference implementations and community servers
-* [Awesome MCP Servers](https://github.com/punkpeye/awesome-mcp-servers) - Curated list of MCP servers
-* [MCP CLI](https://github.com/wong2/mcp-cli) - Command-line inspector for testing MCP servers
-* [MCP Get](https://mcp-get.com) - Tool for installing and managing MCP servers
-* [Supergateway](https://github.com/supercorp-ai/supergateway) - Run MCP stdio servers over SSE
-* [Zapier MCP](https://zapier.com/mcp) - MCP Server with over 7,000+ apps and 30,000+ actions
+ **For STDIO-based servers:** Never write to standard output (stdout). This includes:
-Visit our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to engage with the MCP community.
+ * `print()` statements in Python
+ * `console.log()` in JavaScript
+ * `println!()` in Rust
+ * Similar stdout functions in other languages
+ Writing to stdout will corrupt the JSON-RPC messages and break your server.
-# FAQs
-Source: https://modelcontextprotocol.io/faqs
+ **For HTTP-based servers:** Standard output logging is fine since it doesn't interfere with HTTP responses.
-Explaining MCP and why it matters in simple terms
+ ### Best Practices
-## What is MCP?
+ 1. Use a logging library that writes to stderr or files, such as `tracing` or `log` in Rust.
+ 2. Configure your logging framework to avoid stdout output.
-MCP (Model Context Protocol) is a standard way for AI applications and agents to connect to and work with your data sources (e.g. local files, databases, or content repositories) and tools (e.g. GitHub, Google Maps, or Puppeteer).
+ ### Quick Examples
-Think of MCP as a universal adapter for AI applications, similar to what USB-C is for physical devices. USB-C acts as a universal adapter to connect devices to various peripherals and accessories. Similarly, MCP provides a standardized way to connect AI applications to different data and tools.
+ ```rust theme={null}
+ // ❌ Bad (STDIO)
+ println!("Processing request");
-Before USB-C, you needed different cables for different connections. Similarly, before MCP, developers had to build custom connections to each data source or tool they wanted their AI application to work with—a time-consuming process that often resulted in limited functionality. Now, with MCP, developers can easily add connections to their AI applications, making their applications much more powerful from day one.
+ // ✅ Good (STDIO)
+ use tracing::info;
+ info!("Processing request"); // writes to stderr
+ ```
-## Why does MCP matter?
+ ### System requirements
-### For AI application users
+ * Rust 1.70 or higher installed.
+ * Cargo (comes with Rust installation).
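Before moving on, the stdout rule from the logging section can be illustrated with plain `std` macros. This is a minimal sketch using only the standard library; the `log_line` helper is our own illustrative name, not part of `rmcp` or any logging crate:

```rust
// Diagnostics are formatted here and written to stderr; stdout is left
// untouched so it can carry only JSON-RPC protocol frames.
// `log_line` is a hypothetical helper, purely for illustration.
fn log_line(event: &str) -> String {
    format!("[weather] {event}")
}

fn main() {
    // Safe for STDIO servers: stderr never mixes with the protocol stream.
    eprintln!("{}", log_line("processing request"));
    // stdout is reserved for protocol frames like this one.
    println!(r#"{{"jsonrpc":"2.0","id":1,"result":null}}"#);
}
```

The same discipline is what a stderr-configured `tracing` subscriber enforces for you automatically.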
-MCP means your AI applications can access the information and tools you work with every day, making them much more helpful. Rather than AI being limited to what it already knows about, it can now understand your specific documents, data, and work context.
+ ### Set up your environment
-For example, by using MCP servers, applications can access your personal documents from Google Drive or data about your codebase from GitHub, providing more personalized and contextually relevant assistance.
+ First, let's install Rust if you haven't already. You can install Rust from [rust-lang.org](https://www.rust-lang.org/tools/install):
-Imagine asking an AI assistant: "Summarize last week's team meeting notes and schedule follow-ups with everyone."
+ ```bash macOS/Linux theme={null}
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
-By using connections to data sources powered by MCP, the AI assistant can:
+ ```powershell Windows theme={null}
+ # Download and run rustup-init.exe from https://rustup.rs/
+ ```
-* Connect to your Google Drive through an MCP server to read meeting notes
-* Understand who needs follow-ups based on the notes
-* Connect to your calendar through another MCP server to schedule the meetings automatically
+ Verify your Rust installation:
-### For developers
+ ```bash theme={null}
+ rustc --version
+ cargo --version
+ ```
-MCP reduces development time and complexity when building AI applications that need to access various data sources. With MCP, developers can focus on building great AI experiences rather than repeatedly creating custom connectors.
+ Now, let's create and set up our project:
-Traditionally, connecting applications with data sources required building custom, one-off connections for each data source and each application. This created significant duplicative work—every developer wanting to connect their AI application to Google Drive or Slack needed to build their own connection.
+ ```bash macOS/Linux theme={null}
+ # Create a new Rust project
+ cargo new weather
+ cd weather
+ ```
-MCP simplifies this by enabling developers to build MCP servers for data sources that are then reusable by various applications. For example, using the open source Google Drive MCP server, many different applications can access data from Google Drive without each developer needing to build a custom connection.
+ ```powershell Windows theme={null}
+ # Create a new Rust project
+ cargo new weather
+ cd weather
+ ```
-This open source ecosystem of MCP servers means developers can leverage existing work rather than starting from scratch, making it easier to build powerful AI applications that seamlessly integrate with the tools and data sources their users already rely on.
+ Update your `Cargo.toml` to add the required dependencies:
+ ```toml Cargo.toml theme={null}
+ [package]
+ name = "weather"
+ version = "0.1.0"
+ edition = "2024"
+
+ [dependencies]
+ rmcp = { version = "0.3", features = ["server", "macros", "transport-io"] }
+ tokio = { version = "1.46", features = ["full"] }
+ reqwest = { version = "0.12", features = ["json"] }
+ serde = { version = "1.0", features = ["derive"] }
+ serde_json = "1.0"
+ anyhow = "1.0"
+ tracing = "0.1"
+ tracing-subscriber = { version = "0.3", features = ["env-filter", "std", "fmt"] }
+ ```
-## How does MCP work?
+ Now let's dive into building your server.
+ ## Building your server
-MCP creates a bridge between your AI applications and your data through a straightforward system:
+ ### Importing packages and constants
-* **MCP servers** connect to your data sources and tools (like Google Drive or Slack)
-* **MCP clients** are run by AI applications (like Claude Desktop) to connect them to these servers
-* When you give permission, your AI application discovers available MCP servers
-* The AI model can then use these connections to read information and take actions
+ Open `src/main.rs` and add these imports and constants at the top:
-This modular system means new capabilities can be added without changing AI applications themselves—just like adding new accessories to your computer without upgrading your entire system.
+ ```rust theme={null}
+ use anyhow::Result;
+ use rmcp::{
+     ServerHandler, ServiceExt,
+     handler::server::{router::tool::ToolRouter, tool::Parameters},
+     model::*,
+     schemars, tool, tool_handler, tool_router,
+ };
+ use serde::Deserialize;
+ use serde::de::DeserializeOwned;
-## Who creates and maintains MCP servers?
+ const NWS_API_BASE: &str = "https://api.weather.gov";
+ const USER_AGENT: &str = "weather-app/1.0";
+ ```
-MCP servers are developed and maintained by:
+ The `rmcp` crate provides the Model Context Protocol SDK for Rust, with features for server implementation, procedural macros, and stdio transport.
-* Developers at Anthropic who build servers for common tools and data sources
-* Open source contributors who create servers for tools they use
-* Enterprise development teams building servers for their internal systems
-* Software providers making their applications AI-ready
+ ### Data structures
-Once an open source MCP server is created for a data source, it can be used by any MCP-compatible AI application, creating a growing ecosystem of connections.
See our [list of example servers](https://modelcontextprotocol.io/examples), or [get started building your own server](https://modelcontextprotocol.io/quickstart/server).
+ Next, let's define the data structures for deserializing responses from the National Weather Service API:
+ ```rust theme={null}
+ #[derive(Debug, Deserialize)]
+ struct AlertsResponse {
+     features: Vec<AlertFeature>,
+ }
-# Introduction
-Source: https://modelcontextprotocol.io/introduction
+ #[derive(Debug, Deserialize)]
+ struct AlertFeature {
+     properties: AlertProperties,
+ }
-Get started with the Model Context Protocol (MCP)
+ #[derive(Debug, Deserialize)]
+ struct AlertProperties {
+     event: Option<String>,
+     #[serde(rename = "areaDesc")]
+     area_desc: Option<String>,
+     severity: Option<String>,
+     description: Option<String>,
+     instruction: Option<String>,
+ }
-C# SDK released! Check out [what else is new.](/development/updates)
+ #[derive(Debug, Deserialize)]
+ struct PointsResponse {
+     properties: PointsProperties,
+ }
-MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
+ #[derive(Debug, Deserialize)]
+ struct PointsProperties {
+     forecast: String,
+ }
-## Why MCP?
+ #[derive(Debug, Deserialize)]
+ struct ForecastResponse {
+     properties: ForecastProperties,
+ }
-MCP helps you build agents and complex workflows on top of LLMs. LLMs frequently need to integrate with data and tools, and MCP provides:
+ #[derive(Debug, Deserialize)]
+ struct ForecastProperties {
+     periods: Vec<ForecastPeriod>,
+ }
-* A growing list of pre-built integrations that your LLM can directly plug into
-* The flexibility to switch between LLM providers and vendors
-* Best practices for securing your data within your infrastructure
+ #[derive(Debug, Deserialize)]
+ struct ForecastPeriod {
+     name: String,
+     temperature: i32,
+     #[serde(rename = "temperatureUnit")]
+     temperature_unit: String,
+     #[serde(rename = "windSpeed")]
+     wind_speed: String,
+     #[serde(rename = "windDirection")]
+     wind_direction: String,
+     #[serde(rename = "detailedForecast")]
+     detailed_forecast: String,
+ }
+ ```
-### General architecture
+ Now define the request types that MCP clients will send:
-At its core, MCP follows a client-server architecture where a host application can connect to multiple servers:
+ ```rust theme={null}
+ #[derive(serde::Deserialize, schemars::JsonSchema)]
+ pub struct MCPForecastRequest {
+     latitude: f32,
+     longitude: f32,
+ }
-```mermaid
-flowchart LR
-    subgraph "Your Computer"
-        Host["Host with MCP Client\n(Claude, IDEs, Tools)"]
-        S1["MCP Server A"]
-        S2["MCP Server B"]
-        S3["MCP Server C"]
-        Host <-->|"MCP Protocol"| S1
-        Host <-->|"MCP Protocol"| S2
-        Host <-->|"MCP Protocol"| S3
-        S1 <--> D1[("Local\nData Source A")]
-        S2 <--> D2[("Local\nData Source B")]
-    end
-    subgraph "Internet"
-        S3 <-->|"Web APIs"| D3[("Remote\nService C")]
-    end
-```
+ #[derive(serde::Deserialize, schemars::JsonSchema)]
+ pub struct MCPAlertRequest {
+     state: String,
+ }
+ ```
-* **MCP Hosts**: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
-* **MCP Clients**: Protocol clients that maintain 1:1 connections with servers
-* **MCP Servers**: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
-* **Local Data Sources**: Your computer's files, databases, and services
that MCP servers can securely access
-* **Remote Services**: External systems available over the internet (e.g., through APIs) that MCP servers can connect to
+ ### Helper functions
-## Get started
+ Add helper functions for making API requests and formatting responses:
+ ```rust theme={null}
+ async fn make_nws_request<T: DeserializeOwned>(url: &str) -> Result<T> {
+     let client = reqwest::Client::new();
+     let rsp = client
+         .get(url)
+         .header(reqwest::header::USER_AGENT, USER_AGENT)
+         .header(reqwest::header::ACCEPT, "application/geo+json")
+         .send()
+         .await?
+         .error_for_status()?;
+     Ok(rsp.json::<T>().await?)
+ }
-Choose the path that best fits your needs:
+ fn format_alert(feature: &AlertFeature) -> String {
+     let props = &feature.properties;
+     format!(
+         "Event: {}\nArea: {}\nSeverity: {}\nDescription: {}\nInstructions: {}",
+         props.event.as_deref().unwrap_or("Unknown"),
+         props.area_desc.as_deref().unwrap_or("Unknown"),
+         props.severity.as_deref().unwrap_or("Unknown"),
+         props
+             .description
+             .as_deref()
+             .unwrap_or("No description available"),
+         props
+             .instruction
+             .as_deref()
+             .unwrap_or("No specific instructions provided")
+     )
+ }
-#### Quick Starts
+ fn format_period(period: &ForecastPeriod) -> String {
+     format!(
+         "{}:\nTemperature: {}°{}\nWind: {} {}\nForecast: {}",
+         period.name,
+         period.temperature,
+         period.temperature_unit,
+         period.wind_speed,
+         period.wind_direction,
+         period.detailed_forecast
+     )
+ }
+ ```
-Get started building your own server to use in Claude for Desktop and other clients
+ ### Implementing the Weather server and tools
-Get started building your own client that can integrate with all MCP servers
+ Now let's implement the main Weather server struct with the tool handlers:
-Get started using pre-built servers in Claude for Desktop
+ ```rust theme={null}
+ pub struct Weather {
+     tool_router: ToolRouter<Self>,
+ }
-#### Examples
+ #[tool_router]
+ impl Weather {
+     fn new() -> Self {
+         Self {
+             tool_router: Self::tool_router(),
+         }
+     }
-Check out our gallery of official MCP servers and implementations
+     #[tool(description = "Get weather alerts for a US state.")]
+     async fn get_alerts(
+         &self,
+         Parameters(MCPAlertRequest { state }): Parameters<MCPAlertRequest>,
+     ) -> String {
+         let url = format!(
+             "{}/alerts/active/area/{}",
+             NWS_API_BASE,
+             state.to_uppercase()
+         );
+
+         match make_nws_request::<AlertsResponse>(&url).await {
+             Ok(data) => {
+                 if data.features.is_empty() {
+                     "No active alerts for this state.".to_string()
+                 } else {
+                     data.features
+                         .iter()
+                         .map(format_alert)
+                         .collect::<Vec<_>>()
+                         .join("\n---\n")
+                 }
+             }
+             Err(_) => "Unable to fetch alerts or no alerts found.".to_string(),
+         }
+     }
-View the list of clients that support MCP integrations
+     #[tool(description = "Get weather forecast for a location.")]
+     async fn get_forecast(
+         &self,
+         Parameters(MCPForecastRequest {
+             latitude,
+             longitude,
+         }): Parameters<MCPForecastRequest>,
+     ) -> String {
+         let points_url = format!("{NWS_API_BASE}/points/{latitude},{longitude}");
+         let Ok(points_data) = make_nws_request::<PointsResponse>(&points_url).await else {
+             return "Unable to fetch forecast data for this location.".to_string();
+         };
+
+         let forecast_url = points_data.properties.forecast;
+
+         let Ok(forecast_data) = make_nws_request::<ForecastResponse>(&forecast_url).await else {
+             return "Unable to fetch forecast data for this location.".to_string();
+         };
+
+         let periods = &forecast_data.properties.periods;
+         let forecast_summary: String = periods
+             .iter()
+             .take(5) // Next 5 periods only
+             .map(format_period)
+             .collect::<Vec<_>>()
+             .join("\n---\n");
+         forecast_summary
+     }
+ }
+ ```
-## Tutorials
+ The `#[tool_router]` macro automatically generates the routing logic, and the `#[tool]` attribute marks methods as MCP tools.
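As a small sanity check of the endpoint construction used in `get_alerts`, the URL logic can be factored out and exercised on its own. This is an illustrative sketch: the `alerts_url` helper is our own name and is not part of the tutorial code, but it mirrors the `format!` call inside `get_alerts`:

```rust
const NWS_API_BASE: &str = "https://api.weather.gov";

// Illustrative helper mirroring the URL built inside get_alerts:
// state codes are upper-cased before being sent to the NWS API.
fn alerts_url(state: &str) -> String {
    format!("{}/alerts/active/area/{}", NWS_API_BASE, state.to_uppercase())
}

fn main() {
    // Lowercase input is normalized to the two-letter uppercase code.
    assert_eq!(
        alerts_url("ca"),
        "https://api.weather.gov/alerts/active/area/CA"
    );
    println!("{}", alerts_url("ny"));
}
```

Pulling URL construction into a plain function like this also makes the tool handler easier to unit test without any network access.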
-Learn how to use LLMs like Claude to speed up your MCP development
+ ### Implementing the ServerHandler
-Learn how to effectively debug MCP servers and integrations
+ Implement the `ServerHandler` trait to define server capabilities:
-Test and inspect your MCP servers with our interactive debugging tool
+ ```rust theme={null}
+ #[tool_handler]
+ impl ServerHandler for Weather {
+     fn get_info(&self) -> ServerInfo {
+         ServerInfo {
+             capabilities: ServerCapabilities::builder().enable_tools().build(),
+             ..Default::default()
+         }
+     }
+ }
+ ```
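The section above stops at the `ServerHandler`; to actually run the server you still need a `main` that wires `Weather` to the stdio transport. The following is a sketch, not part of the patch: it assumes the `rmcp` 0.3 `ServiceExt::serve` / `transport::stdio` API and the `Weather` type and `Cargo.toml` dependencies defined earlier in this tutorial:

```rust
use anyhow::Result;
use rmcp::{ServiceExt, transport::stdio};

#[tokio::main]
async fn main() -> Result<()> {
    // Route tracing output to stderr so stdout stays clean for JSON-RPC,
    // per the logging guidance above.
    tracing_subscriber::fmt()
        .with_writer(std::io::stderr)
        .init();

    // Serve the Weather handler (defined above) over standard input/output
    // and block until the client disconnects.
    let service = Weather::new().serve(stdio()).await?;
    service.waiting().await?;
    Ok(())
}
```

With this in place, `cargo run` starts the server, and it can be registered in `claude_desktop_config.json` the same way as the other quickstart servers.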