What is MCP Host?
MCP Host is a Python implementation of a Model Context Protocol (MCP) host that connects Ollama LLM backends to MCP servers, letting the language model discover and call tools exposed by those servers.
How to use MCP Host?
To use MCP Host, run python mcp_host.py in your terminal after creating a JSON configuration file that defines your MCP servers and LLM provider.
Key features of MCP Host?
- Multiple server support for connecting to various MCP-compatible servers.
- Supports multiple transport types including stdio and SSE.
- Integration with local Ollama models for enhanced functionality.
- Tool execution capabilities allowing LLMs to utilize tools from connected servers.
- A simple command-line interface for ease of use.
- JSON configuration for straightforward server and LLM setup.
Use cases of MCP Host?
- Chatting with local Ollama models while giving them access to external tools.
- Executing tools from multiple connected servers during a single LLM query.
- Bridging real-time communication between LLMs and external resources over stdio or SSE.
FAQ about MCP Host
- What is required to run MCP Host?
You need to have Ollama running locally or on a remote server.
- Can I connect multiple servers?
Yes! MCP Host supports connecting to any number of MCP-compatible servers.
- How do I configure the servers?
You can define your servers and LLM provider in a JSON configuration file.
MCP Host
A Python implementation of a Model Context Protocol (MCP) host that connects to Ollama LLM backends and MCP servers.
Features
- Multiple Server Support: Connect to any number of MCP-compatible servers
- Multiple Transport Types: Supports both stdio and SSE transports
- Ollama Integration: Seamless connection to local Ollama models
- Tool Execution: Enable LLMs to use tools from connected servers
- Simple CLI: Easy-to-use command-line interface
- JSON Configuration: Simple config file for server and LLM setup
Requirements
- Ollama running locally or on a remote server
Run the MCP Host
python mcp_host.py
Run the weather server (SSE)
python weather.py
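The repository's weather.py is not reproduced here, but a minimal SSE server can be sketched with the official MCP Python SDK's FastMCP helper. Everything below (the server name, the get_forecast tool, and the canned reply) is illustrative and not the actual weather.py:

```python
# weather_sketch.py -- illustrative SSE MCP server (not the repo's weather.py)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")  # hypothetical server name

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short forecast for the given city."""
    # A real server would query a weather API; this returns canned data.
    return f"Sunny and 22 degrees in {city}"

if __name__ == "__main__":
    # Serve over SSE so the host can reach the server via a URL.
    mcp.run(transport="sse")
```

With the SDK's defaults the SSE endpoint is typically http://localhost:8000/sse, which is the url you would point an SSE server entry at in the configuration described below.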
Configuration
The MCP Host uses a JSON configuration file to define:
- MCP Servers: The servers that provide tools and resources
- LLM Provider: Configuration for the Ollama backend
Server Configuration
Each server needs:
- type: The transport mechanism (stdio or sse)
- For stdio servers:
  - command: The command to run
  - args: Command-line arguments (optional)
  - env: Environment variables (optional)
- For SSE servers:
  - url: The SSE endpoint URL
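Putting these fields together, the server section of a config might look like the sketch below. The top-level mcpServers key and the server names are assumptions; consult the repository's sample config for the exact shape.

```json
{
  "mcpServers": {
    "filesystem": {
      "type": "stdio",
      "command": "python",
      "args": ["fs_server.py"],
      "env": { "LOG_LEVEL": "info" }
    },
    "weather": {
      "type": "sse",
      "url": "http://localhost:8000/sse"
    }
  }
}
```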
LLM Provider Configuration
- type: The provider type (currently only ollama is supported)
- model: The model name to use (e.g., llama3, mistral, etc.)
- url: The Ollama API URL (default: http://localhost:11434)
- parameters: Additional parameters for Ollama (temperature, top_p, etc.)
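For example, the LLM provider section of the config might look like this sketch (the llm key name and the exact nesting are assumptions based on the fields above):

```json
{
  "llm": {
    "type": "ollama",
    "model": "llama3",
    "url": "http://localhost:11434",
    "parameters": {
      "temperature": 0.7,
      "top_p": 0.9
    }
  }
}
```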
Command-Line Options
usage: mcp_host.py [-h] [--config CONFIG] [--model MODEL]
[--message-window MESSAGE_WINDOW]
[--provider {ollama}] [--ollama-url OLLAMA_URL]
[--ollama-model OLLAMA_MODEL] [--debug] [--save-config]
MCP Host for LLM tool interactions
options:
-h, --help show this help message and exit
--config CONFIG Path to config file (default: config.json in current directory)
--model MODEL, -m MODEL
Override model specified in config
--message-window MESSAGE_WINDOW
Number of messages to keep in context
Provider Selection:
--provider {ollama} Select LLM provider
Ollama Options:
--ollama-url OLLAMA_URL
URL for Ollama API (e.g., http://localhost:11434)
--ollama-model OLLAMA_MODEL
Ollama model to use (e.g., llama3, mistral, etc.)
Other options:
--debug Enable debug logging
--save-config Save provider options to config file
Usage Examples
Basic Usage
python mcp_host.py
Debug Mode
python mcp_host.py --debug
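Custom Model and Ollama URL
Using the documented flags, you can point the host at a specific model and endpoint (the values here are illustrative):
python mcp_host.py --ollama-model llama3 --ollama-url http://localhost:11434
Add --save-config to persist these provider options to the config file.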
Special Commands
During a chat session, you can use the following special commands:
- tools: List all available tools from connected servers
- servers: List all connected MCP servers
- exit or quit: End the session
How It Works
1. When you start MCP Host, it connects to all configured MCP servers
2. Each server provides a list of available tools
3. When you enter a query, it's sent to the Ollama LLM
4. If the LLM decides to use tools, MCP Host executes those tool calls
5. The results are sent back to the LLM
6. The LLM provides a final response incorporating the tool results
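As an illustration of steps 3-6, here is a sketch of the query/tool-call loop using the ollama Python client's tool-calling API. This is not the repository's actual code: the execute_tool callback is hypothetical, standing in for the host's dispatch of each call to the MCP server that owns the tool.

```python
# Illustrative sketch of the query -> tool call -> final answer loop.
# Not the repository's implementation; execute_tool is a hypothetical
# callback that forwards a call to the owning MCP server session.
import ollama

def chat_with_tools(model, messages, tools, execute_tool):
    """Run one user turn, executing any tool calls the model requests."""
    response = ollama.chat(model=model, messages=messages, tools=tools)
    message = response.message

    # Loop while the model keeps requesting tools (steps 4-6).
    while message.tool_calls:
        messages.append(message)
        for call in message.tool_calls:
            result = execute_tool(call.function.name, call.function.arguments)
            # Feed the tool result back so the model can use it.
            messages.append({"role": "tool", "content": str(result)})
        response = ollama.chat(model=model, messages=messages, tools=tools)
        message = response.message

    return message.content  # final response incorporating tool results
```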