MCP Host CLI

By VyacheslavVanin (GitHub)

A local HTTP server that proxies requests to LLMs and uses MCP servers when needed.

Overview

What is MCP Host CLI?

MCP Host CLI is a FastAPI-based command-line interface application that hosts and manages Model Context Protocol (MCP) servers, providing an HTTP API for interacting with various tools and resources.

How to use MCP Host CLI?

To use MCP Host CLI, clone the repository from GitHub, change into the project directory, and run the application with uv run main.py. MCP servers are configured through a JSON configuration file, and application settings through environment variables or command-line arguments.

Key features of MCP Host CLI?

  • Manages multiple MCP server connections.
  • Provides HTTP API endpoints for user requests, tool approval workflows, and session state management.
  • Supports both OpenAI-compatible LLM APIs and local models via Ollama.

Use cases of MCP Host CLI?

  1. Hosting and managing multiple MCP servers for various applications.
  2. Facilitating user interactions with LLMs through a structured API.
  3. Approving or denying tool execution requests based on user input.

FAQ about MCP Host CLI

  • Can MCP Host CLI manage multiple MCP servers?

Yes! It can manage multiple MCP server connections simultaneously.

  • How do I configure the servers?

You can configure the servers by editing the servers_config.json file or using environment variables.

  • Is there a default model provided?

Yes! The default model is set to "qwen2.5-coder:latest", but you can specify a different model using command line arguments.

Content

MCP Host CLI

A FastAPI-based CLI application that hosts and manages MCP (Model Context Protocol) servers, providing an HTTP API for interacting with tools and resources.

Features

  • Manages multiple MCP server connections
  • Provides HTTP API endpoints for:
    • User requests
    • Tool approval workflow
    • Session state management
  • Supports both OpenAI-compatible LLM APIs and local Ollama models

Installation

  1. Clone the repository:
git clone https://github.com/VyacheslavVanin/mcp-host-cli.git
cd mcp-host-cli
  2. Run the application:
uv run main.py

Configuration

Server Configuration

  1. Create/edit servers_config.json to configure your MCP servers:
{
  "mcpServers": {
    "server-name": {
      "command": "node",
      "args": ["path/to/server.js"],
      "env": {
        "API_KEY": "your-api-key"
      }
    }
  }
}
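
As a quick sanity check, you can load and list the configured servers with a few lines of Python (a minimal sketch; the application's own config loader may behave differently, e.g. around environment substitution):

import json

# Load the same file the application reads.
with open("servers_config.json") as f:
    config = json.load(f)

# Print each configured server and the command used to launch it.
for name, server in config["mcpServers"].items():
    cmd = " ".join([server["command"], *server.get("args", [])])
    print(f"{name}: {cmd}")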

Application Configuration

Configuration can be set via environment variables or command line arguments (CLI args take precedence).

Environment Variables

  • LLM_API_KEY: API key for LLM service (if not using Ollama)
  • LLM_PROVIDER: "ollama" (default) or "openai"
  • LLM_MODEL: Model name (default: "qwen2.5-coder:latest")
  • PORT: Server port (default: 8000)
  • OPENAI_BASE_URL: Base URL for OpenAI-compatible API (default: "https://openrouter.ai/api/v1")
  • USE_OLLAMA: Set to "true" to use local Ollama models

Command Line Arguments

python main.py --model MODEL_NAME --port PORT_NUMBER --provider PROVIDER --openai-base-url URL

Where:

  • MODEL_NAME: LLM model name (default: "qwen2.5-coder:latest")
  • PORT_NUMBER: server port (default: 8000)
  • PROVIDER: "ollama" (default) or "openai"
  • URL: base URL for an OpenAI-compatible API

Configuration Precedence

  1. Command line arguments (highest priority)
  2. Environment variables
  3. Default values (lowest priority)
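
A minimal Python sketch of how this precedence can be resolved with argparse and os.environ (hypothetical; the actual implementation may differ):

import argparse
import os

# CLI args default to None so "not given" is distinguishable from a real value.
parser = argparse.ArgumentParser()
parser.add_argument("--model")
parser.add_argument("--port", type=int)
args = parser.parse_args()

# CLI argument > environment variable > built-in default.
model = args.model or os.environ.get("LLM_MODEL", "qwen2.5-coder:latest")
port = args.port or int(os.environ.get("PORT", "8000"))
print(f"model={model} port={port}")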

Examples

# Using environment variables
export LLM_MODEL="llama3:latest"
export PORT=8080
python main.py

# Using CLI arguments
python main.py --model "llama3:latest" --port 8080

# Using defaults
python main.py

API Endpoints

POST /user_request

Handle user input and return LLM response or tool approval request.

Request:

{
  "input": "your question or command"
}

Response:

{
  "message": "response text",
  "request_id": "uuid-if-approval-needed",
  "requires_approval": true/false,
  "tool": "tool-name-if-applicable"
}
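
For example, the endpoint can be called from Python using only the standard library (a sketch assuming the server is running on the default port 8000):

import json
import urllib.request

# Send a user request to the local server.
req = urllib.request.Request(
    "http://localhost:8000/user_request",
    data=json.dumps({"input": "list files in the project"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["message"])
if reply.get("requires_approval"):
    print("approval needed:", reply["tool"], reply["request_id"])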

POST /approve

Approve or deny a tool execution request.

Request:

{
  "request_id": "uuid-from-user_request",
  "approve": true/false
}

Response:

{
  "message": "execution result or denial message",
  "request_id": "same-request-id",
  "tool": "tool-name"
}
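
Continuing the sketch above, a pending tool call can be approved (or denied) by posting the request_id back (the id below is a placeholder; use the one returned by /user_request):

import json
import urllib.request

# Paste the request_id returned by /user_request here.
request_id = "uuid-from-user_request"

req = urllib.request.Request(
    "http://localhost:8000/approve",
    data=json.dumps({"request_id": request_id, "approve": True}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["message"])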

GET /session_state

Get current chat session state including messages and pending requests.

Response:

{
  "messages": [
    {"role": "system/user/assistant", "content": "message text"}
  ],
  "_pending_request_id": "uuid-or-null",
  "_pending_tool_call": {
    "tool": "tool-name",
    "arguments": {}
  }
}
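
The session state can be inspected the same way (again a sketch assuming the default port 8000):

import json
import urllib.request

# Fetch the current chat session, including any pending tool-approval request.
with urllib.request.urlopen("http://localhost:8000/session_state") as resp:
    state = json.load(resp)

for msg in state["messages"]:
    print(f"{msg['role']}: {msg['content']}")
if state.get("_pending_request_id"):
    print("pending tool call:", state["_pending_tool_call"])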