# MCP Web UI
MCP Web UI is a web-based user interface that serves as a Host within the Model Context Protocol (MCP) architecture. It provides a powerful and user-friendly interface for interacting with Large Language Models (LLMs) while managing context aggregation and coordination between clients and servers.
## 🌟 Overview
MCP Web UI is designed to simplify and enhance interactions with AI language models by providing:
- A unified interface for multiple LLM providers
- Real-time, streaming chat experiences
- Flexible configuration and model management
- Robust context handling using the MCP protocol
### Demo Video
## 🚀 Features
- 🤖 Multi-Provider LLM Integration:
  - Anthropic (Claude models)
  - OpenAI (GPT models)
  - Ollama (local models)
  - OpenRouter (multiple providers)
- 💬 Intuitive Chat Interface
- 🔄 Real-time Response Streaming via Server-Sent Events (SSE)
- 🔧 Dynamic Configuration Management
- 📊 Advanced Context Aggregation
- 💾 Persistent Chat History using BoltDB
- 🎯 Flexible Model Selection
## 📋 Prerequisites
- Go 1.23+
- Docker (optional)
- API keys for desired LLM providers
## 🛠 Installation

### Quick Start

1. Clone the repository:

   ```shell
   git clone https://github.com/MegaGrindStone/mcp-web-ui.git
   cd mcp-web-ui
   ```

2. Configure your environment:

   ```shell
   mkdir -p $HOME/.config/mcpwebui
   cp config.example.yaml $HOME/.config/mcpwebui/config.yaml
   ```

3. Set up API keys:

   ```shell
   export ANTHROPIC_API_KEY=your_anthropic_key
   export OPENAI_API_KEY=your_openai_key
   export OPENROUTER_API_KEY=your_openrouter_key
   ```
### Running the Application

#### Local Development

```shell
go mod download
go run ./cmd/server/main.go
```

#### Docker Deployment

```shell
docker build -t mcp-web-ui .
docker run -p 8080:8080 \
  -v $HOME/.config/mcpwebui/config.yaml:/app/config.yaml \
  -e ANTHROPIC_API_KEY \
  -e OPENAI_API_KEY \
  -e OPENROUTER_API_KEY \
  mcp-web-ui
```
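For longer-running deployments, the same volume mount and environment passthrough can be expressed declaratively with Compose. The following is a minimal sketch, not a file shipped with the repository; the service name and build context are assumptions:

```yaml
# docker-compose.yaml: illustrative sketch, not part of the repository.
services:
  mcp-web-ui:
    build: .                                  # build from the local Dockerfile
    ports:
      - "8080:8080"                           # expose the web UI on the host
    volumes:
      # mount the host config read-only at the path the container expects
      - $HOME/.config/mcpwebui/config.yaml:/app/config.yaml:ro
    environment:
      # forward provider API keys from the host environment
      - ANTHROPIC_API_KEY
      - OPENAI_API_KEY
      - OPENROUTER_API_KEY
```

Start it with `docker compose up -d`.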
## 🔧 Configuration

The configuration file (`config.yaml`) provides comprehensive settings for customizing the MCP Web UI. Here's a detailed breakdown:

### Server Configuration

- `port`: The port on which the server runs (default: 8080)
- `logLevel`: Logging verbosity (options: debug, info, warn, error; default: info)
- `logMode`: Log output format (options: json, text; default: text)
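For example, keeping the default port while switching to verbose structured logging would look like this (a sketch; all three keys fall back to the defaults above when omitted):

```yaml
port: 8080        # default
logLevel: debug   # debug | info | warn | error
logMode: json     # json | text
```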
### Prompt Configuration

- `systemPrompt`: Default system prompt for the AI assistant
- `titleGeneratorPrompt`: Prompt used to generate chat titles
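Both settings are plain strings; the wording below is illustrative:

```yaml
systemPrompt: You are a helpful assistant.
titleGeneratorPrompt: Generate a short, descriptive title for this conversation.  # illustrative wording
```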
### LLM (Language Model) Configuration

The `llm` section supports multiple providers with provider-specific configurations:

#### Common LLM Parameters

- `provider`: Choose from: ollama, anthropic, openai, openrouter
- `model`: Specific model name (e.g., 'claude-3-5-sonnet-20241022')
- `parameters`: Fine-tune model behavior:
  - `temperature`: Randomness of responses (0.0-1.0)
  - `topP`: Nucleus sampling threshold
  - `topK`: Number of highest-probability tokens to keep
  - `frequencyPenalty`: Reduce repetition of token sequences
  - `presencePenalty`: Encourage discussing new topics
  - `maxTokens`: Maximum response length
  - `stop`: Sequences to stop generation
  - And more provider-specific parameters
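Put together, a tuned `llm` block might look like the following sketch (the parameter values are illustrative, not recommendations):

```yaml
llm:
  provider: anthropic
  model: claude-3-5-sonnet-20241022
  parameters:
    temperature: 0.7   # 0.0 = focused, 1.0 = most random
    topP: 0.9          # nucleus sampling threshold
    maxTokens: 1000    # cap on response length
    stop:              # stop generation at these sequences
      - "###"
```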
#### Provider-Specific Configurations

- Ollama:
  - `host`: Ollama server URL (default: http://localhost:11434)
- Anthropic:
  - `apiKey`: Anthropic API key (can use ANTHROPIC_API_KEY env variable)
  - `maxTokens`: Maximum token limit
- OpenAI:
  - `apiKey`: OpenAI API key (can use OPENAI_API_KEY env variable)
- OpenRouter:
  - `apiKey`: OpenRouter API key (can use OPENROUTER_API_KEY env variable)
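As a sketch, pointing the UI at a local Ollama instance needs only the provider, a model, and the host; the model name here is an assumption, so substitute one you have pulled:

```yaml
llm:
  provider: ollama
  model: llama3                   # assumed model name; use one you have pulled
  host: http://localhost:11434    # default Ollama endpoint
```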
### Title Generator Configuration

The `genTitleLLM` section allows separate configuration for title generation, defaulting to the main LLM if not specified.
### MCP Server Configurations

- `mcpSSEServers`: Configure Server-Sent Events (SSE) servers
  - `url`: SSE server URL
  - `maxPayloadSize`: Maximum payload size
- `mcpStdIOServers`: Configure Standard Input/Output servers
  - `command`: Command to run the server
  - `args`: Arguments for the server command
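A sketch of both kinds of entries follows. The exact shape (here, a map keyed by an arbitrary server name) and every value are assumptions for illustration; consult `config.example.yaml` for the authoritative schema:

```yaml
mcpSSEServers:
  remote-tools:                      # arbitrary server name (assumed shape)
    url: https://example.com/sse     # placeholder URL
    maxPayloadSize: 1048576          # bytes
mcpStdIOServers:
  filesystem:                        # arbitrary server name (assumed shape)
    command: npx
    args:
      - "-y"
      - "@modelcontextprotocol/server-filesystem"
      - /path/to/allowed/dir         # placeholder path
```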
### Example Configuration Snippet

```yaml
port: 8080
logLevel: info
systemPrompt: You are a helpful assistant.

llm:
  provider: anthropic
  model: claude-3-5-sonnet-20241022
  parameters:
    temperature: 0.7
    maxTokens: 1000

genTitleLLM:
  provider: openai
  model: gpt-3.5-turbo
```
## 🏗 Project Structure

- `cmd/`: Application entry point
- `internal/handlers/`: Web request handlers
- `internal/models/`: Data models
- `internal/services/`: LLM provider integrations
- `static/`: Static assets (CSS)
- `templates/`: HTML templates
## 🤝 Contributing
- Fork the repository
- Create a feature branch
- Commit your changes
- Push and create a Pull Request
## 📄 License
MIT License