
MCP TTS Server
A versatile TTS (Text-to-Speech) server built on the Model Context Protocol (MCP) framework. This server provides access to multiple TTS engines through a unified interface:
- Kokoro TTS - High-quality local TTS engine
- OpenAI TTS - Cloud-based TTS via OpenAI's API
Features
- 🌐 Multiple TTS engines in one unified server
- 🎧 Real-time streaming audio playback
- 🔄 MCP protocol support for seamless integration with Claude and other LLMs
- 🎛️ Configurable voice selection for both engines
- 💬 Support for voice customization via natural language instructions (OpenAI)
- ⚡ Speed adjustment for both TTS engines
- 🛑 Playback control for stopping audio and clearing the queue
Installation
Prerequisites
- Python 3.10 or higher
- uv package manager
- OpenAI API key (for OpenAI TTS functionality)
Quick Install
# Clone the repository
git clone https://github.com/kristofferv98/MCP_tts_server.git
cd MCP_tts_server
# Create a virtual environment and install dependencies
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
uv pip install -e .
Configuration
Create a .env file based on the provided .env.example:
cp .env.example .env
Edit the .env file to add your OpenAI API key:
OPENAI_API_KEY=your_openai_api_key_here
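The server reads this key from the environment at startup. As a point of reference only, a typical loading pattern looks like the sketch below; it assumes python-dotenv is available, and the actual handling inside tts_mcp.py may differ:
# Illustrative sketch of loading the key (assumes python-dotenv; not the actual tts_mcp.py code)
import os
from dotenv import load_dotenv

load_dotenv()  # reads OPENAI_API_KEY from the .env file into the process environment
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set; OpenAI TTS will be unavailable")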
Integration with Claude Desktop
To use this server with Claude Desktop:
- Install the server:
fastmcp install ./tts_mcp.py --name tts
- Alternatively, you can manually add the server to Claude Desktop's configuration file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Add this entry to the mcpServers section:
"kokoro_tts": {
  "command": "uv",
  "args": [
    "--directory",
    "/path/to/MCP_tts_server",
    "run",
    "tts_mcp.py"
  ]
}
Example configuration using the full path to uv:
"kokoro_tts": { "command": "/Users/username/.local/bin/uv", "args": [ "--directory", "/Users/username/Documents/MCP_Servers/MCP_tts_server", "run", "tts_mcp.py" ] }
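For reference, the entry goes inside the top-level mcpServers object of claude_desktop_config.json; a complete minimal file looks like this (adjust the directory path for your machine):
{
  "mcpServers": {
    "kokoro_tts": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/MCP_tts_server",
        "run",
        "tts_mcp.py"
      ]
    }
  }
}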
MCP Function Definitions
The server exposes the following MCP tools:
Main TTS Function
{
"description": "Convert text to speech using the preferred engine and streams the speech to the user. The base voice for the AI is the Kokoro engine, to keep AI's personality consistent. This unified function provides access to both Kokoro TTS (local) and OpenAI TTS (cloud API).",
"name": "tts",
"parameters": {
"properties": {
"text": {"title": "Text", "type": "string"},
"engine": {"default": "kokoro", "title": "Engine", "type": "string"},
"speed": {"default": 1, "title": "Speed", "type": "number"},
"voice": {"default": "", "title": "Voice", "type": "string"},
"instructions": {"default": "", "title": "Instructions", "type": "string"}
},
"required": ["text"]
}
}
Parameters:
- text (required): Text to convert to speech
- engine (optional): TTS engine to use - "kokoro" (default, local) or "openai" (cloud)
- speed (optional): Playback speed (0.8-1.5 typical)
- voice (optional): Voice name to use (engine-specific)
- instructions (optional): Voice customization instructions for OpenAI TTS
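For illustration, an MCP client invokes this tool with a standard tools/call request; the arguments below are example values that map onto the schema above (the JSON-RPC id and jsonrpc fields are omitted for brevity):
{
  "method": "tools/call",
  "params": {
    "name": "tts",
    "arguments": {
      "text": "Hello, this is a quick test of the TTS server.",
      "engine": "openai",
      "voice": "nova",
      "speed": 1.1
    }
  }
}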
Stop Playback Function
{
"description": "Stops the currently playing audio (if any) and clears all pending TTS requests from the queue. Relies on the background worker detecting the cancellation signal.",
"name": "tts_stop_playback_and_clear_queue",
"parameters": {
"properties": {}
}
}
Voice Examples Function
{
"description": "Provides research-based examples of effective voice instructions for OpenAI TTS.",
"name": "tts_examples",
"parameters": {
"properties": {
"category": {"default": "general", "title": "Category", "type": "string"}
}
}
}
Categories:
- general
- accents
- characters
- emotions
- narration
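As an example, a client asking for accent-related instruction examples passes the category argument like this (illustrative values only):
{
  "method": "tools/call",
  "params": {
    "name": "tts_examples",
    "arguments": { "category": "accents" }
  }
}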
Get TTS Instructions Function
{
"description": "Fetches TTS instructions by calling get_voice_info.",
"name": "get_tts_instructions",
"parameters": {
"properties": {}
}
}
Direct Usage
The primary way to use this server is through Claude Desktop or another MCP-compatible integration, as described above. However, you can also run the server directly for testing:
# Run with the uv environment manager
uv run python tts_mcp.py
This will start the MCP server, making it available for connection.
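If you prefer to drive the server from a script instead of an MCP host, the sketch below shows one way to do it with the official mcp Python SDK. The tool name tts and its arguments come from the schema above; the rest is generic stdio client boilerplate and is not part of this repository:
# Minimal MCP client sketch (assumes the "mcp" Python SDK is installed; not part of this repo)
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="uv", args=["run", "python", "tts_mcp.py"])

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call the unified tts tool with the default (Kokoro) engine
            result = await session.call_tool("tts", {"text": "Hello from the MCP TTS server"})
            print(result)

asyncio.run(main())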
Available Voices
Kokoro TTS
- Default voice: af_heart
OpenAI TTS
- Available voices: alloy, ash, ballad, coral, echo, fable, onyx, nova, sage, shimmer
- Default model: gpt-4o-mini-tts
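As an illustration, selecting an OpenAI voice together with a natural-language instruction looks like this in the tts tool arguments (example values only):
{
  "name": "tts",
  "arguments": {
    "text": "Welcome aboard, and thank you for flying with us today.",
    "engine": "openai",
    "voice": "sage",
    "instructions": "Speak like a calm, friendly flight attendant making a cabin announcement."
  }
}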
Development and Testing
To test the server locally during development:
fastmcp dev ./tts_mcp.py
This will start the MCP Inspector interface where you can test the server's functionality.
Implementation Details
The server is implemented using FastMCP and follows best practices for MCP server development:
- Unified Interface: A single function supports both Kokoro and OpenAI engines
- Streaming Support: Audio is streamed directly to the client when possible
- Fallback Mechanisms: File-based playback when streaming isn't available
- Voice Customization: Support for natural language instructions with OpenAI TTS
- Lifespan Management: Proper initialization and cleanup of resources
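For orientation, a FastMCP server with a single tool follows roughly the shape sketched below. This is a simplified illustration of the pattern (using the FastMCP class from the official mcp SDK), not the actual contents of tts_mcp.py:
# Simplified FastMCP skeleton illustrating the pattern; not the real tts_mcp.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("tts")

@mcp.tool()
def tts(text: str, engine: str = "kokoro", speed: float = 1.0,
        voice: str = "", instructions: str = "") -> str:
    """Convert text to speech with the selected engine (sketch only)."""
    # A real implementation would synthesize and stream audio via Kokoro or the OpenAI API here.
    return f"Spoke {len(text)} characters with the {engine} engine"

if __name__ == "__main__":
    mcp.run()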
Troubleshooting
- No Audio Output: Check your system's audio configuration
- OpenAI TTS Failures: Verify your API key is valid and has TTS access permissions
- Server Not Found: Make sure the MCP server is correctly registered in your MCP host
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.