What is the AI Customer Support Bot - MCP Server?
The AI Customer Support Bot - MCP Server is a Model Context Protocol (MCP) server that provides automated customer support, using Cursor AI for response generation and Glama.ai for context fetching.
How do I use the AI Customer Support Bot?
To use the bot, clone the repository, set up the environment, configure your API keys, and run the server. You can then interact with the bot through various API endpoints.
What are the key features of the AI Customer Support Bot?
- Real-time context fetching from Glama.ai
- AI-powered response generation with Cursor AI
- Batch processing support for multiple queries
- Priority queuing for urgent requests
- Rate limiting to manage request load
- User interaction tracking for analytics
- Health monitoring to ensure service availability
- Compliance with MCP protocol for standardization
What are the use cases of the AI Customer Support Bot?
- Automating responses to frequently asked questions.
- Providing 24/7 customer support without human intervention.
- Handling multiple customer queries simultaneously through batch processing.
FAQ about the AI Customer Support Bot
- What technologies does the bot use?
The bot integrates Cursor AI for response generation and Glama.ai for context fetching.
- Is there a limit on the number of requests?
Yes, the server implements rate limiting to prevent abuse, with a default of 100 requests per 60 seconds.
- How can I monitor the server's health?
You can check the server health and service status through the health check endpoint.
AI Customer Support Bot - MCP Server
A Model Context Protocol (MCP) server that provides AI-powered customer support using Cursor AI and Glama.ai integration.
Features
- Real-time context fetching from Glama.ai
- AI-powered response generation with Cursor AI
- Batch processing support
- Priority queuing
- Rate limiting
- User interaction tracking
- Health monitoring
- MCP protocol compliance
Prerequisites
- Python 3.8+
- PostgreSQL database
- Glama.ai API key
- Cursor AI API key
Installation
- Clone the repository:
git clone <repository-url>
cd <repository-name>
- Create and activate a virtual environment:
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
- Install dependencies:
pip install -r requirements.txt
- Create a .env file based on .env.example:
cp .env.example .env
- Configure your .env file with your credentials:
# API Keys
GLAMA_API_KEY=your_glama_api_key_here
CURSOR_API_KEY=your_cursor_api_key_here
# Database
DATABASE_URL=postgresql://user:password@localhost/customer_support_bot
# API URLs
GLAMA_API_URL=https://api.glama.ai/v1
# Security
SECRET_KEY=your_secret_key_here
# MCP Server Configuration
SERVER_NAME="AI Customer Support Bot"
SERVER_VERSION="1.0.0"
API_PREFIX="/mcp"
MAX_CONTEXT_RESULTS=5
# Rate Limiting
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_PERIOD=60
# Logging
LOG_LEVEL=INFO
- Set up the database:
# Create the database
createdb customer_support_bot
# Run migrations (if using Alembic)
alembic upgrade head
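To confirm the setup, you can list the tables that the migrations created (the exact table names are defined in models.py and are not documented here):
# List the tables in the new database
psql customer_support_bot -c '\dt'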
Running the Server
Start the server:
python app.py
The server will be available at http://localhost:8000
API Endpoints
1. Root Endpoint
GET /
Returns basic server information.
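Example request:
curl http://localhost:8000/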
2. MCP Version
GET /mcp/version
Returns supported MCP protocol versions.
3. Capabilities
GET /mcp/capabilities
Returns server capabilities and supported features.
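Example request:
curl http://localhost:8000/mcp/capabilities \
  -H "X-MCP-Auth: your-auth-token"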
4. Process Request
POST /mcp/process
Process a single query with context.
Example request:
curl -X POST http://localhost:8000/mcp/process \
-H "Content-Type: application/json" \
-H "X-MCP-Auth: your-auth-token" \
-H "X-MCP-Version: 1.0" \
-d '{
"query": "How do I reset my password?",
"priority": "high",
"mcp_version": "1.0"
}'
5. Batch Processing
POST /mcp/batch
Process multiple queries in a single request.
Example request:
curl -X POST http://localhost:8000/mcp/batch \
-H "Content-Type: application/json" \
-H "X-MCP-Auth: your-auth-token" \
-H "X-MCP-Version: 1.0" \
-d '{
"queries": [
"How do I reset my password?",
"What are your business hours?",
"How do I contact support?"
],
"mcp_version": "1.0"
}'
6. Health Check
GET /mcp/health
Check server health and service status.
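Example request:
curl http://localhost:8000/mcp/health \
  -H "X-MCP-Auth: your-auth-token"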
Rate Limiting
The server implements rate limiting with the following defaults:
- 100 requests per 60 seconds
- Rate limit information is included in the health check response
- Requests that exceed the limit receive an error response that includes the reset time
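As an illustration, a request rejected for exceeding the limit might receive a response in the error format described below; the reset_at field name is an assumption, not confirmed by this README:
{
  "code": "RATE_LIMIT_EXCEEDED",
  "message": "Rate limit exceeded",
  "details": {
    "timestamp": "2024-02-14T12:00:00Z",
    "reset_at": "2024-02-14T12:01:00Z"
  }
}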
Error Handling
The server returns structured error responses in the following format:
{
"code": "ERROR_CODE",
"message": "Error description",
"details": {
"timestamp": "2024-02-14T12:00:00Z",
"additional_info": "value"
}
}
Common error codes:
- RATE_LIMIT_EXCEEDED: Rate limit exceeded
- UNSUPPORTED_MCP_VERSION: Unsupported MCP version
- PROCESSING_ERROR: Error processing request
- CONTEXT_FETCH_ERROR: Error fetching context from Glama.ai
- BATCH_PROCESSING_ERROR: Error processing batch request
Development
Project Structure
.
├── app.py # Main application file
├── database.py # Database configuration
├── middleware.py # Middleware (rate limiting, validation)
├── models.py # Database models
├── mcp_config.py # MCP-specific configuration
├── requirements.txt # Python dependencies
└── .env # Environment variables
Adding New Features
- Update mcp_config.py with new configuration options
- Add new models in models.py if needed
- Create new endpoints in app.py
- Update the capabilities endpoint to reflect new features
Security
- All MCP endpoints require authentication via the X-MCP-Auth header
- Rate limiting is implemented to prevent abuse
- Database credentials should be kept secure
- API keys should never be committed to version control
Monitoring
The server's health check endpoint exposes the following for monitoring:
- Service status
- Rate limit usage
- Connected services
- Processing times
Contributing
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
For support, please create an issue in the repository or contact the development team.