MCP (Model Context Protocol) Server

By VajraM-dev

Overview

What is the MCP Server?

MCP (Model Context Protocol) Server is a server application that facilitates interaction with AI models using a PostgreSQL database and supports multiple AI providers.

How do you use the MCP Server?

To use the MCP Server, clone the repository, set up a virtual environment, install dependencies, configure the environment variables, and run the server and client scripts.

Key features of the MCP Server

  • Secure configuration management
  • PostgreSQL database integration
  • Multi-provider AI model support
  • Flexible communication transport options
  • Extensible tool registration

Use cases of the MCP Server

  1. Integrating AI models for various applications
  2. Managing data with PostgreSQL for AI interactions
  3. Customizing tools for specific AI tasks

FAQ about the MCP Server

  • What AI providers are supported?

The server supports Anthropic and Google AI models.

  • Is there a way to add custom tools?

Yes! You can easily add custom tools via decorators in the code.

  • What are the prerequisites for running the server?

You need Python 3.10+, PostgreSQL, and API access to AI providers.

Content

MCP (Model Context Protocol) Server

Project Structure

├── client.py             # Client-side interaction script
├── server.py             # Main MCP server implementation
├── pg_connect.py         # PostgreSQL database connection
├── lm_config.py          # Language model configuration
├── .env.example          # Example environment configuration
├── .env.dev              # Development environment configuration
├── requirements.txt      # Project dependencies
└── .gitignore            # Git ignore file

Prerequisites

  • Python 3.10+
  • PostgreSQL
  • API access to AI providers (Anthropic, Google)

Installation

1. Clone the Repository

git clone https://github.com/VajraM-dev/Postgres-MCP-Server-With-SSE-Transport.git
cd Postgres-MCP-Server-With-SSE-Transport

2. Create Virtual Environment

python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`

3. Install Dependencies

pip install -r requirements.txt

4. Configure Environment

  1. Copy .env.example to .env.dev
  2. Fill in the required configuration:
cp .env.example .env.dev
nano .env.dev  # or use your preferred text editor

Configuration Parameters

  • POSTGRES_USERNAME: PostgreSQL database username
  • POSTGRES_PASSWORD: PostgreSQL database password
  • POSTGRES_DB_NAME: Database name
  • POSTGRES_HOST: Database host
  • POSTGRES_PORT: Database port
  • MCP_NAME: Server name
  • MCP_HOST: Server host
  • MCP_PORT: Server port
  • TRANSPORT: Communication transport (sse/stdio)
  • ANTHROPIC_API_KEY: Anthropic API key
  • GOOGLE_API_KEY: Google API key
  • USE_PROVIDER: Default AI provider
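
A filled-in .env.dev might look like the following; every value below is a placeholder and should be replaced with your own settings:

```
POSTGRES_USERNAME=postgres
POSTGRES_PASSWORD=changeme
POSTGRES_DB_NAME=mcp_db
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
MCP_NAME=postgres-mcp
MCP_HOST=0.0.0.0
MCP_PORT=8000
TRANSPORT=sse
ANTHROPIC_API_KEY=your-anthropic-key
GOOGLE_API_KEY=your-google-key
USE_PROVIDER=anthropic
```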

Running the Server

Development Mode

python server.py

Client Interaction

python client.py

Key Features

  • 🔒 Secure configuration management
  • 🗃️ PostgreSQL database integration
  • 🤖 Multi-provider AI model support
  • 📡 Flexible communication transport
  • 🛡️ Extensible tool registration

Supported AI Providers

  • Anthropic (Claude models)
  • Google (Gemini models)

Tools and Endpoints

Available Tools

  • list_tables(): Retrieve database tables
  • Custom tools can be easily added via decorators
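
As a rough sketch of what list_tables() might do under the hood, a PostgreSQL implementation typically queries the information_schema catalog; the SQL and the helper below are illustrative, not the actual code from server.py or pg_connect.py:

```python
# Hypothetical sketch: the query list_tables() could run against PostgreSQL,
# plus a helper that flattens cursor rows into plain table names.

LIST_TABLES_SQL = """
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY table_name;
"""

def format_table_list(rows):
    """Flatten cursor rows like [('orders',), ('users',)] into ['orders', 'users']."""
    return [name for (name,) in rows]
```

The helper keeps the database round-trip separate from formatting, so the formatting can be tested without a live connection.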

Endpoints

  • /sse: Server-Sent Events endpoint
  • Customizable routing and tool registration

Extending the Framework

Adding New Tools

@app.tool()
def custom_tool():
    """Custom tool implementation"""
    # Your tool logic here

Configuring AI Providers

Modify lm_config.py to add or configure new AI providers.
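One common pattern for such a config module is validating the USE_PROVIDER value before constructing a client. This is a minimal sketch under that assumption; the function name, the SUPPORTED_PROVIDERS set, and the fallback behavior are illustrative, not taken from lm_config.py:

```python
# Hypothetical provider validation for a config module like lm_config.py.
SUPPORTED_PROVIDERS = {"anthropic", "google"}  # assumed set of provider names

def resolve_provider(use_provider: str) -> str:
    """Normalize and validate a provider name (e.g. from USE_PROVIDER)."""
    provider = (use_provider or "").strip().lower()
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"Unsupported provider: {use_provider!r}")
    return provider
```

Failing fast on an unknown provider surfaces configuration mistakes at startup rather than on the first model call.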
