Kontxt MCP Server

By ReyNeill

Codebase indexing MCP server

Overview

What is Kontxt MCP Server?

Kontxt MCP Server is a Model Context Protocol (MCP) server designed to facilitate codebase indexing, enabling AI clients to analyze and generate context from local code repositories.

How to use Kontxt MCP Server?

To use Kontxt, clone or download the server code, set up a Python virtual environment, install dependencies, configure your Google Gemini API key, and run the server with the specified repository path.

Key features of Kontxt MCP Server?

  • Connects to user-specified local code repositories.
  • Provides the get_codebase_context tool for AI clients.
  • Utilizes Gemini 2.0 Flash's 1M input window for context generation.
  • Supports both SSE and stdio transport protocols.
  • Allows for user-attached files/docs for targeted analysis.
  • Tracks token usage and provides detailed API consumption analysis.

Use cases of Kontxt MCP Server?

  1. Analyzing code structure and functionality.
  2. Assisting AI clients in understanding complex codebases.
  3. Generating context for specific queries related to the code.

FAQ about Kontxt MCP Server?

  • Can Kontxt work with any code repository?

Yes! Kontxt can connect to any local code repository specified by the user.

  • Is there a specific programming language required?

No, Kontxt is designed to work with any language as long as the code is in a local repository.

  • How do I track token usage?

The server logs token usage during operations, allowing you to monitor API usage effectively.

Content

Kontxt MCP Server

A Model Context Protocol (MCP) server that tries to solve codebase indexing (until agents can do it themselves).

Features

  • Connects to a user-specified local code repository.
  • Provides the get_codebase_context tool for AI clients (like Cursor or Claude Desktop).
  • Uses Gemini 2.0 Flash's 1M-token input window internally to analyze the codebase and generate context based on the user's client query.
  • Flash itself can use internal tools (list_repository_structure, read_files, grep_codebase) to understand the code.
  • Supports both SSE (recommended) and stdio transport protocols.
  • Supports user-attached files/docs/context from client's queries for more targeted analysis.
  • Tracks token usage and provides detailed analysis of API consumption.
  • Uses as much of the available context window as possible to produce the best index summary.

Setup

  1. Clone/Download: Get the server code.
  2. Create Environment:
    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    
  3. Install Dependencies:
    pip install -r requirements.txt
    
  4. Install tree: Ensure the tree command is available on your system.
    • macOS: brew install tree
    • Debian/Ubuntu: sudo apt update && sudo apt install tree
    • Windows: Requires installing a port or using WSL.
  5. Configure API Key:
    • Copy .env.example to .env.
    • Edit .env and add your Google Gemini API Key:
      GEMINI_API_KEY="YOUR_ACTUAL_API_KEY"
      
    • Alternatively, you can provide the key via the --gemini-api-key command-line argument.
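The precedence described above (a `--gemini-api-key` flag overriding the key loaded from `.env`) can be sketched as follows; `resolve_gemini_api_key` is an illustrative helper, not the server's actual code:

```python
import argparse
import os

def resolve_gemini_api_key(argv):
    """Sketch of the documented precedence: the --gemini-api-key
    CLI flag wins; otherwise fall back to GEMINI_API_KEY from the
    environment (e.g. loaded from .env)."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--gemini-api-key")
    args, _ = parser.parse_known_args(argv)
    # CLI argument overrides the environment variable.
    return args.gemini_api_key or os.environ.get("GEMINI_API_KEY")

os.environ["GEMINI_API_KEY"] = "key-from-env"
print(resolve_gemini_api_key(["--gemini-api-key", "key-from-cli"]))  # key-from-cli
print(resolve_gemini_api_key([]))  # key-from-env
```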

By default, the server runs in SSE mode, which allows you to:

  • Start the server independently
  • Connect from multiple clients
  • Keep it running while restarting clients

Run the server:

python kontxt_server.py --repo-path /path/to/your/codebase

PS: you can run pwd inside the repository to print its absolute path.

The server will start on http://127.0.0.1:8080/sse by default.

For additional options:

python kontxt_server.py --repo-path /path/to/your/codebase --host 0.0.0.0 --port 6900

Shutting Down the Server

The server can be stopped by pressing Ctrl+C in the terminal where it's running. The server will attempt to close gracefully with a 3-second timeout.
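The Ctrl+C behavior described above (graceful close with a 3-second timeout) can be pictured with a small sketch; `run_with_graceful_shutdown` and its worker are illustrative, not the server's actual implementation:

```python
import signal
import sys
import threading

SHUTDOWN_TIMEOUT = 3.0  # seconds, matching the timeout described above

def run_with_graceful_shutdown(worker, stop_event):
    """Run a background worker; on Ctrl+C (SIGINT), ask it to stop
    and wait up to SHUTDOWN_TIMEOUT seconds before exiting anyway."""
    thread = threading.Thread(target=worker, daemon=True)

    def handle_sigint(signum, frame):
        stop_event.set()                        # ask the worker to finish
        thread.join(timeout=SHUTDOWN_TIMEOUT)   # graceful wait, bounded
        sys.exit(0)                             # exit even if the worker hung

    signal.signal(signal.SIGINT, handle_sigint)
    thread.start()
    return thread
```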

Connecting to the Server from client (Cursor example)

Once your server is running, you can connect Cursor to it by editing your ~/.cursor/mcp.json file:

{
  "mcpServers": {
    "kontxt-server": {
      "serverType": "sse",
      "url": "http://localhost:8080/sse"
    }
  }
}

PS: remember to refresh the MCP server in Cursor Settings (or your client's equivalent) so the client connects to the server via SSE.

Alternative: Running with stdio Transport

If you prefer to have the client start and manage the server process:

python kontxt_server.py --repo-path /path/to/your/codebase --transport stdio

For this mode, configure your ~/.cursor/mcp.json file like this:

{
  "mcpServers": {
    "kontxt-server": {
      "serverType": "stdio",
      "command": "python",
      "args": ["/absolute/path/to/kontxt_server.py", "--repo-path", "/absolute/path/to/your/codebase", "--transport", "stdio"],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}

Command Line Arguments

  • --repo-path PATH: Required. Absolute path to the local code repository to analyze.
  • --gemini-api-key KEY: Google Gemini API Key (overrides .env if provided).
  • --token-threshold NUM: Target maximum token count for the context (default: 800000).
  • --gemini-model NAME: Specific Gemini model to use (default: 'gemini-2.0-flash').
  • --transport {stdio,sse}: Transport protocol to use (default: sse).
  • --host HOST: Host address for the SSE server (default: 127.0.0.1).
  • --port PORT: Port for the SSE server (default: 8080).
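The options above can be mirrored in an argparse sketch; flag names and defaults are taken from the list, but the parser itself is illustrative, not the server's actual code:

```python
import argparse

def build_parser():
    """Illustrative parser mirroring the documented flags and defaults."""
    p = argparse.ArgumentParser(prog="kontxt_server.py")
    p.add_argument("--repo-path", required=True,
                   help="Absolute path to the repository to analyze")
    p.add_argument("--gemini-api-key", help="Overrides .env if provided")
    p.add_argument("--token-threshold", type=int, default=800000)
    p.add_argument("--gemini-model", default="gemini-2.0-flash")
    p.add_argument("--transport", choices=["stdio", "sse"], default="sse")
    p.add_argument("--host", default="127.0.0.1")
    p.add_argument("--port", type=int, default=8080)
    return p

args = build_parser().parse_args(["--repo-path", "/path/to/codebase"])
print(args.transport, args.port)  # sse 8080
```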

Basic Usage

Example queries:

  • "What's this codebase about?"
  • "How does the authentication system work?"
  • "Explain the data flow in the application"

PS: if the agent isn't using the MCP tool, you can instruct it explicitly: "What is the last word of the third codeblock of the auth file? Use the MCP tool available."

Context Attachment

Files and context you reference in your queries are flagged for targeted analysis:

  • "Explain how this file works: @kontxt_server.py"
  • "Find all files that interact with @user_model.py"
  • "Compare the implementation of @file1.js and @file2.js"

The server will mention these files to Gemini but will NOT automatically read or include their contents. Instead, Gemini will decide which files to read using its tools based on the query context.

This approach allows Gemini to only read files that are actually needed and prevents the context from being bloated with irrelevant file content.
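One way to picture the first step of this flow, extracting @-mentions from a query so only the names (not the contents) are passed along, is a small regex sketch; the exact mention syntax the server accepts is an assumption here:

```python
import re

# Matches @-prefixed file references like @kontxt_server.py or @file1.js.
# Assumed syntax: word characters, dots, slashes, and hyphens.
MENTION_RE = re.compile(r"@([\w./-]+)")

def extract_mentions(query):
    """Return the file names referenced with '@' in a query.
    Sketch only: the server's real parsing rules may differ."""
    return MENTION_RE.findall(query)

print(extract_mentions("Compare the implementation of @file1.js and @file2.js"))
# ['file1.js', 'file2.js']
```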

Token Usage Tracking

The server tracks token usage across different operations:

  • Repository structure listing
  • File reading
  • Grep searches
  • Attached files from user queries
  • Generated responses

This information is logged during operation, helping you monitor API usage and optimize your queries.
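A minimal sketch of this kind of per-operation accounting, assuming the server buckets usage roughly along the categories listed above (the `TokenUsageTracker` class is hypothetical):

```python
from collections import defaultdict

class TokenUsageTracker:
    """Minimal sketch of per-operation token accounting."""
    def __init__(self):
        self.usage = defaultdict(int)

    def record(self, operation, tokens):
        # Accumulate tokens under a named operation category.
        self.usage[operation] += tokens

    def total(self):
        return sum(self.usage.values())

tracker = TokenUsageTracker()
tracker.record("read_files", 1200)
tracker.record("grep_codebase", 300)
tracker.record("read_files", 800)
print(tracker.usage["read_files"], tracker.total())  # 2000 2300
```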

PS: want the tool to improve? PRs are welcome.
