
Atla MCP Server
An MCP server implementation providing a standardized interface for LLMs to interact with the Atla API.
What is Atla MCP Server?
Atla MCP Server is an implementation of an MCP server that provides a standardized interface for Large Language Models (LLMs) to interact with the Atla API and utilize state-of-the-art evaluation models.
How to use Atla MCP Server?
To use Atla MCP Server, clone the repository, set up a Python environment, install dependencies, and run the server locally. You can connect to it using any MCP client like Claude Desktop or Cursor.
Key features of Atla MCP Server?
- Evaluate individual responses with Selene 1.
- Run batch evaluations with Selene 1.
- List available evaluation metrics and create new ones.
Use cases of Atla MCP Server?
- Integrating LLMs with the Atla API for evaluation tasks.
- Running batch evaluations for performance analysis.
- Developing and testing new evaluation metrics.
FAQ about Atla MCP Server
- What is the purpose of the Atla MCP Server?
It provides a standardized interface for LLMs to interact with the Atla API and evaluation models.
- Is there a remote usage option?
Remote usage is coming soon!
- How can I contribute to the project?
Contributions are welcome! Please refer to the CONTRIBUTING.md file for details.
Atla MCP Server
An MCP server implementation providing a standardized interface for LLMs to interact with the Atla API for state-of-the-art LLM-as-a-judge (LLMJ) evaluation.
Learn more about Atla here. Learn more about the Model Context Protocol here.
Features
- Evaluate individual responses with Selene 1 (see the illustrative tool call after this list)
- Run batch evaluations with Selene 1
- List available evaluation metrics, create new ones, or fetch them by name
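As a rough sketch of the first feature from a client's point of view, a single-response evaluation call might look like the snippet below. This assumes an already-initialized MCP ClientSession named session (see the client examples later in this README); the tool name evaluate_llm_response and its argument names are illustrative assumptions rather than a documented schema, so list the server's tools to see the exact names and parameters it advertises.

# Illustrative only: the tool and argument names below are assumptions, not a documented schema.
result = await session.call_tool(
    "evaluate_llm_response",  # hypothetical tool name
    {
        "evaluation_criteria": "The response must be concise and factually accurate.",
        "model_input": "What is the capital of France?",
        "model_output": "The capital of France is Paris.",
    },
)
print(result.content)  # Selene's critique and score, as returned by the server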
Usage
To use the MCP server, you will need an Atla API key. You can find your existing API key here or create a new one here.
Installation
We recommend using uv to manage the Python environment. See here for installation instructions.
- Clone the repository:
git clone https://github.com/atla-ai/atla-mcp-server.git
cd atla-mcp-server
- Create and activate a virtual environment:
uv venv
source .venv/bin/activate
- Install dependencies depending on your needs:
# Basic installation
uv pip install -e .
# Installation with development tools (recommended)
uv pip install -e ".[dev]"
pre-commit install
- Add your ATLA_API_KEY to your environment:
export ATLA_API_KEY=<your-atla-api-key>
Connecting to the Server
Once you have installed the server, you can connect to it using any MCP client.
Here, we provide specific instructions for connecting to some common MCP clients.
In what follows:
- If you are having issues with uv, you might need to pass in the full path to the uv executable. You can find it by running which uv in your terminal.
- path/to/atla-mcp-server is the path to the atla-mcp-server directory, i.e. the path to the repository you cloned in step 1.
Claude Desktop
For more details on configuring MCP servers in Claude Desktop, refer to the official MCP quickstart guide.
- Add the following to your claude_desktop_config.json file:
{
  "mcpServers": {
    "atla-mcp-server": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/atla-mcp-server",
        "run",
        "atla-mcp-server"
      ],
      "env": {
        "ATLA_API_KEY": "<your-atla-api-key>"
      }
    }
  }
}
- Restart Claude Desktop to apply the changes.
You should now see options from atla-mcp-server in the list of available MCP tools.
Cursor
For more details on configuring MCP servers in Cursor, refer to the official documentation.
- Add the following to your .cursor/mcp.json file:
{
  "mcpServers": {
    "atla-mcp-server": {
      "command": "/path/to/uv",
      "args": [
        "--directory",
        "/path/to/atla-mcp-server",
        "run",
        "atla-mcp-server"
      ],
      "env": {
        "ATLA_API_KEY": "<your-atla-api-key>"
      }
    }
  }
}
You should now see atla-mcp-server in the list of available MCP servers.
OpenAI Agents SDK
For more details on using the OpenAI Agents SDK with MCP servers, refer to the official documentation.
- Install the OpenAI Agents SDK:
pip install openai-agents
- Use the OpenAI Agents SDK to connect to the server:
import os

from agents import Agent
from agents.mcp import MCPServerStdio

async with MCPServerStdio(
    params={
        "command": "uv",
        "args": ["run", "--directory", "/path/to/atla-mcp-server", "atla-mcp-server"],
        "env": {"ATLA_API_KEY": os.environ.get("ATLA_API_KEY")}
    }
) as atla_mcp_server:
    ...
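For completeness, here is a minimal runnable sketch of how the snippet above might be wired into an agent and executed. It is an assumption-laden example, not official Atla documentation: the agent name, instructions, and prompt are placeholders, it uses the standard Agent/Runner pattern from the OpenAI Agents SDK, and running it typically also requires OPENAI_API_KEY in your environment for the agent's model.

import asyncio
import os

from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def main() -> None:
    async with MCPServerStdio(
        params={
            "command": "uv",
            "args": ["run", "--directory", "/path/to/atla-mcp-server", "atla-mcp-server"],
            "env": {"ATLA_API_KEY": os.environ.get("ATLA_API_KEY")},
        }
    ) as atla_mcp_server:
        # Placeholder agent: the name, instructions, and prompt are illustrative only.
        agent = Agent(
            name="assistant",
            instructions="Use the Atla evaluation tools to critique answers before returning them.",
            mcp_servers=[atla_mcp_server],
        )
        result = await Runner.run(agent, "Evaluate this answer: 'The capital of France is Paris.'")
        print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())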
Connecting to the Remote Server
Cursor
The configuration below uses mcp-remote to connect Cursor to the remote Atla MCP endpoint at https://mcp.atla-ai.com/sse over SSE, passing your Atla API key as a bearer token:
{
  "mcpServers": {
    "atla-mcp-server": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.atla-ai.com/sse",
        "--header",
        "Authorization: Bearer ${ATLA_API_KEY}"
      ],
      "env": {
        "ATLA_API_KEY": "<your-atla-api-key>"
      }
    }
  }
}
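If you want to reach the same remote endpoint programmatically rather than through Cursor, a minimal sketch using the MCP Python SDK's SSE client might look like the following. The endpoint URL is taken from the config above; the assumption that the endpoint accepts the same Authorization: Bearer header, and everything else in the snippet, is illustrative.

import asyncio
import os

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Assumption: the remote endpoint accepts the same bearer-token header as the mcp-remote config above.
    headers = {"Authorization": f"Bearer {os.environ['ATLA_API_KEY']}"}
    async with sse_client("https://mcp.atla-ai.com/sse", headers=headers) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

if __name__ == "__main__":
    asyncio.run(main())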
Running the Server
If you are using an MCP client, you will generally not need to start the server yourself; the client launches it from its configuration.
Running the server locally can be useful for development and debugging. After installation, you can run the server in several ways:
- Using uv run (recommended):
cd path/to/atla-mcp-server
uv run atla-mcp-server
- Using Python directly:
cd path/to/atla-mcp-server
python -m atla_mcp_server
- With the MCP Inspector:
cd path/to/atla-mcp-server
uv run mcp dev src/atla_mcp_server/debug.py
All methods will start the MCP server with stdio transport, ready to accept connections from MCP clients. The MCP Inspector will provide a web interface for testing and debugging the MCP server.
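Because the server speaks stdio, an MCP client typically spawns the server process itself rather than attaching to one you started by hand, exactly as the Claude Desktop and Cursor configs above do. As a rough sketch, assuming the MCP Python SDK is installed (pip install mcp), connecting and listing the server's tools could look like this:

import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # The client spawns the server itself, mirroring the Claude Desktop / Cursor configs above.
    server = StdioServerParameters(
        command="uv",
        args=["run", "--directory", "/path/to/atla-mcp-server", "atla-mcp-server"],
        # Pass PATH along so the spawned process can resolve uv; ATLA_API_KEY comes from your shell.
        env={
            "ATLA_API_KEY": os.environ.get("ATLA_API_KEY", ""),
            "PATH": os.environ.get("PATH", ""),
        },
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

if __name__ == "__main__":
    asyncio.run(main())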
Contributing
Contributions are welcome! Please see the CONTRIBUTING.md file for details.
License
This project is licensed under the MIT License. See the LICENSE file for details.