## What is mcp-lance-db?

mcp-lance-db is a Model Context Protocol (MCP) server for LanceDB, an embedded vector database. It lets LLM applications store and retrieve text with vector embeddings through a standardized protocol.

## How to use mcp-lance-db?

Configure your environment to point at the LanceDB database and run the server with the provided command. You can then add memories and search for semantically similar memories through the implemented tools.

## Key features of mcp-lance-db

- Adds and retrieves memories with vector embeddings.
- Implements two main tools: add-memory and search-memories.
- Configurable database path and collection name.

## Use cases of mcp-lance-db

- Storing and retrieving contextual information for AI applications.
- Enhancing chat interfaces with memory capabilities.
- Building custom AI workflows that require semantic memory.

## FAQ

**What is the Model Context Protocol (MCP)?**
MCP is an open protocol that enables seamless integration between LLM applications and external data sources and tools.

**How do I configure the server?**
Set the database path and collection name in the server configuration.

**Can I use mcp-lance-db for any type of data?**
Yes, as long as the data can be represented as text so that vector embeddings can be generated for it.
# mcp-lance-db: A LanceDB MCP server
The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need.
This repository is an example of how to build an MCP server for LanceDB, an embedded vector database.
## Overview
A basic Model Context Protocol server for storing and retrieving memories in the LanceDB vector database. It acts as a semantic memory layer that allows storing text with vector embeddings for later retrieval.
## Components

### Tools
The server implements two tools:
- add-memory: Adds a new memory to the vector database
- Takes "content" as a required string argument
- Stores the text with vector embeddings for later retrieval
- search-memories: Retrieves semantically similar memories
- Takes "query" as a required string argument
- Optional "limit" parameter to control number of results (default: 5)
- Returns memories ranked by semantic similarity to the query
- Updates server state and notifies clients of resource changes
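Over MCP's stdio transport, tool invocations travel as JSON-RPC 2.0 `tools/call` requests. The sketch below shows what the payloads for the two tools might look like; the example content and query strings are illustrative, and in practice an MCP client library builds and frames these messages for you:

```python
import json

# Illustrative MCP "tools/call" requests as they would appear on the
# JSON-RPC 2.0 stdio transport. The tool names and argument keys match
# the tools listed above; the string values are made up for the example.
add_memory_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add-memory",
        "arguments": {"content": "The user prefers dark mode."},
    },
}

search_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search-memories",
        "arguments": {"query": "UI preferences", "limit": 5},
    },
}

# Serialize as a client would before writing to the server's stdin.
print(json.dumps(add_memory_request))
print(json.dumps(search_request))
```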
## Configuration
The server uses the following configuration:
- Database path: "./lancedb"
- Collection name: "memories"
- Embedding provider: "sentence-transformers"
- Model: "BAAI/bge-small-en-v1.5"
- Device: "cpu"
- Similarity threshold: 0.7 (upper bound for distance range)
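The similarity threshold acts as an upper bound on vector distance: search hits whose distance to the query exceeds 0.7 are discarded. The following pure-Python sketch illustrates that filtering behavior; the function names are hypothetical and not the server's actual code:

```python
import math

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity; smaller means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def filter_hits(query_vec, candidates, threshold=0.7):
    """Keep candidates whose distance to the query is at or below the
    threshold, ranked nearest-first -- mirroring how the 0.7 setting
    is used as an upper bound on the distance range."""
    scored = [(cosine_distance(query_vec, vec), text) for text, vec in candidates]
    return sorted((d, t) for d, t in scored if d <= threshold)

# Toy 2-dimensional "embeddings" for demonstration.
candidates = [
    ("close match", [1.0, 0.1]),
    ("unrelated", [-1.0, 0.2]),
]
print(filter_hits([1.0, 0.0], candidates))
```

With these toy vectors, only "close match" falls within the 0.7 distance bound; "unrelated" points the opposite way and is filtered out.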
## Quickstart

### Claude Desktop

Add the server to your Claude Desktop configuration file:

- On macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
```json
{
  "mcpServers": {
    "lancedb": {
      "command": "uvx",
      "args": ["mcp-lance-db"]
    }
  }
}
```
## Development

### Building and Publishing
To prepare the package for distribution:

1. Sync dependencies and update the lockfile:

   ```shell
   uv sync
   ```

2. Build package distributions:

   ```shell
   uv build
   ```

   This creates source and wheel distributions in the `dist/` directory.

3. Publish to PyPI:

   ```shell
   uv publish
   ```
Note: You'll need to set PyPI credentials via environment variables or command flags:

- Token: `--token` or `UV_PUBLISH_TOKEN`
- Or username/password: `--username`/`UV_PUBLISH_USERNAME` and `--password`/`UV_PUBLISH_PASSWORD`
### Debugging
Since MCP servers run over stdio, debugging can be challenging. For the best debugging experience, we strongly recommend using the MCP Inspector.
You can launch the MCP Inspector via npm with this command:

```shell
npx @modelcontextprotocol/inspector uv --directory $(PWD) run mcp-lance-db
```
Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.