
# 🤖 mcp-ollama-beeai

A minimal agentic app to interact with OLLAMA models, leveraging multiple MCP server tools using the BeeAI framework.
## What is mcp-ollama-beeai?

mcp-ollama-beeai is a minimal client application designed to interact with local OLLAMA models by leveraging multiple MCP server tools through the BeeAI framework.
## How to use mcp-ollama-beeai?

To use mcp-ollama-beeai, set up a local OLLAMA server and configure your MCP agents in the `mcp-servers.json` file. Then clone the repository, install the dependencies, and start the application to access it via your browser.
## Key features of mcp-ollama-beeai
- Interaction with local OLLAMA models.
- Configuration of multiple MCP agents for enhanced functionality.
- User-friendly interface for selecting agents and tools.
- Integration with the BeeAI framework for easy setup of ReAct agents.
## Use cases of mcp-ollama-beeai
- Building AI-driven applications that require model interactions.
- Utilizing various MCP agents for different tasks like database operations and data fetching.
- Experimenting with local AI models in a controlled environment.
## FAQ from mcp-ollama-beeai

- **What are the prerequisites for using mcp-ollama-beeai?**
  You need to have a local OLLAMA server running and sufficient memory (at least 16GB RAM) for the models to perform effectively.
- **Can I use remote servers instead of local?**
  Yes, you can configure the application to use remote servers for model interactions.
- **How do I configure MCP agents?**
  You can add your MCP agents in the `mcp-servers.json` file located in the root folder of the application.
Below is a sample view of this client app's chat interface, displaying a Postgres database operation along with the thinking steps the AI took to use the right MCP agent and transform the request & response with the LLM:
![]()
## Usage

### 📋 Pre-requisite

#### 1. Local ollama server
Install and serve ollama on your local machine with the following commands.

- Make sure you have enough memory available on your machine, at least 16GB RAM, for the models to perform well.
- Skip this installation locally if you're going to use a remote server for the model.
```sh
$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama serve
$ ollama pull llama3.1
```
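Before wiring the app up, you can optionally sanity-check that the Ollama server is reachable via its REST API (it listens on port 11434 by default); this step is not part of this repo's setup, just a quick verification:

```sh
# List the models available on the local Ollama server.
$ curl http://localhost:11434/api/tags

# Optional one-off generation to confirm llama3.1 responds.
$ curl http://localhost:11434/api/generate -d '{"model": "llama3.1", "prompt": "Hello", "stream": false}'
```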
#### 2. MCP servers list configuration

Add your MCP agents in the `mcp-servers.json` file in the root folder, for the app to pick up and work along with the LLM (see the sample below).

- Default servers included are `postgres` and `fetch`. Make sure to update your postgres connection URL.
- List of other MCP agent tools available for configuration: https://modelcontextprotocol.io/examples
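The exact schema this app expects is defined by the `mcp-servers.json` shipped in the repo; the sample below is a sketch assuming the common MCP stdio configuration layout (servers keyed by name, each with a `command` and `args`), with a placeholder Postgres connection URL you should replace with your own:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://user:password@localhost:5432/mydb"
      ]
    },
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```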
#### 3. .env

If you want to use a different LLM model or LLM server, override the below properties before `npm start`:

```
OLLAMA_CHAT_MODEL=llama3.1
OLLAMA_BASE_URL=http://localhost:11434/api
```
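For example, to point the app at a remote Ollama host and a different chat model, you could set the overrides in your shell before starting. The host and model below are placeholders, not values from this repo, and this assumes the app also reads these variables from the process environment in addition to `.env`:

```sh
# Hypothetical remote Ollama host and alternative model.
$ export OLLAMA_CHAT_MODEL=mistral
$ export OLLAMA_BASE_URL=http://my-ollama-host:11434/api
$ npm start
```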
### 🎮 Boot up your app

```sh
$ git clone https://github.com/tamdilip/mcp-ollama-beeai.git
$ cd mcp-ollama-beeai
$ npm i
$ npm start
```

Once the app is up and running, open http://localhost:3000 in your browser.
Additional Context:

- By default, on landing, no MCP agent is used for the questions.
- The MCP agent to use for a question can be selected from the `Server & tools` dropdown in the UI.
- The `BeeAI` framework is used for easy setup of a `ReAct` (Reason And Act) agent with MCP tools (see the sketch below).
- The `Markdown` JS library is used to render the responses in a proper, readable visual format.
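For reference, below is a minimal sketch of how a BeeAI ReAct agent can be wired to MCP tools and a local Ollama model. This is not this repo's actual code: the import paths, class names (`ReActAgent`, `OllamaChatModel`, `MCPTool`), and the `fetch` server command are assumptions based on the beeai-framework and MCP SDK documentation and may differ between versions.

```js
// Minimal sketch (assumed APIs), not this repo's implementation.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { MCPTool } from "beeai-framework/tools/mcp";
import { ReActAgent } from "beeai-framework/agents/react/agent";
import { OllamaChatModel } from "beeai-framework/adapters/ollama/backend/chat";
import { UnconstrainedMemory } from "beeai-framework/memory/unconstrainedMemory";

// Connect to one MCP server from mcp-servers.json (here: fetch, over stdio).
const client = new Client({ name: "mcp-ollama-beeai", version: "1.0.0" });
await client.connect(
  new StdioClientTransport({ command: "uvx", args: ["mcp-server-fetch"] })
);

// Expose the MCP server's tools to the agent.
const tools = await MCPTool.fromClient(client);

// ReAct (Reason And Act) agent backed by the local Ollama chat model.
const agent = new ReActAgent({
  llm: new OllamaChatModel("llama3.1"),
  memory: new UnconstrainedMemory(),
  tools,
});

const response = await agent.run({ prompt: "Fetch https://example.com and summarize it" });
console.log(response.result.text);
```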
Happy coding :) !!