Llama MCP Streamlit


By Nikunj2003 (GitHub)

AI assistant built with Streamlit, NVIDIA NIM (LLaMa 3.3:70B) / Ollama, and the Model Context Protocol (MCP).

Overview

What is LLaMa-MCP-Streamlit?

LLaMa-MCP-Streamlit is an interactive AI assistant built using Streamlit, NVIDIA NIM (LLaMa 3.3:70B), and the Model Context Protocol (MCP). It lets users interact with a large language model (LLM) that can execute external tools in real time, retrieve data, and perform various actions seamlessly.

How to use LLaMa-MCP-Streamlit?

To use the assistant, you can run the Streamlit app after configuring the necessary API keys in the .env file. You can either use Poetry or Docker to set up and run the application.

Key features of LLaMa-MCP-Streamlit

  • Custom model selection from NVIDIA NIM or Ollama.
  • API configuration for different backends.
  • Tool integration via MCP for enhanced usability.
  • User-friendly chat-based interface.

Use cases of LLaMa-MCP-Streamlit

  1. Executing real-time data processing tasks.
  2. Interacting with various LLMs for different applications.
  3. Enhancing productivity through seamless tool integration.

FAQ about LLaMa-MCP-Streamlit

  • Can I use my own models?
    Yes! You can select custom models from NVIDIA NIM or Ollama.

  • Is Docker required to run the project?
    No, Docker is optional. You can run the project using Poetry as well.

  • How do I configure the MCP server?
    You can modify the utils/mcp_server.py file to change the MCP server configuration.


Llama MCP Streamlit

This project is an interactive AI assistant built with Streamlit, NVIDIA NIM's API (LLaMa 3.3:70b) / Ollama, and the Model Context Protocol (MCP). It provides a conversational interface where an LLM can execute external tools via MCP in real time, retrieve data, and perform actions seamlessly.

The assistant supports:

  • Custom model selection (NVIDIA NIM / Ollama)
  • API configuration for different backends
  • Tool integration via MCP to enhance usability and real-time data processing
  • A user-friendly chat-based experience with Streamlit

📸 Screenshots

Homepage Screenshot

Tools Screenshot

Chat Screenshot

Chat (What can you do?) Screenshot

📁 Project Structure

llama_mcp_streamlit/
├── ui/
│   ├── sidebar.py       # UI components for Streamlit sidebar
│   └── chat_ui.py       # Chat interface components
├── utils/
│   ├── agent.py         # Handles interaction with LLM and tools
│   ├── mcp_client.py    # MCP client for connecting to external tools
│   └── mcp_server.py    # Configuration for MCP server selection
├── config.py            # Configuration settings
└── main.py              # Entry point for the Streamlit app
.env                      # Environment variables
Dockerfile                # Docker configuration
pyproject.toml            # Poetry dependency management
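
For orientation, here is a minimal, hypothetical sketch of how main.py might tie these modules together. The render_sidebar and run_agent helpers are stand-ins for the actual code in ui/sidebar.py and utils/agent.py, which this README does not show:

# Hypothetical sketch of main.py; render_sidebar() and run_agent()
# stand in for the real ui/sidebar.py and utils/agent.py code.
import asyncio

import streamlit as st

from llama_mcp_streamlit.ui.sidebar import render_sidebar  # assumed helper
from llama_mcp_streamlit.utils.agent import run_agent      # assumed helper

st.set_page_config(page_title="Llama MCP Streamlit")
model = render_sidebar()  # model/backend selection in the sidebar

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Ask me anything..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    # run_agent is assumed to call the LLM and invoke MCP tools as needed
    reply = asyncio.run(run_agent(model, st.session_state.messages))
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.markdown(reply)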

🔧 Environment Variables

Before running the project, configure the .env file with your API keys:

# Use ONE of the following configurations; a second assignment of the
# same variable would override the first.

# NVIDIA NIM (Integrate API)
API_ENDPOINT=https://integrate.api.nvidia.com/v1
API_KEY=your_api_key_here

# Ollama (local)
# API_ENDPOINT=http://localhost:11434/v1/
# API_KEY=ollama
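
A config.py along these lines would pick the values up at startup — a minimal sketch, assuming the python-dotenv package is used to load the .env file:

# Sketch of config.py; assumes python-dotenv is installed.
import os

from dotenv import load_dotenv

load_dotenv()  # read .env from the project root

API_ENDPOINT = os.getenv("API_ENDPOINT", "https://integrate.api.nvidia.com/v1")
API_KEY = os.getenv("API_KEY")

if not API_KEY:
    raise RuntimeError("API_KEY is not set; add it to your .env file")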

🚀 Running the Project

Using Poetry

  1. Install dependencies:
    poetry install
    
  2. Run the Streamlit app:
    poetry run streamlit run llama_mcp_streamlit/main.py
    

Using Docker

  1. Build the Docker image:
    docker build -t llama-mcp-assistant .
    
  2. Run the container:
    docker compose up
    

🔄 Changing MCP Server Configuration

To modify which MCP server to use, update the utils/mcp_server.py file. You can launch the MCP server with either NPX or Docker:

NPX Server

from mcp import StdioServerParameters

# Launch the filesystem MCP server with npx; list every directory
# the server may access as an additional argument.
server_params = StdioServerParameters(
    command="npx",
    args=[
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop",
        "/path/to/other/allowed/dir",
    ],
    env=None,
)

Docker Server

# Run the filesystem MCP server in a container; bind-mount each
# allowed path into /projects (append ",ro" for read-only access).
server_params = StdioServerParameters(
    command="docker",
    args=[
        "run",
        "-i",
        "--rm",
        "--mount", "type=bind,src=/Users/username/Desktop,dst=/projects/Desktop",
        "--mount", "type=bind,src=/path/to/other/allowed/dir,dst=/projects/other/allowed/dir,ro",
        "--mount", "type=bind,src=/path/to/file.txt,dst=/projects/path/to/file.txt",
        "mcp/filesystem",
        "/projects",
    ],
    env=None,
)

Modify the server_params configuration as needed to fit your setup.
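
To see how server_params is consumed, here is a minimal sketch using the MCP Python SDK's stdio client. The project's utils/mcp_client.py presumably does something similar, but this listing is an illustration, not the project's actual code:

# Sketch of connecting to an MCP server over stdio with the
# mcp Python SDK; not the project's actual mcp_client.py code.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    env=None,
)

async def list_server_tools() -> None:
    # Spawn the server as a subprocess and talk to it over stdin/stdout
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(list_server_tools())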


📌 Features

  • Real-time tool execution via MCP
  • LLM-powered chat interface
  • Streamlit UI with interactive chat elements
  • Support for multiple LLM backends (NVIDIA NIM & Ollama)
  • Docker support for easy deployment
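
Because both NVIDIA NIM and Ollama expose OpenAI-compatible endpoints, switching backends is mostly a matter of changing API_ENDPOINT and the model name. Below is a minimal sketch using the openai client; the model id meta/llama-3.3-70b-instruct is an assumption based on NVIDIA's catalog naming:

# Sketch of a chat completion against either backend via the
# OpenAI-compatible API. The model id "meta/llama-3.3-70b-instruct"
# is assumed; for Ollama use a local model tag (e.g. "llama3.3").
import os

from openai import OpenAI

client = OpenAI(
    base_url=os.getenv("API_ENDPOINT", "https://integrate.api.nvidia.com/v1"),
    api_key=os.getenv("API_KEY", "your_api_key_here"),
)

response = client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct",
    messages=[{"role": "user", "content": "What tools can you use?"}],
)
print(response.choices[0].message.content)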

🛠 Dependencies

  • Python 3.11+
  • Streamlit
  • OpenAI API (for NVIDIA NIM integration)
  • MCP (Model Context Protocol)
  • Poetry (for dependency management)
  • Docker (optional, for containerized deployment)

📜 License

This project is licensed under the MIT License.


🤝 Contributing

Feel free to submit pull requests or report issues!


📬 Contact

For any questions, reach out via GitHub Issues.

