
# MCP Client-Server Sandbox for LLM Augmentation

A complete sandbox for augmenting LLM inference (local or cloud) with an MCP client-server setup: a low-friction testbed for MCP server validation and agentic evaluation.
## What is MCP Client-Server Sandbox?
MCP Client-Server Sandbox is a minimal environment designed for augmenting LLM inference with the Model Context Protocol (MCP). It serves as a testbed for validating MCP servers and evaluating LLM behavior.
## How to use MCP Client-Server Sandbox?
To use the sandbox, developers plug new MCP servers into a working LLM client and test them there. Initially it supports local LLMs such as LLaMA 7B, with plans to extend to cloud inference for more powerful models.
## Key features of MCP Client-Server Sandbox
- Minimal friction for integrating new MCP servers.
- Local and cloud inference support for LLMs.
- Chatbox UI for interactive testing.
- Reference architecture for MCP specification development.
## Use cases of MCP Client-Server Sandbox
- Validating new MCP servers against LLM clients.
- Testing LLM behavior in a controlled environment.
- Developing and evolving alongside the MCP specification.
## FAQ about MCP Client-Server Sandbox

- **Is the sandbox suitable for all LLMs?**
  Initially, it supports local models such as LLaMA 7B, with future support planned for cloud models.
- **What is the purpose of the sandbox?**
  It provides a low-friction environment for testing and validating MCP servers and LLM interactions.
- **Is the project open-source?**
  Yes, it is available on GitHub under the MIT license.
## Overview

*Under development.*
mcp-scaffold is a minimal sandbox for validating Model Context Protocol (MCP) servers against a working LLM client and a live chat interface. The aim is minimal friction when plugging in new MCP servers and evaluating LLM behavior.
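Validating a server here means checking it speaks MCP correctly, and MCP messages are JSON-RPC 2.0. The sketch below shows the shape of a `tools/call` exchange a client in the sandbox would drive; the `get_weather` tool and its arguments are illustrative names, not part of any shipped demo server.

```python
import json

# A client asks a server to invoke a tool via a JSON-RPC 2.0 request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",              # hypothetical tool name
        "arguments": {"city": "Berlin"},
    },
}

# A conforming server answers with a result carrying content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,                                # must echo the request id
    "result": {"content": [{"type": "text", "text": "12 C, overcast"}]},
}

wire = json.dumps(request)   # what actually crosses stdio or HTTP
decoded = json.loads(wire)
print(decoded["method"])
```

A sandbox validation pass is then mostly asserting properties like these: the response id matches the request id, and result content is well-formed.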
At first, a local LLM such as LLaMA 7B is used, so testing can run entirely on the local network. Cloud inference will be supported next, letting developers validate against more powerful models without full local-network sandboxing. LLaMA 7B is large (~13 GB in the common Hugging Face format), but smaller models lack the conversational ability essential for validating MCP augmentation. Even so, LLaMA 7B remains a popular local inference model, with over 1.3M downloads in the last month (March 2025).
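The ~13 GB figure follows directly from the parameter count: 7B parameters stored in half precision (fp16/bf16) take 2 bytes each.

```python
# Back-of-envelope check of the ~13 GB model size.
params = 7_000_000_000
bytes_per_param = 2                      # fp16 / bf16
size_gib = params * bytes_per_param / 2**30
print(f"{size_gib:.1f} GiB")             # about 13.0 GiB
```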
With the chatbox UI and LLM inference options in place, an MCP client and a couple of demo MCP servers will be added. This project serves as both a reference architecture and a practical development environment, evolving alongside the MCP specification.
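Once those pieces exist, the core loop is small: the LLM either answers directly or requests a tool, and the MCP client dispatches the call and folds the result back into the chat. A minimal stdlib sketch of that loop, with all names (`fake_llm`, `TOOLS`, `run_turn`) invented for illustration rather than taken from the project:

```python
# Stand-in for a demo MCP server exposing one tool.
TOOLS = {"echo": lambda args: args["text"]}

def fake_llm(prompt):
    # A real model decides this itself; here the choice is hard-coded so
    # the dispatch path can be exercised without any inference backend.
    if prompt.startswith("!echo "):
        return {"tool": "echo", "arguments": {"text": prompt[6:]}}
    return {"answer": f"You said: {prompt}"}

def run_turn(prompt):
    reply = fake_llm(prompt)
    if "tool" in reply:
        # The MCP client would forward this to the server over JSON-RPC.
        result = TOOLS[reply["tool"]](reply["arguments"])
        return f"[tool:{reply['tool']}] {result}"
    return reply["answer"]

print(run_turn("!echo hello"))   # [tool:echo] hello
print(run_turn("hi"))            # You said: hi
```

The point of the sandbox is that swapping the stubs for a real model and real MCP servers should not change the shape of this loop.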
## Architecture

