
MCP-Hub and -Inspector, Multi-Model Workflow and Chat Interface
What is FLUJO?
FLUJO is an open-source platform designed for workflow orchestration, integrating Model-Context-Protocol (MCP) and AI tools, providing a unified interface for managing AI models and complex workflows.
How to use FLUJO?
To use FLUJO, download the setup from the releases page or clone the repository, install dependencies, and start the development server. You can also run it as a desktop application using Electron.
Key features of FLUJO?
- Environment & API Key Management: Secure storage and centralized management of API keys.
- Model Management: Configure and use multiple AI models with pre-defined prompts.
- MCP Server Integration: Easy installation and management of MCP servers.
- Workflow Orchestration: Visual flow builder for creating complex workflows.
- Chat Interface: Interact with workflows through a chat interface.
- Desktop Application: Run FLUJO as a native application with system tray support.
Use cases of FLUJO?
- Managing AI models and workflows for data processing.
- Integrating various AI tools for enhanced functionality.
- Creating automated workflows for machine learning tasks.
FAQ about FLUJO?
- Is FLUJO free to use?
Yes! FLUJO is open-source and free to use.
- What programming languages does FLUJO support?
FLUJO is built with TypeScript and Node.js.
- Can I contribute to FLUJO?
Yes! Contributions are welcome through GitHub.
DISCLAIMER
====> FLUJO is still an early preview! Expect it to break at some points, but improve rapidly! <====
For anything you struggle with (MCP installation, application issues, usability issues, feedback): PLEASE LET ME KNOW! Create a GitHub issue or write on Discord (https://discord.gg/KPyrjTSSat) and I will look into it! A response may take a day, but I will try to get back to each and every one of you.
Here's a video guiding you through the whole thing - from installation to output (15 min)! Sorry for the bad audio, a new video is coming soon!
IMPORTANT SECURITY NOTE
FLUJO currently has EXTENSIVE logging enabled by default! This may expose your encrypted API keys in the terminal output! Be VERY careful when recording videos or streaming while showing the terminal output!
FLUJO
FLUJO is an open-source platform that bridges the gap between workflow orchestration, Model-Context-Protocol (MCP), and AI tool integration. It provides a unified interface for managing AI models, MCP servers, and complex workflows - all locally and open-source.
FLUJO is powered by the PocketFlowFramework and built with CLine and a lot of LOVE.
🌟 Key Features
🔑 Environment & API Key Management
- Secure Storage: Store environment variables and API keys with encryption
- Global Access: Use your stored keys across the entire application
- Centralized Management: Keep all your credentials in one secure place
🤖 Model Management
- Multiple Models: Configure and use different AI models simultaneously
- Pre-defined Prompts: Create custom system instructions for each model
- Provider Flexibility: Connect to various API providers (OpenAI, Anthropic, etc.)
- Local Models: Integrate with Ollama for local model execution
🔌 MCP Server Integration
- Easy Installation: Install MCP servers from GitHub or local filesystem
- Server Management: Comprehensive interface for managing MCP servers
- Tool Inspection: View and manage available tools from MCP servers
- Environment Binding: Connect server environment variables to global storage
- Docker Support: Run Docker-based MCP servers within FLUJO
🔄 Workflow Orchestration
- Visual Flow Builder: Create and design complex workflows
- Model Integration: Connect different models in your workflow
- Tool Management: Allow or restrict specific tools for each model
- Prompt Design: Configure system prompts at multiple levels (Model, Flow, Node)
💬 Chat Interface
- Flow Interaction: Interact with your flows through a chat interface
- Message Management: Edit or disable messages or split conversations to reduce context size
- File Attachments: Attach documents or audio for LLM processing (still rough at the moment; for this you should really use MCP!)
- Transcription: Process audio inputs with automatic transcription (still rough at the moment, see roadmap)
🔄 External Tool Integration
- OpenAI Compatible Endpoint: Integrate with tools like CLine or Roo
- Seamless Connection: Use FLUJO as a backend for other AI applications (see the sketch below)
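As a rough sketch of what this looks like from the client side, you can point any OpenAI-compatible SDK at your local FLUJO instance. The base URL path and the flow-based model name below are assumptions for illustration, not values documented here:

  import OpenAI from "openai";

  // Hypothetical example: talk to a locally running FLUJO instead of api.openai.com.
  // The "/v1" path and the flow name used as the model are assumptions.
  const client = new OpenAI({
    baseURL: "http://localhost:4200/v1",
    apiKey: "unused-locally", // FLUJO stores the real provider keys itself
  });

  const response = await client.chat.completions.create({
    model: "my-flow", // assumed: the name of a flow you created in FLUJO
    messages: [{ role: "user", content: "Hello from an external tool!" }],
  });

  console.log(response.choices[0].message.content);

Tools like Cline or Roo do essentially the same thing under the hood: you give them the FLUJO URL as a custom OpenAI-compatible endpoint instead of writing code.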
🚀 Getting Started
Manual installation:
Prerequisites
- Node.js (v18 or higher)
- npm or yarn
Installation
- Clone the repository:
  git clone https://github.com/mario-andreschak/FLUJO.git
  cd FLUJO
- Install dependencies:
  npm install
  # or
  yarn install
- Start the development server:
  npm run dev
  # or
  yarn dev
- Open your browser and navigate to:
  http://localhost:4200
- FLUJO feels and works best if you run it compiled:
  npm run build
  npm start
- To run as a desktop application:
  npm run electron-dev    # Development mode
  # or
  npm run electron-dist   # Build and package for your platform
📖 Usage
Setting up often used API keys
- Navigate to Settings
- Save your API Keys globally to secure them
Setting Up Models
- Navigate to the Models page
- Click "Add Model" to create a new model configuration
- Configure your model with name, provider, API key, and system prompt
- Save your configuration
Managing MCP Servers
- Go to the MCP page
- Click "Add Server" to install a new MCP server
- Choose from GitHub repository or local filesystem
- Configure server settings and environment variables
- Start and manage your server
Using official Reference servers
- Go to the MCP page
- Click "Add Server" to install a new MCP server
- Go to the "Reference Servers" Tab
- (First time only:) Click "Refresh" and wait; the first refresh can take a while.
- Click a server of your choice, wait for the screen to change, click "Save" / "Update Server" at the bottom.
Using Docker-based MCP Servers
When running FLUJO in Docker, you can use Docker-based MCP servers:
- Go to the MCP page
- Click "Add Server" to install a new MCP server
- Choose "Docker" as the installation method
- Provide the Docker image name and any required environment variables
- Start and manage your server
Creating Workflows
- Visit the Flows page
- Click "Create Flow" to start a new workflow
- Add processing nodes and connect them
- Configure each node with models and tools
- Save your flow
Branching
- Connect one MCP node to multiple subsequent ones
- Define the branching in the prompt, using the handoff-tools on the "Agent Tools" tab
Loops
- Same as branching, but connect back to a previous node
Orchestration
- Same as loops, but combining several branches and loops
Using the Chat Interface
- Go to the Chat page
- Select a flow to interact with
- Start chatting with your configured workflow
🔄 MCP Integration
FLUJO provides comprehensive support for the Model Context Protocol (MCP), allowing you to:
- Install and manage MCP servers
- Inspect server tools
- Connect MCP servers to your workflows
- Reference tools directly in prompts
- Bind environment variables to your global encrypted storage (sketched below)
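FLUJO configures MCP servers through its UI, so you normally don't write this by hand, but it helps to know what a server definition boils down to: a launch command, arguments, and environment variables, roughly following the common MCP JSON convention sketched below (the server name and paths are made up for illustration). In FLUJO, the env values can be filled from your encrypted global storage instead of pasting keys in directly:

  {
    "weather-server": {
      "command": "node",
      "args": ["/path/to/weather-server/build/index.js"],
      "env": {
        "OPENWEATHER_API_KEY": "<bound to a key from FLUJO's global storage>"
      }
    }
  }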
Docker Installation
The easiest way to run FLUJO is using Docker, which provides a consistent environment and supports running Docker-based MCP servers.
Prerequisites
- Docker and Docker Compose installed on your system
Using Docker Compose
- Clone the repository:
  git clone https://github.com/mario-andreschak/FLUJO.git
  cd FLUJO
- Build and start the container:
  docker-compose up -d
- Access FLUJO in your browser:
  http://localhost:4200
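If the page doesn't come up, the standard Docker Compose commands apply for checking on the container (nothing FLUJO-specific):

  docker-compose ps
  docker-compose logs -f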
Using Docker Scripts
For more control over the Docker build and run process, you can use the provided scripts:
- Build the Docker image:
  ./scripts/build-docker.sh
- Run the Docker container:
  ./scripts/run-docker.sh
Options for run-docker.sh:
- --tag=<tag>: Specify the image tag (default: latest)
- --detached: Run in detached mode
- --no-privileged: Run without privileged mode (not recommended)
- --port=<port>: Specify the host port (default: 4200)
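For example, assuming the flags can be combined as usual, a detached container on a different host port would be started with:

  ./scripts/run-docker.sh --detached --port=8080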
For more detailed information about Docker support, including Docker-in-Docker capabilities, persistent storage, and troubleshooting, see DOCKER.md.
📄 License
FLUJO is licensed under the MIT License.
🚀 Roadmap
Here's a roadmap of upcoming features and improvements:
- Real-time Voice Feature: Adding support for Whisper.js or OpenWhisper to enable real-time voice capabilities.
- Visual Debugger: Introducing a visual tool to help debug and troubleshoot more effectively.
- MCP Roots Support: Implementing Checkpoints and Restore features within MCP Roots for better control and recovery options.
- MCP Prompts: Enabling users to build custom prompts that fully leverage the capabilities of the MCP server.
- MCP Proxying STDIO<>SSE: Likely utilizing SuperGateway to proxy standard input/output over Server-Sent Events, so MCP servers managed in FLUJO can be used in any other MCP client.
- Enhanced Integrations: Improving compatibility and functionality with tools like Windsurf, Cursor, and Cline.
- Advanced Orchestration: Adding agent-driven orchestration, batch processing, and incorporating features inspired by Pocketflow.
- Online Template Repository: Creating a platform for sharing models, flows, or complete "packages," making it easy to distribute FLUJO flows to others.
- Edge Device Optimization: Enhancing performance and usability for edge devices.
🤝 Contributing
Contributions are welcome! Feel free to open issues or submit pull requests.
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add some amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
📬 Contact
- GitHub: mario-andreschak
- LinkedIn: https://www.linkedin.com/in/mario-andreschak-674033299/
Notes:
- You can add ~FLUJO=HTML, ~FLUJO=MARKDOWN, ~FLUJO=JSON, or ~FLUJO=TEXT in your message to format the response; results vary between the different tools where you integrate FLUJO.
- You can add ~FLUJOEXPAND=1 or ~FLUJODEBUG=1 somewhere in your message to show more details.
- In config/features.ts you can change the logging level for the whole application.
- In config/features.ts you can enable SSE support, which is currently disabled by default.
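For example, a chat message combining these tags might look like this (the exact effect depends on the tool you connect to FLUJO):

  ~FLUJO=JSON ~FLUJOEXPAND=1 Summarize the key features of FLUJO.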
FLUJO - Empowering your AI workflows with open-source orchestration.