
Comfy MCP Server
A server using the FastMCP framework to generate images based on prompts via a remote Comfy server.
What is Comfy MCP Server?
Comfy MCP Server is a server application that utilizes the FastMCP framework to generate images based on user-defined prompts by interacting with a remote Comfy server.
How to use Comfy MCP Server?
To use Comfy MCP Server, set the required environment variables, install the necessary packages, and launch the server with the command `uvx comfy-mcp-server`.
Key features of Comfy MCP Server?
- Generates images from prompts using a remote Comfy server.
- Supports configuration through environment variables.
- Allows for custom workflows exported from Comfy UI.
Use cases of Comfy MCP Server?
- Creating images for artistic projects based on textual descriptions.
- Automating image generation for design prototypes.
- Integrating with applications that require dynamic image generation.
FAQ about Comfy MCP Server
- What is required to run Comfy MCP Server?
You need the `uv` package and a workflow file exported from Comfy UI.
- Can I customize the image generation process?
Yes! You can set various environment variables to customize the workflow and output.
- Is there a specific format for the prompt?
The prompt can be any string that describes the desired image.
Overview
This script sets up a server using the FastMCP framework to generate images based on prompts using a specified workflow. It interacts with a remote Comfy server to submit prompts and retrieve generated images.
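For orientation, the round trip with the Comfy server looks roughly like the sketch below. It assumes the standard ComfyUI HTTP API (`POST /prompt` to queue a job, `GET /history/<prompt_id>` to poll for completion); the actual server code may organize this differently.

```python
import json
import os
import time
import urllib.request

COMFY_URL = os.environ["COMFY_URL"]

def submit_and_wait(workflow: dict) -> dict:
    # Queue the workflow; ComfyUI responds with a prompt_id for the job.
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        prompt_id = json.loads(resp.read())["prompt_id"]

    # Poll the history endpoint until the job appears with its outputs.
    while True:
        with urllib.request.urlopen(f"{COMFY_URL}/history/{prompt_id}") as resp:
            history = json.loads(resp.read())
        if prompt_id in history:
            return history[prompt_id]["outputs"]
        time.sleep(1)
```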
Prerequisites
- The `uv` package and project manager for Python.
- A workflow file exported from Comfy UI. This code includes a sample `Flux-Dev-ComfyUI-Workflow.json`, which is used here only as a reference. You will need to export your own workflow and set the environment variables accordingly.
You can install the required packages for local development:

    uvx mcp[cli]
Configuration
Set the following environment variables:
- `COMFY_URL` to point to your Comfy server URL.
- `COMFY_WORKFLOW_JSON_FILE` to point to the absolute path of the API export JSON file for the ComfyUI workflow.
- `PROMPT_NODE_ID` to the id of the text prompt node.
- `OUTPUT_NODE_ID` to the id of the output node with the final image.
- `OUTPUT_MODE` to either `url` or `file` to select the desired output.
Optionally, if you have an Ollama server running, you can connect to it for prompt generation.
- `OLLAMA_API_BASE` to the URL where Ollama is running.
- `PROMPT_LLM` to the name of the model hosted on Ollama for prompt generation.
Example:
    export COMFY_URL=http://your-comfy-server-url:port
    export COMFY_WORKFLOW_JSON_FILE=/path/to/the/comfyui_workflow_export.json
    export PROMPT_NODE_ID=6  # use the correct node id here
    export OUTPUT_NODE_ID=9  # use the correct node id here
    export OUTPUT_MODE=file
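Internally, the server presumably reads this configuration via `os.environ`; a minimal sketch of that pattern (the variable names come from this README, the helper itself is illustrative):

```python
import os

REQUIRED_VARS = [
    "COMFY_URL",
    "COMFY_WORKFLOW_JSON_FILE",
    "PROMPT_NODE_ID",
    "OUTPUT_NODE_ID",
    "OUTPUT_MODE",
]

def load_config() -> dict:
    # Fail fast with a clear message if anything required is missing.
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_VARS}
```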
Usage
Comfy MCP Server can be launched with the following command:

    uvx comfy-mcp-server
Example Claude Desktop Config
    {
      "mcpServers": {
        "Comfy MCP Server": {
          "command": "/path/to/uvx",
          "args": [
            "comfy-mcp-server"
          ],
          "env": {
            "COMFY_URL": "http://your-comfy-server-url:port",
            "COMFY_WORKFLOW_JSON_FILE": "/path/to/the/comfyui_workflow_export.json",
            "PROMPT_NODE_ID": "6",
            "OUTPUT_NODE_ID": "9",
            "OUTPUT_MODE": "file"
          }
        }
      }
    }
Functionality
`generate_image(prompt: str, ctx: Context) -> Image | str`
This function generates an image using a specified prompt. It follows these steps:
- Checks if all the environment variables are set.
- Loads a prompt template from a JSON file.
- Submits the prompt to the Comfy server.
- Polls the server for the status of the prompt processing.
- Retrieves and returns the generated image once it's ready.
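A rough sketch of the template-loading and prompt-injection steps follows, assuming the usual ComfyUI API-export layout, where nodes are keyed by their id and text inputs live under `inputs`; the helper names here are illustrative, not the server's actual internals.

```python
import json
import os

def build_workflow(prompt: str) -> dict:
    # Load the exported workflow template and inject the user's prompt
    # into the configured text node.
    with open(os.environ["COMFY_WORKFLOW_JSON_FILE"]) as f:
        workflow = json.load(f)
    workflow[os.environ["PROMPT_NODE_ID"]]["inputs"]["text"] = prompt
    return workflow

def result_url(filename: str, subfolder: str = "") -> str:
    # With OUTPUT_MODE=url the finished image can be referenced via the
    # server's /view endpoint; with OUTPUT_MODE=file the same endpoint
    # would be fetched and the bytes returned as an Image instead.
    return (
        f"{os.environ['COMFY_URL']}/view"
        f"?filename={filename}&subfolder={subfolder}&type=output"
    )
```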
`generate_prompt(topic: str, ctx: Context) -> str`
This function generates a comprehensive image generation prompt from the specified topic.
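A sketch of what such a prompt chain can look like with `langchain` and `langchain-ollama` (the template wording is invented for illustration; the environment variables are the ones from the Configuration section):

```python
import os

from langchain_core.prompts import PromptTemplate
from langchain_ollama import OllamaLLM

def make_prompt_chain():
    # Connect to the Ollama server and model named in the environment.
    llm = OllamaLLM(
        model=os.environ["PROMPT_LLM"],
        base_url=os.environ["OLLAMA_API_BASE"],
    )
    # Hypothetical template: expand a short topic into a detailed prompt.
    template = PromptTemplate.from_template(
        "Write a detailed, vivid image generation prompt about: {topic}"
    )
    return template | llm  # LCEL: format the template, then call the model

# Usage: make_prompt_chain().invoke({"topic": "a lighthouse at dusk"})
```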
Dependencies
- `mcp`: For setting up the FastMCP server.
- `json`: For handling JSON data.
- `urllib`: For making HTTP requests.
- `time`: For adding delays in polling.
- `os`: For accessing environment variables.
- `langchain`: For creating a simple LLM prompt chain that generates an image generation prompt from a topic.
- `langchain-ollama`: For Ollama-specific modules for LangChain.
License
This project is licensed under the MIT License - see the LICENSE file for details.