browser-use-mcp-server

By MCP-Mirror GitHub

Overview

What is browser-use-mcp-server?

The browser-use-mcp-server is an MCP server that enables AI agents to control web browsers using the browser-use framework.

How to use browser-use-mcp-server?

To use the browser-use-mcp-server, install the prerequisites (uv, Playwright, and mcp-proxy), create a .env file with your API key, and start the server with the commands provided for either SSE or stdio mode.

Key features of browser-use-mcp-server?

  • Browser Automation: Control web browsers through AI agents.
  • Dual Transport: Support for both SSE and stdio protocols.
  • VNC Streaming: Watch browser automation in real-time.
  • Async Tasks: Execute browser operations asynchronously.

Use cases of browser-use-mcp-server?

  1. Automating web tasks using AI agents.
  2. Real-time monitoring of browser activities.
  3. Integrating AI capabilities into web applications.

FAQ about browser-use-mcp-server

  • What are the prerequisites for using this server?

You need to install uv, Playwright, and mcp-proxy.

  • Can I run this server in a Docker container?

Yes! Docker provides a consistent environment for running the server.

  • How do I configure the client?

Client configuration can be done through JSON files specifying the MCP server details.

browser-use-mcp-server

An MCP server that enables AI agents to control web browsers using browser-use.

Prerequisites

# Install prerequisites
curl -LsSf https://astral.sh/uv/install.sh | sh
uv tool install mcp-proxy
uv tool update-shell

Environment

Create a .env file:

OPENAI_API_KEY=your-api-key
CHROME_PATH=optional/path/to/chrome
PATIENT=false  # Set to true if API calls should wait for task completion
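The server reads these variables from the .env file, but for ad-hoc shell sessions it can help to export them into your environment. A minimal sketch using standard POSIX shell (the values below are placeholders, not real credentials):

```shell
# Create the .env file described above (placeholder values)
cat > .env <<'EOF'
OPENAI_API_KEY=your-api-key
PATIENT=false
EOF

# Export every variable defined in .env into the current shell.
# "set -a" auto-exports variables defined while it is active; "set +a" turns it off.
set -a
. ./.env
set +a
```

After this, the variables are visible to any process started from the same shell, including the server itself.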

Installation

# Install dependencies
uv sync
uv pip install playwright
uv run playwright install --with-deps --no-shell chromium

Usage

SSE Mode

# Run directly from source
uv run server --port 8000

stdio Mode

# 1. Build and install globally
uv build
uv tool uninstall browser-use-mcp-server 2>/dev/null || true
uv tool install dist/browser_use_mcp_server-*.whl

# 2. Run with stdio transport
browser-use-mcp-server run server --port 8000 --stdio --proxy-port 9000

Client Configuration

SSE Mode Client Configuration

{
  "mcpServers": {
    "browser-use-mcp-server": {
      "url": "http://localhost:8000/sse"
    }
  }
}

stdio Mode Client Configuration

{
  "mcpServers": {
    "browser-server": {
      "command": "browser-use-mcp-server",
      "args": [
        "run",
        "server",
        "--port",
        "8000",
        "--stdio",
        "--proxy-port",
        "9000"
      ],
      "env": {
        "OPENAI_API_KEY": "your-api-key"
      }
    }
  }
}
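If you generate client configs programmatically, a small script can build and sanity-check the stdio-mode configuration above before writing it to a client's config file. The helper below is purely illustrative (not part of the project):

```python
import json

def build_stdio_config(api_key: str, port: int = 8000, proxy_port: int = 9000) -> dict:
    """Build the stdio-mode client configuration shown above."""
    return {
        "mcpServers": {
            "browser-server": {
                "command": "browser-use-mcp-server",
                "args": [
                    "run", "server",
                    "--port", str(port),
                    "--stdio",
                    "--proxy-port", str(proxy_port),
                ],
                "env": {"OPENAI_API_KEY": api_key},
            }
        }
    }

config = build_stdio_config("your-api-key")
server = config["mcpServers"]["browser-server"]

# Basic shape checks before handing the config to a client.
assert server["command"] == "browser-use-mcp-server"
assert "--stdio" in server["args"]

# Round-trip through JSON to confirm it serializes cleanly.
print(json.dumps(config, indent=2))
```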

Config Locations

Client            Configuration Path
Cursor            ./.cursor/mcp.json
Windsurf          ~/.codeium/windsurf/mcp_config.json
Claude (Mac)      ~/Library/Application Support/Claude/claude_desktop_config.json
Claude (Windows)  %APPDATA%\Claude\claude_desktop_config.json
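For example, to install the SSE-mode config into Cursor's project-local path from the table above, a one-off shell snippet like this works:

```shell
# Write the SSE-mode client config into Cursor's project-local path
# (./.cursor/mcp.json), matching the SSE configuration shown earlier.
mkdir -p .cursor
cat > .cursor/mcp.json <<'EOF'
{
  "mcpServers": {
    "browser-use-mcp-server": {
      "url": "http://localhost:8000/sse"
    }
  }
}
EOF
```

The other clients use the same JSON shape; only the destination path differs.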

Features

  • Browser Automation: Control browsers through AI agents
  • Dual Transport: Support for both SSE and stdio protocols
  • VNC Streaming: Watch browser automation in real-time
  • Async Tasks: Execute browser operations asynchronously

Local Development

To develop and test the package locally:

  1. Build a distributable wheel:

    # From the project root directory
    uv build
    
  2. Install it as a global tool:

    uv tool uninstall browser-use-mcp-server 2>/dev/null || true
    uv tool install dist/browser_use_mcp_server-*.whl
    
  3. Run from any directory:

    # Set your OpenAI API key for the current session
    export OPENAI_API_KEY=your-api-key-here
    
    # Or provide it inline for a one-time run
    OPENAI_API_KEY=your-api-key-here browser-use-mcp-server run server --port 8000 --stdio --proxy-port 9000
    
  4. After making changes, rebuild and reinstall:

    uv build
    uv tool uninstall browser-use-mcp-server
    uv tool install dist/browser_use_mcp_server-*.whl
    

Docker

Using Docker provides a consistent and isolated environment for running the server.

# Build the Docker image
docker build -t browser-use-mcp-server .

# Run the container with the default VNC password ("browser-use")
# --rm ensures the container is automatically removed when it stops
# -p 8000:8000 maps the server port
# -p 5900:5900 maps the VNC port
docker run --rm -p 8000:8000 -p 5900:5900 browser-use-mcp-server

# Run with a custom VNC password read from a file
# Create a file (e.g., vnc_password.txt) containing only your desired password
echo "your-secure-password" > vnc_password.txt
# Mount the password file as a secret inside the container
docker run --rm -p 8000:8000 -p 5900:5900 \
  -v "$(pwd)/vnc_password.txt":/run/secrets/vnc_password:ro \
  browser-use-mcp-server

Note: The :ro flag in the volume mount (-v) makes the password file read-only inside the container for added security.

VNC Viewer

# Browser-based viewer
git clone https://github.com/novnc/noVNC
cd noVNC
./utils/novnc_proxy --vnc localhost:5900

Default password: browser-use (unless overridden using the custom password method)

VNC Screenshot

(screenshot omitted)

Example

Try asking your AI:

open https://news.ycombinator.com and return the top ranked article

Support

For issues or inquiries: cobrowser.xyz

Star History

Star History Chart