
Gemini API with MCP Tool Integration
AI agent that retrieves weather data from the MCP server to provide automated forecasts. Ideal for integration into weather-related applications.
How to use the Weather AI Agent?
To use the Weather AI Agent, clone the repository, set up your environment variables in a `.env` file, and run the application using the command `python main.py`.
Key features of the Weather AI Agent?
- Integration with Google Gemini API for natural language processing.
- Automated weather data retrieval from the MCP server.
- Customizable prompts and responses based on user queries.
Use cases of the Weather AI Agent?
- Providing real-time weather forecasts for applications.
- Automating responses to user inquiries about weather conditions.
- Enhancing weather-related services with AI-driven insights.
FAQ from the Weather AI Agent?
- What are the prerequisites for using the Weather AI Agent?
You need Python 3.7 or higher, a Google Cloud project with the Gemini API enabled, and an MCP environment set up.
- Is there a license for the Weather AI Agent?
Yes, the project is licensed under the MIT License.
- Can I customize the behavior of the Weather AI Agent?
Yes, you can modify the prompt and adjust the response handling in the code.
Gemini API with MCP Tool Integration
This project demonstrates how to integrate the Google Gemini API with custom tools served over MCP (Model Context Protocol). It uses the Gemini API to process natural language queries and invokes MCP tools to execute specific actions based on each query's intent.
Prerequisites
Before running this project, ensure you have the following:
- Python 3.7 or higher
- A Google Cloud project with the Gemini API enabled and an API key
- An MCP environment set up with the necessary tools
- A `.env` file with the following environment variables:

  ```
  GEMINI_API_KEY=<your_gemini_api_key>
  GEMINI_MODEL=<your_gemini_model_name>
  MCP_RUNNER=<path_to_mcp_runner>
  MCP_SCRIPT=<path_to_mcp_script>
  ```
Installation
- Clone the repository:

  ```
  git clone <repository_url>
  cd <repository_directory>
  ```

- Create a virtual environment (recommended):

  ```
  python3 -m venv venv
  source venv/bin/activate  # On macOS/Linux
  ```
- Install the required dependencies using `uv` (the package is `python-dotenv`, not `dotenv`):

  ```
  uv pip install python-dotenv google-generativeai mcp
  uv add "mcp[cli]" httpx
  ```
- Create a `.env` file in the project root and add your environment variables:

  ```
  GEMINI_API_KEY=your_api_key_here
  GEMINI_MODEL=gemini-pro
  MCP_RUNNER=path_to_mcp_runner
  MCP_SCRIPT=path_to_mcp_script
  ```
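Since the application refuses to start without these variables, it helps to fail fast with a clear message. A minimal, stdlib-only sketch of such a check (the variable names come from the `.env` example above; in the real application the file would be loaded into `os.environ` first, e.g. via `python-dotenv`):

```python
import os

# Variables the application expects, per the .env example above.
REQUIRED_VARS = ("GEMINI_API_KEY", "GEMINI_MODEL", "MCP_RUNNER", "MCP_SCRIPT")

def missing_vars(env, required=REQUIRED_VARS):
    """Return the names of required variables that are absent or empty."""
    return [name for name in required if not env.get(name)]

# Demo with a partially populated environment; real code would pass os.environ.
partial = {"GEMINI_API_KEY": "abc123", "GEMINI_MODEL": "gemini-pro"}
print(missing_vars(partial))  # ['MCP_RUNNER', 'MCP_SCRIPT']
```

A check like this at startup turns a confusing mid-run failure into an immediate, actionable error.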
Usage
To run the application, execute the following command:

```
python main.py
```
How It Works
- Loads environment variables and validates that they are present
- Establishes a connection with the MCP server via the MCP client
- Retrieves the available tools from the MCP session
- Sends the prompt to the Gemini API along with the tool definitions
- Processes any tool calls made by the model
- Returns a final response that includes the results of those tool calls
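The last two steps can be sketched as a simple dispatch loop. This is an illustrative stand-in, not the project's actual `process_response()`: plain dicts play the role of Gemini's function-call objects, and a dict of callables plays the role of the MCP session's tools.

```python
# Plain dicts stand in for Gemini function-call objects; a dict of callables
# stands in for the tools exposed by the MCP session (both hypothetical here).

def call_tool(name, args, tools):
    """Look up a tool by name and invoke it with the model's arguments."""
    if name not in tools:
        raise KeyError(f"Model requested unknown tool: {name}")
    return tools[name](**args)

def process_tool_calls(tool_calls, tools):
    """Execute each tool call and collect results to send back to the model."""
    return [
        {"tool": call["name"], "result": call_tool(call["name"], call["args"], tools)}
        for call in tool_calls
    ]

# Demo with a fake forecast tool standing in for the real MCP weather server.
tools = {"get_forecast": lambda city: f"Sunny in {city}"}
calls = [{"name": "get_forecast", "args": {"city": "Oslo"}}]
print(process_tool_calls(calls, tools))
# [{'tool': 'get_forecast', 'result': 'Sunny in Oslo'}]
```

The real loop additionally feeds each result back to the model so it can compose the final natural-language answer.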
Customization
To customize the prompt or behavior:
- Modify the `prompt` variable with your desired text
- Adjust the `get_contents()` function to change how prompts are formatted
- Extend `process_response()` to handle different response types