# 🧠 Multi-Agent AI System using Google's A2A Protocol
This project demonstrates how to build a modular multi-agent AI assistant inspired by Google's **Agent2Agent (A2A)** protocol. Each agent handles one specific task (translation, memory retrieval, real-time web search, or final LLM-based response generation), and the agents communicate with one another via structured JSON over HTTP.
---
## 📐 Architecture Overview
- **Router Agent** (`router_server.py`): Orchestrates the entire flow, detects language, and coordinates other agents.
- **Translator Agent** (`translator_server.py`): Converts non-English text to English.
- **Memory Agent** (`memory_server.py`): Retrieves relevant knowledge using Graphiti + Neo4j.
- **Search Agent** (`search_server.py`): Uses SerpAPI to fetch real-time web data.
- **Final Agent** (`final_server.py`): Uses an LLM to generate the final, contextual response.
- **Test Client** (`test_client.py`): Sends sample queries to the system.
All agents follow the **A2A spec** by exposing a standard `/tasks/send` endpoint that accepts and returns structured JSON.
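As a rough sketch, a task envelope for `/tasks/send` can be built like this. The field names (`id`, `message`, `parts`) loosely follow the A2A draft spec and are illustrative only; check the actual agent code for the exact schema it expects:

```python
import json
import uuid

def build_task(text: str) -> dict:
    """Build a minimal A2A-style task envelope.

    Field names loosely mirror the A2A draft spec; treat them as
    illustrative rather than the exact schema used by these agents.
    """
    return {
        "id": str(uuid.uuid4()),
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": text}],
        },
    }

# Pretty-print an example envelope
print(json.dumps(build_task("Bonjour, comment ça va ?"), indent=2, ensure_ascii=False))
```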
---
## 🚀 Setup Guide
### 1. Clone the Repository
```bash
git clone https://github.com/manavpatel571/multi-agent-a2a-system.git
cd multi-agent-a2a-system
```
---
### 2. Install Dependencies
Create and activate a virtual environment:
```bash
python -m venv a2a-env
source a2a-env/bin/activate # On Windows: a2a-env\Scripts\activate
```
Install required packages:
```bash
pip install -r requirements.txt
```
---
### 3. Configure Environment Variables
Create a `.env` file in the root directory and add the following:
```env
# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4o
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
# Google Cloud APIs
GOOGLE_API_KEY=your_google_api_key_here
CUSTOM_SEARCH_API_KEY=your_custom_search_api_key_here
# SerpAPI
SERPAPI_API_KEY=your_serpapi_key_here
# A2A Agent URLs
TRANSLATOR_URL=http://localhost:5001/tasks/send
MEMORY_URL=http://localhost:5002/tasks/send
SEARCH_URL=http://localhost:5003/tasks/send
FINAL_URL=http://localhost:5004/tasks/send
```
> 💡 You can get a free SerpAPI key at: [https://serpapi.com/manage-api-key](https://serpapi.com/manage-api-key)
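Inside an agent, these values can be read with `os.getenv`, falling back to the defaults from the example above when a variable is unset. This is a sketch; the actual servers may load them differently (e.g. via python-dotenv):

```python
import os

# Agent endpoints, with the defaults from the .env example as fallbacks.
TRANSLATOR_URL = os.getenv("TRANSLATOR_URL", "http://localhost:5001/tasks/send")
MEMORY_URL = os.getenv("MEMORY_URL", "http://localhost:5002/tasks/send")
SEARCH_URL = os.getenv("SEARCH_URL", "http://localhost:5003/tasks/send")
FINAL_URL = os.getenv("FINAL_URL", "http://localhost:5004/tasks/send")
```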
---
### 4. Start the Agents
Open **5 terminals** (or use a process manager like `tmux`, `concurrently`, or Docker) and run:
```bash
python translator_server.py # Port 5001
python memory_server.py # Port 5002
python search_server.py # Port 5003
python final_server.py # Port 5004
python router_server.py # Port 5006
```
---
### 5. Test the System
Run the test client:
```bash
python test_client.py
```
You should see the final AI response generated by chaining all agents.
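Under the hood, a client call amounts to a single POST to the router. The sketch below assumes the router listens on port 5006 (per the startup step) and uses the JSON payload shape described earlier; `build_payload` and `send_query` are illustrative names, not functions taken from `test_client.py`:

```python
import json
import urllib.request

# Router port taken from the startup step; the payload shape is an
# assumption based on this README, not lifted from test_client.py.
ROUTER_URL = "http://localhost:5006/tasks/send"

def build_payload(text: str) -> bytes:
    """Encode a user query as an A2A-style JSON task body."""
    return json.dumps(
        {"message": {"role": "user", "parts": [{"type": "text", "text": text}]}}
    ).encode("utf-8")

def send_query(text: str, url: str = ROUTER_URL) -> dict:
    """POST the task to the router and return its parsed JSON reply."""
    req = urllib.request.Request(
        url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

With all five agents running, `send_query("¿Cuál es el clima en Madrid hoy?")` would exercise the full translate → memory → search → LLM chain.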
---
## 📎 Notes
- If SerpAPI fails (e.g., invalid key or rate limit), fallback logic returns a static placeholder message.
- A running Neo4j instance (plus Graphiti configuration) is required for full Memory Agent functionality.
- You can swap the LLM in `final_server.py` for GPT-4, Gemini Pro, or a Hugging Face model.
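The fallback behavior can be sketched as a thin wrapper around the live search call. Here `search_with_fallback`, `search_fn`, and the placeholder text are illustrative, not taken from `search_server.py`:

```python
FALLBACK_MESSAGE = "Live search is currently unavailable; answering from memory only."

def search_with_fallback(query: str, search_fn) -> str:
    """Run a live search, returning a static placeholder on any failure.

    `search_fn` stands in for the real SerpAPI call; any exception
    (invalid key, rate limit, network error) triggers the fallback.
    """
    try:
        return search_fn(query)
    except Exception:
        return FALLBACK_MESSAGE
```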
---
## 📚 Inspired By
- [Google Cloud’s A2A Protocol Announcement](https://cloud.google.com/blog/products/ai-machine-learning/announcing-the-agent2agent-protocol-a2a)
- Concepts from LangChain, Neo4j, SerpAPI, and Flask-based microservices.
---
## 🛠️ Future Improvements
- Add Docker support for one-click startup
- Replace fallback search with DuckDuckGo or Bing
- Add streaming and push notification support as per full A2A spec
- Integrate authentication for agent discovery and invocation