# Multi-Agent Collaboration System Based on A2A Protocol
A multi-agent collaboration system built with the **A2A Python SDK** ([a2a-sdk](https://github.com/a2aproject/a2a-python)), implementing Google's Agent2Agent (A2A) protocol and supporting the Model Context Protocol (MCP).
## Features
🤖 **Multi-Agent System**
- Create and manage multiple AI agents
- **Built using A2A SDK (a2a-sdk v0.3.20+)**
- Compliant with A2A protocol
- Supports various AI models:
  - **Google Gemini**: 2.0 Flash, 1.5 Pro, 1.5 Flash
  - **OpenAI GPT**: GPT-4, GPT-4 Turbo, GPT-4o, GPT-3.5 Turbo
  - **Local models**: LM Studio, LocalAI, Ollama, etc., via any OpenAI-compatible API
- Customizable agent configurations
- **Independent API key configuration for each agent** - Different agents can use different keys
- **Independent API endpoint configuration for each agent**, allowing flexible local model server setups
🤝 **Multi-Agent Collaboration**
- **Interactive collaboration interface** for coordinating multiple agents
- Intuitive interface to select agents and define collaboration tasks
- Real-time visualization of agent discussions and contributions
- Round-based collaboration, supporting configurable iteration counts
- Option to select a coordinator agent to manage the collaboration process
- Complete conversation history with timestamps and metadata
- Inspired by CrewAI's multi-agent mode and A2A protocol standards
🔧 **MCP Integration**
- Full Model Context Protocol support
- Connect agents to MCP servers
- Access tools and resources provided by MCP servers
- Seamless tool execution
💬 **Agent Communication**
- Real-time chat with individual agents
- Messaging compliant with A2A protocol
- Conversation history tracking
- Supports streaming responses
## Architecture
### Backend (Python/FastAPI)
- **A2A SDK**: Agent2Agent protocol Python SDK
- **FastAPI**: API server
- **Google GenAI**: Gemini model provider
- **OpenAI**: GPT model provider
- **MCP Client**: Model Context Protocol integration
- **Pydantic**: Data validation and settings management
### Frontend (React/Vite)
- **React 18**: Modern UI library
- **Vite**: Fast build tool and development server
- **Lucide Icons**: Icon set
- **Axios**: HTTP API communication
## Quick Start
### Prerequisites
- Python 3.10+
- Node.js 18+
- Google API Key (for Gemini models) and/or OpenAI API Key (for GPT models)
### Installation
1. **Clone the repository**
```bash
git clone <repository-url>
cd base-on-a2a-agent-system
```
2. **Set up the backend**
```bash
# Install Python dependencies
pip install -r requirements.txt
# Create environment file
cp .env.example .env
# Edit .env and add your GOOGLE_API_KEY and/or OPENAI_API_KEY
```
3. **Set up the frontend**
```bash
cd frontend
npm install
```
### Run the Application
1. **Start the backend server**
```bash
# From the project root directory
python -m uvicorn backend.main:app --reload --host 0.0.0.0 --port 8000
```
2. **Start the frontend development server**
```bash
# From the frontend directory
cd frontend
npm run dev
```
3. **Access the application**
Open in your browser: `http://localhost:5173`
Backend API documentation can be found at: `http://localhost:8000/docs`
## Usage
### Create an Agent
1. Click the "Create Agent" button on the dashboard
2. Fill in the agent details:
- **Name**: Give the agent a descriptive name
- **Description**: Describe the purpose of the agent
- **Provider**: Choose Google (Gemini) or OpenAI (GPT)
- **Model**: Select a model from the chosen provider
- **API Key** (optional): Configure an API key for the individual agent
  - **Google API Key**: for the Google (Gemini) provider
  - **OpenAI API Key**: for the OpenAI (GPT) provider
  - Leave blank to use the global key from the `.env` file
  - A per-agent key overrides the global setting
- **OpenAI API Base URL** (OpenAI only): Configure a custom API endpoint
  - Select from common local model server presets (LM Studio, LocalAI, Ollama, etc.)
  - Or enter a custom URL
  - Leave blank to use the official OpenAI API
- **System Prompt**: Define the agent's behavior and personality
- **Temperature**: Control randomness (0.0 - 2.0)
- **Max Tokens**: Set output length limit (optional)
3. **Add MCP Server** (optional):
   - **Server Name**: identifier for the MCP server
   - **Command**: executable command (e.g., `npx`, `python`)
   - **Args**: additional command-line arguments
4. Click "Create Agent"
### Chat with an Agent
1. Click the chat icon on the agent card
2. Enter your message in the input box
3. Press Enter or click the send button
4. The agent will respond using the A2A protocol
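The same message endpoint can be called programmatically. Below is a minimal stdlib-only sketch; the request-body field names (`agent_id`, `message`) are assumptions, since the schema is not documented here — check `http://localhost:8000/docs` for the actual shape.

```python
import json
import urllib.request

# Hypothetical request body -- field names are assumptions, not the
# backend's documented schema.
payload = {"agent_id": "agent-id-1", "message": "Hello, agent!"}

request = urllib.request.Request(
    "http://localhost:8000/api/agents/message",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the backend running, send the request:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response))
print(request.get_method(), request.full_url)
```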
### Multi-Agent Collaboration
Use multiple agents together to solve complex tasks:
**Using the interface:**
1. Create at least 2 agents with different capabilities
2. Click the "Collaborate" button at the top of the dashboard
3. Select the agents that will collaborate
4. Enter a task description (the more specific, the better)
5. Optionally select a coordinating agent (or let the system choose automatically)
6. Set the maximum collaboration rounds
7. Click "Start Collaboration"
8. Watch the agents work together, each contributing their expertise
9. View the complete conversation history and contributions from all agents
**Using the API:**
Start multi-agent collaboration using the API endpoint `/api/agents/collaborate`:
```bash
curl -X POST http://localhost:8000/api/agents/collaborate \
  -H "Content-Type: application/json" \
  -d '{
    "agents": ["agent-id-1", "agent-id-2"],
    "task": "Design a web application architecture",
    "max_rounds": 5
  }'
```
**Collaboration Features:**
- **A2A compliant**: Adheres to Google's Agent2Agent (A2A) protocol standards
- **Flexible coordination**: Option to select a coordinating agent or automatic selection
- **Round-based**: Control the number of iterations for agent collaboration
- **Complete history**: View full conversations with metadata and timestamps
- **Real-time updates**: See agents working together in real-time
## API Documentation
### Agent Endpoints
- `POST /api/agents/` - Create a new agent
- `GET /api/agents/` - List all agents
- `GET /api/agents/{agent_id}` - Get agent details
- `PUT /api/agents/{agent_id}` - Update agent configuration
- `DELETE /api/agents/{agent_id}` - Delete agent
- `POST /api/agents/message` - Send a message to an agent
- `POST /api/agents/collaborate` - Start agent collaboration
### MCP Endpoints
- `GET /api/mcp/agents/{agent_id}/tools` - Get available tools
- `GET /api/mcp/agents/{agent_id}/resources` - Get available resources
- `POST /api/mcp/agents/{agent_id}/tools/{server_name}/{tool_name}` - Call a tool
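The tool-call endpoint interpolates three path parameters. A small helper sketch (hypothetical; the real client may build URLs differently, and the request body for tool arguments is an assumption):

```python
# Build the MCP tool-call URL from its three path parameters.
def mcp_tool_url(base: str, agent_id: str, server_name: str, tool_name: str) -> str:
    return f"{base}/api/mcp/agents/{agent_id}/tools/{server_name}/{tool_name}"

url = mcp_tool_url("http://localhost:8000", "agent-id-1", "filesystem", "read_file")
print(url)
# POST to this URL with a JSON body of tool arguments, e.g.
# {"path": "/tmp/example.txt"} -- the exact body shape is an assumption.
```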
## Configuration
### Environment Variables
Create a `.env` file in the root directory:
```env
# At least one of these is required
GOOGLE_API_KEY=your_google_api_key
OPENAI_API_KEY=your_openai_api_key
# Optional
ANTHROPIC_API_KEY=your_anthropic_api_key
# OpenAI configuration (optional)
# For connecting to OpenAI compatible APIs like LM Studio, LocalAI, etc.
# If not set, the official OpenAI API endpoint will be used
OPENAI_BASE_URL=http://localhost:1234/v1
HOST=0.0.0.0
PORT=8000
DEBUG=true
DATABASE_URL=sqlite+aiosqlite:///./agents.db
ALLOWED_ORIGINS=http://localhost:3000,http://localhost:5173
```
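The backend loads these settings via Pydantic settings management; the stdlib-only sketch below illustrates the same variables and fallbacks (the default values shown are assumptions mirroring the example above, not the project's actual settings code):

```python
import os

# Illustrative only -- the real project uses Pydantic for settings.
host = os.getenv("HOST", "0.0.0.0")
port = int(os.getenv("PORT", "8000"))
debug = os.getenv("DEBUG", "false").lower() == "true"
allowed_origins = os.getenv(
    "ALLOWED_ORIGINS", "http://localhost:5173"
).split(",")

print(host, port, debug, allowed_origins)
```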
### Using OpenAI Compatible APIs (LM Studio, LocalAI, etc.)
The system supports any OpenAI compatible API endpoint, with two configuration methods:
#### Method 1: Per-Agent Configuration (Recommended)
Configure the base URL directly in the agent settings through the interface:
1. **Start the local model server** (e.g., LM Studio, LocalAI)
2. **Create or edit an agent on the dashboard**
3. **Select "OpenAI (GPT)" as the provider**
4. **Choose a preset in the "OpenAI API Base URL" dropdown**:
   - LM Studio (default): `http://localhost:1234/v1`
   - LocalAI: `http://localhost:8080/v1`
   - Ollama: `http://localhost:11434/v1`
   - Text Generation WebUI: `http://localhost:5000/v1`
   - Or select "Custom URL..." to enter a custom address
5. **Configure API key** (can be any string for local models)
6. **Select model** (use the model name supported by the local server)
This method allows different agents to use different API endpoints.
#### Method 2: Global Environment Variables
Set a default base URL for all agents via environment variables:
1. **Start LM Studio** and load the model
2. **Enable the local server in LM Studio** (usually runs at `http://localhost:1234`)
3. **Configure the .env file**:
```env
# Any string works as the key for local models
OPENAI_API_KEY=lm-studio
OPENAI_BASE_URL=http://localhost:1234/v1
```
4. **Create agents** using `provider: "openai"` and select the model name supported by LM Studio
**Note**: Per-agent configuration takes precedence over global environment variables.
**Supported OpenAI Compatible Platforms:**
- LM Studio
- LocalAI
- Ollama (with OpenAI compatibility layer)
- Text Generation WebUI (with OpenAI extension)
- vLLM
- Any other service implementing the OpenAI API format
### Agent Configuration Structure
```json
{
"name": "string",
"description": "string",
"provider": "google",
"model": "gemini-2.0-flash-exp",
"system_prompt": "string",
"temperature": 0.7,
"max_tokens": null,
"openai_base_url": "http://localhost:1234/v1",
"mcp_servers": [
{
"name": "string",
"command": "string",
"args": [],
"env": {}
}
],
"capabilities": [],
"metadata": {}
}
```
## MCP Integration
The system supports full MCP (Model Context Protocol) integration. Agents can be connected to MCP servers to provide tools and resources.
### Example MCP Server Configuration
```json
{
"name": "filesystem",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/directory"],
"env": {}
}
```
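Attaching such a server entry to an agent happens through the agent's `mcp_servers` field (payload shape taken from the "Agent Configuration Structure" section; the specific name and model here are illustrative):

```python
import json

# The filesystem MCP server entry as a plain dict.
filesystem_server = {
    "name": "filesystem",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/directory"],
    "env": {},
}

# Merge it into an agent-creation payload.
agent_payload = {
    "name": "File Assistant",
    "description": "Reads and writes project files",
    "provider": "openai",
    "model": "gpt-4o",
    "mcp_servers": [filesystem_server],
}
print(json.dumps(agent_payload, indent=2))
```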
### Available MCP Servers
- **filesystem**: File system operations
- **github**: GitHub API integration
- **postgres**: PostgreSQL database access
- **sqlite**: SQLite database access
- **brave-search**: Web search functionality
- And more servers in the MCP ecosystem
## Development
### Project Structure
```
base-on-a2a-agent-system/
├── backend/
│ ├── agents/ # Agent implementations
│ ├── api/ # FastAPI routes
│ ├── config/ # Configuration
│ ├── mcp/ # MCP integration
│ ├── models/ # Data models
│ └── main.py # Application entry point
├── frontend/
│ ├── src/
│ │ ├── components/ # React components
│ │ ├── pages/ # Page components
│ │ ├── services/ # API services
│ │ └── styles/ # CSS styles
│ └── package.json
├── requirements.txt
└── README.md
```
### Production Build
**Backend:**
```bash
pip install -r requirements.txt
python -m uvicorn backend.main:app --host 0.0.0.0 --port 8000
```
**Frontend:**
```bash
cd frontend
npm run build
```
The built frontend will be located in `frontend/dist`, and the backend will serve it automatically.
## Technologies Used
- **Backend**: Python, FastAPI, **A2A SDK (official)**, Google GenAI SDK, OpenAI SDK, MCP, SQLAlchemy
- **Frontend**: React, Vite, Axios, Lucide Icons
- **AI**: Google Gemini models, OpenAI GPT models
- **Protocols**: Agent2Agent (A2A) protocol, Model Context Protocol (MCP)
## Contributing
Contributions are welcome! Feel free to submit a Pull Request.
## License
This project is open-source and licensed under the MIT License.
## Support
If you have any questions or issues, please open an issue on GitHub.
## Acknowledgments
- [A2A Project](https://a2a-protocol.org/) - Official Agent2Agent protocol
- [A2A Python SDK](https://github.com/a2aproject/a2a-python) - Official Python SDK
- Model Context Protocol (MCP) community