# Multi Agent Development with MCP and A2A
Hands-on materials for the FastCampus online course [Multi Agent Development with MCP and A2A](https://fastcampus.co.kr/data_online_mcpa2a) (Part 2, Chapter 2: LangGraph and MCP & A2A)
## Development Environment Setup
1) Cursor
2) Shrimp-Task-Manager (MCP)
3) Reference documents prepared in advance with RepoMix
(Note: if your installed package versions differ from those used in the lecture, **unexpected errors may occur**)
- LangGraph 0.6.2
- FastMCP 2.10.6
- LangChain MCP Adapters 0.1.9
- a2a-sdk 0.3.0
- Google ADK 1.9.0
## Main Reference Document Locations During Development
- [LangChain-llms.txt](/docs/langchain-llms.txt)
- [LangGraph-llms-full.txt](/docs/langgraph-llms-full_0.6.2.txt)
- [LangGraph-llms.txt](/docs/langgraph-llms_0.6.2.txt)
- [FastMCP-llms.txt](/docs/fastmcp-llms_2.10.6.txt)
- [FastMCP-llms-full.txt](/docs/fastmcp-llms-full_2.10.6.txt)
- [a2a-python](/docs/a2a-python_0.3.0.txt)
- [a2a-python-api-reference](https://a2a-protocol.org/latest/sdk/python/api/)
- [a2a-samples](/docs/a2a-samples_0.3.0.txt)
## Development Environment Setup (Package Manager `uv`)
Based on Python 3.12
```bash
uv init
uv venv
uv sync
```
## MCP Server Management
This project provides three MCP (Model Context Protocol) servers:
- **ArXiv Search Server** (port 3000): Paper search
- **Tavily Search Server** (port 3001): Web search
- **Serper Search Server** (port 3002): Google search
### 🚀 Quick Start
```bash
# Start the entire MCP development environment (image build + server start + Redis Commander)
make mcp-dev
# Or run step by step
make mcp-build # Image build
make mcp-up-tools # Start servers + Redis Commander
```
### 📋 Key Commands
```bash
# Server management
make mcp-up # Start all MCP servers
make mcp-down # Stop all MCP servers
make mcp-restart # Restart all MCP servers
make mcp-status # Check server status
make mcp-test # Health check test
# Log checking
make mcp-logs # Logs for all servers
make mcp-logs-tavily # Tavily server logs only
make mcp-logs-arxiv # ArXiv server logs only
make mcp-logs-serper # Serper server logs only
# Cleanup
make mcp-clean # Clean up containers and images
```
### 🔧 Using Scripts Directly
```bash
# Run scripts directly
./scripts/mcp_docker.sh help # Help
./scripts/mcp_docker.sh build # Image build
./scripts/mcp_docker.sh up # Start server
./scripts/mcp_docker.sh test # Health check
```
### 📍 Server Access Information
After starting, you can access the servers at the following addresses:
- **ArXiv MCP Server**: http://localhost:3000
- **Tavily MCP Server**: http://localhost:3001
- **Serper MCP Server**: http://localhost:3002
- **Redis Commander**: http://localhost:8081 (admin/mcp2025)
### 🔑 Environment Variable Setup
Set the API keys in the `.env` file:
```bash
# API keys for web search
TAVILY_API_KEY=your_tavily_api_key
SERPER_API_KEY=your_serper_api_key
# No API key required for ArXiv
```
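A small fail-fast check at server startup avoids confusing downstream errors when a key is missing. This is a sketch only (the helper name `require_api_keys` is illustrative, not part of the repo); the key names match the `.env` entries above:

```python
import os

# Keys required by the Tavily and Serper servers; ArXiv needs no key.
REQUIRED_KEYS = ("TAVILY_API_KEY", "SERPER_API_KEY")

def require_api_keys(env=os.environ) -> None:
    """Raise early if any required API key is missing or empty."""
    missing = [k for k in REQUIRED_KEYS if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing API keys in .env: {', '.join(missing)}")

# require_api_keys()  # call once when an MCP server starts
```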
---
## Architecture
### Overall System Architecture
```mermaid
graph TB
subgraph "1. MCP + LangGraph Integration"
MCP1[MCP Server - Retriever]
LG1[LangGraph Agent]
MCP1 -->|langchain_mcp_adapter| LG1
end
subgraph "2. LangGraph + A2A Integration"
LG2[LangGraph Agent]
A2A1[A2A Wrapper]
LG2 -->|Wrapped by| A2A1
end
subgraph "3. Multi-Agent Deep Research"
subgraph "MCP Tools"
T1[Tavily Search]
T2[Web Scraper]
T3[Vector DB]
T4[RDB]
end
MA[Multi-Agent System]
A2A2[A2A Orchestrator]
T1 & T2 & T3 & T4 --> MA
MA -->|Enhanced by| A2A2
end
subgraph "4. HITL Integration"
DR[Deep Research Agent]
HITL[Human-In-The-Loop]
Report[Final Report]
DR -->|Request Approval| HITL
HITL -->|Feedback| DR
DR --> Report
end
```
---
## 📚 Project Directory Structure
This project builds a multi-agent system utilizing MCP and A2A through a four-step learning process. Each directory contains a detailed README.md for reference.
```
src/
├── lg_agents/ # Step 1-2, 4: LangGraph based agents
│ └── README.md # 👉 Detailed guide on MCP integration, HITL implementation
├── multi_agent/ # Step 3: LangGraph Supervisor system
│ └── README.md # 👉 Explanation of complex state management issues
├── a2a_orchestrator/ # Step 3: A2A simplification system
│ └── README.md # 👉 Advantages and implementation methods of A2A
├── a2a_integration/ # Step 2, 4: A2A integration and server
│ └── unified_server.py # Unified A2A agent server
└── mcp_servers/ # MCP servers
├── tavily_search/ # Tavily web search
├── arxiv_search/ # arXiv paper search
└── serper_search/ # Google search
```
### 📖 Step-by-Step Learning Path
1. **[Step 1: MCP + LangGraph](src/lg_agents/README.md#step-1-mcp--langgraph-통합)** - Integrate MCP tools into LangGraph
2. **[Step 2: LangGraph + A2A](src/lg_agents/README.md#step-2-a2a로-래핑)** - Wrap agents with A2A protocol
3. **Step 3: Compare Multi-Agent Systems**
- **[LangGraph Approach](src/multi_agent/README.md)** - Limitations of complex nested states
- **[A2A Approach](src/a2a_orchestrator/README.md)** - Simple and scalable structure
4. **[Step 4: HITL Integration](src/lg_agents/README.md#step-4-hitl-통합)** - Implement human approval flow
## Project Implementation Steps
### 🚀 Step 1: Integrate MCP Server as a Tool for LangGraph Agent
**📁 Implementation Location**: [`src/lg_agents/`](src/lg_agents/README.md)
**🔗 Branch**: `feature/1.mcp-langgraph-integration`
Learn how to integrate MCP tools into LangGraph.
- Implement MCP Server (Tavily Web Search)
- Integrate MCP tools into `create_react_agent`
- Automatic tool selection using ReAct pattern
**👉 Detailed Guide**: [src/lg_agents/README.md](src/lg_agents/README.md#step-1-mcp--langgraph-통합)
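The integration above can be sketched with `langchain-mcp-adapters`. This is a minimal sketch, not the repo's actual code: it assumes the Tavily MCP server from this README is running on port 3001 with the streamable-HTTP transport, and the model name is only an example.

```python
import asyncio

# Hypothetical connection config; adjust the URL/path to your server.
SERVER_CONFIG = {
    "tavily": {"url": "http://localhost:3001/mcp", "transport": "streamable_http"}
}

async def build_agent():
    # Imported lazily so this module loads even without the packages installed.
    from langchain_mcp_adapters.client import MultiServerMCPClient
    from langgraph.prebuilt import create_react_agent

    client = MultiServerMCPClient(SERVER_CONFIG)
    tools = await client.get_tools()  # discover the server's tools over MCP
    return create_react_agent("openai:gpt-4o-mini", tools)

if __name__ == "__main__":
    agent = asyncio.run(build_agent())
    result = asyncio.run(agent.ainvoke(
        {"messages": [{"role": "user", "content": "Find recent LangGraph news"}]}
    ))
    print(result["messages"][-1].content)
```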
---
### 🔗 Step 2: Integrate LangGraph Agent with A2A
**📁 Implementation Location**: [`src/lg_agents/`](src/lg_agents/README.md#step-2-a2a로-래핑), [`src/a2a_integration/`](src/a2a_integration/)
**🔗 Branch**: `feature/2.langgraph-a2a-integration`
Learn how to wrap LangGraph agents with the A2A protocol.
- Create A2A Agent class
- Wrap LangGraph Agent according to A2A specifications
- Implement message protocol
- State management and synchronization
**👉 Detailed Guide**: [src/lg_agents/README.md](src/lg_agents/README.md#step-2-a2a로-래핑)
---
### 🔀 Step 3: Implement a Multi-Agent System with MCP and Overcome Its Limitations with A2A
**📁 Implementation Location**: [`src/multi_agent/`](src/multi_agent/README.md), [`src/a2a_orchestrator/`](src/a2a_orchestrator/README.md)
**🔗 Branch**: `feature/3.multi-agent-deep-research`
#### Core Goal: Compare State Complexity of LangGraph vs Simplicity of A2A
Experience the differences in state management complexity through the implementation of both systems.
**👉 Detailed Guides by System**:
- **LangGraph Supervisor System**: [src/multi_agent/README.md](src/multi_agent/README.md) - Issues with complex nested states
- **A2A Simplification System**: [src/a2a_orchestrator/README.md](src/a2a_orchestrator/README.md) - Advantages of independent states
#### Overall Architecture
```mermaid
graph TB
subgraph "LangGraph Supervisor System (Complex State)"
User1[User] --> Supervisor[Supervisor Agent<br/>Task Routing]
Supervisor --> Planner[Planner Agent<br/>Research Planning]
Supervisor --> Researcher[Researcher Agent<br/>Using MCP Tools]
Supervisor --> Writer[Writer Agent<br/>Report Writing]
Supervisor --> Evaluator[Evaluator Agent<br/>Quality Assessment]
subgraph "Nested State Structure"
State[SupervisorState<br/>- messages<br/>- planner_state<br/>- researcher_state<br/>- writer_state<br/>- evaluator_state]
end
Supervisor -.-> State
Planner -.-> State
Researcher -.-> State
Writer -.-> State
Evaluator -.-> State
end
subgraph "A2A System (Simple Context)"
User2[User] --> A2AOrch[A2A Orchestrator]
A2AOrch --> A2APlan[Planner A2A]
A2AOrch --> A2ARes[Researcher A2A]
A2AOrch --> A2AWrite[Writer A2A]
A2AOrch --> A2AEval[Evaluator A2A]
subgraph "Independent State"
Context["Simple Context<br/>{'plan': '...',<br/>'findings': [...],<br/>'report': '...'}"]
end
A2AOrch -.-> Context
end
style State fill:#ffcccc
style Context fill:#ccffcc
```
#### 🔍 Implementation Comparison
Key differences between the two systems:
```python
# LangGraph: Complex Nested State
class SupervisorState(TypedDict):
    messages: List[BaseMessage]
    planner_state: PlannerState        # Nested!
    researcher_state: ResearcherState  # Nested!
    writer_state: WriterState          # Nested!
    # ... 15+ fields

# A2A: Simple Context
context = {
    "query": "Research Topic",
    "plan": "Plan",
    "findings": [...],
    "report": "Results",
}
```
#### 📊 Performance and Complexity Comparison
| Aspect | LangGraph Supervisor | A2A System |
|--------|---------------------|------------|
| Number of State Fields | 15+ (including nested) | 4-5 (flat) |
| Parallel Execution | Difficult | Easy |
| Adding Agents | Requires state modification | Independent addition |
| Performance | Sequential execution (53 seconds) | Parallel execution (30 seconds) |
**👉 Detailed Implementation and Comparison**:
- [LangGraph System Details](src/multi_agent/README.md)
- [A2A System Details](src/a2a_orchestrator/README.md)
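The parallel-execution gain in the table above can be illustrated with plain `asyncio`. The stub below stands in for independent A2A agent calls; the agent names and delays are illustrative, not the project's actual agents:

```python
import asyncio
import time

async def run_agent(name: str, delay: float) -> str:
    # Stub standing in for an independent A2A agent call over HTTP.
    await asyncio.sleep(delay)
    return f"{name} done"

async def sequential(delay: float = 0.1) -> float:
    # Supervisor-style: each agent waits for the previous one.
    start = time.perf_counter()
    for name in ("planner", "researcher", "writer"):
        await run_agent(name, delay)
    return time.perf_counter() - start

async def parallel(delay: float = 0.1) -> float:
    # A2A-style: independent agents run concurrently.
    start = time.perf_counter()
    await asyncio.gather(
        *(run_agent(n, delay) for n in ("planner", "researcher", "writer"))
    )
    return time.perf_counter() - start

seq = asyncio.run(sequential())  # roughly 3x the single-agent delay
par = asyncio.run(parallel())    # roughly 1x the single-agent delay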
**Implementation Structure**:
```bash
src/
├── multi_agent/ # LangGraph based system
│ ├── supervisor_system.py # Supervisor pattern implementation
│ ├── coordinator.py # Existing MCP coordinator
│ └── models.py # State model definitions
└── a2a_orchestrator/ # A2A based system
├── simplified_system.py # Simplified A2A system
├── orchestrator.py # A2A orchestrator
└── agent_wrappers.py # Agent wrappers (extensions)
```
**Execution Method**:
```bash
# 1. Start the entire multi-agent system (MCP server + A2A agents)
python scripts/start_multiagent_system.py
# 2. Run system comparison (in a separate terminal)
python examples/compare_systems.py
# 3. Test individual systems
# Only LangGraph Supervisor system
python -m src.multi_agent.supervisor_system
# Only A2A system
python -m src.a2a_orchestrator.simplified_system
```
**🎯 Key Learning Points**:
- **60% Reduction in Code Complexity**: Simplified code by removing nested states
- **40% Performance Improvement through Parallel Execution**: Concurrent processing of independent tasks
- **95% Simplification in State Management**: Removal of nested structures
- **Improved Maintainability**: Each agent can be developed/tested independently
**👉 Full Comparison Analysis**: [examples/compare_systems.py](examples/compare_systems.py)
---
### 🧑‍💻 Step 4: Request Human Judgment via HITL (Human-In-The-Loop) During A2A Communication and Return the Final Response
**📁 Implementation Location**: [`src/lg_agents/`](src/lg_agents/README.md#step-4-hitl-통합), [`src/a2a_integration/`](src/a2a_integration/)
**🔗 Branch**: `feature/4.a2a_hitl`
Implement a system that obtains human approval for significant decisions made by AI agents.
**👉 Detailed Guide**: [src/lg_agents/README.md](src/lg_agents/README.md#step-4-hitl-통합)
#### HITL Integration Architecture
```mermaid
sequenceDiagram
participant User
participant HITL_Interface
participant DeepResearch
participant Human
User->>DeepResearch: Research Request
DeepResearch->>DeepResearch: Data Collection and Analysis
DeepResearch->>HITL_Interface: Draft Review Request
HITL_Interface->>Human: Approval/Modification Request
Human->>HITL_Interface: Provide Feedback
HITL_Interface->>DeepResearch: Deliver Feedback
DeepResearch->>DeepResearch: Incorporate Feedback
DeepResearch->>User: Final Report
```
#### 🎯 HITL System Overview
This HITL system is designed to obtain human approval for significant decisions made by AI agents. It is integrated with the A2A (Agent-to-Agent) protocol and can manage approval requests through a real-time web dashboard.
#### 🏗️ Key Components
1. **HITL Backend (Python)**
- `HITLManager`: Manages approval requests
- `ApprovalStorage`: Redis-based data storage
- `NotificationService`: Multi-channel notifications
- `HITLResearchAgent`: A2A integrated research agent
2. **Web Dashboard (React)**
- Real-time management of approval requests
- WebSocket-based real-time updates
- CopilotKit simulation chat
3. **A2A Integration**
- Agent communication based on A2A protocol
- Integration of TaskStore, EventQueue
- Multi-stage approval workflow
#### 🔄 HITL Workflow
**Research Agent Approval Stages**:
1. **Research Plan Approval** (`CRITICAL_DECISION`)
- Approval of research topic and plan
- Options: [Approve, Request Modification, Cancel]
2. **Data Validation** (`DATA_VALIDATION`)
- Validation of significant findings
- High-priority approval
3. **Final Report Approval** (`FINAL_REPORT`)
- Review of the complete research report
- Approval of final deliverables
#### Approval Types
- `CRITICAL_DECISION`: Significant decisions
- `DATA_VALIDATION`: Data validation
- `FINAL_REPORT`: Final report
- `BUDGET_APPROVAL`: Budget approval
- `SAFETY_CHECK`: Safety inspection
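The approval types and stage options above could be modeled as a small enum plus dataclass. This is a sketch under assumed names (`ApprovalType`, `ApprovalRequest`); the repo's actual `HITLManager` classes may differ:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class ApprovalType(Enum):
    CRITICAL_DECISION = "critical_decision"
    DATA_VALIDATION = "data_validation"
    FINAL_REPORT = "final_report"
    BUDGET_APPROVAL = "budget_approval"
    SAFETY_CHECK = "safety_check"

@dataclass
class ApprovalRequest:
    request_id: str
    approval_type: ApprovalType
    # Defaults mirror the research-plan stage: approve / request modification / cancel.
    options: List[str] = field(
        default_factory=lambda: ["approve", "request_modification", "cancel"]
    )
    decision: Optional[str] = None
    decided_by: Optional[str] = None

    def decide(self, decision: str, decided_by: str) -> None:
        # Reject decisions that are not among the offered options.
        if decision not in self.options:
            raise ValueError(f"unknown decision: {decision!r}")
        self.decision = decision
        self.decided_by = decided_by
```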
---
## 🏗️ Unified A2A Agent Server
### ✨ Advantages of the New Unified Server
Previously, a separate server file was needed for each agent, but now all A2A agents can be managed with **one unified server**.
#### 🔄 Migration Guide
| Old Method | New Method |
|------------|------------|
| `python -m src.a2a_integration.server` | `python -m src.a2a_integration.unified_server --agent-type web_search` |
| `python -m src.lg_agents.run_hitl_agent` | `python -m src.a2a_integration.unified_server --agent-type hitl_research` |
| Individual agent servers | `python -m src.a2a_integration.unified_server --agent-type [planner/researcher/writer/evaluator]` |
#### 🎛️ Usage
```bash
# 1. Run a specific agent
python -m src.a2a_integration.unified_server --agent-type hitl_research --host localhost
# 2. Check the list of supported agents
python -m src.a2a_integration.unified_server --list-agents
# 3. Sequentially run all agents (for development)
python -m src.a2a_integration.unified_server
```
#### 🤖 Supported Agent Types
| Agent Type | Description | Port | Key Features |
|-------------|-------------|------|--------------|
| `web_search` | MCP-based web search agent | 8080 | Tavily web search, news search |
| `hitl_research` | HITL integrated research agent | 8081 | In-depth research, human approval flow |
| `planner` | Research planning agent | 8085 | Systematic research planning |
| `researcher` | Information gathering agent | 8086 | Investigating various sources with MCP tools |
| `writer` | Report writing agent | 8087 | Writing research results into reports |
| `evaluator` | Quality assessment agent | 8088 | Evaluating report quality and providing feedback |
#### 🔗 A2A Standard Endpoints
Each agent provides the following standard endpoints:
- **Agent Card JSON**: `/.well-known/agent-card.json`
- **JSON-RPC**: `/` (main agent endpoint)
- **Health Check**: `/health` (check server status)
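A quick way to verify an agent is up is to fetch its card from the well-known path. The helper below only builds the URL (pure stdlib, so it needs no running server); `agent_card_url` is a hypothetical name, and the commented line shows one possible fetch:

```python
from urllib.parse import urljoin

WELL_KNOWN_PATH = ".well-known/agent-card.json"

def agent_card_url(base_url: str) -> str:
    # Normalize the base with a trailing slash so urljoin appends the path.
    return urljoin(base_url.rstrip("/") + "/", WELL_KNOWN_PATH)

# Example: the web_search agent from the table above runs on port 8080.
url = agent_card_url("http://localhost:8080")
# import urllib.request; card = urllib.request.urlopen(url).read()  # needs a running agent
```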
---
## 🚀 HITL System Execution Guide
### 💻 Environment Setup
#### 📁 Docker Compose File Configuration
| File Name | Purpose | Target Stage |
|-----------|---------|--------------|
| `docker-compose.yml` | **HITL System** (Redis + management tools) | **Step 4** (current) |
| `docker-compose-step3.yml` | Multi-agent system (MCP + A2A) | Step 3 |
#### Method 1: Using Docker Compose (Recommended)
```bash
# Start Redis with Docker
./docker-start.sh # macOS/Linux
# or
docker-start.bat # Windows
# Or use Make
make up # Start Redis only
make up-tools # Start Redis + Redis Commander
# To switch to Step 3 multi-agent
docker-compose -f docker-compose-step3.yml up -d
```
#### Method 2: Local Installation
```bash
# Install Redis (macOS)
brew install redis
# Install Python dependencies
pip install redis fastapi uvicorn websockets aiohttp
```
### 🎯 Starting the System
#### Unified Execution
```bash
# Start the entire HITL system (Redis auto-detection)
python scripts/start_hitl_system.py
# Or use Make
make start-hitl
```
#### Individual Execution
```bash
# 1. Start Redis (Docker)
docker-compose up -d redis
# 2. Start HITL research agent from unified A2A server
python -m src.a2a_integration.unified_server --agent-type hitl_research
```
### 🌐 Access Information
| Service | URL | Description |
|---------|-----|-------------|
| **Web Dashboard** | <http://localhost:8000/hitl> | UI for managing approval requests |
| **API Server** | <http://localhost:8000> | REST API endpoint |
| **Health Check** | <http://localhost:8000/health> | Check server status |
| **WebSocket** | ws://localhost:8000/ws | Real-time notifications |
| **Redis Commander** | <http://localhost:8081> | Redis management tool (admin/hitl2025) |
### 📡 API Usage
#### Start Research Request
```bash
curl -X POST "http://localhost:8000/api/research" \
-H "Content-Type: application/json" \
-d '{
"query": "In-depth analysis of future trends in artificial intelligence",
"task_id": "research_001"
}'
```
#### Approval Processing
```bash
# Approve
curl -X POST "http://localhost:8000/api/approvals/{request_id}/approve" \
-H "Content-Type: application/json" \
-d '{
"request_id": "req_123",
"decision": "Approve",
"decided_by": "reviewer_name"
}'
# Reject
curl -X POST "http://localhost:8000/api/approvals/{request_id}/reject" \
-H "Content-Type: application/json" \
-d '{
"request_id": "req_123",
"decision": "Reject",
"decided_by": "reviewer_name",
"reason": "Reason for rejection"
}'
```
### 🎛️ Environment Configuration
Set environment variables (`.env` file):
```bash
# Redis configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_DB=0
# HITL server configuration
HITL_HOST=0.0.0.0
HITL_PORT=8000
# Notification settings - Email (optional)
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=your-email@gmail.com
SMTP_PASSWORD=your-app-password
FROM_EMAIL=hitl-system@example.com
TO_EMAILS=reviewer1@example.com,reviewer2@example.com
# Slack settings (optional)
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
```
### 🔧 Docker Compose Commands
```bash
# Help
make help
# Start/stop services
make up # Start Redis
make up-tools # Start Redis + management tools
make down # Stop services
make restart # Restart
# Monitoring
make status # Check status
make logs # Check logs
make redis-cli # Access Redis CLI
# Development environment
make dev-setup # Set up development environment
make test # System tests
# Cleanup
make clean # Remove containers and volumes
```
### 🧪 System Testing
```bash
# HITL system tests
python scripts/test_hitl_system.py
# Or use Make
make test
```
### 🔧 Troubleshooting
#### Common Issues
1. **Redis Connection Failure**
```bash
# Start Redis with Docker
make up
# Or start local Redis
brew services start redis
```
2. **Port Conflict**
```bash
# Check port usage
lsof -i :8000
lsof -i :6379
```
3. **Docker Related Issues**
```bash
# Clean up containers
make clean
# Restart
make up
```
### 📊 Monitoring
```bash
# Check real-time logs
python scripts/start_hitl_system.py 2>&1 | tee hitl_system.log
# Check Docker logs
make logs
# Check Redis status
make redis-cli
```
---
## Testing
### How to Run Tests
The project includes various tests:
#### 1. Agent Tests
```bash
# Test agent functionality
python tests/run_tests.py agent
# Or run directly
python tests/test_simple_mcp_agent.py
```
#### 2. Integration Tests
```bash
# Integration tests using pytest
python tests/run_tests.py integration
# Or run directly
python tests/integration_test.py
```
#### 3. Run All Tests
```bash
# Run all tests
python tests/run_tests.py all
# Or use existing script
./scripts/test-all.sh
```
### Test Structure
```
tests/
├── test_simple_mcp_agent.py # Agent functionality tests
├── integration_test.py # Integration tests (pytest)
└── run_tests.py # Test execution script
```
### Test Prerequisites
1. **MCP Server Running**: `docker-compose up -d`
2. **Set Environment Variables**: Set `OPENAI_API_KEY`
3. **Install Dependencies**: Complete `uv sync`
---
### Reference Docs
- [RepoMix](https://repomix.com/)
- [LangGraph-llms-full.txt](https://langchain-ai.github.io/langgraph/llms-full.txt)
- [LangChain-llms.txt](https://python.langchain.com/llms.txt)
- [LangChain-MCP-Adapter](https://github.com/langchain-ai/langchain-mcp-adapters)
- [FastMCP-llms-full.txt](https://gofastmcp.com/llms-full.txt)
- [Official-A2A-Python-SDK](https://github.com/a2aproject/a2a-python)
- [Official-A2A-Samples](https://github.com/a2aproject/a2a-samples)