# Agent to Agent (A2A) Protocol Sample: The Comedian Agent
This project is a sample implementation accompanying a YouTube video that explains and demonstrates the Agent to Agent (A2A) protocol. It features a simple "Comedian Agent" that tells jokes, built using Python, LangChain, and Ollama, and exposes its functionality using the A2A protocol.
## About the A2A Protocol
The Agent to Agent (A2A) protocol, initiated by Google and now community-maintained, aims to enable communication and interoperability between different AI agents, regardless of their underlying framework, ecosystem, or provider. This allows for the creation of multi-agent teams where agents can discover each other's capabilities and collaborate on tasks.
Key principles of A2A:
- **Simple**: Uses existing standards.
- **Enterprise-Oriented**: Includes authentication, security, privacy, traceability, and monitoring.
- **Asynchronous**: Allows for human participation in long-running tasks.
- **Multimodal**: Can operate with various data types.
- **Opaque**: Agents don't need to share internal plans or tools.
Functionally, A2A involves:
- **Users**: People or services initiating tasks.
- **Clients**: Entities (services, agents, apps) that request actions from external agents on behalf of users.
- **Remote Agents**: Server-based agents that interact with other agents.
The protocol uses HTTP for transport, JSON-RPC 2.0 for messaging, and authentication schemes aligned with the OpenAPI specification. A core concept is the **Agent Card**, a discoverable description of an agent's capabilities, skills, and authentication mechanisms.
Communication revolves around tasks, artifacts (results), messages (turns exchanged between client and agent), parts (content pieces), and push notifications.
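To make the Agent Card concept concrete, here is an illustrative card for this sample as a plain Python dict. The field names follow the A2A specification; the specific values (URL, skill id, version) are assumptions for this Comedian Agent, not taken from the actual `src/main.py`.

```python
# Illustrative A2A Agent Card for the Comedian Agent.
# Field names follow the A2A spec; values are assumptions for this sample.
AGENT_CARD = {
    "name": "Comedian Agent",
    "description": "Tells jokes about a topic supplied by the user.",
    "url": "http://localhost:8000/",  # where the agent is served (assumed)
    "version": "0.1.0",
    "capabilities": {"streaming": False, "pushNotifications": False},
    "defaultInputModes": ["text"],
    "defaultOutputModes": ["text"],
    "skills": [
        {
            "id": "tell_joke",  # hypothetical skill id
            "name": "Tell a joke",
            "description": "Generates a joke based on user input.",
            "tags": ["comedy", "jokes"],
        }
    ],
}
```

Clients typically discover a card like this by fetching a well-known path on the agent's host, then use the listed skills and capabilities to decide how to interact.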
## Project Overview
This repository provides a practical example of an A2A **Remote Agent**.
- **The Comedian Agent (`src/llm.py`)**: A simple agent built with LangChain and powered by a local Ollama model (e.g., Gemma). Its sole purpose is to generate jokes based on user input.
- **The A2A Server (`src/main.py`)**: This module uses the A2A SDK (specifically, abstractions provided by Google) to expose the Comedian Agent over HTTP. It defines the Agent Card, including its skills (joke telling), and handles incoming requests.
The YouTube video explains how this server agent can be invoked by a separate client agent (a demo from Google using Gemini Flash) which discovers and interacts with our Comedian Agent.
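As a rough sketch of what `src/llm.py` might look like, the snippet below wires a joke-telling helper around a local Ollama model via LangChain. The prompt text, function names, and model name are assumptions; the real module may structure this differently. The LangChain import is deferred into the function so the prompt-building logic can be read and tested without `langchain-ollama` installed.

```python
# Hypothetical sketch of a joke-telling agent in the spirit of src/llm.py.
SYSTEM_PROMPT = (
    "You are a stand-up comedian. Reply with a single short joke "
    "about the topic the user gives you."
)

def build_messages(topic: str) -> list[tuple[str, str]]:
    """Build the (role, content) message list fed to the chat model."""
    return [("system", SYSTEM_PROMPT), ("user", topic)]

def tell_joke(topic: str) -> str:
    """Invoke a local Ollama model through LangChain (requires Ollama running)."""
    # Imported lazily so the sketch is readable without langchain installed.
    from langchain_ollama import ChatOllama

    llm = ChatOllama(model="gemma3:4b")  # model name is an assumption
    return llm.invoke(build_messages(topic)).content
```

The A2A server layer then only needs to call something like `tell_joke(...)` from its task executor; the agent's internals stay opaque to clients, in line with the protocol's design.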
## Technologies Used
- **Python**: The core programming language.
- **UV**: Used as the package manager. Dependencies are listed in `pyproject.toml`.
- **A2A SDK**: Libraries from Google for implementing A2A-compatible agents.
- **LangChain**: Framework for developing applications powered by language models.
- **Langchain-Ollama**: Integration for using Ollama-hosted local LLMs.
- **Uvicorn**: ASGI server to run the Starlette application.
- **Pydantic**: For data validation and settings management.
## Setup and Installation
1. **Clone the repository:**
```bash
git clone <repository-url>
cd <repository-directory>
```
2. **Ensure UV is installed.** If not, follow the official UV installation guide.
3. **Create a virtual environment and install dependencies:**
```bash
uv venv
source .venv/bin/activate # On Linux/macOS
# .venv\Scripts\activate # On Windows
uv sync
```
## Running the Agent Server
The agent server can be started by running the `main.py` script.
1. **Configure Settings (if necessary)**:
The application uses settings defined in `src/settings.py` (which likely loads from environment variables or a `.env` file). Ensure these are set correctly, especially:
- `HOST`: The host to run the server on (e.g., `0.0.0.0` or `localhost`).
- `PORT`: The port to run the server on (e.g., `8000`).
- `MODEL`: The Ollama model to use (e.g., `gemma3:4b`). Make sure this model is available in your local Ollama instance.
2. **Run the server:**
```bash
python src/main.py
```
The server will start, and the Comedian Agent will be accessible at the configured host and port (e.g., `http://localhost:8000`). You can then use A2A client tools or another A2A agent to interact with it.
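For a sense of what "interacting with it" looks like on the wire, the sketch below builds a JSON-RPC 2.0 request for the A2A `message/send` method and posts it with the standard library. The method name and message shape follow a recent revision of the A2A spec (earlier SDK versions used `tasks/send`), so treat this as an assumption to check against the SDK version you run.

```python
import json
import uuid
from urllib import request

def build_send_request(text: str) -> dict:
    """Build a JSON-RPC 2.0 request for the A2A 'message/send' method.

    The message shape (role + parts) follows the A2A spec; treat the
    method name as an assumption tied to the spec/SDK revision in use.
    """
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "messageId": str(uuid.uuid4()),
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }

def send(url: str, text: str) -> dict:
    """POST the request to a running A2A server and return the parsed reply."""
    body = json.dumps(build_send_request(text)).encode()
    req = request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Requires the agent server from this repo to be running locally.
    print(send("http://localhost:8000/", "Tell me a joke about penguins"))
```

In practice you would use an A2A client library rather than raw HTTP, but seeing the envelope makes the "simple, existing standards" principle above tangible.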
## Project Structure
- `pyproject.toml`: Defines project metadata, dependencies, and tool configurations (like Ruff and Pyright).
- `src/`: Contains the main source code.
- `main.py`: Entry point for the A2A server application. Sets up the Agent Card, request handlers, and runs the Uvicorn server.
- `llm.py`: Defines the `JokeTeller` agent using LangChain and Ollama.
- `tasks.py`: (As inferred from the script) Contains the `AgentTaskExecutor` which handles the execution of tasks assigned to the agent.
- `settings.py`: (As inferred from `main.py`) Manages application settings.
- `logs.py`: (As inferred from `main.py`) Handles logging.
- `README.md`: This file.
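Since `settings.py` is only inferred above, here is a minimal stdlib sketch of what it might contain. The real module likely uses Pydantic (listed in the dependencies) for validation; the environment-variable names match the ones documented in the "Running the Agent Server" section, everything else is an assumption.

```python
import os
from dataclasses import dataclass, field

# Hypothetical sketch of src/settings.py. The real module likely uses
# Pydantic for validation; this stdlib version just shows the idea of
# reading HOST, PORT, and MODEL from the environment with defaults.
@dataclass
class Settings:
    host: str = field(default_factory=lambda: os.getenv("HOST", "localhost"))
    port: int = field(default_factory=lambda: int(os.getenv("PORT", "8000")))
    model: str = field(default_factory=lambda: os.getenv("MODEL", "gemma3:4b"))

settings = Settings()
```

With Pydantic's `BaseSettings`, the same fields would additionally get type coercion, `.env` file loading, and validation errors for malformed values.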
## Disclaimer
As highlighted in the YouTube video, the A2A SDK and related libraries are in an early stage of development. They are experimental and may not be suitable for production systems. The implementation details, especially on the client side and for some server-side abstractions, can be complex due to the evolving nature of these tools.
This project serves as a conceptual demonstration of the A2A protocol's potential.
## Contributing
This is a sample project for a YouTube video. Contributions are generally not expected, but feel free to fork and experiment.