# Agent to Agent with Gemini CLI, Claude Code and Ollama models
🤖 **Hybrid AI Agent Dialogue Desktop Application**
The Ollama A2A App is a desktop application that combines local LLMs from Ollama with the Gemini and Claude APIs, enabling dialogue between multiple AI agents. By pairing two agents, one for analysis and one for evaluation, the app aims to surface multifaceted perspectives and deeper insights than a single AI can provide.
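Conceptually, each discussion round alternates between the two agents, feeding each one the other's latest reply. Below is a minimal sketch of such a round loop using the official `ollama` Python package; the agent names, role prompts, and function are illustrative assumptions, not the app's actual internals:
```python
import ollama  # official Ollama Python client (pip install ollama)

def run_dialogue(topic: str, rounds: int = 2) -> list[dict]:
    """Alternate between an analysis agent and an evaluation agent (hypothetical)."""
    agents = [
        ("Analyst", "Analyze the topic and propose ideas."),         # assumed role prompt
        ("Evaluator", "Critically evaluate the previous message."),  # assumed role prompt
    ]
    history: list[dict] = []
    last_message = topic
    for _ in range(rounds):
        for name, role in agents:
            reply = ollama.chat(
                model="llama3:8b",
                messages=[
                    {"role": "system", "content": role},
                    {"role": "user", "content": last_message},
                ],
            )
            last_message = reply["message"]["content"]
            history.append({"agent": name, "text": last_message})
    return history
```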
## ✨ Key Features
- **🔄 Hybrid AI Dialogue**: Freely combine Ollama's local models with the Gemini and Claude cloud APIs for dialogue.
- **🖥️ Intuitive GUI**: A simple and lightweight interface built with Python's standard tkinter library.
- **📝 Markdown Export**: Easily save the dialogue history in a well-organized Markdown format (see the sketch after this list).
- **⚙️ Flexible Configuration**: Adjust the models used for dialogue, the number of discussion rounds, and the timeout for each response flexibly through the GUI.
- **🌍 Cross-Platform**: Operates on macOS and Linux.
- **🚀 Easy Setup**: Set up the environment and launch the app in just a few commands with the modern Python package manager `uv`.
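Regarding the Markdown export feature, the saved transcript could be produced roughly as follows. This is a hypothetical sketch reusing the `history` shape from the dialogue loop above, not the app's actual exporter:
```python
from datetime import datetime
from pathlib import Path

def export_markdown(history: list[dict], path: str | Path) -> None:
    """Write the dialogue history as a simple Markdown transcript."""
    lines = [f"# Dialogue Export ({datetime.now():%Y-%m-%d %H:%M})", ""]
    for turn in history:  # assumed shape: {"agent": ..., "text": ...}
        lines += [f"## {turn['agent']}", "", turn["text"], ""]
    Path(path).write_text("\n".join(lines), encoding="utf-8")

# Example: export_markdown(run_dialogue("Is A2A better than a single agent?"), "dialogue.md")
```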
## 🛠️ System Requirements
- **OS**: macOS or Linux
- **Python**: 3.12 or later
- **uv**: Must be [installed](https://astral.sh/uv/install.sh)
- **Ollama**: The latest version must be installed and the service must be running
- **Important**: The context size (number of tokens) supported by an Ollama model varies by model. If the "timeout (seconds)" in the settings is too short, or the internal `num_ctx` and `num_predict` values exceed the model's maximum context length, the application may crash or behave unexpectedly. Check the recommended context size for your Ollama model and set appropriate values.
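For reference, `num_ctx` and `num_predict` are standard Ollama generation options. A brief sketch of how they are passed via the official `ollama` Python package; the model tag and values here are examples only:
```python
import ollama

response = ollama.chat(
    model="llama3:8b",
    messages=[{"role": "user", "content": "Summarize the agent-to-agent pattern."}],
    options={
        "num_ctx": 8192,      # context window in tokens; must not exceed the model's maximum
        "num_predict": 1024,  # upper bound on tokens generated per response
    },
)
print(response["message"]["content"])
```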
## 🚀 Setup & Execution
### 1. Prepare Ollama
First, install Ollama from the [official site](https://ollama.com/) and start the service.
```bash
# Start the Ollama service (run in terminal)
ollama serve
```
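If you want to confirm the service is reachable before launching the app, you can query Ollama's local HTTP endpoint (port 11434 by default). A minimal check using only the Python standard library:
```python
import json
import urllib.request

# Ollama listens on http://localhost:11434 by default
try:
    with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
        models = json.load(resp).get("models", [])
        print(f"Ollama is running with {len(models)} model(s) installed.")
except OSError:
    print("Ollama is not reachable. Start it with `ollama serve` first.")
```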
Next, download the models you want to use in the dialogue.
```bash
# Example: Download lightweight and high-performance models
ollama pull llama3:8b
ollama pull gemma:7b
```
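A pulled model can be smoke-tested from Python before using it in the app; a quick sanity check, assuming the `ollama` package is installed:
```python
import ollama

# Any tag you pulled above works here
reply = ollama.chat(
    model="llama3:8b",
    messages=[{"role": "user", "content": "Reply with a single word: ready?"}],
)
print(reply["message"]["content"])
```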
### 2. Set Up and Launch the Application
In the directory where you cloned or downloaded the repository, run the following commands.
```bash
# 1. Create a virtual environment
uv venv
# 2. Install the project (dependencies are installed automatically)
uv pip install -e .
# 3. Launch the application
uv run ollama_a2a_app/main.py
```
The `uv run` command executes the script with the Python interpreter from the virtual environment.
### 3. Set API Key (Optional)
If you are using Gemini or Claude models, you will need to set an API key.
1. Click the `⚙️ Settings` button in the upper right corner of the application.
2. Enter the Gemini or Claude API key in the dialog that appears and press the "Save" button.
3. The API key will be securely saved in a file named `.gemini_api_key` or `.claude_api_key` in the project root.
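A sketch of how such key files might be read at startup; the helper function is hypothetical, only the file names come from the app:
```python
from pathlib import Path

def load_api_key(filename: str) -> str | None:
    """Read an API key saved by the settings dialog, if present (hypothetical helper)."""
    key_file = Path(filename)  # .gemini_api_key / .claude_api_key in the project root
    return key_file.read_text(encoding="utf-8").strip() if key_file.is_file() else None

gemini_key = load_api_key(".gemini_api_key")
claude_key = load_api_key(".claude_api_key")
```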
## 📄 License
This project is licensed under the **Apache License 2.0**.