# 🚀 Atlassian AI Agent
[Python](https://www.python.org/)
[Poetry](https://python-poetry.org/)
[License](LICENSE)
[Conventional Commits](https://github.com/cnoe-io/openapi-mcp-codegen/actions/workflows/conventional_commits.yml)
[Ruff](https://github.com/cnoe-io/openapi-mcp-codegen/actions/workflows/ruff.yml)
[Super-Linter](https://github.com/cnoe-io/openapi-mcp-codegen/actions/workflows/superlinter.yml)
---
## 🧪 Evaluation Badges
| Claude | Gemini | OpenAI | Llama |
|--------|--------|--------|-------|
| [Claude Evals](https://github.com/cnoe-io/agent-atlassian/actions/workflows/claude-evals.yml) | [Gemini Evals](https://github.com/cnoe-io/agent-atlassian/actions/workflows/gemini-evals.yml) | [OpenAI Evals](https://github.com/cnoe-io/agent-atlassian/actions/workflows/openai-evals.yml) | [Llama Evals](https://github.com/cnoe-io/agent-atlassian/actions/workflows/openai-evals.yml) |
---
- 🤖 **Atlassian Agent** is an LLM-powered agent built using the [LangGraph ReAct Agent](https://langchain-ai.github.io/langgraph/agents/agents/) workflow and [MCP tools](https://modelcontextprotocol.io/introduction).
- 🌐 **Protocol Support:** Compatible with [ACP](https://github.com/agntcy/acp-spec) and [A2A](https://github.com/google/A2A) protocols for integration with external user clients.
- 🛡️ **Secure by Design:** Enforces Atlassian API token-based RBAC and supports external authentication for strong access control.
- 🔌 **Integrated Communication:** Uses [langchain-mcp-adapters](https://github.com/langchain-ai/langchain-mcp-adapters) to connect with the Atlassian MCP server within the LangGraph ReAct Agent workflow.
- 🏭 **First-Party MCP Server:** The MCP server is generated by our first-party [openapi-mcp-codegen](https://github.com/cnoe-io/openapi-mcp-codegen/tree/main) utility, ensuring version/API compatibility and software supply chain integrity.
---
## 🏗️ Architecture
```mermaid
flowchart TD
subgraph Client Layer
A[User Client ACP/A2A]
end
subgraph Agent Transport Layer
B[AGNTCY ACP<br/>or<br/>Google A2A]
end
subgraph Agent Graph Layer
C[LangGraph ReAct Agent]
end
subgraph Tools/MCP Layer
D[LangGraph MCP Adapter]
E[Atlassian MCP Server]
F[Atlassian API Server]
end
A --> B --> C
C --> D
D -.-> C
D --> E --> F --> E
```
---
## ✨ Features
- 🤖 **LangGraph + LangChain MCP Adapter** for agent orchestration
- 🧠 **Azure OpenAI GPT-4o** as the LLM backend
- 🔗 Connects to Atlassian via a dedicated [Atlassian MCP server](https://github.com/severity1/atlassian-mcp)
- 🔄 **Multi-protocol support:** Compatible with both **ACP** and **A2A** protocols for flexible integration and multi-agent orchestration
---
## ⚙️ Getting Started
### 1️⃣ Create/Update `.env`
```env
LLM_PROVIDER=
AGENT_NAME=
CNOE_AGENT_ATLASSIAN_ID=
CNOE_AGENT_ATLASSIAN_API_KEY=
CNOE_AGENT_ATLASSIAN_PORT=
AGENT_HOST=
AGENT_PORT=
AZURE_OPENAI_API_KEY=
OPENAI_API_VERSION=
AZURE_OPENAI_API_VERSION=
AZURE_OPENAI_DEPLOYMENT=
AZURE_OPENAI_ENDPOINT=
ATLASSIAN_TOKEN=
ATLASSIAN_EMAIL=
ATLASSIAN_API_URL=
ATLASSIAN_VERIFY_SSL=
GOOGLE_API_KEY=
```
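These variables are read from the environment at startup. As a framework-free illustration (not part of the repo; the project itself may load them via the Makefile or a dotenv library), a minimal `.env` parser looks like this:

```python
import os
from typing import Dict, Iterable

def parse_env(lines: Iterable[str]) -> Dict[str, str]:
    """Parse simple KEY=VALUE lines as used in the .env file above.
    Blank lines and # comments are skipped; surrounding quotes are stripped."""
    values: Dict[str, str] = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"')
    return values

# Typical use: read the file and export into the process environment so the
# agent and the spawned MCP server both inherit the settings.
sample = ['AGENT_PORT=8000', 'ATLASSIAN_API_URL="https://example.atlassian.net"']
os.environ.update(parse_env(sample))
```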
---
### 2️⃣ Start Workflow Server (ACP or A2A)
You can start the workflow server in either ACP or A2A mode:
- **ACP Mode:**
```bash
make run-acp
```
- **A2A Mode:**
```bash
make run-a2a
```
---
## 🧪 Usage
### ▶️ Test with Atlassian Server
#### 🏃 Quick Start: Run Atlassian Locally with Minikube
If you don't have an existing Atlassian server, you can quickly spin one up using [Minikube](https://minikube.sigs.k8s.io/docs/):
1. **Start Minikube:**
```bash
minikube start
```
1. **Install Atlassian in the `atlassian` namespace:**
```bash
kubectl create namespace atlassian
kubectl apply -n atlassian -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
1. **Expose the Atlassian API server:**
```bash
kubectl port-forward svc/atlassian-server -n atlassian 8080:443
```
The API will be available at `https://localhost:8080`.
1. **Get the Atlassian admin password:**
```bash
kubectl -n atlassian get secret atlassian-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
```
1. **(Optional) Install Atlassian CLI:**
```bash
brew install atlassian
# or see https://argo-cd.readthedocs.io/en/stable/cli_installation/
```
For more details, see the [official getting started guide](https://argo-cd.readthedocs.io/en/stable/getting_started/#1-install-argo-cd).
### 1️⃣ Run the ACP Client
To interact with the agent in **ACP mode**:
```bash
make run-acp-client
```
**Configure Environment Variables**
Create or update a `.env` file in your project root with the following:
```env
AGENT_ID="<YOUR_AGENT_ID>"
API_KEY="<YOUR_API_KEY>"
WFSM_PORT="<YOUR_ACP_SERVER_PORT>"
```
**Example Interaction**
```
> Your Question: how can you help?
Agent: I can assist you with managing applications in Atlassian, including tasks such as:
- **Listing Applications**: Retrieve a list of applications with filtering options.
- **Getting Application Details**: Fetch detailed information about a specific application.
- **Creating Applications**: Create new applications in Atlassian.
- **Updating Applications**: Update existing applications.
- **Deleting Applications**: Remove applications from Atlassian.
- **Syncing Applications**: Synchronize applications to a specific Git revision.
- **Getting User Info**: Retrieve information about the current user.
- **Getting Atlassian Settings**: Access server settings.
- **Getting Plugins**: List available plugins.
- **Getting Version Information**: Retrieve Atlassian API server version.
```
---
### 2️⃣ Run the A2A Client
To interact with the agent in **A2A mode**:
```bash
make run-a2a-client
```
**Sample Streaming Output**
When running in A2A mode, you’ll see streaming responses like:
```
============================================================
RUNNING STREAMING TEST
============================================================
--- Single Turn Streaming Request ---
--- Streaming Chunk ---
The current version of Atlassian is **v2.13.3+a25c8a0**. Here are some additional details:
- **Build Date:** 2025-01-03
- **Git Commit:** a25c8a0eef7830be0c2c9074c92dbea8ff23a962
- **Git Tree State:** clean
- **Go Version:** go1.23.1
- **Compiler:** gc
- **Platform:** linux/amd64
- **Kustomize Version:** v5.4.3
- **Helm Version:** v3.15.4+gfa9efb0
- **Kubectl Version:** v0.31.0
- **Jsonnet Version:** v0.20.0
```
---
## 🧬 Internals
- 🛠️ Uses [`create_react_agent`](https://docs.langchain.com/langgraph/agents/react/) for tool-calling
- 🔌 Tools loaded from the **Atlassian MCP server** (submodule)
- ⚡ MCP server launched via `uv run` with `stdio` transport
- 🕸️ Single-node LangGraph for inference and action routing
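The reason/act loop that `create_react_agent` implements can be sketched without any framework. The snippet below is an illustration only, with a stubbed model and a hypothetical tool registry standing in for the tools loaded from the MCP server:

```python
from typing import Callable, Dict, List

# Hypothetical tool registry; in the real agent these come from the MCP server.
TOOLS: Dict[str, Callable[[str], str]] = {
    "get_version": lambda _arg: "v2.13.3+a25c8a0",
}

def fake_model(messages: List[dict]) -> dict:
    """Stub for the LLM: requests a tool on the first turn, then answers.
    A real ReAct agent lets the model decide this from the tool schemas."""
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "tool_call": {"name": "get_version", "arg": ""}}
    observation = next(m for m in messages if m["role"] == "tool")["content"]
    return {"role": "assistant", "content": f"The current version is {observation}."}

def react_loop(question: str) -> str:
    """Minimal ReAct loop: call the model, execute any requested tool,
    feed the observation back, and repeat until a final answer appears."""
    messages: List[dict] = [{"role": "user", "content": question}]
    while True:
        reply = fake_model(messages)
        messages.append(reply)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]
        result = TOOLS[call["name"]](call["arg"])
        messages.append({"role": "tool", "content": result})

print(react_loop("show atlassian version"))
```

In the actual agent, LangGraph routes between the model node and the tool node instead of this hand-written `while` loop.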
---
## 📁 Project Structure
```text
agent_atlassian/
│
├── agent.py # LLM + MCP client orchestration
├── langgraph.py # LangGraph graph definition
├── __main__.py # CLI entrypoint
├── state.py # Pydantic state models
└── atlassian_mcp/ # Git submodule: Atlassian MCP server
client/
└── client_agent.py # Agent ACP Client
```
---
## 🧩 MCP Submodule (Atlassian Tools)
This project uses a **first-party MCP module** generated from the Atlassian OpenAPI specification using our [openapi-mcp-codegen](https://github.com/cnoe-io/openapi-mcp-codegen/tree/main/examples) utility. The generated MCP server is included as a git submodule in `atlassian_mcp/`.
All Atlassian-related LangChain tools are defined by this MCP server implementation, ensuring up-to-date API compatibility and supply chain integrity.
---
## 🔌 MCP Integration
The agent uses [`MultiServerMCPClient`](https://github.com/langchain-ai/langchain-mcp-adapters) to communicate with MCP-compliant services.
**Example (stdio transport):**
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

async with MultiServerMCPClient(
    {
        "atlassian": {
            "command": "uv",
            "args": ["run", "/abs/path/to/atlassian_mcp/server.py"],
            "env": {
                "ATLASSIAN_TOKEN": atlassian_token,
                "ATLASSIAN_API_URL": atlassian_api_url,
                "ATLASSIAN_VERIFY_SSL": "false"
            },
            "transport": "stdio",
        }
    }
) as client:
    # model: your chat model instance (e.g. an Azure OpenAI GPT-4o client)
    agent = create_react_agent(model, client.get_tools())
```
**Example (SSE transport):**
```python
async with MultiServerMCPClient(
    {
        "atlassian": {
            "transport": "sse",
            "url": "http://localhost:8000"
        }
    }
) as client:
    ...
```
---
## 🧪 Evals
### Running Evals
This evaluation uses [agentevals](https://github.com/langchain-ai/agentevals) to perform strict trajectory match evaluation of the agent's behavior. To run the evaluation suite:
```bash
make evals
```
This will:
- Set up and activate the Python virtual environment
- Install evaluation dependencies (`agentevals`, `tabulate`, `pytest`)
- Run strict trajectory matching tests against the agent
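Strict trajectory matching checks that the sequence of graph nodes the agent visited is exactly equal to a reference sequence. The following is a simplified, framework-free stand-in for what agentevals' strict matcher scores (names are illustrative, not the library's API):

```python
from typing import List

def strict_trajectory_match(actual: List[str],
                            references: List[List[str]]) -> dict:
    """Score True iff the actual node trajectory exactly equals any one of
    the reference trajectories; no partial or out-of-order credit."""
    return {"score": any(actual == ref for ref in references)}

# The agent graph visits __start__ and then the single agent node.
references = [["__start__", "agent_atlassian"]]
print(strict_trajectory_match(["__start__", "agent_atlassian"], references))
# → {'score': True}
print(strict_trajectory_match(["__start__", "other_node"], references))
# → {'score': False}
```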
#### Example Output
```
=======================================
Setting up the Virtual Environment
=======================================
Virtual environment already exists.
=======================================
Activating virtual environment
=======================================
To activate venv manually, run: source .venv/bin/activate
. .venv/bin/activate
Running Agent Strict Trajectory Matching evals...
Installing agentevals with Poetry...
. .venv/bin/activate && uv add agentevals tabulate pytest
...
set -a && . .env && set +a && uv run evals/strict_match/test_strict_match.py
...
Test ID: atlassian_agent_1
Prompt: show atlassian version
Reference Trajectories: [['__start__', 'agent_atlassian']]
Note: Shows the version of the Atlassian Server Version.
...
Results:
{'score': True}
...
```
#### Evaluation Results
[Latest Strict Match Eval Results](evals/strict_match/README.md)
---
## 📜 License
Apache 2.0 (see [LICENSE](./LICENSE))
---
## 👥 Maintainers
See [MAINTAINERS.md](MAINTAINERS.md)
- Contributions welcome via PR or issue!
---
## 🙏 Acknowledgements
- [LangGraph](https://github.com/langchain-ai/langgraph) and [LangChain](https://github.com/langchain-ai/langchain) for agent orchestration frameworks.
- [langchain-mcp-adapters](https://github.com/langchain-ai/langchain-mcp-adapters) for MCP integration.
- [AGNTCY Agent Connect Protocol (ACP)](https://docs.agntcy.org/pages/syntactic_sdk/connect.html)
- [AGNTCY Agent Gateway Protocol (AGP)](https://docs.agntcy.org/pages/messaging_sdk/agp-howto.html)
- [AGNTCY Workflow Server Manager (WFSM)](https://github.com/agntcy/workflow-srv-mgr) for deployment and orchestration.
- [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) for the protocol specification.
- [Google A2A](https://github.com/google/A2A/tree/main)
- The open source community for ongoing support and contributions.