# Single Agent Knowledge Q&A
## todo: To be completed
## 🚀 Introduction
An intelligent agent system built on **LangGraph** and the **Google A2A protocol**, with a React frontend and a Python backend. Users interact with the system through multi-turn conversations while it displays its "thinking process" in real time, links entities, queries the database, and annotates answers with source tags, all with streaming output.
---
## Features
1. **Multi-Data Source Switching**
   Users can select different data sources (e.g., databases, knowledge bases), and the backend dynamically dispatches queries to the selected source.
2. **Thinking Process Display**
   The backend uses LangGraph's ReAct mode, and the agent's reasoning chain is displayed step by step in a card format.
3. **Entity Association Database Queries in Answers**
The system identifies entities in the answers and initiates associated queries to the database, integrating the results into the final answer.
4. **Reference Source Tags in Answers**
   Each answer is annotated with its reference sources, such as `[Source: Wiki]` or `[Source: DB]`.
5. **Reference Sources in Answer Data Stream**
   The SSE stream of the agent's internal execution also carries source information for each step, so the frontend can display it.
6. **Multi-Turn Conversation Support**
Supports session context management, maintaining memory and context chains during multi-turn interactions.
7. **Streaming Output**
   Supports A2A subscription tasks (`sendSubscribe`), allowing the client to receive incremental output via SSE and refresh the interface.
8. **MCP Support**
   Plug-and-play integration of MCP (Model Context Protocol) tools.
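The source-tag format above (`[Source: Wiki]`, `[Source: DB]`) can be sketched as a small helper. The `TaggedChunk` type and `render` function are illustrative assumptions, not part of the project's actual code:

```python
from dataclasses import dataclass


@dataclass
class TaggedChunk:
    """A piece of agent output plus the data source it came from."""
    text: str
    source: str  # e.g. "Wiki" or "DB"


def render(chunk: TaggedChunk) -> str:
    # Append the reference tag in the [Source: ...] format used above.
    return f"{chunk.text} [Source: {chunk.source}]"


print(render(TaggedChunk("LangGraph is an agent framework.", "Wiki")))
# → LangGraph is an agent framework. [Source: Wiki]
```

Both intermediate thinking steps and the final answer could carry such a tag, which is what makes the reasoning chain traceable on the frontend.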
---
## 📂 Project Structure
```
```
---
## ⚙️ Tech Stack
* **Backend (Python)**
* LangGraph + LangChain to create ReAct agent (supports tool invocation & thinking process modeling)
* Google A2A Python SDK (implements `send` and `sendSubscribe` interfaces)
  * Multi-data-source adapter design: a custom `data_sources.py` exposes a unified query interface
  * Entity recognition and database query module (`entity_linker`)
* SSE streaming task updates: pushing thinking/source information through methods like `enqueue_events_for_sse`
* **Frontend (React)**
* UI supporting multi-data source selection
* Session display component: shows user questions, Agent thinking (chain process), and final answers
* Real-time update display: receives intermediate processes from the server via SSE and renders them promptly
* Reference source tag UI: displays source tags corresponding to each piece of content (thinking or answer)
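The unified data-source interface mentioned for `data_sources.py` might look like the following. The class names and `query` signature are assumptions for illustration; the actual module may differ:

```python
from abc import ABC, abstractmethod


class DataSource(ABC):
    """Hypothetical unified interface that every adapter implements."""
    name: str

    @abstractmethod
    def query(self, question: str) -> str:
        ...


class WikiSource(DataSource):
    name = "Wiki"

    def query(self, question: str) -> str:
        return f"wiki result for {question!r}"


class DBSource(DataSource):
    name = "DB"

    def query(self, question: str) -> str:
        return f"db rows for {question!r}"


# Registry keyed by name, so the backend can dispatch on the
# data source the user selected in the frontend.
SOURCES = {s.name: s for s in (WikiSource(), DBSource())}


def query_selected(source_name: str, question: str) -> str:
    return SOURCES[source_name].query(question)
```

Adding a new backend then only requires a new subclass and a registry entry; the agent's tool-calling code never changes.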
---
## 🧪 Usage Example
### Backend Startup
```bash
cd backend
```
### Frontend Startup
```bash
cd frontend
npm install
npm run start
```
### Interaction Process
1. The user selects a data source and inputs a question.
2. The frontend calls the backend A2A agent interface.
3. The backend reasons step by step, invoking data query tools and entity linking, and builds the agent output, streaming each step back via SSE.
4. The frontend continuously renders:
* Thinking content + source tags
* Final answer + entity query content and source identifiers
5. The user can then ask a follow-up question; the system maintains context for the ongoing multi-turn conversation.
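Step 3's per-step streaming can be sketched as an SSE generator. The step dictionaries and their fields are illustrative assumptions about the event payload, not the project's actual schema:

```python
import json
from typing import Iterable, Iterator


def sse_events(steps: Iterable[dict]) -> Iterator[str]:
    """Format each agent step as a Server-Sent Events frame."""
    for step in steps:
        # Each frame carries the step type, its text, and its source tag,
        # so the frontend can render thinking cards and answers as they arrive.
        yield f"data: {json.dumps(step)}\n\n"


steps = [
    {"type": "thinking", "text": "Looking up the entity...", "source": "DB"},
    {"type": "answer", "text": "Final answer here.", "source": "Wiki"},
]
for frame in sse_events(steps):
    print(frame, end="")
```

On the frontend, an `EventSource` subscriber would parse each `data:` payload and append a card or answer segment, which is how the interface refreshes incrementally.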
---
## 🎯 Design Highlights
* **Source Transparency**: Each logical step and query carries source metadata tags, allowing tracking of both final answers and intermediate reasoning.
* **Streaming Interaction**: Supports SSE to synchronize the agent's thinking process with the frontend, enhancing user experience.
* **Modular Design**: The backend clearly separates data source adapters, entity recognition, A2A Task management, and LangGraph Agent logic.
* **Multi-Turn Memory**: LangGraph + A2A maintains context state, supporting continuous dialogue.
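The multi-turn memory highlight can be illustrated with a minimal per-session context store. This is a sketch under the assumption of a simple in-memory history keyed by session ID; it is not the A2A or LangGraph state API:

```python
from collections import defaultdict


class SessionMemory:
    """Minimal per-session context store (illustrative only)."""

    def __init__(self) -> None:
        self._history: dict[str, list[dict]] = defaultdict(list)

    def add_turn(self, session_id: str, role: str, text: str) -> None:
        self._history[session_id].append({"role": role, "text": text})

    def context(self, session_id: str) -> list[dict]:
        # The returned history is prepended to the next prompt so the
        # agent keeps context across turns.
        return list(self._history[session_id])


mem = SessionMemory()
mem.add_turn("s1", "user", "Who wrote Hamlet?")
mem.add_turn("s1", "agent", "Shakespeare. [Source: Wiki]")
```

In the real system, LangGraph's checkpointed state plus the A2A task/session IDs would play this role, keeping each conversation's chain isolated.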