# Botbot

*Botbot, your not-so-friendly bot.*

Botbot aims to be a multi‑service chat assistant for Matrix that captures room conversations, stores them, and uses large language models (LLMs) to provide concise summaries, answer follow‑up questions, and surface action points.

## Roadmap / Aspirational goals

- Persistent conversation store (PostgreSQL) for long‑range context.
- Pluggable LLM back‑ends (local models, Ollama, etc.).
- Structured meeting summaries with action‑items and deadlines.
- Additional chat front‑ends (Telegram, WhatsApp, …).

## Architecture

```mermaid
flowchart LR
    MS["matrix_service<br/>(Python & nio)"]
    AI["ai_service<br/>(FastAPI)"]
    Redis["Redis<br/>Deduplicates replies / guarantees idempotency"]
    MS -- "HTTP (Bearer token)" --> AI
    AI -- "Matrix Events" --> MS
    MS -.-> Redis
    AI -.-> Redis
```

| Component | Image / Entry‑Point | Purpose |
|---|---|---|
| matrix_service | `python matrix_service/main.py` | Listens to Matrix rooms, forwards each message to ai_service, posts the reply back. |
| ai_service | `python ai_service/main.py` (FastAPI) | Builds a prompt, calls the configured LLM (OpenAI today), caches the reply in Redis. |
| redis | `redis:7` | Reply cache & simple key/value store. |

The services talk to each other over the internal Docker network. Authentication between them is a static bearer token (`AI_HANDLER_TOKEN`).
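
For illustration, here is a minimal sketch of the matrix_service → ai_service hop. Only the endpoint, payload shape, default URL, and bearer-token scheme come from this README (see the API section below); the helper name and the use of `httpx` are assumptions, not the actual code.

```python
# Hypothetical sketch of the matrix_service -> ai_service hop. The helper
# name and the httpx client are assumptions; endpoint, payload fields and
# auth scheme come from this README's API section.
import os

import httpx

AI_HANDLER_URL = os.environ.get("AI_HANDLER_URL", "http://ai_service:8000")
AI_HANDLER_TOKEN = os.environ["AI_HANDLER_TOKEN"]


async def forward_message(room_id: str, user_id: str, event_id: str,
                          server_ts: int, content: str) -> str:
    """POST one Matrix message to ai_service and return the generated reply."""
    payload = {
        "roomId": room_id,
        "userId": user_id,
        "eventId": event_id,
        "serverTimestamp": server_ts,
        "content": content,
    }
    headers = {"Authorization": f"Bearer {AI_HANDLER_TOKEN}"}
    async with httpx.AsyncClient(base_url=AI_HANDLER_URL) as client:
        resp = await client.post("/api/v1/message", json=payload, headers=headers)
        resp.raise_for_status()  # surfaces 401 (bad token) and 500 as exceptions
        return resp.json()["reply"]
```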

## Current Features

- **Auto‑Join on Invite** – secure E2EE join including device verification.
- **Stateless AI handler** – FastAPI endpoint `/api/v1/message` receiving a JSON payload.
- **Idempotent replies** – duplicate Matrix events reuse the cached answer (see the sketch below).
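
A minimal sketch of how that idempotency can work, assuming replies are cached in Redis under the Matrix event id (the key layout and TTL are illustrative, not taken from the codebase):

```python
# Illustrative only: cache one reply per Matrix event id so a redelivered
# event reuses the stored answer instead of invoking the LLM again.
import redis.asyncio as redis

r = redis.from_url("redis://redis:6379")


async def reply_once(event_id: str, generate) -> str:
    """Return the cached reply for `event_id`, generating it on first delivery."""
    key = f"reply:{event_id}"          # hypothetical key layout
    cached = await r.get(key)
    if cached is not None:
        return cached.decode()         # duplicate event: reuse the answer
    reply = await generate()           # first delivery: call the LLM
    await r.set(key, reply, ex=86400)  # hypothetical 24 h TTL
    return reply
```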

## Quick Start (development)

```bash
# clone & cd
$ git clone https://gitea.alluna.pt/jfig/botbot.git
$ cd botbot

# copy environment template
$ cp .env.example .env
# edit .env with your homeserver, credentials and OpenAI key

# start everything (hot‑reload volumes mounted)
$ docker compose up --build
```
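
A filled-in `.env` might look like this (placeholder values only; `.env.example` in the repo is the authoritative template):

```dotenv
# placeholder values; adapt to your deployment
MATRIX_HOMESERVER_URL=https://matrix.example.org
MATRIX_USER_ID=@botbot:example.org
MATRIX_PASSWORD=change-me
AI_HANDLER_TOKEN=long-random-shared-secret
OPENAI_API_KEY=your-openai-key
```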

The default compose file launches three containers:

- `matrix_service` – watches your rooms
- `ai_service` – handles AI prompts
- `redis` – reply cache

Stop with Ctrl‑C or `docker compose down`.

## Production (single image per service)

Build once, then deploy with your orchestrator of choice:

```bash
$ docker compose -f docker-compose.yml --profile prod build
$ docker compose -f docker-compose.yml --profile prod up -d
```

## Configuration

All settings are environment variables. The table below reflects the current codebase (commit `ae27a2c`).

| Variable | Service | Default | Description |
|---|---|---|---|
| `LOG_LEVEL` | both | `INFO` | Python logging level. |
| `MATRIX_HOMESERVER_URL` | matrix_service | – | Matrix homeserver base URL. |
| `MATRIX_USER_ID` | matrix_service | – | Full user id of the bot. |
| `MATRIX_PASSWORD` | matrix_service | – | Password for the bot account. |
| `MATRIX_LOGIN_TRIES` | matrix_service | `5` | Number of login attempts before exit. |
| `MATRIX_LOGIN_DELAY_INCREMENT` | matrix_service | `5` | Seconds added per retry (see the sketch below). |
| `AI_HANDLER_URL` | matrix_service | `http://ai_service:8000` | Where to POST messages. |
| `AI_HANDLER_TOKEN` | both | – | Shared bearer token (keep secret). |
| `OPENAI_API_KEY` | ai_service | – | Key for the `openai` Python SDK. |
| `REDIS_URL` | ai_service | `redis://redis:6379` | Connection string used by `redis-py`. |
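
For illustration, a minimal sketch of the retry policy the two login settings describe (the helper name and success signalling are assumptions, not the actual matrix_service code):

```python
# Illustrative sketch of the retry policy implied by MATRIX_LOGIN_TRIES and
# MATRIX_LOGIN_DELAY_INCREMENT; the real matrix_service code may differ.
import asyncio
import os

TRIES = int(os.environ.get("MATRIX_LOGIN_TRIES", "5"))
DELAY_INCREMENT = int(os.environ.get("MATRIX_LOGIN_DELAY_INCREMENT", "5"))


async def login_with_backoff(try_login) -> None:
    """Call `try_login` up to TRIES times, sleeping 5 s, 10 s, 15 s, ... between attempts."""
    for attempt in range(1, TRIES + 1):
        if await try_login():  # hypothetical coroutine returning True on success
            return
        if attempt < TRIES:
            await asyncio.sleep(attempt * DELAY_INCREMENT)
    raise SystemExit(f"Matrix login failed after {TRIES} attempts")
```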

## API (ai_service)

```http
POST /api/v1/message
Authorization: Bearer <AI_HANDLER_TOKEN>
Content-Type: application/json

{
  "roomId": "!foo:matrix.org",
  "userId": "@alice:matrix.org",
  "eventId": "$abc123",
  "serverTimestamp": 1714821630123,
  "content": "Hello there"
}
```

Returns `{"reply": "Hello Alice!"}` on success, or HTTP 401/500 on error.
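
For a quick smoke test from the host, assuming the ai_service port 8000 is published locally:

```bash
$ curl -s -X POST http://localhost:8000/api/v1/message \
    -H "Authorization: Bearer $AI_HANDLER_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"roomId": "!foo:matrix.org", "userId": "@alice:matrix.org", "eventId": "$abc123", "serverTimestamp": 1714821630123, "content": "Hello there"}'
```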

## Contributing

Pull requests are welcome! Please open an issue first to discuss what you want to change. All source code is formatted with `ruff` / `black` – run `pre-commit run --all-files` before pushing.