Compare commits
14 Commits
e48b5b3d4e ... 4-db_servi
| Author | SHA1 | Date |
|---|---|---|
| | 9d5fb3f5be | |
| | 873280d027 | |
| | ae27a2cdf5 | |
| | 5bf54f0d44 | |
| | 5c9bdc771d | |
| | dcef90d11f | |
| | afeaeba313 | |
| | 302dd7e965 | |
| | fb578fbf40 | |
| | a22ec0a7da | |
| | e3691e1f0a | |
| | 513f597080 | |
| | 6db3c5f4cc | |
| | cf1e389df1 | |
.env.example (20)
@@ -2,9 +2,23 @@
 LOG_LEVEL=INFO
 
 # Matrix Configuration
-HOMESERVER_URL = "https://matrix.org"
-USER_ID = "@botbot_user:matrix.org"
-PASSWORD = "botbot_password"
+MATRIX_HOMESERVER_URL="https://matrix.org"
+MATRIX_USER_ID="@botbot_user:matrix.org"
+MATRIX_PASSWORD="botbot_password"
+MATRIX_LOGIN_TRIES=5 # Number of login attempts before giving up, default is 5
+MATRIX_LOGIN_DELAY_INCREMENT=5 # Delay increment, in seconds, between login attempts, default is 5
+AI_HANDLER_URL="http://ai_service:8000"
 
 # OpenAI API Key
 OPENAI_API_KEY=your_openai_api_key_here
+AI_HANDLER_TOKEN=common_token_here
+
+# db_service
+LOG_TOKEN=your_log_token_here
+
+# PostgreSQL Configuration
+POSTGRES_USER=database_user
+POSTGRES_PASSWORD=database_password
+POSTGRES_DB=database_name
+POSTGRES_HOST=postgres
README.md (132)
@@ -1,3 +1,131 @@
-# botbot
+# Botbot
 
-Botbot, your not so friendly Bot
+*Botbot, your not so friendly Bot*
+
+**Botbot** aims to be a multi‑service chat assistant for [Matrix](https://matrix.org) that captures room conversations, stores them, and uses large language models to provide concise summaries, answer follow‑up questions, and surface action points.
+
+### Roadmap / Aspirational goals
+
+* Persistent conversation store (PostgreSQL) for long‑range context.
+* Pluggable LLM back‑ends (local models, Ollama, etc.).
+* Structured meeting summaries with action‑items and deadlines.
+* Additional chat front‑ends (Telegram, WhatsApp, …).
+
+---
+
+## Architecture
+
+```mermaid
+flowchart LR
+    MS["matrix_service<br/>(Python & nio)"]
+    AI["ai_service<br/>(FastAPI)"]
+    Redis["Redis<br/>Deduplicates replies / guarantees idempotency"]
+
+    MS -- "HTTP (Bearer token)" --> AI
+    AI -- "Matrix Events" --> MS
+
+    MS -.-> Redis
+    AI -.-> Redis
+```
+
+| Component | Image / Entry‑Point | Purpose |
+| --- | --- | --- |
+| **matrix\_service** | `python matrix_service/main.py` | Listens to Matrix rooms, forwards each message to `ai_service`, posts the reply back. |
+| **ai\_service** | `python ai_service/main.py` (FastAPI) | Builds a prompt, calls the configured LLM (OpenAI today), caches the reply in Redis. |
+| **redis** | `redis:7` | Reply cache & simple key/value store. |
+
+The services talk to each other over the internal Docker network. Authentication between them is a static bearer token (`AI_HANDLER_TOKEN`).
+
+---
+
+## Current Features
+
+* **Auto‑Join on Invite** – secure E2EE join including device verification.
+* **Stateless AI handler** – FastAPI endpoint `/api/v1/message` receiving a JSON payload.
+* **Idempotent replies** – duplicate Matrix events reuse the cached answer.
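The idempotent-reply behaviour the feature list describes can be sketched in a few lines; this is a minimal illustration, not the project's code, with a plain dict standing in for Redis and `generate_reply` as a hypothetical LLM call:

```python
# Sketch of idempotent replies: a duplicate Matrix event id returns the
# cached answer instead of triggering a second model call.
cache: dict[str, str] = {}  # stand-in for Redis

def handle_event(event_id: str, content: str, generate_reply) -> str:
    if event_id in cache:
        return cache[event_id]          # duplicate event: reuse cached reply
    reply = generate_reply(content)     # first sighting: call the model
    cache[event_id] = reply
    return reply
```

The real service uses `redis.set(event_id, reply, ex=3600)` for the same effect, so duplicates are only deduplicated within the cache TTL.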
+
+---
+
+## Quick Start (development)
+
+```bash
+# clone & cd
+$ git clone https://gitea.alluna.pt/jfig/botbot.git
+$ cd botbot
+
+# copy environment template
+$ cp .env.example .env
+# edit .env with your homeserver, credentials and OpenAI key
+
+# start everything (hot‑reload volumes mounted)
+$ docker compose up --build
+```
+
+The default compose file launches three containers:
+
+* `matrix_service` – watches your rooms
+* `ai_service` – handles AI prompts
+* `redis` – reply cache
+
+Stop with <kbd>Ctrl‑C</kbd> or `docker compose down`.
+
+### Production (single image per service)
+
+Build once, then deploy with your orchestrator of choice:
+
+```bash
+$ docker compose -f docker-compose.yml --profile prod build
+$ docker compose -f docker-compose.yml --profile prod up -d
+```
+
+---
+
+## Configuration
+
+All settings are environment variables. The table below reflects the current codebase (commit `ae27a2c`).
+
+| Variable | Service | Default | Description |
+| --- | --- | --- | --- |
+| `LOG_LEVEL` | both | `INFO` | Python logging level. |
+| `MATRIX_HOMESERVER_URL` | matrix\_service | – | Matrix homeserver base URL. |
+| `MATRIX_USER_ID` | matrix\_service | – | Full user id of the bot. |
+| `MATRIX_PASSWORD` | matrix\_service | – | Password for the bot account. |
+| `MATRIX_LOGIN_TRIES` | matrix\_service | `5` | Number of login attempts before exit. |
+| `MATRIX_LOGIN_DELAY_INCREMENT` | matrix\_service | `5` | Seconds added per retry. |
+| `AI_HANDLER_URL` | matrix\_service | `http://ai_service:8000` | Where to POST messages. |
+| `AI_HANDLER_TOKEN` | both | – | Shared bearer token (keep secret). |
+| `OPENAI_API_KEY` | ai\_service | – | Key for `openai` Python SDK. |
+| `REDIS_URL` | ai\_service | `redis://redis:6379` | Connection string used by `redis-py`. |
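`MATRIX_LOGIN_TRIES` and `MATRIX_LOGIN_DELAY_INCREMENT` together produce a linearly growing back-off between login attempts. A small sketch of the schedule, assuming (as the service's retry loop does) that the wait before retry *i+1* is `increment * (i + 1)` and there is no wait after the final attempt:

```python
def login_delays(tries: int = 5, increment: int = 5) -> list[int]:
    # Seconds slept between consecutive login attempts:
    # increment * 1 after the first failure, increment * 2 after the
    # second, ... up to tries - 1 waits in total.
    return [increment * (i + 1) for i in range(tries - 1)]
```

With the defaults (`5` / `5`) the bot waits 5, 10, 15 and 20 seconds before giving up.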
+
+---
+
+## API (ai\_service)
+
+```http
+POST /api/v1/message
+Authorization: Bearer <AI_HANDLER_TOKEN>
+Content-Type: application/json
+
+{
+  "roomId": "!foo:matrix.org",
+  "userId": "@alice:matrix.org",
+  "eventId": "$abc123",
+  "serverTimestamp": 1714821630123,
+  "content": "Hello there"
+}
+```
+
+Returns `{"reply": "Hello Alice!"}` or HTTP 401/500 on error.
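For reference, assembling a request for this endpoint can be sketched as follows; `build_message_request` is a hypothetical helper, not part of the codebase, and the token value is the placeholder from `.env.example`:

```python
def build_message_request(token: str, room_id: str, user_id: str,
                          event_id: str, ts_ms: int, body: str) -> tuple[dict, dict]:
    """Build (headers, payload) for POST /api/v1/message."""
    headers = {
        "Authorization": f"Bearer {token}",   # static shared bearer token
        "Content-Type": "application/json",
    }
    payload = {                               # field names expected by ai_service
        "roomId": room_id,
        "userId": user_id,
        "eventId": event_id,
        "serverTimestamp": ts_ms,
        "content": body,
    }
    return headers, payload

headers, payload = build_message_request(
    "common_token_here", "!foo:matrix.org", "@alice:matrix.org",
    "$abc123", 1714821630123, "Hello there",
)
```

The resulting dicts can be passed directly to `httpx.post(url, json=payload, headers=headers)`, which is what `matrix_service` does.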
+
+---
+
+## Contributing
+
+Pull requests are welcome! Please open an issue first to discuss what you want to change. All source code is formatted with **ruff** / **black** – run `pre-commit run --all-files` before pushing.
+
+---
+
+## License
+
+[MIT](LICENSE)
ai_service/Dockerfile (new file, 12)
@@ -0,0 +1,12 @@
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt ./

RUN pip install --no-cache-dir -r requirements.txt

COPY main.py ./

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
ai_service/main.py (new file, 66)
@@ -0,0 +1,66 @@
import os
from dotenv import load_dotenv
from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel
from openai import OpenAI
import logging
import redis

LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO").upper()
AI_TOKEN = os.environ["AI_HANDLER_TOKEN"]

AIclient = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
)

r = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://redis:6379"))

# --- Logging Setup ---
numeric_level = getattr(logging, LOG_LEVEL, logging.INFO)
logging.basicConfig(
    level=numeric_level,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s"
)
logger = logging.getLogger(__name__)


class MessagePayload(BaseModel):
    roomId: str
    userId: str
    eventId: str
    serverTimestamp: int
    content: str


app = FastAPI()


@app.post("/api/v1/message")
async def message(
    payload: MessagePayload,
    authorization: str = Header(None)
):
    if authorization != f"Bearer {AI_TOKEN}":
        raise HTTPException(status_code=401, detail="Unauthorized")

    # Idempotency: ignore duplicates
    if r.get(payload.eventId):
        return {"reply": r.get(payload.eventId).decode()}

    # Build prompt (very simple example)
    prompt = f"User {payload.userId} said: {payload.content}\nBot:"
    chat_response = AIclient.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ],
        max_tokens=150,
        n=1,
        stop=None,
        temperature=0.7,
    )

    reply = chat_response.choices[0].message.content.strip()

    # Cache reply for idempotency
    r.set(payload.eventId, reply, ex=3600)

    return {"reply": reply}
ai_service/requirements.txt (new file, 6)
@@ -0,0 +1,6 @@
python-dotenv>=1.0.0
openai>=1.0.0
fastapi>=0.95
uvicorn>=0.22
redis>=4.5
pydantic>=1.10
db_service/Dockerfile (new file, 20)
@@ -0,0 +1,20 @@
FROM python:3.11-slim

# Prevent Python from buffering stdout/stderr so logs appear immediately
ENV PYTHONUNBUFFERED=1

# Install system dependencies (none needed for asyncpg on slim)

WORKDIR /app

# Install Python dependencies first to leverage Docker layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy service source code only after dependencies
COPY . /app

EXPOSE 8000

# Use Uvicorn as the ASGI server
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
db_service/main.py (new file, 182)
@@ -0,0 +1,182 @@
from __future__ import annotations

import logging
import os
from typing import AsyncGenerator

from dotenv import load_dotenv
from fastapi import Depends, FastAPI, Header, HTTPException, status
from pydantic import BaseModel, Field
from sqlalchemy import BigInteger, Column, MetaData, Table, Text, text
# The PostgreSQL dialect insert is required for on_conflict_do_nothing
from sqlalchemy.dialects.postgresql import insert as pg_insert
from sqlalchemy.exc import OperationalError
from sqlalchemy.ext.asyncio import (
    AsyncEngine,
    AsyncSession,
    async_sessionmaker,
    create_async_engine,
)

# ---------------------------------------------------------------------------
# Environment & logging setup
# ---------------------------------------------------------------------------

load_dotenv()
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO").upper()
logging.basicConfig(level=LOG_LEVEL, format="%(levelname)s | %(name)s | %(message)s")
log = logging.getLogger("db_service")

LOG_TOKEN = os.getenv("LOG_TOKEN", "changeme")
POSTGRES_USER = os.getenv("POSTGRES_USER", "postgres")
POSTGRES_PASSWORD = os.getenv("POSTGRES_PASSWORD", "postgres")
POSTGRES_DB = os.getenv("POSTGRES_DB", "botbot")
POSTGRES_HOST = os.getenv("POSTGRES_HOST", "postgres")
POSTGRES_PORT = os.getenv("POSTGRES_PORT", "5432")

DATABASE_URL = (
    f"postgresql+asyncpg://{POSTGRES_USER}:{POSTGRES_PASSWORD}"
    f"@{POSTGRES_HOST}:{POSTGRES_PORT}/{POSTGRES_DB}"
)
ADMIN_URL = (
    f"postgresql+asyncpg://{POSTGRES_USER}:{POSTGRES_PASSWORD}"
    f"@{POSTGRES_HOST}:{POSTGRES_PORT}/postgres"
)

# ---------------------------------------------------------------------------
# SQLAlchemy table definition (metadata)
# ---------------------------------------------------------------------------

metadata = MetaData()

messages = Table(
    "messages",
    metadata,
    Column("event_id", Text, primary_key=True),
    Column("room_id", Text, nullable=False),
    Column("user_id", Text, nullable=False),
    Column("ts_ms", BigInteger, nullable=False),
    Column("body", Text, nullable=False),
)

# ---------------------------------------------------------------------------
# FastAPI app
# ---------------------------------------------------------------------------

app = FastAPI(title="Botbot Logging Service", version="1.1.0")


class MessageIn(BaseModel):
    """Payload received from matrix_service."""

    event_id: str = Field(..., example="$14327358242610PhrSn:matrix.org")
    room_id: str = Field(..., example="!someroomid:matrix.org")
    user_id: str = Field(..., example="@alice:matrix.org")
    ts_ms: int = Field(..., example=1713866689000, description="Matrix server_timestamp in ms since epoch")
    body: str = Field(..., example="Hello, world!")


# ---------------------------------------------------------------------------
# Database engine/session factories (populated on startup)
# ---------------------------------------------------------------------------

engine: AsyncEngine | None = None
SessionLocal: async_sessionmaker[AsyncSession] | None = None


async def ensure_database_exists() -> None:
    """Connect to the admin DB and create `POSTGRES_DB` if it is missing."""
    log.info("Checking whether database %s exists", POSTGRES_DB)

    admin_engine = create_async_engine(ADMIN_URL, pool_pre_ping=True)
    try:
        async with admin_engine.begin() as conn:
            db_exists = await conn.scalar(
                text("SELECT 1 FROM pg_database WHERE datname = :db"),
                {"db": POSTGRES_DB},
            )
            if not db_exists:
                log.warning("Database %s not found – creating it", POSTGRES_DB)
                await conn.execute(text(f'CREATE DATABASE "{POSTGRES_DB}"'))
                log.info("Database %s created", POSTGRES_DB)
    finally:
        await admin_engine.dispose()


async def create_engine_and_tables() -> None:
    """Initialise SQLAlchemy engine and create the `messages` table if needed."""
    global engine, SessionLocal  # noqa: PLW0603

    engine = create_async_engine(DATABASE_URL, pool_pre_ping=True)

    async with engine.begin() as conn:
        await conn.run_sync(metadata.create_all)

    SessionLocal = async_sessionmaker(engine, expire_on_commit=False)
    log.info("Database initialised and tables ensured.")


async def get_session() -> AsyncGenerator[AsyncSession, None]:
    async with SessionLocal() as session:  # type: ignore[arg-type]
        yield session


# ---------------------------------------------------------------------------
# Lifespan events
# ---------------------------------------------------------------------------

@app.on_event("startup")
async def on_startup() -> None:  # noqa: D401 (imperative mood)
    """Ensure the database *and* table exist before serving traffic."""
    log.info("Starting up")

    try:
        await create_engine_and_tables()
    except OperationalError as err:
        # Common case: database itself does not yet exist.
        if "does not exist" in str(err):
            log.warning("Primary database missing – attempting to create it")
            await ensure_database_exists()
            # Retry now that DB exists
            await create_engine_and_tables()
        else:
            log.error("Database connection failed: %s", err)
            raise


@app.on_event("shutdown")
async def on_shutdown() -> None:  # noqa: D401
    if engine:
        await engine.dispose()


# ---------------------------------------------------------------------------
# API endpoints
# ---------------------------------------------------------------------------


@app.get("/healthz", tags=["health"])
async def healthz() -> dict[str, str]:
    return {"status": "ok"}


@app.post("/api/v1/log", status_code=status.HTTP_202_ACCEPTED, tags=["log"])
async def log_message(
    payload: MessageIn,
    x_log_token: str = Header(alias="X-Log-Token"),
    session: AsyncSession = Depends(get_session),
) -> dict[str, str]:
    """Persist one Matrix message to Postgres.

    Requires header `X-Log-Token` matching the `LOG_TOKEN` env-var.
    """
    if x_log_token != LOG_TOKEN:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="invalid token")

    # pg_insert (not the generic Table.insert) provides on_conflict_do_nothing
    stmt = (
        pg_insert(messages)
        .values(**payload.model_dump())
        .on_conflict_do_nothing(index_elements=[messages.c.event_id])
    )
    await session.execute(stmt)
    await session.commit()
    return {"status": "accepted"}
db_service/requirements.txt (new file, 10)
@@ -0,0 +1,10 @@
# Web framework & ASGI server
fastapi
uvicorn[standard]

# Database access
sqlalchemy[asyncio]>=2.0
asyncpg>=0.29

# Environment / configuration helpers
python-dotenv>=1.0
docker-compose.yml
@@ -1,12 +1,72 @@
 services:
-  botbot:
-    build: .
-    env_file:
-      - .env
-    volumes:
-      - ./:/app                 # Mount source for hot-reload
-      - matrix_data:/app/data   # Persist Matrix client store and tokens
-    restart: unless-stopped
+  # -----------------------
+  # Database (PostgreSQL)
+  # -----------------------
+  postgres:
+    image: postgres:16-alpine
+    restart: unless-stopped
+    environment:
+      - POSTGRES_USER
+      - POSTGRES_PASSWORD
+      - POSTGRES_DB
+    volumes:
+      - postgres-data:/var/lib/postgresql/data
+    healthcheck:
+      test: "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"
+      interval: 10s
+      timeout: 5s
+      retries: 5
+
+  # -----------------------
+  # Conversation‑logging microservice
+  # -----------------------
+  db_service:
+    build:
+      context: ./db_service
+    restart: unless-stopped
+    environment:
+      - POSTGRES_USER
+      - POSTGRES_PASSWORD
+      - POSTGRES_DB
+      - POSTGRES_HOST
+      - LOG_TOKEN
+    depends_on:
+      postgres:
+        condition: service_healthy
+    ports:
+      - "8000:8000"   # expose externally only if needed
+
+  matrix_service:
+    build: ./matrix_service
+    environment:
+      - MATRIX_HOMESERVER_URL
+      - MATRIX_USER_ID
+      - MATRIX_PASSWORD
+      - AI_HANDLER_URL
+      - AI_HANDLER_TOKEN
+    volumes:
+      - ./matrix_service:/app
+      - matrix_data:/app/data
+    depends_on:
+      - ai_service
+
+  ai_service:
+    build: ./ai_service
+    environment:
+      - OPENAI_API_KEY=${OPENAI_API_KEY}
+      - AI_HANDLER_TOKEN=${AI_HANDLER_TOKEN}
+      - REDIS_URL=redis://redis:6379
+    depends_on:
+      - redis
+
+  redis:
+    image: redis:7
+    restart: unless-stopped
+    volumes:
+      - redis-data:/data
 
 volumes:
   matrix_data:
+  redis-data:
+  postgres-data:
main.py (128, deleted)
@@ -1,128 +0,0 @@
import os
import asyncio
import logging
from dotenv import load_dotenv
from nio import AsyncClient, AsyncClientConfig, MatrixRoom, RoomMessageText, InviteMemberEvent
from nio.responses import LoginResponse
from openai import AsyncOpenAI

# --- Load environment variables ---
load_dotenv()
HOMESERVER_URL = os.getenv("HOMESERVER_URL")
USER_ID = os.getenv("USER_ID")
PASSWORD = os.getenv("PASSWORD")
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO").upper()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

if not OPENAI_API_KEY:
    raise RuntimeError("OPENAI_API_KEY is not set in environment")

# --- Initialize Async OpenAI client ---
openai_client = AsyncOpenAI(api_key=OPENAI_API_KEY)

# --- Logging Setup ---
numeric_level = getattr(logging, LOG_LEVEL, logging.INFO)
logging.basicConfig(
    level=numeric_level,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s"
)
logger = logging.getLogger(__name__)


async def trust_all_devices(client) -> None:
    """
    Programmatically verify all devices to allow sharing encryption keys.
    """
    for room_id in client.rooms:
        try:
            devices = await client.room_devices(room_id)
            if isinstance(devices, dict):
                for user, dev_ids in devices.items():
                    if user == USER_ID:
                        continue
                    for dev_id in dev_ids:
                        device = client.crypto.device_store.get_device(user, dev_id)
                        if device and not client.crypto.device_store.is_device_verified(device):
                            logger.info(f"Trusting {dev_id} for {user}")
                            client.verify_device(device)
        except Exception:
            logger.exception(f"Error trusting devices in {room_id}")


async def message_callback(room: MatrixRoom, event: RoomMessageText):
    """Handle incoming text messages."""
    if event.sender == USER_ID:
        return
    body = event.body.strip()
    lower = body.lower()
    logger.info("Received '%s' from %s in %s", body, event.sender, room.display_name)

    send_kwargs = {
        "room_id": room.room_id,
        "message_type": "m.room.message",
        "ignore_unverified_devices": True
    }

    # Simple ping
    if lower == "!ping":
        await client.room_send(**send_kwargs, content={"msgtype": "m.text", "body": "Pong!"})
        return

    # Ask OpenAI via chat completion
    if lower.startswith("!ask "):
        question = body[5:].strip()
        if not question:
            await client.room_send(**send_kwargs, content={"msgtype": "m.text", "body": "Provide a question after !ask."})
            return
        logger.info("Querying OpenAI: %s", question)
        try:
            response = await openai_client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[
                    {"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": question}
                ],
                max_tokens=150
            )
            answer = response.choices[0].message.content.strip()
        except Exception:
            logger.exception("OpenAI API error")
            answer = "Sorry, I encountered an error contacting the AI service."
        await client.room_send(**send_kwargs, content={"msgtype": "m.text", "body": answer})
        return

    # Greeting
    if lower == "hello botbot":
        await client.room_send(**send_kwargs, content={"msgtype": "m.text", "body": "Hello! How can I assist you today?"})


async def main() -> None:
    """Initialize and run the Matrix bot."""
    global client
    config = AsyncClientConfig(store_sync_tokens=True, encryption_enabled=True)
    client = AsyncClient(HOMESERVER_URL, USER_ID, store_path="/app/data", config=config)

    login_resp = await client.login(password=PASSWORD)
    if isinstance(login_resp, LoginResponse):
        logger.info("Logged in as %s", USER_ID)
    else:
        logger.error("Login failed: %s", login_resp)
        return

    await trust_all_devices(client)

    # Auto-join and trust
    async def on_invite(room, event):
        if isinstance(event, InviteMemberEvent):
            await client.join(room.room_id)
            logger.info("Joined %s", room.room_id)
            await trust_all_devices(client)
    client.add_event_callback(on_invite, InviteMemberEvent)
    client.add_event_callback(message_callback, RoomMessageText)

    logger.info("Starting sync loop")
    await client.sync_forever(timeout=30000)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        logger.info("Shutting down")
        asyncio.run(client.close())
@@ -4,10 +4,12 @@ FROM python:3.11-slim
 WORKDIR /app
 
 # Install system dependencies
-RUN apt-get update && \
-    apt-get install -y --no-install-recommends \
-    libolm-dev build-essential python3-dev && \
-    rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    libolm-dev \
+    build-essential \
+    python3-dev \
+    && \
+    apt-get clean && rm -rf /var/lib/apt/lists/*
 
 # Install dependencies
 COPY requirements.txt .
matrix_service/main.py (new file, 174)
@@ -0,0 +1,174 @@
import os
import logging
from dotenv import load_dotenv

import asyncio
import httpx

from nio import AsyncClient, AsyncClientConfig, MatrixRoom, RoomMessageText, InviteMemberEvent
from nio.responses import LoginResponse


# --- Load environment variables ---
load_dotenv()
MATRIX_HOMESERVER_URL = os.getenv("MATRIX_HOMESERVER_URL")
MATRIX_USER_ID = os.getenv("MATRIX_USER_ID")
MATRIX_PASSWORD = os.getenv("MATRIX_PASSWORD")
MATRIX_LOGIN_TRIES = int(os.getenv("MATRIX_LOGIN_TRIES", 5))
MATRIX_LOGIN_DELAY_INCREMENT = int(os.getenv("MATRIX_LOGIN_DELAY_INCREMENT", 5))

LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO").upper()

AI_HANDLER_URL = os.getenv("AI_HANDLER_URL")
AI_HANDLER_TOKEN = os.getenv("AI_HANDLER_TOKEN")


# --- Logging Setup ---
numeric_level = getattr(logging, LOG_LEVEL, logging.INFO)
logging.basicConfig(
    level=numeric_level,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s"
)
logger = logging.getLogger(__name__)


async def main() -> None:

    async def trust_all_devices(client) -> None:
        """
        Mark every other user's device as verified so we can receive their
        future Megolm keys without being blocked.
        """
        for room_id in client.rooms:
            try:
                devices_in_room = client.room_devices(room_id)
                for user_id, user_devices in devices_in_room.items():
                    if user_id == client.user_id:
                        continue
                    for dev_id, device in user_devices.items():
                        if not client.device_store.is_device_verified(device):
                            logger.info(f"Trusting {dev_id} for {user_id}")
                            client.verify_device(device)
            except Exception:
                logger.exception(f"Error trusting devices in {room_id}")

    async def on_invite(room, event):
        """
        Handle an invite event by joining the room and trusting all devices.
        """
        if isinstance(event, InviteMemberEvent):
            await client.join(room.room_id)
            logger.info("Joined %s", room.room_id)
            await trust_all_devices(client)

    async def send_message(room: MatrixRoom, message: str) -> None:
        """
        Send a message to `room`.
        """
        try:
            await trust_all_devices(client)

            await client.share_group_session(
                room.room_id,
                ignore_unverified_devices=True
            )

            await client.room_send(
                room_id=room.room_id,
                message_type="m.room.message",
                content={
                    "msgtype": "m.text",
                    "body": message
                },
                ignore_unverified_devices=True
            )
            logger.info("Sent message to %s: %s", room.room_id, message)
        except Exception as e:
            logger.error(f"Error sending message: {e}")

    async def on_message(room: MatrixRoom, event: RoomMessageText) -> None:
        """
        Handle incoming messages.
        """
        # Check if the message is from the bot itself
        if event.sender == client.user_id:
            return

        logger.info("Received '%s' from %s in %s", event.body.strip(), event.sender, room.display_name)

        if isinstance(event, RoomMessageText):
            logger.info(f"Received message in {room.room_id}: {event.body}")
            payload = {
                "roomId": room.room_id,  # the room id comes from the `room` callback argument
                "userId": event.sender,
                "eventId": event.event_id,
                "serverTimestamp": event.server_timestamp,
                "content": event.body
            }
            headers = {"Authorization": f"Bearer {AI_HANDLER_TOKEN}"}
            async with httpx.AsyncClient() as http:
                try:
                    resp = await http.post(f"{AI_HANDLER_URL}/api/v1/message", json=payload, headers=headers)
                    resp.raise_for_status()
                    data = resp.json()
                    if data.get("reply"):
                        await trust_all_devices(client)
                        await send_message(room, data["reply"])
                        logger.info("Reply sent: %s", data["reply"])
                except httpx.HTTPStatusError as e:
                    logger.error(f"HTTP error: {e.response.status_code} - {e.response.text}")
                except Exception:
                    logger.exception("Error while calling AI handler")

    # --- Initialize the client ---
    # Create the data directory if it doesn't exist
    try:
        os.makedirs("/app/data", exist_ok=True)
    except Exception as e:
        logger.error(f"Error creating data directory: {e}")
        return

    # Initialize the client
    config = AsyncClientConfig(store_sync_tokens=True, encryption_enabled=True)
    client = AsyncClient(MATRIX_HOMESERVER_URL, MATRIX_USER_ID, store_path="/app/data", config=config)

    for i in range(MATRIX_LOGIN_TRIES):
        try:
            login_response = await client.login(password=MATRIX_PASSWORD)
            break
        except Exception as e:
            logger.error(f"Login attempt {i+1} failed: {e}")
            if i == MATRIX_LOGIN_TRIES - 1:
                return
            await asyncio.sleep(MATRIX_LOGIN_DELAY_INCREMENT * (i + 1))
    logger.info("Logged in successfully")

    logger.debug("Upload one time Olm keys")
    try:
        await client.keys_upload()
    except Exception as e:
        logger.error(f"Error uploading keys: {e}")
        return

    await trust_all_devices(client)

    client.add_event_callback(on_invite, InviteMemberEvent)
    client.add_event_callback(on_message, RoomMessageText)

    logger.info("Starting sync loop")
    await client.sync_forever(timeout=30000)  # timeout should be moved to a variable


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        logger.info("Shutting down")
        asyncio.run(client.close())
matrix_service/requirements.txt (new file, 4)
@@ -0,0 +1,4 @@
matrix-nio[e2e]>=0.25.2
python-dotenv>=1.0.0
httpx>=0.23.0
pydantic>=1.10
requirements.txt (deleted)
@@ -1,3 +0,0 @@
matrix-nio[e2e]>=0.25.0
python-dotenv>=1.0.0
openai