A self-hostable, multi-provider AI chat interface for
fact-grounded research.
Every answer is sourced. Every session persists. You own the data.
Next.js App Router on the server, your database on the backend, your LLM provider on the edge.
Streaming, persistence, and citations as first-class citizens. Zero vendor lock-in.
SQLite in-memory means you don't even need a database to try it out.
```bash
git clone https://github.com/SentorLabs/openchat
cd openchat
npm install
cp .env.example .env.local
```

```bash
# Minimum required (.env.local)
DATABASE_URL=:memory:        # SQLite in-memory — zero setup
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o
LLM_API_KEY=sk-...
```

```bash
npm run dev                  # development — http://localhost:3000
npm run build && npm start   # production
```
Switch providers from the Settings UI at any time — no restart, no code changes.
```bash
LLM_PROVIDER=openai
LLM_PROVIDER=anthropic
LLM_PROVIDER=gemini
LLM_PROVIDER=ollama
```
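Switching providers without a restart implies a small config-resolution layer that validates these variables at request time. A minimal sketch of that idea — the names (`resolveLLMConfig`, `LLMConfig`) are illustrative, not the repo's actual API — encoding the rule that every provider except Ollama needs an API key:

```typescript
// Hypothetical helper: resolve and validate LLM settings from the environment.
// Function and type names are illustrative, not OpenChat's real internals.
type Provider = "openai" | "anthropic" | "gemini" | "ollama";

interface LLMConfig {
  provider: Provider;
  model: string;
  apiKey?: string;
  baseUrl?: string;
}

const PROVIDERS: Provider[] = ["openai", "anthropic", "gemini", "ollama"];

function resolveLLMConfig(env: Record<string, string | undefined>): LLMConfig {
  const provider = env.LLM_PROVIDER as Provider;
  if (!PROVIDERS.includes(provider)) {
    throw new Error(`LLM_PROVIDER must be one of: ${PROVIDERS.join(", ")}`);
  }

  const model = env.LLM_MODEL;
  if (!model) throw new Error("LLM_MODEL is required");

  // API key is required for every provider except Ollama.
  if (provider !== "ollama" && !env.LLM_API_KEY) {
    throw new Error(`LLM_API_KEY is required for provider "${provider}"`);
  }

  return {
    provider,
    model,
    apiKey: env.LLM_API_KEY,
    // Ollama falls back to the local daemon's default address.
    baseUrl:
      provider === "ollama"
        ? env.OLLAMA_BASE_URL ?? "http://localhost:11434"
        : undefined,
  };
}
```

Because the config is re-resolved per request rather than captured at boot, a value changed in the Settings UI takes effect immediately.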
Auto-detected from the connection string. Schema is created on first run — no migrations needed.
```bash
DATABASE_URL=postgresql://user:pass@host:5432/openchat
DATABASE_URL=mysql://user:pass@host:3306/openchat
DATABASE_URL=/absolute/path/to/openchat.db
DATABASE_URL=:memory:        # in-memory, lost on restart
```
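Auto-detection here amounts to dispatching on the connection string's scheme. A hypothetical sketch of that dispatch (the function name and dialect labels are illustrative):

```typescript
// Hypothetical sketch: infer the database dialect from DATABASE_URL.
type Dialect = "postgres" | "mysql" | "sqlite";

function detectDialect(databaseUrl: string): Dialect {
  if (
    databaseUrl.startsWith("postgresql://") ||
    databaseUrl.startsWith("postgres://")
  ) {
    return "postgres";
  }
  if (databaseUrl.startsWith("mysql://")) {
    return "mysql";
  }
  // Anything else is treated as a SQLite file path, or the
  // special ":memory:" value for an in-memory database.
  return "sqlite";
}
```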
All settings can also be configured at runtime from the Settings UI — no server restart needed.
| Variable | Required | Description |
|---|---|---|
| DATABASE_URL | required | PostgreSQL, MySQL, or SQLite connection string (or :memory:) |
| LLM_PROVIDER | required | `openai`, `anthropic`, `gemini`, or `ollama` |
| LLM_MODEL | required | Model name for your chosen provider (e.g. gpt-4o) |
| LLM_API_KEY | optional | API key — required for all providers except Ollama |
| LLM_SYSTEM_PROMPT | optional | Override the default system prompt to set AI persona or domain focus |
| OLLAMA_BASE_URL | optional | Ollama base URL — default: http://localhost:11434 |
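Putting the table together, a fully offline local setup (SQLite in-memory plus Ollama) might look like this — the model name and prompt are illustrative values, not defaults:

```bash
# .env.local — offline example: in-memory SQLite + local Ollama
DATABASE_URL=:memory:
LLM_PROVIDER=ollama
LLM_MODEL=llama3
OLLAMA_BASE_URL=http://localhost:11434
LLM_SYSTEM_PROMPT="You are a research assistant. Cite a source for every claim."
```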
Ready-to-run scripts for every major platform are in the deployment/ folder.
```bash
npm install
cp .env.example .env.local   # fill in your values
npm run dev                  # dev mode
npm run build && npm start   # production
```
```yaml
services:
  app:
    build: .
    ports: ["3000:3000"]
    environment:
      DATABASE_URL: postgresql://postgres:secret@db:5432/openchat
      LLM_PROVIDER: openai
      LLM_MODEL: gpt-4o
      LLM_API_KEY: ${LLM_API_KEY}
    depends_on: [db]
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: openchat
    volumes: [pgdata:/var/lib/postgresql/data]
volumes:
  pgdata:
```
```bash
LLM_API_KEY=sk-... docker compose up -d
```
```bash
cf create-service postgresql-db trial openchat-db
cf set-env openchat LLM_PROVIDER openai
cf set-env openchat LLM_MODEL gpt-4o
cf set-env openchat LLM_API_KEY sk-...
cf push                      # uses manifest.yml

# or automated:
bash deployment/btp.sh
```
```bash
bash deployment/gcp.sh
# Cloud SQL (PostgreSQL) instance
# DATABASE_URL stored in Secret Manager
# Image built via Cloud Build
# Cloud Run service with Cloud SQL socket
```
```bash
bash deployment/aws.sh
# ECR repo + Docker image push
# RDS PostgreSQL instance
# Secrets Manager entry
# IAM role + App Runner service
```
```bash
bash deployment/azure.sh
# Azure Container Registry + image build
# Azure DB for PostgreSQL Flexible Server
# Key Vault + managed identity
# Container App with ACR pull access
```
Modern, minimal, no magic — just TypeScript, Next.js, and your database of choice.