# Chat Interfaces

## Two Frontends, One Memory
xbrain ships with two chat frontends — LibreChat for everyday team use, Open WebUI for admin and agent testing. Both write to the same memory-api, so conversations from either frontend are searchable by the other.
This is the multi-frontend invariant at the core of xbrain: no frontend owns the data. The memory layer is the source of truth, and every frontend is a view into it. Switching from LibreChat to Open WebUI — or to the API directly — does not create a separate silo.
## LibreChat — Primary Chat Frontend
| Property | Value |
|---|---|
| Version | v0.8.2-rc2 |
| URL | https://your-domain (nginx proxied) |
| Auth | Google OAuth SSO |
| Models | Claude (Anthropic), GPT-4 (OpenAI), Grok (xAI) |
| Features | MCP tools, conversation history, file uploads, MeiliSearch full-text search |
| Memory integration | Via librechat-bridge sidecar (monitors MongoDB) |
| Internal dependencies | MongoDB 8.0, MeiliSearch v1.35.1 |
LibreChat is where your team spends most of their time. It supports multiple AI providers simultaneously — your team can switch between Claude, GPT-4, and Grok in the same interface, within a single conversation if needed. Authentication is handled via Google OAuth, so there is no separate password to manage.
### Configuring AI Models in LibreChat

Model endpoints are configured in `librechat.yaml`. Each provider needs an API key set as an environment variable. The configuration below enables all three providers supported in xbrain Phase 1:
```yaml
# librechat.yaml — endpoints section
endpoints:
  anthropic:
    apiKey: "${ANTHROPIC_API_KEY}"
    models:
      default: ["claude-sonnet-4-6"]
      fetch: false
    titleConvo: true
    titleModel: "claude-sonnet-4-6"
  openAI:
    apiKey: "${OPENAI_API_KEY}"
    models:
      default: ["gpt-4o", "gpt-4o-mini"]
      fetch: false
  custom:
    - name: "xAI (Grok)"
      apiKey: "${XAI_API_KEY}"
      baseURL: "https://api.x.ai/v1"
      models:
        default: ["grok-3"]
      titleConvo: true
      titleModel: "grok-3"
```
### Model Selection
Users can switch between providers in the LibreChat UI using the model selector in the top bar. Each provider's models appear in a dropdown. The active model is persisted per conversation.
### MCP Tools in LibreChat

LibreChat connects to the mcp-gateway via an aggregated SSE endpoint. Tools appear automatically in the LibreChat UI once the `mcpServers` block is configured in `librechat.yaml`; no plugin installation or manual wiring is needed.
```yaml
# librechat.yaml — mcpServers section
mcpServers:
  xbrain:
    type: sse
    url: http://mcp-gateway:8080/mcp/aggregate
    headers:
      X-Team-Scope: "${TEAM_SCOPE}"

mcpSettings:
  allowedDomains:
    - "mcp-scraper"
    - "mcp-drive-read"
    - "mcp-calendar"
    - "mcp-deck"
```
> **Warning:** `mcpSettings.allowedDomains` is required to whitelist internal Docker hostnames. LibreChat v0.8.5+ blocks internal domains by default as an SSRF protection measure. Without this list, LibreChat will refuse to connect to any `http://mcp-*` endpoint, even within the Docker network.
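The guard described above amounts to a hostname allowlist. As an illustration only (this is not LibreChat's actual implementation), the check can be sketched as:

```python
from urllib.parse import urlparse

# Internal Docker hostnames that should be reachable over MCP,
# mirroring the allowedDomains list in librechat.yaml.
ALLOWED_DOMAINS = {"mcp-scraper", "mcp-drive-read", "mcp-calendar", "mcp-deck"}

def is_allowed(url: str, allowed: set[str] = ALLOWED_DOMAINS) -> bool:
    """Return True only if the URL's hostname is on the allowlist."""
    host = urlparse(url).hostname or ""
    return host in allowed

print(is_allowed("http://mcp-scraper:8080/sse"))   # True: whitelisted hostname
print(is_allowed("http://mcp-unknown:8080/sse"))   # False: not on the list
```

Any hostname not explicitly listed is rejected, which is why a missing `allowedDomains` entry makes the connection fail even inside the Docker network.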
Once connected, the following tools appear in the LibreChat tool selector (accessible via the paperclip icon in the message input):
- `scrape_url` — Fetch and extract text from any public URL
- `list_files` / `read_file` / `write_file` — Google Drive access
- `list_events` / `get_event` — Google Calendar (read-only)
- `create_deck` — Generate a PPTX presentation stored in MinIO
See the MCP Tools documentation for full details on each tool's parameters and usage examples.
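Under the hood, a tool invocation travels to the mcp-gateway as a JSON-RPC 2.0 `tools/call` request, as defined by the Model Context Protocol. The sketch below builds such a request for `scrape_url`; the argument name `url` is an assumption based on the tool description above.

```python
import json

def tool_call(call_id: int, name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request (JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# A user clicking scrape_url in the tool selector produces something like:
request = tool_call(1, "scrape_url", {"url": "https://example.com"})
print(request)
```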
## Open WebUI — Admin and Agent Testing
| Property | Value |
|---|---|
| Version | v0.9.0 |
| URL | https://your-domain/owui (nginx proxied at /owui) |
| Auth | Google OAuth SSO (same identity as LibreChat) |
| License | Custom (non-OSI since v0.6.6 — retain branding for >50 users/30 days) |
| Features | RAG testing, agent pipelines, truth-level approval UI |
| Memory integration | Via openwebui-pipeline (Pipelines API hook) |
Open WebUI is your admin and power-user frontend. Use it to approve truth-level promotion requests, test agent pipelines, and manage RAG configurations. While LibreChat is optimised for daily chat use, Open WebUI provides deeper control over how the AI processes and retrieves information.
> **License note:** Open WebUI has used a custom, non-OSI license since v0.6.6 (April 2025). For internal team deployments under 50 daily active users this is generally acceptable. If your deployment exceeds 50 users in any 30-day window, the Open WebUI branding (logo and name) must be retained in the UI. Review the Open WebUI license terms before deploying at scale.
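To check where you stand against the 30-day threshold, you can compute distinct users over a sliding window. The sketch below assumes you have aggregated login audit logs into a per-day set of user IDs (a hypothetical data shape, not an Open WebUI export format):

```python
from datetime import date, timedelta

def exceeds_threshold(logins: dict[date, set[str]], limit: int = 50) -> bool:
    """True if any 30-day window contains more than `limit` distinct users."""
    days = sorted(logins)
    for start in days:
        window = {user for d in days
                  if start <= d < start + timedelta(days=30)
                  for user in logins[d]}
        if len(window) > limit:
            return True
    return False

# 40 days of activity: a new user each day plus one recurring admin.
logins = {date(2025, 6, 1) + timedelta(days=i): {f"user{i}", "admin"}
          for i in range(40)}
print(exceeds_threshold(logins))  # False: no window exceeds 50 distinct users
```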
## Memory Sync Architecture
Both frontends write to memory-api, but via different integration paths. Knowing which path a message takes helps you debug sync issues and reason about each path's latency characteristics.
**Memory sync paths:**

```text
LibreChat ──► MongoDB ──────► librechat-bridge ─────► memory-api
              (stores msgs)   (tails collection,      (upsert with
                               detects new msgs,       full tagging)
                               extracts content)

Open WebUI ──► openwebui-pipeline ──────────────────► memory-api
               (Pipelines API hook fires              (upsert with
                on every message send)                 full tagging)
```
Every conversation from both frontends arrives at memory-api with full tagging. The bridge and pipeline add the mandatory fields if not present — `team_scope` is derived from the authenticated user's context (their Google OAuth identity mapped to their team membership).
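As a rough sketch, the tagging step both sync paths perform could look like the following. The field names other than `team_scope` (`source`, `truth_level`, `created_at`) are illustrative assumptions, not the exact tagging contract:

```python
from datetime import datetime, timezone

def ensure_tags(item: dict, team_scope: str, source: str) -> dict:
    """Fill in mandatory tags only where the item does not already have them."""
    tagged = dict(item)
    tagged.setdefault("team_scope", team_scope)    # from the user's OAuth identity
    tagged.setdefault("source", source)            # e.g. "librechat:conv_abc123"
    tagged.setdefault("truth_level", "EPHEMERAL")  # assumed: new items start at the bottom tier
    tagged.setdefault("created_at", datetime.now(timezone.utc).isoformat())
    return tagged

item = ensure_tags({"title": "Q2 Planning"},
                   team_scope="excalibur", source="librechat:conv_abc123")
print(item["team_scope"])  # excalibur
```

Using `setdefault` means a frontend that already supplies a tag (say, an explicit `truth_level`) is never overwritten by the sync layer.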
### librechat-bridge

The librechat-bridge is a Python sidecar that watches LibreChat's MongoDB collection for new messages. When a conversation completes, the bridge:
- Extracts the message content and metadata from MongoDB
- Looks up the user's `team_scope` via the memory-api identity endpoint
- Constructs a memory item with the full tagging contract
- POSTs the result to `memory-api` at `/v1/conversations`
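The transform step in the middle of that sequence can be sketched as a pure function. The MongoDB document shape below is an assumption for illustration; the real bridge tails the collection and POSTs the result to memory-api:

```python
def to_memory_item(mongo_doc: dict, team_scope: str) -> dict:
    """Convert a (hypothetical) LibreChat MongoDB doc into a memory item."""
    messages = [
        {"role": m["role"], "content": m["text"]}
        for m in mongo_doc.get("messages", [])
    ]
    return {
        "title": mongo_doc.get("title", "Untitled"),
        "source": f"librechat:{mongo_doc['conversationId']}",
        "team_scope": team_scope,  # looked up via the identity endpoint
        "messages": messages,
    }

doc = {"conversationId": "conv_abc123", "title": "Q2 Planning",
       "messages": [{"role": "user", "text": "What should our target be?"}]}
print(to_memory_item(doc, "excalibur")["source"])  # librechat:conv_abc123
```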
### openwebui-pipeline

The openwebui-pipeline uses Open WebUI's native Pipelines API. It intercepts every outgoing message and syncs it to memory-api in real time — unlike the bridge, there is no polling delay. The pipeline runs as a separate container and is registered with Open WebUI via the `OPENAI_API_BASE_URL` hook.
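Conceptually, the pipeline behaves like a pass-through filter that mirrors each message to memory-api. The sketch below is illustrative only: the method and field names are assumptions, not the exact Pipelines interface.

```python
class MemorySyncFilter:
    """Pass-through filter that mirrors each chat message to memory-api."""

    def __init__(self, post=None):
        # `post` is injected so the network call can be swapped out in tests;
        # the real pipeline would POST to memory-api here.
        self.post = post or (lambda url, body: None)

    def outlet(self, body: dict, user: dict) -> dict:
        item = {
            "team_scope": user["team_scope"],
            "source": f"openwebui:{body['chat_id']}",
            "messages": body["messages"],
        }
        self.post("http://memory-api/v1/conversations", item)  # real time, no polling
        return body  # the message itself flows through unchanged

sent = []
f = MemorySyncFilter(post=lambda url, item: sent.append(item))
f.outlet({"chat_id": "c1", "messages": [{"role": "user", "content": "hi"}]},
         {"team_scope": "excalibur"})
print(sent[0]["source"])  # openwebui:c1
```

Because the hook fires on every message send, a failed POST here is visible immediately, which makes the pipeline path easier to debug than the bridge's polling path.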
## Conversation API
Conversations can also be created directly via the API, bypassing both frontends. This is useful for importing existing conversations from other tools or for agent-generated conversations:
```bash
# Create a conversation via the API
curl -X POST https://api.grooveos.app/v1/conversations \
  -H "Authorization: Bearer $JWT" \
  -H "X-Team-Scope: excalibur" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Q2 Planning",
    "project_scope": "fundraising",
    "source": "librechat:conv_abc123",
    "messages": [
      {
        "role": "user",
        "content": "What should our fundraising target be for Q2?",
        "source_user_id": "google:108765432109876543"
      },
      {
        "role": "assistant",
        "content": "Based on our traction metrics...",
        "model": "claude-sonnet-4-6"
      }
    ]
  }'
```
## CANONICAL Facts in System Prompts

Every LibreChat conversation with Claude automatically includes the team's CANONICAL facts in the system prompt. The facts are retrieved from Qdrant, filtered by `team_scope` and `truth_level >= CANONICAL`, via the bridge's `system_prompt` endpoint — no user action is required.
CANONICAL facts are the highest tier of validated knowledge in xbrain. When your team has confirmed a fact (e.g. "Our Series A target is $3M" with truth_level=CANONICAL), that fact is automatically surfaced to Claude at the start of every conversation — without users needing to copy-paste or remember to mention it.
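The assembly of the injected prompt happens server-side in the bridge. Purely as an illustration (the actual prompt wording is not documented here), the step could look like:

```python
def build_system_prompt(facts: list[str],
                        base: str = "You are the team assistant.") -> str:
    """Append retrieved CANONICAL facts to a base system prompt."""
    if not facts:
        return base  # nothing validated yet: fall back to the plain prompt
    lines = "\n".join(f"- {fact}" for fact in facts)
    return f"{base}\n\nTeam facts (CANONICAL, treat as ground truth):\n{lines}"

prompt = build_system_prompt(["Our Series A target is $3M"])
print(prompt)
```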
> **Truth level hierarchy:** Facts flow upward through truth levels as they are validated: EPHEMERAL → WORKING → VALIDATED → CANONICAL → PUBLIC. Only CANONICAL and PUBLIC facts are injected into system prompts by default. Use Open WebUI's admin UI to promote facts from VALIDATED to CANONICAL after peer review.
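The ordering above determines both what gets injected and what promotions are legal. The single-step promotion rule in the sketch below (e.g. VALIDATED can only move to CANONICAL, not jump to PUBLIC) is an assumption drawn from the peer-review flow described above:

```python
LEVELS = ["EPHEMERAL", "WORKING", "VALIDATED", "CANONICAL", "PUBLIC"]

def injected_by_default(level: str) -> bool:
    """Only CANONICAL and PUBLIC facts reach system prompts by default."""
    return LEVELS.index(level) >= LEVELS.index("CANONICAL")

def can_promote(current: str, target: str) -> bool:
    """Assumed rule: promotion moves exactly one level upward at a time."""
    return LEVELS.index(target) == LEVELS.index(current) + 1

print(injected_by_default("VALIDATED"))       # False: one tier below the cutoff
print(can_promote("VALIDATED", "CANONICAL"))  # True: the peer-review promotion
```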
```bash
# Retrieve CANONICAL facts for system prompt injection.
# This is what librechat-bridge calls automatically before each conversation.
curl https://api.grooveos.app/v1/memory/search \
  -H "Authorization: Bearer $JWT" \
  -H "X-Team-Scope: excalibur" \
  -G \
  --data-urlencode "truth_level=CANONICAL" \
  --data-urlencode "limit=20"

# Returns the top 20 CANONICAL facts for the team,
# pre-formatted for system prompt injection
```