MCP Tools

Phase 3–4 · 10 min read

xbrain includes four built-in MCP (Model Context Protocol) tools that extend AI capabilities beyond pure text generation. These tools let your team's AI assistants browse the web, read and write Google Drive files, check calendars, and generate PowerPoint presentations — all team-scoped and fully logged in the audit trail.

The MCP Gateway

xbrain includes a central mcp-gateway that acts as a registry and proxy for all MCP tools. Every tool call passes through the gateway, which injects team_scope and user_id, and logs every call to the audit trail. No tool call can bypass the gateway — this is where authentication and team isolation are enforced for the MCP layer.
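The enforcement flow above can be sketched as a small wrapper: inject identity into the parameters, forward to the backend tool, and append an audit record whatever the outcome. This is a minimal illustration, not the gateway's actual internals; the names (`gateway_call`, `AUDIT_LOG`, the `backend` callable) are assumptions.

```python
# Sketch of the gateway's per-call enforcement. AUDIT_LOG stands in for the
# audit_log table in PostgreSQL; `backend` stands in for the HTTP forward to
# a tool service. All names here are illustrative.
from datetime import datetime, timezone

AUDIT_LOG = []

def gateway_call(tool, params, *, user_id, team_scope, backend):
    """Inject identity, forward to the backend tool, and log the call."""
    enriched = {**params, "user_id": user_id, "team_scope": team_scope}
    status = "success"
    try:
        result = backend(tool, enriched)
    except Exception:
        status = "error"
        result = None
    # Every call is logged, success or not -- nothing bypasses this step.
    AUDIT_LOG.append({
        "action": f"mcp.{tool}",
        "user_id": user_id,
        "team_scope": team_scope,
        "result_status": status,
        "created_at": datetime.now(timezone.utc).isoformat(),
    })
    return result

# Usage with a fake backend that echoes its enriched input:
result = gateway_call("scrape_url", {"url": "https://example.com"},
                      user_id="u1", team_scope="excalibur",
                      backend=lambda tool, p: {"ok": True, "params": p})
```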

Architecture — MCP gateway routing

LibreChat ──► /mcp/aggregate (SSE) ──► mcp-gateway (8080) ──► mcp-scraper (8100)
                                                              ├──► mcp-drive-read (8101)
                                                              ├──► mcp-calendar (8102)
                                                              └──► mcp-deck (8200)

agent-runtime ──► mcp_gateway_client.py ──────► mcp-gateway (8080) ──► [same tools]

The gateway aggregates all tools into a single SSE endpoint (/mcp/aggregate) that LibreChat connects to. Each tool call is authenticated and team-scoped before being forwarded to the appropriate backend tool service. The agent-runtime uses the same gateway via the mcp_gateway_client.py Python client.
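A client along the lines of mcp_gateway_client.py mainly has to build an authenticated, team-scoped POST to the gateway's call endpoint. The sketch below shows only request construction; the class name, `build_call` method, and the separation of build/send are assumptions, not the real client's API.

```python
# Hypothetical sketch of a gateway client: builds the URL, auth headers, and
# JSON body for a /v1/tools/call request. Sending (HTTP POST) is omitted.
import json

class GatewayClient:
    def __init__(self, base_url, jwt, team_scope):
        self.base_url = base_url.rstrip("/")
        self.jwt = jwt
        self.team_scope = team_scope

    def build_call(self, tool, params):
        """Return (url, headers, body) for a tool call via the gateway."""
        url = f"{self.base_url}/v1/tools/call"
        headers = {
            "Authorization": f"Bearer {self.jwt}",
            "X-Team-Scope": self.team_scope,
            "Content-Type": "application/json",
        }
        body = json.dumps({"tool": tool, "params": params})
        return url, headers, body

client = GatewayClient("http://mcp-gateway:8080", "token123", "excalibur")
url, headers, body = client.build_call("scrape_url", {"url": "https://example.com"})
```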

Single Connection Point

LibreChat only needs to know about one URL: http://mcp-gateway:8080/mcp/aggregate. All tools are discovered automatically via the MCP tool listing protocol. When you add a new tool to the gateway, it appears in LibreChat without any reconfiguration.

Registering Tools

Tools are registered with the gateway at startup via the register-mcp-tools.sh script. This script runs as part of the Docker Compose startup sequence via the mcp-tools-registrar service.

bash — register-mcp-tools.sh (called at startup)

# Register all MCP tools with the gateway
bash infrastructure/scripts/register-mcp-tools.sh

# Each tool registration call looks like this:
curl -X POST http://mcp-gateway:8080/v1/tools/register \
  -H "Authorization: Bearer $GATEWAY_JWT" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "mcp-scraper",
    "url": "http://mcp-scraper:8100",
    "team_scope": "excalibur"
  }'

# Verify registered tools:
curl http://mcp-gateway:8080/v1/tools \
  -H "Authorization: Bearer $JWT" \
  -H "X-Team-Scope: excalibur"
# Returns list of all registered tools with their schemas

Tool 1: mcp-scraper — Web Scraping

Service: mcp-scraper
Port: 8100
Tool name: scrape_url
Input: url (string), max_chars (int, default 5000)
Output: extracted text content of the webpage
Phase introduced: Phase 3

mcp-scraper lets the AI fetch and read any public webpage. The tool retrieves the page, strips HTML markup, and returns clean text. This is useful for researching competitors, pulling in documentation, or summarising articles — without leaving the LibreChat interface.
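The strip-and-truncate step can be illustrated with the stdlib html.parser; the real service's extraction pipeline may differ, and `extract_text` is an illustrative name, not the service's code.

```python
# Sketch of the kind of extraction scrape_url performs: drop tags (and the
# contents of <script>/<style>), keep visible text, truncate to max_chars.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0  # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def extract_text(html, max_chars=5000):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)[:max_chars]

page = "<html><head><style>p{color:red}</style></head><body><h1>Title</h1><p>Body text.</p></body></html>"
print(extract_text(page, max_chars=3000))  # → Title Body text.
```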

Usage from LibreChat

In LibreChat, simply ask Claude or GPT-4 to fetch a URL. The AI will automatically invoke the scrape_url tool:

"Scrape https://openai.com/research and summarise the latest papers for me"

Direct API Call

bash — call scrape_url via the gateway

curl -X POST http://mcp-gateway:8080/v1/tools/call \
  -H "Authorization: Bearer $JWT" \
  -H "X-Team-Scope: excalibur" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "scrape_url",
    "params": {
      "url": "https://example.com/article",
      "max_chars": 3000
    }
  }'

# Response:
# {
#   "result": "Article text content here...",
#   "url": "https://example.com/article",
#   "chars_extracted": 2847,
#   "audit_id": "audit_abc123"
# }

Tool 2: mcp-drive-read — Google Drive

Service: mcp-drive-read
Port: 8101
Tools: list_files, read_file, write_file
Auth: team OAuth credentials (stored encrypted in PostgreSQL)
Encryption: Fernet symmetric encryption for stored OAuth tokens
Phase introduced: Phase 3

mcp-drive-read provides bidirectional access to Google Drive. The AI can list files in a folder, read their content, and optionally write files back. OAuth credentials are stored per-team and encrypted using Fernet encryption — they are never exposed in logs or API responses.
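The Fernet scheme works as sketched below, using the `cryptography` package that provides Fernet. How xbrain manages the key is not shown here; generating it inline is purely for illustration, and the token value is made up.

```python
# Sketch of Fernet encrypt/decrypt for stored OAuth tokens. In xbrain the key
# would come from secret configuration, not be generated per call.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

oauth_token = b'{"access_token": "<hypothetical>", "refresh_token": "<hypothetical>"}'
ciphertext = f.encrypt(oauth_token)   # this opaque token is what lands in PostgreSQL
restored = f.decrypt(ciphertext)      # decrypted only at call time, never logged
```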

Usage from LibreChat

"Read the content of our Q2 roadmap from Drive and give me a summary"

Direct API Calls

json — list_files tool call body

{
  "tool": "list_files",
  "params": {
    "folder_id": "1ABC...xyz",
    "max_results": 20
  }
}

# Response:
# {
#   "files": [
#     {"id": "1DEF...uvw", "name": "Q2 Roadmap.docx", "mimeType": "..."},
#     {"id": "1GHI...rst", "name": "Budget v3.xlsx", "mimeType": "..."}
#   ]
# }
json — read_file tool call body

{
  "tool": "read_file",
  "params": {
    "file_id": "1DEF...uvw"
  }
}

# Response:
# {
#   "content": "# Q2 Roadmap\n\n## Goal: 3x MRR by June...",
#   "filename": "Q2 Roadmap.docx",
#   "mimeType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
# }
json — write_file tool call body

{
  "tool": "write_file",
  "params": {
    "folder_id": "1ABC...xyz",
    "filename": "ai-summary.md",
    "content": "# AI-Generated Summary\n\nKey takeaways from the Q2 roadmap review...",
    "mimeType": "text/markdown"
  }
}

# Returns: {"file_id": "1JKL...mno", "url": "https://drive.google.com/..."}

Write Operations Require Opt-In

write_file requires explicit confirmation from the user when invoked through LibreChat: the AI presents the proposed write action before executing it. All write operations are logged in the audit trail regardless of how they are invoked.
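The gating logic amounts to refusing write tools until the user has approved the action. The sketch below is a minimal illustration under that assumption; `ConfirmationRequired`, `WRITE_TOOLS`, and the `confirmed` flag are hypothetical names, not the gateway's actual API.

```python
# Hypothetical sketch of a write gate: write tools raise until confirmed,
# read tools pass straight through.
class ConfirmationRequired(Exception):
    pass

WRITE_TOOLS = {"write_file"}

def maybe_execute(tool, params, confirmed=False):
    """Execute a tool call, blocking unconfirmed writes."""
    if tool in WRITE_TOOLS and not confirmed:
        raise ConfirmationRequired(f"{tool} needs explicit user approval")
    return {"status": "executed", "tool": tool}
```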

Tool 3: mcp-calendar — Google Calendar

Service: mcp-calendar
Port: 8102
Tools: list_events, get_event
Auth: same OAuth credentials as mcp-drive-read (shared team credentials)
Scope: read-only (https://www.googleapis.com/auth/calendar.readonly)
Phase introduced: Phase 3

mcp-calendar gives the AI read access to the team's Google Calendar. The tool is intentionally read-only — the AI can list and describe events, but cannot create, modify, or delete them. This matches the principle of least privilege: the AI should be able to reason about your schedule without being able to change it.

Usage from LibreChat

"What meetings do we have scheduled this week? Are there any conflicts?"

Direct API Calls

json — list_events tool call body

{
  "tool": "list_events",
  "params": {
    "calendar_id": "primary",
    "time_min": "2026-05-06T00:00:00Z",
    "time_max": "2026-05-13T00:00:00Z",
    "max_results": 10
  }
}

# Response:
# {
#   "events": [
#     {
#       "id": "evt_abc123",
#       "summary": "Q2 Planning Kickoff",
#       "start": "2026-05-07T10:00:00+02:00",
#       "end": "2026-05-07T11:00:00+02:00",
#       "attendees": ["alice@team.com", "bob@team.com"]
#     }
#   ]
# }
json — get_event tool call body

{
  "tool": "get_event",
  "params": {
    "calendar_id": "primary",
    "event_id": "evt_abc123"
  }
}

# Returns full event details including description, conferencing links,
# attendee RSVP status, and recurring event information

Tool 4: mcp-deck — PPTX Generator

Service: mcp-deck
Port: 8200
Tools: create_deck, update_deck
Library: python-pptx (PPTX generation)
Storage: MinIO (S3-compatible object storage) — file URL returned
Memory integration: generated deck indexed in memory-api with team tagging
Phase introduced: Phase 4

mcp-deck generates PowerPoint presentations from structured content. Give it a title, a list of slides with headings and bullet points, and a template name — it returns a downloadable .pptx file stored in MinIO. The generated deck is automatically indexed in xbrain memory with the team's full tagging contract, so it becomes part of the team's searchable knowledge base.

Usage from LibreChat

"Generate a 5-slide deck for our fundraising pitch. Cover the problem, our solution, traction, team, and ask."

Direct API Call

json — create_deck tool call body

{
  "tool": "create_deck",
  "params": {
    "title": "Q2 Fundraising Pitch",
    "project_scope": "fundraising",
    "template": "professional",
    "slides": [
      {
        "title": "The Problem",
        "content": "Teams lose critical knowledge in Slack threads, email chains, and one-off conversations. Every new hire starts from zero.",
        "layout": "content"
      },
      {
        "title": "Our Solution",
        "content": "xbrain gives teams a shared, persistent AI memory layer — searchable, scoped by team, verified by truth level.",
        "layout": "content"
      },
      {
        "title": "Traction",
        "content": "25 containers live, 5 phases shipped, 4 MCP tools, Drive + Calendar + Chrome extension integrated.",
        "layout": "content"
      },
      {
        "title": "The Team",
        "content": "Experienced founders with background in AI infrastructure and enterprise SaaS.",
        "layout": "content"
      },
      {
        "title": "The Ask",
        "content": "$3M Series A at $15M pre-money. 18 months runway to reach 50 enterprise teams.",
        "layout": "content"
      }
    ]
  }
}

# Response:
# {
#   "url": "https://minio.internal/decks/deck_a1b2c3.pptx",
#   "filename": "Q2_Fundraising_Pitch.pptx",
#   "slides": 5,
#   "memory_id": "mem_xyz789",
#   "audit_id": "audit_def456"
# }

The generated deck is stored in MinIO and automatically indexed in xbrain memory with the team's tagging contract. The memory_id in the response can be used to retrieve or update the deck's metadata later.

Available Templates

mcp-deck ships with three built-in templates: professional (dark blue, serif headings), minimal (white background, clean sans-serif), and technical (code-friendly, monospace accents). Custom templates can be added by mounting a templates/ directory into the mcp-deck container.

Audit Trail

Every tool call through the mcp-gateway is logged to the audit_log table in PostgreSQL. Logs include the tool name, calling user, team scope, sanitised parameters (secrets are redacted), and the result status. This audit trail is permanent and cannot be deleted via the API.
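Parameter sanitisation amounts to redacting secret-looking keys before the params JSON is stored. The sketch below is illustrative: the exact redaction key list and the `sanitise` function name are assumptions, not the gateway's actual code.

```python
# Sketch of param sanitisation before audit storage: redact known-sensitive
# keys, recursing into nested objects. The key list is an assumption.
SENSITIVE_KEYS = {"token", "access_token", "refresh_token", "password", "authorization"}

def sanitise(params):
    """Return a copy of params with secret-looking values redacted."""
    clean = {}
    for key, value in params.items():
        if key.lower() in SENSITIVE_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, dict):
            clean[key] = sanitise(value)
        else:
            clean[key] = value
    return clean

print(sanitise({"url": "https://example.com", "access_token": "secret"}))
# → {'url': 'https://example.com', 'access_token': '[REDACTED]'}
```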

sql — query recent MCP tool calls for a team

SELECT
  al.id,
  al.action,          -- e.g. 'mcp.scrape_url', 'mcp.write_file'
  al.user_id,
  al.team_scope,
  al.params,          -- sanitised JSON (secrets redacted)
  al.result_status,   -- 'success', 'error', 'timeout'
  al.created_at
FROM audit_log al
WHERE al.action LIKE 'mcp.%'
  AND al.team_scope = 'excalibur'
ORDER BY al.created_at DESC
LIMIT 50;

-- Returns: tool name, user_id, team_scope, params (sanitised), timestamp

The audit log is also used for usage accounting. You can query it to see which tools your team uses most frequently, or to investigate a specific tool call by its audit_id returned in the tool response.

sql — look up a specific tool call by audit_id

SELECT * FROM audit_log
WHERE id = 'audit_abc123'
  AND team_scope = 'excalibur';  -- team_scope always required for isolation

Audit Log Retention

By default, audit logs are retained indefinitely. You can configure a retention period via the AUDIT_LOG_RETENTION_DAYS environment variable in memory-api. Logs older than the retention period are archived to MinIO before deletion, so they remain accessible for compliance purposes.
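The archive-then-delete sweep implied by AUDIT_LOG_RETENTION_DAYS can be sketched as below. The `sweep` function, the row shape, and the `archive` callable (standing in for a MinIO upload) are illustrative assumptions.

```python
# Sketch of a retention sweep: rows older than the cutoff are archived
# (e.g. uploaded to MinIO) and dropped; newer rows are kept.
from datetime import datetime, timedelta, timezone

def sweep(rows, retention_days, archive):
    """Archive and drop rows older than retention_days; return kept rows."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    kept = []
    for row in rows:
        if row["created_at"] < cutoff:
            archive(row)  # must succeed before the row is dropped
        else:
            kept.append(row)
    return kept

archived = []
old = {"id": "a1", "created_at": datetime.now(timezone.utc) - timedelta(days=400)}
new = {"id": "a2", "created_at": datetime.now(timezone.utc) - timedelta(days=5)}
kept = sweep([old, new], retention_days=365, archive=archived.append)
```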