Orchestration hub

General Chat: multi-LLM control room

One thread for models, agents, AI workflows, knowledge (RAG), rules, skills, tools, and connections—with a transparent euro wallet and privacy controls before data ever reaches an external LLM.

Interactive preview

How the chat workspace feels

Sample inbox, thread, and composer with @ and / — the same interaction patterns as General Chat in the product (static demo, no account required).

Chats
Wallet: €127.84

Q2 launch checklist

Board · In progress · Try @ and / in the composer below

Task created from chat — participants synced.
Move legal review to this week and @Contract reviewer when ready.
Picked up. I will diff the MSA against last year's version and post findings here.

Sample data only — wallet tick simulates token debit. In the product this ties to your org budget.

General Chat capabilities

Choose & switch models in-session

The same familiar chat as consumer AI, but you decide which vendor runs the next turn. Swap between OpenAI, Anthropic, and Google models without losing thread context.

Live € wallet while you type

Watch spend climb as the model streams. Agents can charge a fixed or variable price per run; everything settles against one euro wallet, including optional top-ups.
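As an illustrative sketch only (class and rate names are hypothetical, not the product API), a per-run debit against a single euro wallet could look like this:

```python
# Illustrative sketch: a euro wallet that settles a fixed agent price
# per run plus a variable per-token model charge against one balance.
class EuroWallet:
    def __init__(self, balance_eur: float):
        self.balance_eur = balance_eur

    def debit(self, amount_eur: float) -> float:
        """Charge the wallet and return the remaining balance."""
        if amount_eur > self.balance_eur:
            raise ValueError("insufficient balance; top-up required")
        self.balance_eur = round(self.balance_eur - amount_eur, 4)
        return self.balance_eur

wallet = EuroWallet(10.00)
wallet.debit(0.50)                 # fixed agent price per run (hypothetical)
tokens, eur_per_1k = 1200, 0.002   # hypothetical model rate
wallet.debit(tokens / 1000 * eur_per_1k)
print(f"{wallet.balance_eur:.4f}") # 9.4976
```

The point is the settlement model: every charge, fixed or variable, lands on the same balance the user watches while typing.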

Agents, skills, tools & connections

Invoke standard or custom agents, attach skills and rules, and call tools—often backed by connections such as email or CRM. Draft, edit, and send email from the same conversation.

Schedule agent runs

Automate recurring work on your cadence: daily, weekly, monthly, or a custom schedule (available on Pro and Enterprise plans).
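A minimal sketch of how a recurring cadence resolves to a next run time (function and cadence names are hypothetical; real schedulers handle calendar months and time zones):

```python
# Illustrative sketch: compute when a scheduled agent should fire next.
from datetime import datetime, timedelta

def next_run(last_run: datetime, cadence: str) -> datetime:
    """Return the next run time for a simple recurring cadence."""
    deltas = {
        "daily": timedelta(days=1),
        "weekly": timedelta(weeks=1),
        "monthly": timedelta(days=30),  # simplified; not calendar-aware
    }
    return last_run + deltas[cadence]

print(next_run(datetime(2025, 6, 2, 9, 0), "weekly"))  # 2025-06-09 09:00:00
```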

Media in the same flow

Generate or iterate on images without leaving chat so creative and operational work stay in one timeline.

Core Value

Efficiency without surprises

General Chat pairs smart routing with an always-visible wallet so teams optimize cost per outcome—not only token volume—while staying in the same conversation from prompt to production.

  • Average API spend goes down through context reuse across turns and fewer duplicate calls.
  • Model routing keeps lightweight requests on cost-efficient models and escalates only when needed.
  • Agent calls share structured context from the same thread, reducing token-heavy re-prompting.
  • Usage is visible per run and per model so teams can optimize with evidence, not guesswork.
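The routing idea above can be sketched in a few lines. This is not the product's routing logic; the model names and the word-count heuristic are placeholders for "cheap tier" versus "frontier tier":

```python
# Illustrative sketch: keep short, simple prompts on a cost-efficient
# model and escalate long or explicitly flagged requests.
def route_model(prompt: str, needs_reasoning: bool = False) -> str:
    if needs_reasoning or len(prompt.split()) > 200:
        return "frontier-model"       # placeholder name
    return "cost-efficient-model"     # placeholder name

print(route_model("Summarize this paragraph."))                      # cost-efficient-model
print(route_model("Draft a merger analysis", needs_reasoning=True))  # frontier-model
```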

Always work in the same chat

Keep execution centralized from first prompt to final action. Agents remain callable at any point.

  • Start in one conversation and trigger specialized agents without switching tabs.
  • Agent outputs return into the same thread, so review and follow-up stay centralized.
  • Schedule recurring agent runs directly from chat for daily, weekly, or custom automation loops.
  • Anonymization controls remain available in-chat before sensitive data is sent to models.
  • Users keep one operational timeline for prompts, agent actions, and final outputs.

RAG and context optimization in-chat

Ground responses with your data, then continue with execution immediately in the same thread.

  • Users can attach documents or URLs and run grounded Q&A directly in chat.
  • RAG setup is user-friendly: add sources, ask questions, get traceable answers.
  • Context optimization keeps relevant retrieval snippets while avoiding oversized prompts.
  • Teams can combine RAG responses with immediate agent execution in the same flow.
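The context-optimization point can be sketched as a budgeted selection: keep the highest-scoring retrieval snippets that fit a token budget instead of stuffing everything into the prompt. The scores and the whitespace token estimate below are simplified placeholders, not the product's implementation:

```python
# Illustrative sketch: greedy selection of retrieval snippets under a budget.
def select_snippets(snippets, budget_tokens):
    """snippets: list of (score, text); returns texts kept within budget."""
    kept, used = [], 0
    for score, text in sorted(snippets, key=lambda s: -s[0]):
        cost = len(text.split())  # crude token estimate
        if used + cost <= budget_tokens:
            kept.append(text)
            used += cost
    return kept

docs = [(0.9, "clause on renewal terms"), (0.4, "boilerplate header"),
        (0.7, "pricing schedule details")]
print(select_snippets(docs, budget_tokens=7))
# ['clause on renewal terms', 'pricing schedule details']
```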
SS-20: General chat with always-on agent invocation and centralized context (1440 × 900 px)

Built-in guardrails while you work

Sensitive operations do not require separate compliance tooling. General Chat keeps protections in the same UX where teams already operate.

  • Inline anonymization before model calls when sensitive data is detected.
  • Traceable actions and outputs across prompts, retrieval, and agent invocations.
  • Single conversation timeline for reviews, approvals, and handoffs.
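Inline anonymization can be pictured as a redaction pass before the prompt leaves for an external model. This sketch uses two simple regex patterns as stand-ins; it is not the product's detector:

```python
# Illustrative sketch: redact obvious PII patterns before a model call.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def anonymize(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymize("Wire details: NL91ABNA0417164300, contact ann@example.com"))
# Wire details: [IBAN], contact [EMAIL]
```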

Why teams use General Chat

General Chat unifies your day-to-day AI operations. Instead of splitting work across separate tools for prompting, agent execution, and media generation, teams can run the full workflow in one conversation context with traceable outputs.