Auto Mode turns “which model should I use?” into a non-question. PANTA OS routes each request to the right model, picks the right tools, and decides when a single answer needs to become a workflow — without you choosing.
Why it matters
No model paralysis
GPT or Claude? Fast or deep? Auto Mode decides per request based on the task.
No wrong tool
Web search for fresh facts; knowledge base for internal questions; integrations when action is needed.
No manual escalation
A simple question stays a chat. A complex one becomes a workflow — Auto Mode notices the difference.
Cost-aware by default
Cheap models do cheap jobs. Expensive ones come out only when warranted.
How it decides
Read the task
Auto Mode classifies the request: factual lookup, creative drafting, analysis, action, multi-step plan.
Match a model
Speed-first for chat and voice; depth-first for analysis and long content; specialist models for media.
Bring the right tools
Auto Mode opens web search when the question references “today”, connects your inbox when it mentions “email me”, and grounds answers in the knowledge base when the question uses internal terms.
Escalate when needed
If the task needs multiple steps, Auto Mode hands off to a conversational workflow seamlessly.
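The four steps above amount to a routing decision. Here is a minimal sketch of that flow in Python — every name in it (route, the keyword cues, the model tiers) is invented for illustration and is not a PANTA OS API; the real classifier is far more capable than keyword matching.

```python
# Hypothetical sketch of Auto Mode's four routing steps.
# All names and heuristics here are illustrative, not PANTA OS APIs.

def route(request: str) -> dict:
    text = request.lower()

    # 1. Read the task: classify the request.
    if any(w in text for w in ("plan", "then", "step by step")):
        task = "multi-step"
    elif any(w in text for w in ("analyze", "compare")):
        task = "analysis"
    else:
        task = "chat"

    # 2. Match a model: speed-first for chat, depth-first otherwise.
    model = {"chat": "fast", "analysis": "deep", "multi-step": "deep"}[task]

    # 3. Bring the right tools, based on cues in the request.
    tools = []
    if "today" in text:
        tools.append("web_search")
    if "email me" in text:
        tools.append("inbox")

    # 4. Escalate when needed: multi-step tasks become a workflow.
    return {"task": task, "model": model, "tools": tools,
            "workflow": task == "multi-step"}

print(route("Compare Q3 revenue and email me a summary today"))
```

Run on that example, the sketch picks a depth-first model, opens web search and the inbox integration, and stays in chat because the task is a single analysis, not a plan.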
When to override
Auto Mode is smart, not psychic. You can always pin a model or disable a tool for a specific assistant or chat:
Pin a model
Force a specific model for an assistant where the choice matters.
Disable a tool
Turn off web search for assistants that should answer only from your KB.
Force a workflow
Skip the chat and jump straight into a multi-step pipeline.
Tighten the budget
Cap the model class an assistant is allowed to use, regardless of task.
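To make the four overrides concrete, here is a hedged sketch of how they might be expressed as per-assistant settings — the field names (pinned_model, disabled_tools, and so on) are invented for illustration and are not PANTA OS configuration keys.

```python
# Hypothetical per-assistant override settings; all field names are
# invented for illustration and are not PANTA OS configuration keys.
support_assistant = {
    "pinned_model": "claude",          # pin a model: skip auto-selection
    "disabled_tools": ["web_search"],  # disable a tool: answer from KB only
    "force_workflow": False,           # force a workflow: skip chat entirely
    "max_model_class": "standard",     # tighten the budget: cap model class
}

def pick_model(settings: dict, auto_choice: str) -> str:
    # A pinned model always wins over Auto Mode's per-request choice.
    return settings.get("pinned_model") or auto_choice

print(pick_model(support_assistant, "gpt-fast"))  # pinned model wins
```

The point of the sketch is the precedence: explicit overrides are checked first, and Auto Mode's per-request decision only applies where no override is set.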
