
Auto Model Selection: The Smartest Way to Use AI

Stop guessing which model to use. Let Auto handle it for you.


Auto removes the guesswork from AI. Instead of choosing a model yourself, Auto analyzes each message you send and routes it to the best model for the job -- automatically, instantly, and using only the credits the task actually requires.


The problem Auto solves

There are a lot of AI models available on the platform. Each has different strengths -- some are fast, some are powerful, some are built for code, others for creative work or image generation. Picking the right one for every task is time-consuming, and picking the wrong one means you're either burning through credits unnecessarily or getting a worse result than you could have.

Auto eliminates that tradeoff entirely.


Get started

  1. Navigate to the Model Selector

  2. Toggle on Auto

  3. Pick Lite, Performance, or Turbo

  4. Start typing

Auto then picks the right model for each message, within the Auto mode you chose. The mode applies only while Auto is on, and you can change it anytime from the same menu.


How it works

Select Auto from the model selector, then choose your Auto mode (Lite, Performance, or Turbo).


With Auto on, pick how the system should balance speed, cost, and capability:

  • Lite — lightweight and fast for routine questions

  • Performance — smart routing that picks the right model for each task (default)

  • Turbo — maximum intelligence for complex, high-stakes work
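As a rough mental model, each Auto mode acts as a ceiling on how capable (and how expensive) a model the router may choose. The sketch below is purely illustrative: the model names, tiers, and mapping are invented, not the platform's actual catalog or logic.

```python
# Hypothetical sketch: an Auto mode as a ceiling on model capability.
# Model names and tiers are invented for illustration only.

MODE_CEILING = {"lite": 1, "performance": 2, "turbo": 3}

MODELS = [
    {"name": "swift-mini", "tier": 1},   # fast, cheap
    {"name": "balanced-1", "tier": 2},   # general purpose
    {"name": "deep-max", "tier": 3},     # most capable, most credits
]

def allowed_models(mode: str) -> list[str]:
    """Return the models whose tier fits under the chosen Auto mode."""
    ceiling = MODE_CEILING[mode]
    return [m["name"] for m in MODELS if m["tier"] <= ceiling]

print(allowed_models("lite"))    # only the lightweight tier
print(allowed_models("turbo"))   # everything is on the table
```

In this picture, Turbo does not force the biggest model for every message; it simply allows the router to reach for one when the task warrants it.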

From that point on, every message you send is analyzed in real time. The system looks at what you're asking, how complex it is, and what tools or files are involved, then routes your message to the model best suited for that specific request.

This happens per-message. Within a single conversation, a quick follow-up question might go to a fast, lightweight model, while a detailed coding request in the very next message gets routed to a heavyweight. You don't have to do anything -- it just works.
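The per-message behavior described above can be sketched as a toy router. The complexity heuristic, keywords, and model names here are hypothetical stand-ins, not the platform's actual analysis:

```python
# Toy per-message router. Heuristic and model names are illustrative only.

def estimate_complexity(message: str, has_code: bool = False) -> int:
    """Crude complexity score: longer messages and coding hints score higher."""
    score = 1
    if len(message) > 200:
        score += 1
    if has_code or any(k in message.lower() for k in ("refactor", "debug", "implement")):
        score += 1
    return score

def route(message: str, has_code: bool = False) -> str:
    """Pick a model per message -- a follow-up and a coding request can differ."""
    score = estimate_complexity(message, has_code)
    if score == 1:
        return "fast-light"    # quick follow-ups
    if score == 2:
        return "general"       # typical requests
    return "heavyweight"       # detailed technical work

print(route("What does this acronym mean?"))               # fast-light
print(route("Please refactor this module... " * 20, True)) # heavyweight
```

Because the decision is recomputed for every message, two consecutive messages in one conversation can land on very different models.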


What makes it worth using

Use fewer credits without compromising quality. Auto matches each task to a model that delivers a great result -- and nothing more expensive than necessary. Simple questions go to efficient models, so you stop spending premium credits on tasks that don't need premium capability. The quality stays high; the credit usage goes down.

Better answers. Each task gets matched to a model that's actually good at that kind of work -- whether it's writing code, generating images, analyzing data, or answering a quick question.

Speed where it matters. Lightweight requests get fast models. You're not waiting three seconds for "what does this acronym mean?" just because you had a powerful model selected from an earlier task.

Zero maintenance. As new models are added and the routing improves over time, you benefit automatically. No need to stay current on which model is best for what.


Full feature support

Auto isn't limited to basic chat. It's aware of your full context:

  • Image generation -- requests to create, draw, or design visuals are routed to dedicated image models

  • Code execution -- when code interpreter is active, Auto selects models optimized for technical work

  • File attachments -- documents, spreadsheets, and images are factored into model selection

  • Vision -- conversations involving images route to models that can see and analyze them

  • Web search and URLs -- attached links and search tools influence the routing decision
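One way to picture how these context signals shape routing: required capabilities filter the candidate models before any ranking happens. The capability names and catalog below are illustrative, not the platform's real model list.

```python
# Hypothetical sketch of capability-aware routing: context signals
# (attachments, tools) narrow the candidates before selection.

CATALOG = {
    "fast-light": {"vision": False, "image_gen": False},
    "vision-pro": {"vision": True, "image_gen": False},
    "artist": {"vision": False, "image_gen": True},
}

def candidates(needs_vision: bool = False, needs_image_gen: bool = False) -> list[str]:
    """Keep only models that support every capability the message requires."""
    out = []
    for name, caps in CATALOG.items():
        if needs_vision and not caps["vision"]:
            continue
        if needs_image_gen and not caps["image_gen"]:
            continue
        out.append(name)
    return out

print(candidates(needs_vision=True))   # image attached -> vision models only
print(candidates())                    # plain chat -> any model qualifies
```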


Admin controls

Auto respects your organization's policies. If your admin has disabled certain models, Auto will never route to them. It finds the next best available model at the same quality tier instead.

This works at every level of the admin hierarchy:

  • Tenant admins control which models are available for their organization

  • MSP admins can set model policies across all managed tenants

Auto always operates within these boundaries. Admin decisions are final -- Auto works with what's been approved, never around it. If a model gets disabled after you've been using it, Auto adapts silently and routes to the next best option. No errors, no interruptions.


Flexibility

Auto doesn't lock you in. You can:

  • Switch away at any time -- change to a specific model mid-conversation whenever you want

  • Switch back -- return to Auto on the next message and it picks up right where it should

  • Mix and match -- use Auto for most messages and manually select a model when you have a preference

Every message is independent. There's no commitment, no state to reset.


Credit usage

There's no extra credit charge for using Auto. You spend the same credits as you would if you selected the routed model yourself. The difference is that Auto stretches your credits further by not defaulting to the most expensive model when a more efficient one handles the task just as well. Same great results, fewer credits used.
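A back-of-the-envelope illustration of why this stretches credits; the credit amounts and the task mix below are made up for the arithmetic only:

```python
# Illustrative arithmetic only: credit costs and the task mix are invented.
# Auto charges the routed model's normal rate; savings come from not
# sending every task to the most expensive model.

PREMIUM_COST = 10  # credits per message on a top-tier model
LIGHT_COST = 1     # credits per message on an efficient model

tasks = ["simple"] * 8 + ["complex"] * 2  # a mixed session of ten messages

always_premium = len(tasks) * PREMIUM_COST
with_auto = sum(LIGHT_COST if t == "simple" else PREMIUM_COST for t in tasks)

print(always_premium)  # 100 credits if everything goes to the premium model
print(with_auto)       # 28 credits with routing -- hard tasks still go premium
```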


Limitations

  • Per-message routing only -- you cannot configure Auto to prefer specific models or learn from your usage patterns; each message is evaluated independently based on platform algorithms

  • No real-time visibility into routing decisions -- the system does not display which model was selected for a given message in the chat interface (this information may be available in conversation logs depending on your plan)

  • Requires multiple enabled models to be effective -- if your admin has disabled most models, Auto's optimization capability is limited to the remaining available options

  • Respects admin restrictions without override -- if the ideal model for your task has been disabled by your admin, Auto will route to the next best available option, which may not be optimal

  • No manual routing preferences -- users cannot set rules like "always use Model X for code" or "never route to Model Y"; routing logic is managed by the platform

  • Routing adds minimal processing time -- while negligible for most tasks, the routing decision does add a small amount of latency compared to direct model selection


Troubleshooting

"I'm not getting the results I expected from Auto"

Try switching to a specific model manually to compare results. If the issue persists, contact Support with details about your task, including what you asked and what outcome you expected.

"Auto seems slower than selecting a model directly"

Routing adds minimal overhead (typically under a second). Most delays are due to the selected model's processing time, not the routing itself. Complex tasks take longer regardless of whether you use Auto or select manually.

"A mode is greyed out or missing"

Admins control whether Auto is allowed at all and which modes are available to an organization; upstream MSP policies can restrict the set further. If a mode is not allowed for your role or organization, the UI may show it as disabled with "Not available for your role."

If Auto is fully turned off by policy, the Auto toggle/mode UI is hidden so you are not stuck in an invalid state.

FAQ: Auto Model Selection & Auto Modes

What is "Auto" in the model picker?

Auto means you're not locking the chat to one AI model. Instead, the product chooses which model handles each message based on what you're asking and your Auto mode (Lite, Performance, or Turbo). Think of it as "smart assignment" instead of "always the same assistant."


What is an Auto mode?

An Auto mode is your preference for how much capability the system can use when you have Auto turned on.

  • Lite - Best when you want quick, lighter answers for everyday questions.

  • Performance - The balanced setting: it tries to match the right level of help to each question.

  • Turbo - Best when you want the system to be allowed to pull out more power for harder or more important work.


Do I have to pick a mode?

If you turn on Auto, you should see Lite, Performance, and Turbo. If you do nothing, Performance is the default, a good everyday choice. You can change the mode anytime.


What is the difference between Lite, Performance, and Turbo?

  • Lite -- "Keep it easy and fast." Good for short questions, quick drafts, and simple lookups. Tradeoff: often faster and lighter.

  • Performance -- "Figure out what this needs." Good for most day-to-day work. Tradeoff: balanced.

  • Turbo -- "Pull out all the stops when it's hard." Good for long analysis, tricky problems, and high-stakes writing. Tradeoff: often more thorough; may use more of your credits on hard tasks.


Is Auto the same as "the best model"?

Auto means the best for that message, within your mode and your organization's rules. There is no single model that is best for every sentence—Auto is built around that idea.


Will Auto pick a different model for every message?

It can. Simple follow-ups might stay on a lighter path; a big new request might get a stronger one. You do not need to manage that yourself unless you want to.


If I use Auto, can I still trust the answers?

Auto only changes which engine answers you—not whether the product follows your organization's settings. Answers can still be wrong sometimes (any AI can). For critical decisions, check important facts and use Turbo or pin a specific model when you want a fixed behavior.


What does "pin a specific model" mean?

Pinning (or choosing a named model from the list instead of Auto) means every message uses that model until you change it. People often pin a model when they want predictable behavior or are comparing two models side by side.


When should I use Lite?

Use Lite when:

  • You want fast replies.

  • The task is simple (short email, quick summary, basic rewrite).

  • You're doing a lot of small back-and-forth and don't need deep reasoning on every line.


When should I use Performance?

Use Performance when:

  • You want a good default and aren't sure what to pick.

  • Your day mixes easy and harder questions.

  • You want Auto to scale up when needed without always defaulting to "maximum."


When should I use Turbo?

Use Turbo when:

  • The task is hard (multi-step analysis, nuanced judgment, dense documents).

  • The outcome matters a lot (client-facing, legal-ish tone, careful reasoning—still verify facts).

  • You tried Performance and the answers felt too shallow for that specific task.


Will Turbo always give better answers than Lite?

Not always. For a very easy question, Turbo might not sound "smarter"—the question might just not need depth. Turbo matters most when the task is genuinely difficult or you want more room for thorough reasoning.


Does Auto or the mode affect my privacy?

Your organization's privacy and data rules apply whether you use Auto, a mode, or a fixed model. Auto does not mean "more people see your chats" by itself—it is about which AI model processes the text. If you need specifics, ask your admin or read your company's AI policy.


Do modes change what the AI "knows" about me?

Modes mainly change how strong a model can be used, not your personal memory or saved preferences by themselves. If your workspace has features like memory or custom instructions, those follow your account settings, not the Auto mode name.


What happens to my credits or usage?

Simple picture: stronger models often cost more credits than lighter ones. Lite tends to stay on the lighter side; Turbo allows the system to use more capability when the task needs it, which can use more credits on harder work. Exact usage still depends on how long your messages are, attachments, and tools (search, connectors, etc.), not just the mode.


If I switch from Lite to Turbo mid-chat, what happens?

Usually new messages follow the new mode. Earlier messages in the thread were already answered under the old setting; the product does not "re-run" your whole history automatically when you flip the switch.


Where do I set my default mode for new chats?

In Settings (often under AI or Personal default LLM), you can set your default Auto mode for new chats. The model menu in chat often updates the same preference so everything stays in sync—check your product's exact labels.


Why is one of the modes greyed out or says it is not available?

Common reasons:

  • Your company or team only allows certain modes.

  • Your role has a restriction.

  • A parent organization (e.g. a partner managing many companies) set a limit.

If something is greyed out, pick an available mode or ask your IT / admin. This is not something you fix in "advanced settings" as a normal user.


What if Auto is missing entirely?

Then Auto model selection may be turned off for your organization, or not available in that screen yet. Use a named model from the list or ask your admin whether Auto is enabled for your account.


Can I use Auto with files or images?

Generally yes—attachments still go with your message. The system still has to pick a model that can handle what you sent (for example, reading an image). If something fails, try a fixed model that supports images, or ask support with a screenshot of the error.


What about agents or automations?

Some saved agents or automated workflows can have their own Auto settings. Rule of thumb: if an agent or workflow was set up with a specific mode, that setup can override your personal default for that agent or run. If answers feel "too light" or "too heavy," check the agent's settings (if you have access) or ask whoever built the agent.


The answer felt wrong for my mode - what should I do?

Try in this order:

  1. Rewrite the question more clearly (goal, audience, length, format).

  2. Switch mode (e.g. Performance → Turbo for one hard message).

  3. Pick a specific model from the list for a while.

  4. If it keeps happening, contact support with: what you asked, which mode, and whether you had files or tools on.


Is Auto slower than picking one model?

Sometimes Lite feels snappier because the system may use a lighter path. Turbo might feel slower on hard tasks because more reasoning can take longer—like asking for a longer, deeper answer from a person.


Can I use Auto for sensitive topics?

Treat Auto like any AI: do not paste secrets (passwords, full card numbers, classified data) unless your organization explicitly allows it. Use company guidance for regulated or HR/legal topics. Modes do not remove the need for good judgment.


Quick "which should I pick?" cheat sheet

  • Most days: Performance + Auto on.

  • Lots of small, quick questions: Lite.

  • Hard document or big decision support: Turbo (and still verify facts).

  • I need the same behavior every time: turn Auto off and pick a named model.


