Auto LLM: The Smartest Way to Use AI

Stop guessing which model to use. Let Auto handle it for you.

Updated this week

Auto removes the guesswork from AI. Instead of choosing a model yourself, Auto analyzes each message you send and routes it to the best model for the job -- automatically, instantly, and using only the credits the task actually requires.


The problem Auto solves

There are a lot of AI models available on the platform. Each has different strengths -- some are fast, some are powerful, some are built for code, others for creative work or image generation. Picking the right one for every task is time-consuming, and picking the wrong one means you're either burning through credits unnecessarily or getting a worse result than you could have gotten.

Auto eliminates that tradeoff entirely.


Get started

  1. Open any conversation

  2. Select Auto from the model picker

  3. Start typing

No setup, no configuration. The right model, every message.


How it works

Select Auto from the model picker. From that point on, every message you send is analyzed in real time. The system looks at what you're asking, how complex it is, and what tools or files are involved, then routes your message to the model best suited for that specific request.

This happens per-message. Within a single conversation, a quick follow-up question might go to a fast, lightweight model, while a detailed coding request in the very next message gets routed to a heavyweight. You don't have to do anything -- it just works.
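
Conceptually, per-message routing behaves like a small classifier sitting in front of the model pool. The sketch below is purely illustrative Python -- the model names, signals, and thresholds are invented for the example, and the platform's actual routing algorithm is not public.

```python
# Illustrative sketch of per-message routing. Model names and
# heuristics are invented; the real routing algorithm is not public.

def route(message: str, has_files: bool = False, wants_image: bool = False) -> str:
    """Pick a model for a single message, independent of conversation history."""
    if wants_image:
        return "image-model"        # dedicated image generation
    if has_files or "def " in message:
        return "heavyweight-model"  # code or document analysis
    if len(message.split()) < 15:
        return "lightweight-model"  # quick follow-up questions
    return "balanced-model"         # everything else

# Each call is independent: a short follow-up right after a heavy
# request still goes to the fast model.
print(route("What does this acronym mean?"))             # lightweight-model
print(route("Review this spreadsheet", has_files=True))  # heavyweight-model
```

The point of the sketch is the statelessness: nothing about the previous message influences the next routing decision, which is why a heavyweight request and a quick follow-up can land on different models back to back.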


What makes it worth using

Use fewer credits without compromising quality. Auto matches each task to a model that delivers a great result -- and nothing more expensive than necessary. Simple questions go to efficient models, so you stop spending premium credits on tasks that don't need premium capability. The quality stays high; the credit usage goes down.

Better answers. Each task gets matched to a model that's actually good at that kind of work -- whether it's writing code, generating images, analyzing data, or answering a quick question.

Speed where it matters. Lightweight requests get fast models. You're not waiting three seconds for "what does this acronym mean?" just because you had a powerful model selected from an earlier task.

Zero maintenance. As new models are added and the routing improves over time, you benefit automatically. No need to stay current on which model is best for what.


Full feature support

Auto isn't limited to basic chat. It's aware of your full context:

  • Image generation -- requests to create, draw, or design visuals are routed to dedicated image models

  • Code execution -- when code interpreter is active, Auto selects models optimized for technical work

  • File attachments -- documents, spreadsheets, and images are factored into model selection

  • Vision -- conversations involving images route to models that can see and analyze them

  • Web search and URLs -- attached links and search tools influence the routing decision
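
One way to picture these context signals is as a small feature set gathered per message before the routing decision. This is an illustrative sketch only -- the field names and the vision check below are invented, not the platform's real signal set.

```python
# Illustrative: context signals a router might gather for one message.
# Field names are invented; the platform's real signal set is not public.

from dataclasses import dataclass, field

@dataclass
class MessageContext:
    text: str
    attachments: list[str] = field(default_factory=list)  # files and images
    code_interpreter: bool = False
    urls: list[str] = field(default_factory=list)

def needs_vision(ctx: MessageContext) -> bool:
    """Image attachments push routing toward models that can see."""
    return any(a.lower().endswith((".png", ".jpg", ".jpeg")) for a in ctx.attachments)

ctx = MessageContext("What's in this chart?", attachments=["q3-revenue.png"])
print(needs_vision(ctx))  # True
```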


Admin controls

Auto respects your organization's policies. If your admin has disabled certain models, Auto will never route to them. It finds the next best available model at the same quality tier instead.

This works at every level of the admin hierarchy:

  • Tenant admins control which models are available for their organization

  • MSP admins can set model policies across all managed tenants

Auto always operates within these boundaries. Admin decisions are final -- Auto works with what's been approved, never around it. If a model gets disabled after you've been using it, Auto adapts silently and routes to the next best option. No errors, no interruptions.


Flexibility

Auto doesn't lock you in. You can:

  • Switch away at any time -- change to a specific model mid-conversation whenever you want

  • Switch back -- return to Auto on the next message and it picks up right where it should

  • Mix and match -- use Auto for most messages and manually select a model when you have a preference

Every message is independent. There's no commitment, no state to reset.


Credit usage

There's no extra credit charge for using Auto. You spend the same credits as you would if you selected the routed model yourself. The difference is that Auto stretches your credits further by not defaulting to the most expensive model when a more efficient one handles the task just as well. Same great results, fewer credits used.
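
As a worked example with invented credit prices: suppose a premium model costs 10 credits per message and an efficient one costs 2, and 7 of your 10 messages are simple enough for the efficient model to handle just as well. Routing those 7 away from premium cuts the total spend roughly in half.

```python
# Worked example with invented prices: premium = 10 credits/message,
# efficient = 2 credits/message. Auto sends simple messages to the
# efficient model instead of defaulting everything to premium.

PREMIUM, EFFICIENT = 10, 2
messages = 10
simple = 7  # messages the efficient model handles just as well

always_premium = messages * PREMIUM
with_auto = simple * EFFICIENT + (messages - simple) * PREMIUM

print(always_premium)  # 100
print(with_auto)       # 44
```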


Limitations

  • Per-message routing only -- you cannot configure Auto to prefer specific models or learn from your usage patterns; each message is evaluated independently based on platform algorithms

  • No visibility into routing decisions in real-time -- the system does not display which model was selected for a given message in the chat interface (this information may be available in conversation logs depending on your plan)

  • Requires multiple enabled models to be effective -- if your admin has disabled most models, Auto's optimization capability is limited to the remaining available options

  • Respects admin restrictions without override -- if the ideal model for your task has been disabled by your admin, Auto will route to the next best available option, which may not be optimal

  • No manual routing preferences -- users cannot set rules like "always use Model X for code" or "never route to Model Y"; routing logic is managed by the platform

  • Routing adds minimal processing time -- while negligible for most tasks, the routing decision does add a small amount of latency compared to direct model selection


Troubleshooting

"I'm not getting the results I expected from Auto"

Try switching to a specific model manually to compare results. If the issue persists, contact Support with details about your task, including what you asked and what outcome you expected.

"Auto seems slower than selecting a model directly"

Routing adds minimal overhead (typically under a second). Most delays are due to the selected model's processing time, not the routing itself. Complex tasks take longer regardless of whether you use Auto or select manually.


When to pick a model manually

Auto works for the vast majority of tasks, but you might prefer manual selection if:

  • You have a strong preference for a particular model's tone or style

  • You want to test-drive a specific model that was recently added

  • You're doing very rapid, simple back-and-forth and want to skip the small routing overhead


Limitations

  • Per-message routing only -- you cannot configure Auto to prefer specific models or learn from your usage patterns; each message is evaluated independently based on platform algorithms

  • No visibility into routing decisions in real-time -- the system does not display which model was selected for a given message in the chat interface (this information may be available in conversation logs depending on your plan)

  • Requires multiple enabled models to be effective -- if your admin has disabled most models, Auto's optimization capability is limited to the remaining available options

  • Respects admin restrictions without override -- if the ideal model for your task has been disabled by your admin, Auto will route to the next best available option, which may not be optimal

  • No manual routing preferences -- users cannot set rules like "always use Model X for code" or "never route to Model Y"; routing logic is managed by the platform

  • Routing adds minimal processing time -- while negligible for most tasks, the routing decision does add a small amount of latency compared to direct model selection


Common questions

"How do I know which model Auto selected?"

The chat interface does not display which model was selected for a given message. Depending on your plan, this information may be available in conversation logs -- ask your admin or contact Support if you need it.

"Auto isn't choosing the model I would have picked"

Auto prioritizes task fit and credit efficiency over user preference. If you consistently prefer a specific model's style or output, consider selecting that model manually instead of using Auto.

"Can I see Auto's routing logic or decision criteria?"

No. The routing algorithm is managed by the platform and is not user-configurable. If you need transparency into model selection for compliance or auditing purposes, contact your admin or Support.
