Why Are My Outputs Different in Hatz AI?

If you've been using ChatGPT on OpenAI's website (openai.com), you might notice that outputs are different in Hatz AI.


Benefits of Using OpenAI LLMs on Hatz AI

Using the API offers flexibility and customization that let you tailor the AI's behavior to fit your needs. Because Hatz AI accesses OpenAI models through the API, you can tune the AI's responses to be just the way you want them.

This level of customization can be particularly beneficial for developing applications or services where specific behaviors or performance requirements are important.

By leveraging OpenAI's models via Hatz AI, you're not just using a chatbot—you're harnessing the power of AI to build solutions that are as dynamic and versatile as your imagination. It might take a bit of prompt engineering to get the outputs you want, but you have full control over the models.
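
As a rough illustration of that control, here is a minimal sketch using OpenAI's official Python SDK. The model name, prompts, and setup are assumptions for the example, not what Hatz AI necessarily uses; Hatz AI handles this plumbing for you behind its interface.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# The system prompt is the main lever of control the API exposes:
# it pins down tone, format, and behavior before the user says anything.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; use whichever model your workspace serves
    messages=[
        {
            "role": "system",
            "content": "You are a support assistant for an MSP. "
                       "Answer in two sentences or fewer and avoid jargon.",
        },
        {"role": "user", "content": "Why do my AI outputs vary between tools?"},
    ],
)
print(response.choices[0].message.content)
```

Changing nothing but the system prompt can noticeably shift the tone, length, and format of every answer.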

And of course, by using OpenAI models via Hatz AI, you avoid OpenAI training on or learning from your data: API traffic is excluded from model training by default.

What Causes Different Outputs?

There are several reasons why you might observe different results when using OpenAI LLMs in Hatz AI compared to using ChatGPT via OpenAI's website.

  1. Model Versions: Hatz AI accesses OpenAI models through the OpenAI API. OpenAI may deploy newer model versions on its website sooner than on the API, or serve different models to different interfaces. Check the specific model version you are using in both places.

  2. System Prompts and Custom Instructions: ChatGPT on OpenAI's website applies its own system prompt and default settings that you never see. These pre-set instructions, behavior settings, and user-level customizations influence how responses are generated, and they differ from whatever prompt is sent through the API.

  3. Built-in Randomness: AI models like GPT have a bit of built-in randomness. This means that even if you ask the same question multiple times, you might get slightly different answers each time. This randomness helps the model give varied and interesting responses. Because of this, even small differences in settings or conditions between using ChatGPT on the website and through the API can lead to different results.

  4. Temperature and Other Parameters: The temperature and other settings like max tokens, top_p, and frequency penalties might differ between the web interface and your API configuration. Differences in these parameters can significantly change the character of the model's responses, as shown in the first sketch after this list.

  5. Context Handling: The way context is managed, i.e., how past interactions are included in the current query, might differ. The chat interface on the website maintains conversation history for you, whereas over the API the history must be sent explicitly with each request (see the second sketch after this list).

  6. Rate Limits and Performance: Temporary performance issues or rate limits on either interface might also cause variations in responses if they result in timeouts or retries with different contexts.

  7. User Interface Enhancements: OpenAI's website interface might include additional layers of user interaction design, post-processing, or interpretation that do not translate directly when accessing the model through a raw API.

  8. Customization and Personalization: Over time, the chat interface may develop user-specific enhancements or adaptations based on usage history that aren't present in the API unless specifically programmed.
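
To make points 3 and 4 concrete, here is a hedged sketch using OpenAI's official Python SDK; the model name, question, and parameter values are illustrative assumptions rather than what Hatz AI actually uses. It asks the same question twice at a typical temperature, then twice at temperature 0 with a fixed seed:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def ask(question: str, temperature: float, seed: int | None = None) -> str:
    """One chat completion with the sampling parameters made explicit."""
    response = client.chat.completions.create(
        model="gpt-4o",           # illustrative; match the model you are comparing
        temperature=temperature,  # higher values mean more random sampling
        top_p=1.0,
        frequency_penalty=0.0,
        seed=seed,                # best-effort reproducibility, not a hard guarantee
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

q = "Write a one-line tagline for a managed IT provider."

# Two identical requests at a typical temperature usually differ.
print(ask(q, temperature=1.0))
print(ask(q, temperature=1.0))

# At temperature 0 with a fixed seed, outputs are far more stable.
print(ask(q, temperature=0.0, seed=42))
print(ask(q, temperature=0.0, seed=42))
```

Note that OpenAI documents seed-based reproducibility as best-effort, so some variation can remain even with identical settings.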
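
Point 5 is easiest to see in code. The API is stateless, so the conversation history the model sees is exactly what the caller resends on each turn. Another sketch under the same assumptions as above:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# The API is stateless: the model only "remembers" whatever you resend.
# A chat UI manages this history for you; over the API, the caller decides
# how many past turns to include, which changes what the model sees.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "My name is Dana."},
    {"role": "assistant", "content": "Nice to meet you, Dana!"},
]
history.append({"role": "user", "content": "What's my name?"})

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice
    messages=history,
)
print(response.choices[0].message.content)  # answers "Dana" only because that turn was resent
```

Drop the earlier turns from `history` and the model can no longer answer the question; differences in how much history each interface keeps produce exactly this kind of divergence.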
