
Common Questions (MSPs / Admins)

FAQs and troubleshooting help for MSPs and Admins (if you log in via admin.hatz.ai)

These FAQs are for MSPs and Admins. If you access the Hatz AI platform through www.admin.hatz.ai, keep reading. If you are an end user, check out this FAQ article instead: Common Questions for End Users

What is Hatz AI?

Hatz AI helps MSPs build an AI-as-a-Service business with AI applications and agents. The platform is powered by an LLM Ops engine called Mido and includes multi-tenant management through an MSP admin dashboard. Hatz MSP partners can quickly integrate AI into their product offering by building specialized AI applications and workflows for their customers and offering organizationally managed AI assistants.


What products does Hatz AI offer?

Secure AI (Chat)

Secure AI is a comprehensive, organizationally managed AI platform that provides secure access to multiple large language models (LLMs) while ensuring data privacy and confidentiality. It offers a range of tools for businesses to leverage AI capabilities safely within their organizational structure.


AI App Builder (Workshop)

AI Workshop is a versatile suite of tools that allows users to create, customize, and deploy AI-powered automation solutions. It provides a user-friendly environment to use and manage AI-powered applications and automations, with customizable templates built for businesses of all sizes.


AI Phone Agent (ADEL)

ADEL, your AI Phone Agent, is an event-driven AI agent accessible over the phone. This customizable and intelligent agent includes pre-built templates, orchestration, 40+ human-like voices, and 10+ LLMs to choose from.


What are AI Models and LLMs?

An AI model is a computer program designed to perform tasks that usually require human intelligence, like writing text or understanding language. It learns how to do this by analyzing large amounts of data and finding patterns within it.

The Hatz AI platform offers access to 13 large language models (LLMs), all of which have slight differences from one another. Some LLMs might be better suited for reviewing or generating code, while others might be useful for blog posts or translating languages.


How do I become a Hatz AI partner?

Fill out the form on our website, www.hatz.ai


Is there a customer support email?

You can email us at help@hatz.ai


What if I have questions about Hatz AI?

Call ADEL, our AI Phone Agent, at 618-923-8447!

You can also use the Hatz AI FAQs App on the Community Apps page.


What's on the roadmap for Hatz AI?

Visit our roadmap at https://hatz-ai.canny.io/ to keep up with all the new features and products coming to the platform.


Can I request new features?

Yes, you can request new features here: https://hatz-ai.canny.io/feature-requests


Does Hatz AI have a changelog?

Our changelog can be found here: https://hatz-ai.canny.io/changelog and also on our LinkedIn page, https://www.linkedin.com/company/hatz-ai/


How is Hatz AI different from Microsoft Copilot or OpenAI?

Copilot

Unlike Copilot, which continuously queries and indexes data, our secure chat does not engage in any such activity. We store only the necessary data to facilitate efficient operations, and all stored data is anonymized and protected. Our system is designed to prevent any data from being used for AI training, ensuring your information remains private and secure.


OpenAI

This chat system is designed with a higher emphasis on security compared to OpenAI's standard offerings. Unlike OpenAI, where data might be used for model training, our system has strict firewalls in place, ensuring that your data is not used in any learning platform. It's locally stored and protected with multiple layers of security to ensure your privacy.


Is the chat secure?

Yes, the chat function is highly secure. We store only short message histories and settings, which are anonymized and protected with additional security features.


Where are my prompts and responses stored?

The data is locally stored and hosted on AWS, ensuring that it's safe from unauthorized access. Coming from a background in cybersecurity, we've built the system with security and privacy as top priorities.


Why is protecting your private data from AI important?

  • AI models improve by accessing large amounts of data. Many large language models have already been trained on vast public datasets, including content from sources like Wikipedia and Reddit.
  • However, to become even more specialized, these models seek access to private business data, which includes sensitive and proprietary information your organization owns. If this private data isn't properly protected and if AI systems are not explicitly opted out of using it for training, it could end up being used to improve these models without your consent.
  • This could lead to your intellectual property (IP) being leveraged to enhance AI models for other users, including competitors.
  • We take data protection seriously. We have strict agreements in place with all the AI models we use to ensure your data is not used for external training.
  • Additionally, we host certain models, such as open-source models like Llama3 and licensed models like Anthropic's Claude, within our own cloud environment, ensuring full control and security over your data.
  • Unlike providers like OpenAI, which only offer API access on their servers, our approach allows for greater data protection and privacy.

Why does secure AI matter?

  • Many companies today allow users to access AI tools that aren't managed by their organization, similar to using a free email service. Just as you wouldn't want to send proprietary information through a free email service like Gmail, you shouldn't rely on AI systems outside of your organization's control.
  • Unmanaged AI can potentially read, store, and learn from your data, which could then be used to generate content for others.
  • Secure AI ensures that your data stays within a system that your organization owns and controls, with no risk of it being used to train external models. This is exactly what our solution provides.

What is Least Privilege AI?

  • Least Privilege AI is an approach where we prioritize giving AI only the minimum, high-quality data it needs to perform effectively, rather than granting it access to all systems and information.
  • By restricting the AI's access, we not only optimize its performance but also enhance security by reducing the risk of data leakage. For instance, if an AI has access to all systems, it could unintentionally expose sensitive information, like a CEO's email or file library, to unauthorized users.
  • Our approach prevents this by ensuring that data is siloed and only accessible to specific users as needed, thereby protecting your organization from unintended data breaches.
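
To make this concrete, here is a minimal, purely illustrative sketch of the least-privilege idea: context is filtered by the requesting user's permissions before a prompt is built, so the model only ever sees the minimum data that user is entitled to. The function and field names below are hypothetical and are not part of the Hatz AI platform.

```python
# Purely illustrative sketch of Least Privilege AI: only documents the
# requesting user is allowed to see are included in the model's context.
# All names here are hypothetical, not part of the Hatz AI API.

from dataclasses import dataclass


@dataclass
class Document:
    owner: str
    allowed_users: set[str]
    text: str


def build_context(user: str, documents: list[Document], limit: int = 5) -> list[str]:
    """Return only the documents this user is permitted to access."""
    permitted = [d.text for d in documents if d.owner == user or user in d.allowed_users]
    return permitted[:limit]  # the minimum data needed, not everything available


def build_prompt(user: str, question: str, documents: list[Document]) -> str:
    context = "\n".join(build_context(user, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"


# Example: the CEO's private notes are never exposed to another user's prompt.
docs = [
    Document(owner="ceo", allowed_users={"ceo"}, text="Q3 acquisition notes"),
    Document(owner="support", allowed_users={"support", "alice"}, text="Password reset guide"),
]
print(build_prompt("alice", "How do I reset a password?", docs))
```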

Are there trainings or courses that I can take?

Yes, Hatz AI currently offers two training courses on the Hatz AI Talent LMS:


Fundamentals of AI

This course dives into the potential of AI applications in day-to-day problem-solving. You will gain an understanding of basic concepts that will help you practically use AI to accelerate your workflow and business.


Prompt Engineering 1

This course is an introduction to prompt engineering. You will also learn how to identify "use cases" and how to build AI Apps on the Hatz AI platform. By successfully completing this course, you will be able to build an AI App from scratch for your own company and for your clients.


More courses are coming soon, including Prompt Engineering 2 and AI-as-a-Service Go-To-Market.


How does Hatz AI work?

The platform is structured into three distinct layers:

  1. User Interaction Layer: This is where users engage with the platform via the chat UI, phone functions, or API integrations.
  2. Secure Data Storage Layer: A logical separation exists here, safeguarding sensitive information like chat histories and platform settings.
  3. LLM Layer: Below the secure storage, the operation engine powers the large language models (LLMs) and other external APIs.

The MIDO LLM Operations Engine facilitates communication and data transfer between these layers, ensuring that each component interacts securely and efficiently.
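
As a rough illustration of this layering (not the actual Hatz AI or MIDO implementation; all class and method names below are hypothetical), here is a minimal sketch of how a single request could flow from the user interaction layer, through secure storage, to the LLM layer under the coordination of an operations engine.

```python
# Purely illustrative sketch of the three-layer flow described above.
# Class and method names are hypothetical, not part of the Hatz AI platform.

class UserInteractionLayer:
    """Entry point: chat UI, phone, or API integration."""

    def receive(self, user_id: str, message: str) -> dict:
        return {"user_id": user_id, "message": message}


class SecureDataStorageLayer:
    """Logically separated storage for short, anonymized chat histories and settings."""

    def __init__(self) -> None:
        self._history: dict[str, list[str]] = {}

    def save(self, user_id: str, message: str) -> None:
        self._history.setdefault(user_id, []).append(message)

    def recent(self, user_id: str, limit: int = 5) -> list[str]:
        return self._history.get(user_id, [])[-limit:]


class LLMLayer:
    """Wraps the underlying large language models and external APIs."""

    def complete(self, prompt: str) -> str:
        return f"[model response to: {prompt!r}]"  # placeholder, no real model call


class OperationsEngine:
    """Coordinates the three layers, in the spirit of the MIDO engine described above."""

    def __init__(self) -> None:
        self.ui = UserInteractionLayer()
        self.storage = SecureDataStorageLayer()
        self.llm = LLMLayer()

    def handle(self, user_id: str, message: str) -> str:
        request = self.ui.receive(user_id, message)
        context = "\n".join(self.storage.recent(user_id))  # prior history only
        self.storage.save(user_id, request["message"])     # persist the new message
        return self.llm.complete(f"{context}\n{request['message']}".strip())


print(OperationsEngine().handle("user-1", "Summarize my last ticket"))
```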


Where can I find the Terms of Service for my company and the End User Terms of Service for my clients?
