
Terminology

Don't let acronyms get in the way of progress!


AI (Artificial Intelligence): Technology that enables computers to perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving, and understanding language.

AI Model: A mathematical representation trained on data to make predictions or decisions without explicit programming for each task.

LLM (Large Language Model): A type of AI model trained on vast amounts of data to understand and generate human language.

Automation: A collection of prompts and user inputs that can be used to speed up repetitive tasks using AI.

Workflow: A series of two or more automations connected to one another; a workflow can also include an integration with third-party software such as Microsoft or ConnectWise.

Agent/Agentic: An AI system that can act autonomously to achieve goals, make decisions, and take actions based on its environment and objectives.
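
For illustration, here is a minimal sketch of an agentic loop in Python. Every name in it (run_agent, decide_next_action, the tools dictionary) is hypothetical; it only shows the general decide, act, and observe pattern, not how any particular platform implements agents.

```python
# Purely hypothetical sketch of an agentic loop: decide on an action,
# execute it, observe the result, repeat until the goal is reached.
def run_agent(goal, tools, llm, max_steps=10):
    observations = []
    for _ in range(max_steps):
        # The model chooses the next action from the goal plus what it
        # has observed so far (decide_next_action is a made-up method).
        action = llm.decide_next_action(goal, observations)
        if action.name == "finish":
            return action.result
        # Execute the chosen tool and feed the outcome back into the loop.
        result = tools[action.name](**action.arguments)
        observations.append((action, result))
    return "Stopped: step limit reached before the goal was completed."
```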

API (Application Programming Interface): A set of rules that allows different software applications to communicate with each other.
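
As a concrete but purely hypothetical example, the Python sketch below calls a fictional REST API with the widely used requests library. The base URL, endpoint, and API key are placeholders; the point is that the API defines the rules: which URL to call, what credentials to send, and what data comes back.

```python
import requests  # widely used HTTP client library

# Hypothetical endpoint and API key for illustration only; consult the
# documentation of the API you are integrating with for real values.
BASE_URL = "https://api.example.com/v1"
API_KEY = "your-api-key"

# The API defines the rules: which URL to call, which headers to send,
# and what shape of data comes back.
response = requests.get(
    f"{BASE_URL}/tickets",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()   # raise an error if the request failed
tickets = response.json()     # parsed JSON returned by the API
print(f"Retrieved {len(tickets)} tickets")
```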

Context Window: The amount of text an AI model can consider at once when generating responses or making predictions.

Token: A unit of text (typically a word or part of a word) that AI models process; models have limits on how many tokens they can handle at once.
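
To make tokens and the context window concrete, here is a small sketch that counts tokens with OpenAI's open-source tiktoken library. This assumes tiktoken is installed, and different models use different encodings, so the same text can produce different token counts on different models.

```python
import tiktoken  # OpenAI's open-source tokenizer library

# cl100k_base is one common encoding; other models use different encodings.
encoding = tiktoken.get_encoding("cl100k_base")

text = "Don't let acronyms get in the way of progress!"
tokens = encoding.encode(text)

print(tokens)       # the integer token IDs the model would actually process
print(len(tokens))  # how much of the context window this text consumes
```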

Credit: Hatz Credits are units of measure directly tied to AI usage in the Hatz AI platform. Each Hatz AI user is entitled to a set number of credits that can be shared with users across their tenant. Credits automatically reset at the start of the month.

Tenant: An organization or user with its own isolated instance of a software service or platform. Your tenant is your organization’s account.

RAG (Retrieval-Augmented Generation): A technique that enhances AI language models by combining text generation with information retrieval from external knowledge sources. Instead of relying solely on information contained in the model's parameters, RAG allows an AI to search through and incorporate relevant information from databases, documents, or other knowledge bases before generating a response.
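
The Python sketch below shows only the general RAG pattern; search_knowledge_base and generate_answer are hypothetical stand-ins for a real vector store and LLM client, not part of any specific product.

```python
# Hypothetical helpers: stand-ins for a real vector store and LLM client.
def search_knowledge_base(query: str, top_k: int = 3) -> list[str]:
    """Return the top_k most relevant document snippets for the query."""
    ...

def generate_answer(prompt: str) -> str:
    """Send a prompt to an LLM and return its response."""
    ...

def answer_with_rag(question: str) -> str:
    # 1. Retrieve: pull relevant snippets from an external knowledge source.
    snippets = search_knowledge_base(question)
    # 2. Augment: place the retrieved text into the prompt as context.
    context = "\n\n".join(snippets)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # 3. Generate: the model writes a response grounded in that context.
    return generate_answer(prompt)
```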

Prompt: An input text or instruction given to an AI model that guides what the model should generate or how it should respond. A prompt can range from a simple question to complex instructions with specific formatting requirements, examples, and context. The design and wording of a prompt significantly influence the quality, style, and content of the AI's response.
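
As a quick illustration (the scenario and wording are made up), compare a vague prompt with a more specific one:

```python
# A vague prompt leaves the model guessing about audience, tone, and length:
vague_prompt = "Write about our outage."

# A specific prompt adds a role, requirements, formatting, and context:
detailed_prompt = """You are an IT service desk assistant.
Write a brief client-facing update about the email outage described below.

Requirements:
- Three sentences or fewer
- Professional, reassuring tone
- End with the time of the next update

Context: Exchange Online has been down since 9:00 AM; the vendor estimates
restoration by noon."""
```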

Generative Pre-trained Transformer (GPT): A type of artificial intelligence (AI) that can understand and create human language. GPT is used in a variety of applications, including language translation, content creation, and computer code generation.

How GPT works

  • Generative: GPT can create new text, such as poems, scripts, and emails

  • Pre-trained: GPT is trained on large amounts of text data to learn human language patterns

  • Transformer: A neural network architecture that lets GPT weigh (attend to) different parts of the input text as it generates each new word
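
Below is a minimal sketch of calling a GPT-style model, using the public OpenAI Python SDK rather than the Hatz AI platform itself. It assumes the openai package is installed and an OPENAI_API_KEY environment variable is set; the model name is only an example.

```python
from openai import OpenAI  # official OpenAI Python SDK

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# The model generates new text (generative) from patterns learned during
# pre-training, using the transformer architecture to weigh every part of
# the prompt as it writes.
completion = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use a model you have access to
    messages=[
        {"role": "user", "content": "Draft a two-sentence welcome email for a new client."}
    ],
)
print(completion.choices[0].message.content)
```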

Model Context Protocol (MCP): An open protocol that standardizes how applications provide context and tools to LLMs. It allows you to extend the AI Agent's capabilities by connecting it to various data sources and tools through standardized interfaces.
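
As a rough sketch (assuming the official MCP Python SDK, published as the mcp package), a server can expose a tool that any MCP-aware agent can call. The tool below is a made-up stub, not part of the Hatz AI platform.

```python
# Minimal MCP server sketch, assuming the official Python SDK ("mcp" package).
# The tool below is a made-up stub, not part of any specific platform.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticket-tools")

@mcp.tool()
def count_open_tickets(client_name: str) -> int:
    """Return the number of open tickets for a client (stub for illustration)."""
    return 0  # a real server would query your ticketing or PSA system here

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio so an MCP-aware agent can call it
```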
