2.1 | Understanding terms
To use Artificial Intelligence safely and confidently in your daily work, you don’t need detailed technical knowledge, just a solid basic understanding.
In this interactive module, you will learn the most important terms you will repeatedly encounter when working with AI.
Your learning objective: understand the key AI terms step by step and relate them to your own work. Short, understandable, directly applicable.
The most important AI terms at a glance
Here you will find the 13 key terms that will help you better understand the world of AI. Click through the terms to learn more.
LLM (Large Language Model)
An AI model that has been trained on large amounts of text and can generate human-like text. The basis for ChatGPT and other language assistants.
Example: ChatGPT uses an LLM to answer your questions or create texts. The model has "read" billions of texts and can thus understand and respond to a wide variety of requests.
Prompt Engineering
The art of formulating precise instructions (prompts) for AI systems to get exactly the results you need.
Example: Instead of "Write something about meetings," you formulate: "Create a structured agenda for a 60-minute team meeting with 5 participants on the topic of quarterly results, including time specifications."
Hallucinations
When AI models output false, misleading, or fabricated information as facts, it is called "hallucinations."
Example: ChatGPT can provide convincing but false details or non-existent sources. An AI model could, for example, cite a fictitious law or reference non-existent scientific studies.
RAG (Retrieval-Augmented Generation)
A method where AI models are connected to specific data sources to provide more precise and fact-based answers.
Example: An AI assistant in your company can be connected to your internal knowledge system, allowing it to access current company documents, guidelines, or project data.
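If you would like to see the idea behind RAG in a simplified form, the following Python sketch illustrates the pattern: documents are turned into simple word-count vectors, the document most similar to a question is retrieved, and its text is inserted into the prompt for the language model. The documents and the question are invented for illustration; real systems use proper embeddings and a vector database.

# Minimal, purely illustrative RAG sketch (not production code).
import math
import re
from collections import Counter

documents = [
    "Travel expenses must be submitted within 30 days using the expense form.",
    "The quarterly report is due two weeks after the end of each quarter.",
    "Working from home requires prior approval from your team lead.",
]

def vectorize(text):
    """Very rough stand-in for real embeddings: count the lowercase words."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    """Similarity between two word-count vectors (higher = more overlap)."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

question = "When do I have to hand in my travel expenses?"
q_vec = vectorize(question)

# Retrieval step: find the document most similar to the question.
best_doc = max(documents, key=lambda d: cosine_similarity(q_vec, vectorize(d)))

# Augmentation step: place the retrieved text into the prompt that would
# be sent to the language model, so the answer is grounded in that text.
prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
print(prompt)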
Bias
Systematic distortions in AI models that can arise from unbalanced training data or societal prejudices.
Example: An AI system for personnel selection might favor certain applicant groups because it was trained on historical data that already contained inequalities.
Tokens
The basic units into which a text is broken down for AI processing. A token can be a word, part of a word, or a punctuation mark.
Example: The sentence "Hello, how are you?" is broken down into about 6 to 7 tokens, depending on the tokenizer. The number of tokens determines how much text an AI model can process at once.
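If you are curious how a text is actually split, the following small Python sketch counts the tokens of the example sentence with the tiktoken package (OpenAI's open-source tokenizer library; it has to be installed first, and the exact split depends on the encoding a model uses).

# Illustrative sketch using the tiktoken package.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
tokens = encoding.encode("Hello, how are you?")

print(len(tokens))                             # how many tokens the model processes
print([encoding.decode([t]) for t in tokens])  # the piece of text behind each token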
Fine-Tuning
The process of further training a pre-trained AI model with specific data to optimize it for particular tasks.
Example: A general language model can be fine-tuned to the specific terminology and communication style of your company.
Embeddings
Numerical representations of text that capture meaning and context. They help AI systems understand how similar different texts are.
Example: In the embedding space, the words „King“ and „Queen“ are closer together than „King“ and „Apple“ because they are semantically more related.
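As a small illustration, the following Python sketch uses the sentence-transformers package (assumed to be installed; the model is downloaded on first use) to compare how close these words are in embedding space.

# Illustrative sketch using the sentence-transformers package.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, widely used embedding model
vectors = model.encode(["King", "Queen", "Apple"])

# Cosine similarity: values closer to 1 mean the meanings are more related.
print(util.cos_sim(vectors[0], vectors[1]))  # King vs. Queen: relatively high
print(util.cos_sim(vectors[0], vectors[2]))  # King vs. Apple: noticeably lower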
Multimodal AI
AI systems that can process and understand multiple types of data (text, images, audio) simultaneously.
Example: Google Gemini or GPT-4 can analyze images as well as generate text – you can, for example, upload a photo and ask questions about it.
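As an illustration of how such a combined request can look, the following Python sketch sends a text question together with an image to a multimodal model via the OpenAI client; the model name is one example, the image URL is a placeholder, and an API key is assumed to be configured.

# Illustrative sketch: text and image in one request via the OpenAI client.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # a multimodal model that accepts text and images
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is written on this flip chart?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/flipchart.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)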
AI Agents
Autonomous AI systems that can independently plan and execute complex tasks, often with access to tools and services.
Example: An AI agent can search for flights for you, compare prices, analyze hotel reviews, and then suggest a travel plan.
Inference
The process by which an AI model makes predictions or generates content based on its training.
Example: When you ask ChatGPT a question, inference takes place – the system uses its training to generate the most likely appropriate answer.
Chain-of-Thought
A prompt technique where the AI is asked to explain its thinking step-by-step to achieve better results.
Example: Instead of just "Solve this math problem," you say "Solve this math problem and explain each step of your solution in detail."
Temperature Setting
A parameter that controls the creativity and randomness of AI responses. Higher values lead to more creative but potentially less precise answers.
Example: At a low temperature (0.2), you will almost always get similar, precise answers to the same question. At a high temperature (0.8), the answers vary more and are more creative.
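In practice, temperature is simply a parameter that is passed along with a request. The following Python sketch shows this with the OpenAI client as one example (an API key is assumed to be configured); other providers offer a comparable setting.

# Illustrative sketch: setting the temperature of a request.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0.2,  # low value: consistent, precise answers; try 0.8 for more variety
    messages=[{"role": "user", "content": "Suggest three names for our internal newsletter."}],
)
print(response.choices[0].message.content)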
What do these terms mean for your daily work?
The terms you’ve learned are not just theoretical knowledge. Here you will find out how they become relevant in practice:
Prompt Engineering helps you get more precise results from AI systems and save time.
Understanding hallucinations protects you from passing on AI-generated misinformation.
Knowledge of RAG shows you how to connect AI with your own company knowledge.
Awareness of bias enables you to recognize and correct bias in AI results.
Practical Application: Where do you encounter these terms?
When briefing an AI assistant, you use Prompt Engineering
When quality checking AI texts, you watch out for hallucinations
When you integrate your own data, you use RAG technologies
With critical decisions, you are vigilant about possible bias
Reflection Questions
Take a moment to think about the following questions:
1. Which two or three terms seem particularly important for your understanding of AI? How are they related?
2. Where could you specifically apply the concept of 'Prompt Engineering' in your daily work?
3. What risks do you see regarding 'hallucinations' and 'bias' in your work context?