4.3 | AI use and the EU AI Act – In-depth for practical application
What you already know
- The EU AI Act affects not only developers but also users of AI
- Transparency, quality control, and data protection are important basic principles
- Certain AI-generated content must be labeled as such
- When using AI, you are responsible for the results
What you will learn in this module
- Understand the five main obligations for AI users in detail
- Practical wording assistance for labeling AI content
- How you can establish responsible AI use in your team
- Concrete solutions for typical use cases in daily work
1. Your Extended Responsibility with AI
As a Navigator, you have access to more AI functions and, consequently, more responsibility. You not only use the predefined assistants on the xpand platform but also potentially external tools like ChatGPT or specialized AI systems.
With the upcoming introduction of the EU AI Act, responsible AI use gains a legal dimension that directly affects you. Below, you will learn what this means specifically for your daily work.
Important for you as a user: Some transparency and training obligations also affect you directly.
2. The 5 Obligations in Detail
Obligation 1: Transparency
When AI systems interact with people, this must be transparent: users must know they are communicating with an AI, not with a human.
- Applies to: Chatbots, automated customer service systems, AI telephony
- Implementation: Clear labeling, e.g., "I am an AI assistant" at the beginning of the interaction
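Such a disclosure can also be enforced technically, for example by prepending a fixed notice to the first message of every conversation. A minimal sketch (the disclosure text and function name are illustrative assumptions, not part of any real xpand API):

```python
# Assumed disclosure wording -- adapt to your own compliance guidelines.
AI_DISCLOSURE = "I am an AI assistant. You are not chatting with a human."

def start_conversation(first_reply: str) -> str:
    """Prepend the mandatory AI disclosure to the opening message.

    Illustrative sketch only: a real chatbot would attach this once per
    session, not to every single message.
    """
    return f"{AI_DISCLOSURE}\n\n{first_reply}"
```

For example, `start_conversation("How can I help you today?")` yields a message that opens with the disclosure before the actual reply.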
Obligation 2: Labeling
Content created by AI that is not easily recognizable as such must be labeled.
- Applies to: Images, videos, audio, and sometimes texts—especially if they appear deceptively real
- Implementation: Clear labeling such as "This image was created with AI", or watermarks
Obligation 3: Human Control
For important decisions, AI can provide support, but it must not decide alone; the final judgment remains with a human.
- Applies to: Assessments, hiring decisions, resource allocations
- Implementation: "Human in the loop" approach - AI provides suggestions, the human makes the final decision
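The "human in the loop" principle can be made explicit in a workflow: an AI suggestion carries no effect until a named human signs it off. A minimal sketch under assumptions (class and method names are hypothetical, not a prescribed pattern):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISuggestion:
    """An AI-generated recommendation that must not take effect on its own."""
    content: str
    approved_by: Optional[str] = None  # name of the accountable human

    def approve(self, reviewer: str) -> None:
        """Record that a named human has reviewed the suggestion."""
        self.approved_by = reviewer

    def finalize(self) -> str:
        """Refuse to turn the suggestion into a decision without human sign-off."""
        if self.approved_by is None:
            raise PermissionError("No human approval recorded - AI may not decide alone.")
        return self.content
```

The design choice here is deliberate: the code makes skipping the human step an error, rather than relying on everyone remembering the guideline.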
Obligation 4: Competence Building
Employees must be adequately trained before using AI systems that can have an impact on others.
- Applies to: Everyone who regularly works with AI tools, especially in decision-making
- Implementation: Training, guidelines, continuous learning
Obligation 5: Taking Responsibility
You remain responsible for results created with AI support, including legally.
- Applies to: Any form of AI-generated content that you use or share
- Implementation: Quality control, fact-checking, and, if necessary, documentation of your review steps
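Documenting your review steps can be as simple as keeping a timestamped log of what was checked and by whom. A sketch under assumptions (the field names are illustrative, not a format prescribed by the EU AI Act):

```python
from datetime import datetime, timezone

def log_review_step(log: list, step: str, reviewer: str) -> dict:
    """Append a timestamped record of one quality-control step.

    Illustrative only: in practice this might feed a shared document
    or ticket system rather than an in-memory list.
    """
    entry = {
        "step": step,          # e.g. "fact-check" or "GDPR check"
        "reviewer": reviewer,  # the accountable human
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    log.append(entry)
    return entry
```

A log like this later lets you show which checks were performed on a given piece of AI-generated content.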
3. What Applies Specifically in Daily Work?
Example 1: You use ChatGPT or other external AI tools to formulate a customer email.
- This is generally permitted.
- But: you must review the text yourself, it must be GDPR-compliant, and extra caution is needed with sensitive content.
- With external tools, always ensure that no confidential information is entered.
Example 2: You use an entire AI-generated report unchanged for a customer presentation.
- Transparency is necessary: Label the AI support.
- Reviewing all facts and statements is your responsibility.
- Adjustments to specific customer needs should always be made by you.
Example 3: You create deceptively real images or videos with AI for marketing purposes.
- Obligation to label as AI-generated content.
- Legal responsibility for potential deception lies with you.
- Check whether the representation is compatible with company guidelines.
4. Wording Assistance for Labeling
For internal documents or emails:
- "This text was created with AI support and reviewed by [name]."
For external communication (e.g., customer information):
- "Parts of this document were created with the support of AI. All content has been reviewed by our team."
For media (e.g., intranet, flyers):
- "Image/video created with AI."
5. What Else is Important
- Never enter sensitive data into public AI tools, not even in a slightly modified form
- Always be aware of the difference between internal AI tools (xpand platform) and public services (ChatGPT, etc.)
- Follow regular updates on evolving regulations
- Develop internal guidelines for critical applications before the EU AI Act fully comes into force
6. xpand Tip
Our tip for practical use:
Establish an AI workflow in your team. Define clear processes: When is AI use appropriate? Who reviews the results? How is it documented? A structured approach creates security and saves time in the long run.
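The three questions in the tip above can be encoded as release gates that must all be confirmed before AI output leaves the team. A hypothetical sketch (gate names are assumptions, adapt them to your own process):

```python
# Hypothetical gates mirroring the workflow questions:
# When is AI use appropriate? Who reviews? How is it documented?
REQUIRED_GATES = (
    "use_case_appropriate",  # AI use was appropriate for this task
    "results_reviewed",      # a human reviewed the results
    "review_documented",     # the review was documented
)

def ready_for_release(answers: dict) -> bool:
    """Return True only if every gate in the workflow is explicitly confirmed."""
    return all(answers.get(gate) is True for gate in REQUIRED_GATES)
```

A missing or unanswered gate blocks release by default, which matches the "structured approach creates security" idea above.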
7. In Conclusion
The EU AI Act marks the beginning of a new era of AI regulation. As a Navigator, with your expanded knowledge, you are now in a position to actively shape this change, act as a role model, and help others use AI in a legally secure manner.
You are not alone. On the xpandAI platform, you will find templates, rules, and sparring partners for your confident daily use of AI.
Fact Sheet: "The EU AI Act Explained in 3 Minutes"
Download (PDF, 1.2 MB)
Interview: "10 Questions on the EU AI Act"
Read now
Your Takeaway
- The EU AI Act affects not only AI developers but also you as an advanced AI user
- You should observe the 5 main obligations—Transparency, Labeling, Human Control, Competence Building, and Taking Responsibility—in your daily work
- Clear labeling of AI content is a core principle of responsible AI use
- Establish a structured AI workflow in your team that already takes legal requirements into account