Interview: 10 Questions on the EU AI Act

The EU AI Act at a Glance: What You Should Know Now


What the new EU AI Act means for companies—and how to prepare your employees.

The EU AI Act is changing the rules for dealing with AI—including for daily use in the workplace. In this compact overview, you will learn what requirements companies now face, what opportunities arise, and how to achieve a responsible implementation.

In-depth knowledge, clearly explained

We have thoroughly researched and sought legal advice to compile the most important facts and recommendations for you. The goal is to give you guidance: understandable, practical, and clearly structured.

In focus:

  • What the EU AI Act means for users
  • Which obligations are relevant for companies
  • How you can prepare strategically
  • Where transparency and responsibility are required


The EU AI Act introduces a risk-based regulatory framework for AI in Europe. Companies that develop AI systems (providers) or use them (operators, called "deployers" in the Act's English text) must meet specific requirements depending on the risk level of their systems. These include, among other things, transparency obligations, risk management, documentation requirements, and user training.

The AI Act thus creates for the first time a binding, uniform set of rules for trustworthy AI in the EU.

The EU AI Act distinguishes four risk classes:

  • Prohibited AI practices (e.g., manipulative systems or social scoring)
  • High-risk AI systems (e.g., applicant selection, credit scoring, biometric identification)
  • Limited risk with transparency obligations (e.g., chatbots)
  • Minimal risk – these systems can be used freely

The higher the risk, the stricter the requirements. The risk classification determines the testing and training obligations.
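To make the classification step concrete, here is a minimal Python sketch of how an internal AI inventory could record this triage. The example systems and their mapping are illustrative assumptions, not an official classification; the actual assessment must follow Article 6 and Annex III case by case.

```python
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

# Hypothetical inventory: system name -> assumed risk class.
# The real classification must be assessed per Art. 6 / Annex III.
AI_INVENTORY = {
    "social-scoring engine": RiskClass.PROHIBITED,
    "CV screening for hiring": RiskClass.HIGH,
    "customer-service chatbot": RiskClass.LIMITED,
    "spam filter": RiskClass.MINIMAL,
}

for system, risk in AI_INVENTORY.items():
    print(f"{system}: {risk.value}")
```

An inventory of this kind mainly serves to make visible which systems trigger which obligations.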

Providers must carry out comprehensive conformity assessment procedures, establish a risk management system and technical documentation, and ensure ongoing monitoring (Art. 16–25).

Operators, i.e., users of AI, are obliged to use systems in accordance with the instructions for use, train employees (Art. 29), and document and critically review results.

Both roles have responsibilities—but at different levels.

Classification follows Article 6 and Annex III of the AI Act, which list precisely defined use cases, for example in HR, education, justice, and public administration.

In addition, if an AI is part of an already regulated product (e.g., a medical device), it is automatically high-risk.

Companies should therefore check their systems early on for the intended purpose and context—if necessary, with legal or technical support.

That depends on the risk class. For high-risk AI, the requirements include, among other things:

  • Establishment of a risk management system (Art. 9)
  • Ensuring data quality (Art. 10)
  • Logging (Art. 12)
  • Transparency & user information (Art. 13)
  • Human oversight (Art. 14)
  • Technical robustness (Art. 15)

In addition, employee training and an internal quality management (QM) system must be in place.
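As an illustration of how these requirements could be tracked internally, here is a small sketch of a per-system checklist; the field names and the example system are assumptions for illustration, not terms prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class HighRiskChecklist:
    """Illustrative status tracking of the Art. 9-15 requirements for one system."""
    system_name: str
    risk_management_in_place: bool = False   # Art. 9
    data_quality_verified: bool = False      # Art. 10
    logging_enabled: bool = False            # Art. 12
    users_informed: bool = False             # Art. 13
    human_oversight_defined: bool = False    # Art. 14
    robustness_tested: bool = False          # Art. 15

    def open_items(self) -> list[str]:
        # Return the names of all requirements that are not yet fulfilled.
        return [name for name, done in vars(self).items()
                if isinstance(done, bool) and not done]

check = HighRiskChecklist("CV screening for hiring", risk_management_in_place=True)
print(check.open_items())
```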

The central point is clarifying roles according to Article 3: Am I a provider, importer, distributor, or operator? The applicable obligations can then be derived from that role.

Companies should define responsibilities within their AI governance structure, for example via an internal policy, a responsibility matrix, or dedicated roles (such as a voluntary AI officer).

Documentation is crucial here so that the company is prepared for liability questions in the event of an audit.
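What such a responsibility matrix might look like in its simplest form is sketched below; the roles follow Article 3, but the owners and duties are placeholders that each company has to fill in for itself.

```python
# Illustrative responsibility matrix: Art. 3 role -> internal owner and example duties.
# Owners and duties are placeholders, not requirements from the Act.
RESPONSIBILITY_MATRIX = {
    "provider": {
        "owner": "Product / Engineering",
        "duties": ["conformity assessment", "technical documentation"],
    },
    "operator": {
        "owner": "Business unit using the system",
        "duties": ["use as intended", "employee training", "review of outputs"],
    },
    "importer": {
        "owner": "Procurement",
        "duties": ["verify provider conformity"],
    },
    "distributor": {
        "owner": "Sales / Partner management",
        "duties": ["check marking and documentation"],
    },
}

for role, entry in RESPONSIBILITY_MATRIX.items():
    print(f"{role}: {entry['owner']} -> {', '.join(entry['duties'])}")
```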

According to Article 52, companies must ensure that users recognize:

  • when they are interacting with an AI (e.g., chatbot)
  • when content has been generated by AI (e.g., text, image, audio)
  • when biometric or emotional characteristics are being recorded

These notices must be clear, understandable, and given in advance. Violations of these transparency obligations can be sanctioned—even for non-high-risk AI.
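For a chatbot, such a notice can be as simple as a clearly worded disclosure shown before the first interaction. The following sketch is only an illustration; the wording and function name are assumptions, not prescribed text.

```python
AI_DISCLOSURE = (
    "Please note: you are chatting with an AI system. "
    "Responses are generated automatically."
)

def start_chat_session() -> None:
    # Show the disclosure clearly and before the interaction begins (cf. Art. 52).
    print(AI_DISCLOSURE)
    # ... the actual chatbot logic would follow here ...

start_chat_session()
```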

Violations of the AI Act can be punished with fines of up to 35 million euros or 7% of worldwide annual turnover (for companies, whichever is higher); the amount depends on the severity and type of violation (Art. 71).
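A quick worked example of how that ceiling plays out for a company (the turnover figure is hypothetical):

```python
annual_turnover = 1_000_000_000  # hypothetical worldwide annual turnover in euros

# For the most severe violations by companies, the higher of the two amounts applies.
max_fine = max(35_000_000, 0.07 * annual_turnover)
print(f"Maximum fine ceiling: {max_fine:,.0f} EUR")  # 70,000,000 EUR
```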

The use of prohibited AI or manipulative systems is punished particularly severely. But a lack of training, insufficient documentation, or a lack of transparency can also lead to fines and reputational damage.

Transparency and responsibility build trust—not only with authorities, but also with customers and business partners.

Companies that demonstrate early on that they use AI ethically, safely, and transparently position themselves as trustworthy innovators.

This is also a competitive advantage—for example, in tenders or investor evaluations.

Short-term: Risk classification of the AI systems in use, awareness-raising and training of employees, and establishment of transparency measures.

Long-term: Building an AI governance structure, integrating compliance-by-design into development processes, regular audits and monitoring.

It is important to secure AI not only technically, but also organizationally and ethically.