4.2 | Quality control: How to check and improve AI results

What this module is about

AI can do a lot—but not everything. And certainly not without errors. You remain responsible for what you share—even if an AI tool wrote it.

In this module, you will learn how to systematically check AI outputs and make them more reliable: in terms of content, form, style—and ethics.

1. Why this is important

  • You are responsible for the content, not the AI.
  • Language models often generate plausible but factually incorrect statements ("hallucinations").
  • Legal uncertainties (copyright, data protection, risk of deception) often arise from the uncritical sharing of AI content.
  • A good review is not an act of mistrust but a professional quality standard.

2. What you should look out for

Content Review

Ask yourself: Is this correct—really?

  • Are all facts and figures correct?
  • Is a source cited—or does it just sound like it?
  • Are there exaggerations, contradictions, or outdated statements?
  • If you come across legal or technical statements, ask the AI a follow-up question such as: "Please provide the source for this statement." This way you can check whether the statement is well-founded or just sounds good.

Formal Review

The text may be correct—but does it fit?

  • Is the structure clear?
  • Is the format correct (e.g., length, outline, suitability for the target audience)?
  • Does the text match the required text type (e.g., invitation, strategy paper, concept)?

Stylistic Review

Does the text appeal to you—or does it sound like a generic AI draft?

  • Are there phrases like: "In the digital age, innovation is paramount"?
  • Do words or sentence patterns repeat?
  • Is the tone factual, motivating, professional—or somewhere in between?

If you used your style prompt from Module 2.3, the tone should already be quite good. Still, read the text through and check whether you recognize yourself in it, or an anonymous AI author instead.

Impact Review

Does the text fulfill its purpose?

  • Does it inform? Does it persuade? Does it activate?
  • Does the tone fit the target audience and the channel?

3. How to uncover false statements

Before you start optimizing, you need to know: Is this even correct?

  • Ask the AI follow-up questions, e.g.: "On what is this statement based?" "Are there studies on this?"
  • Watch out for apparent certainty ("It is well known that…"): without evidence, it is not reliable.
  • If you are unsure: Search for a source yourself or ask the AI to provide sources (including URL or author).
  • Especially with legal statements, diagnoses, and scientific claims: Always double-check.

4. Ethical Quality Review

AI texts should not only be fact-checked but also reviewed for their societal impact. Check:

  • Is it clear that the content is (partially) AI-generated? → You can find more on transparency obligations in Module 1.5.
  • Is a distorted worldview being conveyed? (e.g., stereotypes, unbalanced language)
  • Is diversity taken into account? Are different perspectives represented?
  • Are people or groups being discriminated against—intentionally or unintentionally?

AI models are often trained on "majority opinions," which does not automatically make their output fair or nuanced.

5. xpand Tip

Create your own review workflow:

Three questions for a quick check:

  • Content checked?
  • Style fits?
  • Impact achieved?

If you can answer all three with yes: Share it. If not: Readjust—or start again with a new prompt.

Done! You now have everything you need to handle AI content with greater confidence and assurance, from legal, ethical, and quality perspectives alike.

Your Takeaway

  • AI-generated content always requires a critical review—you are responsible for its quality and accuracy.
  • Review systematically at the content, formal, stylistic, and ethical levels.
  • Pay special attention to "hallucinations" and false factual claims; ask targeted follow-up questions.
  • Your own review workflow with the core questions "Content? Style? Impact?" will help you efficiently verify AI content.