Lesson 4 — Risks and Mitigation of Generative AI
1. Preventing the Spread of Incorrect Information
AI feels fast and reliable, but accuracy is not guaranteed.
A practical fact-checking routine helps reduce the risk:
How to Fact-Check AI Content
- Cross-check key details: compare the AI’s answer with at least two reliable sources.
- Go back to the original source: if the AI claims something about a law, policy, study, or official document, read the actual text.
- Be careful with numbers and dates: these often appear precise but may be outdated or entirely wrong.
- Use “unknown” when unsure: if no external source confirms the information, treat it as ⚠️ unverifiable.
- Pause when stakes are high: for legal, medical, financial, or safety-related topics, double or triple-check.
This routine may feel repetitive, but it protects both the user and the people who rely on their work.
2. Taking Responsibility for AI-Generated Content
People sometimes assume, “The AI wrote it, so I’m not responsible.”
But legally and ethically, this isn’t how things work.
What Are Users Responsible For?
a. Accuracy
You must review and validate everything before sharing or using it professionally.
b. Ethical Considerations
Users are expected to correct biased outputs and to refuse to use content that may harm individuals or groups.
c. Copyright Compliance
If AI content resembles protected work, the user must ensure they have the right to use it.
d. Data Protection
Users must avoid entering confidential, private, or sensitive data into public tools unless they have permission.
e. Alignment with Laws and Policies
In regulated jurisdictions (under the EU AI Act, GDPR, and similar frameworks), deploying AI requires human oversight, documentation, and fairness controls.
f. Transparency (when required)
Certain sectors expect users to disclose when AI assisted with a task—especially in academia, journalism, or government.
Ultimately, AI outputs become your outputs once you use or publish them. Responsibility does not disappear just because a tool helped.