Lesson 2 — Potential Legal Implications

1. Honoring Intellectual Property (IP) Rights

What is IP?
Intellectual Property refers to creations of the mind, such as:

  • Inventions and technical designs
  • Books, articles, music, films, artwork
  • Logos, trademarks, and brand symbols
  • Software code and digital content

With generative AI, the core principle stays the same:

Using someone else’s protected work without permission can be a legal problem.

This applies to:

  • Using copyrighted material in training data (depending on local law and exceptions).
  • Copying or redistributing AI outputs that closely match a copyrighted work.
  • Presenting AI-generated content as “fully original” when it is clearly derived from a known work.

Good practice:

  • Don’t assume that content is automatically free from copyright issues just because an AI generated it.
  • Follow the terms of use of the AI tool and your organization’s policies.
  • When in doubt, avoid using obviously copyrighted material (e.g., specific characters, logos, songs) without proper rights.

2. Legal Risks in AI Output

a) Copyright infringement

If AI produces content that includes or is very close to someone else’s copyrighted work, and you use that content commercially or publicly, you may face:

  • Injunctions – a court order forcing you to stop using or distributing the material.
  • Damages – financial compensation claimed by the rights holder.

The fact that “the AI did it” does not automatically remove legal risk.
Courts and regulators generally look at what humans did with the AI output.

b) Defamation

Defamation happens when false statements are made about a person that harm their reputation.

With generative AI, this could look like:

  • An AI system generating a profile that wrongly links a real person to a crime or unethical behavior.
  • A summary that mixes up two similar names and accuses the wrong individual of something serious.

Even if a claim is produced by a model, an incorrect and harmful statement about a real, identifiable person can still raise defamation concerns.

Key point:
If you use AI-generated statements about real people, you must:

  • Fact-check the information.
  • Avoid repeating harmful claims unless they have been verified and are legal to share.

3. Transparency and Trust

Many organizations now publish AI transparency or responsible AI documents. These often describe:

  • What data types they use (in broad terms).
  • How they try to reduce bias and discrimination.
  • How they handle user data and privacy.
  • What limitations or guardrails their systems have.

Why this matters:

  • Users are understandably skeptical about how their data is collected and used.
  • Clear documentation helps build trust and shows that the company is taking responsibility.

In a good transparency document, you should expect to see:

  • Bias and fairness measures – How the company checks for and mitigates bias in training data and output.
  • Data governance – How sensitive data is protected or excluded.
  • Human oversight – Where humans review, correct, or override AI decisions, especially in high-risk areas (healthcare, hiring, credit scoring, etc.).

4. Putting It All Together

When working with generative AI, we have to think about both ethics and law:

  • Bias and fairness – Are we reinforcing stereotypes or excluding certain groups?
  • IP rights – Are we respecting the creators and owners of content?
  • Defamation and privacy – Are we careful with what we say about real people?
  • Transparency – Do users understand what the system does with their data, and what its limits are?

The goal is not just to avoid getting into legal trouble, but to use AI in a way that is fair, responsible, and worthy of trust.