By Thomas Bourdel
23 May 2025

The Don'ts of Generative AI: What to Use, What to Avoid

The don'ts of generative AI: mistakes to avoid and best practices

Generative AI is now establishing itself in business and even daily life. Capable of writing, analyzing and structuring information in seconds, it offers immense potential. But this power also carries major risks: compromised confidentiality, factual hallucinations, discriminatory biases and regulatory non-compliance.

For responsible use, it is crucial to know the "don'ts": practices that are prohibited or strongly discouraged because they expose you to legal, ethical and operational risks. This article details the most common mistakes and the associated best practices.

Verification and reliability: avoiding blind trust

Generative models sometimes produce false responses, called hallucinations:

Do not publish generated responses without proofreading and verifying sources

Don't assume that AI knows the news in real time: it can provide outdated or made-up data

Do not ask the AI to draft binding legal documents without review by an attorney

Example: an American lawyer presented a brief in court containing references to case law invented by ChatGPT. Result: disciplinary sanctions and damage to credibility.
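The lawyer's mishap above suggests a simple workflow safeguard: extract every citation-like string from a generated draft and put it on a human verification checklist. A minimal Python sketch (the patterns and the `Smith v. Jones` example are illustrative, not a full citation parser):

```python
import re

# Hedged sketch: pull citation-like strings out of AI-generated text so a
# human can check each one against a primary source before publication.
# The patterns are illustrative; real legal citation formats vary widely.
CITATION_PATTERNS = [
    re.compile(r"[A-Z][a-z]+ v\.? [A-Z][a-z]+"),  # case names, e.g. "Smith v. Jones"
    re.compile(r"https?://\S+"),                  # URLs cited as sources
]

def extract_citations(text: str) -> list[str]:
    found: list[str] = []
    for pattern in CITATION_PATTERNS:
        found.extend(pattern.findall(text))
    return found

draft = "The brief cites Smith v. Jones and https://example.com/case-law as authority."
for ref in extract_citations(draft):
    print("VERIFY:", ref)
```

Anything the extractor surfaces should be checked against the original source; anything it misses is exactly why a full human proofread remains mandatory.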

Confidentiality and sensitive data: the don'ts to respect

Loss of confidentiality is the first pitfall to avoid. Generative AI models must not become an unintentional channel for data leaks.

Do not enter personal data of a third party without their explicit consent. → Risk of privacy violations and GDPR sanctions

Do not paste confidential contracts, company secrets, or proprietary source code into an external service. → Loss of control over intellectual property rights

Do not store contractually protected data (e.g. customer data) in the AI tool without clear agreement and protection measures (encryption, on-premise hosting)

Do not assume that AI service logs are deleted automatically. → Check the data retention and localization policy

Example: in 2023, Samsung accidentally exposed internal code by pasting it into ChatGPT for analysis. Result: leak and dissemination of its industrial secrets.
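One practical safeguard implied by the list above is masking obvious personal identifiers before a prompt ever leaves your infrastructure. A minimal sketch in Python; the regexes and the `[EMAIL]`/`[PHONE]` tokens are illustrative choices, not a complete PII detector (names and free-text identifiers need a dedicated tool):

```python
import re

# Hedged sketch: replace obvious personal identifiers (emails, phone
# numbers) with placeholder tokens before sending a prompt to an external
# AI service. Real deployments need broader PII detection than two regexes.
REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d .-]{7,}\d"), "[PHONE]"),
]

def redact(prompt: str) -> str:
    for pattern, token in REDACTION_RULES:
        prompt = pattern.sub(token, prompt)
    return prompt

print(redact("Contact Jean at jean.dupont@acme.fr or +33 6 12 34 56 78."))
```

Note that the bare first name still passes through: regex-based masking reduces, but does not eliminate, the risk of a personal-data leak.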

Oversight, bias and regulatory compliance

Generative AI cannot function without human oversight and a robust compliance framework. It carries a double risk: making erroneous decisions if used alone, and reproducing discriminatory biases.

Do not entrust AI with high-stakes decisions (medical diagnosis, legal rulings, industrial safety) without expert supervision

Don't let AI completely replace human review for compliance, ethics, or editorial quality

Don't let AI make HR or hiring decisions without regular bias audits and a clear legal framework

Do not deploy a model in a product without robustness and safety testing

Do not minimize the impact of systemic biases: a continuous correction plan is needed
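A regular bias audit, as recommended above, can start with something as simple as comparing selection rates between groups. A sketch using the common "four-fifths" heuristic; the threshold, group names and counts are illustrative, not real data:

```python
# Hedged sketch of a first-pass bias audit: compare selection rates across
# groups and flag ratios below the common "four-fifths" (0.8) heuristic.
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def disparate_impact(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest to the highest selection rate."""
    rates = selection_rates(outcomes).values()
    return min(rates) / max(rates)

audit = {"women": (12, 100), "men": (30, 100)}
ratio = disparate_impact(audit)
if ratio < 0.8:
    print(f"impact ratio {ratio:.2f} below 0.8: investigate for bias")
```

A low ratio is a signal to investigate, not proof of discrimination; the continuous correction plan mentioned above is what turns this measurement into action.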

The central role of the EU AI Act:

Adopted in 2024, the European AI Act constitutes the first large-scale global regulation on artificial intelligence. It classifies uses according to their level of risk:

Unacceptable risk (e.g. behavior manipulation, mass facial recognition) → prohibited

High risk (e.g. recruitment, credit, justice) → subject to strict obligations (documentation, traceability, audits, human governance)

Limited or low risk (e.g. chatbots, text generators) → information obligations and transparency

In short:

Don't assume that AI automatically complies with GDPR or the AI Act

Document the legal bases of processing and human supervision mechanisms

Depending on the level of risk of your use, regularly audit your models to detect and limit bias

Example: a recruitment AI deployed without controls systematically discriminated against women in technical positions. Under the AI Act, this type of breach could result in severe financial penalties (up to 7% of global annual turnover).
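The risk tiers above can be encoded as a first triage step in an internal AI review process. This is an illustrative sketch only, not legal advice; the keywords are simplified from the categories described in this article:

```python
# Illustrative triage only, not legal advice: map a use case to the AI Act
# risk tiers described above. Keywords are simplified for the sketch.
RISK_TIERS = {
    "unacceptable (prohibited)": {"behavior manipulation", "mass facial recognition"},
    "high (strict obligations)": {"recruitment", "credit", "justice"},
    "limited (transparency obligations)": {"chatbot", "text generation"},
}

def classify(use_case: str) -> str:
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "unknown: assess with legal counsel"

print(classify("recruitment"))
```

Any use case that falls outside the known keywords defaults to escalation, which mirrors the article's advice: when in doubt, document the legal basis and involve human oversight.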

