L&D Studio

powered by AELIA

Insights from experts

An interview with
Niki Tsimpogou

Lawyer specializing in technology & law

Senior Associate
Zoulovits-Kontogeorgou Law Firm
"We're navigating a grey zone and the legal meaning of confidentiality in AI contexts is still evolving"
Should we have any legal concerns when "feeding" our internal Gen-AI-based tools with material produced within the company by specific employees and available on our server (memos, reports, presentations) in order to prepare AI-generated templates or final reports/memos/presentations, etc.?

Yes, there are legal concerns worth examining. Intellectual property rights, employee consent (where applicable), and data protection laws like GDPR can all come into play. Even if materials are internal, the nature of Gen-AI tools, especially those that might learn from input or operate via third-party platforms, requires a careful legal and technical assessment.

That said, this is an evolving space. There’s no one-size-fits-all answer yet, and the right approach may ultimately depend on the specific AI system, its architecture, and the company’s risk appetite. We are still in the process of understanding what responsible and legally sound AI use really looks like.

Should we have legal concerns if we feed our Gen-AI based tools with confidential content from performance reviews of various employees to identify learning & development needs for the company or to prepare templates of Personal Development Plans (PDP)?
Do we have confidentiality restrictions when feeding confidential content into the Gen-AI model?

It would be naïve or even dishonest to say there are no legal concerns here. Performance reviews include personal and possibly sensitive data, so using them in AI tools triggers data protection and confidentiality issues.

The key risks are around purpose limitation, employee consent or expectation, and whether the model could retain or expose this data. Still, the line between internal efficiency and privacy risk isn't always clear. We're navigating a grey zone and the legal meaning of confidentiality in AI contexts is still evolving.

Should we have legal concerns if our employees present Gen-AI generated content (memos/reports/audiovisual materials) to customers/clients without indicating which part was generated by AI?
Could there be issues of copyright or of misleading customers?

There are clear legal considerations. Presenting AI-generated content without disclosure could lead to misleading clients about the nature of the work, especially if they assume it’s entirely human-produced. This can raise issues of transparency and trust.

On the copyright side, AI-generated content might not be eligible for protection, and there is a risk it could unintentionally infringe on existing intellectual property. We are actively looking into these issues and hope to have a clearer picture as the legal landscape evolves.

If we were to ask you to highlight 3 key things HR executives should keep in mind when exploring and starting to use Gen-AI tools to assist in their daily work, what would they be?

If I had to give 3 key aspects to consider, they would be:

  • First, privacy – ensuring the protection of employee data.

  • Second, fairness – preventing any biases or discrimination that could arise from AI.

  • And third, transparency – making sure the decision-making process is clear and accountable.

These are crucial as AI becomes more integrated into HR practices.