With the EU Artificial Intelligence Act counting certain workplace uses of AI among its prohibited practices, human resources departments are now tasked with re-examining their AI applications to meet compliance requirements.
AI used in employment and the workplace is considered "high-risk" under the AI Act, and thus subject to its legal stipulations, if it can affect a person's health and safety or employment. The use of AI for emotion recognition in the workplace is prohibited.
Not all HR-focused AI deployments are considered high risk, according to Cian O'Brien, the deputy data protection commissioner for Ireland's Data Protection Commission. But those wondering how to make that distinction should consider how regulators have already approached AI through existing regulations, most notably the EU General Data Protection Regulation.
"Data protection regulators are already taking the obligation to conduct (data protection impact assessments) as extremely seriously in the context of AI systems," O'Brien said during a panel at the IAPP AI Governance Global Europe 2025 in Dublin.
It was a refrain heard throughout the conference as AI stakeholders gear up for enforcement of the landmark regulation, which is still in its early stages and leaves many unknowns as to how certain aspects will play out. Technology leaders urged attendees to prepare for uncertainty but noted longstanding data protection principles in the EU can provide guidance.
Standard data protection practices, including proper documentation and purpose limitation, apply to AI use cases. But before any of that, UKG Senior Managing Counsel for Privacy and Compliance Noemie Weinbaum, AIGP, CIPP/E, CIPP/US, CIPM, CDPO/FR, FIP, said any use of AI in HR should begin by answering a central question.
"You need to know and understand what problem you're trying to solve when using AI," she said. "Once you've stated this, then you can go back and review holistically what needs to be put together, make sure that it's transparent throughout the chain, so that you have answers to your customers and those customers have answers to their own employees."
Be as transparent as possible
Being open about why you are collecting data and what it is used for clears up a lot of potential problems early on, AIGG panelists said.
The DPC's O'Brien said the initial transparency helps build the case for legitimate interests, even with a low-risk AI use. The clarity also supports the concept of freely given consent for the data collection, a cornerstone of the GDPR.
There is precedent for this, noted K&L Gates Partner Claude Etienne Armingaud, CIPP/E. The AI Act was not yet being enforced when France's Nanterre Judicial Court ruled a company had not properly consulted its Social and Economic Committee before rolling out AI tools on a wider scale.
"And so, the Supreme Court just ordered the whole AI deployment to be scrapped and go back to the drawing board in terms of involving the employee representative, informing them about what the AI was doing and how it was doing it," he said. "It's a matter of building that trust through transparency, consultation and managing the expectation of the data subject."
UKG Legal Director Roy Kamp, AIGP, CIPP/E, CIPP/US, CIPM, FIP, indicated transparency does not stop at your institution; you need as much insight into how your vendor operates, too, as many HR shops are more likely to have purchased a third-party product than built their own.
"You need to understand, not just from the vendor, what they're doing, but also what is their supply chain doing, and understand that, so you're able to then share it with the employees to be able to get that informed consent," he said.
Think about anonymization carefully
Removing identifying characteristics from an AI model's training data has been viewed as one way to ensure privacy and data protection. But the manner in which it is done can leave room for reidentification, thus leaving the processing open to scrutiny.
It is a point Kamp and Weinbaum have made before. Anonymization allows for the permanent removal of identifiers; however, Kamp said AI could make true anonymization even more challenging for HR departments.
"The data set might be anonymized today, but then you come up with an AI algorithm in six months' time or a year's time that manages to pull data from other sources, and all of a sudden you've re-identified it," he said. "So have a think about that … is it truly anonymized, and if it is, would it be better for you to treat it as pseudonymized data rather than anonymized data?"
Weinbaum added companies are required to ensure the data they use in AI remains confidential for other customers and employees alike, and thus should be skeptical of vendors' claims of untraceable information. She noted the AI Act requires AI tools to be robust and said such standards are likely unachievable with synthetic information alone.
"I mean, it's a great hope, but in real life, it is not working," she said. "You don't want to be using HR tools based on AI that are not robust, that are not delivering more or less the promises that the vendors are making."
'A good story'
O'Brien said that once HR departments have decided whether a particular AI use is high risk and whether its training data is sufficiently protected, they must make a clear case for how the decision to use an application was made.
Retroactively explaining an approach is not enough under the GDPR; contemporaneous documents need to be on hand to justify the reasoning. The AI Act also requires high-risk systems to have documentation on how they work and what risks they can pose, with requirements to keep that information regularly updated.
O'Brien said DPIAs are the best means of supporting a use case, as they help regulators understand all facets of the decision-making process.
"That's really what can back up a good story," he said "And the fact that you've taken account of data subjects, rights and freedoms under GDPR when designing your systems by means of data protection by design."
O'Brien added documentation around why a DPIA was not needed is also important, noting again that not all HR AI uses may meet the high-risk threshold.
Ireland's DPC has an ongoing probe into whether Google needed to conduct a DPIA under the GDPR before it started processing personal data for its AI model, Pathways Language Model 2. He expects such materials would matter greatly under the AI Act, too.
"I think that from the first seven years of GDPR enforcement, you're probably going to see the value of that contemporaneous documentation that assesses risk, that assesses how you responded to risk, regardless of whether it's formally under Article 35 or other formal documentation that is required," he said.
Caitlin Andrews is a staff writer for the IAPP.