by Breda O'Malley July-04-2024 in Employment Law

The EU Regulation laying down harmonised rules on Artificial Intelligence (better known as the “AI Act”) has been passed and is about to enter into force.

It is vital for employers to understand what the AI Act means for their organisation and their workforce. Employers need to know their role in respect of AI, and what obligations arise from deploying AI systems in the workplace.


AI Systems in the present-day workplace – What risks exist in a workplace setting?

AI systems are already an integral part of many contemporary workplaces, in recruitment, operations, performance monitoring and employee assessments for promotions and terminations. For instance, in a recruitment scenario, an AI system may be asked to draft role advertisements, specify responsibilities, assess applications and propose selected candidates for interview. Similarly, it can collect employees’ performance data, evaluate it, assign tasks and propose employees for promotion and termination.

With what seem like endless opportunities in the development of generative AI, regulation is essential to ensure safe and ethical work practices. This is what the AI Act sets out to do.


The AI Act

The legal regime of the AI Act will regulate AI systems based on the level of risk they present to health, safety, values and fundamental rights in the EU. The categories of AI systems are:

  • unacceptable risk: systems which contravene core values and fundamental rights of the EU;
  • high risk: systems which pose a threat to health, safety and fundamental rights;
  • limited risk: systems which pose a moderate threat to individual or group rights and interests, and only require minor intervention, with transparency obligations; and
  • minimal risk: systems which pose a negligible or no threat, and are not subject to any mandatory requirements.

AI systems can be tainted by inherent bias or discrimination. When AI then feeds into management decisions, this can adversely affect employment relationships. For this reason, AI systems used in a workplace context are classified as high risk and require human intervention to mitigate those deficiencies. The AI Act demands that human oversight be built into every stage of an AI system’s life cycle, from design and development through training and use. Employers will need to ensure that a person with appropriate training and authority is appointed to review the data collected and the decisions reached. Where required, that person will need appropriate authority to override AI-based decisions.

AI systems can collect biometric data and use this data to categorise employees by reference to their race, political views, religion or sexual orientation, or they can recognise faces or even infer emotions from the data collected. This constitutes an unjustified risk, and the use of such systems within the workplace is strictly prohibited by the AI Act.

Other workplace activities may include in-house HR chatbots that assist employees with their queries. This is a limited risk activity, and the obligations that arise merely require that users are informed that they are interacting with a machine.


Employer’s obligations in high-risk cases

For the purposes of the AI Act, an employer may be a “provider” or a “deployer”, with the former attracting significant legal obligations and the latter attracting less onerous duties. In most cases, an employer will be an AI system deployer, who is obliged to ensure that:

  1. The system is used according to the instructions,
  2. A person is assigned to oversee the operation of the system,
  3. Data inputted is relevant and representative,
  4. The system is monitored and incidents are reported to the provider,
  5. Workers and union representatives are informed that they will be interacting with an AI system,
  6. Where required, a fundamental rights impact assessment is carried out.


An employer may, however, become a provider and assume greater obligations under the Act if they modify an AI system, or merely place their own logo on it. This may oblige employers to:

  1. Implement, document and maintain a risk management system,
  2. Design tools enabling human oversight,
  3. Monitor the data used,
  4. Generate and monitor activity logs,
  5. Seek registration with the EU database.


What is the legal exposure for Employers?

Employers are under a legal duty to arrange and conduct their affairs in line with the AI Act, under sanction of monetary fines. Those fines can be significant for non-compliance, violations, use of banned AI systems or the provision of incorrect information.

The AI Act is complementary to data protection law, and employees may often find that their action lies under the GDPR. AI systems are data-hungry, and data abuse, excessive data collection or processing of special categories of personal data may frequently occur. This may constitute a GDPR breach and a ground for action by an aggrieved employee. Furthermore, employees can also pursue claims through conventional employment and equality rights legislation, especially where AI reaches its decisions based on data such as gender, religion, political views or sexual orientation.


What should employers do?

It is imperative for prudent employers to prepare for the implications of the AI Act for their workplace.

Employers should take steps to:

  1. identify AI systems operating within their workplace,
  2. consider if they would qualify as providers or deployers,
  3. identify risks,
  4. provide training to staff and designate persons to oversee AI systems, and
  5. inform the workforce of what AI systems are in use and have policies governing their use.

Our employment law group at Hayes solicitors closely monitors these developments, and we are here to support you with any questions that you may have. Please feel free to contact our partner Breda O’Malley or any member of our employment law group.
