The EU AI Act Is Here – What Every Business Needs to Know

The EU’s landmark Artificial Intelligence Act is now in force, establishing a sweeping risk-based framework that affects businesses worldwide. With major obligations and significant fines already applying, compliance is no longer optional.

Regulation (EU) 2024/1689 on Artificial Intelligence (the “AI Act”) entered into force on 1 August 2024 and, as an EU regulation, has direct effect across all Member States without the need for domestic implementing legislation.

The AI Act establishes a harmonised regulatory framework governing the development, placing on the market, and deployment of artificial intelligence systems within the EU. Its primary objectives are to ensure that AI systems placed on the Union market are safe, transparent, and consistent with the protection of fundamental rights and EU values, while simultaneously fostering innovation and preserving the integrity of the internal market. The AI Act applies with equal force to AI systems deployed in both the public and private sectors, with its territorial scope extending to providers and deployers established outside the EU where the output of the AI system is used within the Union.

The AI Act is structured around a risk-based approach, which classifies AI systems into four tiers of risk:

  • Minimal risk (e.g. spam filters)
  • Limited risk (e.g. certain chatbot applications subject to transparency obligations)
  • High risk (e.g. AI systems used in employment and worker management)
  • Unacceptable risk, encompassing practices that are outright prohibited by reason of their incompatibility with EU fundamental rights (e.g. social scoring by public authorities).

The first substantive tranche of obligations under the AI Act, those relating to prohibited AI practices, became applicable on 2 February 2025. From that date, the deployment of AI systems that perform social scoring of natural persons by public authorities, or that carry out biometric categorisation based on sensitive characteristics such as race, political opinion, or sexual orientation, is expressly prohibited. The Act further prohibits the use of AI systems designed to infer the emotions of natural persons in the workplace or in educational institutions, except in limited circumstances, as well as the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to narrowly defined exceptions.

The AI Act’s enforcement regime carries significant financial exposure for non-compliant organisations. The Act establishes a three-tier sanctions framework, with maximum fines set as follows:

  • Up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year, whichever is higher, for infringements of prohibited practices or non-compliance with requirements applicable to general-purpose AI models
  • Up to €15 million or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any other requirement or obligation under the Act
  • Up to €7.5 million or 1.5% of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request from a competent authority.

A more proportionate approach applies to SMEs and start-ups. Where the Act specifies a choice between a fixed euro amount and a turnover-based percentage, SMEs are subject to whichever of the two figures is lower, whereas larger undertakings are subject to whichever is higher. Member States retain discretion to establish further rules on penalties applicable to natural persons.
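The interaction between the fixed caps, the turnover-based percentages, and the SME rule can be illustrated with a short sketch. This is purely illustrative arithmetic based on the tiers set out above, not legal advice; the tier names and function are our own labels, and actual fines are set by the competent authorities within these maximums.

```python
from dataclasses import dataclass

@dataclass
class FineTier:
    fixed_cap_eur: float   # fixed maximum in euro
    turnover_pct: float    # share of total worldwide annual turnover

# The three tiers described in the Act (figures as stated above).
TIERS = {
    "prohibited_practices":   FineTier(35_000_000, 0.07),
    "other_obligations":      FineTier(15_000_000, 0.03),
    "misleading_information": FineTier(7_500_000, 0.015),
}

def max_fine(tier_name: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine: the higher of the two figures for larger undertakings,
    the lower of the two for SMEs and start-ups."""
    tier = TIERS[tier_name]
    turnover_based = annual_turnover_eur * tier.turnover_pct
    if is_sme:
        return min(tier.fixed_cap_eur, turnover_based)
    return max(tier.fixed_cap_eur, turnover_based)

# A large undertaking with EUR 1bn turnover infringing a prohibition:
# 7% of EUR 1bn (EUR 70m) exceeds the EUR 35m cap, so the maximum is EUR 70m.
print(max_fine("prohibited_practices", 1_000_000_000))               # 70000000.0
print(max_fine("prohibited_practices", 1_000_000_000, is_sme=True))  # 35000000.0
```

For the same infringement and turnover, the SME rule caps exposure at the lower figure (EUR 35 million), illustrating the proportionality the Act intends.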

While the majority of the AI Act’s substantive obligations were originally scheduled to apply from 2 August 2026, the European Commission published proposed amendments on 19 November 2025 under its Digital Omnibus on AI package which, if adopted, would materially extend the compliance timelines for high-risk AI systems. The proposal was advanced in response to delays in the development of the harmonised technical standards that will underpin conformity assessments under the Act, and is currently proceeding through the legislative process, requiring approval from the European Parliament and Council before it can take effect.

Under the proposed revised timetable, the application of high-risk obligations would be linked to the availability of those supporting measures. Once the Commission confirms their availability, the high-risk rules would apply six months later for Annex III systems and 12 months later for Annex I systems, subject to the following backstop dates:

  • 2 December 2027 for AI systems designated as high-risk under Article 6(2) and Annex III, encompassing AI systems deployed in the fields of biometrics, critical infrastructure, education, employment, essential private and public services, law enforcement, the administration of justice and border management
  • 2 August 2028 for those high-risk AI systems falling within the scope of Article 6(1) and Annex I, including systems embedded in products governed by EU product safety and market surveillance legislation, such as machinery, medical devices, and civil aviation equipment.
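The proposed timing mechanism can be expressed as a simple calculation: the high-risk rules would apply the stated number of months after the Commission confirms the availability of the supporting measures, but in any event no later than the backstop date. The sketch below is illustrative only; the confirmation date used is hypothetical, and the proposal remains subject to amendment during the legislative process.

```python
from datetime import date

# Months after confirmation, and backstop date, per the proposed timetable above.
PROPOSED_TIMELINE = {
    "annex_iii": (6, date(2027, 12, 2)),
    "annex_i":   (12, date(2028, 8, 2)),
}

def add_months(d: date, months: int) -> date:
    # Simplified month arithmetic; assumes the day exists in the target month.
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

def application_date(category: str, confirmation: date) -> date:
    """Rules apply N months after the Commission's confirmation,
    or on the backstop date, whichever comes first."""
    months, backstop = PROPOSED_TIMELINE[category]
    return min(add_months(confirmation, months), backstop)

# Hypothetical confirmation on 1 March 2027:
print(application_date("annex_iii", date(2027, 3, 1)))  # 2027-09-01
print(application_date("annex_i", date(2027, 3, 1)))    # 2028-03-01
```

If confirmation were delayed beyond mid-2027, the backstop dates would govern instead, which is why operators cannot assume the full extension will materialise.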

It should also be noted that high-risk AI systems already on the market when the new obligations come into force would not be subject to those obligations unless and until significant changes are made to their design. The Digital Omnibus package further proposes to reinforce the role of the EU AI Office, which would assume central enforcement authority over AI systems integrated into very large online platforms and search engines designated under the Digital Services Act, as well as AI systems based on general-purpose AI models where the system and model are provided by the same entity.

While the proposed extension affords operators, particularly those in heavily regulated sectors such as financial services and digital infrastructure, additional runway to implement the necessary technical and governance frameworks, it does not represent a wholesale relaxation of obligations, and prudent operators should not treat the proposal as a justification for deferring compliance planning.

Operators deploying generative AI systems in the context of marketing, communications, and public-facing content will be required to comply with the Act’s transparency obligations from 2 August 2026. Under Article 50 of the AI Act, providers of generative AI systems must ensure that AI-generated outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. Deployers are in turn required to disclose clearly when content constitutes a deepfake or when AI-generated text has been published for the purpose of informing the public on matters of public interest, absent applicable exceptions.


For more information, contact us on +353 1 662 4747 or by email at law@hayes-solicitors.ie.
