On 5 March 2026, the European Commission published a second draft of the Code of Practice on Marking and Labelling of AI-generated content (the “Code”). The Code is expected to be finalised by June 2026, and the rules governing the transparency of AI-generated content will become applicable on 2 August 2026. This means that organisations may need to make practical changes sooner than expected.
The purpose of the Code is to help providers and deployers of AI meet their marking and labelling requirements for AI-generated content under Article 50 of the EU AI Act. Importantly, this is likely to be relevant to any organisation using AI to create content, including for marketing, communications or client-facing materials. The Code is voluntary in nature: signatories commit to its measures to demonstrate compliance, but adherence is not legally mandated. However, as explained further below, once finalised, it will carry significant practical weight. Undisclosed use of AI-generated content could become a regulatory or reputational risk, particularly for organisations such as creative, publishing and marketing agencies.
The Code is to serve as a guiding document for demonstrating compliance with the obligations provided for in Article 50 of the EU AI Act. While adherence to the Code does not in itself constitute conclusive evidence of compliance with Article 50 of the EU AI Act, companies that sign the final Code of Practice and implement its measures will benefit from a presumption of conformity with their obligations under Article 50. In practice, the Code is likely to operate as a benchmark for regulators and business partners when assessing responsible AI use.
The Code is structured into two sections, each addressing different aspects of transparency and regulation as they apply to providers and deployers of AI systems respectively.
Section 2 of the Code applies to deployers (i.e. users) of AI, rather than providers (i.e. developers). As a result, its scope extends beyond technology companies to include organisations using AI tools in their day-to-day operations.
Section 1 – AI Generated and Manipulated Content
Section 1 addresses the marking and detection of AI-generated and manipulated content and is targeted at providers of generative AI systems. For many organisations, this will translate into practical obligations around how AI-generated content is presented to customers, clients and the public. It provides that providers of generative AI systems should implement active marking of audio, image, video or text content, or any combination thereof, generated or manipulated by AI which they place on the market or put into service in the EU.
The European Commission press release in relation to the Code notes that “this revision removes and consolidates several measures, while ensuring that all measures remain technically feasible and proportionate. Key commitments include a revised two-layered marking approach involving secured metadata and watermarking, optional fingerprinting and logging, and protocols for detection and verification.”
Section 1 of the Code is divided into four commitments:
- Commitment 1: Two-layered Marking of AI-Generated Content – Signatories commit to implement a two-layered marking approach to AI-generated and manipulated content, involving secured metadata and watermarking and optional fingerprinting and logging (see the illustrative sketch after this list). Businesses should consider whether any of their current outputs (e.g. marketing materials, online content or reports) could fall within this requirement.
- Commitment 2: Detection of the Marking of AI-Generated Content – Signatories commit to implement measures to enable deployers (i.e. users of the generative AI system), end-users exposed to the content, and other legitimate parties to verify whether content has been generated or manipulated by AI. Providers of AI should ensure this verification information is provided in a clear, distinguishable and accessible manner through tools or APIs.
- Commitment 3: Measures to meet the Requirements for Marking and Detection Techniques – The marking and detection techniques used must be technically robust and fit for purpose. In particular, AI markings must be resistant to removal or circumvention, and detection and verification systems must operate reliably.
- Commitment 4: Testing, verification and compliance – Providers of AI must regularly test their AI labelling and detection systems to identify and address deficiencies, and to ensure those systems function as required.
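To give a flavour of what the metadata layer in Commitment 1 and the verification step in Commitment 2 might look like in practice, the sketch below embeds a simple provenance tag in an image file and reads it back. This is a minimal, purely illustrative example using ordinary PNG text chunks via the Pillow library; it is not the secured, tamper-resistant metadata or watermarking the Code envisages, and the key names ("ai_generated", "generator") are assumptions chosen for illustration rather than anything specified in the Code.

```python
# Illustrative sketch only: embedding and reading back a simple provenance tag
# in a PNG file. Real implementations are expected to rely on secured metadata
# and watermarking techniques; the key names below are assumptions.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a basic provenance tag in the image's PNG metadata (the 'metadata layer')."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # assumed key name, for illustration
    metadata.add_text("generator", "example-model-v1")  # hypothetical generator identifier
    image.save(dst_path, pnginfo=metadata)               # dst_path should end in .png


def is_marked_ai_generated(path: str) -> bool:
    """Check the metadata layer - the kind of lookup a detection tool or API might expose."""
    return Image.open(path).text.get("ai_generated") == "true"


# Example usage:
# mark_as_ai_generated("generated.png", "generated_marked.png")
# print(is_marked_ai_generated("generated_marked.png"))
```

Plain metadata of this kind can be stripped or lost on re-encoding, which is precisely why Commitment 3 requires markings that are resistant to removal or circumvention; in practice, providers are likely to look to established provenance and watermarking standards rather than simple text chunks.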
Section 2 – Labelling of Deepfakes and AI Generated and Manipulated Published Text
Section 2 addresses the rules for labelling deepfakes, and AI generated and manipulated published text. This section of the Code is applicable to deployers of AI systems.
The European Commission press release in relation to the Code notes that “relative to the first draft, this section adopts a more flexible and practice-oriented approach. It has been restructured to simplify and streamline the commitments, while the taxonomy distinguishing the AI-generated content from AI-assisted content has been removed.”
Section 2 of the Code is also divided into four commitments:
- Commitment 1: Disclosure of AI-Generated and Manipulated Deepfakes and Published Text – Signatories commit to ensure consistent disclosure of the artificial origin of AI-generated or manipulated deepfakes or published text on matters of public interest by using the uniform EU icon (once available) or choosing an alternative icon or labelling solution that follows the design and placement requirements specified in the Code.
- Commitment 2: Proportionate compliance, awareness and review – Signatories commit to implement proportionate internal processes, awareness measures and review mechanisms for the proper implementation of the labelling obligations in respect of deepfakes and text publications.
- Commitment 3: Appropriate Disclosure for Artistic, Creative and Similar Works – Signatories commit to implement measures to disclose deepfake content that forms part of evidently artistic, creative, satirical, fictional or analogous work or programmes.
- Commitment 4: Human review, editorial control and responsibility in relation to AI-generated or manipulated text publications – To rely on the exception to the disclosure obligation in Article 50(4) of the EU AI Act, Signatories will establish, adapt or maintain minimal documentation, including existing procedures and documents, demonstrating that AI-generated or manipulated text published for the purposes of informing the public on matters of public interest has undergone human review or editorial control prior to publication and that a natural or legal person holds editorial responsibility for the publication (see the illustrative sketch below).
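As a purely illustrative sketch of the decision logic behind Commitments 1 and 4 of this section, the example below attaches a disclosure notice to AI-generated text unless it has undergone human review with a person holding editorial responsibility, in the spirit of the Article 50(4) exception. The disclosure wording, the fields used and the simplified decision rule are assumptions made for illustration; they are not taken from the Code, and the uniform EU icon (once available) and the Code's design and placement requirements will govern what an adequate label actually looks like.

```python
# Purely illustrative sketch of the labelling decision described above.
# The dataclass fields, the decision rule and the disclosure wording are
# assumptions for illustration only, not requirements drawn from the Code.
from dataclasses import dataclass


@dataclass
class TextPublication:
    body: str
    ai_generated: bool            # was the text generated or manipulated by AI?
    human_editorial_review: bool  # reviewed by a person who holds editorial responsibility?


def prepare_for_publication(pub: TextPublication) -> str:
    """Attach a disclosure notice unless an Article 50(4)-style exception applies (simplified)."""
    if pub.ai_generated and not pub.human_editorial_review:
        return "[This text was generated with the assistance of AI]\n\n" + pub.body
    return pub.body


# Example usage:
# draft = TextPublication(body="...", ai_generated=True, human_editorial_review=False)
# print(prepare_for_publication(draft))
```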
Next steps and timeframe
The Code is expected to be finalised by the beginning of June 2026, and the rules governing the transparency of AI-generated content will become applicable on 2 August 2026. Organisations may therefore need to make practical changes sooner than expected, including introducing labelling policies for AI-generated content and reviewing how AI tools are used within the organisation. Early engagement with these requirements may help organisations mitigate risk and demonstrate responsible AI use to regulators and stakeholders.