6.1 The EU AI Act and provenance

Article 50 is the operative provenance text. It requires machine-readable marking of synthetic content, with obligations applying from 2 August 2026. The wording is technology-neutral; the practical answer for image generators has been C2PA.

The EU AI Act (Regulation (EU) 2024/1689) is the first major legal regime that requires marking of AI-generated content as a horizontal obligation across the AI industry. It entered into force on 1 August 2024 and is being phased in across staggered deadlines through 2026 and 2027. The provisions that matter for image provenance are concentrated in Articles 50 and 52, with the marking obligations specifically applicable from 2 August 2026. This page covers what the text actually requires, how it has been interpreted in the months leading up to the deadline, and how it interacts with the C2PA-led private-sector provenance infrastructure.

The intended reader is someone trying to understand what the law actually says rather than the abstract policy debate around it. The Act is long (113 articles plus 180 recitals) and the provenance provisions are a small fraction of the text. The relevant portions are referenced precisely below; the full text is at the EU's official journal site.

What Article 50 requires

Article 50 establishes transparency obligations for "certain AI systems," including general-purpose AI systems that generate synthetic content. The operative subsection for image provenance is Article 50(2):

Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards.

Several elements of the text bear emphasis. First, the obligation is on providers — the actors placing AI systems on the EU market — rather than on end users. A user running a model locally and producing an image does not fall under the obligation; the provider of the system they ran does (with caveats around general-purpose AI providers and the structure of the model-licensing chain). Second, the marking must be machine-readable and detectable — human-visible marks alone do not satisfy the obligation. Third, the technical solutions must be "effective, interoperable, robust and reliable" to the extent technically feasible.

The last phrase is doing substantial work. "Technically feasible" leaves room for the realistic state-of-the-art, which the attacks on watermarks page makes clear is imperfect. The interpretation question — whether a scheme defeated by adversarial scrubbing satisfies "robust" — will be settled through European Commission guidance and eventual litigation, not through the text alone.

Article 50(4): deepfakes and disclosure

Article 50(4) imposes an additional obligation on deployers (users) of AI systems that produce or manipulate image, audio, or video content constituting "deepfakes" — defined in Article 3(60) as AI-generated or manipulated content that appears to be authentic. Deployers must disclose the artificial nature of such content. The provision includes a carve-out for content that is "evidently artistic, creative, satirical, fictional, or analogous," where the disclosure must not interfere with the work but must indicate the artificial nature.

This is a separate kind of obligation: not a technical marking but a human-facing disclosure, and not on the system provider but on the human deploying the system. The deployer must mark or disclose; the provider must enable machine-readable marking. The two work together: a deployer using a system whose provider has implemented C2PA marking can rely on the technical marking as part of their disclosure obligation, but the deployer's editorial responsibility is separate.

What counts as a "general-purpose AI system"

The Act's definitions of "general-purpose AI model" (Article 3(63)) and "general-purpose AI system" (Article 3(66)) determine which providers are subject to the Article 50 marking obligation in its strongest form. The Code of Practice for GPAI providers, adopted in the months leading up to the August 2026 application date, fleshes out compliance expectations, including marking implementations for image-generating GPAI.

The major commercial image generators (OpenAI's DALL·E, Google's Imagen, Adobe's Firefly, Midjourney) all fall comfortably within scope. Smaller fine-tuned models distributed through Hugging Face exist in a more ambiguous space — the fine-tuner may or may not be a "provider" depending on the nature of their distribution. The Code of Practice clarifies several of these cases; others will be tested through enforcement.

How C2PA fits

The Act is technology-neutral. C2PA is not named in the text and is not mandated. The recitals do indicate that "watermarks, metadata identifications, cryptographic methods for proving provenance and authenticity of content, logging methods, fingerprints" can all satisfy the marking obligation, provided they meet the effectiveness criteria. The choice is left to providers.

In practice, the major commercial providers have converged on C2PA-plus-watermarking as their answer. C2PA provides the structured manifest that records generation; SynthID and similar watermarks provide the soft-binding signal that survives some stripping. The combination is the technical implementation that providers have deployed and that they will defend as satisfying Article 50.
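
As a concrete illustration of what the machine-readable layer looks like from the consuming side, the sketch below checks an extracted C2PA manifest store for an AI-generation marking. It is a minimal sketch, assuming the manifest JSON has already been pulled out of the asset (for example with the open-source c2patool) and that the generator recorded the IPTC digitalSourceType value for trained algorithmic media in a c2pa.actions assertion; field names vary across tool and spec versions, so treat it as an illustration rather than a compliance test.

```python
import json

# IPTC digital source type commonly recorded by C2PA-aware generators for
# fully AI-generated content (assumption: the generator uses this value).
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def is_marked_ai_generated(manifest_store: dict) -> bool:
    """Return True if the active manifest contains an action whose
    digitalSourceType declares the asset as AI-generated.

    `manifest_store` is the JSON dict extracted from an asset (e.g. the
    output of `c2patool image.jpg`); the layout here follows the common
    c2patool shape and may differ between versions.
    """
    active = manifest_store.get("active_manifest")
    manifest = manifest_store.get("manifests", {}).get(active, {})
    for assertion in manifest.get("assertions", []):
        if not assertion.get("label", "").startswith("c2pa.actions"):
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                return True
    return False

if __name__ == "__main__":
    # manifest_store.json is a placeholder path for an extracted manifest.
    with open("manifest_store.json") as fh:
        print(is_marked_ai_generated(json.load(fh)))
```

A stripped or absent manifest simply returns False here, which is why providers pair the manifest with a watermark as the soft-binding fallback.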

Whether the European Commission's guidance and the courts agree that this combination is "robust, reliable, interoperable" is the live question. The C2PA approach is interoperable in the sense that it is an open specification with multiple validators; it is robust in the sense that the cryptography is sound; it is reliable in the sense that the major providers have implemented it consistently. The watermarking layer is the weakest link — survivable in benign distribution but defeated by adversarial scrubbing. The interpretation question is whether that weakness disqualifies the combination or whether the combination, evaluated as a whole, is robust enough.

| Provision | Subject | Obligation | Date |
| --- | --- | --- | --- |
| Article 50(2) | Providers of generative systems | Machine-readable marking of synthetic output | 2 August 2026 |
| Article 50(4) | Deployers | Disclose deepfake content | 2 August 2026 |
| Articles 51–55 | GPAI providers | Various transparency and risk-management duties | 2 August 2025 onwards |
| Article 52 | Specific transparency provisions | Disclosure of system limitations to deployers | 2 August 2026 |

Enforcement and penalties

The Act establishes the AI Office within the European Commission as the body responsible for guidance and oversight of GPAI providers, with member-state market-surveillance authorities responsible for enforcement against individual providers. Penalties for non-compliance with the Article 50 transparency obligations are administrative fines, with maximums calibrated to a provider's turnover (up to €15 million or 3% of global annual turnover, whichever is higher, for transparency violations under Article 50). Higher tiers apply for other parts of the Act.
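
For a sense of scale, the cap is simply the larger of the two figures. The sketch below is illustrative arithmetic only; actual fines are set case by case within the cap.

```python
def article_50_fine_cap_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound on administrative fines for Article 50 transparency
    violations: EUR 15 million or 3% of worldwide annual turnover,
    whichever is higher."""
    return max(15_000_000, 0.03 * worldwide_annual_turnover_eur)

# A provider with EUR 2 billion turnover faces a cap of EUR 60 million;
# one with EUR 100 million turnover is still exposed up to EUR 15 million.
print(article_50_fine_cap_eur(2_000_000_000))  # 60000000.0
print(article_50_fine_cap_eur(100_000_000))    # 15000000.0
```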

Enforcement has not produced its first public case as of mid-2026 because the operative date for Article 50 has not yet passed. What guidance has emerged from the AI Office and the European Data Protection Board has been general — pointing to the need for "appropriate technical and organizational measures" rather than mandating specific schemes. The first enforcement actions are expected in the second half of 2026 and the first half of 2027.

Note: The Act does not require C2PA specifically and does not prohibit a provider from using a different scheme. A provider implementing a proprietary watermark plus metadata identifications could satisfy the marking obligation. The convergence on C2PA is a practical industry decision driven by interop economics, not a legal requirement.
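
To see why a bare "metadata identification" cannot stand alone, consider the most naive possible marking: a text chunk written into a PNG. The sketch below is machine-readable in the literal sense (the key names are invented for illustration), but the marking disappears on any re-encode or screenshot, which is exactly the weakness the watermark layer is meant to cover.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a stand-in "generated" image; a real provider would mark its own output.
image = Image.new("RGB", (64, 64), color="gray")

metadata = PngInfo()
# "ai-generated" and "generator" are made-up keys used only for illustration;
# real deployments use structured schemes (C2PA manifests, IPTC fields) instead.
metadata.add_text("ai-generated", "true")
metadata.add_text("generator", "example-model-v1")

# The marking lives in PNG text chunks and survives only until the file is
# re-encoded, screenshotted, or otherwise laundered.
image.save("marked_output.png", pnginfo=metadata)
```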

What the Act does not cover

Several gaps in the Article 50 coverage are worth noting:

- Open-weight and locally run models. The obligation attaches to providers placing systems on the EU market, not to end users, so a person generating images with a model they run themselves produces output that nobody is obliged to mark.
- Ambiguous providers. Fine-tuned and re-hosted models, such as those distributed through Hugging Face, may have no clear "provider" in the Act's sense, as discussed above.
- Assistive editing. Article 50(2) does not apply to the extent a system performs an assistive function for standard editing or does not substantially alter the input, so routine AI-assisted retouching falls outside the marking duty.

Each of these gaps is a deliberate scope choice in the Act's drafting, not an oversight. Together they mean that the marking regime applies to the large commercial sector and leaves the open-weights long tail untouched. The realistic effect is that compliant commercial generators will produce marked content, and the population of unmarked synthetic content will continue to be dominated by open-model use.

The relationship to other regulations

The EU AI Act does not stand alone. Several adjacent regulations interact with the marking obligation:

Where the field is moving

The next eighteen months are dominated by the operational shift to compliance. Providers have implemented marking; the question is whether their implementations survive scrutiny from the AI Office and from civil-society groups likely to test the system. Several published research projects through 2025 have tested commercial generator watermarks against adversarial attacks and reported substantial robustness gaps; these reports are likely to feature in the early enforcement discussions.

The harder long-term question is whether the marking obligation drives convergence toward a global standard or fractures into regional implementations. China's deep synthesis rules (in force since January 2023) have their own marking expectations; the US has not adopted federal marking rules but several states have. The C2PA-led private-sector infrastructure is the best candidate for cross-regional interop, but its adoption depends on whether the major jurisdictions accept it as compliant with their respective regimes. The next several years will reveal whether content provenance becomes a globally consistent layer or a fragmented one.