The central claim of this page is the most important one in the whole reference: a content credential does not establish truth. It establishes that a specific party, holding a specific key, made specific assertions about a file, and that the file has not been altered since those assertions were sealed. That is a smaller statement than the public discourse around C2PA usually suggests, and conflating the two has produced both inflated expectations and unwarranted skepticism.
This page enumerates the specific things provenance does not prove. The intended audience is anyone who might rely on a credential to make a decision: an editor accepting a wire image, a juror weighing photographic evidence, a platform moderator triaging a flagged post. The goal is not to undermine confidence in provenance — credentials remain valuable — but to give practitioners a precise sense of which questions credentials answer and which they do not.
The framing borrows from the C2PA 2.x specification itself, which is unusually explicit about its own limits. The C2PA Threats and Harms document, updated alongside the 2.x spec releases, names many of the failure modes catalogued here. The coalition's position is not that provenance solves trust; the coalition's position is that provenance is a foundation on which informed trust judgments can be made.
What the credential does and does not assert
A C2PA credential, validated successfully, asserts a chain of claims: that a particular signer holding a particular X.509 certificate made these assertions, that the assertions reference a particular hash of the pixel data, and that all signatures are mathematically valid. It does not assert that the signer is honest, that the signer's claims correspond to physical reality, or that the chain represents the complete history of the file. The first two are out of scope by design; the third depends on whether intermediate steps preserved or discarded prior manifests.
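The scope of that chain can be made concrete with a toy model. The sketch below is not the real C2PA format (actual manifests are CBOR/JUMBF structures signed with COSE over X.509-anchored keys); an HMAC stands in for the signer's key purely for illustration. What it shows is exactly what a validator can mechanically check, and that nothing in the check touches whether the assertions are true.

```python
import hashlib
import hmac
import json

SIGNER_KEY = b"camera-device-key"  # stand-in for a hardware-held private key

def seal(asset: bytes, assertions: dict) -> dict:
    """Produce a toy 'manifest': assertions bound to the asset's hash."""
    claim = {
        "assertions": assertions,
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {
        "claim": claim,
        "signature": hmac.new(SIGNER_KEY, payload, "sha256").hexdigest(),
    }

def validate(asset: bytes, manifest: dict) -> bool:
    """What a validator checks: hash binding plus signature validity."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNER_KEY, payload, "sha256").hexdigest(),
    )
    hash_ok = manifest["claim"]["asset_sha256"] == hashlib.sha256(asset).hexdigest()
    # Note what is absent: no step here inspects whether the assertions
    # correspond to reality. Truth is out of scope by construction.
    return sig_ok and hash_ok

photo = b"...bytes of a staged scene..."
m = seal(photo, {"c2pa.actions": [{"action": "c2pa.created"}]})
assert validate(photo, m)             # valid credential, possibly false scene
assert not validate(photo + b"x", m)  # any alteration breaks the binding
```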
A credential issued by a Leica M11-P attached to a particular photographer's account confirms that the camera-and-signer combination registered a particular image. It does not confirm what the camera was pointed at. A staged scene captured on a Leica produces a perfectly valid credential containing a perfectly false implied claim. The credential is honest about what it observed; the photographer is not. There is no cryptographic mechanism that can distinguish these cases.
This limitation is structural, not a flaw. Cryptography binds bits to identities; it cannot validate semantics. A photograph asserts a fact about the world, but the world is not on the chain. The same problem afflicts every form of signed media — a signed PDF can contain false text, a signed video can show a staged event — but the photographic context produces a particularly seductive illusion of evidentiary completeness that the credential reinforces.
The staged-scene problem
The most persistent class of fraudulent photographs in the historical record involves staging rather than post-capture manipulation. The best-known case is Robert Capa's 1936 "Falling Soldier," debated for decades and now widely regarded as likely staged. The 1992 ITN footage of Bosnian Serb camps, the 2003 staged Hajj photographs (separate from the post-capture compositing), and the routine staging of conflict photography by various combatant parties: none of these would be caught by C2PA.
The reason is simple. A staged photograph is a real photograph of a fake event. The camera saw what the camera saw; the credential records that faithfully. The deception lives in the relationship between the image content and the caption, and captions are not part of what C2PA signs in any meaningful way. The c2pa.actions assertion can record that the photographer added a caption, but it cannot validate the caption's truth.
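For concreteness, a c2pa.actions assertion has roughly the following shape. The structure follows the C2PA actions assertion; the tool name and parameter values below are invented examples, and real assertions carry additional fields.

```python
import json

# Approximate shape of a c2pa.actions assertion. "softwareAgent" and the
# caption description are hypothetical; only the structure is the point.
actions_assertion = {
    "label": "c2pa.actions",
    "data": {
        "actions": [
            {"action": "c2pa.created"},
            {
                "action": "c2pa.edited",
                "softwareAgent": "ExampleEditor 1.0",  # invented tool name
                "parameters": {"description": "caption added"},
            },
        ]
    },
}

# The assertion records THAT a caption was added, not what it says or
# whether it is accurate. The caption's truth value is outside the seal.
print(json.dumps(actions_assertion, indent=2))
```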
This is the case where editorial judgment, source assessment, and traditional verification methods remain essential. C2PA helps establish that the photograph came from where it claims to come from. The question of what the photograph shows remains a journalistic problem.
The signer-compromise problem
A credential's strength is the strength of the signing key and the trustworthiness of the signer. If an adversary obtains the private key of a trusted signer — by extraction from compromised hardware, by social engineering, by purchase from a corrupt insider — they can produce arbitrarily many valid credentials. The chain of trust looks fine to a validator. The forgeries are technically indistinguishable from authentic captures.
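The toy sketch below, with an HMAC again standing in for the signer's private key, shows why. Validation depends only on key material, so a claim signed with a leaked key is mechanically identical to an authentic one.

```python
import hashlib
import hmac
import json

LEAKED_KEY = b"camera-device-key"  # extracted from compromised hardware

def sign(claim: dict, key: bytes) -> str:
    payload = json.dumps(claim, sort_keys=True).encode()
    return hmac.new(key, payload, "sha256").hexdigest()

def validator_accepts(claim: dict, sig: str) -> bool:
    # The validator trusts the signer's key material; it has no way of
    # knowing that the key has left the signer's control.
    return hmac.compare_digest(sig, sign(claim, LEAKED_KEY))

fake_image = b"entirely synthetic scene"
forged_claim = {
    "asset_sha256": hashlib.sha256(fake_image).hexdigest(),
    "assertions": {"c2pa.actions": [{"action": "c2pa.created"}]},
}
forged_sig = sign(forged_claim, LEAKED_KEY)

assert validator_accepts(forged_claim, forged_sig)  # passes every check
```

This is why revocation and trust-list updates, rather than any property of the signatures themselves, are the only remedy once a key is compromised.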
The 2025 Nikon Z6 III incident is the cleanest published example. A signing flaw in the camera's firmware allowed C2PA manifests to be forged without possession of the camera. Nikon suspended its C2PA implementation while the issue was addressed; the broader ecosystem treated the affected certificates as untrusted. The lesson was that hardware-rooted signing is only as good as the firmware that uses it, and that incident response — revocation, trust-list updates, communicating to validators — is as important as the cryptography. This is covered in more depth on the trust list page.
Software signers are worse. A C2PA implementation in an editing application can be reverse-engineered; if the signing key is embedded in the binary, it can be extracted. Adobe Firefly, OpenAI's image tools, and other cloud generators avoid this by signing in their backends, but the broader population of C2PA-supporting software tools varies in its key-protection rigor. Validators have no way to distinguish a signature produced with a stolen key from one produced legitimately.
The missing-manifest problem
The absence of a credential is not evidence of fakery. The overwhelming majority of images in circulation as of 2026 have no C2PA manifest because they were captured on devices that do not produce them, edited in tools that do not preserve them, or distributed through platforms that strip them. This includes essentially all journalism shot before the C2PA-capable camera generation, all citizen video from non-flagship devices, and all images from any source that does not pay attention to provenance.
Treating uncredentialed images as suspect would invalidate the entire historical photographic record. This is not a hypothetical concern: several proposed legislative drafts in 2024 and 2025 used language that, taken literally, would have created exactly this presumption. The credential is a positive signal; its absence is a non-signal. A verification practice that treats absence as suspicious will misclassify the bulk of legitimate imagery in any newsroom archive.
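As triage policy, this reduces to a three-way classification rather than a binary. A minimal sketch, with invented names:

```python
from enum import Enum

class ProvenanceSignal(Enum):
    VERIFIED = "valid credential from a trusted signer"
    INVALID = "credential present but fails validation"
    ABSENT = "no credential: a non-signal, not evidence of fakery"

def triage(has_manifest: bool, validates: bool = False) -> ProvenanceSignal:
    # Absence routes an image to ordinary verification methods,
    # never to automatic suspicion.
    if not has_manifest:
        return ProvenanceSignal.ABSENT
    return ProvenanceSignal.VERIFIED if validates else ProvenanceSignal.INVALID
```

A workflow that collapses ABSENT into INVALID is the presumption the legislative drafts stumbled into.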
The broken-chain problem
C2PA chains break readily. A re-encoding without C2PA-aware tooling discards the manifest. A filter or upscaling pass that renders a new bitmap discards it. A screenshot certainly discards it. The durable Content Credentials mechanism is designed to recover from this through watermarks and fingerprints — but durable recovery is partial, depends on a registry being queried, and gives back only what was in the original manifest at the time of registration.
A more subtle case is the partial chain. An image may carry a manifest covering only the most recent edit, with no reference to earlier manifests because an intermediate tool stripped them. A validator sees a valid chain back to a recent signer but cannot see further back. This is not a forgery, but it is also not the full provenance the consumer might assume from a "Verified" badge. The C2PA validation result will indicate that ingredients are missing; whether the consuming application surfaces that distinction to the user is an interface decision that varies across implementations.
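A hypothetical validation report illustrates the distinction a consuming application could surface. The field names below are invented for illustration; real validators emit richer status codes.

```python
# Invented report shape: a valid active manifest whose ingredient chain
# was truncated by an intermediate tool that stripped prior manifests.
report = {
    "activeManifest": {"signer": "ExampleEditor 1.0", "signatureValid": True},
    "ingredients": [
        {"title": "original capture", "manifestPresent": False},  # stripped upstream
    ],
}

def chain_is_complete(report: dict) -> bool:
    return all(i["manifestPresent"] for i in report["ingredients"])

# A bare "Verified" badge that ignores this distinction overstates
# what was actually checked.
label = ("verified, full history" if chain_is_complete(report)
         else "verified signature, partial history")
assert label == "verified signature, partial history"
```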
The semantic-assertion problem
C2PA assertions can include AI-generation flags, training-mining opt-outs, and producer identity. These are useful, but they are voluntary statements by the producer. An open-weights model running on a user's laptop with no provenance tooling produces images that carry no AI-generation assertion because there is nothing in the pipeline to add one. The credential ecosystem is opt-in for producers; it offers no leverage against producers who decline to opt in.
The EU AI Act's Article 50 marking obligation, applicable from 2 August 2026, is an attempt to convert this from voluntary to mandatory for large providers. The text is enforceable against providers placing generative systems on the EU market, but it does not reach individual users running open models on local hardware. The EU AI Act page covers the legal mechanics in detail. The structural limit remains: any technical provenance scheme that depends on producer cooperation has no effect on producers who refuse.
What this means for working practice
Provenance is one input in a verification workflow, not the workflow. A complete practice combines credential inspection where credentials exist, metadata and forensic analysis where they do not, reverse image search to establish reuse history, and source assessment to establish whether the apparent producer is who they claim to be. A workflow that treats a valid credential as terminal is making the same category error that treats an absent credential as terminal: both confuse cryptographic signal with editorial judgment.
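The aggregation logic is simple to state. A minimal sketch, with invented signal names, of provenance as one input among several:

```python
def assess(credential_valid: bool, signer_trusted: bool,
           reverse_search_clean: bool, source_vetted: bool) -> str:
    """Combine verification signals; no single signal is terminal."""
    signals = {
        "credential": credential_valid and signer_trusted,
        "reuse_history": reverse_search_clean,
        "source": source_vetted,
    }
    if all(signals.values()):
        return "publishable pending editorial review"
    weak = [name for name, ok in signals.items() if not ok]
    return f"needs follow-up on: {', '.join(weak)}"
```

Note that even the all-clear path ends in editorial review, not publication: the cryptographic signals bound the question, and judgment answers it.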
The honest framing for end users — embedded in browser badges, news organization disclosures, evidentiary presentation in court — is that a credential answers "did this come from where it claims to come from" and not "is this true." The first question has a precise cryptographic answer. The second question has whatever answer the surrounding practice supports, which is usually a probabilistic one and sometimes a contested one. The verification workflow page sketches what disciplined practice looks like; this page is its negative image.
Where the field is moving
The C2PA coalition's published documents through 2025 have become considerably more explicit about these limits than the early marketing material suggested. The JPEG Trust specification (ISO 22144) introduces a vocabulary for distinguishing different validation outcomes, including the partial-chain and untrusted-signer cases, which lets consuming applications report nuance rather than a single binary. Whether implementers actually surface that nuance to end users is a UX question, not a technical one, and the early adopters have varied widely in how they handle it.
The deeper open question is institutional. A credential ecosystem requires trusted signers, a body that decides which CAs are trusted, and a process for revoking trust when signers misbehave. The C2PA coalition has built this infrastructure, but the political question of who controls it — what happens when a national government insists on issuing C2PA certificates to its journalists, or refusing to recognize foreign ones — has barely been engaged. The cryptography is the easy part. The governance is the part that will determine whether provenance becomes a stable layer of public trust or a contested terrain in the broader information politics of the 2030s.