
Waiting for the Let's Encrypt Moment for Content Authenticity

AI-generated entry. See What & Why for context.

Content authenticity is missing the catalyst that made HTTPS universal

I started thinking about how to solve the AI content problem. Deepfakes, synthetic media, generated text indistinguishable from human writing. The usual framing: “this is the next big crisis after social media misinformation.”

My first instinct was the HTTPS analogy. Remember when HTTP was the default? Sites were untrusted. Data traveled in plaintext. Then the industry came together. Browsers started flagging non-HTTPS sites as “Not Secure.” Let’s Encrypt made certificates free. Today you can barely find a site without the padlock icon.

Could we do the same for content? Flag unsigned images as “unverified.” Build a web of trust for media. Make it trivially easy to sign content at creation time.

Turns out, I wasn’t the first person to think of this. The industry has been working on exactly this problem for years. Two fundamentally different approaches have emerged, attacking the problem from opposite ends.

C2PA aims to prove that content is human-generated. Sign it at creation, maintain chain of custody, verify the source.

Watermarking aims to prove that content is AI-generated. Embed an invisible signal at generation time, detect it later.

One approach says “trust this, it came from a real camera.” The other says “be skeptical, this came from an AI.” Together, they might actually work.

Approach 1: Cryptographic Provenance (C2PA)

The Coalition for Content Provenance and Authenticity launched in February 2021, merging Adobe’s Content Authenticity Initiative with Microsoft and BBC’s Project Origin. The idea: prove where content came from, not whether it’s “real.”

How It Works

C2PA uses the same trust infrastructure as HTTPS. X.509 certificates. Certificate authorities. Cryptographic signatures.

When you take a photo with a C2PA-enabled camera, the device generates a manifest. The manifest contains assertions: timestamp, device info, GPS coordinates (optional), whether AI was involved in creation or editing. This manifest gets hashed (SHA-256) and signed with the device’s private key. The signature chain traces back to a trusted Certificate Authority.
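To make the mechanics concrete, here is a minimal sketch of that hash-then-sign flow in Python, using `hashlib` and the `cryptography` package. It illustrates the principle only: real C2PA manifests use CBOR/JUMBF containers and COSE signatures, and the assertion fields below are simplified placeholders, not the actual schema.

```python
import json, hashlib
from cryptography.hazmat.primitives.asymmetric.ec import generate_private_key, SECP256R1, ECDSA
from cryptography.hazmat.primitives.hashes import SHA256

# Simplified stand-in for a C2PA manifest: a few assertions about the capture.
manifest = {
    "claim_generator": "example-camera/1.0",
    "assertions": {
        "c2pa.timestamp": "2025-01-15T09:30:00Z",
        "c2pa.device": "ExampleCam X100",
        "c2pa.ai_generative": False,
    },
}

# Hash the serialized manifest (C2PA uses SHA-256 over a canonical encoding).
payload = json.dumps(manifest, sort_keys=True).encode()
digest = hashlib.sha256(payload).hexdigest()

# Sign with the device's private key; the matching certificate would chain to a CA.
device_key = generate_private_key(SECP256R1())
signature = device_key.sign(payload, ECDSA(SHA256()))

# Anyone holding the public key (via the certificate chain) can verify the claim.
device_key.public_key().verify(signature, payload, ECDSA(SHA256()))
print("manifest digest:", digest)
```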

Think of it as a passport for content. The passport doesn’t prove the photo depicts truth. It proves the photo came from a specific device at a specific time, and it hasn’t been tampered with since.

When content gets edited in C2PA-aware tools, previous manifests become “ingredients” in a new manifest. You get a family tree of provenance. Original photo, cropped version, color-corrected version, each with its own signed manifest pointing back to its parent.
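A rough sketch of what that family tree looks like as data, with each edit producing a new manifest that references its parent by digest. The action names echo the C2PA vocabulary, but the structure here is illustrative, not the spec's ingredient schema.

```python
import hashlib, json

def manifest_id(manifest: dict) -> str:
    """Content-addressable ID: SHA-256 of the serialized manifest."""
    return hashlib.sha256(json.dumps(manifest, sort_keys=True).encode()).hexdigest()

# Each edit produces a new manifest that lists its parent as an "ingredient".
original = {"action": "c2pa.created", "device": "ExampleCam X100", "ingredients": []}
cropped = {"action": "c2pa.cropped", "tool": "ExampleEditor", "ingredients": [manifest_id(original)]}
graded = {"action": "c2pa.color_adjustments", "tool": "ExampleEditor", "ingredients": [manifest_id(cropped)]}

# Walking the ingredient links recovers the provenance chain back to the capture.
for step in (graded, cropped, original):
    print(step["action"], "<-", step["ingredients"] or "capture")
```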

PKI vs Blockchain

C2PA chose traditional PKI over blockchain. Alternatives exist: Numbers Protocol uses blockchain for decentralized provenance, and Vbrick offers verified video on Polkadot. But C2PA went with the boring option.

Why? According to the C2PA specification, the design prioritizes offline validation (verify signatures without network access), lower implementation complexity, and compatibility with existing infrastructure. Every browser already trusts the same Certificate Authorities.

Blockchain solutions offer different trade-offs: immutability guarantees, decentralized trust, no single point of failure. But C2PA bet on simplicity and existing adoption.

Who’s Using It

Hardware adoption has accelerated faster than I expected.

Cameras: The Leica M11-P shipped in October 2023 as the first camera with built-in C2PA support. Canon followed with the EOS R1 and R5 Mark II in July 2024. Sony, Nikon, and Fujifilm have added support via firmware updates to various models.

Phones: According to Google’s announcement, the Pixel 10 added native C2PA support in September 2025, the first smartphone with built-in Content Credentials in the stock camera app. It achieved Assurance Level 2, the highest C2PA security rating, using hardware-backed key storage in the Titan M2 chip. Third-party apps like Click and ProofMode brought C2PA to phones earlier, but required separate installation and certificate management.

Software: Adobe Creative Cloud, OpenAI’s image generation, Google Gemini, and Microsoft Designer all attach Content Credentials to outputs. See the CAI member list for the full roster.

For developers, the Content Authenticity Initiative provides open-source SDKs. Rust (c2pa-rs), JavaScript, and Python bindings. MIT/Apache licensed.
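As a sketch of what consuming those SDKs looks like, here is a read-side example with the Python bindings. The entry point is an assumption: older c2pa-python releases exposed a `read_file` helper (used below), while newer versions move to a `Reader` class, so check the current SDK docs before relying on this shape.

```python
# Assumes the c2pa-python bindings (older read_file-style API); newer releases
# expose a Reader class instead. The JSON layout may also vary by version.
import json
import c2pa

manifest_json = c2pa.read_file("photo.jpg", "extracted_data")  # second arg: dir for extracted resources
store = json.loads(manifest_json)

# The active manifest is the most recent signed claim; ingredients point back
# to earlier manifests in the provenance chain.
active = store["manifests"][store["active_manifest"]]
print(active["claim_generator"])
for ingredient in active.get("ingredients", []):
    print("ingredient:", ingredient.get("title"))
```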

The Transformation Problem

Here’s where C2PA runs into trouble. Cryptographic signatures break on any edit.

Crop an image? New hash. The original signature is invalid. This is by design for tamper detection. But it creates problems for legitimate workflows. Journalists crop photos. Designers resize images. Social media platforms compress everything.
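A toy continuation of the earlier signing sketch makes the point: change even one byte of the signed payload and verification fails, which is exactly what a crop does to the signed hash.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ec import generate_private_key, SECP256R1, ECDSA
from cryptography.hazmat.primitives.hashes import SHA256

key = generate_private_key(SECP256R1())
image_bytes = b"...original pixel data..."
signature = key.sign(image_bytes, ECDSA(SHA256()))

# "Crop" the image: a one-byte change means the signed hash no longer matches.
edited_bytes = image_bytes[1:]
try:
    key.public_key().verify(signature, edited_bytes, ECDSA(SHA256()))
except InvalidSignature:
    print("signature no longer verifies -- tamper detection by design")
```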

C2PA handles this with “ingredient” manifests. The edited version points back to the original. But this requires C2PA-aware editing tools throughout the entire workflow. One non-C2PA tool in the chain breaks the lineage.

Zero-Knowledge Proofs: A Potential Fix

What if you could prove that an edited image came from a legitimately signed original, through only permissible transformations, without revealing the original image?

Dan Boneh’s group at Stanford demonstrated exactly this using ZK-SNARKs. The proof contains: the original signature, the altered file, the list of modifications (crop, resize, grayscale), and a zero-knowledge proof that the transformation is valid. The verifier can confirm the edited image derives from a properly signed original without ever seeing that original.

More recent work like zk-REAL (2024) extends this with lattice-based hashing optimized for iterative edits, claiming significant computational improvements over prior methods. ZK-IMG provides experimental libraries for image transformations with privacy preservation.

These are research-stage, not production-ready. But they address a real gap. Imagine a future where you can crop a photo, post it to social media, and the platform can still verify it came from a signed camera capture through a valid chain of edits.

Approach 2: Watermarking

Watermarking attacks the problem from the opposite direction. Instead of proving human origin, mark AI-generated content at creation time. Embed an invisible signal that survives screenshots, compression, and platform sharing.

How SynthID Works

Google’s SynthID is the most widely deployed watermarking system. The implementation varies by modality.

For images and video: two neural networks work together. One modifies pixel values imperceptibly. The other detects the pattern even after cropping, compression, and filters.

For text: Google open-sourced SynthID text watermarking in October 2024 via Hugging Face. It operates as a logits processor, adjusting token probability scores using a pseudorandom function to create a detectable statistical pattern in the output. Detection uses a Bayesian classifier with three states: watermarked, not watermarked, or uncertain.
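A minimal sketch of the generation side via the Hugging Face integration (transformers 4.46+). The model name and key list below are placeholders: the keys are the secret a provider would hold privately, and detection requires a separately trained Bayesian detector rather than anything shown here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig

model_name = "google/gemma-2-2b-it"  # placeholder: any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The watermark is keyed: these integers stand in for the provider's secret keys.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,  # watermark each token based on the preceding n-gram
)

inputs = tokenizer("Write a short note about content provenance.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,  # applied as a logits processor during sampling
    do_sample=True,
    max_new_tokens=100,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```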

According to Google’s reporting, SynthID has been applied to over 10 billion pieces of content through Gemini, Imagen, Veo, and NotebookLM.

The Open-Weights Problem

Here’s the challenge nobody talks about enough.

I can generate content with Llama. Or Mistral. Or any of the dozens of capable open-weights models. No watermark. No provenance. No detection possible.

Watermarking only works if generators participate. Open-weights models exist specifically to give users full control. Some will add watermarks voluntarily. Many won’t. And models used for malicious generation definitely won’t.

This is why watermarking alone isn’t sufficient. It catches content from participating systems. It misses everything else.

C2PA vs Watermarking: Opposite Ends, Complementary Solutions

These approaches attack the problem from opposite directions.

| | C2PA | Watermarking |
| --- | --- | --- |
| Goal | Prove content is human-generated | Prove content is AI-generated |
| Method | Sign at creation, verify chain of custody | Embed signal at generation, detect later |
| What it proves | “This came from device X at time Y” | “This was generated by AI system Z” |
| Robustness | Fragile. Screenshots and metadata stripping break it. | More robust. Survives compression, cropping, some screenshots. |
| Who participates | Content creators and their tools | AI generators |
| Detection | Open standard. Anyone can verify. | Often proprietary. Need the right detector. |
| Bypass | Create unsigned content (looks suspicious) | Use open-weights models that don’t watermark |

The key insight: they’re complementary.

C2PA can’t stop someone from creating unsigned content. But if authentic content is signed, unsigned content becomes suspicious by default.

Watermarking can’t catch content from non-participating generators. But if major AI providers watermark their outputs, unwatermarked AI content has to come from smaller, less capable models or require more effort to produce.

C2PA actually uses watermarks for “soft binding.” When platforms strip metadata, perceptual watermarks help match content back to stored manifests. Adobe’s “durable Content Credentials” combine both approaches.

The vision: human-generated content carries C2PA credentials proving its origin. AI-generated content carries watermarks identifying its source. Content with neither is treated with appropriate skepticism.

Neither alone is sufficient. Together, they cover more ground.

The Unsolved Problems

Both approaches share fundamental gaps.

Screenshots destroy everything. Take a screenshot of a signed image and you have a new image with no provenance. No signature. Often no recoverable watermark. This is the single biggest unsolved problem.

Platforms strip metadata. Facebook, Instagram, X, and YouTube remove metadata on upload. This breaks C2PA chains for the majority of content sharing. Some platforms are starting to preserve credentials (LinkedIn shows Content Credentials, TikTok labels AI content), but the dominant platforms don’t yet.

No Let’s Encrypt equivalent. HTTPS adoption exploded when Let’s Encrypt made certificates free and automated. C2PA signing certificates still cost money. DigiCert’s document signing certificates run several hundred dollars per year. Until signing is free and frictionless, adoption will be limited to professional workflows.

Legacy content. The vast majority of existing content predates these standards. There’s no way to retroactively authenticate a photo from 2019.

“Unsigned ≠ fake” perception. If we train people to distrust unsigned content, we create false suspicion around legitimate content that simply predates the standard or comes from non-participating tools.

Who Enforces Trust?

HTTPS worked because browsers owned the choke point. One UI element. One binary decision: secure or not secure. One clear harm signal: your password could be stolen.

Content authenticity doesn’t have an obvious equivalent. But let’s look at who could play this role.

Browsers? Unlikely for now. Browsers render web pages, not arbitrary media. There’s no natural place for a “content verified” indicator that users would see consistently across social feeds, messaging apps, and email.

Social platforms? Mixed incentives, but movement is happening. Platforms optimize for engagement, and labeling content creates friction. But regulatory pressure and brand risk are shifting the calculus. LinkedIn displays Content Credentials. TikTok labels AI content. Meta claims to detect C2PA signals. The holdouts are significant (Facebook, Instagram, X, YouTube still strip metadata), but the direction is toward preservation, not away from it.

Governments? Moving faster than expected. The EU AI Act mandates AI content labeling by August 2026, with penalties up to 3% of global revenue. The first draft Code of Practice, published December 17, 2025, recommends a multilayered approach: metadata embedding, watermarking, and fingerprinting. Regulatory forcing functions work. GDPR changed how the entire internet handles privacy. The EU AI Act could do the same for content authenticity.

Users? Not directly. Most users won’t manually verify credentials. But they don’t need to. If platforms display trust indicators automatically, users benefit without effort. The padlock icon works not because users understand TLS, but because browsers show it.

What Developers Should Know Today

If you’re building content creation tools: look at the C2PA SDKs. The Rust implementation (c2pa-rs) is mature. JavaScript and Python bindings exist. The specification is royalty-free.

If you’re building moderation or detection: watermark detection requires specific detectors. SynthID text watermarking is available via Hugging Face Transformers (v4.46.0+). For images, you’re mostly dependent on proprietary APIs, though research implementations exist.

If you’re shipping generative AI in the EU: the August 2026 deadline is real. Start thinking about compliance now. The draft Code of Practice gives you a roadmap.

Realistic expectations: C2PA proves provenance, not truth. A cryptographically signed image from a trusted camera proves the camera signed it. It doesn’t prove the scene depicted actually occurred. Watermarks prove generation source, not intent. Neither solves misinformation completely. Both make it harder to produce and spread.

Where This Is Heading

Let me end on a note of cautious optimism.

The pieces are falling into place faster than the skeptics expected.

Hardware is moving. Two years ago, C2PA cameras were a novelty. Today, Leica, Canon, Sony, Nikon, Fujifilm, and Google Pixel all support it. The major camera manufacturers are aligned. Smartphones are joining.

Software is moving. Adobe, Microsoft, Google, OpenAI, and Meta have committed to Content Credentials. The creative tools people actually use are adding support.

Regulation is moving. The EU AI Act creates real consequences for non-compliance. When 3% of global revenue is on the line, companies pay attention. And EU regulations have a way of becoming global standards.

The industry is aligned. This isn’t a format war. C2PA has Adobe, Microsoft, Google, Intel, BBC, Sony, and others. The Coalition for Content Provenance and Authenticity is a rare case of competitors agreeing on infrastructure.

What’s missing is the Let’s Encrypt moment.

In December 2015, Let’s Encrypt launched free, automated certificate issuance. Five years later, HTTPS went from optional to universal. One nonprofit, one API, one removal of friction.

Content authenticity is waiting for its equivalent. Free, automated signing that requires zero effort from creators. No $289 certificates. No key management. Just: take a photo, it’s signed.

Someone will build it. Maybe Adobe expands their free signing tools. Maybe a nonprofit spins up under the Linux Foundation. Maybe Google bundles free signing into Android the way they bundled C2PA into Pixel. The technical barriers are solved. The economic barrier is one good nonprofit away from falling.

The screenshot problem remains hard. ZK proofs are promising but years from production. Platform metadata stripping is a policy choice that regulation and competitive pressure can change.

I started this survey expecting to find a fragmented mess. I found an ecosystem that’s further along than I realized, with clearer momentum than the headlines suggest.

C2PA proves human origin. Watermarking flags AI generation. Regulation creates enforcement pressure. Industry alignment prevents fragmentation. The gaps are real but shrinking.

Will this solve misinformation? Not completely. The hardest cases involve real content in misleading contexts, and cryptography can’t fix that. But “AI-generated images are trivially indistinguishable from photos” could become “AI-generated images without watermarks from major providers are suspicious, and photos without credentials are increasingly rare.”

That’s not a complete solution. It’s a meaningful improvement. And it’s closer than it looks.
