AI and the end of proof


Garbage bags are being thrown out a White House window, or so a popular video seems to show.

“Probably AI generated,” said President Trump in a Tuesday press conference.

Earlier, a White House official suggested to TIME magazine that the video was real and showed a contractor doing “regular maintenance.”

And here we are, as with so many other events captured (or not captured) on camera. Are the bags real or AI-generated? If they’re real, was this an intriguing mystery or just boring home maintenance? We can’t know.

In entertainment news, actor and rapper Will Smith’s official channels posted a video promoting his Based on a True Story tour, showing crying fans and hand-written signs. Critics slammed the footage as AI-generated or AI-enhanced to depict a bigger, more emotional audience than may have existed. (Expert analysis suggested the video probably mixed real and AI-made content.)

Smith responded to the accusation by posting a new Instagram video showing that his audience was made of AI-generated cats.

Of course, opposing claims about whether a picture or video is AI-generated are arising over more serious issues than falling garbage bags or cat crowds.

He said, she said

The US military reportedly killed 11 people on Tuesday in a strike on a Venezuelan speedboat said to be carrying illegal drugs in the southern Caribbean. The government posted a video of the strike. But Venezuela’s Communications Minister Freddy Ñáñez said the video appears to be AI-generated.

Taiwanese politicians and the country’s military have been repeated targets of AI-generated fake images in disinformation campaigns. And when caught in compromising situations on video or audio, some public figures have claimed the recordings were “deepfakes” to create doubt, even when fact-checkers and technical experts found no evidence of manipulation.

In India, Mexico, Nigeria, and Brazil, accused politicians routinely claim that the evidence against them is AI-generated. They’re not outliers. They’re just ahead of the political rhetoric curve.

Rise of the ‘liar’s dividend’

In 2019, as deepfake audio and video were becoming a serious problem, legal scholars Bobby Chesney and Danielle Citron coined the term “liar’s dividend” to describe the advantage a dishonest public figure gains by calling real evidence “fake” in an era when AI-generated content makes people question everything they see and hear.

False claims of deepfakes can be just as harmful as real deepfakes during elections. Arguments about what is real, unreliable detection tools, and general mistrust allow dishonest politicians to cry wolf about “fake news” or deepfakes to avoid blame.

An American Political Science Review article by Kaylyn Jackson Schiff, Daniel S. Schiff, and Natália S. Bueno draws on five experiments with more than 15,000 US adults, conducted from 2020 to 2022. Across scandals involving politicians from both major parties, the authors found that false claims of misinformation increased support more than staying quiet or apologizing.

Why Nano Banana has appeal

Google’s Gemini 2.5 Flash Image, also called by its internal code name, “Nano Banana,” is a new image generation and editing model that can create photorealistic images.

It can modify existing photos using simple prompts. For example, you can upload a photo of a person petting a puppy, and with a simple natural-language sentence, transform it into a believable picture of a person slashing a car tire.

Character consistency — keeping the same face, clothes, or object details stable across scenes — makes fake shots look like a real photo series. The model leverages Gemini’s broader “world knowledge” to follow complex instructions and make edits that match real‑world contexts, like believable lighting or object placement.

Nano Banana is available to developers in preview through the Gemini API, Google AI Studio, and Vertex AI, priced at $30 per 1 million output tokens, which works out to about $0.039 per image at 1,290 output tokens. Because Nano Banana ships through the Gemini API, other companies are likely to integrate it. Confirmed integrations include OpenRouter, fal.ai, Adobe Firefly and Adobe Express, Poe (Quora), Freepik, Figma, Pollo AI, WPP Open, and Leonardo.ai.
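For a sense of how little friction is involved, here is a minimal sketch of editing a photo through the Gemini API with the google-genai Python SDK. The model identifier is the current preview name and may change, and the file names are placeholders.

```python
# A minimal sketch of calling Gemini 2.5 Flash Image ("Nano Banana") through
# the Gemini API using the google-genai Python SDK. Assumes GEMINI_API_KEY is
# set in the environment; the preview model name may change over time.
import os

from google import genai
from PIL import Image

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# Edit an existing photo with a plain-language instruction.
source = Image.open("photo.jpg")  # hypothetical input file
response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[source, "Replace the background with a rainy city street at night."],
)

# The response interleaves text and image parts; save any returned image bytes.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("edited.png", "wb") as f:
            f.write(part.inline_data.data)
```

That single natural-language instruction is the entire editing workflow; no image-editing expertise is required.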

The ability to make fakes will be everywhere, along with the growing awareness that visual information can be easily and convincingly faked. That awareness makes false claims that something is AI-made more believable.

The good news is that Gemini 2.5 Flash Image stamps every image it creates or edits with an invisible SynthID watermark that identifies the content as AI-generated and survives common changes like resizing, rotation, compression, and screenshots. Google says this ID system covers all outputs and ships with the new model across the Gemini API, Google AI Studio, and Vertex AI.

SynthID for images alters pixels imperceptibly, using one neural network to embed the pattern and a paired detector network to recognize it later.

The detector reports levels like “present,” “suspected,” or “not detected,” which is more helpful than a fragile yes/no that fails after small changes.
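Google hasn’t published SynthID’s internals or a public detection API for arbitrary images, so the following is a purely illustrative Python sketch of the idea: mapping a hypothetical detector’s confidence score onto a graded verdict, with made-up threshold values.

```python
# Purely illustrative: how a three-band verdict like SynthID's might be derived
# from a watermark detector's raw confidence score. The detector and thresholds
# are hypothetical; Google has not published SynthID's internals.

def watermark_verdict(score: float,
                      present_threshold: float = 0.9,
                      suspected_threshold: float = 0.5) -> str:
    """Map a detector confidence in [0, 1] to a graded verdict.

    A graded answer degrades gracefully: heavy compression or cropping may
    lower the score into the 'suspected' band instead of flipping a brittle
    yes/no answer straight to 'no'.
    """
    if score >= present_threshold:
        return "watermark present"
    if score >= suspected_threshold:
        return "watermark suspected"
    return "watermark not detected"

print(watermark_verdict(0.97))  # watermark present
print(watermark_verdict(0.62))  # watermark suspected
print(watermark_verdict(0.10))  # watermark not detected
```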

OpenAI takes a different approach for DALL-E 3 and ChatGPT image creation by attaching C2PA “Content Credentials” metadata that records the tool used and a cryptographically signed manifest, verifiable with the Content Credentials Verify site. OpenAI began adding these credentials in February 2024 and admits they can be removed by social platforms or screenshots, so missing metadata doesn’t prove an image is made by a person.
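To see what Content Credentials look like in practice, here is a minimal sketch of inspecting an image’s C2PA manifest, assuming the open-source c2pa-python bindings; the exact Reader API varies by version, and the file name is a placeholder.

```python
# A minimal sketch of checking an image for C2PA Content Credentials, assuming
# the c2pa-python bindings (pip install c2pa-python). Note that missing
# credentials prove nothing: platforms and screenshots routinely strip them.
import json

from c2pa import Reader

try:
    reader = Reader.from_file("image.jpg")  # hypothetical input file
    manifest_store = json.loads(reader.json())
    active = manifest_store["manifests"][manifest_store["active_manifest"]]
    print("Claim generator:", active.get("claim_generator"))
except Exception as err:
    print("No verifiable Content Credentials found:", err)
```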

Microsoft’s Azure OpenAI Service adds C2PA Content Credentials, signed to trace back to Azure OpenAI, with fields like “AI Generated Image,” the software agent, and the timestamp. These credentials remain on downloads but can be removed.

Meta labels realistic images made with its tools using IPTC metadata and invisible watermarks, and its researchers have published “Stable Signature,” a model-integrated watermark for open-source generators.

Adobe’s Content Authenticity Initiative and the C2PA standard want to make verified “Content Credentials” work across different apps and websites, so people can see where photos and videos come from and how they were edited. TikTok has started adding content credentials and can automatically label AI media from partners that already ship C2PA metadata, with verification through the standard’s public tools.

SynthID makes the most sense to me. But all of these verification methods can be defeated by anyone determined to pass off fake pictures or videos as real, and the absence of a watermark or credentials proves nothing about authenticity. That means when someone claims image-based evidence is fake, nobody can prove them wrong.

Photography was first used as courtroom evidence in 1859, began to influence public opinion in 1862 with Civil War photos, and became a trusted source of proof in newspapers in 1880 when halftone printing allowed publishers to print real photos on newspaper presses.

That means camera-made visual content served as reliable and convincing proof for 166 years.

Farewell, reliable photographic and video evidence that we could all agree on. We hardly knew ye.