A forensic expert on Kate’s photo editing – and how credentialing technology can help build trust in a world of increasing uncertainty

British newspapers report on a digitally altered photo of the royal family, in London, on March 11, 2024. Credit – Rasid Necati Aslim – Anadolu/Getty Images

As an academic who has spent the past 25 years developing techniques to detect photo tampering, I’m used to getting panicked calls from reporters trying to authenticate a breaking story.

This time the calls, emails and texts started pouring in on Sunday evening. Catherine, Princess of Wales, had not been seen in public since Christmas Day, and the abdominal surgery she underwent in January led to widespread speculation about her whereabouts and well-being.

Sunday was Mother’s Day in Britain and Kensington Palace had released an official photo of her and her three children. The image was distributed by Associated Press, Reuters, AFP and other media. The photo then quickly went viral on social media platforms with tens of millions of views, shares and comments.

But just hours later, the AP issued a rare “photo kill,” asking its clients to delete the image from their systems and archives because “closer inspection reveals that the source manipulated the image.”

The main concern seemed to be about Princess Charlotte’s left sleeve, which showed clear signs of digital manipulation. What was unclear at the time was whether this obvious artifact was a sign of more extensive manipulation or an isolated instance of minor editing.

To try to find out, I started analyzing the image with forensic software designed to distinguish photographic images from fully AI-generated images. This analysis confidently classified the image as not AI-generated.

I then performed a few more traditional forensic tests, including analyzing the lighting and sensor noise pattern. None of these tests revealed evidence of more significant manipulation.
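
To give a sense of what a sensor-noise check can involve, here is a minimal sketch of the general idea. It is not the forensic software I used, and the filename is hypothetical: it simply isolates a noise residual with a crude high-pass filter and flags image blocks whose noise statistics deviate sharply from the rest of the frame, which can hint at local editing.

```python
# Minimal sketch of a block-wise noise-consistency check (illustrative only).
# Real forensic tools are far more sophisticated, e.g. correlating the residual
# against a known camera fingerprint (PRNU).
import numpy as np
from PIL import Image

def noise_variance_map(path, block=64):
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)

    # Crude high-pass filter: subtract a 3x3 local mean to keep mostly noise.
    padded = np.pad(gray, 1, mode="edge")
    blur = sum(padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    residual = gray - blur

    # Variance of the noise residual in each block of the image.
    rows, cols = residual.shape[0] // block, residual.shape[1] // block
    var_map = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            tile = residual[r * block:(r + 1) * block, c * block:(c + 1) * block]
            var_map[r, c] = tile.var()
    return var_map

if __name__ == "__main__":
    variances = noise_variance_map("family_photo.jpg")  # hypothetical filename
    z = (variances - variances.mean()) / variances.std()
    print("Blocks with unusual noise statistics:", np.argwhere(np.abs(z) > 3))
```

A block that stands out in such a map is not proof of tampering on its own; it is only a cue to look more closely at that region with other tests.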

After all this, I came to the conclusion that the photo was most likely edited with Photoshop or a camera’s built-in editing tools. While I can’t be 100% sure, this explanation is consistent with Princess Kate’s subsequent apology, issued on Monday, in which she said: “Like many amateur photographers, I occasionally experiment with editing. I wanted to apologize for any confusion the family photo we shared yesterday caused.”


In a rational world this would be the end of the story. But the world – and social media in particular – is nothing if not irrational. I’m already receiving dozens of emails with “evidence” of more nefarious photo manipulation and AI generation, which is then used to speculate wildly about Princess Kate’s health. And while the kind of post-hoc forensic analyses I perform can help photo editors and journalists get to the bottom of these types of stories, they can’t necessarily help combat rumors and conspiracies that spread quickly online.

Manipulated images are nothing new, even from official sources. In 2008, for example, the Associated Press temporarily suspended its distribution of official Pentagon imagery after the Army released a digitally altered photo of the U.S. military’s first female four-star general, Gen. Ann E. Dunwoody. It was the second Army-provided photo the AP had flagged in the previous two months. The AP eventually resumed use of these official photos after the Pentagon gave assurances that the military branches would be reminded of a Defense Department directive prohibiting alterations that misrepresent the facts or circumstances of an event.

The problem, of course, is that modern technologies make altering images and video easy. And while changes are often made for creative purposes, changes can be problematic when they involve footage of real events, undermining trust in journalism.

Detection software can be useful on an ad hoc basis, highlighting problematic parts of an image or indicating whether an image may have been generated by AI. But it has its limitations: it is neither scalable nor consistently accurate – and malicious actors will always be one step ahead of the latest detection software.


So what to do?

The answer likely lies in digital provenance: understanding the origins of digital files, whether images, video, audio or whatever. Provenance includes not only how the files were created, but also whether and how they were manipulated during the journey from creation to publication.

The Content Authenticity Initiative (CAI) was founded in late 2019 by Adobe, which makes Photoshop and other powerful editing software. It is now a community of more than 2,500 leading media and technology companies, working to implement an open technical standard around provenance.

That open standard was developed by the Coalition for Content Provenance and Authenticity (C2PA), an organization formed by Adobe, Microsoft, the BBC and others within the Linux Foundation, which aims to build ecosystems that accelerate open technology development and commercial adoption. The C2PA standard quickly emerged as best in class in the field of digital provenance.

C2PA has developed Content Credentials, the equivalent of a “nutrition label” for digital creations. By clicking on the distinctive “cr” logo that appears on or next to an image, a viewer can see where the image (or other file) came from.

Screenshot of an example of a Content Credentials label from contentcredentials.org. Copyright © 2023 C2PA

The credentialing protocols are being integrated directly into hardware devices (cameras and smartphones), so that a future viewer can accurately determine the date, time and location of a photo at the moment it is captured. The same technology is already part of Photoshop and other editing programs, allowing edits to be recorded and later inspected in a file.

All that information appears when the viewer clicks on the “cr” icon, in the same clear format and simple language as a nutrition label on the side of a cereal box.
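
For the technically curious, the same information can also be read programmatically. Here is a rough sketch of how a newsroom script might surface a file’s Content Credentials before publication; it assumes the open-source c2patool command-line tool from the Content Authenticity Initiative is installed and that invoking it with an image path prints the C2PA manifest as JSON, and the filename is hypothetical.

```python
# Sketch: read a file's Content Credentials by shelling out to c2patool
# (assumes c2patool is installed and prints the manifest store as JSON).
import json
import subprocess

def read_content_credentials(image_path: str):
    result = subprocess.run(
        ["c2patool", image_path],  # basic invocation: report the manifest store
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # No credentials found, or the file could not be read.
        print("No Content Credentials:", result.stderr.strip())
        return None
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest_store = read_content_credentials("royal_family_photo.jpg")  # hypothetical file
    if manifest_store:
        # An editor would look here for the signer, the capture device,
        # and any recorded edit actions before deciding to publish.
        print(json.dumps(manifest_store, indent=2))
```

The point is not the particular tool but the workflow: provenance becomes something an editor can check in seconds rather than something a forensic analyst reconstructs after the fact.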

If this technology were in full use today, news photo editors could have reviewed the Content Credentials of the royal family photo before publication and avoided the scramble to retract it.

That’s why the Content Authenticity Initiative is working towards the global adoption of Content Credentials, and why media companies like the BBC are already gradually introducing these labels. Others, like the AP and AFP, are working to do so later this year.

Universal adoption of this standard means that over time, every piece of digital content can eventually carry Content Credentials, creating a shared understanding of what can be trusted and why. Proving what is real – as opposed to discovering what is false – replaces doubt with certainty.

