
Cover photo by 侑奈
As AI technology advances, deepfakes have become more realistic than ever, undermining the reliability of photos and videos. In fields such as politics, news, and entertainment, misinformation spreads quickly, increasing the risk that people will believe false information.
This article explores how various companies and organizations are responding to this issue and the measures needed in the future.
What are Content Credentials?
Content Credentials is a technology that embeds a digital signature and an edit history directly into photos and videos. This makes it possible to verify where an image was taken and what edits were made, and to detect any tampering after the fact.
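To make the idea concrete, here is a minimal Python sketch of the underlying principle: an image hash and its edit history are bound together by a cryptographic signature, so any later change to the pixels or the history breaks verification. The function names and manifest fields below are illustrative assumptions, not the actual C2PA/Content Credentials format.

```python
# Conceptual sketch only: real Content Credentials follow the C2PA
# specification; this simply illustrates the signing/verification idea.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def make_credential(image_bytes: bytes, edit_history: list[str],
                    private_key: ed25519.Ed25519PrivateKey) -> dict:
    """Bind an image hash and its edit history together with a signature."""
    manifest = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "edit_history": edit_history,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": private_key.sign(payload).hex()}


def verify_credential(image_bytes: bytes, credential: dict,
                      public_key: ed25519.Ed25519PublicKey) -> bool:
    """Return True only if neither the image nor its history was altered."""
    manifest = credential["manifest"]
    if manifest["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # image pixels changed after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # manifest was tampered with or signed by another key


# Example: sign at "capture" time, then detect tampering later.
key = ed25519.Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
cred = make_credential(photo, ["captured 2024-05-01", "cropped"], key)
print(verify_credential(photo, cred, key.public_key()))            # True
print(verify_credential(b"edited bytes", cred, key.public_key()))  # False
```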
Adobe and major camera manufacturers such as Leica are leading the adoption of this technology, significantly enhancing the reliability of photos.
Corporate Responses to the Adoption of Content Credentials
As a countermeasure against deepfakes, major tech companies such as Adobe and Microsoft are developing Content Credentials technology and building systems to prove the authenticity of digital content.

Photo by 朽蓮 kyu-ren
For example, Leica's latest cameras record authenticity data at the moment of capture, making it possible to trace the edit history afterward. Social media companies such as TikTok and Meta are also working on labeling AI-generated content so that users can more easily judge the source of information.
The Awareness and Actions Required of Us
In the future, AI technology will continue to advance, and deepfakes will become more sophisticated. However, as technology progresses, systems to maintain reliability are also evolving.
Each of us developing the ability to discern accurate information is the first step toward preserving trust in a digital society.