In an era where artificial intelligence (AI) can mimic the President's voice with unsettling accuracy, the White House is taking a stand. Ben Buchanan, Biden's AI advisor, has confirmed that a system to "cryptographically verify" videos and statements from the White House is "in the works." This initiative comes in response to the alarming rise of AI deepfakes, including an incident this year in which President Joe Biden was the subject of a deepfake aimed at misinforming voters.
The urgency of the situation is clear. Generative AI has seen a meteoric rise in use, especially since the advent of OpenAI's ChatGPT. Major tech companies and startups alike are racing to develop user-friendly AI tools, which has led to a proliferation of deepfakes. These AI-generated fabrications are not limited to videos; they extend to robocalls and other forms of communication, posing a significant threat to the integrity of information and the democratic process.
The Federal Communications Commission (FCC) has stepped in, declaring AI-generated robocalls illegal. However, the battle is far from over. The sophistication of generative AI tools continues to grow, making it increasingly easy to create convincing fakes. The White House’s proactive approach is a testament to the government’s commitment to being a trusted source of information.
The cryptographic verification process is expected to be a robust solution. It would rely on a public–private key pairing: a unique hash of each official release would be signed with the White House's private key, and the corresponding public key, distributed widely, would let anyone verify that signature and confirm the content's authenticity. Any tampering by third parties would change the hash and break the verification, exposing the content as altered or fake.
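To make that flow concrete, here is a minimal sketch of the sign-and-verify pattern described above, written in Python with the `cryptography` library and Ed25519 keys. The key choice, file contents, and function names are illustrative assumptions, not details of the White House's actual system.

```python
# Illustrative sketch only: the real system's key types and distribution
# mechanism have not been disclosed.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair: the private key stays secret,
# the public key is made widely available.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the bytes of an official release (a statement, video file, etc.).
official_release = b"Official statement: ..."
signature = private_key.sign(official_release)

def is_authentic(content: bytes, sig: bytes, pub: Ed25519PublicKey) -> bool:
    """Return True if the signature matches the content, False otherwise."""
    try:
        pub.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_authentic(official_release, signature, public_key))     # True
print(is_authentic(b"tampered content", signature, public_key))  # False
```

Because only the holder of the private key can produce a valid signature, any third-party edit to the content makes verification fail, which is the property the White House is counting on.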
While this technology promises to bolster the authenticity of official communications, it is not without potential pitfalls. Critics argue that it could be misused to establish a monopoly on “the truth,” allowing the White House to disavow any unverified content, even if it is genuine. The implications for political discourse and the public’s trust in government are profound.
Despite these concerns, the White House is moving forward. The cryptographic verification system is part of a broader executive order on AI, which aims to ensure safe, secure, and trustworthy AI development. This includes new standards for AI safety and security, protecting Americans’ privacy, and advancing innovation and competition.
The White House's effort to cryptographically verify its releases is a significant step in the fight against AI-generated disinformation. As we await further details on the implementation of this system, the message from the White House is clear: it recognizes the potential for harm and is determined to get ahead of it.