OpenAI's Sora Alarms Experts Over Impacts on Deepfakes, Political Propaganda

OpenAI's newest text-to-video AI, Sora, is alarming experts across the industry over its potential to fuel the growing wave of disinformation and digital propaganda campaigns.

Speaking to ABC News, several digital experts warned that the tool could make it "even easier for malicious actors to generate high-quality video deepfakes."

Oren Etzioni, founder of TruMedia.org, went as far as saying he was "terrified" of how the technology possibly "leads to an Achilles heel in our democracy" as one-third of the world enters an election period.

OpenAI Vows to Curb Misinformation from Sora

In response to the ABC report, OpenAI pointed to one of its blog posts outlining safety measures to combat misinformation, hateful content, and bias generated by its AI products.

Among the steps being taken are hiring domain experts to assess the model's risks and "building tools to help detect misleading content."

The company unveiled Sora last week as a tool capable of generating minute-long videos of "highly detailed scenes, complex camera motion, and multiple characters" from a brief text prompt.

OpenAI earlier vowed to prevent bad actors from using its products to spread political misinformation ahead of the 2024 US presidential election in November.

So far, the AI firm has taken down a GPT chatbot impersonating presidential candidate Dean Phillips.

AI-Powered Disinformation in the 2024 Elections

The rapid advancement of generative AI has become a top concern among digital experts and watchdogs ahead of the upcoming elections.

This is largely because online platforms remain ill-equipped to combat increasingly sophisticated disinformation being pumped out by thousands of accounts.

Just last October, a deepfake video of US President Joe Biden circulated online, falsely depicting him calling for a military draft to support Israeli forces in Gaza.

The White House has since called for ways to "cryptographically verify" videos of political figures to prevent further impersonation of top officials in political campaigns.
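
To make the idea concrete, here is a minimal sketch of what such verification could look like, assuming a hypothetical scheme in which an official publisher signs each clip with an Ed25519 key and viewers check it against a published public key; the function names and sample bytes are illustrative only, and real provenance proposals such as C2PA content credentials embed richer metadata than a bare signature.

```python
# Hypothetical sketch: a publisher signs the hash of a video with a private
# key; anyone with the published public key can confirm the clip was not
# altered after signing.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_clip(clip: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Publisher side: hash the video bytes and sign the digest."""
    return private_key.sign(hashlib.sha256(clip).digest())


def verify_clip(clip: bytes, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Viewer side: recompute the hash and check it against the signature."""
    try:
        public_key.verify(signature, hashlib.sha256(clip).digest())
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    clip = b"stand-in bytes for an official video statement"
    key = Ed25519PrivateKey.generate()  # stands in for an official signing key
    signature = sign_clip(clip, key)

    print(verify_clip(clip, signature, key.public_key()))                 # True
    print(verify_clip(clip + b" tampered", signature, key.public_key()))  # False
```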

This is in addition to proposals requiring AI companies to disclose information about their latest AI research to the government, as well as to report foreign entities using their products.
