OpenAI's Near-Realistic AI Videos: The Dangers to Digital Information Landscape

OpenAI has just unveiled its latest text-to-media product, Sora, an AI capable of generating near-realistic video clips from prompts.

While many fans of the technology are eager to try the new AI tool, others see it as a threat to the digital information landscape, especially with the 2024 elections drawing closer by the day.



What is Sora AI?

Launched on Feb. 15, Sora AI is a "text-to-video model" that can generate 60 seconds of "highly detailed scenes, complex camera motion, and multiple characters" with just a few clicks.

It is a step up from previous text-to-video models in that it allows for more dynamic movements and camera angles, compared to the static positions and transitions of earlier AI models built for the same purpose.

What's notable, however, is its ability to generate near-realistic details that can easily be seen as authentic.

The Rise of More Realistic Deepfakes

That said, the technology has the power to influence many more people thanks to its believable deepfakes and AI-edited clips.

While OpenAI has added safety measures to prevent people from impersonating personalities through its technology, Sora still poses great risks to information online.

Just this month, images of the Eiffel Tower supposedly burning went viral on TikTok and X (formerly Twitter).

The images are easy to identify as AI-generated for users already familiar with such content. Casual users encountering them, however, are more apt to believe them at first glance, as evidenced by the comments on those videos.

Imagine the amount of misinformation and disinformation that could arise with the arrival of Sora, making it even more difficult to distinguish the fake from the real.

It is not hard to see how a technology like this can be used to spread political disinformation, either to boost a politician or to attack opposition.

The Washington Post earlier reported that more bad actors are using AI tools to optimize the dissemination of "fake news" online, a problem that will only worsen with technology capable of duping even professionals.

What's worrying is that AI technology is only set to improve from here.


© 2024 iTech Post All rights reserved. Do not reproduce without permission.
