Sora Deepfake Detectors: 5 AI Tools That Can Identify Realistic AI Videos

OpenAI's Sora, a text-to-video AI generator, has swept across the internet over the past few days, stoking concerns that nearly indistinguishable AI videos have now arrived.

Many people have raised concerns that Sora could be used to spread damaging deepfakes at a rapid rate.

To help curb potential misinformation stemming from OpenAI's newest product, here are some AI tools that can detect videos generated by Sora.



Intel's FakeCatcher

As a leading developer of computing hardware and systems, Intel has invested in AI tools to detect and analyze AI-generated videos.

One such tool is its FakeCatcher program, which identifies subtle signs of blood flow and eye movement in videos to determine whether the subject was recorded on camera or artificially generated.

According to Intel, FakeCatcher can analyze hundreds, if not thousands, of videos in real time with a 96% accuracy rate.

With AI videos spreading rapidly, Intel's product is well positioned to detect and flag these deepfakes in record time.

Sensity AI

Another notable detection tool is Sensity, which its maker says can identify even highly realistic images produced by DALL-E, Stable Diffusion, FaceSwap, and Midjourney.

Sensity does this through detection models, offered via an API, that spot the artifacts generative adversarial networks (GANs) leave behind in multimedia. The company currently claims 95% accuracy.

Since Sensity is already familiar with OpenAI's image-generation systems, it may well extend its detection capabilities to videos generated by Sora.

Microsoft Video AI Authenticator

Microsoft, OpenAI's biggest investor, is also taking steps to ensure that AI cannot easily be turned against democratic processes.

Its Video AI Authenticator is the first of the company's many steps toward addressing this problem.

The tool works by detecting subtle grayscale changes in videos, distinctions invisible to the human eye, and provides a real-time confidence score indicating whether the footage is artificially manipulated.

Microsoft expects the tool to help combat misinformation ahead of the 2024 elections, when disinformation and deepfakes are expected to ramp up.

Hugging Face

Hugging Face, alongside Anthropic, is one of OpenAI's top competitors, so it comes as no surprise that it is also stepping up measures to protect against the misuse and abuse of AI.

Among these is a planned effort to help people fight AI deepfakes through what it calls "Provenance, Watermarking and Deepfake Detection."

The approach works by embedding watermarks in multimedia so that other platforms can detect them and notify users that the content is indeed AI-generated.
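To illustrate the idea, here is a minimal, purely hypothetical sketch of invisible watermarking: a short provenance tag is hidden in the least-significant bits of pixel values, then read back out for verification. Production systems use far more robust, tamper-resistant schemes; the function names and the toy frame below are assumptions for illustration, not anyone's actual implementation.

```python
def embed_tag(pixels, tag):
    """Hide the bits of `tag` in the least-significant bit of each pixel value."""
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the LSB with one tag bit
    return out

def extract_tag(pixels, length):
    """Recover `length` bytes of the hidden tag from the pixel LSBs."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return data.decode()

frame = [128] * 256                    # toy "frame" of grayscale pixel values
marked = embed_tag(frame, "AI-GEN")    # embed a 6-byte provenance tag
recovered = extract_tag(marked, 6)     # a platform could check for this tag
```

Because only the least-significant bit of each pixel changes, the watermark is visually imperceptible, which is the property that lets platforms detect AI provenance without altering how the media looks to viewers.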

This complements social media platforms, which are only beginning to build systems to combat the rise of AI-powered disinformation online.

OpenAI

What better way to tell whether a video is AI-generated than to ask its very own creator?

This is how OpenAI plans to address issues that may arise from Sora: by creating accessible tools that determine whether a video was generated by its model.

According to the AI firm, these methods include attaching digital watermarks to Sora-generated videos, as well as other indicators showing that the footage was not captured in real life.

This is in addition to OpenAI's promise to take steps to prevent its technology from being used to spread political disinformation.


© 2024 iTech Post All rights reserved. Do not reproduce without permission.
