OpenAI: GPT-4 Only Poses a Slight Risk of Helping Create Bioweapons

OpenAI is now claiming that its GPT-4 model is unlikely to help in creating "biological threats," including biological weapons.

In a blog post this Wednesday, the GPT-4 developer said its technology poses "at most" a slight risk of giving people a head start on biological warfare.


The company based its risk assessment on five metrics: accuracy, completeness, innovation, time taken, and self-rated difficulty.

According to OpenAI, GPT-4 showed mild uplifts in accuracy and completeness, but these were "not large enough to be statistically significant" to be considered a threat.

The study was part of OpenAI's earlier promise to create a safety team to assess the dangers AI poses to humanity.


US Gov't Concerns Over AI-Powered Weapons

Concerns that AI could help create "catastrophic" weapons of mass destruction have been mounting for quite some time as the technology continues to advance.

Just last October, President Joe Biden signed an executive order addressing the threats AI poses to the nation's safety and security.

Provisions of that order now compel AI firms to disclose to the government "vital information" on future AI projects and any instances of foreign entities using their technology.

OpenAI, along with its investor Microsoft, is among the primary targets of the executive order amid scrutiny over the company's secretive development and testing of its AI technology.

Most of OpenAI's safety studies were conducted by the company itself and rarely involved state-owned institutions.

What Are OpenAI's Safety Protocols Against AI Threats?

To satisfy lawmakers and regulators, OpenAI has started regularly releasing threat assessments of its technology, among them this study on biological weapons.

Aleksander Madry, the leader of OpenAI's "preparedness" team, told Bloomberg that the company is also exploring how its technology could be abused by malicious actors.

Another team, "Superalignment," is tasked with preventing AI from "going rogue." This includes studying vulnerabilities in the AI's technical and automation capabilities.

