OpenAI Prepares Safety Protocols Against Dangers of AI

OpenAI is rolling out new safety procedures to address the growing dangers of artificial intelligence, including the potential for bad actors to use the technology for criminal and terrorist acts.


MIT AI professor Aleksander Madry will head the "Preparedness" team, which is tasked with hiring computer scientists, national security experts, and policy professionals to monitor AI development.

The team will work with the "Safety Systems" and "Superalignment" teams to address safety and security concerns in the development of the company's most advanced AI models.

Safety Concerns Over OpenAI's Development

OpenAI has been at the center of debates over the tech firm's role in the future of AI development since ChatGPT went viral.

CEO Sam Altman maintains that the company's development of the technology poses no serious long-term risks.

However, recent leadership changes at the company have exposed issues that had gone unaddressed for some time, including questions about its actual goals in developing and distributing AI to businesses and the public.

The company has also faced several lawsuits after authors accused OpenAI of using their copyrighted works to train its language models.

Despite the recent leadership "turbulence," the safety team believes OpenAI's safety framework protects its products from being used for crime.

OpenAI is also open to having "qualified, independent third parties" test its technology.


US Regulations on AI Use and Development

As the development of AI models continues to surge across the tech industry, the US government has released a blueprint for an AI Bill of Rights to guide the application of the technology.

US President Joe Biden has also issued an executive order on Safe, Secure, and Trustworthy Artificial Intelligence to mitigate the dangers posed by the new advancements.

The executive order specifically sets standards for AI safety and security and aims to protect Americans' privacy from exploitation and abuse.

Other countries, especially in Europe, have launched their own regulatory bodies to address the growing concerns surrounding AI.

