Tech Industry Leaders Say Mitigating AI Risks Should Be a Global Priority

If you've seen a movie or two in which an AI aims to destroy the human race, you might be among the people with growing concerns about the technology. Reality may not be as dramatic as the movies, but tech leaders already believe that AI carries potential threats and risks.

Sam Altman (Photo: Win McNamee/Getty Images)

Tech Leaders Wary of AI Progress

Big names in the tech industry, even those whose companies are in the business of developing AI, have expressed concerns about the possible risks that the fast-paced progress of AI could unleash. To convey the severity, they likened the risk to that of pandemics and nuclear war.

The nonprofit organization Center for AI Safety released a statement that resonated with several big tech leaders, declaring that "mitigating the risk of extinction from AI should be a global priority." Over 350 executives, researchers, and engineers signed the open letter.

Some of the signatures belong to top executives you might recognize, such as OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei, as listed by The New York Times.

One of the main concerns among tech experts is how quickly large language models are being developed. The technology could advance to the point where it surpasses human performance at many tasks, which could result in the loss of countless jobs.

According to Dan Hendrycks, executive director of the Center for AI Safety, the open letter gave tech leaders a way to collectively express their concerns about the potential risks of the technology they themselves were creating.

Hendrycks said it was a misconception, even within the AI community, that there are only a "handful of doomers," when in reality many tech executives "privately would express concerns about these things."

Prior to the open letter, Altman, Hassabis, and Amodei had already met with US President Joe Biden and Vice President Kamala Harris to discuss AI regulation. The OpenAI CEO believes the risks are serious enough to warrant government involvement.

Read Also: The Rise of Deepfakes: 5 Negative Things Deepfakes Can Do And How To Prevent It

What Risks Are We Facing?

As AI develops, more and more capabilities are emerging, which means there are more ways it can be exploited or mishandled. For one, AI has already proven dangerous in the wrong hands, as it has been used to spread misinformation.

Through AI, any sufficiently tech-savvy user could generate false content and disseminate it as authentic information. There have also been reports of misuse stemming from a lack of proper knowledge about the technology.

One example is an incident involving a New York lawyer. The law practitioner used ChatGPT to draft a legal brief, not knowing that the AI chatbot had fabricated the cases it cited. As a result, the lawyer faced sanctions for the false cases presented in court.

AI-generated images have also been convincing enough for people to believe they are the real thing. To a professional eye, the signs that a photo was generated by AI may be obvious, but the vast majority of people may not notice them, as with the viral image known as the "Balenciaga Pope."

Related: New York Lawyer Faces Consequences After Using ChatGPT for Legal Brief

© 2024 iTech Post All rights reserved. Do not reproduce without permission.

