Elon Musk, FLI-Backed Professor Says AI Cannot Be Controlled Safely

Artificial intelligence is one of the most popular topics in tech right now, partly because people are impressed with what it's capable of and partly because tech companies are rushing out AI products to get ahead in the AI race. That rush, however, is only fueling fears in others that AI might slip out of our control.


AI Expert Warns About 'Existential Catastrophe'

Companies developing AI are also working on safety measures to make sure the technology causes no harm to human beings. Given the many cases where AI has been biased or misguided, it's understandable why some remain doubtful.

Dr. Roman V. Yampolskiy, a tenured professor at the University of Louisville, took a more academic approach to studying the trajectory of AI development and concluded that controlling AI may not be within our ability, despite all the proposed methods of making it safe.

In his research, collected in his book "AI: Unexplainable, Unpredictable, Uncontrollable," the AI expert argues that, because of their unpredictability and autonomy, AI systems will always pose risks despite efforts to prevent them, as per Interesting Engineering.

"Why do so many researchers assume that AI control problem is solvable? To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable," says the professor.

In his findings, Dr. Yampolskiy warns that humanity is facing an almost guaranteed event that would cause an "existential catastrophe." Many share his concerns about AI, including some who are funding his work.

His research is partially backed by tech billionaire Elon Musk, who is known for his own skepticism about AI development, as well as by the Future of Life Institute (FLI), which has been an active advocate for AI safety.

FLI hasn't been quiet about the development of AI models it considers too advanced for the time being. The institute even published an open letter calling for a pause on the development of AI systems more powerful than OpenAI's GPT-4.

Many well-known figures in the tech world signed the open letter, including Elon Musk, Apple co-founder Steve Wozniak, computer scientist Yoshua Bengio, and 33,000 others who believe that AI poses a great risk.

Read Also: AI Revolution to Worsen Job Inequality, IMF Says

Does AI Really Pose Risks?

The AI technology we have today is not smart enough to plot world domination just yet. In fact, some would argue that it's still pretty dumb. Current AI models are limited to the knowledge their developers provide, namely their training data.

Artificial General Intelligence (AGI), however, might be a different story. AGI is a form of AI that can, in effect, think for itself: it can find solutions when faced with new tasks it has not been taught to handle, as per Tech Target.

Such technology would call for a different, more complex set of safety measures than our current AI systems require. Not to worry, though: most experts say we are still far from creating a functioning AGI.

Related: Doomsday Clock Signals Impending Doom Over Nuclear War, AI

