Elon Musk to Create ‘TruthGPT,’ AI That Won’t Exterminate the Human Race

Elon Musk has always been vocal about the ideas swirling in his mind. One of those ideas may soon come to life: what he describes as a "truth-seeking AI" that cares about and understands the universe, called "TruthGPT."

Elon Musk (Photo: Marlena Sloss/Bloomberg via Getty Images)

Musk Working on TruthGPT

The tech billionaire says he is working on an alternative that will operate as a "maximum truth-seeking AI" and rival ChatGPT from OpenAI, a company Musk co-founded back in 2015 but left over fears of the dangers of AI.

Now, he aims not to avoid AI but to create one that respects and understands "the nature of the universe." Musk argued that an AI that cares about understanding the universe will not destroy humankind, because we are an "interesting part of the universe."

As reported by The Verge, Musk likened an AI sparing humans to humans protecting chimpanzees: while we are capable of driving the chimpanzee species extinct, we don't, because we're glad they exist and aspire to protect their habitats.

Based on how the Tesla CEO operates, it's entirely possible that he will go through with creating TruthGPT, although whether his AI succeeds is a different conversation. Musk already created a company this March called X.AI, which could mainly serve as the vehicle for TruthGPT.

For now, there are no reports confirming that TruthGPT is actually in development, but some would argue that Musk is creating it to rival AI companies like OpenAI, which he believes are building language models that can be dangerous.

In fact, the tech billionaire, along with other AI researchers, signed an open letter addressed to big AI companies urging them to halt experiments with AI they cannot understand, predict, or reliably control, warning that such work could lead to destructive results.

Read Also: Elon Musk's 'Terminator' AI Warning Resurfaces; New Doomsday Tweet Goes Viral!

Is AI Really Dangerous?

We have an abundance of cautionary tales in games, movies, and shows about the dangers of advanced AI. While some, like Skynet from the "Terminator" franchise, may be far-fetched, others might actually be onto something.

Since AI has the ability to learn, it's not impossible for it to reach beyond the knowledge acquired by humans. In a world surrounded by all sorts of networks and devices, an uncontrollable AI could be disastrous.

However, OpenAI CEO Sam Altman has addressed the concerns around large language models, particularly the open letter urging companies to slow down AI advancements. Altman responded at an MIT event.

He noted that the letter "lacked technical nuance" about which aspects of AI development needed to pause. The OpenAI CEO clarified that the company studied GPT-4 for over six months before it was even released, as mentioned in Business Today.

Since the letter urged a halt to the development of AI more advanced than GPT-4, Altman took the time to mention that OpenAI is not currently working on GPT-5 and that the safety bar should rise as AI becomes more advanced.

Related: Elon Musk Has Built His Own AI Company and Potential OpenAI Rival, X.AI Corp
