AI Hallucinations Are Less Likely in New Chatbot Models, Anthropic Says

AI startup Anthropic claims its chatbot Claude is less prone to "AI hallucinations" thanks to its three new language models.

The San Francisco-based tech firm announced Monday that it is launching Claude 3, a family of three new models -- Opus, Sonnet and Haiku.

While it is "very, very hard to get to zero percent hallucination rate," Anthropic President Daniela Amodei said the company's latest models are twice as likely to provide correct results.

Amodei said the company used essays from Y Combinator co-founder Paul Graham to train its new language models, a practice common among AI firms, Bloomberg reported.

To further reduce the risk of generating false information, Anthropic clarified that the new models will not include image generation features, though they do have image analysis capabilities.

The statement comes after Google's and Meta's AI chatbots were reported to have generated "inaccurate" depictions of historical figures and certain racial groups.

Also Read: AI Chatbot 'Hallucinations' Could Affect Votes in 2024 Elections

Claude 3's New AI Models: What to Expect

Named after forms of writing, the new AI models each have specialties that Anthropic touts as setting "new industry benchmarks across a wide range of cognitive tasks."

Of the three, Haiku is the "fastest and most cost-effective" model, able to read a data-dense research paper, including charts and graphs, in less than three seconds.

Meanwhile, Sonnet is twice as fast as Claude 2 and Claude 2.1 on tasks that demand rapid responses, such as data retrieval and sales automation. Anthropic recommends it for large-scale AI deployments because of its "high endurance."

Opus delivers speeds similar to Claude 2 and Claude 2.1 but "with much higher levels of intelligence."

Both Opus and Sonnet are already available to developers, while Haiku will roll out in the coming weeks.
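
For developers, access runs through Anthropic's API. As a rough, hypothetical sketch -- not part of Anthropic's announcement -- a request to the Opus model via the company's Python SDK could look like the following; the model identifier shown is the launch-time Opus ID and the prompt is illustrative only.

import anthropic

# Minimal sketch of a Claude 3 request via Anthropic's Python SDK
# (pip install anthropic). Assumes the ANTHROPIC_API_KEY environment
# variable is set; the model ID below is the Opus launch identifier
# and may differ in later releases.
client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=256,
    messages=[{"role": "user", "content": "Explain what an AI hallucination is."}],
)

# The reply arrives as a list of content blocks; the first holds the text.
print(message.content[0].text)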

AI Firms Move to Curb 'Hallucinations' Amid Rising Disinformation Online

"AI hallucinations" have been an issue AI companies have struggled with since its initial surge in the tech industry.

Many companies have started implementing guardrails on their AI models to prevent the technology from producing inaccurate results, and they have advised users not to rely on chatbots as a primary source of information.

With 2024 elections around the corner in many countries, AI firms have been ramping up efforts to prevent their technology from being used to spread political misinformation.

Anthropic, along with OpenAI and other AI startups, signed a commitment with leading tech giants like Microsoft, Google and IBM to help governments address AI's effects on democracy.

This is in addition to earlier joint commitments by the companies to provide guidelines for mitigating the risks AI poses to the workforce and to the information and technology landscape.

Related Article: Microsoft, Google, Leading Tech Companies Sign Commitment to Fight AI Misinformation for 2024 Elections
