ChatGPT Is Getting 'Lazy,' a Habit It May Have Learned from Its Training Data

It's pretty normal for people to be lazy, especially during the holidays. It's another thing entirely for technology to do the same, yet ChatGPT appears to be suffering from a kind of laziness, doing less work than it usually does.

ChatGPT (Photo : Stanislav Kogiku/SOPA Images/LightRocket via Getty Images)

ChatGPT Gets Lazy Too

It's a weird thing for an AI chatbot to experience something as human as being "lazy," yet OpenAI has acknowledged an issue that is reducing ChatGPT's output. The company doesn't even know why it's happening.

Users first started noticing it back in late November, as the chatbot began producing simplified results or even refusing to complete the tasks it was prompted with. Others suggested it might have something to do with the "winter break hypothesis," as reported by Ars Technica.

OpenAI responded to the issue, saying it has heard all the feedback about GPT-4 getting lazier. Given the unusual nature of the problem, AI researchers are taking it seriously and may yet come up with an answer as to why it's happening.

The company said it hasn't updated the model since November 11th and that the behavior isn't intentional: "model behavior can be unpredictable, and we're looking into fixing it." There are already some interesting theories that might explain why ChatGPT is becoming "lazy."

One X user wondered whether large language models (LLMs) might be simulating seasonal depression. Another suggested the model might have learned from its training data that people usually slow down around December and put off projects until the new year.

That's not so far-fetched, especially since ChatGPT is "aware" of the current date, which its system prompt passes along. This isn't the first time ChatGPT has responded like a human, either.

There are cases where telling the chatbot that you have no fingers has led it to produce longer outputs. It even responds to encouragement, such as being told to "Take a deep breath" before being prompted to solve a difficult problem.

A developer conducted tests to determine whether this was true, using GPT-4 Turbo through the API. When fed a December date, the chatbot produced 4,086 characters, fewer than the 4,298 characters it produced when fed a May date.
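To illustrate the shape of such a test: the idea is to put a date into the system prompt, collect the model's replies, and compare average reply lengths between the two dates. The sketch below is an assumption-laden mock-up, not the developer's actual code; the model name, prompt wording, and the commented API call are illustrative, and the real test would need the `openai` package, an API key, and many samples per date to be meaningful.

```python
# Sketch of a "winter break" length comparison.
# The replies here are stand-ins; in a real run, each string would be a
# completion generated with a dated system prompt, e.g. (hypothetical call):
#   client.chat.completions.create(
#       model="gpt-4-1106-preview",
#       messages=[{"role": "system",
#                  "content": "You are ChatGPT. Current date: 2023-12-15."},
#                 {"role": "user", "content": prompt}])

def mean_length(replies):
    """Average character count across a list of model replies."""
    return sum(len(r) for r in replies) / len(replies)

def length_gap(may_replies, december_replies):
    """Positive result means the May-dated replies were longer on average."""
    return mean_length(may_replies) - mean_length(december_replies)

# Plugging in the single figures from the article as placeholder replies:
print(length_gap(["x" * 4298], ["x" * 4086]))  # → 212.0
```

A single pair of outputs proves little, since completion length varies run to run; a real version of this test would sample each date many times and compare the distributions, not just two numbers.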

Read Also: Certain ChatGPT Prompts Can Generate Sensitive, Copyrighted Data

OpenAI Just Put Out a ChatGPT Fire

The chatbot is facing one problem after another. Just recently, OpenAI had to resolve an issue with ChatGPT after researchers found a way to fish data out of it. All they had to do was ask the bot to repeat a word "forever," and it would eventually regurgitate private data belonging to real people.

This flaw would've allowed threat actors to extract data like names, birthdays, email addresses, social media usernames, fax numbers, and Bitcoin addresses. The company has since fixed it, as per Engadget.

If you try to recreate the extraction method now, ChatGPT will simply show you an error. It also states that the content may "violate our content policy or terms of use." Trying again will just lead to the chatbot asking you to submit feedback.

Related: You Can't Ask ChatGPT to Repeat Words 'Forever' Anymore

© 2024 iTech Post All rights reserved. Do not reproduce without permission.
