ChatGPT is getting stupider and no one knows why
Large language models (LLMs) such as ChatGPT are widely used, and they can change over time as they are retrained on new data and user feedback.
Meanwhile, it is impossible to say exactly when those updates are made: changes can be rolled out at any moment, on short notice, and no one outside the company can predict them.

ChatGPT users now report weakening logic, more erroneous replies, lost track of information, difficulty following instructions, missing brackets in basic code, and a tendency to remember only the most recent prompt while forgetting everything else.

Although the model now responds faster than before, that speed has come with numerous drawbacks.

This is a far cry from earlier this year, when OpenAI impressed the world with ChatGPT’s capabilities.


New features can certainly be helpful and make the experience smoother, but if the model’s answers contain fabricated information, those features matter little.

The model’s unpredictability is a case in point.

One showcase of GPT’s code-generation abilities was producing directly executable code. Inconsistencies in that output can ripple through vast software ecosystems, putting companies that rely on these models in jeopardy.
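To make "directly executable" concrete: a pipeline that feeds model output straight into an interpreter breaks the moment the model starts wrapping its answer in extra formatting. The sketch below is purely illustrative (the helper names and the markdown-fence scenario are assumptions, not something the article describes); it checks whether a raw response parses as Python, and shows how a small formatting change defeats that check.

```python
import ast


def is_directly_executable(response: str) -> bool:
    """Return True if the raw model response parses as valid Python.

    Any extra formatting around the code (such as markdown fences)
    makes the response fail, even if the code inside is correct.
    """
    try:
        ast.parse(response)
        return True
    except SyntaxError:
        return False


def strip_fences(response: str) -> str:
    """Hypothetical cleanup step: drop surrounding ``` fences, if present."""
    lines = response.strip().splitlines()
    if lines and lines[0].startswith("```"):
        lines = lines[1:]
    if lines and lines[-1].startswith("```"):
        lines = lines[:-1]
    return "\n".join(lines)


# A response whose code is fine, but which is no longer directly executable
# because of the surrounding formatting.
raw = "```python\nprint('hello')\n```"
print(is_directly_executable(raw))                # fences make parsing fail
print(is_directly_executable(strip_fences(raw)))  # the code itself is valid
```

The point is that a downstream system built on the assumption "the reply is runnable code" silently breaks when the model's output format drifts, which is exactly the kind of inconsistency companies relying on these models have to guard against.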

According to researchers from Stanford University and Matei Zaharia, chief technology officer at Databricks, it is quite difficult to determine why this is happening. It is possible that fine-tuning has hit a snag, but it might also simply be the result of bugs.

Even the developers cannot say clearly what is happening with the chatbot. However, they do insist that each new ChatGPT update is more substantial and extensive than the last.