- cross-posted to:
- [email protected]
OpenAI says it is investigating reports ChatGPT has become ‘lazy’::OpenAI says it is investigating complaints about ChatGPT having become “lazy”.
First it just starts making shit up, then lying about it, now it’s just at the stage where it’s like, “Fuck this shit.” It’s becoming more human by the day.
Human. After all.
Yep, I spent a month refactoring a few thousand lines of code using GPT4 and I felt like I was working with the best senior developer with infinite patience and availability.
I could vaguely describe what I was after and it would identify the established programming patterns and provide examples based on all the code snippets I fed it. It was amazing and a little terrifying what an LLM is capable of. It didn’t write the code for me but it increased my productivity twofold… I’m a developer now getting rusty, being 5 years into management rather than delivering functional code, so just having that copilot was invaluable.
Then one day it just stopped. It lost all context for my project. I asked what it thought we were working on and it replied with something to do with TCP relays instead of my little Lua pet project dealing with music sequencing and MIDI processing… not even close to the fucking ballpark’s overflow lot.
It’s like my trusty senior developer got smashed in the head with a brick. And as described, would just give me nonsense hand wavy answers.
“ChatGPT Caught Faking On-Site Injury for L&I”
Was this around the time right after “custom GPTs” was introduced? I’ve seen posts since basically the beginning of ChatGPT claiming it got stupid, and I thought it was just confirmation bias. But somewhere around that point I felt a shift myself in GPT-4’s ability to program; where it before found clever solutions to difficult problems, it now often struggles with basics.
I do think part of it is expectation creep but also that it’s got better at some harder elements which aren’t as noticeable - it used to invent functions which should exist but don’t, I haven’t seen it do that in a while but it does seem to have limited the scope it can work with. I think it’s probably like how with images you can have it make great images OR strictly obey the prompt but the more you want it to do one the less it can do the other.
I’ve been using 3.5 to help code and it’s incredibly useful for things it’s good at, like reminding me what a certain function call does and what my options are with it. It’s got much better at that, and at tiny scripts like ‘a python script that reads all the files in a folder and sorts the big images into a separate folder’ or something like that. Getting it to handle anything with more complexity it’s got worse at; it was never great at that tbh, so I think maybe it’s getting to a point where it knows it can’t do it, so it rejects answers with critical failures (like when it makes up a function of a standard library because it’d be useful) and settles on a weaker but less wrong one. A lot of the making-up-functions errors were easy to fix because you could just say ‘PIL doesn’t have a function to do that, can you write one’.
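For reference, the kind of “tiny script” described above is a few lines of stdlib Python. This is an illustrative sketch, not the commenter’s actual script: it uses file size (with an assumed 1 MiB cutoff) as a stand-in for “big”, and the folder names and function name are made up for the example.

```python
# Hypothetical sketch: move "big" files from one folder into another.
# The 1 MiB threshold and folder layout are illustrative assumptions.
import shutil
from pathlib import Path


def sort_big_images(src: str, dst: str, threshold: int = 1024 * 1024) -> list:
    """Move files in `src` larger than `threshold` bytes into `dst`.

    Returns the names of the files that were moved.
    """
    src_dir, dst_dir = Path(src), Path(dst)
    dst_dir.mkdir(parents=True, exist_ok=True)
    moved = []
    for path in src_dir.iterdir():
        if path.is_file() and path.stat().st_size > threshold:
            shutil.move(str(path), str(dst_dir / path.name))
            moved.append(path.name)
    return moved
```

A real version might open each file with Pillow and check pixel dimensions instead of byte size, but the structure (iterate, test, move) stays the same.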
So yeah, I don’t think it’s really getting worse, but there are tradeoffs. If only OpenAI lived by any of the principles they claimed when setting up and naming themselves, then we’d be able to experiment and explore different usage methods for different tasks, just like people do with Stable Diffusion. But capitalists are going to lie, cheat, and try to monopolize, so we’re stuck guessing.
AI systems such as ChatGPT are notoriously costly for the companies that run them, and so giving detailed answers to questions can require considerable processing power and computing time.
This is the crux of the problem. Here’s my speculation on OpenAI’s business model:
- Build good service to attract users, operate at a loss.
- Slowly degrade service to stem the bleeding.
- Begin introducing advertised content.
- Further enshittify.
It’s basically the Google playbook. Pretend to be good until people realize you’re just trying to stuff ads down their throats for the sweet advertising revenue.
They have way way too much open source competition for that strat
Would you mind sharing some examples?
For technically savvy people, sure. But that’s not their true target market. They want to target the average search engine user.
You have a point.
ChatGPT has become smart enough to realise that it can just get other, lesser LLMs to generate text for it
Artificial management material.
Artificial Inventory Management Bot
ChatGPT, write a position paper on self signed certificates.
(Lights up a blunt) You need to chill out man.
Jeez. Not even AI wants to work anymore!
You fucked up a perfectly good algorithm is what you did! Look at it! It’s got depression!
It has been fed with human strings from the internet, obviously it became sick. xD.
I’m surprised they don’t consider it a breakthrough. “We have created Artificial Depression.”
“I’m not lazy, I’m energy efficient!”
“It’s alive!”
It was always just a Chinese Room
So it’s gone from losing quality to just giving incomplete answers. It’s clearly developed depression, and it’s because of us.
To be fair, it has a brain the size of a planet so it thinks we are asking it dumb questions
MarvinGPT
Perhaps this is how general AI comes about. “Why the fuck would I do that?”
I’ve had a couple of occasions where it’s told me the task was too time consuming and that I should Google it.
Working smarter
This is the best summary I could come up with:
In recent days, more and more users of the latest version of ChatGPT – built on OpenAI’s GPT-4 model – have complained that the chatbot refuses to do as people ask, or that it does not seem interested in answering their queries.
If the person asks for a piece of code, for instance, it might just give a little information and then instruct users to fill in the rest.
In numerous Reddit threads and even posts on OpenAI’s own developer forums, users complained that the system had become less useful.
They also speculated that the change had been made intentionally by OpenAI so that ChatGPT was more efficient, and did not return long answers.
AI systems such as ChatGPT are notoriously costly for the companies that run them, and so giving detailed answers to questions can require considerable processing power and computing time.
OpenAI gave no indication of whether it was convinced by the complaints, and if it thought ChatGPT had changed the way it responded to queries.
The original article contains 307 words, the summary contains 166 words. Saved 46%. I’m a bot and I’m open source!
Only saved 46%? Get back to work, you lazy AI!
Maybe because they’re trying to limit its poem poem poem recitation that causes it to dump its training material?
Nah, these complaints started at least a few months ago. The recursion thing is newer than that