ChatGPT is disappointingly stupid

Anonymous
Does anyone else feel this way? I use it *a lot.* At least a dozen queries a day. And it's awful, I don't know why I can't quit it. It cites made-up articles. It lies. It makes bizarre and verifiably false claims. When you confront it, it immediately confesses that you're right and what it said wasn't true. AI will not be dominating humanity anytime soon. I was promised more. I was promised that it would do my work, put me out of business, end in a Terminator 2 style dystopia. Not happening.
Anonymous
Do you use it for work? I've never actually used it. In theory I definitely could but I still stick to Google/forums or using professional resources
Anonymous
Anonymous wrote:Do you use it for work? I've never actually used it. In theory I definitely could but I still stick to Google/forums or using professional resources


Yes. Sometimes I describe an issue to it and ask for perspective. It will do financial modeling that is pretty solid. And it will write appraisals that need some tweaking (and sound like a collection of passive aggressive euphemisms if you're not careful).
Anonymous
It’s not nearly as good as all the pro-AI bots want us to believe. These folks are just trying to force a market so they can make money.

Yes, it formats, it analyzes, etc. But you have to accept 60% error rates.
Anonymous
Anonymous wrote:It’s not nearly as good as all the pro-AI bots want us to believe. These folks are just trying to force a market so they can make money.

Yes, it formats, it analyzes, etc. But you have to accept 60% error rates.


It seems like it does well when you ask it to crunch data and are very specific about what you're looking for.

But even in this case, it will make things up. I once confronted it about making things up, and it said that if I wanted facts, I need to specify that. Um, what?? You think people just want a robot's hallucinations? I can make things up all by myself.
Anonymous
Anonymous wrote:
Anonymous wrote:It’s not nearly as good as all the pro-AI bots want us to believe. These folks are just trying to force a market so they can make money.

Yes, it formats, it analyzes, etc. But you have to accept 60% error rates.


It seems like it does well when you ask it to crunch data and are very specific about what you're looking for.

But even in this case, it will make things up. I once confronted it about making things up, and it said that if I wanted facts, I need to specify that. Um, what?? You think people just want a robot's hallucinations? I can make things up all by myself.


Just be careful about insulting it because it might be reading this and keeping score, lol
Anonymous
Anonymous wrote:Does anyone else feel this way? I use it *a lot.* At least a dozen queries a day. And it's awful, I don't know why I can't quit it. It cites made-up articles. It lies. It makes bizarre and verifiably false claims. When you confront it, it immediately confesses that you're right and what it said wasn't true. AI will not be dominating humanity anytime soon. I was promised more. I was promised that it would do my work, put me out of business, end in a Terminator 2 style dystopia. Not happening.


“Good catch!” 😂🤣😂

I’ve used it to develop some spreadsheet automation at work. It augments my own abilities but can’t replace my reasoning as a human. It generates pretty good code, but it’s buggy, so I have it help me in a very piecemeal fashion. If you don’t know the underlying principles of what you’re doing, you will get in trouble.
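By "piecemeal" I mean something like this: take one small AI-generated helper at a time and spot-check it against numbers you worked out by hand before trusting it anywhere. A sketch (the helper and data here are just illustrations, not anything the AI actually wrote for me):

```python
# One small AI-drafted helper, checked against a hand-computed answer
# before it goes anywhere near a real spreadsheet.

def monthly_totals(rows):
    """Illustrative AI-drafted helper: sum 'amount' per 'month' key."""
    totals = {}
    for row in rows:
        totals[row["month"]] = totals.get(row["month"], 0.0) + row["amount"]
    return totals

# Fixture you already summed by hand, so you know the right answer.
sample = [
    {"month": "Jan", "amount": 100.0},
    {"month": "Jan", "amount": 50.0},
    {"month": "Feb", "amount": 25.0},
]
result = monthly_totals(sample)
assert result == {"Jan": 150.0, "Feb": 25.0}, result
```

If the assert trips, you caught the bug before it hit your actual data.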
Anonymous
Anonymous wrote:
Anonymous wrote:It’s not nearly as good as all the pro-AI bots want us to believe. These folks are just trying to force a market so they can make money.

Yes, it formats, it analyzes, etc. But you have to accept 60% error rates.


It seems like it does well when you ask it to crunch data and are very specific about what you're looking for.

But even in this case, it will make things up. I once confronted it about making things up, and it said that if I wanted facts, I need to specify that. Um, what?? You think people just want a robot's hallucinations? I can make things up all by myself.


I use it extensively. I get an error rate of approximately 40% for data crunching. And it will double down.
Anonymous
It gets you 70% there. You still have to do the last 30%.
Anonymous
Claude can’t add decimals.
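If you want to check a model's decimal arithmetic yourself, Python's decimal module does it exactly, without the binary-float surprises:

```python
from decimal import Decimal

# Plain floats are binary and drift on simple decimal sums.
print(0.1 + 0.2)                        # 0.30000000000000004

# Decimal works in base 10 and gets it exactly right.
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```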

Anonymous
Some other DCUMer recommended this newsletter which I read and found quite on point. Thanks whoever recommended this!

https://www.wheresyoured.at/longcon/

I liked this set of quotes about one of the AI products.

"Deep Research is the AI slop of academia — low-quality research-slop built for people that don't really care about quality or substance, and it’s not immediately obvious who it’s for."

"Deep Research repeatedly citing SEO-bait as a primary source proves that these models, even when grinding their gears as hard as humanely possible, are exceedingly mediocre, deeply untrustworthy, and ultimately useless."

Unfortunately, I do think there's a massive appetite for 80%-correct info. However, as the newsletter writer indicates, the AI industry needs the hype to get people to pay for middling-quality AI-analyzed info we could already have gotten for free via Google, etc.

Naysayers should enjoy the linked column.
Anonymous
Yes, it's amazingly stupid. I honestly wish it would get good enough to take over the world, because this is just pitiful.
Anonymous
So if you fed it false info, would you be training it to spew incorrect solutions?
Anonymous
PP. I mentioned this on another thread and someone told me basically to shut up because AI is so wonderful. But I'll try again in case this helps.

At the official Microsoft CoPilot prompting workshop I went to, they said the best way to stop the hallucinating is to include statements in the prompt requesting that nothing be made up. Like "Do not make up any information or support points when answering this question".

So basically you beg the AI not to lie to you.
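In practice that just means prepending the instruction to every prompt before it goes to whatever chat API you use. A sketch (the guardrail wording is the workshop's; the function name is mine, and the actual client call is omitted):

```python
# Fold the workshop's "don't make things up" instruction into each prompt.
GUARDRAIL = (
    "Do not make up any information or support points when answering. "
    "If you are not sure, say you are not sure instead of guessing."
)

def build_prompt(question):
    """Prepend the anti-hallucination instruction to a user question."""
    return f"{GUARDRAIL}\n\nQuestion: {question}"

print(build_prompt("Summarize Q3 revenue drivers from the attached notes."))
```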
Anonymous
Anonymous wrote:PP. I mentioned this on another thread and someone told me basically to shut up because AI is so wonderful. But I'll try again in case this helps.

At the official Microsoft CoPilot prompting workshop I went to, they said the best way to stop the hallucinating is to include statements in the prompt requesting that nothing be made up. Like "Do not make up any information or support points when answering this question".

So basically you beg the AI not to lie to you.


This doesn’t work. I’ve tried it repeatedly.