ChatGPT is disappointingly stupid

Anonymous
Anonymous wrote:I’m honestly having a blast using it and I don’t understand why people aren’t figuring out what a great tool AI is if you don’t rely on it to hand you your work.

Here is what I use it for:
- coding to create automated spreadsheets and word templates- I already knew how to do some computer programming honestly but I was not going to make this effort w/o ChatGPT
- Converting data and moving it around, turning documents into tables that I can load into excel and turn into a mini database
- uploading and summarizing long documents like legislation or regulations- it’s accurate at this
- outlining out ideas, organizing my thoughts and pointing out things I missed
- planning out steps for long term projects

It’s a fantastic tool but you’ll notice I’m not asking it to do my job, just augment things I do.


It’s not accurate at the bolded part (summarizing legislation and regulations). It only seems accurate to people who don’t have the experience and skill to catch the mistakes it makes. It looks very accurate, but is not actually accurate.
Anonymous
Anonymous wrote:You have to learn how to write prompts correctly. Seriously. Take a course in how to do it. You must also ask it to check its work. Learn how to adjust the temperature settings that affect hallucinations. You have to put in some effort.


I literally work in the field and this is wrong.
Anonymous
I set up solid custom instructions and it is pretty decent. While it can sometimes be a bit obtuse, in general it seems smarter than the average person, and it is definitely smarter than average when it comes to technical stuff or writing scripts.
Anonymous
Anonymous wrote:
Anonymous wrote:You have to learn how to write prompts correctly. Seriously. Take a course in how to do it. You must also ask it to check its work. Learn how to adjust the temperature settings that affect hallucinations. You have to put in some effort.


I literally work in the field and this is wrong.


Ok, expert. Correct me! What about the response is wrong?
Anonymous
Try Gemini or Grok. They are better, and you can work with them to refine responses.
Anonymous
I love it. It’s good at reading documents and answering questions about them. Not hard questions, but it would take me longer to read myself and find the answers.

It’s GREAT at teaching me to use other software. Having the ability to explain my confusion and get a tailored answer is so useful.

Like, I recently used Zapier for the first time. Having a ChatGPT window open to ask questions made it MUCH easier to learn.

The same thing goes for Python. I don’t know Python beyond the very, very basics, but ChatGPT was able to help me write and troubleshoot code.

I want all of my interactions with software to have an AI interface. I think our children will look at drop down menus like we look at a punch card.

But as much as I like ChatGPT, I hate how it’s trapped in there.

It makes me so mad at Google that I can’t talk to Google maps while I’m driving. “I don’t want to take the BW parkway unless you think it will save at least 40 minutes.” “It’s pouring, don’t make me take any lefts without a light or a four way stop.” That kind of thing.
Anonymous
Anonymous wrote:Try Gemini or Grok. They are better, and you can work with them to refine responses.

Grok is Elon Musk’s AI. No thanks.
Anonymous
You're just not good at using it
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:You have to learn how to write prompts correctly. Seriously. Take a course in how to do it. You must also ask it to check its work. Learn how to adjust the temperature settings that affect hallucinations. You have to put in some effort.


I literally work in the field and this is wrong.


Ok, expert. Correct me! What about the response is wrong?


I will be very simplistic here but the full answer is more complex. Essentially, however, hallucinations aren’t a side effect. They are part and parcel of how foundation models work. At their base level, foundation models are doing token prediction. Hallucinations are the models working as intended.

For a long time, the hope was that accurate prompting could reduce hallucination incidence. That hasn’t turned out to be true, however, and you’ll notice that the major generative AI companies have stopped talking much about prompt engineering and are instead focusing on agentic querying. That’s because the next great hope is that agentic interfaces can catch hallucinations more efficiently. I’m personally skeptical, but the field is moving quickly and it might work.

Prompt engineering can help around the edges, but it doesn’t reduce hallucinations particularly well. Neither does asking most foundation models to check their work, because that’s not how their token prediction works.
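To make the "token prediction" point concrete, here is a toy sketch (hypothetical tokens and scores, not a real model) of how a language model picks its next word: it samples from a probability distribution over tokens, and the temperature setting only reshapes that distribution. It can make sampling more or less conservative, but it can't remove a plausible-but-wrong token from the distribution.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw model scores into probabilities.

    Lower temperature sharpens the distribution toward the top score;
    higher temperature flattens it. It never zeroes out a wrong answer.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores after a prompt like "The capital of Australia is"
tokens = ["Canberra", "Sydney", "Melbourne"]
logits = [2.0, 1.8, 0.5]  # the wrong answer scores almost as high as the right one

probs = softmax(logits, temperature=1.0)

# Sampling from these probabilities will sometimes produce "Sydney".
# The model is working exactly as designed, yet the output is a hallucination.
choice = random.choices(tokens, weights=probs, k=1)[0]
```

With these made-up numbers, "Sydney" still gets roughly 40% of the probability mass at temperature 1.0, which is why "just set the temperature" doesn't make hallucinations go away.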
Anonymous
Anonymous wrote:It’s not nearly as good as all the pro-AI bots want us to believe. These folks are just trying to force a market so they can make money.

Yes, it formats, it analyzes, etc. But you have to accept 60% error rates.


Is it that much? I definitely find mistakes and I always fact check. But I do find it helpful. It prompts me to do more research in areas I hadn't thought about before.

It's also really good at writing warm, pleasant customer response letters, when I'd much rather just curse them out. lol.
Anonymous
Anonymous wrote:
Anonymous wrote:I’m honestly having a blast using it and I don’t understand why people aren’t figuring out what a great tool AI is if you don’t rely on it to hand you your work.

Here is what I use it for:
- coding to create automated spreadsheets and word templates- I already knew how to do some computer programming honestly but I was not going to make this effort w/o ChatGPT
- Converting data and moving it around, turning documents into tables that I can load into excel and turn into a mini database
- uploading and summarizing long documents like legislation or regulations- it’s accurate at this
- outlining out ideas, organizing my thoughts and pointing out things I missed
- planning out steps for long term projects

It’s a fantastic tool but you’ll notice I’m not asking it to do my job, just augment things I do.


It’s not accurate at the bolded part (summarizing legislation and regulations). It only seems accurate to people who don’t have the experience and skill to catch the mistakes it makes. It looks very accurate, but is not actually accurate.


Well I've been a lawyer for a decade and it's serving my purposes just fine. It gives me a quick summary and saves me time from writing one myself. It's just picking up language patterns in a pre-existing document and it's good at it. It's also improved a lot in just the past few months. This isn't a static technology. I'm having a lot of fun experimenting, trying out different programs, seeing what they can do and how far they can go before they hit a wall.
Anonymous
I like it. I use it to edit documents for grammar. I'm also having it help me redecorate my living room and dining room.
Anonymous
Anonymous wrote:I like it. I use it to edit documents for grammar. I'm also having it help me redecorate my living room and dining room.


Oooh how did you use it to redecorate!
Anonymous
I tried it just for fun. It’s BS…
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I’m honestly having a blast using it and I don’t understand why people aren’t figuring out what a great tool AI is if you don’t rely on it to hand you your work.

Here is what I use it for:
- coding to create automated spreadsheets and word templates- I already knew how to do some computer programming honestly but I was not going to make this effort w/o ChatGPT
- Converting data and moving it around, turning documents into tables that I can load into excel and turn into a mini database
- uploading and summarizing long documents like legislation or regulations- it’s accurate at this
- outlining out ideas, organizing my thoughts and pointing out things I missed
- planning out steps for long term projects

It’s a fantastic tool but you’ll notice I’m not asking it to do my job, just augment things I do.


It’s not accurate at the bolded part (summarizing legislation and regulations). It only seems accurate to people who don’t have the experience and skill to catch the mistakes it makes. It looks very accurate, but is not actually accurate.


Well I've been a lawyer for a decade and it's serving my purposes just fine. It gives me a quick summary and saves me time from writing one myself. It's just picking up language patterns in a pre-existing document and it's good at it. It's also improved a lot in just the past few months. This isn't a static technology. I'm having a lot of fun experimenting, trying out different programs, seeing what they can do and how far they can go before they hit a wall.


At a decade out, you aren’t sophisticated enough to pick up the subtle but significant errors.

There is a reason Harvey is popular with junior associates but not with partners. And it isn’t the lack of sophistication of the partners. It’s the error rates.