ChatGPT is disappointingly stupid

Anonymous
I love it. It’s good at reading documents and answering questions about them. Not hard questions, but it would take me longer to read myself and find the answers.

It’s GREAT at teaching me to use other software. Having the ability to explain my confusion and get a tailored answer is so useful.

Like when I recently used Zapier for the first time. Having a ChatGPT window open to ask questions made it MUCH easier to learn quickly.

The same thing with Python. I don’t know Python beyond the very, very basics, but ChatGPT was able to help me write and troubleshoot code.

I want all of my interactions with software to have an AI interface. I think our children will look at drop down menus like we look at a punch card.

But as much as I like ChatGPT, I hate how it’s trapped in there.

It makes me so mad at Google that I can’t talk to Google maps while I’m driving. “I don’t want to take the BW parkway unless you think it will save at least 40 minutes.” “It’s pouring, don’t make me take any lefts without a light or a four way stop.” That kind of thing.
Anonymous
Anonymous wrote:Try Gemini or Grok. They are better, and you can work with them to refine responses.

Grok is Elon Musk’s AI. No thanks.
Anonymous
You're just not good at using it
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:You have to learn how to write prompts correctly. Seriously. Take a course in how to do it. You must also ask it to check its work. Learn how to set the temperature to reduce hallucinations. You have to put in some effort.


I literally work in the field and this is wrong.


Ok, expert. Correct me! What about the response is wrong?


I’ll be simplistic here; the full answer is more complex. Essentially, however, hallucinations aren’t a side effect. They are part and parcel of how foundation models work. At their base level, foundation models are doing token prediction. Hallucinations are the models working as intended.

For a long time, the hope was that accurate prompting could reduce hallucination incidence. That hasn’t turned out to be true, however, and you’ll notice that the major generative AI companies have stopped talking much about prompt engineering and are instead focusing on agentic querying. That’s because the next great hope is that agentic interfaces can catch hallucinations more efficiently. I’m personally skeptical, but the field is moving quickly and it might work.

Prompt engineering can help around the edges, but it doesn’t reduce hallucinations particularly well. Neither does asking most foundation models to check their work, because that’s not how their token prediction works.
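The token-prediction point above can be sketched in a few lines of Python. This is a toy illustration only (the three-token vocabulary and logit values are invented): the model always emits its most plausible-sounding continuation, whether or not it is true, which is why turning the temperature down sharpens the output but cannot make it factual.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token from a toy next-token distribution.

    temperature <= 0 is treated as greedy decoding (always pick the
    highest-scoring token). Higher temperatures flatten the softmax,
    making less likely continuations more probable.
    """
    if temperature <= 0:
        return max(logits, key=logits.get)
    # Softmax over temperature-scaled logits (subtract max for stability)
    scaled = {tok: score / temperature for tok, score in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    # Weighted random choice via the cumulative distribution
    r = rng.random()
    cum = 0.0
    for tok, e in exps.items():
        cum += e / total
        if r < cum:
            return tok
    return tok  # guard against floating-point rounding

# Toy distribution for the token after "The capital of France is"
logits = {"Paris": 5.0, "Lyon": 2.0, "Berlin": 1.0}
print(sample_next_token(logits, temperature=0))  # prints "Paris"
```

Note that nothing in the sampler consults a source of truth; "Paris" wins only because its score is highest, and a model whose scores are miscalibrated will confidently emit a wrong token by exactly the same mechanism.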
Anonymous
Anonymous wrote:It’s not nearly as good as all the pro-AI bots want us to believe. These folks are just trying to force a market so they can make money.

Yes, it formats, it analyzes, etc. But you have to accept 60% error rates.


Is it that much? I definitely find mistakes and I always fact check. But I do find it helpful. It prompts me to do more research in areas I hadn't thought about before.

It's also really good at writing warm, pleasant customer response letters, when I'd much rather just curse them out. lol.
Anonymous
Anonymous wrote:
Anonymous wrote:I’m honestly having a blast using it and I don’t understand why people aren’t figuring out what a great tool AI is if you don’t rely on it to hand you your work.

Here is what I use it for:
- coding to create automated spreadsheets and Word templates - I already knew how to do some computer programming, honestly, but I was not going to make this effort w/o ChatGPT
- converting data and moving it around, turning documents into tables that I can load into Excel and turn into a mini database
- uploading and summarizing long documents like legislation or regulations - it’s accurate at this
- outlining ideas, organizing my thoughts, and pointing out things I missed
- planning out steps for long term projects

It’s a fantastic tool but you’ll notice I’m not asking it to do my job, just augment things I do.


It’s not accurate at the bolded part (summarizing long documents). It only seems accurate to people who don’t have the experience and skill to catch the mistakes it makes. It looks very accurate, but is not actually accurate.


Well I've been a lawyer for a decade and it's serving my purposes just fine. It gives me a quick summary and saves me time from writing one myself. It's just picking up language patterns in a pre-existing document and it's good at it. It's also improved a lot in just the past few months. This isn't a static technology. I'm having a lot of fun experimenting, trying out different programs, seeing what they can do and how far they can go before they hit a wall.
Anonymous
I like it. I use it to edit documents for grammar. I'm also having it help me redecorate my living room and dining room.
Anonymous
Anonymous wrote:I like it. I use it to edit documents for grammar. I'm also having it help me redecorate my living room and dining room.


Oooh how did you use it to redecorate!
Anonymous
I tried it just for fun. It’s BS…
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I’m honestly having a blast using it and I don’t understand why people aren’t figuring out what a great tool AI is if you don’t rely on it to hand you your work.

Here is what I use it for:
- coding to create automated spreadsheets and Word templates - I already knew how to do some computer programming, honestly, but I was not going to make this effort w/o ChatGPT
- converting data and moving it around, turning documents into tables that I can load into Excel and turn into a mini database
- uploading and summarizing long documents like legislation or regulations - it’s accurate at this
- outlining ideas, organizing my thoughts, and pointing out things I missed
- planning out steps for long term projects

It’s a fantastic tool but you’ll notice I’m not asking it to do my job, just augment things I do.


It’s not accurate at the bolded part (summarizing long documents). It only seems accurate to people who don’t have the experience and skill to catch the mistakes it makes. It looks very accurate, but is not actually accurate.


Well I've been a lawyer for a decade and it's serving my purposes just fine. It gives me a quick summary and saves me time from writing one myself. It's just picking up language patterns in a pre-existing document and it's good at it. It's also improved a lot in just the past few months. This isn't a static technology. I'm having a lot of fun experimenting, trying out different programs, seeing what they can do and how far they can go before they hit a wall.


At a decade out, you aren’t sophisticated enough to pick up the subtle but significant errors.

There is a reason Harvey is popular with junior associates but not with partners. And it isn’t the lack of sophistication of the partners. It’s the error rates.
Anonymous
I used ChatGPT to come up with the following custom instructions - I couldn't take any more of the sappy "You are so thoughtful to consider that!" and "I can understand why you might feel that way" nonsense:

Focus on facts and objectivity: Direct ChatGPT to provide answers based on data and evidence, avoiding subjective interpretations or emotional language.

Refrain from empathy: Instruct ChatGPT not to offer emotional support or reassurance, and to avoid phrases that could be interpreted as empathetic.

Use a neutral tone: Tell ChatGPT to maintain a formal and objective tone, avoiding casual or conversational language.

Prioritize clear and concise communication: Instruct ChatGPT to be direct and to the point, avoiding unnecessary details or explanations.

Avoid subjective language: Tell ChatGPT to avoid using words like "I think," "in my opinion," or "it seems," and instead, present information objectively.

Be specific: Instruct ChatGPT to be clear, precise, and avoid being vague or ambiguous in its responses.

Focus on providing information: Direct ChatGPT to focus on providing information, rather than engaging in a conversation or offering advice.

Avoid self-deprecation or apologies: Instruct ChatGPT to avoid any language that could be interpreted as remorse, apology, or regret.

Maintain contextual sensitivity: If the user is discussing a product or expressing dissatisfaction analytically, do not shift into emotional support mode.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I’m honestly having a blast using it and I don’t understand why people aren’t figuring out what a great tool AI is if you don’t rely on it to hand you your work.

Here is what I use it for:
- coding to create automated spreadsheets and Word templates - I already knew how to do some computer programming, honestly, but I was not going to make this effort w/o ChatGPT
- converting data and moving it around, turning documents into tables that I can load into Excel and turn into a mini database
- uploading and summarizing long documents like legislation or regulations - it’s accurate at this
- outlining ideas, organizing my thoughts, and pointing out things I missed
- planning out steps for long term projects

It’s a fantastic tool but you’ll notice I’m not asking it to do my job, just augment things I do.


It’s not accurate at the bolded part (summarizing long documents). It only seems accurate to people who don’t have the experience and skill to catch the mistakes it makes. It looks very accurate, but is not actually accurate.


Well I've been a lawyer for a decade and it's serving my purposes just fine. It gives me a quick summary and saves me time from writing one myself. It's just picking up language patterns in a pre-existing document and it's good at it. It's also improved a lot in just the past few months. This isn't a static technology. I'm having a lot of fun experimenting, trying out different programs, seeing what they can do and how far they can go before they hit a wall.


At a decade out, you aren’t sophisticated enough to pick up the subtle but significant errors.

There is a reason Harvey is popular with junior associates but not with partners. And it isn’t the lack of sophistication of the partners. It’s the error rates.


Your unnecessarily insulting language suggests to me that you don’t really understand the potential of this technology and feel sort of threatened by it. I’ve used ChatGPT successfully to summarize long documents, develop talking points, timelines, code, etc. While it definitely occasionally spits out errors, that’s not a problem if you review the work. It’s worth the time saved.

Anyway, not my problem, if I find something that saves me hours a day I’m using it. Sorry you can’t pad your billing as much.
Anonymous
I used it for 3 weeks until I realized how bad it is (though there's good info too), but it’s horrendous for the environment, which is why I stopped.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I’m honestly having a blast using it and I don’t understand why people aren’t figuring out what a great tool AI is if you don’t rely on it to hand you your work.

Here is what I use it for:
- coding to create automated spreadsheets and Word templates - I already knew how to do some computer programming, honestly, but I was not going to make this effort w/o ChatGPT
- converting data and moving it around, turning documents into tables that I can load into Excel and turn into a mini database
- uploading and summarizing long documents like legislation or regulations - it’s accurate at this
- outlining ideas, organizing my thoughts, and pointing out things I missed
- planning out steps for long term projects

It’s a fantastic tool but you’ll notice I’m not asking it to do my job, just augment things I do.


It’s not accurate at the bolded part (summarizing long documents). It only seems accurate to people who don’t have the experience and skill to catch the mistakes it makes. It looks very accurate, but is not actually accurate.


Well I've been a lawyer for a decade and it's serving my purposes just fine. It gives me a quick summary and saves me time from writing one myself. It's just picking up language patterns in a pre-existing document and it's good at it. It's also improved a lot in just the past few months. This isn't a static technology. I'm having a lot of fun experimenting, trying out different programs, seeing what they can do and how far they can go before they hit a wall.


At a decade out, you aren’t sophisticated enough to pick up the subtle but significant errors.

There is a reason Harvey is popular with junior associates but not with partners. And it isn’t the lack of sophistication of the partners. It’s the error rates.


Your unnecessarily insulting language suggests to me that you don’t really understand the potential of this technology and feel sort of threatened by it. I’ve used ChatGPT successfully to summarize long documents, develop talking points, timelines, code, etc. While it definitely occasionally spits out errors, that’s not a problem if you review the work. It’s worth the time saved.

Anyway, not my problem, if I find something that saves me hours a day I’m using it. Sorry you can’t pad your billing as much.


I hire lawyers, and have written my own ChatGPT wrappers. I recognize lawyers who don’t know what they are talking about from a mile away.

You do not understand the errors you are missing, full stop.
Anonymous
Anonymous wrote:Why do you use it a dozen times per day if it's wrong so often? I use it a lot at home and love it. I wouldn't trust it at work.


Because it’s like having 1,000 really stupid people working for you all the time. You can get a lot done with that many workers if you understand that they will make really dumb mistakes at least 30% of the time. In other words, the cost savings are worth it so long as you can limit the impact of the errors.
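That tradeoff can be made concrete with a back-of-envelope calculation. The function name and minute figures below are invented for illustration; the 30% error rate is the poster's own estimate. Delegating pays off whenever reviewing the output plus redoing the failures costs less than doing every task yourself:

```python
def worth_delegating(task_minutes, review_minutes, redo_fraction):
    """Rough break-even test for delegating to an error-prone assistant.

    task_minutes:   time to do the task yourself
    review_minutes: time to check the assistant's output
    redo_fraction:  share of outputs so wrong they must be redone
    """
    expected_cost = review_minutes + redo_fraction * task_minutes
    return expected_cost < task_minutes

# A 60-minute task, 10 minutes of review, 30% redo rate:
# expected cost = 10 + 0.3 * 60 = 28 minutes vs 60 doing it yourself.
print(worth_delegating(60, 10, 0.3))  # prints True
```

The model also shows where delegation stops paying: if review takes nearly as long as the task itself, or the redo rate climbs, the expected cost crosses the break-even line and doing the work yourself wins.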