It’s not accurate, despite the bolded claim. It only seems accurate to people who don’t have the experience and skill to catch the mistakes it makes. It looks very accurate, but it isn’t. |
I literally work in the field and this is wrong. |
| I set up solid custom instructions and it’s pretty decent. While it can sometimes be a bit obtuse, in general it seems smarter than the average person, and definitely smarter than average when it comes to technical stuff or writing scripts. |
Ok, expert. Correct me! What about the response is wrong? |
| Try Gemini or Grok. They are better, and you can work with them to refine responses. |
|
I love it. It’s good at reading documents and answering questions about them. Not hard questions, but it would take me longer to read them myself and find the answers.
It’s GREAT at teaching me to use other software. Having the ability to explain my confusion and get a tailored answer is so useful. Like I recently used Zapier for the first time. Having a ChatGPT window open to ask questions made it MUCH easier to learn quickly. Same thing with Python. I don’t know Python beyond the very, very basics, but ChatGPT was able to help me write and troubleshoot code. I want all of my interactions with software to have an AI interface. I think our children will look at drop-down menus the way we look at punch cards. But as much as I like ChatGPT, I hate how it’s trapped in there. It makes me so mad at Google that I can’t talk to Google Maps while I’m driving. “I don’t want to take the BW Parkway unless you think it will save at least 40 minutes.” “It’s pouring, don’t make me take any lefts without a light or a four-way stop.” That kind of thing. |
Grok is Elon Musk’s AI. No thanks. |
| You're just not good at using it |
I’ll be very simplistic here, and the full answer is more complex. Essentially, though, hallucinations aren’t a side effect. They are part and parcel of how foundation models work. At their base level, foundation models are doing token prediction. Hallucinations are the models working as intended. For a long time, the hope was that careful prompting could reduce the rate of hallucinations. That hasn’t turned out to be true, and you’ll notice that the major generative AI companies have stopped talking much about prompt engineering and are instead moving to focus on agentic querying. That’s because the next great hope is that agentic interfaces can catch hallucinations more efficiently. I’m personally skeptical, but the field is moving quickly and it might work. Prompt engineering can help around the edges, but it doesn’t reduce hallucinations particularly well. Neither does asking most foundation models to check their work, because that’s not how their token prediction works. (Toy sketch of what I mean by token prediction below.) |
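To make the token-prediction point concrete, here’s a toy sketch in Python. Everything in it is invented for illustration (the four-word vocabulary, the scores, the prompt); a real foundation model scores tens of thousands of tokens with a large neural network, but the sampling step works the same way: it picks a plausible next token, with no notion of whether the result is true.

```python
import math
import random

# Toy next-token sampler. The vocabulary and scores are made up for
# illustration; a real model produces scores over a huge vocabulary.
VOCAB = ["Paris", "Lyon", "Berlin", "pizza"]

def next_token(logits, temperature=1.0):
    """Sample one token from a softmax over the model's scores."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # The sampler only knows which continuations look plausible, not
    # which are true: any token with nonzero probability can be drawn.
    return random.choices(VOCAB, weights=probs, k=1)[0]

# Hypothetical scores for the prompt "The capital of France is ..."
# "Paris" is most likely, but "Berlin" still has nonzero probability.
print(next_token([4.0, 1.5, 1.0, -2.0]))
```

Run it enough times and it will eventually print “Berlin.” That’s a hallucination in miniature, and the generation mechanism worked exactly as designed.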
Is it really that error-prone? I definitely find mistakes, and I always fact-check. But I do find it helpful. It prompts me to do more research in areas I hadn't thought about before. It's also really good at writing warm, pleasant customer response letters, when I'd much rather just curse them out. lol. |
Well I've been a lawyer for a decade and it's serving my purposes just fine. It gives me a quick summary and saves me time from writing one myself. It's just picking up language patterns in a pre-existing document and it's good at it. It's also improved a lot in just the past few months. This isn't a static technology. I'm having a lot of fun experimenting, trying out different programs, seeing what they can do and how far they can go before they hit a wall. |
| I like it. I use it to edit documents for grammar. I'm also having it help me redecorate my living room and dining room. |
Oooh, how did you use it to redecorate? |
| I tried it just for fun. It’s BS… |
At a decade out, you aren’t sophisticated enough to pick up the subtle but significant errors. There is a reason Harvey is popular with junior associates but not with partners. And it isn’t the lack of sophistication of the partners. It’s the error rates. |