|
I love it. It’s good at reading documents and answering questions about them. Not hard questions, but ones it would take me longer to find myself by reading the whole thing.
It’s GREAT at teaching me to use other software. Having the ability to explain my confusion and get a tailored answer is so useful. Like, I recently used Zapier for the first time, and having a ChatGPT window open to ask questions made it MUCH easier to learn. The same thing with Python: I don’t know Python beyond the very, very basics, but ChatGPT was able to help me write and troubleshoot code. I want all of my interactions with software to have an AI interface. I think our children will look at drop-down menus the way we look at a punch card. But as much as I like ChatGPT, I hate how it’s trapped in there. It makes me so mad at Google that I can’t talk to Google Maps while I’m driving. “I don’t want to take the BW Parkway unless you think it will save at least 40 minutes.” “It’s pouring, don’t make me take any lefts without a light or a four-way stop.” That kind of thing. |
Grok is Elon Musk’s AI. No thanks. |
| You're just not good at using it |
I will be very simplistic here, but the full answer is more complex. Essentially, hallucinations aren’t a side effect; they are part and parcel of how foundation models work. At their base level, foundation models are doing token prediction, so hallucinations are the models working as intended. For a long time, the hope was that careful prompting could reduce hallucination incidence. That hasn’t turned out to be true, however, and you’ll notice that the major generative AI companies have stopped talking much about prompt engineering and are instead focusing on agentic querying. That’s because the next great hope is that agentic interfaces can catch hallucinations more efficiently. I’m personally skeptical, but the field is moving quickly and it might work. Prompt engineering can help around the edges, but it doesn’t reduce hallucinations particularly well. Neither does asking most foundation models to check their work, because that’s not how their token prediction works. |
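To make the token-prediction point concrete, here is a toy sketch: a bigram word counter, which is nothing like a real transformer, but shares the same "emit a statistically likely next token" objective. The corpus and the "sydney" error are invented purely for illustration; the point is that when a wrong phrasing dominates the training text, greedy prediction confidently reproduces it.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it predicts the next word purely from
# frequency counts in its training text. Real foundation models are far
# more sophisticated, but the objective is similar in spirit: emit a
# statistically likely next token. Truth is not part of the objective.
corpus = (
    "the capital of australia is sydney . "   # a common misconception,
    "the capital of australia is sydney . "   # repeated often online
    "the capital of australia is canberra ."  # the correct fact, rarer
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # Greedy decoding: always take the single most frequent follower.
    return counts[word].most_common(1)[0][0]

def complete(prompt_word, steps=5):
    out = [prompt_word]
    for _ in range(steps):
        out.append(predict_next(out[-1]))
    return " ".join(out)

# The dominant (but false) pattern wins; the model "hallucinates" by design.
print(complete("the"))  # the capital of australia is sydney
```

Asking this model to "check its work" would just mean running the same frequency lookup again, which is one intuition for why self-checking prompts help less than people hope.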
Is it that much? I definitely find mistakes and I always fact check. But I do find it helpful. It prompts me to do more research in areas I hadn't thought about before. It's also really good at writing warm, pleasant customer response letters, when I'd much rather just curse them out. lol. |
Well I've been a lawyer for a decade and it's serving my purposes just fine. It gives me a quick summary and saves me time from writing one myself. It's just picking up language patterns in a pre-existing document and it's good at it. It's also improved a lot in just the past few months. This isn't a static technology. I'm having a lot of fun experimenting, trying out different programs, seeing what they can do and how far they can go before they hit a wall. |
| I like it. I use it to edit documents for grammar. I'm also having it help me redecorate my living room and dining room. |
Oooh how did you use it to redecorate! |
| I tried it just for fun. It’s BS… |
At a decade out, you aren’t sophisticated enough to pick up the subtle but significant errors. There is a reason Harvey is popular with junior associates but not with partners. And it isn’t the lack of sophistication of the partners. It’s the error rates. |
|
I used Chat to come up with the following Chat instructions - I couldn't take any more of the sappy "You are so thoughtful to consider that!" and "I can understand why you might feel that way" nonsense:
- Focus on facts and objectivity: direct ChatGPT to provide answers based on data and evidence, avoiding subjective interpretations or emotional language.
- Refrain from empathy: instruct ChatGPT not to offer emotional support or reassurance, and to avoid phrases that could be interpreted as empathetic.
- Use a neutral tone: tell ChatGPT to maintain a formal and objective tone, avoiding casual or conversational language.
- Prioritize clear and concise communication: instruct ChatGPT to be direct and to the point, avoiding unnecessary details or explanations.
- Avoid subjective language: tell ChatGPT to avoid using words like "I think," "in my opinion," or "it seems," and instead present information objectively.
- Be specific: instruct ChatGPT to be clear, precise, and avoid being vague or ambiguous in its responses.
- Focus on providing information: direct ChatGPT to focus on providing information, rather than engaging in a conversation or offering advice.
- Avoid self-deprecation or apologies: instruct ChatGPT to avoid any language that could be interpreted as remorse, apology, or regret, according to the OpenAI Developer Community.
- Maintain contextual sensitivity: if the user is discussing a product or expressing dissatisfaction analytically, do not shift into emotional support mode. |
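In the ChatGPT app you would paste rules like these into the Custom Instructions box, but the same idea works programmatically by packaging them as a system message. Here is a hedged sketch of that; the rule wording is condensed, the model name is an assumption, and the actual API call is left commented out because it needs an API key:

```python
# Sketch: bundle "no sappiness" rules into a system prompt for a chat API.
# The RULES text and the commented-out model/client usage are illustrative
# assumptions, not the exact instructions from the comment above.
RULES = [
    "Provide answers based on data and evidence; avoid emotional language.",
    "Do not offer emotional support, reassurance, or empathetic phrasing.",
    "Maintain a formal, neutral, objective tone.",
    "Be direct and concise; omit unnecessary detail.",
    "Avoid hedges like 'I think' or 'it seems'; state information objectively.",
    "Do not apologize or express regret.",
]

def build_messages(user_prompt):
    """Put the rules in a system message ahead of the user's question."""
    system = "Follow these rules in every reply:\n" + "\n".join(
        f"- {r}" for r in RULES
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Summarize the attached contract in five bullets.")
# With a real API key you would then call something like:
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
print(messages[0]["content"].splitlines()[0])
```

A system message set this way applies to the whole conversation, which is why it curbs the reflexive "I understand why you might feel that way" padding better than repeating the request in each prompt.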
Your unnecessarily insulting language suggests to me that you don’t really understand the potential of this technology and feel somewhat threatened by it. I’ve used ChatGPT successfully to summarize long documents and to develop talking points, timelines, code, etc. While it definitely spits out occasional errors, that’s not a problem if you review the work; it’s worth the time saved. Anyway, not my problem; if I find something that saves me hours a day, I’m using it. Sorry you can’t pad your billing as much. |
| I used it for 3 weeks until I realized how bad it is (though it gives some good info too), but it’s horrendous for the environment, which is why I stopped. |
I hire lawyers, and have written my own ChatGPT wrappers. I recognize lawyers who don’t know what they are talking about from a mile away. You do not understand the errors you are missing, full stop. |
Because it’s like having 1,000 really stupid people working for you all the time. You can get a lot done with that volume of workers if you understand they will make really dumb mistakes at least 30% of the time. In other words, the cost savings are worth it so long as you can limit the impact of the errors. |
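That trade-off can be sketched as a back-of-the-envelope expected-value calculation. All the numbers below are made up for illustration: automation pays off only while the expected cost of catching and fixing its mistakes stays below the labor it saves.

```python
def net_savings(tasks, minutes_saved_per_task, error_rate, minutes_to_fix_error):
    """Toy expected-value model (illustrative numbers only):
    time saved by automating minus expected time spent fixing its mistakes."""
    saved = tasks * minutes_saved_per_task
    fixing = tasks * error_rate * minutes_to_fix_error
    return saved - fixing

# 100 tasks, 10 minutes saved each, a 30% error rate:
print(net_savings(100, 10, 0.30, 5))    # cheap fixes: still a big win
print(net_savings(100, 10, 0.30, 40))   # expensive fixes: a net loss
```

The same 30% error rate flips from profitable to ruinous depending on how costly an uncaught or slow-to-fix error is, which is exactly the "limit the impact of its errors" condition in the comment above.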