Anonymous wrote:As someone very knowledgeable about chatbots and LLMs, I can confidently say that using them is unethical and people who use them are losers who should feel shame for outsourcing their thinking to a corporate machine.
Anonymous wrote:Anonymous wrote:I had a couple of abnormal results in a recent blood test; ChatGPT was actually pretty good for helping me understand the big picture.
The overall answer I got was "you should cut down on XYZ," but also that my numbers are still pretty common for my age and at the moment don't carry much larger meaning than that.
I love ChatGPT for medical stuff. It's diagnosed 2 illnesses my kids had that doctors couldn't.
Now when we're sick, I send the symptoms to ChatGPT, get a diagnosis and suggested treatment, then hop on a telehealth appointment, tell them what illness it is and what prescription I'd like, and they write the prescription. SO much better than going to urgent care.
Anonymous wrote:Anonymous wrote:Anonymous wrote:Anonymous wrote:People don’t often realize that you can set the tone of your individual ChatGPT to be warmer or more severe. You can actually train it to be more clinical and not give you fluff, too. And it’s programmed to be sycophantic so you’ll use it more. Buyer (or free user in most cases) beware!
Even when not sycophantic, it is frequently confidently wrong. I have deep expertise in one area, and both Gemini and ChatGPT are very wrong some of the time when I ask questions in that area, yet state their wrong answers with firm confidence.
It’s like the ultimate mansplainer. I have no idea why people trust it so much.
You're probably using the free version. If you work for a company that has a paid enterprise version, there has been a big leap in accuracy and a reduction in hallucinations. You can also instruct it not to guess or hallucinate.
No, paid. And I have instructed it not to hallucinate. I still catch very serious errors.
It is incomprehensible to me how much people blindly trust LLMs.
Anonymous wrote:Anonymous wrote:People don’t often realize that you can set the tone of your individual ChatGPT to be warmer or more severe. You can actually train it to be more clinical and not give you fluff, too. And it’s programmed to be sycophantic so you’ll use it more. Buyer (or free user in most cases) beware!
Even when not sycophantic, it is frequently confidently wrong. I have deep expertise in one area, and both Gemini and ChatGPT are very wrong some of the time when I ask questions in that area, yet state their wrong answers with firm confidence.
It’s like the ultimate mansplainer. I have no idea why people trust it so much.
Anonymous wrote:What do people mean when they say they are asking ChatGPT something? Do you just mean the Google search bar and there is an AI response? Or is this some separate website/program? Clearly I don't do this.
I occasionally read the AI response to my Google searches and wouldn't trust it at all. Sometimes I know enough to know it's wrong about something. Sometimes I try to look for its sources, and they either don't exist or don't say what the AI claims they say. Why are people using this thing?
Anonymous wrote:Anonymous wrote:Anonymous wrote:People don’t often realize that you can set the tone of your individual ChatGPT to be warmer or more severe. You can actually train it to be more clinical and not give you fluff, too. And it’s programmed to be sycophantic so you’ll use it more. Buyer (or free user in most cases) beware!
Even when not sycophantic, it is frequently confidently wrong. I have deep expertise in one area, and both Gemini and ChatGPT are very wrong some of the time when I ask questions in that area, yet state their wrong answers with firm confidence.
It’s like the ultimate mansplainer. I have no idea why people trust it so much.
You're probably using the free version. If you work for a company that has a paid enterprise version, there has been a big leap in accuracy and a reduction in hallucinations. You can also instruct it not to guess or hallucinate.
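[Editor's note: for readers curious what "setting the tone" or "instructing it not to guess" looks like in practice, here is a minimal sketch using the OpenAI Python SDK's chat format. The system prompt wording and the model name are illustrative assumptions, and such instructions reduce fluff but do not guarantee the model won't hallucinate.]

```python
def build_clinical_messages(question: str) -> list[dict]:
    """Build a chat payload whose system prompt requests a clinical,
    non-sycophantic tone and discourages guessing.

    The exact prompt wording here is an assumption, not an official
    recipe; instructions like this steer tone but cannot make the
    model reliably truthful.
    """
    system_prompt = (
        "Be clinical and concise. No pleasantries or flattery. "
        "If you are not confident in an answer, say so explicitly "
        "instead of guessing."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

# Sending the request (requires an OPENAI_API_KEY; shown for illustration):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",  # model name is an assumption
#     messages=build_clinical_messages("What does a high ALT level mean?"),
# )
# print(reply.choices[0].message.content)
```

The same idea is exposed in the ChatGPT app itself as "custom instructions," which prepend a standing preference like this to every conversation.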
Anonymous wrote:Anonymous wrote:Anonymous wrote:I had a couple of abnormal results in a recent blood test; ChatGPT was actually pretty good for helping me understand the big picture.
The overall answer I got was "you should cut down on XYZ," but also that my numbers are still pretty common for my age and at the moment don't carry much larger meaning than that.
I love ChatGPT for medical stuff. It's diagnosed 2 illnesses my kids had that doctors couldn't.
Now when we're sick, I send the symptoms to ChatGPT, get a diagnosis and suggested treatment, then hop on a telehealth appointment, tell them what illness it is and what prescription I'd like, and they write the prescription. SO much better than going to urgent care.
This doesn’t make sense to me. If nothing else, your doctors also have ChatGPT.
Anonymous wrote:Anonymous wrote:I had a couple of abnormal results in a recent blood test; ChatGPT was actually pretty good for helping me understand the big picture.
The overall answer I got was "you should cut down on XYZ," but also that my numbers are still pretty common for my age and at the moment don't carry much larger meaning than that.
Was that really helpful though? Of course we all know what we should cut down on.