Are you using Chat-GPT ALL THE TIME?

Anonymous
Anonymous wrote:What I love about people making blanket statements about GPT chatbots being stupid is that it is very reflective of the user. It talks to you in a way that tries to mirror your own thinking as your language expresses it.

And its default personality is provided by the developers, but it is highly adaptive to your instructions.


Yes exactly.
Anonymous
Literally never use it. It's SO bad for the environment.
Anonymous
Anonymous wrote:What I love about people making blanket statements about GPT chatbots being stupid is that it is very reflective of the user. It talks to you in a way that tries to mirror your own thinking as your language expresses it.

And its default personality is provided by the developers, but it is highly adaptive to your instructions.


NP. Getting facts flat wrong is not reflective of the user. You ask it a straightforward legal question, for example, and it often gets it completely wrong. That's not user error.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:No. It's not ethical to use it ALL THE TIME. Very bad for the environment and the theft of IP has been outrageous.

And it hallucinates. A lot.


Not really accurate


It is 100% accurate.


No it isn’t and you’re clearly a low information person


FFS. There are massive issues with regard to energy usage and with regard to all of the intellectual property theft that has occurred to train LLMs. This is in the news constantly. Not sure how you could be this ignorant. But that is DCUM ... extreme ignorance + aggression. Do you think that if you are offensive and aggressive enough it will be ok that you are wrong? Hmmm, where are you getting that tactic from, lol?



I asked Gemini. The reply:

Given the "DCUM" reference and the AI topic, they are likely accusing the person of getting their tactics from Elon Musk or Donald Trump.


Anonymous
Anonymous wrote:
Anonymous wrote:What I love about people making blanket statements about GPT chatbots being stupid is that it is very reflective of the user. It talks to you in a way that tries to mirror your own thinking as your language expresses it.

And its default personality is provided by the developers, but it is highly adaptive to your instructions.


NP. Getting facts flat wrong is not reflective of the user. You ask it a straightforward legal question, for example, and it often gets it completely wrong. That's not user error.


This is true, especially about topics that have a lot of wrong answers published. But if you put on your critical thinking hat and ask for references and clarification, you get through to ground truth.

Also, legal questions are rarely straightforward, and the answers often vary by jurisdiction.

Anonymous
If I’m doing research on something I’ll use my good old Google finger. I can find accurate answers fairly quickly. I have used several types of AI but I take their results with a giant block of salt. I just don’t trust it. Maybe I will later but not now.

One example of AI being wrong:

A friend of mine is an author. They use a pen name when writing books. They’ve gone further and used AI to create an online presence of one of their made up pen names. They made a picture, a web page and a backstory…all using AI. When I ask AI if the pen name is a real person, AI responds “yes, this is a real person”. 🤪
Anonymous
Anonymous wrote:No. It's not ethical to use it ALL THE TIME. Very bad for the environment and the theft of IP has been outrageous.

And it hallucinates. A lot.


Humans hallucinate a lot too.
Anonymous
Anonymous wrote:
Anonymous wrote:What I love about people making blanket statements about GPT chatbots being stupid is that it is very reflective of the user. It talks to you in a way that tries to mirror your own thinking as your language expresses it.

And its default personality is provided by the developers, but it is highly adaptive to your instructions.


NP. Getting facts flat wrong is not reflective of the user. You ask it a straightforward legal question, for example, and it often gets it completely wrong. That's not user error.


Post your question. Come on
Anonymous
Anonymous wrote:Never used it.


I used it once for work, it got the answer wrong, and it made me feel better that my job won't become obsolete.
Anonymous
Anonymous wrote:If I’m doing research on something I’ll use my good old Google finger. I can find accurate answers fairly quickly. I have used several types of AI but I take their results with a giant block of salt. I just don’t trust it. Maybe I will later but not now.

One example of AI being wrong:

A friend of mine is an author. They use a pen name when writing books. They’ve gone further and used AI to create an online presence of one of their made up pen names. They made a picture, a web page and a backstory…all using AI. When I ask AI if the pen name is a real person, AI responds “yes, this is a real person”. 🤪


Well, yes. Because you’re trying to trick the model and not asking a better prompt
Anonymous
I have yet to find a need for it.
Anonymous
Anonymous wrote:
Anonymous wrote:What I love about people making blanket statements about GPT chatbots being stupid is that it is very reflective of the user. It talks to you in a way that tries to mirror your own thinking as your language expresses it.

And its default personality is provided by the developers, but it is highly adaptive to your instructions.


NP. Getting facts flat wrong is not reflective of the user. You ask it a straightforward legal question, for example, and it often gets it completely wrong. That's not user error.


I don’t think you understand how AI works.
Anonymous
Anonymous wrote:It nailed college recommendations for my kid. I inputted his stats, personality, strengths vs. weaknesses, interests, desired major and got a better list than I got from our college counselor.


How do you know that? Did he apply to all the colleges ChatGPT recommended as well as all the colleges the counselor recommended, and did he get into more of the ones from ChatGPT?
Anonymous
Anonymous wrote:
Anonymous wrote:If I’m doing research on something I’ll use my good old Google finger. I can find accurate answers fairly quickly. I have used several types of AI but I take their results with a giant block of salt. I just don’t trust it. Maybe I will later but not now.

One example of AI being wrong:

A friend of mine is an author. They use a pen name when writing books. They’ve gone further and used AI to create an online presence of one of their made up pen names. They made a picture, a web page and a backstory…all using AI. When I ask AI if the pen name is a real person, AI responds “yes, this is a real person”. 🤪


Well, yes. Because you’re trying to trick the model and not asking a better prompt


I asked “Is ‘Joey Penname’ a real person”. I wasn’t trying to trick it.

What would be a better question?
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:If I’m doing research on something I’ll use my good old Google finger. I can find accurate answers fairly quickly. I have used several types of AI but I take their results with a giant block of salt. I just don’t trust it. Maybe I will later but not now.

One example of AI being wrong:

A friend of mine is an author. They use a pen name when writing books. They’ve gone further and used AI to create an online presence of one of their made up pen names. They made a picture, a web page and a backstory…all using AI. When I ask AI if the pen name is a real person, AI responds “yes, this is a real person”. 🤪


Well, yes. Because you’re trying to trick the model and not asking a better prompt


I asked “Is ‘Joey Penname’ a real person”. I wasn’t trying to trick it.

What would be a better question?


“Can you independently verify that this is a real person, not a fictional or AI-generated persona?”
