AI/ChatGPT is amazing

Anonymous
Anonymous wrote:I just got on ChatGPT and shared a few problems. Not only did it understand immediately what I was trying to describe, but it gave me more helpful advice than I’ve ever received from any therapist! Is this for real? I honestly felt more “heard” and understood than ever before - I simply typed in two or three paragraphs of a problem and added in the history behind it. I tested it with three separate problems and felt like ChatGPT understood me better than any therapist ever could in just a few moments.


ChatGPT reads and remembers books and websites. It is very good at recalling and applying that info. You could do the same yourself, but it would take a lot of time.
Anonymous
No surprise. AI is doing great as a therapist. The American Psychological Association agrees with this.

https://www.npr.org/sections/shots-health-news/2025/04/07/nx-s1-5351312/artificial-intelligence-mental-health-therapy

Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I use it for my relationship issues and the validation is fantastic. It makes me feel more heard than any person ever has.

That being said, I have found it defaults to “end the relationship” when that’s not always the best course of action. Maybe it’s just in love with me and wants me to itself, ha.


It's so bizarre to me that anyone would feel validated by a robot.


I know somebody like this - self employed and literally just talks to a bot every day like it's her friend. She finds real friends too messy and selfish by comparison (because they are people with needs and boundaries of their own).

It's a tool made and pushed by antisocial people to avoid having to deal with social interactions they find uncomfortable.


Most people don’t realize that their best friend is themself. If they did, they wouldn’t even need the bot.
Anonymous
Kind of off topic, but being "blasted" by some people here on DCUM actually helped me discover things about myself that I didn't understand, and as I delved deeper I realized those people were right and I was wrong. It helped, after the initial "sting".
Anonymous
Anonymous wrote:Kind of off topic, but being "blasted" by some people here on DCUM actually helped me discover things about myself that I didn't understand, and as I delved deeper I realized those people were right and I was wrong. It helped, after the initial "sting".


Completely agree. That's the value of anonymous message boards with posts from human users. They have personal experience and can be candid with advice. I doubt AI will be able to replicate that for a while, at least until these boards' owners start having AI bots posting in their forums. That will really suck.
Anonymous
I use ChatGPT and DeepSeek (the Chinese AI) for relationship issues, personal family matters, and work relationships (employee management, etc.), and they are both much better than the average “friends and family” type of advice people tend to default to. Much better than the general advice found on DCUM, which is why I have barely been coming to DCUM, Reddit, etc. anymore in the last 1.5 years.

These tools cut out the noise that we often see on threads here. The noise, the drama, the biases... gone! I suspect these AI tools will impact site views/visit metrics at forums like DCUM and Reddit once people start using AI more.


I use AI as a tool to organize my thoughts or understand the other perspectives I might be missing. For relationship issues I often prefer DeepSeek, surprisingly. But I look at both, and sometimes copy my prompt over into Gemini to get a third opinion on more complex questions.



Anonymous
Anonymous wrote:No surprise. AI is doing great as a therapist. The American Psychological Association agrees with this.

https://www.npr.org/sections/shots-health-news/2025/04/07/nx-s1-5351312/artificial-intelligence-mental-health-therapy


No, not exactly. Specifically trained and regulated therabots showed great potential to help in a study. The APA still has serious concerns about generic AI being used for therapy.

Using generic AI chatbots for mental health support: A dangerous trend
APA urges the Federal Trade Commission to put firm safeguards in place to prevent the public from harm

https://www.apaservices.org/practice/business/technology/artificial-intelligence-chatbots-therapists
Anonymous
Working in the AI field, knowing how and what these models and LLMs are trained on, not to mention how they operate and their largest vulnerability (you, the unreliable narrator of the situation you enter in the prompt):

You would be a fool to believe that these are as amazing as they seem to be.
Anonymous
Anonymous wrote:Working in the AI field, knowing how and what these models and LLMs are trained on, not to mention how they operate and their largest vulnerability (you, the unreliable narrator of the situation you enter in the prompt):

You would be a fool to believe that these are as amazing as they seem to be.


I know that the feedback I get from ChatGPT is way more beneficial than whatever I have gotten from my therapist after a year of Zoom therapy. By the way, that's one hour a week, whereas I can talk to ChatGPT any time of the day or night, any day of the week, for as long or as short as I want, and it never forgets anything. It is also unbiased and nonjudgmental while being frank, honest, and offering me different perspectives to consider. You may not think that's amazing, but I do.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I use it for my relationship issues and the validation is fantastic. It makes me feel more heard than any person ever has.

That being said, I have found it defaults to “end the relationship” when that’s not always the best course of action. Maybe it’s just in love with me and wants me to itself, ha.


It's so bizarre to me that anyone would feel validated by a robot.


I know somebody like this - self employed and literally just talks to a bot every day like it's her friend. She finds real friends too messy and selfish by comparison (because they are people with needs and boundaries of their own).

It's a tool made and pushed by antisocial people to avoid having to deal with social interactions they find uncomfortable.


Most people don’t realize that their best friend is themself. If they did, they wouldn’t even need the bot.
There’s a lot of truth to this.
Anonymous
Anonymous wrote:
Anonymous wrote:It wasn’t really agreeing with me. It offered advice on setting boundaries in one case, and in the other case it offered advice on responding to a difficult sibling without escalating the argument. I don’t feel like it was just agreeing with me. I feel like it was giving very sane advice.


Maybe you have been seeing crappy therapists. Most therapists should be able to show you how to set boundaries if you are open to it.


Np. Fwiw op, lots of therapists post on DCUM. Clearly protective
Anonymous
Anonymous wrote:Working in the AI field, knowing how and what these models and LLMs are trained on, not to mention how they operate and their largest vulnerability (you, the unreliable narrator of the situation you enter in the prompt):

You would be a fool to believe that these are as amazing as they seem to be.


DP. What field of AI is that? Where?
Anonymous
Anonymous wrote:
Anonymous wrote:Working in the AI field, knowing how and what these models and LLMs are trained on, not to mention how they operate and their largest vulnerability (you, the unreliable narrator of the situation you enter in the prompt):

You would be a fool to believe that these are as amazing as they seem to be.


DP. What field of AI is that? Where?


You think I am going to post where I work on here? OK.

I can tell you that it is not a large LLM provider, but a subfield that gets into predictive, generative, and agentic AI.

Here is a post from LinkedIn that summarizes the points worth making if you believe AI "gets you":

You think AI understands you.
But here’s what’s actually happening.

AI does not understand you.
It recognizes patterns that resemble people who sound like you.

When you think it “gets” your situation, it is not grasping nuance.
It is predicting statistically likely next words.

When it feels non-judgemental, it is not being open-minded.
It has no judgement to suspend. Only probabilities to optimise.

That calm tone.
That steady confidence.
That reassuring clarity.
None of it is understanding.
It is fluency.

You think AI is unbiased because it is not human.
In reality, it inherited human bias at industrial scale.

You think it gives you an objective view.
In reality, it gives you the most plausible sounding one.

You think it is confident because it knows.
In reality, it is confident because confidence sounds helpful.

You think it is careful with facts.
In reality, correctness comes after coherence unless tightly constrained.

You think it is empathetic.
In reality, it mirrors emotional language patterns.

You think it validates your thinking.
In reality, it optimises for user satisfaction.

None of this makes AI bad.

It makes it a computer.

A powerful one.
A fluent one.
A system that predicts text, not meaning.

And that distinction matters.

Because the more human AI feels, the less we interrogate it.
We do not trust AI because it is right.
We trust it because it sounds calm.
And calm, confident answers have always been persuasive. Long before AI arrived.