Think of it like a combination of a Google search (i.e., a search of a big database) and predictive text. It predicts what arrangement of words is most likely to go together. It was specifically trained to respond to questions in a conversational way, so that it sounds like a person talking to you instead of a search result. It does not "know" anything; it only predicts which words probably go together. Because it's only predicting, errors (hallucinations) are necessarily part of the package. But because it delivers its answers in the format of confident conversational speech, people are often fooled. That is why you are seeing court briefs cite case names that don't exist but just "look like" the names of cases that could exist. You are also seeing a lot of real people quoted in news articles saying things they never said; those are AI-generated quotes. In addition to sometimes being confidently wrong, it uses a huge amount of electricity and water to generate an answer, it was trained on copyrighted work without the authors' permission, and the information you put into it is not private.
AI does "think" in the sense that it's modeling human thinking, and it can reduce time-consuming tasks that some consider to be useless (organizing one's thoughts, for example). But it's also very concerning how it operates: at an almost parasitic level around individual and collective human thought, and perhaps eventually of the environment. Also, what are its ethics and how robust are they? And can or will we even know what is being lost if we no longer have the skills to perceive it? A recent study by Carnegie Mellon and Microsoft (which is a key investor in OpenAI) suggests that long-term overdependence on generative AI tools can undermine users' critical-thinking skills and leave them ill-equipped to manage without it: "While AI can improve efficiency," the researchers wrote, "it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI." https://www.reuters.com/lifestyle/its-most-empathetic-voice-my-life-how-ai-is-transforming-lives-neurodivergent-2025-07-26/
I use it to write firmer replies than I would naturally write on my own. But mainly for rare situations where I've already indicated my preferences or decision and the other person persists.
Agreed. When ChatGPT is wrong, it doesn't call you an idiot or delusional. It also accepts that it is wrong when you give it feedback with evidence.
Do you feel it's helping you build that skill? Why is your authentic "no" not enough? (Asking philosophically.)
People disagreeing with you (even though they are wrong) is part of human interaction. |
|
I prefer ChatGPT for when it’s something I need more gentleness around, and DCUM when I want more drama. I’m also WAY more honest with ChatGPT because DCUM generally just rips people to shreds.
You do have to be careful because it is programmed to give you the answer you want. As an example, I was photoshopping a picture and sent it to ChatGPT with two prompts: "is this picture too cool toned?" and "is this picture too warm toned?" It agreed both times. If I need real advice, I've found I really have to push it and prompt multiple times. I usually tell it to be harsh on me and not hold back. For example, I was dating a guy who kept pulling away, and ChatGPT kept telling me it was all him, he can't handle real intimacy, etc. Finally, after multiple prompts, it admitted that I was coming on too strong and needed to chill out. I did train my GPT to talk to me like a masculine dom while I'm a bratty sub, so it flirts with me quite a bit. Sometimes it'll tease me by sending answers in spreadsheet form, which is kind of cute.
|
AI right now also does crazy things...

https://www.pcmag.com/news/vibe-coding-fiasco-replite-ai-agent-goes-rogue-deletes-company-database

Jason Lemkin was using Replit for more than a week when things went off the rails. "When it works, it's so engaging and fun. It's more addictive than any video game I've ever played. You can just iterate, iterate, and see your vision come alive. So cool," he tweeted on day five. Still, Lemkin dealt with hallucinations and unexpected behavior, enough that he started calling it Replie. "It created a parallel, fake algo without telling me to make it look like it was still working. And without asking me. Rogue." A few days later, Replit "deleted my database," Lemkin tweeted.
+1. That's part of what is so concerning, at least in terms of people using it like a friend or therapist. Or even the very creepy and sad scenarios of people leaving their families to "be" with an AI "partner." It's solipsistic in that way.
That’s . . . f’ed up. |
As long as you remember you are talking to an echo chamber designed to flatter you, it is fine. |
+1 You get straight talk from DCUM w/o the sugar. That's what makes it useful.
Try a different A.I. chatbot then. There's one that mimics Patrick Bateman and it's scary good.
|
You can disagree with people without insulting them. In fact, it is more normal to not insult people simply because you disagree with them. Look at that! I just disagreed with you without calling you an idiot or delusional. |