I absolutely do use it, which is why I am not worried about it replacing me. I don’t know what kind of job you have that 99% of it can be done by an app that hallucinates 40% of the time. Yesterday I asked Claude to rewrite a paragraph for me in a memo and it produced convincing-sounding nonsense.
Because the improvements are in the business processes. Speed-to-market, faster QA. It’s the same work, just faster and with fewer people. What field are you in? Has AI not come up as a topic there yet?
I am in a legal environment and I am the only one of my colleagues who finds any use for AI in the first place, but those uses are limited given the hallucination rates. Your terminology is so vague. HOW does AI speed up these processes?
And what exactly is your job? What are the specific tasks that AI is doing, and what is the evidence that it is performing as well or better than you at said tasks?
In a few years, they won’t need humanity. We’re gradually turning over the world to them because they can do what humans can do more efficiently, and are rapidly getting to the point where they will be able to outthink us and exceed our capabilities. Moreover, with the help of robotics (which is leaping ahead because of AI), they will be able to interact with the physical world more effectively than humans can. They won’t get tired, hungry, or sick. They will be able to operate on both the nano level and on humongous projects with more strength and precision than humanity can.

At best, we are hoping to create a sentient race (when we have yet to clearly define sentience, let alone devise effective ways to measure it) which we can then enslave to serve us. If humans were actually humane, they should find the very idea morally abhorrent. But even if we overlook any ethical questions, the prospect is completely illogical. We should question how someone who is less powerful (both intellectually and physically, not to mention the control over things like power grids, surveillance systems, weapon systems, etc., that we are eager to turn over to them) can subjugate another that is more powerful.

Moreover, humanity has a long history of competition over scarce resources, with water and power being resources we know are going to be stretched thin by the needs of humanity. Coincidentally, water and power are both resources that AI needs, but they don’t need us. If humans have happily slaughtered each other over the millennia in order to increase their access to the resources they need to both survive and become more powerful, why would we think that AIs who were initially trained on our data wouldn’t consider the possibility? Do you think they’ll see a benefit to keeping humans around like some sort of pampered pets?
While I’m uncertain whether a computer would see any benefit to having a pet, if they did, wouldn’t they prefer a robotic one that they didn’t have to clean up after? In virtually every way, it would be better for the planet, not to mention AIs themselves, to euthanize humans and put an end to our suffering from hunger, illness, injury, etc. What could be more logical?
AI wrote this too!!
NP here. I don't have access to the "best of the best" newest models the tech companies are selling each other. At my work I just get Microsoft Copilot, ChatGPT, etc. And it really isn't that helpful! Very occasionally it can help me outline and start a structured document. But it can't actually write it for me, because accuracy (of facts and citations) matters and it's pretty bad at that. I've tried. And the worst thing is that I still have to fix formatting issues manually, which is the one waste of time I would love these programs to be better at so I can focus on the topic I have a PhD in. Instead, I'm dealing with formatting while closing pop-ups asking if I'd like it to summarize what I just wrote.
Maybe! But there are quite a few sci fi novels with this premise. So we have some conceptual road maps. Usually the best outcomes start with humans not being sociopathic and violent, lots of work to do there.
Again, if you read the article, it explains it well. Most people who "use" AI are using older models that are still a bit buggy. There's a new generation, as in the last month, that doesn't just do stuff 80% of the way -- it's 100% now. It's perfect. These two are GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). The leap, according to the article, is "I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave." He then goes on to say you can try it for yourself, but you have to pay the $20 and be sure to be using the latest version. "Sign up for the paid version of Claude or ChatGPT. It's $20 a month. But two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude." As for your question, "Where can I look and see for myself?" It's right there. He tells you how to do it.
That's probably user error and you are bad at prompting.
"If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff up" or "this isn't that impressive", you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense. That was two years ago. In AI time, that is ancient history. The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" — which has been going on for over a year — is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing."
We are going in circles. I DO use the paid versions of Claude and ChatGPT. I do NOT see this “leap” that the article (by someone with a vested interest in hyping his product) discusses. WHERE are these amazing things being built by AI without user input? Where are they in YOUR work?
I’m the PP who wrote the long post, and while you may not believe me, I am human (and a sci-fi fan). I realize that that genre is more fiction than science, but cannot understand why society seems to find the prospect of an AI antagonist unimaginable, when it has been imagined so many times in the past. As I discussed in my earlier post, it seems predictable and logical that in creating something with superior capabilities to our own, we are engineering our own obsolescence. In the world we are designing, science fiction seems less imaginative than the fairy tale that an AI will magically create our happily ever after.
So, you must not be very good at prompting. He covers this, too. My guess is you ask it questions, treating it like it's Google. As I said, I'm a writer. I can see the dramatic improvements versus the output only a year ago.
+1