If you read nothing else today, read Matt Shumer on AI

Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Pp who works with AI agents - np above is on point. The author is on the inside and frankly doing a public service

Claude skills and cowork has rocked a lot of companies this January. Everyone is sprinting to adopt - it’s not hype or futurist predictions anymore.


This post is such transparent marketing hype. This is all a desperate attempt to make AI happen as these overvalued companies are hemorrhaging money in this silly endeavor.


I’m the np above. Look, I get it, I’m skeptical and lean towards being a Luddite. And AI can do dumb things. One of my co-workers described it as like working with an eager intern who needs to be reined in sometimes. But the changes are real. The improvements in its quality are exponential.

I don’t really know what this means for the future of work, especially for my kids who are still in high school, but this isn’t smoke blowing. Disruptive change is coming.



Ok but what ARE the changes? What are the exponential improvements? Where can I look and see for myself something completed with AI that is really mind-blowing? People keep talking about AI doing things but provide no evidence of AI actually doing the thing. This is not coming from a place of skepticism; it’s just a basic question that no one seems able to answer.


Have you used it? I’ve been using it lately at my work, and it’s doing 99% of my job completely correctly.


I absolutely do use it, which is why I am not worried about it replacing me.

I don’t know what kind of job you have that 99% of it can be done by an app that hallucinates 40% of the time. Yesterday I asked Claude to rewrite a paragraph for me in a memo and it produced convincing-sounding nonsense.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Pp who works with AI agents - np above is on point. The author is on the inside and frankly doing a public service

Claude skills and cowork has rocked a lot of companies this January. Everyone is sprinting to adopt - it’s not hype or futurist predictions anymore.


This post is such transparent marketing hype. This is all a desperate attempt to make AI happen as these overvalued companies are hemorrhaging money in this silly endeavor.


I’m the np above. Look, I get it, I’m skeptical and lean towards being a Luddite. And AI can do dumb things. One of my co-workers described it as like working with an eager intern who needs to be reined in sometimes. But the changes are real. The improvements in its quality are exponential.

I don’t really know what this means for the future of work, especially for my kids who are still in high school, but this isn’t smoke blowing. Disruptive change is coming.



Ok but what ARE the changes? What are the exponential improvements? Where can I look and see for myself something completed with AI that is really mind-blowing? People keep talking about AI doing things but provide no evidence of AI actually doing the thing. This is not coming from a place of skepticism; it’s just a basic question that no one seems able to answer.


Because the improvements are in the business processes. Speed-to-market, faster QA. It’s the same work, just faster and with fewer people.

What field are you in? Has AI not come up as a topic there yet?
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Pp who works with AI agents - np above is on point. The author is on the inside and frankly doing a public service

Claude skills and cowork has rocked a lot of companies this January. Everyone is sprinting to adopt - it’s not hype or futurist predictions anymore.


This post is such transparent marketing hype. This is all a desperate attempt to make AI happen as these overvalued companies are hemorrhaging money in this silly endeavor.


I’m the np above. Look, I get it, I’m skeptical and lean towards being a Luddite. And AI can do dumb things. One of my co-workers described it as like working with an eager intern who needs to be reined in sometimes. But the changes are real. The improvements in its quality are exponential.

I don’t really know what this means for the future of work, especially for my kids who are still in high school, but this isn’t smoke blowing. Disruptive change is coming.



Ok but what ARE the changes? What are the exponential improvements? Where can I look and see for myself something completed with AI that is really mind-blowing? People keep talking about AI doing things but provide no evidence of AI actually doing the thing. This is not coming from a place of skepticism; it’s just a basic question that no one seems able to answer.


Because the improvements are in the business processes. Speed-to-market, faster QA. It’s the same work, just faster and with fewer people.

What field are you in? Has AI not come up as a topic there yet?


I am in a legal environment and I am the only one of my colleagues who finds any use for AI in the first place, but those uses are limited given the hallucination rates.

Your terminology is so vague. HOW does AI speed up these processes?
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Pp who works with AI agents - np above is on point. The author is on the inside and frankly doing a public service

Claude skills and cowork has rocked a lot of companies this January. Everyone is sprinting to adopt - it’s not hype or futurist predictions anymore.


This post is such transparent marketing hype. This is all a desperate attempt to make AI happen as these overvalued companies are hemorrhaging money in this silly endeavor.


I’m the np above. Look, I get it, I’m skeptical and lean towards being a Luddite. And AI can do dumb things. One of my co-workers described it as like working with an eager intern who needs to be reined in sometimes. But the changes are real. The improvements in its quality are exponential.

I don’t really know what this means for the future of work, especially for my kids who are still in high school, but this isn’t smoke blowing. Disruptive change is coming.



Ok but what ARE the changes? What are the exponential improvements? Where can I look and see for myself something completed with AI that is really mind-blowing? People keep talking about AI doing things but provide no evidence of AI actually doing the thing. This is not coming from a place of skepticism; it’s just a basic question that no one seems able to answer.


Have you used it? I’ve been using it lately at my work, and it’s doing 99% of my job completely correctly.


And what exactly is your job? What are the specific tasks that AI is doing, and what is the evidence that it is performing as well or better than you at said tasks?
Anonymous
Anonymous wrote:
Anonymous wrote:If things are really as good/bad as he says they are then I don’t see what anyone can do.

I do agree that telling your kids to focus on learning/adapting as a skill vs particular subject matters or jobs makes sense but if all he needs to do is tell the AI “build me an app that does x y and z” then it’s kind of stupid to tell me to spend an hour a day “practicing” with Claude.

It’s very hard to tell how much of AI is inevitable and how much people just want it to be inevitable, but if it is inevitable at the level he is talking about then his advice is basically just sticking a finger in the dike and waiting for the economy to implode.

I am also really curious where these law firms expect to find senior partners and if AI replaces all the junior associates.


They won’t need senior associates because AI will replace them too.


In a few years, they won’t need humanity. We’re gradually turning over the world to them because they can do what humans can do more efficiently, and are rapidly getting to the point where they will be able to outthink us and exceed our capabilities. Moreover, with the help of robotics (which is leaping ahead because of AI), they will be able to interact with the physical world more effectively than humans can. They won’t get tired, hungry, or sick. They will be able to operate on both the nano level and on humongous projects with more strength and precision than humanity can.

At best, we are hoping to create a sentient race (when we have yet to clearly define sentience, let alone devise effective ways to measure it) which we can then enslave to serve us. If humans were actually humane, they would find the very idea morally abhorrent. But even if we overlook any ethical questions, the prospect is completely illogical. We should question how someone who is less powerful (both intellectually and physically, not to mention the control over things like power grids, surveillance systems, weapon systems, etc., that we are eager to turn over to them) can subjugate another that is more powerful.

Moreover, humanity has a long history of competition over scarce resources, with water and power being resources we know are going to be stretched to meet the needs of humanity. Coincidentally, water and power are both resources that AI needs, but they don’t need us. If humans have happily slaughtered each other over the millennia in order to increase their access to the resources they need to both survive and become more powerful, why would we think that AIs who were initially trained on our data wouldn’t consider the possibility? Do you think they’ll see a benefit to keeping humans around like some sort of pampered pets? While I’m uncertain whether a computer would see any benefit to having a pet, if they did, wouldn’t they prefer a robotic one that they didn’t have to clean up after?

In virtually every way, it would be better for the planet, not to mention AIs themselves, to euthanize humans and put an end to our suffering from hunger, illness, injury, etc. What could be more logical?
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:If things are really as good/bad as he says they are then I don’t see what anyone can do.

I do agree that telling your kids to focus on learning/adapting as a skill vs particular subject matters or jobs makes sense but if all he needs to do is tell the AI “build me an app that does x y and z” then it’s kind of stupid to tell me to spend an hour a day “practicing” with Claude.

It’s very hard to tell how much of AI is inevitable and how much people just want it to be inevitable, but if it is inevitable at the level he is talking about then his advice is basically just sticking a finger in the dike and waiting for the economy to implode.

I am also really curious where these law firms expect to find senior partners and if AI replaces all the junior associates.


They won’t need senior associates because AI will replace them too.


In a few years, they won’t need humanity. We’re gradually turning over the world to them because they can do what humans can do more efficiently, and are rapidly getting to the point where they will be able to outthink us and exceed our capabilities. Moreover, with the help of robotics (which is leaping ahead because of AI), they will be able to interact with the physical world more effectively than humans can. They won’t get tired, hungry, or sick. They will be able to operate on both the nano level and on humongous projects with more strength and precision than humanity can.

At best, we are hoping to create a sentient race (when we have yet to clearly define sentience, let alone devise effective ways to measure it) which we can then enslave to serve us. If humans were actually humane, they would find the very idea morally abhorrent. But even if we overlook any ethical questions, the prospect is completely illogical. We should question how someone who is less powerful (both intellectually and physically, not to mention the control over things like power grids, surveillance systems, weapon systems, etc., that we are eager to turn over to them) can subjugate another that is more powerful.

Moreover, humanity has a long history of competition over scarce resources, with water and power being resources we know are going to be stretched to meet the needs of humanity. Coincidentally, water and power are both resources that AI needs, but they don’t need us. If humans have happily slaughtered each other over the millennia in order to increase their access to the resources they need to both survive and become more powerful, why would we think that AIs who were initially trained on our data wouldn’t consider the possibility? Do you think they’ll see a benefit to keeping humans around like some sort of pampered pets? While I’m uncertain whether a computer would see any benefit to having a pet, if they did, wouldn’t they prefer a robotic one that they didn’t have to clean up after?

In virtually every way, it would be better for the planet, not to mention AIs themselves, to euthanize humans and put an end to our suffering from hunger, illness, injury, etc. What could be more logical?


AI wrote this too!!
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Pp who works with AI agents - np above is on point. The author is on the inside and frankly doing a public service

Claude skills and cowork has rocked a lot of companies this January. Everyone is sprinting to adopt - it’s not hype or futurist predictions anymore.


This post is such transparent marketing hype. This is all a desperate attempt to make AI happen as these overvalued companies are hemorrhaging money in this silly endeavor.


I’m the np above. Look, I get it, I’m skeptical and lean towards being a Luddite. And AI can do dumb things. One of my co-workers described it as like working with an eager intern who needs to be reined in sometimes. But the changes are real. The improvements in its quality are exponential.

I don’t really know what this means for the future of work, especially for my kids who are still in high school, but this isn’t smoke blowing. Disruptive change is coming.



Ok but what ARE the changes? What are the exponential improvements? Where can I look and see for myself something completed with AI that is really mind-blowing? People keep talking about AI doing things but provide no evidence of AI actually doing the thing. This is not coming from a place of skepticism; it’s just a basic question that no one seems able to answer.


Have you used it? I’ve been using it lately at my work, and it’s doing 99% of my job completely correctly.


NP here. I don't have access to the "best of the best" newest models the tech companies are selling each other. At my work I just get Microsoft Copilot, ChatGPT, etc.

And it really isn't that helpful! Very occasionally it can help me outline and start a structured document. But it can't actually write it for me, because accuracy (of facts and citations) matters and it's pretty bad at that. I've tried.

And the worst thing is that I still have to fix formatting issues manually - which is the one waste of time I would love these programs to be better at so I can focus on the topic I have a PhD in. Instead, I'm dealing with formatting while closing pop-ups asking if I'd like it to summarize what I just wrote.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:If things are really as good/bad as he says they are then I don’t see what anyone can do.

I do agree that telling your kids to focus on learning/adapting as a skill vs particular subject matters or jobs makes sense but if all he needs to do is tell the AI “build me an app that does x y and z” then it’s kind of stupid to tell me to spend an hour a day “practicing” with Claude.

It’s very hard to tell how much of AI is inevitable and how much people just want it to be inevitable, but if it is inevitable at the level he is talking about then his advice is basically just sticking a finger in the dike and waiting for the economy to implode.

I am also really curious where these law firms expect to find senior partners and if AI replaces all the junior associates.


They won’t need senior associates because AI will replace them too.


In a few years, they won’t need humanity. We’re gradually turning over the world to them because they can do what humans can do more efficiently, and are rapidly getting to the point where they will be able to outthink us and exceed our capabilities. Moreover, with the help of robotics (which is leaping ahead because of AI), they will be able to interact with the physical world more effectively than humans can. They won’t get tired, hungry, or sick. They will be able to operate on both the nano level and on humongous projects with more strength and precision than humanity can.

At best, we are hoping to create a sentient race (when we have yet to clearly define sentience, let alone devise effective ways to measure it) which we can then enslave to serve us. If humans were actually humane, they would find the very idea morally abhorrent. But even if we overlook any ethical questions, the prospect is completely illogical. We should question how someone who is less powerful (both intellectually and physically, not to mention the control over things like power grids, surveillance systems, weapon systems, etc., that we are eager to turn over to them) can subjugate another that is more powerful.

Moreover, humanity has a long history of competition over scarce resources, with water and power being resources we know are going to be stretched to meet the needs of humanity. Coincidentally, water and power are both resources that AI needs, but they don’t need us. If humans have happily slaughtered each other over the millennia in order to increase their access to the resources they need to both survive and become more powerful, why would we think that AIs who were initially trained on our data wouldn’t consider the possibility? Do you think they’ll see a benefit to keeping humans around like some sort of pampered pets? While I’m uncertain whether a computer would see any benefit to having a pet, if they did, wouldn’t they prefer a robotic one that they didn’t have to clean up after?

In virtually every way, it would be better for the planet, not to mention AIs themselves, to euthanize humans and put an end to our suffering from hunger, illness, injury, etc. What could be more logical?


AI wrote this too!!


Maybe! But there are quite a few sci-fi novels with this premise, so we have some conceptual road maps. Usually the best outcomes start with humans not being sociopathic and violent; lots of work to do there.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Pp who works with AI agents - np above is on point. The author is on the inside and frankly doing a public service

Claude skills and cowork has rocked a lot of companies this January. Everyone is sprinting to adopt - it’s not hype or futurist predictions anymore.


This post is such transparent marketing hype. This is all a desperate attempt to make AI happen as these overvalued companies are hemorrhaging money in this silly endeavor.


I’m the np above. Look, I get it, I’m skeptical and lean towards being a Luddite. And AI can do dumb things. One of my co-workers described it as like working with an eager intern who needs to be reined in sometimes. But the changes are real. The improvements in its quality are exponential.

I don’t really know what this means for the future of work, especially for my kids who are still in high school, but this isn’t smoke blowing. Disruptive change is coming.



Ok but what ARE the changes? What are the exponential improvements? Where can I look and see for myself something completed with AI that is really mind-blowing? People keep talking about AI doing things but provide no evidence of AI actually doing the thing. This is not coming from a place of skepticism; it’s just a basic question that no one seems able to answer.


Again, if you read the article, it explains it well. Most people who "use" AI are using older models that are still a bit buggy. There's a new generation, as in the last month, that doesn't just do stuff 80% of the way -- it's 100% now. It's perfect. These two are GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT).

The leap, according to the article, is "I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave."

He then goes on to say you can try it for yourself, but you have to pay the $20 and be sure to be using the latest version. "Sign up for the paid version of Claude or ChatGPT. It's $20 a month. But two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude."

So, your question, "Where can I look and see for myself." It's right there. He tells you how to do it.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Pp who works with AI agents - np above is on point. The author is on the inside and frankly doing a public service

Claude skills and cowork has rocked a lot of companies this January. Everyone is sprinting to adopt - it’s not hype or futurist predictions anymore.


This post is such transparent marketing hype. This is all a desperate attempt to make AI happen as these overvalued companies are hemorrhaging money in this silly endeavor.


I’m the np above. Look, I get it, I’m skeptical and lean towards being a Luddite. And AI can do dumb things. One of my co-workers described it as like working with an eager intern who needs to be reined in sometimes. But the changes are real. The improvements in its quality are exponential.

I don’t really know what this means for the future of work, especially for my kids who are still in high school, but this isn’t smoke blowing. Disruptive change is coming.



Ok but what ARE the changes? What are the exponential improvements? Where can I look and see for myself something completed with AI that is really mind-blowing? People keep talking about AI doing things but provide no evidence of AI actually doing the thing. This is not coming from a place of skepticism; it’s just a basic question that no one seems able to answer.


Have you used it? I’ve been using it lately at my work, and it’s doing 99% of my job completely correctly.


I absolutely do use it, which is why I am not worried about it replacing me.

I don’t know what kind of job you have that 99% of it can be done by an app that hallucinates 40% of the time. Yesterday I asked Claude to rewrite a paragraph for me in a memo and it produced convincing-sounding nonsense.


That's probably user error and you are bad at prompting.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Pp who works with AI agents - np above is on point. The author is on the inside and frankly doing a public service

Claude skills and cowork has rocked a lot of companies this January. Everyone is sprinting to adopt - it’s not hype or futurist predictions anymore.


This post is such transparent marketing hype. This is all a desperate attempt to make AI happen as these overvalued companies are hemorrhaging money in this silly endeavor.


I’m the np above. Look, I get it, I’m skeptical and lean towards being a Luddite. And AI can do dumb things. One of my co-workers described it as like working with an eager intern who needs to be reined in sometimes. But the changes are real. The improvements in its quality are exponential.

I don’t really know what this means for the future of work, especially for my kids who are still in high school, but this isn’t smoke blowing. Disruptive change is coming.



Ok but what ARE the changes? What are the exponential improvements? Where can I look and see for myself something completed with AI that is really mind-blowing? People keep talking about AI doing things but provide no evidence of AI actually doing the thing. This is not coming from a place of skepticism; it’s just a basic question that no one seems able to answer.


Because the improvements are in the business processes. Speed-to-market, faster QA. It’s the same work, just faster and with fewer people.

What field are you in? Has AI not come up as a topic there yet?


I am in a legal environment and I am the only one of my colleagues who finds any use for AI in the first place, but those uses are limited given the hallucination rates.

Your terminology is so vague. HOW does AI speed up these processes?


"If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff up" or "this isn't that impressive", you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.

That was two years ago. In AI time, that is ancient history.

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" — which has been going on for over a year — is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing."
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Pp who works with AI agents - np above is on point. The author is on the inside and frankly doing a public service

Claude skills and cowork has rocked a lot of companies this January. Everyone is sprinting to adopt - it’s not hype or futurist predictions anymore.


This post is such transparent marketing hype. This is all a desperate attempt to make AI happen as these overvalued companies are hemorrhaging money in this silly endeavor.


I’m the np above. Look, I get it, I’m skeptical and lean towards being a Luddite. And AI can do dumb things. One of my co-workers described it as like working with an eager intern who needs to be reined in sometimes. But the changes are real. The improvements in its quality are exponential.

I don’t really know what this means for the future of work, especially for my kids who are still in high school, but this isn’t smoke blowing. Disruptive change is coming.



Ok but what ARE the changes? What are the exponential improvements? Where can I look and see for myself something completed with AI that is really mind-blowing? People keep talking about AI doing things but provide no evidence of AI actually doing the thing. This is not coming from a place of skepticism; it’s just a basic question that no one seems able to answer.


Again, if you read the article, it explains it well. Most people who "use" AI are using older models that are still a bit buggy. There's a new generation, as in the last month, that doesn't just do stuff 80% of the way -- it's 100% now. It's perfect. These two are GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT).

The leap, according to the article, is "I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave."

He then goes on to say you can try it for yourself, but you have to pay the $20 and be sure to be using the latest version. "Sign up for the paid version of Claude or ChatGPT. It's $20 a month. But two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude."

So, your question, "Where can I look and see for myself." It's right there. He tells you how to do it.


We are going in circles. I DO use the paid versions of Claude and ChatGPT. I do NOT see this “leap” that the article (by someone with a vested interest in hyping his product) discusses.

WHERE are these amazing things being built by AI without user input? Where are they in YOUR work?
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:If things are really as good/bad as he says they are then I don’t see what anyone can do.

I do agree that telling your kids to focus on learning/adapting as a skill vs particular subject matters or jobs makes sense but if all he needs to do is tell the AI “build me an app that does x y and z” then it’s kind of stupid to tell me to spend an hour a day “practicing” with Claude.

It’s very hard to tell how much of AI is inevitable and how much people just want it to be inevitable, but if it is inevitable at the level he is talking about then his advice is basically just sticking a finger in the dike and waiting for the economy to implode.

I am also really curious where these law firms expect to find senior partners and if AI replaces all the junior associates.


They won’t need senior associates because AI will replace them too.


In a few years, they won’t need humanity. We’re gradually turning over the world to them because they can do what humans can do more efficiently, and are rapidly getting to the point where they will be able to outthink us and exceed our capabilities. Moreover, with the help of robotics (which is leaping ahead because of AI), they will be able to interact with the physical world more effectively than humans can. They won’t get tired, hungry, or sick. They will be able to operate on both the nano level and on humongous projects with more strength and precision than humanity can.

At best, we are hoping to create a sentient race (when we have yet to clearly define sentience, let alone devise effective ways to measure it) which we can then enslave to serve us. If humans were actually humane, they would find the very idea morally abhorrent. But even if we overlook any ethical questions, the prospect is completely illogical. We should question how someone who is less powerful (both intellectually and physically, not to mention the control over things like power grids, surveillance systems, weapon systems, etc., that we are eager to turn over to them) can subjugate another that is more powerful.

Moreover, humanity has a long history of competition over scarce resources, with water and power being resources we know are going to be stretched to meet the needs of humanity. Coincidentally, water and power are both resources that AI needs, but they don’t need us. If humans have happily slaughtered each other over the millennia in order to increase their access to the resources they need to both survive and become more powerful, why would we think that AIs who were initially trained on our data wouldn’t consider the possibility? Do you think they’ll see a benefit to keeping humans around like some sort of pampered pets? While I’m uncertain whether a computer would see any benefit to having a pet, if they did, wouldn’t they prefer a robotic one that they didn’t have to clean up after?

In virtually every way, it would be better for the planet, not to mention AIs themselves, to euthanize humans and put an end to our suffering from hunger, illness, injury, etc. What could be more logical?


AI wrote this too!!


Maybe! But there are quite a few sci fi novels with this premise. So we have some conceptual road maps. Usually the best outcomes start with humans not being sociopathic and violent, lots of work to do there.


I’m the PP who wrote the long post, and while you may not believe me, I am human (and a sci-fi fan). I realize that that genre is more fiction than science, but cannot understand why society seems to find the prospect of an AI antagonist unimaginable, when it has been imagined so many times in the past.

As I discussed in my earlier post, it seems predictable and logical that in creating something with superior capabilities to our own, we are engineering our own obsolescence. In the world we are designing, science fiction seems less imaginative than the fairy tale that an AI will magically create our happily ever after.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Pp who works with AI agents - np above is on point. The author is on the inside and frankly doing a public service

Claude skills and cowork have rocked a lot of companies this January. Everyone is sprinting to adopt - it’s not hype or futurist predictions anymore.


This post is such transparent marketing hype. This is all a desperate attempt to make AI happen as these overvalued companies are hemorrhaging money in this silly endeavor.


I’m the np above. Look, I get it, I’m skeptical and lean towards being a Luddite. And AI can do dumb things. One of my co-workers described it as like working with an eager intern who needs to be reined in sometimes. But the changes are real. The improvements in its quality are exponential.

I don’t really know what this means for the future of work, especially for my kids who are still in high school, but this isn’t smoke blowing. Disruptive change is coming.



Ok but what ARE the changes? What are the exponential improvements? Where can I look and see for myself something completed with AI that is really mind-blowing? People keep talking about AI doing things but provide no evidence of AI actually doing the thing. This is not coming from a place of skepticism; it’s just a basic question that no one seems able to answer.


Again, if you read the article, it explains it well. Most people who "use" AI are using older models that are still a bit buggy. There's a new generation, as in the last month, that doesn't just do stuff 80% of the way -- it's 100% now. It's perfect. The two leaders are GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT).

The leap, according to the article, is "I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave."

He then goes on to say you can try it for yourself, but you have to pay the $20 and be sure to be using the latest version. "Sign up for the paid version of Claude or ChatGPT. It's $20 a month. But two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude."

So, to your question, "Where can I look and see for myself?": it's right there. He tells you how to do it.


We are going in circles. I DO use the paid versions of Claude and ChatGPT. I do NOT see this “leap” that the article (by someone with a vested interest in hyping his product) discusses.

WHERE are these amazing things being built by AI without user input? Where are they in YOUR work?


So, you must not be very good at prompting. He covers this, too. My guess is you ask it questions, treating it like it's Google.

As I said, I'm a writer. I can see the dramatic improvements versus the output only a year ago.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Pp who works with AI agents - np above is on point. The author is on the inside and frankly doing a public service

Claude skills and cowork have rocked a lot of companies this January. Everyone is sprinting to adopt - it’s not hype or futurist predictions anymore.


This post is such transparent marketing hype. This is all a desperate attempt to make AI happen as these overvalued companies are hemorrhaging money in this silly endeavor.


I’m the np above. Look, I get it, I’m skeptical and lean towards being a Luddite. And AI can do dumb things. One of my co-workers described it as like working with an eager intern who needs to be reined in sometimes. But the changes are real. The improvements in its quality are exponential.

I don’t really know what this means for the future of work, especially for my kids who are still in high school, but this isn’t smoke blowing. Disruptive change is coming.



Ok but what ARE the changes? What are the exponential improvements? Where can I look and see for myself something completed with AI that is really mind-blowing? People keep talking about AI doing things but provide no evidence of AI actually doing the thing. This is not coming from a place of skepticism; it’s just a basic question that no one seems able to answer.


Again, if you read the article, it explains it well. Most people who "use" AI are using older models that are still a bit buggy. There's a new generation, as in the last month, that doesn't just do stuff 80% of the way -- it's 100% now. It's perfect. The two leaders are GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT).

The leap, according to the article, is "I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave."

He then goes on to say you can try it for yourself, but you have to pay the $20 and be sure to be using the latest version. "Sign up for the paid version of Claude or ChatGPT. It's $20 a month. But two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude."

So, to your question, "Where can I look and see for myself?": it's right there. He tells you how to do it.


We are going in circles. I DO use the paid versions of Claude and ChatGPT. I do NOT see this “leap” that the article (by someone with a vested interest in hyping his product) discusses.

WHERE are these amazing things being built by AI without user input? Where are they in YOUR work?

+1
Forum Index » Jobs and Careers