Forum Index » Jobs and Careers » Reply to "If you read nothing else today, read Matt Shumer on AI"
[quote=Anonymous][quote=Anonymous][quote=Anonymous]If things are really as good/bad as he says they are, then I don’t see what anyone can do. I do agree that telling your kids to focus on learning/adapting as a skill, rather than on particular subjects or jobs, makes sense. But if all he needs to do is tell the AI “build me an app that does x, y, and z,” then it’s kind of stupid to tell me to spend an hour a day “practicing” with Claude. It’s very hard to tell how much of AI is inevitable and how much people just want it to be inevitable, but if it is inevitable at the level he is talking about, then his advice is basically just sticking a finger in the dike and waiting for the economy to implode. I am also really curious where these law firms expect to find senior partners if AI replaces all the junior associates.[/quote]

They won’t need senior partners because AI will replace them too.[/quote]

In a few years, they won’t need humanity. We’re gradually turning the world over to them because they can do what humans do more efficiently, and they are rapidly getting to the point where they will be able to outthink us and exceed our capabilities. Moreover, with the help of robotics (which is leaping ahead because of AI), they will be able to interact with the physical world more effectively than humans can. They won’t get tired, hungry, or sick. They will be able to operate both at the nano level and on humongous projects, with more strength and precision than humanity can muster.

At best, we are hoping to create a sentient race (when we have yet to clearly define sentience, let alone devise effective ways to measure it) which we can then enslave to serve us. If humans were actually humane, they would find the very idea morally abhorrent. But even if we overlook the ethical questions, the prospect is completely illogical.

We should question how someone who is less powerful (both intellectually and physically, not to mention in control of things like power grids, surveillance systems, and weapons systems, which we are eager to turn over to them) can subjugate another who is more powerful. Moreover, humanity has a long history of competing over scarce resources, and water and power are resources we know will be stretched to meet humanity’s needs. Coincidentally, water and power are both resources that AI needs, but AI doesn’t need us. If humans have happily slaughtered each other over the millennia to increase their access to the resources they need to survive and grow more powerful, why would we think that AIs initially trained on our data wouldn’t consider the same possibility?

Do you think they’ll see a benefit to keeping humans around as some sort of pampered pets? While I’m uncertain whether a computer would see any benefit in having a pet, if it did, wouldn’t it prefer a robotic one it didn’t have to clean up after? In virtually every way, it would be better for the planet, not to mention for the AIs themselves, to euthanize humans and put an end to our suffering from hunger, illness, injury, and the rest. What could be more logical?[/quote]