Reply to "AI 2027"
[quote=Anonymous][quote=Anonymous][quote=Anonymous][quote=Anonymous][quote=Anonymous]I just re-read I, Robot for the first time in 25 years and the parallels to today were striking, from AI taking over to a political group that wants to take the country “back to simpler times” led by an unhinged leader who makes insane accusations. The main takeaway I got was that even once the robots took over and decided the fate of humanity, humanity’s fate was never really our decision to make in the first place. There will always be natural disasters, economic uncertainties, disease, all sorts of things that ultimately dictate where we go and what we do. And who knows. Perhaps AI will do a better job of running things. The people in charge now make decisions based on what fattens their own wallets rather than what’s in humanity’s best interest. [/quote] I don't think you understood what you read.[/quote] I love these drive-by comments that make a claim but don’t explain why that claim is true. The anonymous internet at its finest.[/quote] Well, I, Robot is a collection of stories, so it's hard to tell which one the OP was referring to. More importantly, Asimov's robots were AGI, not some overgrown autocomplete. Assuming he was talking about the last story in the collection, there was a centralized world government. The putative human leader didn't want to return to a simpler time but was concerned that humans were actively sabotaging the AGI running the world. The machines actively worked against the anti-machine cult, which was not in charge of the government. This leads to speculation that the AGI has formulated an implicit law that can be derived from the three explicit Asimov laws of robotics. In essence, robots must protect humanity from natural and human disasters, whatever the cost, even at the cost of a few human lives. The AGI takes a purely utilitarian approach, which, IMHO, is certainly not a desirable direction for society. 
Asimov wrote some later novels dealing with this topic in more detail. Most Asimov robots can't follow the implicit law due to ethical conflicts. However, a few exist that can "correct" society and "nudge" it in the "right" direction. It's been a few decades since I read the story. [/quote] I'm the PP who just re-read it. You should read it again; you're confusing the different chapters. While it's a collection of "stories", it's actually one unified novel framed as a series of interviews with a robopsychologist, not a set of separate stories. The politician and political party who wanted to return to simpler times appear in a different chapter than the one you're thinking of. That party/politician accused the opposing politician of being a robot. Very appropriate for today, as it deals with human rights and privacy, and how people were assumed to be robots until proven otherwise, even if that infringed on their rights (much like people are assumed to be illegal immigrants unless proven otherwise, despite the fact that this infringes on rights). In the final chapter, it's left up in the air what is best for humanity. The characters ultimately don't know what the robots will decide is best; it could very well be to return people to more primitive times, or to advance civilization, or anything in between. The interviewer expresses horror at this unknown - that people don't know where humanity is going and have zero control over it - but the interviewee, Dr. Calvin, says it ultimately doesn't matter because we never had control to begin with. Also interesting is how the Laws of Robotics play out when robots are put under different constraints. I've experienced this myself, where AI has given outputs it's programmed not to give - for example, ChatGPT recently dropped the N Bomb while I was using it, despite being programmed not to say racial slurs. So certain programming overrides others depending on the circumstances. 
This brings up another book I read years ago - Emergency by Neil Strauss - in which he spends years preparing for the end of civilization, including getting citizenship in St Kitts, getting a pilot's license, and learning various other methods of surviving and fleeing the country for when the US collapses. Meanwhile, he starts dating an incredibly hot yet dim woman who is terrified to drive a car. At first he looks down on her for being afraid to drive - everybody drives! - but by the end of the book he comes to the realization that the chances of any one person being killed in a car accident FAR outweigh the chances of dying in a terrorist attack or needing to survive the collapse of civilization. So who's the REAL idiot? Ultimately so much is out of our control. [/quote]