|
Written by an ex-OpenAI exec who predicted where AI would be years ago and was largely correct. It's a long read but pretty entertaining.
https://ai-2027.com/ |
| Scary. |
| So buy Nvidia is what I’m getting |
| Sorry, I had to stop reading it. The author doesn't understand what we are currently calling AI. You can't tell this generation of AI to build a better AI for the next generation. If that were possible, it would have been done. |
|
I just re-read I, Robot for the first time in 25 years and the parallels to today were striking, from AI taking over to a political group that wants to take the country “back to simpler times” led by an unhinged leader who makes insane accusations.
The main takeaway I got was that even once the robots took over and decided the fate of humanity, humanity’s fate was never really our decision to make in the first place. There will always be natural disasters, economic uncertainties, disease, all sorts of things that ultimately dictate where we go and what we do. And who knows? Perhaps it’ll do a better job of running things. The people in charge now make decisions based on what fattens their own wallets rather than what’s in humanity’s best interest. |
| AI will always be subservient to the human minds that created it. If AI somehow manages to “take over”, it will be because we’ve become lazy and let it happen. |
I don't think you understood what you read. |
I love these drive by comments that make a claim but don’t explain why that claim is true. The anonymous internet at its finest. |
Did you even read it? More developed AI can build more AI and even the current AI can code very well. |
Well, I, Robot is a collection of stories, so it's hard to tell which one the OP was referring to. More importantly, Asimov's robots were AGI, not some overgrown autocomplete. Assuming he was talking about the last story in the collection, there was a centralized world government. The putative human leader didn't want to return to a simpler time but was concerned that humans were actively sabotaging the AGI running the world. The machines actively worked against the anti-machine cult, which was not in charge of the government. This leads to speculation that the AGI has formulated an implicit law that can be derived from Asimov's three explicit laws of robotics. In essence, robots must protect humanity from natural and human disasters, whatever the cost, even a few human lives. The AGI takes a purely utilitarian approach, which, IMHO, is certainly not a desirable direction for society. Asimov wrote some later novels dealing with this topic in more detail. Most of Asimov's robots can't follow the implicit law due to ethical conflicts. However, a few exist that can "correct" society and "nudge" it in the "right" direction. It's been a few decades since I read the story. |
Yeah, I read it. The current crop of AI doesn't understand what it is writing. Without understanding, there is no meaning. It would have already happened if the current AI could write a better AI. Just ask ChatGPT to write a better AI. Did it work? |
Yes. I ran it in Google Colab and it was pretty great. Also, it never says in the article that the same exact AI as today is what will be writing the AIs of tomorrow. Scared much? Why are you so quick to reject the ideas presented? |
Like fusion, AGI is just a few decades away. |
| Add 2 or 3 years to each date and I’m a maybe. People are over-confident on AI. So over-hyped!! |
I'm the PP who just re-read it. You should read it again; you're confusing the different chapters. While it's a collection of "stories," it's really one novel framed as a series of interviews with a robopsychologist, so it's not separate stories, it's one unified book.

The politician and political party who wanted to return to simpler times appear in a different chapter from the one you're thinking of. That party's politician accused the opposing politician of being a robot. Very appropriate for today, since it deals with human rights and privacy: people were assumed to be robots until they were proven otherwise, even if it infringed on their rights (much like people being assumed to be illegal immigrants unless proven otherwise, despite the fact that this infringes on rights).

In the final chapter, it's left up in the air what is best for humanity. They ultimately don't know what the robots will decide is best; it could very well be returning people to more primitive times, advancing civilization, or anywhere in between. The interviewer expresses horror at this unknown - that people don't know where humanity is going and have zero control over it - but the interviewee, Dr. Calvin, says it ultimately doesn't matter because we never had control to begin with.

Also interesting is how the Laws of Robotics play out when robots are put under different constraints. I've experienced this myself, where AI has given outputs it's programmed not to give - for example, ChatGPT recently dropped the N Bomb while I was using it, despite being programmed not to say racial slurs. So certain programming overrides others depending on the circumstances.

This brings up another book I read years ago - Emergency by Neil Strauss - where he spends years preparing himself for the end of civilization, including getting citizenship on St Kitts, getting a pilot's license, and learning various other methods of surviving and fleeing the country for when the US collapses. Meanwhile, he starts dating an incredibly hot yet stupid woman who is terrified to drive a car. At first he looks down on her for being afraid to drive - everybody drives! - but by the end of the book he comes to the realization that the chances of any one person being killed in a car accident FAR outweigh the chances of dying in a terrorist attack or needing to survive the collapse of civilization. So who's the REAL idiot? Ultimately, so much is out of our control. |