We can tell. Its grammar is atrocious. |
More likely to go the opposite way. https://time.com/5520558/artificial-intelligence-racial-gender-bias/ https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html https://nihcm.org/publications/artificial-intelligences-racial-bias-in-health-care |
Ironically, simple manual jobs are extremely hard for AI to replace. We still don't have a perfect Roomba, lol. And I can only dream of robotic housekeepers. My dishwasher can't even properly clean the dishes. And don't get me started on the antiquated computer systems and PITA integration tech at most companies. |
|
One electrical blackout and it's all gone |
Haha, totally. It's not OK to rely on something so fragile and so easy to wipe out. AI can be killed easily, while humans can survive, and have survived, for millions of years without electricity. AI is only as good as its training set: the data its learning algorithms use to form patterns and apply them to new inputs. For some things this works really well, such as where the problem space is well defined and AI can efficiently process tons of related information, sometimes producing very good results that surpass slow humans. AI has become more sophisticated because processing speed has increased exponentially and huge volumes of input data can be processed like never before. But I don't see that the fundamentals of how AI learns about the world have changed. In a vast, poorly defined problem space, AI struggles and stumbles. That's why it's so hard to replace the functions of a human housecleaner, for example. Even though it's a low-skill job for a human, it requires AI to learn exponentially more about the space, the objects in it, their properties and purposes, maneuvering, interacting, dealing with pets, etc. AI would practically have to become "human" in its understanding of the universe and objects, and also get very sophisticated sensors, to do this simple job. Natural language processing, where AI has made leaps and bounds, is "patternable," and the receiver of the information is also its interpreter. When AI "talks" to us, we tend to fill in the blanks and assign meaning, "humanizing" the entity on the other end. That's how communication works, and it's why it's easier for AI to pretend to be human and get away with it when language is involved. In everything AI-related, the quality of the training data determines how well AI "learns," and the universe of data AI gets is limited to what humans allow it to access.
Emergent conscious AI (arising from the network of interconnected objects and their unrelated information) is, for now, the domain of science fiction. |
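The "only as good as its training set" point can be sketched with a toy example (my own illustration, not something from this thread): a trivial 1-nearest-neighbor "model" looks competent on inputs resembling its training data, but on anything outside that range it can only echo the patterns it has already seen.

```python
# Toy sketch: a 1-nearest-neighbor classifier is entirely determined by its
# training set. It has no "understanding" beyond the examples it was given.

def nearest_neighbor(train, x):
    """Predict the label of x as the label of the closest training point."""
    closest = min(train, key=lambda pair: abs(pair[0] - x))
    return closest[1]

# Training set only covers a narrow slice of the number line.
train = [(1, "small"), (2, "small"), (3, "small"),
         (11, "large"), (12, "large"), (13, "large")]

# In-distribution inputs: sensible-looking answers.
print(nearest_neighbor(train, 2.5))    # prints "small"
print(nearest_neighbor(train, 12.5))   # prints "large"

# Out-of-distribution inputs: the model has no concept of "negative" or
# "enormous"; it just maps everything onto patterns it has seen before.
print(nearest_neighbor(train, -1000))  # prints "small"
print(nearest_neighbor(train, 10**6))  # prints "large"
```

The same limitation scales up: a more sophisticated model interpolates within its training data far better, but data it was never allowed to see still shapes what it cannot do.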
Absolutely SciFi, but I don’t mean androids that appear to be human. More in the sense that they could learn how to learn, and then behave in manners we can’t predict. Or be used in manners we can’t control. All of this stated… that’s not what ChatGPT is doing. As absolutely impressive as it is… it’s not doing what people seem to think it’s doing. |
Did AI go to law school with the prosecutor who will be willing to work out a deal and present it to the judge (who happened to have a cocktail with the prosecutor the night before at a bar event), who will then offer a reduced sentence to the reckless driver? Didn’t think so. Not going to be replaced with AI. The people in this thread have no idea what they’re talking about. |
The people on this thread are all the people you don’t want to get stuck talking to at a party. |
So basically law is about who your lawyer knows, not about fitting the appropriate punishment to the crime? I would love AI to replace lawyers and real estate agents. The most expensive and useless people on the planet. |
|
|
Yes to the bolded especially. My job involves research and writing about certain topics where existing information is limited, a lot of source data has not been digitized, and a lot of existing scholarly work is in either paywalled articles or recently published books that aren't free online. I.e., the training set is sufficiently limited that the "answers" AI can produce are wrong, AND they reproduce the biases of easily accessible information. That said, I have no concerns about being automated, but the pay in my field sucks and I won't encourage my kids to pursue it. My answer to worries that AI will replace white-collar jobs is that my kids should either go into something where physical presence is necessary, or... learn to design and program the AIs |
So a couple of things: - AI instances are geo-redundant cloud resources in data centers with long-term backup power. Yes, a blackout can make your Roomba not work at your house, but it doesn’t bring down Alexa. And I promise you that the current intensive DOD research in AI, which is moving forward with alarming speed, takes the need for electricity into account. - The concern about AI taking over the world — at least as I understand the CS leaders who are most concerned about this — is not SkyNet so much as it is concern about emergent, algorithm-driven, large-scale systems that interact with each other and have very negative impacts. That’s already been going on for a while. Think automated wire-service news, automated stock trading, and social media algorithms that amplify bot-generated misinformation. There’s been research suggesting that a large percentage of Twitter and forum arguments are AI arguing with AI, and when algorithms amplify this, suddenly online spaces are dominated by whatever generates the most engagement, which is usually hate. |
Yes and no. Relationship-based businesses (e.g., lobbying) aren't. Anything dealing with creativity (e.g., writing) really isn't. But all these people whose kids clamor to go to university for computer science are kind of screwed. Except the ones studying how to program AI, of course. But the rest of them — run-of-the-mill coding? There won't be as much need for coders in 6 months to a year. Computer Science majors are already obsolete. |
Deservedly so; 80% of lawyers aren’t that smart anyway and are just doing tedious work. |