Reply to "I am so tired of every tech bro telling us how AI will change the world without giving us any concrete examples"
[quote=Anonymous][quote=Anonymous][quote=Anonymous][quote=Anonymous][quote=Anonymous][quote=Anonymous][quote=Anonymous][quote=Anonymous]You all should be using the paid versions. Game changer.[/quote] FOR WHAT. [/quote] +1 Just as OP points out, they never give examples. I think it's because they use it for stupid stuff like adjusting the tone of an email or doing some basic analysis that anyone with a brain and a modest attention span could do. [/quote] +2 I hesitate to write this because I don't want to insult people who use AI (but then, they have no problem insulting me, by constantly posting that if I get poor results from AI, it must be because I didn't use it correctly, as if it were axiomatic that the AI works perfectly)... my impression is AI is good for two things: 1. Coding (not my field, but I'll accept the testimonials here) 2. People who have difficulty with tasks like writing emails and performance reviews or organizing their thoughts generally[/quote] -1 As a researcher, it's so useful. A little bit too useful, honestly.[/quote] I'm guessing you missed the article about how it made up published research? An expert on Homer said ChatGPT cited articles and people who don't exist. A lawyer who used it to research case law got destroyed by a judge because it made things up. From MIT: [QUOTE]For an example of how AI hallucinations can play out in the real world, consider the legal case of Mata v. Avianca. In this case, a New York attorney representing a client’s injury claim relied on ChatGPT to conduct his legal research. The federal judge overseeing the suit noted that the opinion contained internal citations and quotes that were nonexistent. Not only did the chatbot make them up, it even stipulated they were available in major legal databases (Weiser, 2023).[/QUOTE] Be really freaking careful there. [/quote] I literally question everything I read. I don't take anything at face value.
It sounds like you only want to use AI if you can get it to do your job without you doing any work. Right now, it does not have that capability.[/quote] You can question everything, but there are different standards of reliability. If I search for a legal case on LexisNexis, I have high confidence that it's a real case, that the headnotes correctly summarize the key holdings, and that the Shepard's symbol indicates whether it's still good law. I know it has been reviewed by humans and has a reputation for accuracy and reliability (ironically, the only thing that would give me less confidence would be if they incorporated more AI). Searching for the case in the actual reporter would be a colossal waste of time (I used to have to do this for Law Review: go to the real law library, pull the book, and make photocopies of the pages to file as proof the case said what the author said!). If I use AI to pull cases, I have no idea whether the cases are real, whether the holdings are accurate, or whether they're still good law, and I'd definitely need to spend time looking them up on other sites, so this would be a waste of time when I can get highly accurate, reasonably fast results on Lexis. If I were doing non-legal research, I'd search PubMed or whatever and skim the abstracts to find what I need, and at least I'd know it was actual published research. I could still question the bias of the author, but at least I'd know it was real.[/quote]