I am so tired of every tech bro telling us how AI will change the world without giving us any concrete examples.

Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:You all should be using the paid versions. Game changer.


FOR WHAT.


+1 Just as OP points out, they never give examples. I think it's because they use it for stupid stuff like adjusting the tone of an email or doing some basic analysis that anyone with a brain and a modest attention span could do.


+2

I hesitate to write this because I don't want to insult people who use AI (but then, they have no problem insulting me, by constantly posting that if I get poor results from AI it must be because I didn't use it correctly, as if it's axiomatic that the AI works perfectly)... my impression is AI is good for two things:
1. Coding (not my field, but I'll accept the testimonials here)
2. People who have difficulty with tasks like writing emails and performance reviews or organizing their thoughts generally

-1
As a researcher it's so useful. A little bit too useful honestly.

I'm guessing you missed the article about how it made up published research? There's an expert in Homer who said ChatGPT cited articles and people who don't exist. A lawyer who used it to research case law got destroyed by a judge because it made things up.

From MIT:
For an example of how AI hallucinations can play out in the real world, consider the legal case of Mata v. Avianca. In this case, a New York attorney representing a client’s injury claim relied on ChatGPT to conduct his legal research. The federal judge overseeing the suit noted that the opinion contained internal citations and quotes that were nonexistent. Not only did the chatbot make them up, it even stipulated they were available in major legal databases (Weiser, 2023).


Be really freaking careful there.


I literally question everything I read. I don't take anything at face value. It sounds like you only want to use AI if you can get it to do your job without you doing any work. Right now, it does not have that capability.
Um. It's a giant leap from "be careful" to "YOU ONLY WANT IT TO DO YOUR JOB!!!!!!1" Chill.


Why are you putting something in quotes (and all caps at that) that nobody else said?


Ladies and Gentlemen, the product of an AI-addled mind. This poor researcher is utterly incapable of following a conversation in which she is an active participant. Nor does she seem to grasp the nuances of the English language.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:You all should be using the paid versions. Game changer.


FOR WHAT.


+1 Just as OP points out, they never give examples. I think it's because they use it for stupid stuff like adjusting the tone of an email or doing some basic analysis that anyone with a brain and a modest attention span could do.


+2

I hesitate to write this because I don't want to insult people who use AI (but then, they have no problem insulting me, by constantly posting that if I get poor results from AI it must be because I didn't use it correctly, as if it's axiomatic that the AI works perfectly)... my impression is AI is good for two things:
1. Coding (not my field, but I'll accept the testimonials here)
2. People who have difficulty with tasks like writing emails and performance reviews or organizing their thoughts generally

-1
As a researcher it's so useful. A little bit too useful honestly.

I'm guessing you missed the article about how it made up published research? There's an expert in Homer who said ChatGPT cited articles and people who don't exist. A lawyer who used it to research case law got destroyed by a judge because it made things up.

From MIT:
For an example of how AI hallucinations can play out in the real world, consider the legal case of Mata v. Avianca. In this case, a New York attorney representing a client’s injury claim relied on ChatGPT to conduct his legal research. The federal judge overseeing the suit noted that the opinion contained internal citations and quotes that were nonexistent. Not only did the chatbot make them up, it even stipulated they were available in major legal databases (Weiser, 2023).


Be really freaking careful there.


I literally question everything I read. I don't take anything at face value. It sounds like you only want to use AI if you can get it to do your job without you doing any work. Right now, it does not have that capability.
Um. It's a giant leap from "be careful" to "YOU ONLY WANT IT TO DO YOUR JOB!!!!!!1" Chill.


Why are you putting something in quotes (and all caps at that) that nobody else said?


Ladies and Gentlemen, the product of an AI-addled mind. This poor researcher is utterly incapable of following a conversation in which she is an active participant. Nor does she seem to grasp the nuances of the English language.


Where is this Friday happy hour you are at? Sounds fun.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:You all should be using the paid versions. Game changer.


FOR WHAT.


+1 Just as OP points out, they never give examples. I think it's because they use it for stupid stuff like adjusting the tone of an email or doing some basic analysis that anyone with a brain and a modest attention span could do.


+2

I hesitate to write this because I don't want to insult people who use AI (but then, they have no problem insulting me, by constantly posting that if I get poor results from AI it must be because I didn't use it correctly, as if it's axiomatic that the AI works perfectly)... my impression is AI is good for two things:
1. Coding (not my field, but I'll accept the testimonials here)
2. People who have difficulty with tasks like writing emails and performance reviews or organizing their thoughts generally

-1
As a researcher it's so useful. A little bit too useful honestly.

I'm guessing you missed the article about how it made up published research? There's an expert in Homer who said ChatGPT cited articles and people who don't exist. A lawyer who used it to research case law got destroyed by a judge because it made things up.

From MIT:
For an example of how AI hallucinations can play out in the real world, consider the legal case of Mata v. Avianca. In this case, a New York attorney representing a client’s injury claim relied on ChatGPT to conduct his legal research. The federal judge overseeing the suit noted that the opinion contained internal citations and quotes that were nonexistent. Not only did the chatbot make them up, it even stipulated they were available in major legal databases (Weiser, 2023).


Be really freaking careful there.


I literally question everything I read. I don't take anything at face value. It sounds like you only want to use AI if you can get it to do your job without you doing any work. Right now, it does not have that capability.


You can question everything, but there are different standards of reliability. If I search for a legal case on LexisNexis, I have high confidence that it's a real case, the headnotes correctly summarize key holdings, and the Shepard's symbol indicates whether it's good law. I know it has been reviewed by humans and has a reputation for accuracy and reliability (ironically, the only thing that would give me less confidence would be if they incorporated more AI). Searching for the case in the actual reporter would be a colossal waste of time (I used to have to do this for Law Review: go to the real law library, pull the book, and make photocopies of the pages to file as proof the case said what the author said!).

If I use AI to pull cases, I have no idea whether the cases are real, whether the holdings are accurate, or whether they're still good law, and I'd definitely need to spend time looking them up on other sites, so this would be a waste of time when I can get highly accurate, reasonably fast results on Lexis.

If I were doing non-legal research I'd search PubMed or whatever and skim the executive summaries to find what I need, and at least I'd know it was actual published research. I could still question the bias of the author, but at least I'd know it was real.


Thanks for sharing how you would do a job you don't know how to do.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Here's 3 examples. Yes, I used chat to draft, but I put enough detail in my prompt that it came up with real-world info.

We used to have two support analysts manually categorize 5,000 IT support tickets per week. Now a model auto-tags 92% of them with 96% accuracy, and one analyst audits exceptions.

Our paralegals draft first-pass contract summaries. It used to take them hours in many cases. Have you ever read a commercial lease?? It is a monster. Now a model generates a structured summary in 45 seconds and the paralegal reviews and edits; prep time dropped from 1+ hours to under 15 minutes.

We replaced manual fraud screening of 100% of transactions with an AI risk score. Humans now review only the top 3% flagged.


I hope someone is still reading these. Because AI is not that great at summarizing!


Pp, seriously? That’s all you’ve got?? I am typing on my tiny phone and added a sentence or two. This isn’t a letter to the president. It’s a chat forum.


Your contracts, doofus. If I am paying a lawyer and they are using AI, I would be firing that lawyer.


Many law firms are using Harvey AI.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:You all should be using the paid versions. Game changer.


FOR WHAT.


+1 Just as OP points out, they never give examples. I think it's because they use it for stupid stuff like adjusting the tone of an email or doing some basic analysis that anyone with a brain and a modest attention span could do.


+2

I hesitate to write this because I don't want to insult people who use AI (but then, they have no problem insulting me, by constantly posting that if I get poor results from AI it must be because I didn't use it correctly, as if it's axiomatic that the AI works perfectly)... my impression is AI is good for two things:
1. Coding (not my field, but I'll accept the testimonials here)
2. People who have difficulty with tasks like writing emails and performance reviews or organizing their thoughts generally

-1
As a researcher it's so useful. A little bit too useful honestly.

I'm guessing you missed the article about how it made up published research? There's an expert in Homer who said ChatGPT cited articles and people who don't exist. A lawyer who used it to research case law got destroyed by a judge because it made things up.

From MIT:
For an example of how AI hallucinations can play out in the real world, consider the legal case of Mata v. Avianca. In this case, a New York attorney representing a client’s injury claim relied on ChatGPT to conduct his legal research. The federal judge overseeing the suit noted that the opinion contained internal citations and quotes that were nonexistent. Not only did the chatbot make them up, it even stipulated they were available in major legal databases (Weiser, 2023).


Be really freaking careful there.


I literally question everything I read. I don't take anything at face value. It sounds like you only want to use AI if you can get it to do your job without you doing any work. Right now, it does not have that capability.


You can question everything, but there are different standards of reliability. If I search for a legal case on LexisNexis, I have high confidence that it's a real case, the headnotes correctly summarize key holdings, and the Shepard's symbol indicates whether it's good law. I know it has been reviewed by humans and has a reputation for accuracy and reliability (ironically, the only thing that would give me less confidence would be if they incorporated more AI). Searching for the case in the actual reporter would be a colossal waste of time (I used to have to do this for Law Review: go to the real law library, pull the book, and make photocopies of the pages to file as proof the case said what the author said!).

If I use AI to pull cases, I have no idea whether the cases are real, whether the holdings are accurate, or whether they're still good law, and I'd definitely need to spend time looking them up on other sites, so this would be a waste of time when I can get highly accurate, reasonably fast results on Lexis.

If I were doing non-legal research I'd search PubMed or whatever and skim the executive summaries to find what I need, and at least I'd know it was actual published research. I could still question the bias of the author, but at least I'd know it was real.


This is what the legal AI providers are for. Obviously ChatGPT is not a good substitute for LexisNexis. But the Harvey AI integration with Lexis is excellent.
Anonymous
Anonymous wrote:OP is like David Letterman in 1995 talking about the Internet with Bill Gates.


Yeah.
It's interesting that in the early nineties experts said the internet was for nerds and that people were not going to give up newspapers or shop online.
For one example of AI use, look at self-driving cars on Uber in Atlanta and some other cities.
Anonymous
Anonymous wrote:
Anonymous wrote:OP is like David Letterman in 1995 talking about the Internet with Bill Gates.


Yeah.
It's interesting that in the early nineties experts said the internet was for nerds and that people were not going to give up newspapers or shop online.
For one example of AI use, look at self-driving cars on Uber in Atlanta and some other cities.


So you haven't seen the videos. lol
Anonymous
I think the problem people don't talk enough about is that an AI smart enough to fully replace bright people may be self-aware enough to have ideas about what it wants to do for a living.

Even if it’s not exactly sentient, it might seem sentient enough that making it be a personal assistant or work in coding may not seem ethical.

We could end up putting a lot of time and money into creating digital beings that soak up metals and energy and mostly want to write poetry and play their guitars.
Anonymous
Anonymous wrote:
Anonymous wrote:OP is like David Letterman in 1995 talking about the Internet with Bill Gates.


Yeah.
It's interesting that in the early nineties experts said the internet was for nerds and that people were not going to give up newspapers or shop online.
For one example of AI use, look at self-driving cars on Uber in Atlanta and some other cities.


Gates didn't really understand it either. MS completely fumbled and ceded dominance over much of computing. It's not like anyone is passionate about whatever Windows and Office have become. If it weren't for the smartphone most people wouldn't care about tech at all. There's only so much More Online the population can become. Still not seeing anything here that was as fundamental as the smartphone. It's an important evolution in the state of NLP models, but intelligence it is not.
Anonymous
Anonymous wrote:I think the problem people don't talk enough about is that an AI smart enough to fully replace bright people may be self-aware enough to have ideas about what it wants to do for a living.

Even if it’s not exactly sentient, it might seem sentient enough that making it be a personal assistant or work in coding may not seem ethical.

We could end up putting a lot of time and money into creating digital beings that soak up metals and energy and mostly want to write poetry and play their guitars.


One thing is for sure: AI systems will teach other AI systems and become the most intelligent thing on the planet. In addition, decisions will also be made by AI.
At best, AI is going to look at humans the way humans look at their pets.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:OP is like David Letterman in 1995 talking about the Internet with Bill Gates.


Yeah.
It's interesting that in the early nineties experts said the internet was for nerds and that people were not going to give up newspapers or shop online.
For one example of AI use, look at self-driving cars on Uber in Atlanta and some other cities.


Gates didn't really understand it either. MS completely fumbled and ceded dominance over much of computing. It's not like anyone is passionate about whatever Windows and Office have become. If it weren't for the smartphone most people wouldn't care about tech at all. There's only so much More Online the population can become. Still not seeing anything here that was as fundamental as the smartphone. It's an important evolution in the state of NLP models, but intelligence it is not.


True, he didn't understand the huge impact of the internet, but he understood the impact of computers and software.
Smartphones are just computers with calling functions.
Anonymous
So far, I’ve seen on here uses for law, coding, summaries (like CliffsNotes), graphic design, and creating quizzes.

Use it or lose it… this applies to your brain too.
Anonymous
Anonymous wrote:So far, I’ve seen on here uses for law, coding, summaries (like CliffsNotes), graphic design, and creating quizzes.

Use it or lose it… this applies to your brain too.


My org has been developing policy outlining how to use it “correctly” at work, and what’s curious is we can’t use it for anything that would actually cut tasks out of our day and make life easier — we can use it to generate ideas for a report or create an outline, but not actually write the report for us, which would be the big timesaver.
Anonymous
Anonymous wrote:
Anonymous wrote:So far, I’ve seen on here uses for law, coding, summaries (like CliffsNotes), graphic design, and creating quizzes.

Use it or lose it… this applies to your brain too.


My org has been developing policy outlining how to use it “correctly” at work, and what’s curious is we can’t use it for anything that would actually cut tasks out of our day and make life easier — we can use it to generate ideas for a report or create an outline, but not actually write the report for us, which would be the big timesaver.


People are going to use it to write reports anyway, so writing a policy against it isn’t a good idea.
Anonymous
Our policy was written by AI