How much do you use ai to write or as a work tool?

Anonymous
Anonymous wrote:I'm a high school English teacher at a private school, and my head of department and principal have told us all to stop "wasting hours writing little notes on the students' work" and use AI to provide feedback for them instead. It saves HOURS, but I hate it and I don't think it is very good. I have to do it because admin has now changed the maximum time in which we have to give feedback from two weeks to three days.

Did you know that you can now scan handwritten student work and have AI offer feedback?

We have two teachers in the department leaving, and they won't be replaced. I'm expecting bigger classes next year, and we have been told we will be required to do two after-school activities/coaching per week next year, rather than our current requirement of one per week. I guess now that AI does the grading, we're just warm bodies.

I hate hate hate my job now. If anybody has any ideas about what else I could do, I'd love to hear.


That's awful. I am sorry to hear this, you and your students deserve better.
Anonymous
A lot of people like to use the internal AI to help search for references/citations in our manual. I don't care for it personally; I don't think it's that great.

While not AI, I use automation very heavily throughout my day to flag keywords as possible red flags in what I am reading. I still make the final judgment call with that information; many times it is a nothingburger. If AI ever develops judgment (which sounds like it might be quite soon), then my job is toast.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I am a school principal, and using AI to write evaluations has given me back years of my life. I take copious notes when I’m observing and have great reflection conferences with teachers. That is where the real work happens—in the conversation. But then I have to write everything up which takes forever. Using AI has changed that. Of course, it’s only as good as what I put in and what I check.


It may not be doing you as many favors as you think. I could tell my supervisor did my annual plan using AI and it made me livid. Like another poster said, I'm not even worth my boss's time once a year anymore? Blech.

Tying this back to the original question from OP, I'd be OK if my boss told me they used AI to help write it to save time. Passing it off as your own is insulting. Your staff isn't telling you that to your face because, well, you're the boss. Don't be naive enough to think they aren't talking amongst themselves about it, though.


You don’t appear to understand how current AI tools work. The pp said she wrote her own notes and used the AI to format them into an evaluation. That’s totally appropriate, and I would not be upset if my supervisor did it.

There is a big difference between asking an LLM to write a structured document based on data you feed it and asking it to write something from scratch. LLMs excel at taking data and turning it into structured documents. They excel at “structured” language in general, like computer code, regulatory writing, etc.


Okay. I probably don't understand. But, similar to the principal, I had a conversation with my boss about what I'd be doing over the year, like the projects I'd be working on and what would be a measurement of success. So far, so good. But then the document itself was just bizarre, and the content didn't seem like something a human would come up with. It had nothing to do with MY use of it; it was completely how my boss used it, and it fell flat. Maybe my boss doesn't understand how current AI tools work.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I am a school principal, and using AI to write evaluations has given me back years of my life. I take copious notes when I’m observing and have great reflection conferences with teachers. That is where the real work happens—in the conversation. But then I have to write everything up which takes forever. Using AI has changed that. Of course, it’s only as good as what I put in and what I check.


It may not be doing you as many favors as you think. I could tell my supervisor did my annual plan using AI and it made me livid. Like another poster said, I'm not even worth my boss's time once a year anymore? Blech.

Tying this back to the original question from OP, I'd be OK if my boss told me they used AI to help write it to save time. Passing it off as your own is insulting. Your staff isn't telling you that to your face because, well, you're the boss. Don't be naive enough to think they aren't talking amongst themselves about it, though.


You don’t appear to understand how current AI tools work. The pp said she wrote her own notes and used the AI to format them into an evaluation. That’s totally appropriate, and I would not be upset if my supervisor did it.

There is a big difference between asking an LLM to write a structured document based on data you feed it and asking it to write something from scratch. LLMs excel at taking data and turning it into structured documents. They excel at “structured” language in general, like computer code, regulatory writing, etc.


Okay. I probably don't understand. But, similar to the principal, I had a conversation with my boss about what I'd be doing over the year, like the projects I'd be working on and what would be a measurement of success. So far, so good. But then the document itself was just bizarre, and the content didn't seem like something a human would come up with. It had nothing to do with MY use of it; it was completely how my boss used it, and it fell flat. Maybe my boss doesn't understand how current AI tools work.


Ok. Think of the LLM as an intern. Let's say your boss goes to the intern and says "write an evaluation for Larla giving her a 4/5 and mentioning a few areas of improvement." The intern will come up with kind of a vague document based on those limited instructions.

But let's say you give your intern a complete set of notes and a template showing exactly how the evaluation should look. What will the intern come up with then?

That is the difference between typing vague prompts into the software and understanding that the software is a pattern-recognition tool: if you input data (the employee's performance, a template for what your evaluation needs to include), it can spit out exactly what you need (a structured, formatted evaluation based on your notes).

People criticize the tool without understanding what it is or what its capabilities are.
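To make that difference concrete, here's a rough sketch of the two prompting styles. Everything in it is invented for illustration (the names, the notes, the template), and no actual model is called; the point is only what ends up in the prompt.

```python
# Hypothetical sketch of the two prompting styles described above.
# No LLM API is called; we only build the prompt strings to show
# what the model would (and wouldn't) have to work with.

def build_vague_prompt(name: str, rating: int) -> str:
    """The 'intern with no materials' approach: the model must invent details."""
    return (f"Write an evaluation for {name} giving her a {rating}/5 "
            f"and mentioning a few areas of improvement.")

def build_grounded_prompt(name: str, notes: str, template: str) -> str:
    """The 'intern with notes and a template' approach: the model only formats."""
    return (f"Using ONLY the observation notes below, fill in the "
            f"evaluation template for {name}.\n"
            f"--- TEMPLATE ---\n{template}\n"
            f"--- NOTES ---\n{notes}")

# Made-up observation notes and template for illustration.
notes = "10/3: strong questioning techniques; pacing lagged in the last 15 minutes."
template = "Summary:\nStrengths:\nAreas for growth:"

vague = build_vague_prompt("Larla", 4)
grounded = build_grounded_prompt("Larla", notes, template)

# The grounded prompt carries the actual evidence; the vague one carries none.
print("questioning" in vague)     # False
print("questioning" in grounded)  # True
```

The grounded version is the one that produces an evaluation anchored in what actually happened; the vague one forces the model to make things up.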
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Never. I hate it, and in my field - law - it notoriously hallucinates. I love to write, am an excellent writer, and write quickly. I love language and sentence composition, so why would I have something do it for me, especially if it will do a poor job? Additionally, for advice to a client, a fair amount of my work is tone and nuance. I am not trusting that to an AI. And then too, sometimes I am not entirely sure what my view of an issue will be until I write it out. The writing process helps me clarify and retain the information. And since my practice is built on expertise, I need to have that expertise at hand when I walk into a meeting and am asked unexpected questions. Can't do that if I rely on AI.


On the other hand, as a client, it's great. I'm a trustee for a relative. A question came up, and I fed the trust agreement into ChatGPT so I could ask it questions before I met with an actual lawyer. It was terrific as a way for me to search and understand that document without knowing the terminology. So I was way better prepared to meet with the lawyer, and I was better able to understand the conversation. It's not like I asked it for legal advice. But I did ask it things like "what does the trust agreement say happens if Larla dies before Larlo?" and "what does the trust agreement say about if Larla marries and has a stepchild?" and it was great at directing me to the relevant sections.


This is a really bad thing to do with long documents, because the AI often hallucinates if it can’t find anything about Larlo or Larla. I have found that when I ask the AI these kinds of questions, I MUST go back and verify it myself. At work, Adobe has its own AI tool in Acrobat that is better for these kinds of questions.


This. I’m an attorney and have played around with it a bit to learn more about documents and its accuracy is, generously, poor.

We are being directed to figure out how to incorporate it more at work and it is a challenge.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Never. I hate it, and in my field - law - it notoriously hallucinates. I love to write, am an excellent writer, and write quickly. I love language and sentence composition, so why would I have something do it for me, especially if it will do a poor job? Additionally, for advice to a client, a fair amount of my work is tone and nuance. I am not trusting that to an AI. And then too, sometimes I am not entirely sure what my view of an issue will be until I write it out. The writing process helps me clarify and retain the information. And since my practice is built on expertise, I need to have that expertise at hand when I walk into a meeting and am asked unexpected questions. Can't do that if I rely on AI.


On the other hand, as a client, it's great. I'm a trustee for a relative. A question came up, and I fed the trust agreement into ChatGPT so I could ask it questions before I met with an actual lawyer. It was terrific as a way for me to search and understand that document without knowing the terminology. So I was way better prepared to meet with the lawyer, and I was better able to understand the conversation. It's not like I asked it for legal advice. But I did ask it things like "what does the trust agreement say happens if Larla dies before Larlo?" and "what does the trust agreement say about if Larla marries and has a stepchild?" and it was great at directing me to the relevant sections.


This is a really bad thing to do with long documents, because the AI often hallucinates if it can’t find anything about Larlo or Larla. I have found that when I ask the AI these kinds of questions, I MUST go back and verify it myself. At work, Adobe has its own AI tool in Acrobat that is better for these kinds of questions.


This. I’m an attorney and have played around with it a bit to learn more about documents and its accuracy is, generously, poor.

We are being directed to figure out how to incorporate it more at work and it is a challenge.


I’m an attorney also, and I hear this silliness from others. Attorneys tend to be a risk-averse and slow bunch. I’ll remind you that one of the ethical canons we must follow is a duty of competence, including tech competence. You are not showing that. If it’s consistently spitting out poor outputs, that’s because you aren’t spending the time to learn how to use it and write good prompts.
Anonymous
AI lowers my self-esteem. Every time I write something and run it by AI, it spits out a much better version in 5 seconds.
Anonymous
Anonymous wrote:
Anonymous wrote:In my field (which is not cutting edge, more of an education/nonprofit/culture space) I do a lot of writing, and I generally do it well (I have edited and published articles and books). Recently, for some of my more rote things—let’s say reports that summarize research—I have written them quickly and then fed them into AI to tighten. It’s super intellectually lazy, doing work I could do on my own, but it’s much faster and frankly improves flow and specificity really well, especially if I give it the right prompts in an iterative process.

Is it dishonest? My work also goes through a human editor; this is a first pass, but I wonder if it’s wrong. We’ve not had these discussions in our workplace. Do you use AI as a tool? If so, how? Is it accepted or talked about?

Sometimes I think it’s fine, because I’ve done the thinking and most of the research, but other times I feel like I’m cheating.


It isn’t dishonest. We are explicitly being told to do this at work. Those who resist are going to be the first to lose their jobs. Those who embrace and master the technology will thrive.


+1. Using it all the time helps me to understand what AI can’t do, so I can tell people when they try to cut my job.
Anonymous
I write analytic assessments. I never use AI. I can write much better than any LLM.
Anonymous
Op here, thanks for all the feedback. I’m sure it varies significantly by field. My sibling uses a proprietary model to code. I never use it to generate information (because it’s often wrong!), but when I do feed my own writing into it, it does a good job of editing, though there are certain habits or tells I dislike. I wonder how our editor feels about it…The other thing is that I wonder if I am feeding the beast: all of us inputting everything that we do or need done is trading away privacy and making AI stronger, but to what end?
Anonymous
Anonymous wrote:Op here, thanks for all the feedback. I’m sure it varies significantly by field. My sibling uses a proprietary model to code. I never use it to generate information (because it’s often wrong!), but when I do feed my own writing into it, it does a good job of editing, though there are certain habits or tells I dislike. I wonder how our editor feels about it…The other thing is that I wonder if I am feeding the beast: all of us inputting everything that we do or need done is trading away privacy and making AI stronger, but to what end?


You are absolutely feeding the beast.

Your editor almost certainly hates AI because it’s basically a crappy editor.

If it substantially improves your writing, then you likely aren’t a strong writer.
Anonymous
I never learned to code beyond suffering through some R and very basic Python, and I freaking love it for writing scripts for spreadsheets. And maintaining those scripts! It’s fantastic.
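For anyone curious, this is the general shape of script I mean. It's a made-up minimal example (the column names and data are invented), but it's the kind of thing the AI will happily write and then maintain for you: total a spreadsheet's "Amount" column by "Category" from a CSV export.

```python
# Minimal, made-up example of an AI-generated spreadsheet script:
# sum an "Amount" column grouped by "Category" in CSV data.
# Column names and values are invented for illustration.
import csv
import io
from collections import defaultdict

CSV_TEXT = """Category,Amount
Supplies,12.50
Travel,40.00
Supplies,7.50
"""

def totals_by_category(csv_text: str) -> dict:
    """Return {category: total amount} from CSV text with Category/Amount columns."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["Category"]] += float(row["Amount"])
    return dict(totals)

print(totals_by_category(CSV_TEXT))  # {'Supplies': 20.0, 'Travel': 40.0}
```

In real use you'd point it at a file instead of a string, and the nice part is that when the spreadsheet's columns change, you can paste the script back in and ask it to update it.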
Anonymous
Op here, thanks for all the feedback. I’m sure it varies significantly by field. My sibling uses a proprietary model to code. I never use it to generate information (because it’s often wrong!), but when I do feed my own writing into it, it does a good job of editing, though there are certain habits or tells I dislike. I wonder how our editor feels about it…The other thing is that I wonder if I am feeding the beast: all of us inputting everything that we do or need done is trading away privacy and making AI stronger, but to what end?


You are absolutely feeding the beast.

Your editor almost certainly hates AI because it’s basically a crappy editor.

If it substantially improves your writing, then you likely aren’t a strong writer.


I would not (and did not) say that it substantially improves my writing. It does a good job of editing and condensing very quickly. If I need to cut something down by 40 words, it does it well. I do not use it to generate, and I often do not like its stylistic choices. It's a very impersonal way of writing, but if I give it the right prompts--cut this down by 40 words without significantly changing my language, or write a transition sentence between paragraphs 3 and 4--it does it pretty well.

Our editorial office is currently tasked with exploring AI tools for their work. I don't know how they feel about it.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Never. I hate it, and in my field - law - it notoriously hallucinates. I love to write, am an excellent writer, and write quickly. I love language and sentence composition, so why would I have something do it for me, especially if it will do a poor job. Additionally, for advice to a client, a fair amount of my work is tone and nuance. I am not trusting that to an AI. And then too, sometimes I am not entirely sure of what my view of an issue will be until I write it out. The writing process helps me clarify and retain the information. And since my practice is built on expertise, I need to have that expertise at hand when I walk into a meeting and am asked unexpected questions. Can't do that if I rely on AI.


Lawyer in a very technical field and this is me too.

When I've asked AI some legal technical questions, the answers were astoundingly wrong and mixed up.

I also really like writing and am very good at it. When I get AI-written product, I have to review the whole thing and rewrite it (like an intern's work). It is much faster for me to do it well the first time. I also agree that there's so much value in doing the research directly and reading the initial documents. That's the only way you REALLY learn the hyper-complicated nuances of the law, not asking AI for one discrete answer. If I didn't put in the time for the first client's question, that would be a missed opportunity to learn the material more generally. And knowing the material backwards and forwards is what sets me apart from others.

I know that AI can make some jobs a lot more efficient, but my understanding so far is that those jobs are low knowledge, low barrier to entry, and being done by not particularly efficient people. Like, when someone says "it helps me write an email that would have taken me 30 minutes to get the tone right," that is not a thing in my job. I'm drafting that email faster than I could instruct and review AI.


I sort of think this attitude is just as bad as the “AI Boosters” who think AI can or will replace a lawyer. Like it seems overly defensive.

LLMs are software. If someone is adept at using a piece of software to make their job better, that doesn’t mean their job is less complicated than yours. Perhaps they are more creative than you. Perhaps their job is more multi-faceted. Perhaps their process is different, and getting those initial words on the page for them to edit leads to a product that is just as good as yours, or better, and faster.

AI is not magic, but the technology is evolving. Over the past year or so it has improved at legal analysis, especially Westlaw CoCounsel. Assuming that you are so good, so fast, so important that a tool (and it’s just a tool) could do nothing for you reeks of hubris. And, perhaps, isn’t the safest position.


LOL, new poster here, but Westlaw CoCounsel gave me a definitely wrong answer just today, and did so quite confidently. And with cites! I only knew it was wrong because it was obvious to me with my years of experience, but I can imagine a junior person would have just assumed it was correct.
Anonymous
Anonymous wrote:In my field (which is not cutting edge, more of an education/nonprofit/culture space) I do a lot of writing, and I generally do it well (I have edited and published articles and books). Recently, for some of my more rote things—let’s say reports that summarize research—I have written them quickly and then fed them into AI to tighten. It’s super intellectually lazy, doing work I could do on my own, but it’s much faster and frankly improves flow and specificity really well, especially if I give it the right prompts in an iterative process.

Is it dishonest? My work also goes through a human editor; this is a first pass, but I wonder if it’s wrong. We’ve not had these discussions in our workplace. Do you use AI as a tool? If so, how? Is it accepted or talked about?

Sometimes I think it’s fine, because I’ve done the thinking and most of the research, but other times I feel like I’m cheating.

