+1 Just as OP points out, they never give examples. I think it's because they use it for stupid stuff like adjusting the tone of an email or doing some basic analysis that anyone with a brain and a modest attention span could do.
I don't think AI is great right now at doing new, complex tasks that it hasn't been trained on. But my understanding is that employers are training AI so that it can do these tasks, and that it is showing enormous capabilities under those circumstances. I've tried to ask AI to do my job and it sucked at it, but for advice about various issues (health, home repair, parenting) it has been extremely helpful, well beyond what I can get from an internet search. I can absolutely see how it has disrupted the tech sector and will be extremely disruptive in many other sectors. I have friends who work in finance who are proactively looking for other jobs because their companies are training AI to do their jobs.
+2 I hesitate to write this because I don't want to insult people who use AI (but then, they have no problem insulting me, by constantly posting that if I get poor results from AI it must be because I didn't use it correctly, as if it's axiomatic that the AI works perfectly)... my impression is AI is good for two things:
1. Coding (not my field, but I'll accept the testimonials here)
2. People who have difficulty with tasks like writing emails and performance reviews, or with organizing their thoughts generally
-1 As a researcher it's so useful. A little bit too useful, honestly.
Can you elaborate? AI draws upon various things from the internet right? As a researcher, wouldn’t best practice be to consult primary sources? |
Of course it is. Everything needs to be verified. But when there is a ton of information out there, AI can organize it.
I'm also a researcher and I wouldn't say that AI has revolutionized the way I do my own research yet. There are potential uses that I'm exploring right now (like machine vision), but none of the ones I'm excited about are LLMs, which is what most people seem to be thinking about when they say 'AI.' And while, yes, LLMs can be useful in helping me brainstorm or think about a dataset in a new way, they still blatantly lie about the underlying research or make faulty assumptions based on what is on the internet, so it's a really risky thing to rely on right now.
Can you explain how you use it? Are you using it to pull research or to organize your own findings?
I am using it to gather information on a topic ahead of an interview.
As in-house counsel I find it extremely useful. We have an enterprise version, so it is connected into our systems. My job often involves risk and issue spotting and getting up to speed quickly on new tech specs for things. It's great to run a long doc drafted by product/tech people through it and get it to explain the key issues and flag the legal issues for me. I'll then review, read deeper as needed, and then it can draft me a short response. The whole thing can be done in about 15 minutes for something that previously took me about an hour.
I think this is 100% true of most of the PPs / AI pushers. I have seen two other uses, which are:
3. Summarizing large amounts of information with poor to medium accuracy. Some situations just don't need that much accuracy, so it's OK, but people should understand the tradeoff they are making - you can't hand-wave it with "humans should double check," because then a human has to do the entire job anyway, which erases the savings. You always have to pick between cheap, fast, and good.
4. Specialist applications for looking at, e.g., medical imaging. Useful, but not what people are talking about when they tell you to use AI.
Another use case - my high schooler uses it to study by sharing the outline he's written and then asking it to quiz him. Or he shares some math problems and asks it to make more like that so he can practise.
Someone at my company used AI to create a talk track for a public presentation.
It hallucinated by making up an event that did not occur. Luckily we checked everything in the output, so we caught it, but if we had trusted that output, we would have briefed made-up content and risked seriously damaging the company's reputation. This is one of many reasons I barely use AI.
How is it different than just googling the topic and browsing the top results? |
Exactly. People are spending barely any time with primary sources anymore. Example: I want to research what Hamilton, Madison, and Jay argued in the Federalist Papers. What should I do? I suspect many people would ask ChatGPT to summarize it. Some would read the Wikipedia article. One could also access a free copy of the entire thing from the Library of Congress and — gasp! — read it themselves. It’s organized into nice little sections. |