Anonymous wrote:As a current college student, I think professors need to get with the times. Some professors are still assigning 200+ pages of reading each week, plus discussions, plus assignments, plus quizzes. It's kind of ridiculous. Now that we have ChatGPT, we don't need to spend hours reading 200+ pages and analyzing it in our own minds. Maybe that used to be a relevant skill, but it's less and less relevant these days. I would prefer to learn something more useful for the future.
Anonymous wrote:Our DS is a physics major, and the department has adopted a "No AI Policy" policy: it runs things on the assumption that AI is a tool students can use. As a result, the classes have become dramatically more difficult, and take-home exams are designed so that GPT struggles to get you near the answer. Students get to use a tool they'll inevitably rely on in their research while still developing their critical thinking skills. There's no shame in using AI, but you are expected to cite its use.
DD loves it, and the upperclassmen have commented on how much more difficult the major has become, but they agree it is a necessary step in the right direction.
Anonymous wrote:Colleges are responding to ChatGPT and similar AI tools in a range of ways, from cautious acceptance to strict regulation. Some institutions see AI as a useful tool for learning, encouraging students to use it ethically for brainstorming, editing, or coding assistance. These schools are updating policies and coursework to reflect AI’s presence in academic life, integrating discussions about digital literacy, responsible AI use, and academic integrity.
On the other hand, many colleges remain concerned about plagiarism and the erosion of critical thinking. Some have reinforced honor codes, implemented AI-detection tools, and trained faculty to recognize AI-generated writing. Others have redesigned assignments to be more resistant to AI misuse, such as using oral exams, in-class writing, or personalized prompts that are harder for AI to complete convincingly.
Ultimately, colleges are realizing that banning AI outright isn't realistic. Instead, they're working to teach students when and how it’s appropriate to use tools like ChatGPT. The focus is shifting from prohibition to adaptation—guiding students to develop original thought while also understanding the role AI can play in research, writing, and problem-solving. This transition mirrors the broader challenge of aligning education with rapid technological change.
Anonymous wrote:Anonymous wrote:Anonymous wrote:Anonymous wrote:This reminds me of when the Internet first arrived and teachers were adamant that we only use printed texts as sources. I think professors need to get with it. AI is here now, and students should be using it to the best of their abilities.
Professors who are psychotically trying to monitor AI? It's stupid. AI is here, and the students who use it well will succeed. Who cares if they wrote their paper with AI? If they did a great job, they did a great job.
I've read AI papers, and no, they don't do a great job. But if it's good enough for my students, then maybe it should be good enough for me. Does anyone here who is paying tuition have a problem if I just let AI grade and provide feedback for me? Because if it does a great job, then it should be good enough, right? I wouldn't actually need to read anything that way, so it would make me more efficient. And it would certainly make me less "psychotic." Do we have a deal, then?
There are many college professors who have written editorials about how AI produces great work on even esoteric topics.
It’s interesting how everyone on DCUM claims it sucks, but the people willing to put their name to something claim it produces great work.
I don’t think they are getting paid by OpenAI?
No, but it does need to be noted that they come from academic _disciplines_ and subfields that can be served by the capacities of AI. My discipline currently cannot. Maybe someday, but we will be low-priority for the kind of specialized training that the models will require. Until we can _really_ train them ourselves, robustly, on a near-end-user level, these LLMs will be of little use in my field.
Anonymous wrote:Anonymous wrote:Anonymous wrote:This reminds me of when the Internet first arrived and teachers were adamant that we only use printed texts as sources. I think professors need to get with it. AI is here now, and students should be using it to the best of their abilities.
Professors who are psychotically trying to monitor AI? It's stupid. AI is here, and the students who use it well will succeed. Who cares if they wrote their paper with AI? If they did a great job, they did a great job.
I've read AI papers, and no, they don't do a great job. But if it's good enough for my students, then maybe it should be good enough for me. Does anyone here who is paying tuition have a problem if I just let AI grade and provide feedback for me? Because if it does a great job, then it should be good enough, right? I wouldn't actually need to read anything that way, so it would make me more efficient. And it would certainly make me less "psychotic." Do we have a deal, then?
There are many college professors who have written editorials about how AI produces great work on even esoteric topics.
It’s interesting how everyone on DCUM claims it sucks, but the people willing to put their name to something claim it produces great work.
I don’t think they are getting paid by OpenAI?
Anonymous wrote:Anonymous wrote:This reminds me of when the Internet first arrived and teachers were adamant that we only use printed texts as sources. I think professors need to get with it. AI is here now, and students should be using it to the best of their abilities.
Professors who are psychotically trying to monitor AI? It's stupid. AI is here, and the students who use it well will succeed. Who cares if they wrote their paper with AI? If they did a great job, they did a great job.
I've read AI papers, and no, they don't do a great job. But if it's good enough for my students, then maybe it should be good enough for me. Does anyone here who is paying tuition have a problem if I just let AI grade and provide feedback for me? Because if it does a great job, then it should be good enough, right? I wouldn't actually need to read anything that way, so it would make me more efficient. And it would certainly make me less "psychotic." Do we have a deal, then?
Anonymous wrote:This reminds me of when the Internet first arrived and teachers were adamant that we only use printed texts as sources. I think professors need to get with it. AI is here now, and students should be using it to the best of their abilities.
Professors who are psychotically trying to monitor AI? It's stupid. AI is here, and the students who use it well will succeed. Who cares if they wrote their paper with AI? If they did a great job, they did a great job.