Anonymous wrote: I am doing in-class weekly testing to force them to show up prepared for class. I also ask them to describe and reflect on the work they do during laboratory exercises. AI cannot generate something meaningful if it doesn't know what they did in the lab and what results they got. They still try to use AI, and it is relatively easy to tell that the discussion has little to do with their class experience (too general and pompous). But this is science, and I assume it is harder with the humanities.
Thanks for this - it _is_ harder with the humanities, because there is so much out there about existing literature that a paper can be generated in a snap - and so can a summary of what you were supposed to read, or plausible answers to your homework questions, and so on. But what people often forget is that the deliverable in the humanities isn't actually the paper. Sure, it's the thing we grade, but the paper is really a _symptom_ of the thoughts and skills that are developed through struggling with big ideas. The transformation of the student's capacities is what we're working on: the world doesn't need another student essay on Macbeth, but the _student_ does. AI completely kills off that deliverable in the student and replaces it with fake evidence of learning. And halfway-ing it ("I used AI for some ideas and then wrote the essay using that outline") compromises that student transformation, too. It's not that there weren't shortcuts in the past - it's just that these are the shortest and potentially most devaluing ones we've encountered.