| Why even bother? Just look at the transcript, ECs, test scores and recommendations. |
+1. A friend told me this was happening over a year ago. The CS majors are going to game this by putting in hidden code and keywords.
LLMs have bias. That's already proven. |
+1 They are trained on human data. They very much take on the same biases. |
And if the AI falls for that and scores the essay, say, a 10 out of 12 while the human reader only gives it a 7, a second human reader will then review the app. All the school is doing is trying to save a bit of time and money, in the face of a massive increase in applications, by using AI plus one human for the initial read rather than two humans. It even allows a smaller discrepancy between the two scores (2 points vs. 4 points under the previous two-human process) before a third review, by a human, is triggered. This is not a huge leap or an unreasonable process. People are pearl-clutching for no reason.
In addition to liberal arts and humanities. |
Hmm. Someone has difficulty with reading comprehension. |
+1 As usual. |