Anonymous wrote:Starting a new thread to discuss the following post from an earlier thread:
"YCBK is doing a podcast this week (th?) on how Slate has Turnitin/Originality/Pangram AI detection built in now.
So, schools will see all "red" on the screen for the essays when any AI is used.
Automatic rejection?
Tell your kids!!"
FWIW I'm eager to listen to the podcast episode, in part because I'm skeptical.
First, these programs are notoriously inaccurate, with both false positives and false negatives, to the point where it seems absurd (irresponsible?) for an AO to draw conclusions from the results of running an essay through this software.
And second, at this point, anyone with half a brain should know not to cut-and-paste essays drafted by AI wholesale. Not only do AI-drafted essays fail to capture the applicant's voice, they're also prone to hallucinating details (e.g., the fake case citations that have turned up in legal filings).
To me, the real question for 2025 is what to do about students who use AI in smaller ways: on the front end as a brainstorming partner to consider and develop different essay ideas, in the editing phase to help wordsmith specific sentences, or at the very end to help shrink an essay or short answer to fit the word count (or the character count for the Common App's EC descriptions).
Are those uses of AI appropriate or are they "cheating"?
Do all colleges have the same opinion on that, or does it depend on the school?
And what do these AI detection programs do with these more subtle uses of AI? It's hard to imagine how effective they could possibly be, especially with all the iterations and editing involved.
Thoughts?
Ok, Mark, thanks for the heads up. I'll give it a listen.
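Of the smaller uses the quoted OP lists, shrinking an answer "at the very end" to fit a word or character count is the easiest to picture concretely. Below is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and sample activity description are all illustrative assumptions, not anyone's recommended or sanctioned workflow.

```python
# Hypothetical sketch of the "shrink to fit the character count" use described
# above. The model choice and prompt are assumptions; any chat model would do.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = (
    "Founded and led the school robotics club, organizing weekly build "
    "sessions and mentoring younger students through two regional "
    "competitions."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute whatever model is available
    messages=[{
        "role": "user",
        "content": "Shorten this activity description to 150 characters or "
                   "fewer without changing or adding any facts:\n\n" + draft,
    }],
)
print(response.choices[0].message.content)
```

Whether even this counts as the applicant's own writing is exactly the question the post raises, and it is the output of steps like this one that the detection tools discussed in this thread would be scanning.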
Anonymous wrote:Almost all essays will end up flagged with a 40-80% probability of having been generated by AI. Then what do AOs do? They ignore the flags.
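The skepticism in the two posts above can be made concrete with a quick base-rate calculation. The sketch below is illustrative only: every number in it (the share of applicants actually submitting AI-written essays, the detector's error rates) is an assumption, not a published figure for Turnitin, Originality, or Pangram.

```python
# Bayes' rule on illustrative numbers: even a seemingly accurate detector
# produces flags that are hard to act on when run over every essay.
p_ai = 0.30          # assumed share of essays that really are AI-written
sensitivity = 0.90   # assumed true-positive rate of the detector
false_pos = 0.05     # assumed false-positive rate on human-written essays

p_flag = sensitivity * p_ai + false_pos * (1 - p_ai)
p_ai_given_flag = sensitivity * p_ai / p_flag

print(f"Share of all essays flagged:          {p_flag:.1%}")
print(f"Flagged essays actually AI-written:   {p_ai_given_flag:.1%}")
print(f"Flagged essays that are false alarms: {1 - p_ai_given_flag:.1%}")
# With these numbers roughly 1 flag in 9 lands on a human-written essay;
# across tens of thousands of applications that is a lot of wrong accusations.
```

Raise the assumed false-positive rate, or lower the share of applicants actually pasting in AI text, and the flags get noisier still, which is one route to the "they ignore the flags" outcome.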
Anonymous wrote:So they're using AI to detect AI… wouldn't the AI be incentivized to intentionally report false results so more people use AI? Or are the different AIs battling each other for dominance?
Anonymous wrote:All I have found so far is an article from 2024:
Slate will make essay summaries effortless for admissions officers
The Slate Platform, used by hundreds of admissions offices across the country, has promised to deliver AI essay evaluation through its Pre-Reader: “the Slate Pre-Reader will summarize what a reviewer needs to know about a letter of recommendation, college essay, etc.” This function is still in development, but once rolled out, it’s poised to have a very meaningful impact on the admissions world.
Researchers have been actively training AI models to evaluate student admission essays for personal qualities
A research group based at the University of Pennsylvania and the University of Colorado at Boulder developed a tool (a modified version of RoBERTa, a large language model from Facebook) to analyze and evaluate student admission essays across seven variables, including teamwork, prosocial purpose, intrinsic motivation and leadership. Admissions officers scored 3,000 student-submitted essays across these variables, and that data was used to train the model. Once trained and fine-tuned on the human feedback, the model was applied to 300,000 previously submitted essays (from the 2008-2009 admissions year) and scored them much as the human evaluators had.
Analyzing the calculated essay scores alongside these applicants' subsequent academic outcomes, the researchers found that the AI scoring yielded meaningful insights. Students whose essays scored positively for leadership were more likely to graduate from college in six years than those whose essays did not, even after controlling for differences in test scores, demographics and other factors. This research offers evidence that AI systems can effectively evaluate student essays for traits that colleges value.
The GMAT has been using AI to score essays for 16 years
https://www.mytutor.com/blog/ai-will-soon-be-reading-your-college-admissions-essays
I'm not sure whether this is what the podcast will be discussing.
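For anyone curious what the setup described in that article looks like in code, here is a minimal sketch of multi-trait essay scoring with RoBERTa via the Hugging Face transformers library. The article names only four of the seven scored variables and publishes none of the researchers' code or data, so the placeholder trait names, hyperparameters, and example text below are all assumptions, and the actual fine-tuning on the ~3,000 human-scored essays is omitted.

```python
# Sketch: a RoBERTa regression head with one output per scored trait.
# Untrained, this produces noise; after fine-tuning on human-scored essays
# it would approximate the admissions officers' ratings, per the article.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NAMED = ["teamwork", "prosocial_purpose", "intrinsic_motivation", "leadership"]
TRAITS = NAMED + [f"unnamed_trait_{i}" for i in (5, 6, 7)]  # article names 4 of 7

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=len(TRAITS),
    problem_type="regression",  # continuous score per trait, not classification
)

def score_essay(text: str) -> dict:
    """Predict a score for each trait from the essay text."""
    inputs = tokenizer(text, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits.squeeze(0)
    return dict(zip(TRAITS, logits.tolist()))

print(score_essay("I organized a neighborhood coat drive with three classmates..."))
```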
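The graduation finding maps onto a regression of roughly this shape: a six-year graduation indicator regressed on the AI leadership score plus controls. The data below are simulated purely to make the example runnable; the article gives neither the researchers' dataset nor their exact specification.

```python
# Simulated illustration of "more likely to graduate ... even after
# controlling for test scores, demographics and other factors": the controls
# enter the model as additional covariates alongside the essay score.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "leadership": rng.uniform(0, 1, n),     # AI-derived essay score
    "sat": rng.normal(1200, 150, n),        # stand-in test-score control
    "first_gen": rng.integers(0, 2, n),     # stand-in demographic control
})
# build in a true positive leadership effect, then draw graduation outcomes
logit_p = (0.3 + 1.5 * (df["leadership"] - 0.5)
           + 0.002 * (df["sat"] - 1200) - 0.3 * df["first_gen"])
df["grad_6yr"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("grad_6yr ~ leadership + sat + first_gen", data=df).fit(disp=0)
print(fit.params)  # the leadership coefficient stays positive with controls in
```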
Anonymous wrote:Can you post a link to the podcast?