YCBK Podcast Episode re: College Admissions' Use of AI Detection Programs

Anonymous
Starting a new thread to discuss the following post from an earlier thread:

"YCBK is doing a podcast this week (th?) on how Slate has Turnitin/Originality/Pangram AI detection built in now.
So, schools will see all "red" on the screen for the essays when any AI is used.
Automatic rejection?
Tell your kids!!"

FWIW I'm eager to listen to the podcast episode, in part because I'm skeptical.

First, these programs are notoriously inaccurate, producing both false positives and false negatives, to the point where it seems absurd (irresponsible?) for an AO to draw a conclusion from running an essay through this software.

And second, at this point, anyone with half a brain should know not to straight up cut-and-paste essays drafted by AI. Not only do AI-drafted essays fail to capture the applicant's voice, they're notoriously inaccurate and can hallucinate details (e.g., fake case citations in legal filings).

To me, the real question for 2025 is what to do about students who use AI in smaller ways: on the front end as a brainstorming partner to consider and develop different essay ideas, in the editing phase to help wordsmith specific sentences, or at the very end to help shrink an essay or short answer to fit the word count (or the character count for the Common App's EC descriptions).

Are those uses of AI appropriate or are they "cheating"?

Do all colleges have the same opinion on that, or does it depend on the school?

And what do these AI detection programs do with these more subtle uses of AI? It's hard to imagine how effective they could possibly be, especially with all the iterations and editing involved.

Thoughts?
Anonymous
Can you post link to podcast?
Anonymous
Anonymous wrote:Can you post link to podcast?

It’s not out yet.
Anonymous
Sounds like the episode isn't out yet but is expected later this week?
Anonymous
All I have found so far is an article from 2024:

Slate will make essay summaries effortless for admissions officers
The Slate Platform, used by hundreds of admissions offices across the country, has promised to deliver AI essay evaluation through its Pre-Reader: “the Slate Pre-Reader will summarize what a reviewer needs to know about a letter of recommendation, college essay, etc.” This function is still in development, but once rolled out, it’s poised to have a very meaningful impact on the admissions world.

Researchers have been actively training AI models to evaluate student admission essays for personal qualities
A research group based out of the University of Pennsylvania and the University of Colorado Boulder developed a tool (a fine-tuned version of RoBERTa, a Facebook large language model) to analyze and evaluate student admission essays across seven variables, including teamwork, prosocial purpose, intrinsic motivation, and leadership. Admissions officers scored 3,000 student-submitted essays across these variables, and this data was then used to train the model. Once trained and fine-tuned using the human feedback, the model was applied to 300,000 previously submitted essays (from the 2008-2009 admissions year) and scored them similarly to the human evaluators.

Through their analysis of the calculated essay scores and subsequent academic outcomes of these applicants, researchers found that meaningful insights were gleaned from the AI scoring. Students whose essays scored positively for leadership were more likely to graduate from college in six years than those whose essays did not, even after controlling for differences in test scores, demographics and other factors. This research offers evidence that AI systems can effectively evaluate student essays for traits that are valuable to colleges.

The GMAT has been using AI to score essays for 16 years

https://www.mytutor.com/blog/ai-will-soon-be-reading-your-college-admissions-essays

I'm not sure whether this is what the podcast will be discussing.
Anonymous
Anonymous wrote:All I have found so far is an article from 2024:

Slate will make essay summaries effortless for admissions officers
The Slate Platform, used by hundreds of admissions offices across the country, has promised to deliver AI essay evaluation through its Pre-Reader: “the Slate Pre-Reader will summarize what a reviewer needs to know about a letter of recommendation, college essay, etc.” This function is still in development, but once rolled out, it’s poised to have a very meaningful impact on the admissions world.

Researchers have been actively training AI models to evaluate student admission essays for personal qualities
A research group based out of the University of Pennsylvania and the University of Colorado Boulder developed a tool (a fine-tuned version of RoBERTa, a Facebook large language model) to analyze and evaluate student admission essays across seven variables, including teamwork, prosocial purpose, intrinsic motivation, and leadership. Admissions officers scored 3,000 student-submitted essays across these variables, and this data was then used to train the model. Once trained and fine-tuned using the human feedback, the model was applied to 300,000 previously submitted essays (from the 2008-2009 admissions year) and scored them similarly to the human evaluators.

Through their analysis of the calculated essay scores and subsequent academic outcomes of these applicants, researchers found that meaningful insights were gleaned from the AI scoring. Students whose essays scored positively for leadership were more likely to graduate from college in six years than those whose essays did not, even after controlling for differences in test scores, demographics and other factors. This research offers evidence that AI systems can effectively evaluate student essays for traits that are valuable to colleges.

The GMAT has been using AI to score essays for 16 years

https://www.mytutor.com/blog/ai-will-soon-be-reading-your-college-admissions-essays

I'm not sure whether this is what the podcast will be discussing.


Super interesting. Thanks for posting it.

This is obviously the opposite side of the coin - how colleges may be using AI to evaluate applicants' essays.

FWIW, this all sounds fine to me. I appreciate the way it's been trained and normed, and it makes sense given the insane volume of applications (and therefore essays) colleges need to get through in a very short period of time. (I "blame" the Common App for this: it created an arms race in which kids think they should apply to more schools, and it makes doing so way too easy.)

Finally, I'd rather have AI evaluate my kid's essays than an exhausted seasonal reader who's half asleep at 11 pm. But maybe others disagree?

Anonymous
Anonymous wrote:Starting a new thread to discuss the following post from an earlier thread:

"YCBK is doing a podcast this week (th?) on how Slate has Turnitin/Originality/Pangram AI detection built in now.
So, schools will see all "red" on the screen for the essays when any AI is used.
Automatic rejection?
Tell your kids!!"

FWIW I'm eager to listen to the podcast episode, in part because I'm skeptical.

First, these programs are notoriously inaccurate, producing both false positives and false negatives, to the point where it seems absurd (irresponsible?) for an AO to draw a conclusion from running an essay through this software.

And second, at this point, anyone with half a brain should know not to straight up cut-and-paste essays drafted by AI. Not only do AI-drafted essays fail to capture the applicant's voice, they're notoriously inaccurate and can hallucinate details (e.g., fake case citations in legal filings).

To me, the real question for 2025 is what to do about students who use AI in smaller ways: on the front end as a brainstorming partner to consider and develop different essay ideas, in the editing phase to help wordsmith specific sentences, or at the very end to help shrink an essay or short answer to fit the word count (or the character count for the Common App's EC descriptions).

Are those uses of AI appropriate or are they "cheating"?

Do all colleges have the same opinion on that, or does it depend on the school?

And what do these AI detection programs do with these more subtle uses of AI? It's hard to imagine how effective they could possibly be, especially with all the iterations and editing involved.

Thoughts?


Ok, Mark, thanks for the heads up to listen
Anonymous
Anonymous wrote:
Anonymous wrote:Starting a new thread to discuss the following post from an earlier thread:

"YCBK is doing a podcast this week (th?) on how Slate has Turnitin/Originality/Pangram AI detection built in now.
So, schools will see all "red" on the screen for the essays when any AI is used.
Automatic rejection?
Tell your kids!!"

FWIW I'm eager to listen to the podcast episode, in part because I'm skeptical.

First, these programs are notoriously inaccurate, producing both false positives and false negatives, to the point where it seems absurd (irresponsible?) for an AO to draw a conclusion from running an essay through this software.

And second, at this point, anyone with half a brain should know not to straight up cut-and-paste essays drafted by AI. Not only do AI-drafted essays fail to capture the applicant's voice, they're notoriously inaccurate and can hallucinate details (e.g., fake case citations in legal filings).

To me, the real question for 2025 is what to do about students who use AI in smaller ways: on the front end as a brainstorming partner to consider and develop different essay ideas, in the editing phase to help wordsmith specific sentences, or at the very end to help shrink an essay or short answer to fit the word count (or the character count for the Common App's EC descriptions).

Are those uses of AI appropriate or are they "cheating"?

Do all colleges have the same opinion on that, or does it depend on the school?

And what do these AI detection programs do with these more subtle uses of AI? It's hard to imagine how effective they could possibly be, especially with all the iterations and editing involved.

Thoughts?


Ok, Mark, thanks for the heads up to listen

DP. I know he advertises in this forum. I think OP posted because I suggested this topic deserved its own thread, in a response to a post in the thread on Michigan essays.

However, I haven't yet been able to find anything that really corroborates the claim that Slate has these AI detectors built in. I wonder where this comes from; perhaps Technolutions is aware that this is a sensitive topic.
Anonymous
Almost all essays will end up with a 40-80% probability of having been generated by AI. Then what do AOs do? They ignore the flags.

Anonymous
So they're using AI to detect AI… wouldn't the AI be incentivized to intentionally report false results so more people use AI? Or are the different AIs battling each other for dominance?
Anonymous
Anonymous wrote:Almost all essays will end up with a 40-80% probability of having been generated by AI. Then what do AOs do? They ignore the flags.



Definitely not.
This is wrong. If you haven't used AI at all, Pangram and Originality (which you pay for) will show mostly green (less than 10% AI); usually they will say 100% human.

I'm an IEC, and I run all my clients' material through my paid versions, where the material is not shared for training purposes. I pay large sums for this privacy feature.

Anything that shows up as more than 12% AI, I get suspicious. Then, when I probe with the students and ask for first drafts, yes, they did use AI for that draft (plus you can tell; honestly, I read so many of these that I'll know which ones are 50%+ AI before I run them). Note that you can put the same drafts through QuillBot, GPTZero, and ZeroGPT and they will say 0% AI; those detectors are not useful. The technology will only get better. It's a slippery slope, and I have zero tolerance.

Yes, Slate now has the AI detector interface for the entire Common App. There was a presentation about this at our meeting in Ohio last month; there are probably visuals online at this point.
Anonymous
Anonymous wrote:So they’re using ai to detect for ai…wouldn’t ai be incentivized to intentionally report false results so more people use ai? Or are the different ai’s battling for dominance against each other?


Do you actually know how this works? Do you realize what it's looking for?

https://www.carnegiehighered.com/blog/slate-ai-features/
Anonymous
Anonymous wrote:Starting a new thread to discuss the following post from an earlier thread:

"YCBK is doing a podcast this week (th?) on how Slate has Turnitin/Originality/Pangram AI detection built in now.
So, schools will see all "red" on the screen for the essays when any AI is used.
Automatic rejection?
Tell your kids!!"

FWIW I'm eager to listen to the podcast episode, in part because I'm skeptical.

First, these programs are notoriously inaccurate, producing both false positives and false negatives, to the point where it seems absurd (irresponsible?) for an AO to draw a conclusion from running an essay through this software.

And second, at this point, anyone with half a brain should know not to straight up cut-and-paste essays drafted by AI. Not only do AI-drafted essays fail to capture the applicant's voice, they're notoriously inaccurate and can hallucinate details (e.g., fake case citations in legal filings).

To me, the real question for 2025 is what to do about students who use AI in smaller ways: on the front end as a brainstorming partner to consider and develop different essay ideas, in the editing phase to help wordsmith specific sentences, or at the very end to help shrink an essay or short answer to fit the word count (or the character count for the Common App's EC descriptions).

Are those uses of AI appropriate or are they "cheating"?


Do all colleges have the same opinion on that, or does it depend on the school?

And what do these AI detection programs do with these more subtle uses of AI? It's hard to imagine how effective they could possibly be, especially with all the iterations and editing involved.

Thoughts?


It depends on the school.

Members of the Yale Admissions Committee review the written components of an application to better understand an individual’s unique perspective, background, and insights related to their experiences and aspirations. Successful applications showcase a prospective student’s mind and character, demonstrating much more than merely fluent or cogent writing.

To protect the integrity of the selection process, the Yale Office of Undergraduate Admissions adheres to the Common Application’s Policy on Application Fraud, and applicants confirm several affirmation statements, including that all information submitted is the applicant’s “own work, factually true, and honestly presented.”

As detailed in the above statements, it is Yale’s policy that submitting “the substantive content or output of an artificial intelligence platform, technology, or algorithm” constitutes application fraud. Submitting personal statements or other written application responses composed by text-generating software may result in admission revocation or expulsion.

Using an AI platform to review one’s grammar or spelling, or to seek general advice or topic suggestions at the start of the writing process does not constitute application fraud. Some applicants may find AI tools useful in these ways; others may not.

For more insights on artificial intelligence and college applications, listen to the “Inside the Yale Admissions Office” podcast episode linked below.


https://admissions.yale.edu/ai-policy-statement

Anonymous
Anonymous wrote:Almost all essays will end up with a 40-80% probability of having been generated by AI. Then what do AOs do? They ignore the flags.



Wow.
All of my kids' essays came back 100% human in Turnitin, which is used by all the private T20s. What AI detector are you using?
Anonymous
Anonymous wrote:
Anonymous wrote:Starting a new thread to discuss the following post from an earlier thread:

"YCBK is doing a podcast this week (th?) on how Slate has Turnitin/Originality/Pangram AI detection built in now.
So, schools will see all "red" on the screen for the essays when any AI is used.
Automatic rejection?
Tell your kids!!"

FWIW I'm eager to listen to the podcast episode, in part because I'm skeptical.

First, these programs are notoriously inaccurate, producing both false positives and false negatives, to the point where it seems absurd (irresponsible?) for an AO to draw a conclusion from running an essay through this software.

And second, at this point, anyone with half a brain should know not to straight up cut-and-paste essays drafted by AI. Not only do AI-drafted essays fail to capture the applicant's voice, they're notoriously inaccurate and can hallucinate details (e.g., fake case citations in legal filings).

To me, the real question for 2025 is what to do about students who use AI in smaller ways: on the front end as a brainstorming partner to consider and develop different essay ideas, in the editing phase to help wordsmith specific sentences, or at the very end to help shrink an essay or short answer to fit the word count (or the character count for the Common App's EC descriptions).

Are those uses of AI appropriate or are they "cheating"?

Do all colleges have the same opinion on that, or does it depend on the school?

And what do these AI detection programs do with these more subtle uses of AI? It's hard to imagine how effective they could possibly be, especially with all the iterations and editing involved.

Thoughts?


Ok, Mark, thanks for the heads up to listen


lol, my thoughts too! Someone is always promoting “YCBK” here.