YCBK Podcast Episode on College Admissions' Use of AI Detection Programs

Anonymous
Clickbait thread
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Starting a new thread to discuss the following post from an earlier thread:

"YCBK is doing a podcast this week (th?) on how Slate has Turnitin/Originality/Pangram AI detection built in now.
So, schools will see all "red" on the screen for the essays when any AI is used.
Automatic rejection?
Tell your kids!!"

FWIW I'm eager to listen to the podcast episode, in part because I'm skeptical.

First, these programs are notoriously inaccurate, with both false positives and false negatives, to the point where it seems absurd (irresponsible?) for an AO to draw a conclusion based on the results of running an essay through this software.

And second, at this point, anyone with half a brain should know not to straight-up cut and paste essays drafted by AI. Not only do AI-drafted essays fail to capture the applicant's voice, they're notoriously inaccurate and can hallucinate details (e.g., fake case citations in legal filings).

To me, the real question for 2025 is what to do about students who use AI in smaller ways: on the front end as a brainstorming partner to consider and develop different essay ideas, in the editing phase to help wordsmith specific sentences, or at the very end to help shrink an essay or short answer to fit the word count (or character count for the Common App's EC descriptions).

Are those uses of AI appropriate or are they "cheating"?

Do all colleges have the same opinion on that, or does it depend on the school?

And what do these AI detection programs do with these more subtle uses of AI? It's hard to imagine how effective they could possibly be, especially with all the iterations and editing involved.

Thoughts?


Ok, Mark, thanks for the heads up to listen


lol, my thoughts too! Someone is always promoting “YCBK” here.


No. I posted it in another thread (Michigan essay) as a "warning," and someone else created a new thread here. Not sure why. The episode hasn't come out yet.
Anonymous
Anonymous wrote:


Ok, Mark, thanks for the heads up to listen

DP. I know he advertises in this forum. I think OP posted because I suggested this topic deserved its own thread, in a response to a post in the thread on Michigan essays.

However, I haven't yet been able to find anything that corroborates the claim that Slate now has these AI detectors built in. I wonder where this comes from, though perhaps Technolutions is aware that this is a sensitive topic.


Yup. I've never listened to YCBK (had to look it up). I'm just interested in the topic of the role of AI and AI detectors in admissions. One of our kids is applying this year, and the other is three years younger.

It's too late for our senior (who is busy writing supplements now), but for our freshman, I'm not-so-secretly hoping that AI puts an end to the endless essays/supplements. Not to be cynical, but if tons of kids are using AI to write their essays and tons of AOs are using AI to summarize and score them, what exactly is the purpose of it all? Trying to use AI to impress AI is madness.

That said, if AOs think they can use AI detectors to shut this down, I'm curious to learn more.
Anonymous
Anonymous wrote:

No. I posted it in another thread (Michigan essay) as a "warning," and someone else created a new thread here. Not sure why. The episode hasn't come out yet.


Two reasons: I didn't want to forget, and the general issue interests me beyond whatever they end up sharing on the podcast episode.

I've already learned a few things from the PPs, so I'm glad I posted. For those who are interested, please keep your thoughts and personal experiences with this stuff coming.
Anonymous
So I put this essay into Originality.ai (which Turnitin is based on):

https://apply.jhu.edu/hopkins-insider/the-art-of-imperfection/
99% Likely AI in the document (the doc is pretty red)


https://apply.jhu.edu/hopkins-insider/finding-purpose-in-trivial-projects/
97% Likely AI

https://apply.jhu.edu/hopkins-insider/korean-sticky-notes/
60% Likely Original (green)

https://apply.jhu.edu/hopkins-insider/being-the-handyman/
100% Likely AI (all red)

Then, I went to the 2016 essays:

https://apply.jhu.edu/hopkins-insider/the-palate-of-my-mind/
99% Likely AI (all red)

UVA Essays:
2019:

Stories from Porch Swing here:
https://uvamagazine.org/articles/how_to_write_your_way_into_uva
69% Likely Original (mostly green)

My Mom's Gifts to Me:
https://uvamagazine.org/articles/how_to_write_your_way_into_uva
54% Likely Original (mostly green)


What is going on?
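One more thing worth noting: since all of these essays were published between 2016 and 2019, before modern LLMs existed, every "Likely AI" verdict on them is a false positive by construction. A quick back-of-the-envelope sketch in Python (the scores are just the numbers above treated as AI probabilities; the 0.5 cutoff is an assumed flagging threshold, not anything Originality.ai documents):

```python
# Toy illustration: false-positive rate of an AI detector on essays
# that are known to be human-written (all published pre-LLM).
# Scores mirror the results above; 0.5 is an assumed flagging cutoff.

known_human_scores = {
    "the-art-of-imperfection": 0.99,
    "finding-purpose-in-trivial-projects": 0.97,
    "korean-sticky-notes": 0.40,          # "60% Likely Original"
    "being-the-handyman": 1.00,
    "the-palate-of-my-mind": 0.99,
    "stories-from-porch-swing": 0.31,     # "69% Likely Original"
    "my-moms-gifts-to-me": 0.46,          # "54% Likely Original"
}

THRESHOLD = 0.5  # assumed: anything above this gets flagged as AI

def false_positive_rate(scores, threshold):
    """Fraction of known-human texts the detector flags as AI."""
    flagged = sum(1 for s in scores.values() if s > threshold)
    return flagged / len(scores)

print(f"{false_positive_rate(known_human_scores, THRESHOLD):.0%}")  # 57%
```

Four of the seven known-human essays get flagged, which is exactly the kind of base-rate problem that makes a "red screen" in Slate meaningless on its own.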

Anonymous
I want to force one of these podcasters to talk about this. Good sleuthing...

I'll post a question to a few podcasters. Let's see who wants to tackle it.
Anonymous
Anonymous wrote:
Anonymous wrote:

Thoughts?


Ok, Mark, thanks for the heads up to listen


Lol.
Anonymous
I think it's the nerdy way JHU kids write.

CEG essay examples:

https://www.collegeessayguy.com/blog/college-essay-examples#U%20of%20Michigan%20Supplemental%20Essay (My pillows & me sample)
83% Likely Original (all green)
Anonymous
Anonymous wrote:So I put this essay into Originality.ai (which Turnitin is based on):

What is going on?



Yup. These AI detection programs are notoriously inaccurate.

That said, is it possible these essays and articles get flagged as AI-authored because they've been on the internet for a long time and are therefore part of the AI training materials? (Whereas a brand-new, original essay, never before published online or fed to an AI, would not be flagged the same way?)

I'm just guessing here. I know very little about AI detection but am now quite interested.

Anonymous
Anonymous wrote:

Yup. These AI detection programs are notoriously inaccurate.

That said, is it possible these essays and articles get flagged as AI-authored because they've been on the internet for a long time and are therefore part of the AI training materials? (Whereas a brand-new, original essay, never before published online or fed to an AI, would not be flagged the same way?)

I'm just guessing here. I know very little about AI detection but am now quite interested.



No, that's not how AI detection works; otherwise every NYT opinion article or FT essay would be flagged too.
It's based on writing style: if you look at the details, the sentences that are most "red" are the ones that sound formulaic and robotic.
I wonder if it's Grammarly? Tricolons? Oxford commas? Etc.
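That squares with how these detectors are usually described: they score statistical properties of the prose (model-based measures like token perplexity, plus surface regularity), not matches against training data. One commonly cited heuristic is "burstiness," the variation in sentence length; human prose tends to mix short and long sentences, while LLM output is often more uniform. A toy sketch (this is not any detector's actual algorithm, just an illustration of the idea):

```python
import statistics

def burstiness(text):
    """Toy stylistic signal: spread of sentence lengths (in words).
    Real detectors use model-based scores such as token perplexity;
    this only illustrates why uniform, formulaic prose reads as 'AI'."""
    # Crude sentence split on terminal punctuation.
    for p in "!?":
        text = text.replace(p, ".")
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths)

uniform = ("The sun rose over the hill. The birds sang in the trees. "
           "The day began with quiet promise.")
varied = ("Dawn. The birds went wild in the oaks behind our house, and by "
          "the time I got downstairs my brother had already eaten my toast.")

print(burstiness(uniform))  # 0.0 -- perfectly even, 'robotic' rhythm
print(burstiness(varied))   # 11.5 -- mixed short and long sentences
```

Grammarly-style smoothing, tricolons, and an even sentence rhythm would all push a text toward the "uniform" end of a metric like this, which may be part of why polished human essays come back red.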
Anonymous
Anonymous wrote:I think it's the nerdy way JHU kids write.

CEG essay examples:

https://www.collegeessayguy.com/blog/college-essay-examples#U%20of%20Michigan%20Supplemental%20Essay (My pillows & me sample)
83% likely original (all green)


Yes, this is the reason I bet.
Anonymous
Anonymous wrote:

No, that's not how AI detection works; otherwise every NYT opinion article or FT essay would be flagged too.
It's based on writing style: if you look at the details, the sentences that are most "red" are the ones that sound formulaic and robotic.
I wonder if it's Grammarly? Tricolons? Oxford commas? Etc.


Gotcha. Thanks.
Anonymous
Think they aren’t using AI the way we think?


“So when we're in the Slate Reader, that's the place where we look at students' applications and evaluate applications. And we have the ability to go into a student's essay or on their transcript or anywhere in the file and highlight things that we want to make sure that the next person reading the file sees. And it's interesting with each iteration of reading, every time that the file is submitted, that highlighting color is different, right?

So I can go back and see if three other folks or three teams have looked at the file before me, and they've all highlighted something. There will be like pink highlighting, yellow highlighting, and green highlighting. And each of them is from a different time that the file was looked at.

But what Slate is offering now is AI highlighting. So you can open up a file and click the AI highlighting button, and Slate will highlight things for you that it wants to make sure that you don't miss. So I think that this might be available to me now.”

From Your College Bound Kid | Admission Tips, Admission Trends & Admission Interviews: New Ways In Which AI Is Being Used In Admission Offices, Oct 8, 2025
Anonymous
AI stole our writing.

Surprise, surprise, it labels our writing as AI.
Anonymous
Anonymous wrote:

Ok, Mark, thanks for the heads up to listen

lol, my thoughts too! Someone is always promoting “YCBK” here.


This comment made me laugh, too, but I could be that "someone." I promise, I'm not Mark.