Anonymous wrote:
Ok, Mark, thanks for the heads up to listen
lol, my thoughts too! Someone is always promoting “YCBK” here.
Anonymous wrote:So I put this essay into Originality.ai (what Turnitin is based on):
https://apply.jhu.edu/hopkins-insider/the-art-of-imperfection/
99% Likely AI (the doc is pretty red)
https://apply.jhu.edu/hopkins-insider/finding-purpose-in-trivial-projects/
97% Likely AI
https://apply.jhu.edu/hopkins-insider/korean-sticky-notes/
60% Likely Original (green)
https://apply.jhu.edu/hopkins-insider/being-the-handyman/
100% Likely AI (all red)
Then, I went to the 2016 essays:
https://apply.jhu.edu/hopkins-insider/the-palate-of-my-mind/
99% Likely AI (all red)
UVA Essays:
2019:
Stories from Porch Swing here:
https://uvamagazine.org/articles/how_to_write_your_way_into_uva
69% Likely original (mostly green)
My Mom's Gifts to Me:
https://uvamagazine.org/articles/how_to_write_your_way_into_uva
54% Likely Original (mostly green)
What is going on?
Yup. These AI detection programs are notoriously inaccurate.
That said, is it possible these essays and articles get flagged as AI-authored because they've been publicly available online for a long time and are therefore part of the AI training data? (Whereas a brand-new, original essay, never published online or fed to an AI, wouldn't be flagged the same way?)
I'm just guessing here. I know very little about AI detection but am now quite interested.
No, that's not how AI detection works. If it were, every NYT opinion article or FT essay would be flagged too.
It's based on writing style: if you look at the details, the sentences flagged most "red" are the ones that sound formulaic and robotic.
I wonder if it's Grammarly? Tricolons? Oxford commas? Etc.
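For what it's worth, nobody outside these companies knows exactly what Originality.ai scores, but style-based detectors are commonly described as measuring "perplexity" (how predictable the word choices are) and "burstiness" (how much sentence length varies; human prose varies more). A toy sketch of the burstiness half, with made-up example sentences, just to show the idea:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    A crude proxy for the 'burstiness' signal detectors reportedly
    use: flat, uniform sentence lengths read as machine-like, while
    human prose mixes long and short sentences.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Uniform, formulaic sentences score low...
robotic = ("AI is transformative. AI is powerful. "
           "AI is everywhere. AI is the future.")
# ...while varied human-sounding prose scores higher.
varied = ("I burned the rice again. My grandmother, who cooked for "
          "forty years in a two-burner kitchen, just laughed. "
          "Practice, she said.")
assert burstiness(robotic) < burstiness(varied)
```

Obviously the real products combine many signals (and use actual language models for the perplexity part), so don't read anything into a single toy number.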
Anonymous wrote:I think it's the nerdy way JHU kids write.
CEG essay examples:
https://www.collegeessayguy.com/blog/college-essay-examples#U%20of%20Michigan%20Supplemental%20Essay (My pillows & me sample)
83% likely original (all green)
Anonymous wrote:Starting a new thread to discuss the following post from an earlier thread:
"YCBK is doing a podcast this week (th?) on how Slate has Turnitin/Originality/Pangram AI detection built in now.
So, schools will see all "red" on the screen for the essays when any AI is used.
Automatic rejection?
Tell your kids!!"
FWIW I'm eager to listen to the podcast episode, in part because I'm skeptical.
First, these programs are notoriously inaccurate, with both false positives and false negatives, to the point where it seems absurd (irresponsible?) for an AO to draw a conclusion from running an essay through this software.
And second, at this point, anyone with half a brain should know not to cut and paste essays drafted by AI outright. Not only do AI-drafted essays fail to capture the applicant's voice, they're notoriously inaccurate and can hallucinate details (e.g., fake case citations in legal filings).
To me, the real question for 2025 is what to do about students who use AI in smaller ways: on the front end as a brainstorming partner to consider and develop different essay ideas, in the editing phase to help wordsmith specific sentences, or at the very end to help shrink an essay or short answer to fit the word count (or character count for the Common App's EC descriptions).
Are those uses of AI appropriate or are they "cheating"?
Do all colleges have the same opinion on that, or does it depend on the school?
And what do these AI detection programs do with these more subtle uses of AI? It's hard to imagine how effective they could possibly be, especially with all the iterations and editing involved.
Thoughts?
Ok, Mark, thanks for the heads up to listen
lol, my thoughts too! Someone is always promoting “YCBK” here.
No. I posted it in another thread (Michigan essay) as a "warning," and someone else created a new thread here. Not sure why. The episode hasn't come out yet.
Anonymous wrote:Ok, Mark, thanks for the heads up to listen
DP. I know he advertises in this forum. I think OP posted because I suggested this topic deserved its own thread, in a response to a post in the thread on Michigan essays.
However, I haven't yet been able to find anything that corroborates the claim that Slate has AI detectors built in. I wonder where this comes from, though perhaps Technolutions is aware that this is a sensitive topic.