
How can we effectively identify and filter out statements of purpose (SoPs) and recommendation letters that are generated by AI tools?

The authenticity of these documents is crucial for our admission process, but distinguishing AI-generated content from human-written content has become increasingly challenging. Are there any known methods, tools, or best practices for detecting AI-generated texts in academic admissions?

Additionally, how should institutions approach this issue ethically and practically, considering the rapid advancement of AI technology?

  • Hard to believe this is a real problem at the moment: SoPs are too personal (AI can't explain a poor grade on a particular transcript, for example, especially since it doesn't know the reason), and you can always request recommendation letters from official email addresses.
    – Allure
    Commented Jan 8 at 9:20
  • Do you already have a system in place to filter out fake recommendation letters written by the students themselves without the professor's approval? How does that work? Because that looks like the hardest and most important problem to me; this question might be an XY problem. If a recommender used AI as a writing assistant but signed the letter and confirmed that its contents are accurate, that's fine by me. Commented Jan 8 at 9:59
  • @FedericoPoloni We require recommendation letters to be submitted from an official email address not associated with the candidate. The candidate is going to have to be doing pretty well to fake a submission from [email protected] Commented Jan 8 at 10:41
  • @IanSudbery "The candidate is going to have to be doing pretty well to fake a submission from [email protected]" Depending on how the sending and receiving mail servers are set up, that might not be especially hard. Without extra authentication, forging a From: header is exactly as hard as writing a bogus return address on a physical letter. (A sketch of checking such authentication results follows these comments.)
    – Ray
    Commented Jan 8 at 17:48
  • "The authenticity of these documents is crucial for our admission process" OMG really? I mean, I wrote mine myself, but I'm a terrible writer. I've been assuming the transcripts and portfolio are much more important. Hopefully statements of purpose are only crucial for degrees related to writing. Commented Jan 9 at 4:54

2 Answers


There is no need to filter out AI-generated statements of purpose.

Even without AI, it is not reasonable to assume that applicants have generally written their SoPs entirely by themselves. Many have received substantial input from others, or have even had someone else write the whole thing for them. The spread of LLMs might actually be beneficial here, because it could reduce the gap between applicants with friends or family who know how to write a good SoP and those without.

Content-wise, an AI-generated SoP will probably read well but be very generic (because it will mimic the typical examples of SoPs available on the internet). To a large extent, LLMs are bullshit generators: they create text that sounds good at first glance but lacks coherent arguments. Thus, I can easily believe that an LLM can write a decent SoP, but I'd be very surprised if it produced a great one.
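
That generic, predictable quality is exactly what statistical "AI detectors" try to measure: text that a language model finds highly predictable (low perplexity) gets flagged as likely machine-generated. The sketch below illustrates the idea, assuming the Hugging Face transformers and torch packages; the threshold and sample text are illustrative placeholders, and such scores are notoriously unreliable (they misfire badly on non-native writers), so at most they can flag a document for human review.

    # Rough sketch of perplexity-based screening; a weak statistical hint,
    # not proof of AI authorship. Assumes `pip install torch transformers`.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Perplexity of `text` under GPT-2 (lower = more predictable)."""
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        with torch.no_grad():
            # Using the input ids as labels makes the model return the average
            # cross-entropy over the sequence; exp() turns that into perplexity.
            out = model(**enc, labels=enc["input_ids"])
        return torch.exp(out.loss).item()

    sop_text = "My purpose in applying to this program is ..."  # placeholder
    # The cutoff of 50.0 is an illustrative guess; calibrate on your own
    # corpus, and treat a hit as "review manually", never as proof of fraud.
    if perplexity(sop_text) < 50.0:
        print("Unusually predictable text; review manually.")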

In conclusion, if AI-generated SoPs could have a significant impact on your admissions process, there is probably something wrong with the process. The solution is to improve the admissions process, not to try to filter out AI-generated texts.

  • "I can easily believe that an LLM can write a decent SoP, but I'd be very surprised if it produced a great one." It can actually turn a good one into a generic one; those who write great ones always win... Commented Jan 8 at 21:36
  • One thing that seems to be creeping in - though I haven't tested it myself - is AI-assisted writing. In that case the applicant throws something together and uses an AI tool to rewrite it (often in more formal language). I wonder how the OP would perceive that. Is it just a spelling/grammar checker with a sense of superiority?
    – Chris H
    Commented Jan 9 at 11:58
  • @ChrisH What I'm looking for in an SoP from a PhD applicant is that the student is actually interested in working specifically with me, understands what we do in my lab, and is specifically keen on this particular project. For a job, an SoP can help the assessor map evidence of skills/experience to the job criteria. Good writing can help me see that if it's true, but it can't hide its absence. Commented Jan 9 at 18:52
  • Most human writing is also akin to a bullshit generator. Just compare the average Quora answer to the average StackExchange answer. The former often makes no sense at all or doesn't actually answer the question. Commented Jan 11 at 0:04
  • @JonathanReez That's my favorite laugh about LLM criticism. "They don't know what they are really talking about" is about the most common criticism of human writing as well... Commented Jan 11 at 1:27

Additionally, how should institutions approach this issue ethically and practically, considering the rapid advancement of AI technology?

If an AI can do a great job on a given piece of student writing and in-person examination is not possible, the only feasible solution is to drop such documents as a requirement altogether. Use other criteria to evaluate candidates, as you can no longer rely on people's essay-writing skills to be the deciding factor.

Some people have mentioned that current LLMs cannot produce great essays, and I agree they can't beat the best 10% of human writers just yet, but new versions of AI software come out every year, and it's reasonable to believe that by 2030 AI will be able to write better essays than 99% of all humans. You might as well put yourself ahead of the curve and stop grading people on work that's best delegated to a machine.

  • "but new versions of AI software come out every year" - MONTH, not year. "it's reasonable to believe that by 2030 AI will be able to write better essays than 99% of all humans" - make that 2025. 2024 if someone puts focus on it.
    – TomTom
    Commented Jan 11 at 13:21
  • @TomTom eh, I'm a huge fan of ChatGPT but I think that the 90%->99% gap will take a long time to cross. The 99%->100% will take even longer. Commented Jan 11 at 23:14
  • Nah, see, humans are not really 100% on top - the best are, not the rest. Also, history shows that it does not take long before humans are left in the dust. Plenty of AI research examples demonstrate that, and now those techniques are being built into LLMs. Check the history of AI Go players to see what is coming. Or the AI poker research, or the AI Diplomacy research (that is a game, actually, heavy on negotiation). All demonstrated total AI dominance.
    – TomTom
    Commented Jan 12 at 8:39
  • @TomTom Wanna bet $1000 on this not happening within the next 1 year? At what odds? Money talks, everything else walks ;) Commented Jan 12 at 9:44
  • What about a SENSIBLE bet? And stop with the toy money. Seriously, 1 year is irrelevant - we are talking about ending careers and you say "oh, tomorrow". What about 3-5 years? Still willing to put your money (and please, REAL money, not a tip) where your mouth is? Btw, if money talks, you are a mouse unheard; $1000 is a joke.
    – TomTom
    Commented Jan 12 at 13:47

