A new study from Cornell University highlights how AI-generated college admissions essays mimic the writing styles of male, privileged students, sparking a conversation about the inherent biases of AI tools in educational settings.
Artificial intelligence may be transforming the way students approach college admissions essays, but a new study by researchers at Cornell University suggests that this technological aid comes with notable biases. The findings, published in the Journal of Big Data, reveal that AI-generated essays tend to mirror the writing styles of male students from higher socioeconomic backgrounds.
“We wanted to find out what these patterns that we see in human-written essays look like in a ChatGPT world,” co-corresponding author AJ Alvero, an assistant research professor of information science at Cornell University, said in a news release. “If there is the strong connection with human writing and identity, how does that compare in AI-written essays?”
Alvero’s team analyzed more than 150,000 college admissions essays submitted to the University of California system and an elite East Coast engineering program, alongside 25,000 essays generated by the AI models GPT-3.5 and GPT-4 in response to identical essay prompts.
Their analysis used Linguistic Inquiry and Word Count (LIWC), a program that measures features of writing by comparing a text’s words against predefined lexicons.
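The core idea behind LIWC-style analysis can be sketched in a few lines: count what fraction of a text’s words fall into predefined category word lists. The categories and mini-lexicons below are illustrative stand-ins, not LIWC’s actual (proprietary and far larger) dictionaries.

```python
import re

# Hypothetical mini-lexicons for illustration only; LIWC's real
# dictionaries contain thousands of words across ~100 categories.
LEXICONS = {
    "social": {"we", "team", "club", "friends", "group"},
    "achievement": {"won", "earned", "succeeded", "award", "goal"},
}

def category_rates(text: str) -> dict:
    """Return each category's share of total words, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    return {
        cat: 100 * sum(w in lex for w in words) / total
        for cat, lex in LEXICONS.items()
    }

essay = "Our team won the robotics award after we set a shared goal."
print(category_rates(essay))
```

Comparing such category rates across large corpora of human- and AI-written essays is what lets researchers say one group “sounds like” another.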
The results revealed that AI-generated essays “sound” most like those written by male students from privileged backgrounds. Notably, AI tended to use longer words and exhibited less stylistic variety compared to human essays.
Additionally, despite having no real affiliations, AI wrote about groups and organizations at rates similar to human writers.
Discussing the broader implications, co-author Rene Kizilcec, an associate professor of information science at Cornell, stressed the potential misuse of AI in crafting college essays.
“It’s likely that students are going to be using AI to help them craft these essays – probably not asking it to just write the whole thing, but rather asking it for help and feedback,” Kizilcec said in the news release.
He cautioned that AI-generated suggestions might not reflect a student’s authentic voice.
“It’s important to remember that if you use an AI to help you write an essay, it’s probably going to sound less like you and more like something quite generic,” he added. “And students need to know that for the people reading these essays, it won’t be too difficult for them to figure out who has used AI extensively. The key will be to use it to help students tell their own stories and to enhance what they want to convey, not to replace their own voice.”
As tools like ChatGPT become increasingly integrated into educational environments, this study underscores the need for awareness regarding their biases and limitations.