Even as AI becomes a common workplace tool, its use in hiring raises serious concerns that employers can’t afford to ignore.
Recent research suggests companies are being overwhelmed by AI-generated résumés. LinkedIn reports 11,000 applications per minute submitted through its platform, a 45% increase over the past year. The temptation for hiring managers to rely on off-the-shelf generative AI tools like ChatGPT is strong, but a new study published on Cornell University’s preprint server arXiv warns that doing so could open companies to claims of bias if a rejected candidate challenges the decision.
The study evaluated several state-of-the-art large language models (LLMs) from tech giants including OpenAI, Anthropic, Google, and Meta, analyzing both their predictive accuracy and fairness using impact ratio analysis across declared gender, race, and intersectional subgroups. These AI systems were tested on around 10,000 real-world job applications, revealing that the off-the-shelf tools most businesses would likely use to sift through résumés show significant bias.
While some LLMs, such as GPT-4o, showed near-perfect gender parity in candidate assessments, they demonstrated racial bias. When both gender and race were considered together, none of the models succeeded in achieving fair hiring outcomes, according to the researchers’ own evaluations. (The researchers did not respond to Fast Company’s requests for comment.)
The models’ impact ratios, a metric that compares selection rates between demographic groups to detect potential disparate impact, fell as low as 0.809 for race and 0.773 for intersectional groups. A ratio of 1.0 indicates parity, and both figures sit roughly 20% or more below it, at or below the 0.8 “four-fifths” benchmark that U.S. hiring guidelines typically use to flag disparate impact.
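For readers unfamiliar with the metric, an impact ratio is each group’s selection rate divided by the most-favored group’s rate; under the EEOC’s four-fifths rule, ratios below 0.8 are commonly treated as evidence of potential adverse impact. The Python sketch below illustrates the calculation with invented screening data; it is purely illustrative and does not reproduce the study’s code or dataset.

```python
# Illustrative impact-ratio (four-fifths rule) check.
# Group labels and outcomes below are hypothetical, for demonstration only.
from collections import defaultdict

def impact_ratios(decisions):
    """decisions: iterable of (group, selected) pairs, selected is True/False.
    Returns each group's selection rate divided by the highest group's rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: (demographic group, advanced to interview)
sample = [("A", True)] * 60 + [("A", False)] * 40 \
       + [("B", True)] * 46 + [("B", False)] * 54
for group, ratio in impact_ratios(sample).items():
    flag = "below 0.8 threshold" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.3f} ({flag})")
```

Here group B advances 46% of the time against group A’s 60%, yielding an impact ratio of about 0.77, which the four-fifths rule would flag, much like the intersectional figure the study reports.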
The findings offer little comfort to those who study organizational behavior and workplace dynamics. “The jobs market is chilly enough at the moment, so inflicting too much inhuman AI on job seekers seems like a cruel blow,” says Stefan Stern, visiting professor in management practice at Bayes Business School. (Stern was not involved in the study.) “There is a case for efficiency but there should also be humanity, especially if we are still interested in hiring human beings.”
Beyond legal risk, relying on AI in hiring can also alienate successful applicants, fostering a sense of distrust that can hurt the organization in the long run. Stern argues that candidates might reconsider joining a company that uses AI to screen them. “Why work for a firm that isn’t interested enough in you to get a fellow human to interview and assess you?” he asks.
In a world where artificial intelligence is becoming the norm, Stern believes that emotional intelligence—thoughtfully applied by hiring managers and leadership—can significantly improve employee well-being and retention. It can also shape a company’s culture and business practices moving forward.
“Too much heavy-handed use of AI would be a ‘red flag’ to me as a job hunter,” he says. “I want to work for and with other humans, not for and with machines.”