The Hidden Biases of AI: What Disabled Job Seekers Need to Know

[Image: An individual stands in front of a large, illuminated 'AI' sign, surrounded by futuristic digital graphics and data displays.]

Today, I’m diving into something that’s been on my mind a lot lately: the role of artificial intelligence in hiring. AI has changed how we hire, making the process quicker and more efficient than ever. But as we jump on the AI bandwagon, we also need to talk about its potential downsides, especially for disabled candidates.

AI tools like ChatGPT have made hiring a lot smoother. They can zip through resumes, spotlight the good stuff, and flag any issues, making HR’s job a lot easier. And applicants are using them too: according to Bloomberg’s Sarah Green Carmichael, “Nearly half of recent hires used AI to apply for jobs, according to a survey by Resume Builder.” That’s pretty huge, right? But let’s not kid ourselves: AI has its flaws.

A recent article by Gus Alexiou in Forbes highlighted an experiment by University of Washington researchers showing that AI tools can be biased against resumes that mention disability. The researchers compared a standard CV against six modified versions, each highlighting a different disability-related achievement. The results were striking: ChatGPT ranked the disability-modified CVs above the unmodified control only 25% of the time. In other words, many qualified disabled candidates could be screened out before a human ever sees their application.

Commenting on the UW project, lead author Kate Glazko said, “Ranking resumes with AI is starting to proliferate, yet there’s not much research behind whether it’s safe and effective…. For a disabled job seeker, there’s always this question when you submit a resume of whether you should include disability credentials. I think disabled people consider that even when humans are the reviewers.” Biases like these discourage people from disclosing a disability at every stage, from applying for a job to working in one. Both humans and AI carry inherent biases that must be accounted for, and that starts with awareness and with bringing diverse perspectives to how we look at the data.

This is where human oversight comes in. AI can assist with hiring, but it shouldn’t replace human judgment. It’s like using a calculator: you need to understand the math to know whether the answer it gives you is right. We still need people to check that the AI’s decisions make sense. And even then, nothing is foolproof.

Survey data also showed that many job seekers still had to tweak their AI-generated content to avoid sounding like a robot: 46% said they edited the output “some,” and only 1% didn’t edit it at all. So while AI is a handy tool, we can’t trust it blindly, whether you’re an applicant or a hiring manager.

As we move forward, we need to balance the speed and efficiency of AI with the essential human touch. Using AI as a tool rather than a replacement will help us create hiring practices that truly value the contributions of disabled candidates.

Source: Gus Alexiou, “ChatGPT Is Biased Against Resumes Mentioning Disability, Research Shows,” Forbes.