Today, I’m diving into something that's been on my mind a lot lately: the role of artificial intelligence in hiring. AI has completely changed how we hire, making things quicker and more efficient than ever before. But as we jump on the AI bandwagon, we also need to talk about its potential downsides, especially when it comes to disabled candidates.
AI tools, like ChatGPT, have made hiring a lot smoother. They can zip through resumes, spotlight the good stuff, and flag any issues, making HR's job a lot easier. According to Bloomberg’s Sarah Green Carmichael, “Nearly half of recent hires used AI to apply for jobs, according to a survey by Resume Builder.” This is pretty huge, right? But let’s not kid ourselves—AI has its flaws.
A recent article by Gus Alexiou in Forbes highlighted an experiment by University of Washington researchers that found AI tools could be biased against resumes that mention disability. They compared a standard CV with six different versions, each highlighting different disability-related achievements. The results were pretty shocking: ChatGPT only ranked the disability-modified CVs higher than the control one 25% of the time. This means many qualified disabled candidates might be overlooked.
Commenting on the UW project, lead author Kate Glazko said, “Ranking resumes with AI is starting to proliferate, yet there’s not much research behind whether it’s safe and effective…. For a disabled job seeker, there’s always this question when you submit a resume of whether you should include disability credentials. I think disabled people consider that even when humans are the reviewers.” These types of biases often prevent disclosure of disability in the workplace, in all aspects—from being a candidate to an employee. Both humans and AI still have inherent biases that must be accounted for, and that starts with awareness and diverse perspectives in looking at the data.
This is where human oversight comes in. AI can help with hiring, but it shouldn’t replace human judgment. It’s like using a calculator—you need to understand the math first to know if the calculator’s answer is right. We still need humans to ensure that the AI’s decisions make sense. And even then, nothing is foolproof.
Survey data showed that many job seekers still needed to tweak their AI-generated content to avoid sounding like a robot, with 46% saying they edited the output “some” and only 1% not editing it at all. So, while AI is a handy tool, we can’t trust it blindly—whether you’re an applicant or a hiring manager.
As we move forward, we need to balance the speed and efficiency of AI with the essential human touch. Using AI as a tool rather than a replacement will help us create hiring practices that truly value the contributions of disabled candidates.
ChatGPT Is Biased Against Resumes Mentioning Disability, Research Shows
How AI Can Make Disabled People Stronger Advocates
Recently, a neighbor in my apartment complex became increasingly irate anytime Canine Companions® Pico and I would pass her door. She claimed his incidental shedding as we walked past was done intentionally, and she didn't appreciate his ruining her welcome mat, which she had placed in a public hallway. Her disdain became so pervasive she was unwilling to engage in civil discourse.
When I raised my concerns with the property manager, I was met with the suggestion that I relocate if we couldn't find a "peaceful solution." It was a frustrating and exhausting experience.
I went into advocacy mode. I knew the laws. I knew I could address the situation from the perspective of the ADA, housing laws, and even state fire codes. But I just didn't have the energy. I thought briefly about dropping the issue completely, but I knew that would be to my detriment as the situation was ongoing.
And then it came to me: AI is my friend.
I turned to AI to craft a letter to my leasing office, documenting our meeting, their response, and the concerns I still had. I worked with the AI in much the same way I would engage with a friend or colleague at first. Imagine explaining what happened to them over text. I didn't think about it too much. I simply documented what happened as best I could without being overly concerned with whether I was doing it "right."
As I progressed, I wanted something with a bit more force, something harder for management to dismiss. So I tweaked my approach. I asked the AI to cite relevant local laws that might strengthen my position. Almost instantly, I was presented with research pertaining to building safety, means of egress, and fire codes, as well as a bit of legal language.
No system is perfect, and I still did my due diligence in verifying the accuracy; nothing will ever replace the human element and the lived experiences that shape advocacy work. But I had a very strong foundation in record time. The hours and aggravation saved, the research placed right in front of me in the blink of an eye? I couldn't help but think of the old Mastercard commercial.
Comcast Internet: $50 a month
Subscription to ChatGPT-4: $20 a month
Energy saved as AI helps you advocate? Priceless.
AI is the ultimate life hack, and I can't wait to see what's next. This technology is here, and used wisely, can be the ultimate energy saver. Yes, it's only as good as its inputs and the questions we ask, but that is the very nature of the human brain too. When we ask better questions, we get better answers. By leveraging these technologies, disabled people can continue to do the advocacy that fuels us without feeling burnt out by the nitty-gritty. For those just starting on their advocacy journey, the playing field is instantly leveled. The question isn't "Should we be using this technology?" Rather, the focus should be on how.