How we can make AI less biased against disabled people

AI continues to pervade our work lives. According to recent research by the Society for Human Resource Management, one in four employers use AI in human resources functions. Meanwhile, technology is becoming an increasingly common presence in everything from education and healthcare to criminal justice and law.

Yet we largely aren’t addressing bias in any meaningful way, and for anyone with a disability, that can be a real problem.

Indeed, a Pennsylvania State University study published last year found that trained AI models exhibit significant disability bias. “Models that fail to account for the contextual nuances of disability-related language can lead to unfair censorship and harmful misrepresentations of a marginalized population,” the researchers warned, “exacerbating existing social inequalities.”

In practical terms, an automated résumé screener, for example, may deem candidates unsuitable for a position if they have unexplained gaps in education or employment history, effectively discriminating against people with disabilities who may need time off for their health.
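The gap-screening logic described above can be sketched in a few lines. This is a hypothetical toy, not any vendor's actual system; the function name, threshold, and data are invented for illustration. The point is that a rule this simple has no way to distinguish a health-related absence from anything else.

```python
from datetime import date

# Hypothetical illustration of the naive screening rule described above.
# Each job is a (start, end) date pair; the rule rejects any candidate
# with an unexplained gap longer than a year, regardless of the reason.
def has_disqualifying_gap(jobs: list[tuple[date, date]],
                          max_gap_days: int = 365) -> bool:
    jobs = sorted(jobs)  # order by start date
    for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
        if (next_start - prev_end).days > max_gap_days:
            return True
    return False

# A candidate who took roughly 15 months off for medical treatment is
# screened out; the rule cannot see why the gap exists.
career = [(date(2015, 6, 1), date(2019, 5, 31)),
          (date(2020, 9, 1), date(2024, 1, 15))]
print(has_disqualifying_gap(career))  # True: rejected for the health-related gap
```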

“People may be engaging with algorithmic systems and have no idea that that is what they’re interacting with,” says Ariana Aboulafia, who is Policy Counsel for Disability Rights in Technology Policy at the Center for Democracy and Technology, and has multiple disabilities, including superior mesenteric artery syndrome. (Superior mesenteric artery syndrome is a rare condition that can cause various symptoms, including severe malnutrition.)

“When I was diagnosed with superior mesenteric artery syndrome, I took a year off of law school because I was very sick,” Aboulafia says. “Is it possible that I have applied to a job where a résumé screener screened out my résumé on the basis of having an unexplained year? That is absolutely possible.”

Sen. Ron Wyden of Oregon alluded to the risk of bias during a Senate Finance Committee meeting about the “promise and pitfalls” of AI in healthcare in early February. Wyden, who chairs the committee, noted that while the technology is improving efficiency in the healthcare system by helping doctors with tasks such as pre-populating clinical notes, “these big data systems are riddled with bias that discriminates against patients based on race, gender, sexual orientation, and disability.” Government programs like Medicare and Medicaid, for example, use AI to determine the level of care a patient receives, but that reliance is leading to “worse patient outcomes,” he said.

In 2020, the Center for Democracy and Technology (CDT) released a report listing several examples of these worse patient outcomes. It analyzed lawsuits filed over the prior decade related to algorithms used to assess people’s eligibility for government benefits. In multiple cases, algorithms significantly cut home- and community-based services (HCBS) to the recipients’ detriment. For example, in 2011, Idaho began using an algorithm to assess recipients’ budgets for HCBS under Medicaid. A court found that the tool, which had been developed with a small, limited data set, was unconstitutional, as CDT noted in its report. In 2017, there was a similar case in Arkansas, where the state’s Department of Human Services introduced an algorithm that cut several Medicaid recipients’ HCBS care.

Some legislators have proposed measures to address these technological biases. Wyden promoted his Algorithmic Accountability Act during the meeting, which he said could increase transparency around AI systems and “empower consumers to make informed choices.” (The bill is currently awaiting review by the Committee on Commerce, Science, and Transportation.) And, in late October, President Joe Biden released an executive order on AI that explicitly mentioned disabled people and addressed broad issues such as safety, privacy, and civil rights.

Aboulafia says the executive order was a powerful first step toward making AI systems less ableist. “Inclusion of disability in these conversations about technology [and] recognition of how technology can impact disabled people” is key, she says. But there’s more to do.

Aboulafia believes that algorithmic auditing—assessing an AI system for whether it displays bias—could also be an effective measure.

But some experts disagree, warning that algorithmic auditing, if done improperly or incompletely, could legitimize AI systems that are inherently ableist. In other words, it matters who performs the audit (the auditor must be truly independent) and what the audit is designed to assess. An auditor should be empowered to question every underlying assumption a system’s developers make, not merely the algorithm’s efficacy as those developers define it.

Elham Tabassi, a scientist at the National Institute of Standards and Technology and the Associate Director for Emerging Technologies in the Information Technology Laboratory, suggests working with the communities affected to study the impact of AI systems on real people, as opposed to solely analyzing these algorithms in a laboratory. “We have to make sure that the evaluation is holistic, it has the right test data, it has the right metrics, the test environment,” she says. “So, like everything else, it becomes . . . about the quality of the work and how good a job has been done.”

https://www.fastcompany.com/91054056/how-we-can-make-ai-less-biased-against-disabled-people?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Mar 11, 2024

