Disability and the Metaeugenic Heart of AI

A Social Science Matters article by Rua M. Williams, author of Disabling Intelligences: Legacies of Eugenics and How We are Wrong about AI

To mark this year’s International Day of Persons with Disabilities (December 3rd), we met with Rua M. Williams, author of "Disabling Intelligences: Legacies of Eugenics and How We are Wrong about AI," to discuss the complex relationship between disability, AI ethics, and social justice.

What does your book, Disabling Intelligences: Legacies of Eugenics and How We are Wrong about AI, aim to achieve?

When discussing the topics in this book, I often say something like, "We have been discussing AI's eugenics problem, but it is really our eugenics problem." My primary aim with this book is to help people see how eugenic logics persist in our society, and especially within our everyday attitudes towards each other and ourselves. All technology ethics flows outward from our interpersonal ethics.

The book bridges science and technology studies and critical disability studies. What does this interdisciplinary approach reveal that might be missed in more traditional analyses of AI?

One of the frustrations I have had with AI ethics over the years has been the ultimate failure of ethical analysis to go deeper than the empirically identifiable biases, right down to the root of the flawed beliefs that determine what technologies get built and how they're used. A lot of effort is spent making systems more fair, without ever asking whether the system itself is fundamentally designed to cause harm, whether it is egalitarian about it or not. Science and Technology Studies supports a more philosophical approach to understanding and evaluating sociotechnical systems, but philosophy, especially ethics, has historical problems with recognizing and respecting disabled life. So I bring in disability studies, and, more importantly, the lived experiences of disabled people, to complicate established notions of fairness, ethics, and life.

A few years ago, I led a study exploring the ethical reasoning of computer science students, and we found that students seemed incapable of recognizing disability as a vulnerable class. The fatal consequences of algorithmic decision-making systems for disabled people were either completely unnoticed or regarded as not only inevitable but unremarkable. It is this gap in reasoning I seek to reveal through this book, not merely by pointing out the vulnerability of disabled people as a class, but by prompting readers to grapple with their own experiences of disability.

How can inclusivity be achieved when it comes to AI? What are some of the injustices and biases that surround knowledge production around disability? How does ‘metaeugenics’ manifest in contemporary tech culture?

I tend to turn my nose up at "inclusivity." Other disability advocates and I have said before, "You invite me to sit, but it is still your table." I'm not particularly interested in being included in a system which is fundamentally hostile to embodied and cognitive difference. That said, I know disabled people have unique ideas about what algorithmic decision-making support systems could accomplish. We have deeper problems with access to development and access to decision-making power that make it more difficult for disabled people to lead in these spaces.

Because of this, and because of wider metaeugenic beliefs about what kinds of lives are worth living, the kinds of technologies that get produced tend to center the idea that a disabled body must be corrected - that a correction toward a norm is the most urgent need and desire of disabled people. This curative imaginary forecloses other kinds of projects which look outward with an aim to change the collective environment and our society instead of individual bodies.

More than this objectification, however, is the wider disregard for disability and the fundamental attitude that disabled life is expendable. Disability is most often not considered at all; when it is, the harms to disabled people from such technologies are regarded either as obvious and unobjectionable or as an exception to an otherwise perfect system. The disabled body is regarded as an outlier to be discarded rather than as evidence that the system itself has failed.

Why do you think disabled and racialized communities bear the brunt of AI’s negative consequences? What structural or historical factors contribute to this pattern?

There are the exhaustively proven consequences of how history has shaped data, and how that shapes what kinds of connections and outputs AI systems are capable of making. This could be briefly summarized as what Joy Buolamwini calls "power shadows," whereby historic injustice produces current injustice. This often gets regarded as an artifact or a "glitch" - a bug to be fixed. But Meredith Broussard contests this, giving us an understanding of "Technochauvinism" as the belief structures which hold technological solutions and ideals of "innovation" and "progress" above any human testimony or critique to the contrary. I argue that you cannot fight Technochauvinism without fighting metaeugenics. Metaeugenics, and the pursuit of perfection, is at the heart of our sociotechnical abuses.

The book problematizes the ecological consequences of AI and cloud computing. How are these environmental impacts connected to disability justice and broader social inequalities?

The climate crisis is undeniably a disability justice issue. We have terrible provisions for supporting and evacuating disabled people during disasters. Massive AI data centers are poised to make brownouts and droughts more frequent and more severe. We've already seen how this devastates disabled people and leads to death. Most notably, we witnessed this during the PG&E fires, during which time disabled people organized to support each other, distributing supplies, backup power, and transportation during brownouts and evacuations.

What are some key questions or criteria you believe researchers and developers should be asking before deploying AI systems?

Firstly, researchers and developers should be much clearer, and frankly much more honest, about what they mean by AI in any given context. The term has become wholly inscrutable and provides people with a rhetorical sleight of hand by which they can obscure anything from irresponsible data practices to outright charlatanism. Very often, I think most people aren't trying to be deceptive; they just haven't given any thought to what their proposed system takes in, how it processes that, and what it puts out. AI has become shorthand for any data processing at all.

For many, it has become a magic box: problem + AI = profit.

So many of the systems I see pitched in research and entrepreneurship are addressing "problems" that are either imagined or wholly manufactured by other social forces. One of the most egregious examples of this is in medicine. Our medical professional shortage is meant to be addressed by AI agents rather than by doing anything about the labor shortage itself, as if it were an immutable problem not caused by our choices in funding medical education and infrastructure.

This isn't to say that we should abandon all technological efforts to ameliorate the problems in our society simply because they should be solved in a social way. But when we do create technologies in these crucial social gaps, we should be building things that make the problem more visible and that galvanize public action toward filling that gap, even as they provide a bridge in the interim.

Rua M. Williams is Assistant Professor of User Experience Design at Purdue University in Indiana, USA, and Just Tech Fellow with the Social Science Research Council, where they research disabled people’s bodily autonomy and social agency through adaptive and assistive technologies. 
