Finding the Humanity in AI
May 06, 2024
Feminist scholar envisions tech’s use to shape a ‘different kind of future.’
By Jessica Weiss ’05
Neda Atanasoski's previous job as a professor at the University of California, Santa Cruz gave her a front-row seat to the early 2010s tech boom in nearby Silicon Valley, as well as what she called its alarming "exacerbation of poverty and inequality." Though she didn't start her career focused on technology, the juxtaposition of wealth and need in the cradle of modern computing resonated deeply, informing the path she would take in women and gender studies and critical ethnic studies.
Now a professor and chair in the University of Maryland’s Harriet Tubman Department of Women, Gender, and Sexuality Studies, Atanasoski can look back on a decade-plus of scholarship exploring the myriad ways that technologies—from drones to sex robots to sharing-economy platforms like Uber—perpetuate systems of oppression such as anti-Blackness, settler colonialism and patriarchy.
Her 2019 book, “Surrogate Humanity,” co-authored with Professor Kalindi Vora of Yale, is critical of “public fantasies” that claim robots, artificial intelligence (AI) and other technologies can substitute for types of work that are viewed as uncreative, repetitive or dull, especially service work. Such visions of the future devalue important work that continues to be disproportionately done by women, people of color and workers in the global south, the authors argue.
Yet she also envisions a path toward a technologically enhanced future of greater social justice and equality. A forthcoming edited volume, “Technocreep and the Politics of Things Not Seen,” co-edited with Nassim Parvin, associate professor and associate dean at the iSchool at the University of Washington, re-envisions different forms of “intelligence,” such as smart homes, from a feminist perspective.
As the inaugural associate director of education of UMD’s new Artificial Intelligence Interdisciplinary Institute at Maryland (AIM), Atanasoski is coordinating the launch of two new undergraduate majors in AI that will allow students to prepare for a range of careers, all rooted in responsible use that advances the public good. Students can pursue a bachelor of arts or a bachelor of science based on their interest in technical or humanistic and artistic approaches to technology. The B.A. is expected to launch in Fall 2025, with the B.S. to follow shortly thereafter.
In a recent conversation, Atanasoski spoke about the importance of the humanities in shaping AI development and why she’s hopeful about the future of AI.
Many people think of AI as a solely tech-focused field. Why do you think that’s inaccurate?
Part of why I’ve studied inequality and injustice is to understand how we can stop replicating entrenched social and racial hierarchies and power structures, and actually start creating other kinds of worlds. But to create other worlds you have to first imagine them, and that's where the humanities have a huge role to play. The creative and artistic imagination helps get us there. Major tech corporations and nonprofit organizations have research wings that employ feminist researchers and publish research papers on the wide-ranging impact of AI on things like the environment, housing, poverty, labor exploitation, and policing of Black and brown communities. There are also companies like Google that have hosted artists in residence. I think there are opportunities for policy interventions, design and engineering interventions and broader social interventions to exhibit other kinds of worlds.
How can we ensure that AI technologies are designed and deployed in ways that promote gender equality and social justice?
What I hear as the most common solution to inclusive and ethical AI development is to include more women and people of color in the engineering side of things and resolve bias in AI and data sets. That’s important, but the much harder thing is addressing broader structural issues. So for example, in 2023, various news stories reported that OpenAI used Kenyan laborers earning less than $2 per hour to scrub data sets to make ChatGPT less toxic. One of the reasons that it's hard to address this kind of labor exploitation on a global scale is that we live in a system where profit is king. And if that's the main value that we're using to design technology, then scrubbing bias or toxic content isn’t going to resolve these structural inequities tied to racial, gendered and colonial economic relations.
How should we all be using AI right now?
First, it is important to be aware of how we are using AI in daily life (and most of us do use it). People may not realize it, but whenever we use smartphones, smart thermostats or GPS, we are using AI. I also think it's really important both for educators and students to engage with generative AI, whether that’s ChatGPT or vision programs that generate images, and to think about how we are interacting with these tools. Because they’re here to stay. I don't think it's really helpful to think of technologies as inherently good or inherently bad, inherently ethical or inherently unethical. Instead, it’s important to think about the kinds of relations that certain technologies encourage. If they're encouraging relations—to the planet, to each other, to communities, to families—that we're not happy with, how do we change our use or interaction with these platforms and technologies, or redirect them?
What makes you excited about the future of AI education at UMD?
I believe we are going to produce the next generation of creative thinkers who will shape the future of technology, the AI, the algorithms, and therefore reshape some of the persistent and very unjust social structures I’ve talked about. The fact that AIM is being conceived and rolled out as interdisciplinary is really critical to creating a different kind of future with AI and technology as a tool that can be used to transform the inequities of the present.
Headshot of Neda Atanasoski by John T. Consoli; background artwork by Marjan Khatibi.