My partner and I had just sat down for dinner, and I was venting to him that I felt overwhelmed by my first year of graduate school. I had majored in math and computer science as an undergraduate and had come to graduate school to figure out how to do something in the name of social justice with what I had learned. However, upon arriving, I learned that computer scientists were often starting fires instead of putting them out. I told my partner, “I’m like the Incredible Hulk. I have a lot of power, but all I feel I learned to do in school was…smash.” And then, I sent this tweet.

[Embedded tweet about the need for humanities and social science training in computer science education]

Even in 2018, the idea that computing students needed training in the humanities and social sciences was relatively common. Nowadays, whenever a new story breaks about a tech company building a discriminatory algorithm or collecting dubious user data, you’ll often find critics saying something like: “Here’s reason #542 why computer science students need to take humanities and social science classes.” Now, with some hindsight, I want to talk about what I think this take gets right — and what it misses.

The Promise of Interdisciplinary Inclusion

In 2018, researchers and technology justice advocates Joy Buolamwini and Timnit Gebru published a foundational study showing that corporate facial recognition technologies (or “FRTs”) classified the gender of lighter-skinned male faces substantially more accurately than that of darker-skinned female faces. In other words, the FRTs were exhibiting racial and gender bias. They also showed that the majority of the datasets used to train these FRTs were predominantly composed of lighter-skinned, male-labeled faces.

When I first learned of the study, I thought eliminating the bias would be simple: make the training and test datasets more diverse. This “naive solution” was what my computer science training had prepared me for. My education had been rooted in computational thinking, a key tenet of which was taking problems and breaking them down into smaller ones. Through this lens, fixing a dataset seemed like both a logical and achievable next step.

Sometimes, however, instead of breaking a problem down into smaller parts or patching it with technical fixes, we need to look at the broader context. The first issue with these FRTs is that they classify gender at all, something queer people and queer theorists teach us is not knowable from someone’s appearance. But an even wider problem is that facial recognition technology is deployed within the oppressive systems of surveillance and policing, systems that disproportionately harm Black people and people of color. “Fixing” FRTs would simply put Black and Brown communities at even greater risk of experiencing prejudiced policing and surveillance practices. This is something I could only learn from fields whose methods center the voices of Black people and people of color, such as African American studies and critical race studies, the areas that have given us key frameworks such as the matrix of domination and the concept of intersectionality.

What I want to note here is that even without this context from different fields, I had still seen the issue of biased facial recognition as an ethical problem — I saw that it was unfair and hurtful that the technology did not work equally well across skin tones and genders. What I needed, though, was a framework to help me think beyond fairness. I needed tools to think about how technology plays a role in oppression and to think about what just and liberatory interventions might look like — core topics of discussion in much of the humanities and social sciences.

The Perils of Interdisciplinary Inclusion 

While it’s easy to see (and tweet about) the need for more humanities and social science education in computing, there are still traps. For example, a common approach to incorporating “ethics” into technology or AI education is to teach students three ethical frameworks: utilitarianism, deontology, and virtue ethics. While these frameworks can be helpful and spark lively class discussion, they continue to center the thoughts and perspectives of white men.

Furthermore, in their paper, “You Can’t Sit With Us: Exclusionary Pedagogy in AI Ethics Education,” scholars Inioluwa Deborah Raji, Morgan Klaus Scheuerman, and Razvan Amironesei write, “If anything, to rush CS students through heavily condensed and simplified overviews of broad ethical understanding and then position them to be the primary arbiter of change confuses the situation. This promotes the engineer’s natural inclination towards seeing themselves as a solitary saviour, to the detriment of the quality of the solution and in spite of the need for other disciplinary perspectives.” That is, as educators, we need to encourage collaboration as a source of strength and help students see the humanities and social sciences as intrinsically valuable.

Conclusion

So, what might it look like to bring the humanities and social sciences into the computing classroom? Below are a few possibilities and resources:

  • Create a culture of disciplinary respect. Encourage students to take courses outside of STEM, including classes in writing, world languages, history, and more. Be careful not to talk about particular classes as “fluff” classes or about students as being “science-y” or “artsy.”
  • When discussing a social problem, prompt students to consider the following: What kind of technology could they build to help with the problem? What kind of policy or market interventions might help? What kind of arts and culture project could change public opinion around the problem? Can they think of any other kinds of interventions? What are the pros and cons of each intervention?
  • Have students build an app around a community problem. Have them work in groups with students in a history class. Have them research how the problem has been tackled in the past and how technology might have helped or harmed the community involved, including by examining primary sources and archival materials or by conducting interviews when possible. See this Humanities-CS-1 curriculum for more inspiration.

My first year of graduate school was many things: overwhelming, yes, but also eye-opening. The opportunity to take classes on topics such as feminism, design justice, philosophy, and history allowed me to shift from asking, “How could we make this technology fairer?” to asking, “Does this technology promote or impede liberation? For whom and in what ways?” I encourage you to provide such opportunities in your own classrooms… and render my one viral tweet an irrelevant relic.


Blakeley H. Payne is a Cambridge-based researcher and writer studying the impact that social and algorithmic technologies have on marginalized communities. She holds a bachelor of science degree in computer science and mathematics from the University of South Carolina and recently received a master’s degree in media arts and sciences from the MIT Media Lab, where she studied AI ethics education.
