What makes a piece of technology ethical?

It’s a question I’ve been asked by computer science (CS) educators often enough, and I know why they ask. If you can outline what makes a piece of technology ethical, then perhaps you can answer the murky question of what we should be teaching our students about the ethics of technology. An endless number of articles (including my own) have been written about the need to include ethics education in the CS classroom. But the thing is, there is little consensus on what “including ethics” really means. Are we supposed to talk about Aristotelian virtue ethics? Issues of diversity (or lack thereof) in computing? Privacy and hacking? The robot apocalypse!?

When I’m asked these kinds of questions, I turn to design justice, a design approach that centers those who are the most marginalized in society. In its simplest form, design justice asks the following questions about the design process of a piece of technology:

  • Who participated?
  • Who benefitted?
  • Who was harmed?

I like these questions because they move away from the idea that ethics is an intrinsic property of a piece of technology and instead offer a framework to have a conversation about how that technology affects people, social systems, and power. As educators, we often tell our students — and hope for them — that computing can be a way to change the world. Now, more than ever, the process of building technology is world-building. And, if we’re talking about world-building, we need to provide frameworks for our students to discuss systems of power. 

Key Concepts in Design Justice

In order to explore these questions with our students, we first need to understand a few key concepts. Everything I write about in this piece, and everything I know about design justice, I have learned from others. In particular, I have learned from scholar, researcher, and advocate Sasha Costanza-Chock and their recent book, Design Justice: Community-Led Practices to Build the Worlds We Need (freely available via open access!), and from the Design Justice Network website.

To better understand design justice, let’s consider an example technology: the Blood Oxygen app on the Apple Watch, which has been heralded throughout the pandemic for helping patients detect COVID-19 days before they noticed any symptoms. Surely, it would appear, this is an ethical technology. Well, let’s see what we can uncover by asking our three questions:

  • Who participated?
  • Who benefitted?
  • Who was harmed?

First, let’s notice two things about these questions. 

Notice there’s a pattern — each question is concerned with “who,” which lets us know we need to be thoughtful about how we describe the people related to this technology. To help, we turn to intersectionality, which describes the way in which different identities or experiences, such as (but not limited to) race, gender, class, or disability status, combine (or intersect) to produce different experiences of marginalization and privilege.

Consider, for example, who participated in the design of the Apple Watch. Based on Apple’s diversity data, we might say that it is a male-dominated company. This description is what we might call a “single-axis” analysis — it only considers the diversity of Apple employees on the basis of binary gender. Intersectionality asks us to do a bit more. It’s not just that Apple is a male-dominated company; it’s also that those men are similar in other ways — that is, most of the men at Apple are cis and white. It is also likely they share similar class backgrounds, given the company’s requirement for employees to have particular college degrees. Here, intersectionality allows us to unpack which men are included or excluded by the phrase “male-dominated.”

When considering the ethics of technology, intersectionality is also important because it allows us to see that there are multiple ways in which a piece of technology might either help or harm people. It allows us to understand the matrix of domination, a concept which highlights how the various axes of oppression (race, class, gender, sexual orientation, body size, disability, nationality, and so on) work together to systemically distribute power, privilege, benefits, and harm in uneven ways.

For example, when thinking about the Blood Oxygen app, someone might ask: how did immunocompromised people benefit from this feature? When asked this question, we might answer that immunocompromised people benefitted greatly; it’s wonderful that folks who face an increased risk in a doctor’s office or emergency room can get the test they need in the socially distanced safety of their own home. 

However, the matrix of domination asks us to consider how the systems that make life difficult for immunocompromised people intersect with other systems of oppression, such as those based on race and/or class. Instead, we might ask: how did poor, immunocompromised people benefit from this feature? To this question, we might get a very different answer. The Apple Watch costs $400, and the Blood Oxygen app requires an iPhone for setup, as well as an internet connection or data plan. The app is likely irrelevant to poor, immunocompromised people, who are priced out of the technology, and often out of the risky doctor’s visit as well.

Or perhaps we will ask: how might the Blood Oxygen app benefit or harm immunocompromised Black people? It’s a known issue in the industry that pulse oximeters routinely overestimate the blood oxygen levels of Black patients, missing dangerously low readings about three times as often as they do in white patients.

These analyses are important because they tell us which communities have the least to gain or the most to lose from a new piece of technology, and this kind of analysis is what we need to teach our students to do at the beginning of, and throughout, the design process. Preferably, this analysis is not done just by imagining the needs of any given community. Instead, it’s done in conversation with people from those communities, with community members treated as experts in their own lives and as co-equal designers.

Bringing Design Justice into the Classroom

It might seem like the most obvious way to bring design justice into the CS classroom is through the capstone project process. It’s increasingly popular to have students identify a “community problem” and offer up a technical solution to that problem. However, truly incorporating design justice principles means leaving open the possibility that designing no technology at all is more helpful than designing something. Time and time again, we’ve seen examples of how well-intentioned “technology for good” is actually harmful. Additionally, community-led capstone projects can be difficult to design and implement properly, because they require building and fostering relationships with some of the most vulnerable groups in our communities, and that takes time. Instead, I offer a few other ideas for integrating design justice into your CS classroom:

  • Ask students to discuss and analyze classroom technologies through the lens of the matrix of domination, and take part in the conversation yourself. Who got to participate in the decision to adopt certain software or hardware? What has been the effect at the individual, school, and district levels?
  • Model design justice principles by co-designing your syllabus and classroom policies. Ask your students what they want to learn and who benefits from learning those topics. Have discussions about who benefits and who is harmed by things like late policies, dress codes, and kinds of testing.

Again, the goal of these exercises is to get students into the habit of asking three central questions: Who participated? Who benefitted? Who was harmed? And while these questions might not settle what makes a piece of technology ethical, they will encourage students to think of technology in terms of justice and liberation.


Blakeley H. Payne is a Cambridge-based researcher and writer studying the impact that social and algorithmic technologies have on marginalized communities. She holds a bachelor of science degree in computer science and mathematics from the University of South Carolina and recently received a master’s degree in media arts and sciences from the MIT Media Lab, where she studied AI ethics education.

