This post is the second of a three-part series. You can read Carol’s first post here.
Ethics and robots? Why, yes! As we prepare our students to become the future designers and consumers of technology, it is imperative that we embed ethical analysis into our curriculum. In my own journey to design lessons that reach this goal, it quickly became apparent that I would need to scaffold these lessons along two dimensions: the complexity of the issue, as a way to manage cognitive load, and the degree to which students have a personal, emotional connection to the topic, as a way to manage emotional load.
Scaffolding Cognitive Load
Cognitive load is a term coined in the 1980s to describe how the brain acquires and processes information. In layman’s terms, it refers to the amount of new information an individual can take in and process in a given amount of time. This is something teachers keep in mind when designing engaging learning experiences.
For example, when introducing an ethical matrix as a tool, it’s important to balance the load of learning this brand-new tool and way of thinking against the ethical question we’re actually thinking about. The topics to which we apply our matrices have to become increasingly complex, starting with a topic that invites little controversy and moving toward more heated debates. In other words, at the beginning of the process, students’ cognitive load is mostly taken up by learning how to create a matrix. As matrix creation becomes more natural and consumes less “brain space,” some of those thinking cycles can be applied to more complex ethical situations.
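For readers new to the tool: an ethical matrix lists stakeholders as rows and the values at stake as columns, and each cell captures how a decision touches that value for that stakeholder. A minimal sketch, using made-up stakeholders and values for illustration (not my students’ actual work), might look like this:

| | Safety | Privacy | Autonomy |
| --- | --- | --- | --- |
| Elderly person | Falls detected quickly | Sensors watch daily life at home | May feel less in control of own care |
| Family members | Peace of mind | Can view a loved one’s data | Fewer difficult care conversations |
| Care staff | Alerts flag emergencies | Handle sensitive health data | May defer judgment to the system |

Even a small grid like this makes the trade-offs visible: the same technology that improves safety for one stakeholder can erode privacy or autonomy for another.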
How does this play out in real time in the classroom? With my students, we started by reading several articles (like this) on what artificial intelligence is, along with examples of AI in use. From those examples, we created our first ethical matrix around an issue that is fairly innocuous to a teen: the use of artificial intelligence in eldercare. At first, my students offered a more superficial understanding of the issue: