A photograph of a Ring doorbell mounted on a brick wall. Interfaces are not neutral.
Chapter 18

Interface Ethics

by Amy J. Ko

The ethics of computing have never been more visible. CEOs of major tech companies are being invited to testify in front of governments about their use of data. Driverless cars are raising questions about whether machines should be deciding who does and doesn’t die. Judges are beginning to adopt machine learning to predict recidivism, rather than relying on their own judgement. Writers are beginning to ponder not only the ethics of computing and design, but also their role in moral decisions 5 and social justice. 2

These and other applications of computing provoke profound questions about design ethics.

  • Who should we design for?
  • How do we include the voices of all stakeholders in design?
  • What responsibility do interaction designers have to create a sustainable, human future?

There are many methods that try to answer these questions, including inclusive design, 1 universal design, 9 participatory design, 8 value-sensitive design, 6 and design justice. 2 And there are many practitioners grappling with how to use these methods. Many are wondering whether these methods are enough to resolve the deep questions about the role of computing in society.

But this book isn’t about interaction design broadly; it’s about interfaces, and the software and technology that make them possible. What specific role do interface technologies have in design ethics? And what role do interaction designers have in designing and leveraging interface technologies ethically? In this chapter, I argue that there are at least four ways that interface technologies are at the heart of interaction design ethics.

One of the central roles of user interface software and technology is to  standardize  interaction. User interface toolkits lower the barrier to creating consistent interaction paradigms. User interface hardware, such as the sensor packages in phones, defines what computers are capable of sensing. User interface conventions, built into software and hardware, regularize user experience, making new interfaces that follow convention easier to learn. These kinds of standardization aim for the desirable ends of usability, learnability, and user efficiency.
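
To make that concrete, here is a minimal sketch in TypeScript, using standard DOM APIs. It contrasts a toolkit-standard button, which arrives with interaction conventions built in, with a hand-rolled one that must re-create each convention by hand; the save handler is hypothetical, for illustration only.

```typescript
// A toolkit-standard button arrives with conventions built in:
// keyboard focus, Space/Enter activation, a screen reader role,
// and platform-consistent styling.
const standard = document.createElement("button");
standard.textContent = "Save";
standard.addEventListener("click", () => save());

// A hand-rolled "button" has none of those conventions unless the
// designer re-creates each one, one by one.
const custom = document.createElement("div");
custom.textContent = "Save";
custom.tabIndex = 0;                    // restore keyboard focusability
custom.setAttribute("role", "button");  // restore semantics for screen readers
custom.addEventListener("click", () => save());
custom.addEventListener("keydown", (e) => {
  if (e.key === "Enter" || e.key === " ") custom.click(); // restore key activation
});

// Hypothetical handler, for illustration only.
function save() { console.log("saved"); }
```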

Standardization is not ethically wrong in its own right. However, if not done carefully, the ubiquity of interface standards and conventions can exacerbate inequities in design choices. For example, the dominance of screen-based interaction with computers is fundamentally unjust toward people without sight or sufficient visual acuity to use screens. That standard is not a mere inconvenience to a population, but a blunt exclusion of people with disabilities from our computational worlds, because it embeds visual interaction as primary, rather than as one of many channels of output. A screen reader, after all, is not a primary form of interaction; it is a technology that tries, and only sometimes succeeds, to make an inherently visual organization of information accessible to people without sight.

The defaults and templates built into user interface developer tools, intended to streamline the prototyping of conventional interfaces, build in subtle assumptions about left-to-right languages, as the sketch below illustrates. These defaults make it easier to create interfaces that function well for Western languages, and harder to create interfaces for languages written right-to-left or top-to-bottom. As with screens, these defaults are a categorical exclusion of cultures around the world, framing such languages as exceptional and secondary.

Interface conventions and standards can also embed cultural assumptions. For example, taking a photograph is a social affront in some cultures, yet we give multiple camera sensors prominent placement on our phones. Interface standardization is, in a way, colonialist, embedding the language, ability, and cultural assumptions of one culture (primarily Silicon Valley’s) onto others. 7

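As a sketch of how such a default operates, the two rendering functions below (TypeScript, standard DOM APIs; the function names are invented for illustration) contrast a hard-coded left-to-right assumption with a direction-aware alternative:

```typescript
// A "convenient" default: the label is pinned to the left edge,
// silently assuming a left-to-right reading order.
function renderLabelBiased(text: string): HTMLElement {
  const label = document.createElement("div");
  label.textContent = text;
  label.style.textAlign = "left"; // left-to-right assumption baked in
  return label;
}

// A direction-aware alternative: declare the language, infer the
// direction from the content, and use the logical value "start",
// which follows the text direction instead of a fixed edge.
function renderLabelAware(text: string, lang: string): HTMLElement {
  const label = document.createElement("div");
  label.textContent = text;
  label.lang = lang;               // e.g., "en", "ar", "he"
  label.dir = "auto";              // infer direction from the content
  label.style.textAlign = "start"; // aligns to the start of the reading order
  return label;
}
```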



One can question whether the colonialist mechanisms of interface standardization are unethical. But that debate is more about the ethics of colonialism than about interfaces. Do you believe it is the right of designers in Cupertino and Mountain View to embed their culture into interfaces reaching global ubiquity? And if not, are you willing to champion diversity in culture and ability at the expense of the learnability and interoperability that standards enable?

As a designer, what will you decide?

When I was a child, I had no computer, no smartphone, and no internet. When I was bored, I had a few simple options for how to entertain myself: read a (print) book, walk to a friend’s house, sing a song, or play with toys with my brother. The nature of that experience was one of simplicity. When I didn’t want to do any of those things, I would often choose to just sit and observe the world. I watched squirrels chase each other. I pondered the movement of the clouds. I listened to the rain’s rhythms upon my roof. I looked to nature for stimulation, and it returned riches.

As I aged, and computing became embedded in my life, my choices changed. Interfaces were abundant. The computer in the den offered puzzles and games. My Super Nintendo offered social play with friends. My modem connected me to the nascent internet and its rapidly expanding content. My palette of entertainment expanded, but in some ways narrowed: interacting with computers, or with friends through computers, was much more visceral, immediate, and engaging. It promised instant gratification, unlike nature, which often made me wait and watch. Nature could not compete with the bright lights of the screen and the immediacy of a key press. My world shifted from interacting with people and nature to interacting with computers.

How much artificiality is too much? Is there a right way to be a human? Is there a natural way to be a human? Does human-computer integration go too far? 4 Or is integration an inevitable embrace of our susceptibility to tight cycles of stimulus and response? Do we design for a world that centers human-computer interaction, or one that carefully situates it alongside the much broader and richer collection of other human experiences?

As a designer, what will you decide?

There is emerging agreement that computing does not  cause  social change, but  amplifies  it, in whatever direction that change is already headed. 10 For example, social media has not caused political division, but it has amplified division that was already there. The Ring doorbell, pictured at the beginning of this chapter, has not caused ubiquitous surveillance, but it has amplified our ability to surveil our homes. Games have not caused social isolation, but they have amplified isolation, giving depressed adolescents a short-term yet isolating salve for unwanted seclusion. In these ways, computing can be weaponized: it is a tool that can amplify violence far beyond what we can do with our hands, but it does not alone cause violence.

If we accept the premise that human beings are ultimately responsible for both desirable and undesirable social change, and that interface technologies are just the tools by which we achieve it, what implications does that have for interface design? If interfaces amplify, then they also amplify the consequences of our choices about precisely what we amplify. For example, if we work on simplifying the control of drones, we must accept that we are helping hobbyists more easily surveil our neighborhoods and governments more easily drop bombs. If we work on simplifying the spread of information, we must accept that we are also simplifying the spread of misinformation and disinformation. If we design new interfaces for recreating or altering the appearance of actors in movies, we must accept that we are also enabling troubling deep fakes.

An amplification perspective ultimately forces us to question the ethics of what our interfaces enable. It forces us to think rigorously about both intended and unintended uses of our interfaces. It forces us to imagine not only the best case scenarios of use, but also the worst case scenarios. It forces us to take some responsibility as accomplices for the actions of others. But how much?

As a designer, what will you decide?

Before graphical user interfaces, there weren’t that many computers. Large companies had some mainframes that they stored away in large rooms. Hobbyists built their own computers at a tiny scale. The hardware they created and discarded had negligible impact on waste and sustainability. Some computers, like the solar-powered calculator I had in elementary school, were even powered by sustainable energy.

All of this changed with the graphical user interface. Suddenly, the ease with which one could operate a computer to create and share information led to ubiquity. This ease of sharing information has created a massive global demand for data, leading data consumption to account for  20% of global CO2 emissions , much of it in data centers controlled by just a few companies. Rather than using solar-powered calculators, many use Google search to compute 2 + 2, which Google estimated in 2009 emitted 0.2 grams of CO2 per query.
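
To get a feel for that per-query figure at scale, here is a back-of-the-envelope calculation. It assumes Google’s 2009 estimate of 0.2 grams per search and a hypothetical volume of 3.5 billion searches per day; the volume is an assumption for illustration, not a figure from this chapter:

```typescript
// Back-of-the-envelope: annual CO2 from search queries alone.
const gramsPerSearch = 0.2;    // Google's 2009 estimate
const searchesPerDay = 3.5e9;  // assumption, not an official figure
const gramsPerYear = gramsPerSearch * searchesPerDay * 365;
const tonnesPerYear = gramsPerYear / 1e6; // 1 tonne = 1,000,000 grams
console.log(`≈ ${Math.round(tonnesPerYear).toLocaleString()} tonnes of CO2 per year`);
// Prints: ≈ 255,500 tonnes of CO2 per year
```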

But it’s not just carbon that interfaces emit. The promise of ever simpler and more useful interfaces leads to rapid upgrade cycles, generating at least 44 million tons of e-waste. This makes computer hardware, and the interface accessories embedded in and attached to that hardware, the fastest growing source of garbage on the planet. Inside these millions of tons of garbage lie an increasing proportion of the world’s scarce and valuable metals, such as gold, platinum, cobalt, and copper, as well as numerous toxins.

Would we have reached this level of CO2 output and waste without an immense effort to make interfaces learnable, efficient, useful, and desirable? Likely not. In this sense, innovations in interface software and technology are responsible for creating the demand, and therefore share responsibility for the pollution and waste.

Some companies are beginning to take responsibility for this. Apple started a  recycling program  to help reclaim rare earth metals and prevent toxins from entering our water.  Amazon pledged  to shift to sustainable energy sources for its warehouses and deliveries.  Microsoft pledged  to be not only carbon neutral, but carbon negative by 2030.

Are these efforts enough to offset the demand these companies generate for software, hardware, and data? Or perhaps consumers are responsible for their purchasing decisions. Or perhaps designers, who are the ones envisioning unsustainable, wasteful products and services, are responsible for changing these companies? 3

As a designer, what will you decide?


How do you decide? The easiest way is, of course, to delegate. You can hope your manager, your VP, or your CEO has greater power, insight, and courage than you. Alternatively, you can advocate, demanding change from within, as many have done at major technology companies. If you organize well, make your message clear, and use your power in numbers, you can change what organizations do. And if delegation and advocacy do not work, perhaps it is possible to innovate, creating interfaces that are more inclusive, humane, sustainable, and just. And if that is not possible, ultimately, you can choose a different employer. Use the ample opportunity of the ever-growing marketplace for interfaces to choose enterprises that care about justice, humanity, morality, and sustainability.

To make these choices, you will need to clarify your values. You will need to build confidence in your skills. You will need to find security. You may need to start your own company. And along the way, you will need to make design decisions that intersect with all of the ethical challenges above. As you do, remember that the world is more diverse than you think, that communities know what they need better than you do, and that interfaces, as compelling as they are in harnessing the power of computing, may not be the solution to the greatest problems facing humanity. In some cases, they may be the problem. Hold that critical skepticism and commitment to justice alongside a deep curiosity about the potential of interfaces, and you’ll likely make the right choice.

References

  1. Clarkson, P. J., Coleman, R., Keates, S., & Lebbon, C. (2013). Inclusive design: Design for the whole population. Springer Science & Business Media.

  2. Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. MIT Press.

  3. Dourish, P. (2010). HCI and environmental sustainability: The politics of design and the design of politics. Proceedings of the 8th ACM Conference on Designing Interactive Systems (DIS).

  4. Farooq, U., & Grudin, J. (2016). Human-computer integration. ACM Interactions.

  5. Friedman, B., & Hendry, D. (2019). Value sensitive design: Shaping technology with moral imagination. MIT Press.

  6. Friedman, B. (1996). Value-sensitive design. ACM Interactions.

  7. Irani, L., Vertesi, J., Dourish, P., Philip, K., & Grinter, R. E. (2010). Postcolonial computing: A lens on design and development. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI).

  8. Schuler, D., & Namioka, A. (2017). Participatory design: Principles and practices. CRC Press.

  9. Story, M. F. (1998). Maximizing usability: The principles of universal design. Assistive Technology, 10(1), 4-12.

  10. Toyama, K. (2011). Technology as amplifier in international development. iConference.