Information + automation
The first time I saw a computer program was magical. It was fourth grade and I was nine years old. There was a temporary installation of about fifteen Commodore 64 computers lined up in the hallway outside of our classroom. One class at a time was invited out into the hallway to sit in pairs in plastic chairs. When my class was invited out, we walked single file to our seats, and then our teacher, a very short and very cranky man, grumpily placed a single piece of paper in front of each computer. He told us to follow the instructions on the sheet, entering exactly what it said until we reached the end of the worksheet. The instructions were a cryptic list of pairs of numbers. We entered them, one by one, correcting our errors as we typed. When we reached the bottom of the list, we pressed the “Run” button on the keyboard.
To our surprise, a pixelated duck appeared! The numbers suddenly made sense: each pair represented the position of one square, and together, all of the positions made up a picture of a duck. My partner and I immediately wanted to edit the points, seeing if we could make the duck’s eyes bigger, or give it bigger wings, or better yet, change it into a monster or a cat. For some reason, the idea of telling a computer how to draw robotic, rectangular, black and white animals was far more interesting than just drawing animals myself, even though my hands could do far more with paper, pens, and paint.
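The worksheet’s trick can be sketched in a few lines of modern code: a list of (column, row) pairs, each naming one filled square of a grid. The coordinates below are invented for illustration; the original duck’s numbers are lost to time.

```python
# Each pair is a (column, row) position of one filled square.
# These coordinates are made up, not the worksheet's original duck.
points = [(3, 0), (4, 0), (2, 1), (5, 1), (3, 2), (4, 2), (5, 2),
          (1, 3), (2, 3), (3, 3), (4, 3), (2, 4), (3, 4)]

# The grid is just big enough to hold the rightmost and lowest points.
width = max(c for c, r in points) + 1
height = max(r for c, r in points) + 1

# Draw the grid: a block where a pair appears, a space elsewhere.
for row in range(height):
    print("".join("█" if (col, row) in points else " " for col in range(width)))
```

Editing the pairs, as my partner and I wanted to, changes the picture: the drawing is just data.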
We use code, now more than ever, to automate how we create, retrieve, and analyze information. And yet, as in the story above, we often happily exchange the labor and wonder of our own minds and bodies for the speed, scale, logic, and versatility of code. In this chapter, we reflect on this trade: what we gain and lose when we make it, and the diverse consequences of shifting control over information and decisions to machines.
When to automate?
Automation, of course, does not just include computing. As we noted in Chapter 5 in our discussion of information technology, we used mechanical devices to automate printing, phonographs to automate musical recordings, and electricity to automate the transmission of messages via telegraphs, phones, and television. And our discussion of what is gained and lost began with Socrates and his fears that writing itself was a form of “automation”, in that it externalizes our memories, risking atrophy to our intelligence, memory, and wit. Code, therefore, is just the latest information technology to make us wonder about the tradeoffs of delegating our information labor to machines.
When, then, is automation worth it? Let’s examine this question by considering the many applications of code to problems of information. We’ll begin with one of the first things that code automated: calculation. As you may recall, the first computers were people: humanity performed the labor of arithmetic manually since arithmetic was invented. This was true even up through the Space Race in the mid-1950s, when the United States and the Soviet Union rushed to be the first to space. The calculations were ballistic, involving algebra, geometry, and calculus, all in service of aiming and steering rockets in a manner that would allow them to escape orbit and safely return to Earth. Women, including many Black women mathematicians, performed the calculations that got the U.S. into orbit [13].
What was gained? A new speed and complexity in space flight, requiring careful programming and planning. In fact, without code, few of the later missions to space would have been possible, as they all relied on onboard computers to track trajectories. What was lost was a career path for mathematicians, and their agility in responding to urgent needs for calculation in unexpected circumstances. And what remains is a public education system that still teaches the arithmetic used to get us to space.
Later, in the 1990s, there were fewer than 100 websites, and researchers pondered what the web might be. At the time, most of the valuable information in the world was in libraries, which archived books, newspapers, magazines, and other media. The people who made that information accessible were librarians [7]. Then, researchers at Stanford, including Lawrence Page and Sergey Brin, invented PageRank, an algorithm that ranked web pages by analyzing the web’s link structure, and built it into a search engine that became Google [11].
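The core idea of the PageRank algorithm cited above is that a page is important if important pages link to it. Here is a minimal sketch of that idea using power iteration on a toy link graph; the graph itself is invented, and this simplification omits many details of the actual paper and of Google’s production systems.

```python
# Toy link graph: each page maps to the pages it links to (invented example).
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pages = list(links)
damping = 0.85  # the damping factor commonly associated with PageRank
rank = {p: 1 / len(pages) for p in pages}  # start with equal importance

for _ in range(50):  # iterate until the ranks settle
    new = {}
    for p in pages:
        # A page's rank is a small base amount plus a share of the rank
        # of every page that links to it.
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new

# Pages with many, or important, incoming links rank highest.
print(sorted(rank, key=rank.get, reverse=True))  # → ['C', 'A', 'B', 'D']
```

Notice that "A" outranks "B" and "D" despite having only one incoming link, because that link comes from the most important page: importance flows through links.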
What was gained? Obviously, a transformation in our ability to find and retrieve documents stored on other people’s computers. And when those documents have valuable content, this replicates the benefits of libraries, but at far greater speed, scale, and access than libraries had ever achieved. But what was lost was profound: libraries are institutions that celebrate equity, literacy, archiving, and truth. While accessing their information may be slower, Google has done little to adopt these values in supporting information archiving and retrieval. Half of the world lacks access to the internet, but most countries have public library systems open to all. Google has done little to address literacy, largely relying on schools and libraries to ensure it. Google largely ignores archiving, with the exception of Google Books, mostly documenting what is on the web now and ignoring what used to be. And perhaps most importantly, Google has largely ignored truth, setting aside the critical role of libraries in archiving and curating credible information, and instead retrieving whatever is popular and current. What remains are two relatively independent institutions: a for-profit one that meets our immediate needs for popular information of questionable truth, but offers little to address information literacy or inequity; and a not-for-profit one that continues to uphold these values, but struggles to retain public support because of its less immediate response.
Before the social web, social was personal. To hear about what our friends were doing, people had conversations with them. To get recommendations for books or movies, people might go to their local bookstore or library to ask avid readers, or read a review from a newspaper movie critic. To find food, people might spend months getting to know the owners of local restaurants, cafes, and diners, building an awareness of the local industry. Word of mouth, gossip, and sharing were relational and direct. As the social web emerged, algorithms began to mediate these relationships. We were more likely to learn about what our friends were doing because of a post that Facebook’s news feed algorithm decided to recommend to us. And rather than relying on experts and enthusiasts to help us select media, we trusted collaborative filtering algorithms to choose for us [12], algorithms that scholars have since shown can encode and reinforce the biases of the societies that build them [9].
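The collaborative filtering idea behind systems like GroupLens, cited above, can be sketched simply: recommend to a person what similar people liked. Below is a minimal user-based sketch with invented ratings and an ad hoc similarity measure; real systems use far more sophisticated statistics than this.

```python
# Hypothetical ratings: person -> {item: rating from 1 to 5}.
ratings = {
    "amal":  {"dune": 5, "arrival": 4, "alien": 1},
    "becca": {"dune": 5, "arrival": 5, "solaris": 4},
    "chen":  {"alien": 5, "dune": 1},
}

def similarity(a, b):
    """Score agreement on shared items: closer ratings give a higher score."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    return sum(4 - abs(ratings[a][i] - ratings[b][i]) for i in shared) / len(shared)

def recommend(person):
    """Suggest the unseen item most liked by the most similar other people."""
    scores = {}
    for other in ratings:
        if other == person:
            continue
        weight = similarity(person, other)
        for item, rating in ratings[other].items():
            if item not in ratings[person]:
                scores[item] = scores.get(item, 0) + weight * rating
    return max(scores, key=scores.get) if scores else None

print(recommend("amal"))  # → solaris (becca rates like amal and liked it)
```

The social labor of asking a friend is replaced by arithmetic over a matrix of strangers’ ratings, which is precisely the trade the paragraph above describes.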
What was gained? It is certainly less work to keep up with our friends, and less work to decide what to read, watch, eat, and buy—especially social labor, as our interactions no longer need to involve people at all. What was lost were relationships, community, loyalty, and trust. The algorithms built to create digital recommendations are optimized to reduce our decision time, but not to connect us. And we do not yet know the implications of these lost connections on society: will our increased convenience and weakened community ties make us happier by making us more productive and satisfied, or is there something essential to community that we are losing?
While the web began to mediate our connections, policing in the United States was pondering code as well. Throughout U.S. history, a primary function of policing had been to restrict Black lives [3]. Code promised to make this work seem “objective”: predictive policing algorithms used past crime data to forecast where officers should patrol [2], and evidence-based sentencing algorithms used statistical risk scores to inform judges’ decisions [14].
What is gained? Police departments may feel like they are better allocating their time, “optimizing” the number of arrests to reduce crime. What is lost is a sense of freedom: Black people have always been surveilled in the United States [3], and predictive policing extends that surveillance through code.
How to automate?
In all of the stories above, there is a similar pattern: people had evolved practices over time to perform some information task, code was used to automate their work, and in the process, the humanity in the task was lost. But there is a pattern underneath these histories that goes deeper: it is the decision to shift control over our gathering, analysis, and interpretation of information from subjective, emotional, and relational human processes to procedural, objective, rational, and impersonal computational processes. In that decision is a choice about precisely what aspects of information processing we delegate, and what new kinds of partnerships we form between people and information technology.
The dawn of computing set up a continuum for these choices. On one end was automation: the delegation of human action to technology, often for the purpose of efficiency or reliability. This vision—championed by researchers like Marvin Minsky, who is often called the “father of artificial intelligence”—imagined a world in which computers would replicate key aspects of human intelligence such as search, pattern recognition, learning, and planning [8]. That vision continues today in efforts to build ever more robust, autonomous artificial intelligence [5].
The counter-narrative to automation was one of augmentation: the use of technology to improve or enhance human abilities. This vision—championed by people like Vannevar Bush, who imagined machines that would extend human memory and association [4]—continues today in research on how humans and computers can be integrated as partners [6].
Of course, the dichotomy between automation and augmentation is a false one. Computers will likely never be completely independent of humanity, as they will always require us to shape their behavior and intelligence. And as much as we enhance ourselves with computing, we will at some biological level likely always be human, with both our rational minds and our emotional ones. And in both visions, there is little attention to the inequities and injustices in society that underlie how we create information technology [1, 9, 10].
Podcasts
- What happens when an algorithm gets it wrong, In Machines We Trust, MIT Technology Review. Discusses a false arrest based on racially biased facial recognition software.
- AI in the Driver’s Seat, In Machines We Trust, MIT Technology Review. Discusses the many complexities in human-machine communication that have been largely ignored in the design and engineering of current driverless car technology.
- She’s Taking Jeff Bezos to Task, Sway, NY Times. An interview with Joy Buolamwini, an activist who leads the Algorithmic Justice League, about facial recognition, algorithmic bias, corporate resistance, and opportunities for AI legislation.
- What’s Causing the Tesla Crashes, What Next: TBD, Slate. An interview with Missy Cummings, a safety-critical systems researcher at Duke, about driverless cars.
- Biased Algorithms, Biased World, On the Media. Discusses the illusion of algorithmic objectivity and how algorithms end up reflecting the biases in our world.
- An Engineer Tries to Build His Way Out of Tragedy, The Experiment. Discusses the limits of solutionism when facing lived experiences.
References
1. Ruha Benjamin (2019). Race after technology: Abolitionist tools for the new Jim Code. Social Forces.
2. P. Jeffrey Brantingham, Matthew Valasik, and George O. Mohler (2018). Does predictive policing lead to biased arrests? Results from a randomized controlled trial. Statistics and Public Policy.
3. Simone Browne (2015). Dark matters: On the surveillance of blackness. Duke University Press.
4. Vannevar Bush (1945). As we may think. The Atlantic Monthly.
5. Thomas G. Dietterich (2017). Steps toward robust artificial intelligence. AI Magazine.
6. Umer Farooq and Jonathan Grudin (2016). Human-computer integration. Interactions.
7. Michael H. Harris (1999). History of Libraries in the Western World. Scarecrow Press.
8. Marvin Minsky (1961). Steps toward artificial intelligence. Proceedings of the IRE.
9. Safiya Noble (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
10. Cathy O'Neil (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
11. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd (1999). The PageRank citation ranking: Bringing order to the web. Stanford InfoLab.
12. Paul Resnick, Neophytos Iacovou, Mitesh Suchak, Peter Bergstrom, and John Riedl (1994). GroupLens: An open architecture for collaborative filtering of netnews. ACM Conference on Computer Supported Cooperative Work and Social Computing.
13. Margot Lee Shetterly (2016). Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race. HarperCollins.
14. Sonja B. Starr (2014). Evidence-based sentencing and the scientific rationalization of discrimination. Stanford Law Review.