Seeing Like a Software

created_at: September 2021

Creators of software are in a powerful position. Silicon Valley is flush with funding; tech entrepreneurs and executives are connected to leaders of other industries; and software increasingly permeates our daily lives, governance, education, and more. However, the industry is amnesiac.

What made tech care about what it cares about? For most software creators, aside from perhaps some hacker lore, the ideological origins of tech are obscure — both in the sense that this is arcane, not-strictly-necessary knowledge and that, should someone be curious, it’s a bit challenging to find scrupulous, comprehensive answers.

Within tech, it’s easy to feel like the tenets of the culture are obvious and unworthy of questioning. So many are confidently stated and rationally argued — what else would you believe? Yet they are still opinions and deliberate decisions that somebody made: the goodness of working in public, turning to technology for social solutions, and many more.

Seeing Like a Software (a play on the title of political scientist James C. Scott’s 1998 book Seeing Like a State) aims to be an accessible introduction to how values in computer science and software development leak into society and our lives. In particular, I want it to be compelling to technical students and professionals who haven’t necessarily trained in ethics, humanities, and social sciences.

Much of the research and writing in this area, while insightful, buries definitions and context. Fields like STS (science, technology, and society) characterize problems but focus less on devising actionable solutions, and in the process, they sometimes reify exaggerated claims about technology. These rifts between the academic study of technology and the technology industry itself mean that the two don’t talk to each other as much as they should.

“Fish don’t know they’re in water,” as musician and writer Derek Sivers phrased it. Several computer science departments have begun integrating ethics lessons into their courses, but their attempts often miss “bigger-picture topics not immediately connected to each course.” Unwieldy, overarching concepts are difficult to weave into individual homework assignments.

To unpack the “water” we’re in demands, among other efforts, that we interrogate the very fundamentals: values we can’t imagine teaching CS 101 and building software without. Seeing Like a Software considers some of the caveats of extending the values of abstraction, efficiency, and scale to people, issues, and systems beyond software.

Abstraction

Abstraction is key to computer science. If every programmer and end-user had to think about layers of computer architecture, how logic gates work, and which semiconductor materials were used to construct a computer chip, we would be in a very different place. Being able to encapsulate parts of a system has enabled people to build on top of existing work without getting caught up in the weeds or needing to become an expert on everything.
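To make the idea concrete, here is a minimal, purely illustrative sketch in Python (the function names are invented for this example, not drawn from any particular codebase): each layer exposes a small interface and hides the one beneath it, so the top-level caller never deals with bytes, encodings, or the filesystem.

```python
# Purely illustrative: a tiny layered design in which each function hides
# the layer beneath it. The caller of save_note() never touches file handles
# or byte encodings, just as most programmers never touch logic gates.

import json
from pathlib import Path


def _write_bytes(path: Path, data: bytes) -> None:
    # Lowest layer shown here: defers to the OS and filesystem underneath.
    path.write_bytes(data)


def _serialize(note: dict) -> bytes:
    # Middle layer: hides the storage format (JSON today, maybe not tomorrow).
    return json.dumps(note).encode("utf-8")


def save_note(note: dict, path: Path) -> None:
    # The abstraction most callers see: one verb, no implementation details.
    _write_bytes(path, _serialize(note))


save_note({"title": "abstraction", "body": "layers all the way down"}, Path("note.json"))
```

The design choice is the point: someone using save_note() can remain blissfully ignorant of everything below it, which is exactly what lets people build on top of existing work.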

Learning to think abstractly, computationally, or algorithmically is paramount in computer science education too. Many universities’ computer science faculty insist that the aim of their curricula is to cultivate this style of thinking and problem-solving, as opposed to the common misconception of teaching coding as a vocational skill.

Efficiency

Optimizing algorithms is a ubiquitous part of computer science education and software engineering — certainly of the interview process, at least. Efficiency is paramount in software performance, storage, and much of information theory. Efficiency depends on quantification, the shared obsession of the Quantified Self (QS) movement, whose participants meticulously track the details of their lives. Gary Wolf, who co-founded the movement with WIRED founding executive editor Kevin Kelly, claimed: “Numbers make problems less resonant emotionally but more tractable intellectually.”
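As a small, hedged illustration of the kind of optimization drilled in coursework and interviews, the snippet below answers the same question (has an item been seen before?) with a linear scan over a list versus a hash-based lookup in a set; the data and sizes are arbitrary, chosen only to make the asymptotic difference visible.

```python
# Illustrative only: the same membership question answered two ways.
# A list scan is O(n) per lookup; a set lookup is O(1) on average.

import timeit

items = list(range(100_000))
as_list = items
as_set = set(items)

target = 99_999  # near the end of the list, so the linear scan does maximal work

list_time = timeit.timeit(lambda: target in as_list, number=1_000)
set_time = timeit.timeit(lambda: target in as_set, number=1_000)

print(f"1,000 lookups in a list: {list_time:.4f}s")
print(f"1,000 lookups in a set:  {set_time:.4f}s")
```

The specific timings matter less than the habit of mind they reward: translate a problem into something countable, then minimize the count.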

Objectivity in numbers, however, is a myth. The act of measuring is mediation. As Catherine D’Ignazio and Lauren F. Klein point out in their book Data Feminism, each stage of the process involves people: people to be measured, to count and analyze, to visualize, to promote findings, to use end products — and “people who go uncounted.” Technology writer Evgeny Morozov posits that QS participants seek to discover a stable, core self. Even Gary Wolf notes that “for many self-trackers, the goal is unknown... they believe their numbers hold... answers to questions they have not yet thought to ask.” But data are constructed, not discovered. Numbers can’t tell the whole story: not everything can (or should) be measured, and there may be no stable core truth waiting to be found. The pitfalls of quantification abound.

Efficiency is also core to the marketing rhetoric that many companies use, promising seamless, streamlined, and frictionless products and services. Very often, though, this front — the efficient results we see, like same-day shipping or the ability to upload and post social media content — hides a cruel system that makes it possible. As anthropologist Mary L. Gray and computer scientist Siddharth Suri show in their book Ghost Work, technology companies love to imply that their systems are smart and automated, but in reality they rely on humans laboring under precarious conditions. These ghost workers, as Gray and Suri term them, appear in the interfaces of systems like Amazon Mechanical Turk as mere ID numbers.

Sarah T. Roberts, a professor at UCLA, focuses on commercial content moderators in her book Behind the Screen. It’s easy to assume that the systems for approving and flagging user-generated content — YouTube video uploads, tweets and posts, dating app bios — are mostly automated. In reality, the work falls to people who scan content at the limits of how quickly they can go and who, especially with video, face psychological repercussions from disturbing material. Although the moderators Roberts studied had some of the best knowledge of policy issues and possible solutions, they were kept on rotation, with limited-term contracts and differently colored employee badges, and lacked the leverage to get their voices heard by the policy or engineering and product teams. There is a human cost to enforcing efficiency on a system.

There are times when we might not want efficiency and would benefit from friction instead. From business economics, another field interested in efficiency as an ideal, management consultant John Hagel III and organizational researcher John Seely Brown describe the benefits of “productive friction.” From the end-user perspective, many products have been actively designed to be sticky, exploiting psychology to get us to scroll, share, and engage. While our commoditized attention generates revenue for companies, efficiency on this front makes it all too easy for us to spend time in ways we don’t want.

Technologists are itching to bring the value of efficiency to governance and infrastructure, as evidenced by Sidewalk Labs’ Toronto Quayside project and Marc Andreessen’s April 2020 essay “It’s Time to Build”, to pick just two recent examples. Influential architect and design theorist Christopher Alexander wrote that “a city is not a tree,” meaning that, to the chagrin of planners and anyone hoping to control a city top-down, successful unplanned cities embody a more complex, messy structure.

Shannon Mattern, an anthropology professor at The New School, builds on this in her book A City is Not a Computer, which collects and updates several of her essays. The seemingly omniscient dashboard, an emblem of smart city efforts worldwide, is flawed. Its tantalizing data visualizations wrongly suggest that we have the whole picture, precluding viewers — be they mayors or civic hackers — from recognizing where the dashboard falls short. As contracts expire and hype migrates elsewhere, neglect swiftly renders these amalgams of data sources non-functional, their API tokens and dependencies outdated.

Scale

TBD (cites Hanna)

Abstraction, efficiency, and scale are all interrelated, and similarities among their implications are to be expected, not dismissed as redundant. Black-boxing modules enables more efficient software to be written and systems to be enacted, and scale is about retaining efficiency while pursuing growth. These values are embedded in and mingle with history and society. Rubbing shoulders with counterculture and the academic-military-industrial complex, bolstered by the ascent and unspoken preeminence of neoliberal politics, tech ideology is also given credibility by its association with rigorous, rational ways of knowing. It influences the software and social change we make in tangible ways.

So what should we do? Malazita and Resetar, mentioned in Abstraction, designed a critical version of RPI’s introductory CS course, but it met resistance from well-intentioned faculty. More importantly, it left students with an “epistemic tension”: able to identify ethical and social problems, but with no path to fix them. Blindly pursuing transparency, as a counter to the black-boxing practice of abstraction, has shortcomings too. There are trade-offs to each framework, so perhaps it’s best to pursue what Turkle and Papert called epistemological pluralism: keeping different ways of thinking at our disposal. Part of this can mean drawing from contexts whose relevance isn’t obvious, like feminist theory and mutual aid, as McPherson and Hanna do. With epistemological pluralism, we’re less prone to losing sight of what we cannot or fail to measure, encode, or represent, making for a richer experience and society overall.