JUMP CUT
A REVIEW OF CONTEMPORARY MEDIA

There is no AI.

review by Gary Kafer

Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press, 2021. 336 pages.

On March 22, 2023, the Future of Life Institute published an open letter calling for a pause on the development of advanced artificial intelligence (AI) systems more powerful than GPT-4, the generative large language model from OpenAI. Signed by more than 31,000 individuals—including Yoshua Bengio, Elon Musk, and Steve Wozniak, along with executives at leading tech companies and prominent computer science professors—the letter advocates for a six-month moratorium to allow AI labs time to assess the societal impacts of such technologies and to brainstorm new regulations for research and development. “Powerful AI systems,” the letter states, “should be developed only once we are confident that their effects will be positive and their risks will be manageable.”[1]

Yet, as AI systems have proliferated into seemingly every corner of social, cultural, and economic life, such a letter seems odd. Why would employees of top tech companies including Tesla, Apple, Microsoft, and Google want to halt development of the very thing poised to advance their research and sales? Does a moratorium on AI mask deeper political motivations? In a response to the letter, the researchers behind the “Stochastic Parrots” paper argue that while it contains some important recommendations, the Future of Life Institute’s proposed moratorium is a deceitful ploy to place more authority in the hands of the very designers of AI systems.[2] Rather than address the very real harms of AI in the contemporary moment (such as worker exploitation and racial discrimination), the letter dabbles in fear-mongering rhetoric about AI’s ability to destroy human civilization. By playing off such fears, these companies in turn position themselves as saviors that know what’s best for us all. Against this, the Stochastic Parrots authors insist that “the actions and choices of corporations must be shaped by regulation which protects the rights and interests of people.” In short, we should democratize AI.

The open letter from the Future of Life Institute marks neither the first nor the last crisis over emergent algorithmic technologies that we are likely to witness in the early twenty-first century. In 2015, the institute published a different letter calling for technologists to prioritize research in “how to reap [AI’s] benefits while avoiding potential pitfalls.”[3] However, as an artifact of the historical present, the 2023 letter brings into focus an important issue that shapes public and industrial conversations about the politics of AI. If computational systems are simply reflections of human interests, then agency cannot be attributed to machines; rather, AI itself entails a way of seeing and organizing the world. In this way, the letter raises an important set of questions: How is AI conceptualized and constructed? How does AI produce and mask its own social and material consequences? And if AI is not inevitable, how did we get here?

Luckily, Kate Crawford has some answers. Principal researcher at Microsoft Research and co-founder and former director of research at the AI Now Institute at New York University, Crawford has spent the past two decades exploring the emergent political and social structures enabled by computational media and large-scale data systems. Her 2021 book Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence is a culmination of that work, a tour de force survey of the ways AI shapes both our understanding of the world and the world itself. Through both historical and ethnographic research, she reveals how AI derives from a set of technical, social, institutional, infrastructural, political, and cultural structures tied to human desires and biases.

At the core of Crawford’s critique is a simple but unnerving claim—

“AI is neither artificial nor intelligent” (8, emphasis original).

Indeed, if the aforementioned open letters suggest anything, it’s that there has perhaps been too much hand-wringing over the nature of what makes AI AI. Very often, the mere mention of AI evokes fantasies of sentient machines, like BINA48 and Sophia, the humanoid robots developed by Hanson Robotics in 2010 and 2016, respectively.

BINA48 from Hanson Robotics, released in 2010. Sophia from Hanson Robotics, released in 2016.

However, we should remember that in the twenty-first century United States, the term “artificial intelligence” was adopted as a marketing ploy for Silicon Valley companies to sell their products to a wider public. The introduction of algorithmic systems into social institutions like healthcare, policing, and labor sounds more trustworthy and efficient when companies pitch “intelligence” as something that computers can formalize and reproduce.

In her book, Crawford makes it clear that AI isn’t reducible to its technical components like machine learning programs, computers, and databases. For her, AI is rather a “registry of power” (8) that indexes how we use computation to organize the world and make ourselves legible within it. As Crawford writes, “artificial intelligence, then, is an idea, an infrastructure, an industry, a form of exercising power, and a way of seeing” (18–19). She uses such a definition to rebuke technocratic approaches to AI wherein any problem can be solved with some new computer program and any appearance of bias can be alleviated with a simple coding fix.

Instead, to fully understand the politics of AI requires that we interrogate how AI benefits certain kinds of actors, how developers optimize AI upon certain data and classification schema, and how authorities use AI to justify certain decisions. As a sociotechnical imaginary, AI is thus constructed to fit the needs of a particular group, in this case the wealthy men in power who hold court in Silicon Valley and across various government agencies. In short, there is no Artificial Intelligence. There are only the people that maintain authority over life and the planet upon which we live. By tracking the human political dynamics that undergird AI, Atlas of AI thus sits among a strain of humanities and social science literature on “sociotechnical systems” and more broadly on the politics of technology, much of which Crawford directly cites in her analysis.[4]

Crawford develops her critique of AI across six chapters that offer a kaleidoscopic view of the multiple forces of power that play upon environments, bodies, resources, energy, and time. They constitute what she calls an atlas—a topographical reading that “offers different perspectives and scales, beyond the abstract promises of artificial intelligence or the latest machine learning models” (11). For Crawford, an atlas is not a neutral account of the world, but a highly political exercise in worldbuilding, just as the medieval European mappae mundi once engraved theocratic concepts into cartographic form.

Heinrich Bünting, Europa Prima Pars Terrae in Forma Virginis (1582).

However, in her hands, an atlas is a tool to chart a path through a range of sites that bear witness to the territories of power that animate AI as a seemingly inevitable reality of our historical present.

Still from a video tour of Amazon’s fulfillment center showing logistical oversight. Amazon Tours, “Amazon Fulfillment Center Video Tour,” April 10, 2021. https://www.youtube.com/watch?v=UAKPoAn2cB0
Detail from a video screenshot on Affectiva’s website for Smart Eye Interior Sensing, which uses AI to detect the emotional state of automobile operators, the number of occupants in a car, and objects left behind. Video available at: https://go.affectiva.com/auto

Across these chapters, Crawford brings together a multifaceted and at times unanticipated collection of case studies to trace the geopolitics of AI. While most of the book dwells within the United States due to Crawford’s own work experience, it does at times venture elsewhere to assess the global impacts of the AI industry, including mining sites in Indonesia and iPhone factory workers at Foxconn in China. In addition, the atlas is not only spatially diverse but temporally varied. In each chapter, Crawford situates contemporary concerns over labor, data, and classification within the historical processes of capital and imperial domination that have led to this moment. Throughout, we encounter familiar characters in the history of computing like Charles Babbage, Alan Turing, and Vannevar Bush in addition to less discussed figures, including U.S. craniologist Samuel Morton and his classification schema for human races, as well as French neurologist Guillaume-Benjamin-Amand Duchenne de Boulogne and his study of human emotions through photographic technology.

An illustration from Samuel Morton’s Crania Americana (1839). Synoptic plate 4 from Guillaume-Benjamin-Amand Duchenne de Boulogne’s Mécanisme de la physionomie humaine (1862).

We also encounter a myriad of historical sites that precede the more familiar geographies of AI in data centers, server farms, and fiber optic cables. These include gold-ore mining sites around San Francisco in the mid-1800s and the Chicago meat-packing industries of the late nineteenth and early twentieth centuries. These historical examples put pressure on the seeming inevitability of AI, showing how it was constructed for particular purposes within particular worldviews.

While each chapter prioritizes breadth over deep sustained readings of individual objects, Crawford’s book notably manages to address multiple kinds of readers at once, all of whom are likely to approach her case studies with different interests and prior knowledge. On the one hand, the book speaks directly to those within the AI community, whether in industry or academic institutions, who are at the forefront of designing and implementing computational systems. For these readers, the book aptly uses an historical and political approach that shows how AI is bound up with material and social power structures that construct what we take to be “intelligence” in machine learning systems. To this end, Atlas of AI is required reading for any computer science student.

On the other hand, the book addresses those more versed in critical approaches to AI, who are likely familiar with discussions of bias in facial recognition systems like IBM’s Diversity in Faces initiative or analyses of power within Palantir’s partnership with federal agencies.

A figure detailing IBM’s facial recognition technology from their paper on the “Diversity in Faces” project. Merler et al., “Diversity in Faces,” 2019, arXiv:1901.10436v6.
A screenshot detail from an instructional video on Palantir’s AI platform for using large language models in military and defense planning. Palantir, “Palantir AIP | Defense and Military,” April 25, 2023, https://www.youtube.com/watch?v=XEM5qz__HOU

And yet, such readers are still bound to discover a plethora of unexpected moments that build upon and nuance what Crawford calls AI’s “extractive industry” (15). Indeed, extraction is a familiar trope within a post-Snowden world; as we know, corporations and state governments actively accumulate information from populations in massive databases in order to track behaviors, assign risk scores, sell products, and detain individuals. However, as Crawford argues, extraction goes beyond the mere collection of data. Rather,

“the creation of contemporary AI systems depends on exploiting energy and mineral resources from the planet, cheap labor, and data at scale” (15).

Extraction is thus not simply about collecting materials, but about the transformation of lifeworlds under the sign of computation. Crawford here provides a view of AI as a global logistical network that binds people, environments, and histories together under the demands of cloud-based computation. Through extraction, AI attempts to turn the planet itself into “a computationally legible form” (11).

The extractive dynamics of AI are perhaps most immediately visible in Crawford’s discussion of mineral mining, labor surveillance, and affect prediction. At these sites, environments and bodies are plundered for resources to power computational systems. More surprising, however, is how she locates extraction within the aesthetic transformations in visual culture facilitated by machine learning systems. In chapter 3, for example, Crawford examines how AI systems often draw from databases that were not originally produced as training data for machine learning programs. Such is the case for the National Institute of Standards and Technology (NIST) database, which contains mugshots of individuals, many now deceased, that can be used, for example, to train facial recognition systems to identify individuals at borders. Here, within the database, the people in the mugshots are divorced from their personal narratives and presented merely as data points for a pattern recognition program. The result, Crawford argues, is

“a shift from image to infrastructure, where the meaning or care that might be given to the image of an individual person, or the context behind a scene, is presumed to be erased at the moment it becomes part of an aggregate mass that will drive a broader system” (93).

As infrastructural objects, images no longer bear an indexical relation to the world, but get reduced to their capacity to operate within the statistical operations of computer vision technology. They are excised, aggregated, and weaponized to make prediction possible as an epistemic achievement of pattern recognition. Extraction is not only an act of database management but a way of seeing the world.

Atlas of AI is an ambitious and sprawling account of the contemporary geopolitics of AI that moves gracefully from the personal to the planetary, the mundane to the spectacular, the convenient to the disturbing. It is both unflinching in its analysis and generous in its presentation. Readers from a range of fields, including data science, media studies, law, political philosophy, and science and technology studies, will surely follow Crawford’s journey with anticipation. Yet some readers might desire a more nuanced argument about particular dynamics of AI. For example, Crawford does not provide a detailed account of the racial processes that underpin data-based technologies, a topic for which readers would do better to look elsewhere, such as the key texts of Simone Browne, Ruha Benjamin, and Safiya Noble.

Moreover, while Crawford insists that AI’s extractive industry operates upon a “colonial impulse” (11), she doesn’t quite engage the epistemic and ontological processes that connect computation to specific power formations. This is not uniformly the case: the sixth chapter, “State,” is noteworthy for its investigation of how private surveillance contractors recuperate and extend state sovereignty in ways that evade judicial oversight. By and large, however, the colonial impulse of AI seems taken for granted here as a historical consequence of state and corporate actors, in a way that obscures the specificity of extraction as a mechanism of power. Put differently, while it is true that AI reanimates the colonial impulses of yesteryear, the book does not theorize what this new configuration of power entails beyond its extension of certain historical circumstances of measurement and classification. To be sure, other scholars have taken note of how Big Data combines the extractive logics of colonialism with the abstract formalism of computation.[5] For Crawford, however, the brisk pace at which she moves through case studies comes at the cost of a sustained theory of power that interlaces infrastructure, capital, and labor.

Whether the book’s framework of power leaves something to be desired depends on its audience. For some readers, it may be enough to simply state that, yes, AI is a form of exercising power. However, what we understand as the nature of power itself does ultimately have important implications for what we imagine for our future—or, what we are to make of “resistance.” In the conclusion, Crawford reasserts that AI intensifies corporate and state matrices of control across varying scales of economic and political domination. This, however, can’t be the end of the story. In a necessary turn, she then asks:

“Could there not be an AI for the people that is reoriented toward justice and equality rather than industrial extraction and discrimination?” (223).

Crawford is clear that milquetoast programs for AI ethics, while seemingly benevolent, are veiled attempts by tech companies to maintain authority over how algorithms should be developed and implemented. Alternatively, by examining issues of power, we might rather work towards “a renewed politics of refusal” (226) that questions why AI is applied to solve certain problems—and further, why categories like race and gender are even used in the first place across sites of labor, finance, education, and border security.

However, this is where Crawford’s atlas ends. While she does gesture towards “the growing justice movements that address the interrelatedness of capitalism, computation, and control” (227), she doesn’t quite articulate what this politics of refusal entails. Rather, she states simply that

“by rejecting systems that further inequity and violence, we challenge the structures of power that AI currently reinforces and create the foundations for a different society” (227).

Now, it’s unfair to ask that a single book do everything. If the task of this monograph was to map the politics of AI as an expression of capital and colonial power, one could imagine a whole other project dedicated to the work of such justice movements; indeed, one need not look any further than Ruha Benjamin’s Race After Technology, Sasha Costanza-Chock’s Design Justice, and Catherine D'Ignazio and Lauren F. Klein’s Data Feminism. One wonders, however, how Crawford’s atlas might have changed had she more intently engaged the scholarship produced from these data justice movements, particularly their abolitionist and decolonial demands. To be sure, she does explicitly cite one social justice organization in her critique of state and corporate partnerships: a report by Mijente on a federal contract with Palantir (194). However, by and large, most of the critical scholarship that Crawford engages comes from other academics like herself. This approach in turn produces the very blind spot in her atlas that she can only gesture towards. It’s perhaps the case, then, that a view from the “bottom of the data pyramid” might reveal different cartographies by which to map the power dynamics of AI, ones that put pressure on some of the received wisdom of those peering out from the top.[6]

What then might an “atlas from below” look like? In their research on intelligence-led policing, the StopLAPD Spying Coalition produced what they call “the Algorithmic Ecology,” an “abolitionist tool” for tracing the webs of power at macro and micro levels of surveillance.[7]

The “Algorithmic Ecology” from the StopLAPD Spying Coalition and Free Radicals. Full details about the toolkit, as well as downloads of zines and informational sheets, available at: https://stoplapdspying.medium.com/the-algorithmic-ecology-an-abolitionist-tool-for-organizing-against-algorithms-14fcbd0e64d0.

Theirs is an atlas of a different kind, one that traces the relations of power that animate AI from the point of view of the communities most impacted by large-scale data-driven systems. It scales across various nodes of ideological, institutional, operational, and community-level power, connecting national agencies like the National Institute of Justice with local police programs and non-profits. It shows how AI itself is an ecosystem that gathers actors across various levels of administrative operations, many with nefarious intent and others seemingly acting in good faith.

When I teach this tool in undergraduate courses, students are always curious about the inclusion of “academia” alongside more familiar culprits like Palantir or the Department of Justice. And yet, we should not be surprised that those in higher education are very often complicit in the development of AI systems used by state and corporate actors. In her book, Crawford touches on this precise issue, especially in her discussion of the researchers at Duke, the University of Colorado, and Stanford who built databases of faces from people in public without their informed consent (109–110). While these examples seem obviously unjust, perhaps far more pernicious is the way that corporate and state actors now cooperate with academics in order to produce more “ethical” AI systems. In a 2021 report titled “Automating Banishment,” the StopLAPD Spying Coalition reveals how PredPol attempted to rebrand its predictive policing software as accountable and transparent with the help of a certain high-profile law professor.[8]

One is left to wonder, too, how the AI industry folds critiques of its systems into its operating protocols in order to produce the guise of equity. In chapter 4, Crawford tracks the movement for so-called “inclusive” (131) datasets following the release of the seminal Gender Shades project. In this project, Joy Buolamwini and Timnit Gebru showed how facial recognition systems—like those used by Microsoft, IBM, and Amazon—failed to register women and people of color because their training sets were largely composed of white men. To make more inclusive datasets would then entail incorporating more women and people of color. As a result, however, facial recognition systems would simply get better at recognizing marginalized people, and tech companies would now intently seek out minority groups to fill their datasets. In 2019, Google infamously sent contractors to Atlanta to scan the faces of unhoused Black people in order to improve its facial recognition system.[9] What was pitched as inclusivity turned out to be carcerality.

What tools like the Algorithmic Ecology from the StopLAPD Spying Coalition suggest, then, is how academic critiques of AI, however well intentioned, can inadvertently exacerbate power dynamics when incorporated into state and corporate agendas. This is an important realization, especially now as we witness the birth of a whole research industry centered on critiquing AI, known as “Critical AI,” complete with organizations, initiatives, journals, conferences, and congressional hearings. It’s an industry in which companies like Microsoft maintain ties to researchers like Crawford at the AI Now Institute.[10] In the wake of this industrial boom, one is left to wonder whether Critical AI is enough, especially a mode of critique that comes from the top.

To be clear, I’m not suggesting that researchers should remove themselves from the conversation about AI simply because of their institutional affiliations. Rather, anyone who writes about the politics of data from an academic institution—including Crawford, myself, and perhaps many of you—would be wise to approach their objects humbly, to question how our very instruments of mapmaking are inflected with unseen biases and political commitments. No one atlas is better than another, but they do tell different stories. Crawford’s own excels in tracking the global geopolitics of AI and revealing the subtle ways that computation has transformed our ordinary lifeworlds. Yet, from below, a different map of the world is possible.