Against Digital Worldlessness: Arendt, Narrative, and the Onto-Politics of Big Data/AI Technologies

Ewa Płonowska Ziarek (bio)

“The best way to humanize AI is to tell our stories.”

— Elizabeth Adams

I. A New Referendum on Reality

In a February 2020 article in The Atlantic entitled "The Billion Dollar Disinformation Campaign to Reelect the President," McKay Coppins offers disturbing insights into the digital extraction of big data used to target political advertising and to modify voter behavior. These techniques, developed by Cambridge Analytica in 2016, have temporal and geopolitical implications that extend well beyond the 2020 US campaign and its aftermath.1 Alarmed by the staggering amount of data collected on voters, Coppins argues that the damage that results from these massive and highly personalized political disinformation techniques includes not only a widely discussed political crisis of democracy in the digital age,2 but also and primarily the loss of a shared reality. As he puts it, "Should it prevail in 2020, the election's legacy will be clear – not a choice between parties or candidates or policy platforms, but a referendum on reality itself." More and more frequently discussed by computer scientists, political theorists, and the wider public alike, the loss of reality has not only prevailed but intensified: as the data and computer scientist Sinan Aral puts it succinctly, we are approaching "the end of reality" (24–55).3

With the waning of techno-optimism and the ascendancy of techno-dystopianism, numerous diagnoses have been offered for this state of affairs, ranging from the widely discussed "post-truth societies" and the blurring of reality and hyperreality (Floridi)4 to critiques of digital capitalism and the ideology of "computationalism."5 However, as the formulation of a "referendum on reality" suggests, this political concern about the loss of the real also foregrounds the negative ontological effects of the digital regime of power – what I call digital worldlessness. With its global reach, the hegemony of the digital regime and artificial intelligence constitutes a new horizon not only for the economy, but also for politics and culture. Therefore, any analysis of this hegemonic framework calls for broad interdisciplinary thinking, in which humanists (and particularly political, cultural, and literary theorists) need to be centrally involved, in addition to scholars and philosophers working in technology studies.

To analyze the problem of the digital worldlessness of big data and its use in AI from the perspective of political theory, I draw on Hannah Arendt's central claim that any loss of reality is the effect of historically specific assaults on human plurality. I develop the implications of this claim beyond the limitations of Arendt's own work6 by engaging the growing interdisciplinary critiques of the harms of datafication and of the algorithmic mediation of social relations. Although best known for her work on totalitarianism, Arendt interrogates the destruction of human plurality through high and low technologies of domination, from imperialism, anti-Semitism, and racism to nuclear warfare, biopolitics, and even the influence of religious "otherworldly" communities.7 For a number of scholars, Arendt's enduring legacy lies in contesting the resurgence of racism, right-wing populism, and fascism in the twenty-first century;8 others, such as Zuboff and Weizenbaum, enlist her work to understand the unprecedented character of computational technologies of power.9 I propose that the ontological and political stakes of the current referendum on reality require a genealogical account of the ways in which historically specific threats to human plurality are automated and encoded anew in digital technologies of power. Writing before the digital age, Arendt offers such a genealogical account of the destruction of human plurality by anti-Semitism, imperialism, racism, and refugee crises, culminating in the emergence of the horrific novum of totalitarianism. Among interdisciplinary thinkers who directly confront the damages of digital technologies of power, contemporary critical race theorists (in particular Ruha Benjamin and Simone Browne) argue that the long history of anti-black racism both precedes and is encoded anew in the global regimes of big data and AI. Building on this interdisciplinary framework, I argue that the contemporary ontological loss of reality is augmented by the political harms of digital technologies of power to human plurality.

According to Arendt, the sense of the real emerges from three types of intertwined relations that can be separated only for heuristic purposes: the appearances of natural phenomena to human senses, the construction of the world through work and technology, and the web of interpersonal relations effected by acting and speaking together. This view helps to define digital worldlessness: subordinated to the aims of digital capital, digital technologies not only intensify economic exploitation,10 but also undermine phenomenological appearances of the world and human plurality. This is the case because relations with the world are mediated by economy, science, and technologies and are interlaced with "the web" of political interactions (Human 183–84). This web of human affairs holds human plurality together and sustains a sense of the common world (204). Arendt's web metaphor is not accidental. Plurality for her is not a numerical multitude of isolated individuals, but rather a relational form of sociality characterized by the enabling tensions between equality and distinction, between being in common and the unrepeatable singularity of each person. Even if domination or violence restrict being in common to counter-communities of resistance, such commonness is intertwined with sharing "deeds and words," which enables political actors to disclose their uniqueness to each other and to enact together a new beginning in political life. Whenever this worldly appearance of the equality, distinction, and uniqueness of people is under assault by political technologies of power or economic exploitation, the sense of reality is eroded as well. As Arendt puts it, without human plurality, the world itself becomes "a heap of unrelated things" – no longer a common world fit for acting, understanding, or communicating (204).

Because realness is intertwined with human plurality, it cannot be guaranteed by factual knowledge or objectivity abstracted from human affairs: “Factual truth . . . is always related to other people: it concerns events and circumstances in which many are involved; it is established by witnesses and depends upon testimony; it exists only to the extent that it is spoken about . . . It is political by nature” (“Truth” 553).11 Put differently, reality presupposes public trust in the common world. As is now more obvious than ever, even the authority of science and technology – not to mention politics – rests on public trust, or on what Arendt rehabilitates in her political writings as common sense. In her reinterpretation of Kant, Arendt argues that insofar as common sense reflects human plurality and trust in the world, it is one of the highest political virtues, which should not be automatically discarded as the unreflective public opinion of the ideologically manipulated masses. On the contrary, common sense, human plurality, and realness are inseparable because all of them depend on the interpersonal capacity to perceive one another as equal and distinct and to relate with others to the same objects in the world, or to the same matters of concern, despite our irreducible differences, political conflicts, and diverse cultural locations.12 In other words, both the political and the ontological sense of the real emerge from sharing, acting, and arguing with others who are mutually regarded as equal and distinct: “common sense presupposes a common world into which we all fit” and vice versa (“Understanding” 318).

Another insight of Arendt's that is relevant for the contemporary "referendum on reality" is that a Western political and scientific "solution" to the loss of reality has been the replacement of common sense by calculation. Arendt calls this politically motivated substitution of calculation for common sense "logicality" in order to distinguish it from the uses of statistics and computation in other domains. As she argues in her reflection on the rise of statistics, calculation becomes both a symptom of and substitute for the lost reality because of its paradoxical double quality: on the one hand, the capacity for logic, like sensus communis, is common to all; on the other hand, its validity is utterly abstracted from the historical world, sensible phenomena, sociability, and ordinary language: "all self-evidence from which logical reasoning proceeds can claim a reliability altogether independent of the world and the existence of other people." "Only under conditions where the common realm between men is destroyed and the only reliability left consists in the meaningless tautologies [of logic]" can people accept logicality – or, in our historical moment, data – as the substitute for common understanding ("Understanding" 318).

Although a comprehensive understanding of digital harms to human plurality requires multiple methodological and disciplinary perspectives, I want to confront this threat by analyzing contradictions between two different social practices: the uses of big data in artificial intelligence (AI) and its subset, machine learning (ML),13 and the role of narratives in political acts. I do so, first, to foreground the contradiction between the mathematical models of data abstracted from the phenomenal world and the ethical and political difficulties of understanding that emerge from human plurality. Second, I want to highlight the conflict between the relational agency enacted in narrative and political acts and the increasing automation of high-stakes decisions in public life, ranging from the economy and management to education, criminal justice, healthcare, and immigration. This conflict is at stake in the resurgence of interdisciplinary interests in narrative in both critical data/algorithmic studies and in the new political movements that contest the hegemony of AI and big data. In literary studies, the primary investigations of data and computer programming are associated with the institutionalization of Digital Humanities and the changing cultures of reading,14 yet scholars in science and technology studies, critical data studies, and computer sciences15 deploy narrative to explain socio-historical mechanisms of power encoded in data-driven technologies. Researchers as diverse as Fiore-Gartland and Neff, Schrock and Shaffer, and Andrejevic return to narrative to contest the separation of mathematical models of data from socio-historical practices. As Dourish and Gómez Cruz persuasively argue, unacknowledged "narrative practices in data science" (6) make data-driven technologies applicable to social contexts, even though this narrative dependence is disavowed in the process of legitimating big data.

Finally and most importantly, the contrast between the uses of data in AI and the uses of narratives in political struggles clarifies the antagonistic ontologies of these practices. For Arendt, the conjunction of narrative/political acts presupposes and reenacts the "web" of human plurality on which the ontology of the common world depends. By contrast, insofar as the use of big data in AI and ML is subordinated to the economic aims of digital capitalism and the automation of decision-making, its ontology corresponds to digital worldlessness. Such digital worldlessness reframes the harms of algorithmic governmentality, a term proposed by Rouvroy and Berns, which Rouvroy defines as a "regime of neutralisation" (100) that aims to disarm subjective capacities for action, for speech, and "for decision (of deciding on grounds of undecidability rather than obeying the results of calculation)," as well as for collective imaginations of political projects (101). The contrast between these ontologies could not be starker, and it reflects the "hybrid" and antagonistic character of the current referendum on reality.

Thinking through this onto-political antagonism16 as reflective of larger technologies of power, we can avoid what Ruha Benjamin calls reductive oppositions between techno-determinism and techno-utopianism, the latter of which promises to solve social inequalities (44–46). Although historical genealogies of these contradictions between data, AI, and narrative might evoke a familiar conflict in modernity between storytelling and information—addressed, for example, in Walter Benjamin's famous essay "The Storyteller"—contemporary onto-political threats to human plurality are posed by a different type of information associated with the mathematical formalization of data, and purged of any reliance on the appearances of natural, social, and political phenomena.17 Lyotard's The Postmodern Condition can be reclaimed for its prescient critique of such formalization at stake in digital power. Already in 1979, he could foresee that incommensurable differences in the debates on justice, politics, and truth would be subordinated to the computational optimization of efficiency, which "entails a certain level of terror, whether soft or hard: be operational (that is, commensurable) or disappear" (xxiv). Although Lev Manovich proposes a diametrically opposed evaluation of the relation between narrative and big data in his influential 1999 essay "Database as Symbolic Form," he too suggests that this antagonistic relation reflects a larger onto-political conflict. According to the ontology of computer programming, "the world appears to us as an endless and unstructured collection" of data points to be processed algorithmically (81).18 By contrast, narrative not only presupposes at its bare minimum actors, events, and narrators but also attempts to make sense of the world in terms of causality – a term that for me also includes causality of freedom and therefore cannot be reduced to mechanical causality – rather than algorithmically detected correlations in huge data sets. That is why, for Manovich, "database and narrative are natural 'enemies'. Competing for the same territory of human culture, each claims an exclusive right to make meaning out of the world" (85).

Although I emphatically reject Manovich's claim that "databases" are more open than narratives,19 his insight into the conflicting ontologies presupposed by data and narrative remains more relevant than ever. The contemporary antagonism between big data, narrative, and AI shapes new forms of political action, initiated by new political movements such as the Algorithmic Justice League or Data 4 Black Lives. For example, a new project launched by the AI Now Institute, entitled "AI Lexicon," calls "for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI."20 This call for alternative narratives is all the more significant because it is an integral part of the mission of the Institute, which is dedicated to interdisciplinary research on the socio-political repercussions of artificial intelligence in order "to ensure a more equitable future" (AI Now Institute). As this example suggests, at stake in the competing political ontologies is not a resolution or a possible choice between big data and narrative. Rather, the question is whether this conflict can be mobilized to reimagine different possibilities of acting and speaking in support of freedom and equity, or whether the big data revolution—often described metaphorically as a tsunami, explosion, or flood—will further exacerbate the sense of digital worldlessness.

II. Arendt’s Political Ontology of Narrative and Action

What is distinctive in Arendt’s highly idiosyncratic approach to narrative is her argument about the inextricable relation between narrative and political acts. As a number of feminist theorists such as Cavarero, Kristeva, Stone-Mediatore, and Diprose and Ziarek argue,21 Arendt’s onto-politics of narrative practice emerges from her political theories of action, political agency, and judgment. The key point of her approach is not formal analysis but rather a much-needed reflection on the connections between stories, understanding, and political action defined in the broadest sense as the struggle for freedom. For Arendt, this relation between stories and political acts, or words and deeds, provides a larger framework for the analysis and critique of more specific narrative forms in culture, literature, history, ethnography, and so forth. Let me stress from the outset that the relation between narrative and politics underscores the ambivalent and often conflicting role of narrative, which can range from hegemonic legitimations of colonial, racist, and economic domination to support for and commemorations of struggles for independence. Narrative is therefore both the object and the means of political contestation. At the same time, Arendt provides a criterion for parsing these contradictory functions, which boils down to the question whether specific narratives enable or suppress political activism and human plurality.

Because of her emphasis on relational political agency in struggles for freedom, Arendt herself focuses more on narrative modes of resistance and their ontological presuppositions. As she famously claims in The Human Condition, action "'produces' stories" in the way that other activities, such as work, produce objects (184). Yet what is at stake in this strange reversal of narrative agency from authorship to political acts? What does it mean to say that action produces stories rather than that stories represent real or fictional events? In the context of the predictive analytics driving digital capital, one of the most important implications of this idea is that stories produced by action retrospectively reveal the action's meaning, which in turn depends on the contestable role of remembrance and interpretation. For feminist critics of Arendt, another crucial political implication of this claim is that narrative itself is a mode of acting in the world that presupposes human plurality.22 For Arendt, actions, narratives, and their interpretations, deeply conditioned and yet not determined by cultural, political, and economic norms, can safeguard the possibility of a new, unforeseeable beginning in history. The event of a new beginning foregrounds the possibility of acting in unexpected ways and offers a new interpretation of historical events, as well as the very capacity to imagine and enact new possibilities of being in common. Because political action is impossible in isolation, another common feature of narrative and political acts is a disclosure of human plurality, which is antithetical to the increasingly automated quantification and classification of persons.

Arendt’s insistence on the interconnection between political acts and storytelling implies that both action and narrative share the same ontological conditions. As numerous interpreters argue, one of Arendt’s main contributions to political theory is her famous claim that action both depends on and discloses the ontological condition of natality. As she writes in The Human Condition, all human activities and initiatives, including labor, art, and work, are

rooted in natality . . . However, of the three, action has the closest connection with the human condition of natality; the new beginning inherent in birth can make itself felt in the world only because the newcomer possesses the capacity of beginning something anew, that is, of acting. . . . Moreover, since action is the political activity par excellence, natality, and not mortality, may be the central category of political . . . thought. (9)

For Arendt, the condition of natality carries three interrelated meanings. First, as a crucial but overlooked modality of finitude, natality emphasizes a relational inter-dependent notion of personhood: from birth, we appear first to others and then to ourselves, such that uniqueness and plurality are inseparable from each other. Second, because uniqueness and plurality can be brutally destroyed by political violence, natality refers to a political ontology. Third, natality stresses the fact that the appearance of newcomers and strangers in the historically-constituted world is the event of a new beginning. Already at work in the “first” order of birth, the possibility of such a new beginning occurs again whenever political agents act in concert with each other. To sum up, the ontology of natality is characterized by an inter-relational political agency, a new beginning in politics through acting with others, and the disclosure of agents’ uniqueness through words and deeds.

Although a full analysis of the ontological presuppositions of narrative modes of resistance is beyond the scope of this article, I want to focus on the three interrelated tasks that both narrative and action perform: the disclosure of uniqueness, the enactment of human plurality, and the communication of judgments. These aspects of sociality ensure a common sense of the world and at the same time are most endangered by the algorithmic processing of big data. As many of Arendt’s interpreters point out, unrepeatable singularity is not the same as having an individual identity in isolation from other people. On the contrary, it is a mode of appearance to and with others. Because uniqueness exceeds any general attributes of identity, it cannot be defined conceptually but merely implied in the form of an address to another: “who are you?” The irreducible paradox of uniqueness lies in the fact that, despite its opacity and resistance to general meaning, it is nonetheless communicated to others: “The manifestation of who the speaker and doer unexchangeably is . . . retains a curious intangibility that confounds all efforts toward unequivocal verbal expression” (181). Consequently, the politico-aesthetic mode of disclosure of uniqueness in action and narrative is characterized by an irreducible tension between exposure to others and opacity to ourselves, between singularity and the generality of norms. As Cavarero and Diprose and Ziarek argue, this irreducible intangibility and publicity of uniqueness presents not an obstacle to but the very possibility of narrative, which has to shelter this enigma.23

Although narrative perspectives are partial and contingent, they constitute the “web” of human plurality because they are addressed to the multiple, equally partial, and often conflicting viewpoints of others. We could say of Arendt’s idea of narrative that, as Ruha Benjamin says of architecture, it “reminds us that public space is a permanent battleground for those who wish to reinforce or challenge hierarchies” (91). The disclosure of singularity through action and narrative foregrounds (whether implicitly or explicitly) the appeal to the judgment of others, and therefore enacts human plurality.24 In her idiosyncratic reading of Kant, Arendt proposes an analogy between aesthetic, political, and historical judgments (Lectures 62–65). At stake in this analogy is the most tenuous type of public communication of unrepeatable particulars – this event, this person, this work of art – without treating them as illustrations of a general concept, empirical data, or mathematical abstraction. Consequently, in her reinterpretation of reflective judgment, Arendt finds a public mode of sharing with others what is in fact incommunicable: uniqueness refractory to general norms or expectation of transparency.

Arendt’s recovery of the communicability of uniqueness is even more urgent for contemporary debates regarding algorithmic secrecy and its opposite: reductive calls for technological transparency. Reflecting the convergence of digital capitalism and the automation of political decisions, the algorithmic processing of big data bypasses the communicability of judgments – conceptual and reflective – that holds human plurality together. All too often, even those technical and non-technical decisions encoded in data mining and the machine learning pipeline that could be communicated conceptually remain withdrawn from public judgment and contestation: for instance, the choice of the problem to be solved, its mathematical formalization, the type of algorithms used, the source and type of the available training data, and so on. As Andrejevic argues, these semi-“oracular” automated outcomes withdrawn from public opinion aim to replace even conceptual judgments, contestations, and political imagination of alternative possibilities (2). Consequently, the implementation of algorithmic decisions not only shifts the burden of explanation and argument to those who are injured by “automated inequality” (Eubanks), but also contributes to digital worldlessness and the pervasive loss of a common sense.

By contrast, for Arendt, the communication of uniqueness depends neither on transparent discourse nor on the optimization of algorithmic outcomes. Rather, it is based on the appeal to the judgment of others without any assurance that they will actually agree with us. As she argues, the communication of reflective judgments requires a so-called enlarged mentality, that is, the ability to take other points of view into account both when we proclaim our judgments to others and in the very process of judging itself. Considering other points of view in the process of judging amounts neither to an objective standpoint nor to the appropriation of those viewpoints, but instead to a critical reflection on our own judgments (Lectures 70–72). By reflecting critically on one’s own judgment from the actual and the potential perspectives of others, one might distance oneself from one’s own dogmatism, narcissism, cultural norms, and habitual, unreflective opinions, thus achieving some relative impartiality, or what Kant calls “disinterestedness” (73). In other words, as Rodolphe Gasché suggests, for Arendt (unlike for Kant), reflective judgment consists in taking into consideration others with whom we share the world and whose potential or real viewpoints are represented by imagination (112–14). That is why judging from others’ points of view presupposes and enacts human plurality. According to Cecilia Sjöholm, communication of judgments sustains both sensus communis and realness (82–85).

Expanding Stone-Mediatore’s interpretation of Arendt’s storytelling as “a feminist practice and a knowledge of resistance” (1–3, 125–60), I argue that relational agency performed by activism and counter-narratives contests racist, gendered domination and reenacts human plurality. As we have seen, Arendt’s web of political interactions, performed through words and deeds, is a precondition of the appearance of the world. Although mediated by economy, science, and technology, the sense of the world and its realness also depends on sociality characterized by equality and distinction. The question remains whether such realness, dependent as it is on human plurality, can be performed anew by narratives and actions against automated inequality. Although narratives cannot oppose these technologies directly, they can challenge their tacit narrative legitimations and mobilize new political actions oriented towards justice and human plurality. As Ruha Benjamin powerfully argues, narratives enable

a justice-oriented, emancipatory approach to data protection, analysis, and public engagement. . . . It is vital that people engaged in tech development partner with those who do important sociocultural work honing narrative tools through the arts, humanities, and social justice organizing. (192–93).

By underscoring interactions with others based on justice rather than on efficiency, such emancipatory narratives can mobilize new political struggles against the most destructive harms of these technologies, which range from “engineered inequality” (188) to digital worldlessness.

III. Datafication, AI, and the Political Ontology of Digital Worldlessness

In the previous section I argued that for Arendt, "stories produced" by emancipatory actions are one of the key cultural/political practices that can enact human plurality, expand resistance, and shelter the onto-political sense of realness. Following Arendt, I claim that realness consists in the intertwining of appearances of natural phenomena to human senses and the "web" of interpersonal relations enacted by acting and speaking together. Now I want to examine the political and ontological consequences of the mathematical models of big data used in AI and ML that work in tandem with capital. The digital worldlessness of this hegemonic formation of power/knowledge emerges first of all from the abstraction of mathematical models of data from natural and human phenomena as well as from ordinary language. Second, the loss of realness is a consequence of the new threats to human plurality posed by ubiquitous surveillance, algorithmic secrecy (so-called black boxing), and the replacement of judgments by automated algorithmic decisions. By automating historical genealogies of domination, the algorithmic processing of mathematical models of big data (working in tandem with capital on a global scale) represents an unprecedented phenomenon, which proponents of AI do not hesitate to call a new techno-political and social "revolution."25 Yet, as scholars working in science and technology studies, black studies, legal studies, and economics argue, the significance of this "revolution" is contestable when we examine the imbrication of the old and new technologies of power/knowledge in oppression, racialization, coloniality, and capital. In particular, Ruha Benjamin's notion of "double coding" offers a key methodological perspective for analyzing the dependence of high-tech operations of power on the long histories of oppression. As the growing vocabulary of new critical terms such as "new Jim Code," "surveillance capitalism," "algorithmic governmentality," "societies of control," "algorithmic redlining," "surrogate humanity," and "digital poorhouse" suggests, these new technologies both encode structures of domination and facilitate the emergence of unprecedented forms of power/knowledge. What are the main features of this formation of power and knowledge in the computational age, and how do they contribute to digital worldlessness?

Perhaps ironically, the most insightful responses to these questions emerge from techno-revolutionary narrative legitimations of big data, offered for example by Mayer-Schönberger and Cukier's 2014 New York Times bestseller, Big Data: A Revolution That Will Transform How We Live, Work, and Think. This familiar grand narrative of technological progress and scientific objectivity justifies the deployment of data-driven AI in almost every domain of our lives, from governance, policing, management, and economy to healthcare, education, and culture. At the same time, it both reveals and depoliticizes the most contested elements of the digital regime of power. Recently supplemented by moral narratives of "AI for public good" or of "trustworthy AI" in response to growing public concern about the social harm of digital technologies, the techno-revolutionary legitimation of big data technologies remains one of the most ideologically charged "data fictions," to use Dourish and Gómez Cruz's apt formulation.26 Expanding on feminist and critical data studies, I argue that digital worldlessness corresponds to the well-known "4 Vs" characteristic of big data: volume, variety, velocity (the dynamism of data, continuously updated and tracked in real time), and veracity. While acknowledging the darker side of data, Mayer-Schönberger and Cukier stress the quasi-sublime volume of big data – or what Gaymon Bennet calls, in a different context, "the digital sublime" – and its epistemic aspiration to include the totality of empirical knowledge: because the plethora of data approaches n=all, traditional statistical sampling might no longer be required (6–31). However, the exponential volume and velocity of data are facilitated not only by the advancement of computational technologies but also by ubiquitous digital surveillance and data extraction, as feminist data scholars including D'Ignazio and Klein point out (21–47).27 Since the scale of big data extracted through surveillance exceeds human comprehension and communication, this overload of information has to be processed by algorithms that are often proprietary, which in turn introduce a new form of techno-political secrecy. In addition to scale and speed, the most contestable ideological assertion of big data lies in the claim of its veracity: big data appears to "speak for itself" and to reveal a new form of objectivity, which, as Berns and Rouvroy point out, seems to emerge immanently from life itself. This "veracity" depends first on the shift of scientific perspective away from explanations (historical genealogies, and all forms of causality, including the causality of power and freedom) to the discovery of "what is the case" based on correlations in mathematical models found by machine learning algorithms. Because the algorithmic discovery of correlations, patterns, or anomalies in huge data sets requires neither explanation nor prior scientific hypothesis testing, big data infamously announces the "end of theory" in the sciences (and not merely in the humanities) (Anderson).28 For D'Ignazio and Klein, such an epistemology once again replicates the "mythical, imaginary, impossible standpoint" so frequently criticized by feminist epistemologies (73–96). Furthermore, this departure from explanation, reasons, and interpretations of social relations abdicates any accountability for the history of oppression and inequality.
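The epistemic shift from explanation to correlation can be illustrated with a toy sketch. The following code (purely synthetic noise, invented variable counts) mines random data without any prior hypothesis and still "discovers" a strong pattern:

```python
# Mining purely random data for correlations: with enough variables, strong
# "patterns" appear by chance alone, supplied without reasons or explanations.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=(100, 200))       # 100 observations of 200 noise variables

corr = np.corrcoef(data, rowvar=False)   # all pairwise correlations (200 x 200)
np.fill_diagonal(corr, 0)                # ignore each variable's self-correlation
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"strongest 'pattern': variables {i} and {j}, r = {corr[i, j]:.2f}")
```

That such "findings" arrive without explanation is exactly what the announcement of the "end of theory" celebrates, and what its critics contest.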

Numerous interdisciplinary scholars – including computer scientists and feminist data scientists (Abebe et al. 256), the majority of them women and women of color, such as Safiya Umoja Noble, Cathy O'Neil, Simone Browne, Virginia Eubanks, Shoshana Zuboff, and Catherine D'Ignazio and Lauren Klein29 – have considered the dependence of mathematical veracity on historical domination. The initial justification of big data and ML – that their mathematical neutrality could counter socio-political prejudices and inequalities – is problematic even if such neutrality were possible, because it repeats the hyper-rationalist dream of transcending human conflicts, desires, limitations, and embeddedness in the world. Yet, as the ground-breaking work of O'Neil, Noble, Ruha Benjamin, Eubanks, Pasquale, and Rouvroy and Berns demonstrates all too well, big data and AI reproduce harms and discriminations and make it more difficult to contest them. Numerous causes have been identified for this state of affairs: a) technologies are never neutral tools but economic and sociopolitical operations of power; b) social data used for machine training is shaped by the long-standing history of systemic racial, economic, and gender injustices (what Ruha Benjamin calls "double coding"); and c) the emergence of new hierarchies of power/knowledge between those who have the economic, political, and intellectual capital to extract data and design models, and communities subjected to unregulated algorithmic decisions.

The quantification of the heterogeneity of the world further contributes to digital worldlessness. "Speaking" in abstraction from the phenomenal and historical world, digital technologies owe their velocity (speed) and efficiency to the rejection of facticity and of the ambiguity of both ordinary and scientific languages for the sake of the mathematical formalization of empirical facts and their translation into machine computability. As Mayer-Schönberger and Cukier admit, the revolution of big data is ultimately not about size but about making quantification synonymous with understanding (79–97). That is why they propose to replace the word "data" with the more precise and now widely used neologism "datafication." By purging the term "data" of its lingering etymological reference to the Latin datum, datafication conveys the agency of computerized data mining, which converts the irreducible heterogeneity of natural, political, and historical phenomena into a quantifiable "variety" that can be measured, stored, and retrieved (using digital processors and storage) (78). Evocative of colonial and racist conquests, the violent rhetoric of the "datafication" of the whole world emphasizes the technological ability and the political desire to "capture quantifiable information" (78).

The onto-politics of digital worldlessness is most explicit in the shift from a notion of the world regarded "as a string of happenings that we explain as natural or social phenomena" to "a universe comprised essentially of information" (96). Mayer-Schönberger and Cukier refer to a mathematical theory of information, defined in the Oxford English Dictionary in terms of "the statistical probabilities of occurrence of the symbol or the elements of the message." Introduced in 1948 by Shannon (among others), this mathematical, statistical approach to information "must not be confused with its ordinary usage . . . In fact, two messages, one of which is heavily loaded with meaning and the other of which is pure nonsense, can be exactly equivalent, from the present viewpoint, as regards information" (Shannon and Weaver 99). Only in a universe consisting of information can we have endless debates about whether the brain is an information-processing computer or vice versa. Abstracted from irreducible heterogeneity, a universe reduced to mathematical information processed by computers is one of the best definitions of "digital worldlessness." Often conveyed by metaphors of natural disasters like floods or tsunamis, the destructive character of such onto-politics is perhaps most dramatically (if inadvertently) conveyed by the words of computer scientist Chris Ré, who underscores the voracious character of data mining and machine learning: "Software has been 'eating the world' for the last 10 years. In the last few years, a new phenomenon has started to emerge: machine learning is eating software."
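Shannon and Weaver's point can be verified directly: because entropy is computed from symbol frequencies alone, a meaningful sentence and a nonsensical shuffle of the same characters are exactly equivalent "as regards information." A minimal sketch (the example sentence is my own):

```python
# Shannon entropy in bits per symbol: H = -sum(p_i * log2(p_i)), where p_i is
# the relative frequency of each symbol. Meaning plays no role in the formula.
import math
import random
from collections import Counter

def shannon_entropy(message: str) -> float:
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

meaningful = "the common world depends on human plurality"
nonsense = "".join(random.sample(meaningful, len(meaningful)))  # same symbols, shuffled

print(nonsense)  # pure nonsense, e.g. "ndoorl mp ueyh..."
print(f"{shannon_entropy(meaningful):.4f} vs {shannon_entropy(nonsense):.4f}")  # identical
```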

If the abstraction of data from both natural/historical phenomena and ordinary language constitutes one side of digital worldlessness, the quantification of human plurality by the rapidly increasing datafication of social relations constitutes the other. As Mayer-Schönberger and Cukier candidly admit, big data analytics transforms all human interactions and activities – values, moods, friendships, actions, stories, and interpretations – into calculable quantities (91–94).30 Once datafied, social media platforms "don't simply offer us a way to find and stay in touch with friends and colleagues," but transform everyday interactions into the lucrative currency of data, which can be sold, treated as signals for investments, used for profiling, and turned into predictions about our future (91). Such voracious, profit-driven datafication is indeed what renders the meaning of the common world semantically "poor" and by extension constitutes the "poverty" of data science, to paraphrase Marx's famous indictment of "the poverty of philosophy."

The two most prevalent mechanisms of datafication that undermine human plurality are digital surveillance and algorithmic secrecy. Big data and machine learning technologies cannot function without pervasive digital surveillance, which enables the continuous extraction of data from human and nonhuman occurrences and phenomena. The scale of data extraction automates racist, disciplinary, or authoritarian apparatuses of surveillance, which have disproportionately targeted racialized minorities, political dissidents, immigrants, refugees, and whole populations subjected to biopolitical normalization. To draw upon Frank Pasquale's influential formulation, digital surveillance deployed by international corporations and governmental institutions alike fractures human plurality into multiple "black box societies." One of the most familiar metaphors in discussions of big data and machine learning algorithms, "the black box" is usually shorthand for a lack of algorithmic transparency. Pasquale instead uses the term "black boxing" to describe a digital tool for social, political, and economic power. The term refers first to recording devices such as GPS, biometric sensors, software, and cameras used in cars, phones, the Internet of Things, border crossings, policing, workplaces, homes, and ubiquitous "smart" technologies that harvest data usually without users' awareness or consent.31 Enabling the continuous tracking of things, people, and activities in real time, such pervasive extraction of data and monitoring constitute a dispersed network of power relations, which Kevin Haggerty and Richard Ericson call "surveillant assemblages" to distinguish them from top-down models like Orwell's Big Brother or Foucault's Panopticon.

The second aspect of black boxing that supports digital worldlessness is algorithmic secrecy, often misnamed as a lack of transparency in machine learning algorithms,32 which undermines the relational character of human plurality and a shared sense of the world. By destabilizing public/private and political/economic distinctions, algorithmic secrecy prohibits access to the proprietary software mega-corporations use to analyze billions of socioeconomic data points. This proprietary aspect of secrecy is political in nature and can be challenged by legislation and political activism. Yet digital secrecy refers not only to the wider public's lack of technical expertise but also to the complexity and opacity of machine learning algorithms that exceed human understanding altogether, including that of the experts themselves, as is the case with the neural networks used in controversial facial recognition technologies. Algorithmic secrecy therefore differs from familiar political technologies of secrecy associated with authoritarianism, secret societies, or the secret apparatus of the state. Based on algorithmic secrecy, technological opacity, and digital surveillance, the power of "black boxing" is "opaque, unverifiable, and unchallengeable" (Brevini and Pasquale 2),33 especially by those who are discriminated against based on its results. Revising this analysis in the context of the algorithmic encoding of racial inequalities, Ruha Benjamin renames black boxing as an "anti-Black box" (34–36) and points out that the power structure that relies on black boxing constitutes a new digital "Jim Code" regime of racialization (1–48).

Algorithmic secrecy and digital surveillance drive digital profiling, or the algorithmic classification and ranking of users, on a global scale. As Browne, Ruha Benjamin, D’Ignazio, and Klein demonstrate in different ways, the digital capture of human and nonhuman interactions automates the long history of the quantification of human beings in regimes of racialization and colonialism. What is new in this threat to human plurality is not only its scale, technological opacity, and proprietary secrecy, but also the predictive character of digital profiling generated by “a-signifying machines,” to use Berns and Rouvroy’s formulation (11). Capturing even the most fleeting interactions, algorithmically generated digital group profiles disregard sociality, whereas automated personalizations destroy unrepeatable uniqueness. As Rouvroy and Berns point out, algorithmic profiling and personalizations are indifferent to the singularity of persons and their substantive political engagements.34 Although some profiles are based on professional, religious, and political affiliations, most digital profiling bypasses collective affiliations and constructs arbitrary assemblages based on correlations and patterns discovered in huge data sets by data mining algorithms. After the extraction of data (which reproduces historical inequalities) and its conversion into machine-friendly mathematical models, algorithms find correlations among multitudes of information, depending on the type and purpose of the profile (Otterlo 40–64). The subsequent algorithmic attribution of group profiles to individual users, regardless of whether or not their data was used to construct these profiles, constitutes so-called personalization. Digital personalization is most apparent in all kinds of automated recommendations of what one should watch, buy, “follow,” “like,” and least discernible in automated exclusions from social goods and life chances.
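A deliberately simplified sketch can make this two-step mechanism concrete: group profiles are induced as statistical clusters of behavioral traces, and one of them is then attributed to a newcomer whose data played no part in constructing it. All data, feature descriptions, and cluster counts below are synthetic and hypothetical:

```python
# Profiling as clustering, "personalization" as attribution of a group profile.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Rows are users; columns are arbitrary behavioral signals (clicks, dwell
# time, purchases). Correlations, not affiliations, define the groups.
traces = rng.normal(size=(1000, 5))

profiles = KMeans(n_clusters=4, n_init=10, random_state=0).fit(traces)

# A new user is assigned the nearest group profile, without ever being asked
# who they are or whether the grouping means anything to them.
newcomer = rng.normal(size=(1, 5))
print(f"newcomer attributed to profile #{profiles.predict(newcomer)[0]}")
```

The cluster label functions here the way group profiles function in the systems described above: an attribution that never poses the question "who are you?"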

The extraction of billions of data points without users' consent or awareness for the purposes of profiling creates multiple endlessly decomposable and recomposable "data doubles" (Haggerty and Ericson 613–14). Consequently, users are targeted by power as "dividuals," in Deleuze's phrase – that is, as decomposable aggregates of numerical footprints (3–7).35 Its indifference to human uniqueness, plurality, and activism renders this new technology of domination efficient at generating profits, managing risk, and distributing benefits and punishments inequitably. Depending on the type or purpose of the profile, users are sorted, classified, and ranked according to consumer or political behavior, productivity rates, healthcare needs, financial, medical, or criminal risks, and perhaps most pernicious of all, psychometric traits inferred from online behavior. Ranging from academic analytics and citation indexing to credit and recidivism scores, this ubiquitous scoring-based discrimination processes digital traces of activities as "'signals' for rewards or penalties, benefits or burdens" (Pasquale 21). Furthermore, because of its predictive function, individual profile rankings refer not only to past performances but also to their probable future.36
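In the abstract, such scoring can be sketched as a weighted aggregation of trace "signals" that is then projected forward as a probable future. The signals, weights, and projection below are entirely invented and stand in for no actual scoring system:

```python
# A toy "digital score": traces of past activity weighted as rewards or
# penalties, then extrapolated into a predicted future ranking.
from dataclasses import dataclass

@dataclass
class DataDouble:
    late_payments: int   # penalty signal
    citations: int       # reward signal (e.g., academic analytics)
    prior_score: float   # past performance folded into the present

def score(d: DataDouble) -> float:
    return 0.5 * d.prior_score - 2.0 * d.late_payments + 0.1 * d.citations

def predicted_future_score(d: DataDouble, trend: float = 0.95) -> float:
    # The "predictive function": the ranking refers to a probable future.
    return trend * score(d)

user = DataDouble(late_payments=3, citations=40, prior_score=70.0)
print(score(user), predicted_future_score(user))
```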

Another paradox of algorithmic power is that despite its indifference to human plurality, profiling nonetheless converts “data doubles” into “digital characters,” to use sociologist Tamara K. Nopper’s suggestive term,37 at the very moment it targets individual users. As Rouvroy and Berns similarly point out, infra-subjective data is converted into “supra-individual” models attributed to individual users without asking them to identify themselves or others to describe them (10). We are encountering here another vicious circle of datafication: abstracted from all social relations and indifferent to human uniqueness, predictive profiling is translated back into digital characters who are evaluated according to psychological, emotional, and moral characteristics such as “credibility, reliability, industriousness, responsibility, morality, and relationship choices” (Nopper 176). According to Ruha Benjamin, the conversion of user profiling into digital characterization automates familiar technologies of racialization: “this is a key feature of racialization: we take arbitrary qualities (say, social score, or skin color), imbue them with cultural importance and then act as if they reflected natural qualities in people” (75). This is precisely what is at stake in the recent attempts to determine criminality using facial recognition technologies.

Digital threats to human plurality culminate in the automation of intersubjective political as well as economic decisions. The punitive and distributive power of the "datafied" state increasingly depends on algorithmic outcomes that allocate socio-economic opportunities, access to jobs, social benefits (Eubanks 3), and public goods (including PhD fellowships). Perhaps most alarmingly, these algorithms also regulate predictive policing and the application of law by assisting judges in sentencing convicted defendants. Outsourcing these decisions to AI ushers in an entirely new mode of sharing social goods, merits, needs, or risks.38 As Rouvroy powerfully argues, algorithmic governmentality transforms the idea of justice, which is no longer guided by "norms resulting from prior deliberative processes" (99) and their contestation, but is managed instead by automatic algorithmic procedures. By destroying what Arendt calls an enlarged mentality, outsourcing judgments to automated algorithmic outcomes further suppresses political agency and the possibility of a new beginning. The inequalities inscribed in law, economic systems, hegemonic narratives, and statistical normalizations could, at least in principle, be exposed and challenged by political protests. But, as Stiegler argues in reference to Rouvroy and Berns's work, algorithmic governmentality "ultimately destroys social relations at lightning speed," and so "becomes the global cause of a colossal social disintegration" (7).39 Because such disintegration damages human plurality at scale, digital worldlessness makes struggles for emancipation much more difficult to conceive.

Although I agree with Zuboff’s, Stiegler’s, and Rouvroy and Berns’s different assertions about the unprecedented character of the power of big data technologies, I think their accounts are limited by the lack of a robust genealogical analysis of its emergence. In particular, I contest Stiegler’s argument that the advent of an automated society ushers in “absolute novelty” (7). Such an argument risks inverting a techno-revolutionary narrative into a dystopian narrative of the absolute break produced by techno-determinism. Both of these narratives are powerfully challenged by critical race and decolonial critics like Ruha Benjamin or Simone Browne, who argue that digital technologies of power are conditioned by long histories of racial profiling, discrimination, colonialism, eugenics, and economic exploitation. This raises far more difficult questions for political theory and political critique: How does algorithmic governmentality automate existing inequalities on the “enterprise scale” and expand them to groups so far protected by forms of privilege such as Whiteness, wealth, heteronormativity, and able-bodiedness? And in what sense does it produce new forms of power, no longer operating based on statistical normalizations or ideological justifications? Ruha Benjamin’s methodological perspective of “double coding” shows how historical genealogies of racialization assume unforeseen forms, which call for new modes of resistance. Similarly, my argument that the destruction of human plurality is tantamount to digital worldlessness emphasizes the task of reconstructing historical genealogies of the unprecedented, which can be accomplished only retrospectively.

By stressing both the genealogical continuities and the unprecedented character of digital worldlessness, this double perspective avoids the legitimate criticism of accounts that blame discrimination on AI and big data yet ignore low-tech but equally pervasive social inequalities. Furthermore, it facilitates a more meaningful analysis of the shortcomings of a new narrative justification of these technologies, that is, AI for social good or trustworthy AI. Formulated in response to widespread public demands for political regulation and oversight, these narratives frequently proclaim the principles of fairness, accountability, and transparency. These principles are important but insufficient to counter digital worldlessness because they are not subject to political contestation over their meaning and use in the context of AI and ML. Furthermore, such agonistic politics is on a collision course with the outsourcing of judgments to automated algorithmic decisions. The same goes for transparency: even if in principle every citizen could acquire coding literacy, technical expertise is insufficient to challenge the algorithmic replacement of political judgments, which require justification, accountability, and dissent by the public affected by them. As we have seen, the principle of human plurality is irreducible to transparency because it consists in acting with others and taking their judgments into account in the process of deliberation or contestation of power. Consequently, digital worldlessness can be challenged only by reimagining the deployment of big data and machine learning in support of action, equity, and freedom. Here too genealogical research is indispensable. As Ruha Benjamin demonstrates in the context of Black struggles against racism, "there is a long tradition of employing and challenging data for Black lives. But before the data there were, for Du Bois, Wells-Barnett, and many others, the political questions and commitments to Black freedom" (192). Drawing on such traditions would entail reasserting the priority of action, justice, and the possibility of a new beginning created by narrative and political acts.

IV. Conclusion

In this essay, I have interpreted digital harms to human plurality by focusing on the contradictions between two different social practices: the conjunction between big data and AI, and the interdependence of narratives and political acts. Characterizing our “hybrid” configuration of democracy, digital capital, and algorithmic governmentality, these contradictions have both political and ontological dimensions. On the basis of Arendt’s work, I have argued that the ontology of natality corresponds to the intersection between narrative and political activism. Natality stresses the fact that human beings are capable of disclosing their uniqueness, sharing the world, and acting in concert with others against discrimination and for freedom. Although the politics of big data and machine learning attracts more scrutiny from journalists and scholars, its ontological effects are also a matter of public concern, as evidenced by worries about the loss of a shared reality. I have called this onto-political crisis “digital worldlessness.” This sense of worldlessness is often tacitly or explicitly acknowledged, most notably in Rouvroy’s argument that the algorithmic processing of big data “presents itself as an immune system of numerical reality against any incalculable heterogeneity, against all thought of the unassimilable outside, irreducible, non-marketable, non-finalized. . . . that is to say, also, against the world” (100). As this insight suggests, digital technologies are not only antithetical to the ontology of human plurality, but in fact put it at risk.

However, rather than representing a possible choice, the antagonistic relations between narrative, action, and big data fracture these social practices from within and disclose their ambiguously hybrid character. On the one hand, the exponentially increasing datafication of the common world tacitly relies on numerous narratives of legitimation – such as the resurgence of grand narratives that equate technological progress with political freedom and the public good – or on their opposites, the dystopian visions of AI's conquest of its creators. Less sensationally, the referentiality of data and the justification of its "outcomes" are likewise mediated by numerous "data fictions." On the other hand, new forms of political activism, such as the Algorithmic Justice League or Data 4 Black Lives, appeal to emancipatory narratives as key elements in the political struggle over collective governance and the community-based use of data. As the manifesto for Data 4 Black Lives proclaims, the politics of data can be imagined and mobilized otherwise: "Data protest. Data as accountability. Data as collective action." Similarly, the Algorithmic Justice League relies on "the intersection of art, ML research and storytelling" in its resistance to the harms of datafication. Often started by computer scientists and statisticians, the overwhelming majority of them women and people of color,40 such organizations reclaim collective action, judgment, and artistic practices in their struggles against digital harms. I would argue that these narrative and political acts also reenact human plurality in Arendt's sense and in so doing provide an alternative to digital worldlessness.

Ewa Płonowska Ziarek is Julian Park Professor of Comparative Literature at the University at Buffalo and Visiting Faculty at the Institute for Doctoral Studies in the Visual Arts, Maine. Most recently she co-authored, with Rosalyn Diprose, Arendt, Natality and Biopolitics: Towards Democratic Plurality and Reproductive Justice (2019), awarded a Book Prize by Symposium: Canadian Journal for Continental Philosophy. Her other books include Feminist Aesthetics and the Politics of Modernism (2012); An Ethics of Dissensus: Feminism, Postmodernity, and the Politics of Radical Democracy (2001); The Rhetoric of Failure: Deconstruction of Skepticism, Reinvention of Modernism (1995); and co-edited volumes such as Intermedialities: Philosophy, Art, Politics (2010), Time for the Humanities (2008), and Revolt, Affect, Collectivity: The Unstable Boundaries of Kristeva's Polis (2005). Her interdisciplinary research interests include feminist political theory, modernism, critical race theory, and algorithmic culture.

Footnotes

I would like to thank Cheryl Emerson for her comments and invaluable help in editing and compiling the Works Cited.

1. Working from an international perspective, Kuehn and Salter identify four main threats to democracy, which in my view also contribute to the crisis of reality. These are: “fake news, filter bubbles/echo chambers, online hate speech, and surveillance” (“Assessing Digital Threats to Democracy” 2589).

2. For a comprehensive overview of the scholarly debates, see, for example, Kuehn and Salter (2020). See also Bernholz et al. (2021).

3. In this context, Albert Borgmann’s Holding On to Reality sounds more like a desperate philosophical plea than a “constructive approach” to integrating new digital information technologies in ways that enhance our culture and sense of reality. What interests me is his conclusion that reclaiming a sense of the world in the context of information technologies requires “considered judgment” in the public sphere and the telling of stories: “books have a permanence that inspires conversation and recollection” (231).

4. Certainly, for literary theorists of postmodernism, “the end of reality” in the age of big data and AI evokes the protracted discussions of Jean Baudrillard’s Simulacra and Simulation (1981). There are, however, two crucial differences worth noting if we want to consider Baudrillard’s relevance in the 2020s: first, the crisis of reality is now a public and not only a theoretical concern; and second, the technological “affordances” that created this crisis far exceed what Baudrillard could still describe as “the metaphysics of the code” (103, 152).

5. For an excellent analysis of the politics and the cultural genealogy of computationalism, see David Golumbia, The Cultural Logic of Computation, 7–27.

6. There is an ongoing debate in Arendt studies regarding the failure of her own historical analysis of anti-black racism as well as of economic and gender oppression. The main question is whether this failure reveals the limitations of Arendt’s theoretical account of human plurality, freedom, and action – such as her commitment to the private/public distinction and her limited account of embodiment – or whether it shows that Arendt does not follow the keenest insights of her own thinking. My own position is that a feminist antiracist engagement with both types of limitations in Arendt’s work opens up new possibilities for rethinking human plurality and political action. For the most important critique of Arendt’s shortcomings with respect to anti-black racism, see Kathryn Gines, Hannah Arendt and the Negro Question. For feminist critical revisions of Arendt’s philosophy, see, among others, Honig’s classic anthology, Feminist Interpretations of Hannah Arendt, as well as Cavarero, Diprose and Ziarek, Stone-Mediatore, and Zerilli.

7. In The Human Condition, for example, Arendt considers Christian charity a communal principle of worldlessness (54–55).

8. For a powerful argument about the relevance of Arendt’s work on totalitarianism in the era of Trump, see Roger Berkowitz, “Why Arendt Matters: Revisiting ‘The Origins of Totalitarianism.’”

9. See for example Weizenbaum (11–13) and Zuboff (22, 139, 358–360).

10. In the context of AI, the relation between labor, work, and technology has to be analyzed in terms of global digital capitalism, but this topic is beyond the scope of this essay.

11. This political nature of truth and reality does not erode the difference between opinions and facts.

12. For an excellent discussion of the relation between Arendt’s interpretation of Kant’s sensus communis and her notion of realness, see Cecilia Sjöholm, Doing Aesthetics with Arendt, 82–85. For a phenomenological account of common sense in relation to worldliness, see, among others, Marieke Borren, “‘A Sense of the World’: Hannah Arendt’s Hermeneutic Phenomenology of Common Sense,” 225–55.

13. For a useful definition of machine learning and its difference from AI see https://azure.microsoft.com/en-us/overview/artificial-intelligence-ai-vs-machine-learning/#introduction: “Artificial intelligence is the capability of a computer system to mimic human cognitive functions” by using math and logic “to learn from new information and make decisions.” Machine learning is an application of AI based on the use of “mathematical models of data to help a computer learn without direct instruction.”
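Purely as an illustration of this quoted distinction – a minimal sketch in Python assuming the scikit-learn library (my example, not drawn from the Azure source or the works cited here) – the program below is never given the rule relating inputs to outputs; it estimates a “mathematical model of data” from example pairs and then generalizes to new input:

```python
# Illustrative sketch only: machine learning as fitting a mathematical
# model to data rather than following directly programmed rules.
# (scikit-learn is an assumed dependency, not among the works cited.)
from sklearn.linear_model import LinearRegression

# Example data: the rule "output = 2 * input" is never written anywhere;
# the model must infer it from these pairs.
X = [[1.0], [2.0], [3.0], [4.0]]   # inputs
y = [2.0, 4.0, 6.0, 8.0]           # observed outputs

model = LinearRegression().fit(X, y)   # "learning" = estimating parameters
print(model.predict([[5.0]]))          # generalizes: prints approx. [10.]
```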

14. For an overview of the emergence of Digital Humanities and its institutional support, see Manovich, “Trending: The Promises and the Challenges of Big Social Data.” For a more recent reevaluation of the impact of data science on Digital Humanities in the context of the more pronounced political concerns of gender and race, see the special issue of PMLA, Varieties of Digital Humanities, and in particular Booth and Posner, “Introduction: The Materials at Hand.”

15. This is especially the case in the context of the normative turn in computer sciences. See for example Abebe et al., “Roles for Computing in Social Change.”

16. In the context of data politics, my use of political ontology resonates with, but is broader than, the “logic of preemption” that characterizes the “ontopower” of the surveillant assemblage analyzed by Peter Mantello in “The Machine That Ate Bad People: The Ontopolitics of the Precrime Assemblage” (2). My discussion of the political ontology of natality in Arendt builds upon Diprose and Ziarek (19) and Colin Hay, “Political Ontology.”

17. As the prominent computer scientist Joseph Weizenbaum argues, reality in the computational regime becomes synonymous with the reduction of difference to quantification (25).

18. In a stronger formulation, computer scientists define ontology as “the impact” of computational methods “on society and individuals” (Ophir et al. 449).

19. Manovich’s work reflects the optimistic promises of the user-driven Internet of the 1990s, which since 2003 has given way to a corporate, commercially driven Internet (Davies). Addressing the challenges of big social data more directly, Manovich remains optimistic, though aware of the unequal powers of the new “‘data classes’ in our ‘big data society’” (“Trending” 470). For a more sober reassessment of the political implications of data see, for example, Pasquinelli (254).

20. As an example of this new lexicon, see this Medium.com essay on AI nationalism: https://medium.com/a-new-ai-lexicon/a-new-ai-lexicon-ai-nationalism-417a26d212f8.

21. Cavarero focuses primarily on the relation between a desire for narrative and a desire for uniqueness; Kristeva argues for “life as narrative” – narrative bios – and the possibility of rebirth through storytelling (3–99); Stone-Mediatore articulates the political, emancipatory possibilities of Arendt’s notion of storytelling; Diprose and Ziarek focus on narrative’s relation to the aesthetics and politics of natality (289–352).

22. See Diprose and Ziarek, and Stone-Mediatore, among others.

23. This question about the narrative disclosure of uniqueness is debated among feminist theorists responding directly or indirectly to Arendt; see Cavarero; Diprose and Ziarek (295–306); and Kristeva (73–86). For example, in her response to Cavarero, Butler argues that the narrative disclosure of singularity is interrupted by the indifference and generality of discursive norms, which make us not only recognizable to others but also “substitutable” (36–39).

24. To develop this role of judgment in narrative and political acts, Stone-Mediatore (68–81) as well as Diprose and Ziarek (299–305) turn to Arendt’s political reinterpretation of Kant’s Critique of Judgment, whereas Zerilli develops its importance for feminist and democratic political theory. See in particular Zerilli’s analysis of Arendt’s engagement with Kant in the context of her feminist theory of judgment in Feminism and the Abyss of Freedom 125–163, and in her A Democratic Theory of Judgment, especially chapters 4, 7, and 9.

25. For a more philosophical account of such a revolutionary narrative, see Floridi.

26. The suppression of big data’s narrative, interpretive, and contextual dependence is necessary for the ideological positioning of data in society as “self-legitimating and self-fulfilling” (Thornham and Cruz 8); also quoted in Dourish and Cruz (4).

27. For a trenchant analysis of how power works in data science, perpetuated by gendered, economic, and racialized mechanisms of surveillance and domination, see D’Ignazio and Klein.

28. For a detailed critical discussion of this epistemic paradigm shift, see Kitchin.

29. For a discussion of the relationship between data, power, and the automation of racism, see, among others, Safiya Umoja Noble, Ruha Benjamin, and Simone Browne. For an in-depth analysis of the automation of inequality and poverty, see Virginia Eubanks and Cathy O’Neil. For the relation between digital technology and capital, see Zuboff. As she powerfully argues, surveillance capitalism appropriates and ruins the early promises of digital technologies to increase access to knowledge and participatory democracy (20, 67). For a trenchant analysis of the exclusions and dominations built into data sets, such as “missing data about femicides” and the “excessive surveillance of minoritized groups,” see D’Ignazio and Klein (38–72).

30. As Daniela Agostinho succinctly puts it, “[d]atafication has been broadly defined as the process through which human activities are converted to data which can then be mobilised for different purposes” (2).

31. These two meanings of the black box underscore for Pasquale the “colonization” of the public sphere and democracy “by the logic of secrecy” and surveillance. By contrast, Malte Ziewitz argues that this image of inscrutable and powerful algorithms is one of the modern myths surrounding algorithms (3–16). For a useful methodological approach to formulating data politics, see Ruppert et al. (1–7).

32. As Brevini and Pasquale point out, black boxing undermines “the many layers of our common lives” on the global scale (4). For a similar argument that the algorithmic processing of big data operates primarily on the level of social relations, see also Rouvroy and Berns.

33. See the rest of this special issue of Big Data & Society (Jan.–Jun. 2020) for an updated discussion of the developments of black box societies and the political economy of big data.

34. For the analysis of the three stages in the construction of digital profiling, see Rouvroy, and Rouvroy and Berns. For an excellent explication of the technology of digital profiling, see Otterlo. However, these analyses miss the reproduction of social inequalities in the data sets.

35. Although all users are targeted as “dividuals,” the political and economic consequences of this regime of power nevertheless vary greatly along lines of race, ethnicity, poverty, and gender. These lines of power determine whether profiled, ranked, or labeled subjectivities are ultimately classified as what Pasquale calls “targets” or as “waste” (33).

36. As O’Neil points out, a low or bad score indicates the probability that someone will be a bad hire, an unreliable worker or student, or a risky investment (15–31).

37. For further discussion of digital character, see Benjamin (73).

38. As Pasquale argued as early as 2015, “Decisions that used to be based on human reflection are now made automatically” (8).

39. See also Prinsloo.

40. For example, Joy Buolamwini, computer scientist, “poet of code,” and founder of the Algorithmic Justice League.

Works Cited

  • Abebe, Rediet, et al. “Roles for Computing in Social Change.” Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Jan. 2020, pp. 252–260.
  • Adams, Elizabeth. “Civic Tech: The Path to Public Oversight of Surveillance Technology in Minneapolis.” HAI Weekly Seminar Series, Stanford University, 14 Oct. 2020, https://hai.stanford.edu/events/hai-weekly-seminar-elizabeth-adams. Virtual lecture. Accessed 26 Aug. 2022.
  • Agostinho, Daniela. “The Optical Unconscious of Big Data: Datafication of Vision and Care for Unknown Futures.” Big Data & Society, Jan. 2019.
  • Anderson, Chris. “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.” Wired, 23 Jun. 2008, https://www.wired.com/2008/06/pb-theory/. Accessed 26 Aug. 2022.
  • Aral, Sinan. The Hype Machine: How Social Media Disrupts Our Elections, Our Economy, and Our Health — and How We Must Adapt. Currency, 2020.
  • Arendt, Hannah. The Human Condition. 1958. Introduction by Margaret Canovan, 2nd ed., Chicago UP, 1998.
  • ———. Lectures on Kant’s Political Philosophy. Edited by Ronald Beiner, Chicago UP, 1982.
  • ———. “Truth and Politics.” The Portable Hannah Arendt, edited and introduction by Peter Baehr, Penguin Books, 2003, pp. 545–575.
  • ———. “Understanding and Politics (The Difficulties of Understanding).” Essays in Understanding 1930–1954: Formation, Exile, and Totalitarianism, edited and introduction by Jerome Kohn, Schocken Books, 1994, pp. 307–327.
  • Baudrillard, Jean. Simulations. Translated by Phil Beitchman, et al., Semiotext(e), 1983.
  • Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press, 2019.
  • Benjamin, Walter. “The Storyteller.” Illuminations: Essays and Reflections, edited by Hannah Arendt, translated by Harry Zohn, Schocken Books, 1968, pp. 83–109.
  • Bennett, Gaymon. “The Digital Sublime: Algorithmic Binds in a Living Foundry.” Angelaki: Journal of the Theoretical Humanities, vol. 25, no. 3, May 2020, pp. 41–52.
  • Berkowitz, Roger. “Why Arendt Matters: Revisiting ‘The Origins of Totalitarianism.’” Los Angeles Review of Books, 18 Mar. 2017, https://lareviewofbooks.org/article/arendt-matters-revisiting-origins-totalitarianism/. Accessed 26 Aug. 2022.
  • Berkowitz, Roger, et al., editors. Thinking in Dark Times: Hannah Arendt on Ethics and Politics. Fordham UP, 2009.
  • Bernholz, Lucy, et al., editors. Digital Technology and Democratic Theory. Chicago UP, 2021.
  • Birmingham, Peg. Hannah Arendt and Human Rights: The Predicament of Common Responsibility. Indiana UP, 2006.
  • Booth, Alison and Miriam Posner, editors. PMLA, Special Topic: Varieties of Digital Humanities, vol. 135, no. 1, Jan. 2020.
  • Borgmann, Albert. Holding On to Reality: The Nature of Information at the Turn of the Millennium. Chicago UP, 1999.
  • Borren, Marieke. “‘A Sense of the World’: Hannah Arendt’s Hermeneutic Phenomenology of Common Sense.” International Journal of Philosophical Studies, vol. 21, no. 2, Feb. 2013, pp. 225–255.
  • Brevini, Benedetta and Frank Pasquale. “Revisiting the Black Box Society by Rethinking the Political Economy of Big Data.” Big Data & Society, Jan.-Jun. 2020.
  • Browne, Simone. Dark Matters: On the Surveillance of Blackness. Duke UP, 2015.
  • Butler, Judith. Giving an Account of Oneself. Fordham UP, 2005.
  • Cavarero, Adriana. Relating Narratives: Storytelling and Selfhood. Translated and introduction by Paul A. Kottman, Routledge, 2000.
  • Coppins, McKay. “The Billion-Dollar Disinformation Campaign to Reelect the President.” The Atlantic, Mar. 2020. Accessed 26 Aug. 2022.
  • Data 4 Black Lives. https://d4bl.org/. Accessed 26 Aug. 2022.
  • Davies, Todd. “Three Eras of the Internet.” Public Infrastructure: A Corporation for Public Software, Digital Public Infrastructure Series, Stanford UP, 27 Oct. 2020, https://www.slideshare.net/trbdavies/digital-public-infrastructure-a-corporation-for-public-software. Slides from a workshop. Accessed 26 Aug. 2022.
  • Deleuze, Gilles. “Postscript on the Societies of Control.” October, vol. 59, 1992, pp. 3–7.
  • D’Ignazio, Catherine and Lauren Klein. Data Feminism. MIT P, 2020.
  • Diprose, Rosalyn and Ewa Płonowska Ziarek. Arendt, Natality and Biopolitics: Toward Democratic Plurality and Reproductive Justice. Edinburgh UP, 2018.
  • Dourish, Paul and Edgar Gómez Cruz. “Datafication and Data Fiction: Narrative Data and Narrating with Data.” Big Data & Society, Jul.-Dec. 2018, pp. 1–10.
  • Erwig, Martin. Once Upon an Algorithm: How Stories Explain Computing. MIT P, 2017.
  • Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press, 2018.
  • Fiore-Gartland, Brittany and Gina Neff. “Communication, Mediation, and Expectations of Data: Data Valences Across Health and Wellness Communities.” International Journal of Communication, vol. 9, 2015, pp. 1466–1484.
  • Floridi, Luciano. The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford UP, 2014.
  • Gasché, Rodolphe. “Is Determinant Judgment Really a Judgment?” Washington University Jurisprudence Review, vol. 6, no. 1, 2013, pp. 99–120.
  • Gines, Kathryn T. Hannah Arendt and the Negro Question. Indiana UP, 2014.
  • Golumbia, David. The Cultural Logic of Computation. Harvard UP, 2009.
  • Haggerty, Kevin D. and Richard V. Ericson. “The Surveillant Assemblage.” The British Journal of Sociology, vol. 51, no. 4, 2000, pp. 605–622.
  • Hay, Colin. “Political Ontology.” The Oxford Handbook of Political Science, edited by Robert E. Goodin and Charles Tilly, Oxford UP, 2006, pp. 460–478.
  • Honig, Bonnie, editor. Feminist Interpretations of Hannah Arendt. Penn State UP, 1995.
  • “Information.” Oxford English Dictionary Online, Oxford UP, 2022. Accessed 26 Aug. 2022.
  • Kitchin, Rob. “Big Data, New Epistemologies and Paradigm Shifts.” Big Data & Society, Apr.-Jun. 2014, pp. 1–12.
  • Kristeva, Julia. Hannah Arendt. Translated by Ross Guberman, Columbia UP, 2001.
  • Kuehn, Kathleen M. and Leon A. Salter. “Assessing Digital Threats to Democracy, and Workable Solutions: A Review of the Recent Literature.” International Journal of Communication, vol. 14, 2020, pp. 2589–2610.
  • Lyotard, Jean-François. The Postmodern Condition: A Report on Knowledge. Translated by Geoff Bennington and Brian Massumi, U of Minnesota P, 1984.
  • Manovich, Lev. “Database as Symbolic Form.” Convergence: The International Journal of Research into New Media Technologies, vol. 5, no. 2, Jun. 1999, pp. 80–99.
  • ———. “Trending: The Promises and the Challenges of Big Social Data.” Debates in the Digital Humanities, edited by Matthew K. Gold, U of Minnesota P, 2012, pp. 460–475.
  • Mantello, Peter. “The Machine That Ate Bad People: The Ontopolitics of the Precrime Assemblage.” Big Data & Society, Jul.-Dec. 2016, pp. 1–11.
  • Mayer-Schönberger, Viktor and Kenneth Cukier. Big Data: A Revolution That Will Transform How We Live, Work, and Think. Houghton Mifflin Harcourt, 2013.
  • Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York UP, 2018.
  • Nopper, Tamara K. “Digital Character in ‘The Scored Society’: FICO, Social Networks, and Competing Measurements of Creditworthiness.” Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life, edited by Ruha Benjamin, Duke UP, 2019, pp. 170–187.
  • O’Byrne, Anne. Natality and Finitude. Indiana UP, 2010.
  • O’Neil, Cathy. Weapons of Math Destruction. Crown Publishing, 2016.
  • Ophir, Yotam, et al. “A Collaborative Way of Knowing: Bridging Computational Communication Research and Grounded Theory Ethnography.” Journal of Communication, vol. 70, no. 3, June 2020, pp. 447–472.
  • Otterlo, Martijn van. “A Machine Learning View on Profiling.” Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology, edited by Mireille Hildebrandt and Katja de Vries, Routledge, 2013, pp. 41–64.
  • Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard UP, 2015.
  • Pasquinelli, Matteo. “Metadata Society.” Posthuman Glossary, edited by Rosi Braidotti and Maria Hlavajova, Bloomsbury, 2018, pp. 253–256.
  • Prinsloo, Paul. “Fleeing from Frankenstein’s Monster and Meeting Kafka on the Way: Algorithmic Decision-Making in Higher Education.” E-Learning and Digital Media, vol. 14, no. 3, Oct. 2017, pp. 138–163.
  • Re, Chris. “Software 2.0: Machine Learning Is Changing Software.” HAI Weekly Seminar Series, Stanford University, 27 Jan. 2021, https://hai.stanford.edu/events/hai-weekly-seminar-chris-re. Virtual lecture. Accessed 26 Aug. 2022.
  • Rouvroy, Antoinette. “Governing without Norms: Algorithmic Governmentality.” Psychoanalytical Notebooks, no. 32, 2018, pp. 99–120, https://www.scribd.com/document/398573876/Antoinette-Rouvroy-Governing-without-norms-algorithmic-governmentality.
  • Rouvroy, Antoinette and Thomas Berns. “Algorithmic Governmentality and Prospects of Emancipation: Disparateness as a Precondition for Individuation through Relationships?” Translated by Elizabeth Libbrecht, Réseaux, vol. 177, no. 1, 2013, pp. 163–196.
  • Ruppert, Evelyn, et al. “Data Politics.” Big Data & Society, Jul.-Dec. 2017, pp. 1–7.
  • Schrock, Andrew and Gwen Shaffer. “Data Ideologies of an Interested Public: A Study of Grassroots Open Government Data Intermediaries.” Big Data & Society, Jan.-Jun., 2017, pp. 1–10.
  • Shannon, Claude E. and Warren Weaver. The Mathematical Theory of Communication. 1949. U of Illinois P, 1998.
  • Sjöholm, Cecilia. Doing Aesthetics with Arendt. Columbia UP, 2015.
  • Stiegler, Bernard. The Age of Disruption: Technology and Madness in Computational Capitalism. Translated by Daniel Ross, Polity Press, 2019.
  • Stone-Mediatore, Shari. Reading Across Borders: Storytelling and Knowledges of Resistance. Palgrave Macmillan, 2003.
  • Thornham, Helen and Edgar Gómez Cruz. “Hackathons, Data and Discourse: Convolutions of the Data (Logical).” Big Data & Society, Jul.-Dec. 2016, pp. 1–11.
  • Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman and Company, 1976.
  • Zerilli, Linda M. G. A Democratic Theory of Judgment. Chicago UP, 2016.
  • ———. Feminism and the Abyss of Freedom. Chicago UP, 2005.
  • Ziewitz, Malte. “Governing Algorithms: Myth, Mess, and Methods.” Science, Technology, & Human Values, vol. 41, no. 1, Jan. 2016, pp. 3–16.
  • Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Public Affairs, 2019.