By Jamie Macdonald
31 July 2015
Data, and the algorithms which define their usage, manipulation, and categorization, are everywhere in the 21st century: Kitchin and Dodge capture the omnipresence of the algorithm when they propose that software-driven entities ‘actively shape people’s daily interactions and transactions, and mediate all manner of practices in entertainment, communication, and mobilities’ (2011, p.9). Despite the considerable influence such programs have on society, politics, and culture, there has been relatively little analysis in the social sciences and the humanities of the algorithms which underpin these pieces of software (Beer, 2013, p.68). The first forays into the study of algorithms have been made by the field of Software Studies, as launched by Lev Manovich in The Language of New Media (2001) with the proposal that
To understand the logic of new media, we need to turn to computer science. It is there that we may expect to find the new terms, categories, and operations that characterize media that became programmable. From media studies, we move to something that can be called “software studies”—from media theory to software theory. (p.48)
Manovich clarified this definition in his 2013 book Software Takes Command by proposing that ‘Software Studies has to investigate the role of software in contemporary culture, and the cultural and social forces that are shaping the development of software itself’ (2013a, p.10). The need for the study of the ‘lives’ of algorithms more specifically was proposed by Kitchin and Dodge in their 2011 book Code/Space: they call for an area of study ‘that carefully unpicks the ways in which algorithms are products of knowledge about the world and how they produce knowledge that is then applied, altering the world in a recursive fashion’ (p.248).
Much of the work on algorithms to date has focused on the influence such code has had on modern culture. For instance, one common subject of analysis has been a competition the popular film and television streaming service Netflix ran between 2006 and 2009 (Beer, 2013, p.63; Hallinan and Striphas, 2014). Netflix’s recommendation algorithm suggests content to users that they might like to watch, and the competition invited entrants to improve the accuracy of this algorithm’s predictions by 10%. Beer argues that this recommendation algorithm reveals that the prediction of taste can be represented by a numerical value (2013, p.64). He then uses the Netflix competition to propose that ‘contemporary popular culture is being defined and shaped by these underlying collections of algorithms’ (2013, p.64). Hallinan and Striphas concur, arguing that despite its status as a competition ‘the Netflix Prize […] was equally an effort to reinterpret what culture is—how it is evaluated, by whom, and to what ends’ (2014, p.3); they see the contest as an example of the development of ‘algorithmic culture’, a concept Striphas defines as ‘the enfolding of human thought, conduct, organization and expression into the logic of big data and large scale computation, a move that alters how the category culture has long been practiced, experienced and understood’ (2015, p.396).
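Beer’s observation that taste prediction reduces to a numerical value can be made concrete with a toy collaborative filter, the family of techniques on which recommendation systems like Netflix’s are built. The users, films, and ratings below are invented for illustration; production systems operate on vastly larger and richer data:

```python
import math

# Invented user -> {film: rating} data; real systems hold millions of such rows.
ratings = {
    "ann": {"Alien": 5, "Heat": 3},
    "bob": {"Alien": 4, "Heat": 3, "Up": 2},
    "cat": {"Alien": 1, "Heat": 2, "Up": 5},
}

def cosine(a, b):
    """Cosine similarity over the films two users have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[f] * b[f] for f in shared)
    norm = (math.sqrt(sum(a[f] ** 2 for f in shared))
            * math.sqrt(sum(b[f] ** 2 for f in shared)))
    return dot / norm

def predict(user, film):
    """Predicted rating: a similarity-weighted average of other users' ratings."""
    pairs = [(cosine(ratings[user], ratings[other]), others[film])
             for other, others in ratings.items()
             if other != user and film in others]
    total = sum(sim for sim, _ in pairs)
    return sum(sim * rating for sim, rating in pairs) / total

# Ann's predicted taste for "Up" comes out as just a number, roughly 3.4 of 5.
```

The Netflix Prize asked entrants to reduce the error of exactly this kind of prediction by 10% relative to Netflix’s own system: taste, evaluated as a scalar.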
This re-ordering and re-evaluation of culture by algorithmic processes is a concept which Beer also highlights in relation to Google’s PageRank algorithm, which is used to determine the ordering of web pages displayed during an internet search: he suggests that ‘By making judgements about relevance this algorithm, by prioritising content, is shaping our encounters with information’ (2013, p.66). As organisations have become more familiar with the rules which drive these judgements, they have optimised their sites specifically to appeal to PageRank. Astrid Mager has explored this phenomenon, and suggests that such optimisations can result in what she calls ‘a commercialization of organic search results’ owing to the fact that wealthy organisations are able to optimise their websites more effectively than less wealthy organisations (2012, p.777). Alexander Halavais’ research in Search Engine Society (2009) arrived at similar conclusions; Halavais argues that while one might think searching the web would expose users to a diverse range of content, the way search engine algorithms sort content means that ‘search engines as they exist today represent a largely conservative force, increasing the attention paid to those people, institutions, and ideas that have traditionally held sway’ (p.85).
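The ‘judgements about relevance’ Beer describes are, at bottom, a recursive numerical computation. A minimal sketch of the random-surfer iteration at the core of PageRank, over an invented three-page link graph (real rankings combine many further signals):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Repeatedly redistribute rank along links: the 'random surfer' model."""
    n = len(links)
    rank = {page: 1.0 / n for page in links}
    for _ in range(iterations):
        # Each page keeps a small 'teleport' share, then receives a slice of
        # the rank of every page that links to it.
        new = {page: (1 - damping) / n for page in links}
        for page, outgoing in links.items():
            for target in outgoing:
                new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# Invented graph: both other pages link to "hub", so "hub" ranks highest --
# attention accrues to whatever is already linked to.
links = {"hub": ["a"], "a": ["hub"], "b": ["hub"]}
```

The toy result illustrates Halavais’ point numerically: rank flows towards pages that already receive links, reinforcing the existing distribution of attention.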
While Mager suggests that search algorithms were originally created with the intention of becoming the most technologically advanced solution to users’ search queries, she argues that in recent years ‘the techno-fundamentalist ideology got more and more aligned with and overshadowed by the capitalist ideology’ (2012, p.776). The capitalist logic of search engine providers, she argues, is reflected in the algorithms which are implemented (2012, p.776). Tarleton Gillespie’s analysis of Twitter’s ‘Trends’ feature captures the multifaceted objectives of search engine algorithms, which are ‘designed to take a range of criteria into account so as to serve up results that satisfy, not just the user, but the aims of the provider, their vision of relevance or newsworthiness or public import, and the particular demands of their business model’ (2011). The profit-generating quality of search engines is based in part on their ability to track the browsing habits of individual users online, and to make predictions based upon them. John Cheney-Lippold argues that in order to do so such algorithms assign users a
“new algorithmic identity”, an identity formation that works through mathematical algorithms to infer categories of identity on otherwise anonymous beings. It uses statistical commonality models to determine one’s gender, class, or race in an automatic manner at the same time as it defines the actual meaning of gender, class, or race themselves. Ultimately, it moves the practice of identification into an entirely digital, and thus measurable, plane. (2011, p.165)
Algorithms work only when the data they process is measurable, as in Cheney-Lippold’s identification example. To achieve this goal, aspects of human identity are frequently reduced to data points by algorithms; ‘Software algorithms code people, places and their data in interrelated systems that are then used to profile and drive decision making systems’ (Beer, 2013, p.75). Kitchin and Dodge suggest that this datafication is widespread in the 21st century, and that ‘most people in Western nations are living in a machine-readable and coded world—that is, a world where information is routinely collected, processed, and acted on by software’ (2011, p.10).
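Cheney-Lippold’s ‘statistical commonality models’ can be caricatured as nearest-centroid classification. The categories, features, and figures below are wholly invented; the point is only that the assignment is automatic and purely numerical:

```python
# Invented behavioural profiles: (sports_visits, fashion_visits, news_visits).
# The model both assigns the category and, in effect, defines what it means:
# membership of "category_A" simply is proximity to this centroid.
centroids = {
    "category_A": (8.0, 1.0, 3.0),
    "category_B": (1.0, 7.0, 4.0),
}

def classify(profile):
    """Assign an otherwise anonymous profile to the nearest centroid."""
    def distance(centroid):
        # Squared Euclidean distance between the profile and a centroid.
        return sum((p - c) ** 2 for p, c in zip(profile, centroid))
    return min(centroids, key=lambda name: distance(centroids[name]))
```

Whatever ‘category_A’ is taken to stand for — a gender, a class, a market segment — the label attaches to a user on the basis of measurable behaviour alone, just as Cheney-Lippold describes.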
Such algorithms are often created with the intention of being not only autonomous, but unpredictable; as Manovich notes, ‘many popular software services use machine-learning technology that often results in “black box” solutions (While the software achieves desired results, we don’t know the rules it follows)’ (2013b). These unpredictable rulesets can then redefine agency in our algorithmic society; ‘An algorithm selects and reinforces one ordering at the expense of others. Agency, therefore, is contested in and through algorithms. They affect what can be said and done’ (Mackenzie, 2006, p.44). Scott Lash argues that the agency defined by algorithms arises from their creation of ‘generative rules’, and situates these rules as integral to power structures in the post-hegemonic order (2007, p.71). These restrictions are, according to Lash, ‘compressed and hidden’, and are ‘becoming more and more pervasive in our social and cultural life’ (2007, p.71). Beer reflects on the ways in which algorithms are hidden from sight, proposing that there is a critical consensus surrounding the invisibility of algorithms and the lack of public awareness of their operation (2013, p.70). Gillespie argues in his analysis of algorithmically composed lists that in actuality this invisibility is welcomed by consumers: algorithms curate ‘a list whose legitimacy is based on the presumption that it has not been curated. And we want them to feel that way, even to the point that we are unwilling to ask about the choices and implications of the algorithms we use every day’ (2011). Critics have explored the definition of power relations in the modern world through algorithms, but the parameters of these algorithms remain obfuscated.
The final dominant trend in discussion surrounding the study of algorithms is consideration of the consequences of the operation of algorithms without human influence. Jordan Crandall, for instance, notes that ‘The history of tracking is rooted in the figure of the surveillant […] Yet tracking practices have developed in ways that complicate this centralization of human agency. They have come to rely, increasingly, on algorithmic procedures and automated systems’ (2010, p.69). The consequences of the use of algorithms which are able to act independently of human control are touched on by Kitchin and Dodge during the opening of Code/Space:
Although software is not sentient and conscious, it can exhibit some of the characteristics of being alive. […] The property of being alive is significant because it means code can make things do work in the world in an autonomous fashion—that is, it can receive capta and process information, evaluate situations, make decisions, and, most significant, act without human oversight or authorization. (2011, p.5)
Marc Lenglet characterises algorithms which trade stocks as ‘entities in their own right’ (2011, p.47), Beer challenges the assumption that algorithms are ‘neutral decision-makers’ (2013, p.88), and to Luciana Parisi algorithms are ‘performing entities’ (2013, p.ix). Not only do algorithms react to flows of data autonomously, but as parts of larger systems they interact with this data and change the conditions of the system itself. Lenglet’s analysis of the increasingly algorithmic nature of the stock market illustrates the conditions in which this phenomenon can occur:
As a text, the algorithm is a definitional device that makes the financial world different each time it “decides” to fire an order into the market. When describing the trading pattern it follows and making it fit into the market, algorithms get involved in the shaping of markets: not only because they belong to and co-constitute the marketplace, but also because, in so doing, they open and close possibilities to render the market adequate (or inadequate) to the patterns of action they embody. (2011, p.47)
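The recursive quality Lenglet describes can be sketched in a toy simulation, entirely invented for illustration: a single momentum-following trader whose own orders move the price it observes, so that the pattern it trades on is partly a pattern it creates:

```python
def run_market(price, steps=20, impact=0.5):
    """Simulate a momentum trader whose orders themselves shift the price."""
    history = [price]
    for _ in range(steps):
        # Read the 'pattern': the most recent price movement.
        momentum = history[-1] - history[-2] if len(history) > 1 else 0.0
        order = 1 if momentum >= 0 else -1  # buy into rises, sell into falls
        price += impact * order             # the order itself moves the market
        history.append(price)
    return history
```

Started flat at 100, the simulated price climbs steadily: the algorithm’s first buy creates the upward momentum that justifies every subsequent buy, a crude image of how an algorithm can ‘co-constitute the marketplace’ it acts within.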
This algorithmic influence is not merely the preserve of digital systems, but can also effect change in real-world contexts. Kitchin and Dodge describe the example of computational weather and climate change prediction models, noting that while the models are built upon recursive analysis of global climate systems, these models can then affect the real-world systems they represent through the influence they have on ‘individual and institutional responses to measured and predicted change’ (2011, p.30). Algorithms are characterised in the 21st century not merely as predictable sets of rules implemented to accomplish specific tasks, but as actors within larger systems themselves.
Jamie Macdonald is an Interactive Media student at the University of York. His interests span criticism of literature, cinema, videogames and comic books. He recently completed work on the graphic design of a pedagogical board game, and his MA dissertation was written on the geopolitical relevance of Japanese and American giant monster cinema.
Beer, D. (2013). Popular culture and new media: the politics of circulation. Basingstoke: Palgrave Macmillan.
Cheney-Lippold, J. (2011). A new algorithmic identity: soft biopolitics and the modulation of control. Theory, Culture & Society, 28(06), 164-181.
Crandall, J. (2010). The geospatialization of calculative operations: tracking, sensing and megacities. Theory, Culture & Society, 27(06), 68-90.
Gillespie, T. (2011). Can an algorithm be wrong? twitter trends, the specter of censorship, and our faith in the algorithms around us. [Online] Culture Digitally. Last updated: 19 October 2011. Available at: http://culturedigitally.org/2011/10/can-an-algorithm-be-wrong/ [Accessed 31 July 2015].
Halavais, A. (2009). Search engine society. Cambridge: Polity Press.
Hallinan, B. and Striphas, T. (2014). Recommended for you: the Netflix prize and the production of algorithmic culture. New Media & Society. [Online epub ahead of print]. Available at: http://nms.sagepub.com/content/early/2015/02/02/1461444814538646.full.pdf+html [Accessed 31 July 2015].
Kitchin, R. and Dodge, M. (2011). Code/space: software and everyday life. Cambridge, MA: MIT Press.
Lash, S. (2007). Power after hegemony: cultural studies in mutation? Theory, Culture & Society, 24(03), 55-78.
Lenglet, M. (2011). Conflicting codes and codings: how algorithmic trading is reshaping financial regulation. Theory, Culture & Society, 28(06), 44-66.
Mackenzie, A. (2006). Cutting code: software and sociality. New York, NY: Peter Lang Publishing.
Mager, A. (2012). Algorithmic ideology: how capitalist society shapes search engines. Information, Communication & Society, 15(05), 769-787.
Manovich, L. (2001). The language of new media. Cambridge, MA: MIT Press.
Manovich, L. (2013a). Software takes command. London: Bloomsbury Academic.
Manovich, L. (2013b). The algorithms of our lives. [Online] The Chronicle of Higher Education. Last updated: 16 December 2013. Available at: http://chronicle.com/article/The-Algorithms-of-Our-Lives-/143557/ [Accessed 31 July 2015].
Parisi, L. (2013). Contagious architecture: computation, aesthetics, and space. Cambridge, MA: MIT Press.
Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(04-05), 395–412.