The codec moment: raise the standard high

Outside the technical literature there has been little study of the protocols and standards that run through software and hence through media ecologies, digital assemblages and information spaces. Even within software studies, the emphasis has been on software packages, whether business (Fuller 2003a), games (Galloway 2006, Wardrip-Fruin 2009), artworks (Fuller 2003b, Parikka 2010, Hertz & Parikka 2010) or anomalies such as spam or viruses (Parikka 2007, Parikka & Sampson 2009). In other words, the codecs, standards and protocols on which those packages depend, or which artists and hackers use, have been under-analysed.

Galloway’s discussion (Galloway 2004) of the relationship between TCP/IP, DNS and Deleuze’s ‘control society’ (Deleuze 1992) was the first work to focus specifically on this ‘level’. I will argue that Protocol is undermined by a foundationalist and textualist approach, as evidenced in statements such as: “Protocol is a language that regulates flow, directs netspace, codes relationships, and connects life-forms” (p74), but for now the point is simply to mark it as a turning point in software studies, albeit one that has not been picked up.

In 2006, Adrian Mackenzie sought to develop an “ontology of software” (p5) that included protocols and standards alongside languages and applications, as part of the wider, post-Manovich establishment of software studies/theory. Arguably it is this wider project that undermines a specific protocol studies/theory. For Mackenzie, protocols are an example of code, which is the object of his study and ontology. The specific nature and workings of the protocol-object are not fully addressed.

Within film studies, there has been some discussion of codecs and standards. In 2008, Sean Cubitt approached YouTube through the H.263 codec which underpins the Flash video (.flv) format. Cubitt locates the codec in terms of its alliances with corporate interests (Adobe) and telecoms and non-governmental bodies (ITU, ISO etc) as a way of mapping its presence within global and neo-liberal relations and discourses of the public sphere. He goes as far as to say: “There is no internet without the standardisation of internet protocols” (p46). Cubitt picks up on Adrian Mackenzie’s short account of motion imaging codecs in Fuller’s Lexicon (Mackenzie 2008) where, taking a more constructivist line than Galloway, Mackenzie argues that “codecs structure contemporary media economies and cultures in important ways… [they] catalyze new relations between people, things, spaces, and times in events and forms” (p48). In particular he draws a connection between video codecs’ ‘transform compression’ and ‘motion estimation’, the techniques they use to compress but also to render motion, and a “relational ordering that articulates realities together that previously lay further apart” (p54).
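To give a concrete sense of what ‘motion estimation’ involves, the sketch below implements a minimal block-matching search in Python (assuming numpy): each block of the current frame is matched against nearby positions in the previous frame, so that only a displacement vector and a small residual need be encoded. This is an illustrative toy under those assumptions, not the H.263 scheme itself, and the function and parameter names are my own.

    import numpy as np

    def estimate_motion(prev_frame, curr_frame, block=8, search=4):
        """Toy block-matching motion estimation: for each block of the current
        frame, find the best-matching block nearby in the previous frame by
        minimising the sum of absolute differences (SAD)."""
        h, w = curr_frame.shape  # expects two 2-D greyscale arrays of equal size
        vectors = np.zeros((h // block, w // block, 2), dtype=int)
        for by in range(0, h - block + 1, block):
            for bx in range(0, w - block + 1, block):
                target = curr_frame[by:by + block, bx:bx + block].astype(int)
                best_cost, best_vec = np.inf, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + block > h or x + block > w:
                            continue
                        candidate = prev_frame[y:y + block, x:x + block].astype(int)
                        cost = np.abs(target - candidate).sum()
                        if cost < best_cost:
                            best_cost, best_vec = cost, (dy, dx)
                vectors[by // block, bx // block] = best_vec
        # A real codec would transmit these vectors plus a compressed residual
        # rather than the raw pixels of each frame.
        return vectors

Feeding two consecutive greyscale frames to estimate_motion() returns one displacement vector per block: movement rendered as arithmetic, which is the sense in which the codec both compresses and articulates motion.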

Both writers have continued to raise the standard. Cubitt has drawn connections between colour-space standards and an emergent 3D scopic regime (Cubitt 2010), while Mackenzie discusses the experience of ‘wirelessness’ as articulated in and through the Wi-Fi standard (2010).

So why the dearth of work at this level (sic)? Mackenzie is clearly correct to say that one problem with mapping digital assemblages and ecologies through these objects is that “they are monstrously complicated” (Mackenzie 2008: 48). Another issue might be that to approach the assemblage through software, and in turn through standards, raises the spectre of technological determinism, which despite renewed interest in McLuhan remains a dangerous label. Software studies has shown the importance of the code-assemblage relation; clearly that relation needs to be approached through all forms of code – packages, commands, programming structures and languages as well as standards and protocols (see other papers in (Fuller 2008)). I would argue that approaching these monstrously complicated objects as object-actants allows us to trace their workings without recourse to foundationalism, as well as to rework the idea of technological determinism as a dynamic process (in the way Mackenzie uses radical empiricism to map the flow of wireless experience). This is not an academic exercise: recent discussions of, and battles over, the political position and effects of Wikileaks need to take into account the HTML standard that allows the link, the MySQL database protocols that allow storage and retrieval and of course the TCP/IP and DNS protocols that allow the site(s) to be found or hidden. It is those standards that not only enable Wikileaks to function but also enable Amazon and PayPal to ‘de-visualise’ it, the US Government to trace it and activists to mirror it. Those alliances are en/infolded and en/incoded.

Cubitt, S., 2008, Codecs and Capability, in Lovink & Niederer (eds), Video Vortex Reader: Responses to YouTube, Institute of Network Cultures, Amsterdam.

Cubitt, S., 2010, Making Space, Senses of Cinema (57).

Deleuze, G., 1992, Postscript on the Societies of Control, October, 59, pp. 3-7.

Fuller, M., 2003a, It Looks Like You’re Writing a Letter: Microsoft Word, in Behind the Blip: Essays on the Culture of Software, Autonomedia, Brooklyn, NY, pp. 137-65.

Fuller, M., 2003b, A Means of Mutation: Notes on I/O/D 4: The Web Stalker, in Behind the Blip: Essays on the Culture of Software, Autonomedia, Brooklyn, NY, pp. 51-68.

Fuller, M., 2008, Software Studies: A Lexicon, MIT Press, Cambridge, Mass.

Galloway, A.R., 2004, Protocol: How Control Exists after Decentralization, MIT Press, Cambridge, Mass.; London.

Galloway, A.R., 2006, Gaming: Essays on Algorithmic Culture, University of Minnesota Press, Minneapolis.

Hertz, G. & Parikka, J., 2010, Zombie Media: Circuit Bending Media Archaeology into an Art Method, Vilém Flusser Theory Award 2010.

Mackenzie, A., 2006, Cutting Code: Software and Sociality, Peter Lang, New York.

Mackenzie, A., 2008, Codecs, in Fuller (ed), Software Studies: A Lexicon, MIT Press, Cambridge, Mass., pp. 48-54.

Mackenzie, A., 2010, Wirelessness: Radical Empiricism in Network Cultures, MIT Press, Cambridge, Mass.

Parikka, J., 2007, Digital Contagions: A Media Archaeology of Computer Viruses, Peter Lang, New York.

Parikka, J., 2010, Ethologies of Software Art: What Can a Digital Body of Code Do? in Zepke & O’Sullivan (eds), Deleuze and Contemporary Art, Edinburgh University Press, Edinburgh, pp. 116-32.

Parikka, J. & Sampson, T.D., 2009, The Spam Book: On Viruses, Porn, and Other Anomalies from the Dark Side of Digital Culture, Hampton Press, Cresskill, NJ.

Wardrip-Fruin, N., 2009, Expressive Processing: Digital Fictions, Computer Games, and Software Studies, MIT Press, Cambridge, Mass.

Background and foreground

Adrian Mackenzie begins his discussion of wirelessness (Mackenzie 2010) with ‘Wi-Fi’, noting that the trademark carries instructions as to how the logo should appear. The 45-page brand guidelines (PDF) state that the space around the logo should be equal to three times the width of the (lower-case) ‘i’, which, Mackenzie notes, “seems apt. Wireless networks very much concern the interval between people, or the space around ‘I.’”

Wi-Fi, like jpeg, is a standard emerging from an industry body: in the case of Wi-Fi, the Institute of Electrical and Electronics Engineers; in the case of jpeg, the Joint Photographic Experts Group. Both set the rules for play. Routers and gadget software, telephones and PC software must meet the standards in order to ‘work’. A wireless chip that works to a different standard will still function but will not enable other protocols such as TCP/IP (Galloway 2004) to come into play in Starbucks. A CCD chip or camera software that encodes light falling through the lens as an Olympus RAW (or any other RAW standard) file will still imag(in)e but will not work on Flickr, Facebook or a Google search without the intervention of other code or software to turn it into a usable form.
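As a concrete illustration of that ‘intervention of other code’, a minimal sketch in Python, assuming the third-party rawpy and Pillow libraries (and a hypothetical Olympus file name ‘photo.orf’), decodes the RAW data and re-encodes it as a jpeg that the platforms can use:

    import rawpy                  # third-party RAW decoder (assumption)
    from PIL import Image         # Pillow, used here for jpeg encoding

    # 'photo.orf' stands in for any camera RAW file.
    with rawpy.imread("photo.orf") as raw:
        rgb = raw.postprocess()   # demosaic the sensor data into an 8-bit RGB array

    # Only after this intervention does the image become Flickr/Facebook-ready.
    Image.fromarray(rgb).save("photo.jpg", quality=90)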

What perhaps separates the two standards is their position as brand, their visibility. The Wi-Fi logo and name are ubiquitous, even to the extent of Wi-Fi becoming synonymous with ‘network’ itself: “is there a Wi-Fi network here?”. The Wi-Fi site, a far slicker and more corporate space than that of the Joint Photographic Experts Group, talks of a trade association, trademarks and brands. It says: “The Wi-Fi CERTIFIED logo is today’s best assurance of device interoperability. Businesses seek it. Consumers demand it. And we provide it”. Here a standard is protected, policed and deliberately visible, marked on coffee shop doors, station platforms and domestic equipment. Interestingly, the practice of publicly identifying open wireless networks by a chalk mark on the pavement (warchalking) was also built around a visible sign, a sort of anti- or even no-logo.

Jpeg, on the other hand, is transparent. It not only withdraws from view in an ontological sense, it retreats semiotically. The technical standard, the jpeg protocol, retreats behind the label ‘jpeg’. The language around jpeg is one of images: “this card will hold 5,000 jpegs”, “send me a jpeg” etc. Here the standard, the protocol, the compression is transparent. It is the final image, the outcome of jpeg compression, that is visible.
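That transparency extends to the loss the codec introduces. A minimal sketch, assuming Pillow and numpy and a hypothetical lossless source file ‘frame.png’, shows that a single pass through the jpeg encoder quietly discards information even though the result still reads simply as ‘the image’:

    import numpy as np
    from PIL import Image

    # 'frame.png' stands in for any lossless source image.
    original = Image.open("frame.png").convert("RGB")
    original.save("frame.jpg", quality=75)            # DCT, quantisation, entropy coding
    roundtrip = Image.open("frame.jpg").convert("RGB")

    diff = np.abs(np.asarray(original, dtype=int) - np.asarray(roundtrip, dtype=int))
    print("mean per-pixel error after one save:", diff.mean())  # non-zero: data discarded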

The alliances that form around Wi-Fi are much the same as those around jpeg. The corporate interests represented by the Wi-Fi Alliance include Microsoft, Nokia, Apple and imaging companies such as Nikon – although not what Jarvis has identified as ‘platform’ businesses like Google and Facebook (Jarvis 2009). These companies have a stake in Wi-Fi and in jpeg, but those stakes are articulated in different ways. In object-oriented terms, the alliances are forged slightly differently.

In both cases, Wi-Fi and jpeg need to be seamless. The Wi-Fi logo represents a guarantee of network readiness. Jpeg in the menu or the instruction manual guarantees interoperability, seamless uploading and sharing. In the case of Wi-Fi those alliances are very visible. This is a brand alliance: “our software or gadget will work ‘out of the box’ with the network”. The protocol (even if it is not understood as such, or even at all) must be present as logo if a consumer is to choose my Wi-Fi-ready phone over my competitor’s 3G-only one – I can watch iPlayer over my latte whereas my friend cannot. Wi-Fi is network. It is connection. To choose Wi-Fi ready is to choose networked.

Jpeg, however, should remain invisible, or at least visible only as a suffix, an add-on to the image that is paramount. No-one needs, or indeed wants, to know that the protocol is removing information as it compresses; moreover, the most powerful alliances within which jpeg works are platforms, social businesses where the rags ’n’ refuse of content and their connections are paramount. A consumer does not need to know that the phone ‘shoots jpegs’ (sic), only that it can upload directly to Facebook or Flickr. What matters is not just the ubiquity of the standard but its secondary importance. Google, Facebook, Apple, Nikon et al have no interest in foregrounding jpeg. They want the iPhone, the 14-28 zoom, the search box or the adverts as the focus. While the jpeg standard makes all those technologies, relations and business practices possible, it can remain in the background.

  • Galloway, A.R., 2004, Protocol: How Control Exists after Decentralization, MIT Press, Cambridge, Mass.; London.
  • Jarvis, J., 2009, What Would Google Do? Collins Business, New York.
  • Mackenzie, A., 2010, Wirelessness: Radical Empiricism in Network Cultures, MIT Press, Cambridge, Mass.

What are we looking at?

As well as providing an interesting area between materiality and immateriality, software occupies an evocative space between the static and the dynamic. The forensic examination of protocols, standards and code, from a media-archaeological or technical point of view, reveals objects that are both static and dynamic. They are specific and definite. The pixel, the algorithm, the standard occupy specific positions within software and within an analytical framework. At the same time it is the fact that these objects are actants, in motion and generative, that allows software critiques to map a dynamic media ecology and control society.

Sean Cubitt prefaces his discussion of space in contemporary digital cinema (Cubitt 2010) with an outline of the technologies that illustrates this point of balance. Through a careful outline of the engineering behind screens, CCD chips, codecs and colour spaces, he identifies what he calls the “basic building blocks of digital cinema” which, before he addresses specific films and practices, he connects to scopic regimes and ecologies – the history of the ‘standard observer’, the development of the commodity form and biopolitics. The point here is that for Cubitt digital cinema has to be explored from both ends (although the use of such a spatial metaphor is as problematic as using the metaphor of ‘levels’) – the specific, apparently stable forms of code, standards and protocols as well as their dynamic presence in assemblages and regimes of truth, power, governmentality and culture.

It is not that static, stable standards suddenly become animated when activated by some process or structure. From an object-oriented perspective those objects are always dynamic within actor-networks. The issue is that, like a quantum physicist who needs to hold light in mind as simultaneously particle and wave, we need to avoid collapsing the analysis into either stasis or flux. Rather we need a forensic archaeology of the specifics of software operating at what Deleuze refers to as “different velocities”.

Jussi Parikka, in his outline of software ethologies (2010), draws on Deleuze’s Spinozian account of “a complex relation between differential velocities, between deceleration and acceleration of particles” (Deleuze 1988: 123). For Parikka: “we could say that software is defined by its motion and rest, speeds and slownesses, but also its affects, i.e. its relations with the other bodies involved in its unfolding in time” (Parikka 2010: 124). This is not a matter of how software works – the relative speed of code’s operation – but rather of the way in which the object works and needs to be seen. Cubitt’s codecs and CCD chips, as object-actants, are both in motion and at rest (just as they are material and immaterial). As standards or as physical objects they have a stability – political, economic, technological and ontological. They are defined and occupy a particular location in relation (alliances) to other object-actants. As such they need to be mapped and subjected to archaeological investigation as pieces of engineering, assemblages and apparatuses. At the same time they are dynamic, in constant motion with other object-actants in the development, deployment and marketing of 3D movies, photo-sharing sites and databases. Their histories are being continually remade as alliances are forged and remade in Apple’s boardroom, Google’s labs, London 2012’s ad agency, Pixar or the Home Office.

Cubitt says: “as a community of scholars we are still trying to understand what it is that we are looking at now.” In a mediaspace where the boundaries between moving and still digital imagery, and between film and software, are increasingly fluid, the necessity of holding the static and the dynamic, as well as the material and the immaterial, in hand at the same time becomes even more important.

  • Cubitt, S., 2010, Making Space, Senses of Cinema (57).
  • Deleuze, G., 1988, Spinoza: Practical Philosophy, City Lights Books, San Francisco.
  • Parikka, J., 2010, Ethologies of Software Art: What Can a Digital Body of Code Do? in Zepke & O’Sullivan (eds), Deleuze and Contemporary Art, Edinburgh University Press, Edinburgh, pp. 116-32.

Not just a matter of engineering

In 2006, introducing the Software Studies Workshop at the Piet Zwart Institute in Rotterdam, Matthew Fuller said: “In a sense, all intellectual work is now ‘software study’, in that software provides its media and its context, but there are very few places where the specific nature, the materiality, of software is studied except as a matter of engineering” (cited in Manovich, Software Takes Command).

Software is pervasive. It is the atmosphere within which we work and have our work and social relationships, as well as the context within which those relationships are policed. Our shopping happens within and through software. Our news and media production and consumption, as well as political participation and struggle, happen within and through software, whether as a Smart Mob (Rheingold), as activists (Hands) or as objects of surveillance, tracking or data mining. Our economy, markets and even capital itself work through software; perhaps, in the case of financial instruments, they even exist only as/in software.

This is not just a matter of the well-worn commentary around information and the network society (cf. Lash, Castells, Benkler etc). Clearly these critiques are right to point to the effects of protocol/software power in the development and deployment of new forms of capitalism, social and cultural relations, media ecologies (Fuller) and what can be addressed as actor-networks (as distinct from the technical IT networks). What software studies argues is that to leave the analysis and critique at the level of effects is to miss the forces and players (actants) that set those relations in motion. Studies of specific software packages, programming languages, anomalies and protocols seek to draw attention to code as a material actant ‘doing things’ in the world.

Where object-oriented philosophy can add to that account is in providing a framework for discussing that actant as more than a “matter of engineering”. Software studies accounts have been caught between high-level, often Deleuzian-inspired accounts of assemblages and ecologies and technical, often textualist, deconstructions of code, languages and structures. Both are of course necessary if we are to trace the workings, effects and alliances of Google’s algorithms, the Tesco Clubcard’s, the London Congestion Zone’s or Oyster Card’s databases, or the ambient intimacy of Twitter. But accounting for how those ‘levels’ relate demands an account of code in terms of ontology – an account of the thing.

Technically the code (em)powers software to set in motion control societies (Deleuze) and governmental and scopic relations. But in order to avoid a form of code-determinism, we must see that code (or in my case protocol) as an actant not a determinant. OOP’s great contribution is a flat ontology, a refusal to allow depth models or foundations, an anti-essentialist account that can place code as a black box not as a determinant or foundation but as an active player in alliances and translations, a box that has become accepted and transparent but which can be opened out (rather than up) to explore other code and non-code actants within which it is in/enfolded.

Studying protocol is not looking to position it as base to a code or assemblage superstructure in an engineering or ideological sense. It is to position it within a flat ontology of actants within software, capitalism and media ecologies. Protocol must be looked at at the same ‘level’ as governmental relations, corporate strategies, multinational data-holders, iPads, iOS, Apple, Apple stores and the cult of Apple.

Draft: The software studies problematic

Yesterday I wrote:

“Later, with the birth of software studies, code, algorithms and protocol were elevated as worthy of attention. Software art made them cultural and auratic. Whether they were being deconstructed as ideological or power-full in Fuller’s account of Word or constructed as problematic in Manovich’s identification of the ‘new media object’, they were still within the discourse of the virtual, the immaterial, maybe even the ethereal. Of course this is not to say that those critiques were not concerned with the real and the material. Software studies and software art have a long history of radical critique and intervention; rather, the point is to draw attention to the analytical separation between the material and the immaterial. The focus on software and code was an attempt to uncover a new determinant or player in that material reality.”

@juspar rightly pointed out that this was a bit throwaway, so… some more thoughts:

The term “software studies” was coined by Lev Manovich in The Language of New Media, where he said: “New media calls for a new stage in media theory whose beginnings can be traced back to the revolutionary works of Harold Innis and Marshall McLuhan of the 1950s. To understand the logic of new media we need to turn to computer science. It is there that we may expect to find the new terms, categories and operations that characterise media that became programmable. From media studies, we move to something which can be called software studies; from media theory — to software theory” (p48). He later refined this definition (in his unpublished Software Takes Command), arguing that the original positioned computer science as a kind of absolute truth rather than itself part of culture: “I think that Software Studies has to investigate both the role of software in forming contemporary culture, and cultural, social, and economic forces that are shaping development of software itself” (p5).

Manovich traces software studies after 1991 through three key texts. Noah Wardrip-Fruin and Nick Montfort’s The New Media Reader, he argues, proposed a new model for thinking about software by juxtaposing historical accounts of computing with accounts of new media art (similar to the juxtaposition apparent in Galloway and Fuller’s work). This format, Manovich asserts, “demonstrated that both belonged to the same larger epistemes” (p6). Matthew Fuller’s Behind the Blip: Essays on the Culture of Software continues this theme. It provides an analysis of the power relations inherent in software design and development, refracted through Fuller’s own software art work with I/O/D and Mongrel. Fuller went on to edit a book that not only had software studies as its title but explicitly set out to define an emergent area of concern, even a discipline: Software Studies: A Lexicon. Fuller introduces this first volume in a new MIT series as a “project” (p1) and “software studies” as a conjunction of words (p11).

This emergence of “software studies” as problematic and discipline is framed by those involved in two ways: as the discovery of an overlooked object and as an almost meta-discipline necessary to deal with a new episteme. In the foreword to his lexicon, Fuller says that it “proposes that software can be seen as an object of study and an area of practice for kinds of thinking and areas of work that have not historically ‘owned’ software, or indeed often had much of use to say about it” (p2). This is less the announcement of the birth of a new discipline than the announcement of the discovery of one hidden in the old. The “object of study” has been there all along but has simply been overlooked by media and cultural studies.

Fuller organised the very first Software Studies Workshop at Piet Zwart Institute in Rotterdam. Manovich reports Fuller as saying:

“Software is often a blind spot in the theorization and study of computational and networked digital media. It is the very grounds and ‘stuff’ of media design. In a sense, all intellectual work is now ‘software study’, in that software provides its media and its context, but there are very few places where the specific nature, the materiality, of software is studied except as a matter of engineering” (Software Takes Command, p8).

Here software is seen as so omnipresent and omnipotent that – in the sort of evangelical tones that the Birmingham Centre arguably used to position cultural studies – a society and economy soaked in software (as opposed to media) required a new meta-discipline. Manovich goes on to say:

“I completely agree with Fuller that “all intellectual work is now ‘software study’”. Yet it will take some time before the intellectuals will realize it… Fuller’s statement implies that “software” is a new object of study which should be put on the agenda of existing disciplines and which can be studied using already existing methods – for instance, object-network theory, social semiotics, or media archeology”. However, he argues: “if we are to focus on software itself, we need a new methodology. That is, it helps to practice what one writes about” (p9).

This hardly comes as a surprise from the man whose seminal, formalist account of the “new media object” argued for the specificity of its object and ways of approaching it. (Although one might draw attention to the fact that, as Michael Truscello points out in Film-Philosophy, Manovich’s account of new media positions it in terms of existing approaches: “the visual culture of a computer age is cinematographic in its appearance, digital on the level of its material, and computational (i.e. software driven) in its logic” (p180).)

In his later work Manovich argues that software studies practitioners draw their legitimacy from their “systematic involve[ment] in cultural projects which centrally involve writing of new software” (Software Takes Command, p9). He does not talk about “practice-research” in the sense in which I use it, but the implication is clear. A dialectical relation between practice and theory/analysis is the only way to deal with this particular object of study. He even goes as far as to position other writers on technology such as Zielinski, Castells and Latour as “without this experience” (p9), implying a gap in their CV or even their credibility.

This theme is apparent throughout the founding texts of software studies. Fuller as editor makes clear that “one rule of thumb for the production of this book is that the contributors had to be involved in some way in the production of software as well as being engaged in thinking about it in wider terms” (Lexicon, p10). Wardrip-Fruin and Montfort, in their ‘directions to readers’, say: “understanding new media is almost impossible for those [who] aren’t actively involved in the experience of new media; for deep understanding, actually creating new media projects is essential to grasping their workings and poetics” (pxii). Geert Lovink, in his self-reflexive interview as introduction to Uncanny Networks, lauds “the artists and critics featured in this book [as] working with the technology itself” (p4). Truscello even goes as far as to invoke Gramsci’s idea of the intellectual as one “grounded in the practice of everyday life and not simply an effect of oratory,” arguing that “Manovich embodies this progressive creed”.

This concern for practice appears as a way of grounding software studies, separating it from, or possibly privileging it over, other explorations of digital media and cultural analysis. It also has implications for its account of the object. Software is not just something that is now so powerful and present that it demands explanation and exploration; it is also the means to that end. As such, software studies demands an ontology, a theory of the object that it explores and creates. Unlike cultural studies, which addresses practices and texts, or cyber/digital culture studies, which looks at practices and spaces, software studies has a specific object that it maps and traces across society and power relations. It is Microsoft Word (Fuller), the Perl computer language (Mackenzie), a virus (Parikka) or an interface (Galloway) that allows one to trace the operations of power. And it is the creation of alternative browsers, programmes or interventions within networks by artists and activists that offers one the tools, the space, the insight and the hacker-like credibility to do that mapping. Without an object to analyse, or to create as part of that analysis, software studies would be just another form of cultural critique, divorced from the hacker communities it likens itself to and unable to distinguish itself from textualist and formalist accounts of digital space and culture.

When Galloway approaches protocol in 2004, he carries with him this sense of the overlooked, a belief that software offered a way into a meta-critique of control societies and a concern for practice. His account was also forged in an emerging discourse where the software object offered the key to analysis and indeed change.