When light enters the lens of my Olympus digital SLR (DSLR), the mirror and pentaprism divert it to the viewfinder; only when I press the shutter does the mirror flip up and the light hit the camera’s Live MOS sensor. This metal–oxide–semiconductor sensor (peculiar to Olympus, Leica and Panasonic cameras) is a device that converts an optical image to an electrical signal – light becomes data. That process, however, involves software as well as hardware.
The photodiodes in the sensor, which convert light into electricity, do not ‘see’ colour. They only register shades of grey. The design of the sensor includes red, green and blue (RGB) filters arranged over each photodiode according to a particular ‘colour filter array’ (CFA), so that each photodiode records the intensity of light passing through a single coloured filter. Within my camera are two imaging apparatuses, one RAW and one jpeg. In the first, this data can be written to the quaintly named memory card as a RAW file – in the case of Olympus, an ORF file. Often referred to as a ‘digital negative’, this data needs processing to render an image. Image processing software, in the camera or on the desktop, can take that data and, based on an imager’s choices, render it as an image with particular contrast, colour balance and so on. Most cameras do not output all the data from the sensor: some is lost, and some – a header, details of the sensor, the CFA and so on – is added to enable the processing software to work. But the raw ‘file’ is the nearest we have to the output from the sensor. And, as we will see, that ‘image’ is unvisible.
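To make that concrete: a minimal sketch in Python, with invented sample values and assuming a common RGGB Bayer pattern (the specifics of the Live MOS sensor and the ORF format are proprietary and more involved), of how a CFA yields a single-channel mosaic of grey values:

```python
import numpy as np

# A 4x4 patch of sensor data: every photodiode records only a
# grey-level intensity (here invented 12-bit values, a common raw depth).
mosaic = np.array([
    [2710,  812, 2650,  790],
    [ 640, 1980,  655, 2005],
    [2698,  820, 2731,  805],
    [ 630, 1990,  648, 2012],
], dtype=np.uint16)

# The RGGB Bayer pattern is one common colour filter array: which
# colour each intensity 'means' is metadata, not part of the sample.
cfa = np.array([
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
])

# A raw file is essentially this mosaic plus a header describing the
# sensor and the CFA, so later software can reconstruct colour.
for colour in "RGB":
    print(colour, mosaic[cfa == colour])
```

The ‘colour’ lives in the metadata, not in the samples; strip the CFA description away and all that remains is a grid of grey intensities.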
There is a second imaging apparatus within the camera, of which jpeg is a part. Like all digital cameras, my Olympus has an “image processing engine”: software that gathers the luminance and chrominance information from the individual pixels and uses a demosaicing algorithm to compute/interpolate colour and brightness values for each pixel on the basis of its neighbours. Further in-camera software engages in noise reduction, image scaling, gamma correction, image enhancement, colour space conversion and chroma subsampling. This process is engineered to deliver information that matches, as closely as possible, what the engineers have decided is ‘quality’ information – the best flesh tones, the right degree of sharpness and contrast range. Here data is enfolded into human judgements and aesthetics. There is no one objective image processing decision. This is a difference engine. And then of course, as part of what is known as the imaging (and I would argue imagining) pipeline, there is compression: the crunching of that data ready to be saved to the memory card. Here the jpeg protocol, enfolded within the “image processing engine”, compresses the data, writes information into the data stream and creates a jpeg/jfif file ready to be written to the card. And that file is visible to the imaging industry, the scopic culture and the scopic regime.
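Demosaicing is the first of those computations. A naive bilinear version in Python might look as follows – a toy, assuming the RGGB pattern sketched above, where production engines use proprietary, edge-aware algorithms:

```python
import numpy as np

def convolve2d(img, k):
    """Tiny zero-padded 3x3 convolution; enough for this sketch."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    pad = np.pad(img.astype(float), 1)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return out

def bilinear_demosaic(mosaic):
    """Naive bilinear demosaic of an RGGB Bayer mosaic: each missing
    colour value is a weighted average of sampled neighbours."""
    h, w = mosaic.shape
    masks = np.zeros((3, h, w))
    masks[0, 0::2, 0::2] = 1  # red sites
    masks[1, 0::2, 1::2] = 1  # green sites (even rows)
    masks[1, 1::2, 0::2] = 1  # green sites (odd rows)
    masks[2, 1::2, 1::2] = 1  # blue sites
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        rgb[..., c] = (convolve2d(mosaic * masks[c], kernel)
                       / convolve2d(masks[c], kernel))
    return rgb

mosaic = np.random.default_rng(0).integers(0, 4096, (6, 6))  # fake 12-bit data
print(bilinear_demosaic(mosaic).shape)  # (6, 6, 3): colour computed from grey
```

Even this toy makes judgements: the kernel weights decide which neighbours matter, and by how much. That is the sense in which there is no one objective image processing decision.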
I ‘took a photo’ of “2012”. It doesn’t matter what it was of; like one of Humpty Dumpty’s words, “2012” means whatever I choose it to mean. Whatever tags, titles, descriptions and geolocations I add to make my image searchable, whatever archives I add it to, position it as a 2012 image/imagining.
I set my DSLR to RAW/JPEG. On the memory card I will have two ‘files’. It may look as though two images were taken simultaneously, but actually they are one set of information from the sensor, saved once as raw data and saved again after that information has been passed through the “image processing engine”, including the jpeg protocol.
I press the button. Protocol does the rest.
I have two files on the memory card. On the camera’s screen I can see the image. The camera’s software decodes the data and renders it as an image. Even if I had set the camera to shoot only RAW, the software would have been able to render an image, just as the RAW plugin for Photoshop can take that data, pass it through an image processing engine (one which allows me, as the imager, to specify noise reduction, gamma, chroma etc.) and present an image. My imag(in)ing is visible on the small screen on the back of the camera, just as it is visible (as two different images – one using the full resolution and information, one compressed) in Photoshop, Lightroom or iPhoto. All these desktop software environments have image processing engines that can decode the raw data and render it as an image/imagining.
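A sketch of that desktop decoding, assuming the third-party rawpy library (a wrapper around LibRaw, not Olympus’s own engine) and imageio for writing the result; each keyword argument stands in for a choice the imager would otherwise delegate:

```python
import rawpy
import imageio.v3 as iio

# Decode the raw sensor data; every parameter below is an imaging
# decision that the in-camera engine would otherwise make for me.
with rawpy.imread("_5182491.ORF") as raw:
    rgb = raw.postprocess(
        use_camera_wb=True,   # accept the camera's idea of colour balance
        gamma=(2.2, 4.5),     # choose a tone curve
        no_auto_bright=True,  # refuse automatic exposure adjustment
        output_bps=8,
    )

# Only after this second engine has run is there an image that
# generic software can display.
iio.imwrite("_5182491_rendered.jpg", rgb)
```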
But if I try to upload my imag(in)ings to Flickr or Facebook, the .ORF file is greyed out. It is ‘unvisible’. It is not invisible. It is there; the software acknowledges its reality but not its presence. It “does not compute…” It cannot ‘see’ it; I cannot ‘see’ it. I cannot share it. I cannot add it to my streams of imag(in)ings, nor can anyone else add it to theirs. Without the work of jpeg (or indeed other protocols that can process the information according to recognisable standards), Facebook and Flickr’s software cannot imagine what I saw.
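The gatekeeping itself is mundane. The actual checks Flickr and Facebook run are their own; a hypothetical allowlist is enough to reproduce the greyed-out file:

```python
from pathlib import Path

# Illustrative only: upload forms typically filter on an allowlist of
# extensions (and MIME types); anything outside it is never offered.
ALLOWED = {".jpg", ".jpeg", ".png", ".gif"}

def visible_to_upload(filename: str) -> bool:
    return Path(filename).suffix.lower() in ALLOWED

print(visible_to_upload("_5182491.JPG"))  # True: selectable
print(visible_to_upload("_5182491.ORF"))  # False: greyed out
```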
If I upload the two files to my web server and visit the URL (www.theinternationale.com/2012imaginings) I get a directory listing of the files: _5182491.JPG and _5182491.ORF. I click on _5182491.JPG and the browser software renders the information. An image appears. I click on _5182491.ORF and all the browser ‘sees’ is a data file, which it offers to download. It is unvisible until other software can act as the imag(in)ing processing engine.
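The browser’s behaviour follows from MIME types: the server maps a file’s extension to a Content-Type header, and the browser renders what it recognises. Python’s own registry of that mapping shows the asymmetry:

```python
import mimetypes

# JPEG has a place in the shared registry of types; ORF does not.
print(mimetypes.guess_type("_5182491.JPG"))  # ('image/jpeg', None)
print(mimetypes.guess_type("_5182491.ORF"))  # (None, None)
```

Served as image/jpeg, the data is rendered; served as an unknown type, it is only offered for download.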
What separates these two data files is more than the data lost when the dot jpeg was compressed from the dot orf data (itself compressed from the raw output of the sensor). The difference is that the former has been rendered visible by the operations of the jpeg protocol built into the imag(in)ing processing engine, the same protocol (in its decoding form) built into Facebook and Flickr’s software and my web browser. But that protocol is not ‘in’ the dot jpeg file any more than it is ‘in’ the dot orf file. It has done its work and has withdrawn from view. It too is unvisible.
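That loss can be registered, if not reversed. A rough sketch using the Pillow library (assuming any JPEG on disk; here, the file rendered earlier): pass it through the protocol once more at a typical quality setting, decode it, and compare:

```python
import numpy as np
from PIL import Image

original = np.asarray(Image.open("_5182491_rendered.jpg").convert("RGB"))

# One more pass through the jpeg protocol at a typical quality setting.
Image.fromarray(original).save("recompressed.jpg", quality=75)
recompressed = np.asarray(Image.open("recompressed.jpg").convert("RGB"))

# The difference is the data the protocol has quietly discarded.
diff = np.abs(original.astype(int) - recompressed.astype(int))
print("mean per-pixel error:", diff.mean())
print("fraction of pixels changed:", (diff > 0).mean())
```

The numbers register the protocol’s work; the file itself carries no trace of the protocol that made it.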