Thoughts on the viewer

The “imag(in)ing apparatus” discussed here, with which the jpeg imag(in)ing apparatus is enfolded, includes the ‘window’, screen and frame (Friedberg 2006) component ‘through’ which the jpeg imag(in)ings appear visible and the RAW imag(in)ings remain unvisible. This window component is perhaps best thought of as a viewer, paralleling the screen on the back of the camera. This viewer component of the apparatus can exist on any networked device. It exists to provide a window/screen/frame/view [each of those words has its own connotations and implications for how the apparatus is understood] into the distributed image space that the apparatus adds to (or fails to add to, in the case of RAW data) and within which it is enfolded.

The ‘viewer’ provides a live view into that space where the RAW-encoded and jpeg-encoded images from the camera are added to the image-network, to be rendered unvisible or visible depending on the alliances within which the objects are enfolded. Just as the screen on the back of the camera provides a view into the light-encoded-as-data-encoded-as-jpeg/JFIF image that the camera’s software has produced a split second after the button is pressed, so the viewer on the worldwide swarm of phone, tablet, computer and other device screens provides a live view into the imaginings-encoded-as-stream that the networks produce.

The ‘viewer’ is encoded as a webpage. Because it is created in HTML rather than as an App programmed in, for example, Cocoa, the same ‘viewer’ works regardless of device. Using the treeserver.js JavaScript library, the ‘viewer’ appears like an iPhone/iPad App when in a viewing alliance with Apple devices, like an Android App when in alliance with a Google device, and like a Web App when in a viewing alliance with desktop or laptop devices.
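A minimal sketch of this kind of device-dependent presentation, written in plain JavaScript rather than via treeserver.js (whose API is not documented here); the user-agent test and the per-device stylesheet names are illustrative assumptions, not the library's actual behaviour.

    // Illustrative sketch only: choose an App-like presentation for the
    // 'viewer' webpage according to the device it finds itself allied with.
    // The stylesheet names and the user-agent test are assumptions, not
    // the actual treeserver.js behaviour.
    function detectDeviceAlliance() {
      var ua = navigator.userAgent;
      if (/iPhone|iPad|iPod/.test(ua)) { return 'ios'; }
      if (/Android/.test(ua)) { return 'android'; }
      return 'web';
    }

    function applyViewerSkin() {
      var link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = 'viewer-' + detectDeviceAlliance() + '.css'; // hypothetical per-device stylesheets
      document.head.appendChild(link);
    }

    document.addEventListener('DOMContentLoaded', applyViewerSkin);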

The ‘viewer’ has three views:

  • A view of the images created by the apparatus – the visible jpeg/JFIF images and the unvisible RAW images
  • A view of the distributed image stream of imaginings added by other scopic apparatuses (on Flickr, etc.)
  • A tool to filter those imaginings according to their (protocol-enabled) metadata, sketched in code below.
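A minimal sketch, in plain JavaScript, of how that third view might filter the stream; the record shape and the metadata field names (camera, format) are assumptions for illustration, standing in for whatever protocol-enabled metadata (EXIF fields, tags, upload times) the stream actually carries.

    // Illustrative sketch: keep only those imaginings whose protocol-enabled
    // metadata matches every chosen criterion. The record shape is assumed.
    var stream = [
      { src: 'img_001.jpg', metadata: { camera: 'iPhone 4', format: 'jpeg' } },
      { src: 'img_002.nef', metadata: { camera: 'D90', format: 'raw' } }
    ];

    function filterImaginings(imaginings, criteria) {
      return imaginings.filter(function (img) {
        return Object.keys(criteria).every(function (key) {
          return img.metadata[key] === criteria[key];
        });
      });
    }

    // Only the jpeg imagining remains; the RAW record drops out of view.
    console.log(filterImaginings(stream, { format: 'jpeg' }));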

An ‘imager’ using the ‘viewer’, whether she is two metres away from the camera component of the imag(in)ing apparatus or 2,000 miles away, can see (or not see) the images the camera takes as soon as the WiFi component of the apparatus has passed the data to the server. She can also step into the stream of imaginings taken by other (jpeg) apparatuses. Finally, she can conduct that stream by choosing criteria to search for, searching only those imaginings rendered visible by protocol.
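A minimal sketch of that live view, assuming a hypothetical server endpoint (/images/latest) that lists newly added images: the ‘viewer’ polls it and renders only the jpeg/JFIF imag(in)ings, while the RAW entries arrive in the list but are never drawn to the screen.

    // Illustrative sketch: poll a hypothetical endpoint for newly added images
    // and render only the jpeg/JFIF imaginings; RAW entries remain unvisible.
    // Assumes the viewer page contains an element with id="stream".
    function renderLatest() {
      fetch('/images/latest')                        // hypothetical endpoint
        .then(function (response) { return response.json(); })
        .then(function (images) {
          images.forEach(function (image) {
            if (image.format !== 'jpeg') { return; } // RAW withdraws from view
            var img = document.createElement('img');
            img.src = image.url;
            document.getElementById('stream').appendChild(img);
          });
        });
    }

    setInterval(renderLatest, 5000); // re-poll every five seconds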

This ‘viewer’ is a component within the “imag(in)ing apparatus”. It is not separate. The viewer is as much a part of the apparatus as the lens, the sensor, the router, the server and the device. The apparatus is about the whole imag(in)ing pipeline, about how the objects within that pipeline form alliances and relations that render some (jpeg) imag(in)ings visible and some (RAW) imag(in)ings unvisible. The viewer component of the apparatus renders visible the traces of protocol, its alliances and translations. Like the monitor in a lab, it makes visible otherwise unvisible processes and objects that withdraw from view.

  • Friedberg, A., 2006. The Virtual Window: From Alberti to Microsoft. MIT Press, Cambridge, MA.