The Task of the Digital Translator

What I want to look into in this essay is intermodal translation and what happens when it is undertaken by computers. Digital translation software allows us to “translate” from any medium to any other by specifying parameters according to which variables in one medium correspond to variables in another. For example, optical character recognition programs learn to translate shapes (of typed letters) into signs (for letters of the alphabet). In most applications of digital translation, the program seeks a maximum degree of transparency between the two media; that is, the human-computer interface tries to recreate analog experience in all of its original richness without adding any “noise” along the way.

Most applications of digital translation efface their own translation activity. This is the mark of a well-functioning translation: the OCR gives us a perfect rendering of the text; the translation from Urdu to Serbo-Croatian is accomplished with neither loss nor addition in meaning; and when I say to the walls of my smart house, “Bring me a vodka martini with a good-sized twist and a little more vermouth than usual,” my smart house uses its voice-to-cocktails platform to achieve exactly what I want.

My examples are increasingly unlikely, because I do not believe in the possibility or desirability of transparent translations. In fact, translation interfaces that pretend not to alter the substance of the original are not transparent but opaque. They are opaque because they attempt to efface their own activity. Our culture is rife with examples of normative translations that efface the materiality of difference: from the assumption that any computer user can download (i.e., translate) files that require proprietary software, to the belief that minority language varieties such as Québécois French or Black English can be translated into “standard” French or English without losing their specific meaning. The examples of digital translation that I find more interesting are those in which the translator (in good modernist fashion) draws attention to the opacity of the interface.

This is where art comes in, for it is often artists who intervene in the smooth functioning of a given instrumental technology in order to see how that technology works. Many interactive-tech artists are developing or adapting translation platforms; the artists I discuss here use The Very Nervous System, a platform developed by Toronto artist David Rokeby, to make translations from, for example, image to sound, movement to temperature shift, or image to language. Artists who use this application tend to make the translation variables significantly idiosyncratic, so the correspondence between original and translation is not always self-evident.

In the act of translation, what is translated is not simply information but, primarily, the relationship between language and substance. Artificial intelligence theorist Douglas Hofstadter argues that information in any modality can be expressed in any other modality (from French to Korean, from color to density, etc.). This faith in the ability of translation to retain information ignores the most interesting qualities of the original, qualities that exceed mere information. It is a poor original that can undergo translation with no loss of expression: a computer manual, perhaps.

Most forms of translation retain some of the opacity of the original by pointing to what was lost in the translation process. I would characterize such translations as analog, in that they create an analogy to the relationship between language and substance that existed in the original. Here, when I talk about translation I mean not translation between languages but between modalities: from image to sound, image to words, sound to movement, and so on. I would argue that most translations between modalities are also analogues. Think of Kandinsky saying that yellow “ascended” in the same way as the sharp notes of a trumpet ascend, causing pain to the perceiver, or that “in the depths of the organ you ‘see’ the depths of blue.” When Kandinsky painted forms and colors that corresponded to musical qualities, he was making an analogy in one sense modality to an experience he had in another. I would argue that such analogues are possible precisely because each medium (language, painting, color) is irreducible to any other. Each refers to an apparently preceding or underlying state, of which it is only the instance in that particular modality. You might demur that this is an essentialist argument, and yes, it is: it holds that each modality is the manifestation of an essence. However, each modality has its own unique way of expressing that essence.

Certainly one can argue that translation strips away the uniqueness of a particular sensory modality, not to mention a particular medium. But research in neuroscience suggests that the brain itself often works by a similar process, “translating” multisensory perception into neural patterns that can be interpreted by any number of sense centers in the brain. Barry E. Stein and M. Alex Meredith show that “the same neuron can be involved in multiple circuits and have multiple roles” depending on stimuli and the state of the organism. Such multi-sensory neurons are capable of maintaining the unique character of sensory inputs without disrupting other fields; they scale sensory information in individual modalities as well as integrate the inputs from these modalities to determine their overall effect. Stein and Meredith posit a “unity of the senses,” not only ontogenetically but phylogenetically: sense modalities are thought to have evolved from a primordial “supramodal” system, in which chemical, thermal, mechanical, radiant, and other sensory stimuli are all responded to similarly, based on their intensity. This neural translation capability helps explain synaesthesia, the ability to experience in one sense modality information that was received through another: “tasting pink,” for example. Translation between modalities is something at which our brains are expert.

How does this process change in digital translation? What distinguishes digital translation is that its object must first be translated into a very particular form of information, namely a string of 1s and 0s. These values are then translated again into another modality. Optical character recognition programs translate the shape “h” into a matrix of 1s and 0s, which is in turn translated into the character “h.” There is necessarily some loss in this process, simply because a perfect rendering of the original would take up an infinite amount of memory, a sort of digital version of Borges’s map. For example, to render color, digital imaging software translates the infinite range of colors in the real world into a set of calculations based on the intensity of the RGB wavelengths per pixel. These in turn are rendered on our computer screens in a limited palette of 256 or “thousands of” colors.
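The loss can be made concrete with a small sketch (in Python; the function names and the single-channel simplification are my own illustration, not part of any actual imaging package) of what quantizing a continuous color channel to 256 levels discards:

```python
def quantize_channel(intensity: float, levels: int = 256) -> int:
    """Map a continuous intensity in [0, 1] onto one of `levels` discrete codes."""
    return min(int(intensity * levels), levels - 1)

def dequantize_channel(code: int, levels: int = 256) -> float:
    """Map a discrete code back to a representative intensity."""
    return (code + 0.5) / levels

# A "real-world" intensity and its digital round trip:
original = 0.123456789
code = quantize_channel(original)    # one of only 256 possible values
restored = dequantize_channel(code)  # 0.123046875, not the original
loss = abs(original - restored)      # the residue the program treats as noise
```

Whatever falls between two adjacent codes is simply gone; no inverse function can recover it.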

A good analog translation retains the original modality’s sense of, or relationship to, its medium: color’s sense of yellow, sound’s sense of a trumpet note. Analog translation is not a question of information but of expression. What is lost in digital translation is everything that cannot be considered information, at least not according to the terms of the translation program. In the terms of the program, what is lost is noise, or excess.

Very well: digital translation renders analog substance into digital information. As Walter Benjamin wrote in the essay to which my title here pays homage, the task of the translator is to retain the traces of the original language in the translation. If we want to retain a trace of the “aura” of the original language (the value of this is often debated by theorists of indexicality in digital media), then we would need to find a way to create an existential link between the two modalities or languages. Can such an existential link exist when each language is reducible to neutral arrays of information?
Clearly the answer is no. Once the color indigo, for example, is rendered as a string of numbers, the string of numbers has no essential connection to the color. It is just as easy to translate these numbers into a saxophone note, the speed of a fan, or any other programmable modality, as it is to translate them back into indigo.
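This arbitrariness can be sketched in a few lines (Python; all three mappings are invented for illustration, not drawn from any actual platform): once indigo is stored as a code, the code drives any output equally well.

```python
INDIGO_CODE = 75  # say, a stored 8-bit value standing in for indigo

def to_color(code: int) -> float:
    """Translate the code back into a color intensity in [0, 1]."""
    return code / 255

def to_saxophone_hz(code: int) -> float:
    """Or, just as easily, into a pitch between 200 and 800 Hz."""
    return 200 + (code / 255) * 600

def to_fan_rpm(code: int) -> int:
    """Or into the speed of a fan, between 0 and 3000 RPM."""
    return round((code / 255) * 3000)
```

Nothing in the number 75 prefers one of these translations over the others; any “existential link” must be supplied from outside.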

***

These examples should demonstrate not only the arbitrariness of digital translation but also its exciting possibilities. It is here that I want to turn to some artists who take advantage of digital translation programs to create works and environments that are virtually synaesthetic. I would suggest that these works, though they lack the aura of the analog translation, have a sort of super-added aura by virtue of the highly idiosyncratic ways the artists manipulate the translation programs. I call these works “artists’ AI,” because they take the assumptions of AI and intervene in the smooth translation that AI wants to achieve.

The Very Nervous System is a platform that translates an image into a database of information. The user can then translate that information by mapping variables in the image to variables in another modality. In their ongoing project Fleabotics, Willy Le Maitre and Eric Rosenzweig create live interactive dramas by videotaping lively bits of garbage and allowing them to generate their own soundtrack via The Very Nervous System. They animate the live image in various ways, shoot video of the activity, use VNS to build a database corresponding to movement in the image, and then use MAX software to do live programming of sound based on the information derived from the image. Visual rhythms and intensities generate audible rhythms and pitches that seem to be emitted by the objects themselves. The artists also use the same data to power fans that blow the trash around, creating a feedback loop that lets Fleabotics continue to produce itself.
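The general technique can be sketched as follows (a minimal Python approximation of a motion-to-sound mapping; VNS and MAX work very differently in detail, and every name and scaling choice here is my own assumption):

```python
def motion_intensity(prev_frame, next_frame):
    """Mean absolute pixel difference between two grayscale frames
    (each a flat list of 0-255 brightness values)."""
    diffs = [abs(a - b) for a, b in zip(prev_frame, next_frame)]
    return sum(diffs) / len(diffs)

def intensity_to_pitch(intensity, low=36, high=96):
    """Map motion intensity (0-255) onto a MIDI-style note number:
    the busier the image, the higher the pitch."""
    return low + round(min(intensity, 255) / 255 * (high - low))

still = [10, 10, 10, 10]
stirred = [10, 60, 110, 10]  # some bits of garbage have moved
note = intensity_to_pitch(motion_intensity(still, stirred))
```

The idiosyncrasy lies entirely in the mapping: nothing obliges busier motion to mean higher pitch rather than, say, a faster fan.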

By creating a correspondence between the image and other sense data, The Very Nervous System takes advantage of our cognitive tendency to assimilate perceived information into a meaningful whole. For example, the first time I saw Fleabotics I was sure I perceived buzzing insects pollinating flowers in a mass of sound-generating garbage. The artists characterize Fleabotics as a “nonintentional drama,” and indeed I would suggest it has all the characteristics of melodrama except for a preconceived plot. In Fleabotics, everything is expressive excess: color, movement, the sound environment that seems to proceed from the “characters.” It is impossible not to empathize with the little bits of trash, or at least to project narratives on them. Le Maitre and Rosenzweig tell, for example, of a viewer who watched several minutes of Fleabotics and said, “I like it very much, but why did you make the woman in the last scene Asian?” Their recent project The Appearance Machine is an elaborate drama where Fleabotics uses feedback to generate its own ongoing narratives within a Rube Goldberg-type apparatus.

David Rokeby, inventor of the VNS platform, has made a number of pieces that exploit its ability to create intermodal correspondences, such as generating sounds from the movement of people in the gallery. His ambitious and beautifully absurd project, The Giver of Names, as its title suggests, is a program that generates names, sentences, and eventually entire narratives from objects displayed to a VNS-equipped camera. Here, there are three levels of translation involved: from the object (or more precisely, an image of the object, not at all the same thing) to a set of data; from the data to an activation of a database of words and grammars; and from the activated database to a set of words, ideas, and sentences.

All of these stages of translation are fascinating, especially when we imagine what a “bad” transparent translation would try to do when presented with, say, a half-eaten pear placed on top of a love letter. However, I will concentrate on the second stage: the activation of a database. The Giver of Names takes the visual information it has gathered (basic shapes, colors, texture of the image) and tries to find resonances for these in its vocabulary, which includes, for example, 200 words for colors and associations with these colors in different languages and cultures. It might also include, for example, the entire text of Jeanette Winterson’s 1985 novel Oranges Are Not the Only Fruit. All of this is built on top of a hierarchical classification of words taken from WordNet, a lexicon created at Princeton with funding from DARPA (with lots of words for weapon parts but few for fruit). It then takes all these cues and attempts to build sentences from them. You can see that what The Giver of Names comes up with will rely heavily on the information the programmer chooses to feed it.
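A toy version of this second stage might look like the following (Python; the association table and sentence template are entirely invented, and the actual lexicon of The Giver of Names, built on WordNet, is vastly larger and more structured):

```python
import random

# An invented, miniature association database.
ASSOCIATIONS = {
    "yellow": ["sunlight", "jealousy", "ripeness"],
    "round":  ["wholeness", "a coin", "the moon"],
    "rough":  ["bark", "grief", "sandpaper"],
}

def activate(features):
    """Collect every association that resonates with the observed features."""
    cues = []
    for feature in features:
        cues.extend(ASSOCIATIONS.get(feature, []))
    return cues

def compose(features, rng=random):
    """Build a naive sentence from whichever cues the features activated."""
    cues = activate(features)
    if not cues:
        return "It resists naming."
    return f"It reminds me of {rng.choice(cues)}."
```

Everything the program can “say” about the half-eaten pear is already latent in the table it was fed; the observation only selects among the programmer’s choices.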

Rokeby notes that The Giver of Names exhibited some interesting emergent characteristics. In an early version, it was programmed to generate new sentences based on the database of texts and grammars fed to it. When a Spanish-language version of The Giver of Names was fed Cervantes’s Don Quixote and a large associative database, Rokeby found that if left alone for a few hours it “began to obsess on its possessions,” beginning every sentence with mis, or “my.” Finally it began repeating over and over, with minor variations, a sentence that began “mis pecados,” or “my sins.”

This, I would argue, is the kind of AI we need! Not an AI that mindlessly does what we tell it to, but one that behaves as a quirky mirror for the human obsessions and predilections that, after all, underlie all of its thought processes.

If an analog quality is retained in digital translation, it is because the “hand of the artist” intervenes to rebuild a connection. The military and commercial translation platforms from which these projects derive are opaque while pretending to be transparent. They would like to argue that nothing is lost in translation, though in the process all sorts of minority and material meanings are effaced in the interest of “transparency.” In contrast, these artists’ applications of the platforms add opacity in order to make the interface transparent. We can see the interface thinking, struggling, and coming up with solutions based on the variables and correspondences it has learned. The caprice and randomness characteristic of analog media, which are lost in digital codification, can be restored by the programmer in the form of idiosyncratic translation variables. This is artificial intelligence, capable of creating meanings of its own; but in the opacity, obsession, and downright weirdness of the translation process, we see a mirror of the human thought process.

References:
David Rokeby, The Very Nervous System and The Giver of Names, http://www.davidrokeby.com/gon.html
Willy Le Maitre and Eric Rosenzweig, Fleabotics and The Appearance Machine, in The End of Cinema as We Know It: American Film in the Nineties, ed. Jon Lewis