Neural Networks vs. Computer-Networked Environments: Cognition and Communication in Digital Art

Similarities between the functional principles of neural networks and computer networks have recently been discussed in fields as varied as new media theory, computer science, neurobiology, cognitive science, and mathematics. On a metaphorical level, these parallels are obvious: the neural networks of our brains allow for the transmission and analysis of multiple kinds of information on a sensory and cognitive level; computer networks, such as the Internet and World Wide Web, equally establish environments of linked nodes that allow for the processing of information. The dream of hypermedia applications (even if it hasn’t quite been fulfilled yet) is to establish webs of associative references that mimic the processes of the brain.

Apart from these obvious parallels between neural and computer networks as linked ‘information processing environments’ in the broadest sense, what exactly are the functional principles that connect the brain and computer networks?

This essay focuses on two broad areas: the analogies between computer and neural networks; and the relevance of neural networks and cognitive science (in particular natural language processing) in the context of Artificial Intelligence and what is known as ‘evolutionary computation.’ I will discuss both of these areas in connection to digital art works that range from projects addressing network structure to projects focusing on artificial life and intelligence.
Small World Networks: Parallels between Computer Networks and the Brain

1. Social Networks

In 1998, Duncan Watts and Steven Strogatz (from Cornell University) published a paper titled “Collective Dynamics of Small-World Networks” in the journal Nature. Their research focused on so-called small-world graphs, diagrams outlining networks – for example, between people – that seem to be neither ordered nor random. [Fig 1] Watts and Strogatz reexamined several now well-known experiments in the realm of social networks. In the 1960s, psychologist Stanley Milgram had discovered the paradigm of ‘6 degrees of separation’ – the finding that all the people on this planet seem to be linked by an average of six connections. In his original experiment, Milgram sent letters to a random selection of people in Nebraska and Kansas and asked each of them to forward the letter to one of Milgram’s friends, a stockbroker in Boston whose address he didn’t give them. The people in Nebraska and Kansas were asked to send the letter only to someone who they thought might be socially closer to the stockbroker. The experiment has since been recreated several times (always with the same results) – for example by the German newspaper Die Zeit, which tried to connect a kebab-shop owner in Frankfurt to his favorite actor, Marlon Brando. What interested Watts and Strogatz in these earlier experiments was the interplay of randomness and control in networks.

Another researcher whose work had been crucial to the idea of randomness in network theory was the Hungarian mathematician Paul Erdös, who authored numerous papers on random graphs during the 1950s and ’60s. Erdös discovered that no matter how many points there might be, a small percentage of randomly placed links between them is always enough to tie them together into a more or less completely connected network, and the percentage of links required dwindles as the network gets bigger. For a network of 300 points, there are nearly 45,000 possible links that could run between them; if no more than about 2 percent of these links are in place, the network will be completely connected. For a network of 1,000 points, the crucial fraction is less than 1 percent.
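
These figures can be checked with a quick back-of-the-envelope calculation. The connectivity threshold used below, $p_c \approx \ln n / n$ from random-graph (Erdős–Rényi) theory, is an assumption added here for illustration rather than a figure stated in the text above:

$$\binom{300}{2} = \frac{300 \cdot 299}{2} = 44{,}850 \approx 45{,}000 \ \text{possible links}, \qquad p_c \approx \frac{\ln 300}{300} \approx 1.9\%$$

$$\binom{1000}{2} = 499{,}500 \ \text{possible links}, \qquad p_c \approx \frac{\ln 1000}{1000} \approx 0.69\%$$
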
Fascinated by Milgram’s 6 degrees of separation, Mark Granovetter, a professor at Johns Hopkins University, further examined the nature of small-world connections and discovered that the ‘bridges’ and crucial links in a social network are in fact the ‘weak’ links between people, meaning direct links between mere acquaintances that connect otherwise separate social circles. It is the weak links that really hold a social network together, and the network would crumble if they were removed. Strong links between close friends, on the other hand, are less important since they usually form a triangular structure: it is most likely that your two closest friends also know each other, and if you removed the link to one of them, the connection would still exist, only one step further removed. Granovetter published his findings in the 1973 paper “The Strength of Weak Ties.”
One of the main questions Watts and Strogatz were pondering in their small-world research was: what if a network needed to describe a social world is neither ordered nor random but somewhere in between? Using computer programs, Watts and Strogatz ran experiments on hundreds of graphs. Each of them started out as an ordered network, which they then subjected to some random rewiring. [Fig 2] They monitored the number of degrees of separation and computed how clustered the network was. In the original network, each point had 10 neighbors, and there was a potential total of 45 links that could run between those neighbors; in reality, about two out of every three of these possible links were actually in place, so the degree of clustering was 2/3, or 0.67. Starting from a network of 5,000 wires, they had the computer add 50 more at random, which means that one percent of the links was randomly established. The result was a clustering of 0.65 as opposed to the 0.67 of the original, not a major decrease. The degrees of separation, however, had fallen from an initial number of about 50 to about 7.
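
The rewiring experiment is easy to reproduce today; the following is a minimal sketch, assuming the third-party networkx library (whose watts_strogatz_graph generator implements exactly this kind of rewired ring) is installed:

```python
# Minimal sketch of the Watts-Strogatz rewiring experiment:
# 1,000 points, each wired to its 10 nearest neighbors (5,000 links),
# with a fraction p of the links rewired at random.
import networkx as nx

n, k = 1000, 10

for p in (0.0, 0.01):
    G = nx.watts_strogatz_graph(n, k, p, seed=42)
    clustering = nx.average_clustering(G)
    separation = nx.average_shortest_path_length(G)
    print(f"p={p:.2f}  clustering={clustering:.2f}  "
          f"degrees of separation={separation:.1f}")

# p=0.00 gives a clustering of about 0.67 and roughly 50 degrees of
# separation; p=0.01 barely lowers the clustering but collapses the
# separation to a handful of steps.
```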

The mapping of social networks and communication has also become a broad area of inquiry in digital art. Many of these maps have been created at MIT; among the most prominent are Warren Sack’s Conversation Map and Judith Donath’s Chat Circles, the latter developed within the Media Lab’s Sociable Media Group.

Warren Sack’s Conversation Map (since 2000) is one example of a possible mapping of communication: the Conversation Map is a browser that analyzes the content of large-scale online e-mail exchanges (such as newsgroups) and uses the results of the analysis to create a graphical interface that allows users to see social and semantic relationships. Participants in the conversation are represented as little nodes with names, and their exchanges are displayed as lines connecting them (the proximity of the nodes indicates the number of messages that have been exchanged). A menu of discussion themes lists the most commonly discussed topics in hierarchical order, and an overview panel presents the history of all messages exchanged over a given period of time. Terms in the conversation that are synonyms or have similar meanings are connected in a semantic network.

An earlier, well-known graphical representation of large-scale communication is Chat Circles by Judith Donath and Fernanda B. Viégas. Each person connected to the chat environment is represented as a colored circle with the person’s name attached to it. If a user posts a message, it appears within their circle, makes the circle grow, and then gradually fades as time passes. Activity within the communication environment is indicated through changes in the size, color, and location of the graphics. While users see all the other participants in the entire system, they need to be physically close to other participants to be able to ‘read’ their conversation. Users outside a person’s ‘hearing’ range are rendered differently: their circles appear as outlines. Donath also created maps of the social patterns of an electronic community with the project Visual Who.

While none of these projects was specifically based on the idea of small-world networks or set out to prove their principles, they nevertheless illustrate basic functionalities of small worlds.

The study of networks is part of the general area of science known as complexity theory. The phenomenon of small-world networks seems to suggest that there is a hidden principle at work organizing our world, a combination of randomness and order that hasn’t been fully explained. Small-world network theory turns out to be applicable to anything from social networks and power grids to cell structure (that is, the communication between specialized cells) as well as the WWW.

2. The Internet and WWW as Small-World Networks

In 1964, the RAND Corporation, the foremost Cold War think tank, developed a proposal for ARPA (Advanced Research Projects Agency) within the Department of Defense that conceptualized the Internet as a communication network without central authority, one that would be safe from a nuclear attack. Paul Baran, working for the RAND Corporation, wrote several papers that examined different types of distributed networks, one resembling a fishing net, another a hierarchical decentralized system. [Fig 3] Baran concluded that the fishnet structure would be more survivable. The clustering of computers on the current Internet turns out to be over a hundred times greater than one would expect for a random network. Nor is it the ordered network Baran envisioned: it is another small-world network.

By now, there are numerous visualization studies of the Internet — among them the well-known visualizations of NSFNET by Donna Cox and Robert Patterson from the NCSA, created in 1992. Bill Cheswick (from Bell Labs) and Hal Burch (from Carnegie Mellon) created maps of the Internet via tracerouting — a process that traces the routes taken by the packets of information traveling the network — which unveiled the structure of the Internet as a hierarchical, decentralized network. (Cheswick and Burch took traceroute-style path probes, one to each registered Internet entity, and from this built a tree showing the paths to most of the nets on the Internet.) In 1999, the computer scientists Michalis, Petros, and Christos Faloutsos used 1997 and 1998 data for the network of the Internet to study the number of links a packet ordinarily has to traverse in going from one point to another. Despite the size of the Internet, the number is about 4.

This phenomenon is perfectly illustrated in ART+COM’s installation Ride the Byte (1998), which used tracerouting to visually translate the routes of information through the global communication network. Users of the system could choose a website from a predefined selection on a display. On a large projection, they could then see the route taken by the actual data packet traveling to the requested site in order to retrieve the information. As an artwork, Ride the Byte reinstates the paradigms of a physical map and its emphasis on geographical location for the process of data travel, a process that usually is not visible but vanishes behind the concept of a global network transcending time and place.

A few nodes have a huge number of links and act as hubs. The Faloutsos team studied a subset of 4,389 nodes in the network, linked by 8,256 connections, and created a graph of it. The curve they found follows what mathematicians call a “power law”: each time the number of links doubles, the number of nodes with that many links drops by a factor of about five. It is unlikely that this is mere coincidence; there seems to be a ‘hidden order.’
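
Read as a power law, the quoted doubling ratio pins down an exponent; the calculation below is a back-of-the-envelope reading added here for illustration, not a figure reported in the studies themselves:

$$P(k) \propto k^{-\gamma}, \qquad \frac{P(2k)}{P(k)} = 2^{-\gamma} \approx \frac{1}{5} \quad\Longrightarrow\quad \gamma = \log_2 5 \approx 2.3$$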

The same results were found by physicist Albert-László Barabási and colleagues at the University of Notre Dame, who built a crawler — software that traces and ‘crawls along’ all the links leading from one webpage to others — to investigate the structure of websites. Starting with the Notre Dame site of 325,729 documents connected by 1,469,680 links, they found a pattern nearly identical to that of the Internet: each time the number of links doubled, the number of websites having that many links decreased by a factor of about five.

Barabási’s experiments also showed that a natural engine of small-world architecture is what is known as the ‘rich-get-richer’ mechanism: networks grow by preferential attachment, in the simplest way possible, with new nodes preferring to link to nodes that are already well connected.
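
A minimal sketch of preferential attachment, written from the general textbook description rather than from Barabási’s own model code: each new node attaches to a handful of existing nodes chosen with probability proportional to their current number of links.

```python
# "Rich get richer": new nodes attach to m existing nodes, chosen in
# proportion to how many links those nodes already have.
import random
from collections import Counter

def preferential_attachment(n, m=2, seed=0):
    random.seed(seed)
    # start from a small, fully connected core of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'targets' lists every endpoint once per link, so drawing from it
    # at random is drawing in proportion to degree
    targets = [node for edge in edges for node in edge]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))
        for old in chosen:
            edges.append((new, old))
            targets += [new, old]
    return edges

edges = preferential_attachment(10000)
degree = Counter(node for edge in edges for node in edge)
histogram = Counter(degree.values())
for k in (2, 4, 8, 16, 32, 64):
    # the counts fall off roughly as a power law: hubs are rare but present
    print(k, histogram.get(k, 0))
```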

Crawlers have also been used in various art projects to create structural maps of the Internet or of websites, maps that offer us a view of the network that usually remains unseen in the conventional ways of filtering and searching the Internet through portals such as search engines. Among these projects is Lisa Jevbratt’s 1:1, which used a crawler to assemble a database of the Internet’s IP addresses, which were then color-coded to create different views of the network. These maps are particularly interesting in comparison to the ones done by Cheswick.

Jevbratt continued her investigation with the project Mapping the Web Infome Imager, an application that allows users to customize and set parameters for their own travels through the structure of the Internet. The focus of the project is the exploration of variables, both structural and aesthetic, that create a context for the structure of websites. Jevbratt’s software uses ‘crawlers’ to access websites by following links between them and to collect data. Users can decide where they would like to begin ‘crawling’ (on a specific page, a randomly picked one, or one returned by a search engine); how they would like to navigate (by sequentially following all the links on one page or by jumping around on it); and how this search should be visualized. Jevbratt’s software creates visual maps of the structures of sites as they are encountered on the itineraries chosen by the user. Depending on the parameters chosen, these visualizations reveal information about a page itself, about the users’ interests as they manifest themselves in the routes selected, and about the way in which different visual models for the display of information affect the way we understand it. By allowing users to choose between pixels or ‘degrees’ (lengths of lines) as a visual model for the creation of the crawler’s map, Jevbratt introduces two basic aesthetic paradigms for processing information.
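
To make the crawling step concrete, here is a minimal, hypothetical link-following crawler (standard library only, and in no way Jevbratt’s actual software): it starts from a seed URL, follows links breadth-first, and records the link structure it encounters.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects the href targets of all <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=20):
    seen, queue, edges = set(), [seed], []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue                      # skip pages that cannot be fetched
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            target = urljoin(url, href)   # resolve relative links
            edges.append((url, target))   # one edge per link encountered
            queue.append(target)
    return edges

# edges = crawl("https://example.org")    # each (source, target) pair is one link
```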

Temporal and spatial characteristics of the crawled pages — for example, the date on the client’s computer or the date the page was modified; the colors and attributes of the page design — become part of the mapping process itself, which blurs the boundaries between the map and the territory. Jevbratt’s maps effectively merge the inherent structure of sites with the journey of the user, whose choices in turn affect the balance between these two realms. The project documents transitions from the micro- to the macro-level: the transition from one page to the next, from randomness to user control, as well as the transitions between meanings conveyed by different models of representation.

The Internet and World Wide Web are networks that have evolved without any centralized control: potentially, everyone can connect a server to the network or create their own website. The small-world architecture of these self-organizing networks suggests that this structure is a kind of evolutionary principle, a particularly efficient form of communication (in the broadest sense) that allows for quick transmission of signals and for stability of the network even if links are removed.

3. The Brain as Small-World Network

The neural network of the brain exhibits the same fundamental structure as that of social or computer networks. The brain can be understood as an assembly of distinct modules, each of them responsible for different tasks, such as speech, language, or vision. In neuroscience labs, magnetic resonance imaging techniques — which use radio waves to probe the pattern of blood flow in the brain, revealing how much oxygen its various parts are using at any moment — are used to see these modules in action; the blood-flow pattern reflects the level of neural activity.

The processing centers of the brain reside in the cerebral cortex, which contains most of the brain’s neurons. The modules of the brain have to communicate in order to coordinate overall brain activity. A region of the human brain no larger than a marble contains as many neurons as there are people in the US – roughly 287,400,000 as of mid-2002. Each neuron is a single cell with a central body from which numerous fibers project. The shortest fibers (dendrites) are the neuron’s receiving channels; the longer fibers (axons) are the transmission lines.

Axons from any neuron eventually link up with dendrites of other neurons, and some axons link up with neurons in neighboring brain areas. The brain also has a small number of ‘long-distance’ axons.
Focusing on the brains of cats and monkeys, Jack Scannell (from the University of Newcastle, England) spent more than a decade mapping out connections between different regions of the cerebral cortex. The cat has 55 regions of the cerebral cortex associated with different functions, with 400 to 500 significant links connecting them. In order to determine how these links are arranged, Vito Latora of the University of Paris and Massimo Marchiori of MIT used Scannell’s maps and analyzed the brain networks in the terms set out by Watts and Strogatz. They found a strikingly efficient network architecture, with the number of degrees of separation in the cat brain lying between two and three.

Strogatz and Watts had also studied the possible communication links between fireflies in order to solve the synchronization puzzle of their simultaneous ‘blinking’ over large distances. In 1999, Luis Lago-Fernandez and colleagues from the Autonomous University of Madrid studied networks of neurons in a way similar to how Watts and Strogatz had studied fireflies, by creating a virtual model of the locust’s olfactory antennal lobe (a group of about 800 neurons that takes information from the smell receptors and relays it to higher regions of the brain). They built several simulations, with detailed models for the behavior of each of the 800 neurons, in which they applied a stimulus to a small fraction of neurons in the network and then monitored the way it spread through it. While an ordered network resulted in an inadequate response, a small-world network yielded surprising results.

Neural Networks, Evolutionary Computation, and Artificial Life and Intelligence Projects

Models of brain and behavioral processes are commonly applied to computer technologies and networks in fields including computer science, neurobiology, and cognitive science.

The effort of building naturally intelligent systems has become its own area of research. Computational neural networks, or neurocomputers, are designed to mimic the architecture of the brain. They are information-processing systems inspired by the structure of biological neural systems and mimic the functions of the central nervous system and the sensory organs attached to it. Humans are estimated to have 10 billion neurons, a fly about a million, and the largest neurocomputers currently have a few million neurodes — which means that they have little more than fly power.

Computational neural networks are distinguished by the following characteristics:

* they are not programmed in computer languages as conventional computers are, but trained to respond in the way we want them to;
* they communicate through interconnections between neurodes that have variable weights and strengths;
* the information in neural networks is processed by constantly changing patterns of activity.

As opposed to having a separate memory and controller like a digital computer, a neural network is governed by three properties: the transfer function of the neurodes, the structure of the connections among the neurodes, and the learning law the system follows.
Neural networks have three basic building blocks: neurodes (artificial models of biological neurons); interconnects (links between neurodes); and synapses (the junctions where interconnects meet neurodes).
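
As an illustration of how these pieces fit together, here is a minimal sketch of a single neurode with weighted interconnects, a threshold transfer function, and a simple learning law; the perceptron rule is used purely as an example, not as a description of any particular neurocomputer.

```python
import random

def transfer(x):
    """Transfer function of the neurode: a simple threshold."""
    return 1.0 if x >= 0 else 0.0

def train(samples, n_inputs, rate=0.1, epochs=50):
    # weights of the interconnects, plus a bias term at index 0
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs + 1)]
    for _ in range(epochs):
        for inputs, target in samples:
            activation = w[0] + sum(wi * xi for wi, xi in zip(w[1:], inputs))
            output = transfer(activation)
            error = target - output
            # learning law: adjust each weight in proportion to the error
            w[0] += rate * error
            for i, xi in enumerate(inputs):
                w[i + 1] += rate * error * xi
    return w

# Example: training the neurode to compute the logical AND of two inputs
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train(data, n_inputs=2)
```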

Neural networks deal with sensory tasks (such as the processing of visual stimuli), motor tasks (such as controlling arm movements), or the decision-making by which sensory tasks drive motor tasks. They imitate behaviors and are better suited for processing at the cognitive level — for example, motor control, association, and speech recognition.

1. Small-world Architecture in the Structure of Human Language

Language, speech, and association are obviously important aspects of an intelligent human system. The architecture of a small world also seems to form the basic structure of human language. Physicists Ricard Solé and Ramon Ferrer i Cancho used the database of the British National Corpus — a 100-million-word collection of samples of written and spoken language from a wide range of sources — to study the grammatical relationships between 460,902 words in the English language. They considered two words to be linked if they appeared next to one another in English sentences. Again, the system proved to be a small-world network, in which words such as ‘a,’ ‘the,’ or ‘at’ turned out to be well-connected hubs. The typical distance between words in the language was less than three, and the clustering of the network was 5,000 times higher than for a random network.
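
The linking rule is simple enough to sketch in a few lines; the toy corpus below is, of course, an invented stand-in for the British National Corpus used in the actual study:

```python
# Two words are linked whenever they appear next to one another in a sentence.
from collections import defaultdict

corpus = [
    "the cat sat on the mat",
    "a dog slept at the door",
    "the dog and the cat met at a gate",
]

links = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        links[a].add(b)
        links[b].add(a)

# frequent function words such as 'the', 'a', or 'at' end up as hubs
hubs = sorted(links, key=lambda w: len(links[w]), reverse=True)
print([(w, len(links[w])) for w in hubs[:3]])
```
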
In his essay “Rules of Language,” Steven Pinker outlines how language and cognition have been explained according to two basic but different principles:

* as the products of a homogeneous associative memory structure. Associationism describes the brain as a homogeneous network of interconnected units, which are modified by a learning mechanism that records correlations among frequently co-occurring input patterns.
* as a set of genetically determined computational modules in which rules manipulate symbolic representations. Rule-and-representation theories describe the brain as a computational device in which rules and principles operate on symbolic data structures. (Some rule theories further propose that the brain is divided into modular computational systems whose organization is largely specified genetically.)

The study of one phenomenon of English grammar and how it is processed and acquired suggests that both theories are partly right. Regular verbs (such as learn, learned) are computed by a suffixation rule in a neural system for grammatical processing, while irregular verbs (such as run, ran, run) are retrieved from associative memory.
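
This division of labor can be caricatured in a few lines of code: a deliberately simplified sketch in which irregular forms are looked up in an associative store while regular forms are produced by a suffixation rule.

```python
IRREGULAR = {"run": "ran", "go": "went", "sing": "sang"}   # associative memory

def past_tense(verb):
    if verb in IRREGULAR:          # retrieval from memory
        return IRREGULAR[verb]
    if verb.endswith("e"):         # suffixation rule (simplified)
        return verb + "d"
    return verb + "ed"

print(past_tense("learn"), past_tense("run"))   # learned ran
```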

The above-mentioned two principles connect to the different models employed by neural networks (the computational kind) and Artificial Intelligence. Neural networks basically act as an associative memory while AI attempts to generate heuristics or rules to find solutions for problems of control, recognition, and object manipulation. The underlying assumption is that problems can be solved by applying formal rules for symbol manipulation – a task digital computers handle well.

Neural networks attempt to solve these problems at the level of the structure of the machine itself. In neural networks, symbolic processing is a result of the low-level structure of the physical system. While neural networks imitate behaviors, AI describes behaviors with rules and symbols.

2. Artificial Intelligence

As early as 1936, the mathematician Alan Turing (1912-1954) — one of the early influential theoreticians of AI, who gave the famous Turing Test its name — outlined the Turing machine, a theoretical apparatus that established a connection between the processes of the mind, logical instructions, and a machine. Turing’s paper “Computing Machinery and Intelligence” (1950) was a major contribution to the philosophy and practice of Artificial Intelligence, a term officially coined in the mid-1950s by computer scientist John McCarthy. AI had one of its groundbreaking victories in May 1997, when IBM’s Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov. Deep Blue’s ‘intelligence’ is a strategic and analytical one, an example of a so-called ‘expert system’ that has expertise in a specific area and is able to draw conclusions by applying rules based on that knowledge.

AI is more successful on the level of expert systems than at the level of speech recognition, which belongs to the area of AI research that focuses on man-machine communication. The best-known artificial intelligence ‘characters’ are Eliza and ALICE – software programs you can talk to, also known as chatbots (chat robots). Eliza was developed by Joseph Weizenbaum, who joined the MIT Artificial Intelligence Lab in the early 1960s. Admittedly, Eliza does not have much ‘intelligence’ but works with what could more or less be considered tricks, for example, string substitution and pre-programmed responses based on keywords. Eliza’s much more advanced colleague ALICE (Artificial Linguistic Internet Computer Entity) was designed by Dr. Richard S. Wallace and operates on the basis of AIML, or Artificial Intelligence Markup Language, a markup language that allows users to customize ALICE and to program how she responds to various input statements. Both Eliza and ALICE are accessible online, and people can chat with them through their respective websites.
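
A minimal sketch of the kind of ‘tricks’ mentioned above, keyword matching plus string substitution; this is an illustration only, not Weizenbaum’s actual ELIZA script:

```python
import re

# a few keyword rules; the captured text is substituted back into the reply
RULES = [
    (r"\bi am (.*)", "Why do you say you are {0}?"),
    (r"\bi feel (.*)", "How long have you felt {0}?"),
    (r"\bmy (.*)", "Tell me more about your {0}."),
]

def respond(statement):
    for pattern, template in RULES:
        match = re.search(pattern, statement, re.IGNORECASE)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."           # fallback when no keyword matches

print(respond("I am worried about my work"))
```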

The AIML Pattern Language Committee is working on what is known as the AIML pattern language, a currently very restricted subset of regular expression syntax, plus some rules that allow the inclusion of certain AIML tags. At the Alice Foundation’s site, Richard Wallace keeps a gallery of ALICE Brain Pictures. The pictures are based on AIML’s pattern matching of language, which is done by means of a so-called ‘Graphmaster.’ The Graphmaster consists of a collection of nodes called Nodemappers, which map the branches from each node. The root of the Graphmaster is a Nodemapper with about 2,000 branches, one for each of the first words of all the patterns (40,000 in the case of the ALICE brain).
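
A minimal sketch of a Graphmaster-like structure, written from the description above rather than from the actual AIML implementation: the root branches on the first word of each pattern, and deeper nodes branch on the words that follow.

```python
def add_pattern(root, pattern, template):
    node = root
    for word in pattern.upper().split():
        node = node.setdefault(word, {})   # one branch per word
    node["<template>"] = template          # the response stored at the leaf

def match(root, sentence):
    node = root
    for word in sentence.upper().split():
        if word not in node:
            return None
        node = node[word]
    return node.get("<template>")

graphmaster = {}
add_pattern(graphmaster, "hello alice", "Hi there!")
add_pattern(graphmaster, "what is your name", "My name is ALICE.")
print(match(graphmaster, "What is your name"))
```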

The eye-shaped log spiral plots all 24,000 categories in the ALICE brain. The spiral itself represents the root, and the trees emerging from the root are the patterns recognized by ALICE; the branching factor at the root is about 2,000. As Richard Wallace points out, there is a similarity between the Graphmaster plot of ALICE’s brain and maps of the cortical algorithms for visual processing, which, in his opinion, is no coincidence: the same cortical architecture that enables real-time, attention-based visual processing can in fact be applied to linguistic processing as well.

Artists have frequently incorporated artificial intelligence and speech programs (mostly based on AIML) into their art. Although their works are based on current research, they cannot simply be labeled ‘AI projects,’ since they are broader in their scope and metaphoric implications. Among the well-known AI-related artworks are Ken Feingold’s If / Then and David Rokeby’s Giver of Names.

In If/Then, two eerily humanoid heads sit in a box surrounded by what resembles the styrofoam nuggets normally used as packaging material. As Feingold explains, he wanted them ‘to look like replacement parts being shipped from the factory that had suddenly gotten up and begun a kind of existential dialogue right there on the assembly line.’ The heads are involved in an ever-changing dialogue probing the philosophical issues of their existence as well as their separateness and likeness. Their conversation, based on a complex set of rules and exceptions, points to larger issues of human communication: picking up on syntax structures and strings of words in each other’s statements, the heads’ communication at times may seem conditioned, limited, and random (as human conversations sometimes are), but it highlights the meta-levels of meaning created by failed communication, misunderstandings, and silences. The heads’ dialogue unveils crucial elements of the basics of syntax structure and of the way we construct meaning, with often extremely poetic results.

While distinctly different from Feingold’s heads, Giver of Names (1990–present) by Canadian artist David Rokeby (b. 1960) addresses similar issues of ‘machine intelligence’ in an equally poetic way that transcends the merely technological fascination with AI and becomes a reflection on semantics and the structure of language. The Giver of Names is a computer system that quite literally gives objects names by trying to describe them. The installation consists of an empty pedestal, a video camera, a computer system, and a small video projection. Visitors can choose an object or set of objects from those in the space, or from ones they might carry with them, and place them on the pedestal, which is observed by the camera. When an object is placed on the pedestal, the computer grabs an image and then performs many levels of image processing (outline analysis, division into separate objects or parts, color analysis, texture analysis, etc.). These processes are visible on the life-size video projection above the pedestal. The computer’s attempts to arrive at conclusions about the objects chosen by visitors lead to increasing levels of abstraction that open up new forms of context and meaning. Giver of Names is an exploration of the various levels of perception that allow us to arrive at interpretations, and it creates an anatomy of meaning as defined by associative processes. The project ultimately is a reflection on how machines think (and how we make them think).

3. Evolutionary Computation, Behavioral Algorithms and Artificial Life

The sets of genetically determined computational modules (in which rules manipulate symbolic representations) that were mentioned earlier also connect to the concepts of computation in decentralized systems and genetic computation.
The first theories about computation with decentralized systems date back to 1948, when the Hungarian-born mathematician John von Neumann (1903-1957) gave a lecture on the “General and Logical Theory of Automata.” His concepts would later be expanded by the Polish-born mathematician and physicist Stanislaw Ulam (1909-1984), who suggested that systems could be modeled on a grid of ‘cells,’ each of which could behave on the basis of a set of rules. Further elaborating on von Neumann’s ideas, the American philosopher and computer scientist Arthur Burks — who, with his wife Alice, helped build and program ENIAC, the first electronic general-purpose computer — introduced the term ‘cellular automaton’ in the 1950s.
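
A minimal sketch of a cellular automaton: a grid of cells that are all updated in parallel by the same local rule. The rule used here is Conway’s well-known Game of Life, chosen simply as a familiar example rather than the automaton von Neumann himself designed.

```python
def step(grid):
    """Apply the Game of Life rule once, on a wrap-around (toroidal) grid."""
    rows, cols = len(grid), len(grid[0])

    def neighbours(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    return [[1 if neighbours(r, c) == 3 or (grid[r][c] and neighbours(r, c) == 2)
             else 0
             for c in range(cols)]
            for r in range(rows)]

# a 'glider' pattern on a small 8 x 8 grid
grid = [[0] * 8 for _ in range(8)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):
    grid = step(grid)   # after four steps the glider has moved diagonally
```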

According to the International Society for Genetic and Evolutionary Computation, genetic and evolutionary computation are computer methods based on natural selection and genetics, used to solve problems across the spectrum of human endeavor. Evolutionary computation and artificial life are two relatively new but fast-growing areas of science. Some people believe that artificial life and evolutionary computation are very distinct areas that only overlap in the occasional use of evolutionary computation techniques, such as genetic algorithms, by artificial life researchers; others argue that artificial life and evolutionary computation are very closely related and that evolutionary computation is an abstracted form of artificial life, since both strive to represent “solutions” to an environment and to decide which “solutions” get to reproduce and how.
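
A minimal sketch of that selection-and-reproduction loop, using a toy ‘environment’ (a fitness function that simply rewards bit strings containing many ones) purely for illustration:

```python
import random

def fitness(solution):
    return sum(solution)                       # the 'environment' rewards ones

def evolve(length=20, population=30, generations=50, mutation=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:population // 2]        # selection: the fittest half
        children = []
        while len(children) < population:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]           # crossover
            child = [bit ^ 1 if random.random() < mutation else bit
                     for bit in child]          # mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), best)
```
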
At the basis of digital art projects in the realm of artificial life are inherent characteristics of digital technologies: the possibility of infinite ‘reproduction’ in varying combinations according to specified variables, as well as the feasibility of programming certain behaviors (such as ‘fleeing,’ ‘seeking,’ or ‘attacking’) for so-called ‘autonomous’ information units or characters.
Numerous artificial life projects, such as Karl Sims’ Genetic Images and A-Volve by Christa Sommerer and Laurent Mignonneau, establish an explicit link between aesthetics and evolution.

Issues of the transformation of information and the survival of the (aesthetically) fittest form the basis of A-Volve (1994/5), which establishes a direct connection between the physical and virtual world. The interactive environment allows visitors to create virtual creatures and interact with them in the space of a water-filled glass pool. By drawing a shape with their finger on a touch screen, visitors produce virtual three-dimensional creatures that automatically become ‘alive’ and start swimming in the real water of the pool as simulated appearances. The movements and behaviors of the virtual creatures are dependent on their forms, which ultimately determine their fitness for survival and ability to mate and reproduce in the pool — aesthetics becomes the crucial factor in the survival of the fittest. The creatures also react to the visitors’ hand movements in the water: people can ‘push’ them forward or backward or stop them (by holding their hands right over them), which may protect them from their predators. A-Volve literally translates evolutionary rules into the virtual realm and at the same time blends the virtual with the real world. Human creation and decision play a decisive role in this virtual ecosystem: A-Volve is a reminder of the complexity of any life-form (organic or inorganic) and of our role in shaping artificial life. Allowing visitors to interact with the creatures in the pool, A-Volve reinstates human manipulation of evolution.

An example of the computational determination of the behavior of characters, as it is often found in gaming, is John Klima’s project Jack & Jill, which defines behavioral patterns for the characters in the familiar children’s story. The project was created for an exhibition called CODeDOC, which I organized for the Whitney Museum’s artport website. The exhibition per se has nothing to do with the subject of this paper but was meant to look at the relationship between the language of code and the (visual) artwork produced by this code; both the code and its results were published side by side on the website. For Jack & Jill, Klima created a small, condensed expert system, and visitors to the site can actually take a look at its code. (It is by no means a complex one, since the artists had to work with severe restrictions and their main code could not be longer than 8K.) Klima essentially wrote a story in Visual Basic that retells the nursery rhyme “Jack & Jill”: the characters have a set of behaviors, such as indecisive or reluctant emotional states. What the characters are doing at any given point is ultimately determined by the system: parameters such as ‘willing,’ ‘unwilling,’ and ‘indecisive’ allow for an interplay between, and an evolution of, behavioral characteristics. Anyone familiar with the gaming pantheon may recognize that the characters in this story are Super Mario and the princess from the original Donkey Kong game.
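
To illustrate the general idea of system-determined behavioral parameters, here is a short, hypothetical sketch; it is written in Python for consistency with the other examples in this essay and is in no way Klima’s actual Visual Basic code:

```python
import random

STATES = ("willing", "indecisive", "unwilling")

def next_action(character, state):
    if state == "willing":
        return f"{character} climbs up the hill."
    if state == "indecisive":
        return f"{character} hesitates at the bottom of the hill."
    return f"{character} refuses to fetch the pail of water."

for character in ("Jack", "Jill"):
    state = random.choice(STATES)   # the system, not the viewer, decides
    print(next_action(character, state))
```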

The issues outlined here are embedded in the much larger cultural context of human perception, for example, the relationship between the observer and the observed and between real and virtual interfaces. The central questions we are facing are: to what extent do the systems we create in the physical world actually replicate the structure of bodies and brains (in terms of the networks of cells, synapses, etc.)? And to what extent do these systems and ‘interfaces’ we create in turn change our brain, our perception, and our body? Visual representations of culture (in the broadest sense) and the technologies that are used to produce them may in turn feed back into culture and ultimately change our perception. The relationship and feedback loop between the observer, cognitive processes, and culture is becoming one of the pressing issues of our time, and it will profoundly shape our understanding of art and the visual image.

 

The theories and research summarized in this article are outlined in detail in:


Albert-László Barabási, Linked: The New Science of Networks. Perseus Publishing: Cambridge, MA, 2002.
Mark Buchanan, Nexus: Small Worlds and the Groundbreaking Science of Networks. W. W. Norton & Co.: New York / London, 2002.