Neural Prosthetics: A Survey of Technologies

The impact of fiction on synthetic reality

Rooftop – The Matrix. Black latex bodysuit, black trench coat and thigh holsters. “Dodge this,” she says, putting the barrel to his temple. The bullet passing through the chamber and into his skull blows him backward off his feet into a freeze and super-slow click, click, click–breathlessly out of the frame. Other agents are in hot pursuit. Digital agents–or more precisely neural simulations. A nearby helicopter presents an option for escape. Neo: “Can you fly that thing?” In one smooth move, as she pulls off her sunglasses and flips open her phone, Trinity answers, “Not yet.” Coolly, she proceeds, “Tank, I need a pilot program for a B-212 helicopter.” Pause. “Hurry.” Ever-so-slight stress crosses her face, a mere fluttering of her eyes… “Let’s go!” In a matter of seconds, the program is personalized and downloaded into her memory. The duo jump into the helicopter, the simulated helicopter she can now fly like a pro. The programming required to fly the machine comes from a small disk. It’s data for the neural-simulated mind of the simulated Trinity. In the story, the flight program is processed in the real brain of the non-simulated Trinity, who has previously been jacked – a small, machined receptor organ implanted in the back of her neck receives the download. The scene flies by with the adrenaline speed of pumped-up science fiction narrative.

The speed of the story enables a fast-paced plausibility in the science. Possibly unrealizable, the technology can at the very least be explained. Even a casual science fiction fan has an intuitive appreciation, a vocabulary of the science that is implied – namely, that the interior end of the jack would be intricately interwoven with Trinity’s cerebral cortex. The programming code, flowing in massive quantities, is transformed from a digital signal into stopped-down, electron-sized voltages by synthetic transistors. It pulses through, without interruption or noise, to the synapses of Trinity’s existing neurons, which connect with the synthetic dendrites of the implanted, silicon-cultured neurons. The neurotransmitter chemistry flows out of the synthetic cells as naturally as it does from the existing ones. Also, importantly, the prosthetic cells reproduce in accordance with their counterparts – based on their individual programming, determined by the nano-sized machines self-replicating inside the cell walls. The programming is also hardcore, generating a thoroughly tested knowledge of flight maneuvering, an intuitive familiarity with extremely complex instrumentation and countless memories of body movements, muscles tensing and relaxing, sight and distance coordination – perhaps years of logged flight time. Trinity can select from a variety of behavior and skill programs.

The Matrix is the latest standout in the competitive teching up of the literary imagination. Technology has become more than a trope or the background to the action. It’s central to the aesthetics of the cybertech genre. It’s the guts of the material. Technology drives the aesthetic. The technological capabilities are ultimately what constitute originality in the genre. It’s what’s fascinating. Within a few years the climax of the genre will be characterless, intricately detailed descriptions of technological capabilities, like neuromorphic engineering – pure and plotless.

A more or less acceptable degree of scientific plausibility allows this sci fi narrative to rip by without breaking momentum. To do this, the technology must be somewhat grounded in the existing science of its field. For the technology to be competitive within the genre, there needs to be more than a casual understanding and depiction of the complexities of the real science. The invented technology extends the real science with a solid, logical sense of credibility – if you can say that about a fictional technology. The balance between actual science and its fictional extension is the equation that gets massaged. Sci fi audiences are sophisticated and will think through the shortcomings of a less-than-sound technological sketch. What really sells the fiction is a sense that given the time (probably a lot of time) and the money, the technology could become real science and available electronics. There are desires here that are beyond strictly escapist sensibilities.

As an aesthetic, technology reflects the culture-wide fascination with the potential of scientific research, which for a lot of observers appears to be expanding or limitless. Referring to many fields of productivity and activity, “technology” is one of those words that doesn’t really describe the breadth of its own inclusiveness. It’s the tools and the tools that make the tools. When its focus is broad it defines cosmologies, and when it’s tight it isolates the invisible. The increasing attention paid to the more spectacular scientific advancements is challenging fundamental cultural paradigms. Current medical breakthroughs in cloning and the completion of the DNA map have pushed our sense of laboratory practice into a hazy confusion of fiction masterminding reality. Ethical debate has been stirred by deeply problematic issues. What’s intriguing is that these waves of new achievements fuel an attitude, albeit a simplistic one, that verified scientific truths and physical laws have been temporarily suspended or are at least held in suspicion. Of course, the non-theoretical, pragmatic reality of current science-spectacles is that hundreds of breakthroughs on multiple levels have allowed them to occur. However marvelous and imaginative, I think it’s generally assumed that the smart people, the readers or the viewers who do know the difference between sci fi’s imaginary extensions and real material science, can appreciate each for its own function. But more sensationally, if you were to examine the commingling of desires and influences you might ask to what extent research is being fueled by the fictions that are intimately associated with it.

Flash back. Neuromancer, 1984. “Case was twenty-four. At twenty-two, he’d been a cowboy, a rustler, one of the best in the Sprawl. He’d been trained by the best, by McCoy Pauley and Bobby Quine, legends in the biz. He’d operated on an almost permanent adrenaline high, a byproduct of youth and proficiency, jacked into a custom cyberspace deck that projected his disembodied consciousness into the consensual hallucination that was the matrix.” Sixteen years ago William Gibson seeded the idea that a chunk of hardware could integrate with neural processing. This is significant because in 1999, Andy and Larry Wachowski, the brothers who made the Keanu Reeves Matrix, were obliged to upgrade their technologies with a sophistication that relies more heavily on actual scientific research. The hard-science technologies of emerging research inform the recent Matrix far more aggressively than technology influenced the genre when Gibson set it in motion. Has there been an expansion of technological demand? The notion of the private intellect becomes more difficult within the environment of saturated technological hyperbole – technosis. This may not make it so easy to continue to separate fact from the fiction that has evolved parallel to real science. Literature is more than a passive representational reflection. There is an exchange of influences.

Sci fi technologies share a motivation with actual laboratory progress. In the imaginary, whether the technology is displayed as fabulous art direction or technical prose, the demands are the same. The science must be upgraded, re-engineered to accommodate the shortcomings of the previous model or depiction. More than appearing new, the technology must have evolved – it’s self-monitoring. What is new is what is re-engineered. Technical problem-solving is as integral to the subgenre narration as it is to laboratory progress. Across the board, involvement with technology has become aggressive in a demand for problem-solving – implementation, application and availability. Do the technical ideas in a sci fi scenario filter down as an influence to laboratory engineers? So far I have found little evidence to suggest they do – only one example with any specifics. The Hughes Space and Telecommunications Company released a press statement back in ’98 claiming that Mr. Spock’s idea about ion propulsion engines had served as the impetus for the design of the real-life prototype. Used in the Deep Space 1 spacecraft, the ion thruster is about ten times more efficient than chemical thrusters, which translates to a reduction of fuel mass of up to 90 percent. This supposedly cut the travel time to asteroid 1992 KD in half. This does indicate that technical scenarios can be shared, but influences may not always be so specific.

The B-212 Scenario Can Be Referenced in Several Research Disciplines

The jack in the back of Trinity’s cranium is probably fabricated from a titanium alloy. It would have to be somehow electrically grounded, and would penetrate only a few centimeters. At the level of the foramen magnum, where the spinal cord ends and the medulla oblongata begins, the gray matter undergoes a marked cellular reorganization. Adjacent to this, the dorsal median septum is skirted by the gracile fasciculus, which comes to an end where its primary sensory axons terminate – a termination point for various impulses. Somewhere in this area would be the bridge between the artificial chips configured between the jack and the receiving neurons of Trinity’s central processing unit. The programming code would be, on the receiving end, a language of neurotransmitter chemical propensities and micro-electrical impulses geared to interact with the existing neuron environment of thought processing, memory, and motor control. An extremely high level of precision in the voltage generated by the synthetic synaptic activity would be absolutely necessary. Electrical signals in the brain are extremely slight.

As technologies fly by in the split-second scenario, a myriad of research-based sciences are called up for review. From the inside out, in looking for connections I considered biotechnology, bioengineering, and biomedical research, as well as aspects of molecular neuroscience that address which molecules and mechanisms involved in synaptic transmission are modified, and how they are modified during learning and memory. To locate other connections I looked for research in neural processing, because there are a host of technologies preparing artificial intrusions into these areas. There are teams of researchers who are creating synthetic neural material of various kinds, both for prosthetic insertion and for artificial computational machines modeled on neuron environments – robotics, artificial intelligence, and artificial life. The scale of all of this research is a science in and of itself. Tools of nanotechnology are indispensable in these research areas, and they generate their own directions for research within the respective fields. If the sci fi depictions of Trinity’s neural prosthetics and enhanced sensory capacities are legitimate projections of current technological team research, they would be found in these areas. Referring to silicon nervous system research, Carver Mead (Analog VLSI and Neural Systems) writes:

If we really understand a system we will be able to build it. Conversely, we can be sure that we do not fully understand the system until we have synthesized and demonstrated a working model. Artificial neural network chips are now on the market. I view these as the forerunners of components from which you might hope to contrive a silicon nervous system.

In emerging technology the computational unit is at the scale of the individual neuron and getting smaller fast.

Scale and Microelectronics

Scale is a decisive factor in emerging technologies. The foundations of next-generation electronics are all about perfecting the engineering of atomic-sized devices. I think it might be beneficial to discuss just how tiny the work in these emerging fields of research has become. A nanometer is the standard that guides the new era of industrialization. In an attempt to wrestle with the scale of the nanometer I came across an interesting history of the meter. From the late 18th century until the middle of the 20th, the meter was always one particular bar of metal, suitably stored and protected at a selected site in France. The meter was made from a dense alloy of platinum and iridium and was scrupulously protected. Copies were made and one, and only one, was given to each nation adhering to the Treaty of the Meter. The United States got two copies, numbers 21 and 27. The 18th-century definition of the meter as the ten-millionth part of the quadrant from the pole to the equator was accurate to about one part in 10,000, which is about one-tenth of a millimeter per meter. As technology improved, the definition became accurate to about one part in 100,000. The ten-fold gain is commonly referred to as one order of magnitude. Today, light-based meter standards permit measurements that are accurate to within three parts in 100 million. The nanometer is 10⁻⁹ meter. This is an incredibly small scale at which to be constructing functioning devices. By comparison, the diameter of a human hair is 8×10⁻⁵ meter. The lower limit of the unaided eye is roughly 4×10⁻⁵ meter; a chromosome inside a cell measures approximately one micron, or 10⁻⁶ meter, another significant measurement in the new technologies. Still smaller, the wavelength of cadmium red light is 6×10⁻⁷ meter. A small bacterium: 2×10⁻⁷; a virus: 10⁻⁸; and smaller yet the nanometer at 10⁻⁹, one billionth of a meter. Significant measurements go further still with the angstrom (the diameter of an atom), the picometer, the classical radius of an electron, a proton, a fermi, and the smallest measurement on my reference chart: the Compton wavelength of the atomic nucleus of iron-56, at 2×10⁻¹⁷ meter. Scales beyond the senses are difficult to comprehend – they become imaginary distances. It’s best to just be impressed with the number of zeroes.
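To make these distances slightly less imaginary, the lengths quoted above can be lined up against the meter with a few lines of arithmetic. The sketch below is purely illustrative; the values are the approximate figures from this paragraph, and the script simply counts how many orders of magnitude each sits below a meter.

```python
import math

# Approximate lengths quoted above, in meters (illustrative values only).
lengths_m = {
    "human hair (diameter)": 8e-5,
    "limit of the unaided eye": 4e-5,
    "chromosome": 1e-6,
    "cadmium red light (wavelength)": 6e-7,
    "small bacterium": 2e-7,
    "virus": 1e-8,
    "nanometer": 1e-9,
    "atom (angstrom)": 1e-10,
}

for name, length in lengths_m.items():
    # Each factor of ten below one meter is one order of magnitude.
    orders_below_meter = -math.log10(length)
    print(f"{name:32s} {length:8.0e} m  ({orders_below_meter:.1f} orders of magnitude below a meter)")
```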

The microelectronics age comprises fifty years of shrinking transistors. The integrated circuit heralded the so-called information age in 1959. In 1965, Gordon E. Moore (now Chairman of Intel Corp.) predicted that the power of integrated circuits would double every eighteen months with proportionate reductions in cost. This prediction, which is known as Moore’s Law, is the axiom of the microelectronics industry. Eventually these nanotechnicians will run out of room. The question in these engineering circles is how small an electronic device can be made before the classical laws of physics prohibit it from operating. Observers in the field have characterized the nanotechnological incentive as top-down in the engineering disciplines and bottom-up in the physical disciplines. Scientists working from the bottom up are attempting to create a new understanding and structure from the dynamics of the basic materials and their molecules. Scientists working top-down are perfecting the ability to fabricate electrical devices at smaller and smaller scales. In biotechnology these scales make research at the level of the cell much more approachable.
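Moore’s Law, as paraphrased here, is easy to turn into arithmetic: a doubling every eighteen months compounds into a power of two. The sketch below simply compounds that doubling from an assumed starting point (the roughly 2,300 transistors of the Intel 4004 are used only as a convenient seed); it illustrates the axiom rather than describing any particular chip.

```python
def transistors(years: float, start_count: float = 2_300, doubling_period_years: float = 1.5) -> float:
    """Project a transistor count under an assumed eighteen-month doubling period."""
    return start_count * 2 ** (years / doubling_period_years)

# Starting from the ~2,300 transistors of the Intel 4004 (1971), the
# eighteen-month rule projects roughly:
for years in (10, 20, 30):
    print(f"after {years} years: about {transistors(years):,.0f} transistors")
```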

Nanotechnology is the study of how to build materials, machines and products with atomic precision. Moungi Bawendi of the MIT Chemistry department is conducting research on semiconductors that are so small they begin to produce unpredictable results:

Our research involves the study of the physical chemistry of materials of a size between the bulk and isolated molecules. The focus is on nanometer size fragments of semiconducting inorganic solids which contain tens to thousands of atoms. These are also known as nanocrystallites. They exhibit properties that open a window into the fuzzy size region where solid state properties rise out of the molecular noise. The steady decrease in the size of semiconductor structures that has brought about the information age will soon reach a level where the quantum mechanical behavior of electrons in confined dimension cannot be ignored. Semiconductor nanostructures in the 1-10 nm range show distinct quantum behavior at room temperature. Effects resulting from restricting electronic motion in all three dimensions can be very dramatic.

The status of the microelectronic nanosciences is currently defined by the single-electron transistor (SET). This is a metal island, a few nanometers across, coupled to two metal leads via tunnel barriers. The current can be successfully controlled down to a single electron. SETs are also components of semiconductor devices, where their behavior is characterized as a quantum dot. These dots are nanometer-sized man-made boxes that control the flow of electrons by selectively holding or releasing them. The electron, being the smallest unit of electrical charge, would represent bits of information and for the time being would define the smallest processor unit for computer memory.
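The reason a single electron can gate such a device comes down to its charging energy, e²/2C, which must be large compared with the thermal energy kT if the effect (the Coulomb blockade) is to survive. The rough comparison below assumes an attofarad-scale island capacitance purely for illustration; it is not a measurement of any particular device.

```python
# Coulomb blockade estimate for a single-electron transistor (SET).
# The island capacitance below is an assumed, illustrative value.
e = 1.602e-19          # elementary charge, coulombs
k_B = 1.381e-23        # Boltzmann constant, joules per kelvin

island_capacitance = 1e-18                        # farads (attofarad-scale island, assumed)
charging_energy = e**2 / (2 * island_capacitance) # energy cost of adding one electron, joules

# The blockade is robust only when the charging energy far exceeds kT.
for temperature in (300.0, 4.2, 0.1):             # room temperature, liquid helium, dilution fridge
    thermal_energy = k_B * temperature
    print(f"T = {temperature:6.1f} K: charging energy / kT = {charging_energy / thermal_energy:8.1f}")
```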

In their latest research Marc Kastner and his colleagues at the Research Laboratory of Electronics at MIT have pioneered the semiconductor single-electron transistor that turns on and off once for every electron added to it.

Not only do these devices have significant technological potential, but they also provide new insight into the behavior of electrons confined to small spaces. When we first studied amorphous semiconductors in the late 60s, it seemed that the theorists would be able to make predictions that we could test quantitatively. However, as work progressed, it became less possible to do that. The materials were characterized well but we ran into brick walls trying to quantitatively understand the fundamental issues.

Then, high-Tc superconductivity was discovered in 1986. Like thousands of other physicists we knew it was the greatest condensed-matter physics problem of our time. When people talk about dark matter in the universe or why quarks have the masses they do, high-Tc superconductivity is among these great physics problems. It’s extraordinarily mysterious and we still don’t understand it. Most superconductors, when they are warm, are conventional metals. When we cool them they become superconductors. However, the high-Tc materials are actually semiconductors that become superconductors once we dope them. Soon after the phenomenon was discovered came the infusion of funds, and the progress quickly resulted in a crystal that allowed us to get a foothold in the field.

The advantage of semiconductors for SETs, from a physics point of view, is that an artificial atom in a semiconductor SET has so few electrons–typically only 30 or 40. On the other hand, if you make a particle of metal and add electrons to it, you’re starting out with billions of electrons and just adding more. As a result, we can see the effects of energy quantization more easily. We use the term “artificial atom” whenever we confine electrons to a small space. This causes their energy and their number to become quantized. People are doing all sorts of fancy things with artificial atoms. They’re putting them together to make artificial molecules which in arrays have unique collective behavior. All exciting research depends on new technology.

Because SETs exhibit extremely low power consumption, reduced device dimensions, excellent current control and low noise, they promise both new physics and innovative electronic devices. Importantly, the design application could result in the creation of high-density neural networks.

Some Neural Prosthetics Research

When I began my search for Trinity’s technology, what I found is that, generally speaking, advancements in research and application are conducted not so much by forward-thinking technicians as by hundreds or more problem-solving teams, banded together under an umbrella of focus, tackling specific areas of health and welfare. The Neural Prosthetics team at Caltech, under the guidance of Richard Andersen, began with a focus on neurological disorders that to date have no satisfactory treatment, like spinal-cord injury, stroke and ALS. This team utilizes recent neurological discoveries made in the basics of sight and movement.

Andersen and his team have located in primate and human subjects the signal area that anticipates the next intended arm movement. In their work they implant chronic electrode arrays to record these signals in real time. The recording is sophisticated and involves electrophysiology techniques engineered by Andersen and his team. Replicating and expanding this recording will provide high-level signals for guiding real or prosthetic arms. This is the fundamental groundwork for establishing the code or language of neural activity that will ultimately contribute to the vocabulary necessary to perform a host of projected functions.
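As a crude illustration of what “establishing the code” can mean, the sketch below decodes a simulated intended reach direction with a textbook population-vector scheme, in which each recorded unit votes for its preferred direction in proportion to its firing rate. It is a cartoon of the general idea, not the electrophysiology or the algorithms actually used by Andersen’s team.

```python
import numpy as np

# A minimal population-vector sketch of decoding an intended reach direction
# from simulated firing rates. Illustrative only.
rng = np.random.default_rng(0)
n_neurons = 64

# Each simulated unit gets a preferred direction (radians) and a cosine tuning
# curve: rate = baseline + gain * cos(theta - preferred).
preferred = rng.uniform(0, 2 * np.pi, n_neurons)
baseline, gain = 10.0, 8.0

def firing_rates(intended_direction: float) -> np.ndarray:
    rates = baseline + gain * np.cos(intended_direction - preferred)
    return rng.poisson(rates).astype(float)      # spike-count noise

def decode(rates: np.ndarray) -> float:
    # Each unit "votes" for its preferred direction, weighted by how much its
    # rate exceeds the population average (a stand-in for baseline subtraction).
    weights = rates - rates.mean()
    x = np.sum(weights * np.cos(preferred))
    y = np.sum(weights * np.sin(preferred))
    return float(np.arctan2(y, x) % (2 * np.pi))

true_direction = np.deg2rad(135.0)               # the intended arm movement
estimate = decode(firing_rates(true_direction))
print(f"intended {np.degrees(true_direction):.0f} deg, decoded {np.degrees(estimate):.0f} deg")
```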

Other health-sciences research is also laying the groundwork for prosthetic advancement. Robert Langer, a biotechnology engineer at MIT, works at the interface of biotechnology and materials science. His focus is the development of polymers to deliver drugs, particularly genetically engineered proteins, continuously at controlled rates for long periods of time. These controlled-rate systems can be triggered magnetically, ultrasonically or enzymatically. The polymeric delivery system will ultimately be absorbed by the body with no toxicity. Langer’s delivery systems will cross the blood-brain barrier. Efficient delivery of brain chemistry is yet another approach to delivering code directly to specific areas of the brain.
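The difference between a conventional burst of drug and the controlled rate these polymer systems aim for can be sketched with two textbook release curves, one first-order and one zero-order. The rate constants below are arbitrary, assumed values, not data from any of Langer’s materials.

```python
import numpy as np

# Illustrative release profiles only; rate constants are assumed.
t = np.linspace(0, 30, 7)        # days
dose = 100.0                     # total drug load, arbitrary units

# First-order ("burst then decay") release: M(t) = dose * (1 - exp(-k t))
k = 0.3                          # per day
first_order = dose * (1 - np.exp(-k * t))

# Zero-order (constant-rate) release, the goal of a controlled-rate polymer:
rate = dose / 30.0               # units per day, exhausting the load in 30 days
zero_order = np.minimum(rate * t, dose)

for day, f, z in zip(t, first_order, zero_order):
    print(f"day {day:4.0f}: first-order released {f:5.1f}   zero-order released {z:5.1f}")
```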

Langer’s colleague at the Biotechnology Process Engineering Center at MIT, Douglas Lauffenburger, focuses on molecular cell bioengineering. The application of this engineering is to develop an understanding of cell function in terms of fundamental molecular properties and to apply that understanding to the improved design of cell-based technologies. Cell-based design is an area of engineering where the research consists of building simple mechanical machines inside the cell walls. Lauffenburger’s group is isolating important functions of receptor-mediated regulation of blood and tissue cell behavior such as proliferation, adhesion, migration and macromolecular transport, all from an engineer’s perspective. Their intention is to expand on existing bio-cellular function, to build a catalogue of cells that perform novel functions, or multitasking functions that are required in prosthetic problem-solving. The trajectory is to be able to build cells with functions other teams might require for their research.

The fundamental units of an artificial neural network are modeled after individual neurons. A typical neuron is composed of three distinct parts. The cell body is ovoid and contains a single nucleus; attached to it on two sides are the dendrites and the axon. These two parts are tree-shaped; they have branches. The dendritic tree collects excitatory and inhibitory inputs from other neurons and passes these messages, as voltages, on to the cell body. These voltages are added to its current voltage, if excitatory, or subtracted if inhibitory. When a threshold is exceeded, an output voltage signal is transmitted down the axon, which can be nearly a meter in length, to synapses that connect the leaves of the axonic tree to the dendrites of other neurons. Each synapse is a chemical connection between the axon of one neuron and a dendrite of the next. When the signal arrives at a synapse, vesicles of a chemical neurotransmitter are popped. The neurotransmitter then disperses across the synaptic gap, where it is picked up as an excitatory or inhibitory input on a dendrite of the postsynaptic neuron. And so on and so on. This is your basic brain activity. The average “network” consists of over 100 billion neurons. The interconnections number in the trillions and there are support cells that outnumber the neurons ten to one.
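The simplest formal version of this description is the textbook threshold unit sketched below: inputs arrive with excitatory (positive) or inhibitory (negative) weights, they are summed, and the unit fires only when the total crosses a threshold. It is a cartoon of the biology, not a model of any specific prosthetic design.

```python
from dataclasses import dataclass

@dataclass
class ThresholdNeuron:
    """A McCulloch-Pitts style unit: the cartoon version of the neuron described above."""
    weights: list[float]     # positive = excitatory synapse, negative = inhibitory
    threshold: float

    def fire(self, inputs: list[float]) -> int:
        # Dendritic summation: add excitatory inputs, subtract inhibitory ones.
        membrane_potential = sum(w * x for w, x in zip(self.weights, inputs))
        # Send a spike down the "axon" only if the threshold is exceeded.
        return 1 if membrane_potential > self.threshold else 0

# Three synapses: two excitatory, one inhibitory.
neuron = ThresholdNeuron(weights=[0.6, 0.5, -0.8], threshold=0.5)
print(neuron.fire([1, 1, 0]))   # 1.1 > 0.5 -> fires
print(neuron.fire([1, 1, 1]))   # 0.3 < 0.5 -> stays silent
```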

In the field of neural prosthetics, Charles Ostman of the Nanothinc Science Advisory Board, who spends a lot of time chronicling the work of nanoscale researchers and technicians, predicts that “engineered pseudo-organelles, self organizing synthetic proteins interconnecting with dendritic structures and related component systems will be part of the cacophony of nanoscale and microscale devices available to researchers.” Synthetic protein-based dendritic structures are the precursory infrastructural components for an artificially constructed neural interconnect system. The culmination of this line of research would be the interface mechanism for a biochip neural prosthetic device. An increasing portion of nanotechnology applications is focused on the manipulation of molecular materials found in living cells. Ostman claims that “all living organisms are, in a sense, biomolecular nanofoundries, with the software of the assembly process encoded into the genetic material of each individual cell. However, it is becoming increasingly possible to create synthetically manipulated versions of biomolecular components, utilizing various protein based sub-components and self-manipulating entities, constructed from the building blocks of existing organisms.” The self-manipulating entities that Ostman speaks of are considered a revolutionary breakthrough in medical science because they are not based on the treatment of disease but rather on the “cybernetic enhancement of the human body with extracellular corrective chemistry systems” constantly updating and maintaining cellular health and functionality.

The closest I have come to the jackpot regarding Trinity’s technology comes from the research team of Svetlana Tatic-Lucic, John A. Wright, Yu-Chong Tai and Jerome Pine from the physics department at Caltech. This team focuses on the design and fabrication of a new type of micro-machined silicon structure for in vivo (in the body) and in vitro (out of the body) extracellular stimulation and recording. The novelty of these structures is neuron wells fabricated in a silicon membrane 20 microns thick (20,000 nanometers), in which cultured neurons can be implanted and grown. This approach tremendously improves the signal clarity – out of the noise of additional electrons – in their silicon structure for extracellular recording, the foundation work for programmable code – a pilot program for a B-212 helicopter.
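Signal clarity here is a matter of simple arithmetic: the ratio of spike amplitude to background noise. The figures below are typical textbook magnitudes assumed for illustration (roughly a 100-microvolt extracellular spike against tens of microvolts of noise), not measurements from the Caltech probes.

```python
import math

# Illustrative signal-to-noise arithmetic for an extracellular recording.
# Amplitudes are assumed, typical magnitudes, not data from the Caltech probes.
spike_amplitude_uv = 100.0        # extracellular spike, microvolts
noise_rms_uv_open = 40.0          # background noise, conventional probe (assumed)
noise_rms_uv_well = 10.0          # background noise, neuron confined in a well (assumed)

def snr_db(signal_uv: float, noise_uv: float) -> float:
    # Signal-to-noise ratio expressed in decibels.
    return 20 * math.log10(signal_uv / noise_uv)

print(f"conventional probe: {snr_db(spike_amplitude_uv, noise_rms_uv_open):.1f} dB")
print(f"neuron-well probe:  {snr_db(spike_amplitude_uv, noise_rms_uv_well):.1f} dB")
```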

At present all existing methods for the analysis of live neural networks have certain drawbacks that present difficulties for the accurate interpretation of results. The most frequent problem is a low signal-to-noise ratio. And, for in vivo use, long-term reliability is questionable because of movement of the probe. The Caltech team has addressed these concerns with the design of their neuroprobes. In their studies, the cultured neurons serve as an interface between the electronics and cerebral neurons. They have been tested to verify their endurance through the insertion process. Members of the team additionally introduced an etching process to improve the edges of the probe. Sharpness is very important for reducing tissue damage during probe penetration. The ability to implant previously unknown programming skills, either motor or memory, in the learning areas of the brain is the fictional extension of this research. And the ability to get that programming to function as an integral chemical dynamic in the performance of a task is altogether another area of research. At Caltech, Mary Kennedy’s laboratory studies the role of protein kinases in learning, memory and other forms of synaptic plasticity. A kinase is an enzyme that modifies other proteins by attaching phosphate groups, often switching them on or off. Kennedy’s group characterized type II CaM kinase (calcium/calmodulin-dependent protein kinase II), a neuronal calcium-dependent protein kinase that may play an important role in controlling the changes in synaptic strength that underlie memory formation – another component of the code. This presents a potential discrepancy: if Kennedy’s advanced kinase neural code is delivered successfully but is inadequate in design, chemically speaking – in effect, it doesn’t perform well – the program becomes lost or rejected memory. Unlike digital-electronic code, a chemical code must disperse chaotically. Is rejected memory the same as forgetting?

Fictional Scenarios and Unbelievable Accomplishments

Categorically direct influences of sci fi technology on laboratory research, I have to admit, were not forthcoming. I did find that some of the core research trajectories in neural science projects share developments with other project groups that have influences from more distant sources like Artificial Intelligence and Artificial Life. They either share a crucial database, or require similar engineering. These fields share novel approaches to in vitro studies of neural networks. The model of a plodding, pragmatic method belongs with the isolated white-coated technician. It belongs to a not-too-distant time when information flow was not part of the paradigm. Research today is conducted with a more wide-open sense of possibility – due to sheer numbers – and it’s driven forward by plausibility. The advancement of computer technology brought an increased ability to process extremely complex theoretical material and molecular models. On newer stages, some of these complex modeling projects have been able to demonstrate results with material physics, due to the emerging engineering sciences. Picture a six-foot-high polished steel device that holds atoms at bay, in novel arrangements. With this new ability to experiment with actual material, like SETs, come whole new bodies of unexpected data. Quickly, the fields open up and other possibilities present themselves – new plausibilities. If these same plausibilities that arrive with the new technologies do in fact share similar qualities with the fictional scenarios, that would qualify as influence. Then again, the reason that the material achievements and predictions of these emerging sciences are at all “futuristic” is that fiction sketched them first. So when the actual science begins to fill in behind the fictional, we can’t help but casually conclude that some small scenario of the fiction is part of the research engine.

Bruce Sterling has an interesting take on the developing sciences of Artificial Life. “Life is best understood as a complex systematic process. Life as a property is potentially separate from actual living creatures.” In relation to the work being done on silicon-cultured neurons, there will come a time when the controversial possibilities of implementation move within the scope of research and will have to be decided. There is a life dynamic that will have to be addressed. Sterling: “The great hope of Artificial Life studies is that they will reveal previously unknown principles that govern life itself, the principles that give life its mysterious complexity and power, its ability to defy probability and entropy. Some of these principles are hotly discussed.” At the far end, design models will come from the fundamentals of nature. Flocking might inform the design philosophy of the synthetic neural field. Flocking as a dynamic principle is instructive because it represents an extremely complex motion that arises naturally – the amorphous movements of a flock. A flock of birds may be a body of thousands of individuals. Flocking consists of millions of simple actions and simple decisions – each repeated again and again, each affecting the next decision in sequence and rippling through the group in an endless system of feedback. There is no master control, only the individual programming of the individual neuron.
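Reynolds-style “boids” rules are the standard way of making this point concrete: each individual obeys a few local rules (stay near the neighbors, avoid collisions, match their heading) and the flock’s motion emerges with no master control anywhere. The sketch below is the bare two-dimensional version of those rules, an illustration of the principle rather than a design for a synthetic neural field.

```python
import numpy as np

# A bare-bones 2-D "boids" flock: each individual follows only local rules
# (cohesion, separation, alignment). There is no master control anywhere.
rng = np.random.default_rng(1)
n = 50
positions = rng.uniform(0, 100, (n, 2))
velocities = rng.uniform(-1, 1, (n, 2))

def step(pos, vel, cohesion=0.005, separation=0.05, alignment=0.05, radius=15.0):
    new_vel = vel.copy()
    for i in range(n):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        neighbors = (dist > 0) & (dist < radius)
        if neighbors.any():
            # Cohesion: steer toward the local center of the group.
            new_vel[i] += cohesion * offsets[neighbors].mean(axis=0)
            # Separation: steer away from individuals that are too close.
            too_close = neighbors & (dist < radius / 3)
            if too_close.any():
                new_vel[i] -= separation * offsets[too_close].mean(axis=0)
            # Alignment: match the neighbors' average velocity.
            new_vel[i] += alignment * (vel[neighbors].mean(axis=0) - vel[i])
    return pos + new_vel, new_vel

for _ in range(100):
    positions, velocities = step(positions, velocities)
print("flock spread after 100 steps:", positions.std(axis=0))
```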

If there is going to be software, a code, or a language of neural signals that can be transcribed into an individual’s thinking, it will be an impressive undertaking. The language of Trinity’s program will be the ultimate sophistication of electrical signals; its chemical counterparts and pulse frequencies will emanate from an external source able to communicate with the implanted neural network, grown both inside and outside of the brain. Right now the infrastructure is being laid down. The recording of the most primary signals is underway, and it will inform and specify all the subsequent work. Whether that is ever achieved is yet to be determined, but the plausibility is certainly firming up.