DARPA seeks to mimic in silicon the mammalian brain

The Defense Advanced Research Projects Agency (DARPA) has awarded contracts for the first phase of a program that could revolutionize computer technology and produce systems that work much as mammalian brains do.

For the military, the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) initiative could provide the foundation for machines that supplement humans in many of the most demanding situations warfighters face today, according to DARPA. Applications would include processing video images to extract information, sensory integration, robotics and decision support, the agency said.

The ultimate goal is to produce a chip that mimics the way the brain works and that would drive machines capable of autonomously processing information in complex, real-world environments by automatically learning which features in those environments are relevant and how they are associated.

“If we succeed in manifesting this technology into reality, we could deploy computer systems that can deal with ambiguity and use a wide range of both biological and non-biological sensors to act the way the brain does,” said Dharmendra Modha, manager of IBM’s cognitive computing initiative.

IBM, which is working with a number of universities in the SyNAPSE program, was recently awarded a $4.9 million contract. The other contractors are Malibu, Calif.-based HRL Laboratories, which received $6.2 million, and Hewlett-Packard, which received $1.8 million.

The problem with current computers is that they require algorithms derived by humans to describe what information to look for, and then how to process that information. That’s good for situations that are relatively well-defined, but not for real-world situations where there’s an infinite number of ways in which the various data elements can interact.

In those situations, the biological brain is more efficient by factors of anywhere from one million to one billion, according to DARPA.

In computing terms, the revolutionary advance of a successful SyNAPSE initiative would be to break the so-called Von Neumann bottleneck. In current computer architectures, the processor and the memory are separated, and performance is limited by how fast data can be shuttled between the two.

In the brain, however, the synapse represents both memory and processing function, Modha said.

“There is no Von Neumann bottleneck in nature,” he said. “And we haven’t even begun to exploit that.”
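
A rough back-of-the-envelope comparison helps illustrate that bottleneck. The sketch below estimates how long a single pass over a large set of synaptic weights would take if it were limited only by memory bandwidth versus only by arithmetic throughput; the network size, bandwidth and peak-throughput numbers are round illustrative assumptions, not measurements of any SyNAPSE hardware.

```python
# Illustrative estimate of the von Neumann bottleneck (all figures assumed).
# A weight-matrix pass touches every synaptic weight once but performs only
# two floating-point operations per weight, so on a machine with separate
# memory and processor the time is set by data movement, not arithmetic.

neurons = 1_000_000            # hypothetical network size
synapses_per_neuron = 10_000   # assumed synaptic fan-in per neuron
bytes_per_weight = 4           # 32-bit weights

weight_bytes = neurons * synapses_per_neuron * bytes_per_weight
flops = 2 * neurons * synapses_per_neuron   # one multiply + one add per synapse

memory_bandwidth = 50e9        # assumed: 50 GB/s DRAM bandwidth
peak_throughput = 500e9        # assumed: 500 GFLOPS processor peak

time_memory_s = weight_bytes / memory_bandwidth
time_compute_s = flops / peak_throughput

print(f"Data moved per pass:     {weight_bytes / 1e9:.0f} GB")
print(f"Memory-limited time:     {time_memory_s * 1e3:.0f} ms")
print(f"Arithmetic-limited time: {time_compute_s * 1e3:.0f} ms")
# With these assumptions, shuttling the weights takes roughly 20 times longer
# than the arithmetic itself, which is the imbalance a design that stores and
# processes information in the same place, as a synapse does, aims to avoid.
```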

Systems that work like the brain have been an ultimate goal of computer research for decades, but so far none of those efforts has come to much. Modha, however, sees a confluence of three powerful trends that could make the SyNAPSE program successful.

First, neuroscience has reached the point where it is producing meaningful data about neurons and synapses and how they work. Second, supercomputers have progressed to where researchers can attempt ever larger simulations of brain dynamics.

Third, he said, nanotechnology has advanced to the point where a brain's real estate, at least at the level of some 10 billion synapses and 1 million neurons per square centimeter, could be replicated in hardware.
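
Those density numbers work out to an average of about 10,000 synapses per neuron. The short calculation below checks that arithmetic and, purely as an illustrative extra, estimates the area such a density would imply for a system on the order of 10^14 synapses, a rough order of magnitude often quoted for a whole human brain that the article itself does not give.

```python
# Arithmetic check on the hardware density cited by DARPA.
synapses_per_cm2 = 10e9    # 10 billion synapses per square centimeter
neurons_per_cm2 = 1e6      # 1 million neurons per square centimeter

print(f"Average synapses per neuron: {synapses_per_cm2 / neurons_per_cm2:,.0f}")

# Illustrative scale-up (assumption: ~1e14 synapses, a rough order-of-magnitude
# figure often quoted for a whole human brain; it is not from the article).
whole_brain_synapses = 1e14
area_cm2 = whole_brain_synapses / synapses_per_cm2
print(f"Area at this density: {area_cm2:,.0f} cm^2 (about {area_cm2 / 1e4:.0f} m^2)")
```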

DARPA has designed SyNAPSE as a five-phase program that would stretch over six years or more. The goal of the final phase is a multi-chip neural system capable of driving a robotic platform that can interpret the environment around it at the level of a cat, DARPA said.

Getting there, however, is still a big if. The first phase, Phase 0, is intended to show whether the technology is even feasible. If none of the contractors succeeds in that phase, which will last up to a year, SyNAPSE will end.

About the Author

Brian Robinson is a special contributor to Defense Systems.

Reader Comments

Sun, Mar 7, 2010 Dr. Ronald J. Swallow Breinigsville, Pa 18031

LOGICAL EXTRACTION OF NEO-CORTEX STRUCTURE

I do not understand why the neocortex is a mystery to everyone. Its neural net circuit is repeated throughout the cortex. It consists of excitatory and inhibitory neurons whose individual functions have been known for decades. The circuit is repeated over layers whose axonal outputs feed forward as inputs to other layers. The neurons of each layer receive axonal inputs from one or more sending layers, and all they can do is correlate the axonal input stimulus pattern with their axonal connection pattern from those inputs and produce an output frequency related to the resulting postsynaptic potentials (PSPs). Axonal growth toward a neuron is definitely the mechanism for permanent memory formation, and it is just what is needed to implement conditioned-reflex learning. That growth must be under the control of glial cells and must be a function of the signals surrounding the neurons.

The cortex is known to do pattern recognition, and the correlation between an axonal input stimulus and an axonal connection pattern is exactly what pattern recognition requires. Pattern recognition, however, needs normalized correlations and a means of comparing them so that the largest correlation is the one the neurons recognize. Without normalization, the relative values of the PSPs would not be properly bounded and could not be used to determine the best pattern match. To compare PSPs so that the neuron with the maximum PSP fires, the inhibitory neuron is needed. By having a group of excitatory neurons feed an inhibitory neuron that feeds inhibitory axonal signals back to those excitatory neurons, the PSPs of the excitatory neurons are compared: as the inhibitory signal decays after each excitatory stimulus, the neuron with the largest PSP fires before the others do, thereby inhibiting the excitatory neurons with smaller PSPs. This inhibitory neuron is needed to achieve PSP comparisons, no question about it. For a meaningful comparison, the PSPs must be normalized. As unlikely as it may seem, it turns out that inhibitory connections growing by the same rules as excitatory connections grow to a value that accomplishes the normalization. That is, as the excitatory axon pattern grows by conditioned-reflex rules, the inhibitory axon to each excitatory neuron grows to a value equal to the square root of the sum of the squares of the excitatory connections. This can be shown by a mathematical analysis of a group of mutually inhibiting neurons under conditioned-reflex learning. The normalization does not require the neurons to behave differently from what has been known for decades; it only requires that they interact with an inhibitory neuron as described.
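
A minimal numerical sketch of the mechanism described above: a handful of excitatory units each correlate the input pattern with their connection pattern, each response is divided by the root-sum-of-squares of that unit's weights (the normalization the inhibitory connection is said to provide), and the unit with the largest normalized response is treated as the winner. The array sizes, random weights and explicit division are illustrative assumptions, not a model of an actual cortical circuit.

```python
import numpy as np

# Toy winner-take-all over normalized correlations, sketching the idea that
# an inhibitory term equal to the root-sum-of-squares of each excitatory
# weight vector lets the best (normalized) pattern match win.

rng = np.random.default_rng(0)
stimulus = rng.random(8)          # an axonal input pattern (assumed)
weights = rng.random((4, 8))      # 4 excitatory units' connection patterns

raw_psp = weights @ stimulus                  # unnormalized correlations ("PSPs")
norms = np.sqrt((weights ** 2).sum(axis=1))   # sqrt of sum of squared weights
normalized = raw_psp / norms                  # scale-independent comparison

winner = int(np.argmax(normalized))
print("raw PSPs:       ", np.round(raw_psp, 3))
print("normalized PSPs:", np.round(normalized, 3))
print("winning unit:   ", winner)
# Without the division by `norms`, a unit with uniformly large weights could
# dominate regardless of how well its pattern matches the stimulus.
```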

Wed, Dec 24, 2008 wulfcry

The road ahead is never that far; it's the journey itself that stalls the goal, not knowing what happens while you're on the road. The same is true of man's quest for AI, an undefined computing factor whose only purpose is to learn and adapt by itself. The biggest (unsound) confusion comes from mixing up biological intelligence (humans, mammals, insects, etc.) with machine intelligence (computers, electronics), two different things where one is assumed to need to emulate the other. It is possible to construct machine intelligence, but it does not have to emulate the biological processes we learn from; it's a totally different science. The possible gain in AI resides in programmable memory that AI algorithms can use to update any network faster without a black-box effect. In my opinion that could be the memristor, or the EPROM memory that almost every PC has; even normal CPUs with programmable FPGAs open up possibilities. The only bottleneck in AI is how people feel it must be; that's the error that slows the process. Keep it real.

Tue, Dec 9, 2008 Mark Ross Western Washington State

Yes, we're a long way from making it happen. But isn't that how we move forward, by trying and learning from our mistakes? The first attempts at flight didn't produce much more than data about what didn't work, and Edison found hundreds of ways to build an incandescent bulb that wouldn't light. But now we have airplanes and light bulbs and ever so much more. The money being spent on this is pretty small, and the data returned may turn out to be very useful in many ways.

Mon, Dec 8, 2008 EnglishPatient Fairfax, Virginia

Is this a joke? We are so far from making this happen it's ridiculous to set such a goal. We haven't even got half-decent AI working yet. Does anyone expect this to succeed, or are we just looking for ways to get cash to cronies and send the govt further into debt in the process?
