Neural nets find niche

Proponents say artificial neural networks are worth another look for defense applications

A warfighter has no problem distinguishing a tank against the foliage of a tree. The turret, the treads, the squat hull — they’re signs that trigger split-second recognition.

It’s a simple task for the human brain, with 100 billion neurons firing away. But it’s no easy feat for a computer, despite the Defense Department’s once heavily funded efforts to duplicate the processing power of human brains using artificial neural networks (ANNs). ANNs are meant to excel in recognizing the relationships among complex variables. The financial industry has embraced them wholeheartedly as fraud-detection systems, and they’re enjoying a renaissance in the medical field.

A few years ago, there were legions of DOD-funded ANN projects. “In the 1990s, neural networks was a hot word,” said Leonid Perlovsky, principal research physicist at the Air Force Research Laboratory. “You would say ‘neural networks,’ and you would get funding.”

The funded projects included an eight-year effort to put ANNs in M1A1 Abrams tanks as engine diagnostic tools. Officials also considered using them as automated target-recognition tools on board the canceled Comanche helicopter.

ANNs are close to being deployed on Navy ships as part of a fire-detection system, based on work that the Naval Research Laboratory oversees. The British Royal Navy has deployed a similar system on its ships, said George Privalov, founder and chief technology officer of axonX. That company is working with the Naval Research Laboratory on the multisensor fire-recognition system, which uses neural networks embedded in video cameras.

Despite such promising applications, ANNs have fallen out of favor at defense agencies. Proponents such as Privalov and Dennis Braunreiter, chief scientist of sensor system operations at Science Applications International Corp., say recent developments show that it’s time for a new, albeit more critical, look at military uses for the technology.

Observers say the Defense Advanced Research Projects Agency in particular gave too much money to neural networks in the past decade and expected returns too quickly.

“I think the big-bang approach was not appropriate,” said Harold Szu, a program officer at the Office of Naval Research. A DARPA spokesperson said the agency could not provide anyone to talk about neural networks and doesn’t have ongoing ANN projects.

Although work on ANNs never disappeared, the military now seems to have an aversion to such systems. Perlovsky said he still conducts research on ANNs but is careful to avoid using the term during funding discussions because some people don’t want to hear about them.

Skeptics simply point to ANN failures to keep a lid on financial support.

Almost everyone in the neural-networking field has heard a story about military researchers attempting to use an ANN to detect tanks amid foliage. The story might be apocryphal, but it goes something like this: Scientists fed a neural network pictures of trees with and without tanks parked beneath them. At first, they had stunning success — the machine had a 100 percent detection rate. But when they tried reproducing the results with new data, the ANN failed.

The computer hadn’t learned to detect tanks at all. Instead, it had focused on the color of the sky to determine whether tanks were present because the test photos had been taken on different days. In the pictures with the tanks, the sky was cloudy; in the pictures without tanks, the sky was bright blue. The network had learned to recognize the difference in the weather.
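The failure mode in that story — a model latching onto a spurious feature that happens to correlate with the labels — can be illustrated with a toy sketch. The data and the hand-coded brightness rule below are hypothetical stand-ins for what the network learned, not anything from the actual project:

```python
# Each sample: (has_tank_shape, sky_brightness); label 1 = tank present.
# In the training photos, tanks were shot on cloudy days (low brightness),
# so brightness correlates perfectly with the label.
train = [((1, 0.2), 1), ((1, 0.3), 1), ((0, 0.9), 0), ((0, 0.8), 0)]

# A lazy "model" that keys on the brightness confound instead of the tank.
predict = lambda sample: 1 if sample[1] < 0.5 else 0

train_acc = sum(predict(s) == y for s, y in train) / len(train)
print(train_acc)  # 1.0 — looks perfect on the training photos

# New photos break the correlation: a tank on a sunny day,
# empty woods on a cloudy day.
test = [((1, 0.9), 1), ((0, 0.2), 0)]
test_acc = sum(predict(s) == y for s, y in test) / len(test)
print(test_acc)  # 0.0 — the weather, not the tank, was learned
```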


Scientists argue about the effectiveness of ANNs, but their adoption — or lack thereof — could be attributed to a perceived flaw that might be more cultural than technological. It stems from the nature of ANNs.

Such networks mimic the human brain’s approach to complex problems by breaking them into component pieces for parallel processing. In the brain, a neuron fires when it builds up enough electrical and chemical energy to excite its neighbors. Digital nodes — the ANN equivalent of neurons — mimic that buildup by assigning different numerical weights to incoming data; only when the weighted inputs accumulate sufficient strength does a node pass its result forward for more processing.
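A single node of this kind can be sketched in a few lines. The weights, bias, and sigmoid activation below are illustrative choices for the sketch, not parameters from any fielded system:

```python
import math

def node_output(inputs, weights, bias):
    """A single artificial 'neuron': weight each input, sum the results,
    and squash the total through a sigmoid activation. The output only
    approaches 1 when the weighted inputs accumulate enough 'energy',
    loosely mimicking a biological neuron reaching its firing threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid: maps any sum to (0, 1)

# Strong, well-weighted inputs push the node toward firing (output near 1);
# weak ones leave it quiet (output near 0).
print(node_output([1.0, 1.0], [4.0, 4.0], -2.0))  # ~0.998
print(node_output([0.1, 0.1], [4.0, 4.0], -2.0))  # ~0.23
```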

“Each node is specializing, each node is learning a piece of the problem,” said Steven Templeton, a senior research scientist at Promia. The cybersecurity company developed an intrusion-detection tool used by the Space and Naval Warfare Systems Command that incorporates a neural network.

When scientists know the correct answer, they can train a network with continuous feedback until it adjusts to reach that conclusion. Such so-called backpropagation networks are the most mature ANNs, but there are also unsupervised networks, used when researchers don’t know the correct answer in advance.
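A single-node version of that feedback loop — a toy sketch of supervised training, not full multilayer backpropagation — might teach one node the logical OR function like this:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy supervised task: inputs and the correct (known) answers for OR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 1.0  # learning rate: how hard each correction nudges the weights

for epoch in range(2000):
    for inputs, target in data:
        out = sigmoid(w[0] * inputs[0] + w[1] * inputs[1] + b)
        err = target - out            # feedback: how wrong was the node?
        grad = err * out * (1 - out)  # scale by the sigmoid's slope
        w[0] += lr * grad * inputs[0] # nudge each weight toward the target
        w[1] += lr * grad * inputs[1]
        b += lr * grad

# After training, the node's rounded output matches OR on every input.
for inputs, target in data:
    out = sigmoid(w[0] * inputs[0] + w[1] * inputs[1] + b)
    print(inputs, round(out))
```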

Nevertheless, ANNs have a reputation for being black boxes — mysterious things that transform data in unknown ways. “What you get is just numbers that are connecting the nodes in the neural networks, so there is not an easy explanation for why did you get this result or that result,” said Zvi Boger, an Israeli scientist who worked with the National Institute of Standards and Technology on a research project to use remote sensors to detect toxic airborne chemicals.

Although neural network engineers might disagree, concerns about ANNs’ lack of transparency are genuine.


After Boger returned to Israel, the NIST project abandoned neural networks in favor of traditional statistical methods at the behest of Boger’s replacement, Barani Raman, a postdoctoral researcher whose work was funded through a National Research Council grant. His concern about neural networks stems from their tendency to produce unpredictable results: Researchers can’t guarantee a particular output with the same set of input data.

Raman said ANNs have a tendency to get stuck in local minima — sets of network weights that are optimal locally but not globally as a solution to the problem. Neural networks latch onto the first solution they find, but that set of internal weights might not be the best.
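The local-minimum problem is easiest to see in one dimension. Gradient descent on a curve with two valleys settles into whichever valley is downhill from its starting point — a hypothetical illustration, with the curve chosen purely for clarity:

```python
def grad_descent(start, lr=0.01, steps=1000):
    """Gradient descent on f(x) = x**4 - 3*x**2 + x, a curve with a
    shallow local minimum (near x = 1.13) and a deeper global minimum
    (near x = -1.30). Each step moves downhill along the slope f'(x)."""
    x = start
    for _ in range(steps):
        x -= lr * (4 * x**3 - 6 * x + 1)  # f'(x)
    return x

print(grad_descent(2.0))   # starts on the right: stuck in the local valley
print(grad_descent(-2.0))  # starts on the left: reaches the global minimum
```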

“Once it gets into this particular set of weights…it is very difficult to get it out,” Raman said. Even diehard fans say training an ANN can be tricky. It’s more art than science, Privalov said. “It’s not that a neural network by itself is a panacea that lets you just put data in, train [the network] and voilà, everything works,” he added.

“If you don’t know what you’re doing, you can do a lot of wrong things,” said Lars Kangas, a scientist at the Pacific Northwest National Laboratory who developed a neural network to detect fuel flow problems in Abrams tanks. He still touts the benefits of ANNs. His network was able to piece together data from 30 sensors to monitor engines’ health in real time.

Much of the sensors’ data had nonlinear relationships that are difficult to map, which is why Kangas turned to ANNs — not despite their black-box reputation but because of it. “The neural networks can learn that mapping for you,” he said.

He produced three generations of operational models, but the project was canceled in 2002. “There was a change in command and other things,” he said. “They just decided that the funding could be [used] somewhere else.”


Researchers say successfully training a neural network requires large amounts of data. A lack of data is one reason why ANNs failed to live up to expectations in the past, Braunreiter said. But a recent explosion of available data — made possible by the Internet — has accompanied an exponential leap in computing power that makes processing that data possible, he added. DOD stopped funding SAIC’s research into neural networks, so the company now finances its own projects.

Another criticism of ANNs is that they are not adaptable to changing situations. For example, if a neural network is meant to detect anomalies, it can struggle to adjust to a dynamic baseline, Raman said. “What is normal and what is abnormal has changed a little bit,” which means the ANN was trained on a set of data that’s no longer valid.
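That drifting-baseline problem can be sketched with a toy threshold detector; the readings and tolerance below are hypothetical numbers, not values from any deployed system:

```python
def make_detector(training_readings, tolerance=3.0):
    """Flag any reading far from the mean of the training-era baseline.
    The threshold is frozen at training time, so if 'normal' later
    drifts, routine readings start tripping the alarm."""
    mean = sum(training_readings) / len(training_readings)
    return lambda x: abs(x - mean) > tolerance  # True = anomaly

detect = make_detector([50, 51, 49, 50])  # baseline centered near 50

print(detect(60))  # True  — a genuine outlier under the old baseline
print(detect(58))  # True  — but if normal has drifted to ~58, this
                   #         routine reading is now a false alarm
print(detect(51))  # False — still within the frozen tolerance
```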

ANN proponents say they’re overcoming that limitation. Neural networks that can dynamically shift their internal weights are replacing networks that fix their weights once and stick to them forever, Braunreiter said. And rather than yield just one weighted output, nodes will create multiple weights.

“That gives you more dimensions to separate the objects at a finer level of detail because I’ve now added more weights,” he said. “That means I can create more dimensionality.”

“There are probably not a lot of ANNs in military systems today,” he added. “I think that’s going to change over the next five to 10 years.”

However, scientists want to avoid another round of 1990s-like excitement. “Things will never live up to the hype,” Perlovsky said.
