Mind-reading computer could boost ISR image analysis
- By Kevin McCaney
- Nov 10, 2015
Dr. Anthony Ries shows a soldier how to play a computer game using only his eyes.
An analyst trying to manually comb through surveillance imagery in search of a potential target is in for a long day. And computers, while good at parsing a lot of that information, aren’t very good yet at picking out targets. But an analyst and a mind-reading computer working together? The Army Research Laboratory thinks it’s on to something.
In ARL’s MIND Lab—the acronym stands for Mission Impact Through Neurotechnology Design—a soldier wearing an electroencephalogram cap and a desktop computer reading its signals were effectively able to communicate via the soldier’s thoughts. The EEG, which detects brain-wave activity, allowed the computer to correctly identify which of a series of pictures the soldier was thinking about, according to an ARL release.
By working together, the soldier and computer were able to make use of each other’s strengths. "What we are doing is basically leveraging the neural responses of the visual system," said Dr. Anthony Ries, a cognitive neuroscientist who studies visual perception and target recognition. "Our brain is a much faster image processor than any computer is. And it's better at detecting subtle differences in an image.”
In the test, Ries showed the soldier a series of images, each of which fell into one of five categories: boats, pandas, strawberries, butterflies and chandeliers. The soldier didn’t have to move or say anything; he simply picked one of the categories and silently counted how many images fell into it. About two minutes after the test concluded, the computer identified the right one: boats. The electroencephalogram allowed the computer to analyze the soldier’s brainwaves, which appeared different when he saw a boat than when he saw an image from any of the other categories.
Ries said this kind of interface could eventually lead to more efficient image analysis, something the military needs as imagery pours in from drones, satellites and other sources faster than it can be analyzed.
He offered a scenario in which an intelligence analyst needs to examine a large image, scanning his eyes back and forth across it while working his way down. "It takes a long time,” he said. “They may be looking for a specific vehicle, house, or airstrip—that sort of thing."
Instead, the computer could break the image up into small “chips,” which are then flashed in rapid succession on a screen, with the computer noting the change in brainwaves when a target turns up. "All the little chips are presented really fast. They are able to view this whole map in a fraction of the time it would take to do it manually,” Ries said.
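The chipping step Ries describes can be sketched in a few lines. The chip size, image dimensions and the `make_chips` helper below are illustrative assumptions, not details of ARL's system:

```python
import numpy as np

def make_chips(image, chip_size):
    """Split a large 2-D image array into square "chips" for rapid
    serial presentation. Partial tiles at the edges are dropped for
    simplicity."""
    h, w = image.shape[:2]
    chips = []
    for y in range(0, h - chip_size + 1, chip_size):
        for x in range(0, w - chip_size + 1, chip_size):
            chips.append(image[y:y + chip_size, x:x + chip_size])
    return chips

# A hypothetical 1000x1500-pixel satellite image cut into 100-pixel
# chips yields a 10x15 grid, i.e. 150 chips.
image = np.zeros((1000, 1500))
chips = make_chips(image, 100)
print(len(chips))  # 150
```

At the five-images-per-second rate Ries cites, these 150 chips could be presented in about 30 seconds.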
He said research has shown that as many as five images a second could be flashed on the screen while still getting accurate responses. "Whenever the soldier or analyst detects something they deem important, it triggers this recognition response," he said. "Only those chips that contain a feature that is relevant to the soldier at the time—a vehicle, or something out of the ordinary, somebody digging by the side of the road, those sorts of things—trigger this response of recognizing something important."
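ARL has not published its classifier, but the flagging logic Ries describes (keep only the chips that trigger a recognition response) can be sketched as a simple threshold over per-chip scores. The scores below are hand-made stand-ins for the output of an EEG classifier:

```python
def flag_targets(responses, threshold):
    """Return indices of chips whose evoked-response score exceeds a
    threshold, a stand-in for detecting the brain's recognition
    response to a relevant chip."""
    return [i for i, r in enumerate(responses) if r > threshold]

# Simulated per-chip scores: a flat baseline, with stronger responses
# at chips 7 and 42 (the hypothetical targets the analyst noticed).
scores = [0.5] * 150
scores[7] = 4.0
scores[42] = 4.0
print(flag_targets(scores, 2.0))  # [7, 42]
```

Only the flagged chips would then be routed back to the analyst for a closer look.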
The interface is still under development, so it’s not likely to be used soon. Ries’ team plans to work more eye movement into the system, study the effects of audio communications on someone using it, and account for artifacts such as a clenched jaw, since the EEG also picks up muscle activity. Algorithms will be needed to account for these and other factors.
"We want to create a solution where image analysts can quickly sort through large volumes of image data, while still maintaining a high level of accuracy, by leveraging the power of the neural responses of individuals," Ries said.
Kevin McCaney is a former editor of Defense Systems and GCN.