As good as surveillance technology has gotten at some tasks, computers still frequently fail at telling the difference between a threat and a tumbleweed. As the Department of Homeland Security found with its failed effort to build a "virtual fence" along stretches of the US border, automated sensors can generate a very high rate of false alarms because they cannot reliably distinguish cars and people from animals. In Israel, the military has had to string electrified barbed wire just to keep wild boars from triggering sensor alarms.
But depending on people alone to do the watching isn’t the answer either. Even with the help of cameras and portable radar systems such as the Cerberus sensor towers deployed by the US military in Afghanistan, nearly half of the potential threats slip by—mostly because of the limits of human vision and fatigue associated with constant scanning of the screen or the horizon.
The Defense Advanced Research Projects Agency (DARPA) set out to find an answer to this problem in 2008 when it launched the Cognitive Technology Threat Warning System (CT2WS) program, an effort to magnify the abilities of a human lookout to achieve the perfect early warning system for soldiers in the field. Now, that program has completed testing of the product of its research: a sensor system that uses the operator’s brain activity as a filter.
[Image: The B-Alert x24 wireless EEG "cap" from Advanced Brain Monitoring. A similar device is used to track the brainwaves of CT2WS operators.]

Developed by a team of researchers from HRL Laboratories, Quantum Applied Science and Research, Advanced Brain Monitoring, and the University of California San Diego, the CT2WS system combines a 120-megapixel wide-field digital video camera, image-processing software, and an electroencephalogram (EEG) "cap" worn by the operator. Scanning a 120-degree arc with its camera, the system presents up to 10 images per second to the sensor operator while monitoring for a specific type of brain activity: the P-300 brainwave, which is associated with the brain processing images and sounds. Even in those short glimpses, the human brain can perceive things like motion and shapes that trigger a cognitive response.
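The flagging step can be pictured as a threshold test on the operator's evoked response: for each image shown, look at the EEG in the window where a P-300 would appear (roughly 250–500 ms after the stimulus) and flag the image if the signal spikes above its baseline. A minimal sketch of that idea, with entirely hypothetical data and thresholds (the actual CT2WS classifier is not public):

```python
import numpy as np

def flag_images(epochs, fs=256, window=(0.25, 0.5), threshold=2.0):
    """Flag images whose EEG epoch shows a P-300-like spike.

    epochs    : array (n_images, n_samples), one EEG epoch per image,
                time-locked to the moment the image was shown (hypothetical)
    fs        : sampling rate in Hz
    window    : seconds after stimulus where a P-300 typically peaks
    threshold : z-score above which the image is flagged
    """
    start, stop = int(window[0] * fs), int(window[1] * fs)
    # z-score each epoch against its own pre-window baseline
    baseline = epochs[:, :start]
    z = (epochs - baseline.mean(axis=1, keepdims=True)) / (
        baseline.std(axis=1, keepdims=True) + 1e-9)
    # mean amplitude inside the P-300 window, one score per image
    scores = z[:, start:stop].mean(axis=1)
    return np.flatnonzero(scores > threshold)

# Toy demo: 10 noise-only epochs, plus a synthetic P-300-like bump on image 3
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 1.0, size=(10, 256))
epochs[3, 64:128] += 5.0   # spike lands inside the 250-500 ms window
print(flag_images(epochs))  # only image 3 is flagged
```

The point of the sketch is the pipeline shape, not the signal processing: the camera supplies a stream of candidate frames, the EEG supplies one score per frame, and only the frames whose score crosses the threshold ever reach the operator's attention.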
The spikes in brain activity detected by the system don't necessarily represent something the operator is consciously aware of. The brain filters out much of this information before it reaches consciousness, so many of these spikes are effectively ignored; a person doesn't follow up on everything their own visual system finds interesting. The system is essentially an automated way of flagging every image the visual system thinks may contain something unusual and making sure the operator becomes aware of it.
The advantage of using human feedback is that the system can detect "threats that are context-specific and operator specific," Dr. Deepak Khosla, senior scientist in HRL's Information System Sciences Laboratory and program manager for CT2WS, said in a statement about the program. While the image-processing algorithms might not classify a bird taking flight or moving vegetation as a threat, a human in the loop might respond otherwise, because those could be signs of some other activity.
Over the past four months, DARPA's team tested the CT2WS system in desert, tropical, and "open" terrain. Without the operator wired in, the system's own cognitive visual-processing algorithms generated 810 false alarms per hour out of 2,304 "target events" per hour. But with a human wired into the system via the EEG cap, the false-alarm rate plummeted to five per hour, and the system successfully identified 91 percent of the "real" threats introduced during the test. Adding a commercial portable radar to the system pushed the detection rate to 100 percent.
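To put those reported figures in perspective, the drop from 810 to five false alarms per hour works out to a 162-fold reduction, with the human-in-the-loop filter eliminating over 99 percent of the spurious alerts:

```python
# False-alarm figures reported for the CT2WS field tests
machine_only = 810   # false alarms per hour, algorithms alone
with_operator = 5    # false alarms per hour, operator wired in via EEG cap

reduction_factor = machine_only / with_operator
pct_eliminated = (1 - with_operator / machine_only) * 100
print(f"{reduction_factor:.0f}x fewer false alarms "
      f"({pct_eliminated:.1f}% eliminated)")
# -> 162x fewer false alarms (99.4% eliminated)
```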