The Economist has an article, "Surveillance Technology: If Looks Could Kill," surveying the latest in security systems for detecting suspect behaviors. Says the Economist:
"Many people would like to develop intelligent computerised surveillance systems. The perceived need for such systems is stimulating the development of devices that can both recognise people and objects and also detect suspicious behaviour."
Devices like lie detectors and these new systems are what I call Suspicion Engines. This is a blossoming innovation domain, and one of the topics in our MIT Neurotechnology Ventures class. And just yesterday, I met with Patrick Sobalvarro, the MIT alum co-founder of Intellivid, which intelligently analyzes CCTV video streams in retail stores for security and theft prevention. Another MIT alum friend, Malay Kundu, is building up StopLift, also a vision system doing retail security, but specifically targeting the checkout line and expensive lossage problems such as "sweethearting".
sweethearting". Several other emergent example solutions are mentioned in the Economist article, including behavior-recognition systems, walking gait analytics, linger-loiter analytics, facial "micro-expression"
sensors, physiometrics such as skin temp and sweating and breathing rate, and more. Interestingly, they mention science fiction author Philip K. Dick’s “pre-crime” technology from his short story Minority Report. Another great SF story imagining similar future technology is
James L. Halperin's
Truth Machine about a world with really good lie detectors everywhere and the radical economic and socio-political possibilities surrounding such a transformative innovation. The Economist warns:
"To the historically minded it smacks of polygraphs, the so-called lie-detectors that rely on measuring physiological correlates of stress. Those have had a patchy and controversial history, fingering nervous innocents while acquitting practised liars. Supporters of hostile-intent systems argue that the computers will not be taking over completely, and human security agents will always remain the final arbiters."
Indeed, the human-in-the-loop is what really matters, and it is why I call these systems Suspicion Engines: they spotlight the most suspicious behavior at any given time, boosting the odds that we notice malicious intent and intervene.
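To make that idea a bit more concrete, here is a minimal sketch (in Python) of what a Suspicion Engine's core loop might look like: several detectors each contribute a score, the scores are fused and ranked, and only the top few events are spotlighted for a human agent to review. The detector names, weights, and thresholds below are purely illustrative assumptions on my part, not any real vendor's system.

```python
# A minimal sketch of the "Suspicion Engine" idea: fuse scores from several
# hypothetical detectors (gait, loitering, micro-expressions, physiometrics),
# rank the observed events, and surface only the top few to a human security
# agent, who remains the final arbiter. All names and weights are illustrative.

from dataclasses import dataclass, field

# Illustrative weights for each signal; a real system would learn or tune these.
WEIGHTS = {
    "gait_anomaly": 0.2,
    "loiter_time": 0.3,
    "micro_expression": 0.25,
    "physiometric_stress": 0.25,
}

@dataclass
class Event:
    camera_id: str
    timestamp: float
    signals: dict = field(default_factory=dict)  # detector name -> score in [0, 1]

    def suspicion_score(self) -> float:
        # Weighted sum of whatever signals this event happens to carry.
        return sum(WEIGHTS.get(name, 0.0) * value
                   for name, value in self.signals.items())

def spotlight(events, top_k=3):
    """Return the top_k most suspicious events for human review."""
    return sorted(events, key=lambda e: e.suspicion_score(), reverse=True)[:top_k]

if __name__ == "__main__":
    events = [
        Event("cam-01", 1000.0, {"loiter_time": 0.9, "gait_anomaly": 0.2}),
        Event("cam-02", 1001.5, {"micro_expression": 0.7, "physiometric_stress": 0.8}),
        Event("cam-03", 1002.0, {"loiter_time": 0.1}),
    ]
    # The engine only prioritizes; a human operator decides what, if anything, to do.
    for event in spotlight(events, top_k=2):
        print(event.camera_id, round(event.suspicion_score(), 2))
```

The point of the sketch is the division of labor: the machine narrows attention, the person judges.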