
Researchers figure out how to trick facial recognition systems

Neural networks may be the hot topic these days, but they're far from infallible.
Written by Chris Kanaracus, Contributor

The notion of machine-powered facial recognition predates today's actual technology by many years, thanks to robot POV shots in sci-fi films such as The Terminator and RoboCop. But while many advances have been made, facial recognition software is far from infallible, as researchers from Carnegie Mellon University recently found.

Here are the details from a report in Quartz:

Researchers showed they could trick AI facial recognition systems into misidentifying faces--making someone caught on camera appear to be someone else, or even unrecognizable as human. With a special pair of eyeglass frames, the team forced commercial-grade facial recognition software into identifying the wrong person with up to 100% success rates.

Modern facial recognition software relies on deep neural networks, a flavor of artificial intelligence that learns patterns from thousands to millions of pieces of information. When shown millions of faces, the software learns the idea of a face, and how to tell different ones apart.
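
To make that training step concrete, here is a minimal sketch, assuming PyTorch; the network shape, image sizes, and data below are illustrative stand-ins rather than any system mentioned in the report. A small convolutional network is shown labeled face images and gradually learns to map each one to an identity.

```python
# Minimal sketch (assuming PyTorch) of a face-identification network:
# a small convolutional net sees labeled face crops and learns to map
# each one to an identity. All names and sizes are illustrative.
import torch
import torch.nn as nn

class FaceIdNet(nn.Module):
    def __init__(self, num_identities: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 224x224 input -> 56x56 feature maps after two pooling layers
        self.classifier = nn.Linear(64 * 56 * 56, num_identities)

    def forward(self, x):
        x = self.features(x)                   # learned facial patterns
        return self.classifier(x.flatten(1))   # mapped to identity scores

model = FaceIdNet(num_identities=1000)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step on a batch of face crops and identity labels
# (random tensors stand in for real images here).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 1000, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```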

As the software learns what a face looks like, it leans heavily on certain details--like the shape of the nose and eyebrows. The Carnegie Mellon glasses don't simply cover those features; they are printed with a pattern that the computer perceives as the facial details of another person.
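
The general shape of such an attack can be sketched in a few lines, again assuming PyTorch; the model, glasses mask, and optimization settings below are hypothetical illustrations of the idea, not the CMU team's actual code. The face is held fixed, and gradient descent adjusts only the pixels inside an eyeglass-shaped region until the classifier reports a chosen target identity.

```python
# Sketch of an impersonation attack confined to an eyeglass-shaped region.
# `model`, `face`, `glasses_mask`, and `target_id` are assumed inputs.
import torch
import torch.nn.functional as F

def attack_with_glasses(model, face, glasses_mask, target_id,
                        steps=200, lr=0.05):
    """face: (1, 3, H, W) image in [0, 1]; glasses_mask: (1, 1, H, W),
    1 where the glasses sit, 0 elsewhere."""
    pattern = torch.zeros_like(face, requires_grad=True)  # printable pattern
    optimizer = torch.optim.Adam([pattern], lr=lr)
    target = torch.tensor([target_id])

    for _ in range(steps):
        # Overlay the pattern only inside the glasses region.
        adversarial = face * (1 - glasses_mask) + pattern.clamp(0, 1) * glasses_mask
        logits = model(adversarial)
        # Push the classifier toward the chosen target identity.
        loss = F.cross_entropy(logits, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (face * (1 - glasses_mask) + pattern.clamp(0, 1) * glasses_mask).detach()
```

Because the optimization touches only the masked region, the resulting pattern can be printed on real frames and worn, which is what lets an attack of this kind move from digitally altered pictures to a camera pointed at a person.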

In a test where researchers built a state-of-the-art facial recognition system, a white male test subject wearing the glasses appeared as actress Milla Jovovich with 87.87% accuracy.

The test wasn't theoretical--the CMU researchers printed the glasses on glossy photo paper and wore them in front of a camera in a scenario meant to simulate accessing a building guarded by facial recognition. The glasses cost $0.22 per pair to make.

The glasses also had a 100% success rate against the commercial facial recognition software Face++, although in this case they were digitally applied onto pictures, Quartz notes. CMU's research follows similar efforts by Google, OpenAI, and others, it adds.

While Quartz's report emphasized the security risks associated with the vulnerability of neural networks regarding facial recognition, there are other serious matters to consider as well, says Constellation Research VP and principal analyst Steve Wilson.

CMU's research "shows us a lot of our intuitions about computerized object recognition are false," he says. "When I say 'intuition,' I really mean we've learned our expectations for artificial intelligence from watching science fiction. What we've got is a huge set of technologies that are still evolving. There's an urgent public discourse about what it all means, from self-driving cars to airport security, but it's based on a simplistic understanding of how machines see."

Today's neural networks go beyond old-fashioned methods of object recognition -- tagging, extracting features and colors, then processing the information to identify various objects -- Wilson notes. Neural networks are touted as reflecting how the human brain works, with the ability to learn over time. That's what makes CMU's research so important.

"CMU comes along and says we can take a thing like a pair of glasses that doesn't even resemble the target face at all and the neural network triggers off something about the object," he says. "The neural network is latching onto non-obvious features. You can fool it, in ways that lay people could never have anticipated."

Wilson points to the incident in June, when a US man became the first person to die in a self-driving car accident. His Tesla's sensor system was unable to distinguish between a tractor trailer crossing the road and the bright sky, and the car drove into it at high speed with fatal results. "I'm not blaming neural networks per se, but I am concerned that these new vision systems are much harder to analyze and debug than the classical computer programs we've come to expect," Wilson says.

Neural networks are seen as a key component in getting self-driving cars to market and on the road en masse. The CMU research should give those eager to see this happen serious pause, and it makes the case that it's time for a more sophisticated public conversation about neural networks.

"If someone gets killed [in a self-driving car] and the case goes to court, can we get the system's logs and see what the car saw? That might not be possible," he says. "I reckon that policy-makers think it does work like that."

"Neural networks just don't have a step-by-step algorithm and audit log," he adds. "They don't work like that. Think about the Carnegie Mellon work. It's very surprising that something that does not look like a face can be interpreted as a face. A neural network-powered car barreling down the road, who knows what it's seeing? Therefore, the people that are writing the laws and discussing the possibilities need to have a more nuanced discussion about how artificial brains work and how they might fail."

Wilson recommends, at the very least, that test cases for neural machine vision be cast much more widely. "It's not enough to test how an autonomous car reacts to plastic dogs and plastic children," he says. "We must now realize that neural networks aren't looking for these things explicitly. They might react to combinations of optical inputs we would intuitively never think to test."

