Intentionally Sabotaging Facial Recognition AI

Researchers at the University of Chicago recently published a paper (1MB PDF) outlining a novel attack against facial recognition systems, which they call Fawkes.

The general idea is that some facial recognition systems, such as Clearview.ai, scour the Internet for uploaded images and use them to generate facial recognition profiles for millions of people. If I upload a picture of myself to Facebook, for example, a facial recognition system could associate that picture with my identity and use it to distinguish my face from others more accurately.
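To make that concrete, such a system can be pictured as building a gallery of face embeddings per identity from scraped photos, then matching new images against that gallery. The sketch below is only illustrative: the `embed` function is a made-up stand-in (a fixed random projection) for the deep feature extractor a real system would use, and `enroll`/`identify` are hypothetical helper names, not any real product's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a real face-embedding model (in practice, a deep network
# that maps a face image to a feature vector). Here: a fixed random
# projection over 32x32 grayscale images, purely for illustration.
PROJ = rng.normal(size=(64, 32 * 32))

def embed(image: np.ndarray) -> np.ndarray:
    v = PROJ @ image.ravel()
    return v / np.linalg.norm(v)

# "Scraped" gallery: identity -> normalized mean embedding of their photos.
gallery: dict[str, np.ndarray] = {}

def enroll(name: str, photos: list[np.ndarray]) -> None:
    mean = np.mean([embed(p) for p in photos], axis=0)
    gallery[name] = mean / np.linalg.norm(mean)

def identify(photo: np.ndarray) -> str:
    # Pick the enrolled identity with the highest cosine similarity.
    probe = embed(photo)
    return max(gallery, key=lambda name: float(gallery[name] @ probe))

# Toy usage: enroll two identities, then match a noisy copy of a photo.
alice = [rng.random((32, 32)) for _ in range(3)]
bob = [rng.random((32, 32)) for _ in range(3)]
enroll("alice", alice)
enroll("bob", bob)
print(identify(alice[0] + 0.05 * rng.random((32, 32))))  # expected: alice
```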

But when a system is trained on data gathered from public sources, there is always the possibility of someone intentionally poisoning that data to reduce the system's effectiveness. In this case, Fawkes makes changes to a photo before it is uploaded that are generally imperceptible to human beings, but that will interfere with any facial recognition model trained on it.

According to the paper’s abstract, the researchers achieved relatively high success rates at foiling existing facial recognition models.

In this paper, we propose Fawkes, a system that helps individuals inoculate their images against unauthorized facial recognition models. Fawkes achieves this by helping users add imperceptible pixel-level changes (we call them “cloaks”) to their own photos before releasing them. When used to train facial recognition models, these “cloaked” images produce functional models that consistently cause normal images of the user to be misidentified. We experimentally demonstrate that Fawkes provides 95+% protection against user recognition regardless of how trackers train their models. Even when clean, uncloaked images are “leaked” to the tracker and used for training, Fawkes can still maintain an 80+% protection success rate. We achieve 100% success in experiments against today’s state-of-the-art facial recognition services. Finally, we show that Fawkes is robust against a variety of countermeasures that try to detect or disrupt image cloaks.
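Conceptually, a cloak is a small, bounded perturbation chosen so that the photo's feature-space representation drifts toward a different identity, poisoning any model trained on it. The toy sketch below reuses the stand-in embedding from the earlier sketch and a simple model-agnostic hill-climb; it illustrates the general idea only, not the authors' actual optimization (Fawkes targets real deep feature extractors).

```python
import numpy as np

rng = np.random.default_rng(0)

# Same toy stand-in for a feature extractor as in the earlier sketch.
PROJ = rng.normal(size=(64, 32 * 32))

def embed(image: np.ndarray) -> np.ndarray:
    v = PROJ @ image.ravel()
    return v / np.linalg.norm(v)

def cloak(image, target_image, eps=0.03, steps=200, lr=0.01):
    """Nudge `image` so its embedding moves toward a different ("target")
    identity, while keeping each pixel within +/- eps of the original --
    the imperceptibility budget. Illustrative only; not Fawkes' algorithm.
    """
    target = embed(target_image)
    x = image.copy()
    for _ in range(steps):
        # Estimate the directional derivative of cosine similarity to the
        # target along a random direction (model-agnostic, SPSA-style).
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        h = 1e-3
        g = (embed(x + h * d) @ target - embed(x - h * d) @ target) / (2 * h)
        x = x + lr * g * d  # ascend toward the target identity
        # Project back into the imperceptibility budget and valid pixels.
        x = np.clip(x, image - eps, image + eps)
        x = np.clip(x, 0.0, 1.0)
    return x
```

A model trained on cloaked photos then associates the user's identity with the wrong region of feature space, so normal, uncloaked photos of that user no longer match.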

Reflectacles’ Anti-Facial Recognition Eyewear

Reflectacles makes a number of glasses products designed to make it difficult for facial recognition technology to function correctly while you are wearing them.

For example, these IR Cloak sunglasses have lenses that block infrared light and are designed to make it difficult for facial recognition systems to take accurate eye measurements.

IR Cloak Sunglasses

Facial Recognition Can’t Cope With Face Masks

Thoughts and prayers.

The COVID-19 pandemic is causing a surge in people wearing face masks, and facial recognition companies are scrambling to keep up. Face masks cover up a significant portion of what facial recognition needs to identify and detect people — essentially threatening the future of a multimillion-dollar industry unless the technology can learn to recognize people beyond the coverings.