Fawkes has already been downloaded nearly half a million times from the project website. One user has also built an online version, making it even easier for people to use (though Wenger won’t vouch for third parties using the code, warning: “You don’t know what’s happening to your data while that person is processing it”). There’s not yet a phone app, but there’s nothing stopping somebody from making one, says Wenger.

Fawkes may keep a new facial recognition system from recognizing you—the next Clearview, say. But it won’t sabotage existing systems that have already been trained on your unprotected images. The tech is improving all the time, however. Wenger thinks that a tool developed by Valeriia Cherepanova and her colleagues at the University of Maryland, one of the teams presenting at ICLR this week, might address this issue.

Called LowKey, the tool expands on Fawkes by applying perturbations to images based on a stronger kind of adversarial attack, one that also fools pretrained commercial models. Like Fawkes, LowKey is available online.
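The core move behind this kind of cloaking can be sketched in a few lines. The snippet below is a toy illustration, not the Fawkes or LowKey code: it uses a made-up linear "embedder" as a stand-in for a real face recognition network, and takes one small signed-gradient step that nudges a photo's embedding toward a decoy identity while changing no pixel by more than a tiny amount.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))      # stand-in embedder: 64 pixels -> 8-dim embedding
image = rng.uniform(size=64)      # a flattened 8x8 "photo"
decoy = W @ rng.uniform(size=64)  # embedding of some other face to mimic

def cloak(img, eps=0.02):
    """One signed-gradient step toward the decoy embedding, capped at eps per pixel."""
    grad = W.T @ (W @ img - decoy)  # gradient of the distance-to-decoy loss
    return img - eps * np.sign(grad)

cloaked = cloak(image)
# The cloaked photo's embedding moves closer to the decoy identity,
# while no pixel changes by more than eps.
```

Real cloaking tools iterate many such steps against deep networks and tune the perturbation to stay imperceptible, but the principle is the same: small pixel changes, large embedding changes.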

Ma and his colleagues have added an even bigger twist. Their approach, which turns images into what they call unlearnable examples, effectively makes an AI ignore your selfies entirely. “I think it’s great,” says Wenger. “Fawkes trains a model to learn something wrong about you, and this tool trains a model to learn nothing about you.”

Images of me scraped from the web (top) are turned into unlearnable examples (bottom) that a facial recognition system will ignore. (Credit: Daniel Ma, Sarah Monazam Erfani and colleagues)

Unlike Fawkes and its followers, unlearnable examples are not based on adversarial attacks. Instead of introducing changes to an image that force an AI to make a mistake, Ma’s team adds tiny changes that trick an AI into ignoring the image during training. Shown the image later, the model’s evaluation of what’s in it will be no better than a random guess.
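The trick, sometimes called error-minimizing noise, can be illustrated with a toy model. This is a conceptual sketch, not Ma's actual method: with a linear stand-in for a recognizer, the minimum-norm noise that drives the training loss on one image to zero can be computed in closed form. Once the loss is zero, gradient descent gets no update signal from that image, so the model learns nothing from it. The real approach alternates noise optimization with model training and bounds the noise size.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=16)   # stand-in recognizer weights
x = rng.normal(size=16)   # a flattened "selfie"
y = 1.0                   # its identity label

def grad(w, x, y):
    """Gradient of the squared-error loss L = (w.x - y)^2 / 2 with respect to w."""
    return (w @ x - y) * x

# Minimum-norm noise solving w.(x + delta) = y, i.e. zero loss on this image.
delta = (y - w @ x) * w / (w @ w)
x_unlearnable = x + delta

clean_update = np.linalg.norm(grad(w, x, y))              # nonzero: the model learns
cloaked_update = np.linalg.norm(grad(w, x_unlearnable, y))  # ~0: nothing to learn
```

Because the perturbed image already looks "solved" to the trainer, it contributes no gradient, which is exactly the sense in which the example is unlearnable.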

Unlearnable examples may prove more effective than adversarial attacks, since they cannot be trained against. The more adversarial examples an AI sees, the better it gets at recognizing them. But because Ma and his colleagues stop an AI from training on images in the first place, they claim this won’t happen with unlearnable examples.

Wenger is resigned to an ongoing battle, however. Her team recently noticed that Microsoft Azure’s facial recognition service was no longer spoofed by some of their images. “It suddenly somehow became robust to cloaked images that we had generated,” she says. “We don’t know what happened.”

Microsoft may have changed its algorithm, or the AI may simply have seen so many images from people using Fawkes that it learned to recognize them. Either way, Wenger’s team released an update to their tool last week that works against Azure again. “This is another cat-and-mouse arms race,” she says.

For Wenger, this is the story of the internet. “Companies like Clearview are capitalizing on what they perceive to be freely available data and using it to do whatever they want,” she says.

Regulation might help in the long run, but that won’t stop companies from exploiting loopholes. “There’s always going to be a disconnect between what is legally acceptable and what people actually want,” she says. “Tools like Fawkes fill that gap.”

“Let’s give people some power that they didn’t have before,” she says. 
