A New AI Tool to Thwart Facial Recognition Technologies
A new face-cloaking tool claims 100 percent effectiveness against facial recognition software from Amazon, Microsoft, and Face++
As facial recognition technologies have developed over the past few years, privacy has steadily eroded with them. Since the outbreak of the COVID-19 pandemic, the technology has grown even more prevalent, but not without controversy. It is everywhere, from security lines in airports to street corners to local drug stores. While officials argue that facial recognition is necessary to identify people who might be at risk of a COVID-19 infection, the general public often sees it differently—it feels invasive. Further, no one knows whether this heightened surveillance will continue after the pandemic emergency, or what will be done with the facial data collected. Fortunately, a team of researchers at the University of Chicago has some good news to offer with their invention: Fawkes, an AI-based cloaking app.
The software is named after the Guy Fawkes masks worn by revolutionaries in the V for Vendetta comic book and film (the historical Fawkes was involved in the infamous Gunpowder Plot of 1605, a failed conspiracy by a group of provincial English Catholics to assassinate the Protestant King James I of England by blowing up the House of Lords before the State Opening of Parliament). Unlike its namesake, the Fawkes software works quietly: it uses artificial intelligence to subtly and almost imperceptibly alter photos so that they trick facial recognition systems. The algorithm, created by researchers in the SAND Lab at the University of Chicago, is an open-source software tool that can be downloaded for free and used on one's desktop.
Fawkes doesn't make one invisible to facial recognition. Instead, the software makes changes at the pixel level that will trip a facial recognition system's registry but remain indistinguishable to the naked eye. The team calls the results "cloaked" photos, and they can be used like any other images. Ben Y. Zhao, a Neubauer Professor of Computer Science at the University of Chicago, said that the intention behind the project was to corrupt the resource facial recognition systems rely on to function, i.e., databases of faces scraped from social media.
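Fawkes itself optimizes its perturbations against a face feature extractor, which is more involved than can be shown here. As a simplified, hypothetical sketch of the underlying idea—a pixel-level change bounded tightly enough to be invisible to the eye, yet present in every pixel a model ingests—consider the following (the `cloak` function, the noise budget `epsilon`, and the random-noise strategy are illustrative assumptions, not the actual Fawkes algorithm):

```python
import numpy as np

def cloak(image: np.ndarray, epsilon: float = 0.03, seed: int = 0) -> np.ndarray:
    """Toy illustration of a bounded pixel-level perturbation.

    NOTE: this is NOT the real Fawkes method. Fawkes optimizes its
    perturbation against a feature extractor; here we simply add random
    noise clipped to an L-infinity budget `epsilon`, to show why such
    small per-pixel changes are invisible to a human viewer while still
    altering the data a recognition model sees.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Keep pixel values in the valid [0, 1] range after perturbing.
    return np.clip(image + noise, 0.0, 1.0)

# A stand-in 64x64 RGB "photo" with values in [0, 1].
photo = np.full((64, 64, 3), 0.5)
cloaked = cloak(photo)

# The change stays within the imperceptibility budget...
assert np.max(np.abs(cloaked - photo)) <= 0.03
# ...yet the image a model ingests is no longer pixel-identical.
assert not np.array_equal(photo, cloaked)
```

The real system chooses the perturbation adversarially so that the face's *feature-space* representation shifts toward a different identity, which is why cloaked training photos poison the resulting model rather than merely adding noise.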
According to a paper to be presented this month at the USENIX Security Symposium, the SAND Lab team asserts that Fawkes is 100 percent successful against state-of-the-art facial recognition services from Microsoft (Azure Face), Amazon (Rekognition), and Face++ by Chinese tech giant Megvii. Existing facial recognition technologies can still read a user's face and identify her gender and facial features, but they will register the face as a new person rather than compiling data on her real identity. "It's about giving individuals agency," said Emily Wenger, a third-year Ph.D. student at the University of Chicago and co-leader of the project with first-year Ph.D. student Shawn Shan. "While it can't disrupt existing models already trained on unaltered images downloaded from the internet, publishing cloaked images can eventually erase a person's online 'footprint,' rendering future models incapable of recognizing that individual."
This is not the first attempt to thwart facial recognition software. At Fudan University in China, scientists have been working on an invisibility mask that uses tiny infrared LEDs wired to the inside of a baseball cap to project dots of light onto the wearer's face. Meanwhile, Project KOVR designed a hooded anti-surveillance coat that works on the same principle as Faraday bags, blocking the electromagnetic signals that surveillance systems rely on.