Fooling Facial Recognition Software

One of the points that Luca and I made in Artificial Intelligence for Dummies, 2nd Edition; Algorithms for Dummies, 2nd Edition; Python for Data Science for Dummies; and Machine Learning for Dummies, 2nd Edition is that AI is all about algorithms and that it can’t actually think. An AI appears to think due to clever programming, but the limits of that programming quickly become apparent under testing. In the article, U.S. Marines Outsmart AI Security Cameras by Hiding in a Cardboard Box, the limits of AI are almost embarrassingly apparent because the AI failed to catch even one of the Marines. In fact, it doesn’t take a Marine to outsmart an AI: the article, This Clothing Line Tricks AI Cameras Without Covering Your Face, tells how to do it and look fashionable at the same time. Anthropomorphizing AI, making it seem like more than it is, is one sure path to disappointment.

My book, Machine Learning Security Principles, points out a wealth of specific examples of AI being fooled as part of an examination of machine learning-based security. Some businesses now rely on facial recognition as part of their security strategy in the false hope that it’s reliable and will provide an alert in every case. As recommended in my book, machine learning-based security is just one tool, and it requires a human to back it up. The article, This Simple Technique Made Me Invisible to Two Major Facial Recognition Systems, discusses just how absurdly easy it is to fool facial recognition software if you don’t play by the rules; the rules being what the model developer expected someone to do.
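
To make that concrete, here’s a minimal sketch of what treating a face match as one signal among several, rather than a final verdict, might look like. The function, thresholds, and return values are stand-ins of my own invention, not any particular product’s API:

```python
# Hedged sketch: a face match drives a decision only at high confidence;
# uncertain matches escalate to a human instead of guessing either way.
# recognize_face() and both thresholds are illustrative assumptions.

REVIEW_THRESHOLD = 0.90   # below this, a human must look
REJECT_THRESHOLD = 0.50   # below this, deny outright

def recognize_face(image):
    # Stand-in for a real model; returns (best_match_id, confidence).
    return ("employee_42", 0.72)

def admit_decision(image):
    person_id, confidence = recognize_face(image)
    if confidence >= REVIEW_THRESHOLD:
        return ("admit", person_id)
    if confidence >= REJECT_THRESHOLD:
        return ("escalate_to_human", person_id)  # a person backs up the model
    return ("deny", None)

print(admit_decision("gate_camera_frame.jpg"))  # ('escalate_to_human', 'employee_42')
```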

The problems become compounded when local laws ban the use of facial recognition software due to its overuse by law enforcement in potentially less than perfect circumstances. There are reports of false arrests that could have been avoided if the human doing the arresting had taken a moment to verify the identity of the person in question. There are lessons in all this for a business, too. Facial recognition should be the start of a more intensive process to verify a particular action, rather than an excuse to assume that the software is going to be 100% correct.
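
Here’s one way such a layered process might look in code. This is only a sketch; every function below is a stand-in for whatever badge systems, guards, and policies a business actually has in place:

```python
# Hedged sketch: the face match only opens the door to further checks.
# Every function below is an illustrative stand-in for a real system.

def face_match_score(image, claimed_id):
    # Stand-in for a recognition model's similarity score (0.0 to 1.0).
    return 0.94

def badge_is_valid(badge_id, claimed_id):
    # Stand-in for a lookup against the badge/ID database.
    return True

def request_human_approval(claimed_id, action):
    # Stand-in for paging a guard or supervisor for a live decision.
    return True

def authorize(image, badge_id, claimed_id, action, sensitive=False):
    """The face match starts the process; it never ends it on its own."""
    if face_match_score(image, claimed_id) < 0.90:
        return False          # weak match: stop right here
    if not badge_is_valid(badge_id, claimed_id):
        return False          # second factor failed
    if sensitive:
        return request_human_approval(claimed_id, action)  # human sign-off
    return True

print(authorize("lobby_cam.jpg", "badge_1138", "employee_42",
                "open_server_room", sensitive=True))  # True
```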

Yes, AI, machine learning, and deep learning applications can do amazing things today, as witnessed by the explosion in the use of ChatGPT for all kinds of tasks. It’s a given that security professionals, researchers, data scientists, and people in various managerial roles will increasingly use these technologies to reduce their workload and improve the consistency of the many tasks, including security, that these applications perform well. However, even as the technologies improve, people will continue to find ways to overcome them and cause them to behave in unexpected ways. In fact, it’s a good bet that the problems will increase for the foreseeable future as the technologies become more complex (and hence, less reliable). Monitoring the results of any smart application is essential, which makes humans an essential part of any solution.
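
One low-cost way to put a human in the loop is to log every decision the model makes so that a person can audit the questionable ones later. Here’s a minimal sketch; the CSV layout and the "admit"/"deny" labels are my own assumptions for illustration, not any standard:

```python
# Hedged sketch: record every model decision, then surface the shaky ones.
# The file format and decision labels are illustrative assumptions.
import csv
import datetime

def log_decision(path, person_id, confidence, decision):
    # Append one timestamped record per model decision.
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.datetime.now().isoformat(),
            person_id, confidence, decision])

def flag_for_review(path, threshold=0.9):
    # Return "admit" decisions made at low confidence for a human recheck.
    with open(path, newline="") as f:
        return [row for row in csv.reader(f)
                if row[3] == "admit" and float(row[2]) < threshold]

log_decision("decisions.csv", "employee_42", 0.72, "admit")
print(flag_for_review("decisions.csv"))  # flags the low-confidence admit
```

Even a trail this simple gives a person something to check when the model gets it wrong. Let me know your thoughts about facial recognition software and other security issues at [email protected].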

Author: John

John Mueller is a freelance author and technical editor. He has writing in his blood, having produced 123 books and over 600 articles to date. The topics range from networking to artificial intelligence and from database management to heads-down programming. Some of his current offerings include topics on machine learning, AI, Python programming, Android programming, and C++ programming. His technical editing skills have helped more than 70 authors refine the content of their manuscripts. John also provides a wealth of other services, such as writing certification exams, performing technical edits, and writing articles to custom specifications. You can reach John on the Internet at [email protected].