An Interesting Review of ChatGPT and Other AIs

I’ve written in the past about limitations of AI from a number of perspectives. In Effects of the Mistruths of Data on Model Output I discuss how the data fed to a model must necessarily affect its output in a number of ways, including bias and other unwanted effects. Considering the Four Levels of Intelligence Management explains why it’s not possible for an AI to approach human intelligence today. In Fooling Facial Recognition Software I provide a detailed discussion of why it’s so easy to fool certain types of AI-powered applications. And in Automation and the Future of Human Employment you learn why some types of occupations are reasonably safe from AI. However, I haven’t really done a detailed investigation of AIs like ChatGPT that seem almost human-like in their understanding, yet fall remarkably short in many simple areas. On Artifice and Intelligence is one of the more detailed analyses I’ve found to date on the subject, and what it reveals will surprise you. There are a lot of simple problems that ChatGPT and other Large Language Models (LLMs) can’t solve.

What I found interesting in the article is that the author, Shlomi Sher, was able to show that one area where an AI should be strong, math, actually isn’t strong at all. He talks about Euclid’s proofs. The artifice is that ChatGPT 4 and other AIs can tell you all about prime numbers and even provide seemingly creative output about them. However, depending on how you phrase some basic questions, ChatGPT 4 gets the answer either right or wrong, even though the answer is pretty much apparent to any human who knows what prime numbers are. What I liked most about the article is that the author takes time to explain why humans can understand the problem, but the AI can’t.

If it seems as if I have a continuing desire to dissuade others from anthropomorphizing AIs, I most certainly do. When it comes to AI, it’s all about the math and nothing more. However, that doesn’t mean that AIs lack functionality and ability as tools to augment human endeavors. It’s likely that the use of AIs will continue to increase over time. In addition, I think that as we better understand precisely how AIs work, we’ll also come to realize that they’re amazing tools, but most definitely not humans in the making. Let me know your thoughts on ChatGPT at [email protected].

Effects of the Mistruths of Data on Model Output

A number of the books that Luca and I have written together, or that I have written on my own, including Artificial Intelligence for Dummies, 2nd Edition, Machine Learning for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning Security Principles, talk about the five mistruths of data: commission, omission, bias, perspective, and frame of reference. Of these five mistruths, the one that receives the most attention is bias, but they’re all important because they all affect how any data science model you build will perform. Because the data used to create the model isn’t free of mistruths, the model can’t perform as expected in many situations. Consequently, the assertions in the article LGBTQ+ bias in GPT-3 don’t surprise me at all. The data used to create the model is flawed, so the output is flawed as well.

I chose the article in question as a reference because the author takes the time to point out a problem with ever generating a perfect model: the constant change in human perspective. Words that were considered toxic in the past are no longer considered toxic today, but new words have taken their place. Even if a model were to somehow escape bias today, it would be biased tomorrow due to the mistruth of perspective.

So, why have I been using the term mistruth instead of the term lie? A lie is untrue information knowingly passed off as true in order to avoid responsibility, to harm others in some way, or to obtain personal gain. However, humans use mistruths all of the time to reduce the potential for arguments, to save someone’s ego, or simply because the information that the person has is inaccurate. A mistruth doesn’t have the intent of deceiving another for personal gain, but it’s still not true. So, when someone asks, “Do these pants make me look fat?” and another person tactfully answers, “They make you look voluptuous,” the statement could be true or a mistruth, but it’s said to keep an argument at bay and make the other person feel good about themselves. However, machine learning algorithms have no concept of this interplay, and a model created using such statements will be biased.

Anthropomorphizing machine learning models doesn’t change the fact that they’re essentially statistically based mathematical models of data with no understanding of anything built into them. So, true or not, the model sees all data as being the same, and the only solution to the problem of bias is to clean the data. However, the human expectation after getting emotionally attached to their AI is that the AI will somehow just know when something is hurtful. Articles like What is the Proper Pronoun for GPT-4? point to the problem in the fact that someone would ask the question at all. Interactions with AI have taken on a harmful aspect because humans see these models as sentient when they most certainly aren’t. The issue will come to a head when people try to use the bad advice obtained from their AI as a defense in court. Personally, I see the continued use of “it” as essential to remind people that no matter how human a model may seem, it’s still just a model.
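
To give you a sense of what “cleaning the data” can look like in practice, here’s a minimal sketch in Python. The dataset, the column names, and the flagged-term list are all made up for illustration; a real project would work with a far larger corpus and a lexicon that gets reviewed constantly, precisely because of the mistruth of perspective.

```python
import pandas as pd

# Hypothetical training data: free-form text plus a sentiment label.
df = pd.DataFrame({
    "text": [
        "The service was excellent",
        "This product is garbage",
        "They were slow but polite",
        "Absolutely awful experience",
    ],
    "label": ["positive", "negative", "positive", "negative"],
})

# Step 1: look for label imbalance -- a skewed label distribution is one
# simple, measurable sign that the data may not represent reality well.
print(df["label"].value_counts(normalize=True))

# Step 2: drop rows containing terms flagged as problematic. The term list
# is a placeholder, not a real lexicon, and it would need constant review
# because what counts as toxic shifts over time.
flagged_terms = ["garbage"]
mask = df["text"].str.contains("|".join(flagged_terms), case=False)
clean_df = df[~mask].reset_index(drop=True)

print(clean_df)
```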

It’s important to understand that I see AI as an amazing tool that will only get more amazing as data scientists, developers, and others add to its potential by doing things like adding more memory. Although I don’t agree that any machine learning model can be someone’s friend, articles like How to Use ChatGPT in Daily Life? do make a strong argument for using machine learning models as tools to enhance human capability. However, it is a person who thought up these uses, not the machine learning model. Machine learning models will remain limited because they can’t understand the data they manipulate. In fact, articles like Why ChatGPT Won’t Replace Coders Just Yet point out just how limited machine learning models remain. However, I do think that machine learning models will get expanded by humans to perform new tasks, as described in articles like How To Build Your Own Custom ChatGPT With Custom Knowledge Base. Of course, if that custom knowledge base is biased in any way, then the output from the new model will also be biased.
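
A common way to build that kind of custom model, described in articles like the one above, is to retrieve the most relevant passages from your own documents and hand them to the model as context. The sketch below shows only the retrieval half of that idea using TF-IDF from scikit-learn; the documents and question are hypothetical, and a production system would typically use embeddings and an actual LLM call rather than simply printing the prompt.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical custom knowledge base: a few in-house documents.
documents = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support hours are 9 a.m. to 5 p.m. Eastern, Monday through Friday.",
    "Shipping to Canada takes five to seven business days.",
]

question = "How long do I have to return an item?"

# Index the documents and the question in the same vector space.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
question_vector = vectorizer.transform([question])

# Pick the document most similar to the question.
scores = cosine_similarity(question_vector, doc_vectors)[0]
best_doc = documents[scores.argmax()]

# The retrieved passage would then go into the prompt sent to the language
# model. If the knowledge base is biased, answers built on it will be too.
prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
print(prompt)
```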

It’s important to know that AI is moving forward and that it will be extended to do even more in assisting humans to realize their full potential. However, it’s also important to know that the five mistruths will continue to be a problem because machine learning models are unable to understand the data used to train them and to provide knowledge bases for their output. Realistic expectations will help improve AI as a tool that augments human capabilities and helps us achieve amazing things in the future. Let me know your thoughts on machine learning bias at [email protected].

Fooling Facial Recognition Software

One of the points that Luca and I made in Artificial Intelligence for Dummies, 2nd Edition, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition is that AI is all about algorithms and that it can’t actually think. An AI appears to think due to clever programming, but the limits of that programming quickly become apparent under testing. In the article, U.S. Marines Outsmart AI Security Cameras by Hiding in a Cardboard Box, the limits of AI are almost embarrassingly apparent because the AI failed to catch even one of the Marines. In fact, it doesn’t take a Marine to outsmart an AI; the article, This Clothing Line Tricks AI Cameras Without Covering Your Face, tells how to do it and look fashionable at the same time. Anthropomorphizing AI, making it seem like more than it is, is one sure path to disappointment.

My book, Machine Learning Security Principles, points out a wealth of specific examples of AI being fooled as part of an examination of machine learning-based security. Some businesses now rely on facial recognition as part of their security strategy, in the false hope that it’s reliable and that it will provide an alert in all cases. As recommended in my book, machine learning-based security is just one tool, and it requires a human to back it up. The article, This Simple Technique Made Me Invisible to Two Major Facial Recognition Systems, discusses just how absurdly easy it is to fool facial recognition software if you don’t play by the rules; the rules being what the model developer expected someone to do.

The problems become compounded when local laws ban the use of facial recognition software due to its overuse by law enforcement in potentially less than perfect circumstances. There are reports of false arrests that could possibly have been avoided if the human doing the arresting had checked to verify the identity of the person in question. There are lessons in all this for businesses, too. Using facial recognition should be the start of a more intensive process to verify a particular action, rather than just assuming that the software is going to be 100% correct.
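
Here’s one way to think about that “more intensive process” in code. The thresholds and the review step below are hypothetical placeholders, not recommendations; the point is simply that a match score should trigger human verification rather than an automatic action.

```python
# Hypothetical sketch: treat a facial-recognition match score as the start
# of a verification process, not as the final word. The threshold values
# and the review step are placeholders for illustration only.

def handle_match(match_score: float, person_id: str) -> str:
    """Decide what to do with a facial-recognition match score (0.0 to 1.0)."""
    if match_score < 0.60:
        return "ignore: score too low to act on"
    if match_score < 0.90:
        # Ambiguous match: require a human to compare images or check ID.
        return f"queue {person_id} for human review before any action"
    # Even a high-confidence match only triggers a secondary check, such as
    # asking for identification, never an automatic arrest or denial.
    return f"high-confidence match for {person_id}: verify identity in person"

print(handle_match(0.75, "visitor-1042"))
```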

Yes, AI, machine learning, and deep learning applications can do amazing things today, as witnessed by the explosion in the use of ChatGPT for all kinds of tasks. It’s a given that security professionals, researchers, data scientists, and various managers will increasingly use these technologies to reduce their workload and improve the overall consistency of the tasks, including security, that these applications are good at performing. However, even as the technologies improve, people will continue to find ways to overcome them and cause them to perform in unexpected ways. In fact, it’s a good bet that the problems will increase for the foreseeable future as the technologies become more complex (hence, more unreliable). Monitoring the results of any smart application is essential, which makes humans an essential part of any solution. Let me know your thoughts about facial recognition software and other security issues at [email protected].

Making Algorithms Useful

This is an update of a post that originally appeared on December 2, 2015.

Writing about machine learning and deep learning in my various books has been interesting because it turns math into something more than a way to calculate. Machine learning is about having inputs and a desired result, and then asking the machine to create an algorithm that will produce the desired result from the inputs. It’s about generalization: you know specific inputs and specific results, but you want an algorithm that will provide similar results for similar, previously unseen inputs (a minimal code sketch of this idea appears after the list below). This is more than just math. In fact, there are five schools of thought (tribes) regarding machine learning algorithms that Luca and I introduce you to in books such as Machine Learning Security Principles, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition:

  • Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.
  • Connectionists: The origin of this tribe is in neuroscience. This group relies on backpropagation to solve problems.
  • Evolutionaries: The origin of this tribe is in evolutionary biology. This group relies on genetic programming to solve problems.
  • Bayesians: The origin of this tribe is in statistics. This group relies on probabilistic inference to solve problems.
  • Analogizers: The origin of this tribe is in psychology. This group relies on kernel machines to solve problems.
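
Whatever tribe an algorithm comes from, the generalization idea mentioned above looks roughly like the following sketch. The data is made up, and a decision tree simply stands in for whichever learner a given tribe would favor:

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up inputs (hours studied, hours slept) and desired results (pass/fail).
X_train = [[1, 4], [2, 8], [6, 7], [8, 6], [3, 5], [9, 8]]
y_train = ["fail", "fail", "pass", "pass", "fail", "pass"]

# The learner builds its own decision rule from the examples...
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# ...and that rule generalizes to inputs it has never seen before.
print(model.predict([[7, 7], [1, 6]]))  # e.g. ['pass' 'fail']
```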

Of course, the problem with any technology is making it useful. I’m not talking about useful in a theoretical sense, but useful in a way that affects everyone. In other words, you must create a need for the technology so that people will continue to fund it. Machine learning and deep learning are already part of many of the things you do online. For example, when you go to Amazon to buy a product and Amazon suggests other products that you might want to add to your cart, you’re seeing the result of machine learning. Part of the content of our book chapters is devoted to pointing out these real-world uses for machine learning.
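
Amazon’s actual recommendation system is proprietary and far more sophisticated, but the basic “customers who bought this also bought that” effect can be sketched with a simple item-to-item similarity calculation over a made-up purchase matrix:

```python
import numpy as np

# Hypothetical purchase history: rows are customers, columns are products,
# and a 1 means that customer bought that product.
purchases = np.array([
    [1, 1, 0, 0],   # customer A bought products 0 and 1
    [1, 1, 1, 0],   # customer B bought products 0, 1, and 2
    [0, 1, 1, 0],   # customer C bought products 1 and 2
    [0, 0, 1, 1],   # customer D bought products 2 and 3
])
products = ["camera", "memory card", "tripod", "lens cloth"]

# Cosine similarity between product columns: products bought by the same
# customers end up with high similarity scores.
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)

# Recommend the product most similar to the one in the cart (excluding itself).
in_cart = 0  # the camera
scores = similarity[in_cart].copy()
scores[in_cart] = 0
print("You might also like:", products[int(scores.argmax())])
```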

As I’ve written new books and updated existing ones, I’ve seen an almost magical progression in the capabilities of machine learning and deep learning applications such as ChatGPT (Chat Generative Pre-trained Transformer), which can produce some pretty amazing output.

Some of these applications, such as Siri and Alexa, continue to learn as you use them. The more you interact with them, the better they know you and the better they respond to your needs. The algorithms that these machine learning systems create get better and better as the database of your specific input grows. The algorithms are tuned to you specifically, so the experience one person has is different from an experience another person will have, even if the two people ask the same question.
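
Assistants like Siri and Alexa don’t publish their internals, so take this only as a rough illustration of the idea: a model that updates incrementally as new interactions arrive rather than being retrained from scratch. The interaction log is invented, and scikit-learn’s partial_fit stands in for whatever the real systems actually do:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Made-up interaction log: what the user said and what they actually wanted.
requests = ["play some jazz", "set a timer for ten minutes", "play my workout mix"]
intents = ["music", "timer", "music"]

vectorizer = HashingVectorizer(n_features=2**8)
model = SGDClassifier(random_state=0)

# Initial training on the first few interactions.
model.partial_fit(vectorizer.transform(requests), intents, classes=["music", "timer"])

# Later interactions update the same model instead of retraining from scratch,
# which is one simple way a system can keep adapting to a specific user.
model.partial_fit(vectorizer.transform(["set another timer"]), ["timer"])

print(model.predict(vectorizer.transform(["play something relaxing"])))
```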

Machine learning is a big mystery to many people today, while other people have gained enough experience to have strong opinions about it. Because I continue to write new machine learning/deep learning books and update others, it would be interesting to hear your questions about machine learning and deep learning. After all, I’d like to tune the content of my books to meet the most needs that I can. Where do you see this technology headed? What confuses you about it? Talk to me at [email protected].