An Interesting Review of ChatGPT and Other AIs

I’ve written in the past about the limitations of AI from a number of perspectives. In Effects of the Mistruths of Data on Model Output I discuss how the data fed to a model must necessarily affect its output in a number of ways, including bias and other unwanted effects. Considering the Four Levels of Intelligence Management explains why it’s not possible for an AI to approach human intelligence today. In Fooling Facial Recognition Software I provide a detailed discussion of why it’s so easy to fool certain types of AI-powered applications. And you can learn why some types of occupations are reasonably safe from AI in Automation and the Future of Human Employment. However, I haven’t really done a detailed investigation of AIs like ChatGPT that seem almost human-like in their understanding, yet fall remarkably short in many simple areas. On Artifice and Intelligence is one of the more detailed analyses I’ve found to date on the subject, and what it reveals will surprise you. There are a lot of simple problems that ChatGPT and other Large Language Models (LLMs) can’t solve.

What I found interesting in the article is that the author, Shlomi Sher, was able to show that one area where an AI should be strong, math, actually isn’t strong at all. He talks about Euclid’s proofs. The artifice is that ChatGPT 4 and other AIs can tell you all about prime numbers and even provide seemingly creative output about them. However, depending on how you phrase some basic questions, ChatGPT 4 gets the answer either right or wrong, even though the answer is readily apparent to any human who knows what prime numbers are. What I liked most about the article is that the author takes the time to explain why humans can understand the problem but the AI can’t.
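
To make the gap concrete, here is a minimal sketch (my own illustration, not taken from the article) showing how little deterministic code it takes to verify the kind of prime-number claim that an LLM can fumble depending on phrasing. The test number echoes the construction in Euclid’s proof: one more than the product of the first six primes.

```python
# A deterministic primality check by trial division. Unlike an LLM, the
# code either confirms or refutes the claim; it never just "sounds right."
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    divisor = 3
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False
        divisor += 2
    return True

# 30031 = 2*3*5*7*11*13 + 1, the product-plus-one form from Euclid's proof.
# It looks prime-like but factors as 59 * 509, exactly the sort of detail
# a language model can get wrong while sounding confident.
print(is_prime(30031))  # False
```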

If it seems as if I have a continuing desire to dissuade others from anthropomorphizing AIs, I most certainly do. When it comes to AI, it’s all about the math and nothing more. However, that doesn’t mean that AIs lack functionality and ability as tools to augment human endeavors. It’s likely that the use of AIs will continue to increase over time. In addition, I think that as we better understand precisely how AIs work, we’ll also come to realize that they’re amazing tools, but most definitely not humans in the making. Let me know your thoughts on ChatGPT at [email protected].

Programming Languages Commonly Used for Data Science

The world is packed with programming languages, each of them proclaiming their particular forte and telling you why you need to learn them. A good developer does learn multiple languages, each of which becomes a tool for a certain kind of development, but even the most enthusiastic developer won’t learn every programming language out there. It’s important to make good choices.

Data Science is a particular kind of development task that works well with certain kinds of programming languages. Choosing the correct tool makes your life easier. It’s akin to using a hammer to drive a screw rather than a screwdriver. Yes, the hammer works, but the screwdriver is much easier to use and definitely does a better job. Data scientists usually use only a few languages because they make working with data easier. With this in mind, here are the top languages for data science work in order of preference:

  • Python (general purpose): Many data scientists prefer to use Python because it provides a wealth of libraries, such as NumPy, SciPy, Matplotlib, pandas, and Scikit-learn, that make data science tasks significantly easier (see the short example after this list). Python also makes it straightforward to apply multiprocessing to large datasets, reducing the time required to analyze them. The data science community has also stepped up with specialized IDEs, such as Anaconda, that implement the Jupyter Notebook concept, which makes working with data science calculations significantly easier. Besides all of these things in Python’s favor, it’s also an excellent language for creating glue code that works with languages such as C/C++ and Fortran; the Python documentation actually shows how to create the required extensions. Many Python users rely on the language to see patterns, such as allowing a robot to see a group of pixels as an object. It also sees use for all sorts of scientific tasks.
  • R (special purpose statistical): In many respects, Python and R share the same sorts of functionality but implement it in different ways. Depending on which source you view, Python and R have about the same number of proponents, and some people use Python and R interchangeably (or sometimes in tandem). Unlike Python, R provides its own environment, so you don’t need a third-party product such as Anaconda. However, R doesn’t appear to mix with other languages with the ease that Python provides.
  • SQL (database management): The most important thing to remember about Structured Query Language (SQL) is that it focuses on data rather than tasks. Businesses can’t operate without good data management; the data is the business. Large organizations use some sort of relational database, which is normally accessible with SQL, to store their data. Most Database Management System (DBMS) products rely on SQL as their main language, and these products usually have a large number of data analysis and other data science features built in. Because you’re accessing the data natively, there is often a significant speed gain in performing data science tasks this way. Database Administrators (DBAs) generally use SQL to manage or manipulate the data rather than to perform detailed analysis of it. However, a data scientist can also use SQL for various data science tasks and make the resulting scripts available to the DBAs for their needs.
  • Java (general purpose): Some data scientists perform other kinds of programming that require a general-purpose, widely adopted, and popular language. In addition to providing access to a large number of libraries (most of which aren’t actually all that useful for data science, but do work for other needs), Java supports object orientation better than any of the other languages in this list. In addition, it’s strongly typed and tends to run quite quickly. Consequently, some people prefer it for finalized code. Java isn’t a good choice for experimentation or ad hoc queries.
  • Scala (general purpose): Because Scala uses the Java Virtual Machine (JVM), it shares some of the advantages and disadvantages of Java. However, like Python, Scala provides strong support for the functional programming paradigm, which uses lambda calculus as its basis. In addition, Apache Spark is written in Scala, which means that you have good support for cluster computing when using this language; think huge dataset support. Some of the pitfalls of using Scala are that it’s hard to set up correctly, it has a steep learning curve, and it lacks a comprehensive set of data science-specific libraries.
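
To give a concrete feel for the Python entry above, here is a minimal, self-contained sketch of the NumPy/pandas/scikit-learn workflow it describes. The dataset, column names, and model choice are invented purely for illustration.

```python
# A minimal sketch of the typical Python data science workflow: pandas for
# data handling, scikit-learn for modeling. The data here is synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "ad_spend": rng.uniform(0, 100, 200),
    "store_visits": rng.integers(0, 50, 200),
})
# Create a target with a known relationship plus noise, just for the demo.
df["sales"] = 3.0 * df["ad_spend"] + 1.5 * df["store_visits"] + rng.normal(0, 5, 200)

X_train, X_test, y_train, y_test = train_test_split(
    df[["ad_spend", "store_visits"]], df["sales"], random_state=0)

model = LinearRegression().fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```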

There are likely other languages that data scientists use, but this list gives you a good idea of what to look for in any programming language you choose for data science tasks. What it comes down to is choosing languages that help you perform analysis, work with huge datasets, and allow you to perform some level of general programming tasks. Let me know your thoughts about data science programming languages at [email protected].

Effects of the Mistruths of Data on Model Output

A number of the books that Luca and I have written together, or that I have written on my own, including Artificial Intelligence for Dummies, 2nd Edition, Machine Learning for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning Security Principles, talk about the five mistruths of data: commission, omission, bias, perspective, and frame of reference. Of these five mistruths, the one that receives the most attention is bias, but they’re all important because they all affect how any data science model you build will perform. Because the data used to create the model isn’t free of mistruths, the model can’t perform as expected in many situations. Consequently, the assertions in the article LGBTQ+ bias in GPT-3 don’t surprise me at all. The data used to create the model is flawed, so the output is flawed as well.

I chose the article in question as a reference because the author takes the time to point out a problem with ever generating a perfect model: the constant change in human perspective. Words that were considered toxic in the past are no longer considered toxic today, but new words have taken their place. Even if a model were to somehow escape bias today, it would be biased tomorrow due to the mistruth of perspective.

So, why have I been using the term mistruth instead of the term lie? A lie is information that is knowingly passed off as true in order to avoid responsibility, to harm others in some way, or to gain something personally. However, humans use mistruths all of the time to reduce the potential for arguments, to save someone’s ego, or simply because the information that the person has is inaccurate. A mistruth doesn’t have the intent of deceiving another for personal gain, but it’s still not true. So, when someone asks, “Do these pants make me look fat?” and another person tactfully replies, “They make you look voluptuous,” the statement could be true or a mistruth, but it’s made to keep an argument at bay and make the other person feel good about themselves. However, machine learning algorithms have no concept of this interplay, and a model created using such statements will be biased.

Anthropomorphizing machine learning models doesn’t change the fact that they’re essentially statistically based mathematical models of data with no understanding of anything built into them. So, true or not, the model sees all data as being the same; the only solution to the problem of bias is to clean the data. However, the human expectation, after getting emotionally attached to an AI, is that the AI will somehow just know when something is hurtful. Articles like What is the Proper Pronoun for GPT-4? raise the question of why someone would ask at all. Interactions with AI have taken on a harmful aspect because humans see these systems as sentient when they most certainly aren’t. The issue will come to a head when people try to use the bad advice obtained from their AI as a defense in court. Personally, I see the continued use of “it” as essential to remind people that no matter how human a model may seem, it’s still just a model.

It’s important to understand that I see AI as an amazing tool that will only get more amazing as data scientists, developers, and others add to its potential by doing things like adding more memory. Although I don’t agree that any machine learning model can be someone’s friend, articles like How to Use ChatGPT in Daily Life? do make a strong argument for using machine learning models as tools to enhance human capability. However, it is a person who thought up these uses, not the machine learning model. Machine learning models will remain limited because they can’t understand the data they manipulate. In fact, articles like Why ChatGPT Won’t Replace Coders Just Yet point out just how limited machine learning models remain. However, I do think that machine learning models will be extended by humans to perform new tasks, as described in articles like How To Build Your Own Custom ChatGPT With Custom Knowledge Base. Of course, if that custom knowledge base is biased in any way, then the output from the new model will also be biased.

It’s important to know that AI is moving forward and that it will be extended to do even more in assisting humans to realize their full potential. However, it’s also important to know that the five mistruths will continue to be a problem because machine learning models are unable to understand the data used to train them and to provide knowledge bases for their output. Realistic expectations will help improve AI as a tool that augments human capabilities and helps us achieve amazing things in the future. Let me know your thoughts on machine learning bias at [email protected].

Machine Learning Security Principles Now Available as an Audiobook

I love to provide people with multiple ways to learn. One of the more popular methods of learning today is the audiobook. You can listen and learn while you do something else, like drive to work. Machine Learning Security Principles is now available in audiobook form and I’m really quite excited about it because this is the first time that one of my books has appeared in this format. You can get this book in audiobook format on the O’Reilly site at https://www.oreilly.com/videos/machine-learning-security/9781805124788/.

After listening to the book myself, I have to say that the audio is quite clear and it does add a new way for me to learn as well. If you do try this audiobook, please let me know how it works for you. I’ll share any input you provide with the publisher as well so that we can work together to provide you with the best possible book materials in a format that works best for you. Please let me know your thoughts at [email protected].

Machine Learning Security and Event Sourcing for Databases

In times past, an application would make an update to a database, essentially overwriting the old data with new data. There wasn’t an actual time element to the update. The data would simply change. This approach to database management worked fine as long as the database was on a local system or even a network owned by an organization. However, as technology has progressed to use functionality like machine learning to perform analysis and microservices to make applications more scalable and reliable, the need for some method of reconstructing events has become more important.

To guarantee atomicity, consistency, isolation, and durability (ACID) in database transactions, products that rely on SQL Server use a transaction log to ensure data integrity. In the event of an error or outage, it’s possible to use the transaction log to rebuild pending database operations or roll them back as needed. It’s possible to recreate the data in the database, but the final result is still a static state. Transaction logs are a good start, but not all database management systems (DBMS) support them. In addition, transaction logs focus solely on the data and its management.
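
To make the transaction behavior described above concrete, here is a small sketch using Python’s built-in sqlite3 module as a stand-in for a full DBMS such as SQL Server; the table, values, and simulated failure are invented for illustration. The point is simply that a failed unit of work rolls back completely, leaving the database in a consistent, static state.

```python
# A sketch of transactional rollback using Python's built-in sqlite3 module
# (standing in for a full DBMS). The table and values are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0), ('bob', 50.0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 70 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 70 WHERE name = 'zed'")
        # Simulate a failure partway through the unit of work.
        raise RuntimeError("transfer target does not exist")
except RuntimeError:
    pass  # the whole transaction rolled back, not just the last statement

print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
# [('alice', 100.0), ('bob', 50.0)] -- the partial update never became visible
```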

In a machine learning security environment, of the type described in Machine Learning Security Principles, this isn’t enough to perform analysis of sufficient depth to locate hacker activity patterns in many cases. The transaction logs would need to be somehow combined with other logs, such as those that track RESTful interaction with the associated application. The complexity of combining the various data sources would prove daunting to most professionals because of the need to perform data translations between logs. In addition, the process would prove time consuming enough that the result of any analysis wouldn’t be available in a timely manner (in time to stop the hacker).

Event sourcing, of the type that many professionals now advocate for microservice architectures, offers a better solution that is less prone to problems when it comes to security. In this case, instead of just tracking the data changes, the logs reflect application state. By following the progression of past events, it’s possible to derive the current application state and its associated data. As mentioned in my book, hackers tend to follow patterns in application interaction and usage that fall outside the usual user patterns because the hacker is looking for a way into the application in order to carry out various tasks that likely have nothing to do with ordinary usage.
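
Here is a minimal sketch of the event sourcing idea described above: an append-only journal of events that is replayed to derive the current state. The event names and fields are invented for illustration; a real system would persist the journal durably and separately from the DBMS transaction log.

```python
# A minimal event sourcing sketch: events are only ever appended, and the
# current state is derived by replaying them. Names/fields are invented.
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str      # e.g. "deposited", "withdrew"
    amount: float

@dataclass
class AccountJournal:
    events: list = field(default_factory=list)

    def append(self, event: Event) -> None:
        # Append-only: past events are never updated or overwritten.
        self.events.append(event)

    def current_balance(self) -> float:
        # Replay every past event to derive the current state.
        balance = 0.0
        for e in self.events:
            if e.kind == "deposited":
                balance += e.amount
            elif e.kind == "withdrew":
                balance -= e.amount
        return balance

journal = AccountJournal()
journal.append(Event("deposited", 100.0))
journal.append(Event("withdrew", 30.0))
print(journal.current_balance())  # 70.0, with the full history still available
```

Because every past event remains in the journal, a monitoring process, machine learning based or otherwise, can examine the full interaction history for the unusual patterns that hackers tend to produce.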

A critical difference between event sourcing and other transaction logging solutions is that event sourcing relies on its own journal, rather than the DBMS transaction log, which makes it possible to provide additional security for this data and reduces the potential for a hacker to alter the log to cover up nefarious acts. There are most definitely tradeoffs between techniques such as Change Data Capture (CDC) and event sourcing that need to be considered, but from a security perspective, event sourcing is superior. As with anything, there are pros and cons to using event sourcing, the most important con being that event sourcing is both harder to implement and more complex. Many developers cite the need to maintain two transaction logs as a major reason to avoid event sourcing. These issues mean that it’s important to test the solution fully before delivering it as a production system.

If you’re looking to create a machine learning-based security monitoring solution for your application that doesn’t require combining data from multiple sources to obtain a good security picture, then event sourcing is a good solution to your problem. It allows you to obtain a complete picture of the entire domain’s activities, which helps you locate and understand hacker activity. Most importantly, because the data resides in a single dedicated log that’s easy to secure, the actual analysis process is less complex and you can produce a result in a timely manner. The tradeoff is that you’ll spend more time putting such a system together. Let me know your thoughts about event sourcing as part of a security solution at [email protected].

Machine Learning for Dummies, 2nd Edition, MovieLens Dataset

Updated March 15, 2023 to clarify the usage instructions.

The movies.dat file found in the Trudging through the MovieLens dataset section of Chapter 19 of Machine Learning for Dummies, 2nd Edition has been updated on the source site, so it no longer works with the downloadable source. Luca and I want to be sure you have a great learning experience. Fortunately, we do have a copy of the version of movies.dat found in the book. You can download the entire MovieLens dataset here:

To obtain your copy of the MovieLens dataset for your local Python setup, please follow these steps (a short loading sketch appears after the list):

  1. Click the link or the Download button. The ml-1m.zip file will appear on your hard drive.
  2. Remove the files from the archive. You should see four files in a folder named ml-1m: movies.dat, ratings.dat, README, and users.dat.
  3. Place the files in the downloadable source directory for this book on your system.
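
Once the files are in place, here is a short sketch of loading them with pandas, assuming the standard ml-1m layout (‘::’ separators, latin-1 encoding, no header rows). Adjust the path to match where you placed the ml-1m folder; the exact location used by the book’s downloadable source may differ.

```python
# A sketch of loading the extracted ml-1m files with pandas, assuming the
# standard MovieLens ml-1m format. The "ml-1m/" path is a placeholder.
import pandas as pd

movies = pd.read_csv(
    "ml-1m/movies.dat", sep="::", engine="python", encoding="latin-1",
    names=["MovieID", "Title", "Genres"])
ratings = pd.read_csv(
    "ml-1m/ratings.dat", sep="::", engine="python", encoding="latin-1",
    names=["UserID", "MovieID", "Rating", "Timestamp"])

print(movies.head())
print(ratings.head())
```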

Note that you may not be able to use automatic downloads with my site, which is a security measure on my part. In addition, it ensures that you get all of the MovieLens dataset files, including the README, which contains licensing, citation, and other information. This solution may not work well if you’re using an online IDE and I apologize in advance for the inconvenience. Please let me know if you have any other problems with this example at [email protected].

Locating the Machine Learning for Dummies, 2nd Edition Source Code

A reader recently wrote to say that the source code for Machine Learning for Dummies, 2nd Edition on GitHub is incomplete. GitHub wasn’t originally one of the download sources for the book’s code; we had used that site as an intermediary code location, so it wasn’t complete. The GitHub site code is complete now, but we’d still prefer that you download the code from one of the two sites listed in the book: on my website at http://www.johnmuellerbooks.com/source-code/ (just click the book’s link) or from the Wiley site at https://www.wiley.com/en-us/Machine+Learning+For+Dummies%2C+2nd+Edition-p-9781119724056 (just click the Downloads link, then the download link next to the source code you want). The two preferred sites offer the source code in either Python or R form in case you don’t want to download both.

When you get the downloadable source, make sure you remove it from the archive as described in the UnZIPping the Downloadable Source post. Using the downloadable source helps you avoid some of the issues described in the Verifying Your Hand Typed Code post. Please let me know whenever you encounter problems with the downloadable source for a book at [email protected].

Considering the Four Levels of Intelligence Management

One of the reasons that Luca and I wrote Artificial Intelligence for Dummies, 2nd Edition was to dispel some of the myths and hype surrounding machine-based intelligence. If anything, the amount of ill-conceived and blatantly false information surrounding AI has only increased since then. So now we have articles like Microsoft’s Bing wants to unleash ‘destruction’ on the internet out there that espouse ideas that can’t work at all because AIs feel nothing. Of course, there is the other end of the spectrum in articles like David Guetta says the future of music is in AI, which also can’t work because computers aren’t creative. A third kind of article starts to bring some reality back into the picture, such as Are you a robot? Sci-fi magazine stops accepting submissions after it found more than 500 stories received from contributors were AI-generated. All of this is interesting for me to read about because I want to see how people react to a technology that I know is simply a technology and nothing more. Anthropomorphizing computers is a truly horrible idea because it leads to the thoughts described in The Risk of a New AI Winter. Another AI winter would be a loss for everyone because AI really is a great tool.

As part of Python for Data Science for Dummies and Machine Learning for Dummies, 2nd Edition, Luca and I considered issues like the seven kinds of intelligence and how an AI can only partially express most of them. We even talked about how the five mistruths in data can cause issues such as skewed or even false results in machine learning output. In Machine Learning Security Principles I point out the manifest ways in which humans can use superior intelligence to easily thwart an AI. Still, people seem to refuse to believe that an AI is the product of clever human programmers, a whole lot of data, and methods of managing algorithms. Yes, it’s all about the math.

This post goes to the next step. During my readings of various texts, especially those of a psychological and medical variety, I’ve come to understand that humans embrace four levels of intelligence management. We don’t actually learn in a single step, as many people might think; we learn in four steps, with each step providing new insights and capabilities. Consider these learning management steps:

  1. Knowledge: A person learns about a new kind of intelligence. That intelligence can affect them physically, emotionally, mentally, or some combination of the three. However, simply knowing about something doesn’t make it useful. An AI can accommodate this level (and even excel at it) because it has a dataset that is simply packed with knowledge. However, the AI only sees numbers, bits, values, and nothing more. There is no comprehension as is the case with humans. Think of knowledge as the what of intelligence.
  2. Skill: After working with new knowledge for some period of time, a human builds a skill in using that knowledge to perform tasks. In fact, this is very often the highest level that a human will achieve with a given bit of knowledge, which I think is the source of confusion for some people with regard to AIs. Training an AI model, that is, assigning weights to a neural network built from algorithms, gives an AI the appearance of skill. However, the AI isn’t actually skilled; it can’t accommodate variations the way a human can. What the AI is doing is following the parameters of the algorithm used to create its model. This is the highest step that any AI can achieve. Think of skill as the how of intelligence.
  3. Understanding: As a human develops a skill and uses the skill to perform tasks regularly, new insights develop and the person begins to understand the intelligence at a deeper level, making it possible for a person to use the intelligence in new ways to perform new tasks. A computer is unable to understand anything because it lacks self-awareness, which is a requirement for understanding anything at all. Think of understanding as the why of intelligence.
  4. Wisdom: Simply understanding an intelligence is often not enough to ensure the use of that intelligence in a correct manner. When a person makes enough mistakes in using an intelligence, wisdom in its use begins to take shape. Computers have no moral or ethical ability—they lack any sort of common sense. This is why you keep seeing articles about AIs that are seemingly running amok, the AI has no concept whatsoever of what it is doing or why. All that the AI is doing is crunching numbers. Think of wisdom as the when of intelligence.

It’s critical that society begin to see AIs for what they are, exceptionally useful tools that can be used to perform certain tasks that require only knowledge and a modicum of skill and to augment a human when some level of intelligence management above these levels is required. Otherwise, we’ll eventually get engulfed in another AI winter that thwarts development of further AI capabilities that could help people do things like go to Mars, mine minerals in an environmentally friendly way in space, cure diseases, and create new thoughts that have never seen the light of day before. What are your thoughts on intelligence management? Let me know at [email protected].

Review of The Kaggle Book

[Image: The Kaggle Book cover]
The Kaggle Book tells you everything needed about competing on Kaggle.

The Kaggle Book by Konrad Banachewicz and Luca Massaron is a book about competing on Kaggle. The introductory chapters tell you all about Kaggle and the competitions it sponsors. The bulk of the book provides details on how to compete better against a variety of adversaries. The book ends with some insights into how competing in Kaggle can help with other areas of your life. If the book ended here, it would still be well worth reading cover-to-cover as I did, but it doesn’t end here.

My main reasons for reading the book were to find out more about how data is created and vetted on Kaggle, and to obtain some more insights on how to write better data science applications. In the end, the bulk of this book is a rather intense treatment of data science with a strong Kaggle twist. If you’re looking for datasets and to understand techniques for using them effectively, then this is the book you want to get because the authors are both experts in the field. The biases in the book are toward data management, verification, validation, and checks for model goodness. It’s the model goodness part that is hard to find in any other book (at least, the ones I’ve read so far).

The book does contain interviews with other people who have participated in Kaggle competitions. I did read a number of these interviews and found that they didn’t help me personally because of my goals in reading the book. However, I have no doubt that they’d help someone who actually intends to enter a Kaggle competition, which sounds like a great deal of work. Before I read the book, I had no idea of just how much goes into these competitions and what the competitors have to do to have a chance of winning. What I found most important is that the authors stress the need to get something more out of a competition than simply winning; winning is just a potential outcome of a much longer process of learning, skill building, and team building.

You really need this book if you are into data science at all because it helps you gain new insights into working through data science problems and ensuring that you get a good result. I know that my own personal skills will improve as I apply the techniques described in the book, which really do apply to every kind of data science development and not just to Kaggle competitions.

Fooling Facial Recognition Software

One of the points that Luca and I made in Artificial Intelligence for Dummies, 2nd Edition, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition is that AI is all about algorithms and that it can’t actually think. An AI appears to think due to clever programming, but the limits of that programming quickly become apparent under testing. In the article, U.S. Marines Outsmart AI Security Cameras by Hiding in a Cardboard Box, the limits of AI are almost embarrassingly apparent because the AI failed to catch even one of the Marines. In fact, it doesn’t take a Marine to outsmart an AI; the article This Clothing Line Tricks AI Cameras Without Covering Your Face tells how to do it and look fashionable at the same time. Anthropomorphizing AI, making it seem like more than it is, is one sure path to disappointment.

My book, Machine Learning Security Principles, points out a wealth of specific examples of the AI being fooled as part of an examination of machine learning-based security. Some businesses rely on facial recognition now as part of their security strategy with the false hope that it’s reliable and that it will provide an alert in all cases. As recommended in my book, machine learning-based security is just one tool that requires a human to back it up. The article, This Simple Technique Made Me Invisible to Two Major Facial Recognition Systems, discusses just how absurdly easy it is to fool facial recognition software if you don’t play by the rules; the rules being what the model developer expected someone to do.

The problems become compounded when local laws ban the use of facial recognition software due to its overuse by law enforcement in potentially less than perfect circumstances. There are reports of false arrests that could possibly have been avoided if the human doing the arresting had checked to verify the identity of the person in question. There are lessons in all of this for a business too. Using facial recognition should be the start of a more intensive process to verify a particular action, rather than an assumption that the software is going to be 100% correct.
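
To make the “start of a more intensive process” idea concrete, here is a minimal sketch of a human-in-the-loop check. The recognizer output, confidence threshold, and secondary badge check are invented placeholders, not any real product’s API.

```python
# A sketch of treating a facial-recognition match as the *start* of a
# verification process rather than the final word. All names are invented.
from dataclasses import dataclass

@dataclass
class Match:
    person_id: str
    confidence: float  # 0.0 to 1.0, as reported by some recognizer

def verify_entry(match: Match, badge_id: str, human_review) -> bool:
    # Step 1: the model's opinion, never trusted on its own.
    if match.confidence < 0.90:
        return human_review(match)   # low confidence: escalate to a person
    # Step 2: an independent factor (here, a badge scan) must agree.
    if badge_id != match.person_id:
        return human_review(match)   # the factors disagree: escalate
    # Step 3: even agreement could be sampled for human audit in practice.
    return True

# The recognizer is confident but the badge doesn't match, so a person
# makes the final call instead of the software.
result = verify_entry(Match("emp-042", 0.97), "emp-113", lambda m: False)
print(result)  # False
```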

Yes, AI, machine learning, and deep learning applications can do amazing things today, as witnessed by the explosion in the use of ChatGPT for all kinds of tasks. It’s a given that security professionals, researchers, data scientists, and various managers will increasingly use these technologies to reduce their workload and improve the overall consistency of the many tasks, including security, that these applications are good at performing. However, even as the technologies improve, people will continue to find ways to overcome them and cause them to perform in unexpected ways. In fact, it’s a good bet that the problems will increase for the foreseeable future as the technologies become more complex (hence, more unreliable). Monitoring the results of any smart application is essential, which makes humans an essential part of any solution. Let me know your thoughts about facial recognition software and other security issues at [email protected].