One of the reasons that Luca and I wrote Artificial Intelligence for Dummies, 2nd Edition was to dispel some of the myths and hype surrounding machine-based intelligence. If anything, the amount of ill-conceived and blatantly false information surrounding AI has only increased since then. So now we have articles out there like Microsoft’s Bing wants to unleash ‘destruction’ on the internet that espouse ideas that can’t work at all because AIs feel nothing. At the other end of the spectrum are articles like David Guetta says the future of music is in AI, which also can’t work because computers aren’t creative. A third kind of article starts to bring some reality back into the picture, such as Are you a robot? Sci-fi magazine stops accepting submissions after it found more than 500 stories received from contributors were AI-generated. All of this is interesting for me to read because I want to see how people react to a technology that I know is simply a technology and nothing more. Anthropomorphizing computers is a truly horrible idea because it leads to the thoughts described in The Risk of a New AI Winter. Another AI winter would be a loss for everyone because AI really is a great tool.
As part of Python for Data Science for Dummies and Machine Learning for Dummies, 2nd Edition, Luca and I considered issues like the seven kinds of intelligence and how an AI can only partially express most of them. We even talked about how the five mistruths in data can cause issues such as skewed or even false results in machine learning output. In Machine Learning Security Principles, I point out the many ways in which humans can use superior intelligence to easily thwart an AI. Still, people seem to refuse to believe that an AI is the product of clever human programmers, a whole lot of data, and methods of managing algorithms. Yes, it’s all about the math.
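To make the “it’s all about the math” point concrete, here is a minimal Python sketch (my own illustration, not code from any of the books) showing that a trained model’s “decision” is nothing more than arithmetic on stored numbers. The weights, bias, and sample values are invented for the example.

```python
# A "trained" model is just stored numbers (weights) plus arithmetic.
# These weights and inputs are invented for illustration; a real model
# learns its weights from data, but the prediction step looks like this.
weights = [0.8, -0.4, 0.15]   # one weight per input feature
bias = 0.2

def predict(features):
    """Weighted sum of the inputs plus a bias: pure arithmetic, no comprehension."""
    return sum(w * x for w, x in zip(weights, features)) + bias

# The model never "knows" what the numbers mean; that meaning lives with the humans.
sample = [1.0, 3.5, 2.0]       # some measurements of something
print(predict(sample))         # prints roughly -0.1
```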
This post goes to the next step. During my readings of various texts, especially those of a psychological and medical variety, I’ve come to understand that humans embrace four levels of intelligence management. We don’t actually learn in a single step, as many people might think; we learn in four steps, with each step providing new insights and capabilities. Consider these learning management steps:
- Knowledge: A person learns about a new kind of intelligence. That intelligence can affect them physically, emotionally, mentally, or some combination of the three. However, simply knowing about something doesn’t make it useful. An AI can accommodate this level (and even excel at it) because it has a dataset that is simply packed with knowledge. However, the AI sees only numbers, bits, and values, nothing more. There is no comprehension, as there is with humans. Think of knowledge as the what of intelligence.
- Skill: After working with new knowledge for some period of time, a human builds a skill in using that knowledge to perform tasks. In fact, this is very often the highest level that a human will achieve with a given bit of knowledge, which I think is the source of confusion for some people with regard to AIs. Training an AI model, that is, assigning weights within a neural network built from algorithms, gives an AI the appearance of skill (see the sketch after this list). However, the AI isn’t actually skilled; it can’t accommodate variations as a human can. What the AI is doing is following the parameters of the algorithm used to create its model. This is the highest step that any AI can achieve. Think of skill as the how of intelligence.
- Understanding: As a human develops a skill and uses it to perform tasks regularly, new insights develop and the person begins to understand the intelligence at a deeper level, making it possible to use that intelligence in new ways to perform new tasks. A computer is unable to understand anything because it lacks self-awareness, which is a requirement for understanding. Think of understanding as the why of intelligence.
- Wisdom: Simply understanding an intelligence is often not enough to ensure its use in a correct manner. When a person makes enough mistakes in using an intelligence, wisdom in its use begins to take shape. Computers have no moral or ethical ability; they lack any sort of common sense. This is why you keep seeing articles about AIs that are seemingly running amok: the AI has no concept whatsoever of what it is doing or why. All the AI is doing is crunching numbers. Think of wisdom as the when of intelligence.
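To go with the Skill item above, here is a minimal, hypothetical sketch of what “training”, that is, assigning weights, amounts to: repeatedly nudging a number to shrink a measured error. The data points and learning rate are invented for illustration; real training does the same kind of arithmetic at enormous scale.

```python
# Minimal sketch of "training": adjust a weight until the measured error shrinks.
# The (input, desired output) pairs and learning rate are invented for illustration.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]
weight = 0.0                  # the model starts with no "skill" at all
learning_rate = 0.05

for step in range(200):
    # Gradient of the mean squared error with respect to the weight.
    gradient = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    weight -= learning_rate * gradient   # nudge the number, nothing more

print(round(weight, 3))       # roughly 2.0, the slope that happens to fit the data
```

The resulting weight looks like skill when the model is asked about similar inputs, but it is simply a number that minimized an error measure on the data it was given.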
It’s critical that society begin to see AIs for what they are: exceptionally useful tools that can perform certain tasks requiring only knowledge and a modicum of skill, and that can augment a human when some level of intelligence management above those levels is required. Otherwise, we’ll eventually get engulfed in another AI winter that thwarts development of further AI capabilities that could help people do things like go to Mars, mine minerals in space in an environmentally friendly way, cure diseases, and create new thoughts that have never seen the light of day before. What are your thoughts on intelligence management? Let me know at [email protected].