An Interesting Review of ChatGPT and Other AIs

I’ve written in the past about limitations of AI from a number of perspectives. In Effects of the Mistruths of Data on Model Output I discuss how the data fed to a model must necessarily affect its output in a number of ways, including bias and other unwanted effects. Considering the Four Levels of Intelligence Management tells why it’s not possible for an AI to approach human intelligence today. In Fooling Facial Recognition Software I provide a detailed discussion of why it’s so easy to fool certain types of AI-powered applications. And you learn about why some types of occupations are reasonably safe from AI in Automation and the Future of Human Employment. However, I haven’t really done a detailed investigation of AIs like ChatGPT that seem almost human-like in their understanding but fall remarkably short in many simple areas. On Artifice and Intelligence is one of the more detailed analyses I’ve found to date on the subject, and what it reveals will surprise you. There are a lot of simple problems that ChatGPT and other Large Language Models (LLMs) can’t solve.

What I found interesting in the article is that the author, Shlomi Sher, was able to show that one area where an AI should be strong, math, actually isn’t strong at all. He talks about Euclid’s proofs. The artifice is that ChatGPT 4 and other AIs can tell you all about prime numbers and even provide seemingly creative output about them. However, depending on how you phrase some basic questions, ChatGPT 4 gets the answer either right or wrong, even though the answer is readily apparent to any human who knows what prime numbers are. What I liked most about the article is that the author takes time to explain why humans can understand the problem, but the AI can’t.

If it seems as if I have a continuing desire to dissuade others from anthropomorphizing AIs, I most certainly do. When it comes to AI, it’s all about the math and nothing more. However, that doesn’t mean that AIs lack functionality and ability as tools to augment human endeavors. It’s likely that the use of AIs will continue to increase over time. In addition, I think that as we better understand precisely how AIs work, we’ll also come to realize that they’re amazing tools, but most definitely not humans in the making. Let me know your thoughts on ChatGPT at [email protected].

Programming Languages Commonly Used for Data Science

The world is packed with programming languages, each proclaiming its particular forte and telling you why you need to learn it. A good developer does learn multiple languages, each of which becomes a tool for a certain kind of development, but even the most enthusiastic developer won’t learn every programming language out there. It’s important to make good choices.

Data science is a particular kind of development task that works well with certain kinds of programming languages. Choosing the correct tool makes your life easier. It’s akin to driving a screw with a hammer rather than a screwdriver. Yes, the hammer works, but the screwdriver is much easier to use and definitely does a better job. Data scientists usually use only a few languages because they make working with data easier. With this in mind, here are the top languages for data science work in order of preference:

  • Python (general purpose): Many data scientists prefer to use Python because it provides a wealth of libraries, such as NumPy, SciPy, Matplotlib, pandas, and Scikit-learn, to make data science tasks significantly easier (see the short sketch after this list). Python is also a precise language that makes it easy to use multi-processing on large datasets — reducing the time required to analyze them. The data science community has also stepped up with specialized environments, such as Anaconda, that implement the Jupyter Notebook concept, which makes working with data science calculations significantly easier. Besides all of these things in Python’s favor, it’s also an excellent language for creating glue code with languages such as C/C++ and Fortran; the Python documentation actually shows how to create the required extensions. Most Python users rely on the language to see patterns, such as allowing a robot to see a group of pixels as an object. It also sees use for all sorts of scientific tasks.
  • R (special purpose statistical): In many respects, Python and R share the same sorts of functionality but implement it in different ways. Depending on which source you view, Python and R have about the same number of proponents, and some people use Python and R interchangeably (or sometimes in tandem). Unlike Python, R provides its own environment, so you don’t need a third-party product such as Anaconda. However, R doesn’t appear to mix with other languages with the ease that Python provides.
  • SQL (database management): The most important thing to remember about Structured Query Language (SQL) is that it focuses on data rather than tasks. Businesses can’t operate without good data management — the data is the business. Large organizations use some sort of relational database, which is normally accessible with SQL, to store their data. Most Database Management System (DBMS) products rely on SQL as their main language, and these products usually have a large number of data analysis and other data science features built in. Because you’re accessing the data natively, there is often a significant speed gain in performing data science tasks this way. Database Administrators (DBAs) generally use SQL to manage or manipulate the data rather than to perform detailed analysis of it. However, the data scientist can also use SQL for various data science tasks and make the resulting scripts available to the DBAs for their needs.
  • Java (general purpose): Some data scientists perform other kinds of programming that require a general purpose, widely adopted, and popular language. In addition to providing access to a large number of libraries (most of which aren’t actually all that useful for data science, but do work for other needs), Java supports object orientation better than any of the other languages in this list. In addition, it’s strongly typed and tends to run quite quickly. Consequently, some people prefer it for finalized code. Java isn’t a good choice for experimentation or ad hoc queries.
  • Scala (general purpose): Because Scala uses the Java Virtual Machine (JVM), it has some of the advantages and disadvantages of Java. However, like Python, Scala provides strong support for the functional programming paradigm, which uses lambda calculus as its basis. In addition, Apache Spark is written in Scala, which means that you have good support for cluster computing when using this language — think huge dataset support. Some of the pitfalls of using Scala are that it’s hard to set up correctly, it has a steep learning curve, and it lacks a comprehensive set of data science-specific libraries.
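
To give a feel for why the Python stack tops this list, here’s a minimal sketch (my own illustration, not drawn from any of the books mentioned on this blog) that loads a small sample dataset into a pandas DataFrame and fits a basic scikit-learn model in just a few lines:

```python
# A minimal sketch of the Python stack described above: a pandas DataFrame
# for the data and scikit-learn for a simple model.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# load_iris(as_frame=True) hands back the features as a pandas DataFrame.
iris = load_iris(as_frame=True)
X, y = iris.data, iris.target
print(X.describe())  # quick pandas summary of the features

# Split the data, fit a simple classifier, and report its accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

R offers much the same workflow through its own packages; the point here is simply how little ceremony the Python libraries require to go from raw data to a working model.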

There are likely other languages that data scientists use, but this list gives you a good idea of what to look for in any programming language you choose for data science tasks. What it comes down to is choosing languages that help you perform analysis, work with huge datasets, and allow you to perform some level of general programming tasks. Let me know your thoughts about data science programming languages at [email protected].

Effects of the Mistruths of Data on Model Output

A number of the books that Luca and I have written together, or that I have written on my own, including Artificial Intelligence for Dummies, 2nd Edition, Machine Learning for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning Security Principles, talk about the five mistruths of data: commission, omission, bias, perspective, and frame of reference. Of these five mistruths, the one that receives the most attention is bias, but they’re all important because they all affect how any data science model you build will perform. Because the data used to create the model isn’t free of mistruths, the model can’t perform as expected in many situations. Consequently, the assertions in the article LGBTQ+ bias in GPT-3 don’t surprise me at all. The data used to create the model is flawed, so the output is flawed as well.

I chose the article in question as a reference because the author takes the time to point out a problem in ever generating a perfect model, the constant change in human perspective. Words that were considered toxic in the past are no longer considered toxic today, but new words have taken their place. Even if a model were to somehow escape bias today, it would be biased tomorrow due to the mistruth of perspective.

So, why have I been using the term mistruth instead of the term lie? A lie is untrue information knowingly passed off as true in order to avoid responsibility, to harm others in some way, or to gain something personally. However, humans use mistruths all of the time to reduce the potential for arguments, to save someone’s ego, or simply because the information that the person has is inaccurate. A mistruth doesn’t have the intent of deceiving another for personal gain, but it’s still not true. So, when someone asks, “Do these pants make me look fat?” and another person tactfully replies, “They make you look voluptuous,” the statement could be true or a mistruth, but it’s made to keep an argument at bay and to make the other person feel good about themselves. However, machine learning algorithms have no concept of this interplay, and a model created using such statements will be biased.
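
To make that concrete, here’s a toy sketch using entirely made-up data (an assumption for illustration only, not any real study): two groups with identical underlying qualifications, but historical approval labels that favor one group. The model dutifully reproduces the slant because, to it, the labels are just numbers:

```python
# Toy sketch with fabricated data: a model reproduces whatever slant
# exists in its training labels, because it has no notion of fairness.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two groups (0 and 1) with identical underlying qualifications.
group = rng.integers(0, 2, size=2000)
qualification = rng.normal(size=2000)

# Historical labels: group 1 was approved less often at the same qualification.
labels = (qualification - 0.8 * group + rng.normal(0, 1.0, 2000)) > 0

features = np.column_stack([group, qualification])
model = LogisticRegression(max_iter=1000).fit(features, labels)

# At the same qualification (0.0), the model predicts a lower approval
# probability for group 1 -- the bias in the data became bias in the model.
print(model.predict_proba([[0, 0.0], [1, 0.0]])[:, 1])
```

Nothing in the algorithm objects to the lopsided labels; cleaning or rebalancing the data is the only place the correction can happen.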

Anthropomorphizing machine learning models doesn’t change the fact that they’re essentially statistically based mathematical models of data with no understanding of anything built into them. So, true or not, the model sees all data as being the same—the only solution to the problem of bias is to clean the data. However, the human expectation after getting emotionally attached to their AI is that the AI will somehow just know when something is hurtful. Articles like What is the Proper Pronoun for GPT-4? point to the problem: why would someone ask the question at all? Interactions with AI have taken on a harmful aspect because humans see these models as sentient when they most certainly aren’t. The issue will come to a head when people try to use the bad advice obtained from their AI as a defense in court. Personally, I see the continued use of “it” as essential to remind people that no matter how human a model may seem, it’s still just a model.

It’s important to understand that I see AI as an amazing tool that will only get more amazing as data scientists, developers, and others add to its potential by doing things like adding more memory. Although I don’t agree that any machine learning model can be someone’s friend, articles like How to Use ChatGPT in Daily Life? do make a strong argument for using machine learning models as tools to enhance human capability. However, it is a person who thought up these uses, not the machine learning model. Machine learning models will remain limited because they can’t understand the data they manipulate. In fact, articles like Why ChatGPT Won’t Replace Coders Just Yet point out just how limited machine learning models remain. However, I do think that machine learning models will be expanded by humans to perform new tasks, as described in articles like How To Build Your Own Custom ChatGPT With Custom Knowledge Base. Of course, if that custom knowledge base is biased in any way, then the output from the new model will also be biased.

It’s important to know that AI is moving forward and that it will be extended to do even more in assisting humans to realize their full potential. However, it’s also important to know that the five mistruths will continue to be a problem because machine learning models are unable to understand the data used to train them and to provide knowledge bases for their output. Realistic expectations will help improve AI as a tool that augments human capabilities and helps us achieve amazing things in the future. Let me know your thoughts on machine learning bias at [email protected].

Considering the Four Levels of Intelligence Management

One of the reasons that Luca and I wrote Artificial Intelligence for Dummies, 2nd Edition was to dispel some of the myths and hype surrounding machine-based intelligence. If anything, the amount of ill-conceived and blatantly false information surrounding AI has only increased since then. So now we have articles out there like Microsoft’s Bing wants to unleash ‘destruction’ on the internet that espouse ideas that can’t work at all because AIs feel nothing. Of course, there is the other end of the spectrum in articles like David Guetta says the future of music is in AI, which also can’t work because computers aren’t creative. A third kind of article starts to bring some reality back into the picture, such as Are you a robot? Sci-fi magazine stops accepting submissions after it found more than 500 stories received from contributors were AI-generated. All of this is interesting for me to read about because I want to see how people react to a technology that I know is simply a technology and nothing more. Anthropomorphizing computers is a truly horrible idea because it leads to the thoughts described in The Risk of a New AI Winter. Another AI winter would be a loss for everyone because AI really is a great tool.

As part of Python for Data Science for Dummies and Machine Learning for Dummies, 2nd Edition, Luca and I considered issues like the seven kinds of intelligence and how an AI can only partially express most of them. We even talked about how the five mistruths in data can cause issues such as skewed or even false results in machine learning output. In Machine Learning Security Principles I point out the many ways in which humans can use superior intelligence to easily thwart an AI. Still, people seem to refuse to believe that an AI is the product of clever human programmers, a whole lot of data, and methods of managing algorithms. Yes, it’s all about the math.

This post goes to the next step. During my readings of various texts, especially those of a psychological and medical variety, I’ve come to understand that humans embrace four levels of intelligence management. We don’t actually learn in a single step, as many people might think; we learn in four steps, with each step providing new insights and capabilities. Consider these learning management steps:

  1. Knowledge: A person learns about a new kind of intelligence. That intelligence can affect them physically, emotionally, mentally, or some combination of the three. However, simply knowing about something doesn’t make it useful. An AI can accommodate this level (and even excel at it) because it has a dataset that is simply packed with knowledge. However, the AI only sees numbers, bits, values, and nothing more. There is no comprehension as is the case with humans. Think of knowledge as the what of intelligence.
  2. Skill: After working with new knowledge for some period of time, a human builds a skill in using that knowledge to perform tasks. In fact, very often this is the highest level that a human will achieve with a given bit of knowledge, which I think is the source of confusion for some people with regard to AIs. Training an AI model, that is, assigning weights to a neural network created of algorithms, gives an AI the appearance of skill (see the sketch after this list). However, the AI isn’t actually skilled; it can’t accommodate variations as a human can. What the AI is doing is following the parameters of the algorithm used to create its model. This is the highest step that any AI can achieve. Think of skill as the how of intelligence.
  3. Understanding: As a human develops a skill and uses the skill to perform tasks regularly, new insights develop and the person begins to understand the intelligence at a deeper level, making it possible for a person to use the intelligence in new ways to perform new tasks. A computer is unable to understand anything because it lacks self-awareness, which is a requirement for understanding anything at all. Think of understanding as the why of intelligence.
  4. Wisdom: Simply understanding an intelligence is often not enough to ensure the use of that intelligence in a correct manner. When a person makes enough mistakes in using an intelligence, wisdom in its use begins to take shape. Computers have no moral or ethical ability—they lack any sort of common sense. This is why you keep seeing articles about AIs that are seemingly running amok; the AI has no concept whatsoever of what it is doing or why. All that the AI is doing is crunching numbers. Think of wisdom as the when of intelligence.
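
For readers who want to see what step 2 looks like in practice, here’s a minimal sketch (a generic illustration I wrote for this post, not taken from any of the books mentioned here) of what “training” amounts to underneath: repeatedly nudging numeric weights to shrink an error value. Nothing in it understands what the numbers represent:

```python
# Minimal sketch of training as pure arithmetic: adjust weights to reduce error.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))                  # input data (just numbers)
true_w = np.array([2.0, -3.0])                 # the pattern hidden in the data
y = X @ true_w + rng.normal(0, 0.1, size=100)  # target values (also just numbers)

w = np.zeros(2)                                # the model's "skill" is two weights
for _ in range(500):
    gradient = 2 * X.T @ (X @ w - y) / len(y)  # slope of the mean squared error
    w -= 0.1 * gradient                        # nudge the weights downhill

print(w)  # ends up close to [2.0, -3.0]; no comprehension was involved
```

Scaled up by many orders of magnitude, the same kind of numeric adjustment is all that sits behind an apparently skilled AI.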

It’s critical that society begin to see AIs for what they are: exceptionally useful tools that can perform certain tasks requiring only knowledge and a modicum of skill, and that can augment a human when some level of intelligence management above these levels is required. Otherwise, we’ll eventually get engulfed in another AI winter that thwarts development of further AI capabilities that could help people do things like go to Mars, mine minerals in space in an environmentally friendly way, cure diseases, and create new thoughts that have never seen the light of day before. What are your thoughts on intelligence management? Let me know at [email protected].

Sending Comments on My Books

This is an update of a post that originally appeared on February 23, 2012.

I regularly receive a stack of e-mail about my books. Readers question everything, and it makes me happy to see that they’re reviewing my books so closely. It means that I’m accomplishing my principal goal: helping you understand computers in every possible way so that you can be more productive and accomplish tasks with less effort. When I make something easier for someone and they tell me about it, the grin extends from one side of my face to the other. It really makes my day.

Some readers are still asking me if it’s OK to send me comments. I definitely want to see any constructive comment that you have. Anything that helps me understand your needs better makes it possible for me to write better books. I really do want to hear from you. The main thing I need for a comment to be usable is that it’s constructive. A comment that lacks details isn’t helpful because I’ve written so many books. Emotional comments without any substance are especially hard to deal with because they leave me wondering what you need from me. Here are some of the questions you can answer to create a constructive comment:

  • What is the title of the book you’re reading (be sure to include the edition number, which is usually right on the cover unless it’s a first edition)?
  • Are you using the downloadable source code if this is a programming book?
  • Did you install the recommended version of any required software using the instructions found in the book?
  • Which page contains the error (if you’re using Kindle or other electronic media, please provide a chapter number and section title as a minimum)?
  • What do you view as an error on that page?
  • How would you fix the error?
  • What sort of system are you running?
  • When did you encounter the problem?

The more information you provide, the easier it is for me to understand the issue and provide you with feedback. In many cases, I’ll upload the fix to my blog so that everyone can benefit from the response (so be sure to keep an eye on my blog for new entries). I work hard to ensure that my books are as error free as possible, but everyone makes mistakes. Also remember that sometimes mitigating factors, such as differences in software versions or hardware, make it appear that there is an error in the book when you’re really looking at a difference in environment. Help me provide you with better books—send me comments!

There are a few things that I won’t do for you. I won’t help you pass an exam at school. Your learning experience is important to me, which means that I want you to continue your education by working through the instructions on your own. I also don’t provide free consulting, which means I won’t check code that you created on your own for errors. In addition, if you don’t use the downloadable source, be sure to read Verifying Your Hand Typed Code for restrictions on the level of support that I provide. I’ll help you with any book-specific question, but I draw the line at that point. Let me know if you have any input or insights on my books at [email protected].

Book Reviews – Doing Your Part

This is an update of a post that originally appeared on October 4, 2013.

Readers contact me quite a lot about my books. On an average day, I receive around 65 reader e-mails about a wide range of book-related topics. Many of them are complimentary about my books and it’s hard to put down in words just how much I appreciate the positive feedback. Often, I’m humbled to think that people would take time to write.

There is another part to reader participation in books, however, and it doesn’t have anything to do with me—it has to do with other readers. When you read one of my books and find the information useful, it’s helpful to write a review about it so that others can know what to expect. I want to be sure that every reader who purchases one of my books is happy with that purchase and gets the most possible out of the book. The wording that the publisher’s marketing staff and I use to describe a book represents our viewpoint of that book and not necessarily the viewpoint of the reader. The only way that other readers will know how a book presents information from the reader perspective is for other readers to write reviews. A good review will tell:

  • What you liked about the book
  • How it met your needs
  • What it provides in the way of usable content
  • Whether you liked any intangibles, such as the author’s writing style
  • When you used the content to obtain a new job or learn a new skill
  • Who recommended the book to you

The review should also present any negatives (obviously, I want to know about the flaws, too, so that I can correct them in the next edition of the book and also discuss them on my blog):

  • Did the book provide enough detailed procedures to accomplish a task?
  • Are there significant technical flaws, and why do you feel they’re an issue?
  • Are there enough graphics to augment the text and make it clearer?
  • Is the source code useful?

Many reviewing venues, such as the one found on Amazon, also ask you to provide a rating for the book. You should rate the book based on your experience with other books and on how this particular book met your needs in learning a new topic. The kind of review to avoid writing is a rant or one that isn’t actually based on reading the whole book. As always, I’m here (at [email protected]) to answer any questions you have, and many of your questions have appeared as blog posts when the situation warrants.

So, just where do you make these reviews? The publishers sometimes provide a venue for expressing your opinion and you can certainly go to the publisher site to create such a review. I personally prefer to upload my reviews to Amazon because it’s a location that many people frequent to find out more about books. You can go to the site, click Write a Customer Review (near the bottom of the page), and then provide your viewpoint about the book.

Thank you in advance for taking the time and effort required to write a review. I know it’s time consuming, but it’s an important task that only you can perform.

Fooling Facial Recognition Software

One of the points that Luca and I made in Artificial Intelligence for Dummies, 2nd Edition; Algorithms for Dummies, 2nd Edition; Python for Data Science for Dummies; and Machine Learning for Dummies, 2nd Edition is that AI is all about algorithms and that it can’t actually think. An AI appears to think due to clever programming, but the limits of that programming quickly become apparent under testing. In the article U.S. Marines Outsmart AI Security Cameras by Hiding in a Cardboard Box, the limits of AI are almost embarrassingly apparent because the AI failed to catch even one of the Marines. In fact, it doesn’t take a Marine to outsmart an AI; the article This Clothing Line Tricks AI Cameras Without Covering Your Face tells how to do it and look fashionable at the same time. Anthropomorphizing AI and making it seem like more than it is provides one sure path to disappointment.

My book, Machine Learning Security Principles, points out a wealth of specific examples of the AI being fooled as part of an examination of machine learning-based security. Some businesses rely on facial recognition now as part of their security strategy with the false hope that it’s reliable and that it will provide an alert in all cases. As recommended in my book, machine learning-based security is just one tool that requires a human to back it up. The article, This Simple Technique Made Me Invisible to Two Major Facial Recognition Systems, discusses just how absurdly easy it is to fool facial recognition software if you don’t play by the rules; the rules being what the model developer expected someone to do.

The problems become compounded when local laws ban the use of facial recognition software due to its overuse by law enforcement in potentially less than perfect circumstances. There are reports of false arrests that could possibly have been avoided if the human doing the arresting had verified the identity of the person in question. There are lessons in all this for a business too. Using facial recognition should be the start of a more intensive process to verify a particular action, rather than an assumption that the software is going to be 100% correct.

Yes, AI, machine learning, and deep learning applications can do amazing things today, as witnessed by the explosion in use of ChatGPT for all kinds of tasks. It’s a given that security professionals, researchers, data scientists, and people in various managerial roles will increasingly use these technologies to reduce their workload and improve the overall consistency of the many tasks, including security, that these applications are good at performing. However, even as the technologies improve, people will continue to find ways to overcome them and cause them to perform in unexpected ways. In fact, it’s a good bet that the problems will increase for the foreseeable future as the technologies become more complex (hence, more unreliable). Monitoring the results of any smart application is essential, which makes humans an essential part of any solution. Let me know your thoughts about facial recognition software and other security issues at [email protected].

Is Security Research Always Useful?

This is an update of a post that originally appeared on February 19, 2016.

Anyone involved in the computer industry likely spends some amount of time reading about the latest security issues in books such as Machine Learning Security Principles. Administrators and developers probably spend more time than many people, but no one can possibly read all the security research available today. There are so many researchers looking for so many bugs in so many places and in so many different ways that even if someone had the time and inclination to read every security article produced, it would be impossible. You’d need to be the speediest reader on the planet (and then some) to even think about scratching the surface. So, you must contemplate whether all that research is actually useful or simply a method for some people to get their name on a piece of paper.

What amazes me is that, since I first wrote this blog post, I have done a considerable amount of additional reading, including research papers, and found that most exploits remain essentially the same. The techniques may differ, they may improve, but the essentials of the exploit remain the same. It turns out that humans are the weakest link in every security chain and that social engineering attacks remain a mainstay of hackers. The one thing that has changed in seven years is that the use of machine learning and deep learning techniques has automated life for the hacker, much as these technologies have automated life for everyone else. In addition, a lack of proactive privacy makes it even easier than before for a hacker to create a believable attack by using publicly available information about an intended target.

As part of researching security, you need to consider the viability of an attack, especially with regard to your organization, infrastructure, personnel, and applications. Some of the attacks require physical access to the system. In some cases, you must actually take the system apart to access components in order to perform the security trick. Many IoT attacks fall into this category. Unless you or your organization is in the habit of allowing perfect strangers physical access to your systems, which might include taking them apart, you must wonder whether the security issue is even worth worrying about. You need to ask why someone would take the time to document a security issue that’s nearly impossible to see, much less perform in a real world environment. More importantly, the moment you see that a security issue requires physical access to the device, you can probably stop reading.

You also find attacks that require special equipment to perform. The article, How encryption keys could be stolen by your lunch, discusses one such attack. In fact, the article contains a picture of the special equipment that you must build to perpetrate the attack. It places said equipment into a piece of pita bread, which adds a fanciful twist to something that is already quite odd and pretty much unworkable given that you must be within 50 cm (19.6 in) of the device you want to attack (assuming that the RF transmission conditions are perfect). Except for the interesting attack vector (using a piece of pita bread), you really have to question why anyone would ever perpetrate this attack given that social engineering and a wealth of other attacks require no special equipment, are highly successful, and work from a much longer distance.

It does pay to keep an eye on the latest and future targets of hacker attacks. Even though many IoT attacks are the stuff of James Bond today, hackers are paying attention to IoT, so it pays to secure your systems, which are likely wide open right now. As one of my experiments for Machine Learning Security Principles, I actually did hack my own smart thermostat (after which I immediately improved security). The number of IoT attacks is increasing considerably, so ensuring that you maintain electrical, physical, and application security over your IoT devices is important, but not to the exclusion of other needs.

A few research pieces become more reasonable by discussing outlandish sorts of hacks that could potentially happen after an initial break-in. The hack discussed in Design flaw in Intel chips opens door to rootkits is one of these sorts of hacks. You can’t perpetrate the hack until after breaking into the system some other way, but the break-in has serious consequences once it occurs. Even so, most hackers won’t take the time because they already have everything needed—the hack is overkill. However, this particular kind of hack should sound alarms in the security professional’s head. The Windows 11 requirement for the TPM 2.0 chip is supposed to make this kind of attack significantly harder, perhaps impossible, to perform. Of course, someone has already found a way to bypass the TPM 2.0 chip requirement and it doesn’t help that Microsoft actually signed off on a piece of rootkit malware for installation on a Windows 11 system. So, security research, even when you know that the actual piece of research isn’t particularly helpful, can become a source of information for thought experiments of what a hacker might do.

The articles that help most provide a shot of reality into the decidedly conspiracy-oriented world of security. For example, Evil conspiracy? Nope, everyday cyber insecurity, discusses a series of events that everyone initially thought pointed to a major cyber attack. It turns out that the events occurred at the same time by coincidence. The article author thoughtfully points out some of the reasons that the conspiracy theories seemed a bit out of place at the outset anyway.

It also helps to know the true sources of potential security issues. For example, the articles, In the security world, the good guys aren’t always good and 5 reasons why newer hires are the company’s biggest data security risk, point out the sources you really do need to consider when creating a security plan. These are the sorts of articles that should attract your attention because they describe a security issue that you really should think about.

The point is that you encounter a lot of information out there that doesn’t help you make your system any more secure. It may be interesting if you have the time to read it, but the tactics truly aren’t practical and no hacker is going to use them. Critical thinking skills are your best asset when building your security knowledge. Let me know about your take on security research at [email protected].

Automation and the Future of Human Employment

This is an update of a post that originally appeared on May 9, 2016.

It wasn’t long ago that I wrote Robotics and Your Job to consider the role that robots will play in human society in the near future. Of course, robots are already doing mundane chores, and that list of chores will increase as robot capabilities increase. The question of what sorts of work humans will do in the future has crossed my mind quite a lot as I’ve written a number of AI, machine learning, and deep learning books such as Artificial Intelligence for Dummies, 2nd Edition. In fact, both Luca and I have discussed the topic in depth. It isn’t just robotics, but the whole issue of automation that is important. Robots actually fill an incredibly small niche in the much larger topic of automation. Although articles like The end of humans working in service industry? seem to say that robots are the main issue, automation comes in all sorts of guises. So here we are, seven years later: robot theme parks are still in the news, and robots are making an impact as security guards. In addition, Huis Ten Bosch still has Robot Kingdom going (you can select either Japanese or English as needed to read the information). The fact of the matter is that in seven years robots have become a significant part of most people’s lives, and the impact will continue to grow. Not that I’m actually expecting an I, Robot experience any time soon.

My vision for the future is that people will be able to work in occupations with lower risks, higher rewards, and greater interest. Unfortunately, not everyone wants a job like that. Some people really do want to go to work, clock in, place a tiny cog in a somewhat large wheel all day, clock out, and go home. They want something mindless that doesn’t require much effort, so losing service and assembly line type jobs to automation is a problem for them. In The Robots are Coming for Your Job, Too the author paints a pretty gloomy picture for anyone who thinks their service job will still exist in 50 years. The reality is that any job that currently pays under $25.00 an hour is likely to become a victim of automation. Many people insist that they’re irreplaceable, but the fact is that automation can easily take their job, and employers are looking forward to the change because automation doesn’t require healthcare, pensions, vacation days, sick days, or salaries. Most importantly, automation does as it’s told.

In the story The rise of greedy robots, the author lays out the basis for an increase in automation that maximizes business profit at the expense of workers. Articles such as On the Phenomenon of Bullshit Jobs tell why people are still working a 40 hour work week when it truly isn’t necessary to do so. In short, if you really do insist on performing a task that is essentially pointless, the government and industry are perfectly willing to let you do so until a time when technology is so entrenched that it’s no longer possible to do anything about it (no, I’m not making this up). As mentioned earlier, even some relatively essential jobs, such as security, have a short life expectancy with the way things are changing.

The question of how automation will affect human employment in the future remains. Theoretically, people could work a 15 hour work week even now, but then we’d have to give up some of our consumerism—the purchase of gadgets we really don’t need. Earlier, I talked about jobs that are safer, more interesting, and more fulfilling. There are also those pointless jobs that the government will doubtless prop up at some point to keep people from rioting. However, there is another occupation that will likely become a major source of employment, but only for the nit-picky, detail person. In The thin line between good and bad automation the author explores the problem of scripts calling scripts. Even though algorithms will eventually create and maintain other algorithms, which in turn means that automation will eventually build itself, someone will still have to monitor the outcomes of all that automation. In addition, the search for better algorithms continues (as described in The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World and More data or better models?). Of course, these occupations still require someone with a great education and a strong desire to do something significant as part of their occupation.

The point of all this speculation is that it isn’t possible to know precisely how the world will change due to the effects of automation, but it will most definitely change. Even though automation currently has limits, scientists are working on methods to extend it even further so that the world science fiction authors have written about for years will finally come into being (perhaps not quite in the way they had envisioned, however). Your current occupation may not exist 10 years from now, much less 50 years from now. The smart thing to do is to assume your job is going to be gone and that you really do need a Plan B in place—a Plan B that may call for an increase in flexibility, training, and desire to do something interesting, rather than the same mundane task you’ve plodded along doing for the last ten years. Let me know your thoughts on the effects of automation at [email protected].

Technology and Child Safety

This is an update of a post that originally appeared on January 20, 2016.

A little over seven years ago, I wrote that I had read an article in ComputerWorld, Children mine cobalt used in smartphones, other electronics, that had me thinking yet again about how people in rich countries tend to ignore the needs of those in poor countries. I had sincerely hoped at the time that things would be different, better, in seven years. Well, they’re worse! We’ve increased our use of cobalt dramatically in order to create supposedly green cars. The picture at the beginning of the ComputerWorld article says it all, but the details will have you wondering whether a smartphone or an electric car really is worth some child’s life. That’s right, any smartphone or electric car you buy may be killing someone, and in a truly horrid manner. Children as young as 7 years old are mining the cobalt needed for the batteries (and other components) in the smartphones and electric cars that people seem to feel are so necessary for life (they aren’t, you know; food, water, clothing, shelter, sleep, air, and reproduction are necessities, everything else is a luxury).

The problem doesn’t stop when someone gets rid of the smartphone, electric car, or other technology. Other children end up dismantling the devices sent for recycling. That’s right, rich countries’ efforts to keep electronics out of their landfills are also killing children because countries like India put these children to work taking the devices apart in unsafe conditions. Recycled wastes go from rich countries to poor countries because the poor countries need the money for necessities, like food. Often, these children are incapable of working by the time they reach 35 or 40 due to health issues induced by their forced labor. In short, the quality of their lives is made horribly low so that it’s possible for people in rich countries to enjoy something that truly isn’t necessary for life. To make matters worse, the vendors of these products build in obsolescence (making them unrepairable) so they can sell more products and make more money, increasing the devastation visited on children.

I’ve written other blog posts about the issues of technology pollution. However, the emphasis of these previous articles has been on the pollution itself. Taking personal responsibility for the pollution you create is important, but we really need to do more. Robotic (autonomous) mining is one way to keep children out of the mines and projects such as UX-1 show that it’s entirely possible to use robots in place of people today. The weird thing is that autonomous mining would save up to 80% of the mining costs of today, so you have to wonder why manufacturers aren’t rushing to employ this solution.

In addition, off world mining would keep the pollution in space, rather than on planet earth. Of course, off world mining also requires a heavy investment in robots, but it promises to provide a huge financial payback in addition to keeping earth a bit cleaner. The point is that there are alternatives that we’re not using. Robotics presents an opportunity to make things right with technology and I’m excited to be part of that answer in writing books such as Machine Learning Security Principles; Artificial Intelligence for Dummies, 2nd Edition; Algorithms for Dummies, 2nd Edition; Python for Data Science for Dummies; and Machine Learning for Dummies, 2nd Edition.

Unfortunately, companies like Apple, Samsung, and many others simply thumb their noses at laws that are in place to protect the children in these countries because they know you’ll buy their products. Yes, they make official statements, but read their statements in that first article and you’ll quickly figure out that they’re excuses, and poorly made excuses at that. They don’t have to care because no one is holding them to account. People in rich countries don’t care because their own backyards aren’t sullied and their own children remain safe. It’s not that I have a problem with technology; quite the contrary, I have a problem with the manner in which technology is currently being made and supported. We need to do better. So, the next time you think about buying electronics, consider the real price for that product. Let me know what you think about polluting other countries to keep your country clean at [email protected].