Making Algorithms Useful

This is an update of a post that originally appeared on December 2, 2015.

Writing about machine learning and deep learning in my various books has been interesting because it turns math into something more than a way to calculate. Machine learning is about having inputs and a desired result, and then asking the machine to create an algorithm that will produce the desired result from the inputs. It's about generalization: you know specific inputs and their specific results, but you want an algorithm that will produce similar results for similar inputs it has never seen before (a minimal sketch of the idea appears after the list below). This is more than just math. In fact, there are five schools of thought (tribes) regarding machine learning algorithms that Luca and I introduce you to in books such as Machine Learning Security Principles, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition:

  • Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.
  • Connectionists: The origin of this tribe is in neuroscience. This group relies on backpropagation to solve problems.
  • Evolutionaries: The origin of this tribe is in evolutionary biology. This group relies on genetic programming to solve problems.
  • Bayesians: The origin of this tribe is in statistics. This group relies on probabilistic inference to solve problems.
  • Analogizers: The origin of this tribe is in psychology. This group relies on kernel machines to solve problems.

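To make the generalization idea concrete, here is a minimal sketch using Python and scikit-learn. The dataset and model choice are my own illustrative assumptions, not an example drawn from any of the books named above: the point is simply that the machine learns from known input/result pairs and is then judged on inputs it never saw.

```python
# Minimal sketch of generalization: learn from known input/result pairs,
# then ask for predictions on inputs the model has never seen.
# The dataset and model are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold some examples back so we can test generalization on unseen inputs.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)            # learn an algorithm from examples
predictions = model.predict(X_test)    # apply it to inputs it never saw

print("Accuracy on unseen inputs:", accuracy_score(y_test, predictions))
```
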
Of course, the problem with any technology is making it useful. I'm not talking about useful in a theoretical sense, but useful in a way that affects everyone. In other words, you must create a need for the technology so that people will continue to fund it. Machine learning and deep learning are already part of many of the things you do online. For example, when you buy a product on Amazon and Amazon suggests other products you might want to add to your cart, you're seeing the result of machine learning. Part of the content of our book chapters is devoted to pointing out these real-world uses for machine learning.
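
Amazon's actual recommendation engine is proprietary, and nothing below comes from it or from the books; it's only a toy item-to-item similarity sketch over made-up ratings, meant to show the flavor of "people who bought this also bought that."

```python
# Toy item-to-item recommendation sketch (not Amazon's actual system).
# Rows are users, columns are products; values are ratings (0 = not rated).
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between product columns.
norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)

def suggest(product_index, top_n=2):
    """Return the indexes of the products most similar to the one given."""
    scores = similarity[product_index].copy()
    scores[product_index] = -1.0           # don't recommend the item itself
    return np.argsort(scores)[::-1][:top_n]

print("If you bought product 0, consider products:", suggest(0))
```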

As I've written new books and updated existing ones, I've seen an almost magical progression in the capabilities of machine learning and deep learning applications such as ChatGPT (Chat Generative Pre-trained Transformer), which can produce some pretty amazing output.

Some of these applications, such as Siri and Alexa, continue to learn as you use them. The more you interact with them, the better they know you and the better they respond to your needs. The algorithms that these machine learning systems create improve as the database of your specific input grows. The algorithms are tuned to you specifically, so the experience one person has differs from the experience another person has, even if the two people ask the same question.
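
Neither Apple nor Amazon publishes how Siri or Alexa adapt to a user, so the snippet below is only a hedged sketch of the general pattern described above: a model that updates incrementally as one user's interaction log grows. The features, labels, and the choice of scikit-learn's SGDClassifier are my own illustrative assumptions.

```python
# Sketch of per-user tuning: a model that keeps improving as one user's
# interaction history grows. The features and labels are invented;
# real assistants use far richer data and different models.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])   # e.g., 0 = "play news", 1 = "play music"

for day in range(1, 6):
    # Each day brings a new batch of this user's requests (synthetic here).
    X_day = rng.normal(size=(20, 3))
    y_day = (X_day[:, 0] + 0.5 * X_day[:, 1] > 0).astype(int)
    model.partial_fit(X_day, y_day, classes=classes)   # update, don't retrain
    print(f"Day {day}: accuracy on today's requests {model.score(X_day, y_day):.2f}")
```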

Machine learning is a big mystery to many people today, while other people have gained enough experience to have strong opinions about it. Because I continue to write new machine learning/deep learning books and update others, it would be interesting to hear your questions about machine learning and deep learning. After all, I'd like to tune the content of my books to meet as many needs as I can. Where do you see this technology headed? What confuses you about it? Talk to me at [email protected].

Contemplating the Issue of Bias in Data Science

When Luca and I wrote Python for Data Science for Dummies, we tried to address a range of topics that aren't well covered in other places. Imagine my surprise when I saw a perfect article to illustrate one of these topics in ComputerWorld this week, Maybe robots, A.I. and algorithms aren't as smart as we think. With the use of AI and data science growing so quickly, you might think that computers can think. They can't. It's important to reiterate that computers can emulate or simulate the thinking process, but they don't actually think. A computer is a machine designed to perform math quite quickly. If we want thinking computers, then we need a different kind of machine. It's the reason I wrote the Computers with Common Sense? post not too long ago. The sort of computer that could potentially think is a neural network, and I discuss them in the Considering the Future of Processing Power post. (Even Intel's latest 18-core processor, which is designed for machine learning and analytics, isn't a neural network; it simply performs the tasks that processors do now more quickly.)

However, the situation is worse than you might think, which is the reason for mentioning the ComputerWorld article. A problem occurs when the computer scientists and data scientists working together to create algorithms that make it appear that computers can think forget that the computers really can't do any such thing. Luca and I discuss the effects of bias in Chapter 18 of our book. The chapter might have seemed academic at one time, something of interest but potentially not all that useful. Today that chapter has taken on added significance. Read the ComputerWorld article and you find that Flickr recently released a new image recognition technology. The effects of not considering the role of bias in interpreting data and in the use of algorithms have had horrible results. The Guardian goes into even more detail, describing how the program has tagged black people as apes and animals. Obviously, no one wanted that particular result, but forgetting that computers can't think has caused precisely that unwanted result.
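
The articles don't describe how such a failure should have been caught, but a basic sanity check is to break error rates out by group instead of trusting a single overall accuracy number. The groups, labels, and results below are entirely made up for illustration.

```python
# A basic bias check: never trust one overall accuracy number; break the
# errors out by group. All data here is invented for illustration.
from collections import defaultdict

# (group, predicted_label, true_label) for a handful of test images
results = [
    ("group_a", "person", "person"), ("group_a", "person", "person"),
    ("group_a", "animal", "person"), ("group_b", "person", "person"),
    ("group_b", "person", "person"), ("group_b", "person", "person"),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, predicted, actual in results:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```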

AI is an older technology that isn't well understood because we don't really understand our own thinking processes. It isn't possible to create a complete and useful model of something until you understand what it is that you're modeling, and we don't truly understand intelligence. Therefore, it's hardly surprising that AI has taken so long to complete even the smallest baby steps. Data science is a newer technology that seeks to help people see patterns in huge data sets, to understand the data better and to create knowledge where none existed before. Neither technology is truly designed for stand-alone purposes yet. While I find Siri an interesting experiment, it's just that, an experiment.

The Flickr application tries to take the human out of the loop and you see the result. Technology is designed to help mankind achieve more by easing the mundane tasks performed to accomplish specific goals. When you take the person out of the loop, what you have is a computer that is only playing around at thinking from a mathematical perspective-nothing more. It’s my hope that the Flickr incident will finally cause people to start thinking about computers, algorithms, and data as the tools that they truly are-tools designed to help people excel in new ways. Let me know your thoughts about AI and data science at [email protected].

 

Self-driving Cars in the News

I remember reading about self-driving cars in science fiction novels. Science fiction has provided me with all sorts of interesting ideas to pursue as I've gotten older. Many things I thought would be impossible have become reality over the years, and things I thought I'd never see five years ago I'm seeing in reality today. I discussed some of the technology behind self-driving cars in my Learning as a Human post. The article was fine as far as it went, but readers have taken me to task more than a few times for becoming enamored with the technology and not discussing its practical realities.

The fact of the matter is that self-driving cars are already here to some extent. Ford has introduced cars that can park themselves. The Ford view of cars is the one that most people can accept. It's an anticipated next step in the evolution of driving. People tend to favor small changes in technology; changes that are too large tend to shock them and aren't readily accepted. A car that improves slightly on the model someone already owns is an easier sell than one that changes everything at once.

Google’s new self-driving car might be licensed in Nevada, but don’t plan on seeing it in your city anytime soon (unless you just happen to live in Nevada, of course). A more realistic approach to self-driving cars will probably come in the form of conveyances used in specific locations. For example, you might see self-driving cars used at theme parks and college campuses where the controlled environment will make it easier for them to navigate. More importantly, these strictly controlled situations will help people get used to the idea of seeing and using self-driven vehicles. The point is to build trust in them in a manner that people can accept.

Of course, the heart of the matter is what self-driving cars can actually provide in the way of a payback. According to a number of sources, they could reduce driving costs by $190 billion per year in health and accident savings. That's quite a savings. Money talks, but people have ignored monetary benefits in the past to ensure they remain independent. It will take time to discover whether the potential cost savings actually make people more inclined to use self-driving cars. My guess is that people will refuse to give up their cars unless there is something more than monetary and health benefits on offer, so consumers won't be jumping at the chance to buy one for the time being.

Even though no one has really talked about it much, self-driving cars have the potential to provide all sorts of other benefits. For example, because self-driving cars will obey the speed laws and run at the most efficient speeds possible in a given situation, cars will become more fuel efficient and produce less pollution. The software provided with the vehicle will probably allow the car to choose the most efficient route to a destination and provide the means for the car to automatically navigate around obstructions, such as accidents (which will be notably fewer). People could also be more assured of getting to their destination on time because they won't get lost. Being able to work during the commute will let people spend more quality time with family. It's the intangible benefits that will eventually make the self-driving car seem like a good way to do things.
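
Production navigation software is far more elaborate, but the "most efficient route" idea reduces to a shortest-path search. Here's a minimal Dijkstra sketch over an invented road graph; the place names and travel times are illustrative only.

```python
# Minimal shortest-path sketch: pick the most efficient route over a tiny,
# invented road graph where edge weights are travel minutes.
import heapq

roads = {
    "home":     {"main_st": 4, "back_rd": 7},
    "main_st":  {"highway": 6, "downtown": 9},
    "back_rd":  {"downtown": 5},
    "highway":  {"office": 3},
    "downtown": {"office": 4},
    "office":   {},
}

def best_route(start, goal):
    """Dijkstra's algorithm: returns (total_minutes, list_of_stops)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in roads[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return None

print(best_route("home", "office"))   # (13, ['home', 'main_st', 'highway', 'office'])
```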

The self-driving car is available today. It won't be long before you'll be able to buy one. You can already get a self-parking Ford, so the next step really isn't that far away. The question is whether you really want to take that step; some people may see self-driving cars as too much of a gamble for now. Let me know your thoughts on self-driving cars, their potential to save lives, reduce costs, create a cleaner environment, and make life generally more pleasant at [email protected].

Learning as a Human

I started discussing the whole idea of robot perception at a primate level in Seeing as a Human. In that post I discussed the need for a robot not just to see objects, but to understand that each object is something unique. Comprehending what is being seen is something that robots must do in order to interact with society at large in a manner that humans will understand and appreciate. Before the concepts espoused in works of science fiction such as I, Robot can be realized, robots must first be able to interact with objects in ways that programming simply can't anticipate. That's why the technology being explored by deep learning is so incredibly important to the advancement of robotics.

Two recent articles point to the fact that deep learning techniques are already starting to have an effect on robotic technology. The first is about the latest Defense Advanced Research Projects Agency (DARPA) challenge. A robot must be able to drive a vehicle, exit the vehicle, and then perform certain tasks. No, this isn't science fiction; it's an actual real-world exercise. This challenge is significantly different from self-driving cars. In fact, people are actually riding in self-driving cars now, and I see a future where all cars will become self-driving. However, asking a robot to drive a car, exit it, and then do something useful is a significantly more difficult test of robotic technology. To make such a test successful, the robot must be able to learn, at least to some extent, from each trial. Deep learning provides the means for the robot to learn.

The second article seems mundane by comparison until you consider just what it is that the robot is trying to do: cook a meal that it hasn't been trained to cook. In this case, the robot watches a YouTube video to learn how to cook the meal just as a human would. Performing this task requires that the robot learn by watching the video, which most people see as an ability only a human possesses. The programming behind this feat breaks cooking down into tasks that the robot can perform. Each of these tasks is equivalent to a skill that a human would possess. Unlike humans, a robot can't learn new skills yet, but it can reorganize the skills it does possess in an order that makes completing the recipe possible. So, if a recipe calls for coddling an egg and the robot doesn't know how to perform this task, it's unlikely that the robot will actually be able to use that recipe. A human, on the other hand, could learn to coddle an egg and then complete the recipe. So, we're not talking anything near human-level intelligence yet.
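
The article doesn't publish the robot's actual planner, so the following is only a rough sketch of the decomposition idea it describes: map each recipe step onto a skill the robot already has and stop when a step (such as coddling an egg) has no match. The skill names and recipes are invented.

```python
# Sketch of the decomposition idea: each recipe step must map onto a skill
# the robot already possesses; an unknown step stops the whole plan.
# The skill names and recipes are invented for illustration.
known_skills = {"crack egg", "whisk", "pour", "heat pan", "stir", "plate food"}

def plan(recipe_steps):
    plan_steps = []
    for step in recipe_steps:
        if step not in known_skills:
            return None, step          # can't complete the recipe
        plan_steps.append(step)
    return plan_steps, None

omelet = ["crack egg", "whisk", "heat pan", "pour", "stir", "plate food"]
coddled = ["coddle egg", "plate food"]

for recipe in (omelet, coddled):
    steps, missing = plan(recipe)
    if steps:
        print("Can cook it:", " -> ".join(steps))
    else:
        print(f"Can't cook it: no skill for '{missing}'")
```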

The potential for robots to free humans from mundane tasks is immense. However, the potential for robots to make life harder for humans is equally great (read Robot Induced Slavery). We’re at a point where some decisions about how technology will affect our lives must be made. Unfortunately, no one seems interested in making such decisions outright and the legal system is definitely behind the times. This means that each person must choose the ways in which technology affects his or her life quite carefully. What is your take on robotic technology? Let me know at [email protected].

 

Seeing as a Human

Neural networks intrigue me because of their ability to change the way in which computers work at a basic level. I last talked about them in my Considering the Future of Processing Power post. That post fed into the A Question of Balancing Robot Technologies post, which explored possible ways in which neural networks could be used. The idea that neural networks provide a means of learning and of pattern recognition is central to the goals that this technology seeks to achieve. Even though robots are interesting, neural networks must first solve an even more basic problem. Current robot technology is hindered by the robot's inability to see well enough to avoid obstacles such as chairs in a room. There are all sorts of workarounds for the problem, but they all end up being kludges. A recent ComputerWorld article, Computer vision finally matches primates' ability, gives me hope that we may finally be turning the corner on making robots that can interact well with the real world.

In this case, the focus is on making it possible for a robot to see just as humans do. Actually, the sensors would be used for all sorts of other technologies, but it's the use in robots that interests me most. A robot that can truly see as well as a human would be invaluable when it comes to performing complex tasks, such as monitoring a patient or fighting a fire. In both cases, it's the ability to determine what is being seen that is important. In the case of a robotic nurse, it becomes possible to see the same sorts of things a human nurse sees, such as the start of an infection. In the case of a firefighting robot, it becomes possible to pick out the body of someone to rescue amid the flames. A video camera alone can't give a robot that kind of understanding; the data the camera provides still has to be interpreted. Recent advances in computer vision suggest that these possibilities are moving closer to reality.

However, just seeing isn’t enough either. Yes, picking out patterns in the display and understanding where each object begins and ends is important. However, in order to use the data, a robot would also need to comprehend what each object is and determine whether that object is important. A burning piece of wood in a fire might not be important, but the human lying in the corner needing help is. The robot would need to comprehend that the object it sees is a human and not a burning piece of wood.

Standard processors would never work for these applications because they work too slowly and can't retain what they have learned. Neural networks make it possible for a robot to detect objects, determine which objects are important, focus on specific objects, and then perform tasks based on those selected objects. A human would still need to make certain decisions, but a robot could quickly assess a situation, tell the human operator only the information needed to make a decision, and then act on that decision in the operator's stead. In short, neural networks make it possible to begin looking at robots as valuable companions to humans in critical situations.
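
A real vision system is vastly larger, but the core mechanism, a network adjusting its weights until it recognizes a pattern, fits in a few lines of NumPy. This toy network learns the XOR pattern with backpropagation; it's purely illustrative and has nothing to do with any specific robot or product.

```python
# Tiny two-layer neural network learning the XOR pattern with backpropagation.
# Far smaller than anything used for robot vision, but the mechanism
# (adjust weights until the pattern is recognized) is the same in spirit.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass (gradients of squared error)
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_output
    b2 -= lr * d_output.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

# After training, the outputs usually end up close to [0, 1, 1, 0].
print(np.round(output, 2).ravel())
```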

Robot technology still has a long way to go before you start seeing robots of the sort presented in Star Wars. However, each step brings us a little closer to realizing the potential of robots to reduce human suffering and to reduce the potential for injuries. Let me know your thoughts about neural networks at [email protected].

Considering the Future of Processing Power

The vast majority of processors made today perform tasks as procedures. The processor looks at an instruction, performs the task specified by that instruction, and then moves on to the next instruction. It sounds like a simple way of doing things, and it is. Because a processor can perform instructions incredibly fast (far faster than any human can even imagine), it can appear that the computer is thinking. What you're actually seeing is a processor performing one instruction at a time, incredibly fast, guided by really clever programming. You truly aren't seeing any sort of thought in the conventional (human) sense of the term. Even when using Artificial Intelligence (AI), the process is still a procedure that only simulates thought.

Most chips today have multiple cores. Some systems have multiple processors. The addition of cores and processors means that the system as a whole can perform more than one task at once—one task for each core or processor. However, the effect is still procedural in nature. An application can divide itself into parts and assign each core or processor a task, which allows the application to reach specific objectives faster, but the result is still a procedure.
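
As a concrete (if contrived) illustration of an application dividing itself into parts, the sketch below splits one procedural job across four cores using Python's standard multiprocessing module. The workload is invented; the point is that each worker still executes an ordinary procedure, just on its own core.

```python
# Dividing one job across cores: each worker still runs an ordinary
# procedure, one instruction after another, just on its own core.
from multiprocessing import Pool

def count_primes(bounds):
    """Deliberately plain, procedural work: count primes in [start, end)."""
    start, end = bounds
    count = 0
    for n in range(max(start, 2), end):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split one big job into four parts, one per core.
    parts = [(0, 25000), (25000, 50000), (50000, 75000), (75000, 100000)]
    with Pool(processes=4) as pool:
        counts = pool.map(count_primes, parts)   # runs in parallel
    print("Primes below 100,000:", sum(counts))
```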

The reason the previous two paragraphs are necessary is that even developers have started buying into their own clever programming and feel that application programming environments somehow work like magic. There is no magic involved, just incredibly fast processors guided by even more amazing programming. In order to gain a leap in the ability of processors to perform tasks, the world needs a new kind of processor, which is the topic of this post (finally). The kind of processor that holds the most promise right now is the neural processor. Interestingly enough, science fiction has already beaten science fact to the punch by featuring neural processing in shows such as Star Trek and movies such as The Terminator.

Companies such as IBM are working to turn science fiction into science fact. The first story I read on this topic was several years ago (see IBM creates learning, brain-like, synaptic CPU). This particular story points out three special features of neural processors. The first is that a neural processor relies on massive parallelism: instead of just four or eight or even sixteen tasks being performed at once, even a really simple neural processor has in excess of 256 tasks being done. The second is that the electronic equivalents of neurons in such a processor work cooperatively to perform tasks, so that the processing power of the chip is magnified. The third is that the chip actually remembers what it did last and forms patterns based on that memory. This third element is what really sets neural processing apart and makes it the kind of technology that is needed to advance to the next stage of computer technology.
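
The article describes these features but not the implementation, so the snippet below is only a software analogy for the third feature: a tiny Hopfield-style associative memory that stores patterns in its weights and recalls the closest one from a noisy input. It is an analogy, not a description of how IBM's hardware works.

```python
# Software analogy for "remembers what it did and forms patterns":
# a tiny Hopfield-style associative memory. Patterns are stored in the
# weights, and a noisy input settles back onto the closest stored pattern.
# This is only an analogy; it is not how IBM's hardware actually works.
import numpy as np

patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])

# Hebbian-style storage: strengthen connections between co-active units.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(noisy, steps=5):
    state = noisy.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)   # settle toward a memory
    return state

noisy = patterns[0].copy()
noisy[0] *= -1                      # flip one element to add noise
print("Recovered first pattern:", np.array_equal(recall(noisy), patterns[0]))
```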

In the three years since the original story was written, IBM (and other companies, such as Intel) have made some great forward progress. When you read IBM Develops a New Chip That Functions Like a Brain, you see that the technology has indeed moved forward. The latest chip is actually able to react to external stimuli. It can understand, to an extremely limited extent, the changing patterns of light (for example) it receives. An action is no longer just a jumble of pixels, but is recognized as being initiated by someone or something. The thing that amazes me about this chip is that its power consumption is so low. Most of the efforts so far seem to focus on mobile devices, which makes sense because these processors will eventually end up in devices such as robots.

The eventual goal of all this effort is a learning computer—one that can increase its knowledge based on the inputs it receives. This technology would change the role of a programmer from creating specific instructions to one of providing basic instructions and then providing the input needed for the computer to learn what it needs to know to perform specific tasks. In other words, every computer would have a completely customized set of learning experiences based on specific requirements for that computer. It’s an interesting idea and an amazing technology. Let me know your thoughts about neural processing at [email protected].