Making Algorithms Useful

I’m currently engaged in writing Machine Learning for Dummies. The book is interesting because it turns math into something more than a way to calculate. Machine learning is about having inputs and a desired result, and then asking the machine to create an algorithm that will produce the desired result from the inputs. It’s about generalization. You know the specific inputs and the specific results, but you want an algorithm that will produce similar results from similar inputs, even inputs it has never seen before. This is more than just math. In fact, there are five schools of thought (tribes) regarding machine learning algorithms that Luca and I introduce you to in Machine Learning for Dummies:

  • Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.
  • Connectionists: The origin of this tribe is in neuroscience. This group relies on backpropagation to solve problems.
  • Evolutionaries: The origin of this tribe is in evolutionary biology. This group relies on genetic programming to solve problems.
  • Bayesians: The origin of this tribe is in statistics. This group relies on probabilistic inference to solve problems.
  • Analogizers: The origin of this tribe is in psychology. This group relies on kernel machines to solve problems.
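The core idea shared by all five tribes—recovering a rule from example inputs and outputs, then generalizing it to inputs the machine has never seen—can be sketched in a few lines of plain Python. This is just my own illustration (an ordinary least-squares line fit), not a technique from any particular tribe:

```python
# A tiny illustration of "learning from examples": given input/output
# pairs, recover the rule that produced them, then generalize to a
# new input the machine has never seen.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Known inputs and the desired results they produced (here, y = 2x + 1).
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]

a, b = fit_line(xs, ys)

# Generalization: a prediction for an input not in the training data.
print(round(a * 10 + b))  # 21
```

Real machine learning algorithms work with far messier data and far richer rules, but the shape of the problem—examples in, generalizable rule out—is the same.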

Of course, the problem with any technology is making it useful. I’m not talking about useful in a theoretical sense, but useful in a way that affects everyone. In other words, you must create a need for the technology so that people will continue to fund it. Machine learning is already part of many of the things you do online. For example, when you go to Amazon to buy a product and Amazon suggests other products that you might want to add to your cart, you’re seeing the result of machine learning. Part of the content for the chapters of our book is devoted to pointing out these real-world uses for machine learning.
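Amazon’s actual system is proprietary, but the basic “customers who bought this also bought that” idea can be sketched as simple co-occurrence counting. The product names below are made up for illustration:

```python
from collections import Counter

# Hypothetical purchase history: each cart is a set of items bought together.
carts = [
    {"camera", "memory card", "tripod"},
    {"camera", "memory card", "camera bag"},
    {"camera", "tripod"},
    {"novel", "bookmark"},
]

def suggest(item, carts, top=2):
    """Recommend the items that most often appear in carts alongside `item`."""
    counts = Counter()
    for cart in carts:
        if item in cart:
            counts.update(cart - {item})
    return [name for name, _ in counts.most_common(top)]

# Both "memory card" and "tripod" co-occur with "camera" twice.
print(suggest("camera", carts))
```

Production recommenders add weighting, personalization, and learned models on top, but counting what goes together is the seed of the idea.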

Some uses are almost, but not quite, ready for prime time. One of these is the class of AIs, such as Siri, that people talk with. The more you interact with them, the better they know you and the better they respond to your needs. The algorithms that these machine learning systems create get better and better as the database of your specific input grows. The algorithms are tuned to you specifically, so the experience one person has differs from the experience another person has, even if the two people ask the same question. I recently read about one such system under development, Nara. What makes Nara interesting is that she seems more generalized than other forms of AI currently out there and can therefore perform more tasks. Nara comes from the Connectionists and attempts to mimic the human mind. She’s all about making appropriate matches—everything from your next dinner to your next date. Reading about Nara helps you understand machine learning just a little better, at least from the Connectionist perspective.

Machine learning is a big mystery to many people today. Given that I’m still writing this book, it would be interesting to hear your questions about machine learning. After all, I’d like to tune the content of my book to meet the most needs that I can. I’ve written a few posts about this book already and you can see them in the Machine Learning for Dummies category. After reading the posts, please let me know your thoughts on machine learning and AI. Where do you see it headed? What confuses you about it? Talk to me at John@JohnMuellerBooks.com.

 

Contemplating the Issue of Bias in Data Science

When Luca and I wrote Python for Data Science for Dummies we tried to address a range of topics that aren’t well covered in other places. Imagine my surprise when I saw a perfect article to illustrate one of these topics in ComputerWorld this week, Maybe robots, A.I. and algorithms aren’t as smart as we think. With the use of AI and data science growing exponentially, you might think that computers can think. They can’t. Computers can emulate or simulate the thinking process, but they don’t actually think. A computer is a machine designed to perform math quite quickly. If we want thinking computers, then we need a different kind of machine. It’s the reason I wrote the Computers with Common Sense? post not too long ago. The sort of computer that could potentially think is a neural network, and I discuss them in the Considering the Future of Processing Power post. (Even Intel’s latest 18-core processor, which is designed for machine learning and analytics, isn’t a neural network—it simply performs the tasks that processors do now more quickly.)

However, the situation is worse than you might think, which is the reason for mentioning the ComputerWorld article. A problem occurs when the computer scientists and data scientists who work together to create algorithms that make computers appear to think forget that computers really can’t do any such thing. Luca and I discuss the effects of bias in Chapter 18 of our book. The chapter might have seemed academic at one time—something of interest, but potentially not all that useful. Today that chapter has taken on added significance. Read the ComputerWorld article and you find that Flickr recently released a new image recognition technology. The effects of not considering the role of bias in interpreting data and in the use of algorithms have had horrible results. The Guardian goes into even more detail, describing how the program has tagged black people as apes and animals. Obviously, no one wanted that particular result, but forgetting that computers can’t think has caused precisely that unwanted result.
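The Flickr failure is one instance of a general rule: an algorithm is only as good as the data it trains on. Here is a deliberately crude sketch of my own (nothing like Flickr’s actual system) showing how a skewed training set bakes bias directly into predictions:

```python
# A naive nearest-neighbor classifier trained on a skewed sample.
# Hypothetical training data: (brightness, label). The sample is biased
# because it contains no bright examples of class B at all.
training = [
    (0.9, "A"), (0.8, "A"), (0.85, "A"), (0.95, "A"),
    (0.2, "B"), (0.1, "B"), (0.15, "B"),
]

def predict(brightness):
    """Label a new example with the label of its nearest training point."""
    nearest = min(training, key=lambda t: abs(t[0] - brightness))
    return nearest[1]

# A bright member of class B gets misclassified, because the training
# data never showed the algorithm that class B can be bright. The math
# is working perfectly; the data taught it the wrong lesson.
print(predict(0.88))  # 'A' -- wrong if the true class is B
```

The computer isn’t thinking, so it can’t notice that its training sample is unrepresentative—only the humans in the loop can do that.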

AI is an older technology that isn’t well understood because we don’t really understand our own thinking processes. It isn’t possible to create a complete and useful model of something until you understand what it is that you’re modeling and we don’t truly understand intelligence. Therefore, it’s hardly surprising that AI has taken so long to complete even the smallest baby steps. Data science is a newer technology that seeks to help people see patterns in huge data sets—to understand the data better and to create knowledge where none existed before. Neither technology is truly designed for stand-alone purposes yet. While I find Siri an interesting experiment, it’s just that, an experiment.

The Flickr application tries to take the human out of the loop and you see the result. Technology is designed to help mankind achieve more by easing the mundane tasks performed to accomplish specific goals. When you take the person out of the loop, what you have is a computer that is only playing around at thinking from a mathematical perspective—nothing more. It’s my hope that the Flickr incident will finally cause people to start thinking about computers, algorithms, and data as the tools that they truly are—tools designed to help people excel in new ways. Let me know your thoughts about AI and data science at John@JohnMuellerBooks.com.

 

Self-driving Cars in the News

I remember reading about self-driving cars in science fiction novels. Science fiction has provided me with all sorts of interesting ideas to pursue as I’ve gotten older. Many things I thought would be impossible have become reality over the years, and things I thought I’d never see five years ago are reality today. I discussed some of the technology behind self-driving cars in my Learning as a Human post. The article was fine as far as it went, but readers have taken me to task more than a few times for becoming enamored with the technology and not discussing the reality of the technology.

The fact of the matter is that self-driving cars are already here to some extent. Ford has introduced cars that can park themselves. The Ford view of cars is the one that most people can accept. It’s an anticipated next step in the evolution of driving. People tend to favor small changes in technology. Changes that are too large tend to shock them and aren’t readily accepted.

Google’s new self-driving car might be licensed in Nevada, but don’t plan on seeing it in your city anytime soon (unless you just happen to live in Nevada, of course). A more realistic approach to self-driving cars will probably come in the form of conveyances used in specific locations. For example, you might see self-driving cars used at theme parks and college campuses where the controlled environment will make it easier for them to navigate. More importantly, these strictly controlled situations will help people get used to the idea of seeing and using self-driven vehicles. The point is to build trust in them in a manner that people can accept.

Of course, the heart of the matter is what self-driving cars can actually provide in the way of a payback. According to a number of sources, they can actually reduce driving costs by $190 billion per year in health and accident savings. That’s quite a savings. Money talks, but people have ignored monetary benefits in the past to ensure they remain independent. It will take time to discover whether the potential cost savings actually make people more inclined to use self-driving cars. My guess is that people will refuse to give up their cars unless there is something more than monetary and health benefits.

Even though no one has really talked about it much, self-driving cars have the potential to provide all sorts of other benefits. For example, because self-driving cars will obey the speed laws and run at the most efficient speeds possible in a given situation, cars will become more fuel efficient and produce less pollution. The software provided with the vehicle will probably allow the car to choose the most efficient route to a destination possible and provide the means for the car to automatically navigate around obstructions, such as accidents (which will be notably fewer). People could probably be more assured of getting to their destination on time because they won’t get lost either. Working on the way to work will allow people to spend more quality time with family. It’s the intangible benefits that will eventually make the self-driving car seem like a good way to do things.

The self-driving car is available today. It won’t be long before you’ll be able to buy one. You can already get a self-parking Ford, so the next step really isn’t that far away. The question is whether you really want to take that step. Let me know your thoughts on self-driving cars, their potential to save lives, reduce costs, create a cleaner environment, and make life generally more pleasant at John@JohnMuellerBooks.com.

Learning as a Human

I started discussing the whole idea of robot perception from a primate level in Seeing as a Human. In that post I discussed the need for a robot not to just see objects, but to be able to understand that the object is something unique. The ability to comprehend what is being seen is something that robots must do in order to interact with society at large in a manner that humans will understand and appreciate. Before the concepts espoused in works of science fiction such as I, Robot can be realized, robots must first be able to interact with objects in a manner that programming simply can’t anticipate. That’s why the technology being explored by deep learning is so incredibly important to the advancement of robotics.

Two recent articles point to the fact that deep learning techniques are already starting to have an effect on robotic technology. The first is about the latest Defense Advanced Research Projects Agency (DARPA) challenge. A robot must be able to drive a vehicle, exit the vehicle, and then perform certain tasks. No, this isn’t science fiction, it’s actually a real-world exercise. This challenge is significantly different from self-driving cars. In fact, people are actually riding in self-driving cars now and I see a future where all cars will become self-driving. However, asking a robot to drive a car, exit it, and then do something useful is a significantly more difficult test of robotic technology. To make such a test successful, the robot must be able to learn, at least to some extent, from each trial. Deep learning provides the means for the robot to learn.

The second article seems mundane by comparison until you consider just what it is that the robot is trying to do: cook a meal that it hasn’t been trained to cook. In this case, the robot watches a YouTube video to learn how to cook the meal just as a human would. To perform this task requires that the robot be able to learn the task by watching the video—a feat that most people see as something only a human can do. The programming behind this feat breaks cooking down into tasks that the robot can perform. Each of these tasks is equivalent to a skill that a human would possess. Unlike humans, a robot can’t learn new skills yet, but it can reorganize the skills it does possess in an order that makes completing the recipe possible. So, if a recipe calls for coddling an egg and the robot doesn’t know how to perform this task, it’s unlikely that the robot will actually be able to use that recipe. A human, on the other hand, could learn to coddle an egg and then complete the recipe. So, we’re not talking anything near human level intelligence yet.
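The reorganize-known-skills idea can be sketched as a simple planner: the robot completes a recipe only if every step maps to a skill it already possesses. The skill and step names below are invented for illustration, not taken from the actual research:

```python
# Hypothetical skill library the robot already possesses.
known_skills = {"chop", "stir", "boil", "fry", "plate"}

def plan(recipe_steps):
    """Map each recipe step to a known skill; fail on any unknown step."""
    missing = [step for step in recipe_steps if step not in known_skills]
    if missing:
        return None, missing  # the robot can't learn new skills yet
    return list(recipe_steps), []

# A recipe the robot can handle: every step is a skill it knows.
print(plan(["chop", "fry", "plate"]))   # (['chop', 'fry', 'plate'], [])

# A recipe calling for coddling an egg fails, just as described above.
print(plan(["coddle", "plate"]))        # (None, ['coddle'])
```

A human faced with the second recipe would go learn to coddle an egg; the robot can only report the gap, which is exactly the limitation the article describes.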

The potential for robots to free humans from mundane tasks is immense. However, the potential for robots to make life harder for humans is equally great (read Robot Induced Slavery). We’re at a point where some decisions about how technology will affect our lives must be made. Unfortunately, no one seems interested in making such decisions outright and the legal system is definitely behind the times. This means that each person must choose the ways in which technology affects his or her life quite carefully. What is your take on robotic technology? Let me know at John@JohnMuellerBooks.com.

 

Seeing as a Human

Neural networks intrigue me because of their ability to change the way in which computers work at a basic level. I last talked about them in my Considering the Future of Processing Power post. This post fed into the A Question of Balancing Robot Technologies post that explored possible ways in which neural networks could be used. The idea that neural networks provide a means of learning and of pattern recognition is central to the goals that this technology seeks to achieve. Even though robots are interesting, neural networks must first solve an even more basic problem. Current robot technology is hindered by an inability of the robot to see properly, so that it can avoid things like chairs in a room. There are all sorts of workarounds for the problem, but they all end up being kludges in the end. A recent ComputerWorld article, Computer vision finally matches primates’ ability, gives me hope that we may finally be turning the corner on making robots that can interact well with the real world.

In this case, the focus is on making it possible for a robot to see just like humans do. Actually, the sensors would be used for all sorts of other technologies, but it’s the use in robots that interests me most. A robot that can truly see as well as a human would be invaluable when it comes to performing complex tasks, such as monitoring a patient or fighting a fire. In both cases, it’s the ability to actually determine what is being seen that is important. In the case of a robotic nurse, it becomes possible to see the same sorts of things a human nurse sees, such as the start of an infection. When looking at a fire fighting robot, it becomes possible to pick out the body of someone to rescue amidst the flames. A video camera alone can’t enable a robot to understand what the camera is providing in the form of data.

However, just seeing isn’t enough. Picking out patterns in the display and understanding where each object begins and ends is important, but in order to use the data, a robot would also need to comprehend what each object is and determine whether that object is important. A burning piece of wood in a fire might not be important, but the human lying in the corner needing help is. The robot would need to comprehend that the object it sees is a human and not a burning piece of wood.

Using standard processors would never work for these applications because standard processors work too slowly and can’t retain the knowledge they gain. Neural networks make it possible for a robot to detect objects, determine which objects are important, focus on specific objects, and then perform tasks based on those selected objects. A human would still need to make certain decisions, but a robot could quickly assess a situation, tell the human operator only the information needed to make a decision, and then act on that decision in the operator’s stead. In short, neural networks make it possible to begin looking at robots as valuable companions to humans in critical situations.

Robot technology still has a long way to go before you start seeing robots of the sort presented in Star Wars. However, each step brings us a little closer to realizing the potential of robots to reduce human suffering and to reduce the potential for injuries. Let me know your thoughts about neural networks at John@JohnMuellerBooks.com.

 

Considering the Future of Processing Power

The vast majority of processors made today perform tasks as procedures. The processor looks at an instruction, performs the task specified by that instruction, and then moves on to the next instruction. It sounds like a simple way of doing things, and it is. Because a processor can perform the instructions incredibly fast—far faster than any human can even imagine—it could appear that the computer is thinking. What you’re seeing is a processor performing one instruction at a time, incredibly fast, and really clever programming. You truly aren’t seeing any sort of thought in the conventional (human) sense of the term. Even when using Artificial Intelligence (AI), the process is still a procedure that only simulates thought.

Most chips today have multiple cores. Some systems have multiple processors. The addition of cores and processors means that the system as a whole can perform more than one task at once—one task for each core or processor. However, the effect is still procedural in nature. An application can divide itself into parts and assign each core or processor a task, which allows the application to reach specific objectives faster, but the result is still a procedure.
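The divide-and-assign idea in the previous paragraph can be sketched with Python’s standard multiprocessing module. The example is my own, not from any particular application; note that even running on four cores, the result is assembled procedurally at the end:

```python
from multiprocessing import Pool

def summarize(chunk):
    """The procedure every core runs on its own slice of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1000))

    # Divide the application's work into four parts, one per worker.
    chunks = [data[i::4] for i in range(4)]

    with Pool(processes=4) as pool:
        partials = pool.map(summarize, chunks)  # each part runs in parallel

    # The partial results are combined procedurally, exactly as the text
    # describes: the objective is reached faster, but it's still a
    # procedure, not thought.
    print(sum(partials))  # 332833500
```

Four cores reach the answer sooner than one, but nothing about the computation changed in kind—only in speed.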

The reason the previous two paragraphs are necessary is that even developers have started buying into their own clever programming and feel that application programming environments somehow work like magic. There is no magic involved, just incredibly fast processors guided by even more amazing programming. In order to gain a leap in the ability of processors to perform tasks, the world needs a new kind of processor, which is the topic of this post (finally). The kind of processor that holds the most promise right now is the neural processor. Interestingly enough, science fiction has already beaten science fact to the punch by featuring neural processing in shows such as Star Trek and movies such as the Terminator.

Companies such as IBM are working to turn science fiction into science fact. The first story I read on this topic was several years ago (see IBM creates learning, brain-like, synaptic CPU). This particular story points out three special features of neural processors. The first is that a neural processor relies on massive parallelism. Instead of having just four, eight, or even sixteen tasks being performed at once, even a really simple neural processor has in excess of 256 tasks being done. The second is that the electronic equivalent of neurons in such a processor work cooperatively to perform tasks, so that the processing power of the chip is magnified. The third is that the chip actually remembers what it did last and forms patterns based on that memory. This third element is what really sets neural processing apart and makes it the kind of technology that is needed to advance to the next stage of computer technology.
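That third feature—remembering what happened last and forming patterns from it—is the key difference from a conventional CPU. Here is a very loose software caricature of my own (nothing like IBM’s actual silicon) of weights that strengthen with repeated input:

```python
# A toy "neuron" that keeps state between inputs, so repeated patterns
# leave a trace that biases its future responses. This only caricatures
# the remember-and-form-patterns idea; real neural hardware is far more
# sophisticated.

class Neuron:
    def __init__(self, size):
        self.weights = [0.0] * size

    def respond(self, pattern):
        """Response strength to a pattern of 0/1 inputs."""
        return sum(w * p for w, p in zip(self.weights, pattern))

    def learn(self, pattern, rate=0.5):
        """Hebbian-style update: active inputs strengthen their weights."""
        self.weights = [w + rate * p for w, p in zip(self.weights, pattern)]

n = Neuron(4)
familiar, novel = [1, 1, 0, 0], [0, 0, 1, 1]

for _ in range(3):       # see the same pattern three times
    n.learn(familiar)

# Unlike a conventional instruction pipeline, the remembered pattern now
# produces a stronger response than one the neuron has never seen.
print(n.respond(familiar) > n.respond(novel))  # True
```

A conventional processor forgets everything between instructions unless the program explicitly stores it; here the memory is built into the computing element itself, which is what makes the neural approach different in kind rather than just in speed.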

In the three years since the original story was written, IBM and other companies, such as Intel, have made some great forward progress. When you read IBM Develops a New Chip That Functions Like a Brain, you see that the technology has indeed moved forward. The latest chip is actually able to react to external stimuli. It can understand, to an extremely limited extent, the changing patterns of light (for example) it receives. An action is no longer just a jumble of pixels, but is recognized as being initiated by someone or something. The thing that amazes me about this chip is that the power consumption is so low. Most of the efforts so far seem to focus on mobile devices, which makes sense because these processors will eventually end up in devices such as robots.

The eventual goal of all this effort is a learning computer—one that can increase its knowledge based on the inputs it receives. This technology would change the role of a programmer from creating specific instructions to one of providing basic instructions and then providing the input needed for the computer to learn what it needs to know to perform specific tasks. In other words, every computer would have a completely customized set of learning experiences based on specific requirements for that computer. It’s an interesting idea and an amazing technology. Let me know your thoughts about neural processing at John@JohnMuellerBooks.com.