Learning as a Human

I started discussing the whole idea of robot perception at a primate level in Seeing as a Human. In that post I discussed the need for a robot to not just see objects, but to understand that each object is something unique. The ability to comprehend what is being seen is something robots must have in order to interact with society at large in a manner that humans will understand and appreciate. Before the concepts espoused in works of science fiction such as I, Robot can be realized, robots must first be able to interact with objects in ways that programming alone simply can't anticipate. That's why the technology emerging from deep learning research is so incredibly important to the advancement of robotics.

Two recent articles point to the fact that deep learning techniques are already starting to have an effect on robotic technology. The first is about the latest Defense Advanced Research Projects Agency (DARPA) challenge, in which a robot must drive a vehicle, exit the vehicle, and then perform certain tasks. No, this isn't science fiction, it's an actual real-world exercise. This challenge is significantly different from building a self-driving car. In fact, people are already riding in self-driving cars, and I see a future where all cars will become self-driving. However, asking a robot to drive a car, exit it, and then do something useful is a significantly more difficult test of robotic technology. To make such a test successful, the robot must be able to learn, at least to some extent, from each trial. Deep learning provides the means for the robot to learn.

The second article seems mundane by comparison until you consider just what it is that the robot is trying to do: cook a meal that it hasn't been trained to cook. In this case, the robot watches a YouTube video to learn how to cook the meal, just as a human would. Performing this feat requires that the robot learn the task by watching the video, which most people see as something only a human can do. The programming behind this feat breaks cooking down into tasks that the robot can perform. Each of these tasks is equivalent to a skill that a human would possess. Unlike a human, the robot can't learn new skills yet, but it can reorganize the skills it does possess in an order that makes completing the recipe possible. So, if a recipe calls for coddling an egg and the robot doesn't know how to perform that task, it's unlikely that the robot will be able to use the recipe at all. A human, on the other hand, could learn to coddle an egg and then complete the recipe. So, we're not talking about anything near human-level intelligence yet.
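Neither article shares its code, but the skill-lookup idea is easy to sketch. Here is a tiny Python illustration of the concept as I understand it; the skill names and the structure are my own invention, not anything taken from the actual research:

```python
# A minimal sketch of skill sequencing: the robot can execute a recipe
# only if every step maps onto a skill it already possesses. All skill
# and step names here are hypothetical.

KNOWN_SKILLS = {"grasp", "pour", "stir", "crack_egg", "heat_pan"}

def plan_recipe(recipe_steps):
    """Return the ordered skill plan, or None if a step needs an unknown skill."""
    ordered = []
    for step in recipe_steps:
        if step not in KNOWN_SKILLS:
            print(f"No skill for '{step}'; a human could learn it, this robot can't.")
            return None
        ordered.append(step)
    return ordered

print(plan_recipe(["crack_egg", "stir", "coddle_egg"]))  # fails on coddle_egg
print(plan_recipe(["heat_pan", "crack_egg", "stir"]))    # succeeds
```

The failure case is the whole point: reordering known skills is relatively easy, while acquiring a missing skill is precisely what the robot can't yet do.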

The potential for robots to free humans from mundane tasks is immense. However, the potential for robots to make life harder for humans is equally great (read Robot Induced Slavery). We’re at a point where some decisions about how technology will affect our lives must be made. Unfortunately, no one seems interested in making such decisions outright and the legal system is definitely behind the times. This means that each person must choose the ways in which technology affects his or her life quite carefully. What is your take on robotic technology? Let me know at [email protected].


Seeing as a Human

Neural networks intrigue me because of their ability to change the way in which computers work at a basic level. I last talked about them in my Considering the Future of Processing Power post, which fed into A Question of Balancing Robot Technologies, a post that explored possible ways in which neural networks could be used. The idea that neural networks provide a means of learning and of pattern recognition is central to the goals this technology seeks to achieve. Even though robots are interesting, neural networks must first solve an even more basic problem. Current robot technology is hindered by the robot's inability to see well enough to avoid obstacles such as the chairs in a room. There are all sorts of workarounds for the problem, but they all end up being kludges. A recent ComputerWorld article, Computer vision finally matches primates' ability, gives me hope that we may finally be turning the corner on making robots that can interact well with the real world.

In this case, the focus is on making it possible for a robot to see just as humans do. Actually, the sensors would be useful for all sorts of other technologies, but it's their use in robots that interests me most. A robot that can truly see as well as a human would be invaluable when it comes to performing complex tasks, such as monitoring a patient or fighting a fire. In both cases, it's the ability to determine what is being seen that matters. A robotic nurse could notice the same sorts of things a human nurse sees, such as the start of an infection. A firefighting robot could pick out the body of someone to rescue amidst the flames. A video camera alone can't give a robot any understanding of the data it provides, which is precisely the gap this research aims to close.

However, just seeing isn't enough. Picking out patterns in the visual data and understanding where each object begins and ends is important, but in order to use that data, a robot would also need to comprehend what each object is and determine whether the object matters. A burning piece of wood in a fire might not be important, but the human lying in the corner needing help is. The robot would need to comprehend that the object it sees is a human and not a burning piece of wood.

Standard processors would never work for these applications because they process such data too slowly and have no way to retain what they learn. Neural networks make it possible for a robot to detect objects, determine which objects are important, focus on specific objects, and then perform tasks based on those selected objects. A human would still need to make certain decisions, but a robot could quickly assess a situation, tell the human operator only the information needed to make a decision, and then act on that decision in the operator's stead. In short, neural networks make it possible to begin looking at robots as valuable companions to humans in critical situations.
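To make that pipeline a little more concrete, here is a small Python sketch of just the "determine which objects are important" step. The detections are hard-coded stand-ins for what a vision network would produce, and the labels, confidence scores, and priority values are purely my own assumptions for illustration:

```python
# Sketch: rank detected objects by task-specific importance.
# In a real system the detections would come from a neural network;
# here they are hard-coded stand-ins for illustration.

detections = [
    {"label": "burning_wood", "confidence": 0.97},
    {"label": "human",        "confidence": 0.88},
    {"label": "chair",        "confidence": 0.75},
]

# Task-specific priorities for a firefighting robot (invented values).
PRIORITY = {"human": 100, "chair": 5, "burning_wood": 1}

def most_important(dets, min_confidence=0.5):
    """Return usable detections, highest task priority first."""
    usable = [d for d in dets if d["confidence"] >= min_confidence]
    return sorted(usable, key=lambda d: PRIORITY.get(d["label"], 0), reverse=True)

for det in most_important(detections):
    print(det["label"], det["confidence"])
```

The human is reported first even though the fire is detected with more confidence, which is exactly the behavior an operator would want.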

Robot technology still has a long way to go before you start seeing robots of the sort presented in Star Wars. However, each step brings us a little closer to realizing the potential of robots to reduce human suffering and prevent injuries. Let me know your thoughts about neural networks at [email protected].

Considering the Future of Processing Power

The vast majority of processors made today perform tasks as procedures. The processor looks at an instruction, performs the task specified by that instruction, and then moves on to the next instruction. It sounds like a simple way of doing things, and it is. Because a processor can perform these instructions incredibly fast (far faster than any human can even imagine), it can appear that the computer is thinking. What you're actually seeing is a processor performing one instruction at a time, incredibly fast, guided by really clever programming. You truly aren't seeing any sort of thought in the conventional (human) sense of the term. Even when using Artificial Intelligence (AI), the process is still a procedure that only simulates thought.
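If you want to see just how un-magical this is, here is a toy model of that cycle in Python. The three-instruction machine is invented for the example; the strictly sequential loop is the whole point:

```python
# A toy model of procedural execution: fetch one instruction,
# perform it, and move on to the next. Nothing here resembles
# thought; it is just a loop running simple steps in order.

program = [
    ("LOAD", 5),      # put 5 in the accumulator
    ("ADD", 3),       # add 3 to it
    ("PRINT", None),  # output the result
]

accumulator = 0
for opcode, operand in program:   # one instruction at a time
    if opcode == "LOAD":
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "PRINT":
        print(accumulator)        # prints 8
```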

Most chips today have multiple cores. Some systems have multiple processors. The addition of cores and processors means that the system as a whole can perform more than one task at once—one task for each core or processor. However, the effect is still procedural in nature. An application can divide itself into parts and assign each core or processor a task, which allows the application to reach specific objectives faster, but the result is still a procedure.
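Here is what dividing an application into parts looks like in miniature, using Python's standard concurrent.futures module to hand one chunk of a sum to each of four workers. Note that each worker still runs its chunk as an ordinary procedure; the chunks simply run at the same time:

```python
# Sketch: divide one job across workers, roughly one chunk per core.
# Each worker still executes its chunk as an ordinary procedure.

from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    chunks = [numbers[i::4] for i in range(4)]  # four interleaved parts

    with ProcessPoolExecutor(max_workers=4) as pool:
        results = pool.map(partial_sum, chunks)

    print(sum(results))  # same answer as sum(numbers), reached in parallel
```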

The reason for spending this much time on procedural processing is that even developers have started buying into their own clever programming and feel that application programming environments somehow work like magic. There is no magic involved, just incredibly fast processors guided by even more amazing programming. In order to gain a real leap in the ability of processors to perform tasks, the world needs a new kind of processor, which is the topic of this post (finally). The kind of processor that holds the most promise right now is the neural processor. Interestingly enough, science fiction has already beaten science fact to the punch by featuring neural processing in shows such as Star Trek and movies such as The Terminator.

Companies such as IBM are working to turn science fiction into science fact. The first story I read on this topic was several years ago (see IBM creates learning, brain-like, synaptic CPU). That story points out three special features of neural processors. The first is that a neural processor relies on massive parallelism: instead of four, eight, or even sixteen tasks being performed at once, even a really simple neural processor performs in excess of 256 tasks at a time. The second is that the electronic equivalents of neurons in such a processor work cooperatively to perform tasks, so the processing power of the chip is magnified. The third is that the chip actually remembers what it did last and forms patterns based on that memory. This third element is what really sets neural processing apart and makes it the kind of technology needed to advance to the next stage of computer technology.
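That third feature is the hardest one to picture, so here is a deliberately tiny sketch of a Hebbian-style update, the classic "neurons that fire together wire together" rule. To be clear, this is a textbook illustration of learning stored in connection strengths, not IBM's actual design:

```python
# Tiny Hebbian sketch: connection weights strengthen whenever two
# "neurons" are active together, so the network retains a memory of
# past inputs instead of starting fresh each time. Illustrative only;
# this is not how IBM's synaptic chip is built.

inputs = [
    [1, 1, 0],  # pattern A, seen twice
    [1, 1, 0],
    [0, 0, 1],  # pattern B, seen once
]

n = 3
weights = [[0.0] * n for _ in range(n)]
rate = 0.1

for x in inputs:
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += rate * x[i] * x[j]  # fire together, wire together

# Units 0 and 1 end up strongly linked because they were co-active most often.
print(weights[0][1], weights[0][2])  # 0.2 versus 0.0
```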

In the three years since the original story was written, IBM (and other companies, such as Intel) has made some great forward progress. When you read IBM Develops a New Chip That Functions Like a Brain, you see that the technology has indeed moved forward. The latest chip is actually able to react to external stimuli. It can understand, to an extremely limited extent, the changing patterns of light (for example) that it receives. An action is no longer just a jumble of pixels, but is recognized as being initiated by someone or something. The thing that amazes me about this chip is how low its power consumption is. Most of the efforts so far seem to focus on mobile devices, which makes sense because these processors will eventually end up in devices such as robots.

The eventual goal of all this effort is a learning computer, one that can increase its knowledge based on the inputs it receives. This technology would change the role of a programmer from writing specific instructions to supplying basic instructions along with the input the computer needs in order to learn how to perform specific tasks. In other words, every computer would have a completely customized set of learning experiences based on the specific requirements for that computer. It's an interesting idea and an amazing technology, and the small sketch that follows shows the shift in miniature. Let me know your thoughts about neural processing at [email protected].
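As a tiny software-level illustration of that shift, the sketch below learns to double numbers purely from example pairs. The rule being learned appears nowhere in the code, only in the data supplied to it, which is the essence of programming by example rather than by instruction:

```python
# Sketch: the programmer supplies examples, not rules. The program
# learns y = 2 * x from data; the factor 2 appears nowhere below.

examples = [(1, 2), (2, 4), (3, 6), (4, 8)]

w = 0.0      # the single learned parameter
rate = 0.01  # learning rate

for _ in range(200):  # repeated exposure to the examples
    for x, y in examples:
        prediction = w * x
        w += rate * (y - prediction) * x  # nudge w toward the target

print(round(w, 3))  # close to 2.0
print(w * 10)       # generalizes: roughly 20 for an unseen input
```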