Learning as a Human

I started discussing the whole idea of robot perception at a primate level in Seeing as a Human. In that post I discussed the need for a robot not just to see objects, but to understand that each object is something unique. The ability to comprehend what is being seen is something robots must master in order to interact with society at large in a manner that humans will understand and appreciate. Before the concepts espoused in works of science fiction such as I, Robot can be realized, robots must first be able to interact with objects in ways that programming simply can't anticipate. That's why the techniques being explored in deep learning are so incredibly important to the advancement of robotics.

Two recent articles point to the fact that deep learning techniques are already starting to have an effect on robotic technology. The first is about the latest Defense Advanced Research Projects Agency (DARPA) challenge, in which a robot must drive a vehicle, exit the vehicle, and then perform certain tasks. No, this isn't science fiction; it's an actual real-world exercise. This challenge is significantly different from self-driving cars. In fact, people are riding in self-driving cars now, and I see a future where all cars become self-driving. However, asking a robot to drive a car, exit it, and then do something useful is a significantly more difficult test of robotic technology. To make such a test successful, the robot must be able to learn, at least to some extent, from each trial. Deep learning provides the means for the robot to learn.

The second article seems mundane by comparison until you consider just what it is the robot is trying to do: cook a meal that it hasn't been trained to cook. In this case, the robot watches a YouTube video to learn how to cook the meal, just as a human would. Performing this task requires that the robot be able to learn by watching the video, something most people assume only a human can do. The programming behind this feat breaks cooking down into tasks the robot can perform, each equivalent to a skill a human would possess. Unlike a human, a robot can't learn new skills yet, but it can reorganize the skills it does possess into an order that makes completing the recipe possible. So, if a recipe calls for coddling an egg and the robot doesn't know how to perform that task, it's unlikely the robot will be able to use that recipe. A human, on the other hand, could learn to coddle an egg and then complete the recipe. In other words, we're not talking anything near human-level intelligence yet.
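The idea of reorganizing known skills to satisfy a recipe can be sketched very roughly in code. Everything here is invented for illustration: the skill names, the recipe steps, and the planner itself are stand-ins, not the actual system the article describes.

```python
# Hypothetical sketch: a robot matches recipe steps to the skills it
# already possesses. If any step requires an unknown skill, the whole
# recipe is unusable (like the coddled-egg example above).

KNOWN_SKILLS = {"grasp", "pour", "stir", "chop", "heat"}

def plan(recipe_steps):
    """Return an ordered plan of skills, or None if a step is unknown."""
    steps = []
    for step in recipe_steps:
        if step not in KNOWN_SKILLS:
            return None  # the robot can't learn a new skill on its own
        steps.append(step)
    return steps

print(plan(["chop", "heat", "stir"]))  # a usable plan
print(plan(["coddle", "heat"]))        # None: "coddle" isn't a known skill
```

A human in the same situation would simply add "coddle" to the skill set; the robot, for now, cannot.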

The potential for robots to free humans from mundane tasks is immense. However, the potential for robots to make life harder for humans is equally great (read Robot Induced Slavery). We’re at a point where some decisions about how technology will affect our lives must be made. Unfortunately, no one seems interested in making such decisions outright and the legal system is definitely behind the times. This means that each person must choose the ways in which technology affects his or her life quite carefully. What is your take on robotic technology? Let me know at John@JohnMuellerBooks.com.

 

Seeing as a Human

Neural networks intrigue me because of their ability to change the way in which computers work at a basic level. I last talked about them in my Considering the Future of Processing Power post, which fed into the A Question of Balancing Robot Technologies post exploring possible ways in which neural networks could be used. The idea that neural networks provide a means of learning and of pattern recognition is central to the goals this technology seeks to achieve. Even though robots are interesting, neural networks must first solve an even more basic problem. Current robot technology is hindered by the robot's inability to see properly, so that it can avoid things like chairs in a room. There are all sorts of workarounds for the problem, but they all end up being kludges. A recent ComputerWorld article, Computer vision finally matches primates' ability, gives me hope that we may finally be turning the corner on making robots that can interact well with the real world.

In this case, the focus is on making it possible for a robot to see just as humans do. Actually, the sensors could be used in all sorts of other technologies, but it's the use in robots that interests me most. A robot that can truly see as well as a human would be invaluable for performing complex tasks, such as monitoring a patient or fighting a fire. In both cases, it's the ability to actually determine what is being seen that matters. A robotic nurse could spot the same sorts of things a human nurse sees, such as the start of an infection. A firefighting robot could pick out the body of someone to rescue amidst the flames. A video camera alone can't give a robot that kind of understanding of the data the camera provides.

However, just seeing isn't enough either. Yes, picking out patterns in the display and understanding where each object begins and ends is important. But to use the data, a robot also needs to comprehend what each object is and determine whether that object matters. A burning piece of wood in a fire might not be important, but the human lying in the corner needing help is. The robot must comprehend that the object it sees is a human and not a burning piece of wood.

Standard processors would never work for these applications because they work too slowly and can't retain the knowledge they acquire. Neural networks make it possible for a robot to detect objects, determine which objects are important, focus on specific objects, and then perform tasks based on those selected objects. A human would still need to make certain decisions, but a robot could quickly assess a situation, tell the human operator only the information needed to make a decision, and then act on that decision in the operator's stead. In short, neural networks make it possible to begin looking at robots as valuable companions to humans in critical situations.
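The detect-assess-report pipeline described above can be sketched in miniature. To be clear, the "detector" here is a trivial stand-in (real systems would use a trained network), and the object labels and importance set are invented purely for illustration:

```python
# Hypothetical sketch of the pipeline: detect objects in a scene,
# decide which ones matter, and surface only those to the operator.
# The detector is a placeholder for a real neural-network vision system.

IMPORTANT = {"person"}  # categories that demand the operator's attention

def detect(frame):
    # Stand-in: pretend the frame has already been converted to labels.
    return frame

def assess(frame):
    """Return only the detected objects the operator needs to know about."""
    return [obj for obj in detect(frame) if obj in IMPORTANT]

scene = ["burning wood", "person", "chair"]
print(assess(scene))  # ['person']: the fire itself isn't reported
```

The point of the sketch is the filtering step: the robot reports the human in the corner, not every burning piece of wood.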

Robot technology still has a long way to go before you start seeing robots of the sort presented in Star Wars. However, each step brings us a little closer to realizing the potential of robots to reduce human suffering and to reduce the potential for injuries. Let me know your thoughts about neural networks at John@JohnMuellerBooks.com.

 

Considering the Future of Processing Power

The vast majority of processors made today perform tasks as procedures. The processor looks at an instruction, performs the task specified by that instruction, and then moves on to the next instruction. It sounds like a simple way of doing things, and it is. Because a processor can perform instructions incredibly fast, far faster than any human can even imagine, it can appear that the computer is thinking. What you're actually seeing is a processor performing one instruction at a time, incredibly fast, guided by really clever programming. You truly aren't seeing any sort of thought in the conventional (human) sense of the term. Even when using Artificial Intelligence (AI), the process is still a procedure that only simulates thought.
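That fetch-and-execute cycle can be simulated in a few lines. The tiny "instruction set" below is invented for illustration; real processors are vastly more complex, but the one-instruction-at-a-time loop is the same idea:

```python
# Minimal sketch of the procedural fetch-decode-execute cycle described
# above: look at an instruction, do it, move on to the next one.

def run(program):
    acc = 0   # accumulator register
    pc = 0    # program counter
    while pc < len(program):
        op, arg = program[pc]   # fetch and decode the instruction
        if op == "LOAD":
            acc = arg           # execute: load a value
        elif op == "ADD":
            acc += arg          # execute: add a value
        pc += 1                 # move on to the next instruction
    return acc

result = run([("LOAD", 2), ("ADD", 3), ("ADD", 5)])
print(result)  # 10
```

No matter how fast the loop spins, it remains a procedure, which is exactly the point of the paragraph above.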

Most chips today have multiple cores. Some systems have multiple processors. The addition of cores and processors means that the system as a whole can perform more than one task at once—one task for each core or processor. However, the effect is still procedural in nature. An application can divide itself into parts and assign each core or processor a task, which allows the application to reach specific objectives faster, but the result is still a procedure.
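Dividing an application into parts for multiple cores looks something like the sketch below. The workload (summing a list) is an arbitrary example; the point is that each worker still runs an ordinary procedure on its slice:

```python
# Sketch: splitting one task across four workers, one per core.
# Each worker sums a slice of the data; the results are combined.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Strided slices partition the data into four disjoint chunks.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)
    print(sum(partials) == sum(data))  # True: faster, but still procedural
```

The objective is reached sooner, but nothing about the computation has changed in kind, only in how it is scheduled.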

The reason the previous two paragraphs are necessary is that even developers have started buying into their own clever programming, feeling that application programming environments somehow work like magic. There is no magic involved, just incredibly fast processors guided by even more amazing programming. To gain a real leap in the ability of processors to perform tasks, the world needs a new kind of processor, which is the topic of this post (finally). The kind of processor that holds the most promise right now is the neural processor. Interestingly enough, science fiction has already beaten science fact to the punch by featuring neural processing in shows such as Star Trek and movies such as The Terminator.

Companies such as IBM are working to turn science fiction into science fact. The first story I read on this topic appeared several years ago (see IBM creates learning, brain-like, synaptic CPU). That story points out three special features of neural processors. The first is that a neural processor relies on massive parallelism. Instead of performing just four or eight or even sixteen tasks at once, even a really simple neural processor performs more than 256 tasks at a time. The second is that the electronic equivalents of neurons in such a processor work cooperatively to perform tasks, so that the processing power of the chip is magnified. The third is that the chip actually remembers what it did last and forms patterns based on that memory. This third element is what really sets neural processing apart and makes it the kind of technology needed to advance to the next stage of computer technology.
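That third feature, remembering patterns, is the one worth pausing on, and a classic way to illustrate it is Hebbian learning: connections between units that fire together are strengthened, so the weights themselves become a memory of past inputs. The sketch below is a textbook toy (a tiny Hopfield-style memory), not a model of IBM's actual chip; the pattern sizes and values are arbitrary:

```python
# Toy sketch of "remembering patterns": Hebbian weights store a pattern,
# and the network can recall it even from a slightly corrupted cue.

def hebbian_train(patterns, n):
    # Strengthen the link between any two units that are active together.
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, n):
    # Each unit takes the sign of its weighted input: +1 or -1.
    return [1 if sum(w[i][j] * cue[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

stored = [1, -1, 1, -1]               # a pattern in +1/-1 form
w = hebbian_train([stored], 4)
noisy = [1, -1, 1, 1]                 # the same cue with one unit flipped
print(recall(w, noisy, 4) == stored)  # True: recovered from memory
```

The weights persist between inputs, which is precisely what a conventional procedural pipeline lacks.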

In the three years since the original story was written, IBM (and other companies, such as Intel) have made great forward progress. When you read IBM Develops a New Chip That Functions Like a Brain, you see that the technology has indeed moved forward. The latest chip is actually able to react to external stimuli. It can understand, to an extremely limited extent, the changing patterns of light (for example) it receives. An action is no longer just a jumble of pixels, but is recognized as being initiated by someone or something. The thing that amazes me about this chip is that its power consumption is so low. Most of the efforts so far seem to focus on mobile devices, which makes sense because these processors will eventually end up in devices such as robots.

The eventual goal of all this effort is a learning computer—one that can increase its knowledge based on the inputs it receives. This technology would change the role of a programmer from creating specific instructions to one of providing basic instructions and then providing the input needed for the computer to learn what it needs to know to perform specific tasks. In other words, every computer would have a completely customized set of learning experiences based on specific requirements for that computer. It’s an interesting idea and an amazing technology. Let me know your thoughts about neural processing at John@JohnMuellerBooks.com.

 

A History of Microprocessors

Every once in a while, someone will send me a truly interesting link. Having seen a few innovations myself and possessing a strong interest in history, I read CPU DB: Recording Microprocessor History on the Association for Computing Machinery (ACM) site with great interest. The post is a bit long, but essentially, the work by Andrew Danowitz, Kyle Kelley, James Mao, John P. Stevenson, and Mark Horowitz does something no other site does: it provides a comprehensive view of 790 different microprocessors created since the introduction of Intel's 4004 in November 1971. The CPU DB is available for anyone to use and should prove useful for scientists, developers, and hobbyists alike.

Unlike a lot of the work done on microprocessors, this one hasn’t been commissioned by a particular company. In fact, you’ll find processors from 17 different vendors. The work also spans a considerable number of disciplines. For example, you can discover how the physical scaling of devices has changed over the years and the effects of software on processor design and development.

A lot of the information available in this report is also available from the vendor or a third party in some form. The problem with vendor specification sheets and third party reports is that they vary in composition, depth, and content—making any sort of comparison extremely difficult and time consuming. This database makes it possible to compare the 790 processors directly and using the same criteria. A researcher can now easily see the differences between two microprocessors, making it considerably easier to draw conclusions about microprocessor design and implementation.
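The value of a unified schema is easy to see in miniature. The records below are invented placeholders (not actual CPU DB entries or field names), but they show the kind of like-for-like comparison that becomes trivial once every processor is described with the same fields:

```python
# Illustrative sketch: once every processor record uses the same fields,
# comparing any two on any criterion is a one-liner. The chips, fields,
# and values here are made up for the example.

cpus = {
    "chip_a": {"clock_mhz": 740, "transistors": 2_300},
    "chip_b": {"clock_mhz": 3_200, "transistors": 1_400_000_000},
}

def compare(db, a, b, field):
    """Return how many times larger chip b's spec is than chip a's."""
    return db[b][field] / db[a][field]

ratio = compare(cpus, "chip_a", "chip_b", "clock_mhz")
print(f"chip_b's clock is {ratio:.2f}x chip_a's")
```

With vendor spec sheets of varying depth and format, that same comparison means hunting through PDFs and reconciling units by hand.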

Not surprisingly, it has taken a while to collect this sort of information at the depth provided. According to the site, the database has been a work in progress for 30 years now. That's a long time to research anything, especially something as esoteric as the voltage and frequency ranges of microprocessors. The authors stated that their efforts were hampered in some cases by the age of the devices and the unavailability of samples for testing. I would imagine that trying to find a usable 4004 for testing would be nearly impossible.

You'll have to read the report to get the full scoop on everything CPU DB provides. The information is so detailed that the authors resorted to tables and diagrams to explain it. Let's just say that if you can't find the statistic you need in CPU DB, it probably doesn't exist. To provide a level playing field for all of the statistics, the researchers used standardized testing. For example, they rely on the Standard Performance Evaluation Corporation (SPEC) benchmarks to compare the processors. Tables 1 and 2 in the report provide an overview of the sorts of information you'll find in CPU DB.

This isn’t a resource I’ll use every day. However, it is a resource I plan to use when trying to make sense of performance particulars. Using the information from CPU DB should remove some of the ambiguity in trying to compare system designs and determine how they affect the software running on them. Let me know what you think of CPU DB at John@JohnMuellerBooks.com.