Neural networks intrigue me because of their ability to change the way computers work at a basic level. I last talked about them in my Considering the Future of Processing Power post, which fed into the A Question of Balancing Robot Technologies post exploring possible ways in which neural networks could be used. The idea that neural networks provide a means of learning and of pattern recognition is central to the goals this technology seeks to achieve. Even though robots are interesting, neural networks must first solve a more basic problem. Current robot technology is hindered by the robot's inability to see well enough to avoid obstacles, such as the chairs in a room. There are all sorts of workarounds for the problem, but they all end up being kludges. A recent Computerworld article, Computer vision finally matches primates' ability, gives me hope that we may finally be turning the corner on making robots that can interact well with the real world.
In this case, the focus is on making it possible for a robot to see just as humans do. Actually, the sensors would be used for all sorts of other technologies, but it's the use in robots that interests me most. A robot that can truly see as well as a human would be invaluable when it comes to performing complex tasks, such as monitoring a patient or fighting a fire. In both cases, it's the ability to determine what is actually being seen that matters. A robotic nurse could spot the same sorts of things a human nurse sees, such as the start of an infection. A fire-fighting robot could pick out the body of someone to rescue amidst the flames. A video camera alone can't give a robot this ability; the camera provides data, but not an understanding of what that data shows. That said, thanks to exciting developments in 3D hand data and other computer vision techniques, these possibilities could soon become a reality.
However, just seeing isn't enough. Yes, picking out patterns in a scene and understanding where each object begins and ends is important. To actually use the data, though, a robot also needs to comprehend what each object is and determine whether that object matters. A burning piece of wood in a fire might not be important, but the human lying in the corner needing help is. The robot has to comprehend that the object it sees is a human and not a burning piece of wood.
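To make that idea concrete, here's a minimal sketch in Python of the comprehension step. The classify() function is a hypothetical stand-in for a trained neural network, and IMPORTANT_OBJECTS is an assumed list of labels a rescue robot cares about; nothing here comes from a real robot vision library.

```python
# A minimal sketch of the "comprehension" step: deciding what a detected
# object is and whether it matters. classify() is a hypothetical stand-in
# for a trained neural network, not a real library call.
IMPORTANT_OBJECTS = {"person"}          # things a rescue robot must act on

def classify(image_region) -> str:
    """Stand-in for a neural-network classifier run on one region of a frame."""
    return "person"                     # dummy answer so the sketch runs

def needs_attention(image_region) -> bool:
    """True when the object in this region is one the robot should act on."""
    return classify(image_region) in IMPORTANT_OBJECTS

print(needs_attention(None))            # True: a person, not a burning piece of wood
```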
Using standard processors alone would never work for these applications because they process this kind of data too slowly and can't retain the knowledge they gain from experience. Neural networks make it possible for a robot to detect objects, determine which objects are important, focus on specific objects, and then perform tasks based on those selected objects. A human would still need to make certain decisions, but a robot could quickly assess a situation, tell the human operator only the information needed to make a decision, and then act on that decision in the operator's stead. In short, neural networks make it possible to begin looking at robots as valuable companions to humans in critical situations.
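Here's a hedged sketch of that assess-report-act loop, again in Python. The detections are dummy data standing in for the output of a neural-network vision system, and the assess(), report(), and act() functions are illustrative names I've made up for this post, not part of any real robot API.

```python
# A sketch of the "assess, report, act" loop described above. The scene list
# stands in for the output of a neural-network vision system; all function
# names are illustrative assumptions, not a real robot API.
from typing import List, Tuple

Detection = Tuple[str, float, Tuple[int, int]]   # (label, confidence, position)

PRIORITY_LABELS = {"person"}

def assess(detections: List[Detection]) -> List[Detection]:
    """Filter the scene down to the objects the robot must act on."""
    return [d for d in detections if d[0] in PRIORITY_LABELS and d[1] >= 0.5]

def report(targets: List[Detection]) -> str:
    """Tell the human operator only what is needed to make a decision."""
    if not targets:
        return "No priority targets in view."
    return "; ".join(f"{label} at {pos} ({conf:.0%} confidence)"
                     for label, conf, pos in targets)

def act(decision: str, targets: List[Detection]) -> None:
    """Carry out the operator's decision on the robot's behalf."""
    if decision == "rescue" and targets:
        label, _, pos = targets[0]
        print(f"Moving to {label} at {pos}")
    else:
        print("Holding position.")

# Dummy scene: a burning object the robot can ignore and a person it cannot.
scene = [("burning debris", 0.91, (40, 60)), ("person", 0.87, (300, 220))]
targets = assess(scene)
print(report(targets))      # the operator sees a one-line summary
act("rescue", targets)      # the operator decides; the robot does the work
```

The point of the sketch is the division of labor: the network narrows the scene down, the human makes the judgment call, and the robot does the legwork.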
Robot technology still has a long way to go before you start seeing robots of the sort presented in Star Wars. However, each step brings us a little closer to realizing the potential of robots to reduce human suffering and prevent injuries. Let me know your thoughts about neural networks at [email protected].