Considering Threats to Your Hardware

Most of the security write-ups you see online deal with software. It’s true that you’re far more likely to encounter a software-based security threat than any of the hardware threats to date. However, ignoring hardware threats is a mistake. Unlike the vast majority of software threats, which you can clean up, hardware threats often damage a system so badly that it becomes unusable. You literally have to buy a new system because repair isn’t feasible (at least, not at a reasonable price).

The threats are becoming more ingenious, too. Consider the USB flash drive threat called USB Killer. Inserting the wrong thumb drive into your system can destroy it outright: the drive charges itself from the USB port and then discharges high voltage back into the system, frying its circuitry. The attack is insidious in that your system continues to work as normal until that final moment, when it’s too late to do anything about the threat. Avoiding the problem means using only thumb drives that you can verify are clean. You can’t even trust a thumb drive provided by a friend, who could have obtained it from a contaminated source. The result of such an attack is lost data, lost time, and lost hardware—potentially making the attack far more expensive than a software attack on your system.

Some hardware-based threats are more insidious. For example, the Rowhammer vulnerability makes it possible for someone to escalate their privileges by accessing the DRAM on your system in a specific way: repeatedly reading the same memory rows at high speed can flip bits in physically adjacent rows. The technical details matter less than the fact that the attack can’t simply be patched away; even with repairs, memory will continue to be vulnerable in various ways. The problem is that memory cells have become so small and so densely packed that protections that used to work well no longer work at all. In addition, hardware vendors often use the least expensive memory available to keep prices low, rather than higher end (and more expensive) memory.
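
For the curious, here’s a minimal sketch in C of the access pattern the Rowhammer research describes: read two “aggressor” addresses over and over while flushing them from the cache, so that every read actually hits DRAM. The buffer and row offsets below are hypothetical placeholders; a real attack has to locate addresses that map to rows adjacent to a victim row, which this sketch doesn’t attempt.

    #include <emmintrin.h>  /* _mm_clflush (x86 SSE2) */
    #include <stdint.h>
    #include <stdlib.h>

    /* Hammer two "aggressor" addresses. In a real attack these must map
       to DRAM rows adjacent to a victim row; here they're placeholders. */
    static void hammer(volatile uint8_t *row1, volatile uint8_t *row2,
                       long iterations)
    {
        for (long i = 0; i < iterations; i++) {
            (void)*row1;                      /* reading activates the row */
            (void)*row2;
            _mm_clflush((const void *)row1);  /* evict so the next read */
            _mm_clflush((const void *)row2);  /* hits DRAM, not the cache */
        }
        /* After enough iterations, bits in a physically adjacent row may
           flip -- the effect the Rowhammer researchers demonstrated. */
    }

    int main(void)
    {
        uint8_t *buf = malloc(1 << 20);        /* stand-in memory region */
        if (buf == NULL) return 1;
        hammer(buf, buf + (1 << 13), 1000000); /* hypothetical row offsets */
        free(buf);
        return 0;
    }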

It’s almost certain that you’ll see more hardware threats on the horizon because of the way people work with electronics today. All these new revelations remind me of the floppy disk viruses of days past. People would pass viruses back and forth by trading floppies with each other. Some of those viruses would infect the boot sector of the system hard drive, making them nearly impossible to remove. As people increasingly use thumb drives and other removable media to exchange data, you can expect to see a resurgence of this sort of attack.

The potential for hardware-based attacks continues to increase as the computing environment becomes more and more commoditized and people’s use of devices continues to change. It’s the reason I wrote Does Your Hardware Spy On You? and the reason I’m alerting you to the potential for hardware-based attacks in this post. You need to be careful how you interact with others when exchanging bits of seemingly innocent hardware. Let me know your thoughts about hardware-based attacks at [email protected].


Does Your Hardware Spy On You?

Every once in a while I encounter an article that talks about government intrusion into private organizations through means that seem more like a James Bond movie plot than reality. The latest such story appeared in ComputerWorld, “To avoid NSA, Cisco delivers gear to strange addresses.” These articles lead me to wonder whether the authors have an overdeveloped persecution complex or whether government agencies really are spying on the public in such covert (and potentially illegal) ways. The fact that some companies apparently take the threat seriously enough to ship their equipment to odd addresses is frightening. Consider the ramifications: is it even possible to feel safe ordering hardware you haven’t built yourself (or are the individual components bugged too)?

Obviously, government organizations do require some means of tracking the bad guys out there. Of course, the term bad guys is pretty loose and subject to a great deal of interpretation. In addition, just how much tracking is too much tracking? Would enough tracking prevent another terrorist attack or the loss of income caused by crooked company executives? Many questions remain unanswered in my mind (and obviously in the minds of others) over the use of various tracking technologies.

The topic of government spying, its legitimate and illegitimate uses, and just who the bad guy is demands a lot more attention than anyone is giving it. So, how do you feel about government tracking of everything and anything it sets its mind to spy on? Do you feel there should be limits? How do you feel about shipping things to odd addresses to avoid notice and circumvent the system (partly because the system is broken)? I’d love to hear your point of view about the use of modified computer equipment as a tool for spying on the private sector at [email protected].


Self-driving Cars in the News

I remember reading about self-driving cars in science fiction novels. Science fiction has provided me with all sorts of interesting ideas to pursue as I’ve gotten older. Many things I thought would be impossible have become reality over the years, and things I thought I’d never see five years ago, I’m seeing in reality today. I discussed some of the technology behind self-driving cars in my Learning as a Human post. The article was fine as far as it went, but readers have taken me to task more than a few times for becoming enamored with the technology and not discussing its reality.

The fact of the matter is that self-driving cars are already here to some extent. Ford has introduced cars that can park themselves. The Ford view of cars is the one that most people can accept: an anticipated next step in the evolution of driving. People tend to favor small changes in technology. Changes that are too large tend to shock them and aren’t readily accepted. A buyer is far more likely to accept a car that improves slightly on the model they already own than one that changes everything at once.

Google’s new self-driving car might be licensed in Nevada, but don’t plan on seeing it in your city anytime soon (unless you just happen to live in Nevada, of course). A more realistic approach to self-driving cars will probably come in the form of conveyances used in specific locations. For example, you might see self-driving cars used at theme parks and college campuses where the controlled environment will make it easier for them to navigate. More importantly, these strictly controlled situations will help people get used to the idea of seeing and using self-driven vehicles. The point is to build trust in them in a manner that people can accept.

Of course, the heart of the matter is what self-driving cars can actually provide in the way of a payback. According to a number of sources, they could reduce driving costs by $190 billion per year in health and accident savings. That’s quite a savings. Money talks, but people have ignored monetary benefits in the past to ensure they remain independent. It will take time to discover whether the potential cost savings actually make people more inclined to use self-driving cars. My guess is that people will refuse to give up their cars unless there is something more than monetary and health benefits on offer.

Even though no one has really talked about it much, self-driving cars have the potential to provide all sorts of other benefits. For example, because self-driving cars will obey the speed laws and run at the most efficient speeds possible in a given situation, cars will become more fuel efficient and produce less pollution. The software provided with the vehicle will probably allow the car to choose the most efficient route to a destination and automatically navigate around obstructions, such as accidents (which will be notably fewer). People can also be more assured of getting to their destination on time because they won’t get lost. Being able to work during the commute will let people spend more quality time with family afterward. It’s the intangible benefits that will eventually make the self-driving car seem like a good way to do things.

The self-driving car is available today, and it won’t be long before you can buy one. You can already get a self-parking Ford, so the next step really isn’t that far away. The question is whether you really want to take that step. Let me know your thoughts on self-driving cars, their potential to save lives, reduce costs, create a cleaner environment, and make life generally more pleasant at [email protected].

Radio Shack, We Knew Thee Well

I’m dating myself here, but the first time I entered a Radio Shack was in 1972. I had just finished reading The Radio Amateur’s Handbook and was absolutely fascinated by the whole thought of working with electronics. The combination of reading science fiction and electronics books of various sorts convinced me to go to a technical high school. I graduated with all the necessary knowledge to become an apprentice electrician. However, entering the Navy moved me into computers, where I remain today. (I started out as a hardware guy and moved into programming later.) Radio Shack was filled with all sorts of cool-looking gizmos. It was akin to entering my science fiction books and experiencing what “could be” firsthand. The day I finished designing and building my first power supply and amplifier was an absolute thrill. I still have the plans for it somewhere. The mono output of 20 watts seemed fantastic. I’m not the only one with fond memories; authors such as PC Magazine’s Jamie Lendino and John Dvorak have them as well.

Over the years I watched Radio Shack change from that absolutely fascinating place I had to visit every time I passed it into something less. Eventually, it became just a common store, its aisles filled with televisions, radios, and consumer gizmos of all sorts. It got to the point where I could buy the same type of goods just about anywhere for a much lower price. For me, the death spiral was just sad to watch. As a country, we really need stores that encourage people to invent, to think outside the box. Unfortunately, Radio Shack is no longer that store. I visited a Radio Shack in a mall the other day and there were only a few items left for sale at steep discounts. I helped a friend buy a mouse. It was an odd feeling to leave the store one last time, knowing that I’d never again see the store I knew and loved in the 70s and 80s.

Even the salespeople changed over time. During the early 70s, when I first started going to Radio Shack, I could hear salespeople talking the talk with any customer who came in. One of them even convinced me to use a different transistor for my amplifier and to rely on a full bridge rectifier to make the output cleaner. If those terms seem foreign, they do, in fact, belong to a different time. The surge of creativity I experienced during that phase of my life is gone, replaced by something totally different today. The young salesperson I talked with the other day just barely knew his trade. Gone are the salespeople who really made Radio Shack special.

I yearn for the resurgence of creativity and of stores that promote it. This is one case where brick and mortar stores have a definite advantage over their online cousins. When you go into a brick and mortar store, you can talk with real people, see real demonstrations, touch real hardware, and get that special ethereal feeling of entering the zone of the tinkerer and the definer of dreams. Radio Shack, we knew thee well, and we really need something like you back.


Learning as a Human

I started discussing the whole idea of robot perception at a primate level in Seeing as a Human. In that post I discussed the need for a robot not just to see objects, but to understand that each object is something unique. The ability to comprehend what is being seen is something robots must master in order to interact with society at large in a manner that humans will understand and appreciate. Before the concepts espoused in works of science fiction such as I, Robot can be realized, robots must first be able to interact with objects in ways that programming simply can’t anticipate. That’s why the technology being explored by deep learning is so incredibly important to the advancement of robotics.

Two recent articles point to the fact that deep learning techniques are already starting to have an effect on robotic technology. The first is about the latest Defense Advanced Research Projects Agency (DARPA) challenge. A robot must be able to drive a vehicle, exit the vehicle, and then perform certain tasks. No, this isn’t science fiction; it’s a real-world exercise. This challenge is significantly different from self-driving cars. In fact, people are actually riding in self-driving cars now, and I see a future where all cars will become self-driving. However, asking a robot to drive a car, exit it, and then do something useful is a significantly more difficult test of robotic technology. To make such a test successful, the robot must be able to learn, at least to some extent, from each trial. Deep learning provides the means for the robot to learn.

The second article seems mundane by comparison until you consider just what it is the robot is trying to do: cook a meal it hasn’t been trained to cook. In this case, the robot watches a YouTube video to learn how to cook the meal, just as a human would. Performing this task requires that the robot be able to learn by watching the video—something most people assume only a human can do. The programming behind this feat breaks cooking down into tasks that the robot can perform, each equivalent to a skill a human would possess. Unlike humans, a robot can’t learn new skills yet, but it can reorganize the skills it does possess in an order that makes completing the recipe possible. So, if a recipe calls for coddling an egg and the robot doesn’t know how to perform this task, it’s unlikely that the robot will actually be able to use that recipe. A human, on the other hand, could learn to coddle an egg and then complete the recipe. So, we’re not talking anything near human-level intelligence yet.
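
To see why a single missing skill blocks the whole recipe, consider this toy sketch in C. The skill names and recipe steps are invented for illustration, not taken from the research; the point is that the robot can only sequence skills it already has, so one unknown step makes the recipe unusable.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical skill library -- the robot can reorder these skills,
       but it can't invent new ones. */
    static const char *skills[] = { "chop", "stir", "boil", "fry" };

    static int knows(const char *step)
    {
        for (size_t i = 0; i < sizeof(skills) / sizeof(skills[0]); i++)
            if (strcmp(skills[i], step) == 0)
                return 1;
        return 0;
    }

    int main(void)
    {
        /* A recipe is an ordered list of required skills (invented). */
        const char *recipe[] = { "chop", "coddle", "stir" };
        for (size_t i = 0; i < sizeof(recipe) / sizeof(recipe[0]); i++) {
            if (!knows(recipe[i])) {
                /* One unknown step makes the whole recipe unusable. */
                printf("Cannot cook: no skill for '%s'\n", recipe[i]);
                return 1;
            }
        }
        printf("Every step maps to a known skill; sequencing them now.\n");
        return 0;
    }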

The potential for robots to free humans from mundane tasks is immense. However, the potential for robots to make life harder for humans is equally great (read Robot Induced Slavery). We’re at a point where some decisions about how technology will affect our lives must be made. Unfortunately, no one seems interested in making such decisions outright and the legal system is definitely behind the times. This means that each person must choose the ways in which technology affects his or her life quite carefully. What is your take on robotic technology? Let me know at [email protected].


Seeing as a Human

Neural networks intrigue me because of their ability to change the way computers work at a basic level. I last talked about them in my Considering the Future of Processing Power post. That post fed into the A Question of Balancing Robot Technologies post, which explored possible ways in which neural networks could be used. The idea that neural networks provide a means of learning and of pattern recognition is central to the goals this technology seeks to achieve. Even though robots are interesting, neural networks must first solve an even more basic problem. Current robot technology is hindered by the robot’s inability to see properly so that it can avoid things like chairs in a room. There are all sorts of workarounds for the problem, but they all end up being kludges in the end. A recent ComputerWorld article, Computer vision finally matches primates’ ability, gives me hope that we may finally be turning the corner on making robots that can interact well with the real world.

In this case, the focus is on making it possible for a robot to see just as humans do. Actually, the sensors would be used for all sorts of other technologies, but it’s the use in robots that interests me most. A robot that can truly see as well as a human would be invaluable when it comes to performing complex tasks, such as monitoring a patient or fighting a fire. In both cases, it’s the ability to actually determine what is being seen that is important. A robotic nurse could see the same sorts of things a human nurse sees, such as the start of an infection. A firefighting robot could pick out the body of someone to rescue amidst the flames. A video camera alone can’t allow a robot to see; the robot has to interpret the data the camera provides.

However, just seeing isn’t enough. Yes, picking out patterns in the display and understanding where each object begins and ends is important. But in order to use the data, a robot also needs to comprehend what each object is and determine whether that object is important. A burning piece of wood in a fire might not be important, but the human lying in the corner needing help is. The robot needs to comprehend that the object it sees is a human and not a burning piece of wood.

Standard processors would never work for these applications because they’re too slow and can’t retain what they’ve learned. Neural networks make it possible for a robot to detect objects, determine which objects are important, focus on specific objects, and then perform tasks based on those selected objects. A human would still need to make certain decisions, but a robot could quickly assess a situation, tell the human operator only the information needed to make a decision, and then act on that decision in the operator’s stead. In short, neural networks make it possible to begin looking at robots as valuable companions to humans in critical situations.
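
As a gesture toward what “deciding which objects are important” means in code, here’s a minimal perceptron sketch in C. It learns a linear rule separating “important” from “ignore” using a few labeled feature vectors; the two features (think warmth and movement) are invented stand-ins, and real systems use far larger networks.

    #include <stdio.h>

    /* Toy perceptron: learns a linear rule for "important" (1) versus
       "ignore" (0) from labeled feature vectors. The features are
       invented stand-ins (say, warmth and movement), scaled 0 to 1. */
    int main(void)
    {
        double x[4][2] = { {0.9, 0.8}, {0.8, 0.9}, {0.1, 0.2}, {0.2, 0.1} };
        int    y[4]    = { 1, 1, 0, 0 };
        double w[2] = { 0.0, 0.0 }, b = 0.0, rate = 0.1;

        for (int epoch = 0; epoch < 100; epoch++) {
            for (int i = 0; i < 4; i++) {
                int out = (w[0] * x[i][0] + w[1] * x[i][1] + b > 0.0);
                int err = y[i] - out;          /* -1, 0, or 1 */
                w[0] += rate * err * x[i][0];  /* nudge the weights */
                w[1] += rate * err * x[i][1];  /* toward the answer */
                b    += rate * err;
            }
        }
        printf("learned: w = (%.2f, %.2f), b = %.2f\n", w[0], w[1], b);
        return 0;
    }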

Robot technology still has a long way to go before you start seeing robots of the sort presented in Star Wars. However, each step brings us a little closer to realizing the potential of robots to reduce human suffering and to reduce the potential for injuries. Let me know your thoughts about neural networks at [email protected].

Considering the Future of Processing Power

The vast majority of processors made today perform tasks as procedures. The processor looks at an instruction, performs the task specified by that instruction, and then moves on to the next instruction. It sounds like a simple way of doing things, and it is. Because a processor can perform instructions incredibly fast—far faster than any human can even imagine—it can appear that the computer is thinking. What you’re actually seeing is a processor performing one instruction at a time, incredibly fast, driven by really clever programming. You truly aren’t seeing any sort of thought in the conventional (human) sense of the term. Even when using Artificial Intelligence (AI), the process is still a procedure that only simulates thought.
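
If you want to see that one-instruction-at-a-time cycle laid bare, here’s a toy interpreter sketch in C. The four opcodes are invented for illustration; the point is just the loop itself: fetch an instruction, execute it, move on to the next.

    #include <stdio.h>

    /* Invented opcodes for a toy machine. */
    enum { HALT, PUSH, ADD, PRINT };

    int main(void)
    {
        /* A tiny "program": push 2, push 3, add them, print, halt. */
        int program[] = { PUSH, 2, PUSH, 3, ADD, PRINT, HALT };
        int stack[16], sp = 0, pc = 0;

        for (;;) {                  /* fetch the next instruction... */
            int op = program[pc++];
            switch (op) {           /* ...execute it... */
            case PUSH:  stack[sp++] = program[pc++];      break;
            case ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case PRINT: printf("%d\n", stack[sp - 1]);    break;
            case HALT:  return 0;
            }                       /* ...and move on to the next */
        }
    }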

Most chips today have multiple cores. Some systems have multiple processors. The addition of cores and processors means that the system as a whole can perform more than one task at once—one task for each core or processor. However, the effect is still procedural in nature. An application can divide itself into parts and assign each core or processor a task, which allows the application to reach specific objectives faster, but the result is still a procedure.
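
Here’s a minimal sketch of that division of labor using POSIX threads (assuming a Unix-like system with pthreads): the application splits an array sum into two halves, one per core. Each half still executes procedurally, which is exactly the point.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    static long data[N];

    struct chunk { long start, end, sum; };

    static void *sum_chunk(void *arg)   /* one task per core */
    {
        struct chunk *c = arg;
        c->sum = 0;
        for (long i = c->start; i < c->end; i++)
            c->sum += data[i];          /* still purely procedural */
        return NULL;
    }

    int main(void)                      /* build with: cc file.c -pthread */
    {
        for (long i = 0; i < N; i++)
            data[i] = 1;

        struct chunk a = { 0, N / 2, 0 }, b = { N / 2, N, 0 };
        pthread_t t1, t2;
        pthread_create(&t1, NULL, sum_chunk, &a);
        pthread_create(&t2, NULL, sum_chunk, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("total = %ld\n", a.sum + b.sum);  /* combine the parts */
        return 0;
    }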

The reason the previous two paragraphs are necessary is that even developers have started buying into their own clever programming and feel that application programming environments somehow work like magic. There is no magic involved, just incredibly fast processors guided by even more amazing programming. To gain a real leap in the ability of processors to perform tasks, the world needs a new kind of processor, which is the topic of this post (finally). The kind of processor that holds the most promise right now is the neural processor. Interestingly enough, science fiction has already beaten science fact to the punch by featuring neural processing in shows such as Star Trek and movies such as The Terminator.

Companies such as IBM are working to turn science fiction into science fact. The first story I read on this topic appeared several years ago (see IBM creates learning, brain-like, synaptic CPU). That story points out three special features of neural processors. The first is that a neural processor relies on massive parallelism. Instead of having just four or eight or even sixteen tasks being performed at once, even a really simple neural processor has in excess of 256 tasks underway. The second is that the electronic equivalents of neurons in such a processor work cooperatively to perform tasks, so the processing power of the chip is magnified. The third is that the chip actually remembers what it did last and forms patterns based on that memory. This third element is what really sets neural processing apart and makes it the kind of technology needed to advance to the next stage of computing.
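
That third feature is the easiest to caricature in ordinary code. The following is a toy Hebbian-style sketch in C, not a model of IBM’s chip: a connection between two “neurons” strengthens whenever they fire together, so the network’s past activity shapes its future responses.

    #include <stdio.h>

    /* Toy Hebbian rule: "neurons that fire together wire together."
       The weight acts as the memory of past co-activity. */
    int main(void)
    {
        double weight = 0.1, rate = 0.2;
        /* Invented activity of two neurons over eight time steps. */
        int a[8] = { 1, 1, 0, 1, 0, 1, 1, 0 };
        int b[8] = { 1, 1, 0, 1, 1, 1, 0, 0 };

        for (int t = 0; t < 8; t++) {
            if (a[t] && b[t])
                weight += rate * (1.0 - weight);  /* strengthen, capped at 1 */
            printf("t=%d weight=%.3f\n", t, weight);
        }
        /* The final weight reflects the whole history of inputs -- a
           pattern remembered, not a value computed by one instruction. */
        return 0;
    }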

In the three years since the original story was written, IBM (and other companies, such as Intel) have made great forward progress. When you read IBM Develops a New Chip That Functions Like a Brain, you see that the technology has indeed moved forward. The latest chip is actually able to react to external stimuli. It can understand, to an extremely limited extent, the changing patterns of light (for example) it receives. An action is no longer just a jumble of pixels, but is recognized as being initiated by someone or something. The thing that amazes me about this chip is that its power consumption is so low. Most of the efforts so far seem to focus on mobile devices, which makes sense because these processors will eventually end up in devices such as robots.

The eventual goal of all this effort is a learning computer—one that can increase its knowledge based on the inputs it receives. This technology would change the role of a programmer from creating specific instructions to one of providing basic instructions and then providing the input needed for the computer to learn what it needs to know to perform specific tasks. In other words, every computer would have a completely customized set of learning experiences based on specific requirements for that computer. It’s an interesting idea and an amazing technology. Let me know your thoughts about neural processing at [email protected].


Bletchley Park Reborn and a Social Issue Revisited

You may not have ever heard about Bletchley Park. In fact, the place was one of the best kept secrets of World War II (WWII) until just recently. Of course, like many secret places, this one fell into disuse after the war and nearly ended up on the scrap heap, but a restoration effort has been under way for quite some time now. As a computer scientist, the entire Bletchley Park project interests me because it was the first time that many computer principles were put into play. The project relied on cutting-edge technology to shorten the war and saved thousands of lives. A lot of other people must feel as I do, because the park recently had its 100,000th visitor.

This particular historical place is receiving a lot of notice as of late. For example, there is a PBS television show called The Bletchley Circle that talks about what happened to some of the ladies who worked there after the war. The show makes good viewing and the feelings and situations presented are realistic to a point. I doubt very much that any of the people who actually worked there ended up as amateur sleuths, but it’s fun to think about anyway. The show does have the full cooperation of the restoration group and is even filmed there.

The computer systems used at Bletchley Park were immense; even the lowliest smartphone today probably has more processing power. However, WWII was the first war in which computer systems played a major role, and reading about their history gives insight into the directions the technology may take in the future. The most important factor for me is that the group working at Bletchley Park was made up of the finest minds available, regardless of gender, sexual orientation, religion, age, or any other factor you can imagine. The only thing that mattered was whether you had a good mind. It’s how things should be today, could be today, but aren’t. It’s not hard to imagine the impact such a group could have now on a problem like global warming.

All good things must apparently come to an end. At the end of the war, the group that performed so many amazing tasks was broken up rather than being retained to work on other problems, such as reconstruction. The sheer waste of not keeping these minds working together staggers me, but it has happened more than a few times throughout history, all over the world; no country is exempt. The women in the group ended up going home to be housewives and pretend that nothing ever happened. It’s the reason that The Bletchley Circle strikes such a chord with me. The show presents a kind of “what if” scenario.

If the world is to survive, it’s important that we think about the incredible waste of not using all the resources at hand for solving problems (and there are more than a few problems to solve). If this group serves as nothing else, it’s a reminder of how a few extremely talented people were able to solve a seemingly insurmountable problem. They should serve as an example for all those who think the world’s problems can’t be solved—a beacon of hope. Let me know your thoughts about Bletchley Park and The Bletchley Circle at [email protected].


The Science Fiction Effect

I love reading science fiction. In fact, one of my favorite authors of all time is Isaac Asimov, but I’m hardly unique in that perspective. For many people, science fiction represents just another kind of entertainment. In fact, I’d be lying if I said that entertainment wasn’t a major contributor toward my love of science fiction. However, for me, science fiction goes well beyond mere entertainment. It’s a motivator—a source of ideas and inspiration. So I recently read A Warp Speed Analysis on the Influence of Science Fiction with a great deal of interest. It seems that I’m not alone in my view that science fiction authors are often a source of creativity for real-world scientists who see something that could be and make it into something that really is.

The science fiction effect has inspired me in both my consulting and writing over the years. For example, I’ve seen how science fiction authors treat those with special needs as if they don’t really have any special need at all—science has provided solutions that level the playing field for them. It’s the reason that I wrote Accessibility for Everybody: Understanding the Section 508 Accessibility Requirements and continue to write on accessibility topics. The whole idea that science could one day make it possible for everyone to live lives free of any physical encumbrance excites me more than just about anything else.

What I find most interesting is that the ability to turn science fiction into science fact receives real-world emphasis from colleges and universities. For example, MIT offers a course entitled MAS S65: Science Fiction to Science Fabrication. Many articles, such as Why Today’s Inventors Need to Read More Science Fiction, even encourage scientists to read science fiction as a means of determining how their inventions might affect mankind as a whole. The point is that the creativity of science fiction authors has real-world implications.

Now, before I get a huge pile of e-mail decrying my omission of other genres of writing—I must admit that I do read other sorts of books. Currently I’m enjoying the robust historical fiction of Patrick O’Brian. I’ll eventually provide a review of the series, but it will take me a while to complete it. Still, other books focus on what was in the past, what is today, or what possibly might be—science fiction propels us into the future. The science fiction effect is real and I’m happy to say it has influenced me in a number of ways. How has science fiction affected you? Let me know at [email protected].


Interesting Money Issues for Computer Users

I was recently reading an article by John Dvorak entitled “The Secret Printer Companies Are Keeping From You,” which got me thinking about all the ways I try to reduce the cost of my computing experience without reducing quality. In the article, John discusses the use of less expensive replacement inks for inkjet printers. I found the arguments for using less expensive inks compelling. Then again, I’m always looking for the less expensive route to computing.

I’ve often tried the less expensive solution in other areas. For example, are the white box labels any different from the high-end Avery alternatives? I found to my chagrin that this is one time when you want to buy the more expensive product. The less expensive labels often come by their price advantage in the form of less reliable adhesives or thinner paper. This isn’t always the case, but generally it is. When it comes to labels, you often get what you pay for. I tried similar experiments with paper and found that the less expensive paper was a bit less bright or possibly not quite as nicely finished, but otherwise worked just fine. It’s important to look carefully at the cheaper brands when you decide to buy them, determine whether there are any actual quality differences, and decide whether you can live with those differences when they’re present.

John is right about more expensive labeled products being passed off as less expensive off-brand products. In some cases, I’ve found all sorts of items that didn’t quite meet a vendor’s strict requirements for a labeled product sold as less expensive off-brand products. Sometimes you’d have to look very closely to see any difference at all. I also know that some white box vendors have name-brand vendors produce equipment with less stringent requirements or possibly not quite as many bells and whistles. The point is that you can find new products that work almost as well as the name brand for substantially less money if you try.

However, let’s say you’re not willing to take a chance on a white box option. There is also a strong market now in rebuilt and refurbished equipment. Often, this is last year’s model that someone turned back in for the latest product. After a required check of the hardware and possibly a refit of a few items, a company will try to sell it to a new customer at a significantly reduced price. These refurbished items usually work as well as the new products. Because they’re already burned in, there is also less of a chance that you’ll encounter problems with them. Even Apple has gotten into the refurbished product game—I’m planning to buy a refurbished third generation iPad in the near future.

Getting systems designed for expandability is another good way to extend your purchasing power. You might not be able to afford absolutely everything you want today. Get what you can afford and then add onto the system later. This is the route I take quite often. I’ll get a motherboard and other system components that offer room for expansion and then I add what I need until the unit is maxed out. I can then get the next generation setup, move the parts that are still viable, and use the parts that are outdated for some other purpose. Often I’ll take pieces and put them together for a test system or for a unit that I’ll use to run an older operating system.

Some people have asked why I go through all this trouble when you can get a truly inexpensive system from a place like TigerDirect for under $500.00. I’ve looked at these systems closely enough to figure out that they usually won’t work for my needs right out of the box—I always end up adding enough to bring the price near $1,000.00, and usually more. Once the system is delivered, I find there is little documentation and that the box is too small to accommodate any upgrades. I would have saved money in the long run by getting a better system with expandability built in. Here is where the trap occurs: there is a point where you have cut costs so much that the PC ends up being a frustrating throwaway. It’s false economy for a power user (these systems often work just fine for students or users who don’t run anything more complex than a word processor).

Getting the most out of your computer purchasing power takes thought and research. What has your best purchasing decision been? How about the worst mistake you’ve made? Let me know your thoughts about computer hardware purchases at [email protected].