Death of Windows XP? (Part 3)

Questions continue to come in from readers who are still using Windows XP even though Microsoft now supports it only marginally. Yes, it’s the operating system that refuses to die, and readers really are confused as to why Microsoft has decided to kill what is obviously a popular operating system. They’re in good company. In fact, some authors, such as John Dvorak, have gone a lot further in their negative comments regarding the demise of Windows XP. The point is that Microsoft is quite determined to force anyone it can into using Windows 8.1, whether it works for them or not. It doesn’t seem to matter that people still have perfectly usable systems that are happily running Windows XP without problems.

My first two posts on this topic, Death of Windows XP? and Death of Windows XP? (Part 2), should have addressed any questions that people reading my books might have. Essentially, I recommend updating to Windows 7 (for business users) or Windows 8.1 (for consumers) when your hardware begins to die of old age or your needs change.


I no longer have access to a Windows XP system, so I’m not able to provide support for my old Windows XP books at this point. If you have one of my old Windows XP books, you’ll need to use it as is. I haven’t gone out of my way to orphan the books, but the technology is old and I simply don’t have the resources to support them any longer. In addition, none of my current programming books are designed for Windows XP developers.

In the meantime, you need to ensure that you get security updates. Microsoft has extended a limited level of security support, which includes malware signatures and the associated engine, until 14 July 2015. You won’t receive any sort of bug fixes. To enhance the security of your environment, you may want to consider making these changes to your system:

  • Use a browser that receives regular security updates, such as Chrome or Firefox (IE is a bad choice because Microsoft won’t update it).
  • Remove any software that is prone to security problems, such as Java.
  • Rely on an account with limited privileges, rather than using the Administrator account (a minimal command-line sketch appears after this list).
  • Update any application software as often as possible.
  • Keep the number of installed applications as small as possible.
  • Examine your system (especially your hard drive) for signs of intruders (such as unexplained processes) on a regular basis.
  • Stay offline whenever possible.
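
If you’ve been working from the Administrator account, the following is a minimal sketch of creating a limited account from the command line (the account name and password are placeholders only; you can also create the account through the User Accounts applet in the Control Panel):

REM Create a standard user account for everyday work. By default, the new
REM account becomes a member of the limited Users group rather than the
REM Administrators group.
net user DailyUser P@ssw0rd /add

Log on with the limited account for day-to-day work and reserve the Administrator account for tasks that truly require it.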

These strategies can help you out for a while, but they’re short-term solutions. Eventually, you need to take the system offline permanently (such as when you use it only to run older games) or upgrade to something newer. Please let me know whether you have any additional questions about Windows XP and how it affects support for my books at

An Update on the RunAs Command

It has been a while since I wrote the Simulating Users with the RunAs Command post that describes how to use the RunAs command to perform tasks that the user’s account can’t normally perform. (The basics of using the RunAs command appear in both Administering Windows Server 2008 Server Core and Windows Command-Line Administration Instant Reference.) A number of you have written to tell me that there is a problem with using the RunAs command with built-in commands—those that appear as part of CMD.EXE. For example, when you try the following command:

RunAs /User:Administrator "md \Temp"

you are asked for the Administrator password as normal. After you supply the password, you get two error messages:

RUNAS ERROR: Unable to run – md \Temp
2: The system cannot find the file specified.

In fact, you find that built-in commands as a whole won’t work as anticipated. One way to overcome this problem is to place the commands in a batch file and then run the batch file as an administrator. This solution works fine when you plan to execute the command regularly. However, it’s not optimal when you plan to execute the command just once or twice. In this case, you must start a copy of the command processor and use it to run the command, as shown here:

RunAs /User:Administrator "cmd /c \"md \Temp""

This command looks pretty convoluted, but it’s straightforward if you take it apart a little at a time. At the heart of everything is the md \Temp part of the command. In order to make this a separate command, you must enclose it in double quotes. Remember to escape the double quote that appears within the command string by using a backslash (as in \").

To execute the command processor, you simply type cmd. However, you want the command processor to start, execute the command, and then terminate, so you also add the /c command line switch. The command processor string is also enclosed within double quotes to make it appear as a single command to RunAs.


Make sure you use forward slashes and backslashes as needed. Using the wrong slash will make the command fail.
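
If you decide to use the batch file approach mentioned earlier for a command you run regularly, a minimal sketch looks like this (MakeTemp.bat and its location are hypothetical and used only for illustration):

REM Contents of MakeTemp.bat. Placing the built-in command in a batch file
REM gives the command processor an actual file to execute.
md \Temp

You can then run the batch file under the Administrator account using the same pattern described above:

RunAs /User:Administrator "cmd /c C:\MakeTemp.bat"

Because the path contains no spaces, there is no need for the escaped double quotes in this particular example.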

From here, the RunAs command works just as it normally does. In this case, the command includes only the username; RunAs prompts you for the password, just as it did in the earlier example. Let me know if you find this workaround helpful at


Death of Windows XP? (Part 2)

The fact that Windows XP, despite some pretty aggressive attacks by Microsoft on its own product, is still alive isn’t in doubt. Of course, there is the matter of support to consider. Microsoft has decided not to provide any more support for Windows XP unless you’re a big company or government organization with immensely deep pockets. Stories abound about the Dutch and British governments forking over huge bucks to keep their copies of Windows XP patched. Of course, the IRS is in on it too. (Microsoft begrudgingly decided to provide security updates for Windows XP until 14 July 2015 after a lot of complaining.)

My previous post on this topic, Death of Windows XP?, discussed some of the pros and cons of keeping the aging operating system around. In general, it’s a good idea to update to Windows 7 if you have equipment that can run it. Windows 8 has received a lot of negative press, especially for business use. After working with it for a while myself, I see it as a good consumer operating system, but not necessarily something a business would want to use. Even with the updates, Windows 8 simply forces users to work too hard to get things done the way businesses normally do them.

What surprised me this past week (and it shouldn’t have) is that some larger organizations are taking matters into their own hands. For example, if you’re a Windows XP user in China, you can get updates for your Windows XP installation from Qihoo 360. The point is that Windows XP will apparently continue to receive patches and security updates even if Microsoft isn’t involved. This process reminds me of what happened to IBM when it started to drop the ball on the PC. At one time, everything revolved around IBM, but then the company made some really bad decisions and third parties had an opportunity to take control of the market (which they promptly did).

Whether you believe Windows XP is worth saving or not isn’t the issue. What the whole Windows XP scenario points out is that Microsoft is losing its grip on the market, even the desktop market where it once reigned supreme. What are your thoughts about Microsoft’s future? Let me know at


Death of Windows XP?

There have been a lot of stories in the trade press about Windows XP as of late. A number of readers have written to ask about the aging operating system because they’re confused by stories from one side that say everyone is sticking with Windows XP and stories from the other that say people are abandoning it. Windows XP is certainly one of the longest-lasting and most favored operating systems that Microsoft has produced, so it’s not surprising there is so much confusion about it.

Microsoft is certainly putting a lot of effort into getting rid of the aging operating system, and for good reason—the code has become hard to maintain. Development decisions that seemed appropriate at the time Windows XP was created have proven not to work out in the long run. Of course, there are monetary reasons for getting rid of Windows XP as well. A company can’t continue to operate if no one buys new products. It must receive a constant influx of funds to stay in business, even a company as large as Microsoft. In short, if you’re Microsoft and you want to stay in business, you do anything it takes to move people in some other direction rather than continue to service what has become an unreliable operating system.

On the other side of the fence are people who are simply happy with the operating system they have today. The equipment they own is paid for, and there isn’t a strong business reason to move to some other platform until said equipment breaks. The reliability of computer equipment today is such that it can last quite a long time without replacement. Theoretically, based on reliability alone, it’s possible that people will continue to use Windows XP for many more years. I have such a system set up to hold my movie database and to play older games I enjoy, but I don’t network it with any other equipment and it definitely doesn’t have access to the Internet.

From many perspectives, reports of the death of Windows XP are likely premature. The latest statistics still place the Windows XP market share above 27 percent. Even when Microsoft’s support goes away on April 8th, many third-party vendors will continue to support Windows XP. What Microsoft’s end of support means is that you won’t get any new drivers for new hardware or upgrades to core operating system features. However, you can still get updates to your virus protection, and Windows XP will continue to operate with your existing hardware.

For most people, the decision of whether to keep Windows XP around hinges on the simple question of whether the operating system still fulfills every need. If it does, there really isn’t any reason to succumb to the fear-mongering that is taking place and move to something else. However, once your equipment does start to break down, or you find that Windows XP doesn’t quite fit the bill any longer, consider moving to something newer.

As to the essential question about the level of Windows XP support I’m willing to provide for my books, it depends on the book. My system no longer has development software on it because developers have moved on to other platforms. So, if you ask me programming questions about Windows XP, I’m not going to be able to help you. To some extent, I can offer a little help with user-level support questions for a few of my older books. However, I won’t be able to cover issues that my support system doesn’t address any longer, such as connecting to a network or the Internet. In sum, even though I can offer you some level of support in many cases, I can’t continue to provide the full support I once did. Let me know about your Windows XP book support questions at


Antivirus and Application Compilation

Sometimes applications don’t get along, especially when one application is designed to create new content at a low level and the other is designed to prevent low-level access to a system. Such is sometimes the case with compilers and antivirus applications. I haven’t been able to reproduce this behavior myself, but enough readers have told me about it that I feel I really do need to address it in a post. There are situations where you work with source code from one of my books, compile it, and then find that your antivirus application complains that the code is infected with something (even though you know it isn’t). Sometimes the antivirus program will go so far as to simply delete the application you just compiled (or place it in a virus vault).

The solution to the problem can take a number of forms. If your antivirus application provides some means of creating exceptions for specific applications, the easiest way to overcome the problem is to create such an exception. You’ll need to read the documentation for your antivirus application to determine whether such a feature exists.

In some cases, the compiler or its associated Integrated Development Environment (IDE) simply doesn’t follow all the rules required to work safely in protected directories, such as the C:\Program Files directory on a Windows system. This particular issue has caused readers enough woe that my newer books suggest installing the compiler and its IDE in a directory the reader owns. For example, I now ask readers to install Code::Blocks in the C:\CodeBlocks directory on Windows systems because installing it elsewhere has caused some people problems.

Unfortunately, creating exceptions and installing the application in a friendly directory only go so far in fixing the problem. A few antivirus applications are so intent on protecting you from yourself that nothing you do will prevent the behavior. When this happens, you still have a few options. The easiest solution is to turn the antivirus program off just long enough to compile and test the application. Of course, this is also the most dangerous solution because it could leave your system open to attack.

A safer, albeit less palatable, solution is to try a different IDE and compiler. Antivirus programs seem a little picky about which applications they view as a threat. Code::Blocks may cause the antivirus program to react, but Eclipse or Visual Studio might not. Unfortunately, using this solution means that steps in the book may not work precisely as written.

For some developers, the only logical solution is to get a different antivirus application. I’ve personally had really good success with AVG Antivirus. However, you might find that this product doesn’t work for you for whatever reason. Perhaps it interacts badly with some other application on your system or simply doesn’t offer all the features you want.

My goal is to ensure you can use the examples in my books without jumping through a lot of hoops. When you encounter problems that are beyond my control, such as an ornery antivirus application, I’ll still try to offer some suggestions. In this case, the solution truly is out of my control, but you can try the techniques offered in this post. Let me know if you find other solutions to the problem at


Thinking of All the Possibilities in Software Design

A number of books on my shelf, some of which I’ve written, broach the topic of divergent thinking. Unfortunately, many developers (and many more managers) don’t really grasp the ideas behind divergent thinking. Simply put, divergent thinking starts with a single premise and explores as many permutations of that premise as possible. Most developers don’t take the time to use divergent thinking as part of the development process because they don’t see a use for it. In fact, most books fall short of even discussing the potential for divergent thinking, much less naming it as a specific element of application design. I’ve explored the topic before, and a reader recently reminded me of an article I wrote on the topic, Divergent Versus Convergent Thinking: Which Is Better for Software Design?

The process that most developers rely upon is convergent thinking, which is where you convert general goals and needs into specific solutions that appear within a single application. The difference between the two modes of thinking is that divergent thinking begins with a single specific premise, while convergent thinking begins with a number of general premises. More specifically, divergent thinking is the process you use to consider all of the possibilities before you use convergent thinking to create specific solutions to those possibilities.

There is an actual cycle between divergent and convergent thinking. You use divergent thinking when you start a project to ensure you discover as many ways to address user requirements as possible. Once you have a number of possibilities, you use convergent thinking to consider the solutions for those possibilities in the form of a design. The process will point out the possibilities that will work and those that won’t. Maintaining a middle ground between the extremes of divergent and convergent thinking helps create unique solutions, yet keeps the project on track and maintains project team integrity. Managing the cycle is the job of the person in charge of the project, who is often the CIO, but could be someone in another management position. So, the manager has to be knowledgeable about software design in order for the process to work as anticipated.

One of the reasons that many applications fail today is the lack of divergent thinking as part of the software design process. We’re all too busy thinking about solutions, rather than possibilities. Yet the creative mind and the creative process are based on divergent thinking. The reason we’re getting the same solutions rehashed in a million different ways (resulting in a lack of interest in new solutions) is the lack of divergent thinking in the development process.

In fact, I’d go so far as to say that most developers have never even heard of divergent thinking (and never heard convergent thinking called by that name). With this in mind, I’ve decided to provide some resources you can use to learn more about divergent thinking and possibly add it to your application design process.


These are just four of several hundred articles I located on divergent thinking online. I chose these particular four articles because they represent a range of ideas that most developers will find helpful, especially the idea of not applying stereotypical processes when trying to use divergent thinking. Stereotypes tend to block creative flow, which is partly what divergent thinking is all about.

The bottom line is that until divergent thinking is made part of the software design process, we’ll continue to suffer through rehashed versions of the current solutions. What is your view of divergent thinking? Do you see it as a useful tool or something best avoided? Let me know your thoughts at


In Praise of Dual Monitors

A lot of people have claimed that the desktop system is dead—that people are only interested in using tablets and smartphones for computing. In fact, there is concern that the desktop might become a thing of the past. It’s true that my own efforts, such as HTML5 Programming with JavaScript for Dummies and CSS3 for Dummies, have started to focus on mobile development. However, I plan to continue using my desktop system when working because it’s a lot more practical and saves me considerable time. One such time saver is the use of dual monitors.

Yes, I know that some developers use more than just two monitors, but I find that two monitors work just fine. The first monitor is my work monitor—the monitor I use for actually typing code. The second monitor is my view monitor. When I run the application, the output appears on the second monitor so that I can see the result of changes I’ve made. Using two monitors lets me easily correlate the change in code to the changes in application design. Otherwise, I’d be wasting time switching between the application output and my IDE.

I also use two monitors when writing my books. The work monitor contains my word processor, while my view monitor contains the application I’m writing about. This is possibly one time when a third monitor could be helpful—one to hold the word processor, one to hold the IDE, and one to view the application output. However, in this case, a third monitor could actually slow things down because the time spent viewing the output of an example is small when compared to creating a production application.

The concept of separating work from the source of information used to perform the work isn’t new. In fact, people have used the idea for thousands of years. For example, when people employed typewriters to output printed text, the typist used a special stand to hold the manuscript being typed. The idea of having a view of your work and another surface to actually work on has been used quite often throughout history because it’s a convenient way to perform tasks quickly. By employing dual monitors, I commonly get a 15 to 33 percent increase in output, simply because I can see my work and its associated view at the same time.

Working with dual monitors not only saves time, but can also reduce errors. By typing as I view the output of applications, I can more reliably reproduce the text of labels and other information the application provides. The same holds true when viewing information sources found in other locations. Seeing the information as I type it makes errors far less likely.

Don’t get the idea that I support using dual monitors in every situation. Many consumer-oriented computer uses are served just fine with a single monitor. For example, there isn’t a good reason to use two monitors when viewing e-mail in many cases—at least, not at the consumer level (you could make a case for using dual monitors when working with e-mails and a calendar to manage tasks, for example). Dual monitors commonly see use in the business environment because people aren’t necessarily creating their own information source—the information comes from a variety of sources that the user must view in order to use reliably.

Do you see yourself using dual monitors? If you use such a setup now, how do you employ it? Let me know at


Extending the Horizons of Computer Technology

OK, I’ll admit it—at one time I was a hardware guy. I still enjoy working with hardware from time to time, and it’s my love of hardware that helps me see the almost infinite possibilities for extending computer technology to do all sorts of things that we can’t even envision right now. The fact that computers are simply devices for performing calculations really, really fast doesn’t actually matter. The sources of data input do matter, however. As computer technology has progressed, the number of sensor sources available for data input has soared. It’s the reason I recently wrote an article entitled Tools to Help You Write Apps That Use Sensors.

The sensors you can connect to a computer today can do just about any task imaginable. You can detect everything from solar flares to microscopic animals. Sensors can hear specific sounds (such as breaking glass) and detect ranges of light that humans can’t even see. You can rely on sensors to monitor temperature extremes or the amount of liquid flowing in a pipe. In short, if you need to determine when a particular real world event has occurred, there is probably a sensor to do the job for you.

Unfortunately, working with sensors can also be difficult. You can’t simply plug a sensor into your computer and expect it to work. The computer needs drivers and other software to interact with the sensor and interpret the data it provides. Given that most developers have better things to do with their time than write arcane driver code, obtaining the right tool for the job is absolutely essential. My article points out some tricks of the trade for making sensors a lot easier to deal with so that you can focus on writing applications that dazzle users, rather than writing drivers they’ll never see.

As computer technology advances, the inputs and outputs that computers can handle will continue to increase. Sensors provide inputs, but the outputs will become quite interesting in the future as well. For example, sensors in your smartphone could detect that you’re having a heart attack and automatically call for help. For that matter, the smartphone might even be programmed to help in some significant way. It’s hard to know precisely how technology will change in the future because it has changed so much in just the last few years.

What sorts of sensors have you seen at work in today’s world? Do you commonly write applications that use uncommon sensor capabilities? Let me know about your use of sensors at I’d really like to know how many people are interested in these sorts of technologies so that I can tell whether you’d like to see future blog posts on the topic.


Considering Perception in User Interface Design

I read a couple of articles recently that reminded me of a user interface design discussion I once had with a friend of mine. First, let’s discuss the articles. The first, New Record for Human Brain: Fastest Time to See an Image, says that humans can actually see something in as little as 13 ms. That short time frame provides the information the brain needs to target a point of visual focus. This article leads into the second, ‘Sixth Sense’ Can Be Explained by Science. In this case, the author explains how the sixth sense that many people regard as supernatural in origin is actually explainable through scientific means. The brain detects a change—probably as a result of that 13 ms view—and informs the rest of the mind about it. However, the change hasn’t been targeted for closer inspection, so the viewer can’t articulate the change. In short, you know the change is there, but you can’t say what has actually changed.

So, you might wonder what this has to do with site design. It turns out that you can use these facts to help focus user attention on specific locations on your site. Now, I’m not talking here about subliminal messaging, which is illegal in many locations. Rather, it’s possible to do what a friend once suggested when designing a site: change a small, but noticeable, element each time a page is reloaded. Of course, you need not reload the entire page. Technologies such as Asynchronous JavaScript And XML (AJAX) make it possible to reload just a single element as needed. (Of course, changing a single element in a desktop application is incredibly easy because nothing special is needed to do it.) The point of making this change is to cause the viewer to look harder at the element you most want them to focus on. It’s just another method for ensuring that the right area of a page or other user interface element gets viewed.
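
As a simple illustration of the idea, here’s a minimal JavaScript sketch. It assumes a page element with the ID highlightPanel and a server URL of /fragment that returns a small piece of markup; both names are invented for this example.

// Replace the content of a single element on a timer so the viewer's eye
// is drawn to that spot without reloading the rest of the page.
setInterval(function () {
    var request = new XMLHttpRequest();
    request.open("GET", "/fragment");
    request.onload = function () {
        if (request.status === 200) {
            document.getElementById("highlightPanel").innerHTML =
                request.responseText;
        }
    };
    request.send();
}, 30000); // Refresh just this one element every 30 seconds.

The timer is only one possible trigger; the point is that a single element changes while the rest of the page stays put.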

However, the articles also make for interesting thoughts about the whole issue of user interface design. Presentation is an important part of design. Your application must use good design principles to attract attention. However, these articles also present the idea of time as a factor in designing the user interface. For example, the order in which application elements load is important because the brain can perceive the difference. You might not consciously register that element A loaded some number of milliseconds sooner than element B, but subconsciously, element A attracts more attention because it registered first and your brain targeted it first.

As science continues to probe the depths of perception, it helps developers come up with more effective ways to present information that enhance the user experience and the benefit of any given application to the user. However, in order to make any user interface change effective, you must apply it consistently across the entire application and ensure that the technique isn’t used to an extreme. Choosing just one element per display (whether a page, window, or dialog box) to change is important. Otherwise, the effectiveness of the technique is diluted and the user might not notice it at all.

What is your take on the use of perception as a means of controlling the user interface? Do you feel that subtle techniques like the ones described in this post are helpful? Let me know your thoughts at


Choosing Variable Names

It often surprises me that developers seem to choose completely useless variable names like MyVariable when creating an application. Although MyVariable could be an interesting variable name for an example in a book, it never has a place in any sort of production code. Even then, I try to create book examples with meaningful variable names, especially when getting past the initial “Hello World” example. Variable names are important because they tell others:


  • What sort of information the variable stores
  • When the variable is commonly used
  • Where the variable is used
  • How to use the variable correctly
  • Why the variable is important

In some cases, the variable name could even indicate who created the variable, although this sort of information is extremely rare. If you never thought a variable name should contain all that information, then perhaps you haven’t been choosing the best variable names for your application.

Even with these requirements in mind, choosing a variable name can be uncommonly hard if you want to maximize the name’s value to both yourself and other developers. Some organizations make the selection process easier by following certain conventions. For example, a form of Hungarian Notation, where certain type prefixes, suffixes, and other naming conventions are used, is a common way to reduce the complexity of creating a variable name. In fact, Hungarian Notation (or some form of it) is often used to name objects, methods, functions, classes, and other programming elements as well. For example, NamArCustomers could be an array of customer names (Nam for names, Ar for array). The use of these two prefixes would make it instantly apparent when the variable is being used incorrectly, such as assigning a list of numbers to the array. The point is that an organizational variable naming policy can reduce complexity, make the names easier for anyone to read, and reduce the time the developer spends choosing a name.


Before I get a ton of e-mail on the topic, yes, I know that many people view Hungarian notation as the worst possible way to name variables. They point out that it only really works with statically typed languages and that it doesn’t work at all for languages such as JavaScript. All that I’m really pointing out is that some sort of naming convention is helpful—whether you use something specific like Hungarian Notation is up to you.

Any variable name you create should convey the meaning of that variable to anyone who reads the code. If you aren’t using some sort of pattern or policy to name your variables, then create a convention that helps you build names in a consistent manner and document it. When you create a variable name, you need to consider these kinds of questions (a short sketch follows the list):


  1. What information does the variable contain (such as a list of names)?
  2. How is the variable used (such as locally or globally, or to contain coordinates, or a special kind of object)?
  3. When appropriate, what kind of information does the variable contain (such as a string or the coordinate of a pixel on screen)?
  4. Is the variable used for a special task (such as data conversion)?
  5. What case should prefixes, suffixes, and other naming elements appear in when a language is case sensitive?
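
To see how a carefully chosen name answers most of these questions at a glance, consider this short JavaScript sketch (the names are invented for illustration):

// The name states what the variable holds (customer names), how it's used
// (to fill a list box), and what kind of data it contains (strings).
var customerNamesForListBox = ["Ann", "Bob", "Carla"];

// This name answers none of those questions.
var MyVariable = ["Ann", "Bob", "Carla"];

// A descriptive name also makes misuse easy to spot:
// customerNamesForListBox = 42;   // Obviously wrong at a glance.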

The point is that you need to choose variable names with care so that you know what they mean later. Carefully chosen variable names make it possible for you to read your code with greater ease and locate bugs a lot faster. They also make it easier for others to understand your code and for you to remember what the code does months after you’ve written it. However, most important of all, useful variable names help you see immediately that a variable is being used the wrong way, such as assigning the length of a name string to a coordinate position on screen (even though both variables are integer values). Let me know your thoughts about variable naming at