An Update on the RunAs Command

It has been a while since I wrote the Simulating Users with the RunAs Command post that describes how to use the RunAs command to perform tasks that the user’s account can’t normally perform. (The basics of using the RunAs command appear in both Administering Windows Server 2008 Server Core and Windows Command-Line Administration Instant Reference.) A number of you have written to tell me that there is a problem with using the RunAs command with built-in commands—those that appear as part of CMD.EXE. For example, when you try the following command:

RunAs /User:Administrator "md \Temp"

you are asked for the Administrator password as normal. After you supply the password, you get two error messages:

RUNAS ERROR: Unable to run - md \Temp
2: The system cannot find the file specified.

In fact, you find that built-in commands as a whole won’t work as anticipated. One way to overcome this problem is to place the commands in a batch file and then run the batch file as an administrator. This solution works fine when you plan to execute the command regularly. However, it’s not optimal when you plan to execute the command just once or twice. In this case, you must execute a copy of the command processor and use it to execute the command as shown here:

RunAs /User:Administrator "cmd /c \"md \Temp\""

This command looks pretty convoluted, but it’s straightforward if you take it apart a little at a time. At the heart of everything is the md \Temp part of the command. In order to make this a separate command, you must enclose it in double quotes. Remember to escape the double quotes that appear within the command string by using a backslash (as in \").

To execute the command processor, you simply type cmd. However, you want the command processor to start, execute the command, and then terminate, so you also add the /c command line switch. The command processor string is also enclosed within double quotes to make it appear as a single command to RunAs.


Make sure you use forward slashes and backslashes as needed. Using the wrong slash will make the command fail.

The rest of the RunAs command works just as it normally does. In this case, the command includes only the username. RunAs always prompts for the password rather than accepting it on the command line, although you can add the /savecred switch so that you don’t have to retype it every time. Let me know if you find this workaround helpful at


Antivirus and Application Compilation

Sometimes applications don’t get along, especially when one application is designed to create new content at a low level and the other is designed to prevent low-level access to a system. Such is sometimes the case with compilers and antivirus applications. I haven’t been able to reproduce this behavior myself, but enough readers have told me about it that I feel I really do need to address it in a post. There are situations where you compile source code from one of my books and then your antivirus application complains that the code is infected with something (even though you know it isn’t). Sometimes the antivirus program will go so far as to simply delete the application you just compiled (or place it in a virus vault).

The solution to the problem can take a number of forms. If your antivirus application provides some means of creating exceptions for specific applications, the easiest way to overcome the problem is to create such an exception. You’ll need to read the documentation for your antivirus application to determine whether such a feature exists.

In some cases, the compiler or its associated Integrated Development Environment (IDE) simply doesn’t follow all the rules required to work safely in protected directories, such as the C:\Program Files directory on a Windows system. This particular issue has caused readers enough woe that my newer books suggest installing the compiler and its IDE in a directory the reader owns. For example, I now ask readers to install Code::Blocks in the C:\CodeBlocks directory on Windows systems because installing it elsewhere has caused some people problems.

Unfortunately, creating exceptions and installing the application in a friendly directory only go so far in fixing the problem. A few antivirus applications are so intent on protecting you from yourself that nothing you do will prevent the behavior. When this happens, you still have a few options. The easiest solution is to turn the antivirus program off just long enough to compile and test the application. Of course, this is also the most dangerous solution because it could leave your system open to attack.

A safer, albeit less palatable, solution is to try a different IDE and compiler. Antivirus programs seem a little picky about which applications they view as a threat. Code::Blocks may cause the antivirus program to react, but Eclipse or Visual Studio might not. Unfortunately, using this solution means that steps in the book may not work precisely as written.

For some developers, the only logical solution is to get a different antivirus application. I’ve personally had really good success with AVG Antivirus. However, you might find that this product doesn’t work for you for whatever reason. Perhaps it interacts badly with some other application on your system or simply doesn’t offer all the features you want.

My goal is to ensure you can use the examples in my books without jumping through a lot of hoops. When you encounter problems that are beyond my control, such as an ornery antivirus application, I’ll still try to offer some suggestions. In this case, the solution truly is out of my control but you can try the techniques offered in this post. Let me know if you find other solutions to the problem at


Thinking of All the Possibilities in Software Design

A number of books on my shelf, some of which I’ve written, broach the topic of divergent thinking. Unfortunately, many developers (and many more managers) don’t really grasp the ideas behind divergent thinking. Simply put, divergent thinking starts with a single premise and views as many permutations of that premise as possible. Most developers don’t take the time to use divergent thinking as part of the development process because they don’t see a use for it. In fact, most books fall short of even discussing the potential for divergent thinking, much less naming it as a specific element of application design. I’ve explored the topic before, and a reader recently reminded me of an article I wrote entitled Divergent Versus Convergent Thinking: Which Is Better for Software Design?

The process that most developers rely upon is convergent thinking, which is where you convert general goals and needs into specific solutions that appear within a single application. The difference between the two modes of thinking is that divergent thinking begins with a single specific premise, while convergent thinking begins with a number of general premises. More specifically, divergent thinking is the process you use to consider all of the possibilities before you use convergent thinking to create specific solutions to those possibilities.

There is an actual cycle between divergent and convergent thinking. You use divergent thinking when you start a project to ensure you discover as many ways as possible to address user requirements. Once you have a number of possibilities, you use convergent thinking to consider the solutions for those possibilities in the form of a design. The process will point out which possibilities work and which don’t. Maintaining a middle ground between the extremes of divergent and convergent thinking helps create unique solutions, yet keeps the project on track and maintains project team integrity. Managing the cycle is the job of the person in charge of the project, who is often the CIO, but could be someone in another management position. So, the manager has to be knowledgeable about software design in order for the process to work as anticipated.

One of the reasons that many applications fail today is the lack of divergent thinking as part of the software design process. We’re all too busy thinking about solutions rather than possibilities. Yet the creative mind and the creative process are based on divergent thinking. The reason we’re getting the same solutions rehashed in a million different ways (resulting in a lack of interest in new solutions) is the lack of divergent thinking in the development process.

In fact, I’d go so far as to say that most developers have never even heard of divergent thinking (and never heard convergent thinking called by that name). With this in mind, I’ve decided to provide some resources you can use to learn more about divergent thinking and possibly add it to your application design process.


These are just four of several hundred articles I located on divergent thinking online. I chose these particular four articles because they represent a range of ideas that most developers will find helpful, especially the idea of not applying stereotypical processes when trying to use divergent thinking. Stereotypes tend to block creative flow, which is partly what divergent thinking is all about.

The bottom line is that until divergent thinking is made part of the software design process, we’ll continue to suffer through rehashed versions of the current solutions. What is your view of divergent thinking? Do you see it as a useful tool or something best avoided? Let me know your thoughts at


Backslash (\) Versus Forward Slash (/)

A number of readers have noted recently that I’ve been using the forward slash (/) more and more often in my books to denote hard drive paths. Of course, when working on Windows systems (and DOS before that) it’s common practice to use the backslash (\) for paths. However, using a forward slash has certain benefits, not the least of which is portability. It turns out that the forward slash works well on other platforms and that it also works on Windows systems without problem (at least in most cases). Using a forward slash whenever possible means that your path will work equally well on Windows, Mac, Linux, and other platforms without modification.

In addition, when working with languages such as C++, JavaScript, Java, and even C#, you must exercise care when using the backslash because these languages use it as an escape character (the first character of a pair, such as \n, that denotes something special). For example, \n defines a newline character and \r a carriage return. In order to represent a single backslash, you must actually type two of them (\\). The potential for error is relatively high in this case. Forward slashes appear singly, so you can copy a path directly rather than having to manipulate it in various ways.
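
Here’s a minimal JavaScript sketch of the difference (the path is made up for the example, and the same caution applies to C++, Java, and C#):

// Each backslash in the string literal must be doubled because a single
// backslash starts an escape sequence (\n, \r, and so on).
var backslashPath = "C:\\Users\\John\\Data\\results.txt"; // backslash form
var forwardPath   = "C:/Users/John/Data/results.txt";     // forward-slash form

// Writing "C:\Users\John" would not produce the path you expect because
// the backslash would be treated as an escape (or silently dropped).
console.log(backslashPath); // C:\Users\John\Data\results.txt
console.log(forwardPath);   // C:/Users/John/Data/results.txt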

There are situations where you must use a backslash in the Windows (and also the DOS) environment. You can type CD / or CD \ and get to the root directory of a Windows system. However, if you try to type Dir /, you’ll get an error. In order to obtain a directory listing of the root directory, you must type Dir \ instead. In fact, many native utilities require that you use the backslash for input. On the other hand, many Windows APIs accept the forward slash without problem. When in doubt, try both slashes to see which one works. If you see a forward slash used in one of my books, the forward slash will definitely work in that instance. In general, I only use the forward slash when compatibility with other platforms is a consideration. Windows-specific platform information will still use the backslash.

As things stand today, the more you can do to make your applications run on multiple platforms, the better off you’ll be. Users don’t just rely on Windows any longer—they rely on a range of platforms that you might be called upon to support. Having something like an incorrectly formatted path in your code is easy to overlook, but devastating in its effects on the usability of your application.

The book that uses the largest number of forward slashes for paths right now is C++ All-In-One Desk Reference For Dummies. I want to be sure everyone is comfortable with my use of these special symbols and understands why I’ve used one or the other in a particular circumstance. Let me know your concerns about the use of backslashes and forward slashes in my books at


In Praise of Dual Monitors

A lot of people have claimed that the desktop system is dead—that people are only interested in using tablets and smartphones for computing. In fact, there is concern that the desktop might become a thing of the past. It’s true that my own efforts, such as HTML5 Programming with JavaScript for Dummies and CSS3 for Dummies, have started to focus on mobile development. However, I plan to continue using my desktop system when working because it’s a lot more practical and saves me considerable time. One such time saver is the use of dual monitors.

Yes, I know that some developers use more than just two monitors, but I find that two monitors work just fine. The first monitor is my work monitor—the monitor I use for actually typing code. The second monitor is my view monitor. When I run the application, the output appears on the second monitor so that I can see the result of changes I’ve made. Using two monitors lets me easily correlate the change in code to the changes in application design. Otherwise, I’d be wasting time switching between the application output and my IDE.

I also use two monitors when writing my books. The work monitor contains my word processor, while my view monitor contains the application I’m writing about. This is possibly one time when a third monitor could be helpful—one to hold the word processor, one to hold the IDE, and one to view the application output. However, in this case, a third monitor could actually slow things down because the time spent viewing an example’s output is small compared with the time spent creating a production application.

The concept of separating work from the source of information used to perform the work isn’t new. In fact, people have used the idea for a very long time. For example, when people used typewriters to output printed text, the typist employed a special stand to hold the manuscript being typed. The idea of having a view of your work and another surface to actually work on has been used throughout history because it’s a convenient way to perform tasks quickly. By employing dual monitors, I commonly get a 15 to 33 percent increase in output, simply because I can see my work and its associated view at the same time.

Working with dual monitors not only saves time, but can also reduce errors. By typing while I view an application’s output, I can more reliably reproduce the text of labels and other information the application provides. The same holds true when viewing information sources found in other locations. Seeing the information as I type it is far less likely to produce errors than trying to work from memory.

Don’t get the idea that I support using dual monitors in every situation. Many consumer-oriented computer uses are served just fine with a single monitor. For example, there isn’t a good reason to use two monitors when viewing e-mail in many cases—at least, not at the consumer level (you could make a case for using dual monitors when working with e-mails and a calendar to manage tasks, for example). Dual monitors commonly see use in the business environment because people aren’t necessarily creating their own information source—the information comes from a variety of sources that the user must view in order to use reliably.

Do you see yourself using dual monitors? If you use such a setup now, how do you employ it? Let me know at


Considering Perception in User Interface Design

I read a couple of articles recently that reminded me of a user interface design discussion I once had with a friend of mine. First, let’s discuss the articles. The first, New Record for Human Brain: Fastest Time to See an Image, says that humans can actually see something in as little as 13 ms. That short time frame provides the information the brain needs to target a point of visual focus. This article leads into the second, ‘Sixth Sense’ Can Be Explained by Science. In this case, the author explains how the sixth sense that many people regard as supernatural in origin can actually be explained through scientific means. The brain detects a change—probably as a result of that 13 ms view—and informs the rest of the mind about it. However, the change hasn’t been targeted for closer inspection, so the viewer can’t articulate the change. In short, you know the change is there, but you can’t say what has actually changed.

So, you might wonder what this has to do with site design. It turns out that you can use these facts to help focus user attention on specific locations on your site. Now, I’m not talking here about the use of subliminal messaging, which is clearly illegal in many locations. Rather, it’s possible to do what a friend suggested when designing a site: change a small, but noticeable, element each time the page is reloaded. Of course, you need not reload the entire page. Technologies such as Asynchronous JavaScript And XML (AJAX) make it possible to reload just a single element as needed. (Of course, changing a single element in a desktop application is incredibly easy because nothing special is needed to do it.) The point of making this change is to cause the viewer to look harder at the element you most want them to focus on. It’s just another method for ensuring that the right area of a page or other user interface element gets viewed.
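
As a rough sketch of the idea, the following JavaScript uses AJAX to refresh just one element; the element ID, fragment URL, and timing are hypothetical:

// Reload a single, hypothetical "focus-panel" element instead of the
// whole page so that the small change draws the viewer's eye to it.
function refreshFocusElement() {
  var request = new XMLHttpRequest();
  request.open("GET", "/fragments/focus-panel.html"); // hypothetical URL
  request.onload = function () {
    if (request.status === 200) {
      // Replace the markup of this one element only.
      document.getElementById("focus-panel").innerHTML = request.responseText;
    }
  };
  request.send();
}

// Refresh the element periodically so there is always a small, noticeable change.
setInterval(refreshFocusElement, 30000);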

However, the articles also make for interesting thoughts about the whole issue of user interface design. Presentation is an important part of design. Your application must use good design principles to attract attention. However, these articles also present the idea of time as a factor in designing the user interface. For example, the order in which application elements load is important because the brain can perceive the difference. You might not consciously register that element A loaded some number of milliseconds sooner than element B, but subconsciously, element A attracts more attention because it registered first and your brain targeted it first.

As science continues to probe the depths of perception, it helps developers come up with more effective ways to present information, ways that enhance the user experience and the benefit of any given application to the user. However, in order to make any user interface change effective, you must apply it consistently across the entire application and ensure that the technique isn’t used to an extreme. Choosing just one element per display (whether a page, window, or dialog box) to change is important. Otherwise, the effectiveness of the technique is diluted and the user might not notice it at all.

What is your take on the use of perception as a means of controlling the user interface? Do you feel that subtle techniques like the ones described in this post are helpful? Let me know your thoughts at


Choosing Variable Names

It often surprises me that developers seem to choose completely useless variable names like MyVariable when creating an application. Although MyVariable could be an interesting variable name for an example in a book, it never has a place in any sort of production code. Even in my books, I try to create examples with meaningful variable names, especially when getting past the initial “Hello World” example. Variable names are important because they tell others:


  • What sort of information the variable stores
  • When the variable is commonly used
  • Where the variable is used
  • How to use the variable correctly
  • Why the variable is important

In some cases, the variable name could even indicate who created the variable, although this sort of information is extremely rare. If you never thought a variable name should contain all that information, then perhaps you haven’t been choosing the best variable names for your application.

Even with these requirements in place, choosing a variable name can be uncommonly hard if you want to maximize the name’s value to both yourself and other developers. Some organizations make the selection process easier by following certain conventions. For example, a form of Hungarian Notation, where certain type prefixes, suffixes, and other naming conventions are used, is a common way to reduce the complexity of creating a variable name. In fact, Hungarian Notation (or some form of it) is often used to name objects, methods, functions, classes, and other programming elements as well. For example, NamArCustomers could be an array of customer names (Nam for names, Ar for array). The use of these two prefixes would make it instantly apparent when the variable is being used incorrectly, such as assigning a list of numbers to the array. The point is that an organizational variable naming policy can reduce complexity, make the names easy for anyone to read, and reduce the time the developer spends choosing a name.
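
Here’s a purely illustrative JavaScript sketch of that kind of prefix-based naming (the Nam and Ar prefixes come from the example above, not from any official standard):

// NamArCustomers: Nam says the content is names, Ar says it's an array.
var NamArCustomers = ["Ann Jones", "Bob Smith", "Carla Diaz"];

// The prefixes make correct use obvious at a glance...
NamArCustomers.forEach(function (name) {
  console.log("Customer: " + name);
});

// ...and make misuse equally obvious, such as assigning something that
// clearly isn't an array of names:
// NamArCustomers = 42;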


Before I get a ton of e-mail on the topic, yes, I know that many people view Hungarian notation as the worst possible way to name variables. They point out that it only really works with statically typed languages and that it doesn’t work at all for languages such as JavaScript. All that I’m really pointing out is that some sort of naming convention is helpful—whether you use something specific like Hungarian Notation is up to you.

Any variable name you create should convey the meaning of that variable to anyone who reads the code. If you aren’t using some sort of pattern or policy to name the variables, then develop a convention that helps you choose names in a consistent manner and document it. When you create a variable name, you need to consider these kinds of questions:


  1. What information does the variable contain (such as a list of names)?
  2. How is the variable used (such as locally or globally, or to contain coordinates, or a special kind of object)?
  3. When appropriate, what kind of information does the variable contain (such as a string or the coordinate of a pixel on screen)?
  4. Is the variable used for a special task (such as data conversion)?
  5. What case should prefixes, suffixes, and other naming elements appear in when a language is case sensitive?

The point is that you need to choose variable names with care so that you know what they mean later. Carefully chosen variable names make it possible for you to read your code with greater ease and locate bugs a lot faster. They also make it easier for others to understand your code and for you to remember what the code does months after you’ve written it. However, most important of all, useful variable names help you see immediately that a variable is being used the wrong way, such as assigning the length of a name string to a coordinate position on screen (even though both variables are integer values). Let me know your thoughts about variable naming at


Verifying Your Hand Typed Code

I maintain statistics for each of my books based on reviews and reader e-mails (so those e-mails you send really are important). These statistics help me write better books in the future and also help me determine the sorts of topics I need to address in my blog. It turns out that one of the most commonly asked questions is why a reader’s hand typed code doesn’t work. Some readers simply ask the question without giving me any details at all, which makes the question impossible to answer. In some cases, the reader sends the hand typed code, expecting that I’ll take time to troubleshoot it. However, this isn’t a realistic request because it defeats the very purpose behind typing the code by hand. If I take the time to diagnose the problems in the code you typed, I’ll be the one to learn an interesting lesson, not you. If you learn better by doing (that is, by typing the code by hand and then running it), then you need to be the one to troubleshoot any problems with the resulting code.

My advice to readers is to use the downloadable source code when working through the book text. If you want to type the code by hand after that as part of your learning experience, at least you’ll know that the example works on your system and you’ll also understand how the example works well enough to troubleshoot any errors in your own code. However, you need to be the one to diagnose the errors. If nothing else, perform a character-by-character comparison of your code to the example code that you downloaded from the publisher’s site. Often, a reader will write back after I suggest this approach and mention that they had no idea that a particular special symbol or method of formatting content was important. These are the sorts of lessons that this kind of exercise provides.

That said, the downloadable source code doesn’t always work on a particular user’s system. When the error is in the code or in something I can determine about the coding environment, you can be certain that I’ll post information about it on my blog. This should be the first place you look for such information. Simply click on the book title in question under the Technical category. You’ll find a list of posts for that book. Always feel free to contact me about a book-specific question. I want to be sure you have a good learning experience.

There are some situations where a reader tries to run application code that won’t work on a particular system. My books provide information on the kind of system you should use, but I can’t always determine exceptions to the rule in advance. When I post system requirements, your system must meet those requirements because the examples are likely to fail on lesser systems. If you encounter a situation where the downloadable code won’t run on your system, but none of the fixes I post for that code work and your system does meet the requirements, then please feel free to contact me. There are times when an example simply won’t run because you can’t use the required software or the system won’t support it for whatever reason.

The point of this post is that you need to work with the downloadable source code whenever possible. The downloadable source code has been tested by a number of people, usually on a range of systems, to ensure it will work on your system too. I understand that typing the code by hand is an important and viable way to learn, but you should reserve this method as the second learning tier—used after you have tried the downloadable source code. Please let me know if you have any questions or concerns at


Application Development and BYOD

I read an article a while ago in InfoWorld entitled, “The unintended consequences of forced BYOD.” The Bring Your Own Device (BYOD) phenomenon will only gain in strength because more people are using their mobile devices for everything they do and corporations are continually looking for ways to improve the bottom line. The push from both sides ensures that BYOD will become a reality. The article made me think quite hard about how developers who work in the BYOD environment will face new challenges that they haven’t even had to consider in the past.

Of course, developers have always had to consider security. Trying to maintain a secure environment has always been a problem. The only truly secure application is one that has no connectivity to anything, including the user. Obviously, none of the applications out there are truly secure—the developer has always had to settle for something less than the ideal situation. At least devices in the past were firmly under IT control, but not with BYOD. Now the developer has to face the fact that the application will run on just about any device, anywhere, at any time, and in any environment. A user could be working on company secrets with a competitor looking right at the screen. Worse, how will developers meet legal requirements such as the Health Insurance Portability and Accountability Act (HIPAA)? Is the user now considered an independent vendor or is the company still on the hook for maintaining a secure environment? The legal system has yet to address these sorts of questions, but it will have to do so soon because you can expect that your doctor (and other health professionals) will use a mobile device to enter information as well.

Developers will also have to get used to working with new tools and techniques. Desktop development has meant working with tools designed for a specific platform. A developer would use something like C# to create a desktop application meant for use on any platform that supports the .NET Framework, which mainly meant working with Windows unless the company also decided to support .NET Framework alternatives such as Mono (an open source version of the .NET Framework). Modern applications will very likely need to work on any platform, which means writing server-based applications, browser-based applications, or a combination of the two in order to ensure the maximum number of people possible can interact with the application. The developer will have to get used to the idea that there is no way to test absolutely every platform that will use the application because the next platform hasn’t been delivered yet.

Speed also becomes a problem for developers. When working with a PC or laptop, a developer can rely on the client having a certain level of functionality. Now the application needs to work equally well with a smartphone that may not have enough processing power to do much. In order to ensure the application works acceptably, the developer needs to consider using browser-based programming techniques that will work equally well on every device, no matter what level of power the device possesses.

Some in industry have begun advocating that BYOD should also include Bring Your Own Software (BYOS). This would mean creating an environment where developers make data available through something like a Web service that could be accessed by any sort of device using any capable piece of software. However, the details of such a setup have yet to be worked out, much less implemented. The interface would have to be nearly automatic with regard to connectivity. A browser-based application could do this, but only if the organization could at least ensure that everyone used a browser that met minimum standards.
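
As a rough sketch of what such a setup might look like, the following browser-based JavaScript pulls data from a hypothetical /api/orders service that returns JSON; any device with a capable browser could run the same code:

// Request the data from a hypothetical company web service.
var request = new XMLHttpRequest();
request.open("GET", "/api/orders");                  // hypothetical endpoint
request.setRequestHeader("Accept", "application/json");
request.onload = function () {
  if (request.status === 200) {
    var orders = JSON.parse(request.responseText);
    orders.forEach(function (order) {
      // The order fields (id, total) are assumptions for the example.
      console.log(order.id + ": " + order.total);
    });
  }
};
request.send();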

My current books, HTML5 Programming with JavaScript for Dummies and CSS3 for Dummies, both address the needs of developers who are looking to move from the desktop into the browser-based world of applications that work anywhere, any time. Let me know your thoughts about BYOD and BYOS at


The Place of Automation in the User Interface

There was a time when a developer could rely on users to possess a certain level of technical acumen. That’s no longer the case. Most of the people using a device containing a CPU today (I’m including PCs, laptops, tablets, and smartphones here) don’t know anything about how it works and they don’t care to know either. All these people know is that they really must have access to their app. (Some don’t even realize the role data plays in making the app work.) The app can perform myriad tasks—everything from keeping track of the calories they’ve eaten to maintaining the scheduled events for the day. Devices that contain CPUs have become irreplaceable partners for many people, and these devices must work without much concern on the part of the user. In short, the device must provide a superior level of automation or the user simply won’t know how to interact with it.

I was recently watching television and saw a commercial for a Weight Watchers app for mobile devices. In the commercial, a woman marvels at the new programs that Weight Watchers provides, which include this app for her mobile devices. To track her calories, she simply points her phone at the box containing whatever food she plans to eat and the app tracks the calories for her. The interesting part is that there is no data entry required. As technology continues to progress, you can expect to see more apps of this type. People really don’t want to know anything about your app, how it works, or the cool code you put into it. They want to use the app without thinking about it at all.

Of all the parts of a device that must be automated, the user interface is most important and also the most difficult to create. Users really don’t want to have to think about the interface. Their focus is on the task that the app performs for them. In fact, some e-mails I’ve received recently about my Windows 8 book have driven home the idea that the app must simply work without any thought at all. It’s because of these e-mails (and those for other books I’ve written) that I wrote the article entitled, “Designing Apps with Automation in Mind.” This article points out the essential behaviors that applications must exhibit today to be successful.

On the other side of the fence, I continue to encounter an “old world” philosophy on the part of developers that applications should pack as much as possible into a small space—users will eventually figure out the complexity of the interface provided. Unfortunately, as users become more vocal in requiring IT to meet their demands, these approaches to the user interface will lose out. The next app is a click away, and if it does a better job of automating the interface, your app will be the one that loses. Even if there isn’t another app to use, the user will simply ignore the one you’ve provided, relying on an older version or not interacting with the app at all if no older version exists. Many organizations have found out the hard way that attempting to force users to interact with an app is bound to fail.

The fact is, to be successful today, a developer must be part psychologist, part graphics artist, and part programming genius. Creating an acceptable interface is no longer good enough, especially when people want to access the same app from more than one device. The interface must be simple, must work well, and must automate as much of the input as possible for the user or it simply won’t succeed. Let me know your thoughts about user interface automation at