In Praise of Dual Monitors

This is an update of a post that originally appeared on February 5, 2014.

In rereading my old blog posts, I find that much of what I said back then applies equally well today. I’ve received email from budding developers who use their smartphones to code. Just how they perform this trick is beyond me, because I squint at the screen while performing even the simplest of tasks and often find that my fingers are two sizes too big. I have tried coding on a tablet, a laptop, and (oddly enough) my television. While they do work, they’re not particularly efficient, so I’ll stick with my dual-monitor desktop system for coding.

Yes, I know that some developers use more than two monitors, but I find that two work just fine. The first monitor is my work monitor—the monitor I use for actually typing code. The second monitor is my view monitor. When I run the application, the output appears on the second monitor so that I can see the result of the changes I’ve made. Using two monitors lets me easily correlate a change in the code with the resulting change in the application. Otherwise, I’d waste time switching between the application output and my IDE.

I also use two monitors when writing my books. The work monitor contains my word processor, while my view monitor contains the application I’m writing about. This is possibly one time when a third monitor could be helpful—one to hold the word processor, one to hold the IDE, and one to view the application output. However, in this case, a third monitor could actually slow things down, because the time spent viewing the output of a book example is small compared with the time spent creating a production application.

The concept of separating work from the source of information used to perform the work isn’t new. In fact, people have used the idea for thousands of years. For example, when typists used typewriters to produce printed text, they relied on a special stand to hold the manuscript being copied. Having a view of your source material and a separate surface to work on is a pattern that appears throughout history because it’s a convenient way to perform tasks quickly. By employing dual monitors, I commonly get a 15 to 33 percent increase in output, simply because I can see my work and its associated view at the same time.

Working with dual monitors not only saves time, but can also reduce errors. By typing as I view the output of an application, I can more reliably reproduce the text of labels and other information the application provides. The same holds true when viewing information sources found in other locations. Seeing information as I type it is far less likely to produce errors than working from memory.

Don’t get the idea that I support using dual monitors in every situation. Many consumer-oriented computer uses are served just fine by a single monitor. For example, in many cases there isn’t a good reason to use two monitors when viewing e-mail—at least, not at the consumer level (you could make a case for using dual monitors when working with e-mail and a calendar to manage tasks, for example). Dual monitors commonly see use in the business environment because people aren’t necessarily creating their own information source—the information comes from a variety of sources that the user must view in order to work reliably.

Do you see yourself using dual monitors? If you use such a setup now, how do you employ it? Let me know at [email protected].

Pondering the Death of the Desktop Computer

Being an author of computer books makes me naturally curious about the health of certain technologies. After all, I need to know what to write about next. Lately, a great deal of ruckus has been generated about the death of the desktop computer. Many people claim that the desktop computer is on its last legs, with one foot in the grave and the other on a banana peel. The expression is clichéd, and so are the arguments against the desktop computer you’re probably using at work most of the time.

At issue is whether everyone can use a small device to perform all of their work. From some of what I read, I get a picture of a teenager texting a tome the size of War and Peace on a smartphone. (You can even find articles that tell you how to replace your laptop with a smartphone.) I admit that I get a good laugh from the picture every time I complete the visualization. Imagine, for a moment, someone’s thumbs flying at a speed that defies imagination for months on end to complete the book. The whole idea is ludicrous, but I’m sure someone will try it and succeed as a proof of concept.

You can create a Dick Tracy-style computer in a watch; the technology has no size restriction. In fact, I’m not entirely sure that you’d even require the space consumed by the entire watch anymore. The problem isn’t one of making the technology small enough, but one of allowing a human to interact with the technology safely. The reason that the teenager texting War and Peace brings tearful laughter to my eyes is the insanity of even attempting it. At issue are repetitive stress injuries and special needs.

Desktop computers provide an instrument that is large enough for most people to interact with successfully without incurring almost immediate trauma. The fact that trauma occurs even with this form factor should tell you something. To work successfully for long periods of time, the environment must suit the human form factor—something that smaller devices simply can’t provide. As keyboards get smaller and people start typing in crouched or otherwise uncomfortable positions, the opportunity for serious injury increases. In short, the reason the desktop computer won’t go away completely is that people need something large enough to perform large quantities of useful work successfully.

The issue of special needs would seem to seal the deal for desktop computers. People constantly complain about the size of smartphone screens—how the text is nearly impossible to see. It’s hard to believe that anyone would seriously consider trying to write large documents, work on graphics, or create presentations on such a small display. In fact, as the population ages, I foresee people having trouble performing even minor tasks on a small screen. They simply won’t be able to see the display well enough to use it.

It was with great interest that I recently read a post entitled “Post-PC Bunkum” by John Dvorak. In it, John mentions something that should make everyone aware that the desktop computer isn’t going away—it has become a commodity. It’s something that most people are familiar with and have in their home, office, or both. In terms of ubiquity in the home and office, the desktop computer has almost become a refrigerator. However, the reason most people are uncomfortable with the desktop computer is that it truly is a complex device, capable of performing some amazing feats in the right hands. People want their tasks and their environment to be mindlessly simple, and the desktop computer doesn’t do that for them. Even so, I doubt very much that we’ll see the desktop go away anytime soon.

What is your take on the death of the desktop computer? What sorts of devices do you work with to perform most of your tasks? What sorts of tasks do you perform most often? Let me know at [email protected].