Thinking of All the Possibilities in Software Design

A number of books on my shelf, some of which I’ve written, broach the topic of divergent thinking. Unfortunately, many developers (and many more managers) don’t really grasp the ideas behind divergent thinking. Simply put, divergent thinking starts with a single premise and explores as many permutations of that premise as possible. Most developers don’t take the time to use divergent thinking as part of the development process because they don’t see a use for it. In fact, most books fall short of even discussing the potential for divergent thinking, much less naming it as a specific element of application design. I’ve explored the topic before, and a reader recently reminded me of an article I wrote entitled, “Divergent Versus Convergent Thinking: Which Is Better for Software Design?”

The process that most developers rely upon is convergent thinking, which is where you convert general goals and needs into specific solutions that appear within a single application. The difference between the two modes of thinking is that divergent thinking begins with a single specific premise, while convergent thinking begins with a number of general premises. More specifically, divergent thinking is the process you use to consider all of the possibilities before you use convergent thinking to create specific solutions to those possibilities.

There is an actual cycle between divergent and convergent thinking. You use divergent thinking when you start a project to ensure you discover as many ways as possible to address user requirements. Once you have a number of possibilities, you use convergent thinking to consider the solutions for those possibilities in the form of a design. The process will point out those possibilities that will work and those that won’t. Maintaining a middle ground between the extremes of divergent and convergent thinking helps create unique solutions, yet keeps the project on track and maintains project team integrity. Managing the cycle is the job of the person in charge of the project, who is often the CIO, but could hold some other management position. So, the manager has to be knowledgeable about software design in order for the process to work as anticipated.

One of the reasons that many applications fail today is the lack of divergent thinking as part of the software design process. We’re all too busy thinking about solutions, rather than possibilities. Yet, the creative mind and the creative process are based on divergent thinking. The reason we’re getting the same solutions rehashed in a million different ways (resulting in a lack of interest in new solutions) is the lack of divergent thinking in the development process.

In fact, I’d go so far as to say that most developers have never even heard of divergent thinking (and never heard convergent thinking called by that name). With this in mind, I’ve decided to provide some resources you can use to learn more about divergent thinking and possibly add it to your application design process.


These are just four of several hundred articles I located on divergent thinking online. I chose these particular four articles because they represent a range of ideas that most developers will find helpful, especially the idea of not applying stereotypical processes when trying to use divergent thinking. Stereotypes tend to block creative flow, which is partly what divergent thinking is all about.

The bottom line is that until divergent thinking is made part of the software design process, we’ll continue to suffer through rehashed versions of the current solutions. What is your view of divergent thinking? Do you see it as a useful tool or something best avoided? Let me know your thoughts at


Backslash (\) Versus Forward Slash (/)

A number of readers have noted recently that I’ve been using the forward slash (/) more and more often in my books to denote hard drive paths. Of course, when working on Windows systems (and DOS before that), it’s common practice to use the backslash (\) for paths. However, using a forward slash has certain benefits, not the least of which is portability. The forward slash works well on other platforms, and it also works on Windows systems without problem (at least in most cases). Using a forward slash whenever possible means that your path will work equally well on Windows, Mac, Linux, and other platforms without modification.

In addition, when working with languages such as C++, JavaScript, Java, and even C#, you must exercise care when using the backslash because these languages use it as an escape character (the start of a two-character sequence that denotes something special). For example, \n defines a newline character and \r is a carriage return. In order to create a literal backslash, you must actually use two of them (\\). The potential for error is relatively high in this case. Forward slashes appear singly, so you can copy a path directly, rather than manipulating it in various ways.
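
To make the difference concrete, here’s a minimal C# sketch (the path itself is made up for illustration):

    // A quick sketch of the escaping problem. The backslash version needs
    // every separator doubled; the forward slash version can be copied as-is.
    using System;

    class SlashDemo
    {
        static void Main()
        {
            string backslashPath = "C:\\Users\\John\\Data\\config.txt"; // \\ for each \
            string forwardPath = "C:/Users/John/Data/config.txt";       // copied directly

            // Forgetting to double a backslash creates an escape sequence
            // instead: "C:\new" contains a newline, not a folder named new.
            Console.WriteLine(backslashPath);
            Console.WriteLine(forwardPath);
        }
    }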

There are situations where you must use a backslash in the Windows (and also the DOS) environment. You can type CD / or CD \ and get to the root directory of a Windows system. However, if you try to type Dir /, you’ll get an error. In order to obtain a directory listing of the root directory, you must type Dir \ instead. In fact, many native utilities require that you use the backslash for input. On the other hand, many Windows APIs accept the forward slash without problem. When in doubt, try both slashes to see which works. If you see a forward slash used in one of my books, the forward slash will definitely work in that instance. In general, I only use the forward slash when compatibility with other platforms is a consideration. Windows-specific platform information will still use the backslash.

As things stand today, the more you can do to make your applications run on multiple platforms, the better off you’ll be. Users don’t just rely on Windows any longer—they rely on a range of platforms that you might be called upon to support. Having something like an incorrectly formatted path in your code is easy to overlook, but devastating in its effects on the usability of your application.

Let me know your concerns about the use of backslashes and forward slashes in my books at The book that uses the largest number of forward slashes for paths right now is C++ All-In-One Desk Reference For Dummies. I want to be sure everyone is comfortable with my use of these special symbols and understands why I’ve used one or the other in a particular circumstance.


In Praise of Dual Monitors

A lot of people have claimed that the desktop system is dead—that people are only interested in using tablets and smartphones for computing. In fact, there is concern that the desktop might become a thing of the past. It’s true that my own efforts, such as HTML5 Programming with JavaScript for Dummies and CSS3 for Dummies, have started to focus on mobile development. However, I plan to continue using my desktop system when working because it’s a lot more practical and saves me considerable time. One such time saver is the use of dual monitors.

Yes, I know that some developers use more than just two monitors, but I find that two monitors work just fine. The first monitor is my work monitor—the monitor I use for actually typing code. The second monitor is my view monitor. When I run the application, the output appears on the second monitor so that I can see the result of changes I’ve made. Using two monitors lets me easily correlate the change in code to the changes in application design. Otherwise, I’d be wasting time switching between the application output and my IDE.

I also use two monitors when writing my books. The work monitor contains my word processor, while my view monitor contains the application I’m writing about. This is possibly one time when a third monitor could be helpful—one to hold the word processor, one to hold the IDE, and one to view the application output. However, in this case, a third monitor could actually slow things down because the time spent viewing the output of a book example is small compared to the time spent viewing output when creating a production application.

The concept of separating work from the source of information used to perform the work isn’t new. In fact, people have used the idea for thousands of years. For example, when people used typewriters to output printed text, the typist employed a special stand to hold the manuscript being typed. The idea of having a view of your source material and a separate surface to actually work on appears throughout history because it’s a convenient way to perform tasks quickly. By employing dual monitors, I commonly get between a 15 and a 33 percent increase in output, simply because I can see my work and its associated view at the same time.

Working with dual monitors not only saves time, but can also reduce errors. By typing as I view the output of applications, I can more reliably reproduce the text of labels and other information the application provides. The same holds true when viewing information sources found in other locations. Seeing the information as I type it makes errors far less likely.

Don’t get the idea that I support using dual monitors in every situation. Many consumer-oriented computer uses are served just fine with a single monitor. For example, there isn’t a good reason to use two monitors when viewing e-mail in many cases—at least, not at the consumer level (you could make a case for using dual monitors when working with e-mails and a calendar to manage tasks, for example). Dual monitors commonly see use in the business environment because people aren’t necessarily creating their own information source—the information comes from a variety of sources that the user must view in order to use reliably.

Do you see yourself using dual monitors? If you use such a setup now, how do you employ it? Let me know at


Choosing Variable Names

It often surprises me that developers seem to choose completely useless variable names like MyVariable when creating an application. Although MyVariable could be an interesting variable name for an example in a book, it never has a place in any sort of production code. Even then, I try to create book examples with meaningful variable names, especially when getting past the initial “Hello World” example. Variable names are important because they tell others:


  • What sort of information the variable stores
  • When the variable is commonly used
  • Where the variable is used
  • How to use the variable correctly
  • Why the variable is important

In some cases, the variable name could even indicate who created the variable, although this sort of information is extremely rare. If you never thought a variable name should contain all that information, then perhaps you haven’t been choosing the best variable names for your application.

Even with these requirements in mind, choosing a variable name can be uncommonly hard if you want to maximize the name’s value to both yourself and other developers. Some organizations make the selection process easier by following certain conventions. For example, a form of Hungarian Notation, where certain type prefixes, suffixes, and other naming conventions are used, is a common way to reduce the complexity of creating a variable name. In fact, Hungarian Notation (or some form of it) is often used to name objects, methods, functions, classes, and other programming elements as well. For example, NamArCustomers could be an array of customer names (Nam for names, Ar for array). The use of these two prefixes would make it instantly apparent when the variable is being used incorrectly, such as assigning a list of numbers to the array. The point is that an organizational variable naming policy can reduce complexity, make the names easy to read for anyone, and reduce the time the developer spends choosing a name.
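
As a minimal sketch (the prefixes beyond the NamAr example above are hypothetical, not a standard), here’s how such a convention might look in C#:

    // Hypothetical naming convention: Nam = name data, Ar = array,
    // Len = a length, PosX = a horizontal screen coordinate.
    using System;

    class NamingDemo
    {
        static void Main()
        {
            // NamArCustomers: an array (Ar) of customer names (Nam).
            string[] NamArCustomers = { "Ann", "Bob", "Carla" };

            // LenNamCustomer: the length (Len) of a name (Nam).
            int LenNamCustomer = NamArCustomers[0].Length;

            // PosXCursor: a horizontal screen position. Both variables are
            // integers, so the compiler would accept PosXCursor = LenNamCustomer,
            // but the prefixes make the mistake obvious to a human reader.
            int PosXCursor = 100;

            Console.WriteLine("{0} customers, first name length {1}, cursor at {2}",
                NamArCustomers.Length, LenNamCustomer, PosXCursor);
        }
    }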


Before I get a ton of e-mail on the topic, yes, I know that many people view Hungarian Notation as the worst possible way to name variables. They point out that it only really works with statically typed languages and that it doesn’t work at all for languages such as JavaScript. All I’m really pointing out is that some sort of naming convention is helpful—whether you use something specific like Hungarian Notation is up to you.

Any variable name you create should convey the meaning of that variable to anyone who reads your code. If you aren’t using some sort of pattern or policy to name the variables, then define a convention that helps you choose names in a consistent manner and document it. When you create a variable name, you need to consider these kinds of questions:


  1. What information does the variable contain (such as a list of names)?
  2. How is the variable used (such as locally or globally, or to contain coordinates, or a special kind of object)?
  3. When appropriate, what kind of information does the variable contain (such as a string or the coordinate of a pixel on screen)?
  4. Is the variable used for a special task (such as data conversion)?
  5. What case should prefixes, suffixes, and other naming elements appear in when a language is case sensitive?

The point is that you need to choose variable names with care so that you know what they mean later. Carefully chosen variable names make it possible for you to read your code with greater ease and locate bugs a lot faster. They also make it easier for others to understand your code and for you to remember what the code does months after you’ve written it. Most important of all, useful variable names help you see immediately that a variable is being used the wrong way, such as assigning the length of a name string to a coordinate position on screen (even though both variables are integer values). Let me know your thoughts about variable naming at


Verifying Your Hand-Typed Code

I maintain statistics for each of my books that are based on reviews and reader e-mails (so those e-mails you send really are important). These statistics help me write better books in the future and also help me determine the sorts of topics I need to address in my blog. It turns out that one of the most commonly asked questions is why a reader’s hand-typed code doesn’t work. Some readers simply ask the question without giving me any details at all, which makes the question impossible to answer. In some cases, the reader sends the hand-typed code, expecting that I’ll take the time to troubleshoot it. However, this isn’t a realistic request because it defeats the very purpose behind typing the code by hand. If I take the time to diagnose the problems in the code you typed, I’ll be the one to learn an interesting lesson, not you. If you learn better by doing (that is, by typing the code by hand and then running it), then you need to be the one to troubleshoot any problems with the resulting code.

My advice to readers is to use the downloadable source code when working through the book text. If you want to type the code by hand after that as part of your learning experience, at least you’ll know that the example works on your system and you’ll also understand how the example works well enough to troubleshoot any errors in your own code. However, you need to be the one to diagnose the errors. If nothing else, perform a character-by-character comparison of your code against the example code that you downloaded from the publisher’s site. Often, a reader will write back after I suggest this approach and mention that they had no idea that a particular special symbol or method of formatting content was important. These are the sorts of lessons that this kind of exercise provides.
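
If you’d rather not perform that comparison by eye, a few lines of code can do it for you. Here’s a minimal sketch (the file names are hypothetical) that reports the first character where your hand-typed file differs from the downloaded one:

    // Compares two source files character by character and reports the
    // first difference it finds.
    using System;
    using System.IO;

    class CodeCompare
    {
        static void Main()
        {
            string mine = File.ReadAllText("MyExample.cs");         // your hand-typed file
            string book = File.ReadAllText("DownloadedExample.cs"); // the downloaded original

            int length = Math.Min(mine.Length, book.Length);
            for (int i = 0; i < length; i++)
            {
                if (mine[i] != book[i])
                {
                    Console.WriteLine("First difference at character {0}: '{1}' vs. '{2}'",
                        i, mine[i], book[i]);
                    return;
                }
            }

            Console.WriteLine(mine.Length == book.Length
                ? "The files match exactly."
                : "One file is longer than the other.");
        }
    }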

It has happened that the downloadable source code doesn’t work on a particular reader’s system. When the error is in the code or something I can determine about the coding environment, you can be certain that I’ll post information about it on my blog. This should be the first place you look for such information. Simply click on the book title in question under the Technical category and you’ll find a list of posts for that book. Always feel free to contact me about a book-specific question. I want to be sure you have a good learning experience.

There are some situations where a reader tries to run application code that won’t work on a particular system. My books provide information on the kind of system you should use, but I can’t always determine exceptions to the rule in advance. When I post system requirements, your system must meet those requirements because the examples are guaranteed to fail on lesser systems. If you encounter a situation where the downloadable code won’t run on your system, but none of the fixes I post for that code work and your system does meet the requirements, then please feel free to contact me. There are times where an example simply won’t run because you can’t use the required software or the system won’t support it for whatever reason.

The point of this post is that you need to work with the downloadable source code whenever possible. The downloadable source code has been tested by a number of people, usually on a range of systems, to ensure it will work on your system too. I understand that typing the code by hand is an important and viable way to learn, but you should reserve this method as the second learning tier—used after you have tried the downloadable source code. Please let me know if you have any questions or concerns at


Book Reviews – Doing Your Part

Readers contact me quite a lot about my books. On an average day, I receive around 65 reader e-mails about a wide range of book-related topics. Many of them are complimentary about my books and it’s hard to put down in words just how much I appreciate the positive feedback. Often, I’m humbled to think that people would take time to write.

There is another part to reader participation in books, however, and it doesn’t have anything to do with me—it has to do with other readers. When you read one of my books and find the information useful, it’s helpful to write a review about it so that others know what to expect. I want every reader who purchases one of my books to be happy with that purchase and to get the most possible out of the book. The wording that the publisher’s marketing staff and I use to describe a book represents our viewpoint of that book, not necessarily the viewpoint of the reader. The only way prospective buyers will know how a book presents information from the reader’s perspective is for existing readers to write reviews.

A good review will tell others what you liked about the book—how it met your needs, what it provides in the way of usable content, and whether you liked intangibles, such as the author’s writing style. The review should also present any negatives. For example, the book may not have provided detailed enough procedures for you to actually accomplish a task. (Obviously, I want to know about the flaws, too, so that I can correct them in the next edition of the book and also discuss them on my blog.) Many reviewing venues, such as the one found on Amazon, also ask you to provide a rating for the book. You should rate the book based on your experience with other books and on how this particular book met your needs in learning a new topic. The kind of review to avoid writing is a rant or one that isn’t actually based on reading the whole book. As always, I’m here (at to answer any questions you have, and many of your questions have appeared as blog posts when the situation warrants.

So, just where do you make these reviews? The publishers sometimes provide a venue for expressing your opinion and you can certainly go to the publisher site to create such a review. I personally prefer to upload my reviews to Amazon because it’s a location that many people frequent to find out more about books. With that in mind, here are the URLs for many of my books. You can go to the site, click Write a Customer Review (near the bottom of the page), and then provide your viewpoint about the book.


Thank you in advance for taking the time and effort required to write a review. I know it’s time consuming, but it’s an important task that only you can perform.


DateTimePicker Control Data Type Mismatch Problem

A reader recently made me aware of an issue with the Get User Favorites example in Chapter 2 that could cause a lot of problems depending on which language you use when working with Visual Studio 2012. This issue does affect some of the examples in Microsoft ADO.NET Entity Framework Step by Step, so it’s really important you know about it.

Look at the model on page 30 of the book (the “Creating the model” section of Chapter 2). The Birthday field is defined as type DateTime. When you finish creating the model, you right click the resulting entity and choose Generate Database from Model to create a database from it. The “Working with the mapping details” section of Chapter 1 (page 19) tells you how to use the Generate Database Wizard to create the database. The Birthday field in the database will appear as a datetime type when you complete the wizard, which is precisely what you should get.

At this point, you begin creating the example form to test the database (the “Creating the test application” section on page 36). The example uses a DateTimePicker control for the Birthday field by default. You don’t add it; the IDE adds it for you because it sees that Birthday is a datetime data type. The example will compile just fine and you’ll be able to start it up as normal.

However, a problem occurs with certain languages when you reach the “Running the basic query” section that starts on page 39. The DateTimePicker control appears to send a datetime2 data type back to SQL Server when you change the birthday information. You’ll receive an exception that says the data types don’t match, which they don’t. There are several fixes for this problem. For example, you could coerce the output of the DateTimePicker control to a datetime data type instead of a datetime2 data type. However, the easiest fix is to simply change the data type of the Birthday field in the database from datetime to datetime2. After you make this change, the example will work as it should. You only need to make this change when you see the data type mismatch exception. I haven’t been able to verify as of yet which languages are affected and would love to hear from my international readers about the issue.
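
If you’d rather coerce the type in code than modify the database, here’s a minimal sketch of the idea using plain ADO.NET (the book’s example uses the Entity Framework, so the table, method, and connection details here are hypothetical): declaring the parameter type explicitly keeps the provider from sending datetime2 for a DateTime value.

    // Hypothetical sketch: force the DateTimePicker's value to travel as
    // datetime rather than datetime2 so it matches a datetime column.
    using System;
    using System.Data;
    using System.Data.SqlClient;

    class BirthdayUpdate
    {
        static void SaveBirthday(string connectionString, int customerId, DateTime birthday)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "UPDATE Customers SET Birthday = @Birthday WHERE Id = @Id", connection))
            {
                // SqlDbType.DateTime maps to the SQL Server datetime type;
                // the birthday value would come from DateTimePicker.Value.
                command.Parameters.Add("@Birthday", SqlDbType.DateTime).Value = birthday;
                command.Parameters.Add("@Id", SqlDbType.Int).Value = customerId;
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }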

As far as the book is concerned, this problem is quite fixable using the manual edit (although manually editing the database shouldn’t be something you have to do). However, it does bring up the question of how teams working across international boundaries will interact with each other if they can’t depend on the controls acting in a particular way every time they’re used. This is a problem that you need to be aware of when working with larger, international teams. Let me know if you have any questions or concerns about the book example at You’ll have to contact Microsoft about potential fixes to the DateTimePicker control since I have no control over it.


Application Development and BYOD

I read an article a while ago in InfoWorld entitled, “The unintended consequences of forced BYOD.” The Bring Your Own Device (BYOD) phenomenon will only gain in strength because more people are using their mobile devices for everything they do and corporations are continually looking for ways to improve the bottom line. The push from both sides ensures that BYOD will become a reality. The article made me think quite hard about how developers who work in the BYOD environment will face new challenges that they haven’t even had to consider in the past.

Of course, developers have always had to consider security. Trying to maintain a secure environment has always been a problem. The only truly secure application is one that has no connectivity to anything, including the user. Obviously, none of the applications out there are truly secure—the developer has always had to settle for something less than the ideal situation. At least devices in the past were firmly under IT control, but not with BYOD. Now the developer has to face the fact that the application will run on just about any device, anywhere, at any time, and in any environment. A user could be working on company secrets with a competitor looking right at the screen. Worse, how will developers meet legal requirements such as the Health Insurance Portability and Accountability Act (HIPAA)? Is the user now considered an independent vendor, or is the company still on the hook for maintaining a secure environment? The legal system has yet to address these sorts of questions, but it will have to do so soon because you can expect that your doctor (and other health professionals) will use a mobile device to enter information as well.

Developers will also have to get used to working with new tools and techniques. Desktop development has meant working with tools designed for a specific platform. A developer would use something like C# to create a desktop application meant for use on any platform that supports the .NET Framework, which mainly meant working with Windows unless the company also decided to support .NET Framework alternatives such as Mono (an open source version of the .NET Framework). Modern applications will very likely need to work on any platform, which means writing server-based applications, browser-based applications, or a combination of the two in order to ensure the maximum number of people possible can interact with the application. The developer will have to get used to the idea that there is no way to test absolutely every platform that will use the application because the next platform hasn’t been delivered yet.

Speed also becomes a problem for developers. When working with a PC or laptop, a developer can rely on the client having a certain level of functionality. Now the application needs to work equally well with a smartphone that may not have enough processing power to do much. In order to ensure the application works acceptably, the developer needs to consider using browser-based programming techniques that will work equally well on every device, no matter what level of power the device possesses.

Some in the industry have begun advocating that BYOD should also include Bring Your Own Software (BYOS). This would mean creating an environment where developers would make data available through something like a Web service that could be accessed by any sort of device using any capable piece of software. However, the details of such a setup have yet to be worked out, much less implemented. The interface would have to be nearly automatic with regard to connectivity. A browser-based application could do this, but only if the organization could at least ensure that everyone used a browser that met minimum standards.
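
As a rough sketch of the idea (assuming ASP.NET Web API; the controller and data are hypothetical), a service like this would let any device with an HTTP client consume the data, whatever software the user brings:

    // A BYOS-friendly data service: the data travels as JSON or XML via
    // content negotiation, so any capable client software can consume it.
    using System.Collections.Generic;
    using System.Web.Http;

    public class ScheduleController : ApiController
    {
        // GET /api/schedule returns the day's events.
        public IEnumerable<string> Get()
        {
            return new[] { "9:00 Status meeting", "13:00 Design review" };
        }
    }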

My current books, HTML5 Programming with JavaScript for Dummies and CSS3 for Dummies, both address the needs of developers who are looking to move from the desktop into the browser-based world of applications that work anywhere, any time. Let me know your thoughts about BYOD and BYOS at


The Place of Automation in the User Interface

There was a time that a developer could rely on users to possess a certain level of technical acumen. That’s no longer the case. Most of the people using a device containing a CPU today (I’m including PCs, laptops, tablets, and smartphones here) don’t know anything about how it works and they don’t care to know either. All these people know is that they really must have access to their app. (Some don’t even realize the role data plays in making the app work.) The app can perform myriad tasks—everything from keeping track of the calories they’ve eaten to maintaining the scheduled events for the day. Devices that contain CPUs have become the irreplaceable partner for many people and these devices must work without much concern on the part of the user. In short, the device must provide a superior level of automation or the user simply won’t know how to interact with it.

I was recently watching television and saw a commercial for a Weight Watchers app for mobile devices. In the commercial, a woman marvels at the new programs that Weight Watchers provides, which include this app for her mobile devices. To track her calories, she simply points her phone at the box containing whatever food she plans to eat and the app tracks the calories for her. The interesting part is that there is no data entry required. As technology continues to progress, you can expect to see more apps of this type. People really don’t want to know anything about your app, how it works, or the cool code you put into it. They want to use the app without thinking about it at all.

Of all the parts of a device that must be automated, the user interface is the most important and also the most difficult to create. Users really don’t want to have to think about the interface. Their focus is on the task that the app performs for them. In fact, some e-mails I’ve received recently about my Windows 8 book have driven home the idea that the app must simply work without any thought at all. It’s because of these e-mails (and those for other books I’ve written) that I wrote the article entitled, “Designing Apps with Automation in Mind.” This article points out the essential behaviors that applications must exhibit today to be successful.

On the other side of the fence, I continue to encounter an “old world” philosophy on the part of developers that applications should pack as much as possible into a small space—users will eventually figure out the complexity of the interface provided. Unfortunately, as users become more vocal in requiring IT to meet their demands, these approaches to the user interface will lose out. The next app is only a click away, and if it does a better job of automating the interface, your app will be abandoned. Even if there isn’t another app to use, the user will simply ignore the app you’ve provided. The user will rely on an older version or simply not interact with the app if there is no older version. Many organizations have found out the hard way that attempting to force users to interact with an app is bound to fail.

The fact is, to be successful today, a developer must be part psychologist, part graphics artist, and part programming genius. Creating an acceptable interface is no longer good enough, especially when people want to access the same app from more than one device. The interface must be simple, must work well, and must automate as much of the input as possible for the user or it simply won’t succeed. Let me know your thoughts about user interface automation at


Accessing Sample Database Data (Part 4)

The previous post, Accessing Sample Database Data (Part 3), discussed the need to change the connection string for a database in the App.CONFIG file. However, you haven’t actually gained access to the database as of yet. Depending on how the book is written, you may have to write code to create the database and manually input the entries, run a script, or add a database file to SQL Server. Of the three, writing the database code and manually inputting the entries is the most straightforward. All you need to do is follow the instructions in the book. As mentioned in my first post, many readers find this method the least acceptable because database management tasks of this sort are supposedly in the realm of DBAs.

In order to work with databases in Visual Studio, you must have a connection to the database. All of my books show how to create such a connection using Server Explorer. The project will not include a connection when you open the downloadable source code. The connection to the server is part of your Visual Studio setup and you must create it when you begin working with the downloadable source. The short take on creating a connection is to right click the Data Connections folder in Server Explorer and choose Add Connection from the context menu. You’ll see a wizard that will lead you through the steps for creating the connection.

Using Scripts

The next level of complexity is the script. A number of my books use scripts in order to make it at least reasonable to use something other than SQL Server as the database manager. More and more of my readers want to use other solutions. Accommodating this need is proving difficult, but using scripts does have the advantage of reducing the work required to use non-SQL Server solutions. I’m going to assume that you’re working with Visual Studio and that you have a connection to your database manager, whatever that database manager might be. The following steps tell how to use scripts in a general way (you may find slight variations in the steps between different versions of Visual Studio).

  1. Choose File | Open | File. You see an Open File dialog box.
  2. Highlight the script you want to use (a script always has a .SQL extension) and click Open. You see the script opened in Visual Studio.
  3. Right click anywhere in the editor window and choose Execute SQL from the context menu. Visual Studio interacts with your database manager to run the script.
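
If you’re not working inside Visual Studio, a small program can run the script instead. Here’s a minimal sketch (assuming SQL Server and a script that doesn’t contain GO batch separators, which ADO.NET can’t process; the file name is hypothetical):

    // Reads a .SQL script and executes it as a single batch.
    using System.Data.SqlClient;
    using System.IO;

    class ScriptRunner
    {
        static void Run(string connectionString, string scriptPath)
        {
            string sql = File.ReadAllText(scriptPath);   // e.g., "CreateTables.sql"
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                command.ExecuteNonQuery();   // runs the entire script as one batch
            }
        }
    }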

Using a Database File

The third level of complexity is actually using a database file. The advantage of using this option is that the database is completely configured and contains everything needed to use it. This level represents the least amount of work on the part of the reader in most cases. It’s also the least flexible because it only works with SQL Server and the version of SQL Server must be able to support the features in the database file. Use these steps to work with a SQL Server database file from Visual Studio.

  1. Open Server Explorer by choosing View | Server Explorer. You see the Server Explorer window open.
  2. Right click Data Connections and choose Add Connection from the context menu. You see the Choose Data Source dialog box.
  3. Select Microsoft SQL Server Database File. The wizard automatically chooses the correct Data Provider option for you. Click Continue. You see the Add Connection dialog box.
  4. Click Browse. You see the Select SQL Server Database File dialog box.
  5. Highlight the file you want to use and click Open. You return to the Add Connection dialog box.
  6. Click Test Connection. You should see a success message. If your version of SQL Server doesn’t support the features needed to use the database file, you’ll see an error message that states the file can’t be downgraded. The message will also tell you which version of SQL Server you require to use the database file.
  7. Click OK. Visual Studio creates the connection for you. What has actually happened in the background is that SQL Server has created the connection at the request of Visual Studio.
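
For completeness, here’s a minimal sketch of what that connection looks like when you create it in code rather than through the wizard (the LocalDB instance name and the file path are assumptions; adjust them to match your setup):

    // Hypothetical sketch: attaching a SQL Server database file from code.
    using System.Data.SqlClient;

    class DatabaseFileDemo
    {
        static void Main()
        {
            string connectionString =
                @"Data Source=(LocalDB)\v11.0;" +                   // LocalDB instance name
                @"AttachDbFilename=C:\Data\SampleDatabase.mdf;" +   // hypothetical .MDF file
                "Integrated Security=True";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();   // corresponds to the wizard's Test Connection step
            }
        }
    }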


I try to choose the database options for my books with care. Many of my books use more than one option to allow more people to work with the option that best suits their needs. However, it’s impossible to please everyone with the choices I make. When you encounter problems using one of the database options I’ve selected, I’ll try my best to help you work through the difficulties. As with every other aspect of my books, working with the databases is a learning experience. Let me know your thoughts on database access in my books or if you have additional database access questions at