Source Code Placement

I always recommend that you download the source code for my books. The Verifying Your Hand Typed Code post discusses some of the issues that readers encounter when typing code by hand. However, I also understand that many people learn best when they type the code by hand, and that's the point of getting my books—to learn something really interesting (see my principles for creating book source code in the Handling Source Code in Books post). Even if you do need to type the source code in order to learn, having the downloadable source code handy will help you locate errors in your own code with greater ease. I won't usually have time to debug your hand-typed code for you.

Depending on your platform, you might encounter a situation where the IDE chooses an unfortunate place to put the source code you want to save. For example, on a Windows system it might choose the C:\Program Files folder (or one of its subdirectories) to store the file. Microsoft wants to make your computing experience safer, so you don't actually have rights to this folder for storing your data files. As a result, the IDE will stubbornly refuse to save the files in that folder. Likewise, some IDEs have a problem with folder names that contain spaces. For example, your C:\Users\<Your Name>\My Documents folder might seem like the perfect place to store your source code files, but the spaces in the path will cause problems for the IDE, and it may claim that it can't find the file even if it manages to save the file successfully.

My recommendation for fixing these, and other, source code placement problems is to create a folder for your source code that you can access and have full rights to work with. My books usually recommend a specific source code file path, but you can use any path you want. The point is to create a path that meets these criteria (the sketch after the list shows a quick way to verify a candidate folder):

  • Is easy to access
  • Grants you full rights
  • Contains no spaces in any of its elements
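If you want to double-check a location before you point the IDE at it, a quick script can confirm that it meets all three criteria. The following is only a minimal sketch in Python; the C:\SourceCode folder name is an assumption, so substitute whatever path your book (or your own preference) calls for.

    import os
    import tempfile

    # Hypothetical folder; substitute the path recommended in the book you're using.
    source_folder = r"C:\SourceCode"

    # Criterion 3: no spaces anywhere in the path.
    if " " in source_folder:
        print("Pick a path without spaces; some IDEs mishandle them.")

    # Criteria 1 and 2: the folder exists (or can be created) and you can write to it.
    os.makedirs(source_folder, exist_ok=True)
    try:
        with tempfile.TemporaryFile(dir=source_folder):
            print(source_folder + " looks fine for storing source code.")
    except OSError as error:
        print("Can't write to " + source_folder + ": " + str(error))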

As long as you follow these rules, you likely won’t experience problems with your choice of source code location. If you do experience source code placement problems when working with my books, please be sure to let me know at John@JohnMuellerBooks.com.

 

Using My Coding Books Effectively

A lot of people ask me how to use my books to learn a coding technique quickly. I recently wrote two articles for New Relic that explain how to choose a technical book and how to get precisely the book you want. These articles are important to you, the reader, because I want to be sure that you'll always like the books you purchase, no matter who wrote them. More importantly, they help you get a good start with my coding books because you start with a book that contains something you really do need.

Of course, there is more to the process than simply getting the right book. When you already have some experience with the language and the techniques for using it, you can simply look up the appropriate example in the book and use it as a learning aid. However, the vast majority of the people asking this question have absolutely no experience with the language or the techniques for using it. Some people have never written an application or worked with code at all. In this case, there really aren't any shortcuts. Learning something really does mean spending the time to take the small steps needed to obtain the required skills. Someday there may be a technology that will simply pour the knowledge into your head, but that technology doesn't exist today.

Even reading a book cover-to-cover won't help you succeed. My own personal experience tells me that I need to use multiple strategies to ensure I actually understand a new programming technique, and I've been doing this for a long time (well over 30 years). Just reading my books won't make you a coder; you must work harder than that. Here is a quick overview of some techniques that I use when I need to discover a new way of working with code or to learn an entirely new technology (the articles provide more detail):

  • Read the text carefully.
  • Work through the examples in the book.
  • Download the code examples and run them in the IDE.
  • Write the code examples by hand and execute them.
  • Work through the examples line-by-line using the debugger (see Debugging as An Educational Tool and the short sketch after this list).
  • Talk to the author of the book about specific examples.
  • Modify the examples to obtain different effects or to expand them in specific ways.
  • Use the coding technique in an existing application.
  • Talk to other developers about the coding technique.
  • Research different versions of the coding technique online.
  • View a video of someone using the technique to perform specific tasks.
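To illustrate the debugger item in the list above: if the book's example happens to be in Python, you can step through it one line at a time with the standard pdb debugger; every mainstream language has an equivalent tool (Visual Studio's debugger, gdb, a browser's developer tools, and so on). The function below is invented purely for the demonstration.

    import pdb

    def average(values):
        total = sum(values)
        return total / len(values)

    # Start the debugger just before the call you want to study, then use the
    # "s" (step) and "n" (next) commands to execute the code one line at a time.
    pdb.set_trace()
    print(average([2, 4, 6]))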

There are other methods you can use to work with my books, but this list represents the most common techniques I use. Yes, it's a relatively long list, and every item requires some amount of effort to perform. It isn't possible to learn a new technique without putting in the time required to learn it. In an age of instant gratification, knowledge still requires time to obtain. The wisdom to use that knowledge appropriately takes even longer. I truly wish there were an easier way to help you get the knowledge you need, but there simply isn't.

Of course, I'm always here to help you with my books. When you have a book-specific question, I want to hear about it because I want you to have the best possible experience using my books. In addition, unless you tell me that something isn't working for you, I'll never know, and I won't be able to discuss solutions for the issue as part of a blog post or an e-mail exchange.

What methods do you use to make the knowledge you obtain from books work better? The question of how people learn takes up a considerable part of my time, so your answer matters for making my future books better. Let me know your thoughts at John@JohnMuellerBooks.com. The same e-mail address also works for your book-specific questions.

 

Examining the Calculator in Windows 7 (Part 2)

A while back, over two years ago in fact, I uploaded a post entitled "Examining the Calculator in Windows 7." Since that time, a number of people have asked about the other features that the new calculator includes. Yes, Microsoft has introduced some rather significant problems, but there are some good things about the new calculator as well.

The good news appears on the View menu. When you click this menu, you see options at the bottom of the list that provide access to the special features, as shown here.

The View menu includes options for unit conversion, date conversion, and worksheets.
The Windows 7 Calculator View Menu

The Unit Conversion and Date Conversion options are the most useful. However, the worksheets can prove helpful when you need them. Of the new features, I personally use Unit Conversion the most, and I suspect many people will do the same. After all, it's not often you need to figure out a new mortgage, a vehicle lease amount, or the fuel economy of your vehicle (and if you do such work for a living, you'll have something better than the Windows Calculator to use). To see what this option provides, click Unit Conversion. You see a new interface like the one shown here:

The Unit Conversion display makes it possible to convert from one unit of measure to another.
Calculator Unit Conversion Display

You start using this feature by selecting the type of unit you want to convert. As you can see from this list, the kinds of conversions you can perform are extensive:

Select a conversion type to determine what options are offered in the From and To fields.
The Calculator Supports a Healthy List of Conversion Types

The option you select determines the content of the From and To fields. For example, if you want to convert from kilometers to miles, you select the Length option. After you select the type of unit, type a value in the From field and select the From field unit of measure. Select the To field unit of measure last. Here is what happens when you convert 15 kilometers to miles:

The output shows that converting 15 kilometers to miles equals 9.32056788356001 miles.
Converting Kilometers to Miles
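If you want to verify the Calculator's answer (or reproduce it when Windows 7 isn't handy), the conversion is just a division by the number of kilometers in an international mile. Here's a quick sketch:

    # One international mile is defined as exactly 1.609344 kilometers.
    KM_PER_MILE = 1.609344

    kilometers = 15
    miles = kilometers / KM_PER_MILE
    print(kilometers, "km =", miles, "miles")  # roughly 9.32056788356, matching the Calculator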

I've found a use for most of the entries in the types list at one time or another. Every one of them works quite well, and you'll be happy they're available when you need them. The Date Calculation option can be similarly useful if you work with dates relatively often, as I do. However, I can't see many people needing to figure out the number of days between two dates on a regular basis. Even so, this feature is probably used more often than any of the worksheets.
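If you do need that calculation and would rather script it, the arithmetic is equally simple; the dates below are purely illustrative.

    from datetime import date

    # Purely illustrative dates; substitute your own.
    start = date(2013, 1, 1)
    end = date(2013, 7, 4)
    print((end - start).days)  # 184 days between the two dates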

The ability to perform conversions of various kinds and to access the worksheets that the Windows 7 Calculator provides isn't enough to change my opinion. The implementation of the Calculator is extremely flawed, and I stick by my review in the first posting. However, you do have the right to know there are some positives, which is the point of this post. Let me know your thoughts about the Calculator, now that you have a better view of it, at John@JohnMuellerBooks.com.

 

Coding Schools and the Learning Process

There are three essential ways to begin a career as a developer. The first is to get a college degree in the subject, which is normally a Bachelor of Computer Science or a Bachelor of Information Technology (amongst other degrees). The second is to teach yourself the trade, which means spending a lot of time with books and in front of your screen working through online tutorials. The third, a newer option, is coding school, which has become extremely popular due to limitations in the first two approaches.

The cost of a college education has continued to skyrocket over the past few decades until it has started to elude the grasp of more than a few people. I’ve read estimates that a college degree now costs between $20,000 and $100,000 in various places. How much you actually pay depends on the school, your personal needs, and the electives you choose. The point is that many people are looking for something less expensive.

A college education also requires a large investment in time. A four-year degree may require five or six years to actually complete because most people have to work while they're going to school. A degree takes only four years when you can go full time and apply yourself fully. Someone who is out of work today and needs a job immediately can't wait five or six years to get one.

Teaching yourself is a time-honored method of obtaining new skills. I've personally taught myself a considerable number of skills. However, I'm also not trying to market those skills to someone else. My self-taught skills usually come in the areas of crafting or self-sufficiency (or sometimes a new programming language). The problem with being self-taught is that you have no independent assessment of your skills, and most employers can't take the time to test them. An employer needs someone with a proven set of skills. Consequently, self-teaching is extremely useful for learning new hobbies or adding to existing (proven) skills, but it's almost valueless when you're trying to get a new job. In addition, few people are actually motivated enough to learn a new skill completely (at the same level as a college graduate) on their own.

Coding schools overcome the problem with self-teaching because they offer proof of your skills and ensure you get a consistent level of training. You get the required sheepskin to show to employers. They also address deficiencies in the college approach. The time factor is favorable because most of these schools promise to teach you basic development skills in three months (compared to the five or six years required by a college). In addition, the cost is significantly less (between $6,000 and $18,000). So, it would seem that going to a coding school is the optimum choice.

Recently, people have begun to question the ability of coding schools to fulfill the promises they make. It's important to consider what a coding school is offering before you go to one. The schools vary greatly in what they offer (you can see reviews of three popular code schools at http://www.mikelapeter.com/code-school-vs-treehouse-vs-codecademy-a-review/). However, there are similarities between schools. A coding school teaches you the bare basics of a language. You don't gain the sort of experience that a college graduate would have. In addition, coding schools don't teach concepts such as application design or how to work in a team environment. You don't learn the low-level concepts of how application development works. I don't know if building a compiler is still part of the curriculum at colleges, but it was one of my more important learning experiences because I gained insights into how my code actually ended up turning switches on and off within the chips housed in the computer.

I see coding schools as fulfilling an important role—helping those who do have programming skills to build competence in a new language quickly. In addition, a coding school could provide an entry point for someone who thinks they may want a computer science degree but isn't certain. Spending a short time in a coding school is better than spending a year or two in college and only then finding out that computer science isn't what the person wants. Coding schools could also help people who need to know how to write simple applications as part of another occupation. For example, a researcher could learn the basic skills required to write simple applications to aid in their main occupation.

People learn in different ways. It's the lesson that readers keep driving home to me. Some people learn with hands-on exercises, some by reading, and still others by researching on their own. Coding schools can fulfill an important role in teaching computer science, but they're not even close to a complete solution. In order to get the full story about computer science, a student must be willing to invest the required time. Until we discover some method for simply pouring information into the minds of people, the time-consuming approach to learning must continue as it has for thousands of years. There really aren't any shortcuts when it comes to learning. Let me know your thoughts about coding schools at John@JohnMuellerBooks.com.

 

Antivirus and Application Compilation

Sometimes applications don't get along, especially when one application is designed to create new content at a low level and the other is designed to prevent low-level access to a system. Such is sometimes the case with compilers and antivirus applications. I haven't been able to reproduce this behavior myself, but enough readers have told me about it that I feel I really do need to address it in a post. There are situations where you work with source code from one of my books, compile it, and then have your antivirus application complain that the code is infected with something (even though you know it isn't). Sometimes the antivirus program will go so far as to simply delete the application you just compiled (or place it in a virus vault).
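If you suspect this is happening on your system, one quick check is to compile a known-good example and see whether the executable is still there a moment later. The following is only a rough sketch; it assumes GCC is on your PATH and that the antivirus reacts quickly, so treat it as a diagnostic hint rather than a definitive test.

    import os
    import subprocess
    import time

    # Hypothetical file names; any known-good example will do.
    source = "hello.c"
    binary = "hello.exe"

    with open(source, "w") as file:
        file.write('#include <stdio.h>\nint main(void) { puts("Hello"); return 0; }\n')

    subprocess.run(["gcc", source, "-o", binary], check=True)
    time.sleep(5)  # give the antivirus scanner a moment to react

    if os.path.exists(binary):
        print("The compiled program is still there; the antivirus left it alone.")
    else:
        print("The compiled program vanished; check your antivirus quarantine or vault.")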

The solution to the problem can take a number of forms. If your antivirus application provides some means of creating exceptions for specific applications, the easiest way to overcome the problem is to create such an exception. You’ll need to read the documentation for your antivirus application to determine whether such a feature exists.

In some cases, the compiler or its associated Integrated Development Environment (IDE) simply doesn't follow all the rules required to work safely in protected directories, such as the C:\Program Files directory on a Windows system. This particular issue has caused readers enough woe that my newer books suggest installing the compiler and its IDE in a directory the reader owns. For example, I now ask readers to install Code::Blocks in the C:\CodeBlocks directory on Windows systems because installing it elsewhere has caused some people problems.

Unfortunately, creating exceptions and installing the application in a friendly directory only go so far in fixing the problem. A few antivirus applications are so intent on protecting you from yourself that nothing you do will prevent the behavior. When this happens, you still have a few options. The easiest solution is to turn the antivirus program off just long enough to compile and test the application. Of course, this is also the most dangerous solution because it could leave your system open to attack.

A safer, albeit less palatable, solution is to try a different IDE and compiler. Antivirus programs seem a little picky about which applications they view as a threat. Code::Blocks may cause the antivirus program to react, but Eclipse or Visual Studio might not. Unfortunately, using this solution means that steps in the book may not work precisely as written.

For some developers, the only logical solution is to get a different antivirus application. I’ve personally had really good success with AVG Antivirus. However, you might find that this product doesn’t work for you for whatever reason. Perhaps it interacts badly with some other application on your system or simply doesn’t offer all the features you want.

My goal is to ensure you can use the examples in my books without jumping through a lot of hoops. When you encounter problems that are beyond my control, such as an ornery antivirus application, I'll still try to offer some suggestions. In this case, the solution truly is out of my control, but you can try the techniques offered in this post. Let me know if you find other solutions to the problem at John@JohnMuellerBooks.com.

 

Thinking of All the Possibilities in Software Design

A number of books on my shelf, some of which I've written, broach the topic of divergent thinking. Unfortunately, many developers (and many more managers) don't really grasp the ideas behind divergent thinking. Simply put, divergent thinking starts with a single premise and views as many permutations of that premise as possible. Most developers don't take the time to use divergent thinking as part of the development process because they don't see a use for it. In fact, most books fall short of even discussing the potential for divergent thinking, much less naming it as a specific element of application design. I've explored the subject before, and a reader recently reminded me of an article I wrote entitled Divergent Versus Convergent Thinking: Which Is Better for Software Design?

The process that most developers rely upon is convergent thinking, which is where you convert general goals and needs into specific solutions that appear within a single application. The difference between the two modes of thinking is that divergent thinking begins with a single specific premise, while convergent thinking begins with a number of general premises. More specifically, divergent thinking is the process you use to consider all of the possibilities before you use convergent thinking to create specific solutions to those possibilities.

There is an actual cycle between divergent and convergent thinking. You use divergent thinking when you start a project to ensure you discover as many ways to address user requirements as possible. Once you have a number of possibilities, you use convergent thinking to consider the solutions for those possibilities in the form of a design. The process will point out the possibilities that will work and those that won't. Maintaining a middle ground between the extremes of divergent and convergent thinking helps create unique solutions, yet keeps the project on track and maintains project team integrity. Managing the cycle is the job of the person in charge of the project, who is often the CIO, but could be some other management position. So, the manager has to be knowledgeable about software design in order for the process to work as anticipated.

One of the reasons that many applications fail today is the lack of divergent thinking as part of the software design process. We're all too busy thinking about solutions rather than possibilities. Yet the creative mind and the creative process are based on divergent thinking. The reason we're getting the same solutions rehashed in a million different ways (resulting in a lack of interest in new solutions) is the lack of divergent thinking in the development process.

In fact, I’d go so far as to say that most developers have never even heard of divergent thinking (and never heard convergent thinking called by that name). With this in mind, I’ve decided to provide some resources you can use to learn more about divergent thinking and possibly add it to your application design process.

 

These are just four of several hundred articles I located on divergent thinking online. I chose these particular four articles because they represent a range of ideas that most developers will find helpful, especially the idea of not applying stereotypical processes when trying to use divergent thinking. Stereotypes tend to block creative flow, which is partly what divergent thinking is all about.

The bottom line is that until divergent thinking is made part of the software design process, we’ll continue to suffer through rehashed versions of the current solutions. What is your view of divergent thinking? Do you see it as a useful tool or something best avoided? Let me know your thoughts at John@JohnMuellerBooks.com.

 

In Praise of Dual Monitors

A lot of people have claimed that the desktop system is dead—that people are only interested in using tablets and smartphones for computing. In fact, there is concern that the desktop might become a thing of the past. It’s true that my own efforts, such as HTML5 Programming with JavaScript for Dummies and CSS3 for Dummies, have started to focus on mobile development. However, I plan to continue using my desktop system when working because it’s a lot more practical and saves me considerable time. One such time saver is the use of dual monitors.

Yes, I know that some developers use more than just two monitors, but I find that two monitors work just fine. The first monitor is my work monitor—the monitor I use for actually typing code. The second monitor is my view monitor. When I run the application, the output appears on the second monitor so that I can see the result of changes I’ve made. Using two monitors lets me easily correlate the change in code to the changes in application design. Otherwise, I’d be wasting time switching between the application output and my IDE.

I also use two monitors when writing my books. The work monitor contains my word processor, while my view monitor contains the application I’m writing about. This is possibly one time when a third monitor could be helpful—one to hold the word processor, one to hold the IDE, and one to view the application output. However, in this case, a third monitor could actually slow things down because the time spent viewing the output of an example is small when compared to creating a production application.

The concept of separating work from the source of information used to perform the work isn't new. People have used the idea for thousands of years, in fact. For example, when people used typewriters to produce printed text, the typist employed a special stand to hold the manuscript being typed. The idea of having a view of your work and another surface to actually work on has been used quite often throughout history because it's a convenient way to perform tasks quickly. By employing dual monitors, I commonly get a 15 to 33 percent increase in output, simply because I can see my work and its associated view at the same time.

Working with dual monitors not only saves time, but can also reduce errors. By typing as I view the output of an application, I can more reliably transcribe the text of labels and other information the application provides. The same holds true when viewing information sources found in other locations. Seeing the information as I type it always makes errors less likely.

Don’t get the idea that I support using dual monitors in every situation. Many consumer-oriented computer uses are served just fine with a single monitor. For example, there isn’t a good reason to use two monitors when viewing e-mail in many cases—at least, not at the consumer level (you could make a case for using dual monitors when working with e-mails and a calendar to manage tasks, for example). Dual monitors commonly see use in the business environment because people aren’t necessarily creating their own information source—the information comes from a variety of sources that the user must view in order to use reliably.

Do you see yourself using dual monitors? If you use such a setup now, how do you employ it? Let me know at John@JohnMuellerBooks.com.

 

Choosing Variable Names

It often surprises me that developers seem to choose completely useless variable names like MyVariable when creating an application. Although MyVariable could be an interesting variable name for an example in a book, it never has a place in any sort of production code. Even in book examples, I try to use meaningful variable names, especially once I'm past the initial "Hello World" example. Variable names are important because they tell others:

 

  • What sort of information the variable stores
  • When the variable is commonly used
  • Where the variable is used
  • How to use the variable correctly
  • Why the variable is important


In some cases, the variable name could even indicate who created the variable, although this sort of information is extremely rare. If you never thought a variable name should contain all that information, then perhaps you haven't been choosing the best variable names for your applications.

Even with these requirements in place, choosing a variable name can be uncommonly hard if you want to maximize the name's value to both yourself and other developers. Some organizations make the selection process easier by following certain conventions. For example, a form of Hungarian Notation, where certain type prefixes, suffixes, and other naming conventions are used, is a common way to reduce the complexity of creating a variable name. In fact, Hungarian Notation (or some form of it) is often used to name objects, methods, functions, classes, and other programming elements as well. For example, NamArCustomers could be an array of customer names (Nam for names, Ar for array). The use of these two prefixes would make it instantly apparent when the variable is being used incorrectly, such as assigning a list of numbers to the array. The point is that an organizational variable naming policy can reduce complexity, make names easy for anyone to read, and reduce the time a developer spends choosing a name.
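To see how a prefix convention can surface misuse, consider this small sketch. The names are hypothetical and simply follow the NamAr style described above; adapt the idea to whatever convention your organization settles on.

    # NamArCustomers: Nam (names) plus Ar (array) says this holds customer names.
    namArCustomers = ["Ann Smith", "Raul Ortega", "Mei Chen"]

    # The prefixes make this assignment look wrong at a glance: a list of numbers
    # clearly doesn't belong in a variable whose name promises customer names.
    namArCustomers = [15, 22, 47]  # suspicious; the name and the content disagree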

 

Before I get a ton of e-mail on the topic, yes, I know that many people view Hungarian notation as the worst possible way to name variables. They point out that it only really works with statically typed languages and that it doesn’t work at all for languages such as JavaScript. All that I’m really pointing out is that some sort of naming convention is helpful—whether you use something specific like Hungarian Notation is up to you.


Any variable name you create should convey the meaning of that variable to anyone who reads the code. If you aren't using some sort of pattern or policy to name your variables, then define a convention that helps you produce names in a consistent manner and document it. When you create a variable name, you need to consider these kinds of questions (the sketch after the list shows how the answers can shape a name):

 

  1. What information does the variable contain (such as a list of names)?
  2. How is the variable used (such as locally or globally, or to contain coordinates, or a special kind of object)?
  3. When appropriate, what kind of information does the variable contain (such as a string or the coordinate of a pixel on screen)?
  4. Is the variable used for a special task (such as data conversion)?
  5. What case should prefixes, suffixes, and other naming elements appear in when a language is case sensitive?
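As promised above, here's a hypothetical example of how answering those questions can shape a name. Assume a local variable that tracks the horizontal pixel position of the mouse pointer; nothing about the example is prescriptive, it simply shows the reasoning.

    # What does it contain?  A pixel coordinate.
    # How is it used?        Locally, to track the mouse pointer.
    # What kind of data?     An integer screen position.
    # Special task?          No, just positional state.
    mousePointerX = 245                # horizontal pixel position of the pointer
    customerName = "Ann Smith"

    # A careless assignment now reads as obviously wrong:
    mousePointerX = len(customerName)  # a name length is not a screen coordinate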

The point is that you need to choose variable names with care so that you know what they mean later. Carefully chosen variable names make it possible for you to read your code with greater ease and locate bugs a lot faster. They also make it easier for others to understand your code and for you to remember what the code does months after you've written it. Most important of all, useful variable names help you see immediately when a variable is being used the wrong way, such as assigning the length of a name string to a coordinate position on screen (even though both variables are integer values). Let me know your thoughts about variable naming at John@JohnMuellerBooks.com.

 

Verifying Your Hand Typed Code

I maintain statistics for each of my books that are based on reviews and reader e-mails (so those e-mails you send really are important). These statistics help me write better books in the future and also help me determine the sorts of topics I need to address in my blog. It turns out that one of the most commonly asked questions is why a reader's hand-typed code doesn't work. Some readers simply ask the question without giving me any details at all, which makes the question impossible to answer. In some cases, the reader sends the hand-typed code, expecting that I'll take the time to troubleshoot it. However, this isn't a realistic request because it defeats the very purpose behind typing the code by hand. If I take the time to diagnose the problems in the code you typed, I'll be the one to learn an interesting lesson, not you. If you learn better by doing (that is, by typing the code by hand and then running it), then you need to be the one to troubleshoot any problems with the resulting code.

My advice to readers is to use the downloadable source code when working through the book text. If you want to type the code by hand after that as part of your learning experience, at least you'll know that the example works on your system, and you'll also understand how the example works well enough to troubleshoot any errors in your own code. However, you need to be the one to diagnose the errors. If nothing else, perform a character-by-character comparison of your code against the example code that you downloaded from the publisher's site. Often, a reader will write back after I suggest this approach and mention that they had no idea that a particular special symbol or method of formatting content was important. These are the sorts of lessons that this kind of exercise provides.
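One low-effort way to perform that comparison is to let a diff tool do the character checking for you. Here's a minimal sketch using Python's standard difflib module; the file names are placeholders for your hand-typed file and the downloaded original.

    import difflib

    # Placeholder file names; point these at your hand-typed file and the download.
    with open("my_hand_typed_example.py") as typed, open("downloaded_example.py") as original:
        typed_lines = typed.readlines()
        original_lines = original.readlines()

    # Show every line that differs between the two files.
    for line in difflib.unified_diff(original_lines, typed_lines,
                                     fromfile="downloaded", tofile="hand typed"):
        print(line, end="")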

Now, it does happen that the downloadable source code won't work on a particular user's system. When the error is in the code, or it's something I can determine about the coding environment, you can be certain that I'll post information about it on my blog. This should be the first place you look for such information. Simply click the book title in question under the Technical category, and you'll find a list of posts for that book. Always feel free to contact me about a book-specific question. I want to be sure you have a good learning experience.

There are some situations where a reader tries to run application code that won’t work on a particular system. My books provide information on the kind of system you should use, but I can’t always determine exceptions to the rule in advance. When I post system requirements, your system must meet those requirements because the examples are guaranteed to fail on lesser systems. If you encounter a situation where the downloadable code won’t run on your system, but none of the fixes I post for that code work and your system does meet the requirements, then please feel free to contact me. There are times where an example simply won’t run because you can’t use the required software or the system won’t support it for whatever reason.

The point of this post is that you need to work with the downloadable source code whenever possible. The downloadable source code has been tested by a number of people, usually on a range of systems, to ensure it will work on your system too. I understand that typing the code by hand is an important and viable way to learn, but you should reserve this method as the second learning tier—used after you have tried the downloadable source code. Please let me know if you have any questions or concerns at John@JohnMuellerBooks.com.

 

Application Development and BYOD

I read an article a while ago in InfoWorld entitled "The unintended consequences of forced BYOD." The Bring Your Own Device (BYOD) phenomenon will only gain in strength because more people are using their mobile devices for everything they do and corporations are continually looking for ways to improve the bottom line. The push from both sides ensures that BYOD will become a reality. The article made me think quite hard about how developers who work in a BYOD environment will face challenges that developers haven't even had to consider in the past.

Of course, developers have always had to consider security. Trying to maintain a secure environment has always been a problem. The only truly secure application is one that has no connectivity to anything, including the user. Obviously, none of the applications out there are truly secure—the developer has always had to settle for something less than the ideal situation. At least devices in the past were firmly under IT control, but not with BYOD. Now the developer has to face the fact that the application will run on just about any device, anywhere, at any time, and in any environment. A user could be working on company secrets with a competitor looking right at the screen. Worse, how will developers meet legal requirements such as the Health Insurance Portability and Accountability Act (HIPAA)? Is the user now considered an independent vendor, or is the company still on the hook for maintaining a secure environment? The legal system has yet to address these sorts of questions, but it will have to do so soon because you can expect that your doctor (and other health professionals) will use a mobile device to enter information as well.

Developers will also have to get used to working with new tools and techniques. Desktop development has meant working with tools designed for a specific platform. A developer would use something like C# to create a desktop application meant for use on any platform that supports the .NET Framework, which mainly meant working with Windows unless the company also decided to support .NET Framework alternatives such as Mono (an open source version of the .NET Framework). Modern applications will very likely need to work on any platform, which means writing server-based applications, browser-based applications, or a combination of the two in order to ensure the maximum number of people possible can interact with the application. The developer will have to get used to the idea that there is no way to test absolutely every platform that will use the application because the next platform hasn’t been delivered yet.

Speed also becomes a problem for developers. When working with a PC or laptop, a developer can rely on the client having a certain level of functionality. Now the application needs to work equally well with a smartphone that may not have enough processing power to do much. In order to ensure the application works acceptably, the developer needs to consider using browser-based programming techniques that will work equally well on every device, no matter what level of power the device possesses.

Some in industry have begun advocating that BYOD should also include Bring Your Own Software (BYOS). This would mean creating an environment where developers would make data available through something like a Web service that could be accessed by any sort of device using any capable piece of software. However, the details of such a setup have yet to be worked out, much less implemented. The interface would have to be nearly automatic with regard to connectivity. The browser-based application could do this, but only if the organization could at least ensure that everyone would be required to use a browser that met minimum standards.
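To make the BYOS idea a little more concrete, here's a minimal sketch of the kind of web service such an environment implies: a tiny endpoint that returns data as JSON, which any device with a standards-compliant browser or HTTP client could consume. It uses only Python's standard library and invented sample data, so treat it as an illustration of the concept rather than a production design (which would also need authentication, HTTPS, and so on).

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Invented sample data; a real service would pull this from a database.
    CUSTOMERS = [{"id": 1, "name": "Ann Smith"}, {"id": 2, "name": "Raul Ortega"}]

    class DataHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/customers":
                body = json.dumps(CUSTOMERS).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        # Any device on the network can request http://<server>:8000/customers.
        HTTPServer(("", 8000), DataHandler).serve_forever()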

My current books, HTML5 Programming with JavaScript for Dummies and CSS3 for Dummies, both address the needs of developers who are looking to move from the desktop into the browser-based world of applications that work anywhere, at any time. Let me know your thoughts about BYOD and BYOS at John@JohnMuellerBooks.com.