Coding Schools and the Learning Process

There are three essential ways to begin a career as a developer. The first is to get a college degree in the subject, normally a Bachelor of Computer Science or a Bachelor of Information Technology (amongst other degrees). The second is to teach yourself the trade, which means spending a lot of time with books and in front of your screen working through online tutorials. The third is a newer option, coding school, which has become extremely popular because of the limitations of the first two approaches.

The cost of a college education has continued to skyrocket over the past few decades until it has started to elude the grasp of more than a few people. I’ve read estimates that a college degree now costs between $20,000 and $100,000 in various places. How much you actually pay depends on the school, your personal needs, and the electives you choose. The point is that many people are looking for something less expensive.

A college education also requires a large investment of time. A four-year degree may actually take five or six years to complete because most people have to work while they’re going to school; a degree takes only four years when you can attend full time and apply yourself fully. Someone who is out of work today and needs a job immediately can’t wait five or six years.

Teaching yourself is a time-honored method of obtaining new skills. I’ve personally taught myself a considerable number of skills. However, I’m also not trying to market those skills to someone else. My self-taught skills usually come in the areas of crafting or self-sufficiency (or sometimes a new programming language). The problem with being self-taught is that you have no independent assessment of your skills and most employers can’t take time to test them. An employer needs someone with a proven set of skills. Consequently, self-teaching is extremely useful for learning new hobbies or adding to existing (proven) skills, but almost valueless when getting a new job. In addition, few people are actually motivated enough to learn a new skill completely (at the same level as a college graduate) on their own.

Coding schools overcome the problem with self-teaching because they offer proof of your skills and ensure you get a consistent level of training. You get the required sheepskin to show to employers. They also address deficiencies in the college approach. The time factor is favorable because most of these schools promise to teach you basic development skills in three months (compared to the five or six years required by a college). In addition, the cost is significantly less (between $6,000 and $18,000). So, it would seem that going to a coding school is the optimum choice.

Recently people have begun to question the ability of coding schools to fulfill the promises they make. It’s important to consider what a coding school is offering before you go to one. The schools vary greatly in what they offer (you can see reviews of three popular code schools at http://www.mikelapeter.com/code-school-vs-treehouse-vs-codecademy-a-review/). However, there are similarities between schools. A coding school teaches you the bare basics of a language. You don’t gain the sort of experience that a college graduate would have. In addition, coding schools don’t teach such concepts as application design or how to work in a team environment. You don’t learn the low-level concepts of how application development works. I don’t know if building a compiler is still part of the curriculum at colleges, but it was one of my more important learning experiences because I gained insights into how my code actually ended up turning switches on and off within the chips housed in the computer.

I see coding schools as fulfilling an important role—helping those who already have programming skills to build competence in a new language quickly. In addition, a coding school could provide an entry point for someone who thinks they may want a computer science degree, but isn’t certain. Spending a short time in a coding school is better than spending a year or two in college only to find out that computer science isn’t what the person wants. Coding schools could also help people who need to write simple applications as part of another occupation. For example, a researcher could learn the basic skills required to write simple applications that aid in the main work of research.

People learn in different ways. It’s the lesson that readers keep driving home to me. Some people learn with hands-on exercises, some by reading, and still others by researching on their own. Coding schools can fulfill an important role in teaching computer science, but they’re not even close to a complete solution. In order to get the full story about computer science, a student must be willing to invest the required time. Until we discover some method for simply pouring information into the minds of people, the time-consuming approach to learning must continue as it has for thousands of years. There really aren’t any shortcuts when it comes to learning. Let me know your thoughts about coding schools at John@JohnMuellerBooks.com.

 

Death of Windows XP? (Part 4)

The last post, Death of Windows XP? (Part 3), was supposed to be the last word on this topic that won’t die, but as usual, it isn’t. The hackers of the world have figured out a new and interesting way of getting around Microsoft’s plan to kill Windows XP. It turns out that you can continue to get updates if you’re willing to use a registry hack that convinces Windows Update your system is a different version of Windows—one that is almost like Windows XP Service Pack 3, but not quite. You can read the article, How to get security updates for Windows XP until April 2019, to get the required details.

The hack involves making Windows Update believe that you actually own a Point of Sale (POS) system that’s based on Windows XP. The POS version of Windows XP will continue to have support until April of 2019, when it appears that Windows XP will finally have to die unless something else comes along. It’s important to note that you must have Windows XP Service Pack 3 installed. Older versions of Windows XP aren’t able to use the hack successfully.
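For reference only (and with the warnings below firmly in mind), the hack amounts to adding a single value to the registry of a 32-bit Windows XP SP3 system. Saved as a .reg file and merged, it reportedly looks something like this; I haven’t tested it and, as you’ll see, I don’t recommend it:

  Windows Registry Editor Version 5.00

  [HKEY_LOCAL_MACHINE\SYSTEM\WPA\PosReady]
  "Installed"=dword:00000001

The value simply convinces Windows Update that the machine is a POSReady system, which is exactly why the updates it then delivers were never tested against a consumer installation.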

After reading quite a few articles on the topic and thinking through the way Microsoft has conducted business in the past, I can’t really recommend the registry hack. There are a number of problems with using it that could cause issues with your setup.

 

  • You have no way of knowing whether the updates will provide complete security support for a consumer version of Windows XP.
  • The updates aren’t specifically tested for the version of Windows XP that you’re using, so you could see odd errors pop up.
  • Microsoft could add code that will trash your copy of Windows XP (once it figures out how to do so).


There are probably other reasons not to use the hack, but these are the reasons that come to mind as most important for my readers. As with most hacks, this one is dangerous, and I have a strong feeling that Microsoft will eventually find a way to make anyone using it sorry they did. The support period for Windows XP has ended unless you have the money to pay for corporate-level support—it’s time to move on.

I most definitely won’t provide support to readers who use the hack. There isn’t any way I can create a test system that covers all of the contingencies, so if you come to me with a book-related issue and have the hack installed, I won’t be able to provide you with any support. This may seem like a hard-nosed attitude to take, but there simply isn’t any way I can support you.

 

Saving Development Time Using CSS Templates and Boilerplate

Recently I created a post entitled Differentiating Between CSS Boilerplate, Template, and Frameworks, which defines the differences between these technologies. However, I stopped short of making any product recommendations. A few of you have asked about the products I’ve tried and liked, so I put some suggestions together in a recent article, 5 Truly Effective CSS Boilerplates and Frameworks. Mind you, there are scores of such products available on the market. This article represents the cream of the crop from my perspective, based on the products I’ve actually tried and found to work well in my particular circumstances.

There are many criteria for choosing a development product, and you probably have specific needs that you must address. For example, you might have specific packages that a solution must work with because those packages form the basis of a mission-critical application. Only you know what these criteria are, and it pays to write them down before you look for any third-party product. However, there are some questions that you can ask yourself before you begin the search.

 

  • Will the product actually save me time?
  • Can I create a unique look using it?
  • Is the product simple to use based on my experience level?
  • Does the product come with good documentation?
  • How much community support can I expect to obtain with this product?
  • Does the vendor clearly state which packages this product will work with?
  • Has anyone investigated, validated, and qualified the vendor’s claims?


These questions will definitely get you started in the right direction, no matter what your other needs might be. Learning to identify products that meet your specific needs is important because no one can perform that particular part of the software development process for you. Yes, you can hire a consultant to guide you in your efforts, but when all is said and done, you need to make certain decisions regarding the products you use, especially when it comes to intangibles such as appeal and usefulness in making a statement your organization can live with.

In addition to these questions, you also need to ask yourself organization-specific questions, such as whether you need access to a Content Delivery Network (CDN). Some organizations prefer to host third-party software on their own servers (which requires a download), but other organizations prefer to use a CDN that makes it possible to access the product from a remote server. There are advantages and disadvantages to each approach.
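If you want to hedge your bets, you can even combine the two approaches. The following JavaScript sketch (the CDN URL and file names are placeholders, not recommendations) loads a stylesheet from a CDN and falls back to a self-hosted copy when the CDN copy fails to load; note that older browsers don’t always report stylesheet load errors reliably:

  // Load a stylesheet from a CDN, falling back to a self-hosted copy.
  function loadStylesheet(cdnHref, localHref) {
    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.href = cdnHref;
    link.onerror = function () {
      // The CDN copy didn't load, so switch to the copy on our own server.
      link.onerror = null;              // avoid looping if the local copy fails too
      link.href = localHref;
    };
    document.head.appendChild(link);
  }

  loadStylesheet(
    'https://cdn.example.com/framework/1.0/framework.min.css',  // remote (CDN) copy
    '/css/framework.min.css');                                   // self-hosted copy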

What sorts of questions do you ask yourself when looking for a third-party product to save development time? The kinds of questions you ask are important, and I’d love to know more about the processes that take place in other organizations. Let me know your thoughts at John@JohnMuellerBooks.com.

 

Thinking of All the Possibilities in Software Design

A number of books on my shelf, some of which I’ve written, broach the topic of divergent thinking. Unfortunately, many developers (and many more managers) don’t really grasp the ideas behind divergent thinking. Simply put, divergent thinking starts with a single premise and views as many permutations of that premise as possible. Most developers don’t take the time to use divergent thinking as part of the development process because they don’t see a use for it. In fact, most books fall short of even discussing the potential for divergent thinking, much less naming it as a specific element of application design. I’ve explored the topic before and a reader recently reminded me of an article I wrote on the topic entitled, Divergent Versus Convergent Thinking: Which Is Better for Software Design?.

The process that most developers rely upon is convergent thinking, which is where you convert general goals and needs into specific solutions that appear within a single application. The difference between the two modes of thinking is that divergent thinking begins with a single specific premise, while convergent thinking begins with a number of general premises. More specifically, divergent thinking is the process you use to consider all of the possibilities before you use convergent thinking to create specific solutions to those possibilities.

There is an actual cycle between divergent and convergent thinking. You use divergent thinking when you start a project to ensure you discover as many ways as possible to address user requirements. Once you have a number of possibilities, you use convergent thinking to consider the solutions for those possibilities in the form of a design. The process will point out the possibilities that will work and those that won’t. Maintaining a middle ground between the extremes of divergent and convergent thinking helps create unique solutions, yet keeps the project on track and maintains project team integrity. Managing the cycle is the job of the person in charge of the project, who is often the CIO, but could be someone in another management position. So, the manager has to be knowledgeable about software design in order for the process to work as anticipated.

One of the reasons that many applications fail today is the lack of divergent thinking as part of the software design process. We’re all too busy thinking about solutions, rather than possibilities. Yet the creative mind and the creative process are based on divergent thinking. The reason we’re getting the same solutions rehashed in a million different ways (resulting in a lack of interest in new solutions) is the lack of divergent thinking in the development process.

In fact, I’d go so far as to say that most developers have never even heard of divergent thinking (and never heard convergent thinking called by that name). With this in mind, I’ve decided to provide some resources you can use to learn more about divergent thinking and possibly add it to your application design process.

 

These are just four of several hundred articles I located on divergent thinking online. I chose these particular four articles because they represent a range of ideas that most developers will find helpful, especially the idea of not applying stereotypical processes when trying to use divergent thinking. Stereotypes tend to block creative flow, which is partly what divergent thinking is all about.

The bottom line is that until divergent thinking is made part of the software design process, we’ll continue to suffer through rehashed versions of the current solutions. What is your view of divergent thinking? Do you see it as a useful tool or something best avoided? Let me know your thoughts at John@JohnMuellerBooks.com.

 

Backslash (\) Versus Forward Slash (/)

A number of readers have noted recently that I’ve been using the forward slash (/) more and more often in my books to denote hard drive paths. Of course, when working on Windows systems (and DOS before that) it’s common practice to use the backslash (\) for paths. However, using a forward slash has certain benefits, not the least of which is portability. It turns out that the forward slash works well on other platforms and that it also works on Windows systems without problem (at least in most cases). Using a forward slash whenever possible means that your path will work equally well on Windows, Mac, Linux, and other platforms without modification.

In addition, when working with languages such as C++, JavaScript, Java, and even C#, you must exercise care when using the backslash because these languages treat it as an escape character (the start of a character pair that denotes something special). For example, \n defines a newline character and \r a carriage return. In order to produce a single backslash, you must actually type two of them (\\). The potential for error is relatively high in this case. Forward slashes appear singly, so you can copy a path directly rather than manipulating it in various ways.
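Here’s a quick JavaScript illustration of the problem (the path itself is made up; the other languages I mentioned follow similar escaping rules, although some of them flag the mistake at compile time while JavaScript stays silent):

  // Backslashes must be doubled because a single backslash starts an escape sequence.
  var escapedPath = "C:\\Users\\John\\Documents\\stats.csv";   // what you meant

  // Forget the doubling and the string quietly changes: \n becomes a newline and
  // unrecognized escapes such as \U simply lose the backslash.
  var brokenPath = "C:\Users\John\new\stats.csv";              // not the path you expect

  // Forward slashes need no escaping and work on Windows, Mac, and Linux alike.
  var portablePath = "C:/Users/John/Documents/stats.csv";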

There are situations where you must use a backslash in the Windows (and also the DOS) environment. You can type CD / or CD \ and get to the root directory of a Windows system. However, if you try to type Dir /, you’ll get an error; in order to obtain a directory listing of the root directory, you must type Dir \ instead. In fact, many native utilities require that you use the backslash for input. On the other hand, many Windows APIs accept the forward slash without problem. When in doubt, try both slashes to see which works. If you see a forward slash used in one of my books, the forward slash will definitely work in that instance. In general, I only use the forward slash when compatibility with other platforms is a consideration. Windows-specific platform information will still use the backslash.

As things stand today, the more you can do to make your applications run on multiple platforms, the better off you’ll be. Users don’t just rely on Windows any longer—they rely on a range of platforms that you might be called upon to support. Having something like an incorrectly formatted path in your code is easy to overlook, but devastating in its effects on the usability of your application.

Let me know your concerns about the use of backslashes and forward slashes in my books at John@JohnMuellerBooks.com. The book that uses the largest number of forward slashes for paths right now is C++ All-In-One Desk Reference For Dummies. I want to be sure everyone is comfortable with my use of these special symbols and understands why I’ve used one or the other in a particular circumstance.

 

Differentiating Between CSS Boilerplate, Template, and Frameworks

You often see the terms boilerplate, template, and framework used almost interchangeably online when people discuss CSS. In fact, some readers are confused about these terms and have written me about them. There are distinct differences between the three, and you really do need to know what they are.

Boilerplate is code that you can simply cut and paste into an application, or import from the file that contains it. Boilerplate code is designed to perform a specific task without modification: someone else has written code that performs a common task, and you borrow it for your application (assuming they make it publicly available). It would be nearly impossible to create a complete application using just boilerplate code because it’s inflexible. Developers often stitch multiple sources of boilerplate code together to perform specific tasks, and then add custom code to the mix to customize the output.

Templates, like boilerplate, provide a means of reusing code, but the implication is that the code is meant to be modified in some way and that it’s incomplete as presented. Yes, a template may provide a default presentation, but the default is usually bland and simplistic. Using templates makes it possible to create a customized appearance without writing a lot of code. Of all the readily available code categories, templates are the most common. Template makers also commonly create wizards to further reduce the time a developer needs to employ a template. The downside to templates is that they require some level of skill and more time to use than boilerplate.

A framework can be viewed as a container that holds other sorts of code and presents it in an organized manner. However, a framework still consists of modifiable code. The difference is that a framework commonly controls the layout of a page, rather than the appearance of the content it contains. Developers commonly combine frameworks with both templates and boilerplate to create a finished site. Frameworks also generally rely on a wizard to make creating the site easier.

Using the correct term for the kind of code that you’ve created makes it easier for others to know whether your solution will help them. In many cases, developers don’t want to be bothered with details, so boilerplate is precisely the solution needed. At other times, management has decided that a site with a perfectly usable framework needs a new look, so the developer may want a template to create a customized appearance within the existing framework. There are still other times where a new layout will better address the needs of site users, so a new framework is required.

Precise terminology makes it possible for people to communicate with each other. The creation and use of jargon is essential to any craft because precision takes on new importance when describing a process or technique. Developers need to use precise terms to ensure they can communicate effectively. Do you find that there is any ambiguous use of terminology in my books? If so, I always want to know about it and will take time to define the terms better in my blog. Let me know about your terminology concerns at John@JohnMuellerBooks.com.

 

In Praise of Dual Monitors

A lot of people have claimed that the desktop system is dead—that people are only interested in using tablets and smartphones for computing. In fact, there is concern that the desktop might become a thing of the past. It’s true that my own efforts, such as HTML5 Programming with JavaScript for Dummies and CSS3 for Dummies, have started to focus on mobile development. However, I plan to continue using my desktop system when working because it’s a lot more practical and saves me considerable time. One such time saver is the use of dual monitors.

Yes, I know that some developers use more than just two monitors, but I find that two monitors work just fine. The first monitor is my work monitor—the monitor I use for actually typing code. The second monitor is my view monitor. When I run the application, the output appears on the second monitor so that I can see the result of changes I’ve made. Using two monitors lets me easily correlate the change in code to the changes in application design. Otherwise, I’d be wasting time switching between the application output and my IDE.

I also use two monitors when writing my books. The work monitor contains my word processor, while my view monitor contains the application I’m writing about. This is possibly one time when a third monitor could be helpful—one to hold the word processor, one to hold the IDE, and one to view the application output. However, in this case, a third monitor could actually slow things down because the time spent viewing the output of an example is small when compared to creating a production application.

The concept of separating work from the source of information used to perform the work isn’t new. People have used the idea for thousands of years. For example, when typewriters were used to output printed text, the typist used a special stand to hold the manuscript being typed. Having a view of your work and a separate surface to actually work on appears throughout history because it’s a convenient way to perform tasks quickly. By employing dual monitors, I commonly get a 15 to 33 percent increase in output, simply because I can see my work and its associated view at the same time.

Working with dual monitors not only saves time, but can also reduce errors. By typing as I view the output of applications, I can more accurately relay the text of labels and other information the application provides. The same holds true when viewing information sources found in other locations. Seeing the information as I type makes errors far less likely.

Don’t get the idea that I support using dual monitors in every situation. Many consumer-oriented computer uses are served just fine with a single monitor. For example, there isn’t a good reason to use two monitors when viewing e-mail in many cases—at least, not at the consumer level (you could make a case for using dual monitors when working with e-mails and a calendar to manage tasks, for example). Dual monitors commonly see use in the business environment because people aren’t necessarily creating their own information source—the information comes from a variety of sources that the user must view in order to use reliably.

Do you see yourself using dual monitors? If you use such a setup now, how do you employ it? Let me know at John@JohnMuellerBooks.com.

 

Extending the Horizons of Computer Technology

OK, I’ll admit it—at one time I was a hardware guy. I still enjoy working with hardware from time to time, and it’s my love of hardware that helps me see the almost infinite possibilities for extending computer technology to do all sorts of things that we can’t even envision right now. The fact that computers are simply devices for performing calculations really, really fast doesn’t actually matter. The sources of data input do matter, however. As computer technology has progressed, the number of sensor sources available to perform data input has soared. It’s the reason I recently wrote an article entitled, Tools to Help You Write Apps That Use Sensors.

The sensors you can connect to a computer today can do just about any task imaginable. You can detect everything from solar flares to microscopic animals. Sensors can hear specific sounds (such as breaking glass) and detect ranges of light that humans can’t even see. You can rely on sensors to monitor temperature extremes or the amount of liquid flowing in a pipe. In short, if you need to determine when a particular real world event has occurred, there is probably a sensor to do the job for you.

Unfortunately, working with sensors can also be difficult. You don’t simply plug a sensor into your computer and see it work. The computer needs drivers and other software to interact with the sensor and interpret the data it provides. Given that most developers have better things to do with their time than write arcane driver code, obtaining the right tool for the job is absolutely essential. My article points out some tricks of the trade for making sensors a lot easier to deal with so that you can focus on writing applications that dazzle users, rather than writing drivers they’ll never see.
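As a small taste of what the right tooling buys you, here’s a minimal JavaScript sketch that reads an accelerometer; it assumes a browser that implements the W3C Generic Sensor API (support varies, and you may need to serve the page over HTTPS and obtain permission):

  // Read an accelerometer without writing a single line of driver code.
  if ('Accelerometer' in window) {
    var accelerometer = new Accelerometer({ frequency: 10 });   // ten readings per second

    accelerometer.addEventListener('reading', function () {
      console.log('x=' + accelerometer.x +
                  ' y=' + accelerometer.y +
                  ' z=' + accelerometer.z);
    });

    accelerometer.addEventListener('error', function (event) {
      console.log('Sensor unavailable: ' + event.error.name);   // e.g., NotAllowedError
    });

    accelerometer.start();
  } else {
    console.log('This browser does not expose an accelerometer.');
  }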

As computer technology advances, the inputs and outputs that computers can handle will continue to increase. Sensors provide inputs, but the outputs will become quite interesting in the future as well. For example, sensors in your smartphone could detect that you’re having a heart attack and automatically call for help. For that matter, the smartphone might even be programmed to help in some significant way. It’s hard to know precisely how technology will change in the future because it has changed so much in just the last few years.

What sorts of sensors have you seen at work in today’s world? Do you commonly write applications that use uncommon sensor capabilities? Let me know about your use of sensors at John@JohnMuellerBooks.com. I’d really be interested to know how many people are interested in these sorts of technologies so that I know whether you’d like to see future blog posts on the topic.

 

Considering Perception in User Interface Design

I read a couple of articles recently that reminded me of a user interface design discussion I once had with a friend of mine. First, let’s discuss the articles. The first, New Record for Human Brain: Fastest Time to See an Image, says that humans can actually see something in as little as 13 ms. That short time frame provides the information the brain needs to target a point of visual focus. This article leads into the second, ‘Sixth Sense’ Can Be Explained by Science. In this case, the author explains how the sixth sense that many people regard as supernatural in origin is actually explainable through scientific means. The brain detects a change—probably as a result of that 13 ms view—and informs the rest of the mind about it. However, the change hasn’t been targeted for closer inspection, so the viewer can’t articulate it. In short, you know the change is there, but you can’t say what has actually changed.

So, you might wonder what this has to do with site design. It turns out that you can use these facts to help focus user attention on specific locations on your site. Now, I’m not talking about the use of subliminal perception, which is clearly illegal in many locations. Rather, it’s possible to do as a friend suggested when designing a site: change a small, but noticeable, element each time a page is reloaded. Of course, you need not reload the entire page. Technologies such as Asynchronous JavaScript And XML (AJAX) make it possible to reload just a single element as needed. (Changing a single element in a desktop application is even easier because nothing special is needed to do it.) The point of making this change is to cause the viewer to look harder at the element you most want them to focus on. It’s just another method for ensuring that the right area of a page or other user interface element gets viewed.
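Here’s a minimal JavaScript sketch of the technique (the URL and element ID are hypothetical stand-ins for whatever your site uses); rather than reloading the page, it refreshes one small element on a timer:

  // Periodically refresh a single page element instead of the whole page.
  function refreshHighlight() {
    var request = new XMLHttpRequest();
    request.open('GET', '/fragments/highlight.html');
    request.onload = function () {
      if (request.status === 200) {
        // Replace just the one element you want the viewer to focus on.
        document.getElementById('highlight').innerHTML = request.responseText;
      }
    };
    request.send();
  }

  setInterval(refreshHighlight, 30000);   // a subtle change every 30 seconds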

However, the articles also make for interesting thoughts about the whole issue of user interface design. Presentation is an important part of design. Your application must use good design principles to attract attention. However, these articles also present the idea of time as a factor in designing the user interface. For example, the order in which application elements load is important because the brain can perceive the difference. You might not consciously register that element A loaded some number of milliseconds sooner than element B, but subconsciously, element A attracts more attention because it registered first and your brain targeted it first.

As science continues to probe the depths of perception, it helps developers come up with more effective ways to present information, ways that enhance the user experience and increase the benefit of any given application to the user. However, in order to make any user interface change effective, you must apply it consistently across the entire application and ensure that the technique isn’t used to an extreme. Choosing just one element per display (whether a page, window, or dialog box) to change is important. Otherwise, the effectiveness of the technique is diluted and the user might not notice it at all.

What is your take on the use of perception as a means of controlling the user interface? Do you feel that subtle techniques like the ones described in this post are helpful? Let me know your thoughts at John@JohnMuellerBooks.com.

 

Choosing Variable Names

It often surprises me that developers seem to choose completely useless variable names like MyVariable when creating an application. Although MyVariable could be an interesting variable name for an example in a book, it never has a place in any sort of production code. Even then, I try to create book examples with meaningful variable names, especially when getting past the initial “Hello World” example. Variable names are important because they tell others:

 

  • What sort of information the variable stores
  • When the variable is commonly used
  • Where the variable is used
  • How to use the variable correctly
  • Why the variable is important


In some cases, the variable name could even indicate who created the variable, although this sort of information is extremely rare. If you never thought a variable name should contain all that information, then perhaps you haven’t been choosing the best variable names for your application.

Even with these requirements in mind, choosing a variable name can be uncommonly hard if you want to maximize the name’s value to both yourself and other developers. Some organizations make the selection process easier by following certain conventions. For example, a form of Hungarian Notation, where certain type prefixes, suffixes, and other naming conventions are used, is a common way to reduce the complexity of creating a variable name. In fact, Hungarian Notation (or some form of it) is often used to name objects, methods, functions, classes, and other programming elements as well. For example, NamArCustomers could be an array of customer names (Nam for names, Ar for array). The use of these two prefixes would make it instantly apparent when the variable is being used incorrectly, such as when a list of numbers is assigned to the array. The point is that an organizational variable naming policy can reduce complexity, make the names easy for anyone to read, and reduce the time the developer spends choosing a name.

 

Before I get a ton of e-mail on the topic, yes, I know that many people view Hungarian notation as the worst possible way to name variables. They point out that it only really works with statically typed languages and that it doesn’t work at all for languages such as JavaScript. All that I’m really pointing out is that some sort of naming convention is helpful—whether you use something specific like Hungarian Notation is up to you.
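Whatever convention you settle on, the goal is the same: the name alone should tell the reader what the variable holds and how to use it. Here’s a small JavaScript sketch (the names are invented for illustration) that contrasts the Hungarian-style prefix mentioned above with a plainer descriptive style; either one beats MyVariable:

  // Hungarian-style: prefixes encode the content (Nam) and the structure (Ar).
  var NamArCustomers = ['Ann', 'Bob', 'Carlos'];

  // Plain descriptive style: the name itself says what the variable holds.
  var customerNames = ['Ann', 'Bob', 'Carlos'];

  // Either way, misuse stands out immediately during a code review.
  NamArCustomers = [4, 8, 15];        // numbers in an array of names? Clearly wrong.
  customerNames = [4, 8, 15];         // just as obviously wrong.

  // Compare that to a name that tells the reader nothing at all.
  var MyVariable = [4, 8, 15];        // right or wrong? Who can say.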


Any variable name you create should convey the meaning of that variable to anyone. If you aren’t using some sort of pattern or policy to name the variables, then establish a convention that helps you choose names in a consistent manner and document it. When you create a variable name, you need to consider these kinds of questions:

 

  1. What information does the variable contain (such as a list of names)?
  2. How is the variable used (such as locally or globally, or to contain coordinates, or a special kind of object)?
  3. When appropriate, what kind of information does the variable contain (such as a string or the coordinate of a pixel on screen)?
  4. Is the variable used for a special task (such as data conversion)?
  5. What case should prefixes, suffixes, and other naming elements appear in when a language is case sensitive?

The point is that you need to choose variable names with care so that you know what they mean later. Carefully chosen variable names make it possible to read your code with greater ease and locate bugs a lot faster. They also make it easier for others to understand your code and for you to remember what the code does months after you’ve written it. Most important of all, useful variable names help you see immediately when a variable is being used the wrong way, such as assigning the length of a name string to a coordinate position on screen (even though both variables are integer values). Let me know your thoughts about variable naming at John@JohnMuellerBooks.com.