Cross Platform Functionality for .NET

Microsoft has recently announced that it will port the .NET Framework to the Mac and Linux platforms. This is welcome news because more and more of my readers have expressed an interest in developing applications that run on multiple platforms. It’s the reason that I cover Windows, Linux, and Mac requirements in books such as Beginning Programming with Python For Dummies. Until now, I usually had to include some mention of alternative solutions, such as Mono, to help my readers achieve cross-platform functionality. (For readers with older versions of my books, Mono is actually delivered by Xamarin now; see my announcement in the An Update About Mono post.) Even though Mono makes a valiant effort to make cross-platform development a reality, it does have limits, so the Microsoft announcement is welcome. Now we have to see whether Microsoft actually delivers on its promises.

There has been a lot of analysis about the announcement. You can find some general information about the product on eWeek. The information is pretty much a reworded version of the Microsoft announcement, but I found it clear and succinct. The InfoWorld writeup provides additional information and takes Microsoft to task for not completely opening the .NET Framework. There are still some licensing issues to consider. For my part, I wonder when Microsoft will make it possible to fully use C# on any platform. At some point, Microsoft must make it possible to develop applications on a platform other than Windows or developers will continue to lose interest.

One of the biggest questions I’ll need to answer for you is whether any of my book examples will run on other platforms. Given how Microsoft has done things in the past, it seems unlikely that you’ll be able to use any of my existing book examples on other platforms. The code might possibly work, but the downloadable source would have to be redone to make it possible to compile the examples with the new tools. So, for now, I’m saying outright that you need to continue to use my books with the version of Visual Studio for which they are written and not assume that the examples will work on other platforms.

I do find the news exciting because there is finally a chance that I’ll be able to address your needs better when it comes to working with languages such as C#. Yes, working with solutions such as Mono did allow you to perform certain tasks across platforms, but it never offered the potential for writing complete applications of nearly any type and having them work anywhere, which is where the world as a whole has been headed for a long time. I applaud Microsoft’s efforts to move forward.

Please do contact me with your questions regarding cross-platform functionality in .NET and how it affects my books at John@JohnMuellerBooks.com. No, I can’t answer your question about how Microsoft will implement cross-platform functionality in the new versions of .NET, but yes, I do want to hear about your ideas for book updates based on this technology. What I want to do is help you use this new functionality as soon as possible.

 

API Security and the Developer

As our world becomes ever more interconnected, developers rely more and more on code and data sources outside of the environment in which the application runs. Using external code and data sources has a considerable number of advantages, not the least of which is keeping application development on schedule and within budget. However, working with APIs, whether local or on someone else’s system, means performing additional levels of testing. It isn’t enough to know that the application works as planned when used in the way you originally envisioned. That’s why I wrote API Security Testing: Think Like a Bad Guy. This article helps you understand the sorts of attacks you might encounter when working with a third party API or allowing others to use your API.

Knowing the sources and types of potential threats can help you create better debugging processes for your organization. In reality, most security breaches today point to a lack of proper testing and an inability to debug applications because the inner workings of those applications are poorly understood by the people who maintain them. Blaming the user for interacting with an application incorrectly, hackers for exploiting API weaknesses, or third parties for improperly testing their APIs are simply excuses. Unfortunately, no one is interested in hearing excuses when an application opens a door to user data of various types.

It was serendipity that I was asked to review the recent Snapchat debacle and write an article about it. My editorial appears as Security Lessons Courtesy of Snapchat. The problems with Snapchat are significant, and they could have been avoided with proper testing procedures, QA, and debugging techniques. This vendor is doing precisely all the wrong things—I truly couldn’t have asked for a better example to illustrate the issues that can occur when APIs aren’t tested correctly and fully. The results of the security breach are truly devastating from a certain perspective. As far as I know, no one had their identity stolen, but many people have lost their dignity and privacy as a result of the security breach. Certainly, someone will try to extort money from those who have been compromised. After all, you really don’t want your significant other, your boss, or your associates to see that inappropriate picture.

The need to test APIs carefully, fully, and without preconceived notions of how the user will interact with the API is essential. Until APIs become more secure and developers begin to take security seriously, you can expect a continuous stream of security breaches to appear in both the trade press and national media. The disturbing trend is that vendors now tend to blame users, but this really is a dead end. The best advice I can provide to software developers is to assume the user will always attempt to use your application incorrectly, no matter how much training the user receives.

Of course, it isn’t just APIs that require vigorous testing, but applications as a whole. However, errors in APIs tend to be worse because a single API can see use in a number of applications. So, a single error in the API is spread far wider than a similar error in an application. Let me know your thoughts on API security testing at John@JohnMuellerBooks.com.

Examining the Calculator in Windows 7 (Part 2)

A while back, over two years ago in fact, I uploaded a post entitled, “Examining the Calculator in Windows 7.” Since that time, a number of people have asked about the other features that the new calculator includes. Yes, Microsoft has introduced some rather significant problems, but there are some good things about the new calculator as well.

The good things appear on the View menu. When you click this menu, you see options at the bottom of the list that provide access to the special features, as shown here.

The View menu includes options for unit conversion, date conversion, and worksheets.
The Windows 7 Calculator View Menu

The Unit Conversion and Date Conversion options are the most useful. However, the worksheets can prove helpful when you need them. After all, it’s not often you need to figure out a new mortgage, vehicle lease amount, or the fuel economy of your vehicle (and if you do such work for a living, you’ll have something better than the Windows Calculator to use). Of the new features, I personally use Unit Conversion the most, and many people likely will. To see what this option provides, click Unit Conversion. You see a new interface like the one shown here:

The Unit Conversion display makes it possible to convert from one unit of measure to another.
Calculator Unit Conversion Display

You start using this feature by selecting the type of unit you want to convert. As you can see from this list, the kinds of conversions you can perform are extensive:

Select a conversion type to determine what options are offered in the From and To fields.
The Calculator Supports a Healthy List of Conversion Types

The option you select determines the content of the From and To fields. For example, if you want to convert from kilometers to miles, you select the Length option. After you select the type of unit, type a value in the From field and select the From field unit of measure. Select the To field unit of measure last. Here is what happens when you convert 15 kilometers to miles:

The output shows that converting 15 kilometers to miles equals 9.32056788356001 miles.
Converting Kilometers to Miles

I’ve found use for most of the entries in the types list at one time or another. Every one of them works quite well, and you’ll be happy they’re available when you need them. The Date Conversion option can be similarly useful if you work with dates relatively often, as I do. However, I can’t see many people needing to figure out the number of days between two dates on a regular basis. Even so, this feature is probably used more often than any of the worksheets.

The ability to perform conversions of various kinds and to access the worksheets that Windows 7 Calculator provides isn’t enough to change my opinion. The implementation of the Calculator is extremely flawed and I stick by my review in the first posting. However, you do have the right to know there are some positives, which is the point of this post. Let me know your thoughts about Calculator now that you have a better view of it at John@JohnMuellerBooks.com.

 

Coding Schools and the Learning Process

There are three essential ways to begin a career as a developer. The first is to get a college degree in the subject, which is normally a Bachelor of Computer Science or a Bachelor of Information Technology (amongst other degrees). The second is to teach yourself the trade, which means spending a lot of time with books and in front of your screen working through online tutorials. The third is a new option, coding school. The third option has become extremely popular due to limitations in the first two techniques.

The cost of a college education has continued to skyrocket over the past few decades until it has started to elude the grasp of more than a few people. I’ve read estimates that a college degree now costs between $20,000 and $100,000 in various places. How much you actually pay depends on the school, your personal needs, and the electives you choose. The point is that many people are looking for something less expensive.

A college education also requires a large investment in time. A four-year degree may require five or six years to actually complete because most people have to work while they’re going to school. A degree is only four years when you can go full time and apply yourself fully. Someone who is out of work today and needs a job immediately can’t wait for five or six years to get a job.

Teaching yourself is a time-honored method of obtaining new skills. I’ve personally taught myself a considerable number of skills. However, I’m also not trying to market those skills to someone else. My self-taught skills usually come in the areas of crafting or self-sufficiency (or sometimes a new programming language). The problem with being self-taught is that you have no independent assessment of your skills, and most employers can’t take time to test them. An employer needs someone with a proven set of skills. Consequently, self-teaching is extremely useful for learning new hobbies or adding to existing (proven) skills, but almost valueless when seeking a new job. In addition, few people are actually motivated enough to learn a new skill completely (at the same level as a college graduate) on their own.

Coding schools overcome the problem with self-teaching because they offer proof of your skills and ensure you get a consistent level of training. You get the required sheepskin to show to employers. They also address deficiencies in the college approach. The time factor is favorable because most of these schools promise to teach you basic development skills in three months (compared to the five or six years required by a college). In addition, the cost is significantly less (between $6,000 and $18,000). So, it would seem that going to a coding school is the optimum choice.

Recently people have begun to question the ability of coding schools to fulfill the promises they make. It’s important to consider what a coding school is offering before you go to one. The schools vary greatly in what they offer (you can see reviews of three popular code schools at http://www.mikelapeter.com/code-school-vs-treehouse-vs-codecademy-a-review/). However, there are similarities between schools. A coding school teaches you the bare basics of a language. You don’t gain the sort of experience that a college graduate would have. In addition, coding schools don’t teach such concepts as application design or how to work in a team environment. You don’t learn the low-level concepts of how application development works. I don’t know if building a compiler is still part of the curriculum at colleges, but it was one of my more important learning experiences because I gained insights into how my code actually ended up turning switches on and off within the chips housed in the computer.

I see coding schools as fulfilling an important role—helping those who do have programming skills to build competence in a new language quickly. In addition, a coding school could provide an entry point for someone who thinks they may want a computer science degree, but isn’t certain. Spending a short time in a coding school is better than spending a year or two in college and only then finding out that computer science isn’t what the person wants. Coding schools could also help people who need to know how to write simple applications as part of another occupation. For example, a researcher could learn the basic skills required to write simple applications to aid in their main occupation.

People learn in different ways. It’s the lesson that readers keep driving home to me. Some people learn with hands-on exercises, some by reading, and still others by researching on their own. Coding schools can fulfill an important role in teaching computer science, but they’re not even close to a complete solution. In order to get the full story about computer science, a student must be willing to invest the required time. Until we discover some method for simply pouring information into the minds of people, the time-consuming approach to learning must continue as it has for thousands of years. There really aren’t any shortcuts when it comes to learning. Let me know your thoughts about coding schools at John@JohnMuellerBooks.com.

 

An Update on the RunAs Command

It has been a while since I wrote the Simulating Users with the RunAs Command post that describes how to use the RunAs command to perform tasks that the user’s account can’t normally perform. (The basics of using the RunAs command appear in both Administering Windows Server 2008 Server Core and Windows Command-Line Administration Instant Reference.) A number of you have written to tell me that there is a problem with using the RunAs command with built-in commands—those that appear as part of CMD.EXE. For example, when you try the following command:

RunAs /User:Administrator "md \Temp"

you are asked for the Administrator password as normal. After you supply the password, you get two error messages:

RUNAS ERROR: Unable to run - md \Temp
2: The system cannot find the file specified.

In fact, you find that built-in commands as a whole won’t work as anticipated. One way to overcome this problem is to place the commands in a batch file and then run the batch file as an administrator. This solution works fine when you plan to execute the command regularly. However, it’s not optimal when you plan to execute the command just once or twice. In this case, you must execute a copy of the command processor and use it to execute the command as shown here:

RunAs /User:Administrator "cmd /c \"md \Temp\""

This command looks pretty convoluted, but it’s straightforward if you take it apart a little at a time. At the heart of everything is the md \Temp part of the command. In order to make this a separate command, you must enclose it in double quotes. Remember to escape the double quotes that appear within the command string by using a backslash (as in \").

To execute the command processor, you simply type cmd. However, you want the command processor to start, execute the command, and then terminate, so you also add the /c command line switch. The command processor string is also enclosed within double quotes to make it appear as a single command to RunAs.
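If you prefer the batch file approach mentioned earlier for a command you execute regularly, here is a minimal sketch (MakeTemp.bat is a hypothetical name and location):

rem MakeTemp.bat - wraps the built-in md command in a file that the
rem command processor can execute on behalf of RunAs.
md \Temp

You would then execute the batch file through the command processor in the usual way:

RunAs /User:Administrator "cmd /c C:\MakeTemp.bat"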

 

Make sure you use forward slashes and backslashes as needed. Using the wrong slash will make the command fail.

The RunAs command can now proceed as you normally use it. In this case, the command only includes the username. The RunAs command always prompts for the password (there is no command line switch for supplying one), but you can add the /SaveCred switch to store the credentials for later sessions. Let me know if you find this workaround helpful at John@JohnMuellerBooks.com.

 

Thinking of All the Possibilities in Software Design

A number of books on my shelf, some of which I’ve written, broach the topic of divergent thinking. Unfortunately, many developers (and many more managers) don’t really grasp the ideas behind divergent thinking. Simply put, divergent thinking starts with a single premise and views as many permutations of that premise as possible. Most developers don’t take the time to use divergent thinking as part of the development process because they don’t see a use for it. In fact, most books fall short of even discussing the potential for divergent thinking, much less naming it as a specific element of application design. I’ve explored the topic before, and a reader recently reminded me of an article I wrote entitled Divergent Versus Convergent Thinking: Which Is Better for Software Design?

The process that most developers rely upon is convergent thinking, which is where you convert general goals and needs into specific solutions that appear within a single application. The difference between the two modes of thinking is that divergent thinking begins with a single specific premise, while convergent thinking begins with a number of general premises. More specifically, divergent thinking is the process you use to consider all of the possibilities before you use convergent thinking to create specific solutions to those possibilities.

There is an actual cycle between divergent and convergent thinking. You use divergent thinking when you start a project to ensure you discover as many ways to address user requirements as possible. Once you have a number of possibilities, you use convergent thinking to consider the solutions for those possibilities in the form of a design. The process will point out those possibilities that will work and those that won’t. Maintaining a middle ground between the extremes of divergent and convergent thinking helps create unique solutions, yet keeps the project on track and maintains project team integrity. Managing the cycle is the job of the person in charge of the project, who is often the CIO, but could be some other management position. So, the manager has to be knowledgeable about software design in order for the process to work as anticipated.

One of the reasons that many applications fail today is the lack of divergent thinking as part of the software design process. We’re all too busy thinking about solutions, rather than possibilities. Yet, the creative mind and the creative process are based on divergent thinking. The reason we’re getting the same solutions rehashed in a million different ways (resulting in a lack of interest in new solutions) is the lack of divergent thinking in the development process.

In fact, I’d go so far as to say that most developers have never even heard of divergent thinking (and never heard convergent thinking called by that name). With this in mind, I’ve decided to provide some resources you can use to learn more about divergent thinking and possibly add it to your application design process.

 

These are just four of several hundred articles I located on divergent thinking online. I chose these particular four articles because they represent a range of ideas that most developers will find helpful, especially the idea of not applying stereotypical processes when trying to use divergent thinking. Stereotypes tend to block creative flow, which is partly what divergent thinking is all about.

The bottom line is that until divergent thinking is made part of the software design process, we’ll continue to suffer through rehashed versions of the current solutions. What is your view of divergent thinking? Do you see it as a useful tool or something best avoided? Let me know your thoughts at John@JohnMuellerBooks.com.

 

Using the Set Command to Your Advantage

Last week I created a post about the Windows path. A number of people wrote me about that post with questions. Yes, you can use the technique for setting the Path environment variable to set any other environment variable. The Windows Environment Variables dialog box works for any environment variable—including those used by language environments such as Java, JavaScript, and Python. Windows doesn’t actually care what sort of environment variable you create using the method that I discuss in that post. The environment variable will appear in every new command prompt window you create for either a single user or all users of a particular system, depending on how you create the environment variable.

A few of you took me to task for not mentioning the Set command. This particular command appears in both Administering Windows Server 2008 Server Core and Windows Command-Line Administration Instant Reference. It’s a useful command because you can temporarily configure a command prompt session to support a new set of settings. When the session is ended, the settings are gone. Only those settings you create as part of the Environment Variables window have any permanence. There are other tricks you can use, but using Set for temporary environment variables and the Environment Variables window for permanent environment variables are the two most common approaches.

In order to see the current environment variables you simply type Set and press Enter at the command line. If you add a space and one or more letters, you see just the matching environment variables. For example, type Set U and press Enter to see all of the environment variables that begin with the letter U.
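For example, a session might look like this (the variables and values you see will differ from system to system):

C:\>Set U
USERDOMAIN=MAIN
USERNAME=John
USERPROFILE=C:\Users\John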

To set an environment variable, you add the name of the variable, an equals sign (=), and the variable value. For example, to set the value of MyVariable to Hello, you type Set MyVariable=Hello and press Enter. To verify that MyVariable does indeed equal Hello, you type Set MyVariable and press Enter. The command prompt will display the value of MyVariable. When you’re done using MyVariable, you can type Set MyVariable= and press Enter. Notice the addition of the equals sign. If you ask for the value of MyVariable again, the command prompt will tell you it doesn’t exist.
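Here is that whole cycle in a command prompt session:

C:\>Set MyVariable=Hello

C:\>Set MyVariable
MyVariable=Hello

C:\>Set MyVariable=

C:\>Set MyVariable
Environment variable MyVariable not defined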

Newer versions of the command prompt provide some additional functionality. For example, you might set MyVariable within a batch file and not know what value it should contain when you create the batch file. In this case, you can prompt the user to provide a value using the /P command line switch. For example, if you type Set /P MyVariable=Type a value for my variable: and press Enter, you’ll see a prompt to enter the variable value.
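Here is how the prompt looks in a session (Hello is simply whatever the user decides to type):

C:\>Set /P MyVariable=Type a value for my variable:
Type a value for my variable: Hello

C:\>Set MyVariable
MyVariable=Hello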

It’s also possible to perform math with Set using the /A command line switch. There is a whole list of standard math notations you can use. Type Set /? and press Enter to see them all. If you write application code at all, you’ll recognize the standard symbols. For example, if you want to increment the value of a variable each time something happens, you can use the += operator. Type Set /A MyVariable+=1 and press Enter to see how this works. The first time you make the call, MyVariable will equal 1. However, on each succeeding call, the value will increment by 1 (for values of 2, 3, and so on).
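Here is a sketch of the increment in action, assuming MyVariable starts out undefined (when used interactively, Set /A conveniently displays the result of the calculation):

C:\>Set /A MyVariable+=1
1

C:\>Set /A MyVariable+=1
2

C:\>Set /A MyVariable+=1
3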

Environment variables support expansion and you can see this work using the Echo command. For example, if you type Echo %MyVariable%, you see the value of MyVariable.

However, you might not want the entire value of MyVariable. Newer versions of the command prompt support substrings. The variable name is followed by :~, the starting position, a comma, and the number of characters to keep (not an ending position). For example, if you place Hello World in MyVariable, and then type Echo %MyVariable:~0,5% and press Enter, you see Hello as the output, not Hello World. Adding a negative sign causes the expansion to occur from the end of the string. For example, if you type Echo %MyVariable:~-5% and press Enter, you see World as the output.
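Here is the substring feature in action:

C:\>Set MyVariable=Hello World

C:\>Echo %MyVariable:~0,5%
Hello

C:\>Echo %MyVariable:~-5%
World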

The Set command is a valuable addition to both the administrator’s and programmer’s toolkit because it lets you set environment variables temporarily. The Set command figures prominently in batch file processing and also provides configuration options for specific needs. Let me know about your environment variable questions as they pertain to my books at John@JohnMuellerBooks.com.

 

In Praise of Dual Monitors

A lot of people have claimed that the desktop system is dead—that people are only interested in using tablets and smartphones for computing. In fact, there is concern that the desktop might become a thing of the past. It’s true that my own efforts, such as HTML5 Programming with JavaScript for Dummies and CSS3 for Dummies, have started to focus on mobile development. However, I plan to continue using my desktop system when working because it’s a lot more practical and saves me considerable time. One such time saver is the use of dual monitors.

Yes, I know that some developers use more than just two monitors, but I find that two monitors work just fine. The first monitor is my work monitor—the monitor I use for actually typing code. The second monitor is my view monitor. When I run the application, the output appears on the second monitor so that I can see the result of changes I’ve made. Using two monitors lets me easily correlate the change in code to the changes in application design. Otherwise, I’d be wasting time switching between the application output and my IDE.

I also use two monitors when writing my books. The work monitor contains my word processor, while my view monitor contains the application I’m writing about. This is possibly one time when a third monitor could be helpful—one to hold the word processor, one to hold the IDE, and one to view the application output. However, in this case, a third monitor could actually slow things down because the time spent viewing the output of an example is small when compared to creating a production application.

The concept of separating work from the source of information used to perform the work isn’t new. People have used the idea for thousands of years, in fact. For example, when people used typewriters to produce printed text, the typist employed a special stand to hold the manuscript being typed. The idea of having a view of your work and another surface to actually work on appears quite often throughout history because it’s a convenient way to perform tasks quickly. By employing dual monitors, I commonly get a 15 to 33 percent increase in output, simply because I can see my work and its associated view at the same time.

Working with dual monitors not only saves time, but can also reduce errors. By typing while I view the output of applications, I can more reliably reproduce the text of labels and other information the application provides. The same holds true when viewing information sources found in other locations. Seeing the information as I type it makes errors far less likely.

Don’t get the idea that I support using dual monitors in every situation. Many consumer-oriented computer uses are served just fine with a single monitor. For example, there isn’t a good reason to use two monitors when viewing e-mail in many cases—at least, not at the consumer level (you could make a case for using dual monitors when working with e-mails and a calendar to manage tasks, for example). Dual monitors commonly see use in the business environment because people aren’t necessarily creating their own information source—the information comes from a variety of sources that the user must view in order to use reliably.

Do you see yourself using dual monitors? If you use such a setup now, how do you employ it? Let me know at John@JohnMuellerBooks.com.

 

Choosing Variable Names

It often surprises me that developers seem to choose completely useless variable names like MyVariable when creating an application. Although MyVariable could be an interesting variable name for an example in a book, it never has a place in any sort of production code. Even then, I try to create book examples with meaningful variable names, especially when getting past the initial “Hello World” example. Variable names are important because they tell others:

 

  • What sort of information the variable stores
  • When the variable is commonly used
  • Where the variable is used
  • How to use the variable correctly
  • Why the variable is important


In some cases, the variable name could even indicate who created the variable; although, this sort of information is extremely rare. If you never thought a variable name should contain all that information, then perhaps you haven’t been choosing the best variable names for your application.

Even with these requirements in mind, choosing a variable name can be uncommonly hard if you want to maximize the name’s value to both yourself and other developers. Some organizations make the selection process easier by following certain conventions. For example, a form of Hungarian Notation, where certain type prefixes, suffixes, and other naming conventions are used, is a common way to reduce the complexity of creating a variable name. In fact, Hungarian Notation (or some form of it) is often used to name objects, methods, functions, classes, and other programming elements as well. For example, NamArCustomers could be an array of customer names (Nam for names, Ar for array). The use of these two prefixes would make it instantly apparent when the variable is being used incorrectly, such as assigning a list of numbers to the array. The point is that an organizational variable naming policy can reduce complexity, make the names easy to read for anyone, and reduce the time the developer spends choosing a name.

 

Before I get a ton of e-mail on the topic, yes, I know that many people view Hungarian Notation as the worst possible way to name variables. They point out that it only really works with statically typed languages and that it doesn’t work at all for languages such as JavaScript. All that I’m really pointing out is that some sort of naming convention is helpful—whether you use something specific like Hungarian Notation is up to you.


Any variable name you create should convey the meaning of that variable to anyone. If you aren’t using some sort of pattern or policy to name the variables, then create a convention that helps you create the names in a consistent manner and document it. When you create a variable name, you need to consider these kinds of questions (the short sketch after this list shows names chosen with them in mind):

 

  1. What information does the variable contain (such as a list of names)?
  2. How is the variable used (such as locally or globally, or to contain coordinates, or a special kind of object)?
  3. When appropriate, what kind of information does the variable contain (such as a string or the coordinate of a pixel on screen)?
  4. Is the variable used for a special task (such as data conversion)?
  5. What case should prefixes, suffixes, and other naming elements appear in when a language is case sensitive?
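To make the idea concrete, here is a minimal batch file sketch (all of the names and values are hypothetical) that contrasts a throwaway name with names that answer these questions:

@echo off
rem MyVariable says nothing about content, use, or purpose.
Set MyVariable=5

rem These names answer the what, how, and when questions above.
Set /A RetryCountMax=5
Set CustomerNameList=Ann;Bob;Carla
Set BackupFolderPath=C:\Backups

rem Misuse now stands out; assigning a folder path to a count is obviously wrong.
Echo Will retry up to %RetryCountMax% times.
Echo Customers: %CustomerNameList%
Echo Backups go to %BackupFolderPath%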

The point is that you need to choose variable names with care so that you know what they mean later. Carefully chosen variable names make it possible for you to read your code with greater ease and locate bugs a lot faster. They also make it easier for others to understand your code and for you to remember what the code does months after you’ve written it. However, most important of all, useful variable names help you see immediately that a variable is being used the wrong way, such as assigning the length of a name string to a coordinate position on screen (even though both variables are integer values). Let me know your thoughts about variable naming at John@JohnMuellerBooks.com.

 

Verifying Your Hand Typed Code

I maintain statistics for each of my books that are based on reviews and reader e-mails (so those e-mails you send really are important). These statistics help me write better books in the future and also help me determine the sorts of topics I need to address in my blog. It turns out that one of the most commonly asked questions is why a reader’s hand typed code doesn’t work. Some readers simply ask the question without giving me any details at all, which makes the question impossible to answer. In some cases, the reader sends the hand typed code, expecting that I’ll take time to troubleshoot it. However, this isn’t a realistic request because it defeats the very purpose behind typing the code by hand. If I take the time to diagnose the problems in the code you typed, I’ll be the one to learn an interesting lesson, not you. If you learn better by doing—that is, by typing the code by hand and then running it—then you need to be the one to troubleshoot any problems with the resulting code.

My advice to readers is to use the downloadable source code when working through the book text. If you want to type the code by hand after that as part of your learning experience, at least you’ll know that the example works on your system and you’ll also understand how the example works well enough to troubleshoot any errors in your own code. However, you need to be the one to diagnose the errors. If nothing else, perform a character-by-character comparison of your code to the example code that you downloaded from the publisher’s site. Often, a reader will write back after I suggest this approach and mention that they had no idea that a particular special symbol or method of formatting content was important. These are the sorts of lessons that this kind of exercise provides.
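On Windows, the built-in FC (file compare) utility can automate that character-by-character comparison (the file names here are hypothetical; the /N switch adds line numbers to the output so you can find each difference quickly):

FC /N MyTypedExample.cs DownloadedExample.cs

FC lists each pair of mismatched sections from the two files, which makes a stray symbol or formatting difference easy to spot.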

Now, it has happened that the downloadable source code doesn’t always work on a particular user’s system. When the error is in the code or something I can determine about the coding environment, you can be certain that I’ll post information about it on my blog. This should be the first place you look for such information. Simply click on the book title in question under the Technical category. You’ll find a list of posts for that book. Always feel free to contact me about a book-specific question. I want to be sure you have a good learning experience.

There are some situations where a reader tries to run application code that won’t work on a particular system. My books provide information on the kind of system you should use, but I can’t always determine exceptions to the rule in advance. When I post system requirements, your system must meet those requirements because the examples are guaranteed to fail on lesser systems. If you encounter a situation where the downloadable code won’t run on your system, but none of the fixes I post for that code work and your system does meet the requirements, then please feel free to contact me. There are times where an example simply won’t run because you can’t use the required software or the system won’t support it for whatever reason.

The point of this post is that you need to work with the downloadable source code whenever possible. The downloadable source code has been tested by a number of people, usually on a range of systems, to ensure it will work on your system too. I understand that typing the code by hand is an important and viable way to learn, but you should reserve this method as the second learning tier—used after you have tried the downloadable source code. Please let me know if you have any questions or concerns at John@JohnMuellerBooks.com.