Creating Sensible Error Trapping

This is an update of a post that originally appeared on May 23, 2011.

Errors in software happen. A file is missing from the hard drive, or the user presses an unexpected key combination. Errors come in all shapes and sizes, expected and unexpected, and their sources are almost limitless. Some developers look at this vastness, become overwhelmed, and handle all errors the same way: by generating, for absolutely every error, an ambiguous exception that doesn't help anyone solve anything. This worst-case scenario is all too common in software today. I've talked with any number of people who have had to employ extreme effort just to figure out the source of an exception; many simply give up and hope that someone else has already discovered the source of the error.

At one time, error handling functionality in application development languages was so poor that it was possible to give the developer the benefit of the doubt. However, with the development tools that developers have at their disposal today, there is never a reason to provide an ambiguous, one-size-fits-all exception. For one thing, developers should make a distinction between the expected and the unexpected. Any expected error (a missing file, for example) should be handled directly and specifically. If the application absolutely must have the file and can't recreate it, then it should display a message saying which file is missing, where it is missing from, and possibly how to obtain another copy.
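
As a minimal sketch of what specific handling might look like in Python (the file name and the recovery advice are hypothetical placeholders, not a prescription):

```python
from pathlib import Path

CONFIG_FILE = Path("settings.ini")  # hypothetical required resource

def load_settings():
    """Load application settings, reporting a missing file specifically."""
    if not CONFIG_FILE.exists():
        # Tell the user exactly what is missing, where it should be,
        # and how to get another copy, rather than raising a vague error.
        raise FileNotFoundError(
            f"Required file '{CONFIG_FILE}' is missing from "
            f"'{CONFIG_FILE.resolve().parent}'. Reinstall the application "
            "or download a fresh copy from the vendor's support site."
        )
    return CONFIG_FILE.read_text()
```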

Rather than simply shoving the burden onto the user, however, modern applications have significantly more resources available for handling an error automatically. For example, it's quite possible to use an Internet connection to access the vendor's Web site and automatically download a missing application file. Unless the repair will take a few minutes, in which case the application should tell the user what's happening, the application shouldn't bother the user with this particular kind of error at all; the repair should be automatic.
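
A rough sketch of what such an automatic repair might look like; the vendor download location is a hypothetical placeholder:

```python
import urllib.request
from pathlib import Path

# Hypothetical vendor location for replacement files.
VENDOR_URL = "https://example.com/support/files/"

def repair_missing_file(file_name: str) -> bool:
    """Try to restore a missing file automatically before involving the user."""
    target = Path(file_name)
    if target.exists():
        return True
    try:
        # Download a fresh copy; only bother the user if this fails
        # or takes long enough to warrant a progress message.
        urllib.request.urlretrieve(VENDOR_URL + file_name, str(target))
        return True
    except OSError:
        return False  # Fall back to the specific, user-facing message.
```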

All of my essential programming books at least mention error handling, debugging, exceptions, and other tasks associated with running code efficiently and smoothly. For example, Part IV of C++ All-In-One for Dummies, 4th Edition is devoted to the topic of debugging, and Part V, Chapter 3 of that same book talks about exceptions. If you're a C# developer, C# 10.0 All-in-One for Dummies discusses exception handling in Book I, Chapter 9, and Book IV, Chapter 2 discusses how to use the debugger to find errors. The point is that it's essential to handle errors in your applications in a manner that makes sense to the users who rely on the application daily and the developers who maintain it.

Note that many of my newer books provide instructions for working with online IDEs, most especially Google Colab. These online IDEs rarely provide built-in debugging functionality, so you need to resort to other means, such as those described in Debugging in Google Colab notebook.

Exceptional conditions do occur. However, even in these situations the developer must avoid the generic exception at all costs. If an application experiences an unexpected error and there isn’t any way to recover from it automatically, the user requires as much information as possible about the error in order to fix it. This means that the application should diagnose the problem as much as possible. Don’t tell the user that the application simply has to end—there is never a good reason to include this sort of message. Instead, tell the user that the application can’t locate a required resource and specify the resource in as much detail as possible. If possible, let the user fix the resource access problem and then retry access before you simply let the application die an ignoble death. Remember this! Any exception that your application displays means that you’ve failed as a developer to locate and repair the errors, so exceptions should be reserved for truly exceptional conditions.
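
Here's one way a fix-and-retry loop might look in Python; the resource and retry count are illustrative, not prescriptive:

```python
def open_with_retry(path, attempts=3):
    """Give the user a chance to fix a resource problem before giving up."""
    for remaining in range(attempts, 0, -1):
        try:
            return open(path)
        except FileNotFoundError:
            print(f"Cannot find '{path}'. {remaining - 1} retries left.")
            if remaining > 1:
                input("Restore the file, then press Enter to retry...")
    # Only now do we fail, and with a specific, actionable message.
    raise FileNotFoundError(
        f"'{path}' is still missing after {attempts} attempts; "
        "restore it from backup before restarting the application."
    )
```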

Not everyone agrees with my approach to error trapping, but I have yet to hear a convincing argument for providing unreliable, non-specific error trapping in an application. Poor error trapping always translates into increased user dissatisfaction, increased support costs, and a reduction in profitability. Let me know your thoughts on the issue of creating a sensible error trapping strategy at [email protected].

Creating Useful Comments

This is an update of a post that originally appeared on November 21, 2011.

A major problem with most applications today is that they lack useful comments. It's impossible for anyone to truly understand how an application works unless the developer provides comments at the time the code is written. This issue extends to the developer as well: a month after someone writes an application, it's possible to forget the important details about it. For some of us, the interval between writing and forgetting is even shorter. Despite my best efforts and those of many other authors, many online examples lack any comments whatsoever, making them nearly useless to anyone who lacks the time to run the application through a debugger to discover how it works.

Good application code comments help developers of all stripes in a number of ways. At a minimum, the comments you provide as part of your application code provide these benefits:

  • Debugging: It’s easier to debug an application that has good comments because the comments help the person performing the debugging understand how the developer envisioned the application working.
  • Updating: Anyone who has tried to update an application that lacks comments knows the pain of trying to figure out the best way to do it. Often, an update introduces new bugs because the person performing the update doesn’t understand how to interact with the original code.
  • Documentation: Modern IDEs often provide a means of automatically generating application documentation based on the developer comments. Good comments significantly reduce the work required to create documentation and sometimes eliminate it altogether.
  • Technique Description: You get a brainstorm in the middle of the night and try it in your code the next day. It works! Comments help you preserve the brainstorm that you won't get back later no matter how hard you try. The technique you use today could also solve problems in future applications, but it may be lost unless you document it.
  • Problem Resolution: Code often takes a circuitous route to accomplish a task because the direct path will result in failure. Unless you document your reasons for using a less direct route, an update could cause problems by removing the safeguards you’ve provided.
  • Performance Tuning: Good comments help anyone tuning the application understand where performance changes could end up causing the application to run more slowly or not at all. A lot of performance improvements end up hurting the user, the data, or the application because the person tuning the application didn’t have proper comments for making the adjustments.

A good comment has the substance required for someone to understand and use the code. Unfortunately, it's sometimes hard to determine in the moment what a good comment contains, because you already know what the code does and how it does it. Consequently, having a guide as to what to write is helpful. When writing a comment, ask yourself these questions (the sketch after the list shows one way a comment can answer them):

  • Who is affected by the code?
  • What is the code supposed to do?
  • When is the code supposed to perform this task?
  • Where does the code obtain resources needed to perform the task?
  • Why did the developer use a particular technique to write the code?
  • How does the code accomplish the task without causing problems with other applications or system resources?
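
To make this concrete, here's a hedged Python sketch (the nightly job, folder layout, and policy it describes are hypothetical) showing how a comment block can answer all six questions at once:

```python
import shutil
import time
from pathlib import Path

def archive_logs(log_dir: Path, max_age_days: int = 30) -> int:
    """Move stale log files into log_dir/archive.

    Who:   run by the nightly maintenance job (hypothetical).
    What:  moves every *.log file older than max_age_days.
    When:  after midnight, before the backup job scans the folder.
    Where: touches only files inside log_dir; no network access.
    Why:   shutil.move is used instead of Path.rename because the
           archive folder may live on a different file system.
    How:   one file at a time, so a failure leaves the rest untouched.
    """
    archive = log_dir / "archive"
    archive.mkdir(exist_ok=True)
    cutoff = time.time() - max_age_days * 86400
    moved = 0
    for log_file in log_dir.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            shutil.move(str(log_file), str(archive / log_file.name))
            moved += 1
    return moved
```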

There are many other questions you could ask yourself, but these six questions are a good start. You won’t answer every question for every last piece of code in the application because sometimes a question isn’t pertinent. As you work through your code and gain experience, start writing down questions you find yourself asking. Good answers to aggravating questions produce superior comments. Whenever you pull your hair out trying to figure out someone’s code, especially your own, remember that a comment could have saved you time, frustration, and effort. What is your take on comments? Let me know at [email protected].

JavaScript and Memory Leaks

This is an update of a post that originally appeared on January 25, 2013.

I’ve written any number of books that either include JavaScript development directly or indirectly. For example, when you create a web application in C#, there is some JavaScript involved that might ruin your day unless you have some idea of what that code is doing and how it can go wrong. If you enable JavaScript on your Android phone or tablet, then you can also use JavaScript development techniques in that environment. Because JavaScript provides a well-known and standardized environment, you often find it used in places where you may not think to look, which means taking the time to actually review the code that your IDE generates for web-based applications.

One of my goals in writing a book is to introduce you to techniques that produce useful applications in an incredibly short time without writing bad code. The term bad code covers a lot of ground, but one of the more serious issues is memory leaks. An application that leaks memory causes itself, and everything else on the system, to slow down due to a lack of usable memory. In addition, memory leaks can actually cause the application to crash or the system to freeze when all of the available memory is used up. So, it was with great interest that I read a LogRocket article entitled How to escape from memory leaks in JavaScript. The article contains a lot of useful advice on writing good JavaScript code that won't cause your users heartache.

One of the most important parts of this article is that it treats memory leaks as a process: it lists common memory leak types, shows how to identify them, and introduces tools and techniques for fixing memory errors using Chrome DevTools.
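
The article's examples are JavaScript-specific, but one of the classic leak patterns it describes, the unbounded cache that keeps every object reachable forever, appears in any garbage-collected language. Here's a minimal Python sketch of that pattern and one common fix:

```python
from functools import lru_cache

# Leaky version: the module-level dict grows without bound, so every
# result stays reachable and the garbage collector can never free it.
_cache = {}

def render_page_leaky(page_id):
    if page_id not in _cache:
        _cache[page_id] = f"<html>... page {page_id} ...</html>"
    return _cache[page_id]

# Fixed version: a bounded cache evicts old entries automatically.
@lru_cache(maxsize=256)
def render_page(page_id):
    return f"<html>... page {page_id} ...</html>"
```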

It’s essential to know that this article doesn’t cover everything. A big one is that the memory leak you’re seeing in your application may not be due to your code—it may be caused by the browser. The potential for browser problems is an important one to keep in mind because these issues affect every application that runs, not just yours. However, when your application performs a lot of work that requires heavy memory use, the user may see your application as the culprit. It pays to track browser issues so that you can support your users properly and recommend browser updates for running your application when appropriate. For that matter, you can simply determine whether the user has one of the poorly designed browsers and tell the user to perform an update instead of running the application.

There are other potential sources of memory leaks. For example, using the wrong third-party library could cause considerable woe when it comes to memory usage (amongst other issues). Consequently, you need to research any libraries or templates that you use carefully. The libraries, templates, and other tools discussed in my books are chosen with extreme care to ensure you get the best start possible in creating JavaScript applications.

One of the reasons I find JavaScript so compelling as a language is that it has grown to include enough features to create real applications that run in just about any browser on just about any platform. The ability to run applications anywhere at any time has been a long-term goal of computer science, and it finally seems to be a reality at a certain level. What are your thoughts on JavaScript? Let me know at [email protected].

Considering Perception in User Interface Design

This is an update of a post that originally appeared on January 24, 2014.

The original version of this article had humans seeing images in as little as 13 ms. Nothing much has really changed since then. I read a few articles recently that reminded me of a user interface design discussion I once had with a friend of mine. First, let’s discuss the articles:

  • The first, Everything we see is a mash-up of the brain’s last 15 seconds of visual information, says that humans can actually see something in as little as 15 ms. That short time frame provides the information the brain needs to target a point of visual focus.
  • The older second article, ‘Sixth Sense’ Can Be Explained by Science, explains how the sixth sense that many people regard as supernatural in origin is actually explainable through scientific means. The brain detects a change (probably as a result of that 15 ms view) and informs the rest of the mind about it. However, the change hasn’t been targeted for closer inspection, so the viewer can’t articulate the change.
  • The third article, The silent “sixth” sense, is a more scientific and slightly modernized view of the second article. In short, you know the change is there, but you can’t say what has actually changed.

So, you might wonder what this has to do with website design. It turns out that you can use these facts to help focus user attention on specific locations on your site. Now, I’m not talking here about the use of subliminal perception, which is clearly illegal in many locations. Rather, it’s possible to do what a friend suggested when designing a site: change a small, but noticeable, element each time a page is reloaded. Of course, you need not reload the entire page. Technologies such as Asynchronous JavaScript And XML (AJAX) make it possible to reload just a single element as needed. (Changing a single element in a desktop application is incredibly easy because nothing special is needed to do it.) The point of making this change is to cause the viewer to look harder at the element you most want them to focus on. It’s just another method for ensuring that the right area of a page or other user interface element gets viewed.

However, the articles also make for interesting thoughts about the whole issue of user interface design. Presentation is an important part of design; your application must use good design principles to attract attention. But these articles also present the idea of time as a factor in designing the user interface. For example, the order in which application elements load is important because the brain can perceive the difference. You might not consciously register that element A loaded some number of milliseconds sooner than element B, but subconsciously, element A attracts more attention because it registered first and your brain targeted it first. Timing of this sort is part of what separates UX from UI, even though the two are eternally intermingled in the world of development.

As science continues to probe the depths of perception, it helps developers come up with more effective ways to present information, ways that enhance the user experience and the benefit of any given application to the user. However, to make any user interface change effective, you must apply it consistently across the entire application and ensure that the technique isn’t used to an extreme. Choosing just one element per display (whether a page, window, or dialog box) to change is important. Otherwise, the effectiveness of the technique is diluted and the user might not notice it at all.

What is your take on the use of perception as a means of controlling the user interface? Do you feel that subtle techniques like the ones described in this post are helpful? Let me know your thoughts at [email protected].

Web Page Units of Measure

This is an update of a post that originally appeared on October 23, 2013.

Today, there are ways to avoid any thought of which units of measure the Web page you’re working on uses. For example, I use products like WordPress all the time now to make the task of creating content quickly a lot easier, and I don’t ever think about units of measure when working with these products. However, when designing an interface for an Android or C# application (as examples), I sometimes do need to think about units of measure because I’m more intimately involved in the user interface design process. Applications now need to work everywhere, on a large number of devices with different characteristics, so units of measure can become an issue. So, it’s entirely possible to create a nearly unlimited amount of content for a website without ever worrying about units of measure, but this blog post is for those times when it does matter.

Some units of measure work better than others do in obtaining a specific result. For example, when you specify placement in pixels (px), you tell the browser to define element placement with regard to the physical units of a display. This is the most precise, yet least flexible, method of defining units of measure. In addition, it can create issues with mobile devices because these devices typically don’t offer that many pixels of display area and may not allow scrolling of information that doesn’t appear in a single screen (the part that appears to the right and toward the bottom of the page).

More flexible units of device-specific measure include the inch (in), centimeter (cm), and millimeter (mm). In this case, the browser converts the measurement to pixels using the device’s conversion metric. For example, a typical PC display uses 96 pixels per inch. However, the user can change the metric so that an inch consumes 120 pixels instead (making the elements larger than normal). Whether this flexibility solves the problem of working with mobile devices depends on the mobile device and the metric it uses to convert physical units to pixels.

Besides device and physical measures, you can also use printer’s measures that include the point (pt) and pica (pc). These units of measure theoretically work the same as physical measures because a point is 1/72nd inch and a pica is 12 points (or 1/6th inch). In reality, it’s possible that a browser will convert the units of measure based on the size of the fonts that the device uses. However, you can’t count on this flexibility and must assume that these printer’s measures are simply a different kind of physical measure.
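
Because all of these device, physical, and printer’s measures ultimately resolve to pixels, the arithmetic is easy to check. Here’s a small Python sketch of the conversions, assuming the typical 96 pixels per inch metric mentioned above:

```python
# Convert physical and printer's measures to pixels, assuming the
# common 96 pixels-per-inch metric (the user or device may override it).
DPI = 96

def to_pixels(value, unit, dpi=DPI):
    """Convert in, cm, mm, pt, or pc to pixels at the given metric."""
    inches = {
        "in": value,            # 1 inch
        "cm": value / 2.54,     # 2.54 cm per inch
        "mm": value / 25.4,     # 25.4 mm per inch
        "pt": value / 72,       # a point is 1/72 inch
        "pc": value * 12 / 72,  # a pica is 12 points, or 1/6 inch
    }[unit]
    return inches * dpi

print(to_pixels(1, "in"))   # 96.0
print(to_pixels(12, "pt"))  # 16.0
print(to_pixels(1, "pc"))   # 16.0
```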

Fortunately, there are two units of measure that are guaranteed to reflect the size of a font on the display. The em is a measure of the actual font size. One device may use a 12 point font while another device uses a 10 point font; an em will equal 12 points on the first device and 10 points on the second without any modification of the code on your part. This feature makes the page quite flexible and usable with any device. The other unit of measure is the ex, which is the measure of a font’s x-height (the height of the lowercase letter x). As with the em, the ex automatically scales to match the size of the characters used by a particular device.

All of the units of measure discussed so far produce fixed results: you place elements on the screen in a precise position. Modern Web design dictates that pages employ Responsive Web Design (RWD) to ensure that the page will work on any device. A part of RWD is to use relative placement wherever possible so that the page and its elements automatically resize to meet the needs of a device. You use the percentage (%) unit of measure in this case, where an element uses a percentage of the available space, whatever that space might be. This approach means that all devices see the entire page. However, a disadvantage of this approach is that the elements might be so small on some devices as to make them unusable.

The article, CSS values and units, provides you with more information about units of measure as they apply to Cascading Style Sheets (CSS). These guidelines generally apply to other working environments as well. What are your thoughts about units of measure? Which do you use most often? Let me know at [email protected].

In Praise of Dual Monitors

This is an update of a post that originally appeared on February 5, 2014.

In reading many of my old blog posts, I’m finding that many of the things I said way back when apply equally well today. I’ve received email from budding developers who use their smartphone to code. Just how they perform this trick is beyond me because I squint at the screen performing the simplest of tasks and often find that my fingers are two sizes too big. I have tried coding on a tablet, a laptop, and (oddly enough) my television. While they do work, they’re not particularly efficient, so I’ll stick with my dual-monitor desktop system for coding.

Yes, I know that some developers use more than just two monitors, but I find that two monitors work just fine. The first monitor is my work monitor—the monitor I use for actually typing code. The second monitor is my view monitor. When I run the application, the output appears on the second monitor so that I can see the result of changes I’ve made. Using two monitors lets me easily correlate the change in code to the changes in application design. Otherwise, I’d be wasting time switching between the application output and my IDE.

I also use two monitors when writing my books. The work monitor contains my word processor, while my view monitor contains the application I’m writing about. This is possibly one time when a third monitor could be helpful—one to hold the word processor, one to hold the IDE, and one to view the application output. However, in this case, a third monitor could actually slow things down because the time spent viewing the output of an example is small when compared to creating a production application.

The concept of separating work from the source of information used to perform the work isn’t new. People have used the idea for a very long time. For example, when people employed typewriters to output printed text, the typist used a special stand to hold the manuscript being typed. The idea of having a view of your work and another surface to actually work on has been used throughout history because it’s a convenient way to perform tasks quickly. By employing dual monitors, I commonly get between a 15 and 33 percent increase in output, simply because I can see my work and its associated view at the same time.

Working with dual monitors not only saves time but can also reduce errors. By typing as I view the output of applications, I can more reliably reproduce the text of labels and other information the application provides. The same holds true when viewing information sources found in other locations. Seeing the information as I type makes errors far less likely.

Don’t get the idea that I support using dual monitors in every situation. Many consumer-oriented computer uses are served just fine with a single monitor. For example, there isn’t a good reason to use two monitors when viewing e-mail in many cases—at least, not at the consumer level (you could make a case for using dual monitors when working with e-mails and a calendar to manage tasks, for example). Dual monitors commonly see use in the business environment because people aren’t necessarily creating their own information source—the information comes from a variety of sources that the user must view in order to use reliably.

Do you see yourself using dual monitors? If you use such a setup now, how do you employ it? Let me know at [email protected].

Choosing Variable Names

This is an update of a post that originally appeared on January 17, 2014.

It often surprises me that developers seem to choose completely useless variable names like MyVariable when creating an application. Although MyVariable could be an interesting variable name for an example in a book, it never has a place in any sort of production code. Even in books, I try to create examples with meaningful variable names, especially once past the initial “Hello World” example. Variable names are important because they tell others:

  • What sort of information the variable stores
  • When the variable is commonly used
  • Where the variable is used
  • How to use the variable correctly
  • Why the variable is important

In some cases, the variable name could even indicate who created the variable, although this sort of information is extremely rare. If you never thought a variable name should contain all that information, then perhaps you haven’t been choosing the best variable names for your application.

Even with this guidance in mind, choosing a variable name can be uncommonly hard if you want to maximize the name’s value to both yourself and other developers. Some organizations make the selection process easier by following certain conventions. If you don’t have an organizational style guide for variable naming, modern programming languages like Python commonly provide a style guide for you to use. These style guides often consider a great deal more than simply variable naming and include issues like the amount of indentation to use; in some respects, they become quite draconian in their approach. Other style guides, like the one for C#, are less time consuming to learn, which is a good thing because most developers have better things to do with their time than learn nitpicky details. A few languages, like C++, suffer from an abundance of style guides. It’s best to choose one of them, such as the Google C++ Style Guide, and stick with it.
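
As a quick illustration (my own hypothetical example, following Python’s naming conventions), compare a useless name with names that convey content and purpose at a glance:

```python
# Unhelpful: the names say nothing about content, units, or purpose.
def calc(a, b):
    return a * b

# Helpful: the names answer what the values are and why they're multiplied.
def monthly_interest(loan_principal_dollars, monthly_interest_rate):
    """Return one month's interest charged on the loan principal."""
    return loan_principal_dollars * monthly_interest_rate
```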

However, let’s say that you want to create your own style guide for your organization to use because you use multiple languages and having a different style guide for each language seems just a bit absurd, not to mention adding needless complexity. In this case, you need to ask yourself a series of questions to determine how you want the style guide to work, such as these:

  1. What sort of casing do you want to use for what types of variables?
  2. What information does the variable contain (such as a list of names)?
  3. How is the variable used (such as locally or globally, or to contain coordinates, or a special kind of object)?
  4. When appropriate, what data type does the variable contain (such as a string or the coordinate of a pixel on screen)?
  5. Is the variable used for a special task (such as data conversion)?
  6. What case should prefixes, suffixes, and other naming elements appear in when a language is case sensitive?

The point is that you need to choose variable names with care so that you know what they mean later. Carefully chosen variable names make it possible to read your code with greater ease and locate bugs a lot faster. They also make it easier for others to understand your code and for you to remember what the code does months after you’ve written it. Most important of all, useful variable names help you see immediately that a variable is being used the wrong way, such as assigning the length of a name string to a coordinate position on screen (even though both variables are integer values). Let me know your thoughts about variable naming at [email protected].

Python Used for Common User Interface Needs

This is an update of a post that originally appeared on September 12, 2014.

Beginning Programming with Python For Dummies, 3rd Edition describes how to start working with Python. You discover how to perform all the basics, and I provide a few real-world examples. However, once you’re done with the book, you might ask how Python can be used for the real-world programming you need to do. One of the most common tasks is creating a user interface. Just about every application out there requires a user interface, and it has become popular to make user interfaces touch-enabled. Fortunately, Python developers have access to a huge number of libraries that make seemingly hard tasks simple. In fact, that’s one of the advantages of using Python: the immense number of really practical and useful libraries at your disposal. It’s possible to find a library for just about any need.

One of the more interesting libraries available for Python is Kivy. This library makes it possible to create multitouch applications without having to do all the heavy lifting yourself. The interesting thing about using Kivy for this task is that it helps you avoid some of the problems with other sorts of multitouch application environments, such as using a combination of HTML5, CSS3, and JavaScript (where a less-than-compatible browser can ruin your chances of making the application work properly). This is a native code library that works on the Linux, Windows, OS X, Android, and iOS platforms, so you have a good chance of finding precisely the support you need in a package that will perform well on the chosen platforms. Like all Python applications, the application you create on the Mac will work just fine on Windows too.
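
Getting started is about as simple as a GUI library allows. Here’s a minimal sketch of a Kivy application, assuming you’ve installed Kivy (pip install kivy); the button text and handler are just placeholders:

```python
from kivy.app import App
from kivy.uix.button import Button

class HelloApp(App):
    """The smallest useful Kivy application: one touchable button."""

    def build(self):
        # build() returns the root widget; Kivy handles the event loop,
        # rendering, and multitouch input on every supported platform.
        return Button(text="Hello, Kivy!", on_press=self.greet)

    def greet(self, button):
        button.text = "Touched!"

if __name__ == "__main__":
    HelloApp().run()
```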

Of course, there are tons of libraries for Python, so why did I choose to talk about this particular one? It turns out that Kivy is proactive about obtaining as much developer support as possible. I’ll admit it: I was bedazzled looking at all the eye candy on the site. What I thought would be a five minute scan of the example applications turned out to be more than an hour of perusing what’s possible with Kivy and Python. All you need to do to try one of the applications is click its link, download the code, and start running it. Nothing could be easier (or more time consuming, as it turns out). Soon, you’ll find your days consumed by checking out Kivy applications too.

Fortunately, Kivy is also free. All you need to do is download the copy for your platform and install it. So, you get this great library that you can use for your business applications, and it doesn’t cost you a dime. What I’d most like to hear is whether someone is using Kivy in a large-scale business application and how it’s performing for them. Speed is always an issue with Python, despite all the other amazing features it provides, so finding libraries that use every bit of speed Python has to offer is essential.

I take a lot of time looking for various tools, libraries, applications, and other resources for readers to use with my books. I’m not looking for anything cheesy, crippled, or difficult to use—I want well written, popular, and preferably free resources I can share. If you are a developer who is using an outstanding library or tool that specifically meets the needs of my readers, please let me know about it at [email protected]. Please, no vendors! I want to hear from people not associated with an organization who are actually using the tool or library in question for development purposes.

Is Security Research Always Useful?

This is an update of a post that originally appeared on February 19, 2016.

Anyone involved in the computer industry likely spends some amount of time reading about the latest security issues in books such as Machine Learning Security Principles. Administrators and developers probably spend more time than most, but no one can possibly read all the security research available today. There are so many researchers looking for so many bugs in so many places and in so many different ways that reading every security article produced is impossible; you’d need to be the speediest reader on the planet (and then some) to even think about scratching the surface. So, you must contemplate the usefulness of all that research: whether it’s actually useful or simply a method for some people to get their name on a piece of paper.

What has amazed me since I first wrote this blog post is that, after a considerable amount of additional reading (including research papers), I find that most exploits remain essentially the same. The techniques may differ, they may improve, but the essentials of the exploit remain the same. It turns out that humans are the weakest link in every security chain and that social engineering attacks remain a mainstay of hackers. The one thing that has changed in seven years is that the use of machine learning and deep learning techniques has automated life for the hacker, much as these technologies have automated life for everyone else. In addition, a lack of proactive privacy makes it even easier than before for a hacker to create a believable attack by using publicly available information about an intended target.

As part of researching security, you need to consider the viability of an attack, especially with regard to your organization, infrastructure, personnel, and applications. Some of the attacks require physical access to the system. In some cases, you must actually take the system apart to access components in order to perform the security trick; many IoT attacks fall into this category. Unless you or your organization is in the habit of allowing perfect strangers physical access to your systems, which might include taking them apart, you must wonder whether the security issue is even worth worrying about. You need to ask why someone would take the time to document a security issue that’s nearly impossible to see, much less perform, in a real-world environment. More importantly, the moment you see that a security issue requires physical access to the device, you can probably stop reading.

You also find attacks that require special equipment to perform. The article, How encryption keys could be stolen by your lunch, discusses one such attack. In fact, the article contains a picture of the special equipment that you must build to perpetrate the attack. It places said equipment into a piece of pita bread, which adds a fanciful twist to something that is already quite odd and pretty much unworkable given that you must be within 50 cm (19.6 in) of the device you want to attack (assuming that the RF transmission conditions are perfect). Except for the interesting attack vector (using a piece of pita bread), you really have to question why anyone would ever perpetrate this attack given that social engineering and a wealth of other attacks require no special equipment, are highly successful, and work from a much longer distance.

It does pay to keep an eye on the latest and future targets of hacker attacks. Even though many IoT attacks are the stuff of James Bond today, hackers are paying attention to IoT, so it pays to secure your systems, which are likely wide open right now. As one of my experiments for Machine Learning Security Principles, I actually did hack my own smart thermostat (after which, I immediately improved security). The number of IoT attacks is increasing considerably, so ensuring that you maintain electrical, physical, and application security over your IoT devices is important, but not to the exclusion of other needs.

A few research pieces become more reasonable because they discuss outlandish hacks that could potentially happen after an initial break-in. The hack discussed in Design flaw in Intel chips opens door to rootkits is one of these. You can’t perpetrate the hack until after breaking into the system some other way, but the break-in has serious consequences once it occurs. Even so, most hackers won’t take the time because they already have everything they need; the hack is overkill. However, this particular kind of hack should sound alarms in the security professional’s head. The Windows 11 requirement for the TPM 2.0 chip is supposed to make this kind of attack significantly harder, perhaps impossible, to perform. Of course, someone has already found a way to bypass the TPM 2.0 chip requirement, and it doesn’t help that Microsoft actually signed off on a piece of rootkit malware for installation on a Windows 11 system. So, security research, even when you know that a particular piece of research isn’t especially helpful, can become a source of thought experiments about what a hacker might do.

The articles that help most provide a shot of reality into the decidedly conspiracy-oriented world of security. For example, Evil conspiracy? Nope, everyday cyber insecurity, discusses a series of events that everyone initially thought pointed to a major cyber attack. It turns out that the events occurred at the same time by coincidence. The article author thoughtfully points out some of the reasons that the conspiracy theories seemed a bit out of place at the outset anyway.

It also helps to know the true sources of potential security issues. For example, the articles, In the security world, the good guys aren’t always good and 5 reasons why newer hires are the company’s biggest data security risk, point out the sources you really do need to consider when creating a security plan. These are the sorts of articles that should attract your attention because they describe a security issue that you really should think about.

The point is that you encounter a lot of information out there that doesn’t help you make your system any more secure. It may be interesting if you have the time to read it, but the tactics truly aren’t practical and no hacker is going to use them. Critical thinking skills are your best asset when building your security knowledge. Let me know about your take on security research at [email protected].

Review of Essential Algorithms

Working in computer science means knowing how to work with computer languages, but it also means knowing how to use math to obtain the results you want. Some math is relatively straightforward, but some becomes so complicated that you really do need some type of process or procedure for working with it. Essential Algorithms by Rod Stephens covers just such procedures; an algorithm, as the book puts it, “defines steps for performing a task in a certain way.” The first chapter begins by defining what an algorithm is and moves on from there to show you how algorithms can help improve your ability to write complex applications.

The examples are written in a pseudocode that the author explains in Chapter 1. In fact, the explanation is accompanied by some examples of how to turn the pseudocode into an actual programming language. I’m almost positive some readers will take exception to the use of pseudocode because it doesn’t present each example in their specific programming language, which would make implementing the code as easy as possible for the reader. In this case, though, the use of pseudocode is impossible to avoid; the book would be far less useful without it.

This text could easily be used in a college course. Each chapter ends with exercises that help the reader understand the concepts better (or at least determine whether any of the material actually sunk in). The answers to the exercises appear in an appendix at the end of the book. However, in a college setting it might be possible to create a student version of the book without the appendix and a teacher version that includes the answers. The author also uses many of the same examples that I encountered as a student in college, but with an emphasis on diagrams to show pictorially how the examples work. The addition of graphics makes the examples considerably easier to understand.

The early chapters discuss specific kinds of algorithms that are used in every programming language that exists. For example, the author tackles the topic of randomizing data and ensuring that the randomizing process is fair. Of course, getting truly random data on a computer is impossible, but it’s possible to create random sequences of such complexity that the average human would never notice they aren’t random. This book discusses the topic at a length that I wish my college text had provided.

Don’t get the idea that Essential Algorithms is light on the computer science aspects of using algorithms. For example, you’ll find coverage of all the basic structures used by most languages: linked lists, arrays, stacks, and queues. I could have wished for coverage of dequeues because many languages modify dequeues to create stacks and queues. Understanding how this essential structure works would have been great.

There are separate chapters for sorting and searching. These two tasks are performed so often by applications that an in-depth knowledge really is a necessity for any computer scientist. All the common sorts are covered in sufficient detail that the reader should understand them with relative ease: insertion, selection, bubble, heap, quick, and merge. In addition, you find the counting and bucket sorts (two types of sorts that are completely missing from my college text; I took the time to check). The list of searches is likewise complete: linear, binary, and interpolation.
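
As a reminder of how one of those searches works, here’s binary search in Python; this is my own sketch, not the book’s pseudocode:

```python
def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1    # target lies in the upper half
        else:
            high = mid - 1   # target lies in the lower half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3
```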

The opening part of the book finishes with chapters on hash tables and recursion. I thought the chapter on hash tables was a bit light; their use as dictionaries in languages such as Python is only mentioned in passing. The chapter on recursion was far better done. I found the material on the various kinds of curves (Koch, Hilbert, and Sierpinski) exceptional.

The middle of the book (starting with Chapter 10) is taken up with trees, networks, and strings. There should be enough material here for anyone who really wants to learn the information. The author seems to hit his stride in these chapters—they’re both interesting and well written.

The end of the book starts with cryptography in Chapter 16. It’s the part of the book that just about anyone will find helpful, and it’s also the part that elevates this book from a mere college text to more of a reference book. The chapter on complexity theory is exceptionally nice. Even if you’re already an expert in other areas of this book, it’s likely that you’ll find some new ideas in this part, enough ideas to make it well worth the purchase price.

Overall, Essential Algorithms is the text I wish I had when studying the topic in college, and it’ll make a fine addition to my bookshelf. I’ll likely use it as a reference when trying to understand how various programming languages implement a practical need, such as working with structures like stacks. I don’t delve deeply into security issues very often, but I’m sure that material will see use as well. There are some holes in the book, but I wouldn’t consider them deal killers, and they could provide great fodder for the author in the form of articles and blog posts. This is a great book and one that you need on your shelf.