Adding a Location to the Windows Path

This is an update of a post that originally appeared on February 17, 2014.

A number of my books ask the reader to perform tasks at the command line, which means the reader must have access to applications stored on the hard drive. Windows doesn’t track the location of every application. Instead, it relies on the Path environment variable to provide the potential locations of applications. If an application doesn’t appear on the path, Windows can’t find it and simply displays an error message. So, it’s important that any application you need to access from the command line for my books appears on the path.

You can always see the current path by typing Path at the command line and pressing Enter. What you’ll see is a listing of locations, each of which is separated by a semicolon as shown here (your path will differ from mine).

Using the Path command displays the current path.

In this case, Windows will begin looking for an application in the current folder. If it doesn’t find the application there, it will look in C:\Python33\, then in C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common, and so on down the list. Each potential location is separated from the others by a semicolon, as shown in the figure.
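
If you prefer to look at the path from inside Python (the language most of my books use), a minimal sketch such as the following prints the same list of search locations that the Path command reports:

    import os

    # The Path environment variable holds the search locations, separated
    # by semicolons on Windows (os.pathsep supplies the right separator).
    for location in os.environ["PATH"].split(os.pathsep):
        print(location)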

There are a number of ways to add a location to the Windows path. If you only need to add a path temporarily, you can simply extend the path by setting it to the new value, plus the old value. For example, if you want to add C:\MyApp to the path, you’d type Path=C:\MyApp;%Path% and press Enter. Notice that you must add a semicolon after C:\MyApp. Using %Path% appends the existing path after C:\MyApp. Here is how the result looks on screen.

Adding a path is a relatively simple process using the Path= command.
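
If you need the same temporary change from inside a Python session, the following sketch does the equivalent job; it affects only the current process and anything it launches, and it disappears when the session ends (C:\MyApp is just the example folder from above):

    import os

    # Prepend the new location, a separator, and then the existing path,
    # just like typing Path=C:\MyApp;%Path% at the command prompt.
    os.environ["PATH"] = r"C:\MyApp" + os.pathsep + os.environ["PATH"]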

Of course, there are times when you want to make the addition to the path permanent because you plan to access the associated application regularly. In this case, you must perform the task within Windows itself. The following steps tell you how.

1. Right-click This PC (or Computer) and choose Properties from the context menu, or select Settings (or System in the Control Panel). You see a Settings (System) window similar to the one shown here (precisely what you see depends on which version of Windows you have; the figure shows Windows 10).

    The Settings window in Windows 10.

    2. Click Advanced System Settings. You see the Advanced tab of the System Properties dialog box shown here.

    The Advanced tab provides access to your permanent path in Windows.

    3. Click Environment Variables. You see the Environment Variables dialog box shown here. Notice that there are actually two sets of variables. The top set affects only the current user. So, if you plan to use the application, but don’t plan for others to use it, you’d make the Path environment variable change in the top field. The bottom set affects everyone who uses the computer. This is where you’d change the path if you want everyone to be able to use the application.

    There are paths that affect only the current user and those that affect the system as a whole.

    4. Locate the existing Path environment variable in the list of variables for either the personal or system environment variables and click Edit. If there is no existing Path environment variable, click New instead. You see a dialog box similar to the one shown here when working with Windows 10 (other versions of Windows will show a different dialog box, but the purpose is the same, to edit the path).

    Each path location appears on a separate line to make it easy to edit.
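
    Once you save the change, you can verify that Windows now finds your application. One quick way to check from a new Python session is shutil.which(), which searches the same Path variable that Windows uses (replace the hypothetical myapp.exe with the name of your own executable):

        import shutil

        # Prints the full path to the executable when it appears on the path,
        # or None when Windows can't find it.
        print(shutil.which("myapp.exe"))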

    When you open a new command prompt, you’ll see the new path in play. Changing the environment variable won’t change the path for any existing command prompt windows. Having the right path available when you want to perform the exercises in my books is important. Let me know if you have any questions about the path at [email protected].

    Antiquated Technology Making Developers Faster

    This is an update of a post that originally appeared on November 7, 2014.

    I often wonder when I create a blog post whether the technology I’m describing will stand the test of time. In this case, I asked whether the reader would like to be able to type application code faster and with fewer keystrokes. The article, The 100 Year Old Trick to Writing at 240 Words Per Minute, probably has some good advice for you—at least, if you’re willing to learn the technique. It turns out that stenography isn’t only useful for court typists and the people who produce the captions for the hearing impaired on television, it’s also quite useful for developers. Yes, your IDE probably has more than a few tricks available for speeding up your typing, but I guarantee that these tricks only go so far. My personal best typing speed is 110 wpm, and that’s flat-out typing as fast as my fingers will go.

    Since that original post, someone has come out with a book called Learn Plover! that describes how to use this stenographic technique in more detail. There is also a site devoted to Plover now.

    Naturally, I haven’t ever used one of the devices mentioned in the article. However, a stenographer named Mirabai Knight has tried one of the devices and reproduced a 140-keystroke Python application using only 50 keystrokes. By the way, she has produced a series of interesting videos that you may want to review if you really think that Plover is for you. I don’t know of any IDE that can provide that sort of efficiency. Of course, it’s one thing for a trained stenographer to produce these sorts of results, but I’d like to hear from any developer who has used the technique about how well it worked for them. Please contact me about your experiences at [email protected]. (Oddly enough, I did hear from at least one developer who uses it successfully.)

    The part that interested me most, though, is that the system, called Plover, is written in Python. (If you want to see Plover in action, check out the video at http://plover.stenoknight.com/2014/10/longer-plover-coding-snippet-in-python.html.) A number of Beginning Programming with Python For Dummies, 3rd Edition readers have written to ask me how they can use their newfound programming skills. The book contains sections that tell you about all sorts of ways in which Python is being used, but many of these uses are in large corporations. This particular use is by a small developer—someone just like you. Yet, it has a big potential for impacting how developers work. Just imagine the look on the boss’ face when you turn in your application in half the time because you can type it in so much faster. So, Python isn’t just for really large companies or for scientists—it’s for everyone who needs a language that can help them create useful applications of the sort that Python is best suited to target (and I describe all of these uses in my book).

    Security = Scrutiny

    This is an update of a post that originally appeared on July 22, 2015.

    There is a myth among administrators and developers that it’s possible to keep a machine free of viruses, adware, Trojans, and other forms of malware simply by disconnecting it from the Internet. I was reminded of this bias while writing Machine Learning Security Principles because some of the exploits I cover involve air-gapped PCs. I’m showing my age (yet again), but machines were being infected with all sorts of malware long before the Internet became any sort of connectivity solution for any system. At one time it was floppy disks that were the culprit, but all sorts of other avenues of attack present themselves. To dismiss things like evil USB drives that take over systems, even systems not connected to the Internet, is akin to closing your eyes and hoping an opponent doesn’t choose to hit you while you’re not looking. After all, it wouldn’t be fair. To make matters worse, you can easily find instructions for creating an evil USB drive online. However, whoever said that life was fair or that anyone involved in security plays by the rules? If you want to keep your systems free of malware, then you need to be alert and scrutinize them continually.

    Let’s look at this issue another way. If you refused to do anything about the burglar rummaging around on the first floor while you listened from your bedroom on the second floor, the police would think you’re pretty odd. The first thing they’d ask is why you don’t have an alarm system in your home or, if you do have one, why you didn’t set it so that someone would have been notified about the break-in. Some homeowners also have an external security system installed around their homes, which can at least provide a good image of the burglar, but capturing an image isn’t the same as actually stopping the burglar. Whatever you do, you can’t just stand back and do nothing. More importantly, you’d have a really hard time getting any sort of sympathy or empathy from the police. After all, if you just let a burglar take your things while you blithely refuse to acknowledge the burglar’s presence, whose fault is that? (Getting bonked on the back of the head while you are looking is another story.) That’s why you need to monitor your systems, even if they aren’t connected to the Internet. Someone wants to ruin your day and they’re not playing around. Hackers are dead serious about grabbing every bit of usable data on your system and using it to make your life truly terrible. Your misery makes them sublimely happy. Really, take my word for it.

    The reason I’m discussing this issue is that I’m still seeing stories like, Chinese Hackers Target Air-Gapped Military Networks. So, what about all those networks that were hacked before the Internet became a connectivity solution? Hackers have been taking networks down for a considerable time period and it doesn’t take an Internet connection to do it. The story is an interesting one because the technique used demonstrates that hackers don’t have to be particularly good at their profession to break into many networks. It’s also alarming because some of the networks targeted were contractors for the US military.

    There is no tool, software, connection method, or secret incantation that can protect your system from determined hackers. I’ve said this in everything I’ve written about security. Yes, you can use a number of tools to make it more difficult to get through and to dissuade someone who truly isn’t all that determined. Unfortunately, no matter how high you make the walls of your server fortress, the hacker can always go just a bit further to climb them. Sites like America’s Data Held Hostage (this site specializes in ransomware) tell me that most organizations could do more to scrutinize their networks. Everything I read about informed security says that you can’t trust anyone or anything when you’re responsible for security, yet organizations continue to ignore that burglar on the first floor.

    There is the question of whether it’s possible to detect and handle every threat. The answer is that it isn’t. Truly gifted hackers will blindside you and can cause terrifying damage to your systems every time. Monitoring can mitigate the damage and help you recover more quickly, but the fact is that it’s definitely possible to do better. Let me know your thoughts about security at [email protected].

    Is Security Research Always Useful?

    This is an update of a post that originally appeared on February 19, 2016.

    Anyone involved in the computer industry likely spends some amount of time reading about the latest security issues in books such as Machine Learning Security Principles. Administrators and developers probably spend more time than many people, but no one can possibly read all the security research available today. There are so many researchers looking for so many bugs in so many places and in so many different ways that even if someone had the time and inclination to read every security article produced, it would be impossible. You’d need to be the speediest reader on the planet (and then some) to even think about scratching the surface. So, you must contemplate the usefulness of all that research—whether it’s actually useful or simply a method for some people to get their name on a piece of paper.

    What amazes me since I first wrote this blog post is that I have done a considerable amount of additional reading, including research papers, and find that most exploits remain essentially the same. The techniques may differ, they may improve, but the essentials of the exploit remain the same. It turns out that humans are the weakest link in every security chain and that social engineering attacks remain a mainstay of hackers. The one thing that has changed in seven years is that the use of machine learning and deep learning techniques has automated life for the hacker, much as these technologies have automated life for everyone else. In addition, a lack of proactive privacy makes it even easier than before for a hacker to create a believable attack by using publicly available information about an intended target.

    As part of researching security, you need to consider the viability of an attack, especially with regard to your organization, infrastructure, personnel, and applications. Some of the attacks require physical access to the system. In some cases, you must actually take the system apart to access components in order to perform the security trick. Many IoT attacks fall into this category. Unless you or your organization is in the habit of allowing perfect strangers physical access to your systems, which might include taking them apart, you must wonder whether the security issue is even worth worrying about. You need to ask why someone would take the time to document a security issue that’s nearly impossible to see, much less perform in a real world environment. More importantly, the moment you see that a security issue requires physical access to the device, you can probably stop reading.

    You also find attacks that require special equipment to perform. The article, How encryption keys could be stolen by your lunch, discusses one such attack. In fact, the article contains a picture of the special equipment that you must build to perpetrate the attack. It places said equipment into a piece of pita bread, which adds a fanciful twist to something that is already quite odd and pretty much unworkable given that you must be within 50 cm (19.6 in) of the device you want to attack (assuming that the RF transmission conditions are perfect). Except for the interesting attack vector (using a piece of pita bread), you really have to question why anyone would ever perpetrate this attack given that social engineering and a wealth of other attacks require no special equipment, are highly successful, and work from a much longer distance.

    It does pay to keep an eye on the latest and future targets of hacker attacks. Even though many IoT attacks are the stuff of James Bond today, hackers are paying attention to IoT, so it pays to secure your systems, which are likely wide open right now. As one of my experiments for Machine Learning Security Principles, I actually did hack my own smart thermostat (after which, I immediately improved security). The number of IoT attacks is increasing considerably, so ensuring that you maintain electrical, physical, and application security over your IoT devices is important, but not to the exclusion of other needs.

    A few research pieces become more reasonable by discussing outlandish sorts of hacks that could potentially happen after an initial break-in. The hack discussed in Design flaw in Intel chips opens door to rootkits is one of these sorts of hacks. You can’t perpetrate the hack until after breaking into the system some other way, but the break-in has serious consequences once it occurs. Even so, most hackers won’t take the time because they already have everything needed—the hack is overkill. However, this particular kind of hack should sound alarms in the security professional’s head. The Windows 11 requirement for the TPM 2.0 chip is supposed to make this kind of attack significantly harder, perhaps impossible, to perform. Of course, someone has already found a way to bypass the TPM 2.0 chip requirement and it doesn’t help that Microsoft actually signed off on a piece of rootkit malware for installation on a Windows 11 system. So, security research, even when you know that the actual piece of research isn’t particularly helpful, can become a source of information for thought experiments of what a hacker might do.

    The articles that help most provide a shot of reality into the decidedly conspiracy-oriented world of security. For example, Evil conspiracy? Nope, everyday cyber insecurity, discusses a series of events that everyone initially thought pointed to a major cyber attack. It turns out that the events occurred at the same time by coincidence. The article author thoughtfully points out some of the reasons that the conspiracy theories seemed a bit out of place at the outset anyway.

    It also helps to know the true sources of potential security issues. For example, the articles, In the security world, the good guys aren’t always good and 5 reasons why newer hires are the company’s biggest data security risk, point out the sources you really do need to consider when creating a security plan. These are the sorts of articles that should attract your attention because they describe a security issue that you really should think about.

    The point is that you encounter a lot of information out there that doesn’t help you make your system any more secure. It may be interesting if you have the time to read it, but the tactics truly aren’t practical and no hacker is going to use them. Critical thinking skills are your best asset when building your security knowledge. Let me know about your take on security research at [email protected].

    Technology and Child Safety

    This is an update of a post that originally appeared on January 20, 2016.

    I wrote a little over seven years ago that I had read an article in ComputerWorld, Children mine cobalt used in smartphones, other electronics, that had me thinking yet again about how people in rich countries tend to ignore the needs of those in poor countries. I had sincerely hoped at the time that things would be different, better, in seven years. Well, they’re worse! We’ve increased our use of cobalt dramatically in order to create supposedly green cars. The picture at the beginning of the ComputerWorld article says it all, but the details will have you wondering whether a smartphone or an electric car really is worth some child’s life. That’s right, any smartphone or electric car you buy may be killing someone and in a truly horrid manner. Children as young as 7 years old are mining the cobalt needed for the batteries (and other components) in the smartphones and electric cars that people seem to feel are so necessary for life (they aren’t you know; food, water, clothing, shelter, sleep, air, and reproduction are necessities, everything else is a luxury).

    The problem doesn’t stop when someone gets rid of the smartphone, electric car, or other technology. Other children end up dismantling the devices sent for recycling. That’s right, a rich country’s efforts to keep electronics out of its landfills are also killing children because countries like India put these children to work taking the devices apart in unsafe conditions. Recycled wastes go from rich countries to poor countries because the poor countries need the money for necessities, like food. Often, these children are incapable of working by the time they reach 35 or 40 due to health issues induced by their forced labor. In short, the quality of their lives is made horribly low so that it’s possible for people in rich countries to enjoy something that truly isn’t necessary for life. To make matters worse, the vendors of these products build in obsolescence (making them unrepairable) so they can sell more products and make more money, increasing the devastation visited on children.

    I’ve written other blog posts about the issues of technology pollution. However, the emphasis of these previous articles has been on the pollution itself. Taking personal responsibility for the pollution you create is important, but we really need to do more. Robotic (autonomous) mining is one way to keep children out of the mines and projects such as UX-1 show that it’s entirely possible to use robots in place of people today. The weird thing is that autonomous mining would save up to 80% of the mining costs of today, so you have to wonder why manufacturers aren’t rushing to employ this solution.

    In addition, off-world mining would keep the pollution in space, rather than on planet earth. Of course, off-world mining also requires a heavy investment in robots, but it promises to provide a huge financial payback in addition to keeping earth a bit cleaner. The point is that there are alternatives that we’re not using. Robotics presents an opportunity to make things right with technology and I’m excited to be part of that answer in writing books such as Machine Learning Security Principles, Artificial Intelligence for Dummies, 2nd Edition, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition.

    Unfortunately, companies like Apple, Samsung, and many others simply thumb their noses at laws that are in place to protect the children in these countries because they know you’ll buy their products. Yes, they make official statements, but read their statements in that first article and you’ll quickly figure out that they’re excuses and poorly made excuses at that. They don’t have to care because no one is holding them to account. People in rich countries don’t care because their own backyards aren’t sullied and their own children remain safe. It’s not that I have a problem with technology, quite the contrary, I have a problem with the manner in which technology is currently being made and supported. We need to do better. So, the next time you think about buying electronics, consider the real price for that product. Let me know what you think about polluting other countries to keep your country clean at [email protected].

    Making Algorithms Useful

    This is an update of a post that originally appeared on December 2, 2015.

    Writing about machine learning and deep learning in my various books has been interesting because it turns math into something more than a way to calculate. Machine learning is about having inputs and a desired result, and then asking the machine to create an algorithm that will produce the desired result from the inputs. It’s about generalization. You know the specific inputs and the specific results, but you want an algorithm that will provide similar results given similar inputs for any set of random inputs. This is more than just math. In fact, there are five schools of thought (tribes) regarding machine learning algorithms that Luca and I introduce you to in books such as Machine Learning Security Principles, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition:

    • Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.
    • Connectionists: The origin of this tribe is in neuroscience. This group relies on backpropagation to solve problems.
    • Evolutionaries: The origin of this tribe is in evolutionary biology. This group relies on genetic programming to solve problems.
    • Bayesians: The origin of this tribe is in statistics. This group relies on probabilistic inference to solve problems.
    • Analogizers: The origin of this tribe is in psychology. This group relies on kernel machines to solve problems.

    Of course, the problem with any technology is making it useful. I’m not talking about useful in a theoretical sense, but useful in a way that affects everyone. In other words, you must create a need for the technology so that people will continue to fund it. Machine learning and deep learning are already part of many of the things you do online. For example, when you go to Amazon to buy a product and Amazon suggests other products that you might want to add to your cart, you’re seeing the result of machine learning. Part of the content for the chapters of our book is devoted to pointing out these real-world uses for machine learning.
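
    To make the idea of generalization concrete, here is a minimal sketch using the scikit-learn library; the data and the pass/fail scenario are invented purely for illustration:

        from sklearn.neighbors import KNeighborsClassifier

        # Known inputs (hours studied, hours slept) and known results
        # (0 = fail, 1 = pass) for four students.
        inputs = [[1, 4], [2, 8], [6, 5], [7, 8]]
        results = [0, 0, 1, 1]

        # Ask the machine to build a model that generalizes from the examples.
        model = KNeighborsClassifier(n_neighbors=1)
        model.fit(inputs, results)

        # A similar, previously unseen input now produces a similar result.
        print(model.predict([[5, 6]]))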

    As I’ve written new books and updated existing ones, I’ve seen an almost magical progression in the capabilities of machine learning and deep learning applications such as ChatGPT (Chat Generative Pre-Trained Transformer), which can produce some pretty amazing output.

    Some of these applications, such as Siri and Alexa, continue to learn as you use them. The more you interact with them, the better they know you and the better they respond to your needs. The algorithms that these machine learning systems create get better and better as the database of your specific input grows. The algorithms are tuned to you specifically, so the experience one person has is different from an experience another person will have, even if the two people ask the same question.

    Machine learning is a big mystery to many people today, while other people have gained enough experience to have strong opinions about it. Because I continue to write new machine learning/deep learning books and update others, it would be interesting to hear your questions about machine learning and deep learning. After all, I’d like to tune the content of my books to meet the most needs that I can. Where do you see this technology headed? What confuses you about it? Talk to me at [email protected].

    Robotics and Your Job

    This is an update of a post that originally appeared on February 29, 2016.

    I have written more than a few books now that involve robotics, AI, machine learning, deep learning, or some other form of advanced technology, such as Machine Learning Security Principles, Artificial Intelligence for Dummies, 2nd Edition, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition. People have often asked me whether a Terminator-style robot is possible based on comments by people like Stephen Hawking and Elon Musk (who seems to have changed his mind). It isn’t, Ex Machina and The Terminator notwithstanding. I’m also asked whether they’ll be without work sometime soon (the topic of this post). (As an aside, deus ex machina is a literary plot device that had been around for a long time before the movie came out.)

    Whether your job is secure depends on the kind of job you have, whether robots will actually save money and improve the technology we already have, what you believe as a person, and how your boss interprets all the hype currently out there. For example, if your claim to fame is flipping burgers, then you’d better be ready to get another job soon. McDonald’s has opened its first mostly automated store in Texas. Some jobs are simply going to go away, no doubt about it.

    However, robots aren’t always the answer to the question. Many experts see three scenarios: humans working for robots (as in a doctor collaborating with a robot to perform surgery more accurately and with greater efficiency), humans servicing robots (those McDonald’s jobs may be going away, but someone will have to maintain the robots), and robots working for humans (such as that Roomba that’s currently keeping your house clean). The point is that robots will actually create new jobs, but that means humans will need new skills. Instead of boring jobs that pay little, someone with the proper training can have an interesting job that pays moderately well.

    An interesting backlash against automation has occurred in several areas. So, what you believe as a person does matter when it comes to the question of jobs. The story that tells the tale most succinctly appears in ComputerWorld, Taxpayer demand for human help soars, despite IRS automation (the fact that the IRS automation is overloaded doesn’t help matters). Sometimes people want a human to help them. This backlash could actually thwart strategies such as the use of robotic police dogs, which don’t appear to be very popular with the public.

    There is also the boss’ perspective to consider. A boss is only a boss as long as there is someone or something to manage. Even though your boss will begrudgingly give up your job to automation, you can be sure that giving up a job personally isn’t on the list of things to do. Some members of the press have resorted to viewing the future as a time when robots do everything and humans don’t work, but really, this viewpoint is a fantasy. However, it’s not a fantasy that companies such as Hitachi are experimenting with robot managers. Some employees actually prefer the consistent interaction of a robot boss. It’s unlikely that managers will take this invasion of their domain sitting down, though, and you can expect them to do something to make using robots untenable.

    Robots are definitely making inroads into society and children growing up with robots being a part of their lives will likely be more accepting of them. Still, there is some debate as to just how far robot use will go and how fast it will get there. The interaction between business and the people that businesses serve will play a distinct role in how things play out. However, all this said, your job will likely be different in the future due to the influences of robots. For the most part, I feel that life will be better for everyone after the adjustment, but that the adjustment will be quite hard. Let me know your thoughts on robots at [email protected].

    IPython Magic Functions

    This is an update of a post that originally appeared on April 25, 2016.

    All of my current Python language books (and those I collaborated on with Luca Massaron), namely Machine Learning Security Principles, Algorithms for Dummies, 2nd Edition, Beginning Programming with Python For Dummies, 3rd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition, allow use of Jupyter Notebook (through Anaconda) or Google Colab to interact with the example code. Both of these IDEs extend the development environment in a number of ways, one of which is the use of magic functions. You see the magic functions in the code of these books as calls that begin with either one or two percent signs (% or %%). The most common of these magic functions is %matplotlib, which controls how IPython Notebook or Jupyter Notebook displays plot output from the code.
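
    For example, a typical notebook cell looks something like the following sketch; the %matplotlib inline form asks the notebook to render the plot directly beneath the cell, and the plot data itself is just a throwaway example:

        %matplotlib inline
        import matplotlib.pyplot as plt

        # The magic function above keeps the plot inside the notebook
        # instead of sending it to a separate window.
        plt.plot([0, 1, 2, 3], [0, 1, 4, 9])
        plt.show()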

    You can find a listing of the most common magic functions in the Python for Data Science for Dummies Cheat Sheet. None of my books use any other magic functions, so this is also a complete list of magic functions that you can expect to find in our books. However, you might want to know more. Fortunately, the site at https://damontallen.github.io/IPython-quick-ref-sheets/ provides you with a complete listing of the magic commands (and a wealth of other information about Jupyter Notebook).

    There are differences in magic function support between Jupyter Notebook and Google Colab, some of which are outlined in our books as needed. None of these differences will significantly affect your learning experience. However, it pays to know that Jupyter Notebook and Google Colab are only mostly the same, not precisely the same, and you’ll encounter differences. The screenshots in my books reflect the Jupyter Notebook version supported by each book, so what you see on your screen when using magic functions in Google Colab may differ from the book.

    Of course, you might choose to use another IDE—one that isn’t quite so magical as Jupyter Notebook or Google Colab. In this case, you need to remove those magic commands. Removing the commands generally won’t affect functionality of the code. The example will still work as explained in the book. However, the way that the IDE presents output could change. For example, instead of being inline, plots could appear in a separate window. Even though using a separate window is less convenient, either method works just fine. If you ever do encounter a magic function-related problem, please be sure to let me know at [email protected].

    Apathy, Sympathy, and Empathy in Books

    This is an update of a post that originally appeared on May 23, 2016.

    I’ve written more than a few times about the role that emotion plays in books, even technical books. Technical books such as Accessibility for Everybody: Understanding the Section 508 Accessibility Requirements and Machine Learning Security Principles are tough to write because they’re packed with emotion. The author not only must convey emotion and evoke emotions in the reader, but must also explore the emotion behind the writing. In this case, the author’s emotions may actually cause problems with the book content. The writing is tiring because the author experiences emotions in the creation of the text. The roller-coaster of emotions tends to take a toll. Three common emotions that authors experience in the writing of a book, and that authors convey to the reader as part of communicating the content, are apathy, sympathy, and empathy. These three emotions can play a significant role in the suitability of the book’s content in helping readers discover something new about the people they support, themselves, and even the author.

    It’s a mistake to feel apathy toward any technical topic. Writers need to consider the ramifications of the content and how it affects both the reader and the people that the reader serves. For example, during the writing of Artificial Intelligence for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition, Luca and I discussed the potential issues that automation creates for the people who use it and those who are replaced by it in the job market. Considering how to approach automation in an ethical manner is essential to creating a positive view of the technology that helps people use it for good. Even though apathy is often associated with no emotion at all, people are emotional creatures, and apathy often results in an arrogant or narcissistic attitude. Not caring about a topic isn’t an option.

    I once worked with an amazing technical editor who told me more than a few times that people don’t want my sympathy. When you look up sympathy in the dictionary, the result of having sympathy toward someone would seem positive, but after more than a few exercises to demonstrate the effects of sympathy on stakeholders with disabilities, I concluded that the technical editor was correct—no one wanted my sympathy. The reason is simple when you think about it. The connotation of sympathy is that you’re on the outside looking in and feel pity for the person struggling to complete a task. Sympathy makes the person who engages in it feel better, but does nothing for the intended recipient except make them feel worse. However, sympathy is still better than apathy because at least you have focused your attention on the person who benefits from the result of your writing efforts.

    Empathy is often introduced as a synonym of sympathy, but the connotation and effects of empathy are far different from sympathy. When you feel empathy and convey that emotion in your writing, you are on the inside, with the person you’re writing for, looking out. Putting yourself in the position of the people you want to help is potentially the hardest thing you can do and certainly the most tiring. However, it also does the most good.

    Empathy helps you understand that someone who loses a job to automation isn’t looking for a new career; the old one worked just fine. The future doesn’t look bright at all to them. Likewise, someone with disabilities isn’t looking for a handout, and they don’t want you to perform the task for them. They may, in fact, not feel as if they have a disability at all. It was the realization that I could use technology to create a level playing field, so that the people I wanted to help could help themselves and feel empowered by their actions, that opened new vistas for me. The experience has colored every book I’ve written since I first came to realize that empathy is the correct emotion to convey, and my books all try to convey emotion in a manner that empowers, rather than saps, the strength of my reader and the people my reader serves.

    Obviously, a good author has more than three emotions. In fact, the toolbox of emotions that an author carries is nearly limitless and it’s wise to employ them all as needed. However, these three emotions have a particular role to play and are often misunderstood by authors. Let me know your thoughts on these three emotions or about emotions in general at [email protected].

    Handling Source Code in Books

    This is an update of a post that originally appeared on April 4, 2011.

    One of the biggest conundrums for the technical writer is how to handle source code in a book. The goal is to present an easily understood example to the reader—one that demonstrates a principle in a clear and concise manner. In fact, complexity is a problem with many examples—the author tries to stuff too much information into the example and ends up obfuscating the very principles that the reader is supposed to obtain.

    There is also the problem of page count because books have a limited number of pages. The technical writer must balance the depth and functionality of the examples against a need to present as many examples as possible. Even if a book is balanced, some readers are going to be disappointed that the book doesn’t contain the example they actually needed. So, very often simplicity must win the day in creating application source code for a book, despite the desire of the author to present something more real world, something with additional glitz and polish.

    Because the goal of an example is to teach, very often the examples you see in a book have more comments than those that you see in real life. An example in a book must include as much information as possible if the code is going to fulfill its purpose. Of course, book comments should illustrate all the best principles of creating comments in real code. In short, if real-world code looked a bit more like book code, then it’s possible that developers would spend far less time trying to figure code out and more time making changes.

    Some readers will take the author to task because the code may not always provide the error trapping that production code provides. In fact, as with many teaching environments, the safety features in code are often removed for the sake of clarity. This problem plagues other environments too. In the past, it was common for woodworking magazines to post a note near the beginning of the magazine telling the reader that the safety devices had been removed for the sake of clarity and that no one in their right mind would actually work with woodworking equipment without the safety devices. Likewise, the code you see in a book often lacks sufficient error trapping, making the principle that the code demonstrates clearer, at the cost of fragility. You can usually cause book examples to break easily, but no one in their right mind would create production code like that.
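
    To make the trade-off concrete, here is an illustrative sketch (the functions and the configuration-file scenario are invented for the example): the book-style version keeps the principle, reading a value from a JSON file, front and center, while the production-style version adds the error trapping that real code needs:

        import json

        # Book-style example: short and clear, but fragile. A missing file
        # or malformed JSON stops the program with a raw traceback.
        def load_setting(path, key):
            with open(path) as source:
                return json.load(source)[key]

        # Production-style version: the same principle wrapped in the error
        # trapping that the book version deliberately leaves out.
        def load_setting_safely(path, key, default=None):
            try:
                with open(path) as source:
                    return json.load(source).get(key, default)
            except (OSError, json.JSONDecodeError):
                return default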

    Choosing good examples for a book is hard, so getting your input really is important. I may not be able to provide precisely the example you need or want, but I may be able to provide something similar in the next edition of the book. Of course, I won’t know your needs or wants unless you tell me about them. I’m always open to hearing your ideas. However, I’m not open to providing free consulting in the form of troubleshooting your error code unless you’re willing to hire me to do so. Please keep the discussion to ideas that you’d like to see in book updates by contacting me at [email protected].