Adding a Location to the Windows Path

This is an update of a post that originally appeared on February 17, 2014.

A number of my books ask the reader to perform tasks at the command line, which means the reader must be able to run applications stored on the hard drive. Windows doesn’t track the location of every application. Instead, it relies on the Path environment variable to provide a list of potential application locations. If an application doesn’t appear on the path, Windows can’t find it and simply displays an error message. So, if you need to access an application from the command line for my books, it’s important that the application appear on the path.

You can always see the current path by typing Path at the command line and pressing Enter. What you’ll see is a listing of locations, each of which is separated by a semicolon as shown here (your path will differ from mine).

Using the Path command displays the current path.
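
If you’d rather inspect the path programmatically, here’s a minimal Python sketch (assuming you have Python installed, as many of my books require) that prints each entry on its own line:

    import os

    # PATH entries are separated by os.pathsep (a semicolon on Windows).
    for entry in os.environ.get("PATH", "").split(os.pathsep):
        print(entry)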

In this case, Windows begins looking for an application in the current folder. If it doesn’t find the application there, it looks in C:\Python33\, then in C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common, and so on down the list. As the figure shows, a semicolon separates each location from the next.
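
Python can also tell you which copy of a program this search will find first. The shutil.which() function walks an ordered list of locations much as Windows does, so it’s a handy way to confirm which executable will actually run (the program name here is just an example):

    import shutil

    # Returns the full path of the first match on the search path, or
    # None when the program isn't on the path at all.
    print(shutil.which("python"))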

There are a number of ways to add a location to the Windows path. If you only need to add a path temporarily, you can simply extend the path by setting it to the new value, plus the old value. For example, if you want to add C:\MyApp to the path, you’d type Path=C:\MyApp;%Path% and press Enter. Notice that you must add a semicolon after C:\MyApp. Using %Path% appends the existing path after C:\MyApp. Here is how the result looks on screen.

Adding a path is a relatively simple process using the Path= command.
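
The same kind of temporary change works from inside a program. In this minimal Python sketch (C:\MyApp is just an example directory), the modified path lasts only for the current process and any child processes it launches:

    import os
    import subprocess

    # Prepend the example directory; nothing is saved permanently.
    os.environ["PATH"] = r"C:\MyApp" + os.pathsep + os.environ["PATH"]

    # On Windows, this child process prints the updated path using the
    # same Path command discussed above.
    subprocess.run("path", shell=True)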

Of course, there are times when you want to make the addition to the path permanent because you plan to access the associated application regularly. In this case, you must perform the task within Windows itself. The following steps tell you how.

1. Right-click This PC (or Computer) and choose Properties from the context menu, or select Settings (or System in the Control Panel). You see a Settings (System) window open, similar to the one shown here (precisely what you see depends on which version of Windows you have; the figure shows Windows 10).

The Settings window in Windows 10.

2. Click Advanced System Settings. You see the Advanced tab of the System Properties dialog box shown here.

The Advanced tab provides access to your permanent path in Windows.

3. Click Environment Variables. You see the Environment Variables dialog box shown here. Notice that there are actually two sets of variables. The top set affects only the current user, so if you plan to use the application yourself but don’t want others to use it, change the Path environment variable in the top list. The bottom set affects everyone who uses the computer; change the path there if you want everyone to be able to use the application.

There are paths that affect only the current user and those that affect the system as a whole.

4. Locate the existing Path environment variable in either the user or system variable list and click Edit. If there is no existing Path environment variable, click New instead. You see a dialog box similar to the one shown here when working with Windows 10 (other versions of Windows show a different dialog box, but the purpose is the same: to edit the path).

Each path location appears on a separate line to make it easy to edit.
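
If you need to automate the permanent change, say, across several machines, a script can edit the same per-user Path value that the dialog box edits. Here’s a minimal Python sketch (C:\MyApp is again just an example); treat it as an illustration rather than a polished tool:

    import winreg

    new_dir = r"C:\MyApp"  # example directory; substitute your own

    # The per-user Path lives under HKEY_CURRENT_USER\Environment, the
    # same value the top list of the Environment Variables dialog edits.
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Environment", 0,
                        winreg.KEY_READ | winreg.KEY_WRITE) as key:
        try:
            current, kind = winreg.QueryValueEx(key, "Path")
        except FileNotFoundError:
            current, kind = "", winreg.REG_EXPAND_SZ
        if new_dir not in current.split(";"):
            updated = current + ";" + new_dir if current else new_dir
            winreg.SetValueEx(key, "Path", 0, kind, updated)

Unlike the dialog box, this sketch doesn’t notify running programs of the change, so sign out and back in to see the effect in new command prompt windows.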

When you open a new command prompt, you’ll see the new path in play. Changing the environment variable won’t change the path for any existing command prompt windows. Having the right path available when you perform the exercises in my books is important. Let me know if you have any questions about working with the path at [email protected].

Antiquated Technology Making Developers Faster

This is an update of a post that originally appeared on November 7, 2014.

I often wonder when I create a blog post whether the technology I’m describing will stand the test of time. In this case, I asked whether the reader would like to be able to type application code faster and with fewer keystrokes. The article, The 100 Year Old Trick to Writing at 240 Words Per Minute, probably has some good advice for you, at least if you’re willing to learn the technique. It turns out that stenography isn’t only useful for court typists and the people who caption television for the hearing impaired; it’s also quite useful for developers. Yes, your IDE probably has more than a few tricks available for speeding up your typing, but I guarantee that those tricks only go so far. My personal best typing speed is 110 wpm, and that’s flat-out typing as fast as my fingers will go.

Since that original post, someone has come out with a book called Learn Plover! that describes how to use this stenographic technique in more detail. There is also a site devoted to Plover now.

Naturally, I haven’t ever used one of the devices mentioned in the article. However, a stenographer named Mirabai Knight has tried one of the devices and reproduced a 140-keystroke Python application using only 50 keystrokes. By the way, she has produced a series of interesting videos that you may want to review if you really think that Plover is for you. I don’t know of any IDE that can provide that sort of efficiency. Of course, it’s one thing for a trained stenographer to produce these sorts of results, but I’d like to hear from any developer who has used the technique about how well it worked for them. Please contact me about your experiences at [email protected]. (Oddly enough, I did hear from at least one developer who uses it successfully.)

The part that interested me most, though, is that the system, called Plover, is written in Python. (If you want to see Plover in action, check out the video at http://plover.stenoknight.com/2014/10/longer-plover-coding-snippet-in-python.html.) A number of Beginning Programming with Python For Dummies, 3rd Edition readers have written to ask me how they can use their newfound programming skills. The book contains sections that tell you about all sorts of ways in which Python is being used, but many of these uses are in large corporations. This particular use is by a small developer, someone just like you. Yet, it has big potential for impacting how developers work. Just imagine the look on the boss’s face when you turn in your application in half the time because you can type it in so much faster. So, Python isn’t just for really large companies or for scientists; it’s for everyone who needs a language that can help them create useful applications of the sort that Python is best suited to target (and I describe all of these uses in my book).

Understanding the Continuing Need for C++

This is an update of a post that originally appeared on February 23, 2015.

I maintain statistics on all my books, including C++ All-In-One for Dummies, 4th Edition. These statistics are based on reader e-mail and other sources of input I get; I even take the comments on Amazon.com into account. One of the most common C++ questions I get (not the most common, but it’s up there) is why someone would want to use the language in the first place. It’s true that C++ isn’t the language to use if you’re creating a database application, and schools now prefer Python as an introductory programming language for older children. However, it is the language to use if you’re writing low-level code that has to run fast. C++ also sees use in a vast number of libraries because library code has to be fast. For example, check out the Python libraries at some point and you’ll find C++ staring back at you. In fact, part of the Python documentation discusses how to use C++ to create extensions.
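
To see what compiled library code buys you, try a quick timing test. This minimal Python sketch (it assumes NumPy is installed; NumPy’s inner loops are compiled C rather than C++, but the point about compiled code stands) compares an interpreted loop with a compiled one:

    import timeit

    setup = "import numpy as np; data = list(range(100_000)); arr = np.array(data)"

    # Time a pure Python sum against the compiled loop inside NumPy.
    py_time = timeit.timeit("sum(data)", setup=setup, number=100)
    np_time = timeit.timeit("arr.sum()", setup=setup, number=100)
    print(f"pure Python: {py_time:.3f}s  NumPy: {np_time:.3f}s")

On most machines, the compiled version wins by a wide margin, which is exactly why so many libraries push their hot loops down into C or C++.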

I decided to look through some of my past notes to see if there was some succinct discussion of just why C++ is a useful language for the average developer to know. That’s when I ran across an InfoWorld article entitled, “Stroustrup: Why the 35-year-old C++ still dominates ‘real’ dev.” Given that the guy being interviewed is Bjarne Stroustrup, the inventor of C++, it’s a great source of information. The interview is revealing because it’s obvious that Bjarne is taking a measured view of C++ and not simply telling everyone to use it for every occasion (quite the contrary, in fact).

The bottom line in C++ development is speed. Along with speed, you also get flexibility and great access to the hardware. As with anything, you pay a price for these features. In the case of C++, you’ll experience increased development time, greater complexity, and more difficulty in locating bugs. Some people are taking a different route to C++ speed, though: they write their code in one language and then move it to C++. For example, some Python developers are now cross-compiling their code into C++ to gain a speed advantage. You can read about it in the InfoWorld article entitled, Python to C converter tool.

A lot of readers close their messages to me by asking whether there is a single language they can learn that does everything well. Unfortunately, there isn’t any such language and, given the nature of computer languages, I doubt there ever will be. Every language has a niche for which it’s indispensable. The smart developer has a toolbox full of languages suited to every job the developer intends to tackle.

Do you find that you really don’t understand how the languages in my books can help you? Let me know your book-specific language questions at [email protected]. It’s always my goal that you understand how the material you’ve learned while reading one of my books will eventually help you in the long run. After all, what’s the point of reading a book that doesn’t help you in some material way? Thanks, as always, for your staunch support of my writing efforts!

Security = Scrutiny

This is an update of a post that originally appeared on July 22, 2015.

There is a myth among administrators and developers that it’s possible to keep a machine free of viruses, adware, Trojans, and other forms of malware simply by disconnecting it from the Internet. I was reminded of this bias while writing Machine Learning Security Principles because some of the exploits I cover involve air-gapped PCs. I’m showing my age (yet again), but machines were being infected with all sorts of malware long before the Internet became any sort of connectivity solution for any system. At one time, floppy disks were the culprit, but all sorts of other avenues of attack present themselves. To dismiss things like evil USB drives that take over systems, even systems not connected to the Internet, is akin to closing your eyes and hoping an opponent doesn’t choose to hit you while you’re not looking; after all, it wouldn’t be fair. To make matters worse, you can easily find instructions for creating an evil USB drive online. However, whoever said that life was fair or that anyone involved in security plays by the rules? If you want to keep your systems free of malware, you need to stay alert and scrutinize them continually.

Let’s look at this issue another way. If you refused to do anything about the burglar rummaging around on the first floor while you listened from your bedroom on the second floor, the police would think you’re pretty odd. The first thing they’d ask is why you don’t have an alarm system in your home or, if you do have one, why you didn’t set it so that someone would have been notified about the break-in. Some homeowners go further and install an external security system around their homes that can provide a good image of the burglar, but even a good image is no substitute for actually trying to stop the intruder. Whatever you do, you can’t just stand back and do nothing. More importantly, you’d have a really hard time getting any sort of sympathy or empathy from the police. After all, if you just let a burglar take your things while you blithely refuse to acknowledge the burglar’s presence, whose fault is that? (Getting bonked on the back of the head while you are looking is another story.) That’s why you need to monitor your systems, even if they aren’t connected to the Internet. Someone wants to ruin your day, and they’re not playing around. Hackers are dead serious about grabbing every bit of usable data on your system and using it to make your life truly terrible. Your misery makes them sublimely happy. Really, take my word for it.

The reason I’m discussing this issue is that I’m still seeing stories like Chinese Hackers Target Air-Gapped Military Networks. And what about all those networks that were hacked before the Internet became a connectivity solution? Hackers have been taking networks down for a considerable time, and it doesn’t take an Internet connection to do it. The story is an interesting one because the technique used demonstrates that hackers don’t have to be particularly good at their profession to break into many networks. It’s also alarming because some of the networks targeted were contractors for the US military.

There is no tool, software, connection method, or secret incantation that can protect your system from determined hackers. I’ve said this in everything I’ve written about security. Yes, you can use a number of tools to make a system more difficult to break into and to dissuade someone who truly isn’t all that determined. Unfortunately, no matter how high you make the walls of your server fortress, the hacker can always go just a bit further to climb them. Sites like America’s Data Held Hostage (this site specializes in ransomware) tell me that most organizations could do more to scrutinize their networks. Everything I read about security confirms that you can’t trust anyone or anything when you’re responsible for it, yet organizations continue to ignore that burglar on the first floor.
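
Scrutiny doesn’t have to be exotic, either. As one small example of the idea, this minimal Python sketch (the folder name is hypothetical) hashes every file in a folder so that a later snapshot can be compared against the saved baseline; any difference means a file changed while you weren’t looking. Real monitoring tools do far more, but the principle is the same:

    import hashlib
    import pathlib

    def snapshot(folder):
        """Map each file under folder to its SHA-256 hash."""
        return {
            str(path): hashlib.sha256(path.read_bytes()).hexdigest()
            for path in pathlib.Path(folder).rglob("*")
            if path.is_file()
        }

    baseline = snapshot(r"C:\ImportantFiles")  # save this somewhere safe
    # ... later ...
    changed = {name for name, digest in snapshot(r"C:\ImportantFiles").items()
               if baseline.get(name) != digest}
    print(changed)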

There is the question of whether it’s possible to detect and handle every threat. The answer is that it isn’t. Truly gifted hackers can blindside you and cause terrifying damage to your systems. Monitoring can mitigate the damage and help you recover more quickly, but the fact is that it’s definitely possible to do better. Let me know your thoughts about security at [email protected].

Is Security Research Always Useful?

This is an update of a post that originally appeared on February 19, 2016.

Anyone involved in the computer industry likely spends some amount of time reading about the latest security issues in books such as Machine Learning Security Principles. Administrators and developers probably spend more time than most people, but no one can possibly read all the security research available today. There are so many researchers looking for so many bugs in so many places and in so many different ways that reading every security article produced would be impossible, even for someone with the time and inclination to try. You’d need to be the speediest reader on the planet (and then some) to even think about scratching the surface. So, you must ask whether all that research is actually useful or simply a method for some people to get their names on a piece of paper.

What amazes me is that, since I first wrote this blog post, I have done a considerable amount of additional reading, including research papers, and find that most exploits remain essentially the same. The techniques may differ, and they may improve, but the essentials of the exploits remain the same. It turns out that humans are the weakest link in every security chain, and social engineering attacks remain a mainstay of hackers. The one thing that has changed in seven years is that the use of machine learning and deep learning techniques has automated life for the hacker, much as these technologies have automated life for everyone else. In addition, a lack of proactive privacy makes it even easier than before for a hacker to create a believable attack by using publicly available information about an intended target.

As part of researching security, you need to consider the viability of an attack, especially with regard to your organization, infrastructure, personnel, and applications. Some attacks require physical access to the system. In some cases, you must actually take the system apart to access components in order to perform the security trick. Many IoT attacks fall into this category. Unless you or your organization is in the habit of allowing perfect strangers physical access to your systems, which might include taking them apart, you must wonder whether the security issue is even worth worrying about. You need to ask why someone would take the time to document a security issue that’s nearly impossible to see, much less perform, in a real-world environment. More importantly, the moment you see that a security issue requires physical access to the device, you can probably stop reading.

You also find attacks that require special equipment to perform. The article, How encryption keys could be stolen by your lunch, discusses one such attack. In fact, the article contains a picture of the special equipment that you must build to perpetrate it. Placing said equipment into a piece of pita bread adds a fanciful twist to something that is already quite odd and pretty much unworkable, given that you must be within 50 cm (19.6 in) of the device you want to attack (assuming that the RF transmission conditions are perfect). Except for the interesting attack vector (a piece of pita bread), you really have to question why anyone would ever perpetrate this attack when social engineering and a wealth of other attacks require no special equipment, are highly successful, and work from a much longer distance.

It does pay to keep an eye on the latest and future targets of hacker attacks. Even though many IoT attacks are the stuff of James Bond today, hackers are paying attention to IoT, so it pays to secure your systems, which are likely wide open right now. As one of my experiments for Machine Learning Security Principles, I actually did hack my own smart thermostat (after which, I immediately improved security). The number of IoT attacks is increasing considerably, so ensuring that you maintain electrical, physical, and application security over your IoT devices is important, but not to the exclusion of other needs.

A few research pieces become more reasonable when they discuss outlandish hacks that could potentially happen after an initial break-in. The hack discussed in Design flaw in Intel chips opens door to rootkits is one of these. You can’t perpetrate the hack until after breaking into the system some other way, but the break-in has serious consequences once it occurs. Even so, most hackers won’t take the time because they already have everything they need; the hack is overkill. However, this particular kind of hack should sound alarms in the security professional’s head. The Windows 11 requirement for the TPM 2.0 chip is supposed to make this kind of attack significantly harder, perhaps impossible, to perform. Of course, someone has already found a way to bypass the TPM 2.0 chip requirement, and it doesn’t help that Microsoft actually signed off on a piece of rootkit malware for installation on a Windows 11 system. So, security research, even when the actual piece of research isn’t particularly helpful, can become a source of thought experiments about what a hacker might do.

The articles that help most provide a shot of reality into the decidedly conspiracy-oriented world of security. For example, Evil conspiracy? Nope, everyday cyber insecurity, discusses a series of events that everyone initially thought pointed to a major cyber attack. It turns out that the events occurred at the same time by coincidence. The article author thoughtfully points out some of the reasons that the conspiracy theories seemed a bit out of place at the outset anyway.

It also helps to know the true sources of potential security issues. For example, the articles, In the security world, the good guys aren’t always good and 5 reasons why newer hires are the company’s biggest data security risk, point out the sources you really do need to consider when creating a security plan. These are the sorts of articles that should attract your attention because they describe a security issue that you really should think about.

The point is that you encounter a lot of information out there that doesn’t help you make your system any more secure. It may be interesting if you have the time to read it, but the tactics truly aren’t practical and no hacker is going to use them. Critical thinking skills are your best asset when building your security knowledge. Let me know about your take on security research at [email protected].

Automation and the Future of Human Employment

This is an update of a post that originally appeared on May 9, 2016.

It wasn’t long ago that I wrote Robotics and Your Job to consider the role that robots will play in human society in the near future. Of course, robots are already doing mundane chores, and that list of chores will grow as robot capabilities increase. The question of what sorts of work humans will do in the future has crossed my mind quite a lot as I’ve written a number of AI, machine learning, and deep learning books such as Artificial Intelligence for Dummies, 2nd Edition. In fact, Luca and I have discussed the topic in depth. It isn’t just robotics; the whole issue of automation is what matters. Robots actually fill an incredibly small niche in the much larger topic of automation. Although articles like The end of humans working in service industry? seem to say that robots are the main issue, automation comes in all sorts of guises. So, here it is seven years later: robot theme parks are still in the news, and robots are making an impact as security guards. In addition, Huis Ten Bosch still has Robot Kingdom going (you can select either Japanese or English as needed to read the information). The fact of the matter is that in seven years robots have become a significant part of most people’s lives, and the impact will continue to grow. Not that I’m actually expecting an I, Robot experience any time soon.

My vision for the future is that people will be able to work in occupations with lower risks, higher rewards, and greater interest. Unfortunately, not everyone wants a job like that. Some people really do want to go to work, clock in, place a tiny cog in a somewhat large wheel all day, clock out, and go home. They want something mindless that doesn’t require much effort, so losing service and assembly-line jobs to automation is a problem for them. In The Robots are Coming for Your Job, Too, the author paints a pretty gloomy picture for anyone who thinks their service job will still exist in 50 years. The reality is that any job that currently pays under $25.00 an hour is likely to become a victim of automation. Many people insist that they’re irreplaceable, but the fact is that automation can easily take their jobs, and employers are looking forward to the change because automation doesn’t require healthcare, pensions, vacation days, sick days, or salaries. Most importantly, automation does as it’s told.

In the story The rise of greedy robots, the author lays out the basis for an increase in automation that maximizes business profit at the expense of workers. Articles such as On the Phenomenon of Bullshit Jobs tell why people are still working a 40-hour work week when it truly isn’t necessary to do so. In short, if you really do insist on performing a task that is essentially pointless, the government and industry are perfectly willing to let you do so until technology is so entrenched that it’s no longer possible to do anything about it (no, I’m not making this up). As mentioned earlier, even some relatively essential jobs, such as security, have a short life expectancy with the way things are changing.

The question of how automation will affect human employment in the future remains. Theoretically, people could work a 15-hour work week even now, but then we’d have to give up some of our consumerism: the purchase of gadgets we really don’t need. Earlier, I talked about jobs that are safer, more interesting, and more fulfilling. There are also those pointless jobs that the government will doubtless prop up at some point to keep people from rioting. However, there is another occupation that will likely become a major source of employment, but only for the nit-picky, detail-oriented person. In The thin line between good and bad automation, the author explores the problem of scripts calling scripts. Even though algorithms will eventually create and maintain other algorithms, which in turn means that automation will eventually build itself, someone will still have to monitor the outcomes of all that automation. In addition, the search for better algorithms continues (as described in The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World and More data or better models?). Of course, these occupations still require someone with a great education and a strong desire to do something significant as part of their occupation.

The point of all this speculation is that it isn’t possible to know precisely how the world will change due to the effects of automation, but it will most definitely change. Even though automation has limits today, scientists are working on methods to extend it even further, so that the world science fiction authors have written about for years will finally come into being (perhaps not quite in the way they had envisioned, however). Your current occupation may not exist 10 years from now, much less 50 years from now. The smart thing to do is to assume your job is going to be gone and that you really do need a Plan B in place, one that may call for an increase in flexibility, training, and desire to do something interesting, rather than the same mundane task you’ve plodded along doing for the last ten years. Let me know your thoughts on the effects of automation at [email protected].

Technology and Child Safety

This is an update of a post that originally appeared on January 20, 2016.

A little over seven years ago, I wrote that I had read an article in ComputerWorld, Children mine cobalt used in smartphones, other electronics, that had me thinking yet again about how people in rich countries tend to ignore the needs of those in poor countries. I had sincerely hoped at the time that things would be different, better, in seven years. Well, they’re worse! We’ve increased our use of cobalt dramatically in order to create supposedly green cars. The picture at the beginning of the ComputerWorld article says it all, but the details will have you wondering whether a smartphone or an electric car really is worth some child’s life. That’s right: any smartphone or electric car you buy may be killing someone, and in a truly horrid manner. Children as young as 7 years old are mining the cobalt needed for the batteries (and other components) in the smartphones and electric cars that people seem to feel are so necessary for life. (They aren’t, you know; food, water, clothing, shelter, sleep, air, and reproduction are necessities. Everything else is a luxury.)

The problem doesn’t stop when someone gets rid of the smartphone, electric car, or other technology. Other children end up dismantling the devices sent for recycling. That’s right: a rich country’s efforts to keep electronics out of its landfills are also killing children, because countries like India put these children to work taking the devices apart in unsafe conditions. Recycled wastes go from rich countries to poor countries because the poor countries need the money for necessities, like food. Often, these children are incapable of working by the time they reach 35 or 40 due to health issues induced by their forced labor. In short, the quality of their lives is made horribly low so that people in rich countries can enjoy something that truly isn’t necessary for life. To make matters worse, the vendors of these products build in obsolescence (making the products unrepairable) so they can sell more products and make more money, increasing the devastation visited on children.

I’ve written other blog posts about the issues of technology pollution. However, the emphasis of those previous articles has been on the pollution itself. Taking personal responsibility for the pollution you create is important, but we really need to do more. Robotic (autonomous) mining is one way to keep children out of the mines, and projects such as UX-1 show that it’s entirely possible to use robots in place of people today. The weird thing is that autonomous mining could save up to 80% of today’s mining costs, so you have to wonder why manufacturers aren’t rushing to employ this solution.

In addition, off-world mining would keep the pollution in space, rather than on planet Earth. Of course, off-world mining also requires a heavy investment in robots, but it promises to provide a huge financial payback in addition to keeping Earth a bit cleaner. The point is that there are alternatives that we’re not using. Robotics presents an opportunity to make things right with technology, and I’m excited to be part of that answer in writing books such as Machine Learning Security Principles; Artificial Intelligence for Dummies, 2nd Edition; Algorithms for Dummies, 2nd Edition; Python for Data Science for Dummies; and Machine Learning for Dummies, 2nd Edition.

Unfortunately, companies like Apple, Samsung, and many others simply thumb their noses at laws that are in place to protect the children in these countries because they know you’ll buy their products. Yes, they make official statements, but read their statements in that first article and you’ll quickly figure out that they’re excuses, and poorly made excuses at that. They don’t have to care because no one is holding them to account. People in rich countries don’t care because their own backyards aren’t sullied and their own children remain safe. It’s not that I have a problem with technology; quite the contrary. I have a problem with the manner in which technology is currently being made and supported. We need to do better. So, the next time you think about buying electronics, consider the real price of that product. Let me know what you think about polluting other countries to keep your country clean at [email protected].

Making Algorithms Useful

This is an update of a post that originally appeared on December 2, 2015.

Writing about machine learning and deep learning in my various books has been interesting because it turns math into something more than a way to calculate. Machine learning is about having inputs and a desired result, and then asking the machine to create an algorithm that will produce the desired result from the inputs. It’s about generalization. You know the specific inputs and the specific results, but you want an algorithm that will provide similar results given similar inputs for any set of random inputs (the short sketch after the list below shows this idea in code). This is more than just math. In fact, there are five schools of thought (tribes) regarding machine learning algorithms that Luca and I introduce you to in books such as Machine Learning Security Principles; Algorithms for Dummies, 2nd Edition; Python for Data Science for Dummies; and Machine Learning for Dummies, 2nd Edition:

• Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.
• Connectionists: The origin of this tribe is in neuroscience. This group relies on backpropagation to solve problems.
• Evolutionaries: The origin of this tribe is in evolutionary biology. This group relies on genetic programming to solve problems.
• Bayesians: The origin of this tribe is in statistics. This group relies on probabilistic inference to solve problems.
• Analogizers: The origin of this tribe is in psychology. This group relies on kernel machines to solve problems.
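
To make the generalization idea concrete, here’s a minimal sketch using scikit-learn (any similar library would do, and the numbers are toy data). The algorithm sees a few input/result pairs and then produces a sensible result for an input it never saw:

    from sklearn.linear_model import LinearRegression

    # Known inputs and the results we want the algorithm to reproduce.
    X = [[1], [2], [3], [4]]
    y = [2, 4, 6, 8]

    # Fit: ask the machine to build a rule linking inputs to results.
    model = LinearRegression().fit(X, y)

    # Generalization: a sensible answer for an input the model never saw.
    print(model.predict([[10]]))  # approximately [20.]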

Of course, the problem with any technology is making it useful. I’m not talking about useful in a theoretical sense, but useful in a way that affects everyone. In other words, you must create a need for the technology so that people will continue to fund it. Machine learning and deep learning are already part of many of the things you do online. For example, when you buy a product on Amazon and Amazon suggests other products you might want to add to your cart, you’re seeing the result of machine learning. Part of the content in the chapters of our book is devoted to pointing out these real-world uses for machine learning.
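
As a toy illustration of that Amazon example, a recommender can be as simple as counting which products tend to appear in the same cart. Real systems are far more sophisticated, and the carts here are invented, but the sketch shows the flavor:

    from collections import Counter

    # Invented purchase history: each inner list is one customer's cart.
    carts = [
        ["laptop", "mouse", "keyboard"],
        ["laptop", "mouse"],
        ["phone", "case"],
        ["laptop", "keyboard"],
    ]

    def recommend(item, carts, count=3):
        """Suggest products that most often appear alongside item."""
        seen_with = Counter()
        for cart in carts:
            if item in cart:
                seen_with.update(p for p in cart if p != item)
        return [p for p, _ in seen_with.most_common(count)]

    print(recommend("laptop", carts))  # ['mouse', 'keyboard']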

As I’ve written new books and updated existing ones, I’ve seen an almost magical progression in the capabilities of machine learning and deep learning applications, such as ChatGPT (Chat Generative Pre-Trained Transformer), which can produce some pretty amazing output.

Some of these applications, such as Siri and Alexa, continue to learn as you use them. The more you interact with them, the better they know you and the better they respond to your needs. The algorithms that these machine learning systems create get better and better as the database of your specific input grows. The algorithms are tuned to you specifically, so the experience one person has is different from an experience another person will have, even if the two people ask the same question.

Machine learning is a big mystery to many people today, while other people have gained enough experience to have strong opinions about it. Because I continue to write new machine learning/deep learning books and update others, it would be interesting to hear your questions about machine learning and deep learning. After all, I’d like to tune the content of my books to meet the most needs that I can. Where do you see this technology headed? What confuses you about it? Talk to me at [email protected].