Locating the Machine Learning for Dummies, 2nd Edition Source Code

A reader recently wrote to say that the source code for Machine Learning for Dummies, 2nd Edition on GitHub is incomplete. GitHub wasn’t originally one of the download sources for the book’s code; we had used that site as an intermediary code location, so it wasn’t complete. The GitHub site code is complete now, but we’d still prefer that you download the code from one of the two sites listed in the book: my website at http://www.johnmuellerbooks.com/source-code/ (just click the book’s link) or the Wiley site at https://www.wiley.com/en-us/Machine+Learning+For+Dummies%2C+2nd+Edition-p-9781119724056 (just click the Downloads link, then the download link next to the source code you want). The two preferred sites offer the source code in either Python or R form in case you don’t want to download both.

When you get the downloadable source, make sure you remove it from the archive as described in the UnZIPping the Downloadable Source post. Using the downloadable source helps you avoid some of the issues described in the Verifying Your Hand Typed Code post. Please let me know whenever you encounter problems with the downloadable source for a book at [email protected].

Considering the Four Levels of Intelligence Management

One of the reasons that Luca and I wrote Artificial Intelligence for Dummies, 2nd Edition was to dispel some of the myths and hype surrounding machine-based intelligence. If anything, the amount of ill-conceived and blatantly false information surrounding AI has only increased since then. So now we have articles out there like Microsoft’s Bing wants to unleash ‘destruction’ on the internet that espouse ideas that can’t work at all because AIs feel nothing. Of course, there is the other end of the spectrum in articles like David Guetta says the future of music is in AI, which also describe something that can’t work because computers aren’t creative. A third kind of article starts to bring some reality back into the picture, such as Are you a robot? Sci-fi magazine stops accepting submissions after it found more than 500 stories received from contributors were AI-generated. All of this is interesting to read because I want to see how people react to a technology that I know is simply a technology and nothing more. Anthropomorphizing computers is a truly horrible idea because it leads to the thoughts described in The Risk of a New AI Winter. Another AI winter would be a loss for everyone because AI really is a great tool.

As part of writing Python for Data Science for Dummies and Machine Learning for Dummies, 2nd Edition, Luca and I considered issues like the seven kinds of intelligence and how an AI can only partially express most of them. We even talked about how the five mistruths in data can cause issues such as skewed or even false results in machine learning output. In Machine Learning Security Principles, I point out the many ways in which humans can use superior intelligence to easily thwart an AI. Still, people seem to refuse to believe that an AI is the product of clever human programmers, a whole lot of data, and methods of managing algorithms. Yes, it’s all about the math.

This post goes to the next step. During my readings of various texts, especially those of a psychological and medical variety, I’ve come to understand that humans embrace four levels of intelligence management. We don’t actually learn in a single step, as many people might think; we learn in four steps, with each step providing new insights and capabilities. Consider these learning management steps:

  1. Knowledge: A person learns about a new kind of intelligence. That intelligence can affect them physically, emotionally, mentally, or some combination of the three. However, simply knowing about something doesn’t make it useful. An AI can accommodate this level (and even excel at it) because it has a dataset that is simply packed with knowledge. However, the AI only sees numbers, bits, and values, nothing more. There is no comprehension, as there is with humans. Think of knowledge as the what of intelligence.
  2. Skill: After working with new knowledge for some period of time, a human builds a skill in using that knowledge to perform tasks. In fact, this is very often the highest level that a human will achieve with a given bit of knowledge, which I think is the source of confusion for some people with regard to AIs. Training an AI model, that is, assigning weights to a neural network built from algorithms, gives an AI the appearance of skill. However, the AI isn’t actually skilled; it can’t accommodate variations as a human can. What the AI is doing is following the parameters of the algorithm used to create its model. This is the highest step that any AI can achieve. Think of skill as the how of intelligence.
  3. Understanding: As a human develops a skill and uses the skill to perform tasks regularly, new insights develop and the person begins to understand the intelligence at a deeper level, making it possible for a person to use the intelligence in new ways to perform new tasks. A computer is unable to understand anything because it lacks self-awareness, which is a requirement for understanding anything at all. Think of understanding as the why of intelligence.
  4. Wisdom: Simply understanding an intelligence is often not enough to ensure the use of that intelligence in a correct manner. When a person makes enough mistakes in using an intelligence, wisdom in its use begins to take shape. Computers have no moral or ethical ability—they lack any sort of common sense. This is why you keep seeing articles about AIs that are seemingly running amok; the AI has no concept whatsoever of what it is doing or why. All that the AI is doing is crunching numbers. Think of wisdom as the when of intelligence.

It’s critical that society begin to see AIs for what they are: exceptionally useful tools that can perform certain tasks requiring only knowledge and a modicum of skill, and that can augment a human when some level of intelligence management above those levels is required. Otherwise, we’ll eventually get engulfed in another AI winter that thwarts the development of further AI capabilities that could help people do things like go to Mars, mine minerals in space in an environmentally friendly way, cure diseases, and create new thoughts that have never seen the light of day before. What are your thoughts on intelligence management? Let me know at [email protected].

Creating Useful Comments

This is an update of a post that originally appeared on November 21, 2011.

A major problem with most applications today is that they lack useful comments. It’s impossible for anyone to truly understand how an application works unless the developer provides comments at the time the code is written. This issue extends to the developer as well. A month after someone writes an application, it’s possible to forget the important details about it; for some of us, the interval between writing and forgetting is even shorter. Despite my best efforts and those of many other authors, many online examples lack any comments whatsoever, making them nearly useless to anyone who lacks the time to run the application through a debugger to discover how it works.

Good application code comments help developers of all stripes in a number of ways. At a minimum, the comments you provide as part of your application code provide these benefits (a short commented sketch follows the list):

  • Debugging: It’s easier to debug an application that has good comments because the comments help the person performing the debugging understand how the developer envisioned the application working.
  • Updating: Anyone who has tried to update an application that lacks comments knows the pain of trying to figure out the best way to do it. Often, an update introduces new bugs because the person performing the update doesn’t understand how to interact with the original code.
  • Documentation: Modern IDEs often provide a means of automatically generating application documentation based on the developer comments. Good comments significantly reduce the work required to create documentation and sometimes eliminate it altogether.
  • Technique Description: You get a brainstorm in the middle of the night and try it in your code the next day. It works! Comments help you preserve the brainstorm that you won’t get back later no matter how hard you try. The technique you use today could also solve problems in future applications, but it may be lost unless you document it.
  • Problem Resolution: Code often takes a circuitous route to accomplish a task because the direct path will result in failure. Unless you document your reasons for using a less direct route, an update could cause problems by removing the safeguards you’ve provided.
  • Performance Tuning: Good comments help anyone tuning the application understand where performance changes could end up causing the application to run more slowly or not at all. A lot of performance improvements end up hurting the user, the data, or the application because the person tuning the application didn’t have proper comments for making the adjustments.

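To make these benefits concrete, here’s a minimal, hypothetical Python sketch (the function and data format are invented for illustration, not taken from any book example) showing comments that capture the technique and the reasoning rather than restating the code:

```python
from datetime import datetime, timezone


def parse_order_timestamp(raw: str) -> datetime:
    """Convert a raw order timestamp string into an aware UTC datetime.

    The upstream export writes timestamps without a timezone marker, so
    this function assumes UTC. Changing that assumption silently shifts
    every report by the local offset.
    """
    # Problem resolution: strptime() is used instead of fromisoformat()
    # because the export uses slashes ("2023/04/05 10:15:00"), which
    # fromisoformat() rejects.
    parsed = datetime.strptime(raw.strip(), "%Y/%m/%d %H:%M:%S")

    # Technique description: attach the UTC timezone explicitly so later
    # comparisons against aware datetimes don't raise TypeError.
    return parsed.replace(tzinfo=timezone.utc)
```

Notice that the comments record decisions a debugger can’t reveal: why the less obvious parsing call was chosen and which assumption an update must preserve.
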
The need for good comments means creating a comment that has the substance required for someone to understand and use it. Unfortunately, it’s sometimes hard to determine in the moment what a good comment contains because you already know what the code does and how it does it. Consequently, having a guide as to what to write is helpful. When writing a comment, ask yourself these questions (the sample comment after the list shows one way to answer them):

  • Who is affected by the code?
  • What is the code supposed to do?
  • When is the code supposed to perform this task?
  • Where does the code obtain resources needed to perform the task?
  • Why did the developer use a particular technique to write the code?
  • How does the code accomplish the task without causing problems with other applications or system resources?

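Here’s one hedged way to turn those questions into an actual comment; the function, paths, and job names below are hypothetical and exist only to show the pattern:

```python
import tarfile
from pathlib import Path


def archive_session_logs(session_id: str, log_dir: str = "/var/log/myapp") -> str:
    """Compress the logs for one session and return the archive path.

    Who:   Called by the nightly cleanup job and by support scripts.
    What:  Bundles every <session_id>*.log file into a single .tar.gz.
    When:  Run only after the session closes; archiving a live session
           would capture a partially written log.
    Where: Reads from log_dir and writes the archive alongside the logs.
    Why:   tarfile is used instead of shelling out to tar so the job
           also works on Windows build agents.
    How:   Adds files one at a time so memory use stays flat even for
           very large sessions.
    """
    archive = Path(log_dir) / f"{session_id}.tar.gz"
    with tarfile.open(archive, "w:gz") as bundle:
        for log_file in sorted(Path(log_dir).glob(f"{session_id}*.log")):
            bundle.add(str(log_file), arcname=log_file.name)
    return str(archive)
```
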
There are many other questions you could ask yourself, but these six questions are a good start. You won’t answer every question for every last piece of code in the application because sometimes a question isn’t pertinent. As you work through your code and gain experience, start writing down questions you find yourself asking. Good answers to aggravating questions produce superior comments. Whenever you pull your hair out trying to figure out someone’s code, especially your own, remember that a comment could have saved you time, frustration, and effort. What is your take on comments? Let me know at [email protected].

Considering Perception in User Interface Design

This is an update of a post that originally appeared on January 24, 2014.

The original version of this article had humans seeing images in as little as 13 ms. Nothing much has really changed since then. I read a few articles recently that reminded me of a user interface design discussion I once had with a friend of mine. First, let’s discuss the articles:

  • The first, Everything we see is a mash-up of the brain’s last 15 seconds of visual information, says that humans can actually see something in as little as 15 ms. That short time frame provides the information the brain needs to target a point of visual focus.
  • The second, older article, ‘Sixth Sense’ Can Be Explained by Science, explains how the sixth sense that many people believe is supernatural in origin is actually explainable through scientific means. The brain detects a change (probably as a result of that 15 ms view) and informs the rest of the mind about it. However, the change hasn’t been targeted for closer inspection, so the viewer can’t articulate the change.
  • The third article, The silent “sixth” sense, is a more scientific and slightly modernized view of the second article. In short, you know the change is there, but you can’t say what has actually changed.

So, you might wonder what this has to do with website design. It turns out that you can use these facts to help focus user attention on specific locations on your site. Now, I’m not talking here about the use of subliminal messaging, which is clearly illegal in many locations. Rather, it’s possible to do what a friend suggested when designing a site: change a small, but noticeable, element each time the page is loaded. Of course, you need not reload the entire page. Technologies such as Asynchronous JavaScript And XML (AJAX) make it possible to reload just a single element as needed. (Changing a single element in a desktop application is incredibly easy because nothing special is needed to do it.) The point of making this change is to cause the viewer to look harder at the element you most want them to focus on. It’s just another method for ensuring that the right area of a page or other user interface element gets viewed.

However, the articles also make for interesting thoughts about the whole issue of user interface design. Presentation is an important part of design. Your application must use good design principles to attract attention. However, these articles also present the idea of time as a factor in designing the user interface. For example, the order in which application elements load is important because the brain can perceive the difference. You might not consciously register that element A loaded some number of milliseconds sooner than element B, but subconsciously, element A attracts more attention because it registered first and your brain targeted it first. This essentially explains the difference between UX and UI, since the two are eternally intermingled in the world of development.

As science continues to probe the depths of perception, it helps developers come up with more effective ways in which to present information in a way that enhances the user experience and the benefit of any given application to the user. However, in order to make any user interface change effective, you must apply it consistently across the entire application and ensure that the technique isn’t used to an extreme. Choosing just one element per display (whether a page, window, or dialog box) to change is important. Otherwise, the effectiveness of the technique is diluted and the user might not notice it at all.

What is your take on the use of perception as a means of controlling the user interface? Do you feel that subtle techniques like the ones described in this post are helpful? Let me know your thoughts at [email protected].

In Praise of Dual Monitors

This is an update of a post that originally appeared on February 5, 2014.

In reading many of my old blog posts, I’m finding that many of the things I said way back when apply equally well today. I’ve received email from budding developers who use their smartphone to code. Just how they perform this trick is beyond me because I squint at the screen performing the simplest of tasks and often find that my fingers are two sizes too big. I have tried coding on a tablet, a laptop, and (oddly enough) my television. While they do work, they’re not particularly efficient, so I’ll stick with my dual-monitor desktop system for coding.

Yes, I know that some developers use more than just two monitors, but I find that two monitors work just fine. The first monitor is my work monitor—the monitor I use for actually typing code. The second monitor is my view monitor. When I run the application, the output appears on the second monitor so that I can see the result of changes I’ve made. Using two monitors lets me easily correlate the change in code to the changes in application design. Otherwise, I’d be wasting time switching between the application output and my IDE.

I also use two monitors when writing my books. The work monitor contains my word processor, while my view monitor contains the application I’m writing about. This is possibly one time when a third monitor could be helpful—one to hold the word processor, one to hold the IDE, and one to view the application output. However, in this case, a third monitor could actually slow things down because the time spent viewing the output of an example is small when compared to creating a production application.

The concept of separating work from the source of information used to perform the work isn’t new. People have used the idea for thousands of years, in fact. For example, when people employed typewriters to produce printed text, the typist used a special stand to hold the manuscript being typed. The idea of having a view of your work and another surface to actually work on appears quite often throughout history because it’s a convenient way to perform tasks quickly. By employing dual monitors, I commonly get between a 15 and 33 percent increase in output, simply because I can see my work and its associated view at the same time.

Working with dual monitors not only saves time, but can also reduce errors. By typing as I view the output of applications, I can more reliably transcribe the text of labels and other information the application provides. The same holds true when viewing information sources found in other locations. Seeing the information as I type makes errors much less likely.

Don’t get the idea that I support using dual monitors in every situation. Many consumer-oriented computer uses are served just fine with a single monitor. For example, there isn’t a good reason to use two monitors when viewing e-mail in many cases—at least, not at the consumer level (you could make a case for using dual monitors when working with e-mails and a calendar to manage tasks, for example). Dual monitors commonly see use in the business environment because people aren’t necessarily creating their own information source—the information comes from a variety of sources that the user must view in order to use reliably.

Do you see yourself using dual monitors? If you use such a setup now, how do you employ it? Let me know at [email protected].

Choosing Variable Names

This is an update of a post that originally appeared on January 17, 2014.

It often surprises me that developers seem to choose completely useless variable names like MyVariable when creating an application. Although MyVariable could be an interesting variable name for an example in a book, it never has a place in any sort of production code. Even then, I try to create book examples with meaningful variable names, especially when getting past the initial “Hello World” example. Variable names are important because they tell others:

  • What sort of information the variable stores
  • When the variable is commonly used
  • Where the variable is used
  • How to use the variable correctly
  • Why the variable is important

In some cases, the variable name could even indicate who created the variable, although this sort of information is extremely rare. If you never thought a variable name should contain all that information, then perhaps you haven’t been choosing the best variable names for your applications.

Even with these requirements in mind, choosing a variable name can be uncommonly hard if you want to maximize the name’s value to both yourself and other developers. Some organizations make the selection process easier by following certain conventions. If you don’t have an organizational style guide for variable naming, modern programming languages like Python commonly provide a style guide for you to use. These style guides often consider a great deal more than simply variable naming and include issues like the amount of indentation to use. In some respects, they become quite draconian in their approach. Other style guides, like the one for C#, are less time-consuming to learn, which is a good thing because most developers have better things to do with their time than learn nitpicky details. A few languages, like C++, suffer from an abundance of style guides. It’s best to choose one of them, such as the Google C++ Style Guide, and stick with it.

However, let’s say that you want to create your own style guide for your organization to use because you use multiple languages and having a different style guide for each language seems just a bit absurd, not to mention a source of needless complexity. In this case, you need to ask yourself a series of questions to determine how you want the style guide to work, such as these (the short sketch after the list shows how the answers translate into names):

  1. What sort of casing do you want to use for what types of variables?
  2. What information does the variable contain (such as a list of names)?
  3. How is the variable used (such as locally or globally, or to contain coordinates, or a special kind of object)?
  4. When appropriate, what kind of information does the variable contain (such as a string or the coordinate of a pixel on screen)?
  5. Is the variable used for a special task (such as data conversion)?
  6. What case should prefixes, suffixes, and other naming elements appear in when a language is case sensitive?

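Here’s a quick, hypothetical Python illustration of how answers to those questions translate into names (the values are invented, and the good names follow PEP 8’s lower_case_with_underscores convention):

```python
# Names that answer none of the questions above.
mv = 42
x2 = "Albert Kingsley"
r = len(x2)

# The same values with names that say what they hold and how they're used.
max_login_attempts = 42                 # an upper bound, not a measurement
customer_full_name = "Albert Kingsley"
name_length_in_chars = len(customer_full_name)

# With descriptive names, a misuse like this stands out in review,
# even though both variables hold plain integers.
cursor_x_position = name_length_in_chars  # almost certainly a bug
```
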
The point is that you need to choose variable names with care so that you know what they mean later. Carefully chosen variable names make it possible for you to read your code with greater ease and locate bugs a lot faster. They also make it easier for others to understand your code and for you to remember what the code does months after you’ve written it. Most important of all, useful variable names help you see immediately that a variable is being used the wrong way, such as assigning the length of a name string to a coordinate position on screen (even though both variables are integer values). Let me know your thoughts about variable naming at [email protected].

Sending Comments on My Books

This is an update of a post that originally appeared on February 23, 2012.

I regularly receive a stack of e-mail about my books. Readers question everything, and it makes me happy to see that they’re reviewing my books so closely. It means that I’m accomplishing my principal goal: helping you understand computers in every possible way so that you can be more productive and accomplish tasks with less effort. When I make something easier for someone and they tell me about it, the grin extends from one side of my face to the other. It really makes my day.

Some readers are still asking me if it’s OK to send me comments. I definitely want to see any constructive comment that you have. Anything that helps me understand your needs better makes it possible for me to write better books. I really do want to hear from you. The main thing I need from a comment is that it be constructive. A comment that lacks details isn’t helpful because I’ve written so many books. Emotional comments without any substance are especially hard to deal with because they leave me wondering what you need from me. Here are some of the questions to answer when creating a constructive comment:

  • What is the title of the book you’re reading (be sure to include the edition number, which is usually right on the cover unless it’s a first edition)?
  • Are you using the downloadable source code if this is a programming book?
  • Did you install the recommended version of any required software using the instructions found in the book?
  • Which page contains the error (if you’re using Kindle or other electronic media, please provide a chapter number and section title as a minimum)?
  • What do you view as an error on that page?
  • How would you fix the error?
  • What sort of system are you running?
  • When did you encounter the problem?

The more information you provide, the easier it is for me to understand the issue and provide you with feedback. In many cases, I’ll upload the fix to my blog so that everyone can benefit from the response (so be sure you keep an eye on my blog for new entries). I work hard to ensure that my books are as error free as possible, but everyone makes mistakes. Also remember that sometimes mitigating factors, such as differences in software versions or anticipated hardware, make it appear that there is an error in the book when you’re really looking at a difference in environment. Help me provide you with better books—send me comments!

There are a few things that I won’t do for you. I won’t help you pass an exam at school. Your learning experience is important to me, which means that I want you to continue your education by working through the instruction on your own. I also don’t provide free consulting. This means I won’t check the code that you created on your own for errors. In addition, if you don’t use the downloadable source, be sure to read Verifying Your Hand Typed Code for restrictions on the level of support that I provide. I’ll help you with any book-specific question, but I draw the line at that point. Let me know if you have any input or insights on my books at [email protected].

Book Reviews – Doing Your Part

This is an update of a post that originally appeared on October 4, 2013.

Readers contact me quite a lot about my books. On an average day, I receive around 65 reader e-mails about a wide range of book-related topics. Many of them are complimentary about my books and it’s hard to put down in words just how much I appreciate the positive feedback. Often, I’m humbled to think that people would take time to write.

There is another part to reader participation in books, however, and it doesn’t have anything to do with me—it has to do with other readers. When you read one of my books and find the information useful, it’s helpful to write a review about it so that others can know what to expect. I want to be sure that every reader who purchases one of my books is happy with that purchase and gets the most possible out of the book. The wording that the publisher’s marketing staff and I use to describe a book represents our viewpoint of that book and not necessarily the viewpoint of the reader. The only way that other readers will know how a book presents information from the reader perspective is for other readers to write reviews. A good review will tell:

  • What you liked about the book
  • How it met your needs
  • What it provides in the way of usable content
  • Whether you liked any intangibles, such as the author’s writing style
  • When you used the content to obtain a new job or learn a new skill
  • Who recommended the book to you

The review should also present any negatives (obviously, I want to know about the flaws, too, so that I can correct them in the next edition of the book and also discuss them on my blog):

  • Did the book provide enough detailed procedures to accomplish the tasks it describes?
  • Are there significant technical flaws, and why do you feel they’re an issue?
  • Are there enough graphics to augment the text and make it clearer?
  • Is the source code useful?

Many reviewing venues, such as the one found on Amazon, also ask you to provide a rating for the book. You should rate the book based on your experience with other books and on how this particular book met your needs in learning a new topic. The kind of review to avoid writing is a rant or one that isn’t actually based on reading the whole book. As always, I’m here (at [email protected]) to answer any questions you have, and many of your questions have appeared as blog posts when the situation warranted.

So, just where do you make these reviews? The publishers sometimes provide a venue for expressing your opinion and you can certainly go to the publisher site to create such a review. I personally prefer to upload my reviews to Amazon because it’s a location that many people frequent to find out more about books. You can go to the site, click Write a Customer Review (near the bottom of the page), and then provide your viewpoint about the book.

Thank you in advance for taking the time and effort required to write a review. I know it’s time consuming, but it’s an important task that only you can perform.

Fooling Facial Recognition Software

One of the points that Luca and I made in Artificial Intelligence for Dummies, 2nd Edition; Algorithms for Dummies, 2nd Edition; Python for Data Science for Dummies; and Machine Learning for Dummies, 2nd Edition is that AI is all about algorithms and that it can’t actually think. An AI appears to think due to clever programming, but the limits of that programming quickly become apparent under testing. In the article, U.S. Marines Outsmart AI Security Cameras by Hiding in a Cardboard Box, the limits of AI are almost embarrassingly apparent because the AI failed to catch even one of the Marines. In fact, it doesn’t take a Marine to outsmart an AI; the article This Clothing Line Tricks AI Cameras Without Covering Your Face tells how to do it and look fashionable at the same time. Anthropomorphizing AI and making it seem like more than it is remains one sure path to disappointment.

My book, Machine Learning Security Principles, points out a wealth of specific examples of AIs being fooled as part of an examination of machine learning-based security. Some businesses now rely on facial recognition as part of their security strategy with the false hope that it’s reliable and that it will provide an alert in all cases. As recommended in my book, machine learning-based security is just one tool, and it requires a human to back it up. The article, This Simple Technique Made Me Invisible to Two Major Facial Recognition Systems, discusses just how absurdly easy it is to fool facial recognition software if you don’t play by the rules, the rules being what the model developer expected someone to do.

The problems become compounded when local laws ban the use of facial recognition software due to its overuse by law enforcement in potentially less than perfect circumstances. There are reports of false arrests that could possibly have been avoided if the human doing the arresting had checked to verify the identity of the person in question. There are lessons in all this for a business, too. Using facial recognition should be the start of a more intensive process to verify a particular action, rather than an assumption that the software is going to be 100% correct.

Yes, AI, machine learning, and deep learning applications can do amazing things today, as witnessed by the explosion in the use of ChatGPT for all kinds of tasks. It’s a given that security professionals, researchers, data scientists, and various managers will increasingly use these technologies to reduce their workload and improve the overall consistency of the many tasks, including security, that these applications are good at performing. However, even as the technologies improve, people will continue to find ways to overcome them and cause them to perform in unexpected ways. In fact, it’s a good bet that the problems will increase for the foreseeable future as the technologies become more complex (and hence, more unreliable). Monitoring the results of any smart application is essential, which makes humans an essential part of any solution. Let me know your thoughts about facial recognition software and other security issues at [email protected].

Using the Set Command to Your Advantage

This is an update of a post that originally appeared on February 24, 2014.

A short while ago, I created a post about the Windows path. A number of people wrote me about that post with questions. Yes, you can use the technique for setting the Path environment variable to set any other environment variable. The Windows Environment Variables dialog box works for any environment variable—including those used by language environments such as Java, JavaScript, and Python. Windows doesn’t actually care what sort of environment variable you create using the method that I discuss in that post. The environment variable will appear in every new command prompt window you create for either a single user or all users of a particular system, depending on how you create the environment variable.

A few of you took me to task for not mentioning the Set command. This particular command appears in Microsoft Windows Command Line Administration Instant Reference. It’s a useful command because you can temporarily configure a command prompt session to support a new set of settings. When the session ends, the settings are gone. Only those settings you create as part of the Environment Variables window have any permanence. There are other tricks you can use, but using Set for temporary environment variables and the Environment Variables window for permanent environment variables are the two most common approaches.

In order to see the current environment variables, you simply type Set and press Enter at the command line. If you add a space and one or more letters, you see just the matching environment variables. For example, type Set U and press Enter to see all of the environment variables that begin with the letter U.

To set an environment variable, you type Set, the name of the variable, an equals sign (=), and the variable value. For example, to set the value of MyVariable to Hello, you type Set MyVariable=Hello and press Enter. To verify that MyVariable does indeed equal Hello, you type Set MyVariable and press Enter. The command prompt will display the value of MyVariable. When you’re done using MyVariable, you can type Set MyVariable= and press Enter. Notice the addition of the equals sign with nothing after it. If you ask for the value of MyVariable again, the command prompt will tell you it doesn’t exist.
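
If you’d rather experiment from Python than from a batch file, the sketch below (not from the book) runs the same create, verify, and delete cycle against the current process’s environment using os.environ. Like Set, the changes affect only the running process and any child processes it starts:

```python
import os

# Analog of: Set MyVariable=Hello
os.environ["MyVariable"] = "Hello"

# Analog of: Set MyVariable (verify the value)
print(os.environ.get("MyVariable"))                  # Hello

# Analog of: Set MyVariable= (remove the variable)
del os.environ["MyVariable"]
print(os.environ.get("MyVariable", "not defined"))   # not defined
```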

Newer versions of the command prompt provide some additional functionality. For example, you might set MyVariable within a batch file and not know what value it should contain when you create the batch file. In this case, you can prompt the user to provide a value using the /P command line switch. For example, if you type Set /P MyVariable=Type a value for my variable: and press Enter, you’ll see a prompt to enter the variable value.

It’s also possible to perform math with Set using the /A command line switch. There is a whole list of standard math notations you can use; type Set /? and press Enter to see them all. If you write application code at all, you’ll recognize the standard symbols. For example, if you want to increment the value of a variable each time something happens, you can use the += operator. Type Set /A MyVariable+=1 and press Enter to see how this works. The first time you make the call, MyVariable will equal 1. However, each succeeding call increments the value by 1 (for values of 2, 3, and so on).
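
For completeness, here’s a rough Python analog of the /P and /A behaviors; it’s only an illustrative sketch, since prompting and arithmetic are built into the language rather than supplied by a command switch, and environment values are always stored as text:

```python
import os

# Analog of: Set /P MyVariable=Type a value for my variable:
os.environ["MyVariable"] = input("Type a value for my variable: ")

# Analog of: Set /A Counter+=1 (convert to int, add, store back as text)
counter = int(os.environ.get("Counter", "0")) + 1
os.environ["Counter"] = str(counter)
print(counter)
```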

Environment variables support expansion and you can see this work using the Echo command. For example, if you type Echo %MyVariable%, you see the value of MyVariable.

However, you might not want the entire value of MyVariable. Newer versions of the command prompt support substrings. The variable name is followed by :~, the starting position, a comma, and the number of characters to keep. For example, if you place Hello World in MyVariable and then type Echo %MyVariable:~0,5% and press Enter, you see Hello as the output, not Hello World. Using a negative number causes the expansion to occur from the end of the string. For example, if you type Echo %MyVariable:~-5% and press Enter, you see World as the output.
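
If you ever translate a batch file into Python, the substring syntax maps neatly onto string slicing; this is just an analogy sketch, not something the command prompt itself runs:

```python
my_variable = "Hello World"

# Analog of: Echo %MyVariable:~0,5%   (start at offset 0, keep 5 characters)
print(my_variable[0:5])    # Hello

# Analog of: Echo %MyVariable:~-5%    (the last 5 characters)
print(my_variable[-5:])    # World
```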

The Set command is a valuable addition to both the administrator’s and programmer’s toolkit because it lets you set environment variables temporarily. The Set command figures prominently in batch file processing and also provides configuration options for specific needs. Let me know about your environment variable questions as they pertain to my books at [email protected].