Warning Messages in Jupyter Notebook Example Code

You’re working with the downloadable source code from a book like Algorithms for Dummies, 2nd Edition; Beginning Programming with Python For Dummies, 3rd Edition; Machine Learning for Dummies, 2nd Edition; Python for Data Science for Dummies; or Machine Learning Security Principles and see a warning message like this:

C:\Users\John\anaconda3\lib\site-packages\sklearn\feature_selection\_sequential.py:206: FutureWarning: Leaving `n_features_to_select` to None is deprecated in 1.0 and will become 'auto' in 1.3. To keep the same behaviour as with None (i.e. select half of the features) and avoid this warning, you should manually set `n_features_to_select='auto'` and set tol=None when creating an instance.
  warnings.warn(

Well, that looks pretty confusing, and if you’re just learning to work with Python, it may give you the idea that you’ve done something seriously wrong. There are a couple of things to note here. First, this is a warning message. In fact, it’s a FutureWarning message, which means the change mentioned in the warning hasn’t actually taken effect yet.

Second, if you’re using the version of Jupyter Notebook and Python mentioned in the book, it’s unlikely that the effects described in the message will become a problem anytime soon, so you can usually ignore them. (This is one reason that I always ask which version of Jupyter Notebook and Python you’re using because a newer version can definitely cause error messages to appear.) Of course, if this warning ever does turn into an error, Luca and I definitely want to hear about it at [email protected].

Third, the message does state a potential fix for the problem. If the fix is simple enough, you can always try making the required change to see whether it works. However, this is a do-it-at-your-own-risk sort of modification. The point is that the warning isn’t keeping you from using the downloadable source today, so ignoring it is probably the best action to take.
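
For example, the warning above comes from scikit-learn’s SequentialFeatureSelector class. If you did want to apply the suggested change yourself, it would look something like the following sketch. The estimator and settings shown here are placeholders rather than code from any particular book example:

# A hedged sketch of the fix the warning suggests. KNeighborsClassifier is just a
# stand-in estimator; use whatever estimator and data the book example actually uses.
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=3)
sfs = SequentialFeatureSelector(
    knn,
    n_features_to_select='auto',  # Make the upcoming default explicit.
    tol=None                      # Keep the current 'select half the features' behavior.
)
# sfs.fit(X, y)  # Fit as usual once X and y hold your features and labels.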

If you really don’t want to see these warnings, you can always add two lines of code to the first cell of the downloadable source. The warning isn’t actually going away; you just won’t see it:

import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)

So, what causes these warning messages in the first place? Is the book’s source code faulty? There is nothing wrong with the book’s source code. What you’re seeing is the result of a library upgrade. Python uses a huge number of libraries and a change in any one of them can create a warning message of the sort you’ve seen. Luca and I work hard to ensure that the source code you get with the book is functional (and warning free) on all of the supported platforms at the time of writing, but it would be impossible for us to constantly update the book’s code to keep up with these library changes.

An Interesting Review of ChatGPT and Other AIs

I’ve written in the past about limitations of AI from a number of perspectives. In Effects of the Mistruths of Data on Model Output I discuss how the data fed to a model must necessarily affect its output in a number of ways, including bias and other unwanted effects. Considering the Four Levels of Intelligence Management explains why it’s not possible for an AI to approach human intelligence today. In Fooling Facial Recognition Software I provide a detailed discussion of why it’s so easy to fool certain types of AI-powered applications. And you learn about why some types of occupations are reasonably safe from AI in Automation and the Future of Human Employment. However, I haven’t really done a detailed investigation of AIs like ChatGPT that seem almost human-like in their understanding, but fall remarkably short in many simple areas. On Artifice and Intelligence is one of the more detailed analyses I’ve found to date on the subject, and what it reveals will surprise you. There are a lot of simple problems that ChatGPT and other Large Language Models (LLMs) can’t solve.

What I found interesting in the article is that the author, Shlomi Sher, was able to show that math, one area where an AI should be strong, actually isn’t a strength at all. He talks about Euclid’s proofs. The artifice is that ChatGPT 4 and other AIs can tell you all about prime numbers and even provide seemingly creative output about them. However, depending on how you ask some basic questions, ChatGPT 4 gets the answer either right or wrong, even though the answer is pretty much apparent to any human who knows what prime numbers are. What I liked most about the article is that the author takes time to explain why humans can understand the problem, but the AI can’t.

If it seems as if I have a continuing desire to dissuade others from anthropomorphizing AIs, I most certainly do. When it comes to AI, it’s all about the math and nothing more. However, that doesn’t mean that AIs lack functionality and ability as tools to augment human endeavors. It’s likely that the use of AIs will continue to increase over time. In addition, I think that as we better understand precisely how AIs work, we’ll also come to realize that they’re amazing tools, but most definitely not humans in the making. Let me know your thoughts on ChatGPT at [email protected].

IDE Screenshot Usage in Books

There are cases where it’s very tough to figure out the correct presentation of material in a book, which is made more difficult by some readers preferring one presentation and other readers another. It comes down to how people learn in many cases. Visual learners prefer screenshots, abstract learners prefer text. Of course, there are all sorts of learners between these two extremes. So, what seems like a simple question can become quite complex.

The question at hand is whether to present screenshots of an IDE in a book with the associated example code and its output. The problem is that vendors now assume that developers have very large displays and so have made use of all of that extra screen real estate. In addition, book publishers don’t want books where a single image consumes an entire page. The result is that it’s very hard to get a screenshot where the text is completely readable. It can be done, but the text will generally still be smaller than the print in the book. Older readers complain that they need a magnifying glass to see the text at all.

However, there are benefits to using screenshots. The most important benefit is that, even if the text isn’t completely readable, visual learners can see what their IDE should look like as they follow the progress of procedures in the book. This feedback lets the visual learner know that they are doing things correctly and are getting the correct result. Another benefit is that an example tends to stay in one piece. The graphical output of an example doesn’t end up several pages away from the source code that produces it. Sometimes, textual output is wider than the page will allow at the normal font size. So, the options are to print the output at the normal font size but in truncated form, which means it’s no longer complete, or to use a screenshot that shows the complete textual output at a smaller font. For beginner readers, the second form, while not optimal, is preferred because truncating the output raises questions in the reader’s mind.

So, how do you feel about IDE screenshots in books? Are they more helpful or more confusing? Part of the reason for posts like this is to get your opinion and discover more about you as a reader. Obviously, a book author wants to use the communication techniques that work best overall for everyone, but book space often doesn’t allow for investigating every presentation alternative. Let me know your thoughts at [email protected].

Jupyter Notebook vs JupyterLite

There seems to be some confusion lately for readers of Algorithms for Dummies, 2nd Edition; Beginning Programming with Python For Dummies, 3rd Edition; Machine Learning for Dummies, 2nd Edition; Python for Data Science for Dummies; and Machine Learning Security Principles due to the similar names of two Integrated Development Environments (IDEs) available now. Even though I’m sure that JupyterLite is a very good product, even the website states, “Not all the usual features available in JupyterLab and the Classic Notebook will work with JupyterLite, but many already do!” This lack of support becomes a problem when you try to run the downloadable source using JupyterLite. In addition, Luca and I haven’t tested the downloadable source with this product, so we can’t even tell you what will and won’t work.

The two supported IDEs for our books are Google Colab (recommended for those of you who want to use a mobile device) and Jupyter Notebook (recommended for those of you who have a desktop system). It’s actually preferred that you get Jupyter Notebook as part of the Anaconda toolset because Anaconda makes it very easy for you to perform some advanced setup tasks found in some of our books. For example, you gain access to the Anaconda Prompt and the associated Conda utility, which definitely makes it easier for you to manage some of the machine learning packages found in our books. Using either Google Colab or Jupyter Notebook also makes it much easier for Luca and me to help you with your book-specific questions.
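
To give you a feel for what that Conda utility does, here is a hedged sketch of the sorts of commands you might type at the Anaconda Prompt. The environment name and version numbers are made up for illustration, so always use the versions your book actually lists, and the text after each # simply describes the command rather than being part of it:

conda create -n book-env python=3.9    # Create a separate environment (hypothetical name and version).
conda activate book-env                # Switch to that environment.
conda install scikit-learn pandas      # Install packages the book’s examples rely on.
conda list                             # Confirm exactly which package versions you have.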

Please let me know if you have any questions or concerns about how to set up your programming environment for our books at [email protected]. Remember to use the version of the products listed in the book for optimal results in working with the downloadable source. In addition, always remember to use the downloadable source to enhance your learning experience.

Programming Languages Commonly Used for Data Science

The world is packed with programming languages, each of them proclaiming their particular forte and telling you why you need to learn them. A good developer does learn multiple languages, each of which becomes a tool for a certain kind of development, but even the most enthusiastic developer won’t learn every programming language out there. It’s important to make good choices.

Data Science is a particular kind of development task that works well with certain kinds of programming languages. Choosing the correct tool makes your life easier. It’s akin to using a hammer to drive a screw rather than a screwdriver. Yes, the hammer works, but the screwdriver is much easier to use and definitely does a better job. Data scientists usually use only a few languages because they make working with data easier. With this in mind, here are the top languages for data science work in order of preference:

  • Python (general purpose): Many data scientists prefer to use Python because it provides a wealth of libraries, such as NumPy, SciPy, Matplotlib, pandas, and Scikit-learn, to make data science tasks significantly easier (see the short sketch after this list). Python is also a precise language that makes it easy to use multi-processing on large datasets, reducing the time required to analyze them. The data science community has also stepped up with specialized IDEs, such as Anaconda, that implement the Jupyter Notebook concept, which makes working with data science calculations significantly easier. Besides all of these things in Python’s favor, it’s also an excellent language for creating glue code with languages such as C/C++ and Fortran. The Python documentation actually shows how to create the required extensions. Most Python users rely on the language to see patterns, such as allowing a robot to see a group of pixels as an object. It also sees use for all sorts of scientific tasks.
  • R (special purpose statistical): In many respects, Python and R share the same sorts of functionality but implement it in different ways. Depending on which source you view, Python and R have about the same number of proponents, and some people use Python and R interchangeably (or sometimes in tandem). Unlike Python, R provides its own environment, so you don’t need a third-party product such as Anaconda. However, R doesn’t appear to mix with other languages with the ease that Python provides.
  • SQL (database management): The most important thing to remember about Structured Query Language (SQL) is that it focuses on data rather than tasks. Businesses can’t operate without good data management — the data is the business. Large organizations use some sort of relational database, which is normally accessible with SQL, to store their data. Most Database Management System (DBMS) products rely on SQL as their main language, and DBMS usually has a large number of data analysis and other data science features built in. Because you’re accessing the data natively, there is often a significant speed gain in performing data science tasks this way. Database Administrators (DBAs) generally use SQL to manage or manipulate the data rather than necessarily perform detailed analysis of it. However, the data scientist can also use SQL for various data science tasks and make the resulting scripts available to the DBAs for their needs.
  • Java (general purpose): Some data scientists perform other kinds of programming that require a general purpose, widely adopted and popular, language. In addition to providing access to a large number of libraries (most of which aren’t actually all that useful for data science, but do work for other needs), Java supports object orientation better than any of the other languages in this list. In addition, it’s strongly typed and tends to run quite quickly. Consequently, some people prefer it for finalized code. Java isn’t a good choice for experimentation or ad hoc queries.
  • Scala (general purpose): Because Scala uses the Java Virtual Machine (JVM), it shares some of the advantages and disadvantages of Java. However, like Python, Scala provides strong support for the functional programming paradigm, which uses lambda calculus as its basis. In addition, Apache Spark is written in Scala, which means that you have good support for cluster computing when using this language (think huge dataset support). Some of the pitfalls of using Scala are that it’s hard to set up correctly, it has a steep learning curve, and it lacks a comprehensive set of data science specific libraries.
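
To give a flavor of the Python entry above, here’s a minimal, hedged sketch of the kind of task those libraries simplify. The data, column names, and calculations are invented purely for illustration:

# A tiny, hypothetical example of everyday data science work in Python.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'region': ['East', 'East', 'West', 'West'],
    'units': [10, 3, 8, 6],
    'unit_price': [2.5, 4.0, 3.0, 5.5],
})
df['revenue'] = df['units'] * df['unit_price']       # Derive a new column.
print(df.groupby('region')['revenue'].sum())         # Summarize by group.
print(np.percentile(df['revenue'], [25, 50, 75]))    # Quick look at the distribution.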

There are likely other languages that data scientists use, but this list gives you a good idea of what to look for in any programming language you choose for data science tasks. What it comes down to is choosing languages that help you perform analysis, work with huge datasets, and allow you to perform some level of general programming tasks. Let me know your thoughts about data science programming languages at [email protected].

Compiling Python

None of my Python books, including Algorithms for Dummies, 2nd Edition; Beginning Programming with Python For Dummies, 3rd Edition; Machine Learning for Dummies, 2nd Edition; Machine Learning Security Principles; and Python for Data Science for Dummies, show how to compile a Python program. This is because the interpreted nature of Python makes it easier to work with scripts for these reasons:

  • The interpreter provides instant results to make learning faster.
  • It’s easier and faster to fix errors.
  • The use of notebooks, as found in all of the books, makes creating output easier.
  • The use of literate programming techniques helps create an environment where acquired knowledge is more likely to remain acquired.
  • Using literate programming techniques also makes it possible to document the code in a manner that’s more like reading a textbook than looking at source code.
  • The use of scripts promotes experimentation, which leads to new ideas and techniques.

These are all great reasons to use scripts in books. In fact, I’m sure that many people will have other reasons to use scripts. The one thing you should note is that Python does automatically compile some files to do things like reduce loading time. Anytime you see a .pyc file, the file has been compiled by Python to bytecode through various means, including importing the script. It’s also possible to pre-compile a script using the Python interpreter’s -m command line switch with the py_compile or compileall module. The resulting output appears in the __pycache__ folder with a .pyc extension. You can further modify the compilation process by using the -O and -OO command line switches, which offer various optimizations to make the code load even faster. The problem with these outputs is that they’re only mildly obfuscated, so if your intent is to hide your code from prying eyes, this isn’t the best option.
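
As a hedged illustration, these are the sorts of commands involved. The script name is a placeholder, and the text after each # just describes the command:

python -m py_compile my_script.py      # Compile one script to bytecode in __pycache__.
python -m compileall .                 # Compile every script in the current folder tree.
python -O -m py_compile my_script.py   # Produce optimized bytecode (assert statements removed).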

Another built-in compilation option is to use the compile() function, which performs a compilation directly in your code. The purpose of using this function is to speed up code that is used often within your application. For example, you might use it to compile code that appears within a loop. Obviously, you get no obfuscation advantage using this approach, but you do get a speed advantage. If you don’t want to go through the bother of using the compile() function, you could always use a third-party product like Numba, which reduces the task to one of adding a decorator to your code.
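
Here’s a minimal sketch of what the built-in compile() function looks like in practice. The expression being compiled is a stand-in; the point is simply that the source gets parsed once and the resulting code object is reused:

# Parse the expression once, then reuse the compiled code object many times.
expr = compile('value * value + 1', '<string>', 'eval')  # Hypothetical expression.
total = 0
for value in range(100_000):
    total += eval(expr, {'value': value})  # No re-parsing on each pass through the loop.
print(total)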

None of the solutions discussed so far do anything more than turn your Python script into bytecode, which is still interpreted (albeit much faster than re-interpreting the raw source each time). There is also an option for turning your Python code into actual machine code through various intermediate steps. A Python compiler usually turns your Python script into an intermediate language, which is then compiled into actual machine code that is native to the host platform. However, some of these products simply run your script on the fly rather than producing a separate file, so you need to know in advance whether you’ll actually end up with an executable file in the end. An executable file can offer these advantages:

  • The source code is fully obfuscated, protecting your development investment.
  • The code runs significantly faster than any other means of interacting with it.
  • Instead of a host of script files, you usually end up with just a few executable files, perhaps even just one.
  • Because it’s harder to modify, an executable file can be more secure and reliable than using scripts.

If your goal is exclusively to create an executable output, then a product like auto-py-to-exe might be your best option. This way, you get to use your interpreter of choice to develop the application, then use another product to turn the result into an .exe file. The idea is to get the best of both worlds. The point of all this is that you don’t have to interact with Python code in just one way, using an interpreter. You have a great many options at your disposal. Let me know your thoughts about working with compiled Python code at [email protected].

Considering the Four Levels of Intelligence Management

One of the reasons that Luca and I wrote Artificial Intelligence for Dummies, 2nd Edition was to dispel some of the myths and hype surrounding machine-based intelligence. If anything, the amount of ill-conceived and blatantly false information surrounding AI has only increased since then. So now we have articles like Microsoft’s Bing wants to unleash ‘destruction’ on the internet out there that espouse ideas that can’t work at all because AIs feel nothing. Of course, there is the other end of the spectrum in articles like David Guetta says the future of music is in AI, which also can’t work because computers aren’t creative. A third kind of article starts to bring some reality back into the picture, such as Are you a robot? Sci-fi magazine stops accepting submissions after it found more than 500 stories received from contributors were AI-generated. All of this is interesting for me to read about because I want to see how people react to a technology that I know is simply a technology and nothing more. Anthropomorphizing computers is a truly horrible idea because it leads to the thoughts described in The Risk of a New AI Winter. Another AI winter would be a loss for everyone because AI really is a great tool.

As part of writing Python for Data Science for Dummies and Machine Learning for Dummies, 2nd Edition, Luca and I considered issues like the seven kinds of intelligence and how an AI can only partially express most of them. We even talked about how the five mistruths in data can cause issues such as skewed or even false results in machine learning output. In Machine Learning Security Principles I point out the manifest ways in which humans can use superior intelligence to easily thwart an AI. Still, people seem to refuse to believe that an AI is the product of clever human programmers, a whole lot of data, and methods of managing algorithms. Yes, it’s all about the math.

This post goes to the next step. During my readings of various texts, especially those of a psychological and medical variety, I’ve come to understand that humans embrace four levels of intelligence management. We don’t actually learn in a single step, as many people might think; we learn in four steps, with each step providing new insights and capabilities. Consider these learning management steps:

  1. Knowledge: A person learns about a new kind of intelligence. That intelligence can affect them physically, emotionally, mentally, or some combination of the three. However, simply knowing about something doesn’t make it useful. An AI can accommodate this level (and even excel at it) because it has a dataset that is simply packed with knowledge. However, the AI only sees numbers, bits, values, and nothing more. There is no comprehension as is the case with humans. Think of knowledge as the what of intelligence.
  2. Skill: After working with new knowledge for some period of time, a human builds a skill in using that knowledge to perform tasks. In fact, very often this is the highest level that a human will achieve with a given bit of knowledge, which I think is the source of confusion for some people with regard to AIs. Training an AI model, that is, assigning weights to a neural network created of algorithms, gives an AI an appearance of skill. However, the AI isn’t actually skilled; it can’t accommodate variations as a human can. What the AI is doing is following the parameters of the algorithm used to create its model. This is the highest step that any AI can achieve. Think of skill as the how of intelligence.
  3. Understanding: As a human develops a skill and uses the skill to perform tasks regularly, new insights develop and the person begins to understand the intelligence at a deeper level, making it possible for a person to use the intelligence in new ways to perform new tasks. A computer is unable to understand anything because it lacks self-awareness, which is a requirement for understanding anything at all. Think of understanding as the why of intelligence.
  4. Wisdom: Simply understanding an intelligence is often not enough to ensure the use of that intelligence in a correct manner. When a person makes enough mistakes in using an intelligence, wisdom in its use begins to take shape. Computers have no moral or ethical ability; they lack any sort of common sense. This is why you keep seeing articles about AIs that are seemingly running amok: the AI has no concept whatsoever of what it is doing or why. All that the AI is doing is crunching numbers. Think of wisdom as the when of intelligence.

It’s critical that society begin to see AIs for what they are: exceptionally useful tools that can perform certain tasks requiring only knowledge and a modicum of skill, and that can augment a human when some level of intelligence management above these levels is required. Otherwise, we’ll eventually get engulfed in another AI winter that thwarts development of further AI capabilities that could help people do things like go to Mars, mine minerals in space in an environmentally friendly way, cure diseases, and create new thoughts that have never seen the light of day before. What are your thoughts on intelligence management? Let me know at [email protected].

Creating Useful Comments

This is an update of a post that originally appeared on November 21, 2011.

A major problem with most applications today is that they lack useful comments. It’s impossible for anyone to truly understand how an application works unless the developer provides comments at the time the code is written. In fact, this issue extends to the developer. A month after someone writes an application, it’s possible to forget the important details about it. In fact, for some of us, the interval between writing and forgetting is even shorter. Despite my best efforts and those of many other authors, many online examples lack any comments whatsoever, making them nearly useless to anyone who lacks time to run the application through a debugger to discover how it works.

Good application code comments help developers of all stripes in a number of ways. At a minimum, the comments you provide as part of your application code provide these benefits:

  • Debugging: It’s easier to debug an application that has good comments because the comments help the person performing the debugging understand how the developer envisioned the application working.
  • Updating: Anyone who has tried to update an application that lacks comments knows the pain of trying to figure out the best way to do it. Often, an update introduces new bugs because the person performing the update doesn’t understand how to interact with the original code.
  • Documentation: Modern IDEs often provide a means of automatically generating application documentation based on the developer comments. Good comments significantly reduce the work required to create documentation and sometimes eliminate it altogether.
  • Technique Description: You get a brainstorm in the middle of the night and try it in your code the next day. It works! Comments help you preserve the brainstorm that you won’t get back later no matter how hard you try. The technique you use today could also solve problems in future applications, but the technique may become unavailable unless you document it.
  • Problem Resolution: Code often takes a circuitous route to accomplish a task because the direct path will result in failure. Unless you document your reasons for using a less direct route, an update could cause problems by removing the safeguards you’ve provided.
  • Performance Tuning: Good comments help anyone tuning the application understand where performance changes could end up causing the application to run more slowly or not at all. A lot of performance improvements end up hurting the user, the data, or the application because the person tuning the application didn’t have proper comments for making the adjustments.

Writing good comments means creating comments with the substance required for someone to understand and use them. Unfortunately, it’s sometimes hard to determine in the moment what a good comment should contain because you already know what the code does and how it does it. Consequently, having a guide as to what to write is helpful. When writing a comment, ask yourself these questions (a short example follows the list):

  • Who is affected by the code?
  • What is the code supposed to do?
  • When is the code supposed to perform this task?
  • Where does the code obtain resources needed to perform the task?
  • Why did the developer use a particular technique to write the code?
  • How does the code accomplish the task without causing problems with other applications or system resources?
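
As a quick illustration, here is a hypothetical, commented function that answers several of those questions at once. The function and the business rule it mentions are invented for the example:

def apply_seasonal_discount(price, month):
    # What: Reduces a list price during the slow winter season.
    # Why: A made-up business rule says winter orders need a 10 percent incentive.
    # When: Called once per order, before taxes are calculated.
    # How: A simple multiplier, so the stored list price never changes.
    if month in (12, 1, 2):        # Winter months only.
        return price * 0.90
    return price

print(apply_seasonal_discount(100.0, 1))   # Prints 90.0 for a January order.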

There are many other questions you could ask yourself, but these six questions are a good start. You won’t answer every question for every last piece of code in the application because sometimes a question isn’t pertinent. As you work through your code and gain experience, start writing down questions you find yourself asking. Good answers to aggravating questions produce superior comments. Whenever you pull your hair out trying to figure out someone’s code, especially your own, remember that a comment could have saved you time, frustration, and effort. What is your take on comments? Let me know at [email protected].

In Praise of Dual Monitors

This is an update of a post that originally appeared on February 5, 2014.

In reading many of my old blog posts, I’m finding that many of the things I said way back when apply equally well today. I’ve received email from budding developers who use their smartphone to code. Just how they perform this trick is beyond me because I squint at the screen performing the simplest of tasks and often find that my fingers are two sizes too big. I have tried coding on a tablet, a laptop, and (oddly enough) my television. While they do work, they’re not particularly efficient, so I’ll stick with my dual-monitor desktop system for coding.

Yes, I know that some developers use more than just two monitors, but I find that two monitors work just fine. The first monitor is my work monitor—the monitor I use for actually typing code. The second monitor is my view monitor. When I run the application, the output appears on the second monitor so that I can see the result of changes I’ve made. Using two monitors lets me easily correlate the change in code to the changes in application design. Otherwise, I’d be wasting time switching between the application output and my IDE.

I also use two monitors when writing my books. The work monitor contains my word processor, while my view monitor contains the application I’m writing about. This is possibly one time when a third monitor could be helpful—one to hold the word processor, one to hold the IDE, and one to view the application output. However, in this case, a third monitor could actually slow things down because the time spent viewing the output of an example is small when compared to creating a production application.

The concept of separating work from the source of information used to perform the work isn’t new. People have used the idea for thousands of years, in fact. For example, when people employed typewriters to output printed text, the typist employed a special stand to hold the manuscript being typed. The idea of having a view of your work and a separate surface to actually work on appears quite often throughout history because it’s a convenient way to perform tasks quickly. By employing dual monitors, I commonly get a 15 to 33 percent increase in output, simply because I can see my work and its associated view at the same time.

Working with dual monitors not only saves time, but can also reduce errors. By typing as I view the output of applications, I can more reliably relate the text of labels and other information the application provides. The same holds true when viewing information sources found in other locations. Seeing the information as I type it is always less likely to produce errors.

Don’t get the idea that I support using dual monitors in every situation. Many consumer-oriented computer uses are served just fine with a single monitor. For example, there isn’t a good reason to use two monitors when viewing e-mail in many cases—at least, not at the consumer level (you could make a case for using dual monitors when working with e-mails and a calendar to manage tasks, for example). Dual monitors commonly see use in the business environment because people aren’t necessarily creating their own information source—the information comes from a variety of sources that the user must view in order to use reliably.

Do you see yourself using dual monitors? If you use such a setup now, how do you employ it? Let me know at [email protected].

Choosing Variable Names

This is an update of a post that originally appeared on January 17, 2014.

It often surprises me that developers seem to choose completely useless variable names like MyVariable when creating an application. Although MyVariable could be an interesting variable name for an example in a book, it never has a place in any sort of production code. Even so, I try to create book examples with meaningful variable names, especially when getting past the initial “Hello World” example. Variable names are important because they tell others:

  • What sort of information the variable stores
  • When the variable is commonly used
  • Where the variable is used
  • How to use the variable correctly
  • Why the variable is important

In some cases, the variable name could even indicate who created the variable, although this sort of information is extremely rare. If you never thought a variable name should contain all that information, then perhaps you haven’t been choosing the best variable names for your application.

Even with these goals in mind, choosing a variable name can be uncommonly hard if you want to maximize the name’s value to both yourself and other developers. Some organizations make the selection process easier by following certain conventions. If you don’t have an organizational style guide for variable naming, modern programming languages like Python commonly provide one for you (PEP 8, in Python’s case). These style guides often consider a great deal more than simply variable naming and include issues like the amount of indentation to use. In some respects, they become quite draconian in their approach. Other style guides, like the one for C#, are less time consuming to learn, which is a good thing because most developers have better things to do with their time than learn nitpicky details. A few languages, such as C++, suffer from an abundance of style guides. It’s best to choose one of them, such as the Google C++ Style Guide, and stick with it.
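
As a small, hypothetical Python illustration of the difference a naming convention makes (the names follow common PEP 8 style, but the variables themselves are invented):

# Hard to use later: the names say nothing about content, units, or purpose.
MyVariable = 19.99
x = 12

# Easier to use later: content, units, and purpose are obvious at a glance.
unit_price_dollars = 19.99                          # What the variable stores and its units.
units_ordered = 12                                  # How it figures into the order total.
order_total = unit_price_dollars * units_ordered    # Why both variables exist.
print(order_total)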

However, let’s say that you want to create your own style guide for your organization to use because you use multiple languages and having a different style guide for each language seems just a bit absurd, not to mention adding needless complexity. In this case, you need to ask yourself a series of questions to determine how you want the style guide to work, such as these:

  1. What sort of casing do you want to use for what types of variables?
  2. What information does the variable contain (such as a list of names)?
  3. How is the variable used (such as locally or globally, or to contain coordinates, or a special kind of object)?
  4. When appropriate, what kind of information does the variable contain (such as a string or the coordinate of a pixel on screen)?
  5. Is the variable used for a special task (such as data conversion)?
  6. What case should prefixes, suffixes, and other naming elements appear in when a language is case sensitive?

The point is that you need to choose variable names with care so that you know what they mean later. Carefully chosen variable names make it possible for you to read your code with greater ease and locate bugs a lot faster. They also make it easier for others to understand your code and for you to remember what the code does months after you’ve written it. However, most important of all, useful variable names help you see immediately that a variable is being used the wrong way, such as assigning the length of a name string to a coordinate position on screen (even though both variables are integer values). Let me know your thoughts about variable naming at [email protected].