Antiquated Technology Making Developers Faster

This is an update of a post that originally appeared on November 7, 2014.

I often wonder when I create a blog post whether the technology I’m describing will stand the test of time. In this case, I asked whether the reader would like to be able to type application code faster and with fewer keystrokes. The article, The 100 Year Old Trick to Writing at 240 Words Per Minute, probably has some good advice for you—at least, if you’re willing to learn the technique. It turns out that stenography isn’t only useful for court typists and the people who produce real-time captions for the hearing impaired on television; it’s also quite useful for developers. Yes, your IDE probably has more than a few tricks available for speeding up your typing, but I guarantee that those tricks only go so far. My personal best typing speed is 110 wpm, and that’s flat-out typing as fast as my fingers will go.

Since that original post, someone has come out with a book called Learn Plover! that describes how to use this stenographic technique in more detail. There is also a site devoted to Plover now.

Naturally, I haven’t ever used one of the devices mentioned in the article. However, a stenographer named Mirabai Knight has tried one of the devices and reproduced a 140 keystroke Python application using only 50 keystrokes. By the way, she has produced a series of interesting videos that you may want to review if you really think that Plover is for you. I don’t know of any IDE that can provide that sort of efficiency. Of course, it’s one thing for a trained stenographer to produce these sorts of results, but I’d like to hear from any developer who has used the technique about how well it worked for them. Please contact me about your experiences at [email protected]. (Oddly enough, I did hear from at least one developer who uses it successfully.)

The part that interested me most though is that the system, called Plover, is written in Python. (If you want to see Plover in action, check out the video at http://plover.stenoknight.com/2014/10/longer-plover-coding-snippet-in-python.html.) A number of Beginning Programming with Python For Dummies, 3rd Edition readers have written to ask me how they can use their newfound programming skills. The book contains sections that tell you about all sorts of ways in which Python is being used, but many of these uses are in large corporations. This particular use is by a small developer—someone just like you. Yet, it has a big potential for impacting how developers work. Just imagine the look on the boss’ face when you turn in your application in half the time because you can type it in so much faster. So, Python isn’t just for really large companies or for scientists—it’s for everyone who needs a language that can help them create useful applications of the sort that Python is best suited to target (and I describe all of these uses in my book).

Understanding the Continuing Need for C++

This is an update of a post that originally appeared on February 23, 2015.

I maintain statistics on all my books, including C++ All-In-One for Dummies, 4th Edition. These statistics are based on reader e-mail and other sources of input that I get. I even take the comments on Amazon.com into account. One of the most common C++ questions I get (not the most common, but it’s up there) is why someone would want to use the language in the first place. It’s true that C++ isn’t the language to use if you’re creating a database application, and schools now prefer Python as an introductory programming language for older children. However, it is the language to use if you’re writing low-level code that has to run fast. C++ also sees use in a vast number of libraries because library code has to be fast. For example, check out the Python libraries at some point and you’ll find C++ staring back at you. In fact, part of the Python documentation discusses how to use C++ to create extensions.
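
You can see the payoff from inside Python itself. Here’s a minimal sketch (assuming you have NumPy installed; timings vary by machine) that compares a pure-Python loop against NumPy, whose numeric core is compiled C/C++ rather than interpreted Python:

    # Compare a pure-Python sum of squares to NumPy's compiled core.
    # NumPy's inner loops are compiled code, which is why it wins.
    import timeit

    import numpy as np

    values = list(range(1_000_000))
    array = np.arange(1_000_000)

    python_time = timeit.timeit(lambda: sum(v * v for v in values), number=10)
    numpy_time = timeit.timeit(lambda: np.dot(array, array), number=10)

    print(f"Pure Python: {python_time:.3f}s for 10 runs")
    print(f"NumPy (compiled core): {numpy_time:.3f}s for 10 runs")

On typical hardware, the compiled version wins by a wide margin, which is exactly why so much library code hands the heavy lifting to C and C++.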

I decided to look through some of my past notes to see if there was some succinct discussion of just why C++ is a useful language for the average developer to know. That’s when I ran across an InfoWorld article entitled, “Stroustrup: Why the 35-year-old C++ still dominates ‘real’ dev.” Given that the guy being interviewed is Bjarne Stroustrup, the inventor of C++, it’s a great source of information. The interview is revealing because it’s obvious that Bjarne is taking a measured view of C++ and not simply telling everyone to use it for every occasion (quite the contrary, in fact).

The bottom line in C++ development is speed. Along with speed, you also get flexibility and great access to the hardware. As with anything, you pay a price for getting these features. In the case of C++, you’ll experience increased development time, greater complexity, and more difficulty in locating bugs. Some people are taking a new route to C++ speed, though: they write their code in another language and translate it to C++ from there. For example, some Python developers are now cross-compiling their code into C++ to gain a speed advantage. You can read about it in the InfoWorld article entitled, Python to C converter tool.
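
Cython is one commonly used tool of this sort (the article discusses others), and the general idea looks like the following sketch: you add C type declarations to otherwise Python-looking code, and the compiler turns the result into C (or, with a directive, C++) that runs much faster. The function and file names here are only for illustration:

    # fib.pyx -- a Cython source file; build in place with: cythonize -i fib.pyx
    # The cdef declarations let Cython generate tight C code instead of
    # falling back to dynamic Python objects.
    def fib(int n):
        cdef int i
        cdef long a = 0, b = 1
        for i in range(n):
            a, b = b, a + b
        return a

After building, you import fib from Python like any other module, but the loop now runs at compiled-code speed.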

A lot of readers will close a message to me asking whether there is a single language they can learn to do everything well. Unfortunately, there isn’t any such language and given the nature of computer languages, I doubt there ever will be. Every language has a niche for which it’s indispensable. The smart developer has a toolbox full of languages suited for every job the developer intends to tackle.

Do you find that you really don’t understand how the languages in my books can help you? Let me know your book-specific language questions at [email protected]. It’s always my goal that you understand how the material you’ve learned while reading one of my books will help you in the long run. After all, what’s the point of reading a book that doesn’t help you in some material way? Thanks, as always, for your staunch support of my writing efforts!

Security = Scrutiny

This is an update of a post that originally appeared on July 22, 2015.

There is a myth among administrators and developers that it’s possible to keep a machine free of viruses, adware, Trojans, and other forms of malware simply by disconnecting it from the Internet. I was reminded of this bias while writing Machine Learning Security Principles because some of the exploits I cover include air-gapped PCs. I’m showing my age (yet again), but machines were being infected with all sorts of malware long before the Internet became any sort of connectivity solution for any system. At one time, floppy disks were the culprit, but all sorts of other avenues of attack present themselves. To dismiss things like evil USB drives that take over systems, even systems not connected to the Internet, is akin to closing your eyes and hoping an opponent doesn’t choose to hit you while you’re not looking. After all, it wouldn’t be fair. However, whoever said that life was fair or that anyone involved in security plays by the rules? To make matters worse, you can easily find instructions for creating an evil USB drive online. If you want to keep your systems free of malware, you need to be alert and scrutinize them continually.

Let’s look at this issue another way. If you refused to do anything about a burglar rummaging around on the first floor while you listened from your bedroom on the second floor, the police would think you’re pretty odd. The first thing they’ll ask is why you don’t have an alarm system installed in your home. If you do have one, wouldn’t it have been a good idea to set it in the first place, so that someone would have been notified about the security breach? Some homeowners also have an external security system installed around their home, which would at least provide a good image of the burglar. Still, it’s important to try to do something to actually stop the burglar. Whatever you do, you can’t just stand back and do nothing. If you do, you’d have a really hard time getting any sort of sympathy or empathy from the police. After all, if you just let a burglar take your things while you blithely refuse to acknowledge the burglar’s presence, whose fault is that? (Getting bonked on the back of the head while you are looking is another story.) That’s why you need to monitor your systems, even if they aren’t connected to the Internet. Someone wants to ruin your day and they’re not playing around. Hackers are dead serious about grabbing every bit of usable data on your system and using it to make your life truly terrible. Your misery makes them sublimely happy. Really, take my word for it.

The reason I’m discussing this issue is that I’m still seeing stories like Chinese Hackers Target Air-Gapped Military Networks. And what about all those networks that were hacked before the Internet became a connectivity solution? Hackers have been taking networks down for a considerable time, and it doesn’t take an Internet connection to do it. The story is an interesting one because the technique used demonstrates that hackers don’t have to be particularly good at their profession to break into many networks. It’s also alarming because some of the networks targeted were contractors for the US military.

There is no tool, software, connection method, or secret incantation that can protect your system from determined hackers. I’ve said this in everything I’ve written about security. Yes, you can use a number of tools to make your system more difficult to get through and to dissuade someone who truly isn’t all that determined. Unfortunately, no matter how high you make the walls of your server fortress, the hacker can always go just a bit further to climb them. Sites like America’s Data Held Hostage (this site specializes in ransomware) tell me that most organizations could do more to scrutinize their networks. Everything I read about informed security says that you can’t trust anyone or anything when you’re responsible for security, yet organizations continue to ignore that burglar on the first floor.

There is the question of whether it’s possible to detect and handle every threat. The answer is that it isn’t. Truly gifted hackers will blindside you and can cause terrifying damage to your systems every time. Monitoring can mitigate the damage and help you recover more quickly, but the fact is that it’s definitely possible to do better. Let me know your thoughts about security at [email protected].

Is Security Research Always Useful?

This is an update of a post that originally appeared on February 19, 2016.

Anyone involved in the computer industry likely spends some amount of time reading about the latest security issues in books such as Machine Learning Security Principles. Administrators and developers probably spend more time than many people, but no one can possibly read all the security research available today. There are so many researchers looking for so many bugs in so many places and in so many different ways that even if someone had the time and inclination to read every security article produced, it would be impossible. You’d need to be the speediest reader on the planet (and then some) to even think about scratching the surface. So, you must contemplate whether all that research is actually useful or simply a method for some people to get their name on a piece of paper.

What amazes me is that, since I first wrote this blog post, I have done a considerable amount of additional reading, including research papers, and found that most exploits remain essentially the same. The techniques may differ, and they may improve, but the essentials of the exploit remain the same. It turns out that humans are the weakest link in every security chain and that social engineering attacks remain a mainstay of hackers. The one thing that has changed in seven years is that the use of machine learning and deep learning techniques has automated life for the hacker, much as these technologies have automated life for everyone else. In addition, a lack of proactive privacy makes it even easier than before for a hacker to create a believable attack by using publicly available information about an intended target.

As part of researching security, you need to consider the viability of an attack, especially with regard to your organization, infrastructure, personnel, and applications. Some of the attacks require physical access to the system. In some cases, you must actually take the system apart to access components in order to perform the security trick. Many IoT attacks fall into this category. Unless you or your organization is in the habit of allowing perfect strangers physical access to your systems, which might include letting them take the systems apart, you must wonder whether the security issue is even worth worrying about. You need to ask why someone would take the time to document a security issue that’s nearly impossible to see, much less perform, in a real-world environment. More importantly, the moment you see that a security issue requires physical access to the device, you can probably stop reading.

You also find attacks that require special equipment to perform. The article, How encryption keys could be stolen by your lunch, discusses one such attack. In fact, the article contains a picture of the special equipment that you must build to perpetrate the attack. It places said equipment into a piece of pita bread, which adds a fanciful twist to something that is already quite odd and pretty much unworkable, given that you must be within 50 cm (19.6 in) of the device you want to attack (assuming that the RF transmission conditions are perfect). Except for the interesting attack vector (using a piece of pita bread), you really have to question why anyone would ever perpetrate this attack, given that social engineering and a wealth of other attacks require no special equipment, are highly successful, and work from a much longer distance.

It does pay to keep an eye on the latest and future targets of hacker attacks. Even though many IoT attacks are the stuff of James Bond today, hackers are paying attention to IoT, so it pays to secure your systems, which are likely wide open right now. As one of my experiments for Machine Learning Security Principles, I actually did hack my own smart thermostat (after which I immediately improved its security). The number of IoT attacks is increasing considerably, so ensuring that you maintain electrical, physical, and application security over your IoT devices is important, but not to the exclusion of other needs.

A few research pieces become more reasonable by discussing outlandish sorts of hacks that could potentially happen after an initial break-in. The hack discussed in Design flaw in Intel chips opens door to rootkits is one of these sorts of hacks. You can’t perpetrate the hack until after breaking into the system some other way, but the break-in has serious consequences once it occurs. Even so, most hackers won’t take the time because they already have everything needed—the hack is overkill. However, this particular kind of hack should sound alarms in the security professional’s head. The Windows 11 requirement for the TPM 2.0 chip is supposed to make this kind of attack significantly harder, perhaps impossible, to perform. Of course, someone has already found a way to bypass the TPM 2.0 chip requirement and it doesn’t help that Microsoft actually signed off on a piece of rootkit malware for installation on a Windows 11 system. So, security research, even when you know that the actual piece of research isn’t particularly helpful, can become a source of information for thought experiments of what a hacker might do.

The articles that help most provide a shot of reality into the decidedly conspiracy-oriented world of security. For example, Evil conspiracy? Nope, everyday cyber insecurity, discusses a series of events that everyone initially thought pointed to a major cyber attack. It turns out that the events occurred at the same time by coincidence. The article author thoughtfully points out some of the reasons that the conspiracy theories seemed a bit out of place at the outset anyway.

It also helps to know the true sources of potential security issues. For example, the articles, In the security world, the good guys aren’t always good and 5 reasons why newer hires are the company’s biggest data security risk, point out the sources you really do need to consider when creating a security plan. These are the sorts of articles that should attract your attention because they describe a security issue that you really should think about.

The point is that you encounter a lot of information out there that doesn’t help you make your system any more secure. It may be interesting if you have the time to read it, but the tactics truly aren’t practical and no hacker is going to use them. Critical thinking skills are your best asset when building your security knowledge. Let me know about your take on security research at [email protected].

Automation and the Future of Human Employment

This is an update of a post that originally appeared on May 9, 2016.

It wasn’t long ago that I wrote Robotics and Your Job to consider the role that robots will play in human society in the near future. Of course, robots are already doing mundane chores, and that list of chores will increase as robot capabilities increase. The question of what sorts of work humans will do in the future has crossed my mind quite a lot as I’ve written a number of AI, machine learning, and deep learning books such as Artificial Intelligence for Dummies, 2nd Edition. In fact, Luca and I have discussed the topic in depth. It isn’t just robotics; the whole issue of automation is what’s important. Robots actually fill an incredibly small niche in the much larger topic of automation. Although articles like The end of humans working in service industry? seem to say that robots are the main issue, automation comes in all sorts of guises. So, here it is seven years later: robot theme parks are still in the news, and robots are making an impact as security guards. In addition, Huis Ten Bosch still has Robot Kingdom going (you can select either Japanese or English as needed to read the information). The fact of the matter is that in seven years robots have become a significant part of most people’s lives, and the impact will continue to grow. Not that I’m actually expecting an I, Robot experience any time soon.

My vision for the future is that people will be able to work in occupations with lower risks, higher rewards, and greater interest. Unfortunately, not everyone wants a job like that. Some people really do want to go to work, clock in, place a tiny cog in a somewhat large wheel all day, clock out, and go home. They want something mindless that doesn’t require much effort, so losing service and assembly line type jobs to automation is a problem for them. In The Robots are Coming for Your Job, Too, the author paints a pretty gloomy picture for anyone who thinks their service job will still exist in 50 years. The reality is that any job that currently pays under $25.00 an hour is likely to become a victim of automation. Many people insist that they’re irreplaceable, but the fact is that automation can easily take their job, and employers are looking forward to the change because automation doesn’t require healthcare, pensions, vacation days, sick days, or salaries. Most importantly, automation does as it’s told.

In the story The rise of greedy robots, the author lays out the basis for an increase in automation that maximizes business profit at the expense of workers. Articles such as On the Phenomenon of Bullshit Jobs tell why people are still working a 40-hour workweek when it truly isn’t necessary to do so. In short, if you really do insist on performing a task that is essentially pointless, the government and industry are perfectly willing to let you do so until a time when technology is so entrenched that it’s no longer possible to do anything about it (no, I’m not making this up). As mentioned earlier, even some relatively essential jobs, such as security, have a short life expectancy with the way things are changing.

The question of how automation will affect human employment in the future remains. Theoretically, people could work a 15-hour workweek even now, but then we’d have to give up some of our consumerism—the purchase of gadgets we really don’t need. Earlier, I talked about jobs that are safer, more interesting, and more fulfilling. There are also those pointless jobs that the government will doubtless prop up at some point to keep people from rioting. However, there is another occupation that will likely become a major source of employment, but only for the nit-picky, detail-oriented person. In The thin line between good and bad automation, the author explores the problem of scripts calling scripts. Even though algorithms will eventually create and maintain other algorithms, which in turn means that automation will eventually build itself, someone will still have to monitor the outcomes of all that automation. In addition, the search for better algorithms continues (as described in The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World and More data or better models?). Of course, these occupations still require someone with a great education and a strong desire to do something significant as part of their occupation.

The point of all this speculation is that it isn’t possible to know precisely how the world will change due to the effects of automation, but it will most definitely change. Even though automation currently has limits, scientists are currently working on methods to extend automation even further so that the world science fiction authors have written about for years will finally come into being (perhaps not quite in the way they had envisioned, however). Your current occupation may not exist 10 years from now, much less 50 years from now. The smart thing to do is to assume your job is going to be gone and that you really do need a Plan B in place—a Plan B that may call for an increase in flexibility, training, and desire to do something interesting, rather than the same mundane task you’ve plodded along doing for the last ten years. Let me know your thoughts on the effects of automation at [email protected].

Technology and Child Safety

This is an update of a post that originally appeared on January 20, 2016.

A little over seven years ago, I wrote that I had read an article in ComputerWorld, Children mine cobalt used in smartphones, other electronics, that had me thinking yet again about how people in rich countries tend to ignore the needs of those in poor countries. I had sincerely hoped at the time that things would be different, better, in seven years. Well, they’re worse! We’ve increased our use of cobalt dramatically in order to create supposedly green cars. The picture at the beginning of the ComputerWorld article says it all, but the details will have you wondering whether a smartphone or an electric car really is worth some child’s life. That’s right, any smartphone or electric car you buy may be killing someone, and in a truly horrid manner. Children as young as 7 years old are mining the cobalt needed for the batteries (and other components) in the smartphones and electric cars that people seem to feel are so necessary for life (they aren’t, you know; food, water, clothing, shelter, sleep, air, and reproduction are necessities, everything else is a luxury).

The problem doesn’t stop when someone gets rid of the smartphone, electric car, or other technology. Other children end up dismantling the devices sent for recycling. That’s right, a rich country’s efforts to keep electronics out of its landfills are also killing children, because countries like India put these children to work taking the devices apart in unsafe conditions. Recycled wastes go from rich countries to poor countries because the poor countries need the money for necessities, like food. Often, these children are incapable of working by the time they reach 35 or 40 due to health issues induced by their forced labor. In short, the quality of their lives is made horribly low so that it’s possible for people in rich countries to enjoy something that truly isn’t necessary for life. To make matters worse, the vendors of these products build in obsolescence (making the products unrepairable) so they can sell more products and make more money, increasing the devastation visited on children.

I’ve written other blog posts about the issues of technology pollution. However, the emphasis of these previous articles has been on the pollution itself. Taking personal responsibility for the pollution you create is important, but we really need to do more. Robotic (autonomous) mining is one way to keep children out of the mines and projects such as UX-1 show that it’s entirely possible to use robots in place of people today. The weird thing is that autonomous mining would save up to 80% of the mining costs of today, so you have to wonder why manufacturers aren’t rushing to employ this solution.

In addition, off world mining would keep the pollution in space, rather than on planet earth. Of course, off world mining also requires a heavy investment in robots, but it promises to provide a huge financial payback in addition to keeping earth a bit cleaner. The point is that there are alternatives that we’re not using. Robotics presents an opportunity to make things right with technology and I’m excited to be part of that answer in writing books such as Machine Learning Security Principles, Artificial Intelligence for Dummies, 2nd Edition, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition.

Unfortunately, companies like Apple, Samsung, and many others simply thumb their noses at laws that are in place to protect the children in these countries because they know you’ll buy their products. Yes, they make official statements, but read their statements in that first article and you’ll quickly figure out that they’re excuses, and poorly made excuses at that. They don’t have to care because no one is holding them to account. People in rich countries don’t care because their own backyards aren’t sullied and their own children remain safe. It’s not that I have a problem with technology; quite the contrary. I have a problem with the manner in which technology is currently being made and supported. We need to do better. So, the next time you think about buying electronics, consider the real price for that product. Let me know what you think about polluting other countries to keep your country clean at [email protected].

Making Algorithms Useful

This is an update of a post that originally appeared on December 2, 2015.

Writing about machine learning and deep learning in my various books has been interesting because it turns math into something more than a way to calculate. Machine learning is about having inputs and a desired result, and then asking the machine to create an algorithm that will produce the desired result from the inputs. It’s about generalization. You know the specific inputs and the specific results, but you want an algorithm that will provide similar results from similar inputs, even inputs it has never seen before (a short code sketch after the following list shows the idea in practice). This is more than just math. In fact, there are five schools of thought (tribes) regarding machine learning algorithms that Luca and I introduce you to in books such as Machine Learning Security Principles, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition:

  • Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.
  • Connectionists: The origin of this tribe is in neuroscience. This group relies on backpropagation to solve problems.
  • Evolutionaries: The origin of this tribe is in evolutionary biology. This group relies on genetic programming to solve problems.
  • Bayesians: The origin of this tribe is in statistics. This group relies on probabilistic inference to solve problems.
  • Analogizers: The origin of this tribe is in psychology. This group relies on kernel machines to solve problems.
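
Here’s the sketch I promised: a minimal example (assuming you have scikit-learn installed) in which the model receives example inputs and desired results, creates its own algorithm from them, and then generalizes to an input it has never seen:

    # Learn y = 2x from examples, then predict for an unseen input.
    from sklearn.linear_model import LinearRegression

    X = [[1], [2], [3], [4]]   # inputs
    y = [2, 4, 6, 8]           # desired results

    model = LinearRegression()
    model.fit(X, y)                # the machine creates the algorithm
    print(model.predict([[10]]))   # generalization: prints roughly [20.]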

Of course, the problem with any technology is making it useful. I’m not talking about useful in a theoretical sense, but useful in a way that affects everyone. In other words, you must create a need for the technology so that people will continue to fund it. Machine learning and deep learning are already part of many of the things you do online. For example, when you go to Amazon and buy a product and Amazon suggests products that you might want to add to your cart, you’re seeing the result of machine learning. Part of the content for the chapters of our book is devoted to pointing out these real-world uses for machine learning.

As I’ve written new books and updated existing ones, I’ve seen an almost magical progression in the capabilities of machine learning and deep learning applications such as ChatGPT (Chat Generative Pre-trained Transformer), which can produce some pretty amazing output.

Some of these applications, such as Siri and Alexa, continue to learn as you use them. The more you interact with them, the better they know you and the better they respond to your needs. The algorithms that these machine learning systems create get better and better as the database of your specific input grows. The algorithms are tuned to you specifically, so the experience one person has is different from an experience another person will have, even if the two people ask the same question.

Machine learning is a big mystery to many people today, while other people have gained enough experience to have strong opinions about it. Because I continue to write new machine learning/deep learning books and update others, it would be interesting to hear your questions about machine learning and deep learning. After all, I’d like to tune the content of my books to meet the most needs that I can. Where do you see this technology headed? What confuses you about it? Talk to me at [email protected].

Robotics and Your Job

This is an update of a post that originally appeared on February 29, 2016.

I have written more than a few books now that involve robotics, AI, machine learning, deep learning, or some other form of advanced technology, such as Machine Learning Security Principles, Artificial Intelligence for Dummies, 2nd Edition, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition. People have often asked me whether a Terminator-style robot is possible, based on comments by people like Stephen Hawking and Elon Musk (who seems to have changed his mind). It isn’t, Ex Machina and The Terminator notwithstanding. I’m also asked whether they’ll be without work sometime soon (the topic of this post). (As an aside, deus ex machina is a literary plot device that had been around for a long time before the movie came out.)

Whether your job is secure depends on the kind of job you have, whether robots will actually save money and improve on the technology we already have, what you believe as a person, and how your boss interprets all the hype currently out there. For example, if your claim to fame is flipping burgers, then you’d better be ready to get another job soon. McDonald’s has opened its first mostly automated store in Texas. Some jobs are simply going to go away, no doubt about it.

However, robots aren’t always the answer to the question. Many experts see three scenarios: humans working for robots (as in a doctor collaborating with a robot to perform surgery more accurately and with greater efficiency), humans servicing robots (those McDonald’s jobs may be going away, but someone will have to maintain the robots), and robots working for humans (such as that Roomba that’s currently keeping your house clean). The point is that robots will actually create new jobs, but that means humans will need new skills. Instead of boring jobs that pay little, someone with the proper training can have an interesting job that pays moderately well.

An interesting backlash against automation has occurred in several areas. So, what you believe as a person does matter when it comes to the question of jobs. The story that tells the tale most succinctly appears in ComputerWorld, Taxpayer demand for human help soars, despite IRS automation (the fact that the IRS automation is overloaded doesn’t help matters). Sometimes people want a human to help them. This backlash could actually thwart strategies such as the use of robotic police dogs, which don’t appear to be very popular with the public.

There is also the boss’ perspective to consider. A boss is only a boss as long as there is someone or something to manage. Even though your boss will begrudgingly give up your job to automation, you can be sure that giving up a job personally isn’t on the list of things to do. Some members of the press have resorted to viewing the future as a time when robots do everything and humans don’t work, but really, this viewpoint is a fantasy. However, it’s not a fantasy that companies such as Hitachi are experimenting with robot managers. Some employees actually prefer the consistent interaction of a robot boss. Still, it’s unlikely that managers will take this invasion of their domain sitting down; expect them to do something to make using robots untenable.

Robots are definitely making inroads into society and children growing up with robots being a part of their lives will likely be more accepting of them. Still, there is some debate as to just how far robot use will go and how fast it will get there. The interaction between business and the people that businesses serve will play a distinct role in how things play out. However, all this said, your job will likely be different in the future due to the influences of robots. For the most part, I feel that life will be better for everyone after the adjustment, but that the adjustment will be quite hard. Let me know your thoughts on robots at [email protected].

IPython Magic Functions

This is an update of a post that originally appeared on April 25, 2016.

All of my current Python language books (and those I collaborated on with Luca Massaron): Machine Learning Security Principles, Algorithms for Dummies, 2nd Edition, Beginning Programming with Python For Dummies, 3rd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition allow use of Jupyter Notebook (through Anaconda) or Google Colab to interact with the example code. Both of these IDEs extend the development environment in a number of ways, one of which is the use of magic functions. You see the magic functions in the code of these books as calls that begin with either one or two percent signs (% or %%). The most common of these magic functions is %matplotlib, which controls how IPython Notebook or Jupyter Notebook displays plot output from the code.
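
As a quick illustration, here is what a typical notebook cell that uses a line magic looks like (a minimal sketch; the exact rendering depends on your environment):

    # In Jupyter Notebook or Google Colab, a single % marks a line magic.
    # %matplotlib inline asks the notebook to render plots in the page itself.
    %matplotlib inline

    import matplotlib.pyplot as plt

    plt.plot([0, 1, 2, 3], [0, 1, 4, 9])
    plt.title("Rendered inline by the %matplotlib magic")
    plt.show()

Cell magics, which start with %%, work much the same way but apply to the entire cell and must appear on the cell’s first line (%%timeit, which times the whole cell, is a common example).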

You can find a listing of the most common magic functions in the Python for Data Science for Dummies Cheat Sheet. None of my books use any other magic functions, so this is also a complete list of magic functions that you can expect to find in our books. However, you might want to know more. Fortunately, the site at https://damontallen.github.io/IPython-quick-ref-sheets/ provides you with a complete listing of the magic commands (and a wealth of other information about Jupyter Notebook).

There are differences in magic function support between Jupyter Notebook and Google Colab, some of which are outlined in our books as needed. None of these differences will significantly affect your learning experience. However, it pays to know that Jupyter Notebook and Google Colab are only mostly the same, not precisely the same, and you’ll encounter differences. The screenshots in my books reflect the Jupyter Notebook version supported by that book, so what you might see on your screen when using magic functions in Google Colab may differ from the book.

Of course, you might choose to use another IDE—one that isn’t quite so magical as Jupyter Notebook or Google Colab. In this case, you need to remove those magic commands. Removing the commands generally won’t affect the functionality of the code. The example will still work as explained in the book. However, the way that the IDE presents output could change. For example, instead of appearing inline, plots could appear in a separate window. Even though using a separate window is less convenient, either method works just fine. If you ever do encounter a magic function-related problem, please be sure to let me know at [email protected].
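
For example, here is the magic-free equivalent of the notebook cell shown earlier; in most desktop IDEs, the call to plt.show() simply opens the plot in its own window:

    # Plain-Python equivalent: no magic functions required.
    import matplotlib.pyplot as plt

    plt.plot([0, 1, 2, 3], [0, 1, 4, 9])
    plt.title("Displayed by plt.show() instead of a magic function")
    plt.show()   # opens a separate window in most desktop IDEs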

Apathy, Sympathy, and Empathy in Books

This is an update of a post that originally appeared on May 23, 2016.

I’ve written more than a few times about the role that emotion plays in books, even technical books. Technical books such as Accessibility for Everybody: Understanding the Section 508 Accessibility Requirements and Machine Learning Security Principles are tough to write because they’re packed with emotion. The author must not only convey emotion and evoke emotions in the reader, but also explore the emotion behind the writing. In this case, the author’s emotions may actually cause problems with the book content. The writing is tiring because the author experiences emotions in the creation of the text, and the roller-coaster of emotions tends to take a toll. Three common emotions that authors experience in the writing of a book, and that authors convey to the reader as part of communicating the content, are apathy, sympathy, and empathy. These three emotions can play a significant role in the suitability of the book’s content in helping readers discover something new about the people they support, themselves, and even the author.

It’s a mistake to feel apathy toward any technical topic. Writers need to consider the ramifications of the content and how it affects both the reader and the people that the reader serves. For example, during the writing of Artificial Intelligence for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition, Luca and I discussed the potential issues that automation creates for the people who use it and those who are replaced by it in the job market. Considering how to approach automation in an ethical manner is essential to creating a positive view of the technology that helps people use it for good. Even though apathy is often associated with no emotion at all, people are emotional creatures, and apathy often results in an arrogant or narcissistic attitude. Not caring about a topic isn’t an option.

I once worked with an amazing technical editor who told me more than a few times that people don’t want my sympathy. When you look at sympathy in the dictionary, the result of having sympathy toward someone would seem positive, but after more than a few exercises to demonstrate the effects of sympathy on stakeholders with disabilities, I concluded that the technical editor was correct—no one wanted my sympathy. The reason is simple when you think about it. The connotation of sympathy is that you’re on the outside looking in and feel pity for the person struggling to complete a task. Sympathy makes the person who engages in it feel better, but does nothing for the intended recipient except make them feel worse. However, sympathy is still better than apathy because at least you have focused your attention on the person who benefits from the result of your writing efforts.

Empathy is often introduced as a synonym of sympathy, but the connotation and effects of empathy are far different from sympathy. When you feel empathy and convey that emotion in your writing, you are on the inside, with the person you’re writing for, looking out. Putting yourself in the position of the people you want to help is potentially the hardest thing you can do and certainly the most tiring. However, it also does the most good.

Empathy helps you understand that someone who loses a job to automation isn’t looking for a new career; the old one worked just fine. The future doesn’t look bright at all to them. Likewise, someone with disabilities isn’t looking for a handout, and they don’t want you to perform the task for them. They may, in fact, not feel as if they have a disability at all. It was the realization that technology could create a level playing field, so that the people I wanted to help could help themselves and feel empowered by their actions, that opened new vistas for me. The experience has colored every book I’ve written since I first came to realize that empathy is the correct emotion to convey, and my books all try to convey emotion in a manner that empowers, rather than saps, the strength of my reader and the people my reader serves.

Obviously, a good author has more than three emotions. In fact, the toolbox of emotions that an author carries is nearly limitless, and it’s wise to employ them all as needed. However, these three emotions have a particular role to play and are often misunderstood by authors. Let me know your thoughts on these three emotions or about emotions in general at [email protected].