Is Security Research Always Useful?

This is an update of a post that originally appeared on February 19, 2016.

Anyone involved in the computer industry likely spends some amount of time reading about the latest security issues in books such as Machine Learning Security Principles. Administrators and developers probably spend more time than many people, but no one can possibly read all the security research available today. There are so many researchers looking for so many bugs in so many places and in so many different ways that even if someone had the time and inclination to read every security article produced, it would be impossible. You’d need to be the speediest reader on the planet (and then some) to even think about scratching the surface. So, you must contemplate the usefulness of all that research—whether it’s actually useful or simply a method for some people to get their name on a piece of paper.

Since I first wrote this blog post, I have done a considerable amount of additional reading, including research papers, and what amazes me is that most exploits remain essentially the same. The techniques may differ and may improve, but the essentials of the exploit remain the same. It turns out that humans are the weakest link in every security chain and that social engineering attacks remain a mainstay of hackers. The one thing that has changed in seven years is that the use of machine learning and deep learning techniques has automated life for the hacker, much as these technologies have automated life for everyone else. In addition, a lack of proactive privacy makes it even easier than before for a hacker to create a believable attack by using publicly available information about an intended target.

As part of researching security, you need to consider the viability of an attack, especially with regard to your organization, infrastructure, personnel, and applications. Some attacks require physical access to the system. In some cases, you must actually take the system apart to access components in order to perform the security trick. Many IoT attacks fall into this category. Unless you or your organization is in the habit of allowing perfect strangers physical access to your systems, which might include taking them apart, you must wonder whether the security issue is even worth worrying about. You need to ask why someone would take the time to document a security issue that's nearly impossible to see, much less perform, in a real-world environment. More importantly, the moment you see that a security issue requires physical access to the device, you can probably stop reading.

You also find attacks that require special equipment to perform. The article, How encryption keys could be stolen by your lunch, discusses one such attack. In fact, the article contains a picture of the special equipment that you must build to perpetrate the attack. It places said equipment into a piece of pita bread, which adds a fanciful twist to something that is already quite odd and pretty much unworkable, given that you must be within 50 cm (19.6 in) of the device you want to attack (assuming that the RF transmission conditions are perfect). Except for the interesting attack vector (using a piece of pita bread), you really have to question why anyone would ever perpetrate this attack given that social engineering and a wealth of other attacks require no special equipment, are highly successful, and work from a much longer distance.

It does pay to keep an eye on the latest and future targets of hacker attacks. Even though many IoT attacks are the stuff of James Bond today, hackers are paying attention to IoT, so it pays to secure your systems, which are likely wide open right now. As one of my experiments for Machine Learning Security Principles, I actually did hack my own smart thermostat (after which I immediately improved its security). The number of IoT attacks is increasing considerably, so ensuring that you maintain electrical, physical, and application security over your IoT devices is important, but not to the exclusion of other needs.

A few research pieces become more reasonable by discussing outlandish sorts of hacks that could potentially happen after an initial break-in. The hack discussed in Design flaw in Intel chips opens door to rootkits is one of these sorts of hacks. You can’t perpetrate the hack until after breaking into the system some other way, but the break-in has serious consequences once it occurs. Even so, most hackers won’t take the time because they already have everything needed—the hack is overkill. However, this particular kind of hack should sound alarms in the security professional’s head. The Windows 11 requirement for the TPM 2.0 chip is supposed to make this kind of attack significantly harder, perhaps impossible, to perform. Of course, someone has already found a way to bypass the TPM 2.0 chip requirement and it doesn’t help that Microsoft actually signed off on a piece of rootkit malware for installation on a Windows 11 system. So, security research, even when you know that the actual piece of research isn’t particularly helpful, can become a source of information for thought experiments of what a hacker might do.

The articles that help most provide a shot of reality into the decidedly conspiracy-oriented world of security. For example, Evil conspiracy? Nope, everyday cyber insecurity, discusses a series of events that everyone initially thought pointed to a major cyber attack. It turns out that the events occurred at the same time by coincidence. The article author thoughtfully points out some of the reasons that the conspiracy theories seemed a bit out of place at the outset anyway.

It also helps to know the true sources of potential security issues. For example, the articles, In the security world, the good guys aren’t always good and 5 reasons why newer hires are the company’s biggest data security risk, point out the sources you really do need to consider when creating a security plan. These are the sorts of articles that should attract your attention because they describe a security issue that you really should think about.

The point is that you encounter a lot of information out there that doesn’t help you make your system any more secure. It may be interesting if you have the time to read it, but the tactics truly aren’t practical and no hacker is going to use them. Critical thinking skills are your best asset when building your security knowledge. Let me know about your take on security research at [email protected].

Automation and the Future of Human Employment

This is an update of a post that originally appeared on May 9, 2016.

It wasn't long ago that I wrote Robotics and Your Job to consider the role that robots will play in human society in the near future. Of course, robots are already doing mundane chores, and that list of chores will increase as robot capabilities improve. The question of what sorts of work humans will do in the future has crossed my mind quite a lot as I've written a number of AI, machine learning, and deep learning books such as Artificial Intelligence for Dummies, 2nd Edition. In fact, both Luca and I have discussed the topic in depth. It isn't just robotics, but the whole issue of automation that is important. Robots actually fill an incredibly small niche in the much larger topic of automation. Although articles like The end of humans working in service industry? seem to say that robots are the main issue, automation comes in all sorts of guises. So, here it is seven years later and robot theme parks are still in the news, and robots are making an impact as security guards. In addition, Huis Ten Bosch still has Robot Kingdom going (you can select either Japanese or English as needed to read the information). The fact of the matter is that in seven years robots have become a significant part of most people's lives, and the impact will continue to grow. Not that I'm actually expecting an I, Robot experience any time soon.

My vision for the future is that people will be able to work in occupations with lower risks, higher rewards, and greater interest. Unfortunately, not everyone wants a job like that. Some people really do want to go to work, clock in, place a tiny cog in a somewhat large wheel all day, clock out, and go home. They want something mindless that doesn't require much effort, so losing service and assembly line jobs to automation is a problem for them. In The Robots are Coming for Your Job, Too, the author paints a pretty gloomy picture for anyone who thinks their service job will still exist in 50 years. The reality is that any job that currently pays under $25.00 an hour is likely to become a victim of automation. Many people insist that they're irreplaceable, but the fact is that automation can easily take their job, and employers are looking forward to the change because automation doesn't require healthcare, pensions, vacation days, sick days, or salaries. Most importantly, automation does as it's told.

In the story The rise of greedy robots, the author lays out the basis for an increase in automation that maximizes business profit at the expense of workers. Articles such as On the Phenomenon of Bullshit Jobs tell why people are still working a 40-hour work week when it truly isn't necessary to do so. In short, if you really do insist on performing a task that is essentially pointless, the government and industry are perfectly willing to let you do so until a time when technology is so entrenched that it's no longer possible to do anything about it (no, I'm not making this up). As mentioned earlier, even some relatively essential jobs, such as security, have a short life expectancy with the way things are changing.

The question of how automation will affect human employment in the future remains. Theoretically, people could work a 15-hour work week even now, but then we'd have to give up some of our consumerism—the purchase of gadgets we really don't need. Earlier, I talked about jobs that are safer, more interesting, and more fulfilling. There are also those pointless jobs that the government will doubtless prop up at some point to keep people from rioting. However, there is another occupation that will likely become a major source of employment, but only for the nit-picky, detail-oriented person. In The thin line between good and bad automation, the author explores the problem of scripts calling scripts. Even though algorithms will eventually create and maintain other algorithms, which in turn means that automation will eventually build itself, someone will still have to monitor the outcomes of all that automation. In addition, the search for better algorithms continues (as described in The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World and More data or better models?). Of course, these occupations still require someone with a great education and a strong desire to do something significant as part of their occupation.

The point of all this speculation is that it isn't possible to know precisely how the world will change due to the effects of automation, but it will most definitely change. Even though automation currently has limits, scientists are working on methods to extend automation even further so that the world science fiction authors have written about for years will finally come into being (perhaps not quite in the way they had envisioned, however). Your current occupation may not exist 10 years from now, much less 50 years from now. The smart thing to do is to assume your job is going to be gone and that you really do need a Plan B in place—a Plan B that may call for an increase in flexibility, training, and the desire to do something interesting, rather than the same mundane task you've plodded along doing for the last ten years. Let me know your thoughts on the effects of automation at [email protected].

Technology and Child Safety

This is an update of a post that originally appeared on January 20, 2016.

I wrote a little over seven years ago that I had read an article in ComputerWorld, Children mine cobalt used in smartphones, other electronics, that had me thinking yet again about how people in rich countries tend to ignore the needs of those in poor countries. I had sincerely hoped at the time that things would be different, better, in seven years. Well, they're worse! We've increased our use of cobalt dramatically in order to create supposedly green cars. The picture at the beginning of the ComputerWorld article says it all, but the details will have you wondering whether a smartphone or an electric car really is worth some child's life. That's right, any smartphone or electric car you buy may be killing someone, and in a truly horrid manner. Children as young as 7 years old are mining the cobalt needed for the batteries (and other components) in the smartphones and electric cars that people seem to feel are so necessary for life (they aren't, you know; food, water, clothing, shelter, sleep, air, and reproduction are necessities; everything else is a luxury).

The problem doesn't stop when someone gets rid of the smartphone, electric car, or other technology. Other children end up dismantling the devices sent for recycling. That's right, a rich country's efforts to keep electronics out of its landfills are also killing children, because countries like India put these children to work taking the devices apart in unsafe conditions. Recycled wastes go from rich countries to poor countries because the poor countries need the money for necessities, like food. Often, these children are incapable of working by the time they reach 35 or 40 due to health issues induced by their forced labor. In short, the quality of their lives is made horribly low so that it's possible for people in rich countries to enjoy something that truly isn't necessary for life. To make matters worse, the vendors of these products build in obsolescence (making them unrepairable) so they can sell more products and make more money, increasing the devastation visited on children.

I've written other blog posts about the issues of technology pollution. However, the emphasis of those previous articles has been on the pollution itself. Taking personal responsibility for the pollution you create is important, but we really need to do more. Robotic (autonomous) mining is one way to keep children out of the mines, and projects such as UX-1 show that it's entirely possible to use robots in place of people today. The weird thing is that autonomous mining would save up to 80% of today's mining costs, so you have to wonder why manufacturers aren't rushing to employ this solution.

In addition, off-world mining would keep the pollution in space, rather than on planet Earth. Of course, off-world mining also requires a heavy investment in robots, but it promises to provide a huge financial payback in addition to keeping Earth a bit cleaner. The point is that there are alternatives that we're not using. Robotics presents an opportunity to make things right with technology, and I'm excited to be part of that answer in writing books such as Machine Learning Security Principles, Artificial Intelligence for Dummies, 2nd Edition, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition.

Unfortunately, companies like Apple, Samsung, and many others simply thumb their noses at laws that are in place to protect the children in these countries because they know you'll buy their products. Yes, they make official statements, but read their statements in that first article and you'll quickly figure out that they're excuses, and poorly made excuses at that. They don't have to care because no one is holding them to account. People in rich countries don't care because their own backyards aren't sullied and their own children remain safe. It's not that I have a problem with technology; quite the contrary. I have a problem with the manner in which technology is currently being made and supported. We need to do better. So, the next time you think about buying electronics, consider the real price of that product. Let me know what you think about polluting other countries to keep your country clean at [email protected].

Making Algorithms Useful

This is an update of a post that originally appeared on December 2, 2015.

Writing about machine learning and deep learning in my various books has been interesting because it turns math into something more than a way to calculate. Machine learning is about having inputs and a desired result, and then asking the machine to create an algorithm that will produce the desired result from the inputs. It's about generalization. You know the specific inputs and the specific results, but you want an algorithm that will provide similar results given similar inputs for any set of random inputs (the short code sketch after the list below shows the idea in practice). This is more than just math. In fact, there are five schools of thought (tribes) regarding machine learning algorithms that Luca and I introduce you to in books such as Machine Learning Security Principles, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition:

  • Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.
  • Connectionists: The origin of this tribe is in neuroscience. This group relies on backpropagation to solve problems.
  • Evolutionaries: The origin of this tribe is in evolutionary biology. This group relies on genetic programming to solve problems.
  • Bayesians: The origin of this tribe is in statistics. This group relies on probabilistic inference to solve problems.
  • Analogizers: The origin of this tribe is in psychology. This group relies on kernel machines to solve problems.
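To make the generalization idea concrete, here is a minimal Python sketch using scikit-learn. The numbers are invented purely for illustration; the point is only that the fitted model produces sensible results for inputs it never saw during training.

```python
# A minimal sketch of generalization: train on specific inputs and
# specific results, then ask for results on inputs the model has
# never seen. The data is invented purely for illustration.
from sklearn.linear_model import LinearRegression

inputs = [[1], [2], [3], [4], [5]]      # known inputs (e.g., hours of effort)
results = [52, 61, 68, 77, 85]          # known results (e.g., scores)

# Ask the machine to create an algorithm (a fitted model) that maps
# inputs to results.
model = LinearRegression()
model.fit(inputs, results)

# The fitted model generalizes: it predicts results for new inputs.
print(model.predict([[2.5], [6]]))
```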

Of course, the problem with any technology is making it useful. I'm not talking about useful in a theoretical sense, but useful in a way that affects everyone. In other words, you must create a need for the technology so that people will continue to fund it. Machine learning and deep learning are already part of many of the things you do online. For example, when you go to Amazon to buy a product and Amazon suggests other products that you might want to add to your cart, you're seeing the result of machine learning. Part of the content for the chapters of our book is devoted to pointing out these real-world uses for machine learning.

As I've written new books and updated existing ones, I've seen an almost magical progression in the capabilities of machine learning and deep learning applications, such as ChatGPT (Chat Generative Pre-Trained Transformer), which can produce some pretty amazing output.

Some of these applications, such as Siri and Alexa, continue to learn as you use them. The more you interact with them, the better they know you and the better they respond to your needs. The algorithms that these machine learning systems create get better and better as the database of your specific input grows. The algorithms are tuned to you specifically, so the experience one person has is different from an experience another person will have, even if the two people ask the same question.

Machine learning is a big mystery to many people today, while other people have gained enough experience to have strong opinions about it. Because I continue to write new machine learning/deep learning books and update others, it would be interesting to hear your questions about machine learning and deep learning. After all, I’d like to tune the content of my books to meet the most needs that I can. Where do you see this technology headed? What confuses you about it? Talk to me at [email protected].

Robotics and Your Job

This is an update of a post that originally appeared on February 29, 2016.

I have written more than a few books now that involve robotics, AI, machine learning, deep learning, or some other form of advanced technology, such as Machine Learning Security Principles, Artificial Intelligence for Dummies, 2nd Edition, Algorithms for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition. People have often asked me whether a Terminator style robot is possible based on comments by people like Stephen Hawking and Elon Musk (who seems to have changed his mind). It isn't, Ex Machina and The Terminator notwithstanding. I'm also asked whether they'll be without work sometime soon (the topic of this post). (As an aside, deus ex machina is a literary plot device that had been around for a long time before the movie came out.)

Whether your job is secure depends on the kind of job you have, whether robots will actually save money and improve on the technology we already have, what you believe as a person, and how your boss interprets all the hype currently out there. For example, if your claim to fame is flipping burgers, then you'd better be ready to get another job soon. McDonald's has opened its first mostly automated store in Texas. Some jobs are simply going to go away, no doubt about it.

However, robots aren’t always the answer to the question. Many experts see three scenarios: humans working for robots (as in a doctor collaborating with a robot to perform surgery more accurately and with greater efficiency), humans servicing robots (those McDonald’s jobs may be going away, but someone will have to maintain the robots), and robots working for humans (such as that Roomba that’s currently keeping your house clean). The point is that robots will actually create new jobs, but that means humans will need new skills. Instead of boring jobs that pay little, someone with the proper training can have an interesting job that pays moderately well.

An interesting backlash against automation has occurred in several areas. So, what you believe as a person does matter when it comes to the question of jobs. The story that tells the tale most succinctly appears in ComputerWorld, Taxpayer demand for human help soars, despite IRS automation (the fact that the IRS automation is overloaded doesn’t help matters). Sometimes people want a human to help them. This backlash could actually thwart strategies such as the use of robotic police dogs, which don’t appear to be very popular with the public.

There is also the boss' perspective to consider. A boss is only a boss as long as there is someone or something to manage. Even though your boss will begrudgingly give up your job to automation, you can be sure that giving up their own job isn't on the list of things to do. Some members of the press have resorted to viewing the future as a time when robots do everything and humans don't work, but really, this viewpoint is a fantasy. However, it's not a fantasy that companies such as Hitachi are experimenting with robot managers. Some employees actually prefer the consistent interaction of a robot boss. It's unlikely that managers will take this invasion of their domain sitting down; expect them to do something to make using robots untenable.

Robots are definitely making inroads into society and children growing up with robots being a part of their lives will likely be more accepting of them. Still, there is some debate as to just how far robot use will go and how fast it will get there. The interaction between business and the people that businesses serve will play a distinct role in how things play out. However, all this said, your job will likely be different in the future due to the influences of robots. For the most part, I feel that life will be better for everyone after the adjustment, but that the adjustment will be quite hard. Let me know your thoughts on robots at [email protected].

IPython Magic Functions

This is an update of a post that originally appeared on April 25, 2016.

All of my current Python language books (and those I collaborated on with Luca Massaron), including Machine Learning Security Principles, Algorithms for Dummies, 2nd Edition, Beginning Programming with Python For Dummies, 3rd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition, allow use of Jupyter Notebook (through Anaconda) or Google Colab to interact with the example code. Both of these IDEs extend the development environment in a number of ways, one of which is the use of magic functions. You see the magic functions in the code of these books as calls that begin with either one or two percent signs (% or %%). The most common of these magic functions is %matplotlib, which controls how IPython Notebook or Jupyter Notebook displays plot output from the code.
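As a quick illustration, here is what a typical notebook cell with a magic function might look like (a minimal sketch of my own, not an example taken from any specific book):

```python
# A typical notebook cell: the magic function appears first, followed
# by ordinary Python code. The %matplotlib inline magic tells the
# notebook to display plots directly in the cell output.
%matplotlib inline

import matplotlib.pyplot as plt

plt.plot([0, 1, 2, 3], [0, 1, 4, 9])
plt.title("A simple inline plot")
plt.show()

# Running %lsmagic in a cell lists every magic function available.
```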

You can find a listing of the most common magic functions in the Python for Data Science for Dummies Cheat Sheet. None of my books use any other magic functions, so this is also a complete list of magic functions that you can expect to find in our books. However, you might want to know more. Fortunately, the site at https://damontallen.github.io/IPython-quick-ref-sheets/ provides you with a complete listing of the magic commands (and a wealth of other information about Jupyter Notebook).

There are differences in magic function support between Jupyter Notebook and Google Colab, some of which are outlined in our books as needed. None of these differences will significantly affect your learning experience. However, it pays to know that Jupyter Notebook and Google Colab are only mostly the same, not precisely the same, and you’ll encounter differences. The screenshots in my books reflect the Jupyter Notebook version supported by that book, so what you might see on your screen when using magic functions in Google Colab may differ from the book.

Of course, you might choose to use another IDE—one that isn’t quite so magical as Jupyter Notebook or Google Colab. In this case, you need to remove those magic commands. Removing the commands generally won’t affect functionality of the code. The example will still work as explained in the book. However, the way that the IDE presents output could change. For example, instead of being inline, plots could appear in a separate window. Even though using a separate window is less convenient, either method works just fine. If you ever do encounter a magic function-related problem, please be sure to let me know at [email protected].
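For example, the plotting cell shown earlier still works after you delete the %matplotlib line; the only difference in a conventional IDE is where the plot appears (again, a minimal sketch of my own):

```python
# The same plotting code with the magic line removed. In a plain
# Python IDE, plt.show() typically opens the plot in its own window
# rather than displaying it inline.
import matplotlib.pyplot as plt

plt.plot([0, 1, 2, 3], [0, 1, 4, 9])
plt.title("The same plot in a separate window")
plt.show()
```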

Apathy, Sympathy, and Empathy in Books

This is an update of a post that originally appeared on May 23, 2016.

I've written more than a few times about the role that emotion plays in books, even technical books. Technical books such as Accessibility for Everybody: Understanding the Section 508 Accessibility Requirements and Machine Learning Security Principles are tough to write because they're packed with emotion. The author must not only convey emotion and evoke emotions in the reader, but also explore the emotion behind the writing. In this case, the author's emotions may actually cause problems with the book content. The writing is tiring because the author experiences emotions in the creation of the text. The roller-coaster of emotions tends to take a toll. Three common emotions that authors experience in the writing of a book, and that authors convey to the reader as part of communicating the content, are apathy, sympathy, and empathy. These three emotions can play a significant role in the suitability of the book's content in helping readers discover something new about the people they support, themselves, and even the author.

It's a mistake to feel apathy toward any technical topic. Writers need to consider the ramifications of the content and how it affects both the reader and the people that the reader serves. For example, during the writing of Artificial Intelligence for Dummies, 2nd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition, Luca and I discussed the potential issues that automation creates for the people who use it and those who are replaced by it in the job market. Considering how to approach automation in an ethical manner is essential to creating a positive view of the technology that helps people use it for good. Even though apathy is often associated with no emotion at all, people are emotional creatures, and apathy often results in an arrogant or narcissistic attitude. Not caring about a topic isn't an option.

I once worked with an amazing technical editor who told me more than a few times that people don’t want my sympathy. When you look at sympathy in the dictionary, the result of having sympathy toward someone would seem positive, but after more than a few exercises to demonstrate the effects of sympathy on stakeholders with disabilities, I concluded that the technical editor was correct—no one wanted my sympathy. The reason is simple when you think about it. The connotation of sympathy is that you’re on the outside looking in and feel pity for the person struggling to complete a task. Sympathy makes the person who engages in it feel better, but does nothing for the intended recipient except make them feel worse. However, sympathy is still better than apathy because at least you have focused your attention on the person who benefits from the result of your writing efforts.

Empathy is often introduced as a synonym of sympathy, but the connotation and effects of empathy are far different from sympathy. When you feel empathy and convey that emotion in your writing, you are on the inside, with the person you’re writing for, looking out. Putting yourself in the position of the people you want to help is potentially the hardest thing you can do and certainly the most tiring. However, it also does the most good.

Empathy helps you understand that someone who loses a job to automation isn't looking for a new career; the old one worked just fine. The future doesn't look bright at all to them. Likewise, someone with disabilities isn't looking for a handout, and they don't want you to perform the task for them. They may, in fact, not feel as if they have a disability at all. It was the realization that technology could create a level playing field, one where the people I wanted to help could help themselves and feel empowered by their actions, that opened new vistas for me. The experience has colored every book I've written since I first came to realize that empathy is the correct emotion to convey, and my books all try to convey emotion in a manner that empowers, rather than saps, the strength of my reader and the people my reader serves.

Obviously, a good author has more than three emotions. In fact, the toolbox of emotions that an author carries is nearly limitless, and it's wise to employ them all as needed. However, these three emotions have a particular role to play and are often misunderstood by authors. Let me know your thoughts on these three emotions or about emotions in general at [email protected].

Handling Source Code in Books

This is an update of a post that originally appeared on April 4, 2011.

One of the biggest conundrums for the technical writer is how to handle source code in a book. The goal is to present an easily understood example to the reader—one that demonstrates a principle in a clear and concise manner. In fact, complexity is a problem with many examples—the author tries to stuff too much information into the example and ends up obfuscating the very principles that the reader is supposed to obtain.

There is also the problem with pages because books have a limited number of them. The technical writer must balance the depth and functionality of the examples against a need to present as many examples as possible. Even if a book is balanced, some readers are going to be disappointed that the book doesn’t contain the example they actually needed. So, very often simplicity must win the day in creating application source code for a book, despite the desire of the author to present something more real world, something with additional glitz and polish.

Because the goal of an example is to teach, very often the examples you see in a book have more comments than those that you see in real life. An example in a book must include as much information as possible if the code is going to fulfill its purpose. Of course, book comments should illustrate all the best principles of creating comments in real code. In short, if real-world code looked a bit more like book code, then it's possible that developers would spend far less time trying to figure code out and more time making changes.

Some readers will take the author to task because the code may not always provide the error trapping that production code provides. In fact, as with many teaching environments, the safety features in code are often removed for the sake of clarity. This problem plagues other environments too. In the past, it was common for woodworking magazines to post a note near the beginning of the magazine telling the reader that the safety devices had been removed for the sake of clarity and that no one in their right mind would actually work with woodworking equipment without the safety devices. Likewise, the code you see in a book often lacks sufficient error trapping, making the principle that the code demonstrates clearer, at the cost of fragility. You can usually cause book examples to break easily, but no one in their right mind would create production code like that.
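To make the trade-off concrete, here is a small sketch of my own (not taken from any particular book) showing the same task written book-style and production-style:

```python
# Book-style: short and clear, but fragile. A missing file or a
# non-numeric line breaks it immediately.
def average_from_file(filename):
    with open(filename) as source:
        values = [float(line) for line in source]
    return sum(values) / len(values)


# Production-style: the same task with the error trapping that the
# book version omits for the sake of clarity.
def average_from_file_safe(filename):
    try:
        with open(filename) as source:
            values = [float(line) for line in source if line.strip()]
    except (OSError, ValueError) as error:
        print(f"Could not read numbers from {filename}: {error}")
        return None
    if not values:          # guard against an empty file
        return None
    return sum(values) / len(values)
```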

Choosing good examples for a book is hard, so getting your input really is important. I may not be able to provide precisely the example you need or want, but I may be able to provide something similar in the next edition of the book. Of course, I won’t know your needs or wants unless you tell me about them. I’m always open to hearing your ideas. However, I’m not open to providing free consulting in the form of troubleshooting your error code unless you’re willing to hire me to do so. Please keep the discussion to ideas that you’d like to see in book updates by contacting me at [email protected].

Using Online IDEs

Many readers today want to code anywhere, using any device, and at any time. Obviously, hauling around a desktop system or even a high end laptop is not on the list of things they want to do. I thought it unique when a reader wrote me to say that coding on a smartphone should be included in my books. I have since had several more readers make the same request. So, I’m no longer surprised to hear about the various methods used to produce source code.

Personally, I think squinting while I code would be uncomfortable, but I definitely don't want to hinder anyone's learning process, so many (not all) of my introductory books now include a chapter on coding with mobile or online IDEs. For C++ All-In-One for Dummies, 4th Edition, the mobile device tool of choice is CppDroid (a mobile IDE for C++). Python books, such as Beginning Programming with Python For Dummies, 3rd Edition, Python for Data Science for Dummies, and Machine Learning for Dummies, 2nd Edition, rely on Google Colab (an online IDE). Just out of curiosity, I tried Google Colab using my smart TV and it did work—not that I plan to code using my TV anytime soon. Even though my latest book, Machine Learning Security Principles, doesn't include any sort of IDE setup chapter, I did test the source code using Google Colab for that book as well.

Any time you change IDEs, you need to test your code to actually ensure that it will work with that IDE. Consequently, during the writing process I carefully tested every example on both my desktop system and the alternative IDE that the book supports. This dual testing process helps ensure you have a great learning experience. Unfortunately, I don’t have the time or resources to test every possible IDE out there. To obtain a good learning experience from my books, you need to choose one of the IDEs that I mention. In addition, it’s essential that you choose the correct version of the IDE, the one that matches the book, because vendors tend to introduce breaking changes into IDEs and compilers during the upgrade process.
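When code has to run both locally and in Google Colab, one common pattern is to detect the environment at runtime and adjust accordingly; here is a minimal sketch of my own (the google.colab module is only present inside a Colab session):

```python
# Detect whether the code is running in Google Colab or in a local
# environment such as Jupyter Notebook, and report which one.
import sys

IN_COLAB = "google.colab" in sys.modules

if IN_COLAB:
    # Colab ships with its own preinstalled package versions, so
    # results can differ slightly from a local Anaconda setup.
    print("Running in Google Colab")
else:
    print("Running locally (for example, in Jupyter Notebook)")
```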

When you contact me about an IDE issue, the first thing I need to know is which book (and edition of that book) you have, followed by the operating system and IDE version you’re using. Otherwise, it’s very tough for me to try to help you and I do want to help you within reason. No, I’m not going to try to support the alternative IDE that you like that was created for you by your long lost relative. I’ll only support the IDEs specifically mentioned in the book. That said, I do want to hear your input about IDEs at [email protected]. If you’re encountering a problem with the IDE that’s used in the book, I definitely need to know about it and I’ll post some helpful information about it on this blog.

Source Code Placement

This is an update of a post that originally appeared on October 12, 2015.

I always recommend that you download the source code for my books. The Verifying Your Hand Typed Code post discusses some of the issues that readers encounter when typing code by hand. However, I also understand that many people learn best when they type the code by hand and that’s the point of getting my books—to learn something really interesting (see my principles for creating book source code in the Handling Source Code in Books post). Even if you do need to type the source code in order to learn, having the downloadable source code handy will help you locate errors in your code with greater ease. I won’t usually have time to debug your hand typed code for you.

Depending on your platform, you might encounter a situation where the IDE chooses an unfortunate place to put the source code you want to save. For example, a Windows system might choose the Program Files folder, which contains a space and doesn't allow saving of files unless you specifically override the default rights. Fortunately, modern IDEs do manage to avoid many of these problems, but you still need to be aware that they could exist, especially when using an older IDE.

My recommendation for fixing these, and other source code placement problems, is to create a folder for your source code that you can access and have full rights to work with (the short sketch after the list shows one way to set such a folder up). My books usually make a recommendation for the source code file path, but you can use any path you want. The point is to create a path that's:

  • Easy to access
  • Allows full rights
  • Lacks spaces in any of the pathname elements
  • On a local drive, rather than a cloud drive in many cases
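Here is a small sketch of my own that sets up a folder along these lines; the BookCode folder name is only an example, so substitute any local path you prefer:

```python
# Create a local source code folder that you have full rights to,
# and warn if the resulting path contains a space.
from pathlib import Path

code_folder = Path.home() / "BookCode"   # example name; pick your own

code_folder.mkdir(parents=True, exist_ok=True)

if " " in str(code_folder):
    print("Warning: the path contains a space; consider another location.")
else:
    print(f"Source code folder ready: {code_folder}")
```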

As long as you follow these rules, you likely won’t experience problems with your choice of source code location. If you do experience source code placement problems when working with my books, please be sure to let me know at [email protected].