Making Algorithms Useful

I’m currently writing Machine Learning for Dummies. The book is interesting because it turns math into something more than a way to calculate. Machine learning is about having inputs and a desired result, and then asking the machine to create an algorithm that will produce the desired result from those inputs. It’s about generalization. You know the specific inputs and the specific results, but you want an algorithm that will produce similar results from similar inputs, even inputs it has never seen before (the short code sketch after the list shows the idea in miniature). This is more than just math. In fact, there are five schools of thought (tribes) regarding machine learning algorithms that Luca and I introduce you to in Machine Learning for Dummies:

  • Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.
  • Connectionists: The origin of this tribe is in neuroscience. This group relies on backpropagation to solve problems.
  • Evolutionaries: The origin of this tribe is in evolutionary biology. This group relies on genetic programming to solve problems.
  • Bayesians: The origin of this tribe is in statistics. This group relies on probabilistic inference to solve problems.
  • Analogizers: The origin of this tribe is in psychology. This group relies on kernel machines to solve problems.
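
To make the input-to-result idea concrete, here’s a minimal sketch in Python using scikit-learn (my tooling choice for this post; the book itself covers both R and Python). The scenario and the data are entirely made up: each input pairs hours studied with hours slept, and the desired result is whether a student passed. The point is simply that the machine builds its own rule from the examples and then generalizes to inputs it has never seen.

    # A minimal sketch of learning from inputs and desired results.
    # The pass/fail scenario and the data are invented for illustration.
    from sklearn.tree import DecisionTreeClassifier

    # Each input pairs hours studied with hours slept; the desired result
    # is 1 for passed and 0 for failed.
    inputs = [[1, 4], [2, 8], [6, 7], [8, 6], [3, 5], [9, 8]]
    results = [0, 0, 1, 1, 0, 1]

    # The machine creates its own decision rules from the examples.
    model = DecisionTreeClassifier(random_state=0)
    model.fit(inputs, results)

    # Similar, previously unseen inputs should produce similar results.
    print(model.predict([[7, 7], [1, 6]]))

Whether the algorithm comes from the Symbolists, the Connectionists, or any of the other tribes, the pattern is the same: examples go in, and a generalized rule comes out.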

Of course, the problem with any technology is making it useful. I’m not talking about useful in a theoretical sense, but useful in a way that affects everyone. In other words, you must create a need for the technology so that people will continue to fund it. Machine learning is already part of many of the things you do online. For example, when you buy a product on Amazon and Amazon suggests other products that you might want to add to your cart, you’re seeing the result of machine learning. Part of the content for the chapters of our book is devoted to pointing out these real-world uses for machine learning.
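
As a rough illustration (not Amazon’s actual technique, which is far more sophisticated and proprietary), suggestions like these can come from noticing which products tend to be bought together. Here’s a tiny sketch that uses a made-up purchase table and cosine similarity:

    # A toy "customers who bought this also bought..." sketch.
    # The products and the purchase table are invented for illustration.
    import numpy as np

    products = ["book", "lamp", "desk", "chair"]

    # Rows are shoppers, columns are products; 1 means the shopper bought it.
    purchases = np.array([
        [1, 0, 1, 1],
        [1, 1, 0, 0],
        [0, 0, 1, 1],
        [1, 0, 1, 0],
    ])

    # Cosine similarity between product columns: products that tend to be
    # bought together receive a high score.
    norms = np.linalg.norm(purchases, axis=0)
    similarity = (purchases.T @ purchases) / np.outer(norms, norms)

    # Suggest the product most similar to "desk", ignoring the desk itself.
    desk = products.index("desk")
    scores = similarity[desk].copy()
    scores[desk] = -1
    print("Bought a desk? You might also like a", products[int(scores.argmax())])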

Some uses are almost, but not quite, ready for prime time. One of these uses is the conversational AI, such as Siri, that people talk with. The more you interact with them, the better they know you and the better they respond to your needs. The algorithms that these machine learning systems create get better and better as the database of your specific input grows. The algorithms are tuned to you specifically, so the experience one person has differs from the experience another person will have, even if the two people ask the same question. I recently read about one such system under development, Nara. What makes Nara interesting is that she seems more generalized than other forms of AI currently out there and can therefore perform more tasks. Nara comes from the Connectionist tribe and attempts to mimic the human mind. She’s all about making appropriate matches, everything from your next dinner to your next date. Reading about Nara helps you understand machine learning just a little better, at least from the Connectionist perspective.

Machine learning is a big mystery to many people today. Given that I’m still writing this book, it would be interesting to hear your questions about machine learning. After all, I’d like to tune the content of my book to meet as many needs as I can. I’ve written a few posts about this book already, and you can see them in the Machine Learning for Dummies category. After reading the posts, please let me know your thoughts on machine learning and AI. Where do you see it headed? What confuses you about it? Talk to me at John@JohnMuellerBooks.com.

 

A Future Including Virtual Reality

Seeing is believing, or at least that’s how it’s supposed to be. However, seeing may not mean believing anything in the future. While building the PC for Build Your Own PC on a Budget, I investigated various new technologies, including virtual reality, where what you see may not exist at all. Of course, gamers are eagerly anticipating the Oculus Rift, which promises to transform gaming from something you watch on a monitor into an experience where you really feel as if you’re there. This kind of technology isn’t quite available yet, but it will be soon. Even when the hardware is ready and the drivers work as promised, truly immersive games will take time to create. Look for this experience to evolve over time to the point where the holodeck featured in Star Trek actually becomes a reality.

To attract attention and become viable, however, technology must answer specific needs today. It was with great interest that I read Marines test augmented reality battlefield. Unlike the Oculus Rift, this technology actually exists today, and it demonstrates some of the early uses of augmented and virtual reality that you can expect to see. In this case, the background is real; it’s an actual golf course. The system adds the hardware of war to the scene, including tanks, mortars, and effects such as smoke. What the marine sees is a realistic battlefield that doesn’t exist anywhere but in the viewer’s glasses. This is the sort of practical use of virtual reality that will continue to drive development until we get a holodeck sometime in the future.

Virtual reality for gamers and the armed services is nice, but it’s also becoming a reality for everyone else. Samsung and Facebook are introducing a virtual reality solution for moviegoers. That’s right, you’ll be able to strap some glasses to your head and get transported to a comfy living room with a big-screen TV where you can watch the latest movies offered by Netflix. The Gear VR device promises to change the way that people see movies forever. This particular device actually works with your smartphone, so you need a compatible smartphone to use it. In addition to movies, Gear VR also promises to let you play virtual reality games and become involved in other immersive environments. All you really need is the right app.

An immersive experience, where you eventually won’t be able to tell the real from the created, is what virtual reality promises. Using virtual reality, you could travel to other parts of the world, explore the ocean depths, or even saunter through the solar system as if you’re really there, while never leaving your own home. Virtual reality will eventually transform all sorts of environments, including the classroom. Imagine children going to school, interacting with other students, and learning from the best instructors without ever leaving their homes. A student could get a top-notch education for a fraction of the cost that students pay today.

Coupling virtual reality with other technologies, such as robotics, could also allow people to perform a great many unsafe tasks in perfect safety. A human could guide a robot through a virtual reality connection to perform real-world tasks that would be unsafe for a human to perform alone. Think about the use of the technology in fighting fires or responding to terrible events that currently put first responders at risk. Virtual reality will eventually change the way we view the world around us, and I hope that the experience is as positive as vendors are promising today. Let me know your thoughts about virtual reality at John@JohnMuellerBooks.com.

 

Beta Readers Needed for Machine Learning for Dummies

Do machines really learn, or do they simply give the appearance of learning? What does it actually mean to learn and why would a machine want to do it? Some people are saying that computers will eventually learn in the same manner that children do. However, before we get to that point, it’s important to answer these basic questions and consider the implications of creating machines that can learn.

Like many seemingly new technologies, machine learning actually has its basis in existing technologies. I initially studied artificial intelligence in 1986, and it had been around for a long time before that. Many of the statistical equations that machine learning relies upon have been around for centuries. It’s the application of the technology that differs. Machine learning has the potential to change the way in which the world works. A computer can experience its environment and learn how to avoid making mistakes without any human intervention. By using machine learning techniques, computers can also discover new things and even add new functionality. The computer is at the center of it all, but the computer’s output affects the actions of machines, such as robots. In reality, the computer learns, but the machine as a whole benefits.

Machine Learning for Dummies assumes that you have at least some math skills and a few programming skills as well. However, you do get all the basics you need to understand and use machine learning as a new way to make computers (and the machines they control) do more. While working through Machine Learning for Dummies, you discover these topics:

  • Part I: Introducing How Machines Learn
    • Chapter 1: Getting the Real Story about AI
    • Chapter 2: Learning in the Age of Big Data
    • Chapter 3: Having a Glance at the Future
  • Part II: Preparing Your Learning Tools
    • Chapter 4: Installing an R Distribution
    • Chapter 5: Coding in R Using RStudio
    • Chapter 6: Installing a Python Distribution
    • Chapter 7: Coding in Python Using Anaconda
    • Chapter 8: Exploring Other Machine Learning Tools
  • Part III: Getting Started with the Math Basics
    • Chapter 9: Demystifying the Math behind Machine Learning
    • Chapter 10: Descending the Right Curve
    • Chapter 11: Validating Machine Learning
    • Chapter 12: Starting with Simple Learners
  • Part IV: Learning from Smart and Big Data
    • Chapter 13: Preprocessing Data
    • Chapter 14: Leveraging Similarity
    • Chapter 15: Starting Easy with Linear Models
    • Chapter 16: Hitting Complexity with Neural Networks
    • Chapter 17: Going a Step Beyond Using Support Vector Machines
    • Chapter 18: Resorting to Ensembles of Learners
  • Part V: Applying Learning to Real Problems
    • Chapter 19: Classifying Images
    • Chapter 20: Scoring Opinions and Sentiments
    • Chapter 21: Recommending Products and Movies
  • Part VI: The Part of Tens
    • Chapter 22: Ten Machine Learning Packages to Master
    • Chapter 23: Ten Ways to Improve Your Machine Learning Models
    • Online: Ten Ways to Use Machine Learning in Your Organization

As you can see, this book is going to give you a good start in working with machine learning. Because of the subject matter, I really want to avoid making any errors in the book, which is where you come into play. I’m looking for beta readers who use math, statistics, or computer science as part of their profession and think they might be able to benefit from the techniques that data science and/or machine learning provide. As a beta reader, you get to see the material as Luca and I write it. Your comments will help us improve the text and make it easier to use.

In consideration of your time and effort, your name will appear in the Acknowledgements (unless you specifically request that we not provide it). You also get to read the book free of charge. Being a beta reader is both fun and educational. If you have any interest in reviewing this book, please contact me at John@JohnMuellerBooks.com and I’ll fill in all the details for you.

 

Contemplating the Issue of Bias in Data Science

When Luca and I wrote Python for Data Science for Dummies, we tried to address a range of topics that aren’t well covered in other places. Imagine my surprise when I saw a perfect article to illustrate one of these topics in ComputerWorld this week, Maybe robots, A.I. and algorithms aren’t as smart as we think. With the use of AI and data science growing exponentially, you might think that computers can think. They can’t. Computers can emulate or simulate the thinking process, but they don’t actually think. A computer is a machine designed to perform math quite quickly. If we want thinking computers, then we need a different kind of machine. It’s the reason I wrote the Computers with Common Sense? post not too long ago. The sort of computer that could potentially think is a neural network, and I discuss neural networks in the Considering the Future of Processing Power post. (Even Intel’s latest 18-core processor, which is designed for machine learning and analytics, isn’t a neural network; it simply performs the tasks that processors do now more quickly.)

However, the situation is worse than you might think, which is the reason for mentioning the ComputerWorld article. A problem occurs when the computer scientists and data scientists who work together to create algorithms that make it appear that computers can think forget that computers really can’t do any such thing. Luca and I discuss the effects of bias in Chapter 18 of our book. The chapter might have seemed academic at one time, something of interest but potentially not all that useful. Today that chapter has taken on added significance. Read the ComputerWorld article and you find that Flickr recently released a new image recognition technology. Failing to consider the role of bias in interpreting data and in the use of algorithms has had horrible results. The Guardian goes into even more detail, describing how the program has tagged black people as apes and animals. Obviously, no one wanted that particular result, but forgetting that computers can’t think produced precisely that unwanted outcome.
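
To see how this kind of bias can creep in without anyone intending it, here’s a deliberately simple, hypothetical sketch in Python with scikit-learn (it isn’t the Flickr system, whose internal details aren’t public). One group of examples dominates the invented training data, so the resulting model performs far worse on the underrepresented group, even though the algorithm itself works exactly as designed:

    # A hypothetical sketch of bias introduced through unbalanced training data.
    # The groups, features, and numbers are all invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Group A dominates the training data; group B is badly underrepresented.
    group_a = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
    group_b = rng.normal(loc=1.5, scale=1.0, size=(10, 2))
    X_train = np.vstack([group_a, group_b])
    y_train = np.array([0] * 500 + [1] * 10)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Test on balanced, unseen data drawn from both groups.
    test_a = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
    test_b = rng.normal(loc=1.5, scale=1.0, size=(100, 2))
    print("Accuracy on group A:", model.score(test_a, np.zeros(100, dtype=int)))
    print("Accuracy on group B:", model.score(test_b, np.ones(100, dtype=int)))
    # Group B accuracy typically lags far behind. The math did its job; the
    # unbalanced data (and the humans who assembled it) created the bias.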

AI is an older technology that isn’t well understood because we don’t really understand our own thinking processes. It isn’t possible to create a complete and useful model of something until you understand what it is you’re modeling, and we don’t truly understand intelligence. Therefore, it’s hardly surprising that AI has taken so long to complete even the smallest baby steps. Data science is a newer technology that seeks to help people see patterns in huge data sets, to understand the data better, and to create knowledge where none existed before. Neither technology is truly designed for stand-alone purposes yet. While I find Siri an interesting experiment, it’s just that, an experiment.

The Flickr application tries to take the human out of the loop and you see the result. Technology is designed to help mankind achieve more by easing the mundane tasks performed to accomplish specific goals. When you take the person out of the loop, what you have is a computer that is only playing around at thinking from a mathematical perspective—nothing more. It’s my hope that the Flickr incident will finally cause people to start thinking about computers, algorithms, and data as the tools that they truly are—tools designed to help people excel in new ways. Let me know your thoughts about AI and data science at John@JohnMuellerBooks.com.

 

Computers with Common Sense?

The whole idea behind products, such as Siri, is to give computers a friendlier face. Much as the crew does with the computer on the Enterprise in Star Trek, you converse with the machine and get intelligent answers back much of the time. The problem is that computers don’t currently have common sense. A computer really doesn’t understand anything anyone says to it. What you’re seeing is incredibly complex and clever programming. The understanding is in the math behind the programming. Computers truly are machines that perform math-related tasks with extreme speed and perfection.

It was with great interest that I recently read an article in the Guardian, Google a step closer to developing machines with human-like intelligence. The opening statement is misleading and meant to bedazzle the audience, but then the article gets into the actual process behind computers that could emulate common sense well enough that we’d anthropomorphize them even more than we do now. If the efforts of Professor Geoff Hinton and others are successful, computers could potentially pass the Turing Test in a big way. In short, it would become hard to tell a computer apart from a human. We could very well treat them as friends sometime in the future (some people are almost there now).

Articles often allude to scientific principles but don’t really explain them. The principle at play in this case is sentiment analysis based on words and word n-grams. You can build a sentiment analyzer by using machine learning and multiclass predictors. Fortunately, you don’t have to drive yourself nuts trying to understand the basis for the code you find online. Luca and I wrote Python for Data Science for Dummies to make it easier to understand the science behind the magic that modern applications seem to perform. Let me know your thoughts about the future of computers with common sense at John@JohnMuellerBooks.com.
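
For readers who want to see the principle in code, here’s a minimal sketch of sentiment analysis with word n-grams and a multiclass predictor, written in Python with scikit-learn (my tooling choice for the example; the tiny training set is invented and is nowhere near what a real system would require):

    # A minimal sentiment analysis sketch: word n-grams plus a multiclass
    # predictor. The six training sentences and their labels are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "I love this phone, the screen is gorgeous",
        "Absolutely wonderful service and friendly staff",
        "The battery died after two days, very disappointing",
        "Terrible update, the app crashes constantly",
        "The package arrived on Tuesday",
        "The manual describes the installation steps",
    ]
    labels = ["positive", "positive", "negative", "negative", "neutral", "neutral"]

    # ngram_range=(1, 2) represents each sentence as single words plus
    # two-word phrases (word n-grams), so wording such as "very disappointing"
    # becomes a feature the model can learn from.
    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)

    # Classify new, unseen sentences into one of the three classes.
    print(model.predict(["the staff was wonderful", "it crashes after two days"]))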