Understanding the Effects of Net Neutrality on Web Programmers

There has been a lot of hubbub about net neutrality. I even saw not one, but two articles about the topic in my local newspaper the other day. Of course, the discussion has been going on for a while now and will continue to go on, eventually ending up in the courts. My interest in the topic differs from almost every other account you’ll read. While everyone else seems concerned about how fast their app will run, I’m more concerned about getting new applications out and making them run correctly on a wide range of systems.

Both HTML5 Programming with JavaScript for Dummies and CSS3 for Dummies speak to the need for performance testing. Neither book covers the topic in detail or uses exotic techniques, but it’s an issue every good programming book should cover. Of course, I had no idea when I wrote those books that something like net neutrality would become fact. The developer now has something new to worry about. Given that no one else is talking much about developer needs, I decided to write Considering Net Neutrality and API Access. The article considers just how developers are affected by net neutrality.

If net neutrality remains the law of the land, developers of all types will eventually have to rethink their strategies for accessing data online, at a minimum. However, the effects will manifest themselves in even more ways. For example, consider how net neutrality could affect specialty groups such as data scientists. It will also affect people in situations they never expected. What happens when net neutrality assures equal access speeds for the x-ray needed to save your life and the online game the kid next to you is playing? Will people die in order to assure precisely equal access? So far, I haven’t found anyone talking about these issues. There just seems to be a nebulous idea of what net neutrality might mean.

My thought is that we need a clearer definition of precisely what the FCC means by equal access. It’s also important to define exceptions to the rule, such as medical needs or real-time applications like self-driving cars. The rules need to spell out what fair really means. As things sit right now, I have to wonder whether net neutrality will end up being another potentially good idea gone really bad because of a lack of planning and foresight. What are your ideas about net neutrality? Let me know at John@JohnMuellerBooks.com.

 

Getting Python to Go Faster

No one likes a slow application. So, it doesn’t surprise me that readers of Professional IronPython and Beginning Programming with Python For Dummies have asked me to provide them with some tips for making their applications faster. I imagine I’ll eventually start receiving the same request from Python for Data Science for Dummies readers as well. With this in mind, I’ve written an article for New Relic entitled 6 Python Performance Tips that will help you create significantly faster applications.

Python is a great language because you can use it in so many ways to meet so many different needs, and it runs well on most platforms. It wouldn’t surprise me to find that Python eventually replaces a lot of the other languages currently in use. The medical and scientific communities have certainly taken strong notice of Python, and now I’m using it to work through data science problems. In short, Python really is a cool language, as long as you do the right things to make it fast.
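To give you a taste of the sort of tip I mean, here’s a minimal sketch (my own toy example, not necessarily one of the six from the article) that uses the standard timeit module to compare two ways of building a large string:

  import timeit

  def concat_in_loop(words):
      # Repeated += copies the whole string built so far on each pass,
      # so the loop costs roughly O(n^2) for n words.
      result = ''
      for word in words:
          result += word
      return result

  def concat_with_join(words):
      # str.join builds the result in a single O(n) pass.
      return ''.join(words)

  words = ['spam'] * 10_000
  print('loop:', timeit.timeit(lambda: concat_in_loop(words), number=100))
  print('join:', timeit.timeit(lambda: concat_with_join(words), number=100))

Run it yourself; the join version should win handily on any recent CPython release, and similar pick-the-right-idiom advice applies throughout the language.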

Obviously, the article has only six tips, so you should expect to see additional tips posted to my blog from time to time. I also want to hear about your tips. Make sure you write me about them at John@JohnMuellerBooks.com. When you write, be sure to tell me which version of Python you’re using and the environment in which you’re using it. Don’t limit your tips to those related to speed, either. I really want to hear about your security and reliability tips too.

As with all my books, I provide great support for my Python books. I really do want you to have a great learning experience, and that means having a great environment in which to learn. Please don’t write me about your personal coding projects, but I definitely want to hear about any book-specific problems you have.

 

 

Keeping Your CSS Clean

It happens to everyone. Even with the best intentions, your code can become messy and unmanageable. When code is compiled into an executable, the compiler can perform some level of cleanup and optimization for you. However, when you’re working with a text-based technology, such as Cascading Style Sheets (CSS), the accumulated grime slows your application measurably, which frustrates users. Frustrated users click the next link in line rather than deal with an application that doesn’t work the way they think it should. It doesn’t take long to figure out that you really must keep your CSS clean if you plan to keep your users happy (and using your application).

Manually cleaning your code is a possibility, as is keeping your code clean in the first place. Both solutions can work when you’re a lone developer or part of a small team. The problem begins when you’re part of a larger team and any number of people are working on the code at the same time. As the size of the team increases, so does the potential for gunky code that negatively affects application speed, reliability, and security. To clean code in a team environment, you really do need some level of automation, which is why I wrote Five Free Tools to Clean Up Your CSS. The article provides good advice on which tools will help you get the most out of your application.
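To show the flavor of what these tools automate, here’s a toy Python sketch (mine, not one of the five tools from the article) that flags duplicate selectors in a stylesheet; the file name styles.css is just a placeholder:

  import re
  from collections import Counter

  def find_duplicate_selectors(css_text):
      # Treat everything that precedes an opening brace as a selector.
      # The regex is deliberately crude (it ignores comments and @media
      # nesting), so treat the output as hints, not a definitive report.
      selectors = [s.strip() for s in re.findall(r'([^{}]+)\{', css_text)]
      return {sel: n for sel, n in Counter(selectors).items() if n > 1}

  with open('styles.css', encoding='utf-8') as f:   # placeholder path
      for selector, count in find_duplicate_selectors(f.read()).items():
          print(f'{selector!r} appears {count} times')

A real tool goes much further (merging rules, dropping unused selectors, minifying the result), which is why the article focuses on dedicated cleaners rather than homegrown scripts.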

The cleaner you keep your code, the faster the application will run and the less likely it is to have reliability and security problems. Of course, there are many other quality issues you must consider as part of browser-based application development. Messy CSS does cause woe for a lot of developers, but it isn’t the only source of problems. I’ll cover some of these other issues in future posts. What I’d like to hear now is what you consider the worst offenders when it comes to application speed, reliability, and security problems. Let me know about your main source of worry at John@JohnMuellerBooks.com.

 

A History of Microprocessors

Every once in a while, someone sends me a truly interesting link. Having seen a few innovations myself and possessing a strong interest in history, I read CPU DB: Recording Microprocessor History on the Association for Computing Machinery (ACM) site with great interest. The post is a bit long, but the work by Andrew Danowitz, Kyle Kelley, James Mao, John P. Stevenson, and Mark Horowitz does something no other site does: it provides a comprehensive view of 790 different microprocessors created since the introduction of Intel’s 4004 in November 1971. CPU DB is available for anyone to use and should prove useful for scientists, developers, and hobbyists alike.

Unlike a lot of the work done on microprocessors, this study wasn’t commissioned by a particular company. In fact, you’ll find processors from 17 different vendors. The work also spans a considerable number of disciplines. For example, you can discover how the physical scaling of devices has changed over the years, as well as the effects of software on processor design and development.

A lot of the information in this report is also available from the vendor or a third party in some form. The problem with vendor specification sheets and third-party reports is that they vary in composition, depth, and content, making any sort of comparison extremely difficult and time consuming. This database makes it possible to compare the 790 processors directly, using the same criteria. A researcher can now easily see the differences between two microprocessors, making it considerably easier to draw conclusions about microprocessor design and implementation.
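As an illustration of what that kind of apples-to-apples comparison might look like in practice, here’s a hypothetical Python sketch; the CSV file, its column names, and the model names are all placeholders I made up, not the database’s actual format:

  import csv

  def load_processors(path):
      # Read a hypothetical CSV export keyed by processor model.
      with open(path, newline='', encoding='utf-8') as f:
          return {row['model']: row for row in csv.DictReader(f)}

  def compare(procs, model_a, model_b,
              fields=('year', 'clock_mhz', 'transistors')):
      # Print the same criteria for both parts, side by side.
      a, b = procs[model_a], procs[model_b]
      print(f'{"field":12} {model_a:>15} {model_b:>15}')
      for field in fields:
          print(f'{field:12} {a[field]:>15} {b[field]:>15}')

  procs = load_processors('cpudb_export.csv')   # placeholder file name
  compare(procs, 'Intel 4004', 'Intel 8086')

The value of a single curated database is precisely that such a script becomes possible: every processor is described with the same fields, measured the same way.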

Not surprisingly, it has taken a while to collect this sort of information at the depth provided. According to the site, the database has been a work in progress for 30 years now. That’s a long time to research anything, especially something as esoteric as the voltage and frequency ranges of microprocessors. The authors state that their efforts were hampered in some cases by the age of the devices and the unavailability of samples for testing. I imagine that finding a usable copy of a 4004 for testing would be nearly impossible.

You’ll have to read the report to get the full scoop on everything that CPU DB provides. The information is so detailed that the authors resorted to using tables and diagrams to explain it. Let’s just say that if you can’t find the statistic you need in CPU DB, it probably doesn’t exist. To provide a level playing field for all of the statistics, the researchers used standardized testing. For example, they rely on the Standard Performance Evaluation Corporation (SPEC) benchmarks to compare the processors. Tables 1 and 2 in the report provide an overview of the sorts of information you’ll find in CPU DB.

This isn’t a resource I’ll use every day. However, it is a resource I plan to use when trying to make sense of performance particulars. Using the information from CPU DB should remove some of the ambiguity in trying to compare system designs and determine how they affect the software running on them. Let me know what you think of CPU DB at John@JohnMuellerBooks.com.

 

Considering the Performance Triangle

At least ten years ago now, I was reading an article and considering its ramifications in light of my then-current book. The name of the article has fallen by the wayside over the years, but the book was entitled “.NET Development Security Solutions.” I was considering two important questions in the creation of that book:

 

  • Is it possible to create a truly secure application?
  • What is the actual cost of a secure application?


They’re essential questions, and the answers to them continue to affect my book writing efforts. The answer to the first question is yes: it’s possible to write a truly secure application. However, to obtain such an application, it can’t have any connectivity to anything. The second an application has any connectivity whatsoever with anything else, it becomes contaminated by that connection. Don’t just think about users entering incorrect data here. The data on a hard drive can be corrupted, as can the memory used to hold the application. For that matter, someone could probably make the argument that it’s possible to create serious problems in the processor. My readers are a truly ingenious lot, so I’m sure that, given enough time, you could come up with an application that has no connectivity whatsoever with anything, but the usefulness of such an application is debatable. What good is an application with no inputs or outputs? I came to the conclusion, after quite a few hours of thought, that the most secure application in the world does nothing at all, interacts with nothing at all, and is pretty much worthless.

As I considered this security dilemma, it occurred to me that a truly secure application also has no speed and is completely unreliable, given that it doesn’t do anything at all. As I worked through the book, I eventually came up with a performance triangle whose sides are based on:

 

  • Security: The ability of the application to prevent damage to the application data, application code, the host system, and any connected systems.
  • Reliability: The accessibility of the application by the user. A reliable application is available for use at all times and makes every possible feature available to the user in a manner that precludes damage to the data or the system.
  • Speed: The pace at which an application can accept input, perform tasks, and provide output.


To increase speed, you must remove code that provides security and reliability. (I’m assuming here that the code is already as efficient as you can make it.) Likewise, when you add security features, you decrease application availability in some situations and also add code that affects application speed. Making code reliable (which implies accessibility) reduces security and speed. The performance triangle has worked for me for many years now as a basis for thinking about how application code actually works and the tradeoffs I must consider while writing it.
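To make the tradeoff concrete, here’s a minimal Python sketch (my own toy example, not code from the book) that times a bare-bones function against a version hardened with the sort of validation that security and reliability demand. The names and numbers are illustrative only:

  import timeit

  def divide_fast(a, b):
      # Speed side of the triangle: no checks at all. Bad input crashes
      # the caller or silently propagates nonsense.
      return a / b

  def divide_safe(a, b):
      # Security/reliability side: validate inputs and guard the failure
      # case. Every added check is extra code executed on every call.
      if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
          raise TypeError('divide_safe expects numbers')
      if b == 0:
          raise ValueError('division by zero is not allowed')
      return a / b

  print('fast:', timeit.timeit(lambda: divide_fast(10, 3), number=100_000))
  print('safe:', timeit.timeit(lambda: divide_safe(10, 3), number=100_000))

The hardened version will generally run measurably slower, which is exactly the point: the checks buy reliability and security at a price in speed.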

The performance triangle isn’t based on language or platform. Whether you work on a Macintosh, a Linux system, or Windows, you need to consider its effects on your application regardless of the host system. Likewise, the language you choose doesn’t change the triangle, except that some languages provide features that enhance one triangle element more than others. For example, C++ provides a significant speed advantage over many other languages, but at a known cost to both security and reliability because the programmer is responsible for adding the required safeguards. C# provides more safeguards, which enhances reliability, but at the cost of speed.

A problem in the current application design environment is that developers often focus on one element of the triangle (normally speed or security) at the cost of the other two. In order to build an application that performs well in a specific situation, the developer must consider the ramifications of all three triangle elements—an incredibly difficult task in some situations.

Even the questions that developers ask about the three elements of the performance triangle are often flawed. For example, many developers equate reliability with uptime, but uptime is only part of the reliability picture. An application that produces inconsistent results, denies required access to the user, or damages data in some way is unreliable, even when it never goes down. Robust applications address all of these elements fully.

In the end, developers are usually faced with an imperfect scenario. Conflicting requirements and real world necessities often serve to weaken a good application design so that the developer must carefully craft the application to meet the maximum number of these requirements with the fewest possible faults. The situation comes down to one of weighing risk. A developer needs to ask which issue is most likely to cause the most harm and then write code that addresses that issue.

What sorts of design issues are you facing when developing applications? Do you use something like the performance triangle to help you consider the risks incurred when choosing one course over another in application development? Let me know your thoughts at John@JohnMuellerBooks.com.