The Bionic Person, One Step Closer

When The Six Million Dollar Man first arrived on the scene in January 1974, most people thought it was simply another science fiction television show. The addition of The Bionic Woman in January 1976 was just more good entertainment. The only problem is that these shows really aren’t just entertainment anymore. I’ve already discussed the use of exoskeletons to help those who have lost use of their legs in Exoskeletons Become Reality. No, none of the people using these devices can run 60 mph or make incredible leaps—that part is still science fiction, but I’m beginning to wonder for how long. (Just in case you’re interested, there is also a bionic arm in the works.) Today I read an article entitled, “Australians implant ‘world first’ bionic eye” that appears to take the next step in the use of bionics with humans. Never in my wildest dreams did I imagine that these things would happen when I originally wrote Accessibility for Everybody. I’m happy that they have!

Of course, the bionic eye of today is quite limited. Early bionic eyes have relied on a camera built into a pair of glasses to help someone see. You need a lot of hardware to make these eyes work and the best you can hope to achieve in many cases is to see light and dark. The part I find interesting about this new bionic eye is that the apparatus is actually inserted into the person’s living eye on top of the retina (yes, you still need the glasses, but just for the camera part of the technology)! This is a true innovation because it means that we’re headed in the direction of bionics becoming nearly impossible to detect. Once this technology leaves the laboratory, the doctors envision the person being able to see a 1,024 × 1,024 image. OK, that’s not HDTV standard, but it’s a lot better than someone who is blind has today.

In many respects, the technology advances we’re seeing today are both amazing and a bit scary at the same time. Scientists are literally probing every element of the human body, discovering how they work well enough to help people live better lives, and then using technology to fill in the gaps. I see a time coming when no one will have to suffer with a devastating loss that significantly limits the enjoyment of life. What do you think about the coming of the real bionic person? How far do you think this technology might go? Let me know your thoughts at [email protected].

 

Thinking About Robotic Physicians

I have had a long-term interest in enhancing the human condition using technology in a positive way (which is the main reason I wrote Accessibility for Everybody). For example, I explored how exoskeletons can help those who don’t have use of their legs to walk as if they did. The Robotics in Your Future post started things off, though, by reviewing robotics as it relates to humans. Recently I read an article about robotic physicians in ComputerWorld. The robot is simply a method for a real person to interact with a patient over a distance. Using the robot’s functionality, a doctor can perform a number of checks on a patient and learn what has gone wrong. The technology is obviously in its infancy at this point, but I already had questions about it as soon as I read the article.

While writing Determining When Technology Hurts, I tried to consider the negative aspects of a particular technology. For example, a doctor doesn’t have a face-to-face environment in which to interact with the patient in this case. Consequently, the doctor could miss subtle cues as to the actual issues that a patient faces. This sort of technology depends on the doctor’s ability to use instruments that are attached to a robot in a remote location. The doctor may not even know whether the instruments are fully functional and providing accurate information. I’m sure the technology will eventually include safeguards (and may even include some now), but these concerns are something that we as a society must ponder before making the technology generally available. Of course, there is the major issue of dealing with the human reaction to a robotic doctor. I’m sure many people will refuse to submit to the cold hand of technology in place of the warm hands of a real doctor.

Even with these concerns, however, there is real potential for the robotic physician. For one thing, you can find any number of articles online about the expected shortfall in doctors. There simply won’t be enough doctors to go around at some point. Some plans for addressing this shortfall include using nurses to perform more of the work normally associated with doctors. Of course, because a nurse doesn’t have the same level of training, there are some serious issues with this approach. The robotic physician could help address the shortfall, especially in rural areas where patients typically have to wait now for the one day a week that a specialist visits.

The robotic physician could also fill in when there is no doctor available. Smaller, isolated communities could finally have a doctor available, even if that doctor isn’t physically present. A robotic doctor will also be necessary as our ventures into space increase. It’s also easy to imagine larger nursing homes keeping a robotic doctor on staff to help with critical patients until physical help can arrive and take over. Loss of life would be reduced in such situations because the doctor could be there in seconds. In short, this is an exciting development in technology that will have practical uses as long as we’re careful in applying it.

How would you react to a robotic doctor? Would you even let it touch you? I would imagine that human reluctance will be one of the major issues we’ll have to overcome, but I’d like to hear your take on the matter at [email protected].

 

Determining When Technology Hurts

I’ve been talking with a friend recently about a disturbing trend that I’m witnessing. Technology has started hurting people, more than helping them, in a number of ways. Actually, it’s not the technology that’s at fault, but the misuse and abuse of that technology. One of my goals as an author is to expose people to various technologies in a way that helps them. This goal is one of my major reasons for writing books like Accessibility for Everybody: Understanding the Section 508 Accessibility Requirements. It’s also the reason I’m constantly looking at how our society interacts with technology.

I’m sure that there is going to be some sort of course correction, but currently, our society has become addicted to technology in a way that harms everyone. You could be addicted to technology if you’ve ever experienced one of these symptoms:

  • You’re with friends, family, or acquaintances, but your attention is so focused on whatever technology you’re using at the moment that you lose all track of the conversation. It’s as if these other people aren’t even there.
  • You find yourself making excuses to spend just one more minute with your technology, rather than spend time with your family or friends.
  • In some cases, you forgo food, sleep, or some other necessity in order to spend more time with your technology.
  • Suddenly you have more electronic friends than physical friends.
  • You can’t remember the last time you turned all of your technology off, forgot about it, and spent the day doing anything else without worrying about it.
  • You’ve had some sort of accident or mishap because your technology got in the way.
  • Attempting even small tasks without your technology has suddenly become impossible.


Technology is meant to serve mankind, not the other way around. For example, I was quite excited to learn about the new exoskeleton technologies that I wrote about in my Exoskeletons Become Reality post. The idea that I’m able to communicate with people across the world continues to amaze me. Seeing Mars through the eyes of the rovers is nothing short of spectacular. Knowing that someone is able to live by themselves, rather than in an institution, because of their computer sends shivers up my spine. These are all good uses of technology.

However, these good uses have become offset by some of the news I’ve been reading. For example, it has been several years now since scientists and doctors began raising concerns that texting while driving is worse than drunk driving. Drunk driving is a serious offense, of course, and no one wants to undersell that. What these various groups haven’t considered is that anything that distracts you while driving is bad. For example, radios now have so many gadgets that you can get quite engrossed in trying to get what you want out of them. Except for turning the radio on or off, or perhaps changing the station, I now leave my hands off the radio unless I’m parked. The fact that I daily see cars weaving to and fro in front of me as the driver obviously plays with something in the front seat or on the dashboard tells me that other people aren’t quite as able to turn off the urge to fiddle. Small distractions like this are often the main cause of accidents. It’s so important to focus when driving, because an accident, no matter how small, can be life-changing, not to mention expensive.

I know of more than a few people who are absolutely never disconnected from their technology. They actually exhibit addictive behavior when faced with even a short time away from their technology. It’s not just games, but every aspect of computer use. Some people who work in IT can’t turn off from their computer use even when on vacation; they take a computer with them. I’ve talked about this issue in my Learning to Unplug post.

I look for the situation to become far worse before it becomes better. This past Sunday I was listening to a show on the radio that talked about how banks would like to get rid of any use of physical money. You’d carry an electronic wallet in your smartphone and that wallet would provide access to all of your money. In short, even if you’d like to unplug, you can’t, because now you depend on that smartphone for the basics in life. At some point, everyone will have to have a smartphone simply to survive if the banks have their way.

Of course, why bother with a smartphone when you can embed the computer right into the human body? The science exists to do this now. All that has to happen is that people lose their wariness of embedded computer technology, just as they have with every other form of technology to come along. Part of the method for selling this technology will undoubtedly be the ability to control your computer with your mind.

Technology is currently embedded in humans to meet special needs. For example, if you have a pacemaker, it’s likely that the doctor can check up on its functionality using a wireless connection. However, even here, humans have found a way to abuse technology, as explained in my An Update On Special Needs Device Hacking post. What has changed since then is that the entertainment industry has picked up on this sick idea. It’s my understanding that NCIS recently aired a show with someone dying of this very attack. Viewers probably thought it was the stuff of science fiction, but it’s actually science fact. You really can die when someone hacks into your pacemaker.

The implications of what these various groups are working on are quite disturbing. As technology becomes more and more embodied within humans, the ability to ever be alone will be gone. Any thought you have will also be heard by someone else. There won’t be any privacy, any time to yourself. You’ll be trapped. It’s happening right now and everyone seems to be quite willing to rush toward it at breakneck speed.

The day could come when your ability to think for yourself will be challenged by the brainwaves injected by some implanted device. Theoretically, if the science goes far enough, the ability to even control your own body will be gone. Someone is probably thinking that I sound delusional or perhaps paranoid. I truly hope that none of the future technologies I’ve read about ever come into wide use.

In the meantime, the reality is that you probably could use a break from your technology. Take time to go outside and smell the flowers. Spend an afternoon with a physical friend discussing nothing more than the beautiful day or the last book you read. Go to a theater and watch a play or a movie with your technology left at home. Eat a meal in peace. Leave your smartphone at home whenever you can. Better yet, turn it off for a day or two. Unplug from the technology that has taken over your life and take time to live. You really do owe it to yourself.

Accessibility on Windows 8 Metro

Anyone who reads my books knows that accessibility is a major concern for me because I see computers as a means of leveling the playing field for those who have special needs. In fact, my desire to make things as accessible as possible is the reason for writing Accessibility for Everybody: Understanding the Section 508 Accessibility Requirements. Microsoft has always made a strong effort to keep Windows and its attendant applications accessible—at least, to a point. You still need a third-party application such as JAWS to make Windows truly accessible (the application developer must also cooperate in this effort as described in my many programming books). Naturally, I’ve been curious about how the Metro interface will affect accessibility.

Here is the problem. The most accessible operating system that Microsoft ever created was DOS. That’s right—the non-graphical, single-tasking operating system is a perfect match for those who have special needs precisely because it doesn’t have any bells or whistles to speak of. Screen readers have no problem working with DOS and it’s actually possible to use a considerable number of assistive aids with DOS because it requires nothing more than text support. Of all the graphical environments that Microsoft has produced, I’ve personally found the combination of Windows XP and Office 2003 to be the most accessible and feature rich. The introduction of the Ribbon with Office 2007 actually reduced accessibility. If you have trouble seeing all of those fancy icons and the odd layout of the Ribbon, you’re not going to enjoy working with it.

I installed and tried the developer version of Windows 8 to test it for accessibility. Now, it’s a pre-beta product and there aren’t any Windows 8-ready versions of applications such as JAWS, so I have to emphasize that I didn’t test under the best of conditions. In fact, you could say that my test was unfair. That said, I did want to see how bad things actually are. Let me say that JAWS works acceptably, though not well, with the classic interface. It doesn’t work at all with the new Metro interface (at least, I couldn’t get it to work). So, unless you’re willing to trust Microsoft completely, you’re out of luck if you have a visual need at the moment. Things will improve, that much is certain, but it’s important to keep a careful eye on how Windows 8 progresses in this area.

The new version of Narrator does come with some new features. Some of the features may seem like glitz at first, but they’re really important. For example, the ability to speed the voice up or slow it down, and the ability to use different voices, helps with cognition. A more obvious improvement is the ability to use different commands with Narrator. Narrator will also work with Web pages now, as long as you’re willing to use Internet Explorer as your browser.

It’s with this in mind that I read the post about Windows 8 accessibility entitled, “Enabling Accessibility.” Let me be up front and say that accessibility is an important issue to Microsoft—at least, it has been in the past. According to this post, 15% of the people using computers worldwide have accessibility needs. The more important piece of information is that the number of people with accessibility needs is going to increase because the population is aging and things such as eyesight deteriorate as we get older.

From what I garnered from the post, developers are going to have to jump through an entirely new set of hoops to make their applications accessible in Windows 8. Some developers already have problems making their applications accessible and some simply don’t care to make their applications accessible. If you fall into the former category, you can read my A Visual Studio Quick Guide to Accessibility (Part 1) and A Visual Studio Quick Guide to Accessibility (Part 2) posts in addition to reading my books. If you fall into the latter category, you’re going to find it harder to support users in the future and will definitely see reduced sales because the number of people with accessibility needs is increasing.

Microsoft is improving the Assistive Technologies (ATs) it provides with Windows in order to meet new accessibility requirements. However, my experience with these ATs is that they help people with minor problems, not someone who has a major issue. Even the author of the blog post acknowledges this deficiency in Microsoft’s support. So, if you really do need to use an eye gaze system to work with Windows, you’re going to have to wait for an update to your software before you can use Windows 8 and that update will be longer in coming due to the Metro interface with all the new hoops it introduces.

Part of the new developer interface revolves around the enhanced experience that a combination of HTML5 and XAML provides. In addition, Windows 8 will require developers to use the new Web Accessibility Initiative-Accessible Rich Internet Applications (WAI-ARIA) standard. The plus side of the change is that it adheres to standards that other platforms will use—the minus side is that developers will have to learn yet another programming paradigm. If you want a quick overview of how this will actually work, check out “Accessible Web 2.0 Applications with WAI-ARIA.” The quick take is that, despite Microsoft’s claims to the contrary, developers will need to do more than simply fill in a few properties in their applications to make the application accessible. You’ll actually have to code the accessibility information directly into the HTML tags.
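
To give you a feel for what that means, here’s a minimal sketch of WAI-ARIA information coded directly into HTML5 tags. The elements, IDs, and text are my own illustrations, not anything taken from Microsoft’s post:

    <!-- A progress indicator exposed to assistive technologies through
         WAI-ARIA role, state, and property attributes. -->
    <div id="copyProgress" role="progressbar" aria-label="File copy progress"
         aria-valuemin="0" aria-valuemax="100" aria-valuenow="25">
      25%
    </div>

    <!-- A control built from a non-semantic element needs an explicit role and
         keyboard support; a native <button> element provides both for free. -->
    <span role="button" tabindex="0" aria-pressed="false">Mute</span>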

The post provided by Microsoft on Windows 8 accessibility support leaves out a few unpleasant details. For example, it gives the impression that your Visual Studio Express 2010 application is accessibility-ready right from the start. That’s true to an extent. However, the author leaves out important details, such as providing speed keys for users who need them (the requirement does appear in a bulleted list, but how Windows 8 will help you implement it doesn’t). The current templates don’t provide for this need and the Metro interface will make it harder to add them.

One of the most positive changes is that Microsoft is going to test Metro applications for accessibility. If the application meets the baseline (read: minimal) requirements, the developer will be able to market it as accessible. At least those with special needs will be able to find software that meets a minimal level of accessibility. However, that minimal level still might not fulfill every Section 508 requirement (something that companies commonly sidestep as being inconvenient). In fact, I’m willing to go out on a limb here and state that minimal probably isn’t going to be enough to help many of the people with accessibility needs. You’ll be able to support JAWS at a basic level, but more complex software and setups will require additional help from developers.

One of the things you should keep in mind is that Microsoft is proactive, to an extent, about accessibility. They even provide a special Microsoft Accessibility site to provide updates about their strategy. However, their direction of late has been testing my patience. The interfaces they’re putting together seem less accessible all the time. I’d love to get input from anyone who uses their tools daily to meet specific needs. Talk to me about accessibility requirements, especially those needed to make Metro usable, at [email protected].

 

Exoskeletons Become Reality

It wasn’t very long ago (see Robotics in Your Future) that I wrote about the role of robotics in accessibility, especially with regard to the exoskeleton. At that time, universities and several vendors were experimenting with exoskeletons and showing how they could help people walk. The software solutions I provide in Accessibility for Everybody are still part of the answer, but more and more it appears that technology will provide more direct answers, which is the point of this post. Imagine my surprise when I opened the September 2011 National Geographic and found an article about eLEGS in it. You can get the flavor of the article in video form on the National Geographic site. Let’s just say that I’m incredibly excited about this turn of events. Imagine, people who had no hope of ever walking again are now doing it!

We’ve moved from experimental to actually distributing this technology—the clinical trials for this device have already begun. The exoskeleton does have limits for now. You need to be under 6 feet 4 inches tall and weigh less than 220 pounds. The candidate must also have good upper body strength. Even so, it’s a great start. As the technology evolves, you can expect to see people doing a lot more walking. Of course, no one who has special needs is running a marathon in this gear yet. However, I can’t even begin to imagine the emotion these people feel when they get up and walk for the first time. The application of this technology is wide ranging. Over 6 million people currently have some form of paralysis that this technology can help.

eLEGS is gesture-based. The way a person moves their arms and upper body determines how the device reacts. Training is required. The person still needs to know how to balance their body and must expend the effort to communicate effectively with the device. I imagine the requirements for using this device will decrease as time goes on. The gestures will become less complex and the strength requirements less arduous.

So, what’s next? Another technology I’ve been watching for a while now is the electronic eye. As far as I know, this device hasn’t entered clinical trials yet, but scientists are working on it. (It has been tested in Germany and could be entering trials in the UK.) The concept is simple. A camera in a special set of glasses transmits visual information to a chip implanted in the person’s eyeball. The chip transmits the required signals to the person’s brain through the optic nerve. However, the implementation must be terribly hard because our understanding of precisely how all of this works is still flawed.

Even so, look for people who couldn’t walk to walk again soon and those who couldn’t see to see again sometime in the future. There will eventually be technologies to help people hear completely as well. (I haven’t heard of any technology that restores the senses of smell, taste, or touch to those who lack them.) This is an exciting time to live. An aging population will have an increasing number of special needs. Rather than make the end of life a drudge, these devices promise to keep people active. Where do you think science will go next? Let me know at [email protected].

Review of HTML5 Step by Step

Microsoft has thrown developers yet another curve—Windows 8 will rely on HTML5 and JavaScript for a programming interface. The revelation has many developers horrified. For me, it means updating my HTML and JavaScript skills, which was one motivation for reading the book reviewed in today’s post. HTML5 Step by Step, written by Faithe Wempen, provides a quick method of getting up to speed on HTML5.

This book is designed to aid anyone who wants to know how to work with HTML5, which means that it starts out extremely simple. The book avoids the ever popular Hello World example, but the example it does provide is small and easily understood. The chapters don’t remain simple, however, so even if you have some experience with HTML, you can use this book to update your skills. You’ll likely want to start around Chapter 3 if you are experienced and skim through the material until you come to something unfamiliar, which could happen relatively fast given the changes in HTML5.

HTML5 Step by Step is light on theory and reference information, but heavy on hands-on exercises. It relies on using Notepad as an editor, which may seem like an odd choice until you read the “Why Learn HTML in Notepad?” section of the Introduction. The author’s reasoning is akin to the reasoning I would use, which is to make sure that the reader types everything and understands why a particular element is required. If you really want to get the most out of this book, you have to do the exercises in Notepad as the author suggests. Otherwise, I guarantee you’ll miss out on something important. Faithe has made a great choice of teaching aids in this case.

Chapter 1 is most definitely designed for the rank novice. It even shows how to set up the examples directory as a favorite in Notepad. However, unlike many books, the rank novice should read the book’s Introduction because Faithe does provide some needed information there, such as the “Understanding HTML Tags” section.

Chapter 2 gets the reader started with some structural elements. Faithe covers everything that the reader is likely to need for a typical starter Web page. I wish that the chapter had covered <meta> tags in a little more detail, or at least provided a table listing them, but this book does have an emphasis on hands-on exercises, so the omission isn’t a glaring one. As an alternative to including the information, an update could provide a URL that lists the tags so the reader knows where to go for additional information.

By Chapter 3, the reader is formatting text and starting to make the sample site look pretty. I really thought Faithe did a nice job of moving the reader along at a fast but manageable pace. She shows the reader how to make effective use of tag combinations, such as the <kbd> (keyboard) and <b> (bold) tags.
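
As a quick illustration of the idea (my own snippet, not an example from the book), the two tags combine like this:

    <p>To save your work, press <b><kbd>Ctrl</kbd>+<kbd>S</kbd></b>.</p>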

There is the smallest amount of reference information in some chapters. For example, Chapter 4 contains a table on page 50 showing the list attributes. These references are very small and quite helpful, but the reader should understand that the emphasis is on doing something and that the reference material may not be complete. For example, the special symbols table on page 56 is missing the em dash, which is something most people use.

The book progresses at a reasonable pace. Never did I find myself rushed. The examples all seem to work fine and I didn’t find missing steps in the procedures. The author uses an adequate number of graphics so that the reader doesn’t get lost. I liked the fact that every exercise ends with a cleanup section and a list of the major points that the reader should have gotten from the exercise.

Readers who are only interested in new tags will need to wait until Chapter 9 to see one. The <figure> tag makes an appearance on page 141. However, even some professionals didn’t use all of the HTML4 tags, and it really does pay to start at Chapter 3 and look for something you don’t recognize. It may surprise you to find that an earlier chapter contains a somewhat new (but not new to HTML5) tag that you’ve passed by.

There are a few nits to pick with this book. The first is that the author places the accessibility information in an appendix, where almost no one is going to read it. The information should have appeared as part of the rest of the book as appropriate. In addition, the author misses the big point that most people today have some sort of special need addressed by accessibility aids. The number of people who are colorblind alone is 8 percent of the male population and 0.5 percent of the female population. This book is unlikely to help you create a truly accessible site, though that probably isn’t the reason you’re buying the book.

The second is that Appendix C doesn’t really help very much with the additions and subtractions for HTML5. For example, Appendix C doesn’t tell you about the new <aside> tag. If you want a quick list of the new tags, check out the www.w3schools.com HTML5 New Elements page. (I checked the missing <aside> tag against a number of other sites, such as About.com.) The point is that Appendix C won’t give you the complete picture. Again, this isn’t one of the selling points of the book, but the list should have been complete.
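
To show what a couple of these new structural tags look like in practice, here’s a small sketch of my own (not taken from the book or from Appendix C); the file name and text are purely illustrative:

    <article>
      <h1>Bionic Eyes Move Forward</h1>
      <p>The main story text goes here.</p>

      <!-- Content that is related to, but separate from, the main article. -->
      <aside>
        <p>See my earlier post on exoskeletons for a related technology.</p>
      </aside>

      <!-- Self-contained content, such as an illustration, with its caption. -->
      <figure>
        <img src="bionic-eye.png" alt="Diagram of a retinal implant">
        <figcaption>A camera in the glasses feeds a chip implanted on the retina.</figcaption>
      </figure>
    </article>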

The third is that there isn’t really enough information about why something is done or why it changed, simply that it must be done or that it did change. The reader probably doesn’t want a full-blown history of the change, but the why of something can make understanding and, more importantly, remembering a concept easier. Still, this particular nit is minor since the purpose of the book is to get you started with HTML5 quickly and not to explore it at a theoretical level.

Overall, HTML5 Step by Step is a great book for the novice who wants to learn how to create Web pages. It’s also an acceptable book for someone who’s experienced with HTML coding, but wants to get up-to-date on the HTML5 changes quickly. This book is definitely designed for someone who wants to do something, rather than simply read about it. If you don’t perform the exercises, you won’t get much out of the book.

 

An Update On Special Needs Device Hacking

I previously posted an entry entitled Security and the Special Needs Person where I described current hacking attempts against special needs devices by security researchers. In that post, I opined that there was probably some better use of the researcher’s time. Rather than give hackers new and wonderful ways to attack the human race, why not find ways to develop secure software that would discourage attempts in the first place? Unfortunately, it seems as if the security researchers are simply determined to keep chewing on this topic until someone gets hurt or killed. I never even considered this topic in my book, “Accessibility for Everybody: Understanding the Section 508 Accessibility Requirements” because it wasn’t an issue at the time of publication, but it certainly is now.

Now there is a ComputerWorld article that talks about wearable devices used to jam the signals of hackers trying to attack those with special needs devices. What do we do next—encase people in a Faraday cage so no one can bother them? I did find the paper referenced in the article, “They Can Hear Your Heartbeats: Non-Invasive Security for Implantable Medical Devices,” interesting, but I must ask why such measures are even necessary. If security researchers would wait until someone actually thinks of an attack before they came up with a remedy, perhaps no one would come up with the attack.

The basis of the shielding technology mentioned in the ComputerWorld article is naive. Supposedly, the shield lets the doctor gain access to the medical device without allowing the hacker access. Unfortunately, if the doctor has access, so does the hacker. Someone will find a way to overcome this security measure, probably a security researcher, and another shield will have to be created that deflects the new attack. The point is that if they want the devices to be truly safe, then they shouldn’t send out a radio signal at all.

The government is involved now too. Reps. Anna G. Eshoo (D-CA) and Edward J. Markey (D-MA), senior members on the House Energy and Commerce Committee, have decided to task the Government Accountability Office (GAO) with contacting the Federal Communications Commission (FCC) about rules regarding the safety and security of implantable medical devices. I can only hope that the outcome will be laws that make it illegal to even perform research on these devices, but more likely, the efforts will result in yet more bureaucracy and red tape.

There are a number of issues that concern me about the whole idea of people wearing radio transmitters and receivers full time. For one thing, there doesn’t seem to be any research on the long term effects of wearing such devices. (I did find research papers such as, “In-Body RF Communications and the Future of Healthcare” that describe the hardware requirements for transmission, but research on what RF will do to the human body when used in this way seems sadly lacking.) These devices could cause cancer or other diseases. Fortunately, the World Health Organization (WHO) does seem to be involved in a little research on the topic and you can read about it in their article entitled, “What are electromagnetic fields?“.

In addition, now that the person has to wear a jammer to protect the implantable medical device, there is a significant chance of creating interference. Is there a chance that the wearer could create unfortunate situations where the device intended to protect them actually causes harm? The papers I’ve read don’t appear to address this issue. However, given my personal experiences with electromagnetic interference (EMI), it seems quite likely that the combination of implantable medical device and jammer will almost certainly cause problems.

In summary, we have implanted medical devices that use radio signals to make it more convenient for the doctor to monitor the patient and possibly improve the patient’s health as a result. So far, so good. However, the decision to provide this feature seems shortsighted when you consider that security researchers just couldn’t leave well enough alone and had to find a way for hackers to exploit the devices. Then, there doesn’t seem to be any research on the long term negative effects of these devices on the patient or on the jammer that now seems necessary to protect the patient’s health. Is the potential for a positive outcome really worth all of the negatives? Let me know at [email protected].

Security and the Special Needs Person

I’ve written quite a bit about special needs requirements. In my view, everyone who lives long enough will have a special need sometime in their life. In fact, unless you’re incredibly lucky, you probably have some special need right now. It may not be a significant special need (even eyeglasses are a special need), but even small special needs often require another person’s help to fix.

Accessibility, the study of ways to accommodate special needs, is something that should interest everyone, especially anyone who has the technical skills required to make better accessibility aids a reality. It was therefore with great sadness that I read an eWeek article this weekend describing how one researcher used his talents to discover whether it was possible to kill someone by hacking into the device they require to live. Why would someone waste their time and effort doing such a terrible thing? I shook my head in disbelief.

There is a certain truth to the idea that the devices we use to maintain health today, such as insulin pumps, are lacking in security. After all, from a software perspective they are very much like any other Supervisory Control And Data Acquisition (SCADA) device, such as a car, and people are constantly trying to find ways to break into cars. However, cars are not people; cars are easily replaced devices used for transport. If someone breaks into my car and steals it, I’m sad about it to be sure, but I’m still alive to report the crime to the police. If someone hacks into my pacemaker and causes it to malfunction, I’m just as dead as if they had shot me. In fact, shooting me would probably be far less cruel.

I know that there is a place for security professionals in the software industry, but I’ve become increasingly concerned that they’re focused too much on breaking things and not enough on making them work properly. If these professionals spent their time making software more secure in the first place and giving the bad guys fewer ideas of interesting things to try, then perhaps the software industry wouldn’t be rife with security problems now. Unfortunately, it’s always easier to destroy than to create. Certainly, this sort of negative research gives the security professionals something to talk about, even though it potentially destroys someone’s life in the process.

I’d like to say that this kind of behavior will diminish in the future, but history says otherwise. Unless laws are put in place to make such research illegal, well-meaning security professionals will continue dabbling in matters that would be best left alone, until someone dies (and even then the legal system will be slow in reacting to a significant problem). I doubt very much that time spent hacking into special needs devices to see just how much damage one can do helps anyone. What is your thought on the matter? Does this sort of research benefit anyone? Let me know what you think at [email protected].

A Visual Studio Quick Guide to Accessibility (Part 2)

In my previous post, A Visual Studio Quick Guide to Accessibility (Part 1), I discussed one particularly important accessibility feature. The use of keyboard accelerators is essential because many people use them to improve productivity. Making them visible is important because you can’t use a feature you can’t see. This post won’t cover all of the ideas and concepts for Visual Studio developers found in Accessibility for Everybody: Understanding the Section 508 Accessibility Requirements. However, it does provide an overview of the most essential accessibility features, those that every application should have.

The first feature is the tooltip. Every application should include a ToolTip control. You can then provide a description of every user-accessible control in the ToolTip property for that control. It’s important to stress user-accessible in this case. For example, you won’t provide a ToolTip property value for a Label in most cases because the Label simply marks a user-accessible control such as a TextBox. When the user hovers the mouse over the control, the application displays a helpful bit of information about that control. Screen readers used by those with visual needs also read each of the tooltips to describe the controls to the user. A few rules for using the ToolTip control are:

 

  • Keep the ToolTip text short. Imagine trying to listen to long diatribes about control functionality when working with a screen reader.
  • Make the ToolTip text specific. Tell the user precisely what the control is designed to do.
  • Emphasize the user’s interaction. Tell the user how to interact with the control, such as “Type the message you want displayed.” or “Click to display a message on screen.”
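
With those rules in mind, here’s a minimal sketch of how the ToolTip control comes together in a Windows Forms application. The control names and strings are my own illustrations, not taken from any particular example:

    using System.Windows.Forms;

    public class ToolTipForm : Form
    {
        public ToolTipForm()
        {
            var txtMessage = new TextBox { Left = 10, Top = 10, Width = 200 };
            var btnDisplay = new Button { Text = "&Display", Left = 10, Top = 40 };
            Controls.Add(txtMessage);
            Controls.Add(btnDisplay);

            // A single ToolTip component serves every control on the form; the
            // designer exposes this same setting as a ToolTip property on each control.
            var toolTip = new ToolTip();
            toolTip.SetToolTip(txtMessage, "Type the message you want displayed.");
            toolTip.SetToolTip(btnDisplay, "Click to display a message on screen.");
        }
    }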


In addition to the ToolTip control, every control has three special accessibility properties as shown here.

Accessibility0201

These properties have specific purposes:

 

  • AccessibleDescription: A description of how the user should interact with the control. In fact, I normally use the same text as the ToolTip property for this entry and rely on the same rules for creating it.
  • AccessibleName: The name that will be reported to accessibility aids. I normally provide the same text that appears on the control’s caption, minus the ampersand used for the keyboard accelerator.
  • AccessibleRole: The task that the control performs. In most cases, Default works just fine. However, when your control performs an unusual task, you should choose one of the more specific entries so that the accessibility aid can help the user interact with the control.


Make sure you fill out each of the properties so that accessibility aids have the best chance of making your application useful to those who have special needs. In fact, it shouldn’t surprise you to discover that AccessibleRole is already filled out for most controls, so you really only need to fill out two properties in most cases.
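
Continuing with the hypothetical btnDisplay control from the ToolTip sketch above, filling out the properties in code looks something like this (the Properties window in the designer accomplishes exactly the same thing):

    // Mirror the ToolTip text so screen readers describe the interaction.
    btnDisplay.AccessibleDescription = "Click to display a message on screen.";

    // Match the visible caption, minus the accelerator ampersand.
    btnDisplay.AccessibleName = "Display";

    // Default is usually correct; pick a specific role only for unusual controls.
    btnDisplay.AccessibleRole = AccessibleRole.Default;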

The final two changes appear on the form itself. Whenever possible, assign controls to the AcceptButton and CancelButton properties. The AcceptButton provides a means of accepting the content of a dialog box or form by pressing Enter. On the other hand, the CancelButton property makes it possible to reject changes to the form or dialog box by pressing Esc. It’s true that you’ll find situations where you can’t assign a control to one or both properties because there isn’t a default acceptance or cancellation control, but this limitation applies to very few applications.
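
As a quick sketch (again with hypothetical control names, placed in the form’s constructor), assigning the two properties looks like this:

    // Pressing Enter anywhere on the form clicks btnOK; pressing Esc clicks btnCancel.
    var btnOK = new Button { Text = "&OK", DialogResult = DialogResult.OK };
    var btnCancel = new Button { Text = "&Cancel", DialogResult = DialogResult.Cancel };
    Controls.Add(btnOK);
    Controls.Add(btnCancel);

    AcceptButton = btnOK;
    CancelButton = btnCancel;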

Accessible applications are significantly easier for everyone to use. Less experienced users can benefit from the inclusion of tooltips. Keyboardists benefit from the presence of keyboard accelerators. Adding these features isn’t difficult or time-consuming, so all developers should be adding them. Let me know your thoughts about accessibility and whether you’d like to see additional accessibility posts at [email protected].

 

A Visual Studio Quick Guide to Accessibility (Part 1)

One of the most important accessibility aids that also applies to common users is the keyboard accelerator (or keyboard shortcut). In fact, this issue figures prominently in both C# Design and Development and Accessibility for Everybody: Understanding the Section 508 Accessibility Requirements. Just why Microsoft decided to turn this feature off in Windows 7 is a complete mystery to me. All of the pertinent examples in Professional Windows 7 Development Guide include keyboard accelerators, but you can’t see them. I’ve received a number of queries about this problem and decided that this two-part post on accessibility for Visual Studio developers is really necessary.

First, let’s discuss the keyboard accelerator from a programming perspective. A keyboard accelerator is the underline you see on a button, label, menu, or other control. You press Alt+Letter to perform the equivalent of a click or selection with the associated control. For example, most people know that you press Alt+F to access the File menu in an application that has keyboard accelerators properly implemented.

To create a keyboard accelerator, the developer precedes the letter or number with an ampersand (the & character). For example, to make the File menu respond to Alt+F, the developer would type &File in the development environment. I’ve always strongly encouraged the use of keyboard accelerators as a must have for any application because many keyboardists are seriously hindered by an application that lacks them. In fact, you’ll find the keyboard accelerators used in the vast majority of my books, even for examples that aren’t related to programming in any way.
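
For example, here’s a short sketch showing the ampersand at work in a Windows Forms application; the control names are my own, purely for illustration:

    // &File lets the user press Alt+F to open the menu;
    // E&xit responds to Alt+X because the ampersand precedes the x.
    fileMenuItem.Text = "&File";
    exitMenuItem.Text = "E&xit";

    // The same technique works for buttons and labels.
    btnDisplay.Text = "&Display Message";  // Alt+D clicks the button.
    lblName.Text = "&Name:";               // Alt+N moves focus to the control that
                                           // follows the label in the tab order.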

Second, some developers who feel as I do about keyboard accelerators are upset that adding them to applications no longer appears to work. Windows 7 hides the keyboard accelerators for some odd reason known only to Microsoft. The Ribbon interface provides an equivalent when the developer provides it, but we’re talking about all of the applications (the vast majority) that don’t use the Ribbon interface. It turns out that you must manually turn the keyboard accelerator support back on. Follow these steps to accomplish the task manually:

 

  1. Open the Control Panel, followed by the Ease of Access applet. Click the Ease of Access Center link. You’ll see the list of options shown here: Accessibility0101
  2. Click Make the Keyboard Easier to Use. You’ll see a list of options for making the keyboard easier to use. Near the bottom of the list you’ll see the Underline Keyboard Shortcuts and Access Keys option shown here.
    Accessibility0102
  3. Check this option and then click OK. The system will now display the underlines as before.

One of the biggest questions I had while working through this is whether there is a method for turning this feature on or off at the command line. There isn’t any WMIC command that I’ve found to make the change (nor any other command for that matter), so I’m currently trying to discover the associated registry keys. I’ve found one so far. The On value in the HKEY_CURRENT_USER\Control Panel\Accessibility\Keyboard Preference key must be changed to 1. However, that value alone doesn’t make the change work, so there are more keys and values to find. If anyone has some clues to provide me about this particular need, please let me know at [email protected]. In the meantime, I’ll continue looking for the elusive registry updates required to automate this change.
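
For reference, here’s how that one value could be changed from code. As noted above, this value alone doesn’t appear to complete the change, so treat this strictly as a starting point rather than a working solution:

    using Microsoft.Win32;

    class EnableAcceleratorUnderlines
    {
        static void Main()
        {
            // Sets On to "1" under HKEY_CURRENT_USER\Control Panel\Accessibility\Keyboard Preference.
            // Additional keys and values (still to be identified) appear to be required
            // before Windows actually displays the accelerator underlines.
            Registry.SetValue(
                @"HKEY_CURRENT_USER\Control Panel\Accessibility\Keyboard Preference",
                "On",
                "1",
                RegistryValueKind.String);
        }
    }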