Rethinking the Role of Keyboards in Accessible Technology – Part One

This blog post was written by Alex Li, Senior Accessibility Policy and Standards Strategist at Microsoft. Alex works with national and international standards organizations to improve the level of accessibility for people with disabilities.

----

When I entered the field of accessibility ten years ago, the first golden rule I learned was that all software and content features must be controllable from a keyboard.

Today, this golden rule is found throughout accessibility public policy. But it is time to re-examine the underlying assumptions of this once-indisputable idea. Let's start by looking at the problems it was supposed to solve.

The first issue was that a mouse or similar pointing device would not work if the user couldn't see what the pointer highlighted. Keyboard accessibility allowed individuals with no vision to use computing devices, provided they could receive appropriate audio feedback and distinguish keys through tactile feedback.

But the development of touch interfaces coupled with audio feedback showed that blind users could effectively navigate and control computers and tablets without a keyboard. This newer approach provides a more natural user interface, with audio feedback available whether or not a keyboard is attached.

Keyboard accessibility was also supposed to help users with limited fine-motor control, who couldn't accurately manipulate a mouse and found a keyboard easier to use. But a keyboard is hardly the only option these days. Voice recognition has long been available, both as a built-in feature and as a standalone product. Kinect introduced a whole new way to interact with a computer, through gestures. And eye-gaze technology is becoming more mainstream. All of these options have strengths and weaknesses, but there is no reason to require a keyboard over these alternatives.

Other input hardware, such as puff-and-sip devices, imitates keyboard commands, and that can make sense for operating systems designed around a keyboard. But when that isn't the case, input options should reflect an operating system's primary interaction model, whether gesture, speech, or touch. It makes little sense to rely on keyboard commands when a keyboard isn't the main input method.
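
To make that imitation concrete, here is a minimal sketch, in C against the Win32 SendInput API, of how a single-switch device such as puff-and-sip might be translated into synthetic keystrokes. The read_switch helper is a hypothetical stand-in for the real switch driver; to keep the sketch self-contained, it treats each character read from the console as one switch activation.

    /* Minimal sketch: translating a single-switch input device into
     * synthetic Tab keystrokes on Windows via the Win32 SendInput API. */
    #include <windows.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the real switch driver: each character
     * read from stdin counts as one switch activation. */
    static bool read_switch(void)
    {
        return getchar() != EOF;
    }

    /* Send one key-down/key-up pair for the given virtual-key code. */
    static void send_key(WORD vk)
    {
        INPUT in[2];
        ZeroMemory(in, sizeof(in));

        in[0].type = INPUT_KEYBOARD;             /* key down */
        in[0].ki.wVk = vk;

        in[1].type = INPUT_KEYBOARD;             /* key up */
        in[1].ki.wVk = vk;
        in[1].ki.dwFlags = KEYEVENTF_KEYUP;

        SendInput(2, in, sizeof(INPUT));
    }

    int main(void)
    {
        /* Each switch activation becomes a Tab press, stepping keyboard
         * focus through the foreground application; this works only if
         * that application is keyboard accessible. */
        while (read_switch())
            send_key(VK_TAB);
        return 0;
    }

Notice the dependency this creates: the device works only because the application underneath honors keyboard commands. On a platform whose primary interaction model is gesture or speech, the same switch would have to map onto those commands instead.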

I am not suggesting that keyboard accessibility is irrelevant. But it is just one of many possible solutions that can make computers more accessible for users with limited vision or motor control. As we consider legal and technical accessibility requirements, we should recognize that keyboard accessibility should not be treated as a universal solution or mandated as a strict requirement.

Comments
  • But if an application is keyboard accessible, then it is also (1) scriptable with third-party AT solutions and (2) controllable by switches of all kinds. So you know that you will be able to get it to work with all sorts of clients and needs. You could have an eyegaze system that fires a button that sends a whole bunch of keystrokes, for example. Keyboard accessibility makes other AT work.

    Without this requirement you have no such guarantee that you will be able to get your other AT to work, and if it is not a strict requirement then programmers will ignore it.

    If the blind community is moving away from keyboard use then a powerful advocate for keyboard accessibility has been lost, true, but the need is still there for many, many clients and client groups, some of which are far less articulate and organised.

  • Alasdair, you are correct that current AT depends largely on "imitating" the keyboard interaction. But if we are talking about future AT built for future platforms that may or may not have a keyboard, then it may not make so much sense to "imitate" the keyboard. The point is that useful assumptions in the past may not be as useful in the future.
