This blog post was written by Daniel Hubbell, Senior Marketing Communications Manager at Microsoft. Daniel's career spans more than 12 years at Microsoft. His current role is focused on increasing consumer awareness of accessibility, and he regularly speaks at events and conferences on the topic of accessible technology.
When the PC mouse gained broad adoption in the early 1980s, it revolutionized the way most people interacted with computers by adding a more natural method of input. Since then, the promise of more natural user interfaces like touch and gestures has been explored and continually improved. In the last several years, the industry has finally reached a point where these more natural ways of interacting with technology are becoming commonplace and, more importantly, reliable.
Prior to the popularity of the mouse, a person had to use text-based commands entered with a keyboard. This could be described as the technology equivalent of passing handwritten notes back and forth in science class: a method that is effective for transmitting simple information, yet hardly as efficient as speaking in person, where the nuance of tone, body language, and other non-verbal cues can also inform the conversation. Today it's not hard to imagine the PC interpreting and responding to the same types of verbal and non-verbal information that you and I do when speaking face-to-face.
I have always held the belief that in order to create the easiest and most flexible user experience for people with disabilities, for seniors, or even for my kids, technology needs to be flexible and personal. Traditionally, the various input methods for the PC have been seen as separate and distinct. For example, you would use either the keyboard and mouse or a touch screen; you would use either speech recognition or a head-tracking mouse. But what if it were possible to combine all of these, allowing a person to seamlessly rely on each one when it makes the most sense? Imagine your PC being smart enough to respond to speech commands only when you were actually looking directly at it, much as people do when talking to each other. Or flicking your finger up and down on the screen to scroll through a web page or Word document while simultaneously using the keyboard to type.
The good news is that some of these scenarios are already becoming a reality. With the introduction of Windows 8, the keyboard and touch experiences are seamlessly integrated. With Kinect for Xbox 360, speech commands are recognized only after you first call it by name, just as I would call your name in a crowded room when trying to get your attention from a distance.
Tomorrow, as part of the "Silvers Summit" at the Consumer Electronics Show (CES), I will be speaking on a panel with industry colleagues about how advances in more natural ways of interacting with our technology are poised to revolutionize how people over the age of 55 use the PC. Much of the conversation will focus on the fact that these advances are just the beginning, and on how the opportunities for making the PC easier to use for people of all abilities are seemingly endless. You can follow the conversation on Twitter with #SilversSummit or @MSFTEnable.