I ran across this article today: http://www.vnunet.com/news/1160755 about the British Computer Society encouraging UK researchers to investigate implementing more human-like behavior in computers.  The article talks some about cognitive processing, investigating mental disorders, and intelligent robots.

But, surprisingly, speech was not mentioned.  I personally think that speech is the next natural interface with computers.  But does natural equal human-like?  In my opinion, yes - but not as the current technology stands.  I think that for people to use speech widely on desktop systems, they want an interaction that rivals speaking to a human - more along the lines of semantic interpretation.  We need to make speech exciting for a user, to provide some value.  An alternate way of typing is novel, but not that thrilling - especially given that typing is taught to most students in high school or younger.  For persons with RSI or some other disability, I realize that dictation (good dictation and command and control) may be enough.  But other users want more - they want to accomplish something they can't do otherwise, or accomplish something in a faster, more natural way.  Currently, I don't think there's any SR system on the market that allows for this sort of experience.

I think we are coming close in embedded applications, but we are limited by memory.  I've seen academic demos of some really amazing applications that do add value to the end user - online recognition and translation, for example.  Things like controlling smart homes are intriguing as well.  I know that when I'm lying on the couch I would love to just be able to say "set temperature to 68 degrees" and have it work.  Save me some work and I'll use SR for everything.  That's the message I'm getting from most non-impaired users.
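Just to make that idea concrete, here's a minimal sketch of what the interpretation side of such a command could look like, assuming a recognizer has already handed us a text transcript.  The command patterns and intent names are my own illustrative assumptions, not the API of any real SR product:

```python
import re

# A toy command grammar: maps utterance patterns to intents.
# These patterns and intent names are illustrative assumptions,
# not taken from any real speech recognition system.
COMMANDS = [
    (re.compile(r"set temperature to (\d+) degrees?"), "set_temperature"),
    (re.compile(r"turn (on|off) the (\w+)"), "toggle_device"),
]

def interpret(transcript: str):
    """Map a recognized transcript to an (intent, arguments) pair."""
    text = transcript.lower().strip()
    for pattern, intent in COMMANDS:
        match = pattern.fullmatch(text)
        if match:
            return intent, match.groups()
    return None, ()

# The utterance from this post:
print(interpret("Set temperature to 68 degrees"))
# -> ('set_temperature', ('68',))
print(interpret("Turn on the lights"))
# -> ('toggle_device', ('on', 'lights'))
```

Of course, a regex grammar like this is just command and control dressed up; real semantic interpretation would have to handle the endless paraphrases people actually say ("it's too cold in here", "warm it up a bit") rather than a fixed set of phrasings - and that's exactly the gap I'm talking about.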