During this year’s college tour, our Chief Research and Strategy Officer, Craig Mundie, demonstrated how technology can help solve the world’s toughest problems. One of the demos showed a Natural User Interface (NUI) in a search scenario (see the video below). I was very excited to see it because it combined a number of NUI elements such as gesture, speech, and even eye-tracking. If the future allows users to interact with technology through a variety of input methods, then it’s really interesting to think about how users will accomplish the tasks at hand with a combination of these inputs. For example, as I’m writing this blog post with a keyboard, I could say, “Computer, search ‘Craig Mundie.’” Or, if I wanted to send this demo link to a friend, I could say, “Computer, create an email to Joe with the XX link embedded.” I wouldn’t need to stop writing and switch to a web browser or an email client to do another task and then come back. This speech interaction is independent of my writing flow and allows me to multi-task. I would really like that!