Microsoft Research Connections Blog

The Microsoft Research Connections blog shares stories of collaborations with computer scientists at academic and scientific institutions to advance technical innovations in computing, as well as related events, scholarships, and fellowships.

Kinect Sign Language Translator - part 2



Today, we have the second part of a two-part blog post written by program managers in Beijing and Redmond, respectively—second up, Stewart Tansley:

When Microsoft Research shipped the first official Kinect for Windows software development kit (SDK) beta in June 2011, it was both an ending and a beginning for me. The thrilling accomplishment of rapidly and successfully designing and engineering the SDK was behind us, but now the development and supporting teams had returned to their normal research work, and I was left to consider how best to showcase the research potential of Kinect technology beyond gaming.

Since Kinect’s launch in November 2010, investigators from all quarters had been experimenting with the system in imaginative and diverse applications. There was very little chance of devising some stand-out new application that no one had thought of—since so many ideas were already in play. So I decided to find the best of the current projects and “double down” on them.

Sign Language + Kinect = a new world of communication 

We didn't issue a public global call—so many people were already proactively experimenting with Kinect technology—but instead turned to the Microsoft Research labs around the world and asked them to submit their best Kinect collaborations with the academic world, bringing together professors and our best researchers, as we normally do in Microsoft Research Connections.

We whittled twelve outstanding proposals to five finalists and picked the best three for additional funding and support. One of those three was the Kinect Sign Language Translator, a collaboration among Microsoft Research Asia, the Chinese Academy of Sciences, and Beijing Union University.

Incredibly, the Beijing-based team delivered a demonstration model in fewer than six months, and I first saw it run in October 2012, in Tianjin. Only hours earlier, I had watched a seminal on-stage demo of simultaneous speech translation, during which Microsoft Research’s then leader, Rick Rashid, spoke English into a machine learning system that produced a pitch-perfect Chinese translation—all in real time, on stage before 2,000 Chinese students. It was a "Star Trek" moment. We are living in the future!

Equally inspiring though, and far away from the crowds, I watched the diminutive and delightful Dandan Yin gesture to the Kinect device connected to the early sign language translator prototype—and words appeared on the screen! I saw magic that day, and not just on stage.

Sign. Speak. Translate. Communicate.

Nine months later, in July 2013, we were excited to host Dandan at the annual Microsoft Research Faculty Summit in Redmond—her first trip outside China. We were thrilled with the response from people both attending and watching the Summit. The sign language translator and Dandan made the front page of the Seattle Times and were widely covered by Internet news sites.

We knew we had to make a full video of the system to share it with others and take the work further. Over a couple of sweltering days in late July (yes, Seattle does get hot sunny days!), we showed the system to Microsoft employees. It continued to capture the imagination, including that of Microsoft employees who are deaf.

We got the chance to demonstrate the system at the Microsoft annual company meeting in September 2013—center stage, with 18,000 in-person attendees and more than 60,000 watching online worldwide. This allowed us to bring Dandan and the Chinese research team back to Seattle, and it gave us the opportunity to complete our video.

That week, we all went back into the studio and, through one long, hard day, shot the remaining pieces of the story, explaining how the system could one day transform the lives of millions of people who are deaf or hard of hearing—and all of us—around the world.

I hope you enjoy the video and are as inspired by it as we are.

We look forward to making this technology a reality for all! We would love to hear your comments.

Stewart Tansley, Director, Microsoft Research Connections


  • I have a profoundly Deaf daughter with other disabilities.  These disabilities are making it difficult for writing to become a way for her to communicate with the hearing world.  It would be a huge blessing to have this kind of technology available for her.  I can't wait to see what the team will put together next!!

  • I need to know: what courses or tools did you use to build this application?

  • Hi Dalia,

    Thank you for your question. The underlying technology we used is the Kinect for Windows SDK. You can find Kinect for Windows SDK developer learning resources at www.microsoft.com/.../develop.
