Microsoft Research Connections Blog

The Microsoft Research Connections blog shares stories of collaborations with computer scientists at academic and scientific institutions to advance technical innovations in computing, as well as related events, scholarships, and fellowships.

  • Microsoft Research Connections Blog

    Try F#—Data Console to Big and Broad Data

    Today, we are excited to announce the latest release of Try F#, a set of resources that makes it easy to learn and program with F# in your browser. It’s available across a wide range of platforms and doesn’t require downloading Microsoft Visual Studio. Try F# quickly reveals the value of the versatile F# programming language.

    Learn how to program in F#. Create and share code with the new release of Try F#

    Try F# enables users to learn F# through new tutorials that focus on solving real-world problems, including analytical programming quandaries of the sort that are encountered in finance and data science. But Try F# is much more than a set of tutorials. It lets users write code in the browser and share it with others on the web to help grow a community of F# developers.

    This latest release of Try F# is an evolution that keeps the tool in sync with the new experiences and information-rich programming features available in F# 3.0, the latest version of the language. The tutorials span many domains and help users understand F#’s powerful new “type providers” for data and service programming in the browser-based experience.
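
    To make the idea of a type provider concrete, here is a minimal, hedged sketch using the CsvProvider from the FSharp.Data library, one of the F# 3.0 type providers. The inline sample data and column names are hypothetical; the point is that the provider reads the sample at compile time and generates typed properties for each column.

        // A minimal sketch of an F# 3.0 type provider, using FSharp.Data's CsvProvider.
        // The inline sample and its columns (Region, Revenue) are hypothetical.
        open FSharp.Data

        // The provider parses the sample at compile time and generates a typed Row.
        type Sales = CsvProvider<"Region,Revenue\nNorth,125.5\nSouth,98.0">

        for row in Sales.GetSample().Rows do
            // row.Region and row.Revenue are statically typed from the sample,
            // so a misspelled column name is a compile-time error, not a runtime one.
            printfn "%s: %O" row.Region row.Revenue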

    F# has become an invaluable tool in accessing, integrating, visualizing, and sharing data analytics. Try F# thus has the potential to become the web-based data console for bringing “big and broad data,” including the associated metadata, from thousands of sources (eventually millions) to the fingertips of developers and data scientists. Try F# helps fill the need for robust tools and applications to browse, query, and analyze open and linked data. It promotes the use of open data to stimulate innovation and enable new forms of collaboration and knowledge creation.

    For example, to answer a straightforward question such as, “Is US healthcare cost-effective?” researchers now need to look at several datasets, going back and forth between an integrated development environment (IDE) and webpages to figure out if they’ve found what they need.

    With Try F#, a researcher can quickly and easily access thousands of schematized and strongly typed datasets. This presents huge opportunities in today’s data-driven world, and we strongly encourage all developers and data scientists to use Try F# to seamlessly discover, access, analyze, and visualize big and broad data.
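
    As a hedged illustration of that workflow, the sketch below uses the World Bank type provider from FSharp.Data to pull a health-spending indicator for the United States. The country and indicator member names are generated from live World Bank metadata, so the exact names shown here are illustrative rather than guaranteed.

        // Exploring World Bank open data through a type provider.
        // Generated member names (country and indicator) are illustrative.
        open FSharp.Data

        let wb = WorldBankData.GetDataContext()

        // Strongly typed navigation: countries and indicators show up in IntelliSense.
        let usHealthSpend =
            wb.Countries.``United States``.Indicators.``Health expenditure, total (% of GDP)``

        // Each indicator behaves as a sequence of (year, value) pairs.
        for year, pctOfGdp in usHealthSpend do
            printfn "%d: %.1f%% of GDP" year pctOfGdp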

    Evelyne Viegas, Director of Semantic Computing, Microsoft Research Connections
    Kenji Takeda, Solutions Architect, Microsoft Research Connections

     
    Learn More

  • Microsoft Research Connections Blog

    Joint lab marks 10 years of collaborative research in natural language processing

    The following is the first of three blogs on the contributions of the Microsoft Research Asia Joint Lab Program (JLP), which recently celebrated its tenth anniversary. The JLP brings together the resources of Microsoft Research and major Chinese universities, facilitating collaboration on state-of-the-art research, academic exchange, and talent incubation. This blog focuses on the Microsoft-Harbin Institute of Technology joint lab (Microsoft-HIT; officially the China Ministry of Education–Microsoft Key Laboratory of Natural Language Processing and Speech, Harbin Institute of Technology).

    Professor Sheng Li talks about joint labs at the Microsoft Research Asia Summer School, hosted by the Microsoft-HIT lab in 2005.

    Think of countries that have more than one official language. Which ones come to mind? Canada, with two official tongues? Switzerland, with four? How about China, which has no fewer than eight official languages and more than 50 unofficial but widely spoken indigenous tongues? Each of these languages is cherished as a cultural treasure in China, but the multiplicity of minority languages seriously impedes economic, technological, scientific, and educational exchanges between minority groups and the Mandarin-speaking Han, who make up the majority of China’s population.

    Resolving this linguistic tangle is exactly the sort of challenge that prompted the creation of the Microsoft Research Asia Joint Lab Program (JLP), and it is the research focus of Microsoft-HIT. Since 2004, Microsoft-HIT researchers have published over 500 academic journal papers and, during just the last five years, presented more than 30 papers at such high-profile venues as the ACM SIGIR conference and the International Joint Conference on Artificial Intelligence (IJCAI).

    The fruits of this labor can be seen in a Microsoft-HIT project called Minority Language Machine Translation. The project’s goal is to bridge the linguistic and cultural gulfs that separate different ethnic and national groups, both in China and around the world, and potentially to help preserve endangered minority languages. The project prototype is based on Microsoft Research’s Microsoft Translator Hub, a platform for machine translation between different languages. Utilizing the Microsoft Azure cloud-computing service, the prototype allows users to upload language and translation data and thus build a repository of lexical and grammatical information that can facilitate bilingual translation. While the work to date has focused on machine translation between Mandarin, English, and Uyghur, the underlying principles can be applied to translating between any two languages.
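
    The sketch below is a deliberately tiny F# illustration of that underlying idea (it is not the Translator Hub API): a repository of aligned source/target phrases that grows as more data is uploaded and can then be queried for bilingual lookups.

        open System.Collections.Generic

        /// A toy bilingual phrase repository; real systems learn statistical
        /// models from the uploaded data rather than doing exact lookups.
        type PhraseRepository() =
            let table = Dictionary<string, string>()
            member this.AddPairs(pairs: (string * string) seq) =
                for (src, tgt) in pairs do
                    table.[src.ToLowerInvariant()] <- tgt
            member this.TryTranslate(phrase: string) =
                match table.TryGetValue(phrase.ToLowerInvariant()) with
                | true, target -> Some target
                | _ -> None

        let repo = PhraseRepository()
        repo.AddPairs [ ("hello", "你好"); ("thank you", "谢谢") ]
        printfn "%A" (repo.TryTranslate "Hello")   // Some "你好"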

    But this project isn’t the only focus of Microsoft-HIT. The joint lab also aims to serve as a talent incubator, mentoring the young researchers who will be the leaders of tomorrow. Microsoft-HIT not only employs a large number of the university’s faculty and graduate students, it also holds an annual summer seminar on natural language processing. Since 2004, the summer seminar has provided more than 2,000 students an opportunity to develop their skills and laid the foundation for advanced research in language processing and speech technology.

    Professor Sheng Li, seen here at the 2014 Microsoft Research Asia Faculty Summit, was instrumental in establishing the Microsoft-HIT joint lab.

    Although the Microsoft-HIT joint lab dates from 2004, its antecedents stretch back to the last century: during the 1990s, Microsoft Research Asia worked with Harbin Institute of Technology professor Sheng Li to set up a laboratory on machine translation. In 2000, it became one of the first labs in the Microsoft Research Joint Lab Program, and in 2004 the Chinese Ministry of Education (MOE) accorded official recognition to this joint effort, designating it an MOE-Microsoft Key Laboratory.

    Professor Li, who is still deeply involved in the joint lab, credits it with having provided valuable experience to many young faculty members and promising students. He notes that many of these talented researchers have gone on to careers in related industries, but that a significant number have chosen to stay in the joint lab as either HIT professors or Microsoft researchers.

    With the past 10 years of this program as a guide, we look forward to the next decade and beyond, confident that the Microsoft Research-HIT joint lab will foster even greater talent cultivation and research collaboration.

    Tim Pan, Director of University Relations, Microsoft Research Asia

    Learn more

  • Microsoft Research Connections Blog

    Kinect Sign Language Translator - part 2

    Today, we have the second part of a two-part blog posted by program managers in Beijing and Redmond, respectively. Second up, Stewart Tansley:

    When Microsoft Research shipped the first official Kinect for Windows software development kit (SDK) beta in June 2011, it was both an ending and a beginning for me. The thrilling accomplishment of rapidly and successfully designing and engineering the SDK was behind us, but now the development and supporting teams had returned to their normal research work, and I was left to consider how best to showcase the research potential of Kinect technology beyond gaming.

    Since Kinect’s launch in November 2010, investigators from all quarters had been experimenting with the system in imaginative and diverse applications. There was very little chance of devising some stand-out new application that no one had thought of—since so many ideas were already in play. So I decided to find the best of the current projects and “double down” on them.

    Sign Language + Kinect = a new world of communication 

    But rather than issuing a public global call (so many people were already proactively experimenting with Kinect technology), we turned to the Microsoft Research labs around the world and asked them to submit their best Kinect collaborations with the academic world, thus bringing together professors and our best researchers, as we normally do in Microsoft Research Connections.

    We whittled twelve outstanding proposals to five finalists and picked the best three for additional funding and support. One of those three was the Kinect Sign Language Translator, a collaboration among Microsoft Research Asia, the Chinese Academy of Sciences, and Beijing Union University.

    Incredibly, the Beijing-based team delivered a demonstration model in fewer than six months, and I first saw it run in October 2012, in Tianjin. Only hours earlier, I had watched a seminal on-stage demo of simultaneous speech translation, during which Microsoft Research’s then leader, Rick Rashid, spoke English into a machine learning system that produced a pitch-perfect Chinese translation—all in real time, on stage before 2,000 Chinese students. It was a "Star Trek" moment. We are living in the future!

    Equally inspiring, though, and far from the crowds, I watched the diminutive and delightful Dandan Yin gesture to the Kinect device connected to the early sign language translator prototype—and words appeared on the screen! I saw magic that day, and not just on stage.
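
    For readers curious how such a prototype consumes gesture data, here is a hedged sketch against the Kinect for Windows SDK v1 skeleton stream. The recognizeSign function is a hypothetical placeholder standing in for the team’s actual recognition models.

        // Reading skeleton joints from the Kinect for Windows SDK v1 and handing
        // them to a (hypothetical) sign recognizer. Assumes one connected sensor.
        open Microsoft.Kinect

        let recognizeSign (joints: JointCollection) =
            // Placeholder: a real recognizer matches joint trajectories against
            // learned sign patterns; here we only report the right-hand position.
            let hand = joints.[JointType.HandRight].Position
            sprintf "right hand at (%.2f, %.2f, %.2f)" hand.X hand.Y hand.Z

        let sensor =
            KinectSensor.KinectSensors |> Seq.find (fun s -> s.Status = KinectStatus.Connected)

        sensor.SkeletonStream.Enable()
        sensor.SkeletonFrameReady.Add(fun args ->
            use frame = args.OpenSkeletonFrame()
            if frame <> null then
                let skeletons = Array.zeroCreate<Skeleton> frame.SkeletonArrayLength
                frame.CopySkeletonDataTo(skeletons)
                skeletons
                |> Array.filter (fun s -> s.TrackingState = SkeletonTrackingState.Tracked)
                |> Array.iter (fun s -> printfn "%s" (recognizeSign s.Joints)))

        sensor.Start()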

    Sign. Speak. Translate. Communicate.

    Nine months later, in July 2013, we were excited to host Dandan at the annual Microsoft Research Faculty Summit in Redmond—her first trip outside China. We were thrilled with the response from people both attending and watching the Summit. The sign language translator and Dandan made the front page of the Seattle Times and were widely covered by Internet news sites.

    We knew we had to make a full video of the system to share it with others and take the work further. Over a couple of sweltering days in late July (yes, Seattle does get hot sunny days!), we showed the system to Microsoft employees. It continued to capture the imagination, including that of Microsoft employees who are deaf.

    We got the chance to demonstrate the system at the Microsoft annual company meeting in September 2013—center stage, with 18,000 in-person attendees and more than 60,000 watching online worldwide. This allowed us to bring Dandan and the Chinese research team back to Seattle, and it gave us the opportunity to complete our video.

    That week, we all went back into the studio and, over one long, hard day, shot the remaining pieces of the story, explaining how the system could one day transform the lives of millions of people who are deaf or hard of hearing—and all of us—around the world.

    I hope you enjoy the video and are as inspired by it as we are.

    We look forward to making this technology a reality for all! We would love to hear your comments.

    Stewart Tansley, Director, Microsoft Research Connections

    Learn more
