Today, we are excited to announce the latest release of Try F#, a set of resources that makes it easy to learn and program with F# in your browser. It’s available on a wide range of platforms and doesn’t require a download of Microsoft Visual Studio. Try F# quickly reveals the value of the versatile F# programming language.
Try F# enables users to learn F# through new tutorials that focus on solving real-world problems, including analytical programming quandaries of the sort that are encountered in finance and data science. But Try F# is much more than a set of tutorials. It lets users write code in the browser and share it with others on the web to help grow a community of F# developers.
This latest release of Try F# is an evolution that keeps the tool in sync with the new experiences and information-rich programming features available in F# 3.0, the latest version of the language. The tutorials span many domains and help users understand F#’s powerful new “type providers” for data and service programming in the browser-based experience.
F# has become an invaluable tool in accessing, integrating, visualizing, and sharing data analytics. Try F# thus has the potential to become the web-based data console for bringing “big and broad data,” including the associated metadata, from thousands of sources (eventually millions) to the fingertips of developers and data scientists. Try F# helps fill the need for robust tools and applications to browse, query, and analyze open and linked data. It promotes the use of open data to stimulate innovation and enable new forms of collaboration and knowledge creation.
For example, to answer a straightforward question such as, “Is US healthcare cost-effective?” researchers now need to look at several datasets, going back and forth between an integrated development environment (IDE) and webpages to figure out if they’ve found what they need.
With Try F#, a researcher can quickly and easily access thousands of schematized and strongly-typed datasets. This presents huge opportunities in today’s data-driven world, and we strongly encourage all developers and data scientists to use Try F# to seamlessly discover, access, analyze, and visualize big and broad data.
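To make the idea of strongly typed dataset access concrete, here is a minimal sketch using the WorldBank type provider from the FSharp.Data library (one of the providers showcased with F# 3.0); the indicator name below is illustrative, and running it requires the FSharp.Data package and a network connection:

```fsharp
// Sketch: querying World Bank data through an F# 3.0 type provider.
// The provider generates types from the live data source's schema, so
// countries and indicators show up with IntelliSense and are checked
// at compile time rather than looked up as raw strings.
open FSharp.Data

let wb = WorldBankData.GetDataContext()

// Strongly typed navigation: "United States" and the indicator name
// are compiler-checked members, not stringly typed lookups.
// (The exact indicator name here is illustrative.)
let usHealthSpending =
    wb.Countries.``United States``
      .Indicators.``Health expenditure, total (% of GDP)``

for year, pct in usHealthSpending do
    printfn "%d: %.1f%% of GDP" year pct
```

This is the workflow the healthcare-cost question above calls for: instead of hopping between an IDE and webpages, the dataset’s schema is surfaced directly in the editor as you type.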
—Evelyne Viegas, Director of Semantic Computing, Microsoft Research Connections
—Kenji Takeda, Solutions Architect, Microsoft Research Connections
Today, we have the second part of a two-part blog post written by program managers in Beijing and Redmond, respectively. Second up, Stewart Tansley:
When Microsoft Research shipped the first official Kinect for Windows software development kit (SDK) beta in June 2011, it was both an ending and a beginning for me. The thrilling accomplishment of rapidly and successfully designing and engineering the SDK was behind us, but now the development and supporting teams had returned to their normal research work, and I was left to consider how best to showcase the research potential of Kinect technology beyond gaming.
Since Kinect’s launch in November 2010, investigators from all quarters had been experimenting with the system in imaginative and diverse applications. There was very little chance of devising some stand-out new application that no one had thought of—since so many ideas were already in play. So I decided to find the best of the current projects and “double down” on them.
Rather than issue a public global call—so many people were already proactively experimenting with Kinect technology—we turned to the Microsoft Research labs around the world and asked them to submit their best Kinect collaborations with the academic world, bringing together professors and our best researchers, as we normally do in Microsoft Research Connections.
We whittled twelve outstanding proposals down to five finalists and picked the best three for additional funding and support. One of those three was the Kinect Sign Language Translator, a collaboration among Microsoft Research Asia, the Chinese Academy of Sciences, and Beijing Union University.
Incredibly, the Beijing-based team delivered a demonstration model in fewer than six months, and I first saw it run in October 2012, in Tianjin. Only hours earlier, I had watched a seminal on-stage demo of simultaneous speech translation, during which Microsoft Research’s then leader, Rick Rashid, spoke English into a machine learning system that produced a pitch-perfect Chinese translation—all in real time, on stage before 2,000 Chinese students. It was a "Star Trek" moment. We are living in the future!
Equally inspiring though, and far away from the crowds, I watched the diminutive and delightful Dandan Yin gesture to the Kinect device connected to the early sign language translator prototype—and words appeared on the screen! I saw magic that day, and not just on stage.
Nine months later, in July 2013, we were excited to host Dandan at the annual Microsoft Research Faculty Summit in Redmond—her first trip outside China. We were thrilled with the response by people both attending and watching the Summit. The sign language translator and Dandan made the front page of the Seattle Times and were widely covered by Internet news sites.
We knew we had to make a full video of the system to share it with others and take the work further. Over a couple of sweltering days in late July (yes, Seattle does get hot sunny days!), we showed the system to Microsoft employees. It continued to capture the imagination, including that of Microsoft employees who are deaf.
We got the chance to demonstrate the system at the Microsoft annual company meeting in September 2013—center stage, with 18,000 in-person attendees and more than 60,000 watching online worldwide. This allowed us to bring Dandan and the Chinese research team back to Seattle, and it gave us the opportunity to complete our video.
That week, we all went back into the studio and, through a long, hard day, shot the remaining pieces of the story, explaining how the system could one day transform the lives of millions of people who are deaf or hard of hearing—and all of us—around the world.
I hope you enjoy the video and are as inspired by it as we are.
We look forward to making this technology a reality for all! We would love to hear your comments.
—Stewart Tansley, Director, Microsoft Research Connections
As the saying goes: everything is bigger in Texas. And coming this weekend, March 8 to 10, there will be a couple of Texas-sized telescopes at the South by Southwest (SXSW) Interactive Festival in Austin. Housed in the mammoth NASA Experience Tent, a wall-sized display will show off Microsoft Research’s WorldWide Telescope (WWT), demonstrating the amazing capabilities of the world’s largest virtual telescope. Outside, on the lawn of the Long Center, there will be a full-scale model of the next generation of the Hubble Telescope, the James Webb Space Telescope (JWST)—a truly impressive piece of engineering that’s the size of a tennis court.
Microsoft Research is partnering with NASA, Northrop Grumman, and the Space Telescope Science Institute to offer a truly interactive exhibit, with University of Texas at Austin astronomy students on hand to show off details of the JWST model on Microsoft Surface devices. Meanwhile, WWT will provide festivalgoers with an immersive virtual experience as they fly through the universe and explore the planets and stars. As you may know, the WWT brings together imagery from the world’s best ground- and space-based telescopes and combines it with 3-D navigation. It also includes guided tours of interesting places in the sky, created and narrated by astronomers and educators.
WorldWide Telescope Experience
In addition to the huge WorldWide Telescope display, Microsoft Perceptive Pixel stations will be accessible, enabling visitors to explore space, Earth, and history—all at their fingertips. By using Microsoft Research ChronoZoom, a candidate for a 2013 SXSW Interactive Award, visitors will be able to explore all of history—from the Big Bang to today—and see connections that cut across disciplines and cultures. Prominent participants at SXSW Interactive will include Microsoft researchers, such as Jonathan Fay, who will deliver daily talks on the WWT and participate in the panel session, “Beyond Hubble: NASA's Next Great Telescope (JWST).”
James Webb Space Telescope
Another of my Microsoft Research colleagues, Donald Brinkman, will take part in the “Big Heritage, Big Quilts, and Big Canvases” panel discussion on the use of applications to visualize works of cultural significance. Donald’s panel will feature demos of applications built on Microsoft PixelSense and Surface devices that provide both scholars and the public with an intimate and interactive experience of cultural touchstones, such as the AIDS Memorial Quilt, the largest community-created piece of folk art in the world.
In addition to the schedule of great talks, we will also be using Skype to broadcast live daily from the NASA clean room at Goddard Space Flight Center for audience Q&A.
We look forward to seeing you in Texas for a truly unique and interactive experience.
—Dan Fay, Director of Earth, Energy, and Environment; Microsoft Research Connections