As the saying goes, “Seeing is believing.” But with computers, that’s only half the story. Cameras are becoming an ever-present part of our world. They are built into cell phones and laptops, and dot the landscape in storefronts and on street corners. Their pervasive images present us with a wealth of information. So how do we extract information from these images and use it?
One hundred excited students from across Russia and the Commonwealth of Independent States (CIS) converged at Moscow State University for the 3rd Annual Microsoft Research Summer School.
That question set the scene for 100 excited students from across Russia and the Commonwealth of Independent States (CIS), a quarter of whom were women, who converged on Moscow State University (MSU) for the third annual Microsoft Research Summer School. This year’s session focused on the intricacies of computer vision, with activities led by Microsoft Research experts and leading European academics.
The summer session began with a special welcome from Nikolay Pryanishnikov, president of Microsoft Russia. “Supporting young talent is traditionally one of our key strategic priorities,” Pryanishnikov told the students. “We are confident that, with the help of events like this Microsoft Research Summer School, our young specialists will be able to realize their ideas, reach new peaks, and increase the innovation potential of the Russian economy.”
The students were busy throughout the week; each day was packed with intensive academic talks, demonstrations, and hands-on laboratory sessions that were designed to educate attendees about fundamental and state-of-the-art techniques in computer vision. Andrew Fitzgibbon, a principal researcher at Microsoft Research Cambridge, gave a detailed description of how decades of computer vision research, along with ground-breaking ideas from Microsoft Research, came together to make Kinect technology a reality. The summer session also featured industry talks: Aram Pakhchanian of ABBYY, a Moscow-based company that specializes in optical character recognition, and Michael Nikonov of iPi Soft, a company that specializes in motion capture technology, talked about how to create a startup company in computer vision.
Andrew Blake, managing director of Microsoft Research Cambridge, was delighted to lecture and talk to the enthusiastic students. “It was clearly a splendidly vibrant event, with tremendous enthusiasm from the students,” he said. “This really is a landmark event for Microsoft in Russia. It marks a milestone in the maturity of the developing links between Microsoft Research, Microsoft Research Connections, and Moscow State University.”
Anton Konushin, head of the Vision Group at MSU, hopes that others can benefit from the Summer School. “Ours was a truly selective school, with only one in five applicants accepted. But with the video lectures soon to be available online, we hope that the 400 students who didn’t make it to the event can also become familiar with the materials. We plan to make the school’s influence on the Russian computer vision community a long-lasting one.”
At the end of the week, students departed the summer school filled with enthusiasm and a deeper insight into how computer vision can change our world.
—Fabrizio Gagliardi, Director, Microsoft Research Connections EMEA (Europe, the Middle East, and Africa)
It was barely a year ago that European scientific and industry leaders came together with the goal of developing, testing, and deploying a high-quality, interoperable cloud platform for industry and research. The result was VENUS-C, which stands for Virtual multidisciplinary EnviroNments USing Cloud infrastructures.
Jointly sponsored by the European Commission and a consortium of 14 partners, among them Microsoft Research, VENUS-C was conceived to meet the needs of seven different research and commercial areas: bioinformatics, systems biology, drug discovery, civil engineering, civil protection, civil emergencies, and marine biodiversity. VENUS-C has since developed into a functional, operational platform, and is being used for 15 new pilots that received seed funds after an open call elicited 60 proposals from across Europe.
Conceived to meet the needs of seven research and commercial areas, VENUS-C is now a functional platform that is being used for 15 new pilot programs in various fields.
The success of VENUS-C is hardly surprising, since it offers powerful computing resources, open solutions, and a user-centric focus—all without the upfront costs of expensive IT installations. What’s more, VENUS-C’s massive computing power helps expedite research, speeding the time from hypothesis to result.
Vladimir Sykora, co-founder of Molplex, a small U.K. company working on drug discovery, and recipient of one of the 15 pilot grants, remarks, “Thanks to the VENUS-C platform, we will be able to do in a few weeks molecular computations that would have taken a year to complete on our own servers. This application allows us to quickly estimate the activity in the human body of new chemical compounds.”
VENUS-C also offers the cloud advantage of scalability, providing resources as and when needed. This is enormously valuable in many areas of scientific research, where peak computing needs occur sporadically and often unpredictably.
As Costas Papadachos of the Geophysical Laboratory at Aristotle University notes, “Geoscientists involved in the difficult territory of earthquake impact assessment have much to gain from initiatives like VENUS-C. Our involvement offers a prime opportunity to access unprecedented resources, only when and where necessary for earthquake impact estimation and related information dissemination, without worrying how to build and maintain the corresponding infrastructures and operational tools.” Papadachos heads one of the 15 new pilot experiments on the VENUS-C cloud platform.
Cloud computing can provide new approaches to data collection and management, too. Using the VENUS-C platform, Collaboratorio, an Italian-based micro-enterprise that is managing one of the civil engineering scenarios, is collecting data on the performance of new buildings, creating a database of qualitative and statistical information that can be used to find the designs that best fit specific environmental and urban contexts. The cloud will help Collaboratorio’s researchers mine the data to identify trends, perform extrapolation studies, and address common challenges that are related to building and environmental impact.
As VENUS-C embarks on its second year, we look forward to the platform supporting even bigger and more complex applications. This is a very precocious one-year-old!
—Fabrizio Gagliardi, Director, Microsoft Research Connections EMEA
Robotics technology plays an increasingly important role in search-and-rescue missions. Robots are used to explore areas that are deemed too dangerous or difficult for human teams to access. They can, for example, be used to investigate a hazardous material spill or search for disaster survivors. In the case of a disaster, a robot may save the lives of not only the victims but also the rescue workers, who might otherwise place themselves in harm’s way to search for survivors. Because of the life-saving potential of search-and-rescue robots in emergency situations, researchers are investigating better ways to control the robots in stressful and challenging environments.
Robots that are used for search and rescue are essentially an extension of the human rescue team. Cameras, microphones, and other sensors that are attached to the robot transmit critical information to the rescue team, which typically controls the robot’s movement remotely. Until recently, rescuers who managed a search-and-rescue robot normally had to manipulate complicated joysticks, dials, and switches on a very elaborate controller with multiple electro-mechanical parts. As described in our blog entry last year, the robotics lab at the University of Massachusetts, Lowell (UML), has developed a natural user interface (NUI) controller that promises greater finesse and control of robots during search-and-rescue operations.
Today we are pleased to present a new short video that highlights the accomplishment of this work and gives you an update on its status.
Building the DREAM Controller
The Lowell robotics lab takes a NUI approach for the Dynamically Resizing Ergonomic and Multi-Touch (DREAM) Controller, which has been in development since 2008. Two Microsoft technologies underlie the DREAM Controller: the Microsoft Robotics Developer Studio (used for simulation) and Microsoft Surface (the user interface).
The Microsoft Surface is a coffee-table-sized device with a computer inside and a touch-sensitive interface on top. The Surface allows multiple users to interact with the computer simultaneously by using whole-hand or multiple-finger gestures. These gestures enable rescue teams to control robots with greater dexterity than they could with traditional robotics controllers—and precise control of the robots is critical for search-and-rescue efforts. In addition, the Surface permits more than one robot to be controlled simultaneously—previously not possible with a single controller.
To use the DREAM Controller implemented on the Surface, users simply place their hands on the interface. The DREAM Controller identifies the fingers and thumb of each hand and displays a virtual “joystick” beneath it; the user then manipulates the joystick with the thumb. With both hands on the surface, there are up to four degrees of freedom (X-Y on each thumb), enabling the control of four different dimensions.
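To make that mapping concrete, here is a minimal, hypothetical sketch (not the UML team’s actual code) of how one hand’s touch points might be turned into joystick axes: a simple heuristic picks out the thumb, and its offset from the virtual joystick’s center becomes a normalized X-Y pair.

```python
import math

# Hypothetical sketch (not the DREAM Controller's real implementation):
# given the five touch points of one hand on a multi-touch surface,
# locate the thumb and convert its offset from the virtual joystick's
# center into normalized X-Y joystick axes.

def find_thumb(points):
    """Heuristic: the thumb is the touch point farthest from the
    centroid of the remaining four fingertips."""
    best, best_dist = None, -1.0
    for i, p in enumerate(points):
        others = points[:i] + points[i + 1:]
        cx = sum(x for x, _ in others) / len(others)
        cy = sum(y for _, y in others) / len(others)
        d = math.hypot(p[0] - cx, p[1] - cy)
        if d > best_dist:
            best, best_dist = p, d
    return best

def joystick_axes(thumb, center, radius):
    """Map the thumb's offset from the joystick's center to axes in
    [-1, 1], clamping touches beyond the joystick's rim."""
    dx = (thumb[0] - center[0]) / radius
    dy = (thumb[1] - center[1]) / radius
    mag = math.hypot(dx, dy)
    if mag > 1.0:  # clamp to the rim of the joystick
        dx, dy = dx / mag, dy / mag
    return dx, dy
```

With two hands tracked, each hand contributes one such X-Y pair, giving the four degrees of freedom described above.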
The Lowell team (Holly Yanco and Mark Micire) is also developing a series of pre-programmed gestures with guidance from expert search-and-rescue volunteers. The goal is to develop code that enables the DREAM Controller to recognize specific gestures that rescue workers make naturally during a search-and-rescue operation, thereby facilitating and accelerating rescue efforts.
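Gesture recognition of this kind is often implemented by comparing a drawn stroke against stored templates. The sketch below is illustrative only (the lab’s published recognizer may differ): strokes are normalized for position and scale, then classified by nearest template, assuming all strokes have been resampled to the same number of points beforehand.

```python
import math

# Illustrative template-matching sketch, not the UML lab's method:
# normalize a stroke for position and scale, then pick the stored
# gesture template with the smallest mean point-to-point distance.

def normalize(stroke):
    """Translate a stroke to its centroid and scale it to unit size, so
    a gesture matches regardless of where or how large it was drawn."""
    cx = sum(x for x, _ in stroke) / len(stroke)
    cy = sum(y for _, y in stroke) / len(stroke)
    pts = [(x - cx, y - cy) for x, y in stroke]
    scale = max(math.hypot(x, y) for x, y in pts) or 1.0
    return [(x / scale, y / scale) for x, y in pts]

def classify(stroke, templates):
    """Return the name of the template closest to the stroke by mean
    point-to-point distance (equal point counts assumed)."""
    s = normalize(stroke)
    best_name, best_score = None, float("inf")
    for name, tmpl in templates.items():
        t = normalize(tmpl)
        score = sum(math.hypot(sx - tx, sy - ty)
                    for (sx, sy), (tx, ty) in zip(s, t)) / len(s)
        if score < best_score:
            best_name, best_score = name, score
    return best_name
```

For example, a short horizontal swipe drawn anywhere on the surface, at any size, would match a stored horizontal-line template rather than a vertical one.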
The novel NUI approach to robotics that was employed by the Lowell robotics lab in this socially significant application helped the DREAM Controller project win one of eight grants that Microsoft Research offered under our Social Human Robot Interaction Request for Proposals (RFP). The grant award included financial support, a donated Microsoft Surface, and access to the Microsoft Research team.
I think the DREAM Controller project truly shows what a better first response system—using NUI technology—could look like in the very near future. Check out the video!
—Stewart Tansley, Senior Research Program Manager, Microsoft Research Connections