February, 2011

Microsoft Research Connections Blog

The Microsoft Research Connections blog shares stories of collaborations with computer scientists at academic and scientific institutions to advance technical innovations in computing, as well as related events, scholarships, and fellowships.

  • Microsoft Research Connections Blog

    Registration Now Open for Microsoft Biology Foundation Workshop

    We recently posted a preview of the Microsoft Biology Foundation (MBF) for development evaluation purposes. Now, we're following up with a special, free, one-day MBF workshop on March 11, 2011, in Redmond, Washington, hosted by the Microsoft Biology Initiative. The workshop includes a quick introduction to Microsoft Visual Studio 2010, the Microsoft .NET Framework, C#, and the MBF Object Model. Plus, our hands-on lab will give you the opportunity to write a sample application that employs the file parsers, algorithms, and web connectors in MBF.

    We will also cover some MBF training modules throughout the day, including:

    • Module 1: Introduction to Visual Studio 2010 and C#. This comprehensive introduction to the Microsoft Visual Studio programming environment and Microsoft .NET will teach you how to create a project, how to get started with C#, and how to perform runtime debugging. Also, you will get hands-on lab experience by building applications in Visual Studio 2010.
    • Module 2: Introduction to the Microsoft Biology Foundation. This overview will introduce you to the MBF basics through discussions of its scenarios and architectures and includes a starter project. The starter project is a hands-on lab that will help you get the experience you need to work with sequences, parsers, formatters, and the transcription algorithm that is supplied in MBF.
    • Module 3: Working with Sequences. In this module, you'll learn more about the Sequence data type in MBF, including how to load sequences into memory and save them, the different sequence types available, how to use sequence metadata, and how data virtualization enables support for large data sets, all in a hands-on lab setting.
    • Module 4: Parsers and Formatters. In Parsers and Formatters, you'll explore MBF's built-in sequence parsers, formatters, alphabets, and encoders. This module will also introduce the method of expanding MBF with custom alphabets, parsers, and formatters. The hands-on lab will walk you through the steps that are required to build a simple custom parser and formatter for a fabricated biology data format.
    • Module 5: Algorithms. In this module, you will examine the algorithms that are defined in MBF for sequence alignment, multi-sequence alignment, sequence fragment assembly, transcription, translation, and pattern matching against sequences; you'll also learn how to create custom algorithms. The hands-on lab will walk you through the steps that are required to build an application to run algorithms against sequences loaded with MBF and will teach you how to perform sequence alignment, assembly, and transformations.
    • Module 6: Web Services. This module will introduce Microsoft .NET web services, the web service architecture in MBF, and the built-in web service support in MBF for BLAST (Basic Local Alignment Search Tool) and ClustalW. You will also learn how to call these services asynchronously and will watch a detailed example of how to build custom service wrappers. In the hands-on lab, you'll build an application that executes the BLAST algorithm through its web-service handlers, passes sequences and sequence fragments to BLAST, changes the BLAST parameters, and displays the results from a BLAST run.
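    MBF itself is a C#/.NET library, but the core operations these modules revolve around, transcription and translation of sequences, can be sketched in a few lines. The following is a minimal Python illustration of the concept only; it is not the MBF object model, and the codon table is abbreviated to the handful of codons used below:

```python
# Conceptual sketch of transcription (DNA -> mRNA) and translation
# (mRNA -> protein); this is NOT the MBF API, and the codon table is
# abbreviated for illustration.

CODON_TABLE = {
    "AUG": "M", "UUU": "F", "UUC": "F", "GGC": "G",
    "UAA": "*", "UAG": "*", "UGA": "*",  # "*" marks a stop codon
}

def transcribe(dna):
    """Transcribe a DNA coding strand to mRNA by replacing T with U."""
    return dna.upper().replace("T", "U")

def translate(rna):
    """Translate mRNA into a protein string, stopping at a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i + 3], "?")
        if amino == "*":
            break
        protein.append(amino)
    return "".join(protein)

mrna = transcribe("ATGTTTGGC")
print(mrna)             # AUGUUUGGC
print(translate(mrna))  # MFG
```

    MBF supplies these operations (and full codon tables, alphabets, and parsers) as library components, which is what the hands-on labs exercise.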

    We hope you will join us for this free one-day event. Whether your goal is to get trained on MBF or simply to evaluate MBF and its Microsoft .NET model, you can expect to get a tremendous return on your time investment.

    For complete details about the day, or to register, please see the MBF Workshop website. We look forward to meeting you on March 11 in Redmond.

    —Beatriz Diaz Acosta, Senior Research Program Manager, Health and Wellbeing, Microsoft Research Connections

  • Microsoft Research Connections Blog

    SenseCam Documents Daily Life for Patients with Memory Loss

    Human memory is all too fallible. We all misplace items or forget to run an errand occasionally; our memories of specific events can fade with time as well. But severe memory issues can have a devastating impact on quality of life for individuals with clinically diagnosed memory disorders that are related to acquired brain injury (for example, an accident) or neurodegenerative diseases (for example, Alzheimer's disease).

    There is no cure for memory loss. In the past, neuropsychologists had to rely on fairly primitive devices (such as photo albums, diaries, and electronic reminders) to help patients cope with memory conditions. Technology is rapidly evolving, however, and providing new opportunities to help patients.

    A notable development in the field is the SenseCam, a memory-enhancing camera developed by Microsoft researchers at the Cambridge campus and subsequently licensed to Vicon. Vicon sells the SenseCam as a medical device, the Vicon Revue, which was named one of the 100 best innovations of 2010 by Popular Science. The SenseCam uses a wide-angle lens to document the patient's day—including places visited and people seen—creating visual "memories" through pictures. The camera, which is worn around the neck, takes a photograph:

    • Every 30 seconds
    • When movement is detected
    • When a lighting change is detected
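    In other words, the capture logic is a simple loop over three triggers. A hypothetical sketch in Python (the constants and function are invented for illustration; they are not the actual device firmware or the Vicon Revue API):

```python
# Hypothetical sketch of SenseCam-style capture triggers. The threshold
# values are invented for illustration, not taken from the real device.

CAPTURE_INTERVAL = 30.0  # seconds between timed captures
LIGHT_THRESHOLD = 0.25   # fractional change in ambient light that triggers

def should_capture(now, last_capture, motion_detected, light, last_light):
    """Return True if any of the three capture triggers fires."""
    if now - last_capture >= CAPTURE_INTERVAL:   # every 30 seconds
        return True
    if motion_detected:                          # movement detected
        return True
    if last_light > 0 and abs(light - last_light) / last_light > LIGHT_THRESHOLD:
        return True                              # lighting change detected
    return False
```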

    At the end of the day, the patient downloads the images to a computer. These images create visual reminders of events from throughout the day—essentially, they are digital memories. These SenseCam images appear to stimulate the episodic memory of patients who view them. Unlike staged (or posed) photographs, which tend to change the nature of the very moment being captured, SenseCam images are recorded passively, with no conscious effort or intervention. Combined with the relatively large number of images, this seems to have a powerful effect on recall. Numerous patients have benefited from true autobiographical recall through this technology; typically, a handful of images stimulates the same feelings and emotions the wearer had when the events occurred.

    Ultimately, we hope that SenseCam may help delay the onset of Alzheimer's disease in at-risk patients. Multiple studies around the globe, funded by Microsoft External Research, have helped us understand how SenseCam can help patients with a variety of memory-loss conditions. These studies include:

    1. Addenbrooke's Hospital and the Medical Research Council (MRC) Cognition and Brain Sciences Unit, Cambridge, United Kingdom
      Researchers at Addenbrooke's used functional Magnetic Resonance Imaging (fMRI) to identify how SenseCam affects neurological activity in different areas of the brain. Participants used SenseCam in their daily lives. Researchers then asked them to answer questions about the images that were captured by SenseCam. By tracking brain activity through fMRI scans, researchers demonstrated that patients were recalling true memories—and not just reciting information from the SenseCam.
    2. Adam Zeman, Professor of Cognitive Neurology, Exeter University, United Kingdom
      Epilepsy is the most common chronic neurological condition; as many as 50 percent of epilepsy patients report significant memory problems. Professor Zeman, a leading cognitive neurology researcher, has tested SenseCam with transient epileptic amnesia (TEA) patients who have reported severe autobiographical memory problems as a result of temporal lobe epilepsy.
    3. Professor Phil Barnard, Medical Research Council (MRC) Cognition and Brain Sciences Unit, Cambridge, United Kingdom
      Professor Phil Barnard, internationally renowned in the field of cognition, led a study to determine how SenseCam can help patients with Alzheimer's disease.
    4. Professor Ron Baecker, University of Toronto, and Professor Yaakov Stern, Columbia Medical School, United States
      This joint study evaluated the therapeutic value of SenseCam in patients who are in the early stages of dementia (Alzheimer's disease). It showed that patients' personal well-being, including enjoyment; sense of identity; memory of people, places, and events; and their conversations with family were enhanced through reminiscence by using SenseCam pictures and other imagery.
    5. Professor Roberto Cabeza, Duke University, United States, and Professor Martin Conway, University of Leeds, United Kingdom
      This research study focuses on healthy adults under the age of 30 and above the age of 70. All participants use written diaries, audio recording, and SenseCam to record daily activities. Subsequently, each subject will be stimulated with a selection of these records while they are in an fMRI scanner. The resulting fMRI images will allow researchers to measure the effect that viewing SenseCam image sequences has on participants' neural activity, and to compare these results between different age groups and between different forms of stimulation (for example, image sequences versus audio recordings versus written diaries). The study will reveal any differences between the effectiveness of SenseCam in the young versus older populations. It will also demonstrate whether SenseCam use has the ability to improve cognitive ability in the healthy population.

    The SenseCam was recently featured in TIME magazine and is currently on display at the Science Museum in London. For more information, see the Introduction to SenseCam.

     —Steve Hodges, Principal Hardware Engineer, Microsoft Research, and Kristin Tolle, Director, Natural User Interfaces Team, External Research division of Microsoft Research

  • Microsoft Research Connections Blog

    Celebrating Richard Feynman at TEDxCaltech

    One of our responsibilities as researchers is to have the courage to challenge accepted "truths" and to seek out new insights. Richard Feynman was a physicist who not only epitomized both of these qualities in his research but also took enormous pleasure in communicating the ideas of physics to students. Feynman won the Nobel Prize for the work behind the computational toolkit that we now call Feynman Diagrams. The techniques he developed helped the physics community make sense of Quantum Electrodynamics (QED) after the war, when the entire community was in a state of confusion about how to handle the infinities that appeared all over the place when one tried to make a perturbative expansion in the coupling.

    Feynman was the subject of a recent TEDxCaltech conference, fittingly called, "Feynman's Vision: The Next 50 Years." The event was organized in recognition of the 50-year anniversary of Feynman's visionary talk, "There's Plenty of Room at the Bottom," in which he set out a vision for nanoscience that is only now beginning to be realized. It is also 50 years since he gave his revolutionary "Feynman Lectures on Physics," which educated generations of physicists.

    I had the honor of speaking about Feynman's contributions to computing, from his days at Los Alamos during the war to his Nobel Prize-winning computational toolkit (Feynman Diagrams) and his invention of quantum computing. By striving to think differently, he truly changed the world. The following are some highlights from my presentation.

    Parallel Computing Without Computers

    Feynman worked on the Manhattan Project at Los Alamos in the 1940s with Robert Oppenheimer, Hans Bethe, and Edward Teller. In order to make an atom bomb from the newly discovered transuranic element plutonium, it was necessary to generate a spherical compression wave to compress the plutonium to critical mass so that the chain reaction could start. It was, therefore, necessary to calculate how to position explosive charges in a cavity to generate such a compression wave; these calculations were sufficiently complex that they had to be done numerically. The team assigned to perform these calculations was known as the "IBM team," but it should be stressed that this was in the days before computers: the team operated on decks of cards with adding machines, tabulators, sorters, collators, and so on. The problem was that the calculations were taking too long, so Feynman was put in charge of the IBM team.

    Feynman immediately discovered that because of the obsession with secrecy at Los Alamos, the team members had no idea of the significance of their calculations or why they were important for the war effort. He went straight to Oppenheimer and asked for permission to brief the team about the importance of their implosion calculations. He also discovered a way to speed up the calculations. By assigning each problem to a different colored deck of cards, the team could work on more than one problem at once. While one deck was using one of the machines for one stage of the calculation, another deck could be using a different machine for a different stage of its calculation. In essence, this is a now-familiar technique of parallel computing—the pipeline parallelism familiar from the Cray vector supercomputers, for example.
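    The speedup from the colored-deck scheme is easy to quantify under an idealized model in which every machine stage takes one unit of time: a serial run of P problems through S stages costs S × P steps, while the pipeline costs only S + P - 1, because once the pipeline is full, one problem finishes per step. A small Python sketch (the unit-time assumption is an idealization, not a claim about the actual Los Alamos timings):

```python
# Idealized pipeline model of Feynman's colored-deck scheme: every problem
# (deck) passes through the same machine stages in order, but different
# decks can occupy different machines at the same time.

def serial_steps(num_problems, num_stages):
    # one problem must finish all stages before the next one starts
    return num_problems * num_stages

def pipelined_steps(num_problems, num_stages):
    # the first problem fills the pipeline (num_stages steps); after
    # that, one problem completes per step
    return num_stages + num_problems - 1

print(serial_steps(9, 3))     # 27 steps to run 9 problems serially
print(pipelined_steps(9, 3))  # 11 steps with a 3-stage pipeline
```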

    The result was a total transformation. Instead of completing only three problems in nine months, the team was able to complete nine problems in three months! Of course, this led to a different problem when management reasoned that it should be possible to complete the last calculation needed for the Trinity test in less than a month. To meet this deadline, Feynman and his team had to address the more difficult problem of breaking up a single calculation into pieces that could be performed in parallel.

    Feynman Diagrams

    My next story starts in 1948 at the Pocono Conference where all the great figures of physics—Niels Bohr, Paul Dirac, Robert Oppenheimer, Edward Teller, and so on—had assembled to try to understand how to make sense of the infinities in QED. Feynman and Schwinger were the star speakers, but Feynman was unable to make his audience understand how he did his calculations. His interpretation of positrons as negative energy electrons moving backwards in time was just too hard for them to accept. After the conference, Feynman was in despair and later said, "My machines came from too far away."   

    Less than a year later, Feynman had his triumph. At an American Physical Society meeting in New York, Murray Slotnick talked about some calculations he had done with two different meson-nucleon couplings. He had shown that these two couplings indeed gave different answers. After Slotnick's talk, Oppenheimer got up from the audience and said that Slotnick's calculations must be wrong since they violated Case's Theorem. Poor Slotnick had to confess that he had never heard of Case's Theorem and Oppenheimer informed him that he could remedy his ignorance by listening to Professor Case present his theorem the following day.

    That night, Feynman couldn't sleep so he decided to re-do Slotnick's calculations by using his diagram techniques. The next day at the conference, Feynman sought out Slotnick, told him what he had done, and suggested they compare results. "What do you mean you worked it out last night?" Slotnick responded. "It took me six months!" As the two compared answers, Slotnick asked, "What is that Q in there, that variable Q?" Feynman replied that the Q was the momentum transfer as the electron was deflected by different angles. "Oh," Slotnick replied. "I only have the limiting value as Q approaches zero. For forward scattering." Feynman said, "No problem, we can just set Q equal to zero in my formulas!" Feynman found that he had obtained the same answer as Slotnick.

    After Case had presented his theorem, Feynman stood up at the back of the audience and said, "Professor Case, I checked Slotnick's calculations last night and I agree with him, so your theorem must be wrong." And then he sat down. That was a thrilling moment for Feynman, like winning the Nobel Prize—which he did much later—because he was now sure that he had achieved something significant. It had taken Slotnick six months to do the case of zero momentum transfer, while Feynman had been able to complete the calculation for arbitrary momentum transfer in one evening. The computational toolkit that we now call Feynman Diagrams has penetrated almost all areas of physics, and his diagrams appear on the blackboards of physicists all around the world. This toolkit is undoubtedly Feynman's greatest gift to physics, and the story perfectly illustrates his preference for concrete, detailed calculation over reliance on more abstract theorems.

    The Physics of Computation

    At the invitation of his friend Ed Fredkin, Feynman delivered a keynote lecture at "The Physics of Computation" Conference at MIT in 1981. Feynman considered the problem of whether it was possible to perform an accurate simulation of Nature on a classical computer. As Nature ultimately obeys the laws of quantum mechanics, the problem reduces to simulating a quantum mechanical system on a classical computer. Because of the nature of quantum objects like electrons, truly quantum mechanical calculations on a classical computer rapidly become impractical for more than a few tens of electrons.
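    The impracticality Feynman identified is easy to quantify: the full state of n two-level quantum systems requires 2^n complex amplitudes, so the memory a classical computer needs grows exponentially. A quick back-of-the-envelope calculation in Python (assuming 16 bytes per double-precision complex amplitude):

```python
# Memory required to store the full state vector of n two-level quantum
# systems on a classical computer: 2**n complex amplitudes, 16 bytes each
# (double-precision real and imaginary parts).

def state_vector_bytes(n):
    return (2 ** n) * 16

for n in (10, 30, 50):
    print(n, state_vector_bytes(n))
# 10 systems fit in 16 KB, 30 need about 17 GB, and 50 would need
# about 18 petabytes, which is why "a few tens" is already the limit.
```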

    Feynman then proceeded to consider a new type of computer based on quantum mechanics: a quantum computer. He realized that this was "Not a Turing machine, but a machine of a different kind." Interestingly, Feynman did not go on to explore the different capabilities of quantum computers but simply demonstrated how you could use them to simulate true quantum systems.

    By his presence at the conference, Feynman stimulated interest both in the physics of computation and in quantum computing. At this conference 30 years later, we heard several talks summarizing progress towards actually building a quantum computer. In the last five years of his life, Feynman gave lectures on computation at Caltech, initially with colleagues Carver Mead and John Hopfield, and for the last three years by himself.

    I was fortunate enough to be asked by Feynman to write up his "Lectures on Computation." The lectures were a veritable tour de force and were probably a decade ahead of their time. Feynman considered the limits to computation due to mathematics, thermodynamics, noise, silicon engineering, and quantum mechanics. In the lectures, he also gave his view about the field of computer science: He regarded science as the study of natural systems and classified computer science as engineering since it studied man-made systems.

    Inspiring Later Generations

    Feynman said that he started out very focused on physics and only broadened his studies later in life. There are several fascinating biographies of Feynman, but the one I like best is No Ordinary Genius by Christopher Sykes. This is a wonderful collection of anecdotes, interviews, and articles about Feynman and his wide range of interests—from physics, to painting, to bongo drums and the Challenger inquiry. Feynman was a wonderful inspiration to the entire scientific community, and his enjoyment of and enthusiasm for physics are beautifully captured in the TV interview, "The Pleasure of Finding Things Out," produced by Christopher Sykes for the BBC. Feynman is forever a reminder that we must try to think differently in order to innovate and succeed.

    —Tony Hey, Corporate Vice President, External Research Division of Microsoft Research

