Think about supercomputers of the recent past. Just 15 years ago, supercomputers were rare and exotic machines. Government laboratories in the United States and Japan spent hundreds of millions of dollars on custom computing rigs and specialized facilities to house them, in a bid to tackle the world’s toughest problems.
But now there is an alternative that is more attractive for scientists and businesses. Today, you can rent supercomputing horsepower by the hour online from public cloud providers. Amazing.
Windows Azure can help ensure that you're not paying more than you can afford for your supercomputing time, and it makes the overall management of large-scale computations simple. Unlike other cloud providers, Windows Azure has no virtual machine (VM) image that you need to manage or store in your account; with tens of thousands of instances, that overhead would add up, both in management effort and in cost. Windows Azure also provides the operating system for you (and keeps it up to date with patches): you just copy your application to Windows Azure and run it in the cloud.
The Microsoft HPC Pack 2012 (a free download that will be available from the Microsoft Download Center later this year) makes it very easy to manage compute resources and schedule your jobs in Windows Azure. You take the proven cluster management tool from Windows Server, connect it to Windows Azure, and then let it do the work. All you need to get started is a Windows Azure account. A set-up wizard takes care of the preparation, and the job scheduler runs your computations.
What’s more, there’s no commitment: you can pay as you go, or you can negotiate a discount if you plan to use a lot of core hours. As Bill Hilf, general manager of product management for Windows Azure, observes, it’s easy to manage a wide range of sizes and types of workloads on Windows Azure. Like Bill, we, too, are extremely enthusiastic about the possibilities offered by the supercomputing prowess of Windows Azure. Such massive computational power is critical for “big data” studies that increase our understanding of complex systems.
The genome-wide association study (GWAS) is a case in point. Microsoft Research conducted a 27,000-core run on Windows Azure to crunch data from this study. With the nodes busy for 72 hours, the run completed 1 million tasks and consumed approximately 1.9 million compute hours. If the same computation had been run on an 8-core system, it would have taken 25 years to complete!
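The figures above can be sanity-checked with some back-of-the-envelope arithmetic. This is a sketch only: the constants come straight from the text, and the single-machine estimate assumes perfect utilization, so it lands in the same ballpark as, rather than exactly on, the article's 25-year figure.

```python
# Rough check of the compute figures quoted above (illustrative only).
CORES = 27_000      # Windows Azure cores in the run, per the text
WALL_HOURS = 72     # wall-clock duration of the run, per the text

core_hours = CORES * WALL_HOURS
print(f"core-hours: {core_hours:,}")  # 1,944,000, i.e. ~1.9 million

# Equivalent wall-clock time on a single 8-core machine,
# assuming perfect scaling and round-the-clock operation:
single_machine_hours = core_hours / 8
years = single_machine_hours / (24 * 365)
print(f"single 8-core machine: roughly {years:.0f} years")
```

Under these idealized assumptions the single-machine estimate comes out in the high twenties of years; the article's 25-year figure presumably reflects slightly different utilization assumptions.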
The GWAS offers a powerful approach to identifying genetic markers that are associated with human diseases. It used data from a Wellcome Trust study of the British population, which examined some 2,000 individuals for each of seven major diseases, along with a shared set of about 13,000 controls. But like all genome-wide association studies, this study had to overcome a significant problem: to study the genetics of a particular condition, say heart disease, researchers need a large sample of people who have the disorder, which means that some of those people are likely to be related to one another, even if only distantly. As a result, certain positive associations between specific genes and heart disease are false positives: the consequence of two people sharing a common ancestor rather than a common propensity for clogged coronaries. In other words, the sample is not truly random, and you must statistically correct for the “confounding” caused by the relatedness of your subjects.
This is not an insurmountable statistical problem: so-called linear mixed models (LMMs) can eliminate the confounding. Using them, however, poses a computational problem, because accounting for the relatedness among thousands of people requires an inordinate amount of runtime and memory. In fact, the runtime and memory footprint required by these models scale as the cube and the square, respectively, of the number of individuals in the dataset. So when you’re dealing with a 10,000-person sample, the cost of computer time and memory quickly becomes prohibitive. And it is precisely these large datasets that offer the most promise for finding the connections between genetics and disease.
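The scaling argument can be made concrete with a quick calculation. The growth rates (cubic runtime, quadratic memory) come from the text; the baseline sample size here is an arbitrary illustration, not a figure from the study.

```python
# Illustrative only: per the scaling stated above, naive LMM runtime
# grows as n**3 and memory as n**2 in the number of individuals n.
def relative_cost(n, baseline=1_000):
    """Return (runtime, memory) cost relative to a baseline-sized sample."""
    runtime = (n / baseline) ** 3   # cubic growth in runtime
    memory = (n / baseline) ** 2    # quadratic growth in memory
    return runtime, memory

# Growing the sample from 1,000 to 10,000 individuals (10x the people):
runtime_x, memory_x = relative_cost(10_000)
print(runtime_x, memory_x)  # 1000.0 100.0
```

So a 10x larger cohort costs 1,000x the runtime and 100x the memory, which is why a 10,000-person sample becomes prohibitive long before the data itself does, and why FaST-LMM's improved scaling matters.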
To avoid this computational roadblock, Microsoft Research developed the Factored Spectrally Transformed Linear Mixed Model (better known as FaST-LMM), an algorithm that extends the ability to detect new biological relations by using data that is several orders of magnitude larger. It allows much larger datasets to be processed and can, therefore, detect more subtle signals in the data.
By using Windows Azure, Microsoft Research ran FaST-LMM on data from the Wellcome Trust, analyzing 63,524,915,020 pairs of genetic markers, looking for interactions among these markers for bipolar disease, coronary artery disease, hypertension, inflammatory bowel disease (Crohn’s disease), rheumatoid arthritis, and type I and type II diabetes. The result: the discovery of new associations between the genome and these diseases—discoveries that could presage potential breakthroughs in prevention and treatment.
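As a sanity check on that pair count: 63,524,915,020 is exactly the number of unordered pairs drawn from 356,441 items, so the run corresponds to testing every pairwise combination of roughly 356,000 genetic markers. The marker count here is inferred from the arithmetic, not stated in the text, so treat it as an assumption.

```python
from math import comb  # exact binomial coefficient (Python 3.8+)

PAIRS_REPORTED = 63_524_915_020  # pairs analyzed, per the text
m = 356_441                      # inferred marker count (assumption)

# C(m, 2) = m * (m - 1) // 2 matches the reported pair count exactly.
assert comb(m, 2) == PAIRS_REPORTED
print(f"{m:,} markers -> {comb(m, 2):,} unordered pairs")
```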
Results from individual pairs and the FaST-LMM algorithm are available via online query in Epistasis GWAS for 7 common diseases in the Windows Azure Marketplace (free access), so researchers can independently validate results that they find in their lab.
Today’s smartphones have put a computer in your pocket. Now, with cloud computing through Windows Azure, you have a supercomputer in your… well, not in your pocket, but probably within your budget. Whatever your big-data concerns, Windows Azure can provide supercomputing power at an affordable price.
—David Heckerman, Distinguished Scientist, Microsoft Research; Robert Davidson, Principal Software Architect, Microsoft Research, eScience; Carl Kadie, Principal Research Software Design Engineer, Microsoft Research, eScience; Jeff Baxter, Development Lead, Windows HPC, Microsoft; Jennifer Listgarten, Researcher, Microsoft Research Connections; and Christoph Lippert, Researcher, Microsoft Research Connections