Leaving an infirm, elderly relative alone at home can be a torment for both the senior citizen, who may face inconvenient or even life-threatening situations, and the family, who worries about the health and safety of their loved one. Unfortunately, this troubling scenario is becoming common in China and other Asian countries, as these nations join the worldwide trend of aging populations. China’s Harbin Institute of Technology (HIT) has launched a program to aid senior citizens who live alone. Called Smart Home Technologies, the program uses the Microsoft Research Lab of Things (LoT), a platform that facilitates research involving connected devices in homes. The program builds on the impressive progress that has been made in using technology for safety monitoring and emergency detection, and it offers great hope to elderly people and their loved ones.
Professor Nie controls the lights in the room via the Smart Home platform based on LoT.
Joining the Lab of Things community
At the core of the program is a model smart home equipped with sensors that compile data on the well-being of the home’s inhabitants. This model home serves as a test environment for researchers from China and other Asian countries. HIT Professor Lanshun Nie, the program’s principal investigator, explains that the goal is to create alternatives to full-time home surveillance systems, which are prohibitively expensive for most families.
The first step for HIT researchers was to see how home sensors are being used to monitor senior citizens in Western countries. The HIT program then developed a smart home environment that took into account cultural nuances specific to the behaviors and treatment of aging populations in Asian societies. Nie worked with Arjmand Samuel, Microsoft Research’s senior research program manager for LoT, to design this Asia-specific test environment. Samuel also connected Nie with LoT researchers around the world, so that the HIT team could evaluate and adapt their deployment approaches. “With its diverse, global research community, Lab of Things enables research rooted in particular cultural contexts but driven by global trends,” says Samuel.
Nie and Samuel stress that it is critical to study different software frameworks connecting multiple heterogeneous devices and multiple networks. Furthermore, these frameworks must support many concurrent applications; enable reliable data collection; and communicate between home devices, the cloud, and smartphones—and they must make it easy for developers to create third-party applications. The beauty of LoT, says Nie, is that it helps HIT researchers build such frameworks to provide a closed-loop service: researchers can focus on the sensors and data services, while LoT manages tasks like sensor registration, monitoring, and data transmission.
Professor Nie and his student, Xue Li, test the robotic trolley.
Smart home in action: the robotic trolley
One of the first prototypes to emerge from the Smart Home Technologies program is an intelligent robotic trolley that provides medical support to elderly people—something like an early version of Baymax, the healthcare robot from the recent animated movie Big Hero 6. The trolley is designed to carry medications and offer reminders to take them on time. “Taking medicines is important, but sometimes older folks forget,” observes Nie. “We are also designing the trolley to recognize and react to some urgent situations,” he adds. “When the sensors detect a medical emergency—for instance, an asthma attack—the trolley will be activated and deliver the medicine the patient needs.” Nie further explains that the trolley might also use data from a Kinect sensor to detect and respond to abnormal situations, such as when an older adult falls or convulses.
According to Nie, the Kinect sensor, shown here, could help detect emergencies.
Getting from the trolley to a full-blown health and safety home monitoring system will take some work, but the HIT team and their LoT collaborators at Microsoft Research are convinced that Smart Home Technologies is on the right track. And for countless elderly shut-ins and their anxious families, that day cannot come too soon.
—Bei Li, Research Program Manager, Microsoft Research
Imagine a street dance in which the participants interact not just with their flesh-and-blood counterparts but also with lights and sounds controlled by the dancers’ own movements. That’s what visitors to SummerSalt, an outdoor arts festival in Melbourne, Australia, experienced. The self-choreographed event came courtesy of Encounters, an installation created by the Microsoft Research Centre for Social Natural User Interfaces (SocialNUI), a joint research facility sponsored by Microsoft, the University of Melbourne, and the state government of Victoria. Held in a special exhibition area on the grounds of the university’s Victorian College of the Arts, Encounters featured three Kinect v2 sensors installed overhead.
During a VIP Encounters event on the evening of February 21, several hundred people took part in a Kinect “walk through,” during which dancers and other performers from Victorian College of the Arts mingled with the crowd to create social interactions captured by the Kinect sensors. The results were spectacular visual and audio effects, as the participants came to recognize that their movements and gestures controlled the music and sound effects as well as the light displays on an enormous outdoor screen.
Social interactions facilitated by natural user interfaces were the focus of the Encounters event.
Researchers from SocialNUI conducted qualitative interviews while members of the public interacted with their Kinect-generated effects, probing for insights into the social implications of the experience. As Frank Vetere, director of SocialNUI, explained, “The center explores the social aspects of natural user interfaces, so we are interested in the way people form, come together and explore the public space. And we are interested in the way people might claim and re-orient the public space. This is an important part of taking technological developments outside of our lab and reaching out to the public and other groups within the University.”
Su Baker, director of the Victorian College of the Arts, said, “One of the great crossovers that’s happening now in art is [its] relationship [with] emerging technologies, and we have a number of students with a real interest in how emerging technologies can be used in their work.”
This unique, cross-disciplinary collaboration was a wonderful success, delighting not only the NUI researchers and art students but also the public participants.
—John Warren, Senior Research Program Manager, Microsoft Research
Machine learning is the cornerstone of modern data analysis. The gurus of “big data” analytics are all well versed in machine learning, but most domain specialists still must hire data scientists to meet their data-analysis needs. It's inevitable, though, that the data-modeling chain will become largely automated—simplified to the point where off-the-shelf data transformation tools will be as pervasive as those for word processing and spreadsheets. Data analysis will then be like driving a car: the user will focus on the route to the destination, without worrying about how the engine works.
We refer to this vision as the automation of machine learning, or AutoML for short. To help advance towards this grand goal, ChaLearn, an organization that promotes machine-learning challenges, has launched a contest to help democratize machine learning. Built on the new CodaLab platform, the contest offers US$30,000 in prizes donated by Microsoft. More than 60 teams already have entered the contest during the Prep round, and now, until October 15, 2015, you can enter any of five additional rounds: novice, intermediate, advanced, expert, or master. Visit the ChaLearn Automatic Machine Learning Challenge site to see the deadlines for each round. You can enter even if you have not participated in previous rounds.
Five rounds remain in the Automatic Machine Learning Challenge, each round consisting of AutoML and Tweakathon phases.
The contest problems are drawn from a variety of domains. They include challenges in text classification, customer-satisfaction prediction, object recognition in photographs, and action recognition in video data, as well as problems involving speech recognition, credit ratings, medical diagnoses, drug effects, and protein-structure prediction.
Five datasets of progressive difficulty are introduced during each round. The rounds alternate between (1) AutoML phases, during which submitted code is blind tested, within limited time, on our platform, using datasets you have never seen before; and (2) Tweakathon phases, in which you are given time to improve your methods by tweaking them on those datasets and running them on your own systems, with no limits on computational resources and no requirement to submit code.
During the novice round, which runs through April 14, you will encounter only binary classification problems, with no missing values and no categorical variables. All the datasets are formatted as simple data tables—no sparse matrix format, though one dataset does include a lot of zeros. The classes are balanced. The number of features does not exceed 2,000, and the number of examples does not exceed 6,000. The metric of evaluation is simply classification accuracy.
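To make the novice-round task concrete, here is a minimal sketch of an entry-level baseline on a synthetic dataset matching the stated constraints (balanced binary classes, a plain data table, well under 6,000 examples and 2,000 features), scored with classification accuracy. The nearest-centroid classifier and the synthetic data are illustrative assumptions, not part of the challenge itself; real submissions would be far more sophisticated.

```python
import numpy as np

def accuracy(y_true, y_pred):
    # Classification accuracy: the novice-round evaluation metric.
    return float(np.mean(y_true == y_pred))

class NearestCentroid:
    """Trivial baseline: predict the class whose feature-mean
    (centroid) is closest to each example."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, X):
        # Squared distance from each example to each class centroid.
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]

# Synthetic balanced binary data, well within the novice limits.
rng = np.random.default_rng(0)
n, p = 600, 20
y = rng.integers(0, 2, size=n)             # balanced binary labels
X = rng.normal(size=(n, p)) + y[:, None]   # class-dependent mean shift

# Simple train/test split, then score with accuracy.
model = NearestCentroid().fit(X[:500], y[:500])
acc = accuracy(y[500:], model.predict(X[500:]))
print(f"test accuracy: {acc:.2f}")
```

In the AutoML phases, code like this would have to run unattended on datasets it has never seen, which is why the challenge emphasizes fully automatic model selection rather than hand tuning.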
For more details, read our white paper.
Enter the AutoML challenge for a rich learning and research experience, and a chance to win!
—Isabelle Guyon, President, ChaLearn; Evelyne Viegas, Director, Microsoft Research; Rich Caruana, Senior Researcher, Microsoft Research