Virtual golfers may be eagerly anticipating the upcoming EA Sports PGA Tour for Xbox One, which is slated to include Kinect for Windows motion controls. But real golfers—the ones who actually hit the links—don’t have to wait to enjoy the golfing benefits of Kinect. They can use the power of the Kinect sensor to capture, analyze, and improve their golf swing, thanks to two golf-swing analysis products from Belgium-based Guru Training Systems: My Swinguru and Swinguru Pro.
Designed for use by serious amateur golfers, My Swinguru detects flaws in the user’s golf swing and offers remedies. The golfer simply takes a swing in front of the Kinect sensor, which captures the entire motion in 3D. There are no wires or markers to interfere with the golfer’s swing—just one Kinect sensor that provides state-of-the-art, three-dimensional motion capture.
Designed for use by amateur golfers, My Swinguru uses a single Kinect sensor to capture the golfer's swing in three dimensions.
The data collected by the Kinect sensor is crunched by a Windows PC running the Swinguru software, which uses a unique combination of synchronized 2D and 3D captures to measure key elements of the swing at any moment, something not possible with traditional video techniques. The golfer receives immediate feedback that detects flaws and recommends remedial drills. And My Swinguru automatically records the golfer’s swings for comparative replay.
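To give a sense of the kind of measurement the software performs, here is a minimal sketch, not the actual Swinguru code, of one such swing element: shoulder rotation derived from two hypothetical 3D joint positions (meters, in the sensor's coordinate space). The joint coordinates and function name are illustrative assumptions.

```python
import math

def shoulder_turn_deg(left_shoulder, right_shoulder):
    """Rotation of the shoulder line in the horizontal (x-z) plane,
    relative to square-to-target (the x axis), in degrees."""
    dx = right_shoulder[0] - left_shoulder[0]
    dz = right_shoulder[2] - left_shoulder[2]
    return abs(math.degrees(math.atan2(dz, dx)))

# Address position: shoulders square to the target line.
print(shoulder_turn_deg((-0.2, 1.4, 2.0), (0.2, 1.4, 2.0)))   # 0.0
# Top of backswing: shoulder line rotated away from the target (~72 degrees).
print(shoulder_turn_deg((-0.05, 1.4, 1.85), (0.05, 1.4, 2.15)))
```

Because the sensor delivers full 3D positions rather than a flat video frame, an angle like this can be computed at any instant of the swing, which is the advantage over traditional video analysis mentioned above.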
Designed for use by professional golf instructors, Swinguru Pro provides simultaneous top, side, and front views on the same screen. Pause, forward, and back controls allow instructors to drill down frame by frame. Each training session, with all its swing data, is automatically saved, so it can be replayed and compared to earlier or later sessions. The Pro version also allows swing motions to be recorded as a series of pictures, which enables a sophisticated “match-your-posture” function. This function freezes the golfer’s setup, 9 o’clock, and top-of-backswing positions for comparison and direct feedback. In addition, Swinguru Pro provides balance tracking, including a view that shows the golfer’s center-of-mass displacement during the swing. It also includes automated drawing tools, which make it easy for users to compare body position in swing after swing.
As demonstrated in this video, Swinguru Pro provides additional analyses for use by teaching professionals. Like My Swinguru, it uses just one Kinect sensor, which enables wireless motion capture.
Enhanced with Kinect for Windows v2
Initially developed for use with the original Kinect for Windows sensor, both versions of Swinguru have now been adapted for use with Kinect for Windows v2. Guru Training Systems CEO Sébastien Wulf is delighted with the improvements enabled by the new sensor. “The Kinect v2 sensor is a revolution for our use in sports motion analysis. Not only does the v2 sensor use a wider angle time-of-flight camera, which allows us to reduce the minimum distance from the sensor for full body tracking, it also increases the image resolution tremendously, which enables a much enhanced user experience. What’s more, its new infrared time-of-flight depth sensor, combined with its new infrared illuminator, makes it so much more resistant to direct sunlight for 3D full body tracking.”
With help from Kinect for Windows and Swinguru, this golf season could be your best yet.
The Kinect for Windows Team
In case you hadn't noticed, the Windows Store added something really special to its line-up not too long ago: its first Kinect applications. The ability to create Windows Store applications had been a longstanding request from the Kinect for Windows developer community, so we were very pleased to deliver this capability through the latest Kinect sensor and the public release of the Kinect for Windows software development kit (SDK) 2.0.
The ability to sell Kinect solutions through the Windows Store means that developers can reach a broad and heretofore untapped market of businesses and consumers, including those with an existing Kinect for Xbox One sensor and the Kinect Adapter for Windows. Here is a look at three of the first developers to have released Kinect apps to the Windows Store.
Nayi Disha – getting kids moving and learning
You wouldn’t think that Nayi Disha needs to broaden its market—the company’s innovative, Kinect-powered early education software is already in dozens of preschools and elementary schools in India and the United States. But Nayi Disha co-founder Kartik Aneja is a man on a mission: to bring Nayi Disha’s educational software to as many young learners as possible. “The Windows Store gives us an opportunity to reach beyond the institutional market and into the home market. What parent doesn’t want to help their child learn?” asks Aneja, somewhat rhetorically. In addition, deployment in the Windows Store could help Nayi Disha reach schools and daycare centers beyond those in the United States and India.
Parents and teachers who discover Nayi Disha in the Windows Store will be impressed by its creative approach to learning. Based on Howard Gardner’s widely acclaimed theory of multiple intelligences, Nayi Disha appeals to children who learn best through movement, music, and storytelling. Each lesson teaches an important skill, such as counting or recognizing common foods.
Comparisons with Kaju, available in the Windows Store, teaches children about number values. Here, we see "Gator" swimming through the app's main menu.
These lessons are imparted through stories featuring Kaju, a friendly space-traveling alien, whose adventures and misadventures get the kids up and moving—and learning. For example, in one story Kaju is ejected from his spaceship and lands on an interstellar number line. To return to the spacecraft, he must jump sequentially from digit to digit, until he reaches a specified number. But here’s the rub: Kaju only jumps if the kids jump and call out the correct numbers. This, of course, is where the Kinect sensor comes into play. The sensor sees the children jumping and hears them counting, and Kaju responds accordingly. Watching a roomful of preschoolers joyfully leap and count as they work to get their alien friend back to his space capsule, you can see how Nayi Disha makes learning fun. The youngsters are acquiring important skills, but all they know is they’re having fun. In fact, their identification with Kaju is so strong that one little girl referred to Aneja as “Kaju’s papa.”
YAKiT: bringing animation to the masses
It doesn’t take much to get Kyle Kesterson yakking about YAKiT—the co-founder and CEO of the Seattle-based Freak’n Genius is justifiably proud of what his company has accomplished in fewer than three years. “We started with the idea of enabling anybody to create animated cartoons,” he explains. But then reality set in. “We had smart, creative, funny people,” he says, “but we didn’t have the technology that would allow an untrained person to make a fully animated cartoon. We came up with a really neat first product, which let users animate the mouth of a still photo, but it wasn’t the full-blown animation we had set our sights on.”
Then something wonderful happened. Freak’n Genius was accepted into a startup incubation program funded by Microsoft’s Kinect for Windows group, and the funny, creative people at YAKiT began working with the developer preview version of the Kinect v2 sensor.
Now, Freak’n Genius is poised to achieve its founders’ original mission: bringing the magic of full animation to just about anyone. Its Kinect-based technology takes what has been highly technical, time consuming, and expensive and makes it instant, free, and fun. The user simply chooses an on-screen character and animates it by standing in front of the Kinect v2 sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the “animator’s” every twitch, jump, and gesture, translating them into movements of the on-screen character. What’s more, with the ability to create Windows Store apps, Kinect v2 stands to bring Freak’n Genius’s full animation applications to countless new customers.
YAKiT's Kinect-powered app makes it possible for anyone to create humorous animations in real time.
“When we tested the Kinect-based product with users, they loved it,” says Kesterson. “We had a couple of teenaged girls create animated foods—like apples and broccoli—for a school report on nutrition. They got so into the animation that soon they were making fruit-attacking zombies, and before we knew it, they were on the floor from laughing so hard. Their mother said to me ‘I’ve got to get this.’ That’s when I knew that we’d have a winner in the Windows Store.”
3D Builder: commoditizing 3D printing
As any tech-savvy person knows, 3D printing holds enormous potential—from industry (think small-batch manufacturing) to medicine (imagine “bio-printing” of body parts) to agriculture (consider bio-printed beef). Not to mention its rapid emergence as source of home entertainment and amusement, as in the printing of 3D toys, gadgets, and gimcracks. It was with these capabilities in mind that, last year, Microsoft introduced the 3D Builder app, which allows users to make 3D prints easily from a Windows 8.1 PC.
Now, 3D Builder has taken things to the next level with the incorporation of the Kinect v2 sensor. “The v2 sensor generates gorgeous 3D meshes from the world around you,” says Kris Iverson, a principal software engineer in the Windows 3D Printing group. “It not only provides precise depth information, it captures full-color images of people, pets, and even entire rooms. And it scans in real scale, which can then be adjusted for output on a 3D printer.”
3D Builder uses Kinect v2 to create accurate, three-dimensional models, ready for 3D printing.
Beyond its scanning fidelity, the Kinect-enabled version of 3D Builder also lets users automatically refine and repair their 3D models prior to printing. In addition, it gives them the power to manipulate the print image, combining or deleting objects or slicing them into pieces. Users can see the reconstruction as it happens and revise it on the fly.
The Kinect-enabled version of 3D Builder is now available in the Windows Store, opening up the enhanced possibilities of three-dimensional printing to both home users and a wider audience of professionals. For home users, the app enables the creation of 3D portraits and gadgets that are more realistic. Hobbyists can print through their Windows 8.1 computer directly to their own 3D printer, or they can send the data via the cloud to a 3D printing service. For professionals, most of whom will likely use cloud-based printing services, 3D Builder offers the potential to print in a range of materials, including plastics, ceramics, and metals.
While home enthusiasts seem the most likely first adopters of the new app, the appeal to professionals is clear. For example, Iverson recounts an experience when he was showing 3D Builder at Maker Faire New York last year. An event planner asked him where she might get the app, which she mused would be perfect for creating 3D mementos at weddings and bar mitzvahs. To Iverson, this is just the tip of the iceberg. “The Kinect v2 version of 3D Builder and its availability in the Windows Store really puts the pieces together, making a complex technology super simple for anyone.”
Nayi Disha, YAKiT, and 3D Builder represent just a thin slice of the potential for Kinect apps in the Windows Store. Whether the apps focus on education, entertainment, or practical tools, as in these three vignettes, or on healthcare, manufacturing, retailing, or other purposes, Kinect v2 and the Windows Store offer a new world of opportunity for both developers and users.
Swedish neurologists and software developers have teamed up to create a potentially groundbreaking way to monitor patients with Parkinson’s disease. The secret ingredient? Kinect for Windows.
Softronic, an IT management and consulting company headquartered in Stockholm, joined forces with the renowned Karolinska University Hospital to create an easy-to-use, affordable way to follow up remotely with Parkinson’s patients. Taking advantage of Kinect’s depth sensing, high-definition video, and skeletal tracking, the application can assess five movements based on the Unified Parkinson’s Disease Rating Scale (UPDRS), a widely used tool for assessing the status of Parkinson’s patients. For example, the Kinect application measures finger taps (tapping the thumb with the index finger in rapid succession), leg agility (tapping the heel on the ground in rapid succession while raising the leg), and rapid alternating movements of the hands (simultaneously moving the hands vertically and horizontally). The software analyzes the movement data and presents the results in an interface for the physician’s assessment.
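The finger-tap assessment, for instance, reduces to counting open-close cycles in the distance between two tracked fingertips over time. The sketch below is a hypothetical illustration of that idea, not Softronic's implementation; the distance series, threshold, and function name are assumptions.

```python
def count_taps(distances_m, closed_below=0.02):
    """Count tap cycles in a series of thumb-to-index distances (meters),
    sampled at the sensor's frame rate. A tap is a transition from
    open (above the threshold) to closed (below it)."""
    taps, was_open = 0, True
    for d in distances_m:
        if was_open and d < closed_below:
            taps += 1
            was_open = False
        elif d >= closed_below:
            was_open = True
    return taps

# A short, illustrative capture with four open/close cycles.
series = [0.06, 0.01, 0.05, 0.01, 0.06, 0.01, 0.05, 0.01, 0.06]
print(count_taps(series))  # 4
```

Combined with the frame timestamps, the same data yields tap rate and amplitude, the kind of quantitative result the physician reviews in the interface.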
The project's interface helps the physician easily assess the patient's status.
Parkinson’s patients routinely visit their physician just once or twice a year, but the Kinect-based application lets doctors remotely monitor patients on an as-needed basis. It thus enables the physician to focus on those patients whose condition merits additional remote follow-up and perhaps a change in medication—or even a trip to the clinic or hospital.
The system measures the patient's finger-tap ability, one of the standard measurements used to assess the motor function of Parkinson's patients.
Another advantage of the system is its affordability, the Kinect sensor being far less costly than standard telemedicine equipment. Moreover, the presence of a Kinect sensor in the household does not indicate that someone is suffering from a disease, a comfort to patients who prefer to keep their condition private.
The system is currently undergoing clinical trials in Sweden, and the doctors and developers are already exploring additional functionality, particularly gamification, which could make rehabilitation exercises more fun. They are also looking into using the system to help educate physicians about Parkinson’s disease and to provide data for second opinions.
NuiStar, a software development company based in Nanjing, China, has utilized the Kinect sensor and SDK (software development kit) to bring cutting-edge technology to China’s schools and community centers. The centerpiece of the company’s offerings is their natural user interface, or NUI, which is built around the Kinect sensor’s ability to recognize and respond to users’ natural gestures and voice commands. This user-friendly capability truly makes NUI the star of the NuiStar software.
In NuiStar’s Joystar games, children use intuitive body movements to "play" their way through educational content. Watch the NuiStar video.
The value of natural human-computer interactions is illustrated dramatically in NuiStar’s educational products, which provide digital content for preschool and elementary school levels. Consider the company’s Joystar Preschool Game Series, a collection of 20 game-like apps that encourage youngsters to use intuitive body movements to “play” their way through the content, thereby engaging the students in an immersive and enjoyable learning process.
The body-controlled interactions come naturally to the children, thus reducing the time they spend learning how to play the game and providing educators with a robust and easy-to-implement teaching tool. Moreover, the interface is innately engaging and satisfying to young students, who enjoy being active in the classroom. And Joystar’s games support multiple players, a feature that not only boosts participation rates but also builds cooperative and team-based skills.
In addition, since these Kinect-based, gamified apps get students up and moving, they help to meet China’s requirement that physical activity be integrated into the curriculum. The whole-body engagement inherent in NuiStar’s educational content keeps students physically active even when weather or time constraints preclude traditional outdoor exercises. In initial assessments, a group of 300 preschoolers using NuiStar’s software showed a 33% increase in average exercise time. NuiStar is now working with teachers to implement more robust methodologies to evaluate the efficacy of the games’ educational content.
The NuiStar team has also developed two Kinect-enabled, gesture-controlled training apps for students: one on fire safety and the other on pedestrian safety. The fire safety app tests the student’s knowledge of how to evacuate a burning school and monitors his or her behaviors during a simulated fire in a school building. In the pedestrian safety app, students must walk through a simulated city street scene, practicing traffic safety rules in this virtual-reality environment. By enabling students to respond naturally by using gestures, these pilot apps help them learn how to cope with dangerous situations through an interactive trial-and-error method that is more vivid and engaging than passive instructional methods.
While enhancing learning is NuiStar’s current focus, the company is also bringing its NUI technology to the community technology centers that have become commonplace in China’s towns. These centers are designed to provide local residents with access to technology and services that are otherwise out of their reach. But because many rural residents and older citizens are unfamiliar with modern technology, center employees often spend a great deal of time showing them how to use the hardware and software and putting them at ease with technology. NuiStar’s Kinect-based programs address these issues in a novel way, reassuring new users with an innately natural method of interaction. To new users, waving an arm or making a gesture is infinitely more accessible than typing on a keyboard or using a mouse to navigate menu options.
NuiStar's Kinect-enabled gesture controls help patrons at a community tech center intuitively navigate unfamiliar technology. The patrons here are watching a video of the equestrian event from last summer’s Youth Olympic Games in Nanjing.
Recently, NuiStar added a content management interface tool that lets teachers and community center employees integrate their current multimedia resources into the Kinect system. This further reduces the learning curve for students and local residents, making technology more accessible and useful.
From the classroom to the community center, NuiStar is working to make technology intuitive and interactive for children and adults.
The following blog was guest authored by Kyle Banuelos, co-founder of Stublisher Inc., a startup that engineers interactive experiences for cutting-edge campaigns, events, spaces, and connected devices.
Mini-computers, powerful microcontrollers, and sensors (such as Kinect) are becoming increasingly affordable and customizable. Frameworks and libraries built for these technologies are typically open source and written in approachable, high-level languages. This fundamental democratization of technology has spawned a generation of “creative technologists,” hybrid artists/coders whose role is largely experimental. As budding members of this creative coding community, we at Stublisher develop software that thrives at the interplay of physical environments and human participation. These projects range from unique brand activations to purely artistic endeavors.
We are fascinated by the participatory experiences Kinect makes possible, because we believe that interactivity as a medium spawns understanding, significance, and lasting takeaways. We’re particularly focused on harnessing the artistic potential of the Kinect sensor to further our mission of connecting people through shared experience. This objective is the basis for everything we create, including our latest Kinect for Windows projects: Branch and Pinscreen. Both of these Kinect-enabled experiences fostered individual and collaborative exploration, interpretation, and expression.
Stublisher was commissioned to build an interactive art piece for a 2014 Halloween extravaganza in Portland, Oregon. The result was Branch—an evolving, illusionary light sculpture that tracked and outlined viewers’ bodies via a Kinect sensor, using the information to create an interactive, explorative experience. From the outset, we focused our conceptualization around three specific questions:
Through our initial conversations, we arrived at a central theme that drove the key aesthetic attributes and development of our piece: altered perception.
Technically, a single Processing sketch that included a rudimentary mapping utility allowed us to position a series of virtual LED strands on a two-dimensional plane. Information from the Kinect data stream was returned in real time, updating the actual LED strands, which were driven by a network of LED-specific microcontrollers. A computer was housed inside a custom-fabricated, matte black enclosure, where the Kinect sensor was discreetly positioned.
In addition to transforming this otherwise static work into an expressive canvas by tracking viewers’ movements, Kinect for Windows empowered people to find meaning in Branch through connection with their peers around a shared interest in dance expression.
Recently, Portland organized its inaugural Startup Week, a five-day celebration of our local community. A series of independently organized and managed events took place across the city, and we hosted the closing party at Stublisher’s headquarters. With world-renowned DJ MICK on the decks, we wanted to further encourage community celebration and concert participation within our space.
This was a great opportunity to exhibit and gather feedback on a smaller, internal experiment that we had hacked together, which was inspired by pinscreen animation—a technique that uses a surface filled with movable pins to create textures and shapes. The system that powered Pinscreen was built in TouchDesigner, using the depth data from a Kinect v2 sensor to drive the motion of hundreds of particles through a 3D scene. Incoming data was smoothed to create fluid, procedural motion that subtly augmented the architecture of our office.
Pinscreen resonated strongly with the audience, inspiring choreographed, collaborative manipulation of the piece. It taught us that sometimes the simplest applications can not only delight, but achieve our objective: to connect people through shared experience.
The various experimental and practical applications of the Kinect hardware by the creative coding community serve as continuous inspiration for our own interactive experiments at Stublisher, and we’re eager to further explore Kinect’s potential for cultivating shared experiences.
Kyle Banuelos, Co-founder, Stublisher
We’ve grown accustomed to seeing the Kinect sensor put to amazing uses in healthcare—from stroke rehabilitation, to fall prevention, to seizure monitoring—but even we were surprised to learn that Kinect for Windows is now being used to watch the doctor! That’s right: researchers at the University of California, San Diego (UCSD) are using the Kinect for Windows sensor as part of an innovative tool to monitor how physicians interact with patients during consultations.
The project, called Lab-in-a-Box, is the brainchild of UCSD researcher Nadir Weibel and his colleagues at the San Diego Veterans Affairs (VA) Medical Center. Designed to fit unobtrusively in a doctor’s office, the apparatus detects when physicians are so focused on their computer screen that they fail to establish eye contact and one-to-one rapport with the patient. The Kinect sensor plays a key role in the process, as its depth camera accurately records the movements of the physician’s head and body. An independent eye-tracker device detects the doctor’s gaze, while a microphone picks up the doctor-patient conversation. The Lab-in-a-Box software merges all of these data streams and synchronizes them with data on the doctor’s computer usage, combining everything to give a detailed picture of the physician’s interaction with the patient.
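Merging those independently captured streams comes down to interleaving their timestamped events into one timeline. The sketch below illustrates the idea with Python's standard `heapq.merge`; the stream contents, source names, and event labels are invented for illustration and are not Lab-in-a-Box's actual data format.

```python
import heapq

def merge_streams(*streams):
    """Merge independently captured event streams (each a list of
    (timestamp_seconds, source, event) tuples, already time-ordered)
    into a single timeline for joint analysis."""
    return list(heapq.merge(*streams, key=lambda e: e[0]))

# Hypothetical snippets from the three sensing channels:
gaze  = [(0.00, "eye-tracker", "screen"), (1.50, "eye-tracker", "patient")]
depth = [(0.03, "kinect", "head-turn"),   (1.48, "kinect", "lean-forward")]
usage = [(0.10, "ehr-log", "open-chart")]

timeline = merge_streams(gaze, depth, usage)
for t, src, ev in timeline:
    print(f"{t:5.2f}s  {src:12s} {ev}")
```

A unified timeline like this is what lets the researchers ask questions such as "where was the doctor looking while the chart was open?"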
The Kinect sensor captures a video and depth image; the latter is then overlaid with joint and gaze estimation based on yaw, roll, and pitch.
Weibel, who is a research assistant professor in the Department of Computer Science and Engineering, devised the Lab-in-a-Box in part to help physicians deal with the ever-expanding barrage of digital medical records. With so much digital information about the patient, doctors can find themselves staring at their computer or tablet screen, instead of focusing attention on the patient.
The Lab-in-a-Box, which will be used only with the consent of both the doctor and the patient, is currently being tested at the UCSD Medical Center and the San Diego VA Medical Center as part of a study funded by the Agency for Healthcare Research and Quality and directed by physician and researcher Zia Agha. Its developers hope that it will help medical personnel run their practices more efficiently and with greater doctor-patient rapport.
The Kinect for Windows Team
You can read about the improvements that Kinect for Windows v2 offers over its predecessor, but seeing the differences with your own eyes is really, well, eye-opening—which is why we’re so pleased by this YouTube video posted by Microsoft MVP Josh Blake of InfoStrat. In it, Blake not only describes the improvements in skeletal tracking provided by the v2 sensor and the preview SDK 2.0 (full release of SDK 2.0 now available), he actually demonstrates the differences by showing side-by-side comparisons of himself and others being tracked simultaneously with the original sensor and the more robust v2 sensor.
As Blake shows, the v2 sensor tracks more joints, with greater anatomical precision, than the original sensor. His video also highlights the major improvements in hand tracking that the v2 sensor and SDK 2.0 provide, and, with the help of two colleagues, he demonstrates how Kinect for Windows v2 can track more bodies than was possible with the original sensor and prior releases of the SDK.
When asked how the improved skeletal-tracking capabilities can be utilized, Blake responded, “It helps improve several different scenarios. The more accurate anatomical precision is particularly useful in health and rehabilitation apps, as well as for controlling virtual avatars more accurately.” He also finds great potential in the enhanced hand-tracking capabilities, noting that “recognizing the two-finger point pose in addition to the hand open and hand closed poses means we have more options for developing interesting deep interactions.”
Finally, Blake points out that the ability to track the movements of up to six individuals will be valuable in a variety of situations, such as showroom scenarios or workplace applications that involve multiple people. “Before, users had a hard time understanding why the application would respond to two people but not more, or how to get it to switch to a new person,” he says. “The support for six full skeletons also means that we don’t have to compromise in how many people can interact with an application or experience at once.”
Telemedicine has become one of the hot trends in healthcare, with more and more patients and doctors using smartphones and tablets to exchange medical information. The convenience of not having to travel to the doctor’s office or clinic is a big part of the appeal—as is the relief of not wasting valuable time thumbing through outdated waiting-room magazines when an appointment runs late. And for patients living in isolated or underserved areas, telemedicine offers care that might otherwise be unattainable. Despite these advantages, telemedicine can be coldly impersonal, lacking the comfort of interacting with another human being.
Silicon Valley-based Sense.ly is working to bring a human face to telemedicine. The company’s Kinect-powered “nurse avatar” provides personalized patient monitoring and follow-up care—not to mention a friendly, smiling face that converses with patients in an incredibly lifelike manner. The nurse avatar, affectionately nicknamed Molly, has access to a patient’s records and asks appropriate questions related directly to the patient’s past history or present complaints. She has a pleasant, caring demeanor that puts patients at ease. Interacting with her seems surprisingly natural, which, of course, is the goal.
With the help of Kinect for Windows technology, Sense.ly's nurse avatar, called Molly, can respond to patients’ speech and body movements.
By using Kinect for Windows technology, Sense.ly enables Molly to recognize and respond to her patient’s visual and spoken inputs. The patient stands or sits in front of a Kinect sensor, which captures his or her image and sends it to Molly. Does the patient have knee pain? She can show Molly exactly where it hurts. Is the patient undergoing treatment for bursitis that limits his range of motion? He can raise his affected arm and show Molly whether his therapy is achieving results. In fact, the Kinect sensor’s skeletal tracking capabilities allow Sense.ly to measure the patient’s range of motion and to calculate how it has changed from his last session. What’s more, with Kinect providing a clear view of the patient, Molly can help guide him or her through therapeutic exercises.
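A range-of-motion comparison like the one described is, at its core, a small geometric calculation on tracked joint positions. Here is a minimal sketch of how arm elevation might be derived from hypothetical shoulder and wrist coordinates (meters, sensor space); the numbers and function name are illustrative assumptions, not Sense.ly's code.

```python
import math

def arm_elevation_deg(shoulder, wrist):
    """Angle of the arm above horizontal, from shoulder and wrist
    positions in the sensor's 3D space (meters)."""
    dx = wrist[0] - shoulder[0]
    dy = wrist[1] - shoulder[1]
    dz = wrist[2] - shoulder[2]
    horizontal = math.hypot(dx, dz)
    return math.degrees(math.atan2(dy, horizontal))

# Compare the same raise across two hypothetical sessions.
last_session = arm_elevation_deg((0.2, 1.4, 2.0), (0.75, 1.55, 2.0))
this_session = arm_elevation_deg((0.2, 1.4, 2.0), (0.60, 1.80, 2.0))
print(f"gain: {this_session - last_session:.1f} degrees")
```

Storing one such number per session is enough to show a patient, or his doctor, whether therapy is working.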
A growing number of doctors and hospitals are recognizing the value of applications such as Sense.ly. In fact, the San Mateo Medical Center is one of several major hospitals that have recently added Molly to their staff, so to speak. The value of such solutions is particularly striking in handling patients who suffer from long-term conditions that require frequent monitoring, such as high blood pressure or diabetes.
Solutions like Sense.ly also provide a clear cost benefit for providers and insurers, as treating a patient remotely is less costly and generally more efficient than onsite care. In a recent pilot program, the use of Sense.ly reduced patient calls by 28 percent and freed up nearly a fifth of the day for the clinicians involved in the program.
Most importantly, Sense.ly’s Kinect-powered nurse avatar offers the promise of better health outcomes, the result of more frequent medical monitoring and of patients’ increased involvement in their own care. Something to think about the next time you’re stuck in the doctor’s waiting room.
Visitors to Seattle’s 2014 Decibel Festival expected the avant-garde. After all, this four-day event celebrated innovations in electronic music, visual art, and new media. But even the most jaded attendees must have been surprised to encounter the Cube, a 4-foot-square block of transparent acrylic that catapulted them into an interactive dance party.
The Cube uses four Kinect v2 sensors to detect the dancers and trace their movements, integrating them into the visual experience. (Photo: Scott Eklund)
Powered by five computers and four Kinect v2 sensors working together, the Cube drew in curious onlookers, capturing their images and incorporating them into the installation. As participants stood in front of it, the Cube reacted, pulsating to music and tracing the movements of those around it. The Kinect sensors could detect up to three people on each side of the Cube. And thanks to the transparent nature of the structure, participants could see others through the Cube, so a dancer on one side of the Cube could react to the movements of a partner on another side. In fact, the hands of dancers on opposite sides appeared to be linked by virtual ribbons. Their individual dance moves thus merged into sinuous visual collaborations, enabling the Cube to create a virtually connected dance whose participants were in different physical spaces.
A key technical challenge in creating the Cube was to link together the four Kinect sensors inside the structure, so that the devices could “talk” to each other. Abram Jackson, a program manager with Microsoft Exchange Server who helped with the technical engineering, described the problem. “We had to take all four of the Kinect sensors, map out a cohesive view of the room to keep track of where the people were, even if they changed to a different sensor, so the images displayed on the Cube would still make sense to that person,” he explains.
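Building that cohesive view means calibrating each sensor's pose once and then mapping every tracked position into a shared room coordinate system, where detections from different sensors can be matched up. The sketch below illustrates the idea on the floor plane with two hypothetical sensors; the yaw angles, positions, and matching radius are invented for illustration, not the Cube's actual calibration.

```python
import math

def to_room(point, yaw_deg, translation):
    """Map an (x, z) floor-plane position from one sensor's local space
    into shared room coordinates, given the sensor's mounting yaw and
    position (a one-time calibration per sensor)."""
    x, z = point
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    tx, tz = translation
    return (c * x + s * z + tx, -s * x + c * z + tz)

def same_person(p_room, q_room, radius=0.5):
    """Treat detections from two sensors as the same person if their
    room-space positions agree to within half a meter."""
    return math.dist(p_room, q_room) < radius

# Sensor A faces +z from the room origin; sensor B faces the opposite
# way from (0, 4). A person standing at roughly (1, 2) in the room:
a = to_room((1.0, 2.0), 0, (0.0, 0.0))
b = to_room((-1.05, 2.1), 180, (0.0, 4.0))
print(same_person(a, b))  # True
```

With every sensor reporting into the same frame, a person who walks around a corner of the Cube is simply handed off to whichever sensor sees them next.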
The innovative dance-party coding work was done in conjunction with Stimulant, a Seattle-based digital design firm. The Cube is thus a prime example of the kind of innovation that occurs when the creative development community takes hold of the Kinect v2 hardware and its SDK (software development kit). As Rick Barraza, senior technical evangelist at Microsoft, observed, “It’s about evolutionary innovation versus revolutionary innovation. We won’t reach that next level until we encourage creativity.” Barraza and his colleagues actively encourage hackers of all stripes—developers, designers, art directors, and hobbyists—to experiment with Microsoft products. He even organized an Ambient Creativity Hackathon this past summer, which inspired an eclectic group of hackers to let their imaginations soar during three days of experimentation with Kinect for Windows v2.
What’s next in this process of evolutionary innovation? Well, for the Cube, the next goal is to scale up to an even bigger size, or to link multiple Cubes together so they can communicate with one another. Whatever happens, we’ll be eager to report on it!
Angles are all around us: the hands of a clock, the blades of a pair of scissors, the corner of a countertop. But while angles are ubiquitous in our environment, acquiring an abstract understanding of angles and their measurement, which is critical to our ability to solve geometric and trigonometric problems, is challenging for many youngsters. Happily, recent research using a Kinect sensor shows promise in helping elementary-school students rise to this challenge.
Carmen Petrick Smith, assistant professor of mathematics education at the University of Vermont (center), works with undergraduate education majors (left to right) Tegan Garon, Sam Scrivani, and Kiersten Barr on movements that are used to help elementary school children learn geometry. (Credit: Andy Duback; used by permission)
Professor Carmen Petrick Smith at the University of Vermont and Professor Barbara King at Florida International University exposed 20 third- and fourth-grade students to a Kinect-enabled learning experience designed to help the youngsters discover key properties of angles by moving their own bodies. The students’ understanding of angles was measured by a test administered prior to the Kinect experience. With that baseline data in place, the students positioned themselves in front of the Kinect sensor and made various angles with their arms.
The Kinect sensor captured image and depth data about the students' arm positions, using the data to create an onscreen graphic representation in which arrows duplicated the angle formed by a student’s arms. In addition, the program turned the monitor screen one of four colors, depending on whether the student’s arms formed an acute, right, obtuse, or straight angle. Next, the experience changed the lengths of the arrows in the graphic representation and added a virtual protractor that measured the angle in the graphic display. These visual cues helped students see that the degree of an angle is not dependent on the length of its “arms” (a common misconception among young pupils). The addition of the protractor also helped the students acquire a sense of how angles are measured in degrees.
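The core of this feedback loop — measuring the angle a student’s arms form and bucketing it as acute, right, obtuse, or straight — is simple geometry on the tracked joint positions. A minimal sketch follows; the joint layout (a vertex between two hand positions) and the 5-degree tolerance are illustrative assumptions, not details from the study.

```python
import math

def arm_angle(vertex, hand_a, hand_b):
    """Angle in degrees at `vertex` between the rays toward each hand,
    computed from the dot product of the two arm vectors."""
    ax, ay = hand_a[0] - vertex[0], hand_a[1] - vertex[1]
    bx, by = hand_b[0] - vertex[0], hand_b[1] - vertex[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def classify(deg, tol=5.0):
    """Map a measured angle to a feedback category (and hence a screen
    color); `tol` is an assumed tolerance for 'right' and 'straight'."""
    if abs(deg - 90) <= tol:
        return "right"
    if abs(deg - 180) <= tol:
        return "straight"
    return "acute" if deg < 90 else "obtuse"
```

Note that the classification depends only on the angle, never on the arm-vector lengths, which mirrors the lesson the visual cues were designed to teach.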
Following the Kinect experience, the students were again tested on their understanding of angles. Overall, students showed a statistically significant improvement after the angle activity. The Kinect program logged information about the students’ arm movements at a rate of 30 frames per second, resulting in an average of 6,619.6 recorded frames per student. This copious, detailed data was crucial for analyzing the students’ learning and how their comprehension was (or in some cases, wasn’t) aided by the body movements.
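A quick sanity check on the figures quoted above shows how much tracked movement those frame counts represent per student:

```python
# Figures quoted in the post.
FRAMES_PER_SECOND = 30
AVG_FRAMES_PER_STUDENT = 6619.6

# At 30 fps, the average frame count implies roughly 220 seconds
# (about 3.7 minutes) of captured arm movement per student.
seconds = AVG_FRAMES_PER_STUDENT / FRAMES_PER_SECOND
minutes = seconds / 60
print(f"~{seconds:.0f} s (~{minutes:.1f} min) of tracked movement per student")
```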
The idea that body movements can enhance cognitive understanding has been established in a number of experiments, and this study provides further evidence of the value of incorporating kinesthetic learning in the curriculum. We’re delighted to see yet another example of how the Kinect sensor can be a valuable educational tool.
The Kinect for Windows Team