Samsung SUR40 with Microsoft PixelSense (samsunglfd.com/solution/sur40.do)
When David Brown comes to town you know you're in for a treat. He is a recognized leader in Surface application design and development, going back to the early days of Microsoft Surface 1.0. His latest project "NUIverse," a whimsical play on words inviting people to explore the universe through natural user interface (NUI), is an amazing example of the kind of applications that can only be fully realized on the Samsung SUR40 for Microsoft Surface.
Last month David shared his time with us to demonstrate his NUIverse project, updated for the Microsoft Surface 2.0 platform. It's hard to convey in a blog video just how visually stunning this application looks, but even more amazing is how easy he makes it to control the complexity of the Solar System and night sky. Every time we check in with David, NUIverse gains new features and interaction refinements -- far too many to show at once. This application is a must-have for any outer space enthusiast.
Be sure to check out our video deep dive into NUIverse on the Surface YouTube Channel. To learn more about NUIverse and David Brown's past Surface projects, please check out his blog at http://drdave.co.uk/blog
A month or two ago we published the Tagged Objects for Surface 2.0 Whitepaper. It covers best practices for using tags to detect physical objects on the Microsoft Surface device.
That said, there are scenarios where you don't need to identify a large number of objects. You just want to track an object or two, but you want that tracking to be highly reliable.
You could consider creating your own custom tag format that is more forgiving (i.e. bigger dot sizes, redundant bits of information, more spacing between the dots, etc.), but this approach has drawbacks of its own.
A much simpler way to achieve something similar is to lean on Microsoft Surface's ability to track blobs and report each blob's size. Microsoft Surface does the image processing for blob, finger, and tag detection in hardware, so it happens much faster than any software-based algorithm.
In the video below I show how to create an object that has two reflective regions of different sizes. This allows me to compute the orientation and position of the object. I calculate the orientation by taking the arctangent of the vector between the centers of the two blobs (hint: use atan2, not atan, since we want a full 360-degree orientation). I calculate the center of the object as the midpoint between the centers of the two blobs.
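The geometry above is straightforward to sketch out. Here is a minimal, language-agnostic illustration in Python (the actual Surface 2.0 code is C# against the SDK's contact events; the blob representation and function name below are my own invention, not part of the SDK): the larger blob is treated as the object's reference point so the orientation is unambiguous over a full 360 degrees.

```python
import math

def object_pose(blob_a, blob_b):
    """Compute (center, orientation_degrees) for an object marked with
    two reflective regions of different sizes.

    Each blob is ((x, y), size). The larger blob is treated as the
    "tail" of the direction vector, so the orientation is unambiguous
    over a full 360 degrees -- which is why atan2 is needed, not atan.
    """
    # Order the blobs by size so the direction vector is consistent.
    large, small = sorted([blob_a, blob_b], key=lambda b: b[1], reverse=True)
    (lx, ly), (sx, sy) = large[0], small[0]

    # Object center: midpoint between the two blob centers.
    cx, cy = (lx + sx) / 2.0, (ly + sy) / 2.0

    # Orientation: angle of the large-to-small vector, full 360 degrees.
    angle = math.degrees(math.atan2(sy - ly, sx - lx))
    return (cx, cy), angle

# Large blob at (0, 0), small blob at (10, 0):
print(object_pose(((0, 0), 8.0), ((10, 0), 3.0)))
# -> ((5.0, 0.0), 0.0)
```

Note that atan alone would collapse opposite headings (e.g. 45 and 225 degrees) into the same value; atan2 keeps the sign of both vector components and so distinguishes them.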
For those interested, I have posted the code to http://code.msdn.microsoft.com/Tracking-objects-without-c68cc31a