March, 2010 - Microsoft PixelSense Blog - Site Home - MSDN Blogs
Blog Entries
  • Microsoft PixelSense Blog

    GDC 2010: Creating multi-touch, multi-user games on Surface

    • 0 Comments

    Are you into game design? Have you been designing multi-touch games? Microsoft Surface is a commercial multi-user platform whose end users are the same audience you create games for on other devices. Games on Surface bring two, four, six or more people together in one social space. They are a perfect fit for the entertainment and hospitality industries, where you'll find Surface experiences today.

    Come see us at GDC. Our session info:

    When: Thursday, March 11, 2010 9–10am
    Location: Room 301, South Hall
    Track: Game Design

    In the session we’ll go over how Surface is positioned for game creators and what it takes to create games for this amazing platform that works with massive multi-touch, multiple players and real-world objects. Joining Microsoft in this session will be students from Carnegie Mellon University showing their table gaming concept based on Dungeons & Dragons by Wizards of the Coast; and our Surface Strategic Partner Vectorform showing games they’ve created for retail and hospitality.

    We hope to see you there!

    - Eric (follow Surface on Twitter and Facebook)

    Galactic Alliance, by Vectorform
    SurfaceScapes proof-of-concept, by Carnegie Mellon University; Wizards of the Coast, Dungeons & Dragons, and D&D are trademarks of Wizards of the Coast LLC.  Wizards of the Coast’s trademarks and logos are used with permission. © 2010 Wizards of the Coast LLC
    Drift-N-Drive, by Vectorform
  • Microsoft PixelSense Blog

    Using the RawImage in a WPF application.

    • 3 Comments

    One common question I hear is:

    How can you consume the raw image provided by the core APIs in a WPF application? How do you display it on the screen?

     

    Here is a simple answer to that question. I extracted the code below from the code I used to "process the image" created by the cup in the monster demo. The code is provided "AS IS" – use at your own risk.

    Follow these steps:

    1. Add a reference to the Microsoft.Surface.Core assembly to your project.

    2. After your SurfaceWindow is initialized, do the necessary setup so that you can start receiving raw image events and data.

     

        // Copyright © Microsoft Corporation.  All Rights Reserved.
        // This code is released under the terms of the
        // Microsoft Public License (MS-PL, http://opensource.org/licenses/ms-pl.html).

        // Using directives this snippet relies on:
        using System;
        using System.Windows.Interop;
        using Microsoft.Surface.Core;

        // Field that holds the input target for the window.
        private ContactTarget contactTarget;

        /// <summary>
        /// Default constructor.
        /// </summary>
        public SurfaceWindow1()
        {
            InitializeComponent();

            SourceInitialized += new EventHandler(InitializeCore);

            // Add handlers for Application activation events.
            AddActivationHandlers();
        }

        void InitializeCore(object sender, EventArgs e)
        {
            // Create a target for surface input, and start
            // receiving normalized raw images.
            contactTarget = new ContactTarget(
                new WindowInteropHelper(this).Handle,
                EventThreadChoice.OnCurrentThread);

            // See step 3 for the definition of OnContactTargetFrameReceived.
            contactTarget.FrameReceived += OnContactTargetFrameReceived;
            contactTarget.EnableInput();
            contactTarget.EnableImage(ImageType.Normalized);
        }

     

    3. Now that you are receiving the raw image, do something with it. In my case, I write it to a WriteableBitmap so that I can use it as the source of an Image in my visual tree. I update the bitmap every 100 milliseconds.

        // Copyright © Microsoft Corporation.  All Rights Reserved.
        // This code is released under the terms of the
        // Microsoft Public License (MS-PL, http://opensource.org/licenses/ms-pl.html).

        // Assuming the following members of the class:
        private byte[] normalizedImage;
        private ImageMetrics normalizedImageMetrics;

        private static WriteableBitmap writeableBitmap;

        // Remembers the last time we showed the raw image.
        private long oldTimeStamp;

        /// <summary>
        /// Handler for the FrameReceived event. Here we get the
        /// raw image data from the FrameReceivedEventArgs object.
        /// </summary>
        /// <param name="sender">ContactTarget that received
        ///  the frame</param>
        /// <param name="e">Object containing information about
        ///  the current frame</param>
        private void OnContactTargetFrameReceived(
            object sender,
            FrameReceivedEventArgs e)
        {
            long now = DateTime.Now.Ticks;

            lock (this)
            {
                long ticksDelta = Math.Abs(now - oldTimeStamp);

                // Update the image every 100 milliseconds
                // (1 millisecond == 10,000 ticks).
                if (ticksDelta > 100 * 10000)
                {
                    int paddingLeft, paddingRight;

                    Rect imageBound = new Rect();
                    imageBound.X = 0;
                    imageBound.Y = 0;
                    imageBound.Width = 1024;
                    imageBound.Height = 768;

                    if (e.TryGetRawImage(
                        ImageType.Normalized,
                        (int)imageBound.Left,
                        (int)imageBound.Top,
                        (int)imageBound.Width,
                        (int)imageBound.Height,
                        out normalizedImage,
                        out normalizedImageMetrics,
                        out paddingLeft,
                        out paddingRight))
                    {
                        WriteToImage(normalizedImage,
                            normalizedImageMetrics);
                        imageToShow.Source = writeableBitmap;
                    }

                    oldTimeStamp = now;
                }
            }
        }

        /// <summary>
        /// The WriteToImage method updates the
        /// WriteableBitmap by using unsafe code to write
        /// each pixel into the back buffer.
        /// </summary>
        /// <param name="image">The raw image</param>
        /// <param name="metrics">The size of the raw image</param>
        static void WriteToImage(byte[] image, ImageMetrics metrics)
        {
            int maxRow = metrics.Height;
            int maxCol = metrics.Width;

            if (writeableBitmap == null)
            {
                writeableBitmap =
                    new WriteableBitmap(
                        maxCol,
                        maxRow,
                        96,
                        96,
                        PixelFormats.Bgr32, null);
            }

            writeableBitmap.Lock();

            unsafe
            {
                for (int y = 0; y < maxRow; y++)
                {
                    for (int x = 0; x < maxCol; x++)
                    {
                        // Use a byte pointer rather than casting to int,
                        // so the back-buffer address is not truncated
                        // on 64-bit processes.
                        byte* pBackBuffer = (byte*)writeableBitmap.BackBuffer;
                        pBackBuffer += y * writeableBitmap.BackBufferStride;
                        pBackBuffer += x * 4;

                        // The normalized image is one byte per pixel;
                        // double the value to brighten it, then copy it
                        // into the Red, Green, and Blue channels respectively.
                        int gray = image[x + y * metrics.Stride] * 2;
                        int color_data = gray << 16;
                        color_data |= gray << 8;
                        color_data |= gray << 0;

                        *((int*)pBackBuffer) = color_data;
                    }
                }
            }

            try
            {
                writeableBitmap.AddDirtyRect(new Int32Rect(0, 0, maxCol, maxRow));
            }
            catch (Exception ex)
            {
                string s = ex.ToString();
            }
            finally
            {
                writeableBitmap.Unlock();
            }
        }
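
    For completeness, here is a minimal sketch of the XAML the code-behind assumes: an Image element named imageToShow whose Source is assigned the WriteableBitmap. Only the imageToShow name comes from the code above; the class and namespace names are hypothetical.

    ```xml
    <!-- Hypothetical markup; only the imageToShow name is taken
         from the code above. -->
    <s:SurfaceWindow x:Class="RawImageSample.SurfaceWindow1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:s="http://schemas.microsoft.com/surface/2008"
        Title="RawImage Sample">
        <Grid>
            <!-- The FrameReceived handler assigns the bitmap to this Source. -->
            <Image x:Name="imageToShow" Stretch="Uniform" />
        </Grid>
    </s:SurfaceWindow>
    ```

    Also note that the unsafe block in WriteToImage requires compiling with /unsafe (in Visual Studio, enable "Allow unsafe code" on the project's Build tab).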

     

     

     

    That's it! I hope this is helpful. Let me know if you have questions.

     

    Luis Cabrera
    Platform Program Manager

    Microsoft Surface

  • Microsoft PixelSense Blog

    Application Design for Microsoft Surface

    • 0 Comments

     

    As we mentioned previously, Microsoft Surface is at CeBIT in Germany. Along with showcasing Surface in education, Microsoft DPE is presenting on how to develop applications for Surface with our partner UID.

    Recently, we caught up with UID to talk about their experiences creating their winning application for the Touch First Contest.


    Here’s what they had to say.

    Surface: Can you describe the winning application from Surface Touch First Contest?

    UID: The UID team created a customizable Portfolio application that showcases a company and its team, competencies, and past projects in an intuitive and innovative way using Surface. The UID Portfolio addresses key customer needs, including quick and easy access to information by projects, methods, industries, and contact persons.

    The gestures are easy to learn and to remember. The concept uses a set of standard gestures familiar from other multi-touch applications and devices, such as rotate, drag & drop, and resize, but a few special gestures have also been developed for this application.

    Using object recognition was also a must: the application has three objects, which are laid on the surface and initiate an action. One of them is the industry cube, another is a character (symbolizing the human being at the center of UI development), and the last is a contact card used by the UID moderator.

    Surface: What was the process for creating this experience? Where did you start and what did you want to accomplish?

    UID: Our aim was to develop an application that combines attractive design, clearly structured information, intuitive gestures, and the novelty of object recognition. To accomplish this, we first created an interaction concept. We tried to find a simple way to represent the content on the Surface unit and to give the user easy access to information. Then each element and its specific visual interaction were designed. Icons, shapes and colours were adjusted to the features of MS Surface.

    To design an application for MS Surface means considering its characteristics. For example, MS Surface has a 360-degree interface that invites users to explore the application together and discuss the topics shown. The application shouldn't define a default orientation. Therefore, each element displayed is available from all sides of the table and can be manipulated directly by the user.

    We developed the concept and design of our application based on the user-centered design process (UCD): user requirements, scribbles, and design alternatives were tested on MS Surface. In parallel with the UCD process, our software engineers started an agile process so they could react flexibly to new requirements. We used Scrum as the process framework for developing the application. A product backlog and regular meetings helped us solve problems, plan further steps, and finish the programming on time. Furthermore, close collaboration between usability engineers, software engineers, and designers is absolutely necessary. During development we had to dismiss or redefine our ideas and plans several times.

    Surface: How has your winning application been put into use?

    UID: This application has been used at exhibitions (e.g. Internet World in Munich), at various conferences, and in customer dialogues. In fact, we are currently using the UID Portfolio application ourselves, enabling potential customers to get information on UID and our services while having a first-hand experience with an example of the type of work we do with Surface. As we specialize in designing, testing, and developing intuitive, innovative, and attractive user interfaces, this claim is reflected in our own presentation.

    Surface: What other applications / experiences would you be interested in creating on Surface?

    UID: The advantages of MS Surface are particularly valuable in the areas of knowledge management and consulting. In our opinion, MS Surface can be used well beyond presentation purposes. Currently, we're exploring other possibilities and are very optimistic that we will work on other projects with MS Surface and its fascinating technology.

    Surface: Would you mind sharing what you will be doing at CeBIT with Surface?

    UID: We are part of the Microsoft Developer Kino (Hall 4, Booth 26). Heiko Lewandowski, Head of Software Engineering at UID, will share the unique challenges and opportunities designers and developers face when working with Surface and what it takes to create an exciting and user-friendly application. His presentation "Challenge MS Surface – a challenge for Design and Development" gives an insight into the different stages of development and describes how decisions were made concerning software and design. And, of course, you can experience our application in person.

    We look forward to meeting you at CeBIT, or visit us online!

    > More video from UID

    Learn about becoming a Microsoft Surface Partner on our Microsoft Surface QuickStart site.

    - Eric (follow Surface on Twitter and Facebook)

