Hopefully you’ve had a chance to grab the Photo Mosaics application and set it up in your own environment; if not, I’d encourage you to at least read the initial blog post introducing this series to get some context on what the application does. 

Below you’ll see images captured from a PowerPoint presentation, with an accompanying slide-by-slide narrative to walk you through the workflow of the application.  You can flip through the slides at your leisure using the numbered links below the image, or simply download the full presentation if you prefer.

This deck provides a walkthrough of the workflow of the Windows Forms client application and the Windows Azure cloud application described in my blog series. The source code for the application is also available for download.

The slides that follow build the architecture from the point of initiating a client request. Aspects of the architecture that are not actively involved in a given step will be greyed out, allowing you to focus on the specific actions at hand.

In the first step of the process, the end-user runs the Windows Forms client application and selects an image from her local machine to be converted to a photo mosaic. It is not apparent from this slide, but when the client application starts up, it also hosts a WCF service endpoint which is exposed on the Windows Azure AppFabric Service Bus. You’ll see how that plays a role in the application a bit later on.

The end-user next selects an image library (stored in Windows Azure Storage). An image library contains the tiles used to construct the mosaic and is implemented as a blob container with each of the images stored as blobs within that container. The container is marked with an access policy to permit public access to the blobs, but enumeration of the containers and of the blobs must occur via an authorized request. That request is made via a WCF service housed in the ClientInterface Web Role in response to the end-user browsing for an image library.

The red arrow indicates a WCF service call, while the blue lines indicate data transfer.

Once a specific image library is selected and the list of image URIs has been returned, the end-user can peruse each image in the Tessera Preview section of the client user interface (“tessera,” by the way, is the term for an individual mosaic tile). The preview functionality accesses the blob images directly via an HTTP GET request by setting the ImageLocation of a PictureBox control to the image URI, for example http://azuremosaics.blob.core.windows.net/mylibrary/0001.jpg.

Here the purple line is meant to indicate a direct HTTP request.

Once the user has selected the image to be converted as well as the image library containing the tiles, and has also specified the tile size (1 to 64 pixels square) and the number of segments (or slices) in which to divide the image during processing, she submits the request via a WCF service call to the ClientInterface Web Role, passing the original image bytes along with the other information.

The ClientInterface Web Role stores the original image in a blob container named imageinput, renaming that image to a GUID to uniquely identify it. It also places a message on a Windows Azure queue named imagerequest containing a pointer to the blob that it just stored as well as additional elements like the URI of the tile library container and the number of slices and tile size specified by the end-user.
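
To make those moving parts concrete, here’s a rough Python sketch of what the ClientInterface does when a request arrives. The field names and URI format below are my own illustration, not necessarily what the actual source code uses, and the blob write itself is omitted:

```python
import json
import uuid

def build_image_request(image_bytes, library_uri, slices, tile_size):
    """Assign a GUID job id and build the imagerequest queue message.

    The message field names here are illustrative assumptions; the
    actual Photo Mosaics message format may differ.
    """
    job_id = str(uuid.uuid4())
    # The original image is stored as a blob named after the GUID in the
    # 'imageinput' container (the storage call is omitted in this sketch).
    blob_uri = f"http://azuremosaics.blob.core.windows.net/imageinput/{job_id}"
    message = {
        "jobId": job_id,
        "imageUri": blob_uri,
        "libraryUri": library_uri,
        "slices": slices,
        "tileSize": tile_size,  # 1 to 64 pixels square
    }
    return job_id, json.dumps(message)
```

The GUID doubles as the blob name and the job key, which is exactly what lets later stages correlate queue messages, blobs, and table entries.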

I’ve introduced orange as the color for queues and messages that move in and out of them.

In response to that same request from the end-user, the ClientInterface Web Role logs an entry for the new request in the Windows Azure table named jobs. Each job is identified by the GUID that the ClientInterface assigned to the request, the same GUID used to rename the image in blob storage. Additionally, the ClientInterface makes a call via the AppFabric Service Bus to invoke a notification service that is hosted by the client application (the self-hosted one I mentioned at the beginning of the walkthrough). This call informs the client application that a new request has been received by the service and prompts the client to display the assigned GUID in the notification area at the lower right of the main window.

Listening to the imagerequest queue is the JobController Worker Role. When a new message appears on that queue, the JobController has all of the information it needs to start the generation of a new mosaic, namely, the URI of the original image (in the imageinput container), the number of slices in which to divide the image, the tile size to use (1-64 pixels), and the URI of the container housing the images that will be the tesserae for the mosaic.

The JobController accesses the original image from the blob container (imageinput) and evenly splits that image into the requested number of slices, generating multiple images that it stores as individual blobs in the container named sliceinput. Each blob retains the original GUID naming convention with a suffix indicating the slice number. Additionally, a message corresponding to each slice is added to the slicerequest queue.
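
The even split is simple arithmetic, but the remainder handling is worth spelling out when the image height doesn’t divide evenly. A small Python sketch (the exact suffix format in the naming helper is an assumption on my part):

```python
def slice_bounds(height, slices):
    """Row ranges (top, bottom) for dividing an image of `height` rows
    into `slices` near-equal horizontal bands; any remainder rows are
    spread across the earliest slices."""
    base, extra = divmod(height, slices)
    bounds, top = [], 0
    for i in range(slices):
        bottom = top + base + (1 if i < extra else 0)
        bounds.append((top, bottom))
        top = bottom
    return bounds

def slice_blob_name(job_id, index):
    """Blob name for a slice: the job GUID with a slice-number suffix.
    The precise suffix format here is hypothetical."""
    return f"{job_id}_{index}"
```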

A second Worker Role, the ImageProcessor, monitors the slicerequest queue and jumps into action when a new message appears there. A slice request message may represent an entire image or just a piece of one, so this is a great opportunity to scale the application by providing multiple instances of the ImageProcessor. If there are five instances running, and the initial request comes in with a slice count of five, the image can be processed in roughly 1/5th of the time.

Currently the number of slices requested is an end-user parameter, but it could just as easily be a value determined dynamically. For instance, the JobController could monitor the current load, the instance count, and the incoming image size to determine an optimal value, at that point in time, for the number of slices. Hypothetically speaking, it could also implement a business policy in which more slices are generated to provide a speedier result for premium subscribers, but a free version of the service always employs one slice.

The job of the ImageProcessor is to take the slice of the original image along with the URI of the desired tile library and the tile size and create the mosaic rendering of that slice. As you might expect, this is the most CPU-intensive aspect of the application – another reason it’s a great candidate for scaling out.

The underlying algorithm isn’t really important from a cloud computing perspective, but since you’re probably interested, it goes something like this:

for each image in the image library 
   shrink the image to the desired pixel size (1 to 64 square) 
   calculate the average color of the resized image by adding up all 
     the R, G, and B values and dividing by the total number of pixels 

for each pixel in the original slice 
   calculate the Euclidean distance between the R, G, B components 
     of the pixel’s color and the average color of each tile image in the library 
   determine which tile images are ‘closest’ in color 
   pick a random tile image from those candidates 
   copy the tile image into the analogous location in the generated mosaic slice 
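
To make that pseudocode concrete, here’s a minimal, runnable Python sketch. It represents an image as a list of rows of (R, G, B) tuples and represents each tile by its precomputed average color; the real implementation works against GDI+ bitmaps in .NET, so treat this purely as an illustration of the algorithm:

```python
import random

def average_color(pixels):
    """Average the R, G, and B values over all pixels of an image
    (a list of rows of (r, g, b) tuples)."""
    n = sum(len(row) for row in pixels)
    totals = [0, 0, 0]
    for row in pixels:
        for r, g, b in row:
            totals[0] += r
            totals[1] += g
            totals[2] += b
    return tuple(t / n for t in totals)

def closest_tiles(pixel, tile_colors):
    """Indices of the tiles whose average color is closest, by Euclidean
    distance in RGB space, to the given pixel's color."""
    dists = [sum((p - c) ** 2 for p, c in zip(pixel, color)) ** 0.5
             for color in tile_colors]
    best = min(dists)
    return [i for i, d in enumerate(dists) if d == best]

def mosaic_slice(slice_pixels, tile_colors, rng=random):
    """Map each pixel of the slice to a randomly chosen tile index among
    the closest-colored candidates. A renderer would then copy each
    chosen tile image into the analogous spot of the output slice."""
    return [[rng.choice(closest_tiles(px, tile_colors)) for px in row]
            for row in slice_pixels]
```

The random pick among equally close tiles is what keeps large flat-colored regions of the source image from turning into a grid of one repeated tile.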

You’ll notice here that the ImageProcessor also writes to another Windows Azure table called status. In actuality, all of the Windows Azure roles write to this table (hence the * designation), but it makes the diagram look too much like a plate of spaghetti to indicate that! The current implementation has the ImageProcessor role adding a new entry to the table for each 10% of completion in the slice generation.

Once an image slice has been ‘mosaicized’, the ImageProcessor role writes the mosaic image to yet another blob container, sliceoutput, and adds a message to another queue, sliceresponse, to indicate it has finished its job. As you might expect by now, the message added to this queue includes the URI of the mosaic image in the sliceoutput container.

The JobController monitors the sliceresponse queue as well as the imagerequest queue, which I covered earlier. As each slice is completed, the JobController has to determine whether the entire image is now also complete. Since the slices may be processed in parallel by multiple ImageProcessor instances, the JobController cannot assume that the sliceresponse messages will arrive in any prescribed order.

What the JobController does then is query the sliceoutput container whenever a new message arrives in the sliceresponse queue. The JobController knows how many slices are part of the original job (that information was carried in the message), so it can determine if all of the processed images are now available in sliceoutput. If not, there’s nothing really for the JobController to do. If, however, the message that arrives corresponds to the completion of the final slice, then….
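
In Python-flavored terms, that completion check might look like the following. It assumes slice blobs are named with the job GUID plus a slice-number suffix, per the naming convention described earlier; the actual code may track completion differently:

```python
def job_complete(sliceoutput_blobs, job_id, expected_slices):
    """Given the blob names currently in the sliceoutput container,
    decide whether every slice of the job has been processed.
    Assumes slices are named '<jobid>_<n>' (an illustrative convention)."""
    prefix = job_id + "_"
    found = {name for name in sliceoutput_blobs if name.startswith(prefix)}
    return len(found) >= expected_slices
```

Note that the check is against the container contents rather than a message counter, which is what makes it robust to out-of-order sliceresponse messages.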

The JobController stitches together the final image from each of the processed slices and writes the new, completed image to a final blob container, imageoutput. It also signals that it’s finished the job by adding a message to the imageresponse queue.
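
Because sliceresponse messages can arrive in any order, the stitching step just needs to reassemble the slices by index. A minimal sketch, using a list-of-pixel-rows stand-in for the image data:

```python
def stitch_slices(slices_by_index):
    """Reassemble the completed image by concatenating the mosaic slices
    (each a list of pixel rows) top to bottom in slice order. The dict
    is keyed by slice index, so arrival order doesn't matter."""
    image = []
    for i in sorted(slices_by_index):
        image.extend(slices_by_index[i])
    return image
```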

At this point the job is done and all that remains is housekeeping. Here the JobController responds to the message it just placed on the imageresponse queue. While this may seem redundant (why not just do all the subsequent processing at once, in the previous step?), it separates the tasks and provides a point of future extensibility.
What if, for instance, you wanted to introduce another step in the process or pass the image off to a separate workflow altogether? With the queue mechanism in place, it wouldn’t be hard for some other new Worker Role to monitor the queue for the completion message. In response to the completion message, the JobController does two things:
  1. updates the appropriate entry (keyed by GUID) in the job table to indicate that it’s completed that request, and
  2. uses the Service Bus to notify the client that the job is complete.
On the client end, the response is to update the status area in the bottom right of the main window and refresh the job listing with the link to the final image result (if the Job List window happens to be open).

Now that the image is complete it can be accessed via the client application, or actually by anyone knowing the URI of the image: it’s just an HTTP GET call to a blob in the imageoutput container (e.g., http://azuremosaics.blob.core.windows.net/imageoutput/{GUID}).

The final piece of the application to mention is functionality accessed via a secondary window in the client application, the Job List. This window is a master-detail display, using DataGridViews, of the jobs submitted by the current client along with the status messages that were recorded at various points in the workflow (I specifically called out the status data added by ImageProcessor, but there are other points as well). To display this data, the client application makes additional WCF calls to the ClientInterface Web Role to request the jobs for the current client as well as to retrieve the status information for the currently selected job. Here the WCF service essentially implements a Repository pattern to abstract the entities in the two Windows Azure tables.
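
The Repository pattern here boils down to hiding the table-storage queries behind a couple of query methods so the client never touches storage entities directly. An in-memory Python stand-in (the entity field names are made up for illustration; the real service queries Windows Azure table storage):

```python
class JobRepository:
    """Illustrative stand-in for the repository the ClientInterface
    exposes over the 'jobs' and 'status' tables."""

    def __init__(self, jobs, statuses):
        self._jobs = jobs          # e.g. [{"clientId": ..., "jobId": ...}]
        self._statuses = statuses  # e.g. [{"jobId": ..., "message": ...}]

    def jobs_for_client(self, client_id):
        """All jobs submitted by a given client (the master grid)."""
        return [j for j in self._jobs if j["clientId"] == client_id]

    def status_for_job(self, job_id):
        """All status entries recorded for one job (the detail grid)."""
        return [s for s in self._statuses if s["jobId"] == job_id]
```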

Lastly, for completeness, here is the full workflow in glorious living color!