I've been getting this question a lot, even from members of my team. Why are the coordinates so weird in deep zoom? They are just totally confusing for laying out images! Why do you have this odd ViewportOrigin and ViewportWidth and why are the numbers I put in there so different from what I think they should be?

These are questions I get every day, and I'm getting a little tired of explaining how it works, so I thought it would be fun to start off with the history of why things are the way they are. It's not because we're crazy (well, maybe a little); most of it has to do with the way Deep Zoom was created and integrated into Silverlight.

Why Viewports?
So let's start with a little bit of background. Deep Zoom was started in Live Labs; Rado and I are not really part of the Silverlight team. Our first, modest goal was to get single deep images rendering really fast in Silverlight. Even that felt fairly lofty, and we expected to spend most of our time getting performance up to par, given that Silverlight has a software rasterizer.

The scenario we had in mind was for folks to create very deep images, possibly images where certain parts are particularly deep. An example of this is the kind of image you can easily create with Deep Zoom Composer. Now, if you think about single images that are really deep, Viewports make a ton of sense. There isn't really a scenario where you zoom out; most of the time you want to zoom into areas of interest in your large image. So if you have the following image (the dotted lines are there for illustration), the ViewportOrigin and ViewportWidth are exactly the coordinates of the small image inscribed inside the larger one, expressed in the larger (original) image's logical coordinates. Zooming to the small image's area is just a matter of setting ViewportOrigin and ViewportWidth to those coordinates (logical coordinates between 0 and 1).

Then, in the above example, if I want to display the gray area on screen completely, I set ViewportOrigin = new Point(0, 0) and ViewportWidth = 1. Makes perfect sense.

If I want to zoom to the green rect, I set ViewportOrigin = new Point(0.5, 0.5) and ViewportWidth = 0.5. Wow, that's easy.
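In code, those two cases look something like this (a rough sketch; it assumes a MultiScaleImage named msi already defined in XAML and pointed at a Deep Zoom source):

    // Show the whole (gray) image: the viewport starts at the image's
    // top-left corner and is exactly one image-width wide.
    msi.ViewportOrigin = new Point(0, 0);
    msi.ViewportWidth = 1;

    // Zoom to the green rect (the bottom-right quadrant): the viewport
    // starts halfway across and halfway down, and spans half an image-width.
    msi.ViewportOrigin = new Point(0.5, 0.5);
    msi.ViewportWidth = 0.5;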

But now collections!
Last year around Christmas, Rado said he thought we'd be able to get collections to work. No way, I thought; that sounds impossible. So when we got to designing the API, we had two choices: either make SubImages work exactly the same way as the MultiScaleImage itself, or make SubImages more conducive to being laid out on screen like regular images. The advantage of the latter is that layout becomes more intuitive for viewing large collections of images; the disadvantage is that zooming into a SubImage would then work completely differently, and less intuitively, than the single-image MultiScaleImage scenario. We were stuck between a rock and a hard place: either we make the API consistent, or we make it intuitive for one scenario and less intuitive for another. Since we didn't know how people would use Deep Zoom, we went with consistency. It turns out people are much more interested in collections (i.e. laying out lots of medium-sized images on screen) than in really deep images, so we frequently hear that the way SubImages work is counterintuitive. It was designed to be intuitive by being consistent, but I'm not sure consistency was the right choice in this case! I'd love to hear what people think.

So if you want to lay out a SubImage on screen, you still have to deal with Viewports and ViewportWidths, something that was designed to work well for zooming in, but not for laying images out (which is really "zooming out").

The best way to think about this is that the top-left corner of the image you are displaying is always at (0, 0), and the right edge of the image is always at x = 1; the image is one unit wide. So if you want to zoom out, i.e. make the image smaller on screen, you have to make the ViewportWidth bigger, and you have to move the ViewportOrigin up and to the left, i.e. make its coordinates negative.

Example:
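Say you want the image to show up at only half the width of the MultiScaleImage control, centered horizontally and flush with the top (a made-up case; msi is again the MultiScaleImage from above). You have to describe that in image-widths:

    // The viewport is two image-widths wide, so the image (one unit wide)
    // fills only half of the control.
    msi.ViewportWidth = 2;

    // The leftover image-width is split evenly on both sides, so the viewport
    // has to start half a unit to the left of the image's left edge.
    msi.ViewportOrigin = new Point(-0.5, 0);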

So you can see, it all makes sense somewhere. By the way, I added some helper functions to the template that ships with Deep Zoom Composer that translate between Viewport space and regular image space, in case you can't think in Viewport space for laying things out (I can't).
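If you'd rather roll your own, the idea behind those helpers is roughly this (a sketch under my own names, not the exact code from the template): describe where you want the image in pixels, and convert that into a ViewportOrigin and ViewportWidth.

    using System.Windows;
    using System.Windows.Controls;

    // Hypothetical helper: position the whole image inside the MultiScaleImage
    // control using plain pixel coordinates instead of Viewport coordinates.
    public static class DeepZoomLayout
    {
        public static void LayOutImage(MultiScaleImage msi,
                                       double x, double y, double widthInPixels)
        {
            // ViewportWidth is how many image-widths span the control, so to
            // make the image widthInPixels wide we need this many of them:
            msi.ViewportWidth = msi.ActualWidth / widthInPixels;

            // ViewportOrigin is the logical point sitting at the control's
            // top-left corner; push it up and to the left so the image's (0, 0)
            // lands at (x, y) on screen. Logical units are image-widths, so
            // divide the pixel offsets by the image's on-screen width.
            msi.ViewportOrigin = new Point(-x / widthInPixels, -y / widthInPixels);
        }
    }

With that, DeepZoomLayout.LayOutImage(msi, 100, 50, 200) draws the image 200 pixels wide with its top-left corner 100 pixels in and 50 pixels down, and you never have to think about negative ViewportOrigins again.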