We have been exploring the technique of pre-rasterizing vector images. Last time around, we looked at a particular example where pre-rendering our vector image performed better than real-time rendering when the image was small, but where the pre-rasterized version began to take much longer as the size of the control grew.

One thing to take note of, however, is that we have been pre-rasterizing to an image of exactly the same size and shape as our final output. We cannot safely assume that this will always be the case, so it is worth a short digression to investigate what happens if we pre-rasterize to a different size and then allow the framework to scale our image on our behalf.
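To make the idea concrete, here is a minimal sketch of the approach (illustrative only, not the exact code behind the measurements below). DrawHouse is a hypothetical stand-in for whatever routine draws the vector image:

    // Minimal sketch (assumed, not the measured code): pre-rasterize the vector
    // image at twice the size of the client rectangle, then let GDI+ scale it
    // down whenever the cached bitmap is drawn.
    using System.Drawing;
    using System.Windows.Forms;

    public class ScaledHouseControl : Control
    {
        private Bitmap cached;

        private void BuildCache()
        {
            // Render the vector image into a bitmap twice the size of the
            // client rectangle (the "largest possible size" approach).
            cached = new Bitmap(ClientSize.Width * 2, ClientSize.Height * 2);
            using (Graphics g = Graphics.FromImage(cached))
            {
                DrawHouse(g, new Rectangle(Point.Empty, cached.Size));
            }
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            if (cached == null)
            {
                BuildCache();
            }

            // The destination rectangle is a different size than the bitmap,
            // so GDI+ must interpolate on every draw.
            e.Graphics.DrawImage(cached, ClientRectangle);
        }

        private void DrawHouse(Graphics g, Rectangle bounds)
        {
            // Placeholder for the real vector-drawing code.
            g.FillRectangle(Brushes.BurlyWood, bounds);
        }
    }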

Why is this important? This is a technique that people actually use. I have heard many people discuss creating a bitmap at the largest possible size and then scaling it to fit the screen. (Most recently, I heard this in a discussion of rendering images for the .NET Compact Framework, to accommodate the varying screen sizes of the devices that might be running the software.) There is a performance cost involved: scaling requires more computation than a simple memory move. But how large is that cost?

To investigate this, I began with the same house image that I used for my last entry, and rasterized it. For one test run, I created a Bitmap object whose width and height were equal to 1/2 the width and height of the client rectangle. For another test run, I created a Bitmap object whose width and height were equal to 2 times the width and height of the client rectangle. In both cases, I rendered to the full client rectangle. Finally, as a control, I rendered to a Bitmap object of the target size. I ran 100 iterations at each of the various sizes of the image. What were the results?
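For reference, the timing loop looked roughly like the following (a sketch under assumptions, not the exact harness): draw the pre-rasterized bitmap into the destination rectangle repeatedly and measure the elapsed time.

    // Rough sketch of the measurement loop (assumed, not the exact harness):
    // draw a pre-rasterized bitmap into the destination rectangle a fixed
    // number of times and report the elapsed milliseconds.
    using System.Diagnostics;
    using System.Drawing;

    static class ScalingBenchmark
    {
        public static double TimeDraws(Graphics target, Bitmap source,
                                       Rectangle destination, int iterations)
        {
            Stopwatch watch = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                // When source.Size differs from destination.Size,
                // GDI+ has to scale the bitmap on every draw.
                target.DrawImage(source, destination);
            }
            watch.Stop();
            return watch.Elapsed.TotalMilliseconds;
        }
    }

In the half-size and double-size runs, the source bitmap simply differs in size from the destination; in the control run, the two match, so no scaling is required.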

Rendering Performance of Scaled Bitmaps: Size of Bitmap (as a proportion of the original) vs. Rendering Time (in ms)

In both cases, the performance was worse. While the performance penalty shrank slightly as the sizes grew, the change was not dramatic. The image that we scaled up from 1/2 the original size took an average of 1.46 times as long to render at each of the sizes measured. (It also looked pixelated - another shortcoming.) The image that we scaled down (the option more people would be likely to choose, since the image ends up looking much better) took an average of 3.71 times as long to render at each of the measured sizes! As the image size grows larger, this can become very significant.

Of course, this is not the end of the story. These measurements assume the default interpolation mode. What happens if we change the interpolation mode on the Graphics object we are using? In every case, scaling a bitmap from 2 times the size of the client rectangle down to the size of the client rectangle reduces performance, but the results vary with the interpolation mode. Let's take a glance at the results:

  • Nearest Neighbor - 314%
  • Default - 354%
  • Bilinear - 355%
  • Low - 358%
  • High Quality Bicubic - 585%
  • High - 586%
  • High Quality Bilinear - 1087%
  • Bicubic - 1618%

As you use more and more complex interpolation methods, it takes more and more time to render. (I must admit that I do not understand why an interpolation mode labeled High Quality Bicubic would outperform an interpolation mode simply labeled Bicubic by a factor of 3 to 1, but those are the results I am seeing. I ran them both several times just to make sure.)
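For anyone who wants to repeat the comparison, switching modes is just a property assignment on the Graphics object before the draw call. A minimal example:

    // Setting the interpolation mode before drawing a scaled bitmap.
    // The enum values are the same ones compared in the list above.
    using System.Drawing;
    using System.Drawing.Drawing2D;

    static class ScaledDrawing
    {
        public static void DrawScaled(Graphics g, Bitmap source, Rectangle destination)
        {
            // NearestNeighbor was the fastest mode in these tests; the
            // high-quality modes trade rendering speed for smoother output.
            g.InterpolationMode = InterpolationMode.NearestNeighbor;
            g.DrawImage(source, destination);
        }
    }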

In any case, if you are hunting for ways to optimize rendering performance, eliminating resizes can provide a drastic performance bump, especially when you are scaling down and image quality is critical.