A tale of many haggis

Once upon a time there lived a bored young aristocrat named Stanley. Growing tired of his indolent lifestyle, Stanley decided to go into the manufacturing business, so he purchased a haggis factory, which was going cheap as its previous owner had died in a tragic golfing accident.

On his first day as the new owner, Stanley arrived in front of the factory gates bright and early, eager to find out what his new investment was capable of. Here is what he saw:

  • The foreman arrived at 7:15, and unlocked the building
  • The workers arrived at 7:30 on the dot
  • The boiler was fired up at 7:50, while the rest of the staff were cleaning the machinery
  • The first sheep was delivered at 8:23
  • The boiler reached operating temperature at 8:48, and the first haggis was added to the pot
  • This haggis finished cooking at 11:55, and was moved to the cooling rack
  • It was packaged for distribution at 1:20 in the afternoon

"Yikes!", thought Stanley. "It took six hours to prepare a single haggis. Assuming I can sell this for $12, that gives an income of $2 per hour; nowhere near enough to cover payroll for my 20 staff. I fear this investment may have been a mistake."

Stanley has made the same error as many beginning graphics programmers, who render a single model (or sometimes just the default CornflowerBlue template) and then post on Internet forums complaining about the resulting framerate.

It is obviously ridiculous to judge the throughput of a factory by examining just one haggis. Sure, it takes a while to clean the equipment and heat up the cooker, but you only have to do that once in the morning. If you were to make 100 haggis, these could all cook at the same time in the same pot, so would take no longer than a single one. If you wanted 200, they might not all fit in the pot at the same time, but you could reuse the existing hot water, and the second batch of 100 haggis could be cooking at the same time as the first batch was cooling.

Graphics cards work the same way. When you see something like this:

  • CornflowerBlue runs at 800 frames per second
  • Adding a 100 triangle model gives 500 frames per second

it is easy to worry that your framerate will decrease by 300 each time you add 100 triangles. If this were true, drawing more graphics would result in:

  • 200 triangles = 200 fps
  • 300 triangles = -100 fps

Huh? A negative framerate is obviously impossible, so there must be something wrong with our logic.

Our first mistake was to assume that framerate is a linear scale, when in fact framerate is equal to one divided by the amount of time spent drawing each frame. To convert into linear millisecond units, we must divide 1000 by the framerate:

  • CornflowerBlue = 800 fps = 1.25 ms
  • 100 triangles = 500 fps = 2 ms

Looking at the difference between these frame times, we see it took 0.75 milliseconds to draw 100 triangles. Time is a linear scale, so we can predict how performance will change as we add more triangles:

  • 200 triangles = 2.75 ms = 364 fps
  • 300 triangles = 3.5 ms = 286 fps
  • 400 triangles = 4.25 ms = 235 fps
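
To make this arithmetic concrete, here is a minimal Python sketch reproducing the numbers above (the helper names are mine, not from any particular library):

    # Convert between frames per second and milliseconds per frame,
    # then extrapolate on the linear (milliseconds) scale.

    def fps_to_ms(fps):
        return 1000.0 / fps

    def ms_to_fps(ms):
        return 1000.0 / ms

    baseline_ms = fps_to_ms(800)                           # CornflowerBlue: 1.25 ms
    cost_per_100_triangles = fps_to_ms(500) - baseline_ms  # 0.75 ms

    for triangles in (200, 300, 400):
        frame_ms = baseline_ms + (triangles / 100) * cost_per_100_triangles
        print(f"{triangles} triangles = {frame_ms:.2f} ms = {ms_to_fps(frame_ms):.0f} fps")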

But this estimate is still too pessimistic, because graphics drawing time is not linear with regard to how many things are being drawn. In some cases, adding more triangles might be free, if the hardware is able to boil them up in the same pot it is already using to cook your previous graphics. In most cases, adding more triangles will slow you down, but by less than you would expect from measuring just a few in isolation.
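
As a toy illustration of this amortization (the cost model and numbers below are invented for the example, not real GPU timings), imagine each frame pays a fixed overhead plus a cost per batch, where one batch can hold many triangles:

    import math

    # Invented cost model: a fixed per-frame overhead ("warming the boiler")
    # plus a cost per batch, where each batch is a shared "pot".
    FRAME_OVERHEAD_MS = 1.25
    BATCH_COST_MS = 0.75
    TRIANGLES_PER_BATCH = 1000

    def frame_ms(triangles):
        batches = math.ceil(triangles / TRIANGLES_PER_BATCH)
        return FRAME_OVERHEAD_MS + batches * BATCH_COST_MS

    # The average cost per triangle falls as the workload grows:
    for n in (100, 1000, 10000):
        print(f"{n} triangles: {frame_ms(n):.2f} ms total, "
              f"{frame_ms(n) / n * 1000:.2f} us per triangle")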

It is appealing to think we might be able to predict the performance of a full game by measuring something smaller and simpler, but this is not possible, because we have no way to know how much of that small measurement represents real work, versus how much is just warming up the boiler and cleaning our equipment ready to start cooking.

In fact, measuring the framerate of a game that does only a small amount of work tells you pretty much nothing. If you want to know how long it will take to make a large number of haggis, the only accurate way to find out is to crank up the production line and actually make that many haggis!
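
In code terms, that means timing the real workload over many frames rather than extrapolating from a trivial scene. Here is a minimal sketch of such a harness (render_frame is a stand-in for your actual per-frame work):

    import time

    def render_frame():
        # Stand-in workload; replace with your real rendering code.
        sum(i * i for i in range(10000))

    FRAMES = 1000
    start = time.perf_counter()
    for _ in range(FRAMES):
        render_frame()
    elapsed = time.perf_counter() - start

    avg_ms = elapsed / FRAMES * 1000
    print(f"average frame time: {avg_ms:.3f} ms ({1000 / avg_ms:.0f} fps)")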

Comments

  • I find measurements like "FPS" (gauche and broad-stroke though they may be) more useful for isolating violations of performance thresholds in the fine-tuning stage than in implementation optimization... Looking for poor performance, and then checkpointing the in-flight load on the systems affecting that performance. Catching exceptional, unanticipated systemic scenarios/states "in the act" for forensic analysis has its place.

    To use the metaphor, once we have a reasonably-performing haggis factory, we want to know when the production line spontaneously sputters to a stall/halt, and be able to inspect the line to find out why.

    (If the point of the original post was to question the value of basing design/implementation decisions "predictively" on aggregated metrics at runtime... I'm not arguing against that. If anything, the same is true in the application of profiling that I outlined: you can't predict the problem potential of a system from derivative metrics without capturing the underlying state of the actors in the system.)

  • Mmm. Haggis. Cough Cough. <Loud Retching>

    Cough Cough.

    I'm okay; those Rocky Mountain Oysters I just ate almost came up, though.

  • I too got bogged down in such haggis-related tomfoolery.

    Not only have I learned a lesson today, but I am now going to buy a haggis.
