MotoGP: The Price Is Right

Most game design documents include a resource budget section, where you are expected to write things like "my AI will use 5% of the CPU and 1.5 megabytes of RAM".

This is bulls**t, and intellectually dishonest. It is simply not possible to predict resource usage with that level of precision before the code has been written. Of all the teams that have written such budgets, I bet only a tiny fraction even had the tools to measure their results against the plan, let alone any intention of actually enforcing it!

The only truly accurate way to know what resources a design will need is to implement it and then measure the results. But going over budget can be horrifically expensive:

  • If you use too much memory, your game will crash
  • If you use too much time, poor framerate will make it unplayable
  • If you are lucky, these problems can be fixed by optimizing a few pieces of code
  • If you are less lucky you may also have to change artwork, which is usually more expensive than code optimization because the work must be repeated for every model or level, rather than just one performance hotspot
  • If you are really unlucky you may need to change fundamental design assumptions such as the size of levels or number of entities on screen, which can have knock-on effects throughout the entire game
  • There goes your profit margin
  • If this takes too long, the publisher may get nervous and decide to kill the project entirely

Yikes! We'd better make a conservative original estimate, then, to avoid any danger of ending up in such a mess...

But this is a competitive marketplace. Especially in the AAA space where many companies are making variants of the same genres, if one team guesses conservatively while their competitor pushes a little harder, they will end up with reviews saying "well, it's ok, but this other similar game has better graphics and more cars on the track at a time". Even for indie games that depend on original gameplay concepts, implementation quality is still important. Nobody is going to want to play a crappy implementation, no matter how good the core idea is.

It's like The Price Is Right. You must guess as high as possible to win, but are disqualified if you go over the limit.

There are several ways to mitigate the risk:

  • Prototype important features to gather data and refine your estimates
  • Prefer late-binding decisions, creating knobs that can be tweaked without causing huge knock-on effects (a sketch of one follows this list)
  • Include a couple of ripcord features that can be cut in case of emergency
  • Be a good guesser (experience helps with this)
  • Be lucky!
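
As a generic illustration (the names, the LOD example, and the file format are all invented for this sketch, not taken from MotoGP), a late-binding knob can be as simple as a tuning value that lives in a data file instead of being baked into code or art, so it can still be adjusted cheaply at the end of the project:

```csharp
using System.IO;

public static class Tuning
{
    // Example knob: the distance at which bikes swap to low-detail models.
    public static float LodSwitchDistance = 150f;

    // Read the knob from a simple "name=value" text file at startup, so a
    // designer can retune it without touching code or rebuilding assets.
    public static void Load(string path)
    {
        foreach (string line in File.ReadAllLines(path))
        {
            string[] parts = line.Split('=');
            if (parts.Length == 2 && parts[0].Trim() == "LodSwitchDistance")
                LodSwitchDistance = float.Parse(parts[1].Trim());
        }
    }
}
```

The point of routing the value through data rather than code is that changing it late in development costs one edit to a text file, not a recompile or a pass over every model and level.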

I started writing this post because I wanted to talk about a MotoGP example of a late-binding knob, but this is long enough already, so I will defer that for later.

The main ripcord feature in MotoGP was multisampling. Thankfully our guesses turned out pretty well, so we didn't have to cut this, but had everything fallen to pieces at the end of development, the flick of a switch could have saved 2.3 megs of RAM and a ton of GPU time, and (unusually, because we were memory bound on a UMA machine) would also have sped up our CPU code.
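
Something along these lines (a simplified sketch using XNA-style API names and an invented flag, rather than our actual code):

```csharp
using Microsoft.Xna.Framework;

public class RaceGame : Game
{
    // The ripcord: one switch guards everything multisampling costs us.
    const bool EnableMultisampling = true;

    GraphicsDeviceManager graphics;

    public RaceGame()
    {
        graphics = new GraphicsDeviceManager(this);

        // Flipping the constant to false means the expensive multisampled
        // backbuffer is never allocated, reclaiming memory and GPU time.
        graphics.PreferMultiSampling = EnableMultisampling;
    }
}
```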

I once worked with a guy who built even more explicit ripcords into his projects. The first thing he did when starting a new game was to allocate a megabyte array that was never used, and to insert a millisecond delay into the Update method. A year or two later, when everyone around him was tearing out their hair and wailing in despair: tada!
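
In sketch form, the trick looks something like this (assuming an XNA-style Game class; the details beyond "a megabyte array" and "a millisecond delay" are invented):

```csharp
using System.Diagnostics;
using Microsoft.Xna.Framework;

public class RipcordGame : Game
{
    // Ripcord #1: a megabyte that is deliberately never used. Deleting
    // this field late in development "finds" an extra megabyte of RAM.
    byte[] wastedMemory = new byte[1024 * 1024];

    protected override void Update(GameTime gameTime)
    {
        // Ripcord #2: burn roughly a millisecond of CPU every frame.
        // A busy-wait is used because Thread.Sleep granularity is too
        // coarse to reliably waste just one millisecond.
        Stopwatch timer = Stopwatch.StartNew();
        while (timer.Elapsed.TotalMilliseconds < 1.0)
        {
        }

        // ... the real game update would go here ...

        base.Update(gameTime);
    }
}
```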

  • That last comment is amazing. Was his intention to force the devs to be efficient with resources, and only cut them slack if it was absolutely necessary?

  • It's a hard-coded 90/90 rule! Flick a switch and you gain your 5%. The 90/90 rule basically states that you should use 95% of the resources 95% of the time. If you're lower, you have room to grow; if you're higher, you risk the 'horrifically expensive' problems Shawn talks about.

  • > That last comment is amazing. Was his intention to force the devs to be efficient with resources, and only cut them slack if it was absolutely necessary?

    The goal is to be slightly conservative and aim to use 90% of the CPU and 90% of memory, rather than maxing out everything at 100%, in order to have a little headroom and make sure the game can ship on time.

    But the trouble is, enforcing that you only use 90% of something is hard, verging on impossible! If you run out of memory, you get an obvious crash, but there is no such in-your-face warning as you inch past an arbitrary point like the 90% threshold.

    Rather than building a ton of infrastructure to measure and report that 90% margin, this guy took advantage of the built-in failure that happens when you hit 100% resource usage, and just shifted the goalposts so that what would have been 90% usage becomes 100%. When the game is mostly finished, and has been optimized for this new version of 100%, he can shift the goalposts back again and be running nice and stable at 90% resource usage.

  • That really is a clever solution. But you guys don't use this as the 90/90 rule?

    "The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time." lol

    Take care guys,

    Ash
