Temporal sampling frequency (aka 'framerate')

Game player:  "d00d, teh framerate totally sux0rz!"

Game developer:  "we are experiencing aliasing due to low sampling frequency along the temporal axis..."

One of the many decisions that go into making a game is whether to run at a high framerate, drawing many frames per second and thus having little time to spend on each, or to choose a lower framerate, getting more time per frame and thus being able to draw a larger number of higher quality objects.  The perfect balance is a matter of heated debate, varying with the game, genre, and personal preference, but there are some widely accepted truths:

  • Dropping below 30 fps is rarely a good idea
  • 30 fps is generally considered ok, but 60 is better
  • Higher than 60 fps offers rapidly diminishing returns

Yet movies are animated at a mere 24 fps!  Why has Hollywood chosen a framerate so much lower than what most game developers consider acceptable?

Perhaps this is just a historical legacy, preserved for backward compatibility with decisions made in the 1920s?  But when IMAX was designed in the late 1960s, they specified new cameras, film, projectors, and screens, while keeping the 24 fps sampling frequency.  And in the early 21st century, Blu-ray and HD DVD fought an entire format war during the transition to high definition digital video, but oh look, still 24 fps.  It sure looks like the movie world just doesn't see any reason to go higher, similar to how few game developers care to go above 60 fps.

Ok, next theory: perhaps the difference is because games are interactive, while movies are just prerecorded entertainment?  The lower the framerate, the more latency there will be between providing an input and seeing the resulting change on screen.  The pause button on my DVR remote has ~.5 sec latency, which is irrelevant when watching my favorite romantic comedy but would be a showstopper when trying to nail a Halo headshot.

And a final theory: perhaps the difference is due to aliasing?  Realtime graphics are usually point sampled along the time axis, as we render individual frames based on the state of the game at a single moment in time, with no consideration of what came before or what will happen next.  Movies, on the other hand, are beautifully antialiased, as the physical nature of a camera accumulates all light that reaches the sensor while the shutter is open.  We've all seen the resulting blurry photos when we try to snap something that is moving too quickly, or fail to hold the camera properly still.  Motion blur is usually considered a flaw when it shows up uninvited in our vacation snapshots, but when capturing video it provides wonderfully high quality temporal antialiasing.
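
To make the distinction concrete, here is a minimal sketch of the two sampling strategies, assuming a hypothetical renderScene(time) callback that returns a finished image as a flat array of pixel values.  The accumulation version is just brute force supersampling along the time axis, usually far too expensive for realtime use, but it is the ground truth that cheaper motion blur techniques try to approximate.

    #include <cstddef>
    #include <functional>
    #include <vector>

    using Image = std::vector<float>;  // hypothetical: a flat array of linear pixel values

    // Point sampling along the time axis: render the world as it exists at one
    // single instant. Fast moving objects appear frozen, then "teleport" to a
    // new position on the next frame (classic temporal aliasing).
    Image RenderPointSampled(const std::function<Image(double)>& renderScene,
                             double frameTime)
    {
        return renderScene(frameTime);
    }

    // Temporal antialiasing by accumulation: approximate a camera shutter by
    // rendering several sub-frames spread across the interval the shutter is
    // open, then averaging them.
    Image RenderAccumulated(const std::function<Image(double)>& renderScene,
                            double shutterOpen, double shutterClose, int subFrames)
    {
        Image accumulated;

        for (int i = 0; i < subFrames; ++i)
        {
            // Sample the scene at evenly spaced instants inside the shutter interval.
            double t = shutterOpen +
                       (shutterClose - shutterOpen) * (i + 0.5) / subFrames;

            Image sample = renderScene(t);

            if (accumulated.empty())
                accumulated.assign(sample.size(), 0.0f);

            for (std::size_t p = 0; p < sample.size(); ++p)
                accumulated[p] += sample[p] / subFrames;
        }

        return accumulated;
    }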

So which theory is correct?  Do games care more than movies about framerate because of latency, or because of aliasing?

We can find out with a straightforward experiment.  Write a game that runs at 60 fps.  Make another version of the same game that runs at 30 fps.  Make a third version at 30 fps with super high quality temporal antialiasing (aka motion blur).  Get some people to play all three versions.  Get more people to watch the first people playing.  Compare their reactions.

  • If aliasing is a significant factor, the watchers will be able to distinguish 60 fps from 30 fps, but unable to distinguish 60 fps from motion blurred 30 fps
  • If latency is a significant factor, people playing the game will be able to distinguish 60 fps from motion blurred 30 fps, even though the watchers cannot
  • If everybody can tell the versions apart, both theories must be wrong
  • If nobody can tell any difference, we might as well just leave the whole game at 30 fps and be done with it  :-)
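
To make the experiment concrete, here is a minimal sketch of how the three versions could all be driven from the same 60 Hz simulation, so gameplay is identical across variants and only the presentation differs.  RenderTick and Present are hypothetical stand-ins for whatever a real engine provides, and the two-tap Blend is only a crude placeholder for the high quality blur the experiment actually calls for.

    #include <cstddef>
    #include <utility>
    #include <vector>

    using Frame = std::vector<float>;   // one rendered image

    // Hypothetical engine hooks (names made up for this sketch):
    Frame RenderTick(int tick);         // simulate and draw the game at one 60 Hz instant
    void Present(const Frame& frame);   // put a finished image on screen

    // Crude two-tap temporal blur: average the two most recent 60 Hz renders.
    // A real test would use far more samples (or a smarter filter).
    Frame Blend(const Frame& a, const Frame& b)
    {
        Frame result(a.size());
        for (std::size_t i = 0; i < a.size(); ++i)
            result[i] = 0.5f * (a[i] + b[i]);
        return result;
    }

    enum class Variant { Fps60, Fps30, Fps30Blurred };

    // All three variants run the same 60 Hz simulation, so gameplay is
    // identical; only how often (and how) we present an image differs.
    void RunVariant(Variant variant, int totalTicks)
    {
        Frame previous;

        for (int tick = 0; tick < totalTicks; ++tick)
        {
            Frame current = RenderTick(tick);

            switch (variant)
            {
            case Variant::Fps60:                // present every tick
                Present(current);
                break;

            case Variant::Fps30:                // every other tick, point sampled
                if (tick % 2) Present(current);
                break;

            case Variant::Fps30Blurred:         // every other tick, temporally filtered
                if (tick % 2) Present(Blend(previous, current));
                break;
            }

            previous = std::move(current);
        }
    }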

If you try this experiment, you will find the results depend on which game you choose to test with.  Many observers do indeed think motion blurred 30 fps looks the same as 60 fps, so temporal aliasing is surely important.  Players also find the two equivalent with some games, while reporting a big difference with other games.  So the significance of latency depends on the game in question.

Sensitivity to latency is directly proportional to how hands-on the input mechanism is.  When you move a mouse, even the slightest lag in cursor motion will feel very bad (which is why the mouse has a dedicated hardware cursor, allowing it to update at a higher framerate than the rest of whatever app is using it).  Likewise for looking around in an FPS, or pinch zooming on a touch screen.  You are directly manipulating something, so you expect it to respond straight away and the motion to feel pinned to your finger.  Less direct control schemes, such as pressing a fire button, moving around in an FPS, driving a vehicle, or clicking on a unit in an RTS, can tolerate higher latencies.  The more indirect things become, the less latency matters, which is why third person games can often tolerate lower framerates than would be acceptable in an FPS.

What can we learn from all this rambling?

  • If we have a game at 60 fps, and are trying to find room to add more sophisticated graphics, an interesting option might be to drop down to 30 fps while adding motion blur.  As long as we can implement a good blur for less than the cost of drawing one 60 fps frame, we may be able to achieve equivalent visual quality while freeing up a bunch of GPU cycles (see the quick budget sketch after this list).

  • If we have a game at 30 fps or lower, we should avoid the sort of input behaviors that will make this latency objectionable to the player.  Conversely, if we have a game that uses only the sort of input not sensitive to latency, there is less point bothering to make it run at 60 fps!
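
As a quick sanity check on that first point, the budget arithmetic looks something like this (the 5 ms blur cost is a made-up example, not a measured figure):

    // Per-frame GPU budgets at the two framerates.
    constexpr double kBudget60Ms = 1000.0 / 60.0;   // ~16.7 ms per frame
    constexpr double kBudget30Ms = 1000.0 / 30.0;   // ~33.3 ms per frame

    // Extra scene-rendering time gained by dropping to 30 fps, once the blur is paid for.
    constexpr double ExtraSceneTimeMs(double blurCostMs)
    {
        return (kBudget30Ms - blurCostMs) - kBudget60Ms;
    }

    // Example: a hypothetical 5 ms blur leaves ~28.3 ms for the scene at 30 fps,
    // versus ~16.7 ms at 60 fps, i.e. roughly 11.7 ms of extra GPU time per frame.
    // The trade only comes out ahead while the blur costs less than one 60 fps frame.
    static_assert(ExtraSceneTimeMs(5.0) > 0.0, "a 5 ms blur frees up GPU time");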

Yeah, yeah, so I should talk about how to actually implement motion blur.  Next time...

  • One quick question: Suppose that the game runs at 60 fps, and a human can perceive max. 12 fps as individual images (however, the brain can still tell the difference between 25 fps and 30 fps; but this shouldn't matter now). Wouldn't that mean that with 60 fps we get 5 images from the screen blurred together into one single mental image, thus introducing a natural motion blur?

    Furthermore, I can certainly tell apart a 24 fps movie and a 30 fps movie. The 24 fps movie has the certain "cinema" look; the 30 fps movie more looks like a home video. This may be a reason why everyone sticks to the 24 fps sampling rate.

  • Peter Jackson is shooting the Hobbit at 60 fps. We might start seeing a change from the standard 24 fps in the near future.

  • excellent intro shawn, i'm very much looking forward to your next posts on the subject (especially how it can be performed optimally in xna!)

    ps: i don't think you made any mention of FXAA, which seems to be a very cheap and high quality "deferred friendly" AA.   it was easy for my gfx guy to "drop it in" to our current xna based deferred lighting renderer: timothylottes.blogspot.com/.../nvidia-fxaa.html

  • Jackson is filming The Hobbit at 48 fps, not 60. But it could definitely spell the end of 24 fps (especially with digital cameras where the cost of extra frames is negligible). He posted an interesting write-up of the background and whys of it here: www.facebook.com/.../10150222861171558 .

    You mention the 60-30 split. I'm curious whether a variable framerate with motion blur all the time makes sense for PC (and other platforms with enough hardware variation that hitting a desired fixed-step framerate across all devices would be tricky). Would motion blur at 60+ fps on machines with high powered GPUs be fine or would it just look weird? I imagine the amount of blur would be so small as to hardly be noticeable, but I thought I'd ask since it'll be a while before I have a chance to write something to test it.

  • Interesting article - I like the idea of temporal sampling / temporal antialiasing, never thought of it that way but that's exactly what it is.

    An interesting thing I've noticed is that film's low 24fps framerate lends it a certain sense of surreal-ness and dignity - when you watch making-of's shot on video, or outtakes, at a higher framerate or with more continuous motion blur, it looks more commonplace somehow. There's probably other things going on there with lighting, film quality etc. but I think framerate plays a big part.

  • It will be interesting for sure if The Hobbit is actually released at 48 fps (right now it sounds like they are unsure whether they will end up downsampling prior to release or not).

    Filming at a higher framerate could make a lot of sense for an effects heavy movie regardless of what format you choose for the final master. The hardest part of adding CGI to filmed material, not to mention blending multiple layers of green-screened film, is getting the different layers to sit right together. For convincing results, you must exactly match lighting and shadows, any camera movements, camera settings such as focus, and, yes, motion blur!

    (I'm fascinated by watching early computer effects in movies and trying to spot which things they avoided doing - a great way to tell what things the technology could not handle well at that time :-)  For instance it is very noticeable when tech arrived at the point where the camera could move, and those motions be captured so computer graphics could exactly match the live film.  Prior to that, scenes with computer effects would only ever contain very simple linear camera pan type motions which could be manually duplicated with reasonable accuracy.  And prior even to that, any scenes with computer effects used to avoid any camera motion at all!)

    Seems to me that, similar to how it makes sense to edit artwork using a high resolution uncompressed format even if you will eventually scale down to a much smaller JPEG format, it could be easier to pull off complex multilayer matte effects at a higher framerate, then only downsample the final result, as opposed to compositing layers that already include a higher degree of motion blur.
