Framerate Misconception

The FPS (frames per second) benchmark is often misinterpreted by newbie game programmers. A common question on the Gamedev.Net forums is "Why is my framerate dropping so much?" The poster usually goes on to say that when clearing a blank screen their FPS is >1000, and as soon as they draw a triangle it drops to 500 or so.

This is because of a common misunderstanding of what the framerate represents. It tells you how many times the scene is being drawn per second. What you should really be interested in is how long it takes to draw the scene. Most top end games aim for somewhere around 30FPS, that is, the scene is rendered 30 times per second and each frame takes roughly 1/30 of a second to draw. Back to the original example: a scene rendering at 1000FPS is being drawn in 0.001 seconds. If the FPS is halved to 500FPS, the kneejerk reaction is that it is going substantially slower. In reality each frame now takes twice as long, 0.002 seconds. That is a tiny increase of 0.001 seconds in the time taken.

Take an example at the other end of the scale: if your scene is currently drawn at 60FPS and you add some lovely new feature that brings it down to 30FPS, frame time has gone from roughly 0.016 of a second to 0.033. That is an increase of about 0.016 seconds, which means that what you have added is roughly 16 times more expensive than in the previous example.
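
To make the arithmetic concrete, here is a minimal sketch (the function and variable names are mine, not from any particular engine) that converts FPS figures into frame times and prints the cost of the two changes discussed above:

#include <iostream>

// Convert a frames-per-second figure into the time taken to draw one frame.
double frameTimeSeconds( double fps )
{
    return 1.0 / fps;
}

int main()
{
    // First example: 1000FPS down to 500FPS.
    double smallIncrease = frameTimeSeconds( 500.0 ) - frameTimeSeconds( 1000.0 );

    // Second example: 60FPS down to 30FPS.
    double largeIncrease = frameTimeSeconds( 30.0 ) - frameTimeSeconds( 60.0 );

    std::cout << "1000FPS to 500FPS adds " << smallIncrease << " seconds per frame\n"; // ~0.001
    std::cout << "60FPS to 30FPS adds " << largeIncrease << " seconds per frame\n";    // ~0.016
    std::cout << "The second change is " << largeIncrease / smallIncrease
              << " times more expensive\n";                                            // ~16.7
    return 0;
}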

Don't be shocked to find your FPS dropping rapidly in the early stages of adding features to your renderer; focus instead on how much frame time each rendering feature actually costs.

"Local Scope" Macros

I came across some code at work today that appeared to be causing massive compile times for a particular file. Some colleagues of mine were recording times of over 5 minutes, to which I sceptically replied, "Are you sure? Sometimes I find it doesn't report files as having finished compiling and will carry on and link perfectly fine." Mr X decided to take a look at the file and narrowed the cause down to some "local scope" macros. A simple example:

#include <iostream>

void foo()
{
#define SOMETHINGCOMMON( x ) std::cout << "This is a number: " << x

    SOMETHINGCOMMON( 1 );
    SOMETHINGCOMMON( 2 );
    SOMETHINGCOMMON( 3 );
    SOMETHINGCOMMON( 4 );
    SOMETHINGCOMMON( 5 );
    SOMETHINGCOMMON( 6 );
    SOMETHINGCOMMON( 7 );
    SOMETHINGCOMMON( 8 );
    SOMETHINGCOMMON( 9 );
    SOMETHINGCOMMON( 10 );
    SOMETHINGCOMMON( 11 );
    SOMETHINGCOMMON( 12 );

#undef SOMETHINGCOMMON
}


Forget for now that this is only outputting to std::cout and that the macro isn't necessarily taking an integer. What does using this method buy you that writing it as a function doesn't? It's possibly a lot quicker to type and it doesn't 'clutter' any namespace. Are these important? Not half as important as the nasties that accompany using macros: the code isn't debuggable, it's less readable and it's harder to maintain.
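
For comparison, here is roughly what the same thing looks like as a plain function (a sketch of my own, keeping the std::cout placeholder from above and collapsing the twelve calls into a loop purely for brevity):

#include <iostream>

// The same output as the macro, but as a normal function: it can be stepped
// through in a debugger, it has a real type signature, and its name is
// properly scoped.
void somethingCommon( int x )
{
    std::cout << "This is a number: " << x << '\n';
}

void foo()
{
    for( int i = 1; i <= 12; ++i )
        somethingCommon( i );
}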

The code found at work was substantially more complicated than this and involved expanding thousands of lines of more-than-one-line macro definitions. Quite why this was taking upwards of 5 minutes I don't know. Maybe it was the compiler saying, "What are you doing to me?!", in which case it did its job, because the code was promptly changed.

Please, please avoid these whenever you can and just use functions. It saves everyone, and the compiler, a lot of headaches.