From time to time, I'm going to post a code snippet with a subtle bug in it for people interested in tracking down such things. Here's the first one (C/C++):
void CopyArg(TCHAR * pszArg)
{
    _tcsncpy(g_szFoo, pszArg, (sizeof g_szFoo) / (sizeof(TCHAR)));
    // other logic -- null-terminate the string, etc.
}
What's wrong with this code? The problem is that it computes the element size of g_szFoo in two different places: once where the global is defined, and again in the sizeof(TCHAR) term of the _tcsncpy() call. Why is that bad? What happens if someone changes g_szFoo to explicitly use a narrow or wide char type? He has to remember to also change the sizeof(TCHAR) reference in the string copy. If he doesn't, he may get a buffer overrun, depending on the type chosen and whether _UNICODE is defined. How do you fix this? Like this:
_tcsncpy(g_szFoo, pszArg, (sizeof g_szFoo) / (sizeof g_szFoo[0]));
Now the element count is derived from the buffer itself, and we're happy regardless of what element type g_szFoo has.
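Here's a minimal, self-contained sketch of the fixed function that you can compile outside of Windows. It assumes TCHAR is wchar_t (the _UNICODE case) and substitutes wcsncpy for _tcsncpy; the 32-element buffer size is arbitrary:

```cpp
#include <cassert>
#include <cstddef>
#include <cwchar>

// Assumption for portability: TCHAR modeled as wchar_t (_UNICODE case).
typedef wchar_t TCHAR;

TCHAR g_szFoo[32];

void CopyArg(const TCHAR* pszArg)
{
    // The element count comes from the buffer itself, so it stays
    // correct even if g_szFoo's element type changes later.
    const std::size_t cch = (sizeof g_szFoo) / (sizeof g_szFoo[0]);
    std::wcsncpy(g_szFoo, pszArg, cch);
    // wcsncpy doesn't null-terminate when the source fills the buffer.
    g_szFoo[cch - 1] = L'\0';
}
```

Note that the "other logic" from the original snippet still matters: _tcsncpy (like wcsncpy) leaves the destination unterminated when the source is at least as long as the count, so the explicit terminator on the last element is not optional.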