From time to time, I'm going to post a code snippet with a subtle bug in it for people interested in tracking down such things.  Here's the first one (C/C++):

TCHAR g_szFoo[10];

void CopyArg(TCHAR * pszArg)
{
      _tcsncpy(g_szFoo, pszArg, (sizeof g_szFoo) / (sizeof(TCHAR)));

      // other logic -- null term the string, etc.

      return;
}

What's wrong with this code?  The problem is that it states the element type of g_szFoo in two different places: once when the global is defined, and again in the sizeof(TCHAR) expression in the _tcsncpy() call.  Why is that bad?  What happens if someone changes g_szFoo to explicitly use a narrow or wide char type?  He has to remember to also change the sizeof(TCHAR) reference in the string copy.  If he doesn't, the count passed to _tcsncpy() no longer matches the buffer, and he may see a buffer overrun, depending on the type chosen and whether _UNICODE is defined.  How do you fix this?  Like this:

void CopyArg(TCHAR * pszArg)
{
      _tcsncpy(g_szFoo, pszArg, (sizeof g_szFoo) / (sizeof g_szFoo[0]));

      // other logic -- null term the string, etc.

      return;
}

Now we're happy regardless of what base data type g_szFoo has.