I recently ran into a piece of code that looked like this:
int len = cchIn;
strncpy(dest, src, len - 1);
This is bad, because strncpy is declared like so:
char *strncpy( char *strDest, const char *strSource, size_t count );
The original complaint was that we were passing a signed int into a function that takes a size_t, an unsigned type. While that is indeed a bad thing, fixing it alone won't really save you. In fact, suppose we had this:
unsigned int len = cchIn;
It's just as vulnerable as the previous version, and perhaps worse, because the code analysis tool might not call our attention to it. What happens if we somehow end up with len = 0? Computing 0 - 1, whether len is signed or unsigned, wraps around and passes 0xffffffff (or larger, on a 64-bit platform) to strncpy, which is really the same thing as just calling strcpy! Oops.
This illustrates why strncpy_s has a different signature:
errno_t strncpy_s( char *strDest, size_t sizeInBytes, const char *strSource, size_t count );
Even if you foul up count, you're unlikely to foul up sizeInBytes as well, and the function can then catch the error instead of overrunning the buffer.
Lessons learned: while it is always better to use the type the function wanted in the first place, and you should always be careful of signed-unsigned mismatches, just switching to unsigned numbers might not save you. Second lesson: doing math inside a function argument, where all sorts of bad things can happen and you can't check anything, isn't wise. The last and most important lesson is that there are good reasons why we now have strncpy_s and friends, and just because you're not using strcpy doesn't mean your code is safe.
Personally, I prefer to use string classes to prevent a lot of this sort of mayhem. When IIS 6 was being developed, this was one of the major changes they made to prevent buffer overruns. Given IIS 6's really good security record, that's a strong argument for following their example. We make use of a lot of these in Office, too.