Windows and the .Net Framework have the concept of "best-fit" behavior for code pages and encodings. Best fit can be interesting, but it's often not a good idea. In WideCharToMultiByte() this behavior is controlled by the WC_NO_BEST_FIT_CHARS flag. In .Net you can use the EncoderFallback to control whether or not you get best fit behavior. Unfortunately, in both cases best fit is the default behavior. In Microsoft .Net 2.0 best fit is also slower.
The underlying problem that best fit tries to solve is "Gee, Unicode has about a gajillion more characters than 1252, how do we get them all in?" Unfortunately, that is the problem: they won't all fit. 1252 has 256 characters; the nearly 100,000 Unicode characters just won't fit. So what best fit tries to do is cram as many characters as possible into the limited set of the code page by mapping them to things they might look like. So c with a dot above (ċ, U+010B) is mapped to a plain old c with no dot, Japanese full-width forms are mapped to their half-width forms, etc. There are lots of problems with this solution, and I'll mention some of them here:
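To make the idea concrete, here's a rough sketch in Python of the kind of mapping best fit performs. This is *not* Windows' actual best fit table (those mappings are per-code-page and hand-built); it just approximates the "map it to something that looks similar" idea using compatibility normalization and dropping combining marks:

```python
import unicodedata

def best_fit_sketch(text):
    """Rough approximation of best-fit style mapping: decompose with
    compatibility normalization (NFKD), then drop combining marks.
    Not the real Windows table -- just an illustration of the idea."""
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(ch for ch in decomposed
                   if not unicodedata.combining(ch))

# U+010B (c with dot above) loses its dot...
print(best_fit_sketch("\u010b"))   # c
# ...and a Japanese full-width form becomes its half-width form.
print(best_fit_sketch("\uff21"))   # A  (from fullwidth A)
```

The real best fit tables also contain ad hoc mappings that normalization can't reproduce, which is part of why the behavior is so hard to reason about.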
It's also worth noting that there are a few rare cases where best fit can happen when decoding data with MultiByteToWideChar, Encoding.GetString, or the Decoder class.
For both Windows and Microsoft .Net, the best plan is to use Unicode when possible; either UTF-8 or UTF-16 is usually a good choice. Sometimes that's not possible, usually because of a protocol limitation. Best fit is a particularly poor choice when working around a protocol limitation, since such protocols are usually explicit and best fit mappings could cause security holes or protocol violations. In those cases, finding extensions or newer protocols that handle Unicode is a good idea, but with some, like e-mail headers [;)], we're stuck.
In Windows you can disable the best fit behavior by using the WC_NO_BEST_FIT_CHARS flag. In the framework you can do so by changing the EncoderFallback and DecoderFallback. Encoding.GetEncoding(xxx, EncoderFallback.ReplacementFallback, DecoderFallback.ReplacementFallback) or the exception fallbacks (EncoderExceptionFallback and DecoderExceptionFallback) are good choices. Note that in .Net 2.0 there is no public "best fit" fallback, only an internal best fit fallback that is used by default, so once you change an encoding's EncoderFallback or DecoderFallback you cannot easily get the best fit behavior back.
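For readers outside the .Net world, Python's codecs make a handy analog: its cp1252 codec never best-fits, so its error handlers behave like the replacement and exception fallbacks described above. A small sketch (the `errors="replace"` handler plays the role of ReplacementFallback, and `errors="strict"` plays the role of the exception fallbacks):

```python
# U+010B (c with dot above) has no cp1252 mapping, so the
# error handler -- not a best fit table -- decides what happens.
text = "\u010b"

# Like EncoderFallback.ReplacementFallback: substitute '?'.
print(text.encode("cp1252", errors="replace"))   # b'?'

# Like EncoderExceptionFallback: refuse and raise.
try:
    text.encode("cp1252", errors="strict")
except UnicodeEncodeError as e:
    print("strict raised:", e.reason)
```

Either behavior is predictable and explicit, which is exactly what you want when a wrong character could mean a security hole.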
If you are aware of the limitations of the fallbacks and want consistent behavior anyway, one option to consider is writing your own fallback. I made a prototype fallback that uses normalization to decompose a string into its component parts. By doing this, things like the kPa symbol can change to k + P + a. It still doesn't work across all languages, though, since ü would still become a u instead of a ue in German. So even though this can be a fun experiment, it's still better to Use Unicode!
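The prototype I describe was written against the .Net fallback classes, but the core idea is easy to sketch in Python with `unicodedata`: decompose the unmappable character with compatibility normalization and keep only the pieces the target code page can represent. (The function name here is mine, for illustration.)

```python
import unicodedata

def normalization_fallback(ch):
    """Sketch of a normalization-based encoder fallback: decompose the
    character (NFKD) and keep only the pieces cp1252 can represent."""
    pieces = unicodedata.normalize("NFKD", ch)
    return "".join(p for p in pieces
                   if p.encode("cp1252", errors="ignore"))

# The kPa symbol (U+33AA) decomposes into its component letters.
print(normalization_fallback("\u33aa"))   # kPa
# But u-umlaut just loses its umlaut -- 'u', not the German 'ue'.
print(normalization_fallback("\u00fc"))   # u
```

The ü example shows exactly why this stays an experiment: normalization knows about character structure, not about language-specific transliteration conventions.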