We have an unfortunate bug in .NET v2.0+ that causes encoding or decoding of more than 2GB of data to fail. That's a lot of data, but it still shouldn't do that. The problem lies in our built-in encoder/decoder fallbacks.
Ironically, if you encounter bad bytes, the internal state is reset and you're "good" for another 2GB. The bug affects most of our code pages when processing valid data, but some optimizations make it unlikely to occur with the Unicode, ASCII, and Latin-1 encodings. There are some workarounds, though some of them don't work if you're insulated from the decoder/encoder (for example, when using a StreamWriter):
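The general shape of a workaround like this is to process the data in chunks with a stateful decoder and swap in a fresh decoder whenever a chunk boundary happens to fall between whole characters, so no single decoder instance ever sees an unbounded amount of data. As a sketch of that pattern (illustrated here in Java with `CharsetDecoder`, since the idea is the same; the names below are mine, not a .NET API), the .NET analogue would be grabbing a new `Decoder` from `Encoding.GetDecoder()` at a safe boundary:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class ChunkedDecode {
    // Decode bytes chunk by chunk with a stateful decoder. Whenever a
    // chunk boundary falls between whole characters (no partial bytes
    // pending), replace the decoder with a fresh instance so that no
    // single decoder ever accumulates unbounded internal state.
    public static String decodeInChunks(byte[] data, int chunkSize) {
        Charset cs = StandardCharsets.UTF_8;
        CharsetDecoder dec = newDecoder(cs);
        StringBuilder out = new StringBuilder();
        // A few bytes of slack to hold a partial multi-byte sequence
        // carried over from the previous chunk.
        ByteBuffer pending = ByteBuffer.allocate(chunkSize + 8);
        CharBuffer chars = CharBuffer.allocate(chunkSize + 8);

        for (int pos = 0; pos < data.length; pos += chunkSize) {
            int len = Math.min(chunkSize, data.length - pos);
            pending.put(data, pos, len);
            pending.flip();
            boolean last = pos + len >= data.length;
            dec.decode(pending, chars, last);
            if (last) {
                dec.flush(chars);
            }
            chars.flip();
            out.append(chars);
            chars.clear();
            if (!pending.hasRemaining()) {
                // Safe boundary: no partial character is pending, so we
                // can start over with a brand-new decoder.
                dec = newDecoder(cs);
            }
            // Keep any leftover partial-character bytes for the next chunk.
            pending.compact();
        }
        return out.toString();
    }

    private static CharsetDecoder newDecoder(Charset cs) {
        return cs.newDecoder()
                .onMalformedInput(CodingErrorAction.REPLACE)
                .onUnmappableCharacter(CodingErrorAction.REPLACE);
    }

    public static void main(String[] args) {
        byte[] utf8 = "héllo wörld ©2024".getBytes(StandardCharsets.UTF_8);
        // A tiny chunk size forces multi-byte sequences to straddle chunks,
        // exercising the carry-over and the safe-boundary reset.
        System.out.println(decodeInChunks(utf8, 3));
    }
}
```

The key detail is that the decoder may only be replaced when no partial character is buffered; resetting mid-character would corrupt the output.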
Hope that helps,