The BCL team is having some design discussions on how to fix a bug that was recently found.  I thought you might want to chime in as some of the implications have wide applicability.  

 

Here is the thread from Anthony… comments welcome.

 

 

I would like to get some discussion of this design/compatibility issue:

 

<BugNumber>               Parse and TryParse on all numerical base types should check for negative NumberStyles

 

The Issue

The Parse and TryParse routines on our base data types that take either NumberStyles or DateTimeStyles effectively don’t do any range checks on them. You can do the following and neither call will throw:

 

            Int32.Parse(s, (NumberStyles) 0xFFFFFFFF);

            Int32.Parse(s, (NumberStyles) (-1));

 

Strangely enough, we actually do a little bit of validation on the styles for both numbers and dates: we validate certain corner cases where styles or formats don’t make any sense together. However, nothing blows up if you use a style value that we have not defined yet.

 

I don’t think there is any doubt that whatever we do, it should be consistent across data types and across both Parse and TryParse.

 

The issue is: given that we have already shipped two versions with no range checks, should we put them in now? There is a slight backward-compatibility risk, and some forward-compatibility concerns.

 

A broader issue is: how important are range checks for flags-style enums, anyway? I would think that we would advocate range checks for sure if we could wind the clock back to V1.0. However, if someone from the old days remembers a reason why these were left off, please speak up.

 

Considerations

This issue is practically performance neutral. We already have a site that does some checks and throws if the arguments are invalid, so all this does is add a couple of branches. We would not fix this by calling Enum.IsDefined; rather, we would compare against a special range and throw.
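To make the proposed check concrete, here is a sketch of what the range check might look like. The mask constant below is illustrative only, not the actual internal constant; it assumes all defined NumberStyles bits are covered by NumberStyles.Any plus AllowHexSpecifier.

```csharp
// Sketch only: the mask is an assumption, not the shipped implementation.
// Any bit outside the styles we have defined counts as invalid.
const NumberStyles InvalidNumberStyles =
    ~(NumberStyles.Any | NumberStyles.AllowHexSpecifier);

static void ValidateParseStyle(NumberStyles style)
{
    // A single extra branch: if any undefined bit is set, the caller
    // passed a bogus value, so we throw instead of silently parsing.
    if ((style & InvalidNumberStyles) != 0)
        throw new ArgumentException(
            "An undefined NumberStyles value is being used.", "style");
}
```

Because the mask is a compile-time constant, this costs one AND and one compare at the existing validation site.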

 

The backward-compatibility issue is that someone could have written pathological code that passes in a bogus enum value but otherwise works fine, and adding these range checks would break them at upgrade time. They are somewhat at risk anyway, because we could decide to attribute meaning to those flags in a future version; the code would then just do something unexpected. By omitting the argument checks in V1.0, we effectively made any further additions to the enums breaking changes. If we want to be really hard-core, we need to freeze these enums and create new ones for any further opt-in behavior.

 

A forward-compatibility question also arises: suppose you want to use a new flag value in Whidbey, but your app also has to work down-level or locked back. If we follow this pattern, the code would just throw down-level. If we don’t, the code would not throw, but it would not use the opt-in feature and might fail in a more subtle way. Which is worse? Both are kind of bad.

 

The difficulty with this issue is that the reason it feels safe to break this is that we think people won’t be doing it. However, if that is the case, why would we bother to fix it? The argument is that more people will use the framework in the long run and would rather get a good argument check. The counter-argument is that letting people find out they have used an invalid flag at upgrade time is not the ideal timing; a CDP might be better.

 

Options

1.       Hard-core non-breaking. We don’t add the argument checks, and we freeze the enums. We would need to add new enums for both numbers and dates for any opt-in behavior, and we would need to take a DCR to change the existing flags that have already been added to DateTimeStyles and add them in a different way.

2.       Hard-core argument-checking. We add the argument checks in Whidbey and consider it a reasonable breaking change for the future of the platform. We then keep extending those enums as the need arises.

3.       Loosey-goosey. We could say that we don’t want to put the argument checks in, but we don’t want to freeze the enums either. If people use invalid bits, they do so at their own risk and might get different behavior across versions, but we won’t break them unless we strictly need to.

 

Kit said definitely #2. I originally said #3, but I think I have talked myself into #2 as well. However, as with any potentially breaking change, I want to get some broader opinion on it.

 

Does anyone see any objection to doing #2?

 

Side Issues

Validating a flags enum correctly is actually quite hard. You can’t use the tantalizing Enum.IsDefined, because valid combinations of flags are not themselves defined values. The correct way is to hack up an “inverse” of the defined bits and compare against that, which is very hard to discover and icky to code. Alternatively, you can do a range check based on the unused bits, but that is also very counter-intuitive.
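The “inverse of the defined bits” technique can be sketched as follows. MyStyles is a made-up flags enum purely for illustration.

```csharp
// Sketch of validating a [Flags] enum via the inverse of its defined bits.
[Flags]
enum MyStyles { OptionA = 0x1, OptionB = 0x2, OptionC = 0x4 }

// The "inverse" mask: every bit NOT defined by the enum.
const MyStyles UndefinedBits =
    ~(MyStyles.OptionA | MyStyles.OptionB | MyStyles.OptionC);

static bool IsValid(MyStyles value)
{
    // Valid if and only if no undefined bit is set. Note that
    // Enum.IsDefined would wrongly reject a legitimate combination
    // such as OptionA | OptionB, because that combined value is not
    // itself a named member of the enum.
    return (value & UndefinedBits) == 0;
}
```

The ickiness is that the mask must be kept in sync by hand with every member of the enum.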

 

One problem is that you can’t just create a magic API on Enum to fix either the flags or the non-flags range checks, because you explicitly want to bake the valid values in at compile time rather than look them up at run time. It would be good if there were a compile-time intrinsic like Enum.IsValidRange or Enum.IsValidFlags that got expanded to a range check at compile time instead of a function call. Possibly we should discuss this idea with the C# team for the future.
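To illustrate the idea, here is what such an intrinsic might look like. Enum.IsValidFlags is hypothetical and does not exist; the expansion shown is an assumption about how a compiler might implement it.

```csharp
// Hypothetical sketch; Enum.IsValidFlags is not a real API.
//
// What the caller would write:
//
//     if (!Enum.IsValidFlags(style))
//         throw new ArgumentException("Undefined style bits.", "style");
//
// What the compiler would expand it to, with the mask of bits defined
// at the caller's compile time baked directly into the call site:
//
//     if (((int)style & ~0x1FF) != 0)   // 0x1FF: illustrative mask
//         throw new ArgumentException("Undefined style bits.", "style");
```

Because the mask is captured when the caller is compiled, later additions to the enum would not silently change the behavior of already-compiled code.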

 

One cheat is that we could create such an API, and it would be safe for us to use it because we would always bind to the same version of the Enum class. However, other library builders would have the same versioning problem as with Enum.IsDefined, and we would have to advise them not to use it. The performance of these sorts of APIs is also a problem.