Covariance and Contravariance in C#, Part Nine: Breaking Changes


Today, in the last entry in my ongoing saga of covariance and contravariance, I’ll discuss what breaking changes adding this feature might cause.

Simply adding variance awareness to the conversion rules should never cause any breaking change. However, the combination of adding variance to the conversion rules and making some types have variant parameters causes potential breaking changes.

People are generally smart enough to not write:

if (x is Animal)
  DoSomething();
else if (x is Giraffe)
  DoSomethingElse(); // never runs

because the second condition is entirely subsumed by the first. But today in C# 3.0 it is entirely sensible to write

if (x is IEnumerable<Animal>)
  DoSomething();
else if (x is IEnumerable<Giraffe>)
  DoSomethingElse();

because there used to be no conversion between IEnumerable&lt;Animal&gt; and IEnumerable&lt;Giraffe&gt;. If we turn on covariance in IEnumerable&lt;T&gt;, and the compiled program containing the fragment uses the new library, then its behaviour when given an IEnumerable&lt;Giraffe&gt; will change. The object will be assignable to IEnumerable&lt;Animal&gt;, and therefore the “is” test will report “true”.
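To make the break concrete, here is a minimal compilable sketch (the Animal and Giraffe types and the Classify helper are assumed, as in the rest of the series). Compiled against an invariant IEnumerable&lt;T&gt;, passing a List&lt;Giraffe&gt; takes the second branch; against a covariant IEnumerable&lt;T&gt;, the first test succeeds and the branch that runs changes:

```csharp
using System;
using System.Collections.Generic;

class Animal { }
class Giraffe : Animal { }

class Program
{
    public static string Classify(object x)
    {
        if (x is IEnumerable<Animal>)
            return "animals";
        else if (x is IEnumerable<Giraffe>)
            return "giraffes";
        return "other";
    }

    static void Main()
    {
        // A List<Giraffe> is an IEnumerable<Giraffe>. Without variance the
        // first test fails and the second succeeds; with a covariant
        // IEnumerable<T> it is also an IEnumerable<Animal>, so the first
        // test now succeeds instead.
        Console.WriteLine(Classify(new List<Giraffe>()));
    }
}
```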

There is also the issue of existing source code changing semantics or turning compiling programs into erroneous programs. For example, overload resolution may now fail where it used to succeed. If we have:

interface IBar<T>{} // From some other assembly
...
void M(IBar<Tiger> x){}
void M(IBar<Giraffe> x){}
void M(object x) {}
...
IBar<Animal> y = whatever;
M(y);

Then overload resolution picks the object version today because it is the sole applicable choice. If we change the definition of IBar to

interface IBar<-T>{}

and recompile then we get an ambiguity error because now all three are applicable and there is no unique best choice.
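A compilable sketch of the “before” behaviour (type and method names assumed): with the invariant IBar&lt;T&gt;, only the object overload is applicable, so the call resolves quietly; flip the interface to contravariant and the same call becomes an ambiguity error:

```csharp
using System;

interface IBar<T> { }   // invariant, as today

class Animal { }
class Tiger : Animal { }
class Giraffe : Animal { }

class Program
{
    public static string M(IBar<Tiger> x) { return "tiger"; }
    public static string M(IBar<Giraffe> x) { return "giraffe"; }
    public static string M(object x) { return "object"; }

    static void Main()
    {
        IBar<Animal> y = null;
        // IBar<Animal> converts to neither IBar<Tiger> nor IBar<Giraffe>,
        // so M(object) is the sole applicable candidate.
        Console.WriteLine(M(y));

        // If IBar<T> were declared contravariant, IBar<Animal> would
        // convert to both IBar<Tiger> and IBar<Giraffe>; all three
        // overloads would be applicable, no unique best choice exists,
        // and the call would no longer compile.
    }
}
```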

We always want to avoid breaking changes if possible, but sometimes new features are sufficiently compelling and the breaks are sufficiently rare that it’s worth it. My intuition is that by turning on interface and delegate variance we would enable many more interesting scenarios than we would break.

What are your thoughts? Keep in mind that we expect that the vast majority of developers will never have to define the variance of a given type argument, but they may take advantage of variance frequently. Is it worth our while to invest time and energy in this sort of thing for a hypothetical future version of the language?

  • Jon

I'd rather learn a few new language features than a heap of libraries and their individual ways of dealing with what the language leaves out. The language is at the core of what we're doing, and a lot of people are dedicating way too little time to learning it (as opposed to learning IDEs and designers, frameworks and libraries, VS guidance automation stuff and VSTS, ...)

C# is the language for code-based productivity. (I'm sure I read that somewhere.) People who prefer the complexity in the stuff orbiting around the language are better served with VB.NET - the languages are finally beginning to follow their different roadmaps instead of looking like the same language with two different syntax-skins (Sun liked to say that, and I'm glad it's no longer true).

For code-based productivity, you need powerful features. Sooner or later I hope we'll even see some meta-programming or AOP mechanisms in C#. You really don't have to understand them at every level just to benefit; most of the stuff is just for sophisticated library developers anyway.

    Which is true for variance too, btw. You complain that it was hard to explain why a List<string> could not be returned as an IEnumerable<object>. Now what makes you think that it's harder to explain why this is now possible? People who don't like to think about stuff like that are not going to complain that they don't get compiler errors anymore. Just like they don't complain that covariance works for arrays now.

How hard do you think it is to explain that while it won't work for List&lt;string&gt;, you _can_ return a string[] as an IEnumerable&lt;object&gt; today, and to make people not only understand the difference, but also be aware that they could write code that works differently for arrays and for collection classes? Just bugs waiting to happen.

    And then I imagine this in the context of LINQ, where most of the people will have no idea of what a from/select statement is transformed into, have no real idea of how IEnumerable/IQueryable, extension methods and Lambdas work together to actually compile this, but will find it to "just work" most of the time. Except when they run across the missing covariance of IEnumerable, that is.
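The array/list asymmetry the commenter describes can be seen in a short fragment (a sketch; in a later, variance-aware compiler the commented-out line compiles as well):

```csharp
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Array covariance: a string[] can be used as an
        // IEnumerable<object> today.
        IEnumerable<object> fromArray = new string[] { "a", "b" };

        // Not legal in C# 3.0, although it would be just as type-safe:
        // IEnumerable<object> fromList = new List<string> { "a", "b" };

        foreach (object o in fromArray)
            System.Console.WriteLine(o);
    }
}
```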

While it seems to me all the focus is on List&lt;T&gt; (the so-called generic co/contra-variance), please keep in mind that some of us also want the simpler co/contra-variance for sub-methods.

public class SubC : SuperC { }

public class A
{
    public virtual SuperC Method1(SubC subc) { ... }
}

public class B : A
{
    public override SubC Method1(object subc) { ... }
}

Which, if I understand the concept correctly, does not require any additional syntax. Certainly B has always been allowed to return a more restrictive subset of values. It's just a matter of letting us express it so that the type system is made aware of that fact. E.g.

B b = GetB();

b.Method1(someSubC).SomeMethodOnlyAvailableOnSubC();

As for letting B handle wider parameters, I'm not sure whether that would require any extra syntax. It seems like it wouldn't; it should just be allowed. B has to meet the contract that A defines. If B does anything above and beyond that, there is no harm; it has not violated A's contract by doing more.

I was hoping that VS2008 would have sub-method co/contra-variance, but it's not there in beta 2. Any chance that the final release will have it? Or is that logic pretty much all tied in with the generic co/contra-variance (i.e. C# 4.0)?

  • There will be no features in C# in the final release that were not in the final beta. Adding features after final beta means shipping features that have never been beta tested, and we try very hard not to do that.

    In this series I explicitly did NOT discuss "override variance". I am well aware that a lot of people want this feature, and I may do another series on it in the future, but that's not what I've been talking about here.

    Override variance is completely orthogonal to interface/delegate variance. They have nothing to do with each other (except insofar as interface variance might make more scenarios eligible for override variance.)

    And there is no such thing as C# 4.0.  Remember, this is all hypothetical discussion at this point. We have not announced any such product, so it is premature to be discussing specifics of its feature set!

So nicely step by step blogged by Eric Lippert for "Covariance and Contravariance" as "Fabulous
