Future Breaking Changes, Part One

We on the C# team hate making breaking changes.

As my colleague Neal called out in his article on the subject from a couple years ago, by “breaking change” we mean a change in the compiler behaviour which causes an existing program to either stop compiling entirely, or start behaving differently when recompiled. We hate them because they can cause intense customer pain, and are therefore barriers to upgrading. We work hard on improving the quality of the C# toolset every day and want our customers to gain the benefits of upgrading; they cannot do that if doing so causes more pain than the new features are worth.

Which is not to say that we do not make breaking changes occasionally. We do. But in those circumstances where we have to, we try to mitigate the pain as much as possible. For example:

  • We try to ensure that breaking changes only affect unusual corners of the language which real production code is unlikely to hit.
  • When we do cause breaking changes, we sometimes introduce heuristics and warnings which detect the situation and warn the developer.
  • If possible, breaking changes should move the implementation into compliance with the published standard. (Often breaking changes are a result of fixing a compliance bug.)
  • We try to communicate the reasoning behind a breaking change crisply and succinctly.
  • And so on.

I could write a whole series of blog articles about specific breaking changes – and in fact, I have done so many times over the years. (For example, here, here, here, here and here.)

Given that we hate breaking changes, clearly we want to ensure that as we add new features to the C# language we do not break existing features. New features start out with points against them and have to justify their benefits. If the feature is a breaking change, that is hugely more points against it.

For example, adding generics to C# 2.0 was a breaking change. This program, legal in C# 1.0, no longer compiles:

class Program {
    static void M(bool x, bool y) {}
    static void Main() {
        int N = 1, T = 2, U = 3, a = 4, b = 5;
        M(N < T, U > (a+b));
    }
}

because we now think that N is a generic method of one argument. But the compelling benefit of generics greatly outweighed the pain of this rather contrived example, so we took the breaking change. (And the error message now produced diagnoses the problem.)
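To recover the C# 1.0 meaning under the new compiler, it suffices to parenthesize the arguments so that they can only parse as comparisons. A sketch of the fix, using the same contrived program:

```csharp
class Program {
    static void M(bool x, bool y) {}
    static void Main() {
        int N = 1, T = 2, U = 3, a = 4, b = 5;
        // The extra parentheses force each argument to parse as a
        // relational expression, so the compiler can no longer read
        // this as a generic invocation N<T, U>(a+b):
        M((N < T), (U > (a + b)));
    }
}
```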

But what I want to talk about in this set of articles is something a bit more subtle than these specific breaking changes.

Given that we hate breaking changes, we want to ensure that as we add new features to the C# language we are not setting ourselves up to have to make breaking changes in the future. If we are, then the feature needs to be justified against not only any breaking changes it is presently introducing, but also against the potential for breaking changes in the future. We do not want to introduce a new feature that makes it harder for us to introduce entirely different features in the future unless the proposed new feature is really great.

Implicitly typed lambdas are an example of a feature which will cause us innumerable breaking change headaches in the future, but we believe that the compelling user benefit of them is so high (and the breaks are sufficiently isolated to corner cases) that we are willing to take that pain.

Next time on FAIC, I’ll describe in detail how it is that implicitly typed lambdas are going to make it harder to get new features into the C# type system because of potential breaking changes.

  • That being the case, how come Nullable<>.Value and .HasValue stuck around in C# 2.0, making it impossible to lift methods from types to their corresponding nullables without a breaking change in the future?

    Were those properties never even evaluated from that POV? Or was it that the last-minute DCR was already so large in scope that there was no time to fix the remaining issues?

  • I was not on the C# team then, so I really don't know what the process was when evaluating the nullable DCR.

  • Whenever you bring up the issue of breaking changes, I wonder why source files (or assemblies, or IL code) are not marked with version numbers indicating which language version the code that follows should be interpreted under.

    For example, precede all C# 3.0 code with "<C#3.0>" or similar.  Code without this tag would be compiled with the old compiler; code with the tag would be compiled with the new compiler.  IL code could be marked similarly for the runtime, if need be.

    Obviously such a compiler directive has been considered.  Can you explain what is wrong with this idea, and maybe give an example of where it might be a source of excruciating pain?

    Thanks

  • I'm sure that opinions vary, but as a developer, I really do not mind breaking changes.

    When I develop an application, I do so under a specific version. It is ridiculous to assume that it will, or even should, function the same on newer versions without extensive verification and testing. Break whatever you need. Just issue a warning or error, as deemed appropriate, and I will fix my code.

    I think that Microsoft makes too much of a fuss over breaking changes. In particular, the VC++ team. There are known standard violations (such as two-phase name lookup) which have yet to be fixed, seemingly out of fear that fixing them will break too many people's code. That code is already broken, whether its authors know it or not. Unfortunately, the diagnostics are not in place to notify developers of the problems to fix.

    Do not be so afraid to break code at the cost of real improvement. If you are still hesitant, then just add a compiler flag to enable both versions.

    Looking forward to future versions of C#.

    PS Can we pleeeeeease get generic variant support in C#? I'm constantly needing to drop down to the IL level...

    http://research.microsoft.com/~akenn/generics/ECOOP06.pdf

  • Just out of interest, why did C# have trouble with the piece of code you cite?

    I tried the same thing in C++ out of curiosity more than anything else, and it gets it right -- even if a template (class or function) named N exists, as long as it is outside the function's scope. And even if you leave the spaces out.

    Maybe I just don't understand the C# scoping rules well enough (I'm not a C# programmer, after all), but I would have thought that the N in the local scope won, and since it isn't generic, the parser wouldn't treat the code as a call to a generic method.

    As I said, not a nit pick, just curiosity.

  • Stewart, just a guess since I have no particular knowledge of the answer to your question, but my guess would be that C# endeavors to keep the syntactic parsing separate from the semantic analysis. That is to say, the angle brackets in Eric's example need to be parsed as either generic parameter brackets or greater-than/less-than operators at a lower layer than the layer that has knowledge of what names are in scope.

    Presumably it would be futile to even attempt to maintain any separation like that in C++ so the syntactic parser is permitted to "reach up" to a higher layer for semantic information to enable it to parse correctly. I'm entirely unconvinced this is better ;)

  • I remember the days when I understood Eric's blog. *sigh*

    Bring back the VBScript blogging days...

  • Stewart: Stuart is correct. We try hard to keep the syntactic analysis of the language independent of the semantic analysis. Languages which do not have this property (eg, JScript) can be very difficult to analyze correctly.

    I do not know what rules C++ uses for parsing templates.

  • Kyle: That is not a bad idea.  We do not do what you suggest, but we do have a related feature.  The C# 2.0 and 3.0 compilers have a /langversion switch. This switch does NOT cause the C# 2.0 compiler to act like the C# 1.0 compiler -- if that's what you want, just use the 1.0 compiler!  Rather, it gives an error for every program which uses a feature not found in the specified version.
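    As an illustration of the switch described above (the values shown are those the 2.0/3.0 command-line compiler accepts; the file name is hypothetical):

    ```shell
    # Reject any feature not in the ISO C# 1.0 standard:
    csc /langversion:ISO-1 Program.cs

    # Reject any feature not in the ISO C# 2.0 standard:
    csc /langversion:ISO-2 Program.cs
    ```

    Note that either invocation still uses the current compiler's code generation and bug fixes; the switch only polices which language features the source may use.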

  • maro: I am presently researching how hard it would be to add some variance to hypothetical future versions of the C# language. Please send me an email describing the scenarios in which having generic variance would make your life easier; the more real-world customer feedback we have, the more likely we are to design a feature which meets your needs.

  • I second maro's generic variance support. I often run into situations where my arguments, often a list of something, are more strongly typed than the method signatures require. As a workaround, I need to create a new list, downcasting to the expected type. This is both unproductive and a waste of resources.

    Variance support would also eliminate a class of required "hacks" as seen in IEnumerable<T> needing to implement IEnumerable etc.

    maro: Excellent article. I hope Microsoft can implement something like that in the near future.
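    A sketch of the copy-and-convert workaround described above, using hypothetical Animal/Giraffe types and a hypothetical Feed method (not from the original post):

    ```csharp
    using System.Collections.Generic;

    class Animal {}
    class Giraffe : Animal {}

    class Program {
        // The method asks for the weaker type...
        static void Feed(List<Animal> animals) { /* ... */ }

        static void Main() {
            List<Giraffe> giraffes = new List<Giraffe>();
            giraffes.Add(new Giraffe());

            // ...but without variance, a List<Giraffe> is not a
            // List<Animal>, so the caller must copy element by element:
            List<Animal> animals = new List<Animal>();
            foreach (Giraffe g in giraffes)
                animals.Add(g);

            Feed(animals);
        }
    }
    ```

    The copy is pure overhead: every element already is an Animal, so with variance support no new list would be needed.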

  • My favorite flaming topic: the C# event invocation.

    Today, I still fail to understand why we need to check for listeners before we invoke our events. The whole point of the publisher/subscriber model is to make the publisher unaware of its subscribers. The null check should have been built in from the start.

    I proposed this change some time ago, and was told it can't be done, as it can break existing code. I fail to understand that, too.

    I don't want to:

    if (myEvent != null) myEvent();

    What I want is:

    myEvent();  // period.

    Please save us from all the duplication.

    There. I got it out. Thanks for reading :)
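    For reference, the commonly recommended version of the check copies the delegate to a local first; a minimal sketch (the Publisher/OnChanged names are made up for illustration):

    ```csharp
    using System;

    class Publisher {
        public event Action Changed;

        protected void OnChanged() {
            // Copy to a local so the null check and the invocation see
            // the same delegate, even if the last subscriber detaches
            // on another thread between the two statements:
            Action handler = Changed;
            if (handler != null)
                handler();
        }
    }
    ```

    Which is, of course, even more boilerplate than the bare null check the comment above objects to.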
