Putting a base in the middle

Here’s a crazy-seeming but honest-to-goodness real customer scenario that got reported to me recently. There are three DLLs involved, Alpha.DLL, Bravo.DLL and Charlie.DLL. The classes in each are:

public class Alpha // In Alpha.DLL
{
  public virtual void M()
  {
    Console.WriteLine("Alpha");
  }
}

public class Bravo: Alpha // In Bravo.DLL
{
}

public class Charlie : Bravo // In Charlie.DLL
{
  public override void M()
  {
    Console.WriteLine("Charlie");
    base.M();
  }
}

Perfectly sensible. You call M on an instance of Charlie and it says “Charlie / Alpha”.
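
For concreteness, here's a minimal driver; the Program class is my own sketch, not part of the customer's report:

public class Program
{
  public static void Main()
  {
    new Charlie().M(); // prints "Charlie" then "Alpha"
  }
}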

Now the vendor who supplies Bravo.DLL ships a new version which has this code:

public class Bravo: Alpha
{
  public override void M()
  {
    Console.WriteLine("Bravo");
    base.M();
  }
}

The question is: what happens if you call Charlie.M without recompiling Charlie.DLL, but you are loading the new version of Bravo.DLL?

The customer was quite surprised that the output is still “Charlie / Alpha”, not “Charlie / Bravo / Alpha”.

This is a new twist on the brittle base class failure; at least, it’s new to me.

Customer: What’s going on here?

When the compiler generates code for the base call, it looks at all the metadata and sees that the nearest valid method that the base call can be referring to is Alpha.M. So we generate code that says “make a non-virtual call to Alpha.M”. That code is baked into Charlie.DLL and it has the same semantics no matter what Bravo.DLL says. It calls Alpha.M.

Customer: You know, if you generated code that said “make a non-virtual call to Bravo.M”, the CLR would fall back to calling Alpha.M if there is no implementation of Bravo.M.

No, I didn’t know that actually. I’m slightly surprised that this doesn’t produce a verification error, but, whatever. Seems like a plausible behaviour, albeit perhaps somewhat risky. A quick look at the documented semantics of the call instruction indicates that this is the by-design behaviour, so it would be legal to do so.

Customer: Why doesn’t the compiler generate the call as a call to Bravo.M? Then you get the right semantics in my scenario!

Essentially what is happening here is the compiler is generating code on the basis of today's static analysis, not on the basis of what the world might look like at runtime in an unknown future. When we generate the code for the base call we assume that there are not going to be changes in the base class hierarchy after compilation. That seemed at the time to be a reasonable assumption, though I can see that in your scenario, arguably it is not.

As it turns out, there are two reasons to do it the current way. The first is philosophical and apparently unconvincing. The second is practical.

Customer: What’s the philosophical justification?

There are two competing "mental models" of what "base.M" means.

The mental model that matches what the compiler currently implements is “a base call is a non-virtual call to the nearest method on any base class, based entirely on information known at compile time.”

Note that this matches exactly what we mean by "non-virtual call". An early-bound call to a non-virtual method is always a call to a particular method identified at compile time. By contrast, a virtual method call is based at least in part on runtime analysis of the type hierarchy. More specifically, a virtual method identifies a "slot" at compile time but not the "contents" of that slot. The "contents", the actual method to call, is identified at runtime based on what the runtime type of the receiver stuffed into the virtual method slot.
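
To make the distinction concrete, here is a small sketch; the Animal and Giraffe types are invented for illustration (anticipating the example below):

class Animal
{
  public void Eat() { }          // non-virtual: the call target is fixed at compile time
  public virtual void Feed() { } // virtual: the slot is chosen at compile time,
                                 // its contents are looked up at runtime
}

class Giraffe : Animal
{
  public override void Feed() { } // fills the Animal.Feed slot for giraffes
}

class Demo
{
  static void Show(Animal animal)
  {
    animal.Eat();  // always Animal.Eat, whatever animal's runtime type
    animal.Feed(); // Giraffe.Feed when animal is a Giraffe at runtime
  }
}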

Your mental model is “a base call is a virtual call to the nearest method on any base class, based on both information known at runtime about the actual class hierarchy of the receiver, and information known at compile time about the compile-time type of the receiver.”

In your model the call is not actually virtual, because it is not based upon the contents of a virtual slot of the receiver. But neither is it entirely based on the compile-time knowledge of the type of the receiver! It's based on a combination of the two. Basically, it’s what would have been the non-virtual call in the counterfactual world where the compiler had been given correct information about what the types actually would look like at runtime.

A developer who has the former mental model (like, say, me) would be deeply surprised by your proposed behavior. If the developer has classes Giraffe, Mammal and Animal, Giraffe overrides virtual method Animal.Feed, and the developer says base.Feed in Giraffe, then the developer is thinking either like me:

I specifically wish Animal.Feed to be called here; if at runtime it turns out that evil hackers have inserted a method Mammal.Feed that I did not know about at compile time, I still want Animal.Feed to be called. I have compiled against Animal.Feed, I have tested against that scenario, and that call is precisely what I expect to happen. A base call gives me 100% of the safe, predictable, understandable, non-dynamic, testable behavior of any other non-virtual call. I rely upon those invariants to keep my customer's data secure.

Basically, this position is "I trust only what I can see when I wrote the code; any other code might not do what I want safely or correctly".

Or like you:

I need the base class to do some work for me. I want something on some base class to be called. Animal.Feed or Mammal.Feed, I don't care, just pick the best one - whichever one happens to be "most derived" in some future version of the world - by doing that analysis at runtime. In exchange for the flexibility of being able to hot-swap in new behavior by changing the implementation of my base classes without recompiling my derived classes, I am willing to give up safety, predictability, and the knowledge that what runs on my customer's machines is what I tested.

Basically, this position is "I trust that the current version of my class knows how to interpret my request and will do so safely and correctly, even if I've never once tested that."

Though I understand your point of view, I’m personally inclined to do things the safe, boring and sane way rather than the flexible, dangerous and interesting way. However, based on the several dozen comments on the first version of this article, and my brief poll of other members of the C# compiler team, I am in a small minority that believes that the first mental model is the more sensible one.

Customer: The philosophical reason is unconvincing; I see a base call as meaning “call the nearest thing in the virtual hierarchy”. What’s the practical concern?

In the autumn of 2000, during the development of C# 1.0, the behaviour of the compiler was as you expect: we would generate a call to Bravo.M and allow the runtime to resolve that as either a call to Bravo.M if there is one or to Alpha.M if there is not. My predecessor Peter Hallam then discovered the following case. Suppose the new hot-swapped Bravo.DLL is now:

public class Bravo: Alpha
{
  new private void M()
  {
    Console.WriteLine("Bravo");
  }
}

Now what happens? Bravo has added a private method, and one of our design principles is that private methods are invisible implementation details; they do not have any effect on the surrounding code that cannot see them. If you hot-swap in this code and the call in Charlie is realized as a call to Bravo.M then this crashes the runtime: the base call resolves as a call to a private method from outside the class, which is not legal. Non-virtual calls do matching by signature, not by virtual slot.

The CLR architects and the C# architects considered many possible solutions to this problem, including adding a new instruction that would match by slot, changing the semantics of the call instruction, changing the meaning of "private", implementing name mangling in the compiler, and so on. The decision they arrived at was that all of the above were insanely dangerous considering how late in the ship cycle it was, how unlikely the scenario is, and the fact that this would be enabling a scenario which is directly contrary to good sense; if you change a base class then you should recompile your derived classes. We don't want to be in the business of making it easier to do something dangerous and wrong.

So they punted on the issue. The C# 1.0 compiler apparently did it the way you like, and generated code that sometimes crashed the runtime if you introduced a new private method: the original compilation of Charlie calls Bravo.M, even if there is no such method. If later there turns out to be an inaccessible one, it crashes. If you recompile Charlie.DLL, then the compiler notices that there is an intervening private method which will crash the runtime, and generates a call to Alpha.M.
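
To make that concrete, here is the same Charlie source recompiled against the new Bravo; the comments describe the binding discussed above:

public class Charlie : Bravo
{
  public override void M()
  {
    Console.WriteLine("Charlie");
    base.M(); // Bravo.M is private and therefore invisible from Charlie;
              // the nearest accessible method is Alpha.M, so the freshly
              // compiled base call binds non-virtually to Alpha.M
  }
}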

This is far from ideal. The compiler is designed so that for performance reasons it does not load the potentially hundreds of millions of bytes of metadata about private members from referenced assemblies; now we have to load at least some of that. Also, this makes it difficult to use tools such as ASMMETA which produce "fake" versions of assemblies which are then later replaced with real assemblies. And of course there is always still the crashing scenario to worry about.

The situation continued thusly until 2003, at which point again the C# team brought this up with the CLR team to see if we could get a new instruction defined, a "basecall" instruction which would provide an exact virtual slot reference, rather than doing a by-signature match as the non-virtual call instruction does now. After much debate it was again determined that this obscure and dangerous scenario did not meet the bar for making an extremely expensive and potentially breaking change to the CLR.

Concerned over all the ways that this behaviour was causing breaks and poor performance, in 2003 the C# design team decided to go with the present approach of binding directly to the slot as known at compile time. The team all agreed that the desirable behaviour would be to always dynamically bind to the closest base class, a point with which I personally disagree, but I see their point. But given the costs of doing so safely, and the fact that hot-swapping in new code in the middle of a class hierarchy is not exactly a desirable scenario to support, it's better to sometimes force a recompilation (that you should have done anyway) than to sometimes crash and die horribly.

Customer: Wow. So, this will never change, right?

Wow indeed. I learned an awful lot today. One of these days I need to sit down and just read all five hundred pages of the C# 1.0 and 2.0 design notes.

I wouldn’t expect this to ever change. If you change a base class, recompile your derived classes. That’s the safe thing to do. Do not rely on the runtime fixing stuff up for you when you hot-swap in a new class in the middle of a class hierarchy.

UPDATE: Based on the number of rather histrionic comments I've gotten over the last 24 hours, I think my advice above has been taken rather out of its surrounding context. I'm not saying that every time someone ships a service pack with a few bug fixes you are required to recompile all your applications and ship them again. I thought it was clear from the context that what I was saying was that if you depend upon a base type which has been updated then:

(1) at the very least test your derived types with the new base type -- your derived types are relying on the mechanisms of the base types; when a mechanism changes, you have to re-test the code which relies upon that mechanism.

(2) if there was a breaking change, recompile, re-test and re-ship the derived type. And

(3) you might be surprised by what is a breaking change; adding a new override can potentially be a breaking change in some rare cases.

I agree that it is unfortunate that adding a new override is in some rare cases a semantically breaking change. I hope you agree that it is also unfortunate that adding a new private method was in some rare cases a crash-the-runtime change in C# 1.0. Which of those evils is the lesser is of course a matter of debate; we had that debate between 2000 and 2003 and I don't think it's wise or productive to second-guess the outcome of that debate now.

The simple fact of the matter is that the brittle base class problem is an inherent problem with the object-oriented programming pattern. We have worked very hard to design a language which minimizes the likelihood of the brittle base class problem biting people. And the base class library team works very hard to ensure that service pack upgrades introduce as few breaking changes as possible while meeting our other servicing goals, like fixing existing problems. But our hard work only goes so far, and there are more base classes in the world than those in the BCL.

If you find that you are getting bitten by the brittle base class problem a lot, then maybe object oriented programming is not actually the right solution for your problem space; there are other approaches to code reuse and organization which are perfectly valid that do not suffer from the brittle base class problem.

  • @carlos:

    > in a future update Bravo loses its override of M

    Well, removing members has always been considered a versioning-breaking practice, so this isn't anywhere near as surprising as Eric's original case (which is merely adding an override).

  • The customer is right, Eric is wrong. Eric's solution breaks separate compilation, and it breaks the simple OO delegation model, both of which are unacceptable IMO.

    I'm also surprised that it would be so difficult to implement this in the CLR. All the metadata required to resolve the base class call is available to the VM in the CIL. If derived classes indeed cannot access private members, then that member should not be in the publicly accessible vtable, so resolution isn't a problem. If it must be in the vtable for some odd implementation reason, then add protection information, i.e. private/protected, to the slot resolution process so it skips private members.

  • This is the basis for a great new interview question:

    What are the semantic differences between classes Bravo1 and Bravo2? Which class definition is better and why?

    class Bravo1 : Alpha {}

    class Bravo2 : Alpha { public override void M() { base.M(); } }

  • Eric,

    With all due respect, I think that you are wrong.

    This behavior is quite disturbing.

    So much so that I would even delay VS2010 just to fix it, although I know that is *not* realistic. (SP1 would do fine.)

    I am rather concerned about the effect that this issue may have on my code.

    I agree with most of what the other commenters are saying here, so there is little need to repeat.

    Request: At the very least, you and your team need to sit down and really discuss this issue in depth.

    Please feel free to solicit us for more discussion. I am sure many of us here feel passionately about this issue and would like to be part of the discussion / solution.

  • "if you change a base class then you should recompile your derived classes"

    That doesn't feel right.  My base classes exist as a rock for me to derive reliable, predictable and very well understood behavior from; they help me form version 1 of my application and when version 2 comes along they are often the favourite candidates for extension.  Some of these base classes are used outside of the core team and are deployed in applications that are not interested in the new functionality, but would like the latest base classes for the various fixes and rewrites in the classes and methods they actually do use.  They aren't very interested in rebuilding and redistributing their derived classes though!

    In essence, how does the quoted sentence stack up against design by base class vs design by interface?  The whole "Framework Design Guidelines" angle is what I'm lining up against here.  Suggesting that assemblies containing derived classes should be recompiled when the base class changes doesn't feel practical in many, by no means all, or indeed most, but certainly a significant number of, situations.

  • Two separate things

    First of all: Regarding Eric's "being wrong"

    There are three solutions I can think of.

    1) Runtime check on every "base" call made ever. Every single one.

    At that point, you have successfully defeated the purpose of having compiled DLLs in the first place. You have carefully avoided the benefits of compilation, in order to fix a very obscure scenario.

    2) All classes which inherit from classes with virtual methods ANYWHERE in their hierarchy (this is all classes, by the way) will automatically generate

    public override void VirtualFunction(params) { base.VirtualFunction(params); }

    That would solve the issue, but again, when you call ToString() on anything, you would end up with a very tall stack trace, which would obviously have performance implications.  This would have a measurable impact on ALL programs.

    3) Check if the referenced DLLs have been modified

    Check when? Store the information where? What do you do if it HAS been modified? Dynamically recompile my own DLL? What if there are irrelevant breaking changes?

    This solution is impossible.

    Ultimately, Eric's solution is the most pragmatic.

    Secondly,

    If after adding

     public override void M()
     {
       Console.WriteLine("Bravo");
       base.M();
     }

    to Bravo, which doesn't get executed, you then choose to SEAL method M, it will cause a runtime error. Why is that?
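
    (For reference, the hot-swapped Bravo being described would look something like this; the runtime error presumably arises because the CLR will not load the old Charlie.DLL once its override of M violates the new seal:)

     public class Bravo : Alpha
     {
       public sealed override void M() // sealing the slot invalidates the
       {                               // already-compiled Charlie.M override
         Console.WriteLine("Bravo");
         base.M();
       }
     }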

  • I'm surprised that the opposition to this appears so strong. While I understand the concerns that some people have about libraries they don't have source for and particularly plug-ins, those are inherently risky situations anyway. If they break they break; and you should be prepared for it as part of your risk assessment. You don't have to be happy about it, but you should be prepared.

    I don't think those concerns should override common sense; by default, compiled code should not change its behaviour when a change like this occurs - when completely unknown code is inserted where no method existed before. That should only occur after the new method's been explicitly approved through recompilation (and retesting, etc.).

  • Gavin, the problem is that this breaks *silently*. Breaking explicitly is always better than breaking silently.

    Security example: method M() validates access to an account. Alpha.M() checks some conditions. Later on, Bravo.M() adds another layer of checks - WHICH IS IGNORED. In what way is this a good thing?

  • Marcel, fair point, and you might have convinced me (I'll have to think a bit about it, but my first inclination is certainly to agree with you). However, that doesn't seem to be the primary concern of most of those commenting, who want stuff to "just work" in line with their expectations, even though it's been pointed out why that's not a good idea. Maybe failing to crash is also a bad idea.

  • I wonder what behavior the customer would expect if Bravo were modified as follows:

    public class Bravo: Alpha
    {
      public void M(params int[] x)
      {
        Console.WriteLine("Bravo");
        base.M();
      }
    }

  • I've been thinking about the issue a bit longer and I think I have changed my mind...the customer is right.

    The main reason is that the code is "failing" silently to do what the coder expects. The obvious argument is that the coder should know how the compiler works, but one of the main tenets of the C# team has always been to create a language that does what one expects. And seeing the vast majority of posts here, the compiler is failing big time, silently doing what no one expects.

    Eric uses security as an argument but plenty of posts have also argued that the current implementation can be just as insecure, or even more so. If Bravo were to implement a new layer of data access security, the client could actually be fooled into a false sense of security as the new implementation in Bravo is ignored.

    Obviously when all DLLs belong to the same developer the current implementation isn't a big deal. But when more than one developer is in the mix, things don't seem to work as well.

  • Gavin Greig wrote:

    "I don't think those concerns should override common sense; by default, compiled code should not change its behaviour when a change like this occurs - when completely unknown code is inserted where no method existed before. That should only occur after the new method's been explicitly approved through recompilation (and retesting, etc.)."

    Agreed. And maybe that is where the confusion is coming in...I would expect my code to run properly IF we were talking about an interpreted language and I swapped some new code into a function. With compiled languages, my expectations are different. Although I have not done much work in Java, I would expect the JVM to more or less function like an interpreted language (in this respect), whereas with the .NET framework, I would expect it to function more like a compiled language. Yes, both platforms use bytecode, but the way the bytecode is handled is different.

    To quote Wikipedia:

    "On .NET the byte-code is always compiled before execution, either Just In Time (JIT) or in advance of execution using the Native Image Generator utility (NGEN). With Java the byte-code is either interpreted, compiled in advance, or compiled JIT."

  • As a long time C++/recently turned C# dev I find this somewhat amusing. In C++ there is no base/super keyword and so you will likely write Alpha::M() - although you could write Bravo::M().

    A common reaction to the brittleness of this (with respect to changes in the class hierarchy) is to typedef up an alias so you only have to change one thing when the class hierarchy changes:

    private:

       typedef Bravo base;

    [See http://stackoverflow.com/questions/180601/using-super-in-c]

    Thinking about how this actually behaves in C++, Eric's point makes sense, but when considering the reason why this idiom is employed in C++ I think it matches the mental model of the majority here.

  • > As a long time C++/recently turned C# dev I find this somewhat amusing. In C++ there is no base/super keyword and so you will likely write Alpha::M() - although you could write Bravo::M().

    Two things of note here.

    First, there is __super available as an extension in VC++.

    Second, in the example that Eric gives (rewritten for C++/CLI), even if you use Bravo::M(), the actual call compiled by VC++ to MSIL will still be to Alpha::M().

  • > There are two solutions I can think of.

    > 1) Runtime check on every "base" call made ever. Every single one.

    You miss one obvious solution: a single check at JIT time. It really only has to be done once per class load, because the entire inheritance chain for a given class is known at that point, and thus it is possible to determine, for any hypothetical "basecall" instruction in a body of a method of the class, which method it actually refers to. Once thus determined, it won't change later at runtime.
