Making the code read like the spec


As I mentioned a while back, there are some bugs in the compiler code which analyzes whether a set of classes violates the “no cycles” rules for base classes. (That is, a class is not allowed to inherit from itself, directly or indirectly, and not allowed to inherit from one of its own nested classes, directly or indirectly.) The bugs are almost all of the form where we accidentally detect a cycle in suspicious-looking generic code but in fact there is no cycle; the bugs are the result of an attempt to modify the cycle detector from C# 1 rather than simply rewriting it from scratch to handle generics. Unfortunately we were unable to get the fixes into C# 4; these are obscure corner-case scenarios and the risk of doing the fix was considered too high.

There are a number of tricky issues here, mostly around the fact that obviously we cannot know whether a set of base types is circular until we know what the base types are. But resolving the program text to determine what type the base type string “C.D&lt;E.F&gt;” refers to requires us to know the base types of C, because D might be a nested type of C’s base class, not of C, so we have a bit of a chicken-and-egg problem. The code which turns strings into types has to be robust in the face of circular base types because the base type cycle detector depends on its output!

So like I said, I’ve come up with a new algorithm that implements the spec more exactly, and I wanted to test it out. Rather than modifying the existing compiler to use it, I mocked it up in C# quickly first, just to give me something to play with. One of the problems that we have with the existing compiler is that it is not at all clear which parts of the code are responsible for implementing any given line in the spec. In my “maquette” of the compiler I wanted to make sure that I really was exactly implementing the spec; that might show up logical problems with either the implementation or the spec. I therefore wanted the code to read much like the spec.

This little hunk of code that I wrote made me inordinately happy. Here’s the spec:

A class directly depends on its direct base class (if any) and directly depends on the class within which it is immediately nested (if any). The complete set of classes upon which a class depends is the reflexive and transitive closure of the directly-depends-upon relationship.

First off, what is this thing called the “reflexive and transitive closure”?

Consider a “relation” – a function that takes two things and returns a Boolean that tells you whether the relation holds. A relation, call it ~>, is reflexive if X~>X is true for every X. It is symmetric if A~>B necessarily implies that B~>A. And it is transitive if A~>B and B~>C necessarily implies that A~>C. (*)

For example, the relation “less than or equal to” on integers is reflexive: X≤X is true for all X. It is not symmetric: 1≤2 is true, but 2≤1 is false. And it is transitive: if A≤B and B≤C then it is necessarily true that A≤C.

The relation “is equal to” is reflexive, symmetric and transitive; a relation with all three properties is said to be an “equivalence relation” because it allows you to partition a set into mutually-exclusive “equivalence classes”.

The relation “is the parent of” on people is not reflexive: no one is their own parent. It is not symmetric: if A is the parent of B, then B is not the parent of A. And it is not transitive: if A is the parent of B and B is the parent of C, then A is not the parent of C. (Rather, A is the grandparent of C.)

It is possible to take a nontransitive relation like “is the parent of” and from it produce a transitive relation. Basically, we simply make up a new relation that is exactly the same as the parent relation, except that we enforce that it be transitive. This is the “is the ancestor of” relation: if A is the ancestor of B, and B is the ancestor of C, then A is necessarily the ancestor of C. The “ancestor” relation is said to be the transitive closure of the “parent” relation.
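To make that concrete, here is a quick sketch, in Python with a made-up three-person family tree, of computing the “ancestor” relation as the transitive closure of the “parent” relation; the names and the dictionary representation are mine, purely for illustration:

```python
# A tiny made-up "parent of" relation: parent[x] is the set of x's children.
parent = {
    "alice": {"bob"},
    "bob": {"carol"},
    "carol": set(),
}

def ancestors_of(person):
    """Everyone who stands in the 'is an ancestor of' relation to person,
    i.e. the transitive closure of 'is the parent of'."""
    result = set()
    frontier = {person}
    while frontier:
        current = frontier.pop()
        # Anyone whose children include a person we can already reach
        # is an ancestor too.
        for candidate, children in parent.items():
            if current in children and candidate not in result:
                result.add(candidate)
                frontier.add(candidate)
    return result

print(ancestors_of("carol"))  # {'alice', 'bob'} -- grandparent included by transitivity
```

Note that “alice” appears even though she is not Carol’s parent; that extra element is exactly what taking the closure adds.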

Similarly we can define the reflexive closure, and so on.

When we’re talking about closures, we’re often interested not so much in the relation itself as the set of things which satisfy the relation with a given item. That’s what we mean in the spec when we say “The complete set of classes upon which a class depends is the reflexive and transitive closure of the directly-depends-upon relationship.”  Given a class, we want to compute the set that contains the class itself (because the closure is reflexive), its base class, its outer class, the base class of the base class, the outer class of the base class, the base class of the outer class, the outer class of the outer class… and so on.

So the first thing I did was I wrote up a helper method that takes an item and a function which identifies all the items that have the non-transitive relation with that item, and computes from that the set of all items that satisfy the transitive closure relation with the item:

static HashSet<T> TransitiveClosure<T>(
    this Func<T, IEnumerable<T>> relation,
    T item)
{
    var closure = new HashSet<T>();
    var stack = new Stack<T>();
    stack.Push(item);
    while (stack.Count > 0)
    {
        T current = stack.Pop();
        foreach (T newItem in relation(current))
        {
            if (!closure.Contains(newItem))
            {
                closure.Add(newItem);
                stack.Push(newItem);
            }
        }
    }
    return closure;
}

static HashSet<T> TransitiveAndReflexiveClosure<T>(
    this Func<T, IEnumerable<T>> relation,
    T item)
{
    var closure = TransitiveClosure(relation, item);
    closure.Add(item);
    return closure;
}

Notice that essentially what we’re doing here is a depth-first traversal of the graph defined by the transitive closure relation, avoiding descent into regions we’ve visited before. 
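The same stack-based traversal is easy to sanity-check in a few lines of Python; the names below are my own, not from the compiler prototype, but the shape of the algorithm is identical, including the way the visited set makes a cyclic relation terminate:

```python
def transitive_closure(relation, item):
    """Transitive closure of `relation` starting from `item`, where
    relation(x) yields the items directly related to x.
    Depth-first via an explicit stack; the `closure` set prevents
    revisiting nodes, so even a cyclic relation terminates."""
    closure = set()
    stack = [item]
    while stack:
        current = stack.pop()
        for new_item in relation(current):
            if new_item not in closure:
                closure.add(new_item)
                stack.append(new_item)
    return closure

def reflexive_transitive_closure(relation, item):
    closure = transitive_closure(relation, item)
    closure.add(item)  # the reflexive part: item relates to itself
    return closure

# Even a cyclic relation terminates -- and "a" ends up in its own closure:
edges = {"a": ["b"], "b": ["a", "c"], "c": []}
print(transitive_closure(lambda x: edges[x], "a"))  # {'a', 'b', 'c'}
```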

Now I have a mechanism which I can use to write code that implements the policies described in the specification.

static IEnumerable<TypeSymbol> DirectDependencies(TypeSymbol symbol)
{
    // SPEC: A class directly depends on its direct base class (if any) ...
    if (symbol.BaseTypeSymbol != null)
        yield return symbol.BaseTypeSymbol;
    // SPEC: ...and directly depends on the class within which it
    // SPEC: is immediately nested (if any).
    if (symbol.OuterTypeSymbol != null)
        yield return symbol.OuterTypeSymbol;
}

Great, we now have a method that exactly implements one sentence of the spec – given a type symbol, we can determine what its direct dependencies are. Now we need another method to implement the next sentence of the spec:

static HashSet<TypeSymbol> Dependencies(TypeSymbol classSymbol)
{
    // SPEC:  The complete set of classes upon which a class
    // SPEC:  depends is the reflexive and transitive closure of
    // SPEC:  the directly-depends-upon relationship.
    return TransitiveAndReflexiveClosure(DirectDependencies, classSymbol);
}

That’s what I like to see: code that reads almost exactly like the spec.
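With the dependency set in hand, the cycle check the post opened with falls out almost for free: a class participates in a cycle exactly when it appears in the transitive (but not reflexive) closure of its own direct dependencies. Here is a toy Python sketch of that idea; the `TypeSymbol` shape here is my own simplification, not the compiler's:

```python
# Toy stand-in for a compiler type symbol: each records its base type
# and the type it is immediately nested in (either may be None).
class TypeSymbol:
    def __init__(self, name, base=None, outer=None):
        self.name, self.base, self.outer = name, base, outer

def direct_dependencies(symbol):
    if symbol.base is not None:
        yield symbol.base
    if symbol.outer is not None:
        yield symbol.outer

def has_base_class_cycle(symbol):
    """A class is cyclic iff it appears in the *transitive* (not
    reflexive) closure of its own direct dependencies."""
    closure, stack = set(), [symbol]
    while stack:
        for dep in direct_dependencies(stack.pop()):
            if dep not in closure:
                closure.add(dep)
                stack.append(dep)
    return symbol in closure

a = TypeSymbol("A")
b = TypeSymbol("B", base=a)
a.base = b  # A : B and B : A -- a cycle
print(has_base_class_cycle(a))  # True
print(has_base_class_cycle(TypeSymbol("C", base=TypeSymbol("D"))))  # False
```

The reflexive closure deliberately is not used here: every class trivially depends on itself, so membership in the reflexive closure would report every class as cyclic.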

Note also that I’ve made the design choice here to make these methods static methods of a policy class, rather than methods on the TypeSymbol class. I want to keep my policies logically and textually separated from my mechanisms. For example, I could be using the same classes that represent classes, structs, namespaces, and so on, to implement VB instead of C#. I want the policies of the C# language to be in a class whose sole responsibility is implementing these policies.

Another nice aspect of this approach is that I can now re-use my transitive closure computing mechanism when I come across this bit of the spec that talks about base interfaces of an interface:

The set of base interfaces is the transitive closure of the explicit base interfaces.

Unsurprisingly, the code in my prototype that computes this looks like:

static HashSet<TypeSymbol> BaseInterfaces(TypeSymbol interfaceSymbol)
{
    // SPEC: The set of base interfaces is the transitive closure
    // SPEC: of the explicit base interfaces.
    return TransitiveClosure(ExplicitBaseInterfaces, interfaceSymbol);
}

In fact, there are transitive closures in a number of places in the C# spec, and now I have a mechanism that I can use to implement all of them, should I need to for my prototype.

One final note: notice that I am returning a mutable collection here. Were I designing these methods for public consumption, I would probably choose to return an immutable set, rather than a mutable one, so that I could cache the result safely; these methods could be memoized since the set of dependencies does not change over time. But for the purposes of my quick little prototype, I’m just being lazy and returning a mutable set and not caching the result.
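In Python terms, the safe variant that paragraph describes might look like the following sketch (the `edges` table and the function name are illustrative assumptions, not the prototype's code): compute the closure once, freeze it so callers cannot mutate the cached copy, and memoize:

```python
from functools import lru_cache

# Assumed dependency table for illustration: maps a symbol name to a
# tuple of the symbols it directly depends on.
edges = {"a": ("b",), "b": (), "c": ("a",)}

@lru_cache(maxsize=None)
def dependencies(symbol):
    """Memoized reflexive-and-transitive dependency set. Returning a
    frozenset means the cached result is safe to hand out repeatedly."""
    closure, stack = {symbol}, [symbol]
    while stack:
        for dep in edges[stack.pop()]:
            if dep not in closure:
                closure.add(dep)
                stack.append(dep)
    return frozenset(closure)

print(dependencies("c"))  # frozenset({'a', 'b', 'c'})
assert dependencies("c") is dependencies("c")  # cached: the same object comes back
```

Memoization is sound here precisely because, as the post notes, the dependency set of a class never changes over time.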

Hopefully we'll get this new algorithm into the hypothetical next compiler release.


(*) Incidentally, we are holding on to the wavy arrow ~> as a potential operator for future hypothetical versions of C#. We recently considered having a~>b() mean ((dynamic)a).b(). The operator is known as “the naj operator” after Cyrus Najmabadi, who advocated the use of this operator to the C# design team. Anyone who has a great idea for what the naj should mean, feel free to leave a comment.

  • I'd prefer ~> to ?. for null member access, as using .? makes the code look too busy IMO. That said, I agree with the general sentiment that it seems somewhat foreign to C#.

    Speaking specifically to null-member-access, how would it affect call sites?

    class Foo {

      public void Bar()  //...


    // Somewhere else...

    Foo f = new Foo();

    Action baz = f~>Foo; // sets baz to null? syntax error?

  • ~> looks a bit twisted...

    a?.b should mean (a == null ? null : a.b), like it does in Groovy.

    You could then use a.?b to mean ((dynamic)a).b

    Assuming the current syntax is compatible, both would be quite convenient additions to C#.

  • I like the dynamic approach but with a twist. Could it be expanded to a fail safe dynamic binding? What I mean is a~>b() would behave like (if ((dynamic)a).b() is a valid resolution in runtime, execute b() otherwise do nothing)

    Sometimes it might be interesting to take advantage of a dynamic object's capability to do something that is not critical for the running code but might enhance the user experience or give additional, non-essential information. This extra feature's availability might not always be well known at design time.

    Right now this obviously is possible with a try catch block; I'm just suggesting a small shortcut.

  • Add another vote on the use of .? (or possibly ?.) for null conditional invocation.   Using ~> and ~< for async patterns is certainly interesting/tempting, but I don't think it's quite ripe for locking into the language yet.

  • > FYI, VB6 and VBScript both have Imp and Eqv operators meaning "if" and "iff" respectively.

    Yep, I recall those - they actually hail from ancient BASIC days (IIRC, from the very first, original BASIC, in fact).

    As usual for BASIC logical operators, however, the twist was that they were actually bitwise: &B101 IMP &B011 = &B011, and &B101 EQV &B011 = &B001. I wonder if anyone ever used them as such in all 45 years of their existence, though...

    It's also the only reason why EQV was any different from plain = when comparing numbers. For booleans, EQV and = are, quite obviously, the same thing.

    > Hardly anyone ever used them and they were quietly dropped during the VB.NET design process.

    I believe Eiffel's "implies" operator is virtually always used in asserts, pre/postconditions and class invariants, and virtually never anywhere else. Given that VB6 didn't have DbC, and neither did VB.NET, the decision to drop them was unsurprising.

    Oh, and of course "A implies B" can also be written "(not A) or B". I do believe that the former is clearer, and reads more naturally in many cases, however.

  • @Olivier, .? being something completely different and unrelated to ?. sounds twisted to me.

    Eric, I wonder though why you write a.?b instead of a?.b which intuitively makes more sense to me (is "a" not null? then call b)

    As to the ~> ...tough call!

    What about introducing currying with it?

    var f = (int x, int y) => x + y;

    var z ~> f(5) or

    var z ~> f(y:6)

    z(2) // =7

  • > I'm not completely sure that's syntactically unambiguous but the only ambiguity I can see is if "x" can be parsed as both a variable-of-a-reference-type and as a value-type, AND y is one of the static members of Nullable<T>, AND the type of x has a method with the same parameters as the static member on Nullable.

    ?. is actually unambiguous, because the existing grammar doesn't allow for T? shorthand for Nullable<T> in type name position in a static call. Left side of operator . is always a primary-expression, and that is either itself a dotted member access of the form X.Y (in which case the definition is processed recursively), or a type keyword such as "int", or it's a simple-name - which may or may not refer to a type (there are other primary-expressions, but none of them can refer to a type).

    > a?.b should mean (a == null ? null : a.b), like it does in Groovy.

    > You could then use a.?b to mean ((dynamic)a).b

    Please, no. It would be so confusing if .? and ?. would both be operators that occur in exact same spot, but with vastly different meaning. Among other things, it would mean that you could mistype ?. as .?, and, depending on the type context, might not even get an error message until runtime (and even then only if "a" evaluates to null).

  • @Pavel I agree having a .? and ?. operator would be confusing.

    Personally I think we should get C# 4 out the door before we look at additional special syntax for dynamic.

    I also like Frank's currying idea though I'm not changing my vote about async invocation. Both would probably require some form of anonymous delegate type functionality since the Func<...> and Action<...> types only go so far.

  • Wait - you're shipping C# 4.0 with a bug that's been there since 2.0? Maybe I misunderstood that?

    Of course we are. We're shipping with all kinds of bugs that have been there since 2.0 and even 1.0. First off, there are the thousands of bugs that have been in the product forever that we've never found; we're shipping with all of those. Second, there are the hundreds of bugs that we know have been in the product for several versions and are unacceptable breaking changes to fix. Third, there are the bugs that we know are in the product, and are not unacceptable breaking changes, but require large amounts of work for little gain; that's time and effort taken away from real bugs that affect real people.

    The bug under discussion here is that this code does not compile today:

    class November<T> {}
    class Romeo : November<Romeo.Sierra.Tango>
    {
        class Sierra
        {
            class Tango {}
        }
    }

    The cycle detector incorrectly states that Romeo has a cyclic dependency on Tango, which plainly it does not. How badly would you like the mainline compiler to be destabilized in the pursuit of fixing that bug? Because fixing that bug requires me to rewrite five of the compiler passes that have dependencies on the strict ordering semantics of the metadata exporter; compiler passes that do things like figure out which methods to emit in what order, what types are base classes of what other types, and so on. We decided that the risk of destabilizing the compiler was way, way too high for the reward of allowing this incredibly unlikely code to compile. -- Eric

  • I'm definitely with those who want a "safe dot" (a != null ? a.b : null) operator. Has anyone written this up on Connect so I can vote for it?

    I'm also with those who think "a.?b" is far too busy -- especially since these would often tend to appear in long chains (a.?b.?c.?d). All those big honking question marks would distract from the intent of the code. And since it looks so wrong (periods and question marks just shouldn't go together), it would take a lot of work for my brain to parse it. I'd always be getting them reversed. ~> is at least directional so easy to get right.

    The Chrome / Oxygene / Delphi Prism language (all the same language, it's just changed names numerous times) has an operator for this, using the colon (a:b). I really like this -- it's only one character wide, so just as compact as a dot; and it suggests a dot while still being clearly distinct. Unfortunately, C# already uses colon for the ternary operator, and I don't know if the ambiguity could be worked out, especially since the "safe dot" really needs to have the same precedence as the "exploding dot".

    If colon isn't feasible, I'd vote for a~>b for the safe dot. It's got that "sort of an arrow" thing going. Or maybe just "a~b", using ~ as an infix operator. "a!b", for that matter; it would look a lot like Access VBA (shudder), but it's got the resemblance to a dot going for it. Another possibility might be "a..b", but that looks a little too much like the subrange operator in Delphi and Ruby.

  • > Notice that essentially what we’re doing here is a depth-first traversal of the graph defined by the transitive closure relation, avoiding descent into regions we’ve visited before.  

    Isn't it a breadth-first traversal? :)

  • > I'm definitely with those who want a "safe dot" (a != null ? a.b : null) operator. Has anyone written this up on Connect so I can vote for it?

    It's a very old feature request, actually (complete with "?." syntax):

  • I think there is a fair amount of value in having a "safe dot" operator, but I don't think I'm sold on it yet.  The ?. syntax is a bit difficult to read. I like colon and exclamation point better because they are a single character, and I don't hear my middle English teachers yelling at me for bad punctuation when I see them.  Although "check null then access member" is a common pattern, a straight-forward solution already exists.  For me, ?. crosses the line from being terse into being cryptic.  I find

    if(a != null) a.B();

    to be terse enough for my needs.  I don't feel that the keystrokes I save by being able to write a!B() or a:B() are enough to compensate for the loss of a clear null check.

    For similar reasons, I find using the syntax a~>B() to denote an asynchronous method call to fall even further on the side of being cryptic.  I think I would feel differently if parallelism was a language feature of C# instead of a library feature provided by the BCLs.  I've only dug into the TPL a little bit, but I think it will actually lessen the need for an asynchronous call operator since most of the code can be oblivious to the fact that some processing is asynchronous.

  • Oops, I meant to say "middle school English" not "middle English".  I think "middle English" is what they taught Frodo.

  • Haha, gotta love this: "Hey! I've got a cool operator, (with a cool name) what can I do with it?"

    Made me think of this question:
