Making the code read like the spec

As I mentioned a while back, there are some bugs in the compiler code which analyzes whether a set of classes violates the “no cycles” rules for base classes. (That is, a class is not allowed to inherit from itself, directly or indirectly, and not allowed to inherit from one of its own nested classes, directly or indirectly.) The bugs are almost all of the form where we accidentally detect a cycle in suspicious-looking generic code but in fact there is no cycle; the bugs are the result of an attempt to modify the cycle detector from C# 1 rather than simply rewriting it from scratch to handle generics. Unfortunately we were unable to get the fixes into C# 4; these are obscure corner-case scenarios and the risk of doing the fix was considered too high.

There are a number of tricky issues here, mostly around the fact that obviously we cannot know whether a set of base types is circular until we know what the base types are. But resolving the program text to determine what type the base type string “C.D<E.F>” refers to requires us to know the base types of C, because D might be a nested type of C’s base class, not of C, so we have a bit of a chicken-and-egg problem. The code which turns strings into types has to be robust in the face of circular base types, because the base type cycle detector depends on its output!

So like I said, I’ve come up with a new algorithm that implements the spec more exactly, and I wanted to test it out. Rather than modifying the existing compiler to use it, I mocked it up in C# quickly first, just to give me something to play with. One of the problems that we have with the existing compiler is that it is not at all clear which parts of the code are responsible for implementing any given line in the spec. In my “maquette” of the compiler I wanted to make sure that I really was exactly implementing the spec; that might show up logical problems with either the implementation or the spec. I therefore wanted the code to read much like the spec.

This little hunk of code that I wrote made me inordinately happy. Here’s the spec:

A class directly depends on its direct base class (if any) and directly depends on the class within which it is immediately nested (if any). The complete set of classes upon which a class depends is the reflexive and transitive closure of the directly-depends-upon relationship.

First off, what is this thing called the “reflexive and transitive closure”?

Consider a “relation” – a function that takes two things and returns a Boolean that tells you whether the relation holds. A relation, call it ~>, is reflexive if X~>X is true for every X. It is symmetric if A~>B necessarily implies that B~>A. And it is transitive if A~>B and B~>C necessarily implies that A~>C. (*)

For example, the relation “less than or equal to” on integers is reflexive: X≤X is true for all X. It is not symmetric: 1≤2 is true, but 2≤1 is false. And it is transitive: if A≤B and B≤C then it is necessarily true that A≤C.
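
If you want to convince yourself of claims like that mechanically, all three properties are easy to check by brute force over a small finite domain. The following sketch is purely illustrative, not part of my prototype, and the helper names are mine:

using System;
using System.Linq;

static class RelationProperties
{
    // Check each property by brute force over a small finite domain.
    static bool IsReflexive<T>(Func<T, T, bool> r, T[] domain) =>
        domain.All(x => r(x, x));

    static bool IsSymmetric<T>(Func<T, T, bool> r, T[] domain) =>
        domain.All(a => domain.All(b => !r(a, b) || r(b, a)));

    static bool IsTransitive<T>(Func<T, T, bool> r, T[] domain) =>
        domain.All(a => domain.All(b => domain.All(c =>
            !(r(a, b) && r(b, c)) || r(a, c))));

    static void Main()
    {
        int[] domain = Enumerable.Range(0, 5).ToArray();
        Func<int, int, bool> lessOrEqual = (a, b) => a <= b;
        Console.WriteLine(IsReflexive(lessOrEqual, domain));  // True
        Console.WriteLine(IsSymmetric(lessOrEqual, domain));  // False
        Console.WriteLine(IsTransitive(lessOrEqual, domain)); // True
    }
}

Of course, a brute-force check over a finite sample can only refute a property, not prove it for all integers, but it is handy for building intuition.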

The relation “is equal to” is reflexive, symmetric and transitive; a relation with all three properties is said to be an “equivalence relation” because it allows you to partition a set into mutually-exclusive “equivalence classes”.

The relation “is the parent of” on people is not reflexive: no one is their own parent. It is not symmetric: if A is the parent of B, then B is not the parent of A. And it is not transitive: if A is the parent of B and B is the parent of C, then A is not the parent of C. (Rather, A is the grandparent of C.)

It is possible to take a nontransitive relation like “is the parent of” and from it produce a transitive relation. Basically, we simply make up a new relation that is exactly the same as the parent relation, except that we enforce that it be transitive. This is the “is the ancestor of” relation: if A is the ancestor of B, and B is the ancestor of C, then A is necessarily the ancestor of C. The “ancestor” relation is said to be the transitive closure of the “parent” relation.

Similarly we can define the reflexive closure, and so on.

When we’re talking about closures, we’re often interested not so much in the relation itself as the set of things which satisfy the relation with a given item. That’s what we mean in the spec when we say “The complete set of classes upon which a class depends is the reflexive and transitive closure of the directly-depends-upon relationship.”  Given a class, we want to compute the set that contains the class itself (because the closure is reflexive), its base class, its outer class, the base class of the base class, the outer class of the base class, the base class of the outer class, the outer class of the outer class… and so on.

So the first thing I did was write up a helper method that takes an item and a function which identifies all the items that have the non-transitive relation with that item, and computes from that the set of all items that satisfy the transitive closure relation with the item:

static HashSet<T> TransitiveClosure<T>(
    this Func<T, IEnumerable<T>> relation,
    T item)
{
    var closure = new HashSet<T>();
    var stack = new Stack<T>();
    stack.Push(item);
    while (stack.Count > 0)
    {
        T current = stack.Pop();
        foreach (T newItem in relation(current))
        {
            if (!closure.Contains(newItem))
            {
                closure.Add(newItem);
                stack.Push(newItem);
            }
        }
    }
    return closure;
}

static HashSet<T> TransitiveAndReflexiveClosure<T>(
    this Func<T, IEnumerable<T>> relation,
    T item)
{
    var closure = TransitiveClosure(relation, item);
    closure.Add(item);
    return closure;
}

Notice that essentially what we’re doing here is a depth-first traversal of the graph defined by the relation, avoiding descent into regions we’ve visited before.
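
To see the helpers in action on the earlier parent/ancestor example, here is a quick bit of illustrative test scaffolding; the family tree is made up and none of this is part of the prototype:

using System;
using System.Collections.Generic;
using System.Linq;

// Assumes the TransitiveClosure helpers above are defined in a static class in scope.
static class ClosureDemo
{
    static void Main()
    {
        // A tiny family tree: person -> known parents.
        var parents = new Dictionary<string, string[]>
        {
            { "Carol", new[] { "Alice", "Bob" } },
            { "Alice", new[] { "Ann" } },
            { "Bob",   new[] { "Bill" } },
        };

        Func<string, IEnumerable<string>> parentsOf = person =>
            parents.ContainsKey(person) ? parents[person] : Enumerable.Empty<string>();

        // "Is an ancestor of" is the transitive closure of "is the parent of".
        var ancestors = parentsOf.TransitiveClosure("Carol");
        // ancestors: { "Alice", "Bob", "Ann", "Bill" }

        var ancestorsAndSelf = parentsOf.TransitiveAndReflexiveClosure("Carol");
        // the same set, plus "Carol" itself
    }
}

Since the helpers are extension methods on the delegate type, they can be dotted directly onto the Func, which I find pleasingly terse.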

Now I have a mechanism which I can use to write code that implements the policies described in the specification.

static IEnumerable<TypeSymbol> DirectDependencies(TypeSymbol symbol)
{
    // SPEC: A class directly depends on its direct base class (if any) ...
    if (symbol.BaseTypeSymbol != null)
        yield return symbol.BaseTypeSymbol;
    // SPEC: ...and directly depends on the class within which it
    // SPEC: is immediately nested (if any).
    if (symbol.OuterTypeSymbol != null)
        yield return symbol.OuterTypeSymbol;
}

Great, we now have a method that exactly implements one sentence of the spec – given a type symbol, we can determine what its direct dependencies are. Now we need another method to implement the next sentence of the spec:

static HashSet<TypeSymbol> Dependencies(TypeSymbol classSymbol)
{
    // SPEC:  The complete set of classes upon which a class
    // SPEC:  depends is the reflexive and transitive closure of
    // SPEC:  the directly-depends-upon relationship.
    return TransitiveAndReflexiveClosure(DirectDependencies, classSymbol);
}

That’s what I like to see: code that reads almost exactly like the spec.
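
That dependency set is also exactly what the cycle check wants to consume. Just to sketch where this is heading (this is illustrative, not the exact check in my prototype), one reasonable reading of the “no cycles” rule is that a class is in error if it appears in the transitive, non-reflexive closure of its own direct dependencies:

static bool DependsOnItself(TypeSymbol classSymbol)
{
    // Sketch only: a class participates in a base class cycle if it shows up
    // in the transitive (non-reflexive) closure of its own direct dependencies.
    return TransitiveClosure(DirectDependencies, classSymbol).Contains(classSymbol);
}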

Note also that I’ve made the design choice here to make these methods static methods of a policy class, rather than methods on the TypeSymbol class. I want to keep my policies logically and textually separated from my mechanisms. For example, I could be using the same classes that represent classes, structs, namespaces, and so on, to implement VB instead of C#. I want the policies of the C# language to be in a class whose sole responsibility is implementing these policies.

Another nice aspect of this approach is that I can now re-use my transitive closure computing mechanism when I come across this bit of the spec that talks about base interfaces of an interface:

The set of base interfaces is the transitive closure of the explicit base interfaces.

Unsurprisingly, the code in my prototype that computes this looks like:

static HashSet<TypeSymbol> BaseInterfaces(TypeSymbol interfaceSymbol)
{
    // SPEC: The set of base interfaces is the transitive closure
    // SPEC: of the explicit base interfaces.
    return TransitiveClosure(ExplicitBaseInterfaces, interfaceSymbol);
}

In fact, there are transitive closures in a number of places in the C# spec, and now I have a mechanism that I can use to implement all of them, should I need to for my prototype.

One final note: notice that I am returning a mutable collection here. Were I designing these methods for public consumption, I would probably choose to return an immutable set, rather than a mutable one, so that I could cache the result safely; these methods could be memoized since the set of dependencies does not change over time. But for the purposes of my quick little prototype, I’m just being lazy and returning a mutable set and not caching the result.
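
For what it’s worth, the memoization I have in mind is nothing fancier than a dictionary keyed on the symbol. A rough sketch, assuming a single-threaded prototype and a hypothetical CachedDependencies helper:

// Sketch only: memoize Dependencies, since a class's dependency set never changes.
// (In real code I would cache an immutable set; handing out a cached mutable set
// is asking for trouble.)
static readonly Dictionary<TypeSymbol, HashSet<TypeSymbol>> dependencyCache =
    new Dictionary<TypeSymbol, HashSet<TypeSymbol>>();

static HashSet<TypeSymbol> CachedDependencies(TypeSymbol classSymbol)
{
    HashSet<TypeSymbol> result;
    if (!dependencyCache.TryGetValue(classSymbol, out result))
    {
        result = Dependencies(classSymbol);
        dependencyCache.Add(classSymbol, result);
    }
    return result;
}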

Hopefully we'll get this new algorithm into the hypothetical next compiler release.

*************

(*) Incidentally, we are holding on to the wavy arrow ~> as a potential operator for future hypothetical versions of C#. We recently considered giving it the meaning that a~>b() means ((dynamic)a).b(). The operator is known as “the naj operator” after Cyrus Najmabadi, who advocated its use to the C# design team. Anyone who has a great idea for what the naj should mean, feel free to leave a comment.

  • There is an excellent discussion of how to do this properly at http://stackoverflow.com/questions/921180/c-round-up/926806#926806 ;)

  • I like the idea of a ~> operator, but I'm wondering why -> wasn't considered.

    The question presupposes a falsehood. It was seriously considered, and then rejected. -- Eric

    If you think of -> as primarily a pointer dereference it doesn't make sense, but if you think of it as a "do something then access a member" operator then it works. In unmanaged code the "do something" is dereference a pointer, while in managed code the "do something" is cast as dynamic.

    Indeed. We decided (1) making one operator do two conceptually rather different things was a bad idea, and (2) that if "dynamic" becomes part of the type system proper then there's less need of a special operator. -- Eric

  • The problem is that in an unsafe block you may have to use both. Maybe the compiler could always figure out from context what you intend with the operator, but reading the code would be a nightmare. C# already has enough context-bound “keywords”; we don’t need context-bound operators too.

  • I noticed that you wrote an extension method on a delegate type. Awesome.

    For the naj operator, make it shorthand for an asynchronous method call, and then use the reverse naj (or the jan) to join the async call.

    Essentially:

    string x = "10";
    IAsyncResult asyncResult = int~>Parse(x, null, null);
    y = int<~Parse(asyncResult);

    Cute. Trouble is, x<~y already has a meaning in C#. -- Eric

  • Okay maybe no Jan operator, maybe this instead.

    string x = "10";
    IAsyncResult<int> asyncResult = int~>Parse(x, null, null);
    int y = asyncResult.End();

    w/

    interface IAsyncResult<T> : IAsyncResult
    {
        T End();
    }

  • Speaking about languages: in Polish, 'naj' is a prefix for creating a superlative from a comparative, for instance:

    comparative: better - lepszy

    superlative: the best - najlepszy

  • One thing I find that a ~> could mean would be something like:

    a~>b()

    which expands to

    if (a != null)
    {
      a.b();
    }

    I find that I spend a lot of time wrapping method calls in that if statement, so a shortcut would be great. Maybe not for this operator, but it might be a worthy consideration if it hasn't already come up in your discussions. I feel like using something with question marks (like the null coalescing operator) would make the operator more identifiable but, hey, I'm no language architect. :)

    Indeed, we have considered this operator for that as well. You'd want to make it chain nicely, so that A()~>B()~>C() would work as well. We've also considered using A().?B().?C() -- Eric

  • Out of interest, do you use the fact that these are extension methods anywhere? I think I've written before about the slight (and only very occasional) annoyance that extension methods don't work "on" lambda expressions or method groups - do you have specific use cases where there's a benefit here?

  • I really like the way you translated the spec into code...

    By the way, I think there's a mistake on the last line of code :

    return TransitiveClosure(interfaceSymbol, ExplicitBaseInterfaces);

    Should be :

    return TransitiveClosure(ExplicitBaseInterfaces, interfaceSymbol);

  • At the risk of both nitpicking and premature optimization, I'd replace:

    if (!closure.Contains(newItem))
    {
        closure.Add(newItem);
        stack.Push(newItem);
    }

    with

    if (closure.Add(newItem)) stack.Push(newItem);

    But that glosses over the extremely nice main point here - yes, that's the sort of redesign that just feels _right_.

  • I like Nick's idea. ?? helped with some null handling scenarios, but more often than not I need to call a method on the thing that may be null (and it isn't always convenient or possible to do something like (a ?? new AWithDefaultBehavior()).b()).

  • I can see the ~> operator having something to do with parallel programming and threads. Maybe shorthand for a lambda that should be spawned on a thread to simplify TPL stuff?

  • I like Robert Davis' proposal...but to get around <~ already having a meaning, couldn't you do ~< instead?

    IAsyncResult asyncResult = int~>Parse(x, null, null);

    y = int~<Parse(asyncResult);

  • I like the Async meaning for ~>. The only change I'd make is to have a Result property rather than an End() method:

    var parse = int~>Parse(x, null, null);

    int y = parse.Result;

    But I also agree with Nick that we could really use an operator for "x != null ? x.y : null". I'd like x?.y, x?.y() and x?[y] for that. I'm not completely sure that's syntactically unambiguous but the only ambiguity I can see is if "x" can be parsed as both a variable-of-a-reference-type and as a value-type, AND y is one of the static members of Nullable<T>, AND the type of x has a method with the same parameters as the static member on Nullable.

    Indeed. Or we could use x.?y() instead, which I believe is unambiguous. -- Eric

  • I'd rather prefer something involving "?" for null-member-access operator, because it would be consistent with existing use of "?" for nullable types and null coalescing operators.

    "~>" looks like something very different to me. I'm not sure it's even something I'd associate with member access - yes, C# inherits "->" for member-access-via-pointer from C, but in all my time writing C# code, I haven't used it a single time - despite using quite a lot of P/Invoke. Nor did I see it in other code I had to deal with. So it would seem to me that _the_ member access operator in C# is ".", and "->" is, for the most part, a curious relic which few people know about, and even fewer use. So any "member access with a twist" kind of operator should also include ".", IMO - like ".?" for null-coalescing one, and maybe ".~" for async? This would also let them be combined to something like ".?~" etc.

    As for ~> itself, how about logical implication? It's more useful than it sounds once you get to DbC and writing contracts (which is why Eiffel has it). E.g.:

      void Foo(string firstName, string lastName) {
          Contract.Requires(firstName == null ~> lastName == null);
      }

    Basically anywhere you want to have a contract of "if X is true, then Y must also be true" - which is surprisingly often.

    Traditionally, either -> or => are used to denote implication, but both are taken already, and ~> seems to be the closest to either one of those two.

    FYI, VB6 and VBScript both have Imp and Eqv operators meaning "if" and "iff" respectively. Hardly anyone ever used them and they were quietly dropped during the VB.NET design process. -- Eric
