Making the code read like the spec

As I mentioned a while back, there are some bugs in the compiler code which analyzes whether a set of classes violates the “no cycles” rules for base classes. (That is, a class is not allowed to inherit from itself, directly or indirectly, and not allowed to inherit from one of its own nested classes, directly or indirectly.) The bugs are almost all of the form where we accidentally detect a cycle in suspicious-looking generic code but in fact there is no cycle; the bugs are the result of an attempt to modify the cycle detector from C# 1 rather than simply rewriting it from scratch to handle generics. Unfortunately we were unable to get the fixes into C# 4; these are obscure corner-case scenarios and the risk of doing the fix was considered too high.
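
To make that concrete, here is the flavor of code I mean. This is my own invented illustration, not one of the actual bug reports, but code along these lines is perfectly legal even though it looks suspicious:

class B<T> { }

// Legal: D's base class is B<D>. D depends on B<T>, and using D as a
// type argument does not make D depend on itself.
class D : B<D> { }

// Also legal: C's base class is B<C.N>. Using C's own nested class as
// a type argument does not create a base class cycle, though a naive
// cycle detector might flag it.
class C : B<C.N>
{
    public class N { }
}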

There are a number of tricky issues here, mostly around the fact that obviously we cannot know whether a set of base types is circular until we know what the base types are, but resolving the program text to determine what type the base type string “C.D<E.F>” refers to requires us to know the base types of C, because D might be a nested type of C’s base class, not of C. So we have a bit of a chicken-and-egg problem. The code which turns strings into types has to be robust in the face of circular base types because the base type cycle detector depends on its output!

So like I said, I’ve come up with a new algorithm that implements the spec more exactly, and I wanted to test it out. Rather than modifying the existing compiler to use it, I quickly mocked it up in C# first, just to give me something to play with. One of the problems we have with the existing compiler is that it is not at all clear which parts of the code are responsible for implementing any given line of the spec. In my “maquette” of the compiler I wanted to make sure that I really was implementing the spec exactly; doing so might expose logical problems with either the implementation or the spec. I therefore wanted the code to read much like the spec.

This little hunk of code that I wrote made me inordinately happy. Here’s the spec:

A class directly depends on its direct base class (if any) and directly depends on the class within which it is immediately nested (if any). The complete set of classes upon which a class depends is the reflexive and transitive closure of the directly-depends-upon relationship.

First off, what is this thing called the “reflexive and transitive closure”?

Consider a “relation” – a function that takes two things and returns a Boolean that tells you whether the relation holds. A relation, call it ~>, is reflexive if X~>X is true for every X. It is symmetric if A~>B necessarily implies that B~>A. And it is transitive if A~>B and B~>C necessarily implies that A~>C. (*)

For example, the relation “less than or equal to” on integers is reflexive: X≤X is true for all X. It is not symmetric: 1≤2 is true, but 2≤1 is false. And it is transitive: if A≤B and B≤C then it is necessarily true that A≤C.

The relation “is equal to” is reflexive, symmetric and transitive; a relation with all three properties is said to be an “equivalence relation” because it allows you to partition a set into mutually-exclusive “equivalence classes”.

The relation “is the parent of” on people is not reflexive: no one is their own parent. It is not symmetric: if A is the parent of B, then B is not the parent of A. And it is not transitive: if A is the parent of B and B is the parent of C, then A is not the parent of C. (Rather, A is the grandparent of C.)

It is possible to take a nontransitive relation like “is the parent of” and from it produce a transitive relation. Basically, we simply make up a new relation that is exactly the same as the parent relation, except that we enforce that it be transitive. This is the “is the ancestor of” relation: if A is the ancestor of B, and B is the ancestor of C, then A is necessarily the ancestor of C. The “ancestor” relation is said to be the transitive closure of the “parent” relation.

Similarly we can define the reflexive closure, and so on.

When we’re talking about closures, we’re often interested not so much in the relation itself as the set of things which satisfy the relation with a given item. That’s what we mean in the spec when we say “The complete set of classes upon which a class depends is the reflexive and transitive closure of the directly-depends-upon relationship.”  Given a class, we want to compute the set that contains the class itself (because the closure is reflexive), its base class, its outer class, the base class of the base class, the outer class of the base class, the base class of the outer class, the outer class of the outer class… and so on.
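
For instance (my example, not the spec’s), suppose we have:

class Outer
{
    public class Base { }
}

class Derived : Outer.Base { }

Derived directly depends on Outer.Base, its base class; Outer.Base directly depends on Outer, the class within which it is immediately nested, and on object, its base class. The complete set of classes upon which Derived depends is therefore { Derived, Outer.Base, Outer, object }: Derived itself because the closure is reflexive, and the rest by transitivity.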

So the first thing I did was I wrote up a helper method that takes an item and a function which identifies all the items that have the non-transitive relation with that item, and computes from that the set of all items that satisfy the transitive closure relation with the item:

static HashSet<T> TransitiveClosure<T>(
    this Func<T, IEnumerable<T>> relation,
    T item)
{
    var closure = new HashSet<T>();
    var stack = new Stack<T>();
    stack.Push(item);
    while (stack.Count > 0)
    {
        T current = stack.Pop();
        foreach (T newItem in relation(current))
        {
            // Visit each item only once; this prevents infinite loops
            // when the relation is cyclic and keeps the traversal
            // linear in the size of the graph.
            if (!closure.Contains(newItem))
            {
                closure.Add(newItem);
                stack.Push(newItem);
            }
        }
    }
    return closure;
}

static HashSet<T> TransitiveAndReflexiveClosure<T>(
    this Func<T, IEnumerable<T>> relation,
    T item)
{
    // The reflexive closure simply adds the item itself to the set.
    var closure = TransitiveClosure(relation, item);
    closure.Add(item);
    return closure;
}

Notice that essentially what we’re doing here is a depth-first traversal of the directed graph defined by the relation, avoiding descent into regions we’ve visited before.
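
As a quick sanity check, here is the helper exercised against the “parent of” relation from earlier. The names are invented; I’m assuming the usual System.Collections.Generic and System.Linq usings, with the extension methods above living in a static class:

// "Is the parent of", encoded as a lookup from a child to its known parents.
static readonly Dictionary<string, List<string>> parents =
    new Dictionary<string, List<string>>
    {
        { "Carol", new List<string> { "Bob" } },
        { "Bob", new List<string> { "Alice" } },
    };

static IEnumerable<string> ParentsOf(string person)
{
    List<string> result;
    return parents.TryGetValue(person, out result)
        ? result
        : Enumerable.Empty<string>();
}

// The transitive closure of "parent of" is "ancestor of":
// AncestorsOf("Carol") is the set { "Bob", "Alice" }.
static HashSet<string> AncestorsOf(string person)
{
    return TransitiveClosure(ParentsOf, person);
}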

Now I have a mechanism which I can use to write code that implements the policies described in the specification.

static IEnumerable<TypeSymbol> DirectDependencies(TypeSymbol symbol)
{
    // SPEC: A class directly depends on its direct base class (if any) ...
    if (symbol.BaseTypeSymbol != null)
        yield return symbol.BaseTypeSymbol;
    // SPEC: ...and directly depends on the class within which it
    // SPEC: is immediately nested (if any).
    if (symbol.OuterTypeSymbol != null)
        yield return symbol.OuterTypeSymbol;
}

Great, we now have a method that exactly implements one sentence of the spec – given a type symbol, we can determine what its direct dependencies are. Now we need another method to implement the next sentence of the spec:

static HashSet<TypeSymbol> Dependencies(TypeSymbol classSymbol)
{
    // SPEC:  The complete set of classes upon which a class
    // SPEC:  depends is the reflexive and transitive closure of
    // SPEC:  the directly-depends-upon relationship.
    return TransitiveAndReflexiveClosure(DirectDependencies, classSymbol);
}

That’s what I like to see: code that reads almost exactly like the spec.
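
And though I don’t need it for this post, the cycle detector itself now nearly falls out for free. A sketch (HasBaseClassCycle is my invented name): a class participates in a base class cycle exactly when it appears in its own transitive closure. Note that the reflexive closure would be useless for this test, since every class trivially appears in that one.

static bool HasBaseClassCycle(TypeSymbol classSymbol)
{
    // Follow base-class and outer-class edges; if we can get back to
    // where we started, the "no cycles" rule is violated.
    return TransitiveClosure(DirectDependencies, classSymbol).Contains(classSymbol);
}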

Note also that I’ve made the design choice here to make these methods static methods of a policy class, rather than methods on the TypeSymbol class. I want to keep my policies logically and textually separated from my mechanisms. For example, I could be using the same classes that represent classes, structs, namespaces, and so on, to implement VB instead of C#. I want the policies of the C# language to be in a class whose sole responsibility is implementing these policies.
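
In sketch form, the separation looks something like this; a simplified illustration, since the real symbol classes are of course much richer:

// Mechanism: language-neutral symbol model, usable by any front end.
class TypeSymbol
{
    public TypeSymbol BaseTypeSymbol { get; set; }
    public TypeSymbol OuterTypeSymbol { get; set; }
}

// Policy: the C# language rules live here and only here.
static class CSharpBaseClassPolicies
{
    // DirectDependencies, Dependencies, BaseInterfaces, ...
}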

Another nice aspect of this approach is that I can now re-use my transitive closure computing mechanism when I come across this bit of the spec that talks about base interfaces of an interface:

The set of base interfaces is the transitive closure of the explicit base interfaces.

Unsurprisingly, the code in my prototype that computes this looks like:

static HashSet<TypeSymbol> BaseInterfaces(TypeSymbol interfaceSymbol)
{
    // SPEC: The set of base interfaces is the transitive closure
    // SPEC: of the explicit base interfaces.
    return TransitiveClosure(ExplicitBaseInterfaces, interfaceSymbol);
}
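
I haven’t shown ExplicitBaseInterfaces; it is just another one-sentence-of-the-spec method. A sketch, assuming the symbol exposes the interfaces named in its declaration’s base list (ExplicitBaseInterfaceSymbols is an illustrative name):

static IEnumerable<TypeSymbol> ExplicitBaseInterfaces(TypeSymbol interfaceSymbol)
{
    // The interfaces named in the interface declaration's base list,
    // already resolved to symbols during declaration processing.
    return interfaceSymbol.ExplicitBaseInterfaceSymbols;
}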

In fact, there are transitive closures in a number of places in the C# spec, and now I have a mechanism that I can use to implement all of them, should I need to for my prototype.

One final note: notice that I am returning a mutable collection here. Were I designing these methods for public consumption, I would probably choose to return an immutable set, rather than a mutable one, so that I could cache the result safely; these methods could be memoized since the set of dependencies does not change over time. But for the purposes of my quick little prototype, I’m just being lazy and returning a mutable set and not caching the result.
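
If I did want the caching, a minimal memoization sketch might look like this, assuming single-threaded use and that callers do not mutate the returned set:

static readonly Dictionary<TypeSymbol, HashSet<TypeSymbol>> dependenciesCache =
    new Dictionary<TypeSymbol, HashSet<TypeSymbol>>();

static HashSet<TypeSymbol> MemoizedDependencies(TypeSymbol classSymbol)
{
    HashSet<TypeSymbol> result;
    if (!dependenciesCache.TryGetValue(classSymbol, out result))
    {
        // Compute once; the dependency set of a class never changes.
        result = Dependencies(classSymbol);
        dependenciesCache.Add(classSymbol, result);
    }
    return result;
}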

Hopefully we'll get this new algorithm into the hypothetical next compiler release.

*************

(*) Incidentally, we are holding on to the wavy arrow ~> as a potential operator for future hypothetical versions of C#. We recently considered having a~>b() mean ((dynamic)a).b(). The operator is known as “the naj operator” after Cyrus Najmabadi, who advocated its use to the C# design team. Anyone who has a great idea for what the naj should mean, feel free to leave a comment.

  • @Pop.Catalin - the double colon is already used in C#:

    http://msdn.microsoft.com/en-us/library/c3ay4x3d(VS.80).aspx

  • Re: x<~y

    Has anyone ever wanted to check if a value is less than the bitwise NOT of another? Would this really break any real world code?

    I know I shouldn't dismiss breaking changes so lightly. I once saw this C expression... (!!p * i) in some real world code. (Return zero if p is null, or i if p is not null.) It seemed a strange way to do that until I observed that the house coding standard at the time prohibited the ?: operator.

    If someone had proposed changing the result of ! to zero or *any* non-zero value, they may have wondered whether anyone in the real world uses it in any way other than as zero or non-zero.

  • Can someone provide a use case for ((dynamic)a).b()?  I'm not sure I understand where this would be useful.  I am imagining a scenario like this:

    class MyClass {
        // empty class definition
    }

    void Foo(MyClass c)
    {
        c~>B();
    }

    // case 1
    dynamic d = GetDynamic();
    Foo(d);  // fails at runtime if GetDynamic() does not return something assignable to a MyClass

    // case 2
    MyClass c = new MyClass();
    Foo(c);  // fails at runtime since method B is not defined

    I will admit, I don't thoroughly understand the appropriate uses of dynamic yet, so I could be way off base.

  • Chris: I'm going to guess that the ((dynamic)A).B() scenario is mostly going to happen when A is an Object.

  • Even so, wouldn't it be possible to do this:

    // option 1
    void Foo(dynamic d)
    {
      d.B();
      d.C();
    }

    // option 2
    void Foo(object o)
    {
      dynamic d = (dynamic)o;
      d.B();
      d.C();
    }

    I apologize, I was not clear in my description. We were considering ~> being the "dynamic member access" operator *before* we hit on the idea of making dynamic a type in the C# type system. I meant to imply that the proposed operator would have the same semantics as ((dynamic)a).b() in the design we eventually settled on, not that it would actually be a syntactic sugar for that operation. We considered a number of possible designs for dynamic -- a dynamic block like the "unsafe" or "checked" block, new dynamic operators, a dynamic type, and so on. -- Eric

  • Ooh, how about ~> as defining an anonymous delegate type that's implicitly convertible to and from any delegate type with the right parameter and return types?

    int ~> bool x;
    Predicate<int> y = x;
    Func<int, bool> z = x;

    var func = (int x) => true;
    typeof(func) == typeof(int ~> bool); // true

    (Yeah, yeah, I know the CLR doesn't support any such concept, but I can dream...)

  • @Stuart

    > Does Hello World get written?

    Definitely yes, because arguments are evaluated in C# today, and their side effects are observed, before you get a NullReferenceException if the receiver is null. I would expect that our hypothetical operator ?. would only be different in that final step.

  • > I noticed you chose an iterative solution to finding the transitive closure instead of a recursive one.

    > Is this because of its performance characteristics?

    Recursion can be evil in such cases, especially when it's not tail recursion (which C# doesn't optimize anyway) - compiler implementation limits suddenly become much smaller (you can realistically hit them with reasonable code), and harder to predict as well.

    Case in point: two days ago I filed a bug against F# which had to do with a compiler crashing with StackOverflowException when you had a series of nested if-else statements, once you've reached three hundred or so:

      if x = 1 then ...
      else if x = 2 then ...
      else if x = 3 then ...

    (of course, the actual code did more interesting checks, and of course, it was generated rather than hand-written)

    Indeed, that is one reason. I am very worried about writing deeply recursive algorithms in C# that operate on user-supplied data because the consequences of blowing the stack are so severe, and we don't have tail recursion. I'm therefore tending towards writing algorithms that use heap-allocated stacks rather than using the system stack for recursion. It's good practice.

    Another reason is that such algorithms are potentially easier to debug. Particularly if the stacks are immutable! One can imagine writing a debug version of the algorithm that simply keeps all of the stacks around so that one can view the operation of the algorithm at any point in time during its history.

    Another is that recursive enumerators are inefficient in C#; they can turn O(n) operations into O(n lg n) or O(n^2) algorithms. I would rather use O(n) extra space and save on the time.

    -- Eric
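
    For contrast, a sketch of the recursive shape being avoided (an invented example, not code from the thread); each level of the relation consumes a system stack frame, so a long dependency chain can throw StackOverflowException:

    static void AddReachable<T>(
        Func<T, IEnumerable<T>> relation, T item, HashSet<T> closure)
    {
        foreach (T newItem in relation(item))
        {
            // HashSet<T>.Add returns false if the item was already present,
            // which prevents infinite regress on cyclic relations.
            if (closure.Add(newItem))
                AddReachable(relation, newItem, closure);
        }
    }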

  • Now that you've suggested that using heap-allocated stacks is good practice (I know, not exactly what you meant), is there any chance you could convince somebody to implement heap-allocated call stacks? That would make continuations possible to easily implement.

  • "Definitely yes, because arguments are evaluated in C# today, and their side effects are observed, before you get a NullReferenceException if the receiver is null. I would expect that our hypothetical operator ?. would only be different in that final step."

    I'm not opposed to that interpretation, but remember that the simplest way of describing the problem we want an operator to solve is "I want a shorthand for a == null ? null : a.b()". Which does not evaluate the operands of b if a is null.

    Other operators in C# that have "?" in them tend to be shortcutting operators: the ternary, and the ?? operator, which does not evaluate its right-hand operand if the left-hand operand is not null. I think there might be an expectation that "?." or ".?" would also shortcut.
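
    To put the difference in code (invented names, for illustration): with the non-shortcutting reading the argument's side effects are observed even when the receiver is null, while the shortcutting reading skips them entirely.

    using System;

    class Widget
    {
        public string Describe(int cost) { return "cost " + cost; }
    }

    static class Demo
    {
        static int Expensive()
        {
            Console.WriteLine("Expensive() evaluated");
            return 42;
        }

        static void Main()
        {
            Widget a = null;

            // Non-shortcutting reading: the argument runs first, then the
            // receiver check happens, so "Expensive() evaluated" prints.
            int arg = Expensive();
            string r1 = (a == null) ? null : a.Describe(arg);

            // Shortcutting reading, like ?: and ??: when a is null the
            // argument is never evaluated and nothing prints.
            string r2 = (a == null) ? null : a.Describe(Expensive());
        }
    }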

  • @Stuart:

    Yes, now that you put it that way, it does make a good point.

  • IMHO, no need for the "naj" operator. Give me non-nullable pointer types and in-foof, and I'm happy.

  • What was I thinking? Of course I mean "non-nullable reference types". DOH.

  • @Erik:

    It's kind of a trick question, but if you had non-nullable reference types (say, T!), what values would you expect this code to produce?

      object![] a = new object![1];
      a[0]; // ?

      struct Foo { object! x; }
      Foo f = new Foo();
      f.x; // ?

    and a few more tricky cases:

      class Base {
         abstract void Foo();
         Base() { Foo(); }
      }

      class Derived : Base {
         object! o = new object();
         override void Foo() {
            o; // ? (assume call during instantiation)
         }
      }

      class Static {
          static object! x = y; // ?
          static object! y = new object();
      }
