# Graph Colouring With Simple Backtracking, Part Two


Before I begin, a quick note: congratulations and best wishes to David Johnston, currently the president of my alma mater, the University of Waterloo. The Queen has appointed him to be the next Governor General of Canada come this October. For those of you unfamiliar with the Canadian political structure, Queen Elizabeth is the sovereign ruler of Canada; the Governor General acts as her direct representative in Canada and therefore has the (mostly ceremonial, but some real) powers of a head of state.

Moving on!

OK, we've got our graph class. Our basic algorithm for assigning colours will be to associate with each node in the graph a set of "possible colours", and then gradually reduce that set for each node until either we've proved that there is no possible colouring, or we've found one that assigns exactly one colour to each node. Therefore we will need a type that represents a set of possible colours.

First off, what type should represent a colour? There are already Color enums in the BCL, but is that really what we want? We're probably going to want to ask questions like "is there a colouration of this graph that uses only three colours?", rather than "is there a colouration of this graph that uses only green, blue and orange?"  The actual colours are kind of irrelevant. Let's say that a colour is simply an integer from zero to max-1. We can always make a mapping from integers to real colours if we want to.

Note that this design decision immediately introduces a point of possible confusion; last time we said that nodes in a graph were also represented as integers. If I were particularly concerned about that, I might devise two structs that each wrap an integer, say, GraphNode and GraphColour. Then I could let the compiler tell me if I were to accidentally assign something that is logically a colour to a variable that is logically a node.
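Such wrappers are tiny; here is a sketch of what they might look like (the names GraphNode and GraphColour are purely illustrative, as above; the series doesn't actually implement them):

```csharp
// Usage: the compiler now distinguishes nodes from colours.
GraphNode node = new GraphNode(5);
GraphColour colour = new GraphColour(3);
// GraphNode oops = colour;   // would be a compile-time error
System.Console.WriteLine(node.Value + " " + colour.Value);

// Hypothetical single-field wrappers: each is just an int at runtime,
// so the type safety costs essentially nothing.
internal struct GraphNode
{
    private readonly int value;
    public GraphNode(int value) { this.value = value; }
    public int Value { get { return this.value; } }
}

internal struct GraphColour
{
    private readonly int value;
    public GraphColour(int value) { this.value = value; }
    public int Value { get { return this.value; } }
}
```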

This is a tough call. Basically we are trading off the time and effort of implementing and maintaining the unnecessary wrapper structs against the possibility of time and effort lost to accidental bugs. Does the cost justify the benefit? My feeling is in this case, no. I'm going to not implement these wrapper structs this time.

OK, so a colour is a plain integer. I want to represent a set of plain integers. A HashSet<int> seems like a reasonable off-the-shelf choice, but it has a few problems. First off, it is mutable; part of the point of this exercise is to program in an immutable style. Second, it's heavyweight; in order to maintain a set of integers it is allocating arrays and hash buckets and all kinds of stuff.

If we're willing to constrain the problem we can do a lot better than a HashSet<int> to represent a set of integers. Suppose for example that we say that we're never going to try to colour a graph with more than 32 colours, which seems reasonable. Given that constraint, we can represent a set of integers in a single 32 bit integer, by simply using the integer as a bit set. Integers are already immutable, so it seems natural to make an immutable set out of an integer.
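The underlying trick, sketched on a raw int before we wrap it in a type: bit i of the integer is 1 exactly when i is a member of the set.

```csharp
int set = 0;                                // {} — the empty set
set |= 1 << 3;                              // add 3:    {3}
set |= 1 << 7;                              // add 7:    {3, 7}
bool hasThree = (set & (1 << 3)) != 0;      // membership test: true
set &= ~(1 << 3);                           // remove 3: {7}
bool stillHasThree = (set & (1 << 3)) != 0; // false
```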

Now, of course if that's our approach then we don't even need a type that wraps an integer. We could just use an integer. But, unlike in our example of the graph nodes and graph colours, we actually have operations we want to perform on this type, like "enumerate all your members". This is much more like a real abstract data type, so it's justifiable to make a real type out of it rather than simply using an integer directly.

```csharp
// A super-cheap immutable set of integers from 0 to 31;
// just a convenient wrapper around bit operations on an int.
internal struct BitSet : IEnumerable<int>
{
    public static BitSet Empty { get { return default(BitSet); } }

    private int bits;

    private BitSet(int bits) { this.bits = bits; }

    public bool Contains(int item)
    {
        Debug.Assert(0 <= item && item <= 31);
        return (bits & (1 << item)) != 0;
    }

    public BitSet Add(int item)
    {
        Debug.Assert(0 <= item && item <= 31);
        return new BitSet(this.bits | (1 << item));
    }

    public BitSet Remove(int item)
    {
        Debug.Assert(0 <= item && item <= 31);
        return new BitSet(this.bits & ~(1 << item));
    }

    IEnumerator IEnumerable.GetEnumerator() { return this.GetEnumerator(); }

    public IEnumerator<int> GetEnumerator()
    {
        for (int item = 0; item < 32; ++item)
            if (this.Contains(item))
                yield return item;
    }
}
```

Let's again go through this line by line.

As with the graph class, I've made the type internal. Since I know that I'm the only one using this, I can be a bit more cavalier about not doing bounds checking with nice exceptions and whatnot. If this were a public type in the BCL then I'd have to do much more "retail" bounds checking to ensure that the items actually are integers from 0 to 31; instead, I'll just do that checking with debug mode assertions.

An immutable set of integers can logically be treated as a value, and this type is very small in terms of the data it contains, so I've made this a value type.

Unlike the graph class, here I am emphasizing the implementation detail of the set by naming the class BitSet instead of, say, SmallIntegerSet. Am I inconsistent? Yes, I contain multitudes. It feels like the right thing to do here is to emphasize that this is fundamentally a mechanism, rather than the carrier of semantics. I'm not sure why that is.

I could implement a more full-on interface like ICollection<int>, but that includes operations that I don't want, like the ability to mutate the collection. Rather than use an off-the-shelf interface in a confusing, off-label manner, I'll just implement the standard interface that I can implement correctly, IEnumerable<int>, and then implement my own Add and Remove methods that do the right thing.

I'm using the standard immutable object pattern of providing a static way to get at the empty collection. This is unnecessary; default(BitSet) gives you the right thing. But it reads much more nicely to say BitSet.Empty; it is clear from the code that you're getting an empty collection.

I've made the private data a signed integer rather than an unsigned integer. It's marginally safer to represent a bit vector as a uint because you run less risk of having a problem with sign bit issues. However, you also then have to do casts all over the show, which is ugly. I say that the math here is simple enough that we can verify that it is correct without worrying too much about sign issues.

The constructor which sets the internal state is private; this deprives the caller of the power of setting many bits at once. But if the caller has an int that has many bits set at once, then why are they using the BitSet in the first place? Apparently they're already willing to do bit twiddling in their own code to set those bits. Again the YAGNI principle says to not provide unneeded functionality.

The Contains, Add and Remove methods, as mentioned before, do not verify the validity of their arguments because the type is only used internally. Debug assertions tell the caller when an invariant expected by the type has been violated.

You might wonder why the GetEnumerator method does a loop with a yield return, rather than being implemented as

```csharp
public IEnumerator<int> GetEnumerator()
{
    return
        from item in Enumerable.Range(0, 32)
        where this.Contains(item)
        select item;
}
```

Do you see why I chose to use a loop for this one, despite my stated desire to eschew loops in favour of queries?

Notice that I choose not to implement special versions of the sequence operators that are extension methods, even though I could. For example, there's no "Count" property on this thing. The naive implementation of Count() will call GetEnumerator, which as we've seen enumerates all 32 possibilities. But the count of the bitset is its Hamming weight, and there are many well-known algorithms for efficiently calculating the Hamming weight of a bit vector. We could for example use the algorithm commonly attributed to Kernighan (though it has been discovered independently many times throughout the history of computer programming). This algorithm clears the lowest set bit until all the bits are cleared, and tells you how many times it had to do that:

```csharp
public int Count()
{
    int remaining = this.bits;
    int count = 0;
    while (remaining != 0)
    {
        remaining &= (remaining - 1);
        count++;
    }
    return count;
}
```

The cost of this is proportional to the number of set bits, rather than always going through all 32. But who cares? 32 is a small number. Again, let's not add this complexity unless profile-driven empirical data suggests that naively counting the bitset entails an unacceptably high cost.
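To see why the cost is proportional to the number of set bits, here is the same loop run on a raw int: `remaining & (remaining - 1)` clears exactly the lowest set bit, so the loop body executes once per member.

```csharp
int remaining = 0b10110;  // bits 1, 2 and 4 set: three members
int count = 0;
while (remaining != 0)
{
    // 0b10110 -> 0b10100 -> 0b10000 -> 0: one iteration per set bit
    remaining &= (remaining - 1);
    count++;
}
// count is now 3, after exactly three iterations
```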

Notice that in contrast I did choose to implement Contains() myself rather than using the extension method. Had I used the extension method then obviously I would not have been able to use Contains() inside GetEnumerator; we would have gotten a stack overflow. I would have had to put the bit twiddling logic in GetEnumerator, and since I was going to have to write the bit twiddling logic anyway, it seemed reasonable to put it in its own method for general use.

Once more we've built a trivial little piece of code, but to do so, had to make many design decisions along the way. The fundamental question that has driven the entire design is "is it reasonable that we'll never have more than 32 colours?" We could trivially expand that to 64 by using a long instead of an int, but going farther than that would be a lot trickier.

Next time, let's actually solve a slightly harder problem; let's make an algorithm that actually solves colouring problems.

• Eric --- why didn't you make bits readonly?  Since you are only setting it in the constructor and since you want this to be an immutable type I would have thought you would have done that as part of the pattern.

I'm of two minds on that. On the one hand, "readonly" on any field clearly documents the intention of the programmer, and prevents those bugs where you accidentally write code that mutates the value of the variable. That's goodness.

On the other hand, readonly on a field of a value type is a lie. A dirty rotten lie. "Readonly" can be read as "I don't change this", which is accurate. But it can also be easily read incorrectly as "this never changes" which is patently false for a readonly field of a value type!

A readonly field of a reference type is not a lie; that thing really does never change (modulo nasty stuff like reflection, unsafe code, and so on.)  The reason that a reference type can make that guarantee is because the instance of the reference type owns the storage location associated with the field. A value type does NOT own the storage associated with a field; it is borrowing that storage from someone else who does own it.

For example:

```csharp
struct S
{
    public readonly int x;
    public S(int x) { this.x = x; }
    public void Evil(ref S s)
    {
        Console.WriteLine(this.x);
        s = default(S);
        Console.WriteLine(this.x);
    }
}
...
S z = new S(123);
z.Evil(ref z);
```

and sure enough, the value of a readonly field has changed. The struct doesn't own s, the local variable z owns s, and there is no restriction on z changing, and thereby changing s in the process.

So that's why I'm a bit suspicious of readonly fields in a value type. They appear to be making a guarantee that they cannot actually make. But perhaps the benefit of the self-documenting, accidental-bug-prevention field is worth having to live with a lie. I'll update the code.

- Eric

• Perhaps it's a YAGNI thing but it seems to me that for this particular type, Union(BitSet) and Intersection(BitSet) methods would be so simple to add and so much more efficient than the naive implementations using the enumerators that they'd be worth putting in up-front?

Not having seen the coloring algorithm you are going to write, I don't know for sure that they will be used, but it seems entirely possible to me that they would be useful in such an algorithm.

Good idea. But as it turns out, I don't need them. - Eric
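For the record, on this representation those operations would each be a single bitwise instruction. A sketch of what they might look like as extra BitSet members (hypothetical; as the reply above says, the algorithm ends up not needing them):

```csharp
// Hypothetical additions inside BitSet; they use the private
// constructor, so they would live in the struct itself.
public BitSet Union(BitSet other)
{
    return new BitSet(this.bits | other.bits);
}

public BitSet Intersect(BitSet other)
{
    return new BitSet(this.bits & other.bits);
}
```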

• In regard to readonly - I'm not sure I'd consider it a lie.

By virtue of being a "value" type, a struct represents a value, not a storage-location-for-a-value. When you say "s = default(S)" you're not assigning a new value to s.x in the same "instance" of S, you're replacing the entirety of s with a new "instance" of S (that happens to live in the same storage location).

Your Evil method deliberately confuses the issue by its use of a ref parameter of the same type as itself, such that "this" and "s" refer to the same thing but "s" is writable. What that says to me is "don't have nonstatic methods on a struct that take a ref parameter of the same type as the struct", rather than "don't put readonly on structs". As long as you avoid evil methods, "readonly" isn't a lie at all.

But that's just one way that you can make readonly lie; it looks contrived because it is contrived to show the problem in as little space as possible. But we can show this in many other ways:

```csharp
struct S
{
    public readonly int x;
    public S(int x) { this.x = x; }
    public void Bogus(Action action)
    {
        Console.WriteLine(this.x);
        action();
        Console.WriteLine(this.x); // different!
    }
}
struct T { public S s; ... }

T t = new T(new S(123), ... );
t.s.Bogus(() => { t = default(T); });
```

now we have a method that no longer takes a ref to S, and the action only affects S indirectly. There are lots of ways that we can show readonly to be a lie, and any of them are sufficient to make it unreliable. Saying "as long as you avoid situations in which the mutation of a readonly field is apparent, mutations are not apparent" is a tautology; my point is that the mere fact that there are any such situations means that the feature is misleading. - Eric

• What, no Enumeratour?

• Couldn't either System.Collections.BitArray or System.Collections.Specialized.BitVector32 be used instead?

Sure. But remember the point of this exercise is to program in an immutable style; bit arrays and bit vectors are mutable reference types. As we'll see next time, when you use arrays (or array-like data structures) you have to either make copies if you want to mutate state without affecting the state of another object, or you have to have this complicated dance of mutating and then unmutating the state so that the state can be shared between code fragments that each expect the state to be different. This is complicated and error-prone; the whole point of an immutable collection is that it can be treated as a value. - Eric

• Eric, I love that you even mentioned using int-wrapper structs to guarantee compiler static type checking for integers which represent different domains. I've done exactly this thing for my company's model's identifiers using a T4 template to generate the struct code for each identifier type, e.g. we have StudentID, StaffID, CourseID, ClassID, etc. There are certain groups of similarly named models in our domain which could have their members be easily confused, e.g. Class and Course.

• Eric, this is shaping up as an excellent series (no surprise there)... two questions/comments:

1) Readonly value types... Would it not make sense to prohibit passing a readonly field as a ref (or out) parameter at the compiler/CLR level?

2) Debug.Assert... I 100% agree with the logic that this is "robust enough for internal use", but why not use Code Contracts?

For the exact reasons you describe, I don't assume that they are immutable in that way. I do, however, think it makes programmer intent _clear_, and so if someone comes along and decides to change things and messes up the immutability in some less subtle manner, the compiler tells them about it.

Anyone who writes the sort of code you show to expose the issue should know what they are doing (I don't mean that in the general sense, I mean anyone working for me who does it!) so the readonly is a benefit of both intent/sanity checking.

I was surprised you didn't add it initially, though appreciate the subtleties involved.

• A perhaps minor nitpick about personal preference:

C# lets you ignore the return value of methods and has no const or similar annotation.  That makes it particularly hard to tell an immutable from a mutable structure.  I've seen code quite similar to this where people accidentally use such methods (IIRC, on the Rect structure from WPF) in a fashion that assumes the data structure is mutable when it actually isn't.

It's not always easy to avoid that confusion, but in particular, I find that avoiding names that are commonly used for mutable methods helps.  For example, Add/Remove could be called With/Without or similar.  Another less-pretty alternative is to use static methods.

e.g.:

bitset = bitset.With(x).Without(y);

//or

Immutability annotations or annotations for pure functions that could be used by the compiler to enforce that you may not ignore return values would be nice...

• "now we have a method that no longer takes a ref to S, and the action only affects S indirectly. There are lots of ways that we can show readonly to be a lie, and any of them are sufficient to make it unreliable. Saying "as long as you avoid situations in which the mutation of a readonly field is apparent, mutations are not apparent" is a tautology; my point is that the mere fact that there are any such situations means that the feature is misleading. - Eric"

Yes, I actually thought of that right after posting, but I felt like posting three comments in a row was a little excessive! :)

Here's my question, though: Is it possible to mutate a readonly field of a value type without the value type doing something that exposes the ability to do it? In other words - *in the case of your BitSet type* readonly isn't a lie, because it doesn't expose any callbacks nor call anything else that knows about BitSet that could overwrite it. Can you write a method that mutates BitSet.bits without changing the code of BitSet itself? If so I retract all my comments :)

I've always felt that the proper construction of value types is very difficult; that in order to write a value type that isn't confusing for consumers of the type you have to follow a very restrictive set of best practices, none of which are really laid out anywhere that I've found yet. Since I don't feel like I know the best practices well enough I usually avoid creating my own value types entirely. But I've always considered immutability to be vital; mutable value types confuse the heck out of me! So the idea of leaving off readonly makes me nervous.

By virtue of your comments on this post I'd say I can add "don't expose callbacks within instance methods", "don't take ref parameters in instance methods" and "don't call any methods in other types that are aware of this type" to the list of best practices. But your comments also reinforce the "writing good value types and understanding their behavior is hard" attitude that I already had...

• ... argh, yes you can mutate BitSet, because of GetEnumerator()...

```csharp
void Confuse(BitSet set) // where set has at least two bits set
{
    foreach (var bit in set)
    {
        set = default(BitSet);
    }
}
```

That could be fixed of course by having the GetEnumerator() method hold its own copy of the BitSet.

So, to add to Stuart's best-practices-for-value-types list: "If your struct uses "this" inside any generator methods or lambdas, make sure to capture "this" explicitly into a local variable instead."

Even that isn't quite sufficient because even the first line of a generator method doesn't get called until the first time someone tries to iterate. So to be safe you'd have to make sure the GetEnumerator() captures "this" and THEN calls a generator method.

Value types R hard!
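That last point can be sketched concretely: make GetEnumerator itself a plain method that copies `this` eagerly, and delegate the lazy part to a static iterator that receives the copy by value (a hypothetical rewrite of the BitSet method above):

```csharp
// GetEnumerator is no longer an iterator method, so its body runs at
// the time of the call: the struct is copied before any laziness begins.
public IEnumerator<int> GetEnumerator()
{
    BitSet copy = this;
    return Enumerate(copy);
}

// The static iterator holds its own by-value copy, so later mutations
// of the caller's variable cannot affect the enumeration.
private static IEnumerator<int> Enumerate(BitSet set)
{
    for (int item = 0; item < 32; ++item)
        if (set.Contains(item))
            yield return item;
}
```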

• By the way, I feel dumb for needing to ask, but is the reason you implemented GetEnumerator() as a loop rather than a query because it returns an Enumerator rather than an Enumerable? Because if so, you *could* implement it as:

```csharp
var set = this;
return
    (from item in Enumerable.Range(0, 32)
     where set.Contains(item)
     select item).GetEnumerator();
```

I'm not sure what tradeoffs that introduces, but I think it also solves my issue of mutating the set, and probably reduces the amount of generated code...

• "Do you see why I chose to use a loop for this one, despite my stated desire to eschew loops in favour of queries?"

I think it is faster as you save the cost of enumerating the range and calling into the where-iterator. You still have one iterator though.

• I have a bone to pick with you. Speaking of all the trade-offs you made, with the emphasis mostly on keeping things simple as compared to more complex designs and functionality: have you considered the time cost of all those decisions? Of the two blog posts you have written so far highlighting those decisions?

It might have been much quicker to do the obvious, i.e., choose the mutable types, not make them sealed, etc., than to grapple with all these decisions.

I am not saying you shouldn't have made all these decisions; your excellent blog posts demonstrate the choices we face every day, not just in software engineering but all across the board. But sometimes the act of deciding is more costly than picking one of the choices and moving on. en.wikipedia.org/.../Satisficing, which I came across a few weeks ago, is the source of my argument.

• "It's not always easy to avoid that confusion, but in particular, I find that avoiding names that are commonly used for mutable methods helps.  For example, Add/Remove could be called With/Without or similar.  Another less-pretty alternative is to use static methods."

How about operator + and -?

bitset += x; bitset -= y;

bitset = bitset + x - y;

bitset1 += bitset2; // union - or could be |=

Or does that go against C#'s operator-overloading philosophy? Seems to me it's not too different from adding a char to a string.

other thought... it'd be nice if there were a way to make new BitSet { 1, 2, 3 }; work while still being immutable - that's always struck me as a deficiency in the way collection initialization is handled - there should be some way to use the syntax in a way that passes the arguments to a constructor (or maybe a factory method) instead of an .Add method
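Those operators are easy to sketch on the bits representation; hypothetical static members of BitSet (they use the private constructor, and the post doesn't actually define them):

```csharp
// Hypothetical operators inside BitSet: + adds a member, - removes
// one, and | takes the union of two sets, so "bitset = bitset + x - y;"
// reads naturally while staying immutable.
public static BitSet operator +(BitSet set, int item)
{
    return new BitSet(set.bits | (1 << item));
}

public static BitSet operator -(BitSet set, int item)
{
    return new BitSet(set.bits & ~(1 << item));
}

public static BitSet operator |(BitSet left, BitSet right)
{
    return new BitSet(left.bits | right.bits);
}
```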
