I get a lot of questions about the rationale behind the design of the C# language. Over the next few months I'm going to try to make a point of posting a C# language design question up here every Monday. This week's question comes from Erik Meijer.

I was wondering what the reason is to disallow user-defined conversions to/from interfaces?

In certain cases this could make sense, but most often adding a user defined conversion to/from an interface would override a built in conversion.

Consider this case:

interface I { }
class Base { }
class Derived : Base, I { }

Can you add a user defined conversion between I and Base?

Base b = ...;
I i = ...;
i = (I)b;    // conversions here
b = (Base)i; // ... and here

Well, these conversions already exist. They do a dynamic type test, which will succeed if the actual source of the conversion was an instance of Derived for example. Allowing a user defined conversion would cause no end of confusion – do you get the built in compiler behavior, or the user defined behavior?
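A minimal sketch of that built-in behavior, using the types declared above: the cast performs a dynamic type test, succeeding when the object really is a Derived and throwing InvalidCastException when it is a plain Base.

```csharp
interface I { }
class Base { }
class Derived : Base, I { }

class Program
{
    static void Main()
    {
        Base b = new Derived();
        I i = (I)b;          // dynamic type test: succeeds, b is really a Derived
        Base back = (Base)i; // also succeeds, and preserves identity
        System.Console.WriteLine(object.ReferenceEquals(b, back)); // True: same instance

        Base plain = new Base();
        try
        {
            I fails = (I)plain; // Base does not implement I: throws at run time
        }
        catch (System.InvalidCastException)
        {
            System.Console.WriteLine("cast failed");
        }
    }
}
```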

You could speculate that it would be reasonable to allow user defined conversions to/from sealed types (including sealed classes, and structs), provided that those types don't implement the interface. Aside from being a confusing rule for the implementer of the conversion, it would also confuse the reader. Consider code like this:

I i = new MyStruct();

It would be easy to conclude that MyStruct implements interface I. But in the presence of user defined conversions that would not be the case.
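Today that conclusion is safe: the assignment compiles only when MyStruct actually implements I. A minimal sketch (MyStruct here is the hypothetical struct from the line above):

```csharp
interface I { }

struct MyStruct : I { } // remove ": I" and the assignment below fails to compile

class Demo
{
    static void Main()
    {
        I i = new MyStruct(); // legal precisely because MyStruct implements I (a boxing conversion)
        System.Console.WriteLine(i is MyStruct); // True
    }
}
```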

As it stands, a conversion between a reference type and an interface type, or between a class type and one of its base or derived types, is always identity preserving – you get the same instance. For value types, a conversion from a value type to one of its base types (System.Object, System.ValueType or System.Enum) or implemented interfaces is always a boxing conversion. A conversion from an interface, System.Object, System.ValueType or System.Enum to a value type is always an unboxing conversion.
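The boxing and unboxing cases can be illustrated with a short sketch (S is an assumed struct name): each conversion to object or to an implemented interface copies the value into a fresh heap box, and unboxing copies it back out.

```csharp
interface I { }
struct S : I { }

class Demo
{
    static void Main()
    {
        S s = new S();
        object o = s; // boxing conversion: copies s into a new heap object
        I i = s;      // also a boxing conversion: a second, separate box
        System.Console.WriteLine(object.ReferenceEquals(o, i)); // False: two distinct boxes
        S back = (S)o; // unboxing conversion: copies the value back out of the box
    }
}
```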

These relatively simple rules make it easy to reason about conversions involving reference types. As a programming discipline, I would recommend defining conversion operators rarely, and even then restricting them to conversions where both the source and destination are value types. Otherwise you are almost certainly going to confuse someone.
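A sketch of that recommended discipline, using hypothetical Celsius and Fahrenheit structs: the user defined conversion sits between two value types, where no built in conversion exists to collide with.

```csharp
// Hypothetical value types illustrating the recommendation: keep user defined
// conversions between value types, where there is no built in conversion to shadow.
struct Celsius
{
    public double Degrees;
    public Celsius(double degrees) { Degrees = degrees; }

    // Explicit, so the reader sees a cast at every conversion site.
    public static explicit operator Fahrenheit(Celsius c)
    {
        return new Fahrenheit(c.Degrees * 9.0 / 5.0 + 32.0);
    }
}

struct Fahrenheit
{
    public double Degrees;
    public Fahrenheit(double degrees) { Degrees = degrees; }
}

class Demo
{
    static void Main()
    {
        Fahrenheit f = (Fahrenheit)new Celsius(100.0);
        System.Console.WriteLine(f.Degrees); // 212
    }
}
```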

Cheers,
Peter
C# Guy