Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
(Eric is out camping; this posting is prerecorded. I'll be back in the office after Labour Day.)
The word "type" appears almost five thousand times in the C# 4 specification, and there is an entire chapter, chapter 4, dedicated to nothing but describing types. We start the specification by noting that C# is "type safe" and has "a unified type system" (*). We say that programs "declare" types, and that declared types can be organized by namespace. Clearly types are incredibly important to the design of C#, and incredibly important to C# programmers, so it is more than a little surprising that nowhere in the eight hundred pages of the specification do we ever actually define the word "type".
We sort of assume that the developer reading the specification already has a working understanding of what a "type" is; the spec does not aim to be either a beginner programming tutorial or a mathematically precise formal language description. But if you ask ten working line-of-business developers for a formal definition of "type", you might get ten different answers. So let's consider today the question: what exactly is this thing you call a "type"?
A common answer to that question is that a type consists of:
* A name
* A set (possibly infinite) of values
And possibly also:
* A finite list of rules for associating values not in the type with values in the type (that is, coercions; 123.45 is not a member of the integer type, but it can be coerced to 123.)
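That coercion rule can be seen concretely in an explicit cast. A minimal sketch in Java (standing in for C# here; explicit numeric narrowing conversions behave the same way in both languages):

```java
public class CoercionDemo {
    public static void main(String[] args) {
        double notAnInt = 123.45;
        // The explicit cast coerces the value toward zero:
        // 123.45 is not a member of the integer type, but 123 is.
        int coerced = (int) notAnInt;
        System.out.println(coerced); // prints 123
    }
}
```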
Though that is not a terrible definition as a first attempt, it runs into some pretty nasty problems when you look at it more deeply.
The first problem is that of course not every type needs to have a name; C# 3 has anonymous types which have no names by definition. Is "string" the name of a type? What about "List<string>" -- does that name a type? Is the name of the string type "string" or "String", or "System.String", or "global::System.String", or all four? Does a type's name change depending on where in the source code you are looking?
This gets to be a bit of a mess. I prefer to think of types as logically not having names at all. Program fragments "12" and "10 + 2" and "3 * 4" and "0x0C" are not names for the number 12, they are expressions which happen to all evaluate to the number 12. That number is just a number; how you choose to notate it is a fact about your notational system, not a fact about the number itself. Similarly for types; the program fragment "List<string>" might, in its context, evaluate to refer to a particular type, but that type need not have that fragment as its name. It has no name.
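To make the point about notation concrete, here is a tiny sketch in Java (standing in for C#): four different program fragments, one number, and none of the fragments is the number's "name".

```java
public class NotationDemo {
    public static void main(String[] args) {
        // Different notations, same value; the notation is a fact
        // about the notational system, not about the number itself.
        System.out.println(12 == 10 + 2); // true
        System.out.println(12 == 3 * 4);  // true
        System.out.println(12 == 0x0C);   // true
    }
}
```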
The concept of a "set of values" is also problematic, particularly if those sets are potentially infinite "mathematical" sets.
To start with, suppose you have a value: how do you determine what its type is? There are infinitely many sets that can contain that value, and therefore infinitely many types that the value can be! And indeed, if the string "hello" is a member of the string type, clearly it is also a member of the object type. How are we to determine what "the" type of a value is?
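The "infinitely many sets" problem shows up directly in any language with subtyping. A small sketch in Java (standing in for C#; `CharSequence` plays the role an interface like `IEnumerable<char>` might play in C#):

```java
public class MembershipDemo {
    public static void main(String[] args) {
        Object value = "hello";
        // The same value is a member of more than one type's value set,
        // so "the" type of the value is not well defined by sets alone.
        System.out.println(value instanceof String);       // true
        System.out.println(value instanceof CharSequence); // true
        // ...and trivially a member of Object's value set as well.
    }
}
```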
Things get even more brain-destroying when you think, oh, I know, I'll just say that every type's set of values is defined by a "predicate" that tells me whether a given value is in the type or not. That seems very plausible, but then you have to think about questions like "are types themselves values?" If so, then "the type whose values are all the types that are not members of themselves" is a valid predicate, and hey, we've got Russell's Paradox all over again. (**)
Moreover, the idea of a type being defined by a predicate that identifies whether a given value is of that type or not is quite dissimilar to how we typically conceive of types in a C# program. If it were, then we could have "predicate types" like "x is an even integer" or "x is not a prime number" or "x is a string that is a legal VBScript program" or whatever. We don't have types like that in the C# type system.
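In a mainstream language you can of course write such a predicate, but it lives as a *value*, checked at runtime, not as a type checked by the compiler. A sketch in Java (standing in for C#, where the analogue would be a `Func<int, bool>`):

```java
import java.util.function.Predicate;

public class PredicateDemo {
    public static void main(String[] args) {
        // "x is an even integer" expressed as a runtime predicate.
        // It classifies values, but the type system knows nothing of it:
        // there is no "even integer" type to be found here.
        Predicate<Integer> isEven = x -> x % 2 == 0;
        System.out.println(isEven.test(12)); // true
        System.out.println(isEven.test(7));  // false
    }
}
```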
So if a type is not a name, and a type is not a set of values, and a type is not a predicate that determines membership, what is a type? We seem to be getting no closer to a definition.
Next time: I'll try to come up with a more workable definition of what "type" means to both compiler writers and developers.
(*) An unfortunate half-truth, as pointer types are not unified into the type system in the version of the C# language which includes the "unsafe" subset of functionality.
(**) When Russell discovered that his paradox undermined the entire arithmetic theory of Frege, he and Whitehead set about inventing what is now known as the "ramified theory of types", an almost impossible-to-understand theory that is immune to these sorts of paradoxes. Though the mathematical underpinnings of type theory/set theory/category theory/etc are fascinating, I do not understand them nearly well enough to go into detail here. Remember, what we're looking for is a definition of "type" that is amenable to doing practical work in a modern programming language.
I think that in the scope of modeling an application in a programming language, a type is some kind of construct which the programmer creates.
that's some integer coercion that turns 12.345 into 123. =)
Defining a type as a set of values is not a problem just because you can't ask what +the+ type of an expression is: after all, due to subclassing, values may have multiple types. The subtype is just a (smaller) set of values that all belong to the set of values of the base type.
An important property of types not mentioned yet is that a type describes a way to shape new objects, although that doesn't do for a definition either (since there are types that you cannot create new objects from, e.g. static types).
@yie, the type of 12.345 is DecaDouble. You have to assume that Eric is right... :-)
As an aside to people interested in it, there are in fact languages where things like "x is an even integer" or "x is a string that is a legal VBScript program" are types. This is a consequence of the Curry-Howard correspondence (which actually says that proof systems and programs can be expressed as each other, not types, so I'm glossing over some very necessary details for the full picture). Definitely worth checking out if mathematics don't scare you too much (there's lots of words and not many numbers in formal logic, so don't worry). Not the approach taken by mainstream languages like C#, obviously.
Actually, in the .NET/C# type system, for any given value there is a type that is a subtype of (i.e., a subset of) all the other types that the value or expression is a member of. That is usually what is referred to when one speaks of "the" type of a value or expression. Simply calling a type a set of values is insufficient, as not all sets of values are types.
Instead, I would define a type using something along the lines of "a logical construct that defines a set of possible values and what operations can be performed on that set of values."
Types are a tool programmers use to structure data and the valid operations on the data.
Defining a type as a set of valid values is too academic to be useful in implementing a compiler.
A type is metadata, the actual form of which necessarily depends on the features of the programming language.
Another problem with using the "set of values" definition is structurally identical compound types.
Imagine you have two classes, X and Y, whose definitions are identical. For example, a class Height and a class Width, which are basically wrappers over an Int (a single public Int field named "value," and no methods). Should a Height with value 10 be considered the same type as a Width with value 10?
The set of values that they take are identical. But logically, they are different classes, and you want your type system to know the difference between Heights and Widths. After all, the main benefit of static type systems is checking program correctness. But in order to do so, you have to take two identical sets of values and somehow separate them.
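The Height/Width point can be sketched concretely. A minimal example in Java (standing in for C#; both are nominally typed, so structurally identical classes are still distinct types):

```java
public class NominalTypingDemo {
    // Two structurally identical wrappers over an int...
    static final class Height { final int value; Height(int v) { value = v; } }
    static final class Width  { final int value; Width(int v)  { value = v; } }

    static void printHeight(Height h) { System.out.println(h.value); }

    public static void main(String[] args) {
        Height h = new Height(10);
        Width w = new Width(10);
        // ...with identical value sets, yet the type system keeps them apart:
        System.out.println(h.getClass() == w.getClass()); // false
        // printHeight(w); // would not compile: a Width is not a Height
    }
}
```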
my best shot: a type is a blueprint for values or references.
pointer types seem to be out of this category.
Just because Width and Height have the same structure, it doesn't mean they describe the same set of values. Surely an instance of Height with value=10 is not just of a different type from an instance of Width with value=10, but actually an entirely distinct value. It might have the same in-memory representation, but that's surely an unimportant implementation detail.
A Type is a set of rules for how to treat a region of memory.
I bet it's really called a "Type" because "Class" and "Structure" were already taken, it was late at night, it made sense and it sounded good.
Of course, the definition of a type is dependent on context - I think of types in the CLR and types in C++ and types in Haskell as completely different things, and they are all programming language types!
For CLR languages, the obvious answer is: A type is: an optional name, an optional namespace, an accessibility, the assembly in which it is defined, an optional type from which it inherits, an optional set of types which it implements, an optional list of array dimensions, an optional type to which this is a pointer, an optional list of generic arguments and a set of members (did I miss anything?). Obviously, that isn't terribly useful a definition for either semantics or implementation, but there you go :)
Semantically, a type is the association of a set of values with operations on them: 3 may be a value that is in the types byte, ushort, int, double, BigInteger, (in some languages) string, etc., but without the association with a type (even if it's just object), you can't actually do anything with it - except perhaps, depending on your interpretation of when a value's existence begins, create a new instance of a type from it. Interestingly, this answers why expressions with the 'same' operations on the same values can have different results: e.g. 6 / 4 gives 1.5 with double division, but 1 with int division, and 100 * 2 gives 200 with byte multiplication, but -56 with sbyte multiplication.
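Those arithmetic examples can be checked directly. A sketch in Java (standing in for C#; Java's `byte` is signed 8-bit like C#'s `sbyte`, and masking with `0xFF` plays the role of C#'s unsigned `byte`):

```java
public class OperationsDemo {
    public static void main(String[] args) {
        // The "same" operation on the "same" values, under different types:
        System.out.println(6 / 4);     // 1   (int division)
        System.out.println(6.0 / 4.0); // 1.5 (double division)

        // 100 * 2 is 200 as an int; the same eight bits read as a
        // signed byte (C#'s sbyte) come back as -56.
        System.out.println((byte) (100 * 2));  // -56
        System.out.println((100 * 2) & 0xFF);  // 200 (unsigned-byte view)
    }
}
```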
I prefer splitting the type from its definition; this simplifies anonymous types (a type without any definitions) and typedef/type forwarding (a type with multiple definitions).
I don't find it hard to believe that the "set of values" approach to defining types is not a total description of what we understand by "type", but I'm not yet convinced by Eric's arguments against it.
'suppose you have a value: how do you determine what its type is?'
This seems a strange question to ask. What is it to "have a value"? Are we talking about a person looking at marks on a page? A compiler looking at expressions in a program text? A CPU looking at bytes in RAM? In most of the cases we care about, I would have thought it is most unusual to have a value without knowing a type, because in order to have a value you must have had some sort of *symbol*, be it scribbles on paper or bytes in RAM, and some system for interpreting such symbols into some possible set of conceptual values, and so we already have at least one type that we know the value belongs to.
Furthermore, why do we want to determine a single type for the value "hello"? Surely the only reason a type is useful is when we need to work with a value that we *don't* entirely know in advance, but about which we know *something*. The type is the information that we have that constricts our unknown value from the set of all conceivable ideas to something useful which can be meaningfully processed. It seems more important to worry about the relationships between the types we care about than to fret over the difficulty of saying anything about the type of an arbitrary value from the set of all possible values.
When I was learning how to program I had a hard time creating a mental model of what types were.
I finally came up with something simple, when I reversed the logic.
I started by thinking what a program actually is to the computer.
*) What is a program?
It's a block of memory containing instructions for the CPU.
*) What is an (object) instance?
It's a block of memory containing data those instructions should operate on.
*) Since both can occupy (physically, though not at the same time) the same memory address, you need to tell the difference.
*) Then what is a type?
It's a block of memory that describes (a definition of) how another block of memory is to be interpreted.
You can read a block of memory with the wrong type, but that will (mostly) give you garbage.
This model has worked fine for me for years now.
There is some recursion going on here, (what describes the type?)
but that is ok, you need a compiler to build a compiler too.
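The "wrong type gives garbage" point above can be demonstrated even in a memory-safe language. A sketch in Java (standing in for C#; `ByteBuffer` lets us reinterpret the same bytes under two types, much as an unsafe pointer cast would):

```java
import java.nio.ByteBuffer;

public class ReinterpretDemo {
    public static void main(String[] args) {
        // Store the float 1.0f, then read the same four bytes back
        // as if they were an int: the bits are unchanged, but the
        // "wrong type" interpretation yields a very different number.
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.putFloat(1.0f);
        buf.rewind();
        System.out.println(buf.getInt()); // 1065353216, not 1
    }
}
```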