References are not addresses


[NOTE: Based on some insightful comments I have updated this article to describe more clearly the relationships between references, pointers and addresses. Thanks to those who commented.]

I review a fair number of C# books; in all of them of course the author attempts to explain the difference between reference types and value types. Unfortunately, most of them do so by saying something like "a variable of reference type stores the address of the object". I always object to this. The last time this happened the author asked me for a more detailed explanation of why I always object, which I shall share with you now:

We have the abstract concept of "a reference". If I were to write about "Beethoven's Ninth Symphony", those two-dozen characters are not a roughly seventy-minute symphonic masterwork with a large choral section. They're a reference to that thing, not the thing itself. And this reference itself contains references -- the word "Beethoven" is not a long-dead, famously deaf Romantic-period composer, but a reference to one.

Similarly in programming languages we have the concept of "a reference" distinct from "the referent".

The inventor of the C programming language, Dennis Ritchie, oddly enough chose not to have the concept of references at all. Instead, he made "pointers" first-class entities in the language. A pointer in C is like a reference in that it refers to some data by tracking its location, but there are more smarts in a pointer: you can perform arithmetic on a pointer as if it were a number, you can take the difference between two pointers that both point into the interior of the same array and get a sensible result, and so on.

Pointers are strictly "more powerful" than references; anything you can do with references you can do with pointers, but not vice versa. I imagine that's why there are no references in C -- it's a deliberately austere and powerful language.

The down side of pointers-instead-of-references is that pointers are hard for many novices to understand, and make it very very very easy to shoot yourself in the foot.

Pointers are typically implemented as addresses. An address is a number which is an offset into the "array of bytes" that is the entire virtual address space of the process (or, sometimes, an offset into some well-known portion of that address space -- I'm thinking of "near" vs. "far" pointers in Win16 programming; but for the purposes of this article let's assume that an address is a byte offset into the whole address space). Since addresses are just numbers, you can easily perform pointer arithmetic with them.

Now consider C#, a language which has both references and pointers. There are some things you can only do with pointers, and we want to have a language that allows you to do those things (under carefully controlled conditions that call out that you are doing something that possibly breaks type safety, hence "unsafe".)  But we also do not want to force anyone to have to understand pointers in order to do programming with references.
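To make that split concrete, here is a minimal sketch (my own illustration, not from the spec) of C#'s opt-in pointer support; it must be compiled with the /unsafe switch:

```csharp
using System;

class PointerSketch
{
    // 'unsafe' marks the code that may break type safety.
    static unsafe void Main()
    {
        int x = 42;
        int* p = &x;          // take the address of a local
        *p = *p + 1;          // dereference and mutate, C-style
        Console.WriteLine(x); // prints 43

        int[] a = { 10, 20, 30 };
        fixed (int* q = a)    // pin the array so the GC cannot move it
        {
            // Pointer arithmetic, exactly as in C: prints 30.
            Console.WriteLine(*(q + 2));
        }
    }
}
```

Note that even here the language forces you to pin the array before taking its address, which foreshadows the garbage-collection point below.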

We also want to avoid some of the optimization nightmares that languages with pointers have. Languages with heavy use of pointers have a hard time doing garbage collection, optimizations, and so on, because it is infeasible to guarantee that no one has an interior pointer to an object, and therefore the object must remain alive and immobile.

For all these reasons we do not describe references as addresses in the specification. The spec just says that a variable of reference type "stores a reference" to an object, and leaves it completely vague as to how that might be implemented. Similarly, a pointer variable stores "the address" of an object, which again, is left pretty vague. Nowhere do we say that references are the same as addresses.

So, in C# a reference is some vague thing that lets you refer to an object. You cannot do anything with a reference except dereference it and compare it with another reference for equality. A pointer in C#, by contrast, is explicitly an address.

By contrast with a reference, you can do much more with a pointer that contains an address. Addresses can be manipulated mathematically; you can subtract one from another, you can add integers to them, and so on. Their legal operations indicate that they are "fancy numbers" that index into the "array" that is the virtual address space of the process.
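A small sketch of my own makes the restricted surface area of references visible; the commented-out lines are the operations references deliberately do not support:

```csharp
using System;

class ReferenceOps
{
    static void Main()
    {
        string a = "hello";
        string b = a;

        // The only things you can do with a reference:
        // dereference it (member access)...
        Console.WriteLine(a.Length);          // prints 5

        // ...and compare it to another reference for identity.
        Console.WriteLine(ReferenceEquals(a, b)); // prints True

        // There is no reference arithmetic; none of these compile:
        // var x = a + 1;   // error: no such operator
        // var y = a - b;   // error: no such operator
        // long z = (long)a; // error: no conversion from reference to number
    }
}
```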

Now, behind the scenes, the CLR actually does implement managed object references as addresses to objects owned by the garbage collector, but that is an implementation detail. There's no reason why it has to do that other than efficiency and flexibility. C# references could be implemented by opaque handles that are meaningful only to the garbage collector, which, frankly, is how I prefer to think of them. That the "handle" happens to actually be an address at runtime is an implementation detail which I should neither know about nor rely upon. (Which is the whole point of encapsulation; the client doesn't have to know.)

I therefore have three reasons why authors should not explain that "references are addresses".

1) It's close to a lie. References cannot be treated as addresses by the user, and in fact, they do not necessarily contain an address in the implementation. (Though our implementation happens to do so.)

2) It's an explanation that explains nothing to novice programmers. Novice programmers probably do not know that an "address" is an offset into the array of bytes that is all process memory. To understand what an "address" is with any kind of depth, the novice programmer already has to understand pointer types and addresses -- basically, they have to understand the memory model of many implementations of C. This is one of those "it's clear only if it's already known" situations that are so common in books for beginners.

3) If these novices eventually learn about pointer types in C#, their confused understanding of references will probably make it harder, not easier, to understand how pointers work in C#. The novice could sensibly reason "If a reference is an address and a pointer is an address, then I should be able to cast any reference to a pointer in unsafe code, right?"  But you cannot.

If you think of a reference as actually being an opaque GC handle, then it becomes clear that to find the address associated with the handle you have to somehow "fix" the object. You have to tell the GC "until further notice, the object with this handle must not be moved in memory, because someone might have an interior pointer to it". (There are various ways to do that which are beyond the scope of this screed.)
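One such way -- shown here as an illustrative sketch, not an exhaustive treatment -- is the GCHandle API, which makes the handle-then-address two-step explicit:

```csharp
using System;
using System.Runtime.InteropServices;

class PinningSketch
{
    static void Main()
    {
        byte[] buffer = new byte[16];

        // Pinning tells the GC: do not move this object until further notice.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            // Only once the object is pinned is asking for its address meaningful.
            IntPtr address = handle.AddrOfPinnedObject();
            Console.WriteLine(address != IntPtr.Zero); // prints True
        }
        finally
        {
            handle.Free(); // un-pin: the GC may move the object again
        }
    }
}
```

The API shape itself reinforces the article's point: you start from an opaque handle, and an address is something you must explicitly, temporarily negotiate for.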

Basically what I'm getting at here is that an understanding of the meaning of "addresses" in any language requires a moderately deep understanding of the memory model of that language. If an author does not provide an explanation of the memory model of either C or C#, then explaining references in terms of addresses becomes an exercise in question begging. It raises more questions than it answers.

This is one of those situations where the author has the hard call of deciding whether an inaccurate oversimplification serves the larger pedagogic goal better than an accurate digression or a vague hand-wave.

In the counterfactual world where I am writing a beginner C# book, I would personally opt for the vague hand-wave.  If I said anything at all I would say something like "a reference is actually implemented as a small chunk of data which contains information used by the CLR to determine precisely which object is being referred to by the reference". That's both vague and accurate without implying more than is wise.

  • "In most modern cars, the accelerator pedal is just an input device of a computer that actually controls the engine (fuel injection); in a very similar way, a reference lets the programmer control the object whose lifecycle is actually managed by the runtime. A use of a pointer, in this context, would be tantamount to messing with the fuel valves directly, bypassing the computer." That's how I would write my beginner's book in the counterfactual world. :-)

    I like simple, vivid illustrations: I believe they are a viable alternative to both "inaccurate oversimplifications" and "accurate digressions", not to mention always good fun to read.

    Another example: here in Australia a police psychologist, when asked to explain, in plain English, the difference between a schizophrenic and an ordinary man with delusions, said that "men with delusions simply build castles in the sky; schizophrenics actually move in and live there." :-)

  • (In response to DRBlaise)

    Perhaps I am in the minority, but I truly feel that Mr. Sharp's explanation is clearer without removing the word "address." In my humble opinion, using the word "reference" to explain the term "reference" is more confusing. But, alas, perhaps I really am the only one who feels this way.

  • I run into this all the time.

    I want to do an operation on a sub-vector just as if it was the whole thing

    In C:

    void foo(FooType *v, int length) { }

    FooType *v = malloc(n * sizeof(FooType));
    foo(v + 10, n - 10);

    With a reference in C# there is no easy way to do the same thing. I end up passing a starting index to signify that the operation runs from that index to the end rather than over the whole array. So you have something like:

    void foo(FooType[] v, int start) { }

    FooType[] v = new FooType[n];
    foo(v, 10);

    Not a whole lot of difference as coding goes but definitely not the same.

    Not the same, but both are pretty much equally ugly. Some food for thought: are there ways that you could make foo more general and thereby make the call site more attractive? For example:

    void foo(IEnumerable<FooType> v) { }

    FooType[] v = new FooType[n];

    There is a performance and flexibility cost to treating an array as an IEnumerable -- you don't get the fast random access. But if you don't need random access and this operation is not your bottleneck, then you gain the flexibility of being able to pass any sequence to foo, not just an array. -- Eric
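    The suggestion above can be sketched as a complete program (my own illustration; Foo and the summing body are made-up stand-ins for whatever the real operation is):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class SequenceSketch
    {
        // Accepts any sequence, not just an array.
        static int Foo(IEnumerable<int> v)
        {
            int total = 0;
            foreach (int item in v) total += item;
            return total;
        }

        static void Main()
        {
            int[] v = { 1, 2, 3, 4, 5 };
            // Skip(2) plays the role of the C idiom 'v + 2':
            // the callee sees only the tail of the sequence.
            Console.WriteLine(Foo(v.Skip(2))); // 3 + 4 + 5 = 12
        }
    }
    ```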

  • > Not the same, but both are pretty much equally ugly.

    Speaking of which - do you know of any reason why System.ArraySegment<T> struct exists in its present shape? As it is, it seems to be mirroring the "T[] array, int offset, int length" pattern precisely, and it doesn't even attempt to abstract it away, so there's nothing gained from using it. If it at least had an implicit conversion operator from an array, it would be marginally handy to save on typing for the most common "pass the entire array" case; i.e.:

       void Foo(ArraySegment<FooType> v) { }

       FooType[] v = new FooType[n];

       Foo(v); // auto-convert to 0..Length

    Of course, some language-level syntactic sugar to slice arrays a la Python would be even better... but I really wonder why, for the lack of all of the above, is ArraySegment even there? Does anyone even use it? Or is it just a trace of some long-abandoned experiment along the aforementioned lines?
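    For readers who haven't met it, ArraySegment<T> as it actually ships really is just the (Array, Offset, Count) triple bundled into a struct; the callee still does all the offset bookkeeping itself, as this sketch (my own, with a made-up Sum) shows:

    ```csharp
    using System;

    class SegmentSketch
    {
        static int Sum(ArraySegment<int> seg)
        {
            // The caller's offset/length pattern is merely repackaged,
            // not abstracted away: we index the underlying array directly.
            int total = 0;
            for (int i = seg.Offset; i < seg.Offset + seg.Count; i++)
                total += seg.Array[i];
            return total;
        }

        static void Main()
        {
            int[] v = { 1, 2, 3, 4, 5 };
            Console.WriteLine(Sum(new ArraySegment<int>(v, 2, 3))); // 12
        }
    }
    ```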

  • "You're passing enough information to allow the callee to find the referenced thing"

    Isn't that an address? Can't any set of information used to find the thing referenced be called an "address"? Maybe in C land an address is usually a memory address, but I don't know if that has to be true in C# land (or that it matters whether it's a memory address or not)...

    I think the issue here is really just the English language: "pointer" is very concrete while "reference" isn't, and then you go on to explain that they're 'same same but different', which is confusing.

    Pointer - it points to something.

    Reference - it refers to something, and ultimately it's a pointer somewhere.

    Pointer is concrete. It points to a specific memory address (location in memory). You can shoot your foot off because you can set a pointer to anything and attempt to dereference it as anything else. You can load data from a file directly into memory and set a pointer to the start and index it like an array if you wish. The reality is your program is essentially one big array of bits and is free to interpret (or misinterpret) at will. Many bugs are associated with the incorrect interpretation of the contents of memory.

    For a reference though, it's abstract because it's very similar to a pointer, but we all have to pretend it's not. We know it also points to something, and while our referenced object doesn't change, the pointer held by the reference is free to change without our knowing.

    So I haven't liked any of the metaphors so far. It's not like an address written on a piece of paper because a street address never changes in the normal sense. In explaining things to beginners it behooves us to create sensible metaphors and not geeky ones and without complex contortions of the metaphor to make it work.

    I really dislike metaphors .. but ... consider this.

    Given a city, there is a car. It's your car - a red Ford Mustang convertible with fresh pine scent and cool new wheels. There are millions of cars, and millions of Ford Mustangs, and each car can be moved around.

    Now, you're not allowed to hold a pointer to the car because the car can be moved without your knowledge. e.g. If the car is towed or your brother borrows it he might not re-park it in the same place. Trying to hold a pointer will get you in trouble. You're mad so myRedFord.Kick() could end up kicking someone else's car, or you might kick empty air, or you might kick a building.

    But I do have a car, and I always know exactly which one it is when I go and get it.

    So if we want to know what a reference is, it's the licence plate (registration) of the car. Any number of things can refer to the car via its registration token, and the registration token can be used to locate the car at a specific point in time, but the pointer itself is only useful for very short time slices.

    Knowledge of pointers is essential to understanding a computer. And I don't think you need to handwave. You just need to say a reference is the thing (object) for all intents and purposes, and the reference probably contains the pointer but you never get to see the real pointer because it doesn't matter to the program you write. As you say, it's an implementation detail. The reference is all the book-keeping that goes on, and the reference is, in the end, not owned by the programmer, it's owned by the memory manager.

  • Timely, accurate, and unbiased.  Great article.

  • Perhaps the best explanation of references (IMO) can be found in Bruce Eckel's "Thinking in C++", in the chapter devoted to references and copy constructors. I know that is C++, but the concept can be carried over to C# without many modifications. Here's what Bruce Eckel says:


    References are like constant pointers that are automatically dereferenced by the compiler. The easiest way to think about a reference is as a fancy pointer. One advantage of this “pointer” is that you never have to wonder whether it’s been initialized (the compiler enforces it) and how to dereference it (the compiler does it).

    The point is that any reference must be tied to someone else’s piece of storage. When you access a reference, you’re accessing that storage.

    There are certain rules when using references:

      1. A reference must be initialized when it is created. (Pointers can be initialized at any time.)

      2. Once a reference is initialized to an object, it cannot be changed to refer to another object. (Pointers can be pointed to another object at any time.)

      3. You cannot have NULL references. You must always be able to assume that a reference is connected to a legitimate piece of storage.


    The first sentence pretty much nails it: a reference is like a constant pointer, which means it can't change to point to something else. Also, it doesn't need explicit dereferencing like pointers do; merely accessing the reference accesses the referent it is tied to, and incrementing a reference increments the referent itself. You can equate a reference to a label for the actual storage -- the reference is the referent, a bit like an alias, if you catch my drift (handle is an equally acceptable word).

    Pointers, on the other hand, are addresses of something stored in memory. Note that a pointer doesn't contain the object itself; it contains only the address at which the object currently resides. Since it is an address (essentially a number), a pointer lends itself to arithmetic, which in turn leads to a number of interesting (ab)uses of the concept.

    I would prefer that most high level languages not even have a pointer, just a reference is fine. Since references allow us to handle memory in a way that pointers do without the extra power, it can be considered a fairly "safe" pointer. References might use addresses at the end of the day but that is an implementation detail that should not be of any significance to end users of the language. By making it very difficult (if not impossible) to get to the actual address (the raw number) of a variable using its reference, we could eliminate a whole class of problems that could ensue from users making assumptions about what is inside a reference. It is enough to say a reference is an opaque thing that points/refers to an object in memory and allows us to manipulate the object directly.

  • For a novice OO programmer it suffices to say and it is TRUTH to say:

                 A pass by reference is passing the thing itself. (Original)

                 A pass by value is creating and passing a copy of the thing. (Copy)

    This started by talking about educating novice (C#) OO programmers. If we are going to educate OO programmers then let's not confuse them with talk about physical memory and addresses.

    We old school guys who started by coding hex or octal, moving on to assembler, and eventually to compiled and interpreted languages may need to think of the world in that way, but they don’t and we should not corrupt them.

    C and C++ are baroque tools, much like assembler in that they are philosophically closely tied to an understanding of the hardware of the machine. These old-school languages are powerful, expressive, and capable of astonishing efficiency in the right circumstances, but they also produce brittle, costly-to-support solutions. This required understanding of the physical implementation details is in fact one of the great drivers behind the advent of "modern" languages like C#.

    We need to raise our eyes up from the circuit board and focus more on modeling the world, that’s what OO is about.

    We don’t teach new surgeons how to smelt iron to make a scalpel, so why teach a novice OO programmer about memory locations. We need both metallurgist and surgeons but we don’t train them the same way.
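    The copy-vs.-original distinction above can be sketched in a few lines (my own illustration; the type names are made up):

    ```csharp
    using System;

    struct PointValue { public int X; }  // value type: assignment copies the thing
    class PointRef { public int X; }     // reference type: assignment copies the reference

    class CopySketch
    {
        static void Main()
        {
            PointValue a = new PointValue { X = 1 };
            PointValue b = a;       // a copy of the thing itself
            b.X = 99;
            Console.WriteLine(a.X); // prints 1: the original is untouched

            PointRef c = new PointRef { X = 1 };
            PointRef d = c;         // a copy of the reference, not the object
            d.X = 99;
            Console.WriteLine(c.X); // prints 99: both refer to the same object
        }
    }
    ```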

  • A good post, with some interesting comments. As I am in the process of posting some related material would it be OK to include a pointer to this post as a reference? <grin>

    My biggest complaint is the "We don't need to teach novices... (fill in the blank)". As a person with over 32 years of professional development experience (including 25 as the Chief Architect and CEO of my own company), I have repeatedly found that programmers who do not understand the internals and fundamentals have severe problems that manifest in many ways.

    Modern technologies mean that we do not have to really think about the lowest levels on a regular basis, but without a deep understanding of what happens at the actual hardware level, the quality of work does significantly suffer.

  • I applaud your ability to call out the vague idea of a reference, while still giving an explanation that is vague unto itself (and rightly so).

    Great article.

  • " 'a reference is actually implemented as a small chunk of data which contains information used by the CLR to determine precisely which object is being referred to by the reference'. That's both vague and accurate without implying more than is wise...."

    Not to be facetious, but that explanation is not only vague, it ranks with many similar phrases that I have seen make novices go sobbing into the night and throw said book, containing said "reference", right out their bedroom window.

    Try a gentler approach, such as: "A reference merely refers to a chunk of data that can be retrieved on demand by the developer. How said data is stored and maintained is entirely up to the CLR and is beyond the scope of this discussion." Then and only then would I footnote an external source for a more specific, detailed discussion, specifically warning the novice that to go there might cause them to become prematurely grey and could possibly result in an encounter with the infamous "Stray Pointer Dragon"!

  • A reference in C# is a "full service" pointer. Just pull up to the gas station and the attendant gives you what you want.

  • I notice you made no use of the word "abstraction" to describe references.

  • Anything but a literal is a reference. Even "5" is but a name describing an individual artefact of sustained enumeration: in the beginning there was nothing, and there was exactly one nothing, which gave us the value "1" - and then there were two things, the primal nothing and the value "1", and then there were three things, the primal nothing, the value "1", and the value "2", and by induction we have an infinity of integers, ordination and, most importantly, tenure for professors of discrete mathematics.


    int foo = 5;
    int bar = foo + 1;

    begins with the assignment of a literal to a variable which is a named REFERENCE to storage. In the expression foo + 1, the symbol foo is RESOLVED to the value stored in whatever the run-time chooses to store it in, the point here being that it's a reference that must be resolved.

    Go with the hand-wave. By the time Grasshopper is ready to understand the answer, he won't need to ask the question.
