The Truth About Value Types

As you know if you've read this blog for a while, I'm disturbed by the myth that "value types go on the stack". Unfortunately, there are plenty of examples in our own documentation and in many books that reinforce this myth, either subtly or overtly. I'm opposed to it because:

  1. It is usually stated incorrectly: the statement should be "value types can be stored on the stack", instead of the more common "value types are always stored on the stack".
  2. It is almost always irrelevant. We've worked hard to make a managed environment where the distinctions between different kinds of storage are hidden from the user, unlike some languages in which you must know whether a particular storage location is on the stack or the heap for correctness reasons.
  3. It is incomplete. What about references? References are neither value types nor instances of reference types, but they are values. They've got to be stored somewhere. Do they go on the stack or the heap? Why does no one ever talk about them? Just because they don't have a type in the C# type system is no reason to ignore them.

The way I've usually pushed back on this myth in the past is to say that the real statement should be "in the Microsoft implementation of C# on the desktop CLR, value types are stored on the stack when the value is a local variable or temporary that is not a closed-over local variable of a lambda or anonymous method, and the method body is not an iterator block, and the jitter chooses not to enregister the value."

The sheer number of weasel words in there is astounding, but they're all necessary:

  • Versions of C# provided by other vendors may choose other allocation strategies for their temporary variables; there is no language requirement that a data structure called "the stack" be used to store locals of value type.
  • We have many versions of the CLI that run on embedded systems, in web browsers, and so on. Some may run on exotic hardware. I have no idea what the memory allocation strategies of those versions of the CLI are. The hardware might not even have the concept of "the stack" for all I know. Or there could be multiple stacks per thread. Or everything could go on the heap.
  • Lambdas and anonymous methods hoist local variables to become heap-allocated fields; those are not on the stack anymore. (A sketch of this hoisting follows this list.)
  • Iterator blocks in today's implementation of C# on the desktop CLR also hoist locals to become heap-allocated fields. They do not have to! We could have chosen to implement iterator blocks as coroutines running on a fiber with a dedicated stack. In that case, the locals of value type could go on the stack of the fiber.
  • People always seem to forget that there is more to memory management than "the stack" and "the heap". Registers are neither on the stack nor the heap, and it is perfectly legal for a value type to go in a register if there is one of the right size. If it is important to know when something goes on the stack, then why isn't it important to know when it goes in a register? Conversely, if the register scheduling algorithm of the jit compiler is unimportant for most users to understand, then why isn't the stack allocation strategy also unimportant?
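
To make the hoisting point concrete, here is a minimal sketch of a captured local; the rewritten form and its names are illustrative, not what the compiler actually generates:

    using System;

    class HoistingDemo
    {
        // 'count' reads like a local of value type, but because the lambda
        // captures it, its storage must outlive this method's activation.
        static Func<int> MakeCounter()
        {
            int count = 0;
            return () => ++count;
        }

        // The compiler rewrites it roughly like this (invented names):
        class Closure
        {
            public int count;                  // the "local" is now a heap field
            public int Increment() => ++count;
        }

        static Func<int> MakeCounterRewritten()
        {
            var closure = new Closure();       // heap-allocated
            return closure.Increment;          // the delegate keeps it alive
        }

        static void Main()
        {
            var next = MakeCounter();
            Console.WriteLine(next());         // 1
            Console.WriteLine(next());         // 2
        }
    }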

Having made these points many times in the last few years, I've realized that the fundamental problem is in the mistaken belief that the type system has anything whatsoever to do with the storage allocation strategy. It is simply false that the choice of whether to use the stack or the heap has anything fundamentally to do with the type of the thing being stored. The truth is: the choice of allocation mechanism has to do only with the known required lifetime of the storage.

Once you look at it that way then everything suddenly starts making much more sense. Let's break it down into some simple declarative sentences.

  • There are three kinds of values: (1) instances of value types, (2) instances of reference types, and (3) references. (Code in C# cannot manipulate instances of reference types directly; it always does so via a reference. In unsafe code, pointer types are treated like value types for the purposes of determining the storage requirements of their values.)
  • There exist "storage locations" which can store values.
  • Every value manipulated by a program is stored in some storage location.
  • Every reference (except the null reference) refers to a storage location.
  • Every storage location has a "lifetime". That is, a period of time in which the storage location's contents are valid.
  • The time between the start of execution of a particular method and the method returning normally or throwing an exception is the "activation period" of that method execution.
  • Code in a method can require the use of a storage location. If the required lifetime of the storage location is longer than the activation period of the current method execution then the storage is said to be "long lived". Otherwise it is "short lived". (Note that when method M calls method N, the use of the storage locations for the parameters passed to N and the value returned by N is required by M.)
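
Here is a small sketch showing these sentences at work in a concrete fragment:

    class Lifetimes
    {
        static int[] MakeArray()
        {
            int temp = 42;            // a value of value type; its storage is
                                      // required only during this activation
            int[] result = { temp };  // 'result' stores a reference (a value);
                                      // the array instance it refers to must
                                      // outlive this activation, because the
                                      // reference is returned to the caller
            return result;
        }

        static void Main()
        {
            int[] a = MakeArray();    // 'a' is another storage location holding
                                      // a copy of the reference, not the array
            System.Console.WriteLine(a[0]);
        }
    }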

Now we come to implementation details. In the Microsoft implementation of C# on the CLR:

  • There are three kinds of storage locations: stack locations, heap locations, and registers.
  • Long-lived storage locations are always heap locations.
  • Short-lived storage locations are always stack locations or registers.
  • There are some situations in which it is difficult for the compiler or runtime to determine whether a particular storage location is short-lived or long-lived. In those cases, the prudent decision is to treat them as long-lived. In particular, the storage locations of instances of reference types are always treated as though they are long-lived, even if they are provably short-lived. Therefore they always go on the heap.

And now things follow very naturally:

  • We see that references and instances of value types are essentially the same thing as far as their storage is concerned; they go on the stack, in registers, or on the heap depending on whether the storage of the value needs to be short-lived or long-lived.
  • It is frequently the case that array elements, fields of reference types, locals in an iterator block and closed-over locals of a lambda or anonymous method must live longer than the activation period of the method that first required the use of their storage. And even in the rare cases where their lifetimes are shorter than that of the activation of the method, it is difficult or impossible to write a compiler that knows that. Therefore we must be conservative: all of these storage locations go on the heap. (An iterator sketch follows this list.)
  • It is frequently the case that local variables and temporary values can be shown via compile-time analysis to be unused after the activation period ends, and therefore can be treated as short-lived, and therefore can go onto the stack or be put into registers.
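
The iterator sketch promised above: a local whose value must survive between calls to MoveNext cannot be short-lived, so it becomes a field of a heap-allocated state machine.

    using System;
    using System.Collections.Generic;

    class IteratorDemo
    {
        // 'i' must survive across many resumptions of this method, so it
        // cannot live in a stack location; the compiler hoists it into a
        // field of a heap-allocated state machine object.
        static IEnumerable<int> CountTo(int limit)
        {
            for (int i = 1; i <= limit; i++)
                yield return i;
        }

        static void Main()
        {
            foreach (int n in CountTo(3))
                Console.WriteLine(n);
        }
    }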

Once you abandon entirely the crazy idea that the type of a value has anything whatsoever to do with the storage, it becomes much easier to reason about it. Of course, my point above stands: you don't need to reason about it unless you are writing unsafe code or doing some sort of heavy interoperating with unmanaged code. Let the compiler and the runtime manage the lifetime of your storage locations; that's what they're good at.

 

  • @Jon Skeet: but it's a fact that copying only happens on the stack and never on the heap (in the case of assignment and arguments). I know you can say it's irrelevant, but mentioning the stack and heap helps keep the discussion concrete.

    The article keeps mentioning the desktop CLR and the Microsoft implementation, but that's where the vast majority of .NET programs are running, i.e. this is the de facto CLR, so why bother?

    To tell you the truth, I would be afraid of discussing the article with a colleague without being labeled an uber geek, because for the large part it doesn't really matter, and saying value types are stored on the stack and reference types on the heap is good enough.

  • DaRage, I agree with Jon. What the programmer needs to know is that a value type is copied by VALUE, no matter whether it lives on the stack or the heap.

  • Where does a value type get stored if it's large enough to qualify for the Large Object Heap?  (Not that a struct that large would be a good idea, but for the sake of completeness.)

  • After having posted my previous comment, I've come up with a summary of it.

    Of course formally C# lacks the notion of a stack. But it lacks this notion not in a storage-agnostic way, but in a storage-indeterministic way. In this condition of indeterminacy, the developers have invented a number of abstractions which help them to write programs capable of handling predictable sizes of input data. And the stack is one of these abstractions. So we can say that C# itself, in its indeterminacy, gave birth to the notion of the stack.

    Here we are left with an important philosophical question. If C# itself gave birth to the notion of the stack, shouldn't we consider the stack to be an integral part of C#?

  • Hm... What I called "my previous comment" seems to be absent, so here it is:

    >"We've worked hard to make a managed environment where the distinctions between different kinds of storage are hidden from the user. Unlike some languages, in which you must know whether a particular storage is on the stack or the heap for correctness reasons."

    >"If if is important to know when something goes on the stack, then why isn't it important to know when it goes in a register? Conversely, if the register scheduling algorithm of the jit compiler is unimportant for most users to understand, then why isn't the stack allocation strategy also unimportant?"

    I cannot agree with your above cited points.

    First (and as an answer to the first cited paragraph), C# is hardly unlike those 'some' languages: for a developer, it does matter where the values go. Consider an algorithm of at least O(n) space complexity. For C# we can say that an algorithm implementation which utilizes the 'call stack' (i.e. an abstract storage associated with a chain of nested function calls) for this O(n) is likely to fail (with stack overflow) for relatively small n, while an algorithm implementation which utilizes some explicit storage (like Stack<T>) is likely to fail (with OOM) only for relatively large n. Given that, we can say that a C# developer should not use the call stack for O(n) storage unless he knows what he's doing. Obviously we can't admit that "distinctions between different kinds of storage are hidden from the user". For example, see the sketch below.
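
    The sketch (exact limits depend on thread stack size and available memory):

        using System;
        using System.Collections.Generic;

        class DepthDemo
        {
            // O(n) storage on the call stack: fails with StackOverflowException
            // for a relatively small n.
            static long RecursiveDepth(long n) =>
                n == 0 ? 0 : 1 + RecursiveDepth(n - 1);

            // O(n) storage in an explicit Stack<T>: fails with OutOfMemoryException
            // only for a vastly larger n.
            static long ExplicitDepth(long n)
            {
                var pending = new Stack<long>();
                for (long i = 0; i < n; i++)
                    pending.Push(i);
                return pending.Count;
            }

            static void Main()
            {
                Console.WriteLine(ExplicitDepth(10000000));   // fine
                Console.WriteLine(RecursiveDepth(10000000));  // likely overflows
            }
        }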

    In fact I'm not aware of _languages_ that are stack-overflow-free (i.e. are storage-agnostic). But there exist language implementations (like Chicken Scheme) that don't have a notion of stack overflow for all hardware architectures they are available for (i.e. in a program compiled with Chicken Scheme you may recur until you run out of address space for your heap).

    Second (and as an answer to the second cited paragraph), both register and stack allocation strategies are important. The way the program is written affects both of them. In turn, register allocation affects application performance, while stack allocation limits the size of input data the program can handle. But (I believe that) for a C# developer the latter usually matters much more than the former.

  • @DaRage Value types are always copied (barring ref semantics, but then you're passing something else instead) no matter where they reside. If they are within an array and you assign the value at index 1 to the value at index 2, you create a copy of what is in 1 and place it in 2. There is no requirement that this goes via the stack; it is an excellent candidate for staying in registers or even being dealt with by some very funky logic in the CPU implementing a rapid memcpy.

    Understanding this helps in understanding some of the reasons why making mutable value types is a very bad idea in most situations.
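
    For instance, a sketch with invented types showing why copy semantics bite mutable value types:

        using System;

        struct MutablePoint
        {
            public int X;
            public void MoveRight() { X++; }
        }

        class CopyDemo
        {
            static void Main()
            {
                var points = new MutablePoint[2];
                points[1] = points[0];           // copies the value, wherever it lives

                MutablePoint p = points[0];      // 'p' is yet another copy
                p.MoveRight();                   // mutates only the copy...
                Console.WriteLine(points[0].X);  // ...so this still prints 0
            }
        }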

  • @DaRage:

    Suppose you have a class with two fields, and both of those fields are of some large value type. What do you think happens when you write field2 = field1; ? It copies the value of field1 into field2... possibly via a stack value (I don't know, and it's frankly irrelevant). Copying happens, and both values are on the heap (probably; see all Eric's caveats about implementation). In what way does your claim that "copying only happens in the stack and never on the heap" hold up?
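
    In sketch form (Big and Container are invented for illustration):

        struct Big { public long A, B, C, D; }  // a largish value type

        class Container
        {
            Big field1, field2;      // both live inside the Container
                                     // instance, which is on the heap

            public void CopyOver()
            {
                field2 = field1;     // copies all of Big's bytes from one
                                     // heap location to another
            }

            static void Main() => new Container().CopyOver();
        }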

    You seem to have missed the point of Eric emphasizing the "this is for the desktop CLR" bit - it's trying to make it very clear that this is implementation specific, rather than being *inherently* part of C#. That's an important point to understand, even if you *also* want to know what happens to go on in the desktop CLR.

  • "In this condition of indeterminacy, the developers have invented a number of abstractions which help them to write programs capable of handling predictable sizes of input data. And stack is one of these abstractions."

    The stack is one of these abstractions the developers *of the implementation* created. There are other alternatives. They may all be stupid, and the stack may be the best one, but there is nothing that necessitates the stack. C# has not given birth to the notion of the stack; it has given birth to the need for some storage mechanism, which the implementation takes care of.

    In cases where you really need to squeeze out performance, it may be worthwhile knowing where the implementation stores the data. But these are edge cases, and the story of where it's actually stored is complicated. There is no need to pretend that the storage location is an important distinguishing factor between value and reference types.

    The conceptual difference of values being unchangeable values like '10' and reference types being … well, anything else (at least conceptually); the fact that value types store the data, while reference types store references to the data; the implications this has for allocation, and how copying a value type is likely a larger task than copying a reference; these things are enough to be teaching people for them to make a good distinction between when to use a value and a reference type, based on conceptual semantics and performance characteristics, without caring whether it's stored on the stack, the heap, or in your mother's porridge.
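
    For example, a sketch with invented types: the observable difference needs no mention of storage at all.

        using System;

        struct PointValue { public int X; }
        class PointReference { public int X; }

        class SemanticsDemo
        {
            static void Main()
            {
                var v1 = new PointValue { X = 1 };
                var v2 = v1;                // copies the value
                v2.X = 2;
                Console.WriteLine(v1.X);    // 1: v1 is unaffected

                var r1 = new PointReference { X = 1 };
                var r2 = r1;                // copies the reference only
                r2.X = 2;
                Console.WriteLine(r1.X);    // 2: both refer to one instance
            }
        }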

  • >value types are stored on the stack when the value is a local variable or temporary

    Does it need "or a field of a stack-stored variable" added?

  • @[ICR]

    >"The stack is one of these abstractions the developers *of the implementation* created."

    That is true. But in fact the stack+heap abstraction can be used to reason about all possible kinds of implementations. We can think of stack+heap as a 'storage model' (analogy with memory model intended). With such a model at hand (which defines a likely small stack, a likely large heap, allocation strategies and so on), developers can reason about their programs' correctness. If in addition the language implementation in use defines how the model's abstractions are mapped to hardware and software resources (e.g. address space regions, registers and so on), then the developers can reason about their programs in concrete numbers (e.g. we need X Mb of RAM to handle Y Mb of input) and can reason about their programs' performance as well (knowing which model abstractions are implemented in the most efficient way).

    C# didn't have a storage model, so the developers invented an unofficial, folklore one. The desire to have a storage model is natural, so all this stack+heap talk comes not from ignorance but rather from wisdom.

  • @Jon Skeet. I agree with you; maybe in that case the copying doesn't happen on the stack and it "maybe" happens in the heap. I don't know, and as you said, I shouldn't care.

    But still, why is it so "important" to distinguish between the implementation detail and the specification when the implementation is overwhelmingly what matters? This is, to me, more a theory check than a reality check. In that regard, you guys seem to be pissed off at something that's not that "important" after all.

  • I disagree with some of what Eric has said, and this discussion is getting pretty muddled, so here is my attempt to clear it up.

    @[ICR] You said "there is nothing that necessitates the stack". But this is false. As long as a language has subroutines it has to have the concept of a call stack. Exotic hardware and such vague ideas are not needed either. The runtime will still have to use a stack.

    No it doesn't. A call stack is an implementation mechanism for lazy compiler writers like me, not a necessity. First off, let's not conflate the call stack with the storage for activation frames; there is no requirement whatsoever that they be the same data structure, and on some hardwares, they are different data structures. (I consider it unfortunate that in the x86 architecture they are the same data structure. If return addresses were not stored on the same stack as local variables then all those stack smashing attacks would be for naught.) Let's consider for now just the call stack function of the stack. I assure you that if I really wanted to write a language that has subroutines and did not use a stack *at all*, I could. I could transform the program into Continuation Passing Style. Since a CPS program has *no returns* there is no need for a call stack. You're never coming back. - Eric
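
    (A tiny illustrative sketch of the CPS idea in C#: every method passes its result to a continuation instead of returning it.)

        using System;

        class CpsDemo
        {
            // No method here ever returns a result; each passes its result
            // forward to a continuation instead. Since nothing ever returns,
            // nothing needs a call stack entry to come back to.
            static void Add(int x, int y, Action<int> k) => k(x + y);
            static void Square(int n, Action<int> k) => k(n * n);

            static void Main()
            {
                // Computes (2 + 3) squared entirely in continuation style.
                Add(2, 3, sum => Square(sum, result => Console.WriteLine(result)));
            }
        }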

    How the stack is implemented doesn't really matter. It could be a contiguous region of memory and a stack pointer, or perhaps a linked list of stack frames stored in the heap. The hardware might help to manage the stack or it might not. The important thing is that each stack frame has a lifetime equal to what Eric calls the "activation period" of a method execution. (This is precisely the reason why it has to be a stack. Method activations have LIFO behaviour.)

    Again, the data structure does not *need* to be a stack in order to get this LIFO behaviour. Rather, because method activations are conceptually LIFO, it is *convenient* to use a LIFO data structure like a stack. Again, the use of the stack for storing activation frames is a convenience, not a requirement. But more generally, you're making my point for me: the details of how the temporary store is implemented are implementation-defined. It need not be "the stack" - the one million byte pre-reserved per-thread data structure. The right way to reason about it is to think of the lifetime; don't think of it as "the stack", think of it as "the storage for short-lived data". When you say it that way, the statement becomes trivially false: "value types always go in the storage area for short-lived data". No, obviously they don't, because some of those values are going to be long-lived. - Eric

    Eric mentions registers but unlike the stack they really are an implementation detail. We could very well have a machine without registers (hold on, we have one already, it's called the CLR VM). Yes, registers are sometimes used to hold contents of locals. But as soon as you call another subroutine they have to be saved to the stack anyway.

    You keep on telling me what I *need* to do, but no, I don't need to do that. Again, I could generate the code for a method as a CPS transformation. Since the method call is never going to *return* then clearly I do not need to store the register values before the continuation is invoked because those register values are *never* going to be used again! Why store 'em if you ain't gonna read 'em? That's the awesome thing about CPS is that all these problems about keeping track of where you were and what you were doing simply go away. - Eric

    There IS a requirement that locals are stored on a stack. This is implicit in the concept of a subroutine and its "activation period". Closures do capture local variables. Conceptually, they save the current stack frame for the lifetime of the closure. (There are several ways to actually implement this.)

    No, there is not any such requirement. The idea that closures "save the current stack frame" is again an implementation detail that assumes that activations are stack frames by definition. Stack frames are just an implementation detail; there need not be any stack whatsoever, so why should there be "stack frames" if there's no stack in the first place? You seem to have this idea that just because all the implementations of C you've ever seen define a "frame" on the "stack" that that's how it's got to be. I'm here to tell you it isn't so. There are many interesting ways to write a compiler for a programming language and not all of them involve anything that resembles a stack frame for an activation. - Eric

    Eric is right that one shouldn't mix storage locations and types. However, talking about long-lived storage locations vs short-lived ones is terribly unhelpful. As he says, long-lived locations are always heap locations and short-lived locations are stack locations (the CLR doesn't have registers). Why not call them that?

    Because that is confusing an implementation detail (where the operating system puts the storage) with a semantic requirement of the language (that a particular storage have a particular lifetime.) - Eric

    It is true that "references and instances of value types are essentially the same thing". And references and instances of reference type are two different things. Perhaps this should be emphasised more, together with the dereference operator (".").

    Next Eric says "storage locations of instances of reference types are always treated as though they are long-lived". Another way of saying that is "instances of reference types are always stored in heap-locations". Surely that is what people mean when they say "reference types are allocated on the heap"?

    Sure. But they are not *required* to be. Again, this is an *economic* decision rather than a *requirement*. We could do lifetime analysis on instances of reference types and statically determine when they don't survive the method activation, and allocate them on the stack. We don't because the payoff of doing so is insufficiently high. - Eric

    We're now half-way there. Array elements and fields live inside arrays and classes so it is clear then that they are heap-allocated too. Closures are a special case (as I've said, conceptually they save the current stack frame).

    Then Eric says "it is frequently the case that local variables and temporary values can be shown via compile-time analysis to be unused after the activation period ends". Local variables can only be used after the activation period ends if they've been captured by a closure. But closures just save the current stack frame so local variables are always stored on the stack (conceptually). Since local variables only hold references or value types we can say that "local variables of value type are stack allocated".

    Now you're losing me. If a "stack frame", assuming such a thing exists, is "saved" then in what possible sense does it form a "stack"? It's not going to be popped in LIFO order anymore. The data structure is not logically a stack because it is not LIFO, and the implementation doesn't use "the stack", and yet you are saying it is *less* confusing to call that usage "the stack"? Less confusing to who? It's way more confusing to me, and I'm the guy who has to write the compiler! If you're going to talk about what goes on the stack and what goes on the heap, then be accurate about it. Saying "well, when I say "the stack" I don't mean "the stack", I mean the thing where we make a copy of some stack frame onto the heap so that it no longer has stack semantics" is confusing. - Eric

    To sum up, when someone says "value types are stored on the stack and reference types are stored on the heap" they usually mean "local variables of value type contain values directly and are stored on the stack whereas local variables of reference type contain a reference to instances of reference type which are stored on the heap".

    If that's what they mean then they should say that, instead of saying "value types are always stored on the stack". - Eric

    It is wrong to say that we don't need to reason about storage. A programmer has to be able to reason about the lifetime of objects he creates. He has to know that the lifetime of local variables follows that of the activation period of methods. And he has to understand that the lifetime of instances of reference types is indeterminate (hence the need for IDisposable). He has to know that when he needs more control over object lifetime he can use object pools etc.

    Now you are re-stating my point: yes, programmers need to reason about lifetime, not about storage. That's the whole point of this article: don't ask "where is this value stored?" - that's an implementation detail of the runtime - rather, ask "what is the lifetime of this value?" - that's a semantic requirement of the language that you can reason about independently of any knowledge of implementation details. - Eric

  • @petebu You are reasoning with a C++ background, IMO, where you need to know more about how things really work at a low level. With C# and the CLR, as you say, a programmer has to be able to reason about the lifetime of objects. That is completely true. But that does not mean that he needs to know how the heap or stack are involved. Like Eric said, that is an implementation detail.

    No one is saying that you shouldn't know when or when not to use IDisposable, when to use value types or reference types, etc. What isn't so clear to me and to others is whether this knowledge necessarily needs to be tied to how the CLR manages the stack, heap, registers, etc. It's basically the same paradigm as general OOP. You need to know how to use an object and what methods are better for different purposes, but you don't actually need to know how the object works internally.

  • @DaRage: In order to reason about what's guaranteed, you need to be able to distinguish between what's specified and what's implemented.

    Suppose the C# team decided to create a reference type for each method, to hold the local variables for that method. The only thing on the stack in each frame might be a reference to an instance of that type. Suddenly *all* local variables are on the heap... and many things you may have previously stated become false, because you mixed up implementation and specification. If you think that's a crazy idea, consider that it's pretty much exactly what happens for iterator blocks.
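
    Sketched out, that hypothetical transformation would look something like this (all names invented for illustration):

        class LocalsDemo
        {
            // Hypothetical: every local of M is hoisted into a reference type.
            class MLocals
            {
                public int x;
                public string s;
            }

            static void M()
            {
                var locals = new MLocals();  // the only thing in M's frame is
                                             // this reference; every "local"
                locals.x = 10;               // now lives on the heap
                locals.s = locals.x.ToString();
                System.Console.WriteLine(locals.s);
            }

            static void Main() => M();
        }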

    If you only ever reason in terms of what's guaranteed by the spec, then your arguments don't become invalid when the implementation changes in way which is backward-compatible in spec terms.

  • @Eric. About CPS. Of course you are right that you can have a separate stack for return addresses and local vars. But if you CPS transform the program, all you've done is moved the return address into the same structure (the activation frame) as the local vars. There is still a call stack hidden away there. The return is still there too (a jump to the return address). I still don't see how you can implement subroutines without it.

    But that is beside the point. Forget how it is implemented. A total beginner looking at his first program will see that when he calls a method, all the local vars disappear, and that when the subroutine returns, a) the local vars reappear with the same values as before and b) the computer knew where in the caller's code to return to. In other words, the computer saved the local vars and the return address. Now he notices that subroutine calls can be nested. So the computer must save a list of return addresses and local vars. Furthermore, that list is only accessed in LIFO order. That is, it is a stack. This behaviour exists in all languages with subroutines (with the exception of those where vars have dynamic scope by default, e.g. Emacs Lisp). That's what I mean by a stack (not necessarily a concrete data structure) - all programmers must have a notion somewhat like this, so it is not just an implementation detail. Perhaps I am wrong on this so it's interesting to hear what others think. One could refrain from calling it a stack and talk about the "LIFO order of method activation". Or perhaps say that local variables have "lexical scope and dynamic extent/lifetime", but I am not sure if that helps (one then has to explain the precise meaning of "dynamic").

    About registers. You said that "since the method call is never going to *return* then clearly I do not need to store the register values before the continuation is invoked because those register values are *never* going to be used again". When you invoke the continuation (that is, you jump to the return address - they are the same thing) you will have to restore the registers that were in use before you entered the subroutine. But the point I was trying to make is that registers are a bit of a red herring here. Your entire blog post could have been written without mentioning registers. The CLR doesn't have them, and there is no need to reach below the CLR to make your argument.

    About short/long-lived references. My main objection is to these two terms. "Short-lived" and "long-lived" relative to what? Is it measured in seconds, CPU cycles, method activation periods? The GC might run during the current method call - then the contents of a long-lived location might be freed before the local short-lived locations. The important thing about a stack location isn't that it is relatively short-lived compared to a heap location, but that its lifetime is a) deterministic and b) corresponds to method activations. For heap locations, the important idea is that (in a GCed language) the lifetime is indeterminate. If you don't like calling them stack and heap locations, you need a better name that conveys the lifetime. Perhaps "locations with dynamic lifetime" (these exist during the dynamic lifetime of a method activation) and "locations with indeterminate lifetime" (these exist until an indeterminate point in the future when the GC runs)?

    Btw, I agree with your *original* blog post that we should concentrate on what is observable not on how it is implemented. When someone asks about the difference between value types and reference types one of the key differences that doesn't get mentioned much is that a value type represents the set of values that you think it does whereas a reference type represents the set of values that you think it does union the set containing null. Once we forbid nulls and make heavy use of immutable objects then the observable distinction between value types and reference types begins to fade away. This happens in F# for instance (unfortunately nulls can sneak in from C# code).
