While talking to someone yesterday, I witnessed yet another instance of "value" versus "reference" confusion.

What do I mean? This is particularly confusing in .NET, because of some arguably unfortunate jargon choices on Microsoft's part.  Commonly in .NET, the terms "value" and "reference" come up in these contexts:
(1) Discerning between "value types" and "reference types"
(2) Discerning between "pass by value" and "pass by reference"
(3) Referring to an object's "value" or "reference".

I don't blame anyone for getting confused by these terms.  When someone discusses .NET with me and mentions "value" or "reference", I often end up asking them to disambiguate which of these contexts they mean.  I'll include examples in C#, VB, and J# in this post, to help you connect what I'm discussing with code you've probably seen.

1. Referring to "value types" versus "reference types"

This is probably the most common of the three distinctions you'll actually need to make, and it ties into other .NET concepts such as nullity, boxing, heap vs. stack allocation, virtual methods, and tons of other things.  For Java users, the term "primitive" is roughly synonymous with .NET's "value type".  What this means is that a value type is what the CLR/CPU actually end up dealing with when it comes down to performing computations and comparisons.

Users familiar with how a program works are probably also familiar with the term program "stack" versus "heap".  Unfamiliar users need only understand some basic distinctions between the two:
    (1) They're both sections of memory in a program, but are allocated and treated in different manners.
    (2) The program "stack" is a memory section that exists for the currently executing part of the program, and typically holds reserved space for local variables.
    (3) The program "heap" is a part that contains reserved memory that might persist past the current function or thread, possibly for the entire lifetime of the program.  In some programs, there are other parts of memory that persist throughout the program, but this is the one that you'll be dealing with the most in .NET.
    (4) Memory reserved in the program stack, for a particular function, is reclaimed / reused automatically when the function returns.  If function A calls function B before returning, function A's stack memory will persist all the way through until function A finishes, including the portion of the time that function B is being executed.
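As a rough illustration of points (2)-(4), here's a sketch in Java, whose primitives and object references behave much like .NET's in this respect.  The class and method names here are mine, invented for the example:

```java
public class StackDemo {
    // Each call to add() gets its own stack frame holding the locals
    // a, b, and sum. That frame is reclaimed as soon as add() returns.
    static int add(int a, int b) {
        int sum = a + b;   // lives in add()'s stack frame
        return sum;        // the *value* is copied back to the caller
    }

    static StringBuilder makeBuffer() {
        StringBuilder local = new StringBuilder("hi");
        return local;      // only the reference (address) is returned;
                           // the object itself stays on the heap
    }

    public static void main(String[] args) {
        int x = add(2, 3);          // x holds its own copy of the result
        // By contrast, an object allocated with 'new' lives on the heap
        // and survives after the method that created it returns.
        StringBuilder sb = makeBuffer();
        sb.append("!");             // still valid: the heap object persists
        System.out.println(x + " " + sb);
    }
}
```

The point is the asymmetry: `add`'s locals are gone the moment it returns, while the object `makeBuffer` created outlives the call that created it.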

Now "objects" and "classes" are all about two things: containing things in memory, and inheritance.  The characteristics of these memory sections are extremely important in deciding how an object needs to be treated.  Here are the differences, then, between "value types" and "reference types".

Under the hood, "reference types" are always allocated on the program heap.  When the CLR decides to make one, it first chunks out the appropriate amount of memory on the heap before running the constructor.  Whoever called for the creation of an instance of this reference type will then get back the "pointer", or the address of the object's memory in the heap.  Everywhere that consumes this type will deal exclusively with addresses to heap memory.  Most of the classes/types in .NET are reference types.  You declare a new reference type in C#, VB.NET, and J# using the "class" keyword.  If I wanted to define a reference type called "MyReferenceType", I'd use the following syntaxes:

   C#:

      class MyReferenceType { }

   VB.NET:

      Class MyReferenceType

      End Class

   J#:

      class MyReferenceType { }

The above three examples do / compile-to the exact same thing.  Now, value types differ in the way that you probably guessed... they're allocated on the stack.  That means that when a function executes, all of the value types it uses are known beforehand, and are laid out before execution actually begins in the function.  It also means that the memory allocated for them must go away when the function finishes (returns).  However, there are a couple of things that must be done in order to keep the CLR a safe/managed environment.

If you're using a value type in your function, how can you return it to your caller if the memory allocated for it goes away / gets reused? Obviously that wouldn't be safe.  The way this is resolved is that functions can only return a copy of that memory.  Likewise, when you pass a value type to a function, by default (I'll explain later in this post), it's copied.  This is a simpler allocation and reclamation process than for heap objects (which are reclaimed by the garbage collector), so it's often faster.  However, it's only faster for small objects - otherwise all of the copying catches up with you.  In fact, your basic types are all value types - for example, "int" in C#/J# and VB's "Integer" are value types.  You can declare your own custom value type with the following syntax:
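The copy-versus-share distinction shows up the same way in Java, where primitives play the role of value types.  A sketch, with class and method names of my own invention:

```java
public class CopyDemo {
    // A tiny mutable class standing in for a reference type.
    static class Counter {
        int count;
    }

    // The primitive parameter 'n' is a copy; changing it here
    // has no effect on the caller's variable.
    static void bumpValue(int n) {
        n = n + 1;
    }

    // The parameter 'c' is itself a copy of the *reference*, but both
    // copies point at the same heap object, so the mutation is visible
    // to the caller.
    static void bumpReference(Counter c) {
        c.count = c.count + 1;
    }

    public static void main(String[] args) {
        int i = 10;
        bumpValue(i);
        System.out.println(i);              // still 10: the value was copied

        Counter counter = new Counter();
        bumpReference(counter);
        System.out.println(counter.count);  // 1: the shared heap object changed
    }
}
```

Only the copy of `i` inside `bumpValue` was incremented; the caller's `i` never changes.  The `Counter`, being a heap object accessed through a reference, is visibly mutated.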

   C#:

      struct MyValueType {
      }

   VB.NET: (requires all value types to have at least one field)

      Structure MyValueType

         Private SampleField As Integer

      End Structure

   J#: (requires all value types to explicitly extend System.ValueType)

      import System.*;

      class MyValueType extends ValueType {
      }

The VS 2005 C# IDE provides the additional feature that you can invoke the "Go to Definition" command on any of the built-in types, as if they were defined by your source code.  When you do, you'll see a file that's generated on the fly, in a C#-like syntax.  For example, "Go to Definition" on "int", in C# for VS 2005, brings me to:

   namespace System {

      public struct Int32 : IComparable, IFormattable, IConvertible,
                            IComparable<int>, IEquatable<int> {
         // ...
      }
   }

This lets you see that "int" is really an alias for "System.Int32" - a value type, since it's declared with the struct syntax.

So, this is just a starter on the difference between value types and reference types, but I'm taking a break.  Some more things that I'll cover in my next post, before finishing this point are:
   (1) Enums
   (2) Type unification, boxing/unboxing
   (3) Interfaces and inheritance for value types