When Are You Required To Set Objects To Nothing?

A quick follow-up on my earlier entry on the semantics of Nothing in VBScript. I see code like this all the time:

Function FrobTheBlob()
  Dim Frobber, Blob
  Set Frobber = CreateObject("BitBucket.Frobnicator")
  Set Blob = CreateObject("BitBucket.Blobnicator")
  FrobTheBlob = Frobber.Frob(Blob)
  Set Frobber = Nothing
  Set Blob = Nothing
End Function

What's the deal with those last two assignments? Based on the number of times I've seen code that looks just like this, lots of people out there are labouring under the incorrect belief that you have to set objects to Nothing when you're done with them.

First off, let me be very clear: I'm going to criticize this programming practice, but that does NOT mean that you should change existing, working code that uses it! If it ain't broke, don't fix it.

The script engine will automatically clear those variables when they go out of scope, so clearing them the statement before they go out of scope seems to be pointless. It's not bad -- clearly the code works, which is the important thing -- but it needlessly clutters up the program with meaning-free statements. As I've ranted before, in a good program every statement has a meaning.

When I see code like this, the first thing I think is cargo cult programmer. Someone was told that the magic invocation that keeps the alligators away is to put a banana in your ear and then set objects to Nothing when you're done with them. They do, and hey, it works! No alligators!

Where the heck did this thing come from? I mean, you don't see people running around setting strings to "" or integers back to zero. You never see it in JScript. You only ever see this pattern with objects in VB and VBScript.

A few possible explanations immediately come to mind.

Explanation #1: (Bogus) Perhaps some earlier version of VB required this. People would get into the habit out of necessity, and when it became no longer necessary, it's hard to break the habit. Many developers learn by reading old code, so those people would pick up on the old practice.

This explanation is bogus. To my knowledge there has never been any version of VB that required the user to explicitly deallocate all objects right before the variables holding them went out of scope. I'm aware that there are plenty of people on the Internet who will tell you that the reason they set their objects to Nothing is because the VB6 garbage collector is broken. I do not believe them. If you've got a repro that shows that the GC is broken, I'd love to see it.

Explanation #2: (Bogus) Circular references are not cleaned up by the VB6 garbage collector. You've got to write code to clean them up, and typically that is done by setting properties to Nothing before the objects go out of scope.

Suppose you find yourself in this unfortunate situation:

Sub BlorbTheGlorb()
  Dim Blorb, Glorb
  Set Blorb = CreateObject("BitBucket.Blorb")
  Set Glorb = CreateObject("BitBucket.Glorb")
  Set Blorb.Glorber = Glorb
  Set Glorb.Blorber = Blorb
  '
  ' Do more stuff here
  '
End Sub

and now when the procedure finishes up, those object references are going to leak because they are circular.

But you can't break the circular references by clearing the variables; you have to clear the properties. You have to say

Set Blorb.Glorber = Nothing
Set Glorb.Blorber = Nothing

and not

Set Blorb = Nothing
Set Glorb = Nothing

Perhaps the myth started when someone misunderstood "you have to set the properties to Nothing" and took it to mean "variables" instead. Then, as in my first explanation, the misinformation spread through copying code without fully understanding it.

I have a hard time believing this explanation either. Because they are error-prone, most people avoid circular references altogether. Could it really be that enough people ran into circular-reference problems, and all of them solved the problem incorrectly, to reach critical mass? As Tommy says on Car Talk: Booooooooooooogus!

Explanation #3: It's a good idea to throw away expensive resources early. Perhaps people overgeneralized this rule? Consider this routine:

Sub FrobTheFile()
  Dim Frobber
  Set Frobber = CreateObject("BitBucket.Frobber")
  Frobber.File = "c:\blah.database" ' locks the file for exclusive r/w access
  '
  ' Do stuff here
  '
  Set Frobber = Nothing ' final release on Frobber unlocks the file
  '
  ' Do more stuff here
  '
End Sub

Here we've got a lock on a resource that someone else might want to acquire, so it's polite to throw it away as soon as you're done with it. In this case it makes sense to explicitly clear the variable in order to release the object early, as we're not going to get to the end of its scope for a while. This is a particularly good idea when you're talking about global variables, which are not cleaned up until the program ends.

Another -- perhaps better -- design would be to also have a "close" method on the object that throws away resources if you need to do so explicitly.  This also has the nice result that the close method can take down circular references.
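
Concretely, here's a minimal sketch of that design, assuming the hypothetical BitBucket.Frobber exposed such a Close method (it doesn't in the example above; the method name is purely illustrative):

Sub FrobTheFile()
  Dim Frobber
  Set Frobber = CreateObject("BitBucket.Frobber")
  Frobber.File = "c:\blah.database" ' locks the file for exclusive r/w access
  '
  ' Do stuff here
  '
  Frobber.Close ' hypothetical Close: unlocks the file (and could break any internal circular refs)
  '
  ' Do more stuff here; Frobber itself is released normally when it goes out of scope
  '
End Sub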

I can see how overapplication of this good design principle would lead to this programming practice. It's easier to remember "always set every object to Nothing when you are done with it" than "always set expensive objects to Nothing when you are done with them if you are done with them well before they go out of scope". The first is a hard-and-fast rule; the second has two judgment calls in it.

I'm still not convinced that this is the whole story though.

Explanation #4: I originally thought when I started writing this entry that there was no difference between clearing variables yourself before they go out of scope, and letting the scope finalizer do it for you.  There is a difference though, that I hadn't considered. Consider our example before:

Sub BlorbTheGlorb()
  Dim Blorb, Glorb
  Set Blorb = CreateObject("BitBucket.Blorb")
  Set Glorb = CreateObject("BitBucket.Glorb")

When the sub ends, are these the same?

  Set Blorb = Nothing
  Set Glorb = Nothing
End Sub

versus

  Set Glorb = Nothing
  Set Blorb = Nothing
End Sub

The garbage collector is going to pick one of them, and which one, we don't know. If these two objects have some complex interaction, and furthermore, one of the objects has a bug whereby it must be shut down before the other, then the scope finalizer might pick the wrong one! 

(ASIDE: In C++, the order in which locals are destroyed is well defined, but it is still possible to make serious mistakes, particularly with the bane of my existence, smart pointers. See Raymond's blog for an example.)

The only way to work around the bug is to explicitly clean up the objects in the right order before they go out of scope.

And indeed, there were widely-used ADO objects that had this kind of bug.   Mystery solved.

I'm pretty much convinced that this is the origin of this programming practice.  Between ADO objects holding onto expensive recordsets (and therefore encouraging early clears), plus shutdown sequence bugs, lots of ADO code with this pattern got written.  Once enough code with a particular pattern gets written, it passes into folklore that this is what you're always supposed to do, even in situations that have absolutely nothing to do with the original bug.
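
For what it's worth, the folklore pattern itself usually looked something like this sketch (classic ASP/VBScript; the connection string and query are elided, and whether the explicit ordering ever actually mattered depended on the buggy object versions in question):

Dim Conn, RS
Set Conn = CreateObject("ADODB.Connection")
Conn.Open "..."                      ' connection string elided
Set RS = Conn.Execute("SELECT ...")  ' query elided
'
' Do stuff with the recordset here
'
RS.Close            ' release the expensive recordset as soon as you're done with it
Set RS = Nothing    ' child object cleared before its parent...
Conn.Close
Set Conn = Nothing  ' ...then the connection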

I see this all over the place. Here's some sample documentation that I copied off the internet:

You can save an instance of a persistent object using its sys_Save method. Note that you must call sys_Close on an object when you are through using it. This closes the object on the server. In addition you should set patient to Nothing to close the object in Visual Basic.

Dim status As String
patient.sys_Save
patient.sys_Close
Set patient = Nothing

Notice that calling the close method is a "must" but setting the variable to Nothing is a "should". Set your locals to Nothing: it's a moral imperative! If you don't, the terrorists have already won. (One also wonders what the string declaration there is for. It gets worse -- I've omitted the part of the documentation where they incorrectly state what the rules are for using parentheses. The page I got this from is a mass of mis-statements -- calling all of them out would take us very far off topic indeed.)

I would imagine that there are lots of these in the MSDN documentation as well.

What is truly strange to me though is how tenacious this coding practice is. OK, so some objects are buggy, and sometimes you can work around a bug by writing some code which would otherwise be unnecessary. Is the logical conclusion "always write the unnecessary code, just in case some bug happens in the future?"  Some people call this "defensive coding".  I call it "massive overgeneralization".

True story: I found a performance bug in the Whidbey CLR jitter the other day. There's a bizarre situation in which a particular mathematical calculation interacts with a bug in the jitter that causes the jitter to run really slowly on a particular method. It's screwing up our performance numbers quite badly.  If I change one of the constants in the calculation to a variable, the problem goes away, because we no longer hit the buggy code path in the jitter.

They'll fix the bug before we ship, but consider a hypothetical. Suppose we hadn't found the bug until after we'd shipped Whidbey. Suppose I needed to change my code so that it runs faster on the buggy Whidbey CLR. What's the right thing to do?

Solution One:  Change the constant to a variable in the affected method.  Put a comment as long as your arm in the code explaining to future maintenance programmers what the bug is, what versions of the framework causes the problem, how the workaround works, who implemented the workaround, and how to do regression testing should the underlying bug be fixed in future versions of the framework.  Realize that there might be similar problems elsewhere, and be on the lookout for performance anomalies.

Solution Two: Change all constants to variables. And from now on, program defensively; never use constants again -- because there might someday be a future version of the framework that has a similar bug. Certainly don't put any comments in the code. Make sure that no maintenance programmers can possibly tell the necessary, by-design uses of variables from the unnecessary, pointless uses. Don't look for more problems; assume that your heuristic solution of never using constants again is sufficient to prevent not only this bug, but future bugs that don't even exist yet. Tell other people that “constants are slower than variables“, without any context.  And if anyone questions why that is, tell them that you've been programming longer than they have, so you know best.  Maybe throw in a little “Microsoft suxors, open source rulez!” rhetoric while you're at it -- that stuff never gets old.

Perhaps I digress. I'd like to take this opportunity to recommend the first solution over the second.

This is analogous to what I was talking about the other day in my posting on Comment Rot. If you hide the important comments amongst hundreds of trivial comments, the program gets harder to understand.  Same thing here -- sometimes, it is necessary to write bizarre, seemingly pointless code in order to work around a bug in another object.  That's a clear case of the purpose of the code being impossible to deduce from the syntax, so call it out!  Don't hide it amongst a thousand instances of identical really, truly pointless code.

  • The good reason? ADO.

    Lots of the ADO samples in MSDN contain the Set <object> = Nothing, and for good reason: you don't want to keep database connections open (connection pooling aside) or tables locked.

    Of course there's always non-deterministic garbage collection to throw into the mix. Your object may fall out of scope, but who knows if it has been disposed yet. Not applicable to good old ASP, but specifically calling dispose methods still applies in .NET. Or does it? <g>
  • Not as bad as the Dim x as new xyz problems you get in VB. The only circumstances in which I've ever set stuff to Nothing rather than letting the runtime do it is when destroying large collections. If you do it yourself then you can put up a progress bar; otherwise it looks like the world's stopped (and if it's started swapping to disk then life is really bad).
    One possible reason is cutting and pasting demo code which used to use globals, perhaps?
  • In Access, it is 'convention' to do a .Close on all DAO.Recordset objects and DAO.Database objects, and set them = Nothing, like:

    Dim rs as DAO.Recordset
    Dim db as DAO.Database

    Set db = CurrentDB()
    Set rs = db.OpenRecordset("SELECT * FROM etc")
    'etc
    rs.Close
    db.Close
    Set rs = Nothing
    Set db = Nothing


    This is 'convention' because apparently there are/were bugs in earlier versions of Access (up to and including Access 97) where the Access window wouldn't close when you tried to close the application, no matter what you did. Ensuring your code always did the above fixed that problem.


    Here's a Google thread where Michael Kaplan [MS] hints at the bug, if not explicitly so:

    http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&safe=active&selm=uLgaFdnX%23GA.226%40upnetnews05

    Also, here's something written by Michael Kaplan; though it confirms the bug in a straightforward manner, it is debunked three lines later:

    http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&safe=active&selm=73i6a9%24158%40bgtnsc01.worldnet.att.net


    So, in summary:
    There are, or at least were, at one time, bugs that could be eliminated by ensuring we followed the 'convention'. Whether or not that is still the case, I don't know. Maybe get in touch with him; he's still around. Now I'm even more confused than when I began. But I'll still .Close/Set = Nothing everything until I hear different.
  • I don't think it's a myth because that's how we ended our query to a SQL 6.5 db from ASP 2.0. If we didn't, the db server would lock up, plain and simple.

    These days? Probably unnecessary, but if the code works, who am I to complain?
  • Matt Curland has made several posts about this on various VB newsgroups, confirming that setting local variables to Nothing immediately before they go out of scope isn't required, although admittedly there may have been good reasons for this in older versions of VB. I think this was Eric's point: there are of course times when you want to explicitly set a local object to Nothing well before it goes out of scope, but it's not necessary to do it immediately before it goes out of scope.

    Some of Matt's arguments were:

    1. Why not also call SysFreeString and SafeArrayDestroy on your string and array variables if you don't trust VB to clean up after you?
    2. With statements generate hidden local variables, but you can't access them. If you use a With statement with an object, how are you going to set the hidden object to Nothing? (See the sketch after this list.)
    3. It's inefficient because VB runs the same code again as part of its normal teardown. In fact, explicitly setting objects to Nothing results in a vbaCastObj call whereas the normal teardown results in faster vbaFreeObj calls.
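
    For example, here's a minimal sketch of point 2, borrowing the article's hypothetical Frobber object; the point is that the hidden local created by With simply has no name you can refer to:

    With CreateObject("BitBucket.Frobber")  ' the object reference lives in a hidden local you cannot name
        .File = "c:\blah.database"          ' you can use the object through the With block...
    End With                                ' ...but you can never Set that hidden local to Nothing yourself;
                                            ' it is released here, just like any ordinary local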

    A collection of Matt's posts on the subject can be found here:

    http://tinyurl.com/3ajhk
  • I think the origin probably came about because most ASP pages were originally written without any functions/subs whatsoever. Therefore you would nearly always have statements following the last known use for an object ref.

    Probably that and getting rid of expensive objects like DB connections, recordsets or COM+ objects early meant that the advice became prominent.

    I've worked on sites where (despite my protestations) it was "best practice" to do this - and to write if statements like
    if (foo) then
    and not to have an Option Explicit statement at all. These sites weren't small sites - they were 100s of millions of hits per day sites that your granny uses.
  • See Coding Techniques and Programming Practices

    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnvsgen/html/cfr.asp

    See 25+ ASP Tips to Improve Performance and Style

    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnasp/html/asptips.asp

    ... et al., seemingly ad infinitum.

    The point is that object references should generally be released as soon as they are no longer needed, and that by getting into this practice the programmer will be less likely to omit such an action where it really is needed. By doing this explicitly I suppose the theory was to make programmers be more aware of the process in general.

    Trying to explain the semantics of object reference handling to people can be difficult because so many of them actually are cargo cult programmers. You don't want to see some of the scripts hacked together by box admins I've had to unravel, and Joe ASP can be just as scary a coder. Even funnier is trying to explain to a Javascript guy the difference between a Function and a Sub, and the syntax of calls in VB/VBScript. It doesn't help that the Windows Script CHM is so full of erroneous and/or misleading examples of VBScript calls.

    The poor ASP coder is almost doomed from the start. It takes a whole lot of discipline to avoid some nasty spaghetti in ASP pages. I'm not sure they have a clue when an object reference variable goes out of scope. I'm not even sure anybody has defined "page scope" to these folks very clearly in the first place.

    I suspect that in many cases people had trouble with data access objects because they would close a recordset but leave the underlying connection open.

    Then there is the issue of connection pool management, but now I'm just being redundant. I think that was one of the core issues behind early release of object references in ASP pages and other pooled-connection scenarios.

    Obviously the need to do explicit releases is situational. Out of scope is out of scope. I just think we're looking at a combination of a lack of understanding in some cases and an attempt to practice a "good habit" in others.

    It's all a little reminiscent of the semicolon rules in C, C++, Java, and Javascript. The latter seems to be the most bizarre. Reading the ECMA spec it almost looks like the rule is "you need one when you need one except they're optional lots of places and times - and the script engine will take a stab when it feels like inserting one."

    Oh, for the clarity of Algol 60 again!
  • Sorry to step on you RichB, my think time is nearly as long as my talk time I guess.

    Why Option Explicit was not forced by default in ASP I'll never understand. You're right, I too know of ASP applications getting thousands of hits daily that fall over if you insert Option Explicit at the head of each page. Typically they're full of dead code too, not only server-side but in their client-side Javascript.

    I just LOVE getting hauled in to unravel problems when they try to make small changes to these things and the house of cards comes down. Why companies don't hire outsiders to do code audits is beyond me, they'd save a bundle by stopping these things before their mission-critical application is down.

    ASP.Net seems to have only made things worse, though I admit I'm dealing with all new screwball stuff there.
  • Easy: it's people not understanding scope.

    When one does not understand scope, and sees/figures out that this is the right way to do things:

    Private X As Object

    Public Sub DoIt
    Set X = New Foo
    X.Bar
    X.Close
    Set X = Nothing
    End Sub

    This practice becomes "necessary" even in

    Public Sub DoIt
    Dim X As Object
    Set X = New Foo
    X.Bar
    X.Close
    Set X = Nothing
    End Sub

    Really. Try it. Next time you see this code, ask the author about scope.
  • Re: Matt Curland posts -- thanks for the links to those. Matt is The Man. One of these days I'll tell you guys about a conversation Matt and I once had about the differences between VBScript and VB's default property resolution Intellisense logic -- it is ARCANE to say the least.

    Re: ASP -- excellent point, and one that I had not considered. In ASP it is sometimes very difficult to know where you are and what scope you're in.

    However, just to clarify, my beef is specifically with people who clear object variables IMMEDIATELY before they go out of a local scope. It is a really good idea to clear variables that hold expensive resources as soon as you're done with them if you're going to go do other stuff before they fall out of scope.

    Re: Semicolon rules: I blogged about that already, see

    http://weblogs.asp.net/ericlippert/archive/2004/02/02/66334.aspx

  • Eric,

    I agree with you on this issue. Almost.

    In some older versions of ADO (I don't know if the problem has since been fixed, or if the real root cause was Q255986), if you raised an error before explicitly cleaning up local ADO recordsets (closing them and setting them to Nothing) you got:

    Method '~' of object '~' failed

    This meant you needed to:

    1. cache error details
    2. clean up recordsets (close and set to nothing)
    3. raise original error
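
    Sketched out (VB6-style, inside an error handler; the variable names are illustrative), that sequence might look like:

    Dim ErrNum As Long, ErrSrc As String, ErrDesc As String
    ErrNum = Err.Number: ErrSrc = Err.Source: ErrDesc = Err.Description   ' 1. cache the error details
    On Error Resume Next
    rs.Close                                                               ' 2. clean up the recordset
    Set rs = Nothing
    On Error GoTo 0
    Err.Raise ErrNum, ErrSrc, ErrDesc                                      ' 3. re-raise the original error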

    Search Google Groups (Advanced Search) using Message ID uRi4ddceBHA.186@cppssbbsa01.microsoft.com (see particularly the 4th point).

    Seeya
    Matthew
  • Setting objects to Nothing is a convention required by the company I work at, despite the fact that many advanced programmers here know it's not required. In fact, automatic cleanup is one of the better reasons to use VB at all, and having a convention to do it yourself makes the code a lot worse. I think our reasoning is that there are times where you have to set objects to nothing to deal with circular references, and that carried over into all uses of objects.

    I almost started laughing when I saw that your "random example" was from Intersystems, as we use Intersystems Cache for our back end database. However, we don't use any of the VB-integration features that that web page refers to.
  • Just to chime in on the ASP side of things, back in the day when www.learnasp.com was the (virtually) definitive resource, the information distributed to the ASP coding masses was thus:
    http://www.learnasp.com/learn/nothing.asp
    (see also the links on that page)

    I have personally always advocated explicitly closing (where applicable) and destroying objects because:

    1) It was never made clear (in official Microsoft documentation) how effective/reliable the garbage collection was (anecdotal evidence at the time indicated that it wasn't too hot, but ADO Connection and Recordset objects probably muddied the waters)

    2) It was never clear whether or not the garbage collection automatically calls the .Close method when deallocating an object that has such a method.

    3) Explicitly closing & destroying objects *every* time (even when not strictly necessary, such as when it's about to go out of scope) makes for a more consistent coding style.

    4) I thought that if there was a performance penalty to pay, it would be insignificant in the grand scheme of things, especially if it made apps more stable (i.e. a small price to pay)

    5) I wrote myself a neatly encapsulated sub to do the work for me with minimum effort (see http://marcustucker.com/blogold/200403archive001.asp#1078920890001 ; a sketch of the idea follows this list)

    6) It's hard work explaining how scopes work to newbies and so it's easier just to tell them to follow the rule (which I guess makes me guilty of encouraging cargo cult programming)
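
    A helper along the lines of point 5 might be sketched like this (the name DestroyObject is illustrative; the commenter's actual sub is at the link above):

    Sub DestroyObject(ByRef obj)
        On Error Resume Next
        obj.Close            ' call Close if the object happens to expose one; swallow the error if not
        On Error GoTo 0
        Set obj = Nothing    ' arguments are ByRef, so this clears the caller's variable too
    End Sub

    ' Usage (no parentheses around the argument, so it stays ByRef):
    ' DestroyObject rs
    ' DestroyObject cn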


    There are probably more reasons, but I can't think of them now! In any case, it's nice to have you (Eric and others) set the record straight for those that *really* want to know what's going on...
  • Hmmm. I think there is a bit of misunderstanding here about what setting something to Nothing actually means, whether you do it yourself or the runtime does it for you.

    Eric, you're right, you SHOULDN'T need to do this other than for the valid reasons stated (and Matt C. is the definitive resource on this) -- but this is only true if everyone sticks to the COM spec, i.e. implements their components with no bugs.

    All you (or the runtime) are doing is asking the allocated object to release itself -- IUnknown.Release. It doesn't have to. It is in control of its own lifetime -- not you. Therefore any object is free to keep itself around as long as it likes, because the object controls its own reference counting.

    What VB(A) does is say: OK, I KNOW I've got a valid object pointer here, and I know that because the 'new' should have caused a ref count to increment (either by doing a QI -- Set mything = New thing -- or Set mything = myotherthing) and I got a non-null pointer back (and a 0 HRESULT). What I can't guarantee is that when I call Release (= Nothing) the object will go away.

    I believe the reason that *many* people adopt this style in VB shops is that the behaviour can be different -- i.e. you can force some components to free objects that otherwise wouldn't be released by the runtime, and this is based on order.

    COM is based on contractual adherence to the COM spec. However, there is no runtime enforcement of this spec (which is why .NET is so cool). This opens up the possibility that I can write a component that you call from VBA that will do something semantically different if you call Set x = Nothing explicitly. Wow, you say, how is this? We're both calling the same method (Release).

    One reason is the clean-up order. You always free the top-level object (you don't even know about any others created by it). This should cascade down, freeing everything else FIRST, i.e. the last object in the chain to be created is the first to disappear (you can free an object with extant refs, as its ref count is always > 0).

    But what if you don't stick to this rule? Well, maybe calling Release twice will make a difference if the component implementer screwed up their ref counting. Maybe calling Set x = Nothing yourself will affect the cleanup order and exhibit different semantics from letting the runtime do it. For example:

    Dim obj1 as object
    Dim obj2 as object

    set obj1 = new class1
    set obj2 = new class1

    Which one does the runtime release first? What about adding:

    Dim obj3 as object
    obj2.dostuff(obj3)

    What's the order now? Because YOU can control the clean-up order (to some extent), the result MAY be different from what you get if the runtime does it.

    I have seen the 'best practice' of setting everything to Nothing a hundred times. I think it's because there are a lot of component libraries out there with bugs in them that don't implement the spec correctly. So, while the runtime should release all the refs (and the spec effectively says the order in which this happens is irrelevant), in practice the order might not be irrelevant.

    I remember data bound classes being an easy way to cause this kind of problem (and basically leak) if you didn't clear your own references explicitly.

    I don't believe this is the only reason, anyone think of some more?