• The Old New Thing

    When should your destructor be virtual?


    When should your C++ object's destructor be virtual?

    First of all, what does it mean to have a virtual destructor?

    Well, what does it mean to have a virtual method?

    If a method is virtual, then calling the method on an object always invokes the method as implemented by the most heavily derived class. If the method is not virtual, then the call invokes the implementation corresponding to the compile-time type of the object pointer.

    For example, consider this:

    class Sample {
    public:
     void f();          // not virtual
     virtual void vf(); // virtual
    };
    
    class Derived : public Sample {
    public:
     void f();
     void vf();
    };
    
    void function()
    {
     Derived d;
     Sample* p = &d;
     p->f();
     p->vf();
    }
    

    The call to p->f() will result in a call to Sample::f because p is a pointer to a Sample. The actual object is of type Derived, but the pointer is merely a pointer to a Sample. The pointer type is used because f is not virtual.

    On the other hand, the call to p->vf() will result in a call to Derived::vf, the implementation in the most heavily derived type, because vf is virtual.

    Okay, you knew that.

    Virtual destructors work exactly the same way. It's just that you rarely invoke the destructor explicitly. Rather, it's invoked when an automatic object goes out of scope or when you delete the object.

    void function()
    {
     Sample* p = new Derived;
     delete p;
    }
    

    Since Sample does not have a virtual destructor, the delete p invokes the destructor of the class of the pointer (Sample::~Sample()), rather than the destructor of the most derived type (Derived::~Derived()). And as you can see, this is the wrong thing to do in the above scenario.

    Armed with this information, you can now answer the question.

    A class must have a virtual destructor if it meets both of the following criteria:

    • You do a delete p.
    • It is possible that p actually points to a derived class.

    Some people say that you need a virtual destructor if and only if you have a virtual method. This is wrong in both directions.

    Example of a case where a class has no virtual methods but still needs a virtual destructor:

    class Sample { };
    class Derived : public Sample
    {
     CComPtr<IStream> m_p;
    public:
     Derived() { CreateStreamOnHGlobal(NULL, TRUE, &m_p); }
    };
    
    Sample *p = new Derived;
    delete p;
    

    The delete p will invoke Sample::~Sample instead of Derived::~Derived, resulting in a leak of the stream m_p.
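
    A minimal sketch of the fix: give Sample a virtual destructor, so that deleting a Derived through a Sample* runs Derived::~Derived and the stream is released.

    class Sample {
    public:
     virtual ~Sample() { } // deletion through a Sample* now reaches the derived destructor
    };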

    And here's an example of a case where a class has virtual methods but does not require a virtual destructor.

    class Sample { public: virtual void vf(); };
    class Derived : public Sample { public: virtual void vf(); };
    
    Derived *p = new Derived;
    delete p;
    

    Since the object deletion occurs from the pointer type that matches the type of the actual object, the correct destructor will be invoked. This pattern happens often in COM objects, which expose several virtual methods corresponding to their interfaces, but where the object itself is destroyed by its own implementation and not through a base class pointer. (Notice that no COM interfaces contain virtual destructors.)

    The problem with deciding whether your destructor should be virtual is that you have to know how people will be using your class. If C++ had a "sealed" keyword, then the rule would be simpler: If you do a "delete p" where p is a pointer to an unsealed class, then that class needs to have a virtual destructor. (The imaginary "sealed" keyword makes it explicit when a class can act as the base class for another class.)

  • The Old New Thing

    Controlling which devices will wake the computer out of sleep


    I haven't experienced this problem, but I know of people who have. They'll put their laptop into suspend or standby mode, and after a few seconds, the laptop will spontaneously wake itself up. Someone gave me this tip that might (might) help you figure out what is wrong.

    Open a command prompt and run the command

    powercfg -devicequery wake_from_any

    This lists all the hardware devices that are capable of waking up the computer from standby. But the operating system typically ignores most of them. To see the ones that are not being ignored, run the command

    powercfg -devicequery wake_armed

    This second list is typically much shorter. On my computer, it listed just the keyboard, the mouse, and the modem. (The modem? I never use that thing!)

    You can disable each of these devices one by one until you find the one that is waking up the computer.

    powercfg -devicedisablewake "device name"

    (How is this different from unchecking "Allow this device to wake the computer" from the device properties in Device Manager? Beats me.)

    Once you find the one that is causing problems, you can re-enable the others.

    powercfg -deviceenablewake "device name"

    I would start by disabling wake-up for the keyboard and mouse. Maybe the optical mouse is detecting tiny vibrations in your room. Or the device might simply be "chatty", generating activity even though you aren't touching it.

    This may not solve your problem, but at least it's something you can try. I've never actually tried it myself, so who knows whether it works.

    Exercise: Count how many disclaimers there are in this article, and predict how many people will ignore them.

  • The Old New Thing

    We can't cut that; it's our last feature


    Many years ago, I was asked to help a customer with a problem they were having. I don't remember the details, and they aren't important to the story anyway, but as I was investigating one of their crashes, I started to wonder why they were even doing it.

    I expressed my concerns to the customer liaison. "Why are they writing this code in the first place? The performance will be terrible, and it'll never work exactly the way they want it to."

    The customer liaison confided, "Yeah, I thought the same thing. But this is a feature they're adding to the next version of their product. The product is so far behind schedule, they've been cutting features like mad to get back on track. But they can't cut this feature. It's the last one left!"

  • The Old New Thing

    What does an invalid handle exception in LeaveCriticalSection mean?


    Internally, a critical section is a bunch of counters and flags, and possibly an event. (Note that the internal structure of a critical section is subject to change at any time—in fact, it changed between Windows XP and Windows 2003. The information provided here is therefore intended for troubleshooting and debugging purposes and not for production use.) As long as there is no contention, the counters and flags are sufficient because nobody has had to wait for the critical section (and therefore nobody had to be woken up when the critical section became available).

    If a thread needs to be blocked because the critical section it wants is already owned by another thread, the kernel creates an event for the critical section (if there isn't one already) and waits on it. When the owner of the critical section finally releases it, the event is signaled, thereby alerting all the waiters that the critical section is now available and they should try to enter it again. (If there is more than one waiter, then only one will actually enter the critical section and the others will return to the wait loop.)

    If you get an invalid handle exception in LeaveCriticalSection, it means that the critical section code thought that there were other threads waiting for the critical section to become available, so it tried to signal the event, but the event handle was no good.

    Now you get to use your brain to come up with reasons why this might be.

    One possibility is that the critical section has been corrupted, and the memory that normally holds the event handle has been overwritten with some other value that happens not to be a valid handle.

    Another possibility is that some other piece of code passed an uninitialized variable to the CloseHandle function and ended up closing the critical section's handle by mistake. This can also happen if some other piece of code has a double-close bug, and the handle (now closed) just happened to be reused as the critical section's event handle. When the buggy code closes the handle the second time by mistake, it ends up closing the critical section's handle instead.
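
    Here's a minimal sketch (with invented details) of that double-close pattern: once the handle value has been recycled, the second CloseHandle closes some other component's handle, quite possibly the critical section's event.

    #include <windows.h>
    
    void DoubleCloseBug()
    {
     HANDLE h = CreateEventW(NULL, FALSE, FALSE, NULL);
     // ... use the event ...
     CloseHandle(h); // first close: fine
     // Meanwhile, a contended critical section creates its event, and the
     // handle value we just freed may be reused for it.
     CloseHandle(h); // second close: closes somebody else's handle by mistake
    }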

    Of course, the problem might be that the critical section is not valid because it was never initialized in the first place. The values in the fields are just uninitialized garbage, and when you try to leave this uninitialized critical section, that garbage gets used as an event handle, raising the invalid handle exception.

    Then again, the problem might be that the critical section is not valid because it has already been destroyed. For example, one thread might have code that goes like this:

    EnterCriticalSection(&cs);
    ... do stuff...
    LeaveCriticalSection(&cs);
    

    While that thread is busy doing stuff, another thread calls DeleteCriticalSection(&cs). This destroys the critical section while another thread was still using it. Eventually that thread finishes doing its stuff and calls LeaveCriticalSection, which raises the invalid handle exception because the DeleteCriticalSection already closed the handle.

    All of these are possible reasons for an invalid handle exception in LeaveCriticalSection. To determine which one you're running into will require more debugging, but at least now you know what to be looking for.

    Postscript: One of my colleagues from the kernel team points out that the Locks and Handles checks in Application Verifier are great for debugging issues like this.

  • The Old New Thing

    What's the difference between the COM and EXE extensions?


    Commenter Koro asks why you can rename a COM file to EXE without any apparent ill effects. (James Mastros asked a similar question, though there are additional issues in James' question which I will take up at a later date.)

    Initially, the only programs that existed were COM files. The format of a COM file is... um, none. There is no format. A COM file is just a memory image. This "format" was inherited from CP/M. To load a COM file, the program loader merely sucked the file into memory unchanged and then jumped to the first byte. No fixups, no checksum, nothing. Just load and go.

    The COM file format had many problems, among which was that programs could not be bigger than about 64KB. To address these limitations, the EXE file format was introduced. The header of an EXE file begins with the magic letters "MZ" and continues with other information that the program loader uses to load the program into memory and prepare it for execution.

    And there things lay, with COM files being "raw memory images" and EXE files being "structured", and the distinction was rigidly maintained. If you renamed an EXE file to COM, the operating system would try to execute the header as if it were machine code (which didn't get you very far), and conversely if you renamed a COM file to EXE, the program loader would reject it because the magic MZ header was missing.

    So when did the program loader change to ignore the extension entirely and just use the presence or absence of an MZ header to determine what type of program it is? Compatibility, of course.

    Over time, programs like FORMAT.COM, EDIT.COM, and even COMMAND.COM grew larger than about 64KB. Under the original rules, that meant that the extension had to be changed to EXE, but doing so introduced a compatibility problem. After all, since the files had been COM files up until then, programs or batch files that wanted to, say, spawn a command interpreter, would try to execute COMMAND.COM. If the command interpreter were renamed to COMMAND.EXE, these programs which hard-coded the program name would stop working since there was no COMMAND.COM any more.

    Making the program loader more flexible meant that these "well-known programs" could retain their COM extension while no longer being constrained by the "It all must fit into 64KB" limitation of COM files.

    But wait, what if a COM program just happened to begin with the letters MZ? Fortunately, that never happened, because the machine code for "MZ" disassembles as follows:

    0100 4D            DEC     BP
    0101 5A            POP     DX
    

    The first instruction decrements a register whose initial value is undefined, and the second instruction underflows the stack. No sane program would begin with two undefined operations.

  • The Old New Thing

    Quick overview of how processes exit on Windows XP


    Exiting is one of the scariest moments in the lifetime of a process. (Sort of how landing is one of the scariest moments of air travel.)

    Many of the details of how processes exit are left unspecified in Win32, so different Win32 implementations can follow different mechanisms. For example, Win32s, Windows 95, and Windows NT all shut down processes differently. (I wouldn't be surprised if Windows CE uses yet another different mechanism.) Therefore, bear in mind that what I write in this mini-series is implementation detail and can change at any time without warning. I'm writing about it because these details can highlight bugs lurking in your code. In particular, I'm going to discuss the way processes exit on Windows XP.

    I should say up front that I do not agree with many steps in the way processes exit on Windows XP. The purpose of this mini-series is not to justify the way processes exit but merely to fill you in on some of the behind-the-scenes activities so you are better-armed when you have to investigate a mysterious crash or hang during exit. (Note that I just refer to it as the way processes exit on Windows XP rather than saying that it is how process exit is designed. As one of my colleagues put it, "Using the word design to describe this is like using the term swimming pool to refer to a puddle in your garden.")

    When your program calls ExitProcess, a whole lot of machinery springs into action. First, all the threads in the process (except the one calling ExitProcess) are forcibly terminated. This dates back to the old-fashioned theory on how processes should exit: Under the old-fashioned theory, when your process decides that it's time to exit, it should already have cleaned up all its threads. The termination of threads, therefore, is just a safety net to catch the stuff you may have missed. It doesn't even wait two seconds first.

    Now, we're not talking happy termination like ExitThread; that's not possible since the thread could be in the middle of doing something. Injecting a call to ExitThread would result in DLL_THREAD_DETACH notifications being sent at times the thread was not prepared for. Nope, these threads are terminated in the style of TerminateThread: Just yank the rug out from under it. Buh-bye. This is an ex-thread.

    Well, that was a pretty drastic move, now, wasn't it. And all this after the scary warnings in MSDN that TerminateThread is a bad function that should be avoided!

    Wait, it gets worse.

    Some of those threads that got forcibly terminated may have owned critical sections, mutexes, home-grown synchronization primitives (such as spin-locks), all those things that the one remaining thread might need access to during its DLL_PROCESS_DETACH handling. Well, mutexes are sort of covered; if you try to enter that mutex, you'll get the mysterious WAIT_ABANDONED return code which tells you that "Uh-oh, things are kind of messed up."

    What about critical sections? There is no "Uh-oh" return value for critical sections; EnterCriticalSection doesn't have a return value. Instead, the kernel just says "Open season on critical sections!" I get the mental image of all the gates in a parking garage just opening up and letting anybody in and out. [See correction.]

    As for the home-grown stuff, well, you're on your own.

    This means that if your code happened to have owned a critical section at the time somebody called ExitProcess, the data structure the critical section is protecting has a good chance of being in an inconsistent state. (After all, if it were consistent, you probably would have exited the critical section! Well, assuming you entered the critical section because you were updating the structure as opposed to reading it.) Your DLL_PROCESS_DETACH code runs, enters the critical section, and it succeeds because "all the gates are up". Now your DLL_PROCESS_DETACH code starts behaving erratically because the values in that data structure are inconsistent.

    Oh dear, now you have a pretty ugly mess on your hands.

    And if your thread was terminated while it owned a spin-lock or some other home-grown synchronization object, your DLL_PROCESS_DETACH will most likely simply hang indefinitely waiting patiently for that terminated thread to release the spin-lock (which it never will do).

    But wait, it gets worse. That critical section might have been the one that protects the process heap! If one of the threads that got terminated happened to be in the middle of a heap function like HeapAlloc or LocalFree, then the process heap may very well be inconsistent. If your DLL_PROCESS_DETACH tries to allocate or free memory, it may crash due to a corrupted heap.

    Moral of the story: If you're getting a DLL_PROCESS_DETACH due to process termination,† don't try anything clever. Just return without doing anything and let the normal process clean-up happen. The kernel will close all your open handles to kernel objects. Any memory you allocated will be freed automatically when the process's address space is torn down. Just let the process die a quiet death.

    Note that if you were a good boy and cleaned up all the threads in the process before calling ExitProcess, then you've escaped all this craziness, since there is nothing to clean up.

    Note also that if you're getting a DLL_PROCESS_DETACH due to dynamic unloading, then you do need to clean up your kernel objects and allocated memory because the process is going to continue running. But on the other hand, in the case of dynamic unloading, no other threads should be executing code in your DLL anyway (since you're about to be unloaded), so—assuming you coded up your DLL correctly—none of your critical sections should be held and your data structures should be consistent.
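
    In case it helps, here's a minimal sketch of a DllMain that follows this advice. The lpReserved parameter tells the two cases apart: for DLL_PROCESS_DETACH it is non-NULL when the process is terminating and NULL when the DLL is being unloaded dynamically.

    #include <windows.h>
    
    BOOL WINAPI DllMain(HINSTANCE hinst, DWORD dwReason, LPVOID lpReserved)
    {
     switch (dwReason) {
     case DLL_PROCESS_DETACH:
      if (lpReserved != NULL) {
       // Process termination: don't try anything clever. The kernel will
       // close our handles and reclaim our memory.
      } else {
       // Dynamic unload: the process keeps running, so clean up kernel
       // objects and allocated memory here.
      }
      break;
     }
     return TRUE;
    }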

    Hang on, this disaster isn't over yet. Even though the kernel went around terminating all but one thread in the process, that doesn't mean that the creation of new threads is blocked. If somebody calls CreateThread in their DLL_PROCESS_DETACH (as crazy as it sounds), the thread will indeed be created and start running! But remember, "all the gates are up", so your critical sections are just window dressing to make you feel good.

    (The ability to create threads after process termination has begun is not a mistake; it's intentional and necessary. Thread injection is how the debugger breaks into a process. If thread injection were not permitted, you wouldn't be able to debug process termination!)

    Next time, we'll see how the way process termination takes place on Windows XP caused not one but two problems.

    Footnotes

    †Everybody reading this article should already know how to determine whether this is the case. I'm assuming you're smart. Don't disappoint me.

  • The Old New Thing

    If you can detect the difference between an emulator and the real thing, then the emulator has failed


    Recall that a corrupted program sometimes results in a "Program too big to fit in memory" error. In response, Dog complained that while that may have been a reasonable response back in the 1980's, in today's world, there's plenty of memory around for the MS-DOS emulator to add that extra check and return a better error code.

    Well yeah, but if you change the externally visible behavior, then you've failed as an emulator. The whole point of an emulator is to mimic another world, and any deviations from that other world can come back to bite you.

    MS-DOS is perhaps one of the strongest examples of requiring absolute unyielding backward compatibility. Hundreds if not thousands of programs scanned memory looking for specific byte sequences inside MS-DOS so they could patch them, or hunted around inside MS-DOS's internal state variables so they could modify them. If you move one thing out of place, those programs stop working.

    MS-DOS contains chunks of "junk DNA", code fragments which do nothing but waste space, but which exist so that programs which go scanning through memory looking for specific byte sequences will find them. (This principle is not dead; there's even some junk DNA in Explorer.)

    Given the extreme compatibility required for MS-DOS emulation, I'm not surprised that the original error behavior was preserved. There is certainly some program out there that stops working if attempting to execute a COM-style image larger than 64KB returns any error other than 8. (Besides, if you wanted it to return some other error code, you had precious few to choose from.)

  • The Old New Thing

    Do not overload the E_NOINTERFACE error


    One of the more subtle ways people mess up IUnknown::QueryInterface is returning E_NOINTERFACE when the problem wasn't actually an unsupported interface. The E_NOINTERFACE return value has very specific meaning. Do not use it as your generic "gosh, something went wrong" error. (Use an appropriate error such as E_OUTOFMEMORY or E_ACCESSDENIED.)

    Recall that the rules for IUnknown::QueryInterface are that (in the absence of catastrophic errors such as E_OUTOFMEMORY) if a request for a particular interface succeeds, then it must always succeed in the future for that object. Similarly, if a request fails with E_NOINTERFACE, then it must always fail in the future for that object.

    These rules exist for a reason.

    In the case where COM needs to create a proxy for your object (for example, to marshal the object into a different apartment), the COM infrastructure does a lot of interface caching (and negative caching) for performance reasons. For example, if a request for an interface fails, COM remembers this so that future requests for that interface are failed immediately rather than being marshalled to the original object only to have the request fail anyway. Requests for unsupported interfaces are very common in COM, and optimizing that case yields significant performance improvements.

    If you start returning E_NOINTERFACE for problems other than "The object doesn't support this interface", COM will assume that the object really doesn't support the interface and may not ask for it again even if you do. This in turn leads to very strange bugs that defy debugging: You are at a call to IUnknown::QueryInterface, you set a breakpoint on your object's implementation of IUnknown::QueryInterface to see what the problem is, you step over the call and get E_NOINTERFACE back without your breakpoint ever being hit. Why? Because at some point in the past, you said you didn't support the interface, and COM remembered this and "saved you the trouble" of having to respond to a question you already answered. The COM folks tell me that they and their comrades in product support end up spending hours debugging customers' problems like "When my computer is under load, sometimes I start getting E_NOINTERFACE for interfaces I definitely support."

    Save yourself and the COM folks several hours of frustration. Don't return E_NOINTERFACE unless you really mean it.
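
    By way of illustration, here's a minimal sketch (using a hypothetical MyObject class and IWidget interface) of a QueryInterface that reserves E_NOINTERFACE for its one true meaning. If producing a supported interface fails for some other reason, return that error (E_OUTOFMEMORY, E_ACCESSDENIED, and so on), never E_NOINTERFACE.

    HRESULT MyObject::QueryInterface(REFIID riid, void** ppv)
    {
     *ppv = NULL;
     if (riid == IID_IUnknown) {
      *ppv = static_cast<IUnknown*>(this);
     } else if (riid == IID_IWidget) { // hypothetical interface this object supports
      *ppv = static_cast<IWidget*>(this);
     } else {
      return E_NOINTERFACE; // means exactly "I do not support that interface"
     }
     AddRef();
     return S_OK;
    }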

  • The Old New Thing

    Psychic debugging: The first step in diagnosing a deadlock is a simple matter of following the money


    Somebody asked our team for help because they believed they hit a deadlock in their program's UI. (It's unclear why they asked our team, but I guess since our team uses the window manager, and their program uses the window manager, we're all in the same boat. You'd think they'd ask the window manager team for help.)

    But it turns out that solving the problem required no special expertise. In fact, you probably know enough to solve it, too.

    Here are the interesting threads:

      0  Id: 980.d30 Suspend: 1 Teb: 7ffdf000 Unfrozen
    ChildEBP RetAddr  
    0023dc90 7745dd8c ntdll!KiFastSystemCallRet 
    0023dc94 774619e0 ntdll!ZwWaitForSingleObject+0xc 
    0023dcf8 774618fb ntdll!RtlpWaitOnCriticalSection+0x154 
    0023dd20 00cd03f2 ntdll!RtlEnterCriticalSection+0x152 
    0023dd38 00cd0635 myapp!LogMsg+0x15 
    0023dd58 00cd0c6a myapp!LogRawIndirect+0x27 
    0023fcb8 00cb64a7 myapp!Log+0x62 
    0023fce8 00cd7598 myapp!SimpleClientConfiguration::Cleanup+0x17 
    0023fcf8 00cd8ffe myapp!MsgProc+0x1a9 
    0023fd10 00cda1a9 myapp!Close+0x43 
    0023fd24 761636d2 myapp!WndProc+0x62 
    0023fd50 7616330c USER32!InternalCallWinProc+0x23 
    0023fdc8 76164030 USER32!UserCallWinProcCheckWow+0x14b 
    0023fe2c 76164088 USER32!DispatchMessageWorker+0x322 
    0023fe3c 00cda3ba USER32!DispatchMessageW+0xf 
    0023fe9c 00cd0273 myapp!GuiMain+0xe8 
    0023feb4 00ccdeca myapp!wWinMain+0x87 
    0023ff48 7735c6fc myapp!__wmainCRTStartup+0x150 
    0023ff54 7742e33f kernel32!BaseThreadInitThunk+0xe 
    0023ff94 00000000 ntdll!_RtlUserThreadStart+0x23 
     
       1  Id: 980.ce8 Suspend: 1 Teb: 7ffdd000 Unfrozen
    ChildEBP RetAddr  
    00f8d550 76162f81 ntdll!KiFastSystemCallRet 
    00f8d554 76162fc4 USER32!NtUserSetWindowLong+0xc 
    00f8d578 76162fe5 USER32!_SetWindowLong+0x131 
    00f8d590 74aa5c2b USER32!SetWindowLongW+0x15 
    00f8d5a4 74aa5b65 comctl32_74a70000!ClearWindowStyle+0x23 
    00f8d5cc 74ca568f comctl32_74a70000!CCSetScrollInfo+0x103 
    00f8d618 76164ea2 uxtheme!ThemeSetScrollInfoProc+0x10e 
    00f8d660 00cdd913 USER32!SetScrollInfo+0x57 
    00f8d694 00cdf0a4 myapp!SetScrollRange+0x3b 
    00f8d6d4 00cdd777 myapp!TextOutputStringColor+0x134 
    00f8d93c 00cd04c4 myapp!TextLogMsgProc+0x3db 
    00f8d960 00cd0635 myapp!LogMsg+0xe7 
    00f8d980 00cd0c6a myapp!LogRawIndirect+0x27 
    00f8f8e0 00cd6367 myapp!Log+0x62 
    00f8faf0 7735c6fc myapp!remote_ext::ServerListenerThread+0x45c 
    00f8fafc 7742e33f kernel32!BaseThreadInitThunk+0xe 
    00f8fb3c 00000000 ntdll!_RtlUserThreadStart+0x23 
    

    The thing about debugging deadlocks is that you usually don't need to understand what's going on. The diagnosis is largely mechanical once you get your foot in the door. (Though sometimes it's hard to get your initial footing.)

    Let's look at thread 0. It is waiting for a critical section. The owner of that critical section is thread 1. How do I know that? Well, I could've debugged it, or I could've used my psychic powers to say, "Gosh, that function is called LogMsg, and look there's another thread that is inside the function LogMsg. I bet that function is using a critical section to ensure that only one thread uses it at a time."

    Okay, so thread 0 is waiting for thread 1. What is thread 1 doing? Well, it entered the critical section back in the LogMsg function, and then it did some text processing and, oh look, it's doing a SetScrollInfo. The SetScrollInfo went into comctl32 and ultimately resulted in a SetWindowLong. The window that the application passed to SetScrollInfo is owned by thread 0. How do I know that? Well, I could've debugged it, or I could've used my psychic powers to say, "Gosh, the change in the scroll info has led to a change in window styles, and the thread is trying to notify the window of the change in style. The window clearly belongs to another thread; otherwise we wouldn't be stuck in the first place, and given that we see only two threads, there isn't much choice as to what other thread it could be!"

    At this point, I think you see the deadlock. Thread 0 is waiting for thread 1 to exit the critical section, but thread 1 is waiting for thread 0 to process the style change message.

    What happened here is that the program sent a message while holding a critical section. Since message handling can trigger hooks and cross-thread activity, you cannot hold any resources when you send a message because the hook or the message recipient might want to acquire that resource that you own, resulting in a deadlock.
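
    In code, the fix looks something like this (a minimal sketch with invented names): do the protected work inside the critical section, and make the call that can send cross-thread messages only after the critical section has been released.

    #include <windows.h>
    
    CRITICAL_SECTION g_csLog; // protects the shared log buffer (hypothetical)
    HWND g_hwndLog;           // log window owned by the UI thread (hypothetical)
    
    void LogMsg(const wchar_t* msg)
    {
     EnterCriticalSection(&g_csLog);
     // ... append msg to the shared log buffer (protected work only) ...
     LeaveCriticalSection(&g_csLog);
    
     // SetScrollInfo may send messages to a window owned by another thread,
     // so call it only after leaving the critical section.
     SCROLLINFO si = { sizeof(si), SIF_RANGE, 0, 100 };
     SetScrollInfo(g_hwndLog, SB_VERT, &si, TRUE);
    }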

  • The Old New Thing

    Everybody thinks about CLR objects the wrong way (well not everybody)


    Many people responded to Everybody thinks about garbage collection the wrong way by proposing variations on auto-disposal based on scope.

    What these people fail to recognize is that they are dealing with object references, not objects. (I'm restricting the discussion to reference types, naturally.) In C++, you can put an object in a local variable. In the CLR, you can only put an object reference in a local variable.

    For those who think in terms of C++, imagine if it were impossible to declare instances of C++ classes as local variables on the stack. Instead, you had to declare a local variable that was a pointer to your C++ class, and put the object in the pointer.

    C++ (automatic storage duration, no longer possible in this imagined world):

    void Function(OtherClass o)
    {
     // No longer possible to declare objects
     // with automatic storage duration
     Color c(0,0,0);
     Brush b(c);
     o.SetBackground(b);
    }

    C#:

    void Function(OtherClass o)
    {
     Color c = new Color(0,0,0);
     Brush b = new Brush(c);
     o.SetBackground(b);
    }

    C++ (pointers only):

    void Function(OtherClass* o)
    {
     Color* c = new Color(0,0,0);
     Brush* b = new Brush(c);
     o->SetBackground(b);
    }

    This world where you can only use pointers to refer to objects is the world of the CLR.

    In the CLR, objects never go out of scope because objects don't have scope.¹ Object references have scope. Objects are alive from the point of construction to the point that the last reference goes out of scope or is otherwise destroyed.

    If objects were auto-disposed when references went out of scope, you'd have all sorts of problems. I will use C++ notation instead of CLR notation to emphasize that we are working with references, not objects. (I can't use actual C++ references since you cannot change the referent of a C++ reference, something that is permitted by the CLR.)

    C#:

    void Function(OtherClass o)
    {
     Color c = new Color(0,0,0);
     Brush b = new Brush(c);
     Brush b2 = b;
     o.SetBackground(b2);
    }

    C++:

    void Function(OtherClass* o)
    {
     Color* c = new Color(0,0,0);
     Brush* b = new Brush(c);
     Brush* b2 = b;
     o->SetBackground(b2);
     // automatic disposal when variables go out of scope
     dispose b2;
     dispose b;
     dispose c;
     dispose o;
    }

    Oops, we just double-disposed the Brush object and probably prematurely disposed the OtherClass object. Fortunately, disposal is idempotent, so the double-disposal is harmless (assuming you actually meant disposal and not destruction). The introduction of b2 was artificial in this example, but you can imagine b2 being, say, the leftover value in a variable at the end of a loop, in which case we just accidentally disposed the last object in an array.

    Let's say there's some attribute you can put on a local variable or parameter to say that you don't want it auto-disposed on scope exit.

    C#:

    void Function([NoAutoDispose] OtherClass o)
    {
     Color c = new Color(0,0,0);
     Brush b = new Brush(c);
     [NoAutoDispose] Brush b2 = b;
     o.SetBackground(b2);
    }

    C++:

    void Function([NoAutoDispose] OtherClass* o)
    {
     Color* c = new Color(0,0,0);
     Brush* b = new Brush(c);
     [NoAutoDispose] Brush* b2 = b;
     o->SetBackground(b2);
     // automatic disposal when variables go out of scope
     dispose b;
     dispose c;
    }

    Okay, that looks good. We disposed the Brush object exactly once and didn't prematurely dispose the OtherClass object that we received as a parameter. (Maybe we could make [NoAutoDispose] the default for parameters to save people a lot of typing.) We're good, right?

    Let's do some trivial code cleanup, like inlining the Color parameter.

    C#:

    void Function([NoAutoDispose] OtherClass o)
    {
     Brush b = new Brush(new Color(0,0,0));
     [NoAutoDispose] Brush b2 = b;
     o.SetBackground(b2);
    }

    C++:

    void Function([NoAutoDispose] OtherClass* o)
    {
     Brush* b = new Brush(new Color(0,0,0));
     [NoAutoDispose] Brush* b2 = b;
     o->SetBackground(b2);
     // automatic disposal when variables go out of scope
     dispose b;
    }

    Whoa, we just introduced a semantic change by what seemed like a harmless transformation: The Color object is no longer auto-disposed. This is even more insidious than the scope of a variable affecting its treatment by anonymous closures, for the introduction of temporary variables to break up a complex expression (or the removal of one-time temporary variables) is a common transformation that people expect to be harmless, especially since many language transformations are expressed in terms of temporary variables. Now you have to remember to tag all of your temporary variables with [NoAutoDispose].

    Wait, we're not done yet. What does SetBackground do?

    C#:

    void OtherClass.SetBackground([NoAutoDispose] Brush b)
    {
     this.background = b;
    }

    C++:

    void OtherClass::SetBackground([NoAutoDispose] Brush* b)
    {
     this->background = b;
    }

    Oops, there is still a reference to that Brush in the o.background member. We disposed an object while there were still outstanding references to it. Now when the OtherClass object tries to use the reference, it will find itself operating on a disposed object.

    Working backward, this means that we should have put a [NoAutoDispose] attribute on the b variable. At this point, it's six of one, a half dozen of the other. Either you put using around all the things that you want auto-disposed or you put [NoAutoDispose] on all the things that you don't.²

    The C++ solution to this problem is to use something like shared_ptr and reference-counted objects, with the assistance of weak_ptr to avoid reference cycles, and being very selective about which objects are allocated with automatic storage duration. Sure, you could try to bring this model of programming to the CLR, but now you're just trying to pick all the cheese off your cheeseburger and intentionally going against the automatic memory management design principles of the CLR.
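
    For the curious, here's a minimal C++ sketch of that approach, using hypothetical Color, Brush, and OtherClass types: shared_ptr keeps each object alive as long as any reference to it remains, which is roughly the lifetime rule the CLR gives you automatically.

    #include <memory>
    
    struct Color { int r, g, b; };
    
    struct Brush {
     explicit Brush(std::shared_ptr<Color> c) : color(std::move(c)) { }
     std::shared_ptr<Color> color;
    };
    
    struct OtherClass {
     void SetBackground(std::shared_ptr<Brush> b) { background = std::move(b); }
     std::shared_ptr<Brush> background; // keeps the Brush alive
     // use std::weak_ptr instead if this link could form a reference cycle
    };
    
    void Function(OtherClass& o)
    {
     std::shared_ptr<Color> c = std::make_shared<Color>(Color{0, 0, 0});
     std::shared_ptr<Brush> b = std::make_shared<Brush>(c);
     o.SetBackground(b);
     // no explicit dispose: the Brush lives as long as somebody still refers to it
    }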

    I was sort of assuming that since you're here for CLR Week, you're one of those people who actively chose to use the CLR and want to use it in the manner in which it was intended, rather than somebody who wants it to work like C++. If you want C++, you know where to find it.

    Footnote

    ¹ Or at least don't have scope in the sense we're discussing here.

    ² As for an attribute for specific classes to have auto-dispose behavior, that works only if all references to auto-dispose objects are in the context of a create/dispose pattern. References to auto-dispose objects outside of the create/dispose pattern would need to be tagged with the [NoAutoDispose] attribute.

    [AutoDispose] class Stream { ... };
    
    Stream MyClass.GetSaveStream()
    {
     [NoAutoDispose] Stream stm;
     if (saveToFile) {
      stm = ...;
     } else {
      stm = ...;
     }
     return stm;
    }
    
    void MyClass.Save()
    {
     // NB! do not combine into one line
     Stream stm = GetSaveStream();
     SaveToStream(stm);
    }
    