• The Old New Thing

    No matter where you put an advanced setting, somebody will tell you that you are an idiot

    • 127 Comments

    There are advanced settings in Windows, settings which normal users not only shouldn't be messing with, but which they arguably shouldn't even know about, because that would give them just enough knowledge to be dangerous. And no matter where you put that advanced setting, somebody will tell you that you are an idiot.

    Here they are on an approximate scale. If you dig through the comments on this blog, you can probably find every single position represented somewhere.

    1. It's okay if the setting is hidden behind a registry key. I know how to set it myself.
    2. I don't want to mess with the registry. Put the setting in a configuration file that I pass to the installer.
    3. I don't want to write a configuration file. The program should have an Advanced button that calls up a dialog which lets the user change the advanced setting.
    4. Every setting must be exposed in the user interface.
    5. Every setting must be exposed in the user interface by default. Don't make me call up the extended context menu.
    6. The first time the user does X, show the user a dialog asking whether they want to change the advanced setting.

    If you implement level N, people will demand that you implement level N+1. It doesn't stop until you reach the last step, which is aggressively user-hostile. (And then there will also be people who complain that you went too far.)

    From a technical standpoint, each of the above steps is about ten to a hundred times harder than the previous one. If you put it in a configuration file, you have to write code to load a parser and extract the value. If you want an Advanced button, now you have to write a dialog box (which is already a lot of work), consult with the usability and user-assistance teams to come up with the correct wording for the setting, write help text, provide guidance to the translators, and, since the setting is now exposed in the user interface, write automated tests and add it to the test matrices. It's a huge amount of work to add a dialog box, work that could be spent on something that benefits a much larger set of customers in a more direct manner.

    That's why most advanced settings hang out at level 1 or, in the case of configuring program installation, level 2. If you're so much of a geek that you want to change these advanced settings, it probably won't kill you to fire up a text editor and write a little configuration file.
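
    For a sense of scale, here's roughly all the code a level 1 setting requires. (A minimal sketch; the Contoso\LitWare key and the AdvancedTimeoutSeconds value are invented for illustration.)

    #include <windows.h>

    // Read an optional advanced setting from the registry, falling back
    // to a default if the geek hasn't created the value.
    DWORD GetAdvancedTimeoutSeconds()
    {
     DWORD value;
     DWORD size = sizeof(value);
     if (RegGetValueW(HKEY_CURRENT_USER, L"Software\\Contoso\\LitWare",
                      L"AdvancedTimeoutSeconds", RRF_RT_REG_DWORD,
                      NULL, &value, &size) != ERROR_SUCCESS)
     {
      value = 30; // default when the setting is absent
     }
     return value;
    }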

    Sidebar

    Joel's count of "fifteen ways to shut down Windows" is a bit disingenuous, since he's counting six hardware affordances: "Four FN+key combinations... an on-off button... you can close the lid." Okay, fine, Joel, we'll play it your way. Your proposal to narrow it down to one "Bye" button still leaves seven ways to shut down Windows.

    And then people will ask how to log off.

  • The Old New Thing

    We've traced the call and it's coming from inside the house: A function call that always fails

    • 64 Comments

    A customer reported that they had a problem with a particular function added in Windows 7. The tricky bit was that the function was used only on very high-end hardware, not the sort of thing your average developer has lying around.

    GROUP_AFFINITY GroupAffinity;
    ... code that initializes the GroupAffinity structure ...
    if (!SetThreadGroupAffinity(hThread, &GroupAffinity, NULL));
    {
     printf("SetThreadGroupAffinity failed: %d\n", GetLastError());
     return FALSE;
    }
    

    The customer reported that the function always failed with error 122 (ERROR_INSUFFICIENT_BUFFER) even though the buffer seemed perfectly valid.

    Since most of us don't have machines with more than 64 processors, we couldn't run the code on our own machines to see what happens. People asked some clarifying questions, like whether this code is compiled 32-bit or 64-bit (thinking that maybe there is an issue with the emulation layer), until somebody noticed that there was a stray semicolon at the end of the if statement.
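
    If you reformat the code the way the compiler reads it, the bug jumps out (a sketch):

    if (!SetThreadGroupAffinity(hThread, &GroupAffinity, NULL))
     ; // the stray semicolon is an empty statement, and it is the
       // entire body of the if
    { // this brace just opens an ordinary block, which always executes
     printf("SetThreadGroupAffinity failed: %d\n", GetLastError());
     return FALSE; // so the function reports failure even when the call succeeded
    }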

    The customer was naturally embarrassed, but was gracious enough to admit that, yup, removing the semicolon fixed the problem.

    This reminds me of an incident many years ago. I was having a horrible time debugging a simple loop. It looked like the compiler was on drugs and was simply ignoring my loop conditions and always dropping out of the loop. At wit's end, I asked a colleague to come to my office and serve as a second set of eyes. I talked him through the code as I single-stepped:

    "Okay, so we set up the loop here..."

    NODE pn = GetActiveNode();
    

    "And we enter the loop, continuing while the node still needs processing."

    if (pn->NeedsProcessing())
    {
    

    "Okay, we entered the loop. Now we realign the skew rods on the node."

     pn->RealignSkewRods();
    

    "If the treadle is splayed, we need to calibrate the node against it."

     if (IsSplayed()) pn->Recalibrate(this);
    

    "And then we loop back to see if there is more work to be done on this node."

    }
    

    "But look, even though the node needs processing «view node members», we don't loop back. We just drop out of the loop. What's going on?"

    Um, that's an if statement up there, not a while statement.

    A moment of silence while I process this piece of information.

    "All right then, sorry to bother you, hey, how about that sporting event last night, huh?"

  • The Old New Thing

    User interface code + multi-threaded apartment = death

    • 17 Comments

    There are single-threaded apartments and multi-threaded apartments. Well, first there were only single-threaded apartments. No wait, let's try that again.

    First, applications had only one thread. Remember, 16-bit Windows didn't have threads. Each process had one of what we today call a thread, end of story. Compatibility with this ancient model still exists today, thanks to the dreaded "main" threading model. The less said about that threading model the better.

    OLE was developed back in the 16-bit days, so it used window messages to pass information between processes, there being no other inter-process communication mechanism available. When you initialized OLE, it created a secret OleMainThreadWnd window, and those secret windows were used to communicate between processes (and in Win32, threads). As we learned some time ago, window handles have thread affinity, which means that these communication windows have thread affinity, which means that OLE has thread affinity. When you made a call to an object that belonged to another apartment, OLE posted a message to the owner thread's secret OleMainThreadWnd window to tell it what needed to be done, and then it went into a private message loop waiting for the owner thread to do the work and post the results back.

    Meanwhile, the OLE team realized that there were really two parts to what they were doing. There was the low-level object and interface management stuff (IUnknown, CoMarshalInterThreadInterfaceInStream) and the high-level "object linking and embedding" stuff (IOleWindow, IOleDocument) that was the impetus for the OLE effort in the first place. The low-level stuff got broken out into a functional layer known as COM; the high-level stuff kept the name OLE.
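
    For example, the low-level layer is what you use to pass an interface pointer from one apartment to another. A minimal sketch, assuming a hypothetical ISomething interface and an existing pointer pSomething to it:

    // Originating thread: marshal the interface pointer into a stream.
    IStream* pStream;
    HRESULT hr = CoMarshalInterThreadInterfaceInStream(
                    IID_ISomething, pSomething, &pStream);

    // Receiving thread: unmarshal a proxy suited to its own apartment.
    ISomething* pProxy;
    hr = CoGetInterfaceAndReleaseStream(pStream, IID_ISomething,
                                        (void**)&pProxy);
    // ... use pProxy, then pProxy->Release() when finished ...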

    Breaking the low-level and high-level stuff apart allowed the low-level stuff to be used by non-GUI programs, which for quite some time were eyeing that object management functionality with some jealousy. As a result, COM grew two personalities, one focused on the GUI customers and another focused on the non-GUI customers. For the non-GUI customers, additional functionality such as multi-threaded apartments were added, and since the customers didn't do GUI stuff, multi-threaded apartments weren't burdened by the GUI rules. They didn't post messages to communicate with each other; they used kernel objects and WaitForSingleObject. Everybody wins, right?

    Well, yes, everybody wins, but you have to know what side your bread is buttered on. If you initialize a GUI thread as a multi-threaded apartment, you have violated the assumptions under which multi-threaded apartments were invented! Multi-threaded apartments assume that they are not running on GUI threads since they don't pump messages; they just use WaitForSingleObject. This not only clogs up broadcasts, but it can also deadlock your program. The thread that owns the object might try to send a message to your thread, but your thread can't receive the message since it isn't pumping messages.

    That's why COM objects involved with user interface programming nearly always require a single-threaded apartment and why OleInitialize initializes a single-threaded apartment. Because multi-threaded apartments were designed on the assumption that there was no user interface. Once you're doing user interface work, you have to use a single-threaded apartment.
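
    In practical terms: if your thread creates windows, initialize it as a single-threaded apartment. A minimal sketch of the two cases:

    #include <objbase.h>

    // UI thread: single-threaded apartment, which is what OleInitialize
    // would also give you.
    HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
    if (SUCCEEDED(hr))
    {
     // ... create windows, pump messages, use UI-oriented COM objects ...
     CoUninitialize();
    }

    // Dedicated worker thread with no UI: a multi-threaded apartment is fine.
    // CoInitializeEx(NULL, COINIT_MULTITHREADED);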

  • The Old New Thing

    We can't cut that; it's our last feature

    • 38 Comments

    Many years ago, I was asked to help a customer with a problem they were having. I don't remember the details, and they aren't important to the story anyway, but as I was investigating one of their crashes, I started to wonder why they were even doing it.

    I expressed my concerns to the customer liaison. "Why are they writing this code in the first place? The performance will be terrible, and it'll never work exactly the way they want it to."

    The customer liaison confided, "Yeah, I thought the same thing. But this is a feature they're adding to the next version of their product. The product is so far behind schedule, they've been cutting features like mad to get back on track. But they can't cut this feature. It's the last one left!"

  • The Old New Thing

    What does an invalid handle exception in LeaveCriticalSection mean?

    • 27 Comments

    Internally, a critical section is a bunch of counters and flags, and possibly an event. (Note that the internal structure of a critical section is subject to change at any time—in fact, it changed between Windows XP and Windows Server 2003. The information provided here is therefore intended for troubleshooting and debugging purposes and not for production use.) As long as there is no contention, the counters and flags are sufficient because nobody has had to wait for the critical section (and therefore nobody had to be woken up when the critical section became available).

    If a thread needs to be blocked because the critical section it wants is already owned by another thread, the kernel creates an event for the critical section (if there isn't one already) and waits on it. When the owner of the critical section finally releases it, the event is signaled, thereby alerting all the waiters that the critical section is now available and they should try to enter it again. (If there is more than one waiter, then only one will actually enter the critical section and the others will return to the wait loop.)

    If you get an invalid handle exception in LeaveCriticalSection, it means that the critical section code thought that there were other threads waiting for the critical section to become available, so it tried to signal the event, but the event handle was no good.

    Now you get to use your brain to come up with reasons why this might be.

    One possibility is that the critical section has been corrupted, and the memory that normally holds the event handle has been overwritten with some other value that happens not to be a valid handle.

    Another possibility is that some other piece of code passed an uninitialized variable to the CloseHandle function and ended up closing the critical section's handle by mistake. This can also happen if some other piece of code has a double-close bug, and the handle (now closed) just happened to be reused as the critical section's event handle. When the buggy code closes the handle the second time by mistake, it ends up closing the critical section's handle instead.
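
    Here's a sketch of how that double-close variation plays out (the timing is illustrative):

    HANDLE h = CreateEventW(NULL, FALSE, FALSE, NULL);
    CloseHandle(h); // first close: legitimate; the handle value is now free
    // ... meanwhile, contention somewhere causes a critical section to
    // create its internal event, which happens to reuse that handle value ...
    CloseHandle(h); // second close (the bug): silently destroys the critical
                    // section's event; a later LeaveCriticalSection that needs
                    // the event raises the invalid handle exception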

    Of course, the problem might be that the critical section is not valid because it was never initialized in the first place. The values in the fields are just uninitialized garbage, and when you try to leave this uninitialized critical section, that garbage gets used as an event handle, raising the invalid handle exception.

    Then again, the problem might be that the critical section is not valid because it has already been destroyed. For example, one thread might have code that goes like this:

    EnterCriticalSection(&cs);
    ... do stuff...
    LeaveCriticalSection(&cs);
    

    While that thread is busy doing stuff, another thread calls DeleteCriticalSection(&cs). This destroys the critical section while another thread was still using it. Eventually that thread finishes doing its stuff and calls LeaveCriticalSection, which raises the invalid handle exception because the DeleteCriticalSection already closed the handle.
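
    Laid out as a timeline (a sketch):

    // Thread 1                      Thread 2
    // EnterCriticalSection(&cs);
    // ... do stuff ...              DeleteCriticalSection(&cs); // too soon!
    // LeaveCriticalSection(&cs);    // raises the invalid handle exception,
    //                               // since the event is already closed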

    All of these are possible reasons for an invalid handle exception in LeaveCriticalSection. To determine which one you're running into will require more debugging, but at least now you know what to be looking for.

    Postscript: One of my colleagues from the kernel team points out that the Locks and Handles checks in Application Verifier are great for debugging issues like this.

  • The Old New Thing

    Quick overview of how processes exit on Windows XP

    • 37 Comments

    Exiting is one of the scariest moments in the lifetime of a process. (Sort of how landing is one of the scariest moments of air travel.)

    Many of the details of how processes exit are left unspecified in Win32, so different Win32 implementations can follow different mechanisms. For example, Win32s, Windows 95, and Windows NT all shut down processes differently. (I wouldn't be surprised if Windows CE uses yet another different mechanism.) Therefore, bear in mind that what I write in this mini-series is implementation detail and can change at any time without warning. I'm writing about it because these details can highlight bugs lurking in your code. In particular, I'm going to discuss the way processes exit on Windows XP.

    I should say up front that I do not agree with many steps in the way processes exit on Windows XP. The purpose of this mini-series is not to justify the way processes exit but merely to fill you in on some of the behind-the-scenes activities so you are better armed when you have to investigate a mysterious crash or hang during exit. (Note that I just refer to it as the way processes exit on Windows XP rather than saying that it is how process exit is designed. As one of my colleagues put it, "Using the word design to describe this is like using the term swimming pool to refer to a puddle in your garden.")

    When your program calls ExitProcess, a whole lot of machinery springs into action. First, all the threads in the process (except the one calling ExitProcess) are forcibly terminated. This dates back to the old-fashioned theory on how processes should exit: Under the old-fashioned theory, when your process decides that it's time to exit, it should already have cleaned up all its threads. The termination of threads, therefore, is just a safety net to catch the stuff you may have missed. It doesn't even wait two seconds first.

    Now, we're not talking happy termination like ExitThread; that's not possible since the thread could be in the middle of doing something. Injecting a call to ExitThread would result in DLL_THREAD_DETACH notifications being sent at times the thread was not prepared for. Nope, these threads are terminated in the style of TerminateThread: Just yank the rug out from under it. Buh-bye. This is an ex-thread.

    Well, that was a pretty drastic move, now, wasn't it. And all this after the scary warnings in MSDN that TerminateThread is a bad function that should be avoided!

    Wait, it gets worse.

    Some of those threads that got forcibly terminated may have owned critical sections, mutexes, home-grown synchronization primitives (such as spin-locks), all those things that the one remaining thread might need access to during its DLL_PROCESS_DETACH handling. Well, mutexes are sort of covered; if you try to enter that mutex, you'll get the mysterious WAIT_ABANDONED return code which tells you that "Uh-oh, things are kind of messed up."

    What about critical sections? There is no "Uh-oh" return value for critical sections; EnterCriticalSection doesn't have a return value. Instead, the kernel just says "Open season on critical sections!" I get the mental image of all the gates in a parking garage just opening up and letting anybody in and out. [See correction.]

    As for the home-grown stuff, well, you're on your own.

    This means that if your code happened to have owned a critical section at the time somebody called ExitProcess, the data structure the critical section is protecting has a good chance of being in an inconsistent state. (After all, if it were consistent, you probably would have exited the critical section! Well, assuming you entered the critical section because you were updating the structure as opposed to reading it.) Your DLL_PROCESS_DETACH code runs, enters the critical section, and it succeeds because "all the gates are up". Now your DLL_PROCESS_DETACH code starts behaving erratically because the values in that data structure are inconsistent.

    Oh dear, now you have a pretty ugly mess on your hands.

    And if your thread was terminated while it owned a spin-lock or some other home-grown synchronization object, your DLL_PROCESS_DETACH will most likely simply hang indefinitely waiting patiently for that terminated thread to release the spin-lock (which it never will do).

    But wait, it gets worse. That critical section might have been the one that protects the process heap! If one of the threads that got terminated happened to be in the middle of a heap function like HeapAlloc or LocalFree, then the process heap may very well be inconsistent. If your DLL_PROCESS_DETACH tries to allocate or free memory, it may crash due to a corrupted heap.

    Moral of the story: If you're getting a DLL_PROCESS_DETACH due to process termination,† don't try anything clever. Just return without doing anything and let the normal process clean-up happen. The kernel will close all your open handles to kernel objects. Any memory you allocated will be freed automatically when the process's address space is torn down. Just let the process die a quiet death.

    Note that if you were a good boy and cleaned up all the threads in the process before calling ExitProcess, then you've escaped all this craziness, since there is nothing to clean up.

    Note also that if you're getting a DLL_PROCESS_DETACH due to dynamic unloading, then you do need to clean up your kernel objects and allocated memory because the process is going to continue running. But on the other hand, in the case of dynamic unloading, no other threads should be executing code in your DLL anyway (since you're about to be unloaded), so—assuming you coded up your DLL correctly—none of your critical sections should be held and your data structures should be consistent.
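
    The standard way to tell the two cases apart is the lpvReserved parameter of DllMain: it is non-NULL for process termination and NULL for dynamic unloading. A minimal sketch:

    #include <windows.h>

    BOOL WINAPI DllMain(HINSTANCE hinst, DWORD dwReason, LPVOID lpvReserved)
    {
     if (dwReason == DLL_PROCESS_DETACH)
     {
      if (lpvReserved != NULL)
      {
       // Process termination: don't try anything clever; the kernel
       // will reclaim handles and memory.
       return TRUE;
      }
      // Dynamic unload (FreeLibrary): the process keeps running, so
      // release kernel objects and memory belonging to this DLL here.
     }
     return TRUE;
    }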

    Hang on, this disaster isn't over yet. Even though the kernel went around terminating all but one thread in the process, that doesn't mean that the creation of new threads is blocked. If somebody calls CreateThread in their DLL_PROCESS_DETACH (as crazy as it sounds), the thread will indeed be created and start running! But remember, "all the gates are up", so your critical sections are just window dressing to make you feel good.

    (The ability to create threads after process termination has begun is not a mistake; it's intentional and necessary. Thread injection is how the debugger breaks into a process. If thread injection were not permitted, you wouldn't be able to debug process termination!)

    Next time, we'll see how the way process termination takes place on Windows XP caused not one but two problems.

    Footnotes

    †Everybody reading this article should already know how to determine whether this is the case. I'm assuming you're smart. Don't disappoint me.

  • The Old New Thing

    Psychic debugging: The first step in diagnosing a deadlock is a simple matter of following the money

    • 26 Comments

    Somebody asked our team for help because they believed they hit a deadlock in their program's UI. (It's unclear why they asked our team, but I guess since our team uses the window manager, and their program uses the window manager, we're all in the same boat. You'd think they'd ask the window manager team for help.)

    But it turns out that solving the problem required no special expertise. In fact, you probably know enough to solve it, too.

    Here are the interesting threads:

      0  Id: 980.d30 Suspend: 1 Teb: 7ffdf000 Unfrozen
    ChildEBP RetAddr  
    0023dc90 7745dd8c ntdll!KiFastSystemCallRet 
    0023dc94 774619e0 ntdll!ZwWaitForSingleObject+0xc 
    0023dcf8 774618fb ntdll!RtlpWaitOnCriticalSection+0x154 
    0023dd20 00cd03f2 ntdll!RtlEnterCriticalSection+0x152 
    0023dd38 00cd0635 myapp!LogMsg+0x15 
    0023dd58 00cd0c6a myapp!LogRawIndirect+0x27 
    0023fcb8 00cb64a7 myapp!Log+0x62 
    0023fce8 00cd7598 myapp!SimpleClientConfiguration::Cleanup+0x17 
    0023fcf8 00cd8ffe myapp!MsgProc+0x1a9 
    0023fd10 00cda1a9 myapp!Close+0x43 
    0023fd24 761636d2 myapp!WndProc+0x62 
    0023fd50 7616330c USER32!InternalCallWinProc+0x23 
    0023fdc8 76164030 USER32!UserCallWinProcCheckWow+0x14b 
    0023fe2c 76164088 USER32!DispatchMessageWorker+0x322 
    0023fe3c 00cda3ba USER32!DispatchMessageW+0xf 
    0023fe9c 00cd0273 myapp!GuiMain+0xe8 
    0023feb4 00ccdeca myapp!wWinMain+0x87 
    0023ff48 7735c6fc myapp!__wmainCRTStartup+0x150 
    0023ff54 7742e33f kernel32!BaseThreadInitThunk+0xe 
    0023ff94 00000000 ntdll!_RtlUserThreadStart+0x23 
     
       1  Id: 980.ce8 Suspend: 1 Teb: 7ffdd000 Unfrozen
    ChildEBP RetAddr  
    00f8d550 76162f81 ntdll!KiFastSystemCallRet 
    00f8d554 76162fc4 USER32!NtUserSetWindowLong+0xc 
    00f8d578 76162fe5 USER32!_SetWindowLong+0x131 
    00f8d590 74aa5c2b USER32!SetWindowLongW+0x15 
    00f8d5a4 74aa5b65 comctl32_74a70000!ClearWindowStyle+0x23 
    00f8d5cc 74ca568f comctl32_74a70000!CCSetScrollInfo+0x103 
    00f8d618 76164ea2 uxtheme!ThemeSetScrollInfoProc+0x10e 
    00f8d660 00cdd913 USER32!SetScrollInfo+0x57 
    00f8d694 00cdf0a4 myapp!SetScrollRange+0x3b 
    00f8d6d4 00cdd777 myapp!TextOutputStringColor+0x134 
    00f8d93c 00cd04c4 myapp!TextLogMsgProc+0x3db 
    00f8d960 00cd0635 myapp!LogMsg+0xe7 
    00f8d980 00cd0c6a myapp!LogRawIndirect+0x27 
    00f8f8e0 00cd6367 myapp!Log+0x62 
    00f8faf0 7735c6fc myapp!remote_ext::ServerListenerThread+0x45c 
    00f8fafc 7742e33f kernel32!BaseThreadInitThunk+0xe 
    00f8fb3c 00000000 ntdll!_RtlUserThreadStart+0x23 
    

    The thing about debugging deadlocks is that you usually don't need to understand what's going on. The diagnosis is largely mechanical once you get your foot in the door. (Though sometimes it's hard to get your initial footing.)

    Let's look at thread 0. It is waiting for a critical section. The owner of that critical section is thread 1. How do I know that? Well, I could've debugged it, or I could've used my psychic powers to say, "Gosh, that function is called LogMsg, and look there's another thread that is inside the function LogMsg. I bet that function is using a critical section to ensure that only one thread uses it at a time."

    Okay, so thread 0 is waiting for thread 1. What is thread 1 doing? Well, it entered the critical section back in the LogMsg function, and then it did some text processing and, oh look, it's doing a SetScrollInfo. The SetScrollInfo went into comctl32 and ultimately resulted in a SetWindowLong. The window that the application passed to SetScrollInfo is owned by thread 0. How do I know that? Well, I could've debugged it, or I could've used my psychic powers to say, "Gosh, the change in the scroll info has led to a change in window styles, and the thread is trying to notify the window of the change in style. The window clearly belongs to another thread; otherwise we wouldn't be stuck in the first place, and given that we see only two threads, there isn't much choice as to what other thread it could be!"

    At this point, I think you see the deadlock. Thread 0 is waiting for thread 1 to exit the critical section, but thread 1 is waiting for thread 0 to process the style change message.

    What happened here is that the program sent a message while holding a critical section. Since message handling can trigger hooks and cross-thread activity, you cannot hold any resources when you send a message, because the hook or the message recipient might want to acquire the resource you own, resulting in a deadlock.
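
    The shape of the fix follows directly (a sketch; the names are hypothetical): scope the lock to the shared data and do the window work after releasing it.

    #include <windows.h>

    CRITICAL_SECTION g_logLock; // hypothetical lock guarding the log buffer

    void LogMsg(HWND hwndLog, const wchar_t* msg)
    {
     EnterCriticalSection(&g_logLock);
     // ... append msg to the shared log buffer; no window calls in here ...
     LeaveCriticalSection(&g_logLock);

     // Window work happens outside the lock, because SetScrollInfo can
     // end up sending a message to the thread that owns hwndLog.
     SCROLLINFO si = { sizeof(si) };
     si.fMask = SIF_RANGE;
     si.nMin = 0;
     si.nMax = 100; // illustrative values
     SetScrollInfo(hwndLog, SB_VERT, &si, TRUE);
    }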

  • The Old New Thing

    Things I've written that have amused other people, Episode 4

    • 66 Comments

    One of my colleagues pointed out that my web site is listed in the references section of this whitepaper. It scares me that I'm being used as formal documentation because that is explicitly what this web site isn't. I wrote back,

    I really need to put a disclaimer on my web site.
    FOR ENTERTAINMENT PURPOSES ONLY

    Remember, this is a blog. The opinions (and even some facts) expressed here are those of the author and do not necessarily reflect those of Microsoft Corporation. Nothing I write here creates an obligation on Microsoft or establishes the company's official position on anything. I am not a spokesperson. I'm just this guy who strings people along in the hopes that they might hear a funny story once in a while.

    You'd think this was obvious, but apparently there are people who think that somehow what I write has the weight of official Microsoft policy, who take my sentences apart as if they were legal documents, or who take my articles and declare them to be official statements from Microsoft Corporation.

  • The Old New Thing

    If you can detect the difference between an emulator and the real thing, then the emulator has failed

    • 2 Comments

    Recall that a corrupted program sometimes results in a "Program too big to fit in memory" error. In response, commenter Dog complained that while that may have been a reasonable response back in the 1980s, in today's world, there's plenty of memory around for the MS-DOS emulator to add that extra check and return a better error code.

    Well yeah, but if you change the externally visible behavior, then you've failed as an emulator. The whole point of an emulator is to mimic another world, and any deviations from that other world can come back to bite you.

    MS-DOS is perhaps one of the strongest examples of requiring absolute, unyielding backward compatibility. Hundreds if not thousands of programs scanned memory looking for specific byte sequences inside MS-DOS so they could patch them, or hunted around inside MS-DOS's internal state variables so they could modify them. If you move one thing out of place, those programs stop working.

    MS-DOS contains chunks of "junk DNA", code fragments which do nothing but waste space, but which exist so that programs which go scanning through memory looking for specific byte sequences will find them. (This principle is not dead; there's even some junk DNA in Explorer.)

    Given the extreme compatibility required for MS-DOS emulation, I'm not surprised that the original error behavior was preserved. There is certainly some program out there that stops working if attempting to execute a COM-style image larger than 64KB returns any error other than 8. (Besides, if you wanted it to return some other error code, you had precious few to choose from.)

  • The Old New Thing

    Everybody thinks about CLR objects the wrong way (well not everybody)

    • 34 Comments

    Many people responded to Everybody thinks about garbage collection the wrong way by proposing variations on auto-disposal based on scope.

    What these people fail to recognize is that they are dealing with object references, not objects. (I'm restricting the discussion to reference types, naturally.) In C++, you can put an object in a local variable. In the CLR, you can only put an object reference in a local variable.

    For those who think in terms of C++, imagine if it were impossible to declare instances of C++ classes as local variables on the stack. Instead, you had to declare a local variable that was a pointer to your C++ class, and put the object in the pointer.

    What's no longer possible — objects with automatic storage duration:

    void Function(OtherClass o)
    {
     // No longer possible to declare objects
     // with automatic storage duration
     Color c(0,0,0);
     Brush b(c);
     o.SetBackground(b);
    }

    C#:

    void Function(OtherClass o)
    {
     Color c = new Color(0,0,0);
     Brush b = new Brush(c);
     o.SetBackground(b);
    }

    C++:

    void Function(OtherClass* o)
    {
     Color* c = new Color(0,0,0);
     Brush* b = new Brush(c);
     o->SetBackground(b);
    }

    This world where you can only use pointers to refer to objects is the world of the CLR.

    In the CLR, objects never go out of scope because objects don't have scope.¹ Object references have scope. Objects are alive from the point of construction to the point that the last reference goes out of scope or is otherwise destroyed.

    If objects were auto-disposed when references went out of scope, you'd have all sorts of problems. I will use C++ notation instead of CLR notation to emphasize that we are working with references, not objects. (I can't use actual C++ references since you cannot change the referent of a C++ reference, something that is permitted by the CLR.)

    C#:

    void Function(OtherClass o)
    {
     Color c = new Color(0,0,0);
     Brush b = new Brush(c);
     Brush b2 = b;
     o.SetBackground(b2);
    }

    C++:

    void Function(OtherClass* o)
    {
     Color* c = new Color(0,0,0);
     Brush* b = new Brush(c);
     Brush* b2 = b;
     o->SetBackground(b2);
     // automatic disposal when variables go out of scope
     dispose b2;
     dispose b;
     dispose c;
     dispose o;
    }

    Oops, we just double-disposed the Brush object and probably prematurely disposed the OtherClass object. Fortunately, disposal is idempotent, so the double-disposal is harmless (assuming you actually meant disposal and not destruction). The introduction of b2 was artificial in this example, but you can imagine b2 being, say, the leftover value in a variable at the end of a loop, in which case we just accidentally disposed the last object in an array.

    Let's say there's some attribute you can put on a local variable or parameter to say that you don't want it auto-disposed on scope exit.

    C#:

    void Function([NoAutoDispose] OtherClass o)
    {
     Color c = new Color(0,0,0);
     Brush b = new Brush(c);
     [NoAutoDispose] Brush b2 = b;
     o.SetBackground(b2);
    }

    C++:

    void Function([NoAutoDispose] OtherClass* o)
    {
     Color* c = new Color(0,0,0);
     Brush* b = new Brush(c);
     [NoAutoDispose] Brush* b2 = b;
     o->SetBackground(b2);
     // automatic disposal when variables go out of scope
     dispose b;
     dispose c;
    }

    Okay, that looks good. We disposed the Brush object exactly once and didn't prematurely dispose the OtherClass object that we received as a parameter. (Maybe we could make [NoAutoDispose] the default for parameters to save people a lot of typing.) We're good, right?

    Let's do some trivial code cleanup, like inlining the Color parameter.

    C#:

    void Function([NoAutoDispose] OtherClass o)
    {
     Brush b = new Brush(new Color(0,0,0));
     [NoAutoDispose] Brush b2 = b;
     o.SetBackground(b2);
    }

    C++:

    void Function([NoAutoDispose] OtherClass* o)
    {
     Brush* b = new Brush(new Color(0,0,0));
     [NoAutoDispose] Brush* b2 = b;
     o->SetBackground(b2);
     // automatic disposal when variables go out of scope
     dispose b;
    }

    Whoa, we just introduced a semantic change by what seemed like a harmless transformation: The Color object is no longer auto-disposed. This is even more insidious than the scope of a variable affecting its treatment by anonymous closures, for the introduction of temporary variables to break up a complex expression (or the removal of one-time temporary variables) is a common transformation that people expect to be harmless, especially since many language transformations are expressed in terms of temporary variables. Now you have to remember to tag all of your temporary variables with [NoAutoDispose].

    Wait, we're not done yet. What does SetBackground do?

    C#:

    void OtherClass.SetBackground([NoAutoDispose] Brush b)
    {
     this.background = b;
    }

    C++:

    void OtherClass::SetBackground([NoAutoDispose] Brush* b)
    {
     this->background = b;
    }

    Oops, there is still a reference to that Brush in the o.background member. We disposed an object while there were still outstanding references to it. Now when the OtherClass object tries to use the reference, it will find itself operating on a disposed object.

    Working backward, this means that we should have put a [NoAutoDispose] attribute on the b variable. At this point, it's six of one, a half dozen of the other. Either you put using around all the things that you want auto-disposed or you put [NoAutoDispose] on all the things that you don't.²

    The C++ solution to this problem is to use something like shared_ptr and reference-counted objects, with the assistance of weak_ptr to avoid reference cycles, and being very selective about which objects are allocated with automatic storage duration. Sure, you could try to bring this model of programming to the CLR, but now you're just trying to pick all the cheese off your cheeseburger and intentionally going against the automatic memory management design principles of the CLR.
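
    For comparison, a minimal sketch of that C++ model (the Node type is invented for illustration):

    #include <memory>

    struct Node
    {
     std::shared_ptr<Node> next; // owning reference
     std::weak_ptr<Node> prev;   // non-owning back-reference
    };

    int main()
    {
     auto first = std::make_shared<Node>();
     auto second = std::make_shared<Node>();
     first->next = second; // first owns second
     second->prev = first; // second observes first without owning it
     // Both Nodes are destroyed when first and second go out of scope;
     // the weak_ptr back-reference breaks the would-be reference cycle.
    }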

    I was sort of assuming that since you're here for CLR Week, you're one of those people who actively chose to use the CLR and want to use it in the manner in which it was intended, rather than somebody who wants it to work like C++. If you want C++, you know where to find it.

    Footnotes

    ¹ Or at least don't have scope in the sense we're discussing here.

    ² As for an attribute for specific classes to have auto-dispose behavior, that works only if all references to auto-dispose objects are in the context of a create/dispose pattern. References to auto-dispose objects outside of the create/dispose pattern would need to be tagged with the [NoAutoDispose] attribute.

    [AutoDispose] class Stream { ... };
    
    Stream MyClass.GetSaveStream()
    {
     [NoAutoDispose] Stream stm;
     if (saveToFile) {
      stm = ...;
     } else {
      stm = ...;
     }
     return stm;
    }
    
    void MyClass.Save()
    {
     // NB! do not combine into one line
     Stream stm = GetSaveStream();
     SaveToStream(stm);
    }
    