August, 2010

  • The Old New Thing

    Everybody thinks about CLR objects the wrong way (well not everybody)

    • 34 Comments

    Many people responded to "Everybody thinks about garbage collection the wrong way" by proposing variations on auto-disposal based on scope.

    What these people fail to recognize is that they are dealing with object references, not objects. (I'm restricting the discussion to reference types, naturally.) In C++, you can put an object in a local variable. In the CLR, you can only put an object reference in a local variable.

    For those who think in terms of C++, imagine if it were impossible to declare instances of C++ classes as local variables on the stack. Instead, you had to declare a local variable that was a pointer to your C++ class, and put the object in the pointer.

    C#:

    void Function(OtherClass o)
    {
     Color c = new Color(0,0,0);
     Brush b = new Brush(c);
     o.SetBackground(b);
    }

    C++ (automatic storage duration, no longer possible in this exercise):

    void Function(OtherClass o)
    {
     // No longer possible to declare objects
     // with automatic storage duration
     Color c(0,0,0);
     Brush b(c);
     o.SetBackground(b);
    }

    C++ (pointers only):

    void Function(OtherClass* o)
    {
     Color* c = new Color(0,0,0);
     Brush* b = new Brush(c);
     o->SetBackground(b);
    }

    This world where you can only use pointers to refer to objects is the world of the CLR.

    In the CLR, objects never go out of scope because objects don't have scope.¹ Object references have scope. Objects are alive from the point of construction to the point that the last reference goes out of scope or is otherwise destroyed.

    If objects were auto-disposed when references went out of scope, you'd have all sorts of problems. I will use C++ notation instead of CLR notation to emphasize that we are working with references, not objects. (I can't use actual C++ references since you cannot change the referent of a C++ reference, something that is permitted by the CLR.)

    C#:

    void Function(OtherClass o)
    {
     Color c = new Color(0,0,0);
     Brush b = new Brush(c);
     Brush b2 = b;
     o.SetBackground(b2);
    }

    C++:

    void Function(OtherClass* o)
    {
     Color* c = new Color(0,0,0);
     Brush* b = new Brush(c);
     Brush* b2 = b;
     o->SetBackground(b2);
     // automatic disposal when variables go out of scope
     dispose b2;
     dispose b;
     dispose c;
     dispose o;
    }

    Oops, we just double-disposed the Brush object and probably prematurely disposed the OtherClass object. Fortunately, disposal is idempotent, so the double-disposal is harmless (assuming you actually meant disposal and not destruction). The introduction of b2 was artificial in this example, but you can imagine b2 being, say, the leftover value in a variable at the end of a loop, in which case we just accidentally disposed the last object in an array.

    Let's say there's some attribute you can put on a local variable or parameter to say that you don't want it auto-disposed on scope exit.

    C#:

    void Function([NoAutoDispose] OtherClass o)
    {
     Color c = new Color(0,0,0);
     Brush b = new Brush(c);
     [NoAutoDispose] Brush b2 = b;
     o.SetBackground(b2);
    }

    C++:

    void Function([NoAutoDispose] OtherClass* o)
    {
     Color* c = new Color(0,0,0);
     Brush* b = new Brush(c);
     [NoAutoDispose] Brush* b2 = b;
     o->SetBackground(b2);
     // automatic disposal when variables go out of scope
     dispose b;
     dispose c;
    }

    Okay, that looks good. We disposed the Brush object exactly once and didn't prematurely dispose the OtherClass object that we received as a parameter. (Maybe we could make [NoAutoDispose] the default for parameters to save people a lot of typing.) We're good, right?

    Let's do some trivial code cleanup, like inlining the Color parameter.

    C#:

    void Function([NoAutoDispose] OtherClass o)
    {
     Brush b = new Brush(new Color(0,0,0));
     [NoAutoDispose] Brush b2 = b;
     o.SetBackground(b2);
    }

    C++:

    void Function([NoAutoDispose] OtherClass* o)
    {
     Brush* b = new Brush(new Color(0,0,0));
     [NoAutoDispose] Brush* b2 = b;
     o->SetBackground(b2);
     // automatic disposal when variables go out of scope
     dispose b;
    }

    Whoa, we just introduced a semantic change with what seemed like a harmless transformation: The Color object is no longer auto-disposed. This is even more insidious than the scope of a variable affecting its treatment by anonymous closures, because introducing a temporary variable to break up a complex expression (or removing a one-time temporary variable) is a transformation people expect to be harmless, especially since many language transformations are themselves expressed in terms of temporary variables. Now you have to remember to tag all of your temporary variables with [NoAutoDispose].

    Wait, we're not done yet. What does SetBackground do?

    C#:

    void OtherClass.SetBackground([NoAutoDispose] Brush b)
    {
     this.background = b;
    }

    C++:

    void OtherClass::SetBackground([NoAutoDispose] Brush* b)
    {
     this->background = b;
    }

    Oops, there is still a reference to that Brush in the o.background member. We disposed an object while there were still outstanding references to it. Now when the OtherClass object tries to use the reference, it will find itself operating on a disposed object.

    Working backward, this means that we should have put a [NoAutoDispose] attribute on the b variable. At this point, it's six of one, a half dozen of the other. Either you put using around all the things that you want auto-disposed or you put [NoAutoDispose] on all the things that you don't.²
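    (For comparison, here is a minimal sketch of the opt-in using idiom the CLR actually provides, with a FileStream standing in as the disposable resource; the Save helper itself is hypothetical.)

    // assumes "using System.IO;" at the top of the file
    void Save(string path, byte[] data)
    {
     // Opt-in disposal: only the reference wrapped in "using" is disposed,
     // and only when control leaves the block.
     using (FileStream stream = new FileStream(path, FileMode.Create))
     {
      stream.Write(data, 0, data.Length);
     } // stream.Dispose() runs here; nothing else is touched
    }

    Nothing is disposed behind your back: a reference you merely pass along (like the Brush handed to SetBackground) is unaffected unless you explicitly wrap it in a using block.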

    The C++ solution to this problem is to use something like shared_ptr and reference-counted objects, with the assistance of weak_ptr to avoid reference cycles, and being very selective about which objects are allocated with automatic storage duration. Sure, you could try to bring this model of programming to the CLR, but now you're just trying to pick all the cheese off your cheeseburger and intentionally going against the automatic memory management design principles of the CLR.

    I was sort of assuming that since you're here for CLR Week, you're one of those people who actively chose to use the CLR and want to use it in the manner in which it was intended, rather than somebody who wants it to work like C++. If you want C++, you know where to find it.

    Footnotes

    ¹ Or at least don't have scope in the sense we're discussing here.

    ² As for an attribute for specific classes to have auto-dispose behavior, that works only if all references to auto-dispose objects are in the context of a create/dispose pattern. References to auto-dispose objects outside of the create/dispose pattern would need to be tagged with the [NoAutoDispose] attribute.

    [AutoDispose] class Stream { ... };
    
    Stream MyClass.GetSaveStream()
    {
     [NoAutoDispose] Stream stm;
     if (saveToFile) {
      stm = ...;
     } else {
      stm = ...;
     }
     return stm;
    }
    
    void MyClass.Save()
    {
     // NB! do not combine into one line
     Stream stm = GetSaveStream();
     SaveToStream(stm);
    }
    
  • The Old New Thing

    When does an object become available for garbage collection?

    • 13 Comments

    As we saw last time, garbage collection is a method for simulating an infinite amount of memory in a finite amount of memory. This simulation is performed by reclaiming memory once the environment can determine that the program wouldn't notice that the memory was reclaimed. There are a variety of mechanisms for determining this. In a basic tracing collector, this determination is made by taking the objects to which the program has definite references, then tracing references from those objects, continuing transitively until all accessible objects are found. But what looks like a definite reference in your code may not actually be a definite reference in the virtual machine: Just because a variable is in scope doesn't mean that it is live.

    class SomeClass {
     ...
     string SomeMethod(string s, bool reformulate)
     {
      OtherClass o = new OtherClass(s);
      string result = Frob(o);
      if (reformulate) Reformulate();
      return result;
     }
    }
    

    For the purpose of this discussion, assume that the Frob method does not retain a reference to the object o passed as a parameter. When does the OtherClass object o become eligible for collection? A naïve answer would be that it becomes eligible for collection at the closing-brace of the SomeMethod method, since that's when the last reference (in the variable o) goes out of scope.

    A less naïve answer would be that it becomes eligible for collection after the return value from Frob is stored to the local variable result, because that's the last line of code which uses the variable o.

    A closer study would show that it becomes eligible for collection even sooner: Once the call to Frob returns, the variable o is no longer accessed, so the object could be collected even before the result of the call to Frob is stored into the local variable result. Optimizing compilers have known this for quite some time, and there is a strong likelihood that the variables o and result will occupy the same memory since their lifetimes do not overlap. Under such conditions, the code generation for the statement could very well be something like this:

      mov ecx, esi        ; load "this" pointer into ecx register
      mov edx, [ebp-8]    ; load parameter ("o") into edx register
      call SomeClass.Frob ; call method
      mov [ebp-8], eax    ; re-use memory for "o" as "result"
    

    But this closer study wasn't close enough. The OtherClass object o becomes eligible for collection even before the call to Frob returns! It is certainly eligible for collection at the point of the ret instruction which ends the Frob function: At that point, Frob has finished using the object and won't access it again. Although somewhat of a technicality, it does illustrate that

    An object in a block of code can become eligible for collection during execution of a function it called.

    But let's dig deeper. Suppose that Frob looked like this:

    string Frob(OtherClass o)
    {
     string result = FrobColor(o.GetEffectiveColor());
     return result;
    }
    

    When does the OtherClass object become eligible for collection? We saw above that it is certainly eligible for collection as soon as FrobColor returns, because the Frob method doesn't use o any more after that point. But in fact it is eligible for collection when the call to GetEffectiveColor returns—even before the FrobColor method is called—because the Frob method doesn't use it once it gets the effective color. This illustrates that

    A parameter to a method can become eligible for collection while the method is still executing.

    But wait, is that the earliest the OtherClass object becomes eligible for collection? Suppose that the OtherClass.GetEffectiveColor method went like this:

    Color GetEffectiveColor()
    {
     Color color = this.Color;
     for (OtherClass o = this.Parent; o != null; o = o.Parent) {
      color = BlendColors(color, o.Color);
     }
     return color;
    }
    

    Notice that the method doesn't access any members from its this pointer after the assignment o = this.Parent. As soon as the method retrieves the object's parent, the object isn't used any more.

      push ebp                    ; establish stack frame
      mov ebp, esp
      push esi
      push edi
      mov esi, ecx                ; enregister "this"
      mov edi, [ecx].color        ; color = this.Color // inlined
      jmp looptest
    loop:
      mov ecx, edi                ; load first parameter ("color")
      mov edx, [esi].color        ; load second parameter ("o.Color")
      call OtherClass.BlendColors ; BlendColors(color, o.Color)
      mov edi, eax
    looptest:
      mov esi, [esi].parent       ; o = this.Parent (or o.Parent) // inlined
      // "this" is now eligible for collection
      test esi, esi               ; if o != null
      jnz loop                    ; then restart loop
      mov eax, edi                ; return value
      pop edi
      pop esi
      pop ebp
      ret
    

    The last thing we ever do with the OtherClass object (represented in the GetEffectiveColor function by the keyword this) is fetch its parent. As soon as that's done (indicated at the point of the comment, when the line is reached for the first time), the object becomes eligible for collection. This illustrates the perhaps-surprising result that

    An object can become eligible for collection during execution of a method on that very object.

    In other words, it is possible for a method to have its this object collected out from under it!
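    To make that concrete, here is a small illustrative sketch (not from the original discussion; the Node class is hypothetical, and whether the finalizer actually fires early depends on the JIT, the build configuration, and whether a debugger is attached):

    using System;

    class Node
    {
     public Node Parent; // null for a root node

     ~Node() { Console.WriteLine("Node finalized"); }

     public void Walk()
     {
      Node parent = this.Parent; // last time "this" is used

      // In an optimized, non-debugger run, "this" can be eligible for
      // collection from here on, so the finalizer may run before the
      // method finishes.
      GC.Collect();
      GC.WaitForPendingFinalizers();

      Console.WriteLine(parent == null ? "root node" : "has a parent");
     }

     static void Main()
     {
      new Node().Walk(); // nobody else holds a reference to the Node
     }
    }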

    A crazy way of thinking of when an object becomes eligible for collection is that it happens once memory corruption in the object would have no effect on the program. (Or, if the object has a finalizer, that memory corruption would affect only the finalizer.) Because if memory corruption would have no effect, then that means you never use the values any more, which means that the memory may as well have been reclaimed out from under you for all you know.

    A weird real-world analogy: The garbage collector can collect your diving board as soon as the diver touches it for the last time—even if the diver is still in the air!

    A customer encountered the CallGCKeepAliveWhenUsingNativeResources FxCop rule and didn't understand how it was possible for the GC to collect an object while one of its methods was still running. "Isn't this part of the root set?"

    Asking whether any particular value is or is not part of the root set is confusing the definition of garbage collection with its implementation. "Don't think of GC as tracing roots. Think of GC as removing things you aren't using any more."

    The customer responded, "Yes, I understand conceptually that it becomes eligible for collection, but how does the garbage collector know that? How does it know that the this object is not used any more? Isn't that determined by tracing from the root set?"

    Remember, the GC is in cahoots with the JIT. The JIT might decide to "help out" the GC by reusing the stack slot which previously held an object reference, leaving no reference on the stack and therefore no reference in the root set. Even if the reference is left on the stack, the JIT can leave some metadata behind that tells the GC, "If you see the instruction pointer in this range, then ignore the reference in this slot since it's a dead variable," similar to how in unmanaged code on non-x86 platforms, metadata is used to guide structured exception handling. (And besides, the this parameter isn't even passed on the stack in the first place.)

    The customer replied, "Gotcha. Very cool."
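    (For the curious, the pattern that the FxCop rule above is guarding against looks roughly like the following sketch. The Widget class and the NativeMethods helpers are hypothetical; GC.KeepAlive is the real System.GC method.)

    class Widget
    {
     IntPtr nativeHandle; // native resource released by the finalizer

     public void Poke()
     {
      // Once nativeHandle has been read, the GC has no further need for "this"...
      NativeMethods.PokeWidget(nativeHandle);

      // ...so without this call, the Widget could be finalized (and the
      // handle destroyed) while PokeWidget is still using it.
      GC.KeepAlive(this);
     }

     ~Widget() { NativeMethods.DestroyWidget(nativeHandle); }
    }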

    Another customer asked, "Is there a way to get a reference to the instance being called for each frame in the stack? (Static methods excepted, of course.)" A different customer asked roughly the same question, but in a different context: "I want my method to walk up the stack, and if its caller is OtherClass.Foo, I want to get the this object for OtherClass.Foo so I can query additional properties from it." You now know enough to answer these questions yourself.

    Bonus: A different customer asked, "The StackFrame object lets me get the method that is executing in the stack frame, but how do I get the parameters passed to that method?"

  • The Old New Thing

    Everybody thinks about garbage collection the wrong way

    • 89 Comments

    Welcome to CLR Week 2010. This year, CLR Week is going to be more philosophical than usual.

    When you ask somebody what garbage collection is, the answer you get is probably going to be something along the lines of "Garbage collection is when the operating environment automatically reclaims memory that is no longer being used by the program. It does this by tracing memory starting from roots to identify which objects are accessible."

    This description confuses the mechanism with the goal. It's like saying the job of a firefighter is "driving a red truck and spraying water." That's a description of what a firefighter does, but it misses the point of the job (namely, putting out fires and, more generally, fire safety).

    Garbage collection is simulating a computer with an infinite amount of memory. The rest is mechanism. And naturally, the mechanism is "reclaiming memory that the program wouldn't notice went missing." It's one giant application of the as-if rule.¹

    Now, with this view of the true definition of garbage collection, one result immediately follows:

    If the amount of RAM available to the runtime is greater than the amount of memory required by a program, then a memory manager which employs the null garbage collector (which never collects anything) is a valid memory manager.

    This is true because the memory manager can just allocate more RAM whenever the program needs it, and by assumption, this allocation will always succeed. A computer with more RAM than the memory requirements of a program has effectively infinite RAM, and therefore no simulation is needed.

    Sure, the statement may be obvious, but it's also useful, because the null garbage collector is very easy to analyze yet very different from the garbage collectors you're more accustomed to seeing. You can therefore use it to produce results like this:

    A correctly-written program cannot assume that finalizers will ever run at any point prior to program termination.

    The proof of this is simple: Run the program on a machine with more RAM than the amount of memory required by the program. Under these circumstances, the null garbage collector is a valid garbage collector, and the null garbage collector never runs finalizers since it never collects anything.

    Garbage collection simulates infinite memory, but there are things you can do even if you have infinite memory that have visible effects on other programs (and possibly even on your program). If you open a file in exclusive mode, then the file will not be accessible to other programs (or even to other parts of your own program) until you close it. A connection that you open to a SQL server consumes resources in the server until you close it. Have too many of these connections outstanding, and you may run into a connection limit which blocks further connections. If you don't explicitly close these resources, then when your program is run on a machine with "infinite" memory, those resources will accumulate and never be released.

    What this means for you: Your programs cannot rely on finalizers keeping things tidy. Finalizers are a safety net, not a primary means for resource reclamation. When you are finished with a resource, you need to release it by calling Close or Disconnect or whatever cleanup method is available on the object. (The IDisposable interface codifies this convention.)
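    (As a refresher, here is a minimal sketch of what that convention looks like in practice. The FileHandleOwner class and its ReleaseHandle helper are hypothetical; the IDisposable-plus-finalizer shape is the standard pattern.)

    class FileHandleOwner : IDisposable
    {
     IntPtr handle; // some external resource (hypothetical)
     bool disposed;

     public void Dispose()
     {
      if (!disposed)
      {
       ReleaseHandle(handle); // primary, deterministic cleanup
       disposed = true;
      }
      GC.SuppressFinalize(this); // the safety net is no longer needed
     }

     ~FileHandleOwner()
     {
      // Safety net only: there is no guarantee this will ever run.
      if (!disposed) ReleaseHandle(handle);
     }

     void ReleaseHandle(IntPtr h) { /* free the external resource (hypothetical) */ }
    }

    Callers then wrap instances in using blocks (or call Dispose explicitly) rather than counting on the finalizer.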

    Furthermore, it turns out that not only can a correctly-written program not assume that finalizers will run during the execution of a program, it cannot even assume that finalizers will run when the program terminates: Although the .NET Framework will try to run them all, a bad finalizer will cause the .NET Framework to give up and abandon running finalizers. This can happen through no fault of your own: There might be a handle to a network resource that the finalizer is trying to release, but network connectivity problems result in the operation taking longer than two seconds, at which point the .NET Framework will just terminate the process. Therefore, the above result can be strengthened in the specific case of the .NET Framework:

    A correctly-written program cannot assume that finalizers will ever run.

    Armed with this knowledge, you can solve this customer's problem. (Confusing terminology is preserved from the original.)

    I have a class that uses XmlDocument. After the class is out of scope, I want to delete the file, but I get the exception System.IO.Exception: The process cannot access the file 'C:\path\to\file.xml' because it is being used by another process. Once the program exits, then the lock goes away. Is there any way to avoid locking the file?

    This follow-up might or might not help:

    A colleague suggested setting the XmlDocument variables to null when we're done with them, but shouldn't leaving the class scope have the same behavior?

    Bonus chatter: Finalizers are weird, since they operate "behind the GC." There are also lots of classes which operate "at the GC level", such as WeakReference, GCHandle, and of course System.GC itself. Using these classes properly requires understanding how they interact with the GC. We'll see more on this later.
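    (As a tiny, hedged taste of what "at the GC level" means: a WeakReference does not keep its target alive, so after a collection the target may simply be gone.)

    var data = new byte[1024];
    var weak = new WeakReference(data);

    data = null;  // drop the only strong reference
    GC.Collect(); // force a collection purely for illustration

    Console.WriteLine(weak.IsAlive
     ? "target is still reachable"
     : "target was collected"); // typically the latter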

    Unrelated reading: Precedence vs. Associativity Vs. Order.

    Footnote

    ¹ Note that by definition, the simulation extends only to garbage-collected resources. If your program allocates external resources, those external resources remain subject to whatever rules apply to them.

  • The Old New Thing

    Some known folders cannot be moved, but others can, and you'll just have to accept that

    • 26 Comments

    Locations in the shell known folder system can be marked as KF_CATEGORY_FIXED, which makes them immovable. Conversely, if a file system folder is not immovable, then it can be moved.

    This dichotomy appears simple and unworthy of discussion, except that customers sometimes have trouble incorporating this concept into their world view.

    I have some code that calls SHSetFolderPath, and it works for most of the folders I'm interested in, but for some CSIDL values like CSIDL_COMMON_APPDATA, it fails with E_INVALIDARG. Doesn't matter if I run elevated or not. What am I doing wrong?

    The difference is that CSIDL_COMMON_APPDATA (known under the New World Order as FOLDERID_ProgramData) is marked as KF_CATEGORY_FIXED so it cannot be moved.

    Is there a way that we can override the KF_CATEGORY_FIXED flag and move it anyway?

    Nope. It's immovable. Sorry. You'll just have to accept that that folder will not go where you want it.

    The very next day, we got a complementary question from an unrelated customer:

    We have a program that monitors a known folder, and it goes haywire if the user changes the location of the folder while it's being monitored. Is there a way to prevent the user from moving the folder?

    If a known folder can be moved, then you have to accept that it can be moved. You can't override its category and force it to be KF_CATEGORY_FIXED just because your life would be easier if it were.

    I found it interesting that we got two requests on consecutive days asking for what appear to be opposite things: "I want to force this folder to be movable" and "I want to force this folder to be immovable." I can only imagine what would happen if both programs are running at the same time!

    What the program can do is register an IFileIsInUse on the directory so that it will be called when somebody wants to move it. At least that way it knows when scary things are about to happen and can prepare itself for the changes that lie ahead. I'm told that a sample program illustrating IFileIsInUse is in the Windows 7 SDK under the directory winui\Shell\AppPlatform\FileIsInUse. There's also an old article on the subject over on the now-defunct Shell Revealed blog.

  • The Old New Thing

    Raymond misreads flyers, episode 2: It Takes You

    • 17 Comments

    As part of a new phase in Microsoft's continuing recycling efforts, the recycling program got a new motto. The new motto was not announced with any fanfare. Rather, any recycling-related announcement had the new motto at the top as part of the letterhead.

    The new motto: It Takes You.

    I had trouble parsing this at first. To me, it sounded like the punch line to a Yakov Smirnoff joke.

    Episode 1.

  • The Old New Thing

    How many failure reports does a bug have to get before Windows will fix it?

    • 79 Comments

    When a program crashes or hangs, you are given the opportunity to send an error report to Microsoft. This information is collected by the Windows Error Reporting team for analysis. Occasionally, somebody will ask, "How many failures need to be recorded before the bug is fixed? A thousand? Ten thousand? Ten million?"

    Each team at Microsoft takes very seriously the failures that come in via Windows Error Reporting. Since failures are not uniformly distributed, and since engineering resources are not infinite, you have to dedicate your limited resources to where they will have the greatest effect. Each failure that is collected and reported is basically a vote for that failure. The more times a failure is reported, the higher up the chart it moves, and the more likely it will make the cut. In practice, 50% of the crashes are caused by 1% of the bugs. Obviously you want to focus on that top 1%.

    It's like asking how many votes are necessary to win American Idol. There is no magic number that guarantees victory; it's all relative. As my colleague Paul Betts describes it, "It's like a popularity contest, but of failure."

    Depending on the component, it may take only a few hundred reports to make a failure reach the upper part of the failure charts, or it may take hundreds of thousands. And if the failure has been tracked to a third-party plug-in (which is not always obvious in the crash itself and may require analysis to ferret out), then the information is passed along to the plug-in vendor.

    What about failures in third-party programs? Sure, send those in, too. The votes are still tallied, and if the company has signed up to access Windows Error Reporting data, they can see which failures in their programs are causing the most problems. Signing up for Windows Error Reporting is a requirement for the Windows 7 software logo program. I've also heard but cannot confirm that part of the deal is that if your failure rate reaches some threshold, your logo certification is at risk of revocation.

    If the vendor signed up for Windows Error Reporting but is a slacker and isn't looking at their failures, there's a chance that Microsoft will look at them. Towards the end of the Windows 7 project, the compatibility team looked at the failure data coming from beta releases of Windows 7 to identify the highest-ranked failures in third-party programs. The list was then filtered to companies which participated in Windows Error Reporting, and to failures which the vendor had not spent any time investigating. I was one of a group of people asked to study those crashes and basically debug the vendor's program for them.

    My TechReady talk basically consisted of going through a few of these investigations. I may clean some of them up for public consumption and post them here, but it's a lot of work because I'd have to write a brand new program that exhibits the bug, force the bug to trigger (often involving race conditions), then debug the resulting crash dump.

    I know there is a subculture of people who turn off error reporting, probably out of some sense of paranoia, but these people are ultimately just throwing their vote away. Since they aren't reporting the failures, their feedback doesn't make it back into the failure databases, and their vote for "fix this bug" never gets counted.

    By the way, the method for disabling Windows Error Reporting given in that linked article is probably not the best. You should use the settings in the Windows Error Reporting node in the Group Policy Editor.

    I should've seen this coming: Over time, I've discovered that there are some hot-button topics that derail any article that mentions them. Examples include UAC, DRM, and digital certificate authorities. As a result, I do my best to avoid any mention of them. I didn't mention digital certificate authorities today, but I did link to an article which mentioned them, and now the subject has overrun the comments like kudzu. I don't need to deal with this nonsense, so I'm just going to kill my promised future article that was related to Windows Error Reporting. (I was also planning on converting my talk on debugging application compatibility issues into future articles, but since that's also related to Windows Error Reporting, I'm going to abandon that too.)

  • The Old New Thing

    Don't forget to replace your placeholder bitmaps with real bitmaps

    • 23 Comments

    The story We Burned the Poop reminded me of an embarrassing story a colleague of mine related from earlier in his programming career.

    During the development of the product he was working on, the programmers needed an image for a comparatively rarely-used piece of the user interface. Since programmers aren't graphic designers, they inserted a placeholder bitmap which would be used until a real image arrived. And since programmers are nerds, they used a picture of a television character who was popular at the time.

    The testers naturally ran the program through its paces, and when that piece of the user interface appeared with the placeholder bitmap, the testers smiled a little.

    Time passed, and people became quite accustomed to seeing that television character's face appear when they exercised that little corner of the program. An oddly appropriate face to alert you of an unusual condition.

    And then the project reached its completion, and the master CD was sent off to the factory for mass duplication.

    Wait, what about that picture of the television character?

    When asked why they signed off on testing with a placeholder bitmap, the testers explained, "Oh, we thought that was an artistic decision. The character's personality sort of made sense in that part of the program." Since the image appeared in a comparatively rarely-used corner of the program, it never appeared during product demonstrations.

    Project management contacted the producers of the television program to see whether they could just license the character's face and avoid having to destroy all of the CDs that had been pressed so far.

    The producers responded politely, "Thank you for your interest in our television character. Our licensing fees begin at $big-number, plus an additional royalty which varies depending on the nature of the usage, but it begins at $another-big-number per unit."

    That amount was far too much, and management had to choose the cheaper of two expensive options: Delivering a new master CD to the factory and telling them to destroy all the copies they had already pressed.

  • The Old New Thing

    A brief conversation while preparing to hike along the Pacific coast

    • 4 Comments

    Many years ago, one of my colleagues (who happens to be an avid outdoorsy person with a dry sense of humor) assembled a small group of people to take a long weekend hike along the Cape Alava Trail and then north along the Pacific coast. As we gathered in the parking lot midday Friday with our backpacking gear, another of our colleagues (whom I will call "Mary") walked past, and the following conversation ensued:

    Mary: "Hey, where ya goin'? (joking) That way I can call Search and Rescue if I don't see you on Monday."

    Bob: "We're heading out to Lake Ozette."

    Mary: "Cool, where's Lake Ozette?"

    Bob: "That's okay, Search and Rescue knows where it is."

  • The Old New Thing

    Did I know where the Novell short file name behavior came from?

    • 11 Comments

    Commenter Yuhong Bao asks, "Do you know that the Novell behavior described in "Not all short filenames contain a tilde" came from HPFS?"

    Yes, but it was not relevant to the discussion so I didn't bother mentioning it. Going into more detail on the history of the Novell "long namespace" would just encourage people to conclude, "This problem affects only Novell, and since I don't use Novell, I don't have to worry about this."
