Why are unused using directives not a warning?

As I’ve discussed before, we try to reserve warnings for only those situations where we can say with almost certainty that the code is broken, misleading or useless. One reason for trying to ensure that warnings are not “false positives” is that we don’t ever want to encourage someone to take working, correct code and break it so as to remove the warning. Another is that since many people compile with “warnings are errors” turned on, we do not want to introduce a whole lot of unnecessary build breakages should the warning turn out to be unwarranted.

An unused using directive is almost certainly not “broken”, and it’s probably not misleading. It certainly is useless though. Why not produce a warning for this, so that you can remove the useless directive from your program?

Suppose we made unused using directives into warnings. Suppose you do have “warnings are errors” turned on. And suppose you create a new console project in Visual Studio. We generate this code for you:

using System;
using System.Text;
using System.Linq;

class Program
{
  static void Main(string[] arguments)
  {
  }
}

… which doesn’t compile. Which is better, generating code that doesn’t compile, or omitting the using directives and making you type "using System;" the first time you try to use Console.WriteLine?

Either way is a really bad user experience. So, no such feature.

  • …and lest people misunderstand, the code example in my previous comment is not a real-world example.  It's a dramatically simplified version of a real-world example, for the sole purpose of illustrating a point.  Obviously, if you're just trying to convert a string to an int where a failure to convert might happen, there are much better ways to do that.

  • Yep, some of those things are annoying.

    My only rant, since we're off-topic, is Visual Studio's AutoEventWireup. It's always the cause of a 'why is this firing twice?' or 'why is this not firing?' head-scratcher, and it's a real time waster.

    An on-topic comment is that I'm pretty happy that unused usings are not an error/warning. ReSharper is a cheap and simple solution.

    And the default usings are a bit annoying. I remember Web Forms and System.Drawing.Image versus System.Web.UI.WebControls.Image - oh, how ambiguous names annoy me. It doesn't help that a dev on our team made their own Image entity that is used practically everywhere. Namespaces are great, but reusing common names is a recipe for pain. Thanks, dude!

  • So... what's the reason this type of thing can't go in the 'Messages' window? That window almost never gets used, and I can't seem to find any way to actually make use of it. I'd love to be able to leave notes/non-error warnings there - or, better yet, the TODO list, or a list of methods that throw NotImplementedException - instead of having to manage a separate tab for those cases.

  • @Pavel:

    > Now, personally, I never really understood the need for this two-stage pattern in the first place. It's not like derived classes can't attach event handlers to parent's events... I suspect it was put in place for the sake of  optimization.

    I do not view this as an optimization pattern - I view it as the template method pattern. Having a method that is always called *before* and/or *after* event handlers are raised can be very useful. Event handlers are not promised to be raised in any particular order, so there's no way to make yourself the first or last subscriber. Besides, I always find it awkward to see a class that subscribes to events that it (or its base class) itself raises.
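
    For instance, a minimal sketch of that pattern (the type names here are invented for illustration):

    using System;

    class Publisher
    {
        public event EventHandler Changed;

        // The two-stage pattern: this protected virtual method is the single
        // choke point through which the event is raised.
        protected virtual void OnChanged(EventArgs e)
        {
            Changed?.Invoke(this, e);
        }

        public void Touch() => OnChanged(EventArgs.Empty);
    }

    class LoggingPublisher : Publisher
    {
        protected override void OnChanged(EventArgs e)
        {
            Console.WriteLine("before any handler runs"); // guaranteed to run first
            base.OnChanged(e);
            Console.WriteLine("after all handlers ran");  // guaranteed to run last
        }
    }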

    > I do not view this as an optimization pattern - I view it as the template method pattern. Having a method that is always called *before* and/or *after* event handlers are raised can be very useful. Event handlers are not promised to be raised in any particular order, so there's no way to make yourself the first or last subscriber.

    Actually, yes, they are raised in a particular order. Well, it's up to a specific class to declare its own contract (as, ultimately, it can always define add/remove, as well as raise, itself), but the default implementation of events always raises handlers in the order in which they were added. This is trivially observable when using ref arguments, or when passing in mutable objects of reference types. In fact, the framework itself uses that pattern heavily in the form of the e.Cancel property of various EventArgs subclasses - that one requires a definite ordering of event handlers to be of any use.

    So far, even with custom events, I haven't seen any implementation that didn't guarantee order. And if one doesn't guarantee it for clients, then why should it guarantee it for derived classes? Or, to put it another way - if you want "before" and "after" functionality, then why should it only be available to derived classes, and not to every client of your class? And you can trivially do the latter by providing Before.../After... events, as e.g. WPF does for many of its events.
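
    To make the ordering claim concrete, a sketch (invented names) in which the second handler observes the first handler's write:

    using System;
    using System.ComponentModel;

    class Document
    {
        public event CancelEventHandler Closing;

        public void Close()
        {
            var e = new CancelEventArgs();
            // A field-like event's default invocation runs handlers in the
            // order they were added, so a later handler sees e.Cancel = true.
            Closing?.Invoke(this, e);
            Console.WriteLine(e.Cancel ? "Close cancelled." : "Closing.");
        }
    }

    class Demo
    {
        static void Main()
        {
            var doc = new Document();
            doc.Closing += (s, e) => e.Cancel = true;                           // added first, runs first
            doc.Closing += (s, e) => Console.WriteLine("Cancel = " + e.Cancel); // runs second, prints True
            doc.Close();
        }
    }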

    > Besides, I always find it awkward to see a class that subscribes to events that it (or its base class) itself raises.

    Why? I understand why it could feel awkward to subscribe to your own event (though then again, it can also be clear and concise in some cases - it all depends on the exact case), but what's wrong with subclasses subscribing to the events of the superclass? Events are just part of the API, as methods and properties are; it's no more wrong for a derived class to subscribe to an event on its base class than it is for it to call a method or property on its base class. If anything, this leads to more uniform code, as any class accesses the event functionality in the same way, regardless of whether the event is inherited from the superclass or provided by some random object.

    I can understand it as a matter of personal subjective preference, but that alone isn't a good rationalization for it.

  • > It seems to me that a big reason for the "two-stage pattern" is so that derived classes can take control over the event-raising process, even inhibiting the raising of events altogether if need be, and at the very least injecting their own behavior at a specific point relative to the event raising, rather than being at the mercy of the order of subscription (being first is relatively easy, as long as the constructor is correctly written, but being last is a lot harder, and being both first and last is even harder still!)

    It's an interesting point. I find it ironic, though, that the CLS spec actually defines three standard special methods for events - given event X, there's "add_X" and "remove_X", as you'd expect, but there's also "raise_X". The CLR itself also special-cases all three with the keywords ".addon", ".removeon", and ".fire" - same as properties have ".get" and ".set" (there's also ".custom" for both events and properties). However, neither VB nor C# seems to actually be capable of either generating a ".fire" event method or using one. If they were, it would be more logical for a class that wants its children to be able to intercept the raising of an event to declare said event virtual, and let them override ".fire".

    At the same time, I don't think that providing derived classes with the ability to intercept the raising of all events (or even any events) is a reasonable default. We don't usually allow overriding most methods, so why allow mucking with all events? And, in practice, that's how the pattern is normally used - for _all_ events, regardless of whether it is meaningful (and secure!) to allow subclasses a backdoor or not.

    IMO, the only reasonable default is to treat subclasses as "just another client", with no special privileges. C# does that with non-virtual-by-default already, which the "On..." pattern breaks.

  • "At the same time, I don't think that providing derived classes with the ability to intercept raising of all events (or even any events) is a reasonable defaults. We don't usually allow overriding most methods, why allow mucking with all events?"

    "We" don't.  The main place this is the default is in the Forms namespace, and in related classes (e.g. BackgroundWorker).  And sure, you might argue that's poor design, but that's the design choice of those authoring that particular class library.

    There's nothing in C# that makes it the default, and in fact I generally do _not_ follow the "protected virtual" pattern, specifically because in general, it makes sense to me that event implementation details are just as private as any other implementation detail I might include in a class.  I'm not specifically trying to preclude inheritance; I just feel that even when inheriting a class, only some subset of the implementation details need to be modifiable by the derived classes, and those cases will be determined as the design of the class plays out.

    That said, I think it's safe to assume that at least _some_ thought along those lines was done by those who came up with the Forms API, and the fact that they have followed this "overridable by default" design indicates that in that particular domain, they found that more often it _was_ useful and important to grant that level of access to derived classes.

    That said, note that this is not a universal design approach.  In Java, non-private methods are overridable by default, and I've seen very few class implementations where this has been disabled explicitly by making methods "final".  In general, implementation details meant to be restricted to the given class are simply made "private", and the remainder are (one hopes) implemented in a way that allows effective sub-classing with overrides.  I think that in practice, much less thought is put into the question of designing for inheritance than should be.

    At least in C#, supporting polymorphic inheritance has to be done explicitly, increasing the chances that some real thought will have been put into how best to implement that and provide for features sub-classes will need.

    Based on that, I take it for granted that the model seen in the Forms namespace does in fact have at least some good arguments in favor of it, as a divergence from what would be the nominal default.  And in fact, I have found that for most events I override in a Control subclass, it is useful to be able to follow the same technique for ensuring order of operations, and to have a model where the actions taken by the class itself are distinct from client code outside the class that may have subscribed to the event.  My code is actually much more uniform this way than if I had to create and subscribe a delegate to base class events any time I needed to enhance base class functionality.

    (p.s. I am a bit embarrassed at how far we've gone from the original point of this blog post.  But, obviously not so much that I'm resisting making further off-topic comments. :) )

  • @Pavel, to clarify, your assertion is correct that "yes, [events] are raised in a particular order"; however, I think what @pete.d was intending to convey is that subscribers have no guarantee of ordering, nor should they come to expect a certain order of handler invocation (via delegates).

    The class could make a guarantee, but I've never seen one (ever) - it's just an implementation detail.

    As you say, sometimes it's better to self-subscribe, other times it's better to override.

  • The bigger problem for my company is missing using() statements around database connections and other IDisposable objects that really need them.

    We have found countless places in our code where developers forgot to add using() {} statements around SqlConnections and are effectively leaking them.
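
    A minimal sketch of the fix (the table and method names are hypothetical): the using statements guarantee Dispose runs even if the query throws, so the connection goes back to the pool instead of leaking.

    using System.Data.SqlClient;

    static class OrderQueries
    {
        public static int CountOrders(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
            {
                connection.Open();
                return (int)command.ExecuteScalar();
            }
        }
    }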

  • @Pavel

    I thought of a few reasons for the "protected virtual OnFoo(FooArgs)" pattern. I can't come up with an aha moment, though.

    There should be some way for a derived type to invoke its parent's event.  That supports "protected" and the args.

    There shouldn't be a way for nonderived types to invoke an event.  That rules out "protected internal".

    There shouldn't be a way for nonderived types to inspect the underlying delegate chain. Subscribers can put delegates to nonpublic methods in there; there's an expectation of privacy.  That underscores why the backing delegate is private.

    This is almost more of a mechanical problem than a logical one, but: there's a syntax problem in C#, at least--a logjam around accessibility. An event already has a modifier that is shorthand for its Add and Remove methods.  How would we express accessibility of the event raiser, distinct from the accessibility of add/remove?
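
    To see the logjam concretely, here's a sketch (names invented) of spelling the pattern out by hand: the event's declared accessibility covers add/remove as a pair, and the conventional OnFoo method is where the raiser's separate accessibility gets expressed.

    using System;

    class Publisher
    {
        // The backing delegate stays private, so outsiders can't inspect the chain.
        EventHandler changed;

        // The event's declared accessibility applies to add/remove together...
        public event EventHandler Changed
        {
            add { changed += value; }
            remove { changed -= value; }
        }

        // ...while the raiser gets its own, different accessibility.
        protected virtual void OnChanged(EventArgs e)
        {
            changed?.Invoke(this, e);
        }
    }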

    That covers everything but the reasons for "virtual".  Leo mentioned the template method pattern.  I think I'd call this a hook more than a template--I think template implies both virtual and nonvirtual steps, where the base class performs the invariant steps nonvirtually. In this case, there is no invariant behavior, is there?  Derived classes can suppress the actual raising of the event altogether.  But the larger point stands: derived classes *can* decorate the base behavior.

    If a derived class provides its own Add/Remove, it has to invoke its own local event; the base event is private. So there's an in-for-a-dime-in-for-a-dollar reason for "virtual".

    By making OnFoo virtual, you allow derived types to do a bit of covariance.  Derived classes can read all the data from the FooArgs object and invoke an event that passes a SubFooArgs instead. It's cowboyish but possible. (I'm not coming up with a compelling example why you'd WANT to do this, but...)

    Maybe this is a more realistic motivation: with a virtual OnFoo, you can precede the base class's event with a cancellable pre-event of your own.
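
    For instance (a sketch; both window classes here are hypothetical):

    using System;
    using System.ComponentModel;

    class AppWindow
    {
        public event EventHandler Closed;
        protected virtual void OnClosed(EventArgs e) => Closed?.Invoke(this, e);
        public void Close() => OnClosed(EventArgs.Empty);
    }

    class ConfirmingWindow : AppWindow
    {
        // The override injects a cancellable pre-event and can suppress
        // the base class's event entirely.
        public event CancelEventHandler Closing;

        protected override void OnClosed(EventArgs e)
        {
            var pre = new CancelEventArgs();
            Closing?.Invoke(this, pre);
            if (!pre.Cancel)
                base.OnClosed(e);
        }
    }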

    Okay. That's plenty of offtopic speculation from the likes of me.  I'd still like to know if there's an aha-moment explanation, though.  This is beginning to feel like an interview question...

  • The OnFoo pattern has a couple useful purposes that I can think of.

    Let's say you're implementing a base class like UIElement which could have a dozen events. Since a complicated UI could have thousands upon thousands of UIElement-derived objects, most of which won't have even one event wired up, let alone several of them, you could implement some sort of dictionary-based backing store for events. That way you can define as many events as you want for a UIElement and not have to worry about paying the price for every single one of the thousands of UIElements in an app.

    Of course derived controls like Button and TextBox are going to need to hook these events. Should the creation of every single TextBox invoke the creation of an event dictionary and several delegates to add to it? Clearly it's much more efficient to have controls like Button and TextBox override some functions to get their desired behaviors.
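
    A sketch of that storage trick, along the lines of WinForms' System.ComponentModel.EventHandlerList (this UIElement is a stand-in for illustration, not the real WPF class):

    using System;
    using System.ComponentModel;

    class UIElement
    {
        // One static key per event instead of one delegate field per instance.
        static readonly object ClickEvent = new object();

        // Created lazily, so the thousands of elements with no subscribers
        // at all pay only for a single null field.
        EventHandlerList events;
        protected EventHandlerList Events
        {
            get { return events ?? (events = new EventHandlerList()); }
        }

        public event EventHandler Click
        {
            add { Events.AddHandler(ClickEvent, value); }
            remove { Events.RemoveHandler(ClickEvent, value); }
        }

        protected virtual void OnClick(EventArgs e)
        {
            var handler = events?[ClickEvent] as EventHandler;
            handler?.Invoke(this, e);
        }
    }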

    The second purpose was one I ran across recently. There's a class called ObservableCollection that raises an event every time its collection changes, and I wanted a way to make a batch of changes at a time (like sorting) and only raise a single event at the end. If I couldn't have overridden the OnCollectionChanged method to suppress the event while making the batched changes, I would have had to reimplement the whole class myself.
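
    A sketch of that batching idea (the subclass and its method are mine, not part of the framework):

    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using System.Collections.Specialized;

    class BatchedObservableCollection<T> : ObservableCollection<T>
    {
        bool suppressNotifications;

        public void ReplaceAll(IEnumerable<T> newItems)
        {
            suppressNotifications = true;
            try
            {
                Clear();               // raises OnCollectionChanged, which the override swallows
                foreach (var item in newItems)
                    Add(item);         // likewise
            }
            finally
            {
                suppressNotifications = false;
            }
            // One Reset notification for the whole batch instead of one per change.
            OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
        }

        protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
        {
            if (!suppressNotifications)
                base.OnCollectionChanged(e);
        }
    }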

  • I'd just like to publicly apologise to Eric for derailing the whole comment conversation here. I won't make it worse by replying to the various points raised since then.

    So, warnings and unused using directives, anyone?

  • I think that the compiler should only raise warnings for things that could reasonably be bugs in your program. For example, if you declare a variable and never use it, it could be an indication of a typo or forgetting to write some code that uses it.

    However, an extra using directive is hardly an indication that you were intending to use some namespace and just forgot. It is simply a way of bringing a namespace's names into scope should you need them. Warning for unused usings would be like warning for pragmas that have no effect or for #defined symbols not used in an #if - it may be useful information to know, but it's not an indication of a likely bug.
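
    A toy example of that distinction (hypothetical code): the unused local draws a warning because it plausibly signals a bug, while the unused directive draws nothing.

    using System.Text; // unused using directive: no warning, just clutter

    class Example
    {
        static void Main()
        {
            int total; // CS0168: declared but never used - plausibly a typo or missing code
            System.Console.WriteLine("done");
        }
    }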

  • Wouldn't it be nice if using directives could also be added just before methods and blocks? It is the fact that they go above the class that often makes them forgotten and left unused. It would make it simpler to put rarely needed using directives closer to where they are actually used, and to keep those with wider scope further away, i.e. above the class declaration. Just my 2c.

  • I've personally removed tens of thousands of lines of unneeded usings, unreferenced variables, unused methods added for completeness, set/get wrapper functions, etc. from several offshore-written ASP.NET systems. Those systems followed strict coding standards that added 10 to 20% overhead in the number of lines per file.

    Removing unneeded usings is a good thing.

    Code simplification is a good thing. Simplification is not replacing a 5-line loop with a 1-line one using LINQ or old C-style terseness.
