Should we produce warnings for unused/uninitialized internal fields?

You may have noticed as you've been developing C# applications that if you have an internal field of an internal type which you never use, we give you a warning about it. In fact, there are a number of warnings that we give for such fields, depending upon whether the field was never read or written, initialized but not read, or read but not initialized (and therefore always set to its default value).
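
A minimal sketch of the three categories (the type and field names are invented; the warning numbers in the comments are believed to match the shipping compiler, but may vary across versions):

    internal class Frobber
    {
        private int neverTouched;        // never read or written: "field is never used" (CS0169)
        private int onlyWritten = 42;    // initialized but never read (CS0414)
        private int onlyRead;            // read but never assigned, so always its default value (CS0649)

        public int Frob() { return onlyRead; }
    }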

Now, obviously we ought not to give similar warnings for public fields on public types. The thing doing the reading or writing is highly likely to be an external assembly which may not even have been written yet, so it would be silly to give a warning. (And similarly, we suppress this warning on internals if we are building a module that is going to be linked to another module; the reads and writes might be in the other module.  We also suppress it if the internal members of the assembly are visible to an external assembly.)
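
For example, once internals are made visible to a friend assembly, the compiler can no longer see every potential read and write, so it stays quiet. This is only a sketch; the assembly name is a placeholder.

    using System.Runtime.CompilerServices;

    // Internals are visible to another assembly; that assembly might read or
    // write the field, so the "unused field" warnings are suppressed.
    [assembly: InternalsVisibleTo("MyProduct.Tests")]

    internal class Widget
    {
        internal int count;   // no warning, even though nothing in this assembly touches it
    }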

The C# 2.0 compiler was a bit weak though. If you had an unused public field on an internal class, the compiler would say, well, that's a public field, suppress the warning. But a "public" field on an internal class is not actually visible to anything in the outside world (again, modulo the parenthetical above) so really we could produce the warnings here.
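
A sketch of that case, assuming the stricter C# 3.0 behaviour described above:

    internal class Gadget
    {
        // "public", but the containing class is internal, so no external
        // assembly can ever reach this field. The C# 2.0 compiler stayed
        // silent because the field is public; the C# 3.0 compiler can now
        // warn that it is never used.
        public int scratch;
    }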

There were a bunch of cases like that. A few months ago I tightened up the C# 3.0 compiler so that we now detect and warn for all the possible cases where we can actually deduce that the field is not used/not initialized.

Naturally this means that people started getting warnings on code which had previously compiled without warnings, so other teams at Microsoft that test pre-release builds of the compilers started to notice. This change has raised a few questions amongst users on other teams, which might motivate changing this behaviour again. It is particularly hard on users who compile with "all warnings are errors" turned on.

Two scenarios in particular have been brought to my attention.

First, more and more classes are now being written specifically to be used with private reflection and/or lightweight codegen. That is, something outside of the C# language is going to be manipulating the fields of a particular class, and it is by design to have the field unread/unwritten by the static portion of the program. It's irksome to have a warning about something which is by design.
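
A hedged sketch of that first scenario (the type and member names are invented): nothing in the static program touches the field, yet external code writes it via reflection, so the "never used" warning fires on something that is working as designed.

    using System.Reflection;

    internal class ReflectionTarget
    {
        // Never read or written anywhere in the static program, so the
        // compiler reports it as unused, even though it is populated at run time.
        private int state;
    }

    internal static class Populator
    {
        public static void Fill(ReflectionTarget target)
        {
            // The only "write" happens here, invisible to the compiler.
            typeof(ReflectionTarget)
                .GetField("state", BindingFlags.NonPublic | BindingFlags.Instance)
                .SetValue(target, 123);
        }
    }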

Second, more and more classes are now being written where a field is initialized but never read, but the field is initialized to a finalizable object, and the finalizer effectively “reads” the field and does work on it.
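
Roughly, that second pattern looks like this (LogFlusher is a hypothetical finalizable type); the field exists only to tie the object's lifetime to its owner so that the finalizer eventually does its work:

    internal sealed class LogFlusher
    {
        ~LogFlusher()
        {
            // Flush buffers, release native handles, and so on.
        }
    }

    internal class Service
    {
        // Assigned but never read by any method: the compiler sees dead
        // storage, but the finalizer on the referenced object does real work.
        private LogFlusher flusher = new LogFlusher();
    }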

There are a number of possible responses to these scenarios:

1) Roll back the behaviour to that of C# 2.0. That doesn’t actually solve any of these problems; we still potentially give warnings which the user doesn’t want.  But we give warnings in fewer scenarios.  We can also make the plausible statement “this is what the compiler has always done, you can live with it”. 

The downside of course is that we would be eliminating warnings which the user may want. The whole point of this exercise in the first place was to produce useful warnings.

2) Do nothing. Tell the people who are raising these objections to live with it.  They can turn off the warning programmatically if they want (see the sketch after this list). We strike a blow for language purity and ignore the real-world objections of people who are using features external to the language like reflection.

3) Give the "unread" warning only if the field is of a type known to have no finalizer. (Or, weaker, suppress the "unread" warning only if the field is of a type known to have a finalizer.)

4) Eliminate all the warnings altogether.  Declare a new design principle: warnings should be only for patterns which are highly likely to be  dangerous mistakes.  The warnings in question are just helpful reminders that you might have dead code to eliminate.  FXCOP produces this warning already, so if people want to see it, they can use FXCOP or the static analysis tool of their choice.

5) Create some new attributes that let people document how a field is intended to be used. Be smart about looking at the attribute, and if we see attributes which say "this struct is intended for use in p/invoke", or "this field is to be accessed via private reflection", and so on, then suppress the warnings.

6) Do something else that Eric hasn't thought of.
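
For reference, the "turn off the warning programmatically" escape hatch mentioned in option 2 already exists: a #pragma directive (or the /nowarn compiler switch) silences a specific warning around a specific declaration. The warning number below is the "never assigned" one; substitute whichever warning applies.

    internal class Options
    {
    #pragma warning disable 0649  // assigned via private reflection, not in source
        private int timeout;
    #pragma warning restore 0649

        public int Timeout { get { return timeout; } }
    }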

Anyone have thoughts or opinions on this?

Thanks!

  • I vote for choice #5.

    This gives the control to the developers and also provides at least a minimum of documentation.

    The cases where a field that is not obviously read is actually read via reflection or within the finalizer are especially hard to track down. Encouraging documentation through attributes is an advantage.

  • So basically #4 is to shift some warnings into a "notice" class, of relatively harmless items that should never impact compilation?

    #5 certainly sounds nice, but I could just imagine the headaches of implementing it in the compiler and in any code that uses it. Meta attributes have a tendency to pile up and get annoying quickly, although they're certainly clearer than a pragma directive. (Not that my opinion is important, just musing.)

  • 1, 2 and 4 are all OK; I'd vote for 4.

    3 is too difficult for average developers, and IMHO 5 is a misuse of attributes, which are more to do with runtime behaviour.

  • 6. Let the developer select a warning and mark it as ignored/resolved, storing the results as part of the build process (rather than using attributes in the code). Somewhat similar to #2, but it at least gives the developer some control over which warnings are important to them and means the warning gets flagged. Still possibly best left for FxCop.

  • I would have to go with number 5, as it leaves a bit of a trail for maintenance programmers who may not have access to the code that accesses the field.

  • I vote for #5 as well, or for Mark's suggestion about making these warnings configurable.  Please don't do #4!  Those warnings have saved my bacon more times than I can count; 9 times out of 10 those warnings tell me that I accidentally made an assignment to the wrong field, or did something stupid like making a public property get itself instead of its data field.  Those are the biggest nightmares to debug because they're usually due to some minor typo or late-night brain-fart.

    I realize that's not necessarily in the best interests of language purity, but practical concerns shouldn't be ignored either.  I think it's best to design for the common case (those of us for whom an unused/unassigned/prematurely-used field is *likely* to be an error) and provide only on-demand support for the edge cases (people using Reflection as an integral part of their design, as opposed to simple plug-in systems or a tool for overcoming the limitations of 3rd-party libraries).

    Also keep in mind that the people reading and commenting in here have an active interest in these matters, whereas the majority of C# programmers on the front lines probably haven't even heard of FXCop and think that static analysis should be handled by the electrical contractors.  I do use that tool sometimes, but wading through the enormous and ominous piles of warnings it generates is far more time-consuming than seeing a small handful of compiler warnings.

  • Really depends on the whole set of cases you are considering.

    I was leaning towards 5, but then I went back and reconsidered in the context of the example you gave.

    i.e. you have an internal class that has a field marked as public. Effectively the public modifier has already stated that the developer wants the field to be publicly available, so adding an additional attribute to say this again in a different way seems a bit redundant. (Call it the [YesIReallyMeantPublicAttribute].)

    Which I suppose puts me in the #1 camp.

  • Sorry, I also wanted to reply to Joe's statement about attributes only being appropriate for runtime behaviour - I can't honestly claim to know the true motivation behind them but I still think this couldn't be further from the truth.  Attributes are simply metadata - there just happen to be several parts of the .NET Runtime that check for these attributes and change their behaviour accordingly, much like how metadata is used in other applications.

    Attributes *might* indicate some preference for runtime behaviour, but they might also be design-time (the attributes in System.ComponentModel), assembly information (System.Reflection), security requirements (System.Security.*), all sorts of wacky and unexpected stuff (System.Web or System.Web.Services), or in some cases they may even affect the compiler, such as with FlagsAttribute.

    Whether or not it's the optimal solution, it's a perfectly valid use of attributes.

  • Aaron: yes, I am taking into consideration the fact that the people who comment have an enormous self-selection bias.  They are likely to be the people who are very far from the average programmer in terms of interest in language design issues, opinionatedness, etc.

    But that's what I want.  I want strongly held opinions from smart people who have thought about it. Those are the kinds of people I want designing tools for people who want to use tools, rather than think about tools. That doesn't mean that I'm going to do exactly what you guys say, but I very much take your opinions seriously.

    I figured that #5 would be the most popular option. Unfortunately, it is also the least likely to happen, since we do not have time to introduce more classes into the .NET framework, design them, test them, do security reviews, blah blah blah.  The compiler has a lot of moving parts already; adding more to external assemblies is not likely to fly with management this late in the cycle.  I will make the best case I possibly can for it, but right now #2 is the lowest-cost option and therefore the one most likely to happen.

  • I vote for #5: let the programmer document their intent. It's kind of an 'extern "C"' for the new millennium.

  • I vote for #4. If it's not messing with the compilation, why is the compiler warning anyway? FXCop is the place for this type of item. I'd vote against 1-3 and 5 because developers don't know everything (what, shocking, I know); what's not important now may be important in a few years (look at sec vulns)... the same applies here.

  • I also think #5 is the best choice. I appreciate this kind of warning. And if our code starts producing more warnings than before, it's just a matter of adding an attribute in a few places. I deal with code which extensively uses that reflection approach, and I wouldn't mind having those warnings around here.

  • If time+money is an issue, I'd say go with #2 for now and see about going #5 in a later version.

    I would argue for a move towards code that can have very powerful static analysis done to it.  There is much discussion and hype and enthusiasm for test-driven development these days, but it strikes me that testing is by definition an incomplete mechanism; 100% code coverage != 100% state coverage.  I am very much in favor of low-friction, non-crufty language enhancements that could help out static analysis tools.  If I can prove that a variable is never null, there is no need to have a test for it being null.  Maybe this is a pipe dream, but I would love to see more "provable" code in the future.

    It seems that in instances where a field is accessed by methods not easy to discover by static analysis, it is very important to document how that field is used.  There are plenty of stories online these days about how Ruby's extensive metaprogramming can cause unpredictable, hard-to-debug behavior.  #5 seems like a reasonable, small step towards alleviating this potential problem.

  • I don't use C#, so my comments may be gibberish.  But my impression on reading this is:

    1) sucks

    2) sucks ever-so-slightly less

    3) seems like a kluge

    4) seems okay, but seems kind of like the 'inverse' of #2, and thus smacks of the same "you deal with it" attitude as your deliberate phrasing of #2.  Doesn't seem to jibe with the spirit of adding the warnings in the first place.  But it might be more correct than #2: how would you implement the warnings from 2.0 if you were doing it today?

    5) seems like you're inviting people to cause themselves headaches later.  Is there going to be any "validation" of this metadata?  Well, not at compile time, so if they want it, they'll have to add it later.  But they won't.  So it will go stale as the code is maintained, and then later, they'll be getting warnings for some of the cases they want to be warned about, but not the rest (where they have the stale attributes).  And then they'll say "why won't the compiler warn me if my attributes might be wrong".  And then it looks a lot like #4, for some yet-to-be-defined acronym in place of 'FXCOP'.  So I say "feh" to #5.

    If it makes sense (see my first line about gibberish), could you have an OnlyWarnForTheStuffWeWarnedAboutInTwoPointOh compiler option?  I know, yuck, but such is the price of "backwards compatibility".

    @Luke, if you could "prove" that a variable is never null without actually testing it, I'd give you a cookie.  I would likely not give you a job.  Search the web for the phrase "proved it correct".

  • @Zac: I have discussed the topic of provability in quite a bit of depth, so please don't make snap judgments like you appear to have done.  Nice is an example of a language that guarantees NullPointerExceptions are never thrown: http://nice.sourceforge.net/.  I never said I could prove that any variable is not null without testing; static analysis can make use of if statements.  Your allusion to Knuth's comment about proving his code correct but not debugging it is nice, but is quite irrelevant; Nice is a refutation of the point I think you're making.  Also see the Ada language.  Note that even with my mention of Ada, "non-cruftiness" is high on my priority list.
