Should we produce warnings for unused/uninitialized internal fields?

You may have noticed as you've been developing C# applications that if you have an internal field of an internal type which you never use, we give you a warning about it. In fact, there are a number of warnings that we give for such fields, depending upon whether the field was never read or written, initialized but not read, or read but not initialized (and therefore always set to its default value.)
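For illustration, here is a minimal sketch of the three cases on ordinary private fields (the field names are invented; the warning numbers shown are the ones the compiler typically reports):

internal class Example
{
    // Never read and never written: "the field is never used" (CS0169).
    private int neverTouched;

    // Written but never read: "assigned but its value is never used" (CS0414).
    private int writeOnly = 42;

    // Read but never written: "never assigned to, and will always have its default value" (CS0649).
    private int readButNeverWritten;

    public int Get() { return readButNeverWritten; }
}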

Now, obviously we ought not to give similar warnings for public fields on public types. The thing doing the reading or writing is highly likely to be an external assembly which may not even have been written yet, so it would be silly to give a warning. (And similarly, we suppress this warning on internals if we are building a module that is going to be linked to another module; the reads and writes might be in the other module.  We also suppress it if the internal members of the assembly are visible to an external assembly.)

The C# 2.0 compiler was a bit weak though. If you had an unused public field on an internal class, the compiler would say, well, that's a public field, suppress the warning. But a "public" field on an internal class is not actually visible to anything in the outside world (again, modulo the parenthetical above) so really we could produce the warnings here.

There were a bunch of cases like that. A few months ago I tightened up the C# 3.0 compiler so that we now detect and warn for all the possible cases where we can actually deduce that the field is not used/not initialized.

Naturally this means that people started getting warnings on code which had previously compiled without warnings, so other teams at Microsoft that test pre-release builds of the compilers started to notice. This change has raised a few questions amongst users on other teams, which might motivate changing this behaviour again. It is particularly hard on users who compile with "all warnings are errors" turned on.

Two scenarios in particular have been brought to my attention.

First, more and more classes are now being written specifically to be used with private reflection and/or lightweight codegen. That is, something outside of the C# language is going to be manipulating the fields of a particular class, and it is by design to have the field unread/unwritten by the static portion of the program. It's irksome to have a warning about something which is by design.
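As a hypothetical sketch of that first scenario (the names are invented), the only writes to the field below happen through reflection, outside the static view of the program:

internal class ConnectionSettings
{
    // No assignment anywhere the compiler can see; a configuration loader
    // sets this at runtime via FieldInfo.SetValue.
    private string connectionString;

    public string ConnectionString { get { return connectionString; } }
}

// Elsewhere, invisible to the compiler's flow analysis:
// typeof(ConnectionSettings)
//     .GetField("connectionString", BindingFlags.Instance | BindingFlags.NonPublic)
//     .SetValue(settings, "server=...;");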

Second, more and more classes are now being written where a field is initialized but never read, but the field is initialized to a finalizable object, and the finalizer effectively “reads” the field and does work on it.
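Again as a hypothetical sketch (the types are invented): the assignment looks pointless to the compiler, but constructing the finalizable object is the whole point.

internal class TraceSession
{
    // "Initialized but never read" as far as the compiler can tell; the
    // LogFlusher's finalizer does the real work when this object goes away.
    private LogFlusher flusher = new LogFlusher();
}

internal class LogFlusher
{
    ~LogFlusher()
    {
        // flush any buffered trace output here
    }
}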

There are a number of possible responses to these scenarios:

1) Roll back the behaviour to that of C# 2.0. That doesn’t actually solve any of these problems; we still potentially give warnings which the user doesn’t want.  But we give warnings in fewer scenarios.  We can also make the plausible statement “this is what the compiler has always done, you can live with it”. 

The downside of course is that we would be eliminating warnings which the user may want. The whole point of this exercise in the first place was to produce useful warnings.

2) Do nothing. Tell the people who are raising these objections to live with it.  They can turn off the warning programmatically if they want. We strike a blow for language purity and ignore the real-world objections of people who are using features external to the language like reflection.
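(By "turn off the warning programmatically" I mean the existing mechanisms: a #pragma around the declaration, or /nowarn on the command line. Roughly:

#pragma warning disable 0169 // "field is never used"
private int usedOnlyByReflection;
#pragma warning restore 0169

or csc /nowarn:0169 for the whole compilation.)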

3) Give the "unread" warning only if the field is of a type known to have no finalizer. (Or, weaker, suppress the "unread" warning only if the field is of a type known to have a finalizer.)

4) Eliminate all the warnings altogether.  Declare a new design principle: warnings should be only for patterns which are highly likely to be  dangerous mistakes.  The warnings in question are just helpful reminders that you might have dead code to eliminate.  FXCOP produces this warning already, so if people want to see it, they can use FXCOP or the static analysis tool of their choice.

5) Create some new attributes that let people document how a field is intended to be used. Be smart about looking at the attribute, and if we see attributes which say "this struct is intended for use in p/invoke", or "this field is to be accessed via private reflection", and so on, then suppress the warnings.
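(Hypothetically - none of these attribute names exist today, they are purely for illustration - the usage might look like:

[AccessedViaPrivateReflection]
private string connectionString;

[UsedWithPInvoke]
internal struct NativeRect { public int Left, Top, Right, Bottom; }

and the compiler would suppress the relevant warnings when it saw them.)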

6) Do something else that Eric hasn't thought of.

Anyone have thoughts or opinions on this?

Thanks!

  • I would suggest adding flags to the compiler.  C# 2.0's warnings would be the default level.  Add 3 more levels: no warnings, critical warnings (a subset of 2.0's warnings), the default (C# 2.0 standard), then add one (or more) verbose levels as required.

    I'm not familiar with the C# toolchain here, so I can't gauge the impact on people's toolsets - I would assume that if you have command-line access to the compiler, you can pass in a -w[0-3] as a compiler argument. IDEs would have to be modified to add a control to set a warning level.

    You can also use this strategy and simply re-prioritize the warnings the compiler generates (based on how critical they are) so that the *number* of warnings emitted approximates C# 2.0's verbosity.

  • #4. Another case where something is initialized but never read is when using the record/replay semantics of mocking frameworks. In order to record a getter, you need to do something like "var foo = Bar.Quux;" I think that this is not serious enough to generate a compiler warning and that static analysis is where we want this sort of automated check.

  • I'd say 4) is the only correct solution, but with a caveat. Treating non-use as a 'warning' just seems incorrect to me. It's a bit of information about how to improve your code, but it's only a warning if the field is unintentional, and chances are it's not.

    More likely you just haven't gotten around to using it YET. (This happens to me all the time.) Possibly the code has drifted to the point where it's no longer used, but C# isn't old enough for that to be the most common case yet.

    The most common case is that the code is in flux, and the fields are there for future use. The warnings become a thing that you have to code around, requiring extra metadata (or worse - unused code!) in order to suppress a misbehavior of the compiler. The C++ compiler has a bunch of these now. (Warning, I autogenerated a GUID for your interface - yeah, I know. I MEANT YOU TO!!!)

    So what you REALLY need to do is option 6. Take everything that you treat as a 'warning' now that is really LINT and change it to a level 5 notification that you have to ENABLE to see, and/or that 'treat warnings as errors' doesn't treat as an error.

    Incidentally, more and more I use C# to generate type information that I then run through tlbexp to get .tlb files that I then include in C++. I find this a far superior way to define interfaces that I want to implement in C++, and MIGHT want to use in managed code in the future. Much cleaner than MIDL or Attributed C++. Though I'm unhappy that MS seems to have ceased improving MIDL; it desperately needs something like MarshalAs().

    tj

  • We already have #5 in FxCop. It's called the SuppressMessageAttribute - we use this to indicate that we shouldn't warn on a particular construct.

    I personally believe (and not because of my vested interest in FxCop ;)) that C# should remain pure, and avoid having intimate knowledge about particular Framework classes and leave that to Code Analysis tools.

    Unfortunately, there are particular source constructs that we simply cannot check. For example, unused constants are particularly hard to detect at the IL level, whereas, at the AST level the compiler can detect this with 100% certainty (excluding reflection).
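    For reference, a suppression looks roughly like this (I'm quoting the rule name from memory, so treat it as approximate; the attribute lives in System.Diagnostics.CodeAnalysis):

    [SuppressMessage("Microsoft.Performance", "CA1823:AvoidUnusedPrivateFields",
        Justification = "Written via private reflection by the serializer.")]
    private int populatedByReflection;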

  • I think #2 is good enough.  I certainly don't want to have fewer warnings -- running FxCop is always an extra step, and it's better to have those warnings right away.  Creating a new attribute for what can be done with a pragma and a comment seems like overkill, though.

  • I'm voting for #6, if that's not asking too much ;-)

    Really, #5 would be a good option, and it would be up to anybody who doesn't like it to just not use it. If #5 is not possible, you'd probably be right to go with #2 for now.

    Actually, any system that makes suppressing certain warnings easier by indicating intentions rather than specifying warning numbers would make the code both less ugly (since they usually destroy indentation, those #pragmas are quite evil) and arguably more portable between compilers. Make sure those attributes are inheritable, because I'd like to be able to combine them with my own attributes. (For instance, if I write code that initializes some field, I'm quite likely to provide an attribute too. If I can inherit this from your attribute, the user won't have to specify two of them.)

    Compiling with "treat warnings as errors" is everybody's own decision; it should not be used as a reason to cripple the warning system of C#.

    Talking about attributes, are there any plans to get rid of the limitations for attributes in c#/the CLR?

    - no order

    - no generics

    - no lambdas

    - no compound attributes

    I'm aware you're not responsible for the CLR's restrictions, but C# could in fact overcome all of these (except the order, but that could be simulated by compound attributes) by deriving attribute classes per instance. I'll include a sample transformation:

    class C {
      [Invariant<string> (s => Regex.IsMatch (s, "[a-zA-Z]+"))]
      public string Name { get; set; }
    }

    would become:

    class C {
      [InvariantAttribute_someGuid]
      public string Name { get; set; }

      [CompilerGenerated]
      private class InvariantAttribute_someGuid : InvariantAttribute<string> {
        public InvariantAttribute_someGuid () : base (s => Regex.IsMatch (s, "[a-zA-Z]+")) {}
      }
    }

    With that simple transformation, anything would become possible with attributes!

    (Actually, in C# it's not possible to derive a generic type from Attribute, not even an abstract generic type. However, without trying I'd guess that abstract generic attributes are fine with the CLR, since the concrete attribute type has to close the base type anyway. Is that correct?)

  • Definitely go with #5.  I'd much rather have the compiler make me stop and think about each instance of this problem and decide whether it truly is a problem or not.  An attribute like those used to tell static code analysis to shut up about individual items (For example; Yes, I know "RegEx" is an abbreviation.) would be perfect.

  • Unused/unread vars are, IMHO, quite common at early stages of development. Warnings are too "scary" in these cases, as the vars could be placeholders or something else, and they still do not affect the generation of a correct program.

    For pure CSC.EXE, I'm for #4. The compiler should do the compiling, and anything that does not affect correct program generation should be left to other tools. It *might* also result in faster compiling. IMHO, checking for unread fields fits here. Maybe a single warning "WARNING: Static analysis suggested. Unread/unused... things found here" could help.

    My #6 for CSC+IDE combination:

    I'd like a combination of compiler/IDE that marks these cases as TODO. A compiler/IDE-injected TODO attribute in these cases would keep me on track about what I need to check.

    After it's all done, I could turn the TODO flags off or remove the vars.

  • Based on your description of the added warnings (an unused public field on an internal class), I'm unable to reproduce a situation without a warning (at warning level 4, which I assume is what you meant).

    e.g.

    internal class MyClass
    {
      public int field;
    }

    // ...
    MyClass myClass = new MyClass();

    ...always generates CS0649 on "field".

    Apart from that, I vote for option 2.  If the field is truly not being used, anyone complaining about a new warning about a clear design anomaly either doesn't want to be told they did something wrong or doesn't want to have to bother fixing it...  Neither is a reason to avoid adding the warning.  I don't consider expecting a public field in an internal class to be accessed via reflection a valid design reason to avoid this warning.

  • +1 for roberthahn's proposal. A new warning level (call it 5) where these types of warnings are reported (while leaving the default at 4) allows people to enable them while preserving the previous behavior for existing code bases.

  • I prefer no warnings; this kind of thing is the tools' responsibility. For example, ReSharper does it so well, so it might be a behaviour of Orcas.

    But I agree with Stefan Wenig about attributes; a lot of good tricks are ruled out because of these limitations,

    like

    [AttributeUsage(AttributeTargets.Property)]
    [LazyLoadedProperty<T>(delegate(T target, PropertyInfo property) {
      // code to fetch the lazy loaded property
    })]

  • 1: Hate it... give me the information

    2: Rude, but #3 suggestion

    3: Nope, too vague and hard to describe... probably too hard to get right all the time in the compiler.

    4: REALLY hate it, FxCop is too "optional"... we've got enough code out there.

    5: Okay, #2 suggestion

    6: Add another keyword to C# that indicates that the field is reflectable instead of using an attribute.  If done correctly, it could be used to allow "safe" reflection of private fields in lower-trust environments.  As for what the keyword should be: friend or visible sound nice:

    public class Foo
    {
      visible int zot;
    }

  • Peter Richie:  I apologize, I did not exactly characterize the problem in my original description.  The set of "missing" warnings is hard to characterize succinctly.

    You can reproduce the problem by having an internal class with a public field which is written but not read.  That gives no warning, but it could.

    An internal class with a public field which is not written DOES give a warning in C# 2.0.
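    In other words, a minimal sketch of the missing case:

    internal class Logger
    {
      // Assigned below but never read anywhere in the assembly. Because the
      // field is declared public, the C# 2.0 compiler says nothing, even
      // though no code outside the assembly can actually see it.
      public string lastMessage;

      public void Log(string message) { lastMessage = message; }
    }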

  • I vote for #4.  

    Situations that might or might not be mildly bad, where the compiler does not have enough information to tell, are exactly what FxCop and tools like lint are for.

    It would be nice if FxCop were bundled into the next release of Visual Studio, with a simple interface to "build at FxCop level". Of course, FxCop should have a nice way to be told in the code, "I meant to do that, move along."

  • FxCop is far harder to use than the standard compiler in the IDE.  I've definitely used these compiler messages usefully before, so #4 (remove them) seems unfortunate.  Given that people use treat warnings as errors, and probably use it in automated build scripts which you should probably avoid breaking, emitting false positives is not nice.  #5 (use attributes) seems crufty, but if you consider the possible uses of write-only or uninitialized variables (usually reflection), then you're probably already dealing with an audience unfazed by an extra attribute.

    I'd say a combination of #4 and #5 is most attractive: please don't show these warnings as "real" warnings which might break warnings-are-errors people and distract others, but don't remove them; just place them in a lower warning class which by default is not shown, and can't (easily) be configured to be treated as an error. Then potentially add the attributes nevertheless - an attribute signifying that the odd design is intentional. These attributes have value anyway, and you'd only need them to hide errors in a really explicit mode which devs need to proactively activate.
