Should we produce warnings for unused/uninitialized internal fields?

You may have noticed as you've been developing C# applications that if you have an internal field of an internal type which you never use, we give you a warning about it. In fact, there are a number of warnings that we give for such fields, depending upon whether the field was never read or written, initialized but not read, or read but not initialized (and therefore always read as its default value).
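
A minimal sketch of the three flavors, using private fields for brevity; the warning numbers shown are the ones the C# compiler issues for each case:

```csharp
internal class Example
{
    int neverUsed;        // CS0169: the field is never read or written
    int writeOnly = 42;   // CS0414: assigned, but its value is never used
    int readButNeverSet;  // CS0649: never assigned; always has its default value

    public int Get() { return readButNeverSet; }
}
```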

Now, obviously we ought not to give similar warnings for public fields on public types. The thing doing the reading or writing is highly likely to be an external assembly which may not even have been written yet, so it would be silly to give a warning. (And similarly, we suppress this warning on internals if we are building a module that is going to be linked to another module; the reads and writes might be in the other module.  We also suppress it if the internal members of the assembly are visible to an external assembly.)

The C# 2.0 compiler was a bit weak though. If you had an unused public field on an internal class, the compiler would say, well, that's a public field, suppress the warning. But a "public" field on an internal class is not actually visible to anything in the outside world (again, modulo the parenthetical above) so really we could produce the warnings here.
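
A hypothetical illustration of the hole: the field below is marked public, but since the containing class is internal, no other assembly can ever touch it.

```csharp
internal class Hidden
{
    // Public in name only: Hidden itself is internal, so nothing outside
    // this assembly can read or write this field. C# 2.0 suppressed the
    // unused-field warning anyway, purely because of the public modifier.
    public int unused;
}
```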

There were a bunch of cases like that. A few months ago I tightened up the C# 3.0 compiler so that we now detect and warn for all the possible cases where we can actually deduce that the field is not used/not initialized.

Naturally this means that people started getting warnings on code which had previously compiled without warnings, so other teams at Microsoft that test pre-release builds of the compilers started to notice. This change has raised a few questions amongst users on other teams, which might motivate changing this behaviour again. It is particularly hard on users who compile with "all warnings are errors" turned on.

Two scenarios in particular have been brought to my attention.

First, more and more classes are now being written specifically to be used with private reflection and/or lightweight codegen. That is, something outside of the C# language is going to be manipulating the fields of a particular class, and it is by design to have the field unread/unwritten by the static portion of the program. It's irksome to have a warning about something which is by design.
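
A sketch of that scenario, with invented names; the only "write" happens through reflection, which the compiler's static analysis cannot see:

```csharp
using System.Reflection;

internal class Settings
{
    // No C# code ever reads or writes this field directly,
    // so the compiler sees it as dead.
    private int timeout;
}

internal static class Loader
{
    public static void Hydrate(Settings s)
    {
        typeof(Settings)
            .GetField("timeout", BindingFlags.NonPublic | BindingFlags.Instance)
            .SetValue(s, 30);
    }
}
```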

Second, more and more classes are now being written where a field is initialized but never read, but the field is initialized to a finalizable object, and the finalizer effectively “reads” the field and does work on it.
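
For example (names invented; any type with a finalizer would do):

```csharp
internal class Session
{
    // Assigned once, never read anywhere in the program, so the compiler
    // sees dead storage. But FileStream has a finalizer, so the object
    // held here still does real work (releasing its file handle) when it
    // is eventually collected.
    private System.IO.FileStream log =
        new System.IO.FileStream("session.log", System.IO.FileMode.Append);
}
```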

There are a number of possible responses to these scenarios:

1) Roll back the behaviour to that of C# 2.0. That doesn’t actually solve any of these problems; we still potentially give warnings which the user doesn’t want.  But we give warnings in fewer scenarios.  We can also make the plausible statement “this is what the compiler has always done, you can live with it”. 

The downside of course is that we would be eliminating warnings which the user may want. The whole point of this exercise in the first place was to produce useful warnings.

2) Do nothing. Tell the people who are raising these objections to live with it.  They can turn off the warning programmatically if they want. We strike a blow for language purity and ignore the real-world objections of people who are using features external to the language like reflection.

3) Give the "unread" warning only if the field is of a type known to have no finalizer. (Or, weaker, suppress the "unread" warning only if the field is of a type known to have a finalizer.)

4) Eliminate all the warnings altogether.  Declare a new design principle: warnings should be only for patterns which are highly likely to be dangerous mistakes.  The warnings in question are just helpful reminders that you might have dead code to eliminate.  FxCop produces this warning already, so if people want to see it, they can use FxCop or the static analysis tool of their choice.

5) Create some new attributes that let people document how a field is intended to be used. Be smart about looking at the attribute, and if we see attributes which say "this struct is intended for use in p/invoke", or "this field is to be accessed via private reflection", and so on, then suppress the warnings.
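
Such an attribute might look something like this; the attribute is hypothetical, and nothing like it existed at the time:

```csharp
using System;

// Hypothetical marker the compiler could be taught to recognize.
[AttributeUsage(AttributeTargets.Field)]
internal sealed class AccessedViaReflectionAttribute : Attribute { }

internal class Settings
{
    [AccessedViaReflection]  // documents the intent and would suppress the warning
    private int timeout;
}
```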

6) Do something else that Eric hasn't thought of.

Anyone have thoughts or opinions on this?

Thanks!

Comments

  • Anonymous
    May 15, 2007
I vote for choice #5. This gives the control to the developers and also provides a minimum of documentation. Cases where a field is read non-obviously, via reflection or within a finalizer, are especially hard to track down. Encouraging documentation by attributes is an advantage.

  • Anonymous
    May 15, 2007
So basically #4 is to shift some warnings into a "notice" class of relatively harmless items that should never impact compilation? #5 certainly sounds nice, but I can just imagine the headaches of implementing it in the compiler and in any code that uses it. Meta attributes have a tendency to pile up and get annoying quickly, although they're certainly clearer than a pragma directive. (Not that my opinion is important, just musing.)

  • Anonymous
    May 15, 2007
    1, 2 and 4 are all OK, I'd vote for 4. 3 is too difficult for average developers, and IMHO 5 is a misuse of attributes, which are more to do with runtime behaviour.

  • Anonymous
    May 15, 2007
    Let the developer select a warning and mark it as ignored/resolved, and store the results as part of the build process (rather than using attributes in the code). Somewhat similar to #2, but it at least gives the developer some control over which warnings are important to them and means the warning gets flagged. Still possibly best left for FxCop.

  • Anonymous
    May 15, 2007
    I would have to go with number 5, as it leaves a bit of a trail for maintenance programmers who may not have access to the code that accesses the field.

  • Anonymous
    May 15, 2007
I vote for #5 as well, or for Mark's suggestion about making these warnings configurable.  Please don't do #4!  Those warnings have saved my bacon more times than I can count; 9 times out of 10 those warnings tell me that I accidentally made an assignment to the wrong field, or did something stupid like making a public property get itself instead of its data field. Those are the biggest nightmares to debug because they're usually due to some minor typo or late-night brain-fart.

    I realize that's not necessarily in the best interests of language purity, but practical concerns shouldn't be ignored either.  I think it's best to design for the common case (those of us for whom unused/unassigned/prematurely-used fields are likely to be an error) and provide only on-demand support for the edge cases (people using Reflection as an integral part of their design, as opposed to simple plug-in systems or a tool for overcoming the limitations of 3rd-party libraries).

    Also keep in mind that the people reading and commenting in here have an active interest in these matters, whereas the majority of C# programmers on the front lines probably haven't even heard of FxCop and think that static analysis should be handled by the electrical contractors.  I do use that tool sometimes, but wading through the enormous and ominous piles of warnings it generates is far more time-consuming than seeing a small handful of compiler warnings.

  • Anonymous
    May 15, 2007
Really depends on the whole set of cases you are considering. I was leaning towards 5, but then I went back and reconsidered in the context of the example you gave, i.e. you have an internal class that has a field marked as public. Effectively the public tag has already stated that they want the field to be publicly available, so adding an additional attribute to say this again in a different way seems a bit redundant (call it the [YesIReallyMeantPublicAttribute]). Which I suppose puts me in the #1 camp.

  • Anonymous
    May 15, 2007
    Sorry, I also wanted to reply to Joe's statement about attributes only being appropriate for runtime behaviour - I can't honestly claim to know the true motivation behind them but I still think this couldn't be further from the truth.  Attributes are simply metadata - there just happen to be several parts of the .NET Runtime that check for these attributes and change their behaviour accordingly, much like how metadata is used in other applications. Attributes might indicate some preference for runtime behaviour, but they might also be design-time (the attributes in System.ComponentModel), assembly information (System.Reflection), security requirements (System.Security.*), all sorts of wacky and unexpected stuff (System.Web or System.Web.Services), or in some cases they may even affect the compiler, such as with FlagsAttribute. Whether or not it's the optimal solution, it's a perfectly valid use of attributes.

  • Anonymous
    May 15, 2007
Aaron: yes, I am taking into consideration the fact that the people who comment have an enormous self-selection bias.  They are likely to be the people who are very far from the average programmer in terms of interest in language design issues, opinionatedness, etc.

    But that's what I want.  I want strongly held opinions from smart people who have thought about it. Those are the kinds of people I want designing tools for people who want to use tools, rather than think about tools. That doesn't mean that I'm going to do exactly what you guys say, but I very much take your opinions seriously.

    I figured that #5 would be the most popular option. Unfortunately, it is also the least likely to happen, since we do not have time to introduce more classes into the .NET framework, design them, test them, do security reviews, blah blah blah.  The compiler has a lot of moving parts already; adding more to external assemblies is not likely to fly with management this late in the cycle.  I will make the best case I possibly can for it, but right now #2 is the lowest-cost option and therefore the one most likely to happen.

  • Anonymous
    May 15, 2007
I vote for #5: let the programmer document their intent. It's kind of an 'extern "C"' for the new millennium.

  • Anonymous
    May 15, 2007
I vote for #4. If it's not messing with the compilation, why is the compiler warning anyway? FxCop is the place for this type of item. I'd vote against 1-3 and 5 because developers don't know everything (what, shocking, I know); what's not important now may be important in a few years (look at security vulns). The same applies here.

  • Anonymous
    May 15, 2007
I also think #5 is the best choice. I appreciate this kind of warning. And if our code starts producing more warnings than before, it's just a matter of adding an attribute in a few places. I deal with code which extensively uses that reflection approach, and I wouldn't mind having those warnings around here.

  • Anonymous
    May 15, 2007
I would suggest adding flags to the compiler.  C# 2.0's warnings would be the default level.  Add 3 more levels: no warnings, critical warnings (a subset of 2.0's warnings), the default (C# 2.0 standard), then add one (or more) verbose levels as required.

    I'm not familiar with the C# toolchain here, so I can't gauge the impact on people's toolsets; I would assume that if you had command-line access to the compiler, you could pass in a -w[0-3] as a compiler argument.  IDEs would have to be modified to add a control to set a warning level. You could also use this strategy and simply re-prioritize the warnings the compiler generates (based on how critical they are) so that the number of warnings emitted approximates C# 2.0's verbosity.

  • Anonymous
    May 15, 2007
I'd say 4) is the only correct solution, but with a caveat.  Treating unuse as a 'warning' just seems incorrect to me.  It's a bit of information about how to improve your code, but it's only a warning if the unused field is unintentional, and chances are it's not. More likely you just haven't gotten around to using it YET (this happens to me all the time).  Possibly the code has drifted to the point where it's no longer used, but C# isn't old enough for that to be the most common case yet.

    The most common case is that the code is in flux, and the fields are there for future use.  The warnings become a thing that you have to code around, requiring extra metadata (or worse, unused code!) in order to suppress a mal-behavior of the compiler.  The C++ compiler has a bunch of these now.  (Warning, I autogenerated a GUID for your interface.  Yeah, I know.  I MEANT YOU TO!!!)

    So what you REALLY need to do is option 6: take everything that you treat as a 'warning' now that is really LINT and change it to a level 5 notification that you have to ENABLE to see, and/or that 'treat warnings as errors' doesn't treat as an error.

    Incidentally, more and more I use C# to generate type information that I then run through tlbexp to get .tlb files that I then include in C++.  I find this a far superior way to define interfaces that I want to implement in C++, and MIGHT want to use in managed code in the future.  Much cleaner than MIDL or Attributed C++.  Though I'm unhappy that MS seems to have ceased improving MIDL; it desperately needs something like MarshalAs(). tj

  • Anonymous
    May 15, 2007
    We already have 5 in FxCop. It's called the SuppressMessageAttribute - we use this to indicate that we shouldn't warn on a particular construct. I personally believe (and not because of my vested interest in FxCop ;)) that C# should remain pure, and avoid having intimate knowledge about particular Framework classes and leave that to Code Analysis tools. Unfortunately, there are particular source constructs that we simply cannot check. For example, unused constants are particularly hard to detect at the IL level, whereas, at the AST level the compiler can detect this with 100% certainty (excluding reflection).
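
For reference, the FxCop-style suppression mentioned above looks roughly like this (CA1823 is the unused-private-fields rule; the class and field names are invented):

```csharp
using System.Diagnostics.CodeAnalysis;

internal class Cache
{
    // Records in source that this hit was reviewed and is intentional.
    [SuppressMessage("Microsoft.Performance", "CA1823:AvoidUnusedPrivateFields",
        Justification = "Field is read via private reflection.")]
    private int reserved;
}
```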

  • Anonymous
    May 15, 2007
    I think #2 is good enough.  I certainly don't want to have less warnings -- running FxCop is always an extra step, and it's better to have those warnings right away.  Creating a new attribute for what can be done with a pragma and a comment seems like overkill, though.

  • Anonymous
    May 15, 2007
Definitely go with #5.  I'd much rather have the compiler make me stop and think about each instance of this problem and decide whether it truly is a problem or not.  An attribute like those used to tell static code analysis to shut up about individual items (for example: yes, I know "RegEx" is an abbreviation) would be perfect.

  • Anonymous
    May 16, 2007
Unused/unread vars are, IMHO, quite common at early stages of development. Warnings are too "scary" in these cases, as the vars could be placeholders or something else, and they still do not affect generation of a correct program. For pure CSC.EXE, I'm for #4. The compiler must do compiling, and anything that does not affect correct program generation should be left to other tools. It might also result in faster compiling. IMHO, checking for unread fields fits there. Maybe a single warning ("WARNING: Static analysis suggested. Unread/unused things found here") could help.

    My #6 for the CSC+IDE combination: I'd like a combination of compiler/IDE that marks these cases as TODO. A compiler/IDE-injected TODO attribute in these cases would keep me on track with what I need to check. After all are done, I could turn the TODO flags off or remove the vars.

  • Anonymous
    May 16, 2007
Based on your description of the added warnings (unused public field on an internal class), I'm unable to reproduce a situation without a warning (at warning level 4, which I assume is what you meant). E.g. internal class MyClass { public int field; } //... MyClass myClass = new MyClass(); ...always generates CS0649 on "field".

    Apart from that, I vote for option 2.  If the field is truly not being used, anyone complaining about a new warning about a clear design anomaly either doesn't want to be told they did something wrong or doesn't want to bother fixing it.  Neither is a reason to avoid adding the warning.  I don't consider expecting a public field in an internal class to be accessed via Reflection a valid design reason to avoid this warning.

  • Anonymous
    May 16, 2007
+1 for roberthahn's proposal. A new warning level (call it 5) where these types of warnings are reported (while leaving the default at 4) allows people to enable these warnings while preserving the previous behavior for existing code bases.

  • Anonymous
    May 17, 2007
    Peter Richie:  I apologize, I did not exactly characterize the problem in my original description.  The set of "missing" warnings is hard to characterize succinctly. You can reproduce the problem by having an internal class with a public field which is written but not read.  That gives no warning, but it could. An internal class with a public field which is not written DOES give a warning in C# 2.0.

  • Anonymous
    May 18, 2007
I vote for #4.  Situations that might or might not be mildly bad, but where the compiler does not have enough information to tell, are exactly what FxCop and tools like lint are for. It would be nice if FxCop were bundled into the next release of Visual Studio, with a simple interface to "Build at FxCop level".  Of course FxCop should have a nice way to be told in the code, "I meant to do that, move along."

  • Anonymous
    May 18, 2007
    The first thing I thought of before even seeing your list of ideas was an attribute based approach.

  • Anonymous
    May 20, 2007
    This is a pertinent issue to my organization. We use a custom OR-mapping framework that uses reflection as described in your first scenario. We get many instances of the warning for unused/uninitialized fields and unfortunately sometimes our developers end up with the bad habit of ignoring compiler warnings - ANY warnings. It would be excellent if something like Option #5 could be implemented.

  • Anonymous
    May 21, 2007
I would vote for #5 iff (if and only if) this attribute can solve some other compiler problems, like inlining or type inference optimization, not just a warning. An attribute like [External] to flag that the field is used in external (or unsafe) code could be used. The problem is System.Reflection(.Emit), which can really mess things up, but I don't think that everyone else should pay the price just because I like to play around with Reflection.Emit. Using attributes/keywords to make optimizations possible (or rather to switch off optimizations) is already incorporated with the volatile keyword, so using attributes/keywords to solve Reflection.Emit or unsafe code problems wouldn't really change the feeling of the language; but it would have the huge downside that it would break a lot of C# 2.0 code.

    On a similar note: I get warnings about unused fields in private nested structs if I only reach these fields indirectly in unsafe code, using pointer manipulations and pointer casts to get to them (which is possible and "safe" if I have a fixed layout on the struct). E.g.: Struct1* s1Ptr = &s; Struct2* s2Ptr = (Struct2*)s1Ptr; In this case I would get a warning about the fields in Struct2 not being used, even though they might be used.

    What I am trying to say is that #3 is an OK solution, since you are giving warnings in too many cases already anyway, and I think developers realize that you can't think of all the strange cases. If you don't like to add even more possibly incorrect warnings, I would vote for roberthahn's solution (i.e. one more warning level).

  • Anonymous
    June 08, 2007
I vote #2. There was never a guarantee that the GC wouldn't collect unused fields after their last use, was there? (It does, after all, reserve the right -- and exercises that right! -- to collect an object even while one of its own methods is running.) If it's that important that the reference stay alive beyond its last use, the class can do a GC.KeepAlive() on it in the destructor.
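
A sketch of that GC.KeepAlive idea (names invented); the call in the finalizer doubles as a static "read" of the field:

```csharp
using System;

internal class Holder
{
    private object resource = new object();  // stand-in for the real reference

    ~Holder()
    {
        // Keeps 'resource' referenced up to this point, and also counts as
        // a read of the field as far as the compiler is concerned.
        GC.KeepAlive(resource);
    }
}
```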

  • Anonymous
    July 02, 2007
    I vote #5.  We use reflection a lot, and it violates standard information hiding, so it is vitally important developers reading the code can identify these fields.  An attribute would solve both problems, and conveniently issue a warning when the attribute is forgotten.

  • Anonymous
    July 05, 2007
Basically what's needed, in my opinion, is greater flexibility and control in the hands of the developer, while at the same time keeping the tedium to a minimum. Another way to look at it is that the settings for warning levels are slightly too crude. All warnings have a severity level, but they also have a separate type of impact on your code; CS0649 etc. isn't necessarily a warning, but that can only be determined on a case-by-case basis. #5 works very well in that it would play nicely with documentation, signalling that this field will be modified in ways beyond static analysis. It's also the only proposed solution with case-by-case flexibility.

  • Anonymous
    August 28, 2007
#5! IIRC, in C++ there was also something like UnusedParameter which was there to suppress the warning.