July, 2004

  • Eric Gunnerson's Compendium

    Future language features & scenarios


    We're starting to think about the release after Whidbey in the C# compiler team.

    I know that it seems strange that we have just shipped the Whidbey beta, and we're already talking about what comes next, but we're fairly close to being “shut down” for the compiler right now. So, we're taking the “big picture” perspective.

    Since we're in this mode, now's the time to suggest things that we should add to the language and/or scenarios where you think the language falls short.

    An ideal comment would be something like:

    I'd like to be able to support mixin-style programming in C#

    If it isn't obvious what you're suggesting, please provide a link.

    It would also be good to suggest through code - something like:

    I currently am forced to write code like

    <ugly code here>

    and it's really ugly.

    I won't be able to respond to all comments nor will we implement all of them (or perhaps any of them), but we will consider all of them.



  • Eric Gunnerson's Compendium

    Naming generic type parameters...


    There's been quite a discussion going on on the MSDN feedback site over the naming of generic type parameters.

    If you look at the definition of the generic collections classes, you'll see things like:

    List<T>

    Dictionary<K, V>

    What's up with those T's, K's, and V's?

    If you look at our beta1 library design guidelines (can't find a published link right now), you'll see that they suggest using single-character names for type parameters rather than longer, more descriptive names. But those guidelines weren't always there. In fact, before we had them, you might have seen a class like:

    class Dictionary<Key, Value>

    which seems reasonable enough. But one day, you're browsing code, and you look at a method:

    public void Process(Key k)
    {
        // code here...
    }

    What's Key? It looks like it's a type, and unless you know that there's a type parameter named “Key”, you're likely to be confused for a while. We therefore decided to use the single-character naming approach, and my experience is that it's worked pretty well.

    When you're working on a generic class, you normally have a small number of generic type parameters, and therefore it's not that hard to keep them straight. When you're using them, C# intellisense is nice enough to give you a description for the type parameter so you remember what it is.

    That is, it's nice enough to do that if you remember to *write* the description using the <typeparam> tag. The IDE will add this tag automatically when you type the /// before your generic class definition.
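    To make that concrete, here's a minimal sketch of what the generated tags look like (the class name and descriptions here are my own, hypothetical examples):

    ```csharp
    /// <summary>A simple dictionary mapping keys to values.</summary>
    /// <typeparam name="K">The type of the keys in the dictionary.</typeparam>
    /// <typeparam name="V">The type of the values in the dictionary.</typeparam>
    class MyDictionary<K, V>
    {
        // With these descriptions in place, intellisense can tell you
        // what K and V mean when you start typing "MyDictionary<".
    }
    ```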

    Unfortunately, the beta1 drop doesn't have these descriptions for the built-in collection classes, so you don't get any help right now if you type “List<“. We'll be fixing that.

  • Eric Gunnerson's Compendium

    What have Microsoft blogs meant to you?


    I'm doing a few slides on blogging for a meeting that the C# team is having this Friday, and I need some good customer quotes to put on the slide.

    So, if you have comments about the C# team bloggers, please leave me a comment.

  • Eric Gunnerson's Compendium

    C# 3.0? You haven't even shipped 2.0 yet?


    One of my readers commented (and I'm paraphrasing here):

    Why are you asking me about features for the next version of C#, when the 2.0 version hasn't even shipped yet?

    A fair question.

    One of the problems that we have in the tools division is the long lead time between the time when we're done with a product and the time that it's actually available to our customers. Modulo a few small items and some bugfixes, the C# compiler is essentially done for Whidbey, so that's why we're thinking about the next version.

    We've always had this offset, but because we're attempting to expose our designs much earlier than we have in the past, it may look strange from the outside. As involving customers early on in design becomes more common, this should be less weird.

    Hope that explains things a little better.

  • Eric Gunnerson's Compendium

    JavaOne 2004: Final Thoughts




    Before you read my summary, I encourage you to read the daily posts I wrote. The comments on those posts can be very illuminating as well.


    Day 1

    Day 2

    Day 3

    Day 4






    The biggest difference between conferences like TechEd and PDC and JavaOne is the level of community involvement. The Microsoft conferences have come a long way from where we were even a few years ago, but our conferences are more about explanation, and JavaOne is more about collaboration. While we do spend lots of time looking at what customers are asking for and incorporating it into our products, the JSR process gives a feeling of working together that our approach lacks. Or, to put it another way, our approach is “Get feedback, design, get feedback”, while the JSR approach is “design as a group”. I suspect that there’s a lot of customer willingness to be more involved in our design process. MSDN Feedback is a start on this road, but there’s a lot of opportunity for us to be closer to customers.


    I’m also impressed at the speed at which JSRs progress and how they’re decoupled from main releases of the product. There’s an opportunity to design/code/release in several iterations rather than in one big release that we aren’t currently utilizing (though we have discussed doing this in the past).


    I also like the fact that many of the community efforts are highlighted in the conference.


    Java Language


    The “Tiger” release of Java (was 1.5, now named 5.0) provides some interesting additions to the language, but after looking at the features in more detail and sampling the public reaction, I’m not convinced of the overall success of their additions.


    • Generics are useful, but the reference type limitation is unfortunate. Wildcards add a lot of complexity, and it's not clear if the benefit is worth it.
    • Annotations support design-time code injection (through a separate tool), which is a powerful capability, but the combinatorics of multiple processors operating on the same source is challenging, as is debugging the resulting code. I think the “your source is modified” model is tougher to get your mind around than an API-based approach.
    • Enums have more capability than you really need.
    • Static imports are of some utility, but may hurt readability.

    The combination of static import and design-time annotations is going to cause some issues around code readability and understandability. On the other hand, given the amount of boilerplate code required for J2EE approaches (changing with EJB 3.0, I understand), I can understand the desire to simplify things.


    Yesterday I found out that noted Java architect and author Joshua Bloch has left Sun to go to Google.


    Groovy Language


    I forgot to write up my thoughts about the session that I attended on Groovy.


    Groovy is a new scripting language (well, they call it an “agile dynamic language“, but I prefer the term scripting) for the Java platform. I attended this with some interest as I've written a lot of Perl, and I understand the value of such languages.


    However, as a semi-professional language designer, I'm not sure about Groovy. Their goal is to be Java-like, but they've also added a lot of “improvements” that result in Groovy being a weird cross between a simple scripting language and a high-powered research language. For example, closures and operator overloading are not things that one would expect in a simple language.


    I didn't see a lot of evidence of rigor in the language design. That may be okay for the target audience, but I think it may lead to a confusing set of features and weird interactions.


    Java IDEs


    Several features in the Java world make it hard to write IDEs.


    The fact that IDE development isn’t coupled to API development causes a lot of problems. I’m not an expert on Java development, but the fact that there were lots of demos of laying out forms and hooking them up leads me to believe that such features aren’t as expected in the Java IDE world as they are in VS. The fact that the ASP.NET team owns the whole stack from design-time to runtime gives a much more coherent experience.


    I also think that the sheer number of Java IDEs means that Java developers either need to learn more than one IDE (and deal with the difference in approach), or go without in some areas.


    Many of the features demoed are either derivative of VS, or direct copies of VS.


    Eclipse is an interesting wild-card in this area. Because users often use a collection of modules that aren’t necessarily written to work together, consistency can suffer. On the other hand, the low-level extensibility is far better than what we have in VS, and the fact that you can deal with your code at the AST level is a very powerful capability that isn’t available in Visual Studio.


    Aspect Oriented Programming


    I attended several sessions on AOP. It’s still a fairly divisive topic – there are definitely powerful capabilities available in AOP, but my opinion is that the long-term maintainability of code with AOP is likely to be poor, especially as more than one aspect is involved. Similar to design-time annotations, code using AOP does more than it says it does.


    Device-Contained UIs


    The Jini (link) concept of keeping device UI code on a network-accessible device is tremendously powerful, if it can be done in a secure manner. It’s a great way to avoid having to install separate software to talk to a device. The success will depend on whether we end up with the wholly connected world that some people envision.





    There was a big “Java Timeline” display that ran 50 or so feet in the tunnel between Moscone South and North. There was space and markers for people to write their own comments on the timeline, and by the end of the conference, there was little free space. It was a cool thing to do, and would be especially cool if it ended up online so others could look at it.


    Sun played a fair number of games during their demos - “Utilizing Java Technology” is a favorite phrase that was used a lot, but without any clarification as to how much technology was being used.

  • Eric Gunnerson's Compendium

    Fixed statement and null input...


    I'd like some input on a change we're considering for Whidbey.

    Consider the following wrapper class:

    unsafe class Wrapper
    {
        public void ManagedFunc(byte[] data)
        {
            fixed (byte* pData = data)
            {
                UnmanagedFunc(pData);
            }
        }

        void UnmanagedFunc(byte* pData)
        {
            // ...
        }
    }

    In this class, I've fixed a byte[] array so I can pass it to an unmanaged method. I've included “UnmanagedFunc()“ in my program to illustrate what it looks like, but I would normally be calling a C function through P/Invoke in this scenario.

    The problem with this code is that some C APIs accept null values as arguments. It would be nice to be able to pass a null byte[] and have that translate to a null pointer, but the fixed statement throws if it gets a null array.

    There's an obvious workaround:

    public void ManagedFuncOption1(byte[] data)
    {
        if (data == null)
        {
            UnmanagedFunc(null);
            return;
        }
        fixed (byte* pData = data)
        {
            UnmanagedFunc(pData);
        }
    }

    and a less obvious one.

    public void ManagedFuncOption2(byte[] data)
    {
        bool nullData = data == null;
        fixed (byte* pTemp = nullData ? new byte[1] : data)
        {
            byte* pData = nullData ? null : pTemp;
            UnmanagedFunc(pData);
        }
    }

    The problem with the workarounds is that they're ugly, and they get fairly complicated if there is more than one parameter involved.

    The language spec (section 18.6) says that the behavior of fixed is implementation defined if the array expression is null, so we could change our behavior so that fixing a null array would result in a null pointer rather than an exception. 
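    With that change, the straightforward wrapper would handle the null case with no special-casing at all. A sketch of what I mean (hypothetical behavior, since today's compiler throws on a null array):

    ```csharp
    public unsafe void ManagedFunc(byte[] data)
    {
        // Under the proposed behavior, pData would simply be null when
        // data is null, and the call below would pass a null pointer
        // straight through to the unmanaged code.
        fixed (byte* pData = data)
        {
            UnmanagedFunc(pData);
        }
    }
    ```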


    1. Have you written code where this would be useful?
    2. Are there situations where the change of behavior would cause you problems?
    3. Are the workarounds simple enough that this isn't an issue?


  • Eric Gunnerson's Compendium

    35 years ago today...


    35 years ago, two men came within 20 seconds of dying 250,000 miles away.

    Hours after averting tragedy, a very young Eric Gunnerson got to stay up late to watch Neil Armstrong walk on the moon.

    I was 5.

    I've always been a bit of a space nut, and my memory is full of important points in space history:

    • Armstrong and Aldrin's lunar landing
    • Apollo 13
    • Apollo Soyuz
    • Skylab
    • The first flight of Columbia
    • The last flight of Challenger
    • The last flight of Columbia

    But watching NASA the last 15 years makes me sad. Despite some good efforts to reform the culture and get back to the kind of organization that recovered from Apollo 1, NASA has not succeeded in reforming itself, and it's stuck with a hugely expensive shuttle and a space station without a clearly-defined purpose. The unmanned and astronomy programs continue to be excellent, but manned spaceflight has lost its way.

    At this point, I think you'd get a better result if you cancelled two shuttle flights and gave Burt Rutan and Lockheed Skunkworks the money for one flight, and let them work to advance the state of the art.

    But having said that, I would like to commemorate the dedication and sacrifice of all those involved in Apollo. Many sacrificed money and their family relationships to the cause. Some gave their lives. Gus Grissom, Ed White, and Roger Chaffee died in the Apollo 1 fire. Charles Bassett, Theodore Freeman, Elliot See, and Clifton Williams died in training mission plane crashes.

    Here's hoping that this is not the last time that humans will walk on other worlds.

  • Eric Gunnerson's Compendium

    using - It's not just for memory management


    When we designed the using statement waaaay back in the first version of C#, we decided to call it using because we thought it had other purposes outside of the usual use:

    using (StreamWriter writer = File.CreateText("blah.txt"))
    { ... }

    Today I was responding to a customer suggestion, in which he suggested language support for the ReaderWriterLock class, so it could be more like the built-in lock statement. Unfortunately, ReaderWriterLock.AcquireReader() doesn't return something that's IDisposable, but it's pretty easy to add that with a wrapper class.

    I spent about 15 minutes writing a wrapper, then I had a thought, did a Google search, and came across this article, which already has it done. Plus, the version in the article probably wraps the whole class, and has likely been tested.
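    For reference, the 15-minute version looks roughly like this (a sketch, not the article's code; ReaderWriterLock and AcquireReaderLock are the real .NET APIs, but the ReaderLockHolder name is my own):

    ```csharp
    using System;
    using System.Threading;

    // Wraps ReaderWriterLock.AcquireReaderLock/ReleaseReaderLock in
    // IDisposable so the lock can be used with a using statement.
    class ReaderLockHolder : IDisposable
    {
        readonly ReaderWriterLock rwLock;

        public ReaderLockHolder(ReaderWriterLock rwLock)
        {
            this.rwLock = rwLock;
            rwLock.AcquireReaderLock(Timeout.Infinite);
        }

        public void Dispose()
        {
            rwLock.ReleaseReaderLock();
        }
    }
    ```

    Then `using (new ReaderLockHolder(myLock)) { ... }` releases the reader lock at the end of the block, even if an exception is thrown.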

  • Eric Gunnerson's Compendium

    Answer: What is the lifetime of local instances?


    Answer to this poser

    I wasn't sure the answer to this question was observable, so I wrote a short program:

    using System;

    class Early
    {
        ~Early()
        {
            Console.WriteLine("Early Cleaned Up");
        }
    }

    class Test
    {
        public static void Main()
        {
            Early e = new Early();
            GC.Collect();
            GC.WaitForPendingFinalizers();
            Console.WriteLine("Done Waiting");
        }
    }

    The output from this is

    Early Cleaned Up
    Done Waiting

    In other words, there is no guarantee that a local variable will remain live until the end of a scope if it isn't used. The runtime is free to analyze the code that it has and determine that there are no further usages of a variable beyond a certain point, and therefore not keep that variable live beyond that point (i.e., not treat it as a root for the purposes of GC).
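    If you do need the instance to stay live to a particular point, you can say so explicitly with GC.KeepAlive. A minimal sketch along the lines of the program above:

    ```csharp
    using System;

    class Early
    {
        ~Early() { Console.WriteLine("Early Cleaned Up"); }
    }

    class Test
    {
        public static void Main()
        {
            Early e = new Early();
            GC.Collect();
            GC.WaitForPendingFinalizers();
            Console.WriteLine("Done Waiting");

            // e counts as a GC root until this call, so the finalizer
            // can no longer run before "Done Waiting" is printed.
            GC.KeepAlive(e);
        }
    }
    ```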

  • Eric Gunnerson's Compendium

    Extending existing classes


    One other point that came up in the static import thread was extending existing classes. It's not uncommon to want to add specific methods to existing classes - or at least have the appearance of doing this. For example, I might want to add a new method to the string class, so I can call it with:

    string s = ...;

    string r = s.Permute(10, 15);

    rather than

    string r = Utils.Permute(s, 10, 15);

    We've discussed this a few times in the past, and we think this is an important scenario to consider. There are a number of reasons why you wouldn't want to actually modify the class, of which security is just one consideration. But one could think of a way of writing something like (very hypothetical syntax):

    class Utils
    {
        public static string Permute(string s, int a, int b) {...}
    }

    and have the compiler then allow you to use it as if it were part of the string class. This is very useful, but perhaps not terribly understandable, and would certainly be open to abuse.

    Another option would be to allow the following definition (also hypothetical)

    class MyString<T> : T where T : string
    {
        public string Permute(int a, int b) {...}
    }

    Now, if you use a MyString, you can add methods onto the existing class. This would also be useful for adding a specific implementation of something onto an existing class, somewhat in the way that mixins work.

    We have no plans in this area, but will likely discuss the scenario more in the future.

  • Eric Gunnerson's Compendium

    Le Tour


    This week starts my favorite part of the summer, “Le Tour de France”. The Tour is likely the hardest athletic challenge in the world. Last year, Armstrong covered 2,100 miles at an average speed of just under 26 MPH, including 7 days of riding over mountain passes.

    The best TV coverage is on OLN, though you need to get past the whole “Cyclism” thing. I recommend recording the early morning live broadcast, as they use different commentators for the afternoon recap.

  • Eric Gunnerson's Compendium

    Memories of Youth


    I remember the hot summer days of my youth. Our backyard patio had ants. I had a 5” magnifying glass. You can guess the rest.

    I've always had a soft spot for the power of focused sunlight, though I never took it as far as the creators of Melt Man did.

    Reading Jeremy's post on his Fresnel Lens makes me pine for those days.

  • Eric Gunnerson's Compendium

    Decreased performance when compiling with no options...


    One of the things that we've done in Whidbey is add some extra IL instructions to improve the debugging experience. This allows you to (for example) set a break on a closing brace.

    Because /o- is the default setting, performance when you just compile with “csc” will be slightly degraded in Whidbey. If you've been building without setting /o+ and you care about perf, you'll want to pass /o+ explicitly.
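    Concretely, if you drive the compiler by hand, that just means adding the flag to the command line (the file name here is a placeholder):

    ```shell
    csc /o+ Program.cs
    ```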

    If you're using VS release and debug configurations, this will be set automatically for you.


  • Eric Gunnerson's Compendium

    JavaOne: Day 4


    JavaOne Day Four


    This is the last of my daily posts. I've got a list of bigger topics I'm going to post on, but I want to ruminate on them for a few days before I do so.


    Keynote: Stretch Your Mind


    “A Gosling-fest” this morning


    JavaStudio Creator Demo – Again.


    This and Looking Glass have been the things that get talked about over and over and over. It’s getting a little bit tiring for me, especially since I don’t know enough about the underlying system to be able to tell how cool what they’ve done really is.


    I’m watching a demo right now with a 10 point font on a giant screen. It’s almost possible to read the text.


    Demo is showing validation for web entries, which is being hand-coded. I think ASP.NET has better support than this. The demo is the epitome of a non-flashy demo. That may be bad, or that may be good – sometimes good demos can be viewed as too slick.


    JavaStudio Enterprise demo


    There are some interesting collaboration features, that let you do collaborative development, with two people viewing and editing the same file. It’s an interesting capability.


    Java Real-time


    There’s a real-time (purportedly “hard” real-time) version of Java available, in which you can have threads that aren’t affected by the GC. They show an inverted pendulum real-time control device. This is a really good demo – the pendulum can swing up to vertical, and then hold vertical, both against outside forces, while other anti-social programs are running on the box, and even fail-over to a different system.


    This is the best keynote demo of the conference.


    Bluetooth/Jini stuff


    There’s an interesting concept here. Rather than try to ship out the software to control a device separately, make it part of the device firmware that can be downloaded to your client. The client just needs to be able to support downloading the code and running it, and then it gets a customized control straight from the device.


    This is pretty cool, though I worry about the hackability. If there’s a security hole, I just need to put a little bluetooth device near the ATM, and I can probably grab control from whoever comes by.


    The demo is okay if you're a hardware guy - which I am - but for others, being able to turn on an LED from an interface on an IPAQ isn't terribly exciting.


    Java and Open Source


    This was a panel discussion, entitled “the Big Question”. The panel members:


    Brian Behlendorf (Apache, Collab.NET)

    Rob Gingell  (Sun fellow)

    James Gosling

    James Governer

    Lawrence Lessig

    Justin Schaffer (MLB.com)

    Rod Smith (IBM fellow)


    Moderator: Tim O’Reilly


    I took a lot of notes, but I’ll try to summarize it down into something pretty minimal.


    This talk came as a result of an open letter that Rod Smith sent to Sun about open sourcing Java.  I have to give Sun a lot of credit for engaging in such a discussion in an open forum – I’m pretty sure that Microsoft isn’t at that point. Yet.


    Tim O’Reilly started by asking how many in the audience contributed to open source projects in their free time. By my estimate, less than 5% of the audience raised their hands.


    I’ll try to summarize Rod Smith’s position, and talk about the general discussion. The reason I say “I’ll try” is because it’s Rod from IBM and Rob from Sun talking, and I’m not sure my notes are correct in all cases.


    The most important point of Rod’s position is that there are lots of changes under way on the internet, such as the move towards SOA, and that their impression is that the pace of innovation is slowing down, and that the current rate is not fast enough. I don’t think Rod really put forth a good vision for how things would be different if Java were open source.


    Brian added some comments on the experience that Apache has had in dealing with the JCP process. It’s been harder to do than they would like, and he feels that there are many open source people who won’t work with Java because of the difficulty (or perceived difficulty).


    Rob did most of the talking on the Sun side.


    This is a continuation of a journey we’ve been on in the last 10 years, originally it was Sun only. Talking about evolution is important, but we have to be cautious about it – the promise we’re offering is that Java programs will not be lied to by things claiming to be Java out of malice or differences in quality of implementation. We need to maintain that as we evolve.


    Gosling added:


    There’s a process that happens, where somebody does something really cool, specialized for their needs, but cool. And then the community writes a JSR about taking the concept and coming up with the standard way to express it. Sometimes the community makes a mishmash of it, but over time the argument happens about how to take innovations and spread them about. The JCP isn’t so much about innovation, but about harvesting innovation.


    I think his last point there is very interesting.  Rob also mentioned that their processes are tuned to doing things the way they are now, and doing open source would probably be more expensive.


    Lessig spent some time talking about not using licenses to keep trust, but using other devices (which, as far as I could tell, he never described in detail) to keep the wrong thing from happening. Both he and Brian have a lot of faith in the community doing the right thing on compatibility, but others see how it would be in the interest of others to be non-compatible (or to not put a focus on being compatible).


    The best perspective came from Justin, who said:


    What are you trying to accomplish with making Java open source? Why take what’s working well for many businesses and put it at risk?


    This got a huge amount of applause from the audience.


    Sun was very open in changing their process to make it easier for teams to interact. I’ve been struck throughout the conference how much in the Java world gets done through JSRs, and I think that’s one area where Sun has done a much better job than Microsoft has. Just the fact that there’s a public published place for all such ideas is a huge improvement over the way we do things at Microsoft.


    Web services interoperability and performance: Java 2 platform, enterprise edition, and .NET


    Note to Sun: get somebody to edit the titles of your talks. The worst I found is:


    The P2P Sockets Project. Easily Create JXTA Technology based peer-to-peer applications using your skills on the Java 2 platform Enterprise Edition (J2EE)


    This talk wasn’t very good, as the speakers weren’t very familiar with .NET. Their results (which show J2EE is faster) may be correct, but they didn’t seem to approach the process with much rigor. I don’t think the results from somebody familiar with J2EE but not with .NET are of much use. Nor are the inverse.


    Particularly annoying was the statement that there was no way to write provider-independent database code in .NET.


    I left after about 30 minutes to get lunch and look for a power outlet.


    What’s new with JBuilder (JBuilder 10)?


    Value of the IDE:

    • Accelerate daily coding tasks (claim ownership of Code Insight, error insight. Didn’t VS do those first?)
    • Goto Definition, UML, etc. from type
    • Automate and integrate the lifecycle. Version management
    • Protocol and standards transparency

     One of the cool things that JBuilder does is distributed refactorings. This allows you to save a refactoring, ship it with a new version of your product, and then allow your client to apply that refactoring on their code to easily migrate to the new version. That’s a nice feature to have.


    (boy am I tired of 4-quadrant graphs. I’ve seen them in too many talks today).


    One of the real challenges in the Java space is that the API designers don’t really work with the IDE folks. The IDE talks are sprinkled with comments such as “deployment descriptors really aren’t too easy to understand” or “it’s not very easy to write EJBs”.


    Code productivity

    • Templates, synch edit, code formatting
    • Refactorings (ooh, in the UML browser)
      • 10 different refactorings, though two of the listed ones are really about understanding code rather than refactoring it.
    • Local striping (snapshot of project, local source code)
    • Code folding (region support)
    • Filters in the structure pane
    • Custom debugger views
    • Scope Insight


    Version management


    Support StarTeam, CVS, VSS (?), ClearCase

    Cutting edge (requirements tracking, bug tracking, task management)


    Demos (on XP Pro)


    Nice feature – the icon in solution explorer changes to show which files have compilation errors in them. That makes it somewhat easier to navigate errors and fix them.


    Templates are very similar to snippets, but AFAICT, they only have a simple “surround” functionality.


    Syncedit – select a block of text, and then within it if you edit an identifier, it changes all instances of the identifier as you edit it. Sort of like an inline refactoring. Cool.


    Eclipse 3.0 – New and Noteworthy


    Approximately 500 people for this talk.


    Erich Gamma, John Niegard


    85% of their downloads are on Windows. Few Linux users (< 10%), similar for Mac users.


    • Eclipse platform
    • Rich client platform
    • Java application development tools


    Themes for 3.0

    • User experience – large scale products
    • Responsiveness
    • New look


    The plan is to talk about things that matter to end users, but the first 20 minutes cover the architectural changes that were made in 3.0.


    They have an interesting “cheat sheet” feature. It renders in a pane in the IDE, and has user-clickable buttons that cause things to happen.


    Java related stuff


    There were a lot of slides here with lots of details, so this is just an overview.


    Their big push has been architecture to enable deep features. They have a really good AST-based system, and from what I can tell, they’re reconstructing code on the fly from AST changes. This is a really nice infrastructure, and they use it in a number of ways.


    For example, when they do an extract-method refactoring, they search the ASTs to find other usages of the same pattern, and offer to extract them out as well.


    They also have some very nice “code exploration” features. For example:


    • Clicking on a parameter highlights all uses of the parameter (background to yellow)
    • Clicking on an exception in a catch block highlights all the places where that exception could be thrown in the try block.
    • Clicking on an interface highlights all interface implementation methods.

    This is a pretty compelling product, especially for the price.

  • Eric Gunnerson's Compendium

    Century Training


    A few people have asked me what training I did for my century ride.

    I'm currently riding 4 days a week (well, I took 10 days off at JavaOne and on vacation). Monday/Wednesday/Friday, I do a 15 mile ride which takes about an hour. It has some up and down hills, and a steady 2.5 mile hill in the middle. I try to ride that hill at a somewhat painful steady-state, where my legs are hurting but I don't feel like I'm going to die.

    On the weekend (usually Saturday), I'll go on a long ride. I started at around 30 miles in March, progressed to 50 miles in May, and then peaked at 70 miles the week before the century. I aim for a pace that will leave me tired at the end but not so I couldn't ride more if I had to, which obviously varies based upon the distance.

    For the century, I found it to be hugely useful that I had ridden the bulk of the course before, so that I knew where all the hills were and how steep they were.

    I'm planning on adding more mileage as the summer progresses, and perhaps a new bicycle.

    Oh, and I haven't gotten around to getting a heart rate monitor yet, though that would probably be a useful addition. I bought Chris Carmichael's book a while back, and I'm planning on trying some of his training drills - I would really like to be able to bump my average speed up a few miles per hour.

  • Eric Gunnerson's Compendium

    Taken to task on my static import post...


    Darren took me to task on my post on static import.

    Despite the fact that he said, “I code in patterns and in english... the syntax of the language is an afterthought!
    If you don't at least tolerate those assertions, don't bother reading the rest of the post
    ..“, I did read the rest of the post.

    I'm not sure exactly what Darren means when he says, “syntax is an afterthought“. I do like his two assertions:

    a) when coding, nothing is more important than simplicity and readability
    b) my test of readability =
    "how long would it take someone who spoke english, but has never seen a computer in their life, to understand the code"

    I think where we differ is on how we define readability. I agree that something like

    Sin(x)

    is quicker to parse than

    Math.Sin(x)

    But is it easier to understand? Well, that really depends on the person reading the code. If they're familiar with the code and Sin is a very descriptive name, then the first one is to be preferred. If they're unfamiliar with the code (either it's not theirs, or it's code they haven't looked at for a while) and/or Sin isn't a descriptive name, then I don't think it's as clear cut. If I don't know what Sin does, I need more information. This could be supplied by an IDE if I hover over the method, but it's not obvious just by looking at the code.

    Darren makes some other interesting points that you should read, but I'd really like to comment on his last paragraph:

    But when it comes down to it, the biggest reason we should be able to directly "use" static classes is choice. If you don't want to use this feature, then DON'T. Having it in the language doesn't affect you in the slightest. I very much DO want the syntax, even though only for two classes. It affects me, because I want to use it - it doesn't affect you, because you don't... so why do you care?

    Every feature in the language affects every user of the language. The question is not “do I use the feature“, but “is the feature used in code that I need to understand“? If the second is true - and I think it is true in almost every case, whether it be on multi-team groups, or code that you come across in books or on the net - then you need to understand the feature. It's going to be discussed in every book (well, every *good* book) on the language, and people aren't going to feel comfortable with the feature until they understand it.

    Or, to put it another way, “simplicity is also a feature“.

    One other question came up in this thread:

    Why are static imports bad, but shortcutting namespaces ok?

    Because I say so.

    Okay, I guess I'll have to do better than that.

    Static imports blur together two different things - methods on the current class, and static methods elsewhere in the system. You can't tell one from the other.

    For example:

    Math.Sin(x)
    Console.WriteLine(s)

    are both obviously static method calls. But

    Sin(x)

    is currently a local method call. With static imports, it could either be that or a static method somewhere else.
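    The ambiguity is easiest to see in Java, where static import already exists (and which inspired this proposal). A minimal sketch - the class name and the `twice` helper are invented for illustration:

    ```java
    import static java.lang.Math.sin; // Java's static import - the feature under discussion

    public class StaticImportDemo {
        // A local helper; at the call site it looks identical to an imported static.
        static double twice(double x) {
            return x * 2;
        }

        public static void main(String[] args) {
            double a = Math.sin(0.0); // obviously a static call: the type is named
            double b = sin(0.0);      // static import: resolves to Math.sin, but reads like a local call
            double c = twice(0.0);    // genuinely local - same surface syntax as the line above
            System.out.println(a == b && c == 0.0);
        }
    }
    ```

    Reading `sin(0.0)` and `twice(0.0)` in isolation, nothing distinguishes the imported static from the local method - which is exactly the blurring described above.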

  • Eric Gunnerson's Compendium

    Alfred is worried...


    Those of you who read my holiday letter post from last year may remember that Alfred is my Roomba. That is, if you haven't blotted the experience of reading the post out of your mind.

    See, I warned you, and what did you do - you went back and looked at the post anyway. Isn't the picture of Alfred wearing the basket cute?

    Roomba has been a loyal servant for these past months, and except for occasional bouts of catatonic paralysis at the edge of our stairs, has performed his duties with devotion and diligence.

    But there's a new kid in town:


    The Roomba Discovery is here:

    New features are:

    • Bigger bin
    • Better battery
    • Self-charging (roomba finds his charger on the floor)
    • Dirt detection
    • Automatic hat wearing
    • Enhanced “sulk“ mode
    • Teenager mode (cleans while you watch, then goes and watches TV when you leave)


  • Eric Gunnerson's Compendium

    Why a manned space program?


    One of the comments on my Apollo post asked, “Why do we have a manned space program?“

    That's a fair question, but before I answer it, there are two things I want to talk about.

    The first is to give credit where credit is due. A tremendous amount of really good science has been done by unmanned probes, and with the exception of the moon, virtually all our knowledge about the solar system has come from these missions.

    The second is to be careful to define what we're discussing. When I talk about “a” manned program, I'm not necessarily talking about the current manned program.

    So, why should we have a manned space program? Well, you should be sure to go read some of the information written by the NSS, but here's my opinion:

    There is something about personal participation that fires the human imagination. Whether it's climbing Mt. Everest, travelling to the south pole, or flying higher than anybody else, people have always aspired to be the first to do something. Striving to accomplish that which is difficult is a defining human characteristic.

    So that's the abstract reason. But there are also practical reasons.

    The first reason is that space has the potential to provide nearly limitless resources. Whether it is possible to exploit them economically isn't yet clear, but if we could replace half of the world's coal-fired power plants with solar power satellites, we could make a tremendous reduction in the release of sulfur, radioactive elements, and greenhouse gases. Clean power from space has a lot of appeal.

    There are also opportunities around other resources. If we could mine asteroid iron, our reliance on environmentally-damaging mining on Earth would be reduced.

    Given our continued population growth, the ability to spread out and get resources elsewhere is very interesting.

    The final reason concerns the long-term survival of the species. We have good evidence of global catastrophes in the past, and without spreading out to other planets, it's fairly certain that such an event will happen again.

    So, that's what I think, but those who are smarter and more informed than I am have written a lot on the subject.

  • Eric Gunnerson's Compendium

    Killing comment spam

    Is there a .TEXT utility to let me kill all comments that have a specific URL in them? I've been getting hit with comment spam recently...
  • Eric Gunnerson's Compendium

    Why language features die, and language extensibility


    Rick Byers wrote (some time ago):

    Thanks for the awesome post Eric. I'd be interested in hearing more detail about the sorts of things that cause features to be rejected. Is it common to reject a feature that you think would be valuable only because of syntactic compatibility limitations (parser ambiguity, breaking change, etc)?

    What are your thoughts on how language evolution should work in general (outside the confines of C#)? Do you think it would be possible to have languages that could more readily accept the type of extensions you've wanted to make to C# but couldn't?

    For example, do you think there would be value in a language that added a layer of abstraction between the syntax presented to a user, and the persisted form? Eg. if a language was stored on disk as an XML representation of the parse tree, then you could evolve the language (add keywords, etc.) and rely on the IDE tools to intelligently present the code to the user.

    I've been saving up this one for a while now.

    It's common for us to reject some features because they don't fit our language philosophy.

    It's also fairly common for us to reject a feature because we can't come up with a good syntax for it. Sometimes this is because we just don't like the constructs we come up with: they're ugly, or they don't really make things simpler for the user, or they don't cover the right scenarios. The syntax we can use is heavily constrained by the existing structure of the language. Take a look at your keyboard, look at all the special characters, and tell me which ones aren't already used for something in C#. The list is very short, so we are constrained by the operators that are available.

    We're also constrained by whether our change would be breaking, and in what situations things would be breaking. C# 2.0 has no major breaking changes, and though that isn't an absolute rule for us, it's certainly a goal. Adding new keywords is, in general, a bad thing to do.

    Finally, we're constrained by what the runtime can/will implement, and whether things can be implemented across languages. Some features only make sense if they're done in all the languages, but that means all languages need to agree before we do it.

    Rick also asked about language evolution.

    There are different opinions about this. Some believe that languages should never change. Others believe that they should be able to extend their language at will. An extreme example of this is Intentional Programming.

    I think I'm one of the few people around who have actually played around with intentional programming. Conceptually, it's interesting, but in the real world, I think the “everybody designs their own language” approach is challenging at best. One can envision a world where the user representation is extensible but the underlying representation is standard, but I think that's a bad world to be in. It may be great for you, but it's probably not good for your team, or the poor guy who takes over your code two years from now. And there's a lot to be said for the “code in a text file“ world.

    We have well-defined ways for users to add functionality - through classes, methods, interfaces, etc. I think that languages should only consider adding features when there is an obvious shortcoming in solving the problem through existing functionality. At that point, you need to understand those issues and determine whether a language solution is the right way to address them.

    So I'm not big on extensible languages. Existing facilities - such as macros in C++ - do have their uses, but are a disaster from a readability standpoint (both for the compiler and the developer).

  • Eric Gunnerson's Compendium

    Beta1 Suggestions and bug reports


    For a long time, sending a bug or suggestion to Microsoft has been a bit of a challenge. For the .NET Beta, we've introduced the MSDN Feedback Center.

    I will ask you to bear with us, as we're just getting up to speed on this. You may get answers that aren't as good as you'd like, or we may miss some issues. If you get into a situation where the right thing doesn't happen, feel free to drop me a line, and I'll see what I can do.

    This is for beta1 only right now, but we'll expand as time goes on.

  • Eric Gunnerson's Compendium

    Google is my personal search engine


    Google gives high weight to blogs. In fact, Google gives too much weight to blogs. Too often I've gone to research something that I only know a little about (say, training for a century) and found that my post is on the first page of results. Want to know what it's like to turn 28? I'm apparently the expert.

    It does have its uses, however. Today I needed to find an old blog post, and I found that if I type:

    eric <word> <word>

    more often than not, the first hit is my blog post. Convenient.

    Of course, it may be that Google is doing this to protect me from the “Total Perspective Vortex“.

  • Eric Gunnerson's Compendium

    Anders Hejlsberg - Programming data in C# 3.0


    Dan pointed me to a new video on Channel9 about some of the things we're talking about for C# 3.0.

  • Eric Gunnerson's Compendium

    JetBrains ReSharper 1.0 now available

    JetBrains (of IntelliJ fame) has shipped V1.0 of ReSharper.
  • Eric Gunnerson's Compendium

    Stevens Pass Summit Lakes (Grace Lake)


    Last Sunday, we went on a short hike to the summit lakes at Stevens Pass Ski Area.

    It was the first hike for us this season, and the first one with canine company.

    The real trail starts at the top of the Brooks lift (roughly centered in the photo). But to get there, you get to climb 900' up a steep jeep trail (well, you can walk straight up the hill if you want). We started at 8AM to miss the heat, so it wasn't bad.

    The trails to the lakes are fairly easy - mostly up and down, but the trail is pretty overgrown in places, so you'll need to spend some time pushing through underbrush.

    The lakes are typical alpine lakes - shallow, and very pristine. I didn't take pictures of all of them, but here's one of the very tiny ones.

    Recommended, but my knees are a bit sore after the hike down.
