The Future Of C#, Part Five

When we're designing a new version of the language, something we think about a lot is whether the proposed new features "hang together". Having a consistent "theme" to a release makes it easier for us to prioritize and organize proposals, it makes it easier for our marketing and user education people to effectively communicate with customers, it's just all-around goodness.

If you look at C# 2.0, it was a bit of a grab-bag. The big features were clustered around the notion of enabling rich, typesafe programming with abstract data types that represent collections of data -- and thus generic types and iterator blocks. But there was a whole lot of other stuff in there as well: implementing anonymous methods was a major feature that doesn't fit well with this theme. And there were other more minor features as well: partial classes, improvements to properties, and so on.

With C# 3.0, the theme was very clear: language-integrated query. Anything that did not directly support LINQ was immediately made lower priority. It is rather amazing to me that partial methods and auto-implemented properties got in at all; that they were relatively easy features to design, implement, test and document was what saved them.

What then is the theme of C# 4.0? Again, it seems like rather a grab-bag: covariance and contravariance, improved interop with dynamic languages, improved interop with legacy COM object models, named and optional parameters. It also seems like a pretty small set of new features compared to generics or query comprehensions.
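
To make that list concrete, here is a sketch of two of those features as they appear in the public previews (CopyFile is an illustrative method, and syntax details could still change before release):

    // Named and optional parameters: parameters with defaults may be
    // omitted at the call site, and arguments may be passed by name.
    public void CopyFile(string source, string destination,
                         bool overwrite = false, int bufferSize = 4096) { /* ... */ }

    // Call site: skip 'overwrite', name 'bufferSize'.
    // CopyFile("a.txt", "b.txt", bufferSize: 65536);

    // Covariance: a sequence of a derived type may be used as a sequence
    // of a base type, because IEnumerable<out T> is covariant in C# 4.0.
    // IEnumerable<string> names = new[] { "hello", "world" };
    // IEnumerable<object> objects = names;   // legal in 4.0, illegal in 3.0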

That was deliberate. Some feedback that we received loud and clear throughout the C# 3.0 ship cycle was "this is awesome, we need these language features immediately!" and, somewhat contradictorily, "please stop fundamentally changing the way I think about programming every couple years!" Rather than trying to find some way to yet again radically increase the expressiveness and power of the language, we decided to spend a cycle on making what we already have work better with the other stuff in our programming platform infrastructure.

"Now actually works the way you'd expect it to" is not really a theme that gets people excited, but sometimes you've got to stop running forward at full speed and take some time to fix the existing stuff that is annoying a lot of people. (When I was on the VSTO team I petitioned the C# team to please, please make ref parameters optional on calls to legacy COM object models, but they were too busy with designing LINQ; I'm delighted that we've finally gotten that in.)

We also want to make sure that we are anticipating the problems that people are about to face and mitigate them now. We know that dynamic languages and object models designed with dynamism in mind are becoming increasingly popular. Given that there will be stronger demand for statically typed C# to interoperate with them in the future, let's get dynamic programming interoperability in there proactively, rather than be reactive about it later.
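
For example (a sketch based on the preview materials; GetCalculator stands in for any source of a dynamic object, such as an IronPython script or a COM automation object):

    dynamic calc = GetCalculator();
    int sum = calc.Add(10, 20);   // member lookup is deferred to runtime,
                                  // with no casts or manual reflection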

Looking forward, it's not clear what exactly the theme of future (hypothetical!) versions of the language will be. The expected onslaught of cheap hardware parallelism looms large in our minds, so that's a possible theme. Enabling metaprogramming is another possible theme on our minds, though it is not at all clear how that would happen. (Make C# its own metalanguage? Extend expression trees to statement trees, declaration trees, and so on? Open up the internals of the compiler and provide an object model that lets people generate programs directly? It is hard to say what direction is the right one to go in here.) Fortunately, people way smarter than I am are thinking about these things.

  • I know Microsoft likes to go for the grand and the exciting, but how about more practical stuff?

    I don't have the largest developer circle, but I know of absolutely no one in a circle of 50 or so who uses LINQ, or cares about binding to anything on a user interface, etc.

    How about something as simple as an optional compiler warning telling me when an argument is not being used in a method, or when the value returned by a method is never assigned to a variable, etc.?

    And ditto what Sky Beaver said.

  • I'd REALLY like to see support for generic classes and operators as first-class entities.  To see what I mean, try to write a generic class for matrices that supports all the operators you might want to overload: +, -, *, /, ^.  This is not allowed in C# 3.0.  You end up creating two copies of the class, one for float and one for double.

    I believe this could be done with a 'where' clause in the class declaration that lets you list the types supported, i.e. where T is float, double, int, long, uint, ulong.
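
    For what it's worth, one workaround that already exists in C# 3.0 is to compile the operator into a delegate via expression trees; a sketch (Operator<T> is a hypothetical helper, similar in spirit to library helpers like MiscUtil's Operator class):

    using System;
    using System.Linq.Expressions;

    public static class Operator<T>
    {
        // Compiled once per closed type; throws at type-initialization
        // time if T has no + operator.
        public static readonly Func<T, T, T> Add = CompileAdd();

        private static Func<T, T, T> CompileAdd()
        {
            ParameterExpression a = Expression.Parameter(typeof(T), "a");
            ParameterExpression b = Expression.Parameter(typeof(T), "b");
            return Expression.Lambda<Func<T, T, T>>(
                Expression.Add(a, b), a, b).Compile();
        }
    }

    // Inside a generic Matrix<T>, element addition then becomes:
    //     T sum = Operator<T>.Add(left[i, j], right[i, j]);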

  • I'd like to put my vote in for parallelism (however it's implemented).

    Quad core processors will be ubiquitous in the next 12 months (if not sooner) and dual-quad as well as 6 & 8 core processors are just around the corner so I'd say parallelism is already here.

  • I love the extensions in 3.0; I hope the extensions in 4.0 are just as good!

    @other Thomas & Jay: in C# 3.0 this can be done very nicely, like so:

    Extension method:

    public static class EventStuff
    {
        public static void Raise(this EventHandler handler, object sender, EventArgs args)
        {
            if (handler != null) handler(sender, args);
        }
        // and others
    }

    and call your events like this (inside your class):

    public event EventHandler MyEvent;

    protected virtual void OnMyEvent(EventArgs args)
    {
        MyEvent.Raise(this, args);
    }

    This way you don't waste memory and CPU time subscribing an empty do-nothing delegate just to avoid the null check at every raise site.

    Personally I'd really like a non-nullable reference type. Something like:

    object! x = null; // generates an exception

    And an extended coalesce operator, possibly:

    int x = a.b.c.d.e !! 0; // x would become 0 if any of a, b, c, d or e were null

    (A rough C# 3.0 approximation of both ideas is sketched at the end of this comment.)

    Also, treat lambdas, delegates and methods properly like values, so that their hash values don't depend only on the signature, and delegates generated from the same code tree can be sensibly compared for equality.
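
    For comparison, here is roughly how far C# 3.0 can get toward the non-nullable and coalescing proposals above (a sketch; NonNull<T> and With are hypothetical helpers, not framework types):

    using System;

    public struct NonNull<T> where T : class
    {
        private readonly T value;
        public NonNull(T value)
        {
            if (value == null) throw new ArgumentNullException("value");
            this.value = value;
        }
        public T Value { get { return value; } }
        // note: default(NonNull<T>) still bypasses the check, which is
        // one reason language-level support is wanted
    }

    public static class Maybe
    {
        // approximates the proposed !! chaining
        public static TResult With<TSource, TResult>(
            this TSource source, Func<TSource, TResult> selector)
            where TSource : class
            where TResult : class
        {
            return source == null ? null : selector(source);
        }
    }

    // var d = a.With(v => v.b).With(v => v.c).With(v => v.d);
    // int x = d == null ? 0 : d.e;   // clumsy next to the proposed a.b.c.d.e !! 0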

  • GuyO: nooo! Now I can read assignments and understand what they mean without having to be familiar with the classes involved.  Please, C#, keep it that way.

  • Interesting article, Eric.

    Finding the happy medium between sweeping change and stagnation is a consideration that we are all faced with as developers regardless of who our target audience is.  I found it surprising to hear just the extent of this issue as it relates to further C# development given the relatively advanced target audience that your team has.  It prompted me to share some of my own thoughts and perspective on the topic as someone whose target audience is often not even computer literate: http://icrontic.com/forum/blog.php?b=115

    Regards,

    Rob

  • Virtual Constructors would be real nice.

    Pity Anders didn't bring this concept in from Delphi.

  • To build on top of what Nick said, "Virtual Class Methods" would solve a few problems that I've had over the years with C#.  I could then do virtual static classes.  Anders Hejlsberg designed this capability into Delphi, so I'm curious as to why it has been omitted from C# for so long.
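
    For reference, the closest C# idioms today are an abstract factory method or a generic new() constraint (a sketch; Shape, CreateEmpty and Factory are illustrative names, not an existing API):

    public abstract class Shape
    {
        // An abstract factory method standing in for a Delphi-style
        // virtual constructor: derived classes decide what gets built.
        public abstract Shape CreateEmpty();
    }

    public static class Factory
    {
        // Or a generic new() constraint: resolved per closed type,
        // but not polymorphic through a base-class reference.
        public static T Create<T>() where T : new()
        {
            return new T();
        }
    }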

  • Disposal

    I'd love to see the need for directly calling Dispose() go away. So much plumbing code in such an advanced language!   There must be a way to improve this whole issue.  Maybe Dispose could be called automatically.

    That's what the "using" statement is for -- Eric 
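
    (A minimal illustration of the "using" statement; it assumes System.IO is imported, and Dispose is called automatically when the block exits, even if an exception is thrown:)

    using (StreamReader reader = new StreamReader("data.txt"))
    {
        string text = reader.ReadToEnd();
    }   // reader.Dispose() runs here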

    Object Oriented

    I'd love to see improvements in the language that help limit scope.  Example: class fields that are used to serve up properties.

    How is that different from auto props? -- Eric
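
    (The C# 3.0 auto-implemented property being referred to; the backing field is compiler-generated and never appears in class scope:)

    public int Count { get; private set; }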

    Deterministic Lifetimes

    A language feature to add to a class?  Is there a role in C# for this concept while still maintaining performance?

    That's what the "using" statement is for -- Eric 

    Revisit Existing Features

    There is a tendency to forget about features that were released in previous versions. These are often the basis for much of the language's use:

    * SqlClient
    * Forms
    * Click Once
    * Crystal Reports

    These would greatly benefit from some updating.

    Sure. But none of those are language features. -- Eric

    -Pete

  • An overview of the popularity of different programming languages can be found here:

    http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html

    In my opinion those improvements, although interesting, are unlikely to skyrocket today's low popularity of C#.  I would like to change "please stop fundamentally changing the way I think about programming every couple years!" to "please stop telling me what you think is the best way of programming and give me a language for programming the way I would like". In my opinion, any improvement that does not stick to the best performance is not worthwhile.

    I would like to see "the struct pointer juju problem" solved:

    static void Foo<T>(T value) where T : struct
    {
        uint size = sizeof(T); // Error 1. Cannot take the address of, get the sizeof…
    }

    This is a basic and widely demanded feature of the language. Also, some level of inheritance of fields for structs would be a huge improvement to the language.

  • We are witnessing a machine processing performance explosion in action today, and over the next few years it is set to accelerate: increasing core counts, Larrabee etc are prime examples. The days of a boost via GHz uplift are gone. One (not the only one, I grant you) of the historic arguments for C and C++ over C# and other CLR-based languages has always been their efficiency. For many classes of applications this advantage is likely to shift from code-level efficiency to who can best implement optimisation at the higher level, i.e. parallelism. In future, performance optimisation effort in developing our applications may well be best spent at this level, and it will need best-of-breed support to do so.

    At a lower level, for some classes of application the parallelism boost will also come from wide-vector SIMD processing: 128 bits on today's CPUs and 512 bits on Larrabee, I'm led to believe. And how long before it is made wider on mainstream CPUs? This is currently completely ignored by the CLR-targeting languages, including the CLR version of C++. Why?

    The battleground on which future applications win or lose may well be who best leverages these rapidly growing processing resources. I actually like what some have described as the "boutique" C# language changes, but I am also fearful for C#'s future: if it cannot provide easy and reliable leveraging of this huge processing potential, then applications built with C# will fail to compete, and C# will lose its battle with the other languages.

    My customers do not care whether my staff are using LINQ under the bonnet to talk to the database or internal data, or using the latest generic constructs, or more elegantly implemented patterns. They will care when my competitors reliably bring out a 5, 10 or 20 (in 10 years?) times performance boost. I have always liked C# as a developer and C# for its productivity as a company owner, and I have always hated the fact that it plays second fiddle to C and C++ for crunching power and perceived performance.

    I appreciate the new "Parallel For" etc (sketched below), but the C# team (and the CLR and VS teams) need to continue to invest heavily in parallel processing leverage (coding, debugging, parallel-processing runtime analysis etc), and although I appreciate that C# (and the CLR) have some of their roots in the "safe programming" and metal-independent paradigm, I would really like to see them recognise the blossoming SIMD capabilities of the hardware.

    Let's put on hold for a little while some of the more "boutique" or "ivory tower" language changes and really arm C# with the weapons for the performance battle that is coming.
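
    (The "Parallel For" referred to above, from the Parallel Extensions previews, looks roughly like this; TransformAll and transform are illustrative names:)

    using System;
    using System.Threading.Tasks;

    static void TransformAll(double[] data, Func<double, double> transform)
    {
        Parallel.For(0, data.Length, i =>
        {
            data[i] = transform(data[i]);   // iterations may run concurrently
        });
    }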

  • @mastermemorex: you can do this:

    static void Foo<T>(T value) where T : struct
    {
        // requires using System.Runtime.InteropServices
        int size = Marshal.SizeOf(value);
        // or Marshal.SizeOf(typeof(T)); note this is the marshaled size,
        // which can differ from the managed size
    }

  • * Non-nullable types baked into the language would be the biggest win for me. (I would favour a compiler switch to move a project to non-nullable by default, and then allow the ! syntax (or ? for consistency with Nullable<T>, albeit perhaps with some confusion, so I'm ambivalent there) for nullable parameters and variables.)

    * After that, an enum constraint (though Jon's Unconstrained Melody might work around that for quite a few scenarios); the usual approximation is sketched below.

    * Generic constraints on static operators, with full JIT optimisation down to specific instructions for native types.
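
    The usual approximation referred to above (a sketch; struct + IConvertible narrows T but cannot say "must be an enum", hence the runtime check):

    using System;

    public static class EnumUtil
    {
        public static T ParseEnum<T>(string text) where T : struct, IConvertible
        {
            if (!typeof(T).IsEnum)
                throw new ArgumentException("T must be an enum type.");
            return (T)Enum.Parse(typeof(T), text);
        }
    }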

  • I know I may be a little late, but anyway...

    Eric: Well, life's not a bed of roses. And then some. Let's assume this discussion went into a wonderwall... ;)

    1st -- AOP. Please, Microsoft, buy Gael Fraiteur's company, and integrate the stuff into .NET!

    2nd -- Attributes evaluated at runtime, like attributes whose parameters are lambdas that refer to actual code, or to extension methods

    3rd -- Integration of the Rx into the .NET core

    4th -- Attributes, like in 2nd, for getters and setters (separate)

    Why am I suggesting these?

    With AOP, you could fulfill a whole lot of the proposals made before. With at-runtime-evaluated attributes, you are close to a solution for immutables (or at least thread-safes), and with the 3rd and 4th, you'd kill the nasty fly.

    Whatever the syntax could look like, you'd get:

    * Less reflection in code, because arbitrary attributes/aspects would be evaluated at runtime (e.g. by the MEF, without explicitly coded strategies and lookalikes).

    * Metaprogramming via attributes/aspects

    * Immutability via attributes/aspects (OnEnterSetLockAttribute(), OnLeaveReleaseLockAttribute(), FinallyReleaseLockAttribute() and so on)

    * A kind-of native coupling between the event subsystem and the Rx

    In my spare time, I am working on a distributed, self-coordinating microkernel (the "Disco-Mike") devoted to creating EBCs (event-based components), and I am feeling a lot of pain with all the reflection stuff. If PostSharp were still free, I'd not suffer a tenth of the pain. If Rx integrated more smoothly, I'd not suffer a tenth of the pain. If there were at-runtime-evaluated attributes, at least 30% of my code, or even more, would be unnecessary. If there were separate attributes for getters and setters, the Observer pattern would be like "batteries included".

    EBA (event-based architecture) and first of all DEBA (distributed ...) is, imho, the very way to a "zero-coupled" world in which contracts could be based on a single notation, itself based on XSD and WSDL (and maybe on a successor of SCA). This would in turn make interfaces themselves obsolete.

    Just another step of abstraction. Just another step of less coupling.

    Not just one but a noteworthy number of nasty flies killed.

    Carsten.
