A simple sample for C# 4.0 ‘dynamic’ feature


Earlier I posted some code to start Visual Studio using C# 3.0:

using System;
using EnvDTE;

class Program
{
    static void Main(string[] args)
    {
        Type visualStudioType = Type.GetTypeFromProgID("VisualStudio.DTE.9.0");
        DTE dte = Activator.CreateInstance(visualStudioType) as DTE;
        dte.MainWindow.Visible = true;
    }
}

Now here’s the code that does the same in C# 4.0:

using System;

class Program
{
    static void Main(string[] args)
    {
        Type visualStudioType = Type.GetTypeFromProgID("VisualStudio.DTE.10.0");
        dynamic dte = Activator.CreateInstance(visualStudioType);
        dte.MainWindow.Visible = true;
    }
}

At first, it looks the same, but:

  1. Referencing EnvDTE.dll is no longer required, and you don’t need the using EnvDTE directive either – in fact, you don’t need to reference anything at all
  2. You declare the ‘dte’ variable to be dynamically typed with the new dynamic contextual keyword
  3. You don’t have to cast the instance returned by Activator.CreateInstance
  4. You don’t get IntelliSense as you type in the last line
  5. The calls to dte are resolved and dispatched at runtime
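To make points 4 and 5 concrete, here’s a minimal sketch that doesn’t involve EnvDTE at all: a misspelled member name still compiles cleanly, and the failure only surfaces at runtime as a RuntimeBinderException.

```csharp
using System;
using Microsoft.CSharp.RuntimeBinder;

class Program
{
    static void Main()
    {
        dynamic s = "hello";

        // Resolved and dispatched at runtime: prints 5
        Console.WriteLine(s.Length);

        try
        {
            // Typo: "Lenght" compiles fine, because nothing is
            // checked until the call site is bound at runtime
            Console.WriteLine(s.Lenght);
        }
        catch (RuntimeBinderException e)
        {
            Console.WriteLine("Runtime binding failed: " + e.Message);
        }
    }
}
```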

It’s a trade-off, but I still view dynamic as yet another useful tool in the C# programmer’s rich toolbox.

  • After having to work with VB.Net for the last year, I don't like the idea of bringing late binding into C# much at all. It's kinda convenient on its face, but I've seen so many cases of InvalidCastExceptions that the compiler could be catching that it just makes me shake my head. One man's convenience is another man's scourge.

    I'd like to think there will be some kind of safeguard to prevent that, but I won't hold my breath.

  • >I'd like to think there will be some kind of safeguard to prevent that, but I won't hold my breath.

    What kind of safeguard would you expect?  The whole point of late binding is that it's done after the compiler's already had its chance to ensure correctness.

    If that level of type safety is a requirement for you, there's nothing in C# 4.0 stopping you from building against an Interop DLL the way you do today.

  • Timothy: Just pretend instead of "like" I said "hope". I'm not pretending that the compiler can magically figure out intent, which is why I didn't say "Why did you add 'dynamic' when you already have 'var'".

    My issue is that it ultimately leads to lazy programming habits that defer static error checking that won't make it past the developer with runtime checks that make it to the user. A very specific example which I have personally had to deal with is:

    Public Sub Foo(ByVal datasource As Object)
        Try
            datasource.Sort("FirstName")
        Catch
            ' Swallow the exception
        End Try
        Me.MyGrid.DataSource = datasource
    End Sub

    Now, this is just my opinion, but the code above is bad design: it's not very OO, it incurs a performance penalty in the frequent case that the incoming object doesn't declare that exact Sort signature, and it's a nightmare to debug because of all the exception noise.

  • This is good for simple script, but I do not feel this being suitable for an application consisting of more than one file.

    Take IntelliSense -- I may know what the methods of DTE are, but the second solution will require *all* developers in my team to know it as well. While with IntelliSense they learn it as they need.

    Or just simple maintainability -- if we upgrade to a later version of COM interface, what will break? Add X hours of manual testing and you will know.

  • Kirill Osenkov has posted a simple example showing code that uses the forthcoming "dynamic" keyword

  • While I like the trend of bringing support for dynamically evaluated types, I agree with comments above that in this particular case the trade-off can feel too big for many developers. We don't easily give up strong types. I tried to sketch how C# can support both: dynamic types with IntelliSense support in the cases when it can be retrieved at compile time. Here's what I got:

    http://bloggingabout.net/blogs/vagif/archive/2009/05/03/intellisense-and-dynamic.aspx

  • This makes me sad. This was a step in the wrong direction for C#. It was bad when VB/VBA called it "Variant", and it's still a horrible idea now.

    Not trying to shoot the messenger, I'm just sayin..

  • Kirill, what about the perf comparison of both?

  • Hi Andrew,

    During the first call there is a small, insignificant perf hit while the runtime binder resolves and caches the call site. Every subsequent time the execution flow passes through that dynamic statement, it will be just as fast, because the resolved call site is cached and reused. This is really similar to the JIT-compilation process.

    In any case, in this particular scenario it doesn't make sense to measure perf at all, because starting up Visual Studio will be billions of times slower than the perf loss caused by dynamic.

    In general, you have to measure on a case-by-case basis. I don't expect dynamic to cause perf regressions.
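    To see the caching in action, here's a rough sketch (the absolute numbers will vary wildly by machine; the only point is that the first dynamic call pays the binding cost and the rest go through the cached call site):

```csharp
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        dynamic d = "some string";
        int len;

        Stopwatch sw = Stopwatch.StartNew();
        len = d.Length;        // first call: the binder resolves Length and caches the call site
        sw.Stop();
        Console.WriteLine("First dynamic call: {0} ticks", sw.ElapsedTicks);

        sw.Restart();
        for (int i = 0; i < 1000000; i++)
        {
            len = d.Length;    // subsequent calls reuse the cached call site
        }
        sw.Stop();
        Console.WriteLine("1,000,000 cached calls: {0} ms", sw.ElapsedMilliseconds);
    }
}
```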

  • This example is probably a bad example of the usefulness of dynamic as it doesn't solve anything that couldn't otherwise be solved relatively easily in regular ol' .NET.  Save a cast?  No need to import a library...I dunno; those two don't sell me on the usefulness of dynamic.

    On the other hand, it does make possible one scenario that required quite a bit of code to resolve: double dispatch.

    See: http://www.charliedigital.com/PermaLink,guid,e65b5c84-b54d-468a-81bf-211e35d8fb5c.aspx

    And then see: http://www.charliedigital.com/PermaLink,guid,93e4f51f-043f-49b6-815a-f3dd1e2ad7b3.aspx
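    The core of the trick is small enough to sketch inline (the Shape/Renderer names here are hypothetical, not taken from those posts): casting the argument to dynamic defers overload resolution to runtime, so the overload is picked by the argument's actual type rather than its static type.

```csharp
using System;

abstract class Shape { }
class Circle : Shape { }
class Square : Shape { }

class Renderer
{
    public string Draw(Circle c) { return "circle"; }
    public string Draw(Square s) { return "square"; }

    // Casting to dynamic defers overload resolution to runtime,
    // so the most specific Draw overload is chosen based on the
    // actual runtime type of the shape (double dispatch).
    public string Draw(Shape s) { return Draw((dynamic)s); }
}

class Program
{
    static void Main()
    {
        var r = new Renderer();
        Shape shape = new Circle();       // static type is Shape
        Console.WriteLine(r.Draw(shape)); // prints "circle", not the generic Shape case
    }
}
```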

  • Hi all,

    I agree that this is probably not the best example for dynamic.

    Also, Charles, the double-dispatch thing is extremely cool!

    Thanks,

    Kirill
