It Already Is A Scripting Language


My recent post about the possibility of considering maybe someday perhaps adding "top level" methods to C# in order to better enable "scripty" scenarios generated a surprising amount of immediate emphatic pushback. Check out the comments to see what I mean.

Two things immediately come to mind.

First off, the suggestion made by a significant number of the commenters is "instead of allowing top-level methods, strengthen the using directive." That is, if you said "using System.Math;" then all the static members of that class could be used without qualification.

Though that is a perfectly reasonable idea that is requested frequently, it does not actually address the problem that top-level methods solve. A better "using" makes things easier for the developer writing the call. The point of top-level methods for scripty scenarios is to make it easier on the developer writing the declaration. The point is to eliminate the "ritual" of declaring an unnecessary class solely to act as a container for code.
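To make the commenters' suggestion concrete: a hypothetical strengthened "using" might look like the sketch below. (This was not legal C# at the time; the syntax is invented for illustration.)

using System.Math;   // hypothetical: imports the static members of Math

class Demo
{
    static double Hypotenuse(double a, double b)
    {
        // Sqrt and Pow no longer need the "Math." qualifier...
        return Sqrt(Pow(a, 2) + Pow(b, 2));
    }
    // ...but a containing class is still required; the convenience is
    // all on the call side, not the declaration side.
}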

Second, and more generally, I am surprised by this pushback because of course C# already is a scripting language, and has had this feature for almost a decade. Does this code fragment look familiar?

<%@ Page Language="C#" %>   
<script runat="server">   
  void Page_Load(object sender, EventArgs e) 
  {
      // whatever
  }
</script>
...

Where's the class? Where are the using directives that allow "EventArgs" to be used without qualification? Where's the code that adds this method group to the event's delegate? All the ritual seems to have been eliminated somehow. That sure looks like a "top-level method" to me.

Of course we know that behind the scenes it isn't any such thing. ASP.NET does textual transformations on the page to generate a class file. And of course, we recommend that you use the "code behind" technique to make a file that actually contains the methods in the context of an explicit (partial) page class, to emphasize that yes, this is a class-based approach to server-side processing.
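The class ASP.NET generates from the fragment above is roughly of this shape (a simplified sketch; the real generated code is considerably more elaborate):

// Simplified sketch of the kind of class ASP.NET generates from the
// "script" block; the details of the real generated code differ.
public partial class GeneratedPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, System.EventArgs e)
    {
        // whatever
    }
    // ASP.NET also emits the plumbing that wires Page_Load up to the
    // Load event -- ritual the page author never had to write.
}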

But you certainly do not have to use "code behind". If you're an ASP traditionalist (like me!) and would rather see C# as a "scripting language" that "scripts" the creation of content on a web server, you go right ahead. You can put your "top level" methods in "script" blocks and your "top level" statements in "<% %>" blocks and you'll thereby avoid the ritual of having to be explicit about the containers for those code elements. The ASP.NET code automatically reorganizes your "script" code into something that the C# compiler can deal with, adding all the boring boilerplate code that has to be there to keep the compiler happy.

But consider now the burden placed upon the developers of ASP.NET by the language design. Those guys cannot simply parse out the C# text and hand it to the C# compiler. They've got to have a solid understanding of what the legal code container topologies are, and jump through hoops -- not particularly difficult hoops, but hoops nevertheless -- to generate a class that can actually be compiled and executed.

This same burden is placed upon every developer who would like to expose the ability to add execution of end-user-supplied code to their application, whether that's in the form of adding extensibility via scripting, or by enabling evaluation of user-supplied expressions (such as queries). It's a burden placed on developers of productivity tools, like Jon Skeet's "snippet compiler", or LINQPad. It's a burden on developers who wish to experiment with REPL-like approaches to rapid development of prototypes, test cases, and so on.
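For example, a snippet tool built on the long-available CodeDOM API has to wrap every fragment the user types in a class before the compiler will accept it; something along these lines (a sketch, with invented names):

using System.CodeDom.Compiler;
using Microsoft.CSharp;

static class SnippetCompiler
{
    // Wraps user-supplied statements in the class-and-method ritual the
    // C# compiler requires, then compiles the result in memory.
    public static CompilerResults Compile(string snippet)
    {
        string source =
            "using System;\n" +
            "public static class Snippet__Wrapper\n" +
            "{\n" +
            "    public static void Run()\n" +
            "    {\n" + snippet + "\n    }\n" +
            "}\n";
        var options = new CompilerParameters { GenerateInMemory = true };
        using (var provider = new CSharpCodeProvider())
        {
            return provider.CompileAssemblyFromSource(options, source);
        }
    }
}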

I am not particularly excited about the convenience of the ability to save five characters by eliding the "Math." when trying to calculate a cosine. The exciting value props are that we might be able to lower the cost of building tools that extend the power of the C# language into third-party applications, and at the same time enable new ways to rapidly experiment and develop high-quality code.

Of course we do not want to wreck the existing value props of the language in doing so; as I said last time, we also believe that there is huge value in the language design naturally leading professional, large-scale application developers towards building well-factored component-based programs.

Like all design decisions, when we're faced with a number of competing, compelling, valuable and noncompossible ideas, we've got to find a workable compromise. We don't do that except by considering all the possibilities, which is what we're doing in this case.

  • Steve Jobs said, "Be careful who you sell to, because they will control your future."

    Your audience is biased toward strong typing because no one survives in the C# world without going through the motions of strong typing.

    I respect the strong-typers if only because of their numbers.

    There is also a huge dynamic programming userbase I'll call the "dynamics".

    The dynamics know there's a huge group of strong-typers and they respect that. So dynamic programming shouldn't be done under the ".cs" extension, but rather under the ".css" extension.

    Dynamics grew up on Lisp and Perl and then Ruby. But they're weary of languages defined by implementation rather than specification. They also hate calling .Net when it means marshalling native types into strings and back out into .Net types.

    Dynamics want C# with dynamic typing, functional programming (F#), and unrestricted Linq usage. They love the enhancements of C# and just want to see them rounded out so that syntax isn't so awkward for Monads and map-concats and backtracking predicates and local variable-binding pattern matching. They would please also like symbols.

    But strong typers say there's no need for this in C#. Strong typing is a discipline which yields code that can be proven by a theorem prover to be less buggy than untyped code. I spend about five times more time fussing with types than I ever spent tracing incorrect code usage.

    I think C# was designed with only one mistake: C# was advertised as a "safe" language like Java, but some people think it's free of GPFs only because of compile-time method binding. Those who were raised on Pascal were suffering wounds from abuse of C's (void *).

    But functional programming techniques combine well with optional strict typing. Like the "unsafe" keyword, there should be a "dynamic" keyword for use only by trained professionals who will take responsibility to ensure the objects they pass to a function respond to the right interfaces. In return, they are liberated from time spent typing and coercing and so forth.
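    For what it's worth, the "dynamic" type announced for C# 4.0 behaves much like the keyword described here; a sketch:

    // With "dynamic", member lookup is deferred until runtime; the
    // caller takes responsibility for the object's actual shape.
    dynamic x = "hello";
    Console.WriteLine(x.Length);     // resolved at runtime against string
    x = new[] { 1, 2, 3 };
    Console.WriteLine(x.Length);     // same call site, now an array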

    A dynamic function takes either a fixed or possibly varying number of arguments and has the value of the last expression it evaluates. A yield from a called function causes a yield from the innermost nested iterator. The syntax is C#, but the programming is two times faster.

    Dynamism is needed in C# for these reasons:

      - rapid prototyping

      - maintaining code

      - programmer productivity

      - code clarity

    Example 1: Clarity

    Here's a line from my recent C# 3.0 program:

    g.DrawLine((float)Math.Abs(4*Math.Sin(Math.PI*v)+CenterX), 0,0,0);

    This is fine for people who are religious about strong typing. But is it really less bug-prone than:

    g.DrawLine(abs(4*sin(PI*v)+CenterX), 0,0,0);

    I know the strong typers need to be satisfied and let's give them the entire ".cs" filespace, past, present and future, to litter with their restrictions.

    But if we're honest, which is faster to write? Which is easier to maintain? I've made several mistakes with the SIN function and the parentheses, but I never forgot my typing.

    What about:

    float i = (float)Math.Sin(1/2);

    Have we really solved the obvious problem? The compile-time strong-type policeman didn't warn me about this being the same as:

    float i = (float)Math.Sin(0);

    In any dynamic typing language I would have known I had to take personal responsibility for passing the correct arguments to my functions and I would have written:

    i = sin(1.0/2.0);

    Strong typing didn't save me here. It doesn't save me from subtle issues governing decimal conversions.
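    To spell out the pitfall in play: 1/2 is integer division and evaluates to 0 before Sin ever runs, so the problem is the arithmetic conversion rules rather than typing as such. A minimal illustration:

    double wrong = Math.Sin(1 / 2);     // integer division: same as Math.Sin(0)
    double right = Math.Sin(1.0 / 2);   // floating-point division, as intended
    float f = (float)Math.Sin(1.0 / 2.0);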

    A method written to an ultra-specific type won't generalize to work with other objects. I'll have to create an interface and integrate it into the code.

    And I'd better have access to the source code because I can't extend certain classes, etc.

    See, most of the time strong typing requires that I think about distinctions which have little to do with the problem I'm trying to solve.

    There are two things which come from Functional Programming. From the "neat" side come strong typing, Monads, and unchanging structures (these are already present in C# but are way too ugly when defining functions as first-class objects). From the "powerful" sometimes scruffy side comes dynamic typing, implied type coercion (if I go to the trouble to define methods to convert between types I want it to be used all the time!!), continuations, parallel processing integration, asynchronous IO integration, better control over calls between browsers and servers, and fully extending Linq's power into C#. (I know you're doing some of this anyway; please be orthogonal in your approach!)

    I am creating a huge library with my current application and I may have to go to extraordinary lengths to make my library methods into non-brittle generalizable utility functions which are palatable for maintenance (my religion on maintenance is the opposite of other posters': I find strongly typed libraries to be so brittle that significant changes require re-writing a HUGE amount of code. I have never used a library in a dynamic language which had problems because of typing errors. Range checks within conversion functions are all that's needed! Yes, strong typing increases the theoretical analyzability of a program, but proving a program works using logic is and always will be too expensive a proposition. True program-theorem-proving neatness in programs is the hope of madmen).

    My bottom line: your current user base is a SUBSET of who WILL use C# in the future (just about every programmer will use it-- it will be like English-- because alternatives are dying).

    There's no reason for F# to exist-- it offers a different syntax from C# without need. The features should be integrated into C# and DISABLED by default through a compiler directive. The directive could be something like "--PROTOTYPE" to emphasize a buy-in to the strong-typing religion.

    I use C# all the time. What else would I use on .Net? A language like Ruby or Perl which is defined ad hoc by its implementation??

    If C# doesn't become more dynamic, I will have to turn to F# or else create my own programming environment in order to make progress.

    I can discuss this from any perspective. I just fear this perspective is absent.

  • Hey, Michael Ginn, why don't you check out the Boo language, which can use static typing (but with automatic type inference) OR dynamic typing, all running on top of the CLR.

    http://boo.codehaus.org/

  • To clarify some points you made:

    "This post is about evaluating the benefits and costs of providing a standard mechanism whereby tool providers can make it easy to let their users define what look like "top level" methods in C#"

    I wholly disagree. At no point do you examine the costs of maintaining the code you propose writing. The examination does not go far enough. There are too many respondents that consider your proposal a potential victim of abuse (and it WOULD be rampant), and there seems to be no appreciation for having to live in the house you build. Ryan and TheCPUWizard, above, seem to understand that. Most professionals that live with their own code as well as others' code understand that.

    I find it fascinating the number of comments I've received that assume that we on the language design team are cowboys who will add any old thing to the language. How anyone can look at the design of C# and deduce that the designers don't care about discouraging bad coding practices is quite beyond my comprehension.

    As I've stated many times, first off, this is an idea for a possible future language feature. We don't have a design, we don't have a budget, we have an idea. How we "package" the feature in order to ensure that we continue to meet our goal of being a "pit of quality" language is completely unclear; what seems likeliest if we do it at all is to make the feature a part of the "compiler as a service" API, such that the compiler itself will provide a service whereby it can accept "top level" methods and statements, and provide the same services on those guys that it provides today on sets of files: design-time analysis and code generation. Whether that would then drive changes into the actual language itself, or a variant on the language, is unclear. Either way, we have to design the semantics of the feature, which is the interesting part.

    -- Eric

    Moreover, productivity is not defined simply by shortest time vs highest quality; it does not stop at the end of construction. You are using a manufacturer's definition, but I think most of us engineers can agree, producing code doesn't stop at manufacture. Quality engineering requires planning, implementation, documentation, testing, and maintenance. Adding less-typed features to a strongly-typed language, at best, accelerates implementation, but increases the need for documentation, and complicates testing and maintenance.

    This is what I meant by ambiguous method calls... if you're reading code and the method call or "var"'d Type is one of your own, and you can't immediately recognize it because it's been made unclear, then you're wasting your time. And then, imagine it's code using a Type that you're learning, and your eyes don't pick it up because your brain is too busy trying to find what you just read about; that's just nonsense waste.

    "As I've said a couple of times, the compelling benefit of top level methods has little to do with the call side. Saving those five keystrokes is not interesting."

    But you're only addressing the first of my 3 sentences there... What benefit ARE you talking about that we're not already paying with extension methods? Do engineers really care? It sounds like No.

    "This feature is supported by JScript.NET. Baking in that feature required careful compiler design from the ground up. It would be quite difficult to shoehorn such a feature into C# or VB, when the compilers were not designed for that."

    Actually, I may have fibbed a bit. Microsoft.VisualBasic.VBCodeProvider and Microsoft.CSharp.CSharpCodeProvider each get a CompileAssemblyFromSource() method from CodeDomProvider, I think that fits the bill, and has been around since 1.1. So apologies for the poor example, though to the point, it's not just JScript.

    But JScript can accept new code that runs in the context of already-running code. In JScript you can spit new code at runtime that accesses local variables by name, for example. Fully-fledged "eval" is a hard feature to add to a language post-hoc. Just providing an API that lets you spit a new assembly is comparatively trivial. -- Eric

    Moving aside from the academic, how about we examine collision of such a feature? If I buy 3rd-party control X, which has a PI "global" property or method in it, it would collide with the one .NET comes with (I'm assuming you'd have them move it out of Math), or even one of my own. How do I call the one I want? How do I read the code and know which one I'm calling? How does the reader, 16 months and one generation of developer removed, not get utterly confused? And how does any solution garner an implementation that doesn't look like namespacing (which is what we'd be trying to avoid!)? Is this a fair trade between inventor and user?

    Oh, and to Michael Ginn, my man, why are you using 1/2 instead of 0.5? In any case, the proposals in this article wouldn't resolve that issue either; what you're shooting for is called VB.NET. If you're writing a huge library, and you consider changes to be important to support, then how do your unit tests behave? Don't you find them brittle as well? Therefore, is this really a C# problem? Moreover, your g.DrawLine() sample is not made more clear by removing namespaces, it's made more clear by breaking that one line of code into multiple, separate, well-commented lines. It may look like more code, but you'll thank yourself for it in 2 years when you revisit it. And it can't be about performance, because there's no way EITHER line you presented is performant in any way.

    I also believe libraries like you describe are perfect candidates for F#. It is a language of What, not How, and is specifically engineered to help solve problems in your kind of way.

    And back to Eric, seriously, as a consumer, I don't care whether or not something is easy or hard to do. "Impossible" I might consider caring about, but I don't see that word often in software, with good reason. I don't want to know how difficult it would be to allow enums to define methods (like Java already has!), I want the feature because I'm doing it anyway (in a quirky way, via extension methods).

    All in all, the "cleverness" that CPU refers to is often a dangerous thing for business. Forcing a group of engineers to work with, maintain, code that someone created because it was neat, because they did it fast, did something clever (and therefore is the best judge of quality of their own code, right? Please note the sarcasm), is simple risk. Make the language more powerful, but don't make it into another language (go invent that other language elsewhere!).

    And above all else, heed the feedback you're receiving, as most of it seems negative.

    I do heed it. But most of it also seems to be massive overreaction, frankly.

    Luce was right. No good deed goes unpunished. People asked us loudly to have a more transparent design process. So when we come up with a completely unbaked idea that we're kicking around to see if it might possibly work in some context, I mention it, because that's transparency. And as a result I get fifty comments criticizing my unbaked idea for being unbaked. A little more positivity would be appreciated. -- Eric

  • Thank you Eric for taking the time to make this post.  I appreciate you guys reaching out to us.  I hope the input is useful.

  • Eric,

    When you are talking about top level functions -- what you are really talking about is namespace global functions, correct?  That actually is a good idea -- to have functions that are namespace global, and that when you include the namespace, become global to whatever module did the using statement.

    The classic example would be ASP.NET, where you have this all over the place:

    public HttpSessionState Session
    {
      get { return HttpContext.Current.Session; }
    }

    By all over the place I do, indeed, mean literally all over the place.  Usually, session and the other context variables are used heavily, and probably 95% of ASP.NET applications that are non-trivial reference it a few dozen times per request.

    The minimum number of scope operators to access it, however, is two -- as shown.  There's not really any advantage to that over:

    namespace System.Web.SessionState
    {
      public HttpSessionState Session
      {
        get { return HttpContext.Current.Session; }
      }
    }

    In the context of ASP.NET, we know we're talking about the current Session.  We know what that means, in ASP.NET, and repeating the scope qualifier to get at a singleton then scoping that to get at the instance is not improving code clarity or quality.

    Which is why HttpRequest defines the property, and the derived classes (like Page) inherit the property.  Because having code that sprawls across the screen does not improve readability.

    The issue is that defining a convenience property like this breaks encapsulation; it is not a property of page, and it is not something page should be storing/retrieving.  It is environmental data, required to process the request.

    If aggregation were supported, it would be the "clean" way to handle it -- but aggregation is not supported.  I can't say "pull in the functionality of SessionStateConsumer and add it to the Page class" in C#.  I can say it in C++, from native code.  I could potentially delegate -- but then I'm back at three qualifiers or exposing the convenience methods and repeating code.

    Also, there are legitimate functions in most applications that do not act on any object instance, and that belong to some generically named static class that serves as a namespace just because they "have to be somewhere."

    As long as by "top level," we're talking "in a namespace, but outside of any class" -- it makes perfect sense to provide that functionality.  There's not any greater/lesser chance of collision than if they are in a namespace qualified class, you can always use the qualified name to get at the one you want, and it is cleaner than having a class that exists purely to scope the functions.
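    Under that reading, the hypothetical syntax would be something along these lines (not legal C#; a sketch of the "namespace-level member" idea with invented names):

    namespace MyApp.Web
    {
        // Hypothetical: a property declared directly in a namespace,
        // outside any class.
        public static HttpSessionState Session
        {
            get { return HttpContext.Current.Session; }
        }
    }

    A consumer who wrote "using MyApp.Web;" could then say Session["cart"] directly, while the fully qualified MyApp.Web.Session would remain available to resolve any collision.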

  • I would love to be able to do something like:

    public class MyClass
    {
      public void ExecuteScript(String fileName)
      {
        String script = File.ReadAllText(fileName);
        ScriptingEngine engine = new ScriptingEngine();
        engine.Execute(script);
      }
    }

    where my custom script would be able to interact with the code where it is running.

  • I certainly like C# as a compiled language.  It being compiled is one of the many reasons why I like C# so much.  If people want a scriptable language, why can't they just use something else?  I guess it would be nice to have an interpreter for C# that would allow the use of "script", but for web pages, etc.: isn't scripting an old frame of mind that the language is trying to keep us away from?

  • Eric, the transparency is appreciated by some of us. Thank you.

    I guess I'm not seeing the cost for the developer of a snippet compiler or LINQPad. Probably because I've never tried to develop anything like that, but wouldn't the necessity of generating a class wrapper around the code be a sort of one-time cost that you would pay during development, and be done with?

    Not that it would be free, but once you've written the code to wrap a snippet the user typed into a Snippet__Nonce$Class1, 2, 3, etc. for each snippet they enter, then Bob's your uncle. It is a cost, surely, but a small one-time cost you would pay at the start of the project. It wouldn't be an ongoing pain point, would it?

    Or is there something I'm not getting because I've never built anything like that? Like, something to do with the scope of variables created in snippets, different snippets needing to be able to see each others' data, something like that?
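    The variable-scope concern can be made concrete. Under the per-snippet wrapper scheme described above (names invented for illustration), consecutive snippets compile into unrelated classes:

    // Snippet 1, as wrapped by the tool:
    public static class Snippet__Nonce_Class1
    {
        public static void Run() { int x = 10; }
    }

    // Snippet 2, wrapped separately:
    public static class Snippet__Nonce_Class2
    {
        // "x" was a local in a different method of a different class, so
        // this does not compile. The tool must hoist such locals into
        // shared state, rewrite references, and track types across
        // snippets -- an ongoing cost, not a one-time one.
        public static void Run() { Console.WriteLine(x); }
    }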

    Cheers!

  • There are already scripting languages with top level functions that can access the FCL (JScript.NET, VBScript.NET, PowerShell).  Why not use them for your "scripty" scenarios?

    A customer says "I want to rapidly prototype up some new functionality that I'm considering adding to my two-hundred-thousand-line C# project using a REPL tool like other implementations of C# have." You want me to tell that customer "well then, you should learn JScript!"? -- Eric

    Further, you're advocating a paradigm change, not just some syntactic sugar or new semantics that "fit" within the existing language (eg, LINQ).  Consider this scenario:

    And now I've got complete strangers telling me what I'm advocating. I don't recall advocating anything. I recall stating that we're considering some possibilities for ways to extend the reach and scope of the language. Furthermore, as I've stated already, various guises of C# already use the "top level" paradigm and have for a decade. That's hardly a massive paradigm shift. -- Eric

    1) Open a CS file.

    2) Look in a method implementation and see the following code:

    int i = Foo();

    3) With top level functions, you can no longer assume Foo() is a method in the current class.  As you put it, _consider the possibilities_:

    4) Have a headache just thinking about the amount of time you'll waste debugging other people's "scripty" code and memorizing which functions are top-level, which are in the current class, and which are in a partial class defined somewhere else.

    Yeah, I've learned through long experience that any time we so much as suggest the possibility of new language feature, the first feedback we always get is "other people -- not me of course, but all the bozos I work with -- are going to misuse this feature and make my life miserable."

    It's rather depressing constantly getting that deeply pessimistic feedback, but we don't let it stop us from adding value. We got that feedback on generics, anonymous methods, iterator blocks, query comprehensions, extension methods, implicitly typed locals, lambdas, dynamic interoperability, named parameters, you name it and people will tell you that adding any feature to the language is the end of the world. I rather like all those features and I'm glad we added them. Try being optimistic about the skill level of your coworkers; you'll be a happier person. -- Eric

    Finally, using ASP.NET as a basis or justification for changing the paradigm of a language is foolish.  ASP.NET is a very heavyweight, high-level toolkit; the fact that it needs to use code transformation to achieve top level functions is to be expected.  This would be like someone arguing that Ruby needs multiple inheritance (not Mixins) because Rails uses code transformation to achieve something similar (of course it doesn't).

  • Thanks for your response Eric.  I hope you don't mind me following up.

    > "using a REPL tool like other implementations of C# have"

    I don't see how a REPL tool would _require_ top level functions as a language feature, although I'm probably missing something.  

    It does not. Rather, both features require that we carefully define the exact language syntax and semantics of "top-level" features. It would be foolish to do all the work to make, say, REPL work without even considering whether doing so enables other interesting features, like improving the story for "scripty" hosts. As I've stated several times, whether the "top level" feature ever gets moved into the main language or not is merely a "packaging" question. The interesting question for the language designers and implementers is what the rules are. As I've stated several times, the most likely scenario, were we to do the feature in a hypothetical future version, is that top-level methods would be part of the "compiler as a service" package that we would expose to compiler hosts. But why would we not consider the benefits and costs of exposing a language feature, if we were going to do all the work already to make it possible, albeit for other reasons? -- Eric

     

    Dennis Lu's thesis paper on a C# REPL (http://www.cs.rice.edu/~javaplt/papers/dennis-ms-thesis.pdf) seems to indicate that no syntax changes would be necessary.  If anything the lack of a "true" interpreter is what makes REPL tricky in C#, as Mr Lu points out in his thesis (page 15).  And as Miguel de Icaza points out in his post about the Mono REPL (http://tirania.org/blog/archive/2008/Sep-08.html), the top level functions are a "monoism": part of their REPL tool and not the C# language itself.

    I agree that it's inconvenient for a developer to learn a new language to do rapid prototyping.

    > And now I've got complete strangers telling me what I'm advocating.

    Sorry, I should have said "propose" instead of "advocate".  I'm interested in what guises of C# already use the top level paradigm; perhaps you could add this information to your original post?

    > It's rather depressing constantly getting that deeply pessimistic feedback,

    Many of the new features in C# 2.0 and 3.0 got me very excited, especially LINQ and generics.  LINQ to SQL & ADO.NET Entity Framework are absolutely awesome for rapid prototyping.  Generics definitely changed the way I have to think about code organization (as in the scenario I gave), but the added value was tremendous.  Maybe I'm being selfish or shortsighted (I don't use REPLs, am happy learning a scripting language when I want to write scripts, etc), but I don't see top level functions adding enough value to offset the downsides.

    I feel like you made some pretty rude assumptions (and responses -- "deeply pessimistic"?) about what I was trying to say.  I'm not trying to attack your post, C#, or the idea of adding language features.  This topic interests me, I love C# (and its evolution over time), and language design is fascinating (to observe :P).

    Indeed, I understand that it is easy to take plain text as much more rude and hostile than was intended by the author. For example, I'm sure you didn't actually intend to imply that I was foolish when you made the argument that using ASP.NET's behaviour to justify the proposed feature was foolish. Someone who reads less charitably than I do might have seen that as hostile, but I do not. -- Eric

  • The "compiler as a service" and language feature packaging concepts aren't clear to me.

    They aren't clear to us either. The community collectively asked us to be more transparent and this is what you get as a result. That we have no particularly coherent feature set, packaging strategy, or delivery plan that we can blog about is unfortunate, but you can't have it both ways. You either get to see everything polished and beautiful right before we ship, or you get to hear the half-baked ideas we're kicking around in the hallway and live with them being inchoate. -- Eric

    I do see now how top level functions (and expressions, variables, etc) for the REPL or other rapid prototyping or Q&D extensibility/"scriptability" scenarios would be useful.

    Sure. Not everyone does. But we get asked for this kind of thing all the time, and when we show demos and prototypes of this sort of thing at conferences and user groups, we get big cheers and high fives, so apparently someone wants it. -- Eric

    You pointed out earlier that said features would have to be carefully designed into the language specification.  Is this to make the features actually useful/reusable/flexible (unlike the "Monoisms" in their REPL) without creating specialized branches of the language?

    That would be one nice benefit, yes. But the larger benefits of careful design is that you end up with a carefully designed language -- a language that can be specified, implemented, tested, debugged and shipped to customers with some level of assurance that you can actually understand what the tool does. -- Eric

    Would we then end up with a flavor ("host"?) of C# ("C#Script"?) that supports additional top level entities while the default csc.exe does not?  Sort of like Windows Script Host but at a much deeper level; in a model that allows developers to embed the host(s) in their own products?

    Maybe. That's certainly an idea we're kicking around. Or maybe we do what Mono does and only allow the tool access to the feature. If you read my past posts on language design, you'll see we usually have several HUNDRED ideas for language features that we're kicking around, and we usually do a small handful of them in a given version. -- Eric

    Thanks again for your responses.

  • Eric, thanks so much for listening to community needs and desires regarding C# Scripting. Thanks even more for making great progress despite the industry's sometimes bitter and reactive bias.

    The more I study C#, the more I realize it's possible to do just about anything one can do in a scripting language (SL) like Ruby. What do Agile/rapid-prototypers want from their scripting languages? How much does it cost in C#?

        1. SL's are often great at manipulating text; C# has a regular expression class that provides a great deal of that functionality.

        2. SL's allow top-level functions; in C#, one can create a wrapper class with which I can Sin() all I want; I just embed my methods in that class and delegate to the chosen namespaces

        3. SL's allow untyped variables; in C#, define a class "Var" with implicit coercions and operator overrides and you've got it

        4. SL's sometimes provide functional programming power through closures, anonymous functions, etc. C#'s got most of that

        5. SL's don't sweat small types; in C#, design "Var" to architect good, solid, formal math

        6. SL's have REPL for debugging and incremental development; some say it's critical to functional programming; Mono already has it, so one hopes C# could also have it

        7. SL's add value by saving programmer time; C# triples my productivity with intellisense (is this wrong?)

        8. SL's allow run-time method binding & symbols; I'll wave my hand and say "look at attributes and properties in the CLI" and possibly run a macro expander on the source for @Symbol syntax.
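    As a minimal sketch of point 3 above -- the type name "Var" and its numeric policy are illustrative assumptions, not an actual library type -- implicit conversion operators and operator overloads can make C# arithmetic feel untyped:

```
// Hypothetical "Var" wrapper: implicit coercions in and out, plus
// operator overloads, give a scripty feel while staying statically typed.
using System;

struct Var
{
    private readonly double value;
    private Var(double v) { value = v; }

    // Implicit coercions to and from the wrapper.
    public static implicit operator Var(double d) { return new Var(d); }
    public static implicit operator Var(long l)   { return new Var(l); }
    public static implicit operator double(Var v) { return v.value; }

    // Operator overloads make the arithmetic read like a scripting language.
    public static Var operator +(Var a, Var b) { return new Var(a.value + b.value); }
    public static Var operator *(Var a, Var b) { return new Var(a.value * b.value); }

    public override string ToString() { return value.ToString(); }
}

class Demo
{
    static void Main()
    {
        Var x = 2;          // long converts implicitly
        Var y = 3.5;        // double converts implicitly
        Var z = x * y + 1;  // mixed arithmetic, no casts
        Console.WriteLine(z);   // prints 8
    }
}
```

    Whether the silent coercions are a feature or the "tower of Babel" problem the commenter raises below is exactly the design tension.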

    Complaint #1: it's slow. SL's are supposed to glue together high-level functionality, so dynamism at some cost is expected.

    Complaint #2: Tower of Babel. If every IT shop or university group has its own flavor of "Var", it gets messy. Do you convert back and forth to your own type? Are they "checking" or "not checking" for math overflows? (Watch out -- the default is not to check!)

    Sometimes it makes sense to have different math rules. For kids learning trig, maybe degrees are better than radians. In Agile land I use longs and doubles to cover all the variations I need for numbers, and coercion is silent but runtime-checked for errors.  This satisfies even my corporate RPG/AS400 giant manufacturing client. (Thanks for the refs to Boo and F#, but I have to help hire the programming team, so I need to stay mainstream.)

    But when I render breaking ocean waves in real time I do my math (sometimes on the GPU) in 16-bit floats: precise to about 0.01% of a pixel, but terrible at repeated addition (1/100 is a repeating fraction in binary, and in 16 bits it doesn't get much of a chance to repeat).

    Since REPL was deployed on Mono, C# scripting will happen no matter what.

    The C# team is in an ideal place to set ground rules for conquering the Tower of Babel: tribes sharing C# libraries with other tribes.

    Bottom line:

    C# already gives more than enough rope for programmers to hang themselves.  The C# team could make the process a lot easier with:

         - Syntactic sugar (e.g. each point above benefits from syntactic support, especially regex, untyped vars, and top-level namespace control with the ability to resolve individual name clashes). Obviously, allowing LINQ's "var"s to get passed around would be helpful.

         - Framework Standards and Conventions

         - Additional dynamism under the hood, and eval available in the freely distributed .NET 3.5 binaries (I don't know whether Reflection.Emit ships in those or not).

  • Don't pollute the global namespace.

    Don't extend using to allow "globalizing" static methods of classes.

    No global methods. All methods should be attached to a class. No Top level methods.

  • OK, it took a while, but I figured out how to do scripting automatically in C# and VS without actually changing the language. I'm not saying how, because nobody wants former RPG programmers to shove their global-variable approach to programming (operations on objects become messy because they don't follow a set of invariants, etc.).

    Like it or not, people can do this.

    My hope is that Microsoft will establish some conventions (ground rules) so that scripting frameworks can inter-operate and so that namespaces are relativized to company domains, etc.

    To prevent polluting the namespace I am taking proactive measures: every class in my projects has a short prefix, radically reducing the chances of class-name clashes.

    Also, since C# is supposed to be a "safe" programming language, I really wish coercion and math had "checking" on by default instead of off. Checking is needed because it's possible to design databases to handle unfinished business (not easy, but possible); it's not possible to design programs to run well with garbage data.

    What good is strict type checking if you're left with what might as well be C's void * longs?

    Maybe, for testing, you could state for EVERY program run, in the output window, either "Checking is off; this may lead to undetected overflows and coercion errors in numbers." or "Checking is on; overflows and explicit coercion errors will be detected, possibly reducing performance. Compare only with checking-enabled Java."
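    For concreteness, here is what checked versus unchecked integer arithmetic looks like in C# today; the `checked` keyword (or the csc `/checked` compiler switch, which flips the default project-wide) turns overflow detection on:

```
// Integer arithmetic on variables is unchecked by default in C#;
// the checked keyword requests overflow detection at runtime.
using System;

class CheckedDemo
{
    static void Main()
    {
        int big = int.MaxValue;

        int silent = big + 1;   // unchecked by default: silently wraps to int.MinValue
        Console.WriteLine(silent);

        try
        {
            int caught = checked(big + 1);  // checked: throws instead of wrapping
            Console.WriteLine(caught);
        }
        catch (OverflowException)
        {
            Console.WriteLine("OverflowException: overflow detected");
        }
    }
}
```

    (Constant expressions are the exception: `int.MaxValue + 1` written as a compile-time constant is rejected by the compiler even without `checked`.)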

    ---Michael

  • Thanks Eric for the glimpse into things to (possibly) come.

    A lot of negative reaction has probably been overstated, especially for something which is just an incomplete idea at this time. Still, I think the comments are born from a real fear of what can happen.  Just to illustrate: I work as a consultant going from client to client sometimes creating new projects, but often being called in to modify some C# code for which the original programmer(s) are long gone. Usually there is a very tight schedule for these changes and coming up to speed on existing code is paramount. Bad code is common (hey I'm not perfect either and have written my share of 'bad' code when under a deadline). Lately I have been seeing a lot seen things like this: var x = ProcessIt(GetAWidget("Blah", 5, theList.Find(item => item.name == name)));  Abuses like this would likely drive me insane if it weren't for intellisense. I can't wait until (inevitably) these methods start returning dynamic types and intellisense goes out the window as well because someone couldn't be bothered to specify a type. Yes poor code is possible without var, dynamic, and 'scripty' code, but if we keep making it easier for people to write unmaintainable messes in the interest of making code writing faster, we run the risk of being left with nothing but unmaintainable messes. In my opinion, it doesn't matter a damn bit how long it takes to write the code. What matters is how long it takes to read and understand it. That said, I'm sure you guys realize this . So far I enjoy programming in C# more than just about any other language. Thanks for the hard work.

Page 3 of 4 (51 items)