Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
I am super excited to announce that we have just released a third "Community Technology Preview" of Roslyn. Roslyn, in case you have not heard, is the code name for the project I work on; we are re-architecting the C# and VB compilers so that they are no longer "black boxes" where code goes in, a miracle happens, and then IL comes out. Rather, the box is now glass and you can use the lexical, syntactic and semantic analysis engines that we write for your own purposes.
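As an illustration of the glass-box idea, here is a small sketch of what consuming the compiler's lexical, syntactic and semantic engines looks like. The names below (`CSharpSyntaxTree`, `CSharpCompilation`, `GetSemanticModel`) are from the Roslyn API as it eventually shipped in the Microsoft.CodeAnalysis packages; the CTPs used different namespaces and names changed between previews, so treat this as a sketch of the general shape rather than the exact CTP surface.

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class GlassBoxDemo
{
    static void Main()
    {
        // Lexical + syntactic analysis: parse source text into a syntax tree.
        SyntaxTree tree = CSharpSyntaxTree.ParseText(
            "class C { int Add(int x, int y) { return x + y; } }");

        // The tree is an ordinary object graph you can query.
        var methods = tree.GetRoot()
            .DescendantNodes()
            .OfType<MethodDeclarationSyntax>();

        // Semantic analysis: bind the tree against a set of references.
        var compilation = CSharpCompilation.Create("Demo")
            .AddReferences(MetadataReference.CreateFromFile(
                typeof(object).Assembly.Location))
            .AddSyntaxTrees(tree);
        SemanticModel model = compilation.GetSemanticModel(tree);

        foreach (var method in methods)
        {
            // Ask the semantic model which symbol this declaration introduces.
            var symbol = model.GetDeclaredSymbol(method);
            Console.WriteLine("{0} returns {1}", symbol.Name, symbol.ReturnType);
        }
    }
}
```

The point of the example is the layering: you can stop after parsing if you only need syntax, and only pay for binding when you ask for a semantic model.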
We have implemented semantic analysis of most of the C# and VB language features now. On the C# side we are through most of the C# 3 features; we still lack "dynamic" from C# 4 and "await" from the recently-released C# 5. We've also made many changes (hopefully all improvements) to the APIs. For a complete list of the updates, to access the Roslyn question-and-answer forum, or to download it and try it for yourself, go to msdn.com/roslyn.
We would love to get your feedback on the forum or on connect.microsoft.com/visualstudio about what you do and do not like about the APIs; that's why we do these technology previews so often. (Please leave feedback on the forum rather than as a comment to this blog; we have a team of program managers who read the forums.) I hope you enjoy this latest CTP; we enjoyed building it.
Great News Eric,
I love the transition from the black box to a glass box :-)
I love the Roslyn idea, and I'm sure it will be a big step forward for .NET and VS.
That said, I have always asked in the MS forums why Visual Studio generates intermediate files and why that can't be avoided.
For example, if you have 5 WinForms projects in a solution, all the .resx files are transformed into .resources, written to disk, later linked into each project's DLL, and copied all around the obj and bin folders; I think memory could be used as intermediate storage for that.
Will Roslyn make all that intermediate work faster, avoiding having to launch the compiler, read the files, generate all the obj files, link everything into the DLL, and then have the next project read all that information back from the DLL to start compiling again? Will Roslyn make cross-project compilation a smarter process?
Thanks for your explanation; keep up the good work.
I'm looking forward to the final release of Roslyn, when hopefully it will be integrated fully into VS. I really hope the VS team is able to leverage Roslyn to allow writing lambdas in the watch/immediate windows. Debugging without the benefit of LINQ is terrible!
I'm curious as well, like Marcos. Why does it have to be written to disk when it could actually be done in memory? I believe the process can still be optimized further.
I guess the intermediate files are written to disk so that compiling a large project doesn't use so much memory that every other process on the system starves.
Compiling large projects in VS is painfully slow. VS2012 did a great job on that, but the disk is still a bottleneck, just as it is for the command-line compilers.
We have tons of free memory because it's cheap. But VS is a 32-bit process, so if you have only one VS instance open you are using at most about 1.8 GB for development, and the rest of the memory doesn't help; because of 32-bit addressing, VS can't use that memory for intermediate files without running out of virtual address space.
Another explanation for why it's done this way is what Eric said: the .NET compilers are black boxes. They are command-line utilities, so to compile 5 projects you need to invoke the command-line compiler 5 times.
Another issue is that Visual Studio doesn't distinguish between Build and Run; they are the same thing, so Visual Studio MUST generate the bin folder of every intermediate project. If you have 80 projects, VS assumes you want to build them all, even when you only want to run the startup project. That must change to save us time: RUN <> BUILD.
But if Visual Studio created a compilation-service process with Roslyn (and 64 bits), then instead of invoking a command line, Visual Studio could call a unified compiler service. There the intermediate files could live in memory, and no disk writes would be needed except for the final bin folder.
That could later be extended to monitor file changes and cache results between Visual Studio sessions, so compile times when you open Visual Studio would be superb.
I did a ton of tests in Visual Studio using a RAM drive (allocating 1 GB of my RAM as drive T:), and then redirected all the obj and bin folders to the T: drive with NTFS junctions so that no disk access takes place, and I did notice the performance gain. Still, if you have 10 projects, every command-line compiler invocation must READ all the references again, etc., so the CPU becomes the bottleneck.
Hope that feedback helps.
Downloading it now! Sounds excellent.
The link in the post is NOT valid; the correct one is: connect.microsoft.com/visualstudio
I wonder if C# will ever allow one to intercept the parser pipeline to customize the AST generation...