The Chinese version is available here: http://blogs.msdn.com/b/lixiong/archive/2011/12/03/win8.aspx
Learning how things work internally has always been my favorite pastime. The first thing I did when I got Win8 installed was to launch a debugger. Based on my research, Win8 is an amazing product, with many old and new technologies involved. I'd like to use this blog to look back at those different Microsoft technologies.
Please note that the following analysis is based on the Win8 CTP. I expect many changes by the time Win8 reaches RTM.
The information above is just my personal opinion.
COM - Component Object Model
COM is a great technology invented in the mid-'90s. COM was intended to address the reuse of software components. Prior to COM, software reuse was usually done at the source code level; the C/C++ library was a typical example. Source-level reuse requires recompilation and makes distribution difficult. COM made it possible to reuse software components at the binary level. Starting from Win95, OLE and Office 95, COM has been a cornerstone of the Microsoft platform. Everything from the DirectX API to the humble clipboard, and even the hosting interface of the .NET Framework, is based on COM. But no technology is good at everything. COM's problems were the following:
The STA/MTA/NTA threading model is overly complex
The threading model of COM was designed to help developers. The implementation of the threading model in the COM runtime was built on many system components, such as Win32 messages, RPC and Windows services, and COM developers must be familiar with these components to use COM correctly. For example, a developer must understand and maintain the message loop for any thread that hosts an STA; otherwise deadlock is likely to happen. The complexity of the threading model was a blocker for many developers who wanted to use COM at an expert level.
Lack of development tool support
The relationship between COM and Visual Studio 6 is very similar to that of CLR2/VS2005, CLR3.5/VS2008 and CLR4/VS2010 nowadays. With VS6 you had to choose either VB6 or ATL to create a COM component. VB6 was good for business applications, especially the MIS systems popular at that time, but it was not powerful and flexible enough. Due to the lack of multi-threading support, there was no way to use an MTA in VB6. ATL was powerful and flexible, but it was difficult to use. Every time you wanted to extend an interface, you had to perform a lot of ceremony: multiple inheritance, new template definitions and plenty of C macros. Using ATL was never a pleasure.
The complexity got out of control as COM expanded to DCOM, COM+ and DTC
To fit into enterprise development, COM was extended and evolved into DCOM and COM+. "Enterprise development" implies requirements for security and scalability. Classic COM was an in-process model in which the caller and callee ran under the same user account. Impersonation helped, but a complete solution had to let different components run under different security boundaries. DCOM was born to let a COM component run in a different process or on a different computer, run with different security levels for different interfaces, authenticate with a domain account, and support different levels of encryption. Click Start, choose Run, and type dcomcnfg.exe; expand a few nodes and open the property pages to see how complex it can get. For scalability, Microsoft further extended DCOM into COM+ by introducing object pooling and a new synchronization model. The most famous COM+-based solution was ASP, which dominated web development on the Windows platform for years and eventually evolved into ASP.NET. Furthermore, Microsoft built DTC and MSMQ on top of COM+. You may not appreciate how powerful DTC is. DTC is short for Distributed Transaction Coordinator. It let you run SQL statements against different database systems (such as Oracle and MSSQL) within the same transaction without writing special code. This feature was extremely important when MSSQL competed with Oracle/DB2 in the early days. All of this made COM more intimidating to use, especially while the development tools failed to catch up.
In my mind, the CLR is flawless in almost all areas. But precisely because the CLR was so good, Microsoft reduced its investment in the unmanaged components starting from 2003.
Spreading and promoting the CLR is not a bad thing at all. Most developers write CRUD and UI code; you cannot ask them to use C/C++ all the time. The CLR is performant, the development tools are strong, and every new version brings huge improvements. But one rule has always held true: it is possible to write a C# application that gets performance similar to C/C++, but it costs more than writing it in C/C++ in the first place. There are many reasons: the underlying API is unmanaged and the interop hits performance; C# cannot control low-level details as much as C++; the JIT compiler has to generate additional code to implement the CLR exception and security models; and the this pointer is null-checked before use, at some cost, to raise NullReferenceException. The situation is that many applications still have strict performance requirements, but they were ignored during the rapid growth of the CLR. In the years between Windows Server 2003 and Windows Server 2008 R2, the platform, APIs and development tools for unmanaged applications did not improve at the same pace as the CLR. This may not be a problem on the Windows+PC platform, but you could not expect people to write mobile applications in C#. The imbalance meant little improvement in the mobile and consumer areas for a long while.
WPF is a product with a beautiful design. It solved four problems in the traditional Win32 UI. 1) Win32 rendering is window-object and GDI based. WPF uses a new model with a dedicated rendering thread, which improves rendering performance and can use the GPU for acceleration. 2) Win32 depends on GDI and HWND objects. In a complex UI application that creates many handles, out-of-resources errors are common; many developers used tab controls to reduce handle usage and work around this. WPF does not use any native child windows; its control concepts are defined at an upper layer, which avoids the massive consumption of native resources. 3) Win32 relies on many low-level Win32 APIs, which are inconvenient to use and difficult to learn. WPF introduced the Dispatcher class and the BeginInvoke pattern, which simplify these issues. 4) Data, visuals and code were not well separated in Win32. WPF uses data binding, XAML and code-behind to keep these concerns clean, making it much easier to apply modern patterns like MVC and MVVM. All of this is the beauty of the design.
The side effect was over-design. WPF is based on the CLR, and the over-design kills performance. If you are familiar with WPF, you can find half of the design patterns in the textbook used in dependency properties; WPF is an encyclopedia of design patterns. By my count, there are about seven different ways to hook up a mouse click event, each designed for a different situation. It is good to be flexible, but the cost of flexibility is performance. The magic behind these patterns and data binding is CLR reflection. This is why you cannot get the same sleek feel as a native Win32 UI once a WPF application grows larger.
The interoperability of Microsoft products
It’s time to discuss why I feel so good about Win8.
You can imagine projection as a new Windows API model. The classic API models were either DLL-exported functions or COM interfaces, and neither was performant when invoked from the CLR. The root cause is the incompatible memory and security models between the CLR and unmanaged code. With projection, WinRT created the best way for higher-level languages to invoke it. This design avoids the cost and improves the programmer experience. Of course, it also benefits from the evolution of the Visual Studio tools, the C# language and the CLR. For example, C# 5 introduced the await keyword, and WinRT introduced the async operation pattern; projection maps an await statement onto the WinRT async operation call without any redundant code. There is no CLR engine involvement, no context switch, no marshaling and no proxies; it is just a simple call from C# code into a stub, which then jumps to the WinRT implementation. Please refer to the Build conference sessions to learn more about this. The call stacks at the end also illustrate this vividly.
WinRT took the good part of COM, an efficient and lightweight binary reuse model, and got rid of the complexity that came from trying to be omnipotent. It lets the higher-level programming languages and tools provide the flexibility. With projection, the interoperability performance problem is solved. Microsoft chose C/C++ to create a performant WinRT and let the CLR do the upper-layer work. C/C++ is good at performance and bad at development cost; C# and the CLR are the opposite. Win8 gets the best of both.
I have attached some example call stacks from Win8 here:
This is the end
One question: will WPF be rewritten on top of the WinRT core technology (a new, improved COM, as far as I understand it), or will WinRT be expanded to support WPF-style applications (I mean non-Metro-style/desktop apps)?
I don't know, but my guess is that WPF will remain separate from WinRT and continue to depend on the CLR.
I love WPF, managed code, and C#. But given that current WPF performance is, IMO, lacking, I'd like to see whether the situation improves with the introduction of WinRT in Win8.
Based on my understanding, WinRT is limited to developing Metro-style apps only. I actually wonder whether such improvements could be brought to desktop apps as well, so that C# could still be used while being more performant than it is today. But that's just my curiosity... :D