I know the answer (it's 42)

A blog on coding, .NET, .NET Compact Framework and life in general....

Posts

    .NET Just in Time Compilation and Warming up Your System

    • 10 Comments

    One of the primary obstacles we face while scaling our system is Just-In-Time (JIT) compilation of the .NET stack. We run a .NET managed application server at huge scale, with many thousands of managed assemblies on each server (and many, many thousands of those servers distributed globally). We deploy code daily, and since it is managed code, it is JITed afresh at each deployment. Our system is very sensitive to latency, and we do not want the servers getting new code to cost us execution latency. So we use various mechanisms to warm the servers before they start serving queries. Common techniques we use are

    1. NGEN
    2. Force JITing (a specialized multi-core JIT technique; see the bottom of this post)
    3. Sending warmer queries that warm up the system

    In this effort I frequently handle questions regarding how the system JITs managed methods. Even after so many years of the CLR JIT being around, there still seems to be confusion about when JIT happens and what the unit of compilation is. So I thought I'd write a quick post on this topic.

    Consider the following simple code I have in One.cs

    using System;
    
    namespace Foo
    {
        class MyClass
        {
            public void a()
            {
                int i = 0;
                while(true)
                {
                    b(i++);
                }
            }
    
            public void b(int i)
            {
                Console.WriteLine(i);
            }
        }
    
        class Program
        {
            static void Main()
            {
                MyClass mc = new MyClass();
                mc.a();
            }
        }
    }
    

    Of interest are the functions Foo.MyClass.a and Foo.MyClass.b. We will debug to find out exactly when the latter is JITed.

    First I compile and then launch the debugger. I will use the windbg debugger and the sos extension extensively in this post. Also read https://support2.microsoft.com/kb/319037?wa=wsignin1.0 to see how to set up symbol servers to debug into the CLR.

    csc /debug+ One.cs
    windbg one.exe

    After that in windbg I run the following command to setup

    .sympath+ d:\MySymbols          ;$$ Use the Microsoft symbol server (see link above)
    sxe ld:clr                      ;$$ break on CLR loaded
    g                               ;$$ continue the program until you break on CLR.dll being loaded
    .loadby sos clr                 ;$$ load the sos debugger extension

    !bpmd One.exe Foo.MyClass.a ;$$ Set a managed break point in a()
    g ;$$ continue until break point in a() is hit

    When this breakpoint is hit, we have obviously already JITed MyClass.a() and are executing it. The question now is whether the functions a() calls, like MyClass.b(), are already JITed too, and if not, when and how they will be. Let's debug it!

    Note: I take the output of one command (e.g. an address) and feed it as input to the next one.

    First, let's find the this pointer for the MyClass instance. It can be obtained from the current managed call stack.

    0:000> !clrstack -a
    PARAMETERS:
            this (0x0000000000d9eea0) = 0x0000000002922c58
    
    

    The details of the this object show the MethodTable for it. The MethodTable has a pointer to the EEClass (cold data).

    0:000> !do 0x0000000002922c58
    Name:        Foo.MyClass
    MethodTable: 00007ffab2f640d8
    EEClass:     00007ffab3072340
    Size:        24(0x18) bytes
    File:        d:\Skydrive\Code\C#\_JITPresentation\One.exe
    Fields:
    

    Now we can see more details of the MethodTable, which will show the individual method descriptors.

    0:000> !dumpmt -md 00007ffab2f640d8
    EEClass:         00007ffab3072340
    Module:          00007ffab2f62fc8
    Name:            Foo.MyClass
    mdToken:         0000000002000002
    File:            d:\Skydrive\Code\C#\_JITPresentation\One.exe
    BaseSize:        0x18
    ComponentSize:   0x0
    Slots in VTable: 7
    Number of IFaces in IFaceMap: 0
    --------------------------------------
    MethodDesc Table
    Entry       MethodDesc    JIT Name
    00007ffb07c16300 00007ffb077c80e8 PreJIT System.Object.ToString()
    00007ffb07c5e760 00007ffb077c80f0 PreJIT System.Object.Equals(System.Object)
    00007ffb07c61ad0 00007ffb077c8118 PreJIT System.Object.GetHashCode()
    00007ffb07c5eb50 00007ffb077c8130 PreJIT System.Object.Finalize()
    00007ffab3080120 00007ffab2f640d0    JIT Foo.MyClass..ctor()
    00007ffab3080170 00007ffab2f640b0    JIT Foo.MyClass.a()
    00007ffab2f6c050 00007ffab2f640c0   NONE Foo.MyClass.b(Int32)
    

    The type has 7 methods. The output also clearly indicates that Foo.MyClass.a() is JITed and Foo.MyClass.b() is NONE (not yet JITed). We can get more details about these methods

    0:000> !dumpmd 00007ffab2f640b0
    Method Name:  Foo.MyClass.a()
    Class:        00007ffab3072340
    MethodTable:  00007ffab2f640d8
    mdToken:      0000000006000001
    Module:       00007ffab2f62fc8
    IsJitted:     yes
    CodeAddr:     00007ffab3080170 <----- JITed
    Transparency: Critical
    0:000> !dumpmd 00007ffab2f640c0
    Method Name:  Foo.MyClass.b(Int32)
    Class:        00007ffab3072340
    MethodTable:  00007ffab2f640d8
    mdToken:      0000000006000002
    Module:       00007ffab2f62fc8
    IsJitted:     no
    CodeAddr:     ffffffffffffffff <----- Not yet JITed
    

    So at this point we know that a() is JITed but the method b() it calls is not. The question then arises: if b() is not JITed, what do the native instructions of a() contain, and what does that code call into for b()? The disassembly will clearly show that the entire method a() is JITed and that outward managed calls go through stubs

    0:000> u 00007ffab3080170 L24
    One!Foo.MyClass.a() [d:\Skydrive\Code\C#\_JITPresentation\One.cs @ 8]:
    00007ffa`b3080170 48894c2408      mov     qword ptr [rsp+8],rcx
    00007ffa`b3080175 4883ec38        sub     rsp,38h
    00007ffa`b3080179 c744242000000000 mov     dword ptr [rsp+20h],0
    00007ffa`b3080181 c644242400      mov     byte ptr [rsp+24h],0
    00007ffa`b3080186 48b83834f6b2fa7f0000 mov rax,7FFAB2F63438h
    00007ffa`b3080190 8b00            mov     eax,dword ptr [rax]
    00007ffa`b3080192 85c0            test    eax,eax
    00007ffa`b3080194 7405            je      One!Foo.MyClass.a()+0x2b (00007ffa`b308019b)
    00007ffa`b3080196 e82574b25f      call    clr!JIT_DbgIsJustMyCode (00007ffb`12ba75c0)
    00007ffa`b308019b 90              nop
    00007ffa`b308019c c744242000000000 mov     dword ptr [rsp+20h],0
    00007ffa`b30801a4 eb23            jmp     One!Foo.MyClass.a()+0x59 (00007ffa`b30801c9)
    00007ffa`b30801a6 90              nop
    00007ffa`b30801a7 8b4c2420        mov     ecx,dword ptr [rsp+20h]
    00007ffa`b30801ab ffc1            inc     ecx
    00007ffa`b30801ad 8b442420        mov     eax,dword ptr [rsp+20h]
    00007ffa`b30801b1 89442428        mov     dword ptr [rsp+28h],eax
    00007ffa`b30801b5 894c2420        mov     dword ptr [rsp+20h],ecx
    00007ffa`b30801b9 8b542428        mov     edx,dword ptr [rsp+28h]
    00007ffa`b30801bd 488b4c2440      mov     rcx,qword ptr [rsp+40h]
    00007ffa`b30801c2 e889beeeff      call    Foo.MyClass.b(Int32) (00007ffa`b2f6c050)
    00007ffa`b30801c7 90              nop
    00007ffa`b30801c8 90              nop
    00007ffa`b30801c9 c644242401      mov     byte ptr [rsp+24h],1
    00007ffa`b30801ce ebd6            jmp     One!Foo.MyClass.a()+0x36 (00007ffa`b30801a6)
    00007ffa`b30801d0 90              nop
    00007ffa`b30801d1 4883c438        add     rsp,38h
    00007ffa`b30801d5 c3              ret
    00007ffa`b30801d6 0000            add     byte ptr [rax],al
    00007ffa`b30801d8 1909            sbb     dword ptr [rcx],ecx
    00007ffa`b30801da 0100            add     dword ptr [rax],eax
    00007ffa`b30801dc 096200          or      dword ptr [rdx],esp
    
    

    So we see that for b() a call is made to the memory location 00007ffa`b2f6c050. We can see what is there now by disassembling that address.

    0:000> u 00007ffa`b2f6c050
    Foo.MyClass.b(Int32):
    00007ffa`b2f6c050 e87b5e755f      call    clr!PrecodeFixupThunk (00007ffb`126c1ed0)
    00007ffa`b2f6c055 5e              pop     rsi
    00007ffa`b2f6c056 0201            add     al,byte ptr [rcx]

    So instead of real native JITed code for b(), there is actually a stub (or thunk) in its place. This clearly establishes that when a function is called, its entire body is JITed, while the methods it calls are not yet JITed (with caveats, e.g. inlined methods). Now we can set a breakpoint inside the JIT to break when it tries to JIT the b() method. This is what we do

    0:000> bp clr!UnsafeJitFunction ;$$ entry point for JITing a method
    0:000> g                        ;$$ continue executing until we hit the UnsafeJITFunction
    0:000> k                        ;$$ dump the stack for JITing
    clr!UnsafeJitFunction
    clr!MethodDesc::MakeJitWorker
    clr!MethodDesc::DoPrestub
    clr!PreStubWorker+0x3d6
    clr!ThePreStub+0x5a [f:\dd\ndp\clr\src\vm\amd64\ThePreStubAMD64.asm @ 92]
    One!Foo.MyClass.a()+0x57 [d:\Skydrive\Code\C#\_JITPresentation\One.cs @ 12]

    As we can see, the JITing actually happened on the same thread that is executing a(), exactly when b() was called. ThePreStub finally calls the JITer. The JITer JITs the method b() and patches up the call site, so that it now calls straight into the JITed copy of b(). We hit g a couple of times and then see what the MethodDescriptor for b() looks like

    0:000> !dumpmd 00007ffab2f640c0
    Method Name:  Foo.MyClass.b(Int32)
    Class:        00007ffab3072340
    MethodTable:  00007ffab2f640d8
    mdToken:      0000000006000002
    Module:       00007ffab2f62fc8
    IsJitted:     yes
    CodeAddr:     00007ffab30801f0 <-- Now it is JITed
    Transparency: Critical
    

    As we see, b() is now JITed and we can see its disassembly as well. More interestingly, let's go back and see what the disassembly of a() now contains

    0:000> u 00007ffab3080170 L24
    One!Foo.MyClass.a() [d:\Skydrive\Code\C#\_JITPresentation\One.cs @ 8]:
    00007ffa`b3080170 48894c2408      mov     qword ptr [rsp+8],rcx
    00007ffa`b3080175 4883ec38        sub     rsp,38h
    00007ffa`b3080179 c744242000000000 mov     dword ptr [rsp+20h],0
    00007ffa`b3080181 c644242400      mov     byte ptr [rsp+24h],0
    00007ffa`b3080186 48b83834f6b2fa7f0000 mov rax,7FFAB2F63438h
    00007ffa`b3080190 8b00            mov     eax,dword ptr [rax]
    00007ffa`b3080192 85c0            test    eax,eax
    00007ffa`b3080194 7405            je      One!Foo.MyClass.a()+0x2b (00007ffa`b308019b)
    00007ffa`b3080196 e82574b25f      call    clr!JIT_DbgIsJustMyCode (00007ffb`12ba75c0)
    00007ffa`b308019b 90              nop
    00007ffa`b308019c c744242000000000 mov     dword ptr [rsp+20h],0
    00007ffa`b30801a4 eb23            jmp     One!Foo.MyClass.a()+0x59 (00007ffa`b30801c9)
    00007ffa`b30801a6 90              nop
    00007ffa`b30801a7 8b4c2420        mov     ecx,dword ptr [rsp+20h]
    00007ffa`b30801ab ffc1            inc     ecx
    00007ffa`b30801ad 8b442420        mov     eax,dword ptr [rsp+20h]
    00007ffa`b30801b1 89442428        mov     dword ptr [rsp+28h],eax
    00007ffa`b30801b5 894c2420        mov     dword ptr [rsp+20h],ecx
    00007ffa`b30801b9 8b542428        mov     edx,dword ptr [rsp+28h]
    00007ffa`b30801bd 488b4c2440      mov     rcx,qword ptr [rsp+40h]
    00007ffa`b30801c2 e889beeeff      call    Foo.MyClass.b(Int32) (00007ffa`b2f6c050)
    00007ffa`b30801c7 90              nop
    00007ffa`b30801c8 90              nop
    00007ffa`b30801c9 c644242401      mov     byte ptr [rsp+24h],1
    00007ffa`b30801ce ebd6            jmp     One!Foo.MyClass.a()+0x36 (00007ffa`b30801a6)
    00007ffa`b30801d0 90              nop
    00007ffa`b30801d1 4883c438        add     rsp,38h
    00007ffa`b30801d5 c3              ret
    00007ffa`b30801d6 0000            add     byte ptr [rax],al
    00007ffa`b30801d8 1909            sbb     dword ptr [rcx],ecx
    00007ffa`b30801da 0100            add     dword ptr [rax],eax
    00007ffa`b30801dc 096200          or      dword ptr [rdx],esp
    00007ffa`b30801df 005600          add     byte ptr [rsi],dl
    

    Now if we re-disassemble the target of this call at 00007ffa`b2f6c050

    0:000> u 00007ffa`b2f6c050
    Foo.MyClass.b(Int32):
    00007ffa`b2f6c050 e99b411100      jmp     One!Foo.MyClass.b(Int32) (00007ffa`b30801f0)
    00007ffa`b2f6c055 5f              pop     rdi
    00007ffa`b2f6c056 0201            add     al,byte ptr [rcx]

    As you can see, the address has been patched up; b() is now JITed, and a() calls into b() without going through any stubs.

    Obviously in this example I made a bunch of simplifying assumptions, but hopefully you now have the understanding to go debug your own scenarios and see what is at play. Some takeaways if you have JIT issues at startup:

    
    

    JIT happens at method granularity

    
    

    Use modular code. Especially if you have error handling or other code that is rarely (or almost never) executed, move it out of the main function instead of keeping it inline. This ensures it is never JITed, or at least not JITed at startup

    void Foo()
    {
        try
        {
           //...
        }
        catch(Exception ex)
        {
             ComplexErrorHandling(ex);
        }
    }

    is better than

    void Foo()
    {
        try
        {
           //...
        }
        catch(Exception ex)
        {
             LogLocal();
             UploadToSomeServer();
             // MoreCode;
             // EvenMoreCode;
        }
    }

    Running a function once JITs the whole function. However, note that if it has different code branches, each calling other functions, you will need to execute all the branches. In the case below, Foo has to be called with both true and false to ensure the downstream methods are JITed

    void Foo(bool flag)
    {
        if(flag)
            YesFlag();
        else
            NoFlag();
    }
    

    JITing happens on the same thread as the call. The JIT engine does take a lock to ensure there are no races while JITing the same method from multiple threads.
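
    Since JITing happens on the calling thread, the first call to a method pays the JIT cost and later calls do not. Here is a minimal sketch to observe this (timings are machine-dependent, and the JitTiming/Work names are hypothetical; NoInlining just keeps the call from being inlined away):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

class JitTiming
{
    // NoInlining ensures the call below really goes through the method's
    // entry point (and hence its pre-stub on the first call).
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int Work(int x) => x * 2 + 1;

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        Work(1);                        // first call: JIT runs on this thread
        long first = sw.ElapsedTicks;

        sw.Restart();
        Work(2);                        // later calls go straight to native code
        long second = sw.ElapsedTicks;

        Console.WriteLine($"first={first} ticks, second={second} ticks");
    }
}
```

    On most runs the first measurement is noticeably larger, since it includes the time spent JITing Work.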

    Consider using NGEN, MulticoreJIT, or force-JITing all the methods you care about, or even build your own mechanism based on the following code

    Here's some sample code to force JITing, using the RuntimeHelpers.PrepareMethod API (note this code does no error handling whatsoever). You can craft code around this to force-JIT only the assemblies and/or types in them that are causing JIT bottlenecks. This can also be parallelized across cores. The .NET Multicore JIT is based on a similar principle, but does it automatically: it generates a profile of what executes at your application startup and then JITs that for you on the next run.

    using System;
    using System.Reflection;
    namespace ConsoleApplication5
    {
        class Program
        {
            static private void ForceJit(Assembly assembly)
            {
                var types = assembly.GetTypes();
    
                foreach (Type type in types)
                {
                    var ctors = type.GetConstructors(BindingFlags.NonPublic
                                                | BindingFlags.Public
                                                | BindingFlags.Instance
                                                | BindingFlags.Static);
    
                    foreach (var ctor in ctors)
                    {
                        JitMethod(assembly, ctor);
                    }
    
                    var methods = type.GetMethods(BindingFlags.DeclaredOnly
                                           | BindingFlags.NonPublic
                                           | BindingFlags.Public
                                           | BindingFlags.Instance
                                           | BindingFlags.Static);
                    
                    foreach (var method in methods)
                    {
                        JitMethod(assembly, method);
                    }
                }
            }
    
            static private void JitMethod(Assembly assembly, MethodBase method)
            {
                if (method.IsAbstract || method.ContainsGenericParameters)
                {
                    return;
                }
                
                System.Runtime.CompilerServices.RuntimeHelpers.PrepareMethod(method.MethodHandle);
            }
    
            static void Main(string[] args)
            {
                ForceJit(Assembly.LoadFile(@"d:\Scratch\asm.dll"));
            }
        }
    }
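
    The parallelization mentioned above can be sketched like this; a hypothetical variant of the same ForceJit idea using Parallel.ForEach (the per-method lock inside the JIT, noted earlier, keeps concurrent PrepareMethod calls safe; as before, no error handling):

```csharp
using System;
using System.Linq;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

class ParallelForceJit
{
    // Collect all constructors and declared methods, then spread the
    // PrepareMethod calls across cores.
    static void ForceJitParallel(Assembly assembly)
    {
        const BindingFlags flags = BindingFlags.DeclaredOnly | BindingFlags.NonPublic
                                 | BindingFlags.Public | BindingFlags.Instance
                                 | BindingFlags.Static;

        var methods = assembly.GetTypes()
            .SelectMany(t => t.GetConstructors(flags).Cast<MethodBase>()
                              .Concat(t.GetMethods(flags)))
            .Where(m => !m.IsAbstract && !m.ContainsGenericParameters);

        Parallel.ForEach(methods, m =>
            RuntimeHelpers.PrepareMethod(m.MethodHandle));
    }

    static void Main()
    {
        ForceJitParallel(typeof(ParallelForceJit).Assembly);
        Console.WriteLine("done");
    }
}
```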
    

    Use of SuppressIldasmAttribute

    • 1 Comment
    Meteors and sky Wish Poosh Campground, Cle Elum Lake, WA

    We use ildasm in our build/deployment pipeline. Recently an internal partner pinged me saying that it was failing with a weird message: ildasm was failing to disassemble one particular assembly. I instantly assumed it to be a race condition (locks taken on the file, some sort of anti-virus holding read locks, etc.). However, he reported back that it was a persistent problem. I asked for the assembly and tried to run

    ildasm foo.dll

    I was greeted with

    [screenshot of the ildasm error]

    Dumbfounded I dug around and found this weird attribute on this assembly

    [assembly: SuppressIldasmAttribute] 

    MSDN (http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.suppressildasmattribute(v=vs.110).aspx) points out that this attribute makes ildasm refuse to disassemble a given assembly. For the life of me I cannot fathom why someone invented this attribute; it is one of those things that is so surreal. Obviously you can simply point Reflector or any of the gazillion other disassemblers at the assembly and they will happily oblige. A false sense of security is worse than a lack of security, so I'd recommend not using this attribute.
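
    If, like us, your pipeline depends on ildasm, one defensive option is to detect the attribute up front and fail with a clear message. A small sketch (the pipeline wiring is hypothetical; SuppressIldasmAttribute itself is the real attribute from System.Runtime.CompilerServices):

```csharp
using System;
using System.Linq;
using System.Reflection;
using System.Runtime.CompilerServices;

class CheckSuppressIldasm
{
    // Detect whether an assembly carries SuppressIldasmAttribute, so a
    // build step can report it clearly instead of a confusing ildasm error.
    static bool HasSuppressIldasm(Assembly asm) =>
        asm.GetCustomAttributes(typeof(SuppressIldasmAttribute), inherit: false).Any();

    static void Main()
    {
        var asm = typeof(CheckSuppressIldasm).Assembly;
        Console.WriteLine(HasSuppressIldasm(asm)
            ? "ildasm will refuse this assembly"
            : "ildasm should be fine");
    }
}
```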


    Fastest way to switch mouse to left handed

    • 0 Comments
    Milky way

     

    I think I was born left-handed; unfortunately I was brought up to be right-handed. This was not uncommon in India 30 years back, where left-hand usage was looked down upon.

    However, the good thing is that I am still ambidextrous (equal-handed) in some things, like using the mouse. For ergonomic reasons I keep switching between left- and right-hand usage, to ensure I do not wear out both my wrists with 12-hour daily computer usage.

    The main problem I face when I switch (and even otherwise, when I am in left-hand mode) is that most PCs are configured for right-hand use. In the course of a day I use many machines (upwards of 10), as I remote into data-center machines in addition to 3-4 local machines. The fastest way I have found to switch the mouse between left- and right-handed is to just run the following command, either from the console or from WindowsKey+R

    rundll32.exe user32.dll,SwapMouseButton

    Basically this command calls the SwapMouseButton win32 function in user32.dll.
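
    If you'd rather not shell out to rundll32, the same Win32 function can be P/Invoked directly from C#. A sketch (Windows-only; unlike the one-liner above, this uses SwapMouseButton's return value, which reports the previous state, to toggle):

```csharp
using System;
using System.Runtime.InteropServices;

class SwapMouse
{
    // SwapMouseButton returns the previous setting: true if the buttons
    // were already swapped (left-handed) before this call.
    [DllImport("user32.dll")]
    static extern bool SwapMouseButton(bool fSwap);

    static void Main()
    {
        bool wasSwapped = SwapMouseButton(true); // read old state by swapping
        if (wasSwapped)
            SwapMouseButton(false);              // was left-handed: swap back to right
        Console.WriteLine(wasSwapped ? "now right-handed" : "now left-handed");
    }
}
```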

    If you know of anything briefer, either using the command shell or PowerShell, do let me know.


    .NET: NGEN, explicit loads and load-context promotion

    • 3 Comments
    Sunset over the Pacific

    If you just want the conclusion and want to skip the details, jump to the end for the climax :). If you care to see this feature implemented, please vote for it at http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/5194915-ngen-should-support-explicit-path-based-assembly-l

    Load-Context

    In my previous post on how NGEN loads native images I mentioned that NGEN images are supported only in the default load context. Essentially there are 3 load contexts (excluding the reflection-only context), and based on how you load an assembly it lands in one of those 3 contexts. You can read more about the load contexts at http://blogs.msdn.com/b/suzcook/archive/2003/05/29/57143.aspx. For our purposes:
    1. Default context: this is where assemblies loaded through implicit assembly references or Assembly.Load(…) calls land
    2. LoadFrom context: this is where assemblies loaded with the Assembly.LoadFrom call are placed
    3. Null context (or neither context): this is where assemblies loaded with Assembly.LoadFile or reflection-emit (among other APIs) are placed
    Even though a lot of people view the contexts only in light of how they impact the searching of assembly dependencies, they have other critical impacts. E.g. the native image of an assembly (generated via NGEN) is only loaded if that assembly is loaded in the default context.

    #1 and #3 are pretty simple to understand. If you use Assembly.Load, or if your assembly has implicit assembly dependencies, the NativeBinder will search for native images for those assemblies. If you load an assembly through Assembly.LoadFile("c:\foo\some.dll") then it will be loaded in the null context and will definitely not get native image support. Things get weird for #2 (LoadFrom).
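
    To make the three cases concrete, here is a small sketch that exercises each API (c:\temp\some.dll is the example path from this post and may not exist on your machine, hence the try/catch; mscorlib is used as a stand-in identity for Assembly.Load):

```csharp
using System;
using System.Reflection;

class LoadContexts
{
    static void TryLoad(Func<Assembly> load, string context)
    {
        try { Console.WriteLine($"{context}: loaded {load().GetName().Name}"); }
        catch (Exception e) { Console.WriteLine($"{context}: {e.GetType().Name}"); }
    }

    static void Main()
    {
        // #1 Default context: bind by assembly identity; NGEN images eligible.
        TryLoad(() => Assembly.Load("mscorlib"), "default (Assembly.Load)");

        // #2 LoadFrom context: bind by path; NGEN images only via promotion.
        TryLoad(() => Assembly.LoadFrom(@"c:\temp\some.dll"), "LoadFrom");

        // #3 Neither context: bind by exact file; never gets NGEN images.
        TryLoad(() => Assembly.LoadFile(@"c:\temp\some.dll"), "LoadFile");
    }
}
```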

    Experiments

    Let's see a simple example where I have an executable loadfrom.exe which makes the following call
    Assembly assem = Assembly.LoadFrom(@"c:\temp\some.dll");
    some.dll has been NGEN'd as

    c:\temp>ngen install some.dll
    Microsoft (R) CLR Native Image Generator - Version 4.0.30319.17929
    Copyright (c) Microsoft Corporation. All rights reserved.
    1> Compiling assembly c:\temp\some.dll (CLR v4.0.30319) ...

    Now we run loadfrom.exe as follows
    c:\temp>loadfrom.exe
    Starting
    Got assembly
    In the fusion log I can see, among others, the following messages

    WRN: Native image will not be probed in LoadFrom context. Native image will only be probed in default load context, like with Assembly.Load().
    LOG: Start validating all the dependencies.
    LOG: [Level 1]Start validating native image dependency mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089.
    Native image has correct version information.
    LOG: Validation of dependencies succeeded.
    LOG: Bind to native image succeeded.
    Attempting to use native image C:\Windows\assembly\NativeImages_v4.0.30319_32\some\804627b300f73759069f96bac51811a0\some.ni.dll.
    Native image successfully used.


    Interestingly, the native image was loaded for some.dll even though it was loaded using Assembly.LoadFrom. This happened despite the loader clearly warning in the log that it would not attempt to load the native image.

    Now let's try running the same program, this time ensuring that the exe and the dll are not in the same folder

    c:\temp>copy loadfrom.exe ..
    1 file(s) copied.

    c:\temp>cd ..
    c:\>loadfrom.exe
    Starting
    Got assembly
    In this case the log says something different

    WRN: Native image will not be probed in LoadFrom context. Native image will only be probed in default load context, like with Assembly.Load().
    LOG: IL assembly loaded from c:\temp\some.dll.

    As you can see the NI image was not loaded.

    The reason is load-context promotion. When LoadFrom is used on a path from which a plain Load would have found the assembly anyway, the LoadFrom results in the assembly being loaded in the default context; in other words, the load context is promoted to the default context. In our first example, since c:\temp\some.dll was on the application's base path (APPBASE), the load landed in the default context and the NI was loaded. The same did not happen in the second example.

    Conclusion

    1. NGEN images are only supported in the default context, e.g. for assemblies loaded via implicit references or through the Assembly.Load() API call
    2. NGEN images are not supported for explicit loads done via Assembly.LoadFile(path)
    3. NGEN images are not reliably supported for explicit loads done via Assembly.LoadFrom(path)
    Given the above, there is no reliable way to load assemblies from arbitrary paths and get NGEN native image support. In the modern programming world a lot of large applications are moving away from the traditional GAC-based approach to a more plug-in based, loosely-coupled-components approach. These large applications locate their plug-ins via their own proprietary probing logic and load them using one of the explicit path-based load mechanisms. For these there is no way to get the performance boost of native images. I think this is a limitation which the CLR needs to address in the future.

    Dad loves Surface

    • 4 Comments
    Mt. Rainier

    I have given a Surface to my daughter. A lot of my friends/family ask me how I like using the Surface and whether my daughter likes it as well. I can tell you that the killer feature from a dad's point of view is Family Safety. I am not an iPad/Android user, so I do not know how they do in this area, but I love this capability in Windows. In my humble opinion it is pretty under-sold, and a lot of parents are unaware of this gem.

    You just need to follow the steps at http://windows.microsoft.com/en-US/windows/set-up-family-safety to set it up, either for a new account or for your child's existing Microsoft account. Once you do, just head to https://familysafety.microsoft.com/


    Tap/click into your child's account and you can set up various things. I use all of them, including time restrictions and app restrictions.


    I get a weekly email with her activity report and can ding her on the time she spends on Netflix.

    Also, I get an email when she tries to install weird games. And no, I'm not going to allow her to use Gangnam Guy.


    Setting daily time limits and curfew hours is fun :)


    When she goes outside this time range, the Surface locks up with a screen asking her to take the device to a parent to unlock it. I can use my Live ID and extend her hours if I want to. Now the main problem is saying no to such a cute face :)



    .NET: Loading Native (NGEN) images and its interaction with the GAC

    • 0 Comments

    It's common for people to think that NGEN works only with strong-named assemblies, and that it places its output files in the GAC or otherwise depends on the GAC closely. This is mostly not true.

    If you are new to this, there's a quick primer on NGEN that I wrote: http://blogs.msdn.com/b/abhinaba/archive/2013/12/10/net-ngen-gac-and-their-interactions.aspx.

    GAC

    The Global Assembly Cache, or GAC, is a central repository where managed assemblies can be placed, either using the command-line gacutil tool or programmatically using the Fusion APIs. The main benefits of the GAC are

    1. Avoids dll hell
    2. Provides a central place to discover dependencies (place your binary in the central place and other applications will find it)
    3. Allows side-by-side publication of multiple versions of the same assembly
    4. Provides a way to apply critical patches, especially security patches, that automatically flow to all apps using that assembly
    5. Allows sharing of assemblies across processes; particularly helpful for system assemblies that are used by most managed applications
    6. Provides custom versioning mechanisms (e.g. assembly redirects / publisher policies)

    While the GAC has its uses, it has its problems as well. One of the primary problems is that an assembly has to be strongly named to be placed in the GAC, and that's not always possible (e.g. read here and here).

    NIC

    The NIC, or Native Image Cache, is the location where NGEN places native images. When NGEN is run to create a native image, as in

    c:\Projects>ngen.exe install MyMathLibrary.dll

    The corresponding MyMathLibrary.ni.dll is placed in the NIC. The NIC serves a similar purpose to the GAC but is not the same location. The NIC is at <Windows Dir>\assembly\NativeImages_<CLRversion>_<arch>. E.g. a sample path is

    c:\Windows\assembly\NativeImages_v4.0.30319_64\MyMathLibrary\7ed4d51aae956cce52d0809914a3afb3\MyMathLibrary.ni.dll

    NGEN places the files it generates in NIC along with other metadata to ensure that it can reliably find the right native image corresponding to an IL image.

    How does the .NET Binder find valid native images

    The CLR module that finds assemblies for execution is called the binder. There are various kinds of binders that the CLR uses; the one used to find native images for a given assembly is called the NativeBinder.

    Finding the right native image involves two steps. First the IL image and the corresponding potential native image are located on the file system, and then verification ensures that the native image is indeed a valid image for that IL. E.g. say the runtime gets a request to bind against an assembly MyMathLibrary.dll because another assembly, program.exe, has a dependency on it. This is what will happen

    1. First the standard fusion binder kicks in to find the assembly. It can find it either in
      1. the GAC, which means it is strongly named. The way files are placed in the GAC ensures that the binder can extract all the required information about the assembly without physically opening the file
      2. the APPBASE (e.g. the local folder of program.exe). It will proceed to open the IL file and read the assembly's metadata
    2. Native binding proceeds only in the default context (more about this in a later post)
    3. The NativeBinder finds the NI file in the NIC. It reads in the NI file's details and metadata
    4. It verifies the NI is indeed for that very same IL assembly. For that it goes through a rigorous matching process which includes (but is not limited to) a full assembly name match (same name, version, public key token, culture), time-stamp matching (the NI has to be newer than the IL), and MVID matching (see below)
    5. It also verifies that the NI was generated for the same CLR under which it is going to run (exact .NET version, processor type, etc.)
    6. It also ensures that the NI's dependencies are valid. E.g. when the NI was generated it bound against a particular version of mscorlib; if that mscorlib native image is not valid then this NI image is also rejected

    The question is: what happens if the assembly is not strongly named? In that case the MVID is used for matching, instead of relying on, say, the signing key tokens. The MVID is a GUID that is embedded in an IL file when a compiler compiles it. If you compile an assembly multiple times, each generated IL file has a unique MVID. If you open any managed assembly in ildasm and double-click its manifest, you can see the MVID

    .module MyMathLibrary.dll 
    // MVID: {EEEBEA21-D58F-44C6-9FD2-22B57F4D0193}

    If you re-compile and re-open, you should see a new id. The NativeBinder uses this fact as well. NGEN stores the MVID of the IL file for which an NI is generated; later, the NativeBinder ensures that the MVID of the IL file matches the MVID of the IL file from which the NI was generated. This step ensures that if you have multiple common.dll files on your PC, all of version 0.0.0.0 and unsigned, the NI for one common.dll will not get used for another common.dll.
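
    The MVID can also be read programmatically, via the real Module.ModuleVersionId property; a minimal sketch:

```csharp
using System;

class ShowMvid
{
    static void Main()
    {
        // Every compiled module gets a fresh MVID guid; this is what the
        // NativeBinder compares against the MVID recorded at NGEN time.
        Guid mvid = typeof(ShowMvid).Module.ModuleVersionId;
        Console.WriteLine($"MVID: {mvid}");
    }
}
```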

    The Double Loading Problem

    In early versions of .NET, when an NI file was opened, the corresponding IL file was also opened; I found a 2003 post from Jason Zander on this. Currently this is only partially fixed. In the steps above, look at step 1: to match an NI with its IL, a bunch of information is required from the IL file. If that IL file comes from the GAC, the IL file need not be opened to get that information, and hence no double loading happens. However, if the IL file comes from outside the GAC, then it is indeed opened and kept open. This causes significant memory overhead in large applications, and is something the CLR team needs to fix in the future.

    Summary

    1. Unsigned (non strong-named) assemblies can also be NGEN’d
    2. Assemblies need not be placed in the GAC to NGEN them or to consume the NGEN images
    3. However, GAC’d files provide better startup performance and memory utilization when using NI images, because double loading is avoided
    4. NGEN captures enough metadata about an IL image that the CLR can detect when a native image has become stale (no longer valid), reject the NI, and just use the IL

    NGEN Primer

    • 7 Comments

    I am planning to write a couple of NGEN/GAC related posts. I thought I’d share some introductory notes about NGEN. This is for beginner managed developers.

    Primer

    Consider I have a math-library which has this simple C# code.

    namespace Abhinaba
    {
        public class MathLibrary
        {
            public static int Adder(int a, int b)
            {
                return a + b;
            }
        }
    }

    The C# compiler compiles this code into processor-independent CIL (Common Intermediate Language) instead of machine-specific (e.g. x86 or ARM) code. That CIL code can be seen by opening the dll generated by the C# compiler in an IL disassembler like the default ildasm that comes with .NET. The CIL code looks as follows

    .method public hidebysig static int32  Adder(int32 a,
                                                 int32 b) cil managed
    {
      // Code size       9 (0x9)
      .maxstack  2
      .locals init ([0] int32 CS$1$0000)
      IL_0000:  nop
      IL_0001:  ldarg.0
      IL_0002:  ldarg.1
      IL_0003:  add
      IL_0004:  stloc.0
      IL_0005:  br.s       IL_0007
      IL_0007:  ldloc.0
      IL_0008:  ret
    } // end of method MathLibrary::Adder

    To abstract away the machine architecture, the .NET runtime defines a generic stack-based processor and generates code for this make-believe processor. Stack based means that this virtual processor works on a stack: it has instructions to push/pop values on the stack and instructions to operate on the values already on the stack. E.g. in this particular case, to add two values it pushes both arguments onto the stack using ldarg instructions and then issues an add instruction, which pops the two values off the top of the stack, adds them, and pushes the result back. The stack-based architecture makes no assumptions about the number of registers the final hardware will have (or even whether the processor is register based at all).
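    As an illustration of this stack-based execution model, a toy interpreter (sketched here in Python; the evaluator and its names are mine, not part of .NET) can evaluate the Adder IL above:

```python
def run_cil(instructions, args):
    """Toy evaluator for a tiny subset of CIL, just enough for Adder."""
    stack = []
    for op in instructions:
        if op == "ldarg.0":
            stack.append(args[0])   # push first argument
        elif op == "ldarg.1":
            stack.append(args[1])   # push second argument
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)     # pop two values, push their sum
        elif op == "ret":
            return stack.pop()      # top of stack is the return value

adder = ["ldarg.0", "ldarg.1", "add", "ret"]
print(run_cil(adder, (2, 3)))  # 5
```

    Note that nothing here mentions registers or a particular processor, which is precisely why the same CIL can later be JITed for x86, x64 or ARM.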

    Now obviously there is no processor in the real world that executes these CIL instructions. So someone needs to convert them to object code (machine instructions) for real-world processors from the x86, x64 or ARM families (and many other supported platforms). To do this .NET employs Just In Time (JIT) compilation. The JIT compiler’s responsibility is to generate native, machine-specific instructions from the generic IL instructions on demand: as a method is called for the first time, the JIT generates native instructions for it and thereby enables the processor to execute that method. On my machine the JIT produces the following x86 code for the add

    02A826DF  mov         dword ptr [ebp-44h],edx  
    02A826E2  nop  
    02A826E3  mov         eax,dword ptr [ebp-3Ch]  
    02A826E6  add         eax,dword ptr [ebp-40h]  

    This process happens on demand. That is, if Main calls Adder, Adder will be JITed only when it is actually called by Main. If a function is never called, in most cases it is never JITed. The call stack clearly shows this on-demand flow.

    clr!UnsafeJitFunction <------------- This will JIT Abhinaba.MathLibrary.Adder 
    clr!MethodDesc::MakeJitWorker+0x535
    clr!MethodDesc::DoPrestub+0xbd3
    clr!PreStubWorker+0x332
    clr!ThePreStub+0x11
    App!ConsoleApplication1.Program.Main()+0x3c <----- This managed code drove that JIT

    The benefits of this approach are

    1. It provides a way to develop applications in a variety of different languages. Each of these languages can target MSIL and hence interop seamlessly
    2. MSIL is processor-architecture agnostic, so an MSIL-based application can be made to run on any processor on which .NET runs (build once, run many places)
    3. Late binding. Binaries are bound to each other (say an exe to its dlls) late, which allows significant leeway in how loosely coupled they can be
    4. The possibility of very machine-specific optimization, as the compilation happens on the exact machine/device on which the application will run

    JIT Overhead

    The benefits mentioned above come with the overhead of having to convert the MSIL before execution. The CLR does this on demand: just before a method executes for the first time, it is converted to native code. This “just in time” dynamic compilation, or JITing, adds both to application startup cost (a lot of methods are executing for the first time) and to execution-time performance. As a method is run many times, the initial cost of JITing fades away. The cost of executing a method n times can be expressed as

    Cost JIT + n * Cost Execution

    At startup most methods are executing for the first time and n is 1, so the cost of JIT predominates. This might result in slow startup, which affects scenarios like phones, where slow application startup results in poor user experience, or servers, where slow startup may result in timeouts and failure to meet system SLAs.
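    Plugging illustrative numbers into that formula shows how the JIT cost amortizes (the costs below are assumptions for the sake of the sketch, not real measurements):

```python
# Hypothetical costs: 5 ms to JIT a method, 0.01 ms per execution
jit_cost_ms = 5.0
exec_cost_ms = 0.01

def cost(n):
    """Total cost of executing a method n times: Cost JIT + n * Cost Execution."""
    return jit_cost_ms + n * exec_cost_ms

print(cost(1))                   # 5.01 ms - at startup the JIT cost predominates
print(cost(100_000) / 100_000)   # ~0.01 ms per call - the JIT cost has faded away
```

    The per-call cost at startup is dominated almost entirely by the JIT, which is exactly why techniques like NGEN target the first execution.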

    Another problem with JITing is that it essentially generates instructions in read-write data pages and then executes them. This does not allow the operating system to share the generated code across processes. So even if two applications use the exact same managed code, each contains its own copy of the JITed code.

    NGEN: Reducing or eliminating JIT overhead

    From the beginning .NET has supported pre-compilation through a process called NGEN (derived from Native image GENeration). NGEN consumes an MSIL file, runs the JIT in offline mode to generate native instructions for all managed IL functions, and stores them in a native image or NI file. Later, applications can directly consume this NI file. NGEN is run on the same machine where the application will be used, typically during installation of that application. This retains all the benefits of JIT and at the same time removes its overhead. Also, since the generated file is a standard executable file, its executable pages can be shared across processes.

    c:\Projects\ConsoleApplication1\ConsoleApplication1\bin\Debug>ngen install MyMathLibrary.dll
    Microsoft (R) CLR Native Image Generator - Version 4.0.30319.33440
    Copyright (c) Microsoft Corporation.  All rights reserved.
    1>    Compiling assembly c:\Projects\bin\Debug\MyMathLibrary.dll (CLR v4.0.30319) ...

    One of the problems with NGEN-generated executables is that the file contains both the IL and NI code. The files can be quite large. E.g. for mscorlib.dll I have the following sizes

    Directory of C:\Windows\Microsoft.NET\Framework\v4.0.30319

    09/29/2013  08:13 PM         5,294,672 mscorlib.dll
                   1 File(s)      5,294,672 bytes

    Directory of C:\Windows\Microsoft.NET\Framework\v4.0.30319\NativeImages

    10/18/2013  12:34 AM        17,376,344 mscorlib.ni.dll
                   1 File(s)     17,376,344 bytes


    Read up on the MPGO tool to see how this can be optimized (http://msdn.microsoft.com/library/hh873180.aspx)

    NGEN Fragility

    Another problem NGEN faces is fragility. If something changes in the system, the NGEN images become invalid and cannot be used. This is especially true for hardbound assemblies.

    Consider the following code

    class MyBase
    {
        public int a;
        public int b;
        public virtual void func() {}
    }
    
    static void Main()
    {
        MyBase mb = new MyBase();
        mb.a = 42;
        mb.b = 20;
    }

    Here we have a simple class whose fields are being modified. The MSIL code for those field accesses looks like

    L_0008: ldc.i4.s 0x2a
    L_000a: stfld int32 ConsoleApplication1.MyBase::a
    L_000f: ldloc.0 
    L_0010: ldc.i4.s 20
    L_0012: stfld int32 ConsoleApplication1.MyBase::b

    The native code for the variable access can be as follows

                mb.a = 42;
    0000004b  mov         eax,dword ptr [ebp-40h] 
    0000004e  mov         dword ptr [eax+4],2Ah 
                mb.b = 20;
    00000055  mov         eax,dword ptr [ebp-40h] 
    00000058  mov         dword ptr [eax+8],14h 

    The code-generation engine essentially took a dependency on the layout of the MyBase class while generating the code that updates its fields. The hard-coded layout dependency is that the compiler assumes MyBase looks like

    <base> MethodTable
    <base> + 4 a
    <base> + 8 b

    The base address is stored in the eax register and the updates are made at offsets of 4 and 8 bytes from that base. Now consider that MyBase is defined in assembly A and is accessed by some code in assembly B, and that both A and B are NGENed. If for some reason the MyBase class (and hence assembly A) is modified so that the new definition becomes

    class MyBase
    {
        public int foo;
        public int a;
        public int b;
        public virtual void func() {}
    }

    From the perspective of MSIL code, these variables are referenced by their symbolic names (ConsoleApplication1.MyBase::a), so if the layout changes, the JIT compiler at runtime will find their new locations from the metadata in the assembly and bind to the correct updated locations. With NGEN, however, all this changes: the NGEN image of the accessor is now invalid and has to be regenerated to match the new layout

    <base> MethodTable
    <base> + 4 foo
    <base> + 8 a
    <base> + 12 b

    This means that when the CLR picks up an NGEN image it needs to be absolutely sure about its validity. More about that in a later post.
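    The fragility can be sketched with a toy layout calculator (Python, illustrative only — real field layout is decided by the CLR, and the 4-byte assumptions below are mine):

```python
def layout(fields, ptr_size=4):
    """Toy layout: a MethodTable pointer at offset 0, then 4-byte int fields in order."""
    offsets, off = {}, ptr_size
    for f in fields:
        offsets[f] = off
        off += 4
    return offsets

old = layout(["a", "b"])          # offsets NGEN baked into assembly B's native code
new = layout(["foo", "a", "b"])   # layout after assembly A was modified
print(old)  # {'a': 4, 'b': 8}
print(new)  # {'foo': 4, 'a': 8, 'b': 12}
# The baked-in offsets for a and b no longer match, so B's NGEN image must be rejected
assert old["a"] != new["a"] and old["b"] != new["b"]
```

    The JIT recomputes these offsets from metadata on every run, which is why pure JITed code survives the change while the hardbound NGEN image does not.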


    .NET: Figuring out if your application is exception heavy

    • 3 Comments
    Ocean beach

    In the past I worked on an application which used modules from different teams. Many of these modules raised and caught a ton of exceptions. So much so that performance data showed these exceptions were causing issues. So I had to figure out an easy way to programmatically find this code and inform its owners that exceptions are for exceptional scenarios and shouldn’t be used for normal code flow :)

    Thankfully the CLR provides an easy hook in the form of an AppDomain event. I just need to subscribe to the AppDomain’s FirstChanceException event and the CLR notifies me up front when an exception is raised. It does so even before any managed code gets a chance to handle it (and potentially suppress it).

    The following is a plugin which throws and immediately catches an exception.

    namespace Plugins
    {
        public class FunkyPlugin
        {
            public static void ThrowingFunction()
            {
                try
                {
                    Console.WriteLine("Just going to throw");
                    throw new Exception("Cool exception");
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Caught a {0}", ex.Message);
                }
            }
        }
    }

    In the main application I added code to subscribe to the FirstChanceException event before calling the plugins

    using System;
    using System.Runtime.ExceptionServices;
    using System.Reflection;
    
    namespace foo
    {
        public class Program
        {
            static void Main()
            {
                // Register handler
                AppDomain.CurrentDomain.FirstChanceException += FirstChanceHandler; 
                Plugins.FunkyPlugin.ThrowingFunction();
            }
            
            static void FirstChanceHandler(object o, 
                                           FirstChanceExceptionEventArgs e)
            {
                MethodBase site = e.Exception.TargetSite;
                Console.WriteLine("Thrown by : {0} {1}({2})", site.Module, 
                                                              site.DeclaringType, 
                                                              site.ToString());
                Console.WriteLine("Stack: {0}", e.Exception.StackTrace);
            }
        }
    }

    The first line in Main is the event subscription, and FirstChanceHandler just dumps out the name of the assembly and type that raises the exception. The output of this program is as follows

    Just going to throw
    Thrown by : some.dll Plugins.FunkyPlugin(Void ThrowingFunction())
    Stack:    at Plugins.FunkyPlugin.ThrowingFunction()
    Caught a Cool exception

    As you can see the handler runs even before the catch block executes and I have the full information of the assembly, type and method that throws the exception.

    Behind the Scene

    For most it might suffice to know that the event handler gets called before anyone gets a chance to handle the exception. However, if you care about when exactly this is fired: it’s in the first pass (first chance), just after the runtime notifies the debugger/profiler.

    The managed exception system piggybacks on the native OS exception handling system. Though x86 exception handling (FS:0 based chaining) is significantly different from x64 (PDATA), both follow the same basic idea

    1. From the outside a managed exception looks exactly like a native exception, and hence the OS’s normal exception handling mechanism kicks in
    2. Exception handling requires some mechanism to walk the call stack of the thread on which the exception is thrown, so that it can find an up-level catch block as well as call the finally blocks of all functions in between the catch and the point where the exception was thrown. The mechanism varies between x86 and x64 (a series of data structures pushed onto the stack in the case of x86, or a series of data-structure tables registered with the OS on x64) but is not super relevant for our discussion.
    3. On an exception the OS walks the stack and, for managed function frames, calls into the CLR’s registered personality routine (that’s what it’s called :)). This routine knows how to handle managed exceptions
    4. This routine notifies the profiler, then the debugger, of this first-chance exception, so that the debugger can potentially break on the exception and do other relevant operations. If the debugger does not handle the first-chance exception, processing of the exception continues
    5. If there is a registered handler for FirstChanceException, it is called
    6. The JIT is consulted to find the appropriate catch block for the exception (none might be found)
    7. The CLR returns the right set of information to the OS, indicating that the exception will indeed be processed
    8. The OS initiates the second pass
    9. For every function between the frame of the exception and the found catch block, the CLR’s handler routine is called; the CLR consults the JIT to find the appropriate finally blocks and proceeds to call them for cleanup. In this phase the stack actually starts unwinding
    10. This continues till the frame in which the catch was found is reached. The CLR proceeds to execute the catch block.
    11. If all is well the exception has been caught and processed and peace is restored to the world.

    As should be evident from the basic flow above, the FirstChanceHandler gets called before any code gets a chance to catch the exception, and it is called even if the exception ultimately goes unhandled.

    PS: Please don’t throw an exception in the FirstChance handler :)


    Bing it on

    • 3 Comments
    1468682_10151739093417274_1187424887_n

    In early 2008 I joined the CLR team to clean garbage (or to write Garbage Collectors :)). It has been a wild ride writing generational Garbage Collectors and memory managers and tinkering with runtime performance and memory models. It’s been great to see people on the street use stuff that I helped build, or to see internal team reports as .NET updates/downloads went out to hundreds of millions of machines. On one occasion I found myself under a medical diagnostic device clearly running on .NET. I didn’t run out, indicating my faith in the stuff we built (or maybe I was sedated, who knows).

    I decided to change things up a bit and move from the world of devices, desktops and consoles to that of the cloud. This week I began working in the Bing team. From now on I will no longer be part of the team that builds the CLR, but part of the team that really pushes the usage of the CLR to the extreme: using .NET to serve billions of queries on thousands of machines.

    I hope to continue blogging about the CLR/.NET and provide a user’s perspective on the best managed runtime in the world.

    BTW the photo above is the fortune cookie I got at my farewell lunch with the CLR team. Very appropriate.


    Quick/Dirty Function Coverage using Windbg

    • 0 Comments

    To find code coverage at line and block granularity you need a full-fledged code coverage tool. However, sometimes you can use a quick and dirty trick in WinDbg to see which functions are being called. This works well when you need to do it for a small set of functions, which is what I recently needed. Let’s say it was for all the functions of a class called Graph.

    First I got the application under the debugger, wherein it automatically broke into the debugger at the application start. Then I added breakpoints to all these functions using the following

    0:000> bm test!Graph*::* 10000
      2: 003584a0          @!"test!Graph::Vertex::`scalar deleting destructor'"
      3: 00356f80          @!"test!Graph::Vertex::~Vertex"
      4: 00358910          @!"test!Graph::AddVertex"
      5: 00356b70          @!"test!Graph::~Graph"
      6: 003589d0          @!"test!Graph::RangeCheck"
      7: 003589b0          @!"test!Graph::Count"
      8: 00357ce0          @!"test!Graph::operator[]"
      9: 003561a0          @!"test!Graph::Vertex::Vertex"
     10: 00356170          @!"test!Graph::Vertex::Vertex"
     11: 00358130          @!"test!Graph::`scalar deleting destructor'"
     12: 003588a0          @!"test!Graph::AddEdge"
     13: 003551e0          @!"test!Graph::Graph"

    Here I am telling WinDbg to add breakpoints on all methods of the Graph class with a very large hit count of 0x10000. Then I just let the application proceed and played with the various controls. Finally I closed the application, at which point it again broke into the debugger and I listed the breakpoints.

    0:000> bl
     1 e 00352c70     0001 (0001)  0:**** test!main
     2 e 003584a0     fd8f (10000)  0:**** test!Graph::Vertex::`scalar deleting destructor'
     3 e 00356f80     fcc7 (10000)  0:**** test!Graph::Vertex::~Vertex
     4 e 00358910     ff38 (10000)  0:**** test!Graph::AddVertex
     5 e 00356b70     ffff (10000)  0:**** test!Graph::~Graph
     6 e 003589d0     d653 (10000)  0:**** test!Graph::RangeCheck
     7 e 003589b0     c1de (10000)  0:**** test!Graph::Count
     8 e 00357ce0     fda7 (10000)  0:**** test!Graph::operator[]
     9 e 003561a0     fd8f (10000)  0:**** test!Graph::Vertex::Vertex
    10 e 00356170     ff38 (10000)  0:**** test!Graph::Vertex::Vertex
    11 e 00358130     ffff (10000)  0:**** test!Graph::`scalar deleting destructor'
    12 e 003588a0     ec56 (10000)  0:**** test!Graph::AddEdge
    13 e 003551e0     ffff (10000)  0:**** test!Graph::Graph

    The key fields are marked below

    13 e 003551e0 ffff (10000) 0:**** test!Graph::Graph

    The 10000 indicates that the debugger should actually break only after the breakpoint has been hit 0x10000 times. The FFFF indicates how many hits are left before that break would happen. So a simple subtraction (0x10000 – 0xFFFF) tells us that this function has been called once. It’s easy to see that one Graph object was created and destroyed (1 call to the ctor and 1 to the dtor) and that Graph::Count was called 15,906 times (0x10000 – 0xC1DE). So I didn’t really miss any of the functions in that test. If I had, it would say 10000 (10000) for the function I missed.
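    With many breakpoints the subtraction is worth scripting. A small sketch, using a few of the hex values from the bl output above:

```python
INITIAL = 0x10000  # hit count the breakpoints were created with

# remaining counts as shown in the 'bl' output above
remaining = {
    "test!Graph::Graph":   0xFFFF,
    "test!Graph::Count":   0xC1DE,
    "test!Graph::AddEdge": 0xEC56,
}

# calls made = initial count minus hits remaining
calls = {func: INITIAL - left for func, left in remaining.items()}
print(calls["test!Graph::Graph"])   # 1 - one Graph constructed
print(calls["test!Graph::Count"])   # 15906

# any function whose remaining count is still 0x10000 was never called
uncovered = [f for f, n in calls.items() if n == 0]
print(uncovered)  # [] - every listed function was hit at least once
```

    The same dictionary could be filled by pasting the whole bl listing and parsing each line, but for a handful of functions the manual subtraction in the post is just as quick.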


    Custom Resolution of Assembly References in .NET

    • 0 Comments

    Right now I am helping out a team with an assembly-resolution conflict bug. I thought I’d share the details, because I am sure others have landed in this situation.

    In complex managed applications, especially ones that use dynamically loaded plugins/addons, it’s common that not all the assemblies required by these addons are present in the assembly probing path. In a perfect world the closure set of all assembly references of an application is strong-named, the CLR binder uses the standard probing order to find all assemblies, and everything works perfectly. However, the world is not ideal.

    The requirement to resolve assemblies from different locations does arise, and .NET has support for it. E.g. this stackoverflow question http://stackoverflow.com/questions/1373100/how-to-add-folder-to-assembly-search-path-at-runtime-in-net has rightly been answered by pointing to the AssemblyResolve event. When .NET fails to find an assembly after probing (looking through) the various folders it uses for assemblies, it raises an AssemblyResolve event. User code can subscribe to this event and supply assemblies from whatever path it wants.

    This simple mechanism can be abused, resulting in major system issues. The main problem arises from over-eager resolution code. Consider an application A that uses two modules (say plugins) P1 and P2. P1 and P2 are somehow registered with A, and A uses Assembly.Load to load them. However, P1 and P2 ship with various dependencies which they place in various sub-directories that A is unaware of, and the CLR obviously doesn’t look into those folders to resolve the assemblies. To handle this situation both P1 and P2 have independently decided to subscribe to the AssemblyResolve event.

    The problem is that whenever the CLR fails to locate an assembly it calls these resolve-event handlers sequentially. So, based on the order in which the handlers were registered, it is possible that for a missing dependency of P2 the resolution handler of P1 gets called. Coincidentally, the assembly the CLR is failing to resolve may be known to both P1 and P2, either because the name is generic or because it’s a widely used 3rd-party assembly which a ton of plugins use. So P1 loads the missing assembly P2 is referring to and returns it, and the CLR goes ahead and binds P2 to the assembly P1 returned. This is when bad things start to happen, because maybe P2 needs a different version of it. A crash follows.

    The MSDN documentation has already called out how to handle these issues in http://msdn.microsoft.com/en-us/library/ff527268.aspx. Essentially follow these simple rules

    1. Follow the best practices for assembly loading http://msdn.microsoft.com/en-us/library/dd153782.aspx
    2. Return null if you do not recognize the referring assembly
    3. Do not try to resolve assemblies you do not recognize
    4. Do not use Assembly.Load or AppDomain.Load to resolve the assemblies, because that can result in recursive calls to the same resolve event, finally leading to a stack overflow.

    The skeleton code for the resolve event handler can be something like

    static Assembly currentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
    {
        // Do not try to resolve assemblies which your module doesn't explicitly handle
        // There will be other resolve handlers that are called in-sequence, let them
        // do their job
        if (args.RequestingAssembly == null || !IsKnownAssembly(args.RequestingAssembly))
            return null;
    
        // parse and create name of the assembly being requested and then use your own logic
        // to locate the assembly
        AssemblyName aname = new AssemblyName (args.Name);
        string path = FindAssemblyByCustomLogic(aname);
    
        if (!File.Exists(path))
            return null;
                
        Assembly assembly = Assembly.LoadFile(path);
        return assembly;
    }

    Here you need to fill in the implementation of IsKnownAssembly which only returns true for assemblies that belong to your module.


    Arduino Fun – Door Entry Alarm

    • 4 Comments
     
    Arduino UNO based door entry alarm

    Physical computing and the “internet of things” are super exciting areas unfolding right now. Even decades back one could hook up sensors, remotely collect the data and process it. What is special now is that powerful micro-controllers are dirt cheap and most of us carry a really powerful computing device in our pockets. Connecting everything wirelessly is also very easy now, and almost every home has a wireless network.

    All of these put together can create some really compelling and cool stuff, where data travels from a sensor over wireless networks into the cloud and finally into the cell phone we carry everywhere. I ultimately want to create a smart door so that I can get a notification at work when someone knocks on our home door. Maybe I can even remotely open the door. The possibilities are endless, but time is not, so let’s see how far I get in some reasonable amount of time.

    Arduino UNO

    I unboxed the Arduino Uno Ultimate Starter Kit that I had got last week and spent some time with my daughter setting it up. The kit comes with a fantastic manual that helped me recap the basics of electronics. It contains an Arduino UNO prototyping board based on the ATmega328 chip, a low-power 8-bit microcontroller running at a maximum of 20 MHz. To most people that seems paltry, but it’s amazing what you can pull off with one of these chips.

    The final assembled setup of the kit looks as follows. It comes with a handy plastic surface on which the Arduino (in blue) and a breadboard are stuck. It’s connected over USB to my PC.

    IMG_9181

    Getting everything up was easy with the instruction booklet that came with it. The only glitch was that Windows 8 wouldn’t let me install the drivers because they are not properly signed, so I had to follow the steps given here to disable driver verification.

    Post that the Arduino IDE connected to the board and I could easily write and deploy code (C like syntax).

    image

    The toolbar icons are a bit weird though (a side arrow for upload and an up arrow for open????).

    There was no way to debug through the IDE (or at least I couldn’t find one). So I set up some easy printf-style debugging: basically you write to the serial port and the IDE displays it.

    image

    It was after this that I got to know that there’s a Visual Studio plugin with full debugging support. However, I haven’t yet used that.

    The Project

    I decided to start out by making a simple entry alarm and seeing how much time it takes to get everything done. In college I built something similar, but without a microcontroller (based on a 555 IC and IR photo-transistors), and it took a decent amount of time to hook up all the components. The basic idea is that across the door there is a source of light with a sensor on the other side. When someone passes in between, the light on the sensor is obstructed, and this sounds an alarm.

    When I last did this in college I made it robust by using a pulsating (fixed-frequency) IR LED as the source and IR sensors. For this project I relied on visible light and the photo-resistor that came with the kit.

    I built the following circuit.

    image

    I connected a photo-resistor in series with a 10K resistor and connected the junction to the analog input pin A0 of the Arduino. Essentially this acts as a voltage divider. In bright light the junction, and hence the A0 input, reads around 1.1 V. When the light is obstructed the resistance of the photo-resistor changes and the junction reads about 2.6 V. The analog pins report readings in the range 0 (meaning 0 V) to 1023 (for 5 V), so this roughly comes to around 225 in light and 530 in the shade. Obviously these values are relative, depending on the strength of the light and how dark it gets when someone obstructs it. To avoid taking an absolute dependency on the values I created another voltage divider using a potentiometer and connected it to another analog input pin, A1. Now I can turn the potentiometer to control a threshold value: if the reading on A0 is above this threshold, it means it’s dark enough that someone has obstructed the light falling on the resistor and it’s time to sound the alarm.
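    The arithmetic behind those readings is simple; a quick sketch (Python, using the junction voltages mentioned above):

```python
def analog_read_value(junction_volts, v_ref=5.0, full_scale=1023):
    """Convert the A0 junction voltage into the 0-1023 value analogRead reports."""
    return round(junction_volts / v_ref * full_scale)

print(analog_read_value(1.1))  # 225 - bright light
print(analog_read_value(2.6))  # 532 - light obstructed (roughly the 530 quoted above)
```

    Any potentiometer setting that puts the A1 reading between these two values works as a threshold, which is why the exact light level doesn’t matter much.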

    The alarm consists of flashing blue and red LEDs (obviously to match police lights) and a standard siren sound played using a piezo crystal that also came with the kit.

    This full assembled and deployed setup looks as follows.

    IMG_9167

    Update: the picture above says photo-transistor; it should be photo-resistor

    Code

    The key functions are setup(), which is automatically called at startup, and loop(), which as the name suggests is called in a loop. setup() configures the digital pins as outputs to drive the flashing LEDs. In loop() I read the values of the photo-resistor and the potentiometer, and based on their comparison I sound the alarm

    // Define the constants
    const int sensorPin = 0;  // Photo-resistor pin
    const int controlPin = 1; // Potentiometer pin
    const int buzzerPin = 9;  // Buzzer pin
    const int rLedPin = 10;   // Red LED pin
    const int bLedPin = 11;   // Blue LED pin

    // Always called at startup
    void setup()
    {
        // Set the LED and buzzer pins as output
        pinMode(rLedPin, OUTPUT);
        pinMode(bLedPin, OUTPUT);
        pinMode(buzzerPin, OUTPUT);
    }

    // This loops forever
    void loop()
    {
        int sensorVal = analogRead(sensorPin);
        int controlVal = analogRead(controlPin);

        if (sensorVal > controlVal)
        {
            // Reading is above the threshold, so the light
            // is obstructed; sound the buzzer
            playBuzzer(buzzerPin);
        }

        delay(100);
    }

    void playBuzzer(const int buzzerPin)
    {
        for (int i = 0; i < 3; ++i)
        {
            // alternate between two tones, one low and one high;
            // at the same time alternate the red and blue LED flashing

            digitalWrite(rLedPin, HIGH); // Red LED on
            tone(buzzerPin, 400);        // play 400 Hz tone for 500 ms
            delay(500);
            digitalWrite(rLedPin, LOW);  // Red LED off

            digitalWrite(bLedPin, HIGH); // Blue LED on
            tone(buzzerPin, 800);        // play 800 Hz tone for 500 ms
            delay(500);
            digitalWrite(bLedPin, LOW);  // Blue LED off
        }

        // Stop the buzzer
        noTone(buzzerPin);
    }

    Next Steps

    This system has some obvious flaws. Someone can duck below or jump over the light path, or even shine a flashlight on the sensor while passing through. To make it robust, consider using strips of mirrors on the two sides and a laser (preferably IR) bouncing between them, so that it’s virtually impossible to get through without breaking the light.

    You can also use a pulsating source of light and detect the frequency at the sensor, which makes it even harder to fool.


    C# code for Creating Shortcuts with Admin Privilege

    • 1 Comments
    Seattle skyline

    Context

    If you just care about the code, then jump to the end :)

    In the CLR team, and across other engineering teams in Microsoft, we use build environments which are essentially command shells with a custom environment set up using bat and cmd scripts. These scripts set up various paths and environment variables to pick up the right set of build tools and output paths matching the architecture and build flavor. E.g. one example of launching such a shell could be…

    cmd.exe /k %BRANCH_PATH%\buildenv.bat <architecture> <build-flavor> <build-types> <etc...>

    The architectures vary between things like x86, amd64, and ARM; the build flavors between debug, check, retail, release, etc. The build-types indicate the target, like desktop, CoreCLR, or metro. Even though not every combination is allowed, the allowed combinations still number around 30. In the case of .NET Compact Framework, which supports many more architectures (e.g. MIPS, PPC) and targets (e.g. Xbox360, S60), the number of combinations is even larger.

    For day-to-day development I either need to enter this command each time I move to a different shell, or I have to create desktop shortcuts for all the combinations. This becomes repetitive each time I move to a different code branch. I had created a small app that I ran each time I moved to a new branch, and it would generate all the combinations of shortcuts given the branch details. However, our build requires elevation (admin privilege). So even though I created the shortcuts, I’d have to either right-click and use “Run as administrator” OR set that in the shortcut’s properties.

    image

    image

    This was a nagging pain for me. I couldn’t find any easy programmatic way to create a shortcut with administrator privilege (I’m sure there is some shell API to do that). So finally I binary-compared two shortcuts, one with “Run as administrator” set and one without, and saw that only one byte differed. So I hacked up some code to generate the shortcut and then modify that byte. I am sure there is a better/safer way to do this, but for now this “works for me”.

    The Code

    Since I didn’t find any online source for this code, I thought I’d share. Do note that this is a major hack and uses undocumented stuff. I’d never do this for shipping code, or for that matter anything someone other than me would rely on. So use at your own risk… Also, if you have a better solution, let me know and I will use that…

    // Requires a COM reference to the Windows Script Host Object Model
    // (IWshRuntimeLibrary); 'shell' below is a WshShell instance.

    // file-path of the shortcut (*.lnk file)
    string shortcutPath = Path.Combine(shortCutFolder, string.Format("{0} {1}{2}.lnk", arch, flavor, extra));
    Console.WriteLine("Creating {0}", shortcutPath);
    // the contents of the shortcut
    string arguments = string.Format("{0} {1} {2} {3}{4} {5}", "/k", clrEnvPath, arch, flavor, extra, precmd);

    // shell API to create the shortcut
    IWshShortcut shortcut = (IWshShortcut)shell.CreateShortcut(shortcutPath);
    shortcut.TargetPath = cmdPath;
    shortcut.Arguments = arguments;
    shortcut.IconLocation = "cmd.exe, 0";
    shortcut.Description = string.Format("Launches clrenv for {0} {1} {2}", arch, flavor, extra);
    shortcut.Save();

    // HACKHACK: update the link's byte to indicate that this is an admin shortcut
    using (FileStream fs = new FileStream(shortcutPath, FileMode.Open, FileAccess.ReadWrite))
    {
        fs.Seek(21, SeekOrigin.Begin);
        fs.WriteByte(0x22);
    }
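    For illustration, the same byte-flip can be written in portable C++. The function name is mine; the offset 21 and value 0x22 come straight from the hack above, and the same caveats apply:

```cpp
#include <cstdio>

// Mirror of the C# hack above: seek to byte 21 of the .lnk file and set
// it to 0x22, which (empirically) marks the shortcut "Run as administrator".
bool MarkShortcutAsAdmin(const char* lnkPath)
{
    std::FILE* f = std::fopen(lnkPath, "r+b");
    if (!f)
        return false;
    bool ok = std::fseek(f, 21, SEEK_SET) == 0 &&
              std::fputc(0x22, f) == 0x22;
    std::fclose(f);
    return ok;
}
```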
  • I know the answer (it's 42)

    Moving to Outlook from Google Reader

    • 4 Comments

    I am sure everyone by now knows that Google Reader is being shut down. I am a heavy user of Google Reader (or Greeder, as I call it) and I immediately started looking for an alternative, when it suddenly occurred to me that all the PCs I use have Outlook installed on them. So if you work for an organization that runs on Exchange Server, this could really work out well. You can use Office Outlook and Exchange as a great RSS feed reader. Consider this:

    1. It will provide full sync across multiple Outlook clients running on different PCs
    2. It will provide on the go access via Outlook Web-access
    3. Your phone’s outlook client should also work with it
    4. You can pretend to work while wasting time on boingboing

    First things first: export the OPML file from Google Reader

    Login to www.google.com and then go to https://www.google.com/takeout/#custom:reader

    image

    This will take some time and create an archive.

    image

    Click on the download button and save the zip. Then extract the zip as follows

    image

    Inside the extracted folder you will have the OPML file. For me it’s in C:\Users\admin\Desktop\XXXXXXXX@gmail.com-takeout\XXXXXXXX@gmail.com-takeout\Reader\subscriptions.xml

    Import to Outlook

    This OPML file needs to be imported into Outlook. Use the File tab to bring up the following UI in Outlook.

    image

    Then use Import to bring up the following

    image

    Choose OPML file and tap on Next. Now point it to the file you extracted. For me it was C:\Users\admin\Desktop\XXXXXXXX@gmail.com-takeout\XXXXXXXX@gmail.com-takeout\Reader\subscriptions.xml

    Hit next and finally choose the feeds you want to import (Select All).

    image

    Then tap on Next, and there you have Outlook as an RSS feed reader…

    image

    Read Everywhere

    It syncs fully via the cloud. Here I have it open in the browser. As you read a post it tracks what you have read, and across browsers and all your Outlook clients at work and home everything stays in sync.

    image

    Works great on the Windows Phone as well. I assume any Exchange client should work.

    wp_ss_20130314_0002wp_ss_20130314_0001

    Pain Points

    While reading in Outlook is seamless, there are some usability issues in both the browser and on the phone. Surface RT is broken; someone should really fix the mail client on Surface.

    The major pain point I see is that in Outlook Web Access the pictures are not shown. Tapping on the picture placeholders works. I think some security feature is blocking the embedded images.

    Also, on the Windows Phone you have to go into each feed and set it up so that it syncs that folder. This is a pain, but I guess it is there to protect against downloads over carrier networks.

  • I know the answer (it's 42)

    C++/CLI and mixed mode programming

    • 2 Comments
    beach

    I had a very limited idea about how mixed-mode programming on .NET works. In mixed mode the binary can have both native and managed code. Such binaries are generally programmed in a special variant of the C++ language called C++/CLI, and the sources need to be compiled with the /CLR switch.

    For some recent work I am doing I had to ramp up on Managed C++ usage and how the .NET runtime supports the mixed mode assemblies generated by it. I wrote up some notes for myself and later thought that it might be helpful for others trying to understand the inner workings.

    History

    The initial foray of C++ into the managed world was via the Managed Extensions for C++, or MC++. This is deprecated now and was originally released in VS 2003. The MC++ syntax turned out to be too confusing and wasn’t adopted well. It was soon replaced with C++/CLI, which added limited extensions over C++ and was better designed, so that the language feels more in sync with the general C++ language specification.

    C++/CLI

    The code looks like below.

    ref class CFoo
    {
    public:
        CFoo()
        {
            pI = new int;
            *pI = 42;
            str = L"Hello";
        }
    
        void ShowFoo()
        {
            printf("%d\n", *pI);
            Console::WriteLine(str);
        }
    
        int *pI;
        String^ str;
    };

    In this code we are defining a reference type class CFoo. This class uses both managed (str) and native (pI) data types and seamlessly calls into managed and native code. There is no special code required to be written by the developer for the interop.

    The managed type uses special handles denoted by ^ as in String^ and native pointers continue to use * as in int*. A nice comparison between C++/CLI and C# syntax is available at the end of http://msdn.microsoft.com/en-US/library/ms379617(v=VS.80).aspx. Junfeng also has a good post at http://blogs.msdn.com/b/junfeng/archive/2006/05/20/599434.aspx

    The benefits of using mixed mode

    1. Easy to port over C++ code and take the benefit of integrating with other managed code
    2. Access to the extensive managed API surface area
    3. Seamless managed to native and native to managed calls
    4. Static-type checking is available (so no mismatched P/Invoke signatures)
    5. Performance of native code where required
    6. Predictable finalization of native code (e.g. stack based deterministic cleanup)

     

    Implicit Managed and Native Interop

    Seamless, statically type-checked, implicit interop between managed and native code is the biggest draw of C++/CLI.

    Calls from managed to native and vice versa are handled transparently and can be intermixed; e.g. managed --> unmanaged --> managed call chains just work without the developer having to do anything special. This technology is called IJW (It Just Works). We will use the following code to understand the flow.

    #pragma managed
    void ManagedAgain(int n)
    {
        Console::WriteLine(L"Managed again {0}", n);
    }
    
    #pragma unmanaged
    void NativePrint(int n)
    {
        wprintf(L"Native Hello World %u\n\n", n);
        ManagedAgain(n);
    }
    
    #pragma managed
    
    void ManagedPrint(int n)
    {
        Console::WriteLine(L"Managed {0}", n);
        NativePrint(n);
    }
    

    The call flow goes from ManagedPrint --> NativePrint –> ManagedAgain

    Native to Managed

    For every managed method, the C++ compiler creates both a managed and an unmanaged entry point. The unmanaged entry point is a thunk/call-forwarder; it sets up the right managed context and calls into the managed entry point. It is called the IJW thunk.

    When a native function calls into a managed function, the compiler actually binds the call to the native forwarding entry point of the managed function. If we inspect the disassembly of NativePrint we see the following code generated to call into the ManagedAgain function

    00D41084  mov         ecx,dword ptr [n]         // Store NativePrint argument n to ECX
    00D41087  push        ecx                       // Push n onto stack
    00D41088  call        ManagedAgain (0D4105Dh)   // Call IJW Thunk
    
    

    Now 0x0D4105D is the address of the native entry point. It forwards the call to the actual managed implementation

    ManagedAgain:
    00D4105D  jmp         dword ptr [__mep@?ManagedAgain@@$$FYAXH@Z (0D4D000h)]  
    

    Managed to Native

    In the case where a managed function calls into a native function standard P/Invoke is used. The compiler just defines a P/Invoke signature for the native function in MSIL

    .method assembly static pinvokeimpl(/* No map */) 
            void modopt([mscorlib]System.Runtime.CompilerServices.CallConvCdecl) 
            NativePrint(int32 A_0) native unmanaged preservesig
    {
      .custom instance void [mscorlib]System.Security.SuppressUnmanagedCodeSecurityAttribute::.ctor() = ( 01 00 00 00 ) 
      // Embedded native code
      // Disassembly of native methods is not supported.
      //  Managed TargetRVA = 0x00001070
    } // end of method 'Global Functions'::NativePrint
    
    

    The managed to native call in IL looks as

    Managed IL:
      IL_0010:  ldarg.0
      IL_0011:  call void modopt([mscorlib]System.Runtime.CompilerServices.CallConvCdecl) NativePrint(int32)
    

    The virtual machine (CLR) at runtime generates the correct thunk to get the managed code to P/Invoke into native code. It also takes care of other things like marshaling the managed arguments to native and vice versa.

    Managed to Managed

    While it would seem this should be easy, it was a bit more convoluted. Essentially the compiler always bound to the native entry point for a given managed method. So a managed-to-managed call degenerated to managed -> native -> managed and hence resulted in a suboptimal double P/Invoke. See http://msdn.microsoft.com/en-us/library/ms235292(v=VS.80).aspx

    This was fixed in later versions by using dynamic checks and ensuring managed calls always call into managed targets directly. However, in some cases managed-to-managed calls still degenerate to double P/Invoke. So an additional knob provided was the __clrcall calling-convention keyword. This stops the native entry point from being generated completely. The pitfall is that such methods are not callable from native code. So if I stick a __clrcall in front of ManagedAgain I get the following build error while compiling NativePrint.

    Error	2	error C3642: 'void ManagedAgain(int)' : cannot call a function with
    __clrcall calling convention from native code <filename>

    /CLR:PURE

    If a C++ file is compiled with this flag, then instead of a mixed-mode assembly (one that has both native code and MSIL) a pure MSIL assembly is generated. So all methods are __clrcall and the C++ code is compiled into MSIL code and NOT to native code.

    This comes with some benefits, in that the assembly becomes a standard MSIL-based assembly which is no different from any other managed-only assembly. It also comes with some limitations. Native code cannot call into the managed code in this assembly because there is no native entry point to call into. However, native data is supported, and the managed code can transparently call into other native code. Let's see a sample.

    I moved all the unmanaged code to a separate native C++ dll as

    void NativePrint(int n)
    {
        wprintf(L"Native Hello World %u\n\n", n);
    }
    

    Then I moved my managed C++ code to a new project and compiled it with /CLR:PURE

    #include "stdafx.h"
    #include 
    
    #include "..\Unmanaged\Unmanaged.h"
    using namespace System;
    
    void ManagedPrint(int n)
    {
        char str[30] = "some cool number";     // native data  
        str[5] = 'f';                          // modifying native data
        Console::WriteLine(L"Managed {0}", n); // call to BCL
        NativePrint(n);                        // call to my own native methods
        printf("%s %d\n\n", str, n);           // CRT
    }
    
    int main(array<System::String ^> ^args)
    {
        ManagedPrint(42);
        return 0;
    }
    

    The above builds and works fine. So even with /CLR:PURE I was able to

    1. Use native data like a char array and modify it
    2. Call into BCL (Console::WriteLine)
    3. Call transparently into other native code without having to hand generate P/Invoke signatures
    4. Use native CRT (printf)

    However, no native code can call into ManagedPrint. Also do note that even though pure MSIL is generated, the code is unverifiable (think C# unsafe). So it doesn't get the added safety that the managed runtime provides (e.g. I can just do str[200] = 0 and not get any bounds-check error)

    /CLR:Safe

    The /CLR:Safe compiler switch generates MSIL-only assemblies whose IL is fully verifiable. The output is no different from anything generated from, say, the C# or VB.NET compilers. This provides more security to the code, but at the same time loses several capabilities over and above the PURE variant

    1. No support for CRT
    2. Only explicit P/Invokes

    So for /CLR:Safe we need to do the following

    [DllImport("Unmanaged.dll")]
    void NativePrint(int i);
    
    void ManagedPrint(int n)
    {
        //char str[3000] = "some cool number"; // will fail to compile with  
        //str[5] = 'f';                        // "this type is not verifiable"
    
        Console::WriteLine(L"Managed {0}", n);
    
        NativePrint(n);                        // Hand coded P/Invoke
    }
    

    Migration

    MSDN has some nice articles on migrating from /CLR

    1. To /CLR:Pure http://msdn.microsoft.com/en-US/library/ms173253(v=vs.80).aspx
    2. To /CLR:Safe http://msdn.microsoft.com/en-US/library/ykbbt679(v=vs.80).aspx
  • I know the answer (it's 42)

    Windows Phone 8: Evolution of the Runtime and Application Compatibility

    • 3 Comments

    A long time back, around the release of Windows Phone 7 (WP7), I posted about the Windows Phone 7 series programming model. I also published how the .NET Compact Framework powered the applications on WP7.

    Further simplifying the block diagram, we can think of the entire WP7 application system as follows

    image

    As with most block diagrams, this is a gross simplification. However, I hope it helps to easily picture the entire system.

    Essentially the application can be purely managed (written in, say, C# or VB.NET). The application can only utilize services exposed by the developer platform and core services provided by the .NET Compact Framework. The application can in no way directly use native code or talk to the OS (say, call a Win32 API). It always has to go through the runtime infrastructure and runs in a security sandbox.

    The application manager is the loose term I am using to encompass everything that is used to manage the application, including the host.

    Windows Phone 8 (WP8) is a huge, huge change from Windows Phone 7.x (WP7). From the perspective of a WP7 application running on a WP8 device the system looks as follows

    image

    Everything in green in this diagram got outright replaced with an entirely new codebase, and the rest of the system other than the application was heavily modified to work with the new OS and the new managed runtime.

    Shared Windows Core

    The OS moved away from the Windows Embedded Compact (WinCE) OS core that was used in WP7 to a new OS which shares its core with desktop Windows 8. This means that a bunch of things in the WP8 OS are shared with the desktop implementation, including the kernel, networking, driver framework and others. The shared core obviously brings great value, as innovations and features will more easily flow across the two form factors (device and desktop), and it also reduces engineering redundancy on Microsoft's side. Some of the benefits are readily visible today, like great multi-core support and WinRT interop; others are more subtle.

    CoreCLR

    .NET Compact Framework (NETCF), which was used in WP7, has a very different design philosophy and hence a completely different implementation from the desktop .NET. I will have a follow-up post on this, but for now it suffices to note that NETCF is a very portable runtime that is designed to be versatile and cross-platform. The desktop CLR, on the other hand, is more closely tied to Windows and the processor architecture. It works closely with the OS and the underlying hardware to give the maximum performance benefit to managed code running on it.

    With the new Windows RT, which runs on ARM, the desktop CLR was in any case updated to work on the ARM processor. So when the phone chose to move to the shared core it was an obvious choice to move the CLR as well. This gave the same benefits of shared innovation and reduced engineering redundancy.

    The full desktop CLR is heavier and provides functionality that is not really required by phone scenarios, so a lighter variant of it (built from the same source) called CoreCLR was chosen for WP8. CoreCLR is the evolution of the lightweight runtime that powered Silverlight. With the move to CoreCLR, developers get a much faster runtime with an extended feature set that includes interop via WinRT.

    Backward Compat

    One of the simple statements made during all of the WP8 launch presentations was that applications in the store built for WP7 will work as-is on WP8. This is a small statement but a huge achievement and effort from the runtime-implementation perspective. Making apps work as-is when the entire runtime, OS and chipset have changed is non-trivial, to say the least. Our team worked very hard to make this possible.

    One of the biggest things that played out to our benefit was that WP7 apps were fully sandboxed and couldn’t use any native code. This means they had no means of taking behavioral dependencies on the OS APIs or the underlying hardware. The OS APIs were used via the CLR, which could always add quirks to expose consistent behavior to the applications.

    API Compatibility
    This required us to ensure that CoreCLR exposes the same API set as NETCF. We ran various automated tools to manage the API surface-area changes and retain meaningful API compat between WP7 and WP8. With a closed application store it was possible for us to gather complete metrics on API usage and correctly prioritize engineering resources to ensure that the majority of applications continued to see the same API set in signature and semantics.

    We also needed to ensure that the same APIs behave as closely as possible to those provided by NETCF. We tested a lot of applications from the app store to get as close as we could, and believe that we are at a place that should allow most WP7 applications' API usage to transparently fall over to the new runtime.

    Runtime behavior changes
    When a runtime changes, there are behavioral changes that can expose pre-existing issues with applications. This includes things like timing differences. Even though these runtime behaviors are not documented, and in some cases it is especially called out that user code should not take dependencies on them, some apps still did.

    A couple of the examples we saw

    1. Taking dependency on finalization order
      Even though the CLI specification clearly calls out that finalization order in .NET is not deterministic, code still took subtle dependencies on it. In one particular case object F1 used a file and its finalizer released it. Later another object F2 opened the same file. With the change in the runtime the timing of GC changed, so that at the time F2 tried to open the file, F1, though already collected, had not yet had its finalizer run and the file was still held open. Hence the application crashed. We got in touch with the app developer and got the code moved to use the proper dispose pattern
    2. GC timing and changes in the number of generations
      Application finalizers, contrary to .NET guidelines, modified other managed objects. With the changed GC timings and generations, those objects had already been collected, resulting in finalizer crashes
    3. Threading issues
      This was one of the painful ones. With the change in OS (and hence thread scheduler) and the addition of multiple cores in the ARM CPU, a lot of subtle races and deadlocks in the applications got exposed. These were pre-existing issues where synchronization primitives were not used correctly, or some code relied on the quanta-based thread scheduling WinCE did. With the move to WP8, threads run in parallel on different cores and are scheduled in different orders; this led to various deadlocks and race-driven crashes. Again we had to get in touch with app developers to address these issues
    4. There were other cases where the exact precision of floating-point math was relied on. This resulted in a board game where pucks flew off its surface
    5. Whether or not functions got inlined
    6. Order of static constructor initialization (especially in conjunction with function in-lining)
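    Item 3 in the list above is the classic shared-state race. A minimal plain C++ sketch of that bug class (names are my own; this is an illustration, not phone code):

```cpp
#include <mutex>
#include <thread>

// Two threads bump a shared counter. Under quanta-based, single-core
// scheduling the unsynchronized version could appear to work; on a
// multi-core CPU the lost updates show up quickly. The lock makes the
// increment safe.
long counter = 0;
std::mutex counterLock;

void IncrementMany(int times)
{
    for (int i = 0; i < times; ++i)
    {
        std::lock_guard<std::mutex> guard(counterLock); // remove this and updates get lost
        ++counter;
    }
}
```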

    We addressed some of these issues where it was realistic to fix them in the runtime. For some of the others we got in touch with the application developers. Obviously not all applications in the store, and not all of their features, were tested, so you should try to test your application when you have access to WP8.

    At one point we were all playing games on our phones telling our managers that I am compat testing Need For Speed and I need to test till level 10 :)

  • I know the answer (it's 42)

    Test Driven Development of a Generational Garbage Collection

    • 2 Comments
    Crater Lake, OR panorama

    These days everyone is talking about being agile and test-driven development (TDD). I wanted to share a success story of TDD that we employed while developing the generational garbage collector (GC) for Windows Phone Mango.

    The .NET runtime on Windows Phone 7 shipped with a mark-sweep-compact, stop-the-world, global, non-generational GC. Once a GC was triggered, it stopped all managed execution and scanned the entire managed heap to look up all managed references and clean up objects that were not in use. Due to performance bottlenecks we decided to enhance the GC by adding a generational GC (referred to as GenGC). However, post the General Availability (GA) of WP7 we had a very short coding window. Replacing such a fundamental piece of the runtime in that short window was very risky. So we decided to build various kinds of stress infrastructure first, and then develop the GC. So essentially

    1. Write the tests
    2. See those tests failing
    3. Write code for the generational GC
    4. Get tests to pass
    5. Use the tests for regression tracking as we refactor the code and make it run faster

     

    Now, building tests for a GC is not equivalent to traditional testing of features or APIs, where you write tests that call into a mocked-up API and see them fail until you add the right functionality. Rather, these tests were verification modes and combinations of runtime stresses that we wrote.

    To appreciate the testing steps we took do read the Back To Basics: Generational Garbage Collection and WP7 Mango: Mark-Sweep collection and how does a Generational GC help posts

    Essentially, in a generational GC run, all of the following references should be discovered by the GC

    1. Gen0 Objects reachable via roots (root –> object OR recursively root –> object –> object )
    2. Objects accessible from runtime native code (e.g. pinned for PInvoke, COM interop, internal runtime references)
    3. Objects referenced via Gen1 –> Gen0 pointers

    The first two were in any case heavily covered by our traditional GC tests; #3 was the new area being added.

    To implement a correct generational GC we needed to ensure that all places in the runtime where managed object references are updated get reflected in the CardTable (#3 above). This is a daunting task and prone to bugs of omission, as we need to ensure that

    1. For all forms of assignment in the MSIL that we JIT, calls are emitted to update the CardTable.
    2. At all places in the native runtime code where such references are directly or indirectly updated, the same is ensured. This includes all JIT worker-routines, COM, and marshallers.

     

    If a single instance is missed, it would result in valid/reachable Gen0 objects being collected (deleted), and hence in the longer run result in memory corruption and crashes that would be hard, if not impossible, to debug. This was assessed to be the biggest risk to shipping the generational GC.

    The other problem is that these potential omissions can only be exposed by certain orderings of allocation and collection. E.g. a missing tracked reference A –> B can result in a GC issue only if a GC happened between the allocations of A and B (A being in a higher generation than B). Also, for performance reasons (write atomicity for lock-less updates), for every assignment A = B we do not update just the card-table bit that covers the memory area of A; rather we update the whole byte in the card-table. This means an update to A will cover other objects allocated adjacent to A. Hence, if an update to an object just beside A in memory is missed, it will not be discovered until some other run where that object ends up being allocated farther away from A.
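    The card-table write barrier described above can be sketched in a few lines of plain C++ over a toy heap. The 512-byte card size and all names here are illustrative assumptions, not the actual NETCF implementation:

```cpp
#include <cstddef>
#include <cstdint>

// Byte-granularity card-table write barrier over a toy heap.
const std::size_t kCardShift = 9;                 // one card = 2^9 = 512 bytes
const std::size_t kHeapSize  = 1 << 16;           // 64 KB toy heap
alignas(alignof(void*)) std::uint8_t heap[kHeapSize];
std::uint8_t cardTable[kHeapSize >> kCardShift];  // one byte per card

// Perform the assignment *slot = value and dirty the card covering 'slot'.
// A whole byte (not a single bit) is written so the update stays atomic
// without any locking, as described above.
void WriteBarrier(void** slot, void* value)
{
    *slot = value;
    std::size_t offset = reinterpret_cast<std::uint8_t*>(slot) - heap;
    cardTable[offset >> kCardShift] = 1;
}
```

    At collection time the GC only needs to scan objects under dirty cards for old-to-young references, instead of the whole heap.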

    GC Verification mode

    Our solution to all of these problems was to first create the GC verification mode. What this mode does is run the traditional full mark-sweep GC. While running that GC it goes through all objects in memory, and as it traverses them, for every reference A (Gen1) –> B (Gen0) it verifies that the card-table bit for A is indeed set. This ensures that if a GenGC were to run, it would not miss those references.

    Granular CardTable

    We used a very high-granularity card-table resolution for test runs. For these special runs each bit of the card-table corresponded to almost one object (1 bit per 2 bytes). Even though the card-table size exploded, that was fine because this wasn’t a shipping configuration. This spaced out the objects covered by the card-table and exposed adjacent objects not being updated.

    GC Stress

    In addition we ran the GC stress mode, where we made the GC run extremely frequently (we could push it up to a GC on every allocation). The allocator was also updated to randomize allocations so that objects moved around everywhere in memory.

    Hole Finder

    The hole finder moves all objects around in memory after a GC. This exposes stale-pointer issues: if an object reference didn’t get updated properly by the GC, it would now point to invalid memory, because all previous memory locations are invalid. A subsequent write will fail fast with an access violation (AV), and we can easily detect that point of failure.
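    That relocate-and-invalidate step can be sketched in plain C++; the poison value and names are my own (on the real system the old pages become invalid memory, so stale accesses fault immediately rather than merely reading garbage):

```cpp
#include <cstring>

// After a collection, move each live object to a fresh location and fill
// the old one with a poison pattern, so a stale pointer that the GC missed
// now reads obvious garbage instead of silently working.
const unsigned char kPoison = 0xCD;  // illustrative poison byte

void* RelocateAndPoison(void* oldLoc, void* newLoc, std::size_t size)
{
    std::memcpy(newLoc, oldLoc, size);   // move the object
    std::memset(oldLoc, kPoison, size);  // poison the hole left behind
    return newLoc;
}
```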

    With all of these changes in place we ran the entire test suites. Also, by throttling down the GC stress mode we could still use the runtime to run real apps on the phone. Let me tell you, playing NFS on a device with the verification mode on wasn’t fun :)

    With these precautions we ensured that not a single GenGC bug came in from the phone. It shipped rock solid, and we were more confident with code churn because regressions would always be caught. I actually never blogged about this because I felt that if I did, it’d jinx something :)

  • I know the answer (it's 42)

    Core Parking

    • 5 Comments
    Rainier

    For some time now my main box had been a bit slow and was glitching all the time. After some investigation I found that a power profile imposed by our IT department enabled CPU parking on my machine. This effectively parks CPU cores under low-load conditions to save power. However,

    1. This affects high-load conditions as well. There is a perceptible delay between the load hitting the computer and the CPUs being unparked. Also, some CPUs remain parked for spike loads (large but short-duration usage spikes)
    2. This is a desktop and not a laptop, and hence I really do not care that much about power consumption

    Windows Task Manager (Ctrl + Shift + Esc, then the Performance tab) clearly shows this parking feature. The 3 parked cores show flatlines.

    clip_image002

    You can also find out if your machine is behaving the same way from Task Manager -> Performance tab -> Resource Monitor -> CPU tab. The CPU graphs on the right will show which cores, if any, are parked.

    clip_image004

    To disable this you need to

    1. Find all occurrences of 0cc5b647-c1df-4637-891a-dec35c318583 in the Windows registry (there will be multiple of those)
    2. For all these occurrences set the ValueMin and ValueMax keys to 0
    3. Reboot

    For detailed steps see the video http://www.youtube.com/watch?feature=player_detailpage&v=mL-w3X0PQnk#t=85s

    Everything is back to normal once this is taken care of.

    clip_image006

  • I know the answer (it's 42)

    Managing your Bugs

    • 0 Comments
    Rainier

    I have worked on a bunch of large-scale software products whose development spanned multiple teams, thousands of engineers, and many iterations. Some of those products plug into even larger products. So obviously we deal with a lot of bugs: bugs filed on us, bugs we file on other partner teams, integration bugs, critical bugs, good-to-have fixes and plain ol' crazy bugs. However, I have noticed one approach to handling those bugs which kind of bugs me.

    Some people just cannot make hard calls on the gray-area bugs and keep them hanging around. They bump the priority down and carry those bugs from iteration to iteration. These bugs always hang around as Priority 2 or 3 (on a scale of Priority 0 through 3).

    I personally believe that bugs should be dealt with the same way emails should be dealt with. I always follow the 4 Ds for real bugs.

    Do it

    If the bug meets the fix bar and is critical enough, just go fix it. This is the high priority critical bug which rarely gives rise to any confusion. Something like the “GC crashes on GC_Sweep for application X”

    Delete it (or rather resolve it)

    If you feel a bug shouldn’t be fixed then simply resolve the bug. Give it back to the person who created it with clear reasons why you feel it shouldn’t be fixed. I know it’s hard to say “won’t fix”, but the problem is that if one were to fix every bug in the system, the product would never ship. So don’t defer the thought of making this hard call. There is no benefit in keeping the bug hanging around for 6 months only to make the same call 2 weeks before shipping, citing time constraints as the reason. Make the hard call upfront. You can always tag the bug appropriately so that you can revisit it for later releases.

    Defer it

    Once you have made the call to fix a bug it should be correctly prioritized. With all the bugs you know you won’t be fixing out of the way, it’s easy to make the right call on fixing it in the current development cycle or moving it to the next iteration.

    Delegate it

    If you feel someone else needs to take a look at the bug (it’s not your area of expertise, QA needs to give better repro steps, or someone else needs to make a call on it; see #1 and #2 above), then just assign it to them. There is no shame in accepting that you aren’t the best person to make the call.

  • I know the answer (it's 42)

    WP7: CLR Managed Object overhead

    • 3 Comments

    Winter day

    A reader contacted me over this blog to inquire about the overhead of managed objects on the Windows Phone; that is, when you use an object of size X the runtime actually uses (X + dx), where dx is the overhead of the CLR’s bookkeeping. dx is generally fixed, and hence if X is small and there are a lot of those objects it starts showing up as a significant percentage.

    There seems to be a great deal of information about the desktop CLR in this area. I hope this post will fill the gap for NETCF (the CLR for the Windows Phone 7)

    All the details below are implementation details and are provided just as guidance. Since they are not mandated by any specification (e.g. ECMA), they may, and most probably will, change.

    General layout

    The overhead varies on the type of objects. However, in general the object layout looks something like

    image

    So internally, just before the actual object, there is a small object header. If the object is finalizable (an object with a finalizer) then there is another 4-byte pointer at the end of the object. The exact size of the header and what’s inside it depend on the type of the object.

    Small objects (size < 16KB)

    All objects smaller than 16KB have an 8-byte object header.

    image

    The 32 bits in the first flag are used to store the following information:

    1. Locking
    2. GC information. E.g.
      1. Bits to mark objects
      2. Generation of the object
      3. Whether the object is pinned
    3. Hashcode (for GetHashCode support)
    4. Size of the object

    Since all objects are 4-byte aligned, the size is stored in units of 4 bytes and uses 11 bits, which means that the maximum size that can be stored in the header is 16KB.

    The second field contains a 32-bit pointer to the Class-Descriptor, or the type information, of that object. This field is overloaded and is also used by the GC during the compaction phase.

    Since the header cost is fixed, the smaller the object, the larger the overhead becomes as a percentage.

    Large Objects

    As we saw in the previous section, the standard object header can describe objects up to 16KB. However, the runtime does need to support larger objects.

    This is done using a special large-object header, which is 20 bytes.

    image

    The overhead of a large object comes to at most around 0.1%. A dedicated 32-bit size field allows the runtime to support very large objects (close to 2GB).

    It may seem weird that the normal object header is repeated. The reason is that very large objects are rare on the phone, so the runtime optimizes its various operations so that large objects don’t bring in additional code paths and checks. Given an object reference, having the normal header just before it speeds up various operations; at the same time, having the large-object header as the first data structure of those objects makes other cases faster. The bottom line is that the additional 8 bytes, or 0.04% extra space, buys enough performance gains elsewhere to be worth that extra overhead.

    Pool overhead

    NETCF uses 64KB object pools and sub-allocates objects from them. So in addition to the per-object overhead there is an additional 44-byte object-pool header per 64KB, which is another 0.06% of overhead per 64KB. Amortized per object, this contribution can be ignored.

    Arrays

    Arrays pan out slightly differently.

    image

    In the case of a value-type array, the individual elements don’t need their own type information, and hence no per-element object header. The array header is a standard object header of 8 bytes or a big-object header of 20 bytes (based on the size of the array). Then there is a 32-bit count of the number of elements in the array. For single-dimension arrays there is no more overhead. However, for higher-dimension arrays the length of each dimension is also stored to support runtime bounds checks.

    Reference-type arrays are similar, but instead of the array contents being inlined into the array, the elements are references to objects on the managed heap, and each of those objects has its standard header.

    Putting it all together

    Standard object: 8 bytes (+ 4 bytes if finalizable)
    Large object (>16KB): 20 bytes (+ 4 bytes if finalizable)
    Array: object overhead + 4-byte array header + ((nDimension > 1) ? nDimension x 4 bytes : 0)
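A quick back-of-the-envelope calculator for the summary above (a sketch in Python for brevity; the figures are the NETCF implementation details quoted in this post and may change):

```python
def object_overhead(size, finalizable=False):
    """Per-object overhead in bytes, per the figures in this post."""
    header = 8 if size < 16 * 1024 else 20     # small vs. large object header
    return header + (4 if finalizable else 0)  # finalizable adds a 4-byte pointer

def array_overhead(size, ndim=1, finalizable=False):
    """Arrays: object header + 4-byte element count + per-dimension lengths."""
    per_dim = ndim * 4 if ndim > 1 else 0
    return object_overhead(size, finalizable) + 4 + per_dim

# A 16-byte object pays 8 bytes of header, i.e. 50% overhead...
print(object_overhead(16))           # 8
# ...while a 1MB object pays only the 20-byte large-object header.
print(object_overhead(1024 * 1024))  # 20
print(array_overhead(100, ndim=2))   # 20: 8 + 4 + 2 x 4
```

This also shows why lots of tiny objects hurt: the fixed header dominates as object size shrinks.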
  • I know the answer (it's 42)

    My first Windows Phone App

    • 1 Comments
    lighthouse

    image

    I have been involved with Windows Phone 7 from the very beginning and worked on delivering the CLR for the GA version and all the way through NoDo and Mango.

    All this while I was busy working on one of the lower layers, seeing bits and bytes flying by. For some time I had wanted to write an app for the phone to experience how it feels to be our developer customer. Unfortunately my skills fall more toward systems programming than toward designing good-looking UI. I had a gazillion ideas floating in my head but most were beyond my graphic and UI design skills. I was also waiting for some sort of inspiration.

    Over the last school break our local King County Library System, which earned the best library system of the year award, ran an interesting effort. Students received a sheet which they filled in as they read books. On hitting 500 minutes and 1000 minutes of book reading they got prizes. One of the prizes was a kaleidoscope. Not the one with colored glass pieces like I had as a child, but one through which you could see everything around you. This was the inspiration I was waiting for, and I wrote a WP7 Mango app that does the same. Interestingly the app got a lot of good reviews and is rated at 5 stars (something I didn’t expect). It’s free, go check it out.

    Get the app here, documentation is at http://bonggeek.com/Kaleidoscope/

  • I know the answer (it's 42)

    Delay Sending an Email

    • 0 Comments
    Cascade Mountains, WA

    Like many engineers I am partially fueled by the engineer’s angst. I am not always able to vent it in code and sometimes resort to sending really dumb emails :).

    I figured out that I can tolerate delaying my outgoing email better than having to smack my face every other day. Basically every email I send is delayed by about 5 minutes, and that makes a huge difference. Somehow you always realize you’ve sent something you shouldn’t have within the first 2 minutes of sending it.

    This is how you set this up in Outlook 2010. Bring up the new-rule UI from the ribbon (Home > Rules > Manage Rules & Alerts)

    image

    Next you’d get another UI; just hit Next, and Yes on the following warning.

    image

    After that, choose the delay duration. I use 5 minutes.

    image

    Bonus!!!

    In case you want to pretend that you are hard at work when you aren’t then you can also use the timed email feature in Outlook. In the new email window use the following.

    image

    image

    Unfortunately, if you are a software engineer your work is measured by the amount and quality of code you check in, not by your emails, so sadly this doesn’t work Smile

  • I know the answer (it's 42)

    Windows Phone Mango: Under the hood of Fast Application Switch

    • 3 Comments
    spin

    Fast Application Switch, or FAS, is kind of tricky for application developers to handle. There is a ton of documentation around how developers need to handle the various FAS-related events. I really liked the video http://channel9.msdn.com/Events/DevDays/DevDays-2011-Netherlands/Devdays059 which walks through the entire FAS experience (jump to around 8:30).

    In this post I want to talk about how the CLR (Common Language Runtime, or .NET runtime) handles FAS and what that means for your application, especially the Active –> Dormant –> Active flow. Most of the documentation and presentations quickly skip over this with the vague “the application is made dormant”. This is equivalent to “witches use brooms to fly”: what the navigation mechanism is, or how the broom is propelled, are the more important questions which no one seems to answer (given the time of year, I just couldn’t resist :P). Do note that most developers can just follow the coding guidelines for FAS and never need to care about this. However, a few developers, especially the ones developing multi-threaded apps and using threading primitives, may need to care. Hence this post.

    Design Principle

    The entire Multi-threading design was made to ensure the following

    Principle 1: Pre-existing WP7 apps shouldn’t break on Mango.
    Principle 2: When an application is sent to the background it shouldn’t consume any resources
    Principle 3: Application should be resumed fast (hence the name FAS)

    As you’ll see, these principles played a vital role in the design discussed below.

    States

    The states an application goes through are documented at http://msdn.microsoft.com/en-us/library/ff817008(VS.92).aspx

    Execution Model Diagram for Windows Phone 7.5

    CLR Design

    The diagram below captures the various phases used to run down the application to make it dormant and later re-activate it. It shows the flow of an application going through the Active –> Dormant –> Active states (e.g. the application was running, the user launches another application, and then uses the back button to return to the first application).

    image

    Deactivated

    The Deactivated event is sent to the application to notify it that the user is navigating away from it. After this there are 3 possible outcomes: the application remains dormant, gets tombstoned, or gets killed. Since there is no way to know which will happen, the application should store its transient state in PhoneApplicationPage.State and its persistent state in a persistent store like IsolatedStorage or even the cloud. Do note that the application has 10 seconds to handle the Deactivated event. In the 3 possible situations, this is how the stored data will be used:

    1. Deactivate –> Dormant –> Active
      In this situation the entire process was intact in memory and only its execution was stopped (more about this below). In the Activated event the application can just check the IsApplicationInstancePreserved property. If it is true then the application is coming back from the Dormant state and can just use the in-memory state. Nothing needs to be re-read.
    2. Deactivate –> Dormant –> Tombstoned –> Active
      In this case the application’s in-memory state is gone. However, PhoneApplicationPage.State is serialized back. So the application should read persistent user data from IsolatedStorage or other permanent sources in the Activated event. At the same time it can use PhoneApplicationPage.State in the OnNavigatedTo event.
    3. Deactivate –> Dormant –> Terminated
      This case is no different from the application being re-launched. So in the Launching event the user data needs to be re-created from the permanent store. PhoneApplicationPage.State is empty in this case.

    The above supports principle #1 of not breaking pre-existing WP7 apps. A WP7 app would’ve been designed without considering the dormant stage, and hence would’ve just skipped option #1. So the only issue is that a WP7 app will re-create its application state each time and not get the benefit of the Dormant stage (it gets the performance of tombstoning but does not break on Mango).

    After this event the main thread never transitions to user code (e.g. no events are triggered). The requirements on the application at deactivation are:

    1. It shouldn’t run any user code post this point. This means it should voluntarily stop background threads, cancel timers it started and so on
    2. This event needs to be handled in 10 seconds

    If the application continues to run code, e.g. on another thread, and modifies any application state, then that state cannot be persisted (as there will be no subsequent Deactivated-type event).

    Paused

    This is an internal event that is not visible to the application. If the application adhered to the above guidelines it shouldn’t need to care about it anyway.

    The CLR does some interesting stuff on this event. Adhering to the “no resource consumption” principle is very important. Consider that the application had called ManualResetEvent.WaitOne(timeout). This timeout could expire while the application is dormant. If that happened it would result in code running while the application is dormant. This is not acceptable, because the phone may be behind the lock screen and this context switch could bring the phone out of a low-power state. To handle this, the runtime detaches Waits and Thread.Sleeps at Paused. It also cancels all Timers so that no Timer callbacks happen after the Paused event.

    Since the Paused event is not visible to the application, it should assume that some time after Deactivated this detach will happen. This is completely transparent to user code. As far as user code is concerned, it’s just that these handles do not time out and sleeps do not return while the application is dormant. The same WaitHandle objects or Thread.Sleeps start working as-is after the application is activated (more about timeout adjustment below).

    This is also where other parts of the tear-down happen, e.g. asynchronous network calls are cancelled and media playback is stopped.

    Note that background user threads can continue to execute. Obviously that is a problem, because user code is supposed to have voluntarily stopped them at Deactivated.

    Freeze

    Besides user code there is a lot of other managed code running in the system, including but not limited to Silverlight managed code and XNA managed code. Sometime after Paused all managed code is required to stop. This is called CLRFreeze. At this point the CLR freezes, or blocks, all managed execution, including user background threads. To do that it uses the same mechanism as used for foreground GC. In a later post I’ll cover the different mechanics NETCF and the desktop CLR use to stop managed execution.

    Around freeze, the application enters the Dormant stage, where it is at 0% CPU utilization.

    Thaw

    Managed threads stopped at Freeze are re-started at this point.

    Resuming

    At Resuming, the WaitHandles and Thread.Sleeps detached at Paused are re-attached, and timeout adjustments are made. Consider that user code had started Waits on two handles with 5-second and 10-second timeouts. Three seconds after the Waits started, the application was made dormant. When the application is re-activated, the Waits are restarted with the amount of timeout remaining at the point of deactivation. So in the case below, the first Wait is restarted with 2 seconds and the second with 7. This ensures that the relative gaps between Sleeps and Waits are maintained.

    image

    Note timers are still not restarted.
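The timeout adjustment above is just this arithmetic (a sketch in Python for illustration; this is not the runtime's actual bookkeeping, only the rule it implements):

```python
def adjust_timeouts(wait_timeouts, elapsed_before_dormant):
    """On re-activation, each detached Wait is restarted with the amount
    of timeout that remained when the application was deactivated."""
    return [max(t - elapsed_before_dormant, 0) for t in wait_timeouts]

# Two Waits of 5s and 10s; the app is made dormant 3s after they started.
print(adjust_timeouts([5, 10], 3))  # [2, 7]
```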

    Activated

    This is the event the application gets; it is required to re-build its state when activating from a tombstone, or can just re-use the in-memory state when activating from the Dormant stage.

    Resumed

    This is the final stage of FAS. This is where the CLR restarts the Timers. The idea behind the late restart of timers is that they are essentially asynchronous callbacks, so the callbacks are not delivered until the application is activated (has built its state) and is ready to consume them.

    Conclusion

    1. Ideally the application developer takes care of FAS by properly handling the various events like Deactivated and Activated
    2. Background threads continue to run after the Deactivated event. This can corrupt application state and lose state changes. Handle this by terminating background threads at Deactivated
    3. While the application is made dormant, Waits, Sleeps and Timers are deactivated. They are later re-activated with timeout adjustments. This happens transparently to user code
    4. Not all waiting primitives are time-adjusted. E.g. Thread.Join(timeout) is not adjusted.
  • I know the answer (it's 42)

    WP7 Mango: The new Generational GC

    • 6 Comments

    In my previous post “Mark-Sweep collection and how does a Generational GC help” I discussed how a generational Garbage Collector (GC) works and how it helps in reducing collection latencies which show up as long load times (startup as well as other load situations like game level load) and gameplay or animation jitter/glitches. In this post I want to discuss how those general principles apply to the WP7 Generational GC (GenGC) specifically.

    Generations and Collection Types

    We use 2 generations on the WP7 referred to as Gen0 and Gen1. A collection could be any of the following 4 types

    1. An ephemeral or Gen0 collection that runs frequently and collects only Gen0 objects. Objects surviving a Gen0 collection are promoted to Gen1
    2. Full mark-sweep collection that collects all managed objects (both Gen1 and Gen0)
    3. Full mark-sweep-compact collection that collects all managed objects (both Gen1 and Gen0)
    4. Full-GC with code-pitch. This is run under severe low memory and can even throw away JITed code (something that desktop CLR doesn’t support)

    The list above is in the order of increasing latency (or time they take to run)

    Collection triggers

    The GC triggers are the same as outlined in my previous post WP7: When does the GC run. The distinction between #2 and #3 above is that at the end of every full GC the collector considers memory fragmentation and can potentially run the memory compactor as well.

    1. After significant allocation
      After a significant amount of managed allocation the GC is started. The amount today is 1MB (called the GC quanta) but is subject to change. This GC can be ephemeral or a full GC. In general it’s an ephemeral collection. However, it might be a full collection in the following cases:
      1. After significant promotion of objects from Gen0 to Gen1, collections become full collections. Today 5MB of promotion triggers a full GC (again, this number is subject to change).
      2. The application’s total memory usage is close to the maximum memory cap that apps have (very little free memory left). This indicates that the application will get terminated if memory utilization is not cut back.
      3. Piling up of native resources. We use heuristics like the native-to-managed memory ratio and finalizer-queue heuristics to detect whether the GC needs to run a full collection to release native resources held up by Gen0-only collections
    2. Resource allocation failure
      Any resource allocation failure means the system is under memory pressure, and hence such collections are always full collections. They can lead to code pitching as well
    3. User code triggered GC
      User code can start collections via the System.GC.Collect() managed API. This results in a full collection, as documented by that API. We have not added the System.GC.Collect(generation) overload, so there is no way for the developer to start an ephemeral or Gen0-only collection
    4. Sharing server initiated
      The sharing server can detect phone-wide memory issues and start a GC in all running managed processes. These are full GCs and can potentially pitch code as well.
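The allocation-driven triggers can be sketched roughly as follows (Python pseudologic; the class and method names are made up for illustration, though the 1MB quanta and 5MB promotion thresholds are the figures quoted above, both subject to change):

```python
GC_QUANTA = 1 * 1024 * 1024        # ephemeral GC after ~1MB of allocation
PROMOTION_LIMIT = 5 * 1024 * 1024  # escalate to full GC after ~5MB promoted

class GcPolicy:
    def __init__(self):
        self.allocated_since_gc = 0
        self.promoted_to_gen1 = 0

    def on_allocate(self, nbytes, near_memory_cap=False):
        """Returns which collection to run, if any, after this allocation."""
        self.allocated_since_gc += nbytes
        if self.allocated_since_gc < GC_QUANTA:
            return None
        self.allocated_since_gc = 0
        # Promotion build-up or memory pressure turns the GC into a full one.
        if self.promoted_to_gen1 >= PROMOTION_LIMIT or near_memory_cap:
            self.promoted_to_gen1 = 0
            return "full"
        return "ephemeral"

policy = GcPolicy()
print(policy.on_allocate(2 * 1024 * 1024))  # ephemeral
policy.promoted_to_gen1 = 6 * 1024 * 1024
print(policy.on_allocate(1024 * 1024))      # full
```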

     

    So from all of the above, the 3 key takeaways are

    1. Low-memory or memory-cap related collections are always full collections. These could also turn out to be the more costly compacting collections and/or pitch JITed code
    2. Collections are in general ephemeral and become full collections after significant object promotion
    3. There are no fundamental changes to the GC trigger policies, so an app written for WP7 will not see any major change in the number of GCs that happen. Some GCs will be ephemeral and others will be full GCs.

     

    Write Barriers/Card-table

    As explained in my previous post, to keep track of Gen1-to-Gen0 references we use a write-barrier/card-table scheme.

    The card-table can be visualized as a memory bitmap. Each bit in the card-table covers n bytes of the address space; each such bit is called a Card. For managed reference updates like A.b = C, in addition to JITing the real assignment, calls to write-barrier functions are added. The write barrier locates the Card corresponding to the address being written and sets it. Later, during collection, the collector checks all Gen1 objects covered by a set card bit and marks the Gen0 references in those objects.

    This brings two additional costs to the system:

    1. Memory cost of adding those calls to the WB in the JITed code
    2. Cost of executing the write barrier while modifying reference

    Both of the above are optimized to have minimum impact. We JIT calls to the WB only when absolutely required, and even then the overhead is a single instruction to make the call. The WBs are hand-tuned assembly code to ensure they take minimum cycles. In effect the net hit on process memory due to write barriers is well below 0.1%, and the execution hit in real-world application scenarios is generally not measurable (outside targeted testing).
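To make the mechanics concrete, here is a toy model of a card-table (a sketch in Python; the 256-byte card size and all names are illustrative assumptions, not NETCF's actual values):

```python
CARD_SIZE = 256  # each card bit covers 256 bytes of address space (illustrative)

class CardTable:
    def __init__(self):
        self.cards = set()  # dirty card indices (stands in for the bitmap)

    def write_barrier(self, address):
        """Called alongside every JITed reference store A.b = C:
        set the card covering the address being written."""
        self.cards.add(address // CARD_SIZE)

    def dirty_regions(self):
        """During ephemeral GC, only objects under dirty cards are scanned
        for Gen1 -> Gen0 references; clean regions are skipped entirely."""
        return sorted(self.cards)

ct = CardTable()
ct.write_barrier(0x1234)   # some Gen1 object's field is updated
ct.write_barrier(0x1250)   # a write in the same 256-byte card
ct.write_barrier(0x9000)
print(ct.dirty_regions())  # [18, 144]
```

The point of the sketch: two writes that land in the same card cost only one bit, and the collector later scans two small regions instead of the whole Gen1 heap.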

    Differences from desktop

    In principle both the desktop GC and the WP7 GC are similar in that they use mark-sweep generational GC. However, there are differences based on the fact that the WP7 GC targets a more constrained device.

    1. 2 generations as opposed to 3 on the desktop
    2. No background or incremental collection supported on the phone
    3. WP7 GC has additional logic to track and handle application policies like application memory caps and total memory utilization
    4. The phone CLR uses a very different memory layout, which is pooled and not linear. So there is no concept of a Large Object Heap, and the lifetime of large objects is no different from that of other objects
    5. No support for collecting a particular generation from user code
  • I know the answer (it's 42)

    WP7 Mango: Mark-Sweep collection and how does a Generational GC help

    • 7 Comments

    About a month back we announced that in the next release of Windows Phone 7 (codenamed Mango) we will ship a new garbage collector in the CLR. This garbage collector (GC) is a generational garbage collector.

    This is a “back to basics” post where I’ll take a simplified look at how a mark-sweep-compact GC works and how adding generational collection helps boost its performance. In later posts I’ll elaborate on the specifics of the WP7 generational GC and how to ensure you get the best performance out of it.

    Object Reference Graph

    Objects in memory can be considered a graph, where every object is a node and the references from one object to another are edges. Something like

    image

    To use an object the code should be able to “reach” it via some reference. These are called reachable objects (in blue). Objects like a method’s local variable, method parameters, global variables, objects held onto by the runtime (e.g. GCHandles), etc. are directly reachable. They are the starting points of reference chains and are called the roots (in black).

    Other objects are reachable if there are references to them from the roots or from other objects that can be reached from the roots. So Object4 is reachable due to the Object2->Object4 reference, and Object5 is reachable because of the Object1->Object3->Object5 reference chain. All reachable objects are valid objects and need to be retained in the system.

    On the other hand, Object6 is not reachable and is hence garbage, something the GC should remove from the system.

    Mark-Sweep-Compact GC

    A garbage collector can locate garbage like Object6 in various ways. Some common approaches are reference counting, copying collection and mark-sweep. In this section let’s take a more pictorial view of how mark-sweep works.

    Consider the following object graph

    1

    At first the GC pauses the entire application so that the object graph is not mutated (no new objects or references are being created). Then it goes into the mark phase, in which the GC traverses the graph starting at the roots and following the references from object to object. Each time it reaches an object through a reference it flips a bit in the object header indicating that the object is marked, or in other words reachable (and hence not garbage). At the end everything looks as follows

    2

    So the 2 roots and the objects A, C, D are reachable.

    Next it goes into the sweep phase. In this phase it starts from the very first object and examines the header. If the header’s mark bit is set it means that it’s a reachable object and the sweep resets that bit. If the header’s bit is not set, it’s not reachable and is flagged as garbage.

    3

    So B and E get flagged as garbage, and these areas are free to be used for other objects or released back to the system.

    4

    This is where the GC is done, and it may resume the execution of the application. However, if too many of those holes (released objects) are created in the system, the memory gets fragmented. To reduce fragmentation, the GC may compact the memory by moving objects around. Do note that compaction doesn’t happen on every GC; it runs based on fragmentation heuristics.

    5

    Both C and D are moved here to squeeze out the hole left by B. At the same time, all references to these objects in the system are updated to point to the new locations.
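The mark and sweep phases above can be condensed into a short sketch (Python for brevity; the object names mirror the diagrams, where the roots reach A, C and D while B and E are unreachable):

```python
def mark_sweep(heap, roots, refs):
    """heap: all objects; roots: directly reachable objects;
    refs: adjacency map of object -> referenced objects."""
    marked = set()
    stack = list(roots)
    while stack:                      # mark phase: traverse from the roots
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)           # "flip the mark bit" for this object
            stack.extend(refs.get(obj, []))
    # sweep phase: anything left unmarked is garbage
    return {obj for obj in heap if obj not in marked}

heap = {"A", "B", "C", "D", "E"}
roots = ["A", "C"]
refs = {"C": ["D"]}                   # C references D; B and E are unreferenced
print(sorted(mark_sweep(heap, roots, refs)))  # ['B', 'E']
```

Compaction would then slide the survivors together and fix up their references, which the sketch leaves out.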

    One important thing to note here is that unlike native objects, managed objects can move around in memory due to compaction, and hence taking direct references to them (a.k.a. memory pointers) is not possible. If this is ever required, e.g. when a managed buffer is passed to native code (say the microphone driver, to copy recorded audio into), the GC has to be notified that the corresponding managed object cannot be moved during compaction. If the GC ran a compaction and the object moved during that PCM data copy, memory corruption would result because the object being copied into would’ve moved. To prevent that, a GCHandle has to be created for the object with GCHandleType.Pinned, which tells the GC that the object should never move.

    On WP7 the interfaces to these peripherals and sensors are wrapped by managed interfaces, so the WP7 developer doesn’t really have to do these things; they are taken care of under the hood by those managed interfaces.

    The performance issue

    As mentioned before, during the entire GC the execution of the application is stopped. So as long as the GC is running the application is frozen. This isn’t a problem in general because the GC runs fast and infrequently, so small pauses of the order of 10-20ms are not really noticeable.

    However, with WP7 the capability of the device in terms of CPU and memory drastically increased. Games and large Silverlight applications started coming up which used close to 100MB of memory. As memory grows, the number of references those objects can hold grows dramatically as well. In the scheme explained above the GC has to traverse each and every object and reference to mark them, and later remove the garbage via sweep. So GC time increases drastically and becomes a function of the net working set of the application. This results in very large pauses for big XNA games and SL applications, which finally manifest as long startup times (as the GC runs during startup) or glitches during gameplay/animation.

    Generational Approach

    If we take a look at a simplified allocation pattern of a typical game (actually other apps are also similar), it looks somewhat like below

    image

    The game has a large steady-state memory which contains much of its steady-state data (which is not released), and then per frame it allocates and de-allocates some more data, e.g. for physics, projectiles and frame data. To collect this memory layout the traditional GC has to traverse the entire 50+ MB of data to locate the garbage in it, even though most of the data it traverses will almost never be garbage and will remain in use.

    This application behavior is the premise for the generational GC:

    1. Most objects die young
    2. If an object survives collection (that is doesn’t die young) it will survive for a very long time

    Using these premises the generational GC segregates the managed heap into younger- and older-generation objects. The younger generation, called Gen0, is collected in each GC (premise 1); this is called the ephemeral or Gen0 collection. The older generation is called Gen1. The GC rarely collects Gen1, as the probability of finding garbage in it is low (premise 2).

    image

    So essentially the GC becomes free of the burden of the net working set of the application.

    Most GCs will be ephemeral GCs, which only traverse the recently allocated objects, so GC latency remains very low. After collection the surviving objects are promoted to the higher generation. Once a lot of objects have been promoted, the higher generation starts to fill up and a full collection is run (which collects both Gen1 and Gen0). However, due to premise 1, the ephemeral collections find a lot of garbage in their runs and hence promote very few objects. This means the growth rate of the higher generation is low and full collections run very infrequently.

    Ephemeral/Gen-0 collection

    Even in an ephemeral collection the GC needs to deterministically find all objects in Gen0 which are not reachable. This means the following objects need to survive a Gen0 collection:

    1. Objects directly reachable from roots
    2. Root –> Gen0 –> Gen-0 objects (indirectly reachable from roots)
    3. Objects referenced from Gen1 to Gen0

    Now #1 and #2 pose no issues, since in an ephemeral GC we anyway scan all roots and Gen0 objects. However, to find objects in Gen1 which reference objects in Gen0, we would have to traverse and look into all Gen1 objects. That would break the very purpose of segregating the memory into generations. To handle this, the write-barrier + card-table technique is used.

    The runtime maintains a special table called the card table. Each time a reference is written in managed code, e.g. a.b = c;, the code JITed for that assignment also updates this internal data structure to record the write. Later, during an ephemeral collection, the GC looks into that table to find every ‘a’ that took a new reference. If that ‘a’ is a Gen-1 object and ‘c’ a Gen-0 object, it marks ‘c’ recursively (which means all objects reachable from ‘c’ are also marked). This technique ensures that all live Gen-0 objects can be found without examining every Gen-1 object. However, the cost paid is

    1. Updating an object reference is a little more costly now
    2. Having old objects take references to new objects increases GC latency (more about this in later posts)
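    Here is a sketch of the write-barrier + card-table idea (Python used for illustration; CARD_SIZE, the fake addresses, and the object model are all invented here and do not reflect the CLR's actual layout):

```python
# Sketch of the write-barrier + card-table technique.  Illustrative
# only: real collectors work on raw memory, not Python dicts.

CARD_SIZE = 256        # each card covers one 256-byte region of the heap
card_table = set()     # indexes of "dirty" cards
heap = {}              # address -> object, a stand-in for real memory

class Obj:
    def __init__(self, addr, gen):
        self.addr, self.gen, self.ref = addr, gen, None
        heap[addr] = self

def write_ref(a, c):
    """What the JITed code for 'a.ref = c' does: the store plus a barrier."""
    a.ref = c
    card_table.add(a.addr // CARD_SIZE)   # mark the card holding 'a' dirty

old = Obj(addr=0x1000, gen=1)
young = Obj(addr=0x8000, gen=0)
write_ref(old, young)                     # a Gen-1 -> Gen-0 reference

# During an ephemeral GC: scan only the dirty cards, not all of Gen-1,
# to discover the extra Gen-0 objects that must be kept alive.
extra_roots = []
for card in card_table:
    for addr, obj in heap.items():
        if (addr // CARD_SIZE == card and obj.gen == 1
                and obj.ref is not None and obj.ref.gen == 0):
            extra_roots.append(obj.ref)

print(extra_roots == [young])  # → True
```

    Note the trade-off visible even in the sketch: every reference store pays for the extra card-table update, and each dirty card forces the collector to re-examine the old objects it covers.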

    Putting it all together, a traditional GC would traverse all the references shown in the diagram below, whereas an ephemeral GC can work without traversing the huge number of red references.

    [Diagram: root, Gen-0, and Gen-1 references; red = references the ephemeral GC skips, green = root –> Gen-0, orange = Gen-1 –> Gen-0]

    It scans all the root-to-Gen-0 references (green) directly, and finds all the Gen-1-to-Gen-0 references (orange) via the card-table data structure.
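    The resulting mark phase of an ephemeral collection can be sketched as follows (an illustrative model, not CLR code; the object layout and function names are invented). Note that marking never descends past Gen-0: Gen-1 objects are not collected in an ephemeral GC, so there is no need to mark them.

```python
# Minimal mark phase for an ephemeral collection (illustrative only).

class Obj:
    def __init__(self, gen):
        self.gen, self.refs, self.marked = gen, [], False

def mark(obj):
    # Recurse, but stop at anything outside Gen-0.
    if obj.marked or obj.gen != 0:
        return
    obj.marked = True
    for r in obj.refs:
        mark(r)

def ephemeral_mark(roots, dirty_gen1):
    for r in roots:            # green arrows: roots -> Gen-0
        mark(r)
    for old in dirty_gen1:     # orange arrows: Gen-1 -> Gen-0, via card table
        for r in old.refs:
            mark(r)

a, b, c, d = Obj(0), Obj(0), Obj(0), Obj(0)   # four Gen-0 objects
a.refs.append(b)               # a root keeps 'a' (and through it 'b') alive
old = Obj(1)
old.refs.append(c)             # only a Gen-1 object keeps 'c' alive
                               # nothing references 'd' -- it is garbage

ephemeral_mark(roots=[a], dirty_gen1=[old])
print(a.marked, b.marked, c.marked, d.marked)  # → True True True False
```

    Everything left unmarked in Gen-0 after this pass is garbage and can be collected; the marked survivors are then promoted to Gen-1.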

    Conclusion

    1. A generational GC reduces GC latency by avoiding looking at every object in the system
    2. Most collections are Gen-0 (ephemeral) collections with low latency; this ensures fast startup and low latency during gameplay
    3. However, depending on how many objects are promoted, full GCs are still run sometimes. When they do run, they exhibit the same latency as a full collection on the previous WP7 GC

    Given the above, most applications will see a startup and in-execution perf boost without any modification. For example, if an application today allocates 5 MB of data during startup and the GC runs after every 1 MB of allocation, it traverses 15 MB in total (1 + 2 + 3 + 4 + 5). With the generational GC it might get away with traversing as little as 5 MB.
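    The arithmetic in that startup example works out as:

```python
# 5 MB allocated during startup, GC triggered after every 1 MB.

full_gc_traversal = sum(range(1, 6))  # non-generational: 1+2+3+4+5 = 15 MB
gen_gc_traversal = 5 * 1              # ephemeral: only the newest 1 MB each time

print(full_gc_traversal, gen_gc_traversal)  # → 15 5
```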

    In addition, game developers especially can optimize their in-gameplay allocations so that no full collection happens during the entire gameplay, and hence only low-latency ephemeral collections run.

    How well the generational scheme works depends on a lot of parameters and has some nuances. In subsequent posts I will dive into the details of our implementation, some of the design choices we made, and what developers need to do to get the most value out of it.
