I know the answer (it's 42)

A blog on coding, .NET, .NET Compact Framework and life in general....

Posts

    C# generates virtual calls to non-virtual methods as well

    • 2 Comments
    Hyderabad Microsoft Campus

    Sometime back I had posted about a case where non-virtual calls are used for virtual methods and promised to post about the reverse scenario. This issue of C# generating the callvirt IL instruction even for non-virtual method calls keeps coming back on the C# discussion DLs every couple of months. So here it goes :)

    Consider the following code

    class B
    {
        public virtual void Virt(){
            Console.WriteLine("Base::Virt");
        }
    
        public void Stat(){
            Console.WriteLine("Base::Stat");
        }
    }
    
    class D : B
    {
        public override void Virt(){
            Console.WriteLine("Derived::Virt");
        }
    }
    
    class Program
    {
        static void Main(string[] args)
        {
            D d = new D();
            d.Stat(); // should emit the call IL instruction
            d.Virt(); // should emit the callvirt IL instruction
        }
    }

    The basic scenario is that a base class defines a virtual method and a non-virtual method. A call is made to base using a derived class pointer. The expectation is that the call to the virtual method (B.Virt) will be through the intermediate language (IL) callvirt instruction and that to the non-virtual method (B.Stat) through call IL instruction.

    However, this is not true and callvirt is used for both. If we open the disassembly for the Main method using reflector or ILDASM this is what we see

        L_0000: nop 
        L_0001: newobj instance void ConsoleApplication1.D::.ctor()
        L_0006: stloc.0 
        L_0007: ldloc.0 
        L_0008: callvirt instance void ConsoleApplication1.B::Stat()
        L_000d: nop 
        L_000e: ldloc.0 
        L_000f: callvirt instance void ConsoleApplication1.B::Virt()
        L_0014: nop 
        L_0015: ret 

    The question is why? There are two reasons that have been put forward by the CLR team

    1. API change.
      The reason is that the .NET team wanted a change of a method (API) from non-virtual to virtual to be non-breaking. So in effect, since the call is generated as callvirt anyway, a caller need not be recompiled in case the callee changes to be a virtual method.
    2. Null checking.
      If a call is generated and the method body doesn't access any instance variable, then it is possible to even call methods on null objects successfully. This is currently possible in C++, see a post I made on this here. With callvirt there's a forced access to the this pointer and hence the object on which the method is being called is automatically checked for null (see the small example after this list).
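
    Here is a minimal C# sketch of that null-check behaviour, reusing the B and D classes from above:

    D d = null;
    try
    {
        // Even though Stat() is non-virtual and touches no instance state,
        // the callvirt emitted by C# null-checks 'd' and throws right here,
        // before the method body ever runs.
        d.Stat();
    }
    catch (NullReferenceException)
    {
        Console.WriteLine("Null check fired at the call site");
    }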

    callvirt does come with an additional performance cost, but measurements showed that there's no significant performance difference between a call with a null check vs. callvirt. Moreover, since the Jitter has full metadata for the callee, while jitting the callvirt it can generate processor instructions for a static call if it figures out that the callee is indeed non-virtual.

    However, the compiler does try to optimize situations where it knows for sure that the target object cannot be null. E.g. for the expression i.ToString(); where i is an int, the call instruction is used to invoke the ToString method because Int32 is a value type (it cannot be null) and is sealed.
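
    As a quick illustration (you can verify the emitted IL with ILDASM or Reflector):

    int i = 42;
    // Int32 is a sealed value type and can never be null, so the compiler
    // emits the call instruction (not callvirt) for this invocation.
    string s = i.ToString();
    Console.WriteLine(s);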


    Guy or a Girl

    • 5 Comments
    Hyderabad Microsoft Campus

    One interesting aspect of working in an internationally distributed team is that sometimes it gets difficult to make common judgements. E.g. when we see a name we inherently figure out whether it's a male or a female name and refer to that person accordingly in email. The issue is that I cannot always make the same judgement for names from another country/culture.

    In my previous team, in a long email thread someone continually referred to Khushboo as "he". Khushboo didn't correct him and it went on for some time until I pointed it out to him in a separate email. Today I was typing an email and suddenly realized I had no idea whether one of the people I was referring to is male or female. I took a wild guess and I'm waiting to get corrected.


    Baby smash

    • 1 Comments
    Waiting in the Microsoft lobby

    What do all programmer dads/moms have in common? The moment they get onto a new UI platform they write a child-proofing application for the keyboard.

    Scott Hanselman has just posted his version, Baby Smash (via AmitChat). The funny thing is I've written one in WPF and so did my ex-manager.


    Cell phone assault

    • 4 Comments
    Visakhapatnam - Ramakrishna beach

    In the last two weeks my cell phone got assaulted thrice. First it was someone sending me a virus over Bluetooth (a .sis file actually). This happened while I was taking a photograph of my daughter with the cell phone camera in a restaurant (Aromas of China, City Center mall in Hyderabad).

    The next one was bluetooth based advertisement messages in the Forum Mall in Bangalore. They were actually sending offers of the hour over bluetooth and I got 2 such messages.

    The third incident was at the airport when someone again tried to send me a trojan app and make me open it.

    I was really surprised by the rapid growth of cell phone based attacks. Worse, few people know about this. My wife had no idea that you can actually send applications over Bluetooth and that they can infect the phone.


    StyleCop has been released

    • 1 Comments
    me

    Microsoft released the internal tool StyleCop to the public under the fancy yet boring name of Microsoft Source Analysis for C#. Even though the name is boring the product is not.

    You'll love this tool when it imposes consistent coding style across your team. You'll hate this tool when it imposes the same on you. The result is stunning looking, consistently styled code which your whole team can follow uniformly.

    StyleCop has been in use for a long time internally at Microsoft and many teams mandate its usage. My previous team, VSTT, used it as well. The only crib I had is that it didn't allow single line getters and setters (and our team didn't agree to disable this rule either).

    private int foo;

    // StyleCop didn't like this one
    public int Foo
    {
        get { return this.foo; }
    }
    
    // StyleCop wanted this instead
    public int Foo
    {
        get
        {
            return this.foo;
        }
    }

    Read more about using StyleCop here. You can set it up to run as a part of your build process as documented here. Since it plugs in as an MSBuild project you can use it as a part of the Team Foundation Build process as well.

    Let the style wars begin in team meetings :)

    Update: Jason corrected me in the comments; apparently StyleCop does allow single line getters and setters (seems like they got it fixed since the last time I used it).


    Building Scriptable Applications by hosting JScript

    • 4 Comments
    The kind of food I should have, but I don't

    If you have played around with large applications, I'm sure you have been intrigued by how they are built to be extendable. There are multiple options

    1. Develop your own extension mechanism where you pick up extension binaries and execute them.
      One managed code example is here, where the application loads DLLs (assemblies) from a folder and runs specific types from them. A similar unmanaged approach is to allow registration of GUIDs and use COM to load types that implement those interfaces.
    2. Roll out your own scripting mechanism:
      One managed example is here, where on-the-fly compilation is used. With the DLR hosting mechanism coming up this will become very easy going forward.
    3. Support a standard scripting mechanism:
      This involves hosting JScript/VBScript inside the application and exposing a document object model (DOM) to it. Anyone can then write standard JScript to extend the application, very much like how JScript in a webpage can extend/program the HTML DOM.

    Obviously the 3rd is the best choice if you are developing a native (unmanaged) solution. The advantages are many: a low learning curve (any JScript programmer can write extensions), built-in security and low cost.

    In this post I'll try to cover how you go about doing exactly that. I found little online documentation and took help of Kaushik from the JScript team to hack up some code to do this.

    The Host Interface

    To host JScript you need to implement IActiveScriptSite. The code below shows how we do that, stripping out the details we do not want to discuss here (no fear :) all the code is present in the download pointed at the end of the post). The code below is in the file ashost.h

    class IActiveScriptHost : public IUnknown
    {
    public:
        // IUnknown
        virtual ULONG __stdcall AddRef(void) = 0;
        virtual ULONG __stdcall Release(void) = 0;
        virtual HRESULT __stdcall QueryInterface(REFIID iid, void **obj) = 0;

        // IActiveScriptHost
        virtual HRESULT __stdcall Eval(const WCHAR *source, VARIANT *result) = 0;
        virtual HRESULT __stdcall Inject(const WCHAR *name, IUnknown *unkn) = 0;
    };

    class ScriptHost : public IActiveScriptHost, public IActiveScriptSite
    {
    private:
        LONG _ref;
        IActiveScript *_activeScript;
        IActiveScriptParse *_activeScriptParse;

        ScriptHost(...){}
        virtual ~ScriptHost(){}

    public:
        // IUnknown
        virtual ULONG __stdcall AddRef(void);
        virtual ULONG __stdcall Release(void);
        virtual HRESULT __stdcall QueryInterface(REFIID iid, void **obj);

        // IActiveScriptSite
        virtual HRESULT __stdcall GetLCID(LCID *lcid);
        virtual HRESULT __stdcall GetItemInfo(LPCOLESTR name, DWORD returnMask,
                                              IUnknown **item, ITypeInfo **typeInfo);
        virtual HRESULT __stdcall GetDocVersionString(BSTR *versionString);
        virtual HRESULT __stdcall OnScriptTerminate(const VARIANT *result,
                                                    const EXCEPINFO *exceptionInfo);
        virtual HRESULT __stdcall OnStateChange(SCRIPTSTATE state);
        virtual HRESULT __stdcall OnEnterScript(void);
        virtual HRESULT __stdcall OnLeaveScript(void);
        virtual HRESULT __stdcall OnScriptError(IActiveScriptError *error);

        // IActiveScriptHost
        virtual HRESULT __stdcall Eval(const WCHAR *source, VARIANT *result);
        virtual HRESULT __stdcall Inject(const WCHAR *name, IUnknown *unkn);

    public:
        static HRESULT Create(IActiveScriptHost **host) { ... }
    };

    Here we are defining an interface IActiveScriptHost. ScriptHost implements IActiveScriptHost and also the required hosting interface IActiveScriptSite. IActiveScriptHost exposes 2 extra methods (Eval and Inject) that will be used from outside to easily host JScript scripts.

    In addition ScriptHost also implements a factory method Create. This Create method does the heavy lifting of using COM queries to get the various interfaces it needs (IActiveScript, IActiveScriptParse) and stores them in the corresponding pointers.

    Instantiating the host

    So the client of this host class creates the ScriptHost instance using the following (see ScriptHostBase.cpp)

    IActiveScriptHost *activeScriptHost = NULL;
    HRESULT hr = S_OK;
    HRESULT hrInit = S_OK;
    
    hrInit = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
    if(FAILED(hrInit)) throw L"Failed to initialize";
    
    hr = ScriptHost::Create(&activeScriptHost);
    if(FAILED(hr)) throw L"Failed to create ScriptHost";

    With this the script host is available through the activeScriptHost pointer and we have the JScript engine hosted in our application

    Evaluating Scripts

    After hosting we need to make it do something interesting. This is where the IActiveScriptHost::Eval method comes in.

    HRESULT __stdcall ScriptHost::Eval(const WCHAR *source, VARIANT *result)
    {
        assert(source != NULL);
        if (source == NULL)
            return E_POINTER;

        return _activeScriptParse->ParseScriptText(source, NULL, NULL, NULL,
                                                   0, 1,
                                                   SCRIPTTEXT_ISEXPRESSION,
                                                   result, NULL);
    }

    Eval accepts the script text, executes it using IActiveScriptParse::ParseScriptText and returns the result.

    So effectively we can accept input from the console and evaluate it (or read a file and interpret the complete script in it).

    while (true)
    {
        wcout << L">> ";
        getline(wcin, input);
        if (quitStr.compare(input) == 0)
            break;
    
        if (FAILED(activeScriptHost->Eval(input.c_str(), &result)))
        {
            throw L"Script Error";
        }
    
        if (result.vt == VT_I4)  // VT_I4 == 3, an integer result
            wcout << result.lVal << endl;
    }

    So all this is fine and at the end you can run the app (which BTW is a console app) and this is what you can do.

    JScript sample Host
    q! to quit
    
    >> Hello = 7
    7
    >> World = 6
    6
    >> Hello * World
    42
    >> q!
    Press any key to continue . . .

    So you have extended your app to do maths for you, or rather run basic scripts, which even though exciting is not of much value.

    Extending your app

    Once we are past hosting the engine and running scripts inside the application we need to go ahead with actually building the application's DOM and injecting it into the hosting engine so that JScript can extend it.

    If you already have a native application which is built on COM (IDispatch) then you have nothing more to do. But let's pretend that we actually have nothing and need to build the DOM.

    To build the DOM you need to create an IDispatch based DOM tree. There can be more than one root. In this post I'm not trying to cover how to build IDispatch based COM objects (which you'd do using ATL or some other such means). However, for simplicity we will roll out a hand-written implementation which implements an interface as below.

    class IDomRoot : public IDispatch
    {
        // IUnknown
        virtual ULONG __stdcall AddRef(void) = 0;
        virtual ULONG __stdcall Release(void) = 0;
        virtual HRESULT __stdcall QueryInterface(REFIID iid, void **obj) = 0;

        // IDispatch
        virtual HRESULT __stdcall GetTypeInfoCount(UINT *pctinfo) = 0;
        virtual HRESULT __stdcall GetTypeInfo(UINT iTInfo, LCID lcid,
                                              ITypeInfo **ppTInfo) = 0;
        virtual HRESULT __stdcall GetIDsOfNames(REFIID riid, LPOLESTR *rgszNames,
                                                UINT cNames, LCID lcid,
                                                DISPID *rgDispId) = 0;
        virtual HRESULT __stdcall Invoke(DISPID dispIdMember, REFIID riid, LCID lcid,
                                         WORD wFlags, DISPPARAMS *pDispParams,
                                         VARIANT *pVarResult, EXCEPINFO *pExcepInfo,
                                         UINT *puArgErr) = 0;

        // IDomRoot
        virtual HRESULT __stdcall Print(BSTR str) = 0;
        virtual HRESULT __stdcall get_Val(LONG* pVal) = 0;
        virtual HRESULT __stdcall put_Val(LONG pVal) = 0;
    };

    At the top we have the standard IUnknown and IDispatch methods and at the end we have our DOM root's own methods (Print, get_Val and put_Val). It exposes a Print method that prints a string and a property called Val (with a get and a set method for that property).

    The class DomRoot implements this interface and an additional method named Create which is the factory used to create it. Once we are done with creating this we will inject this object inside the JScript scripting engine. So our final script host code looks as follows

    IActiveScriptHost *activeScriptHost = NULL;
    IDomRoot *domRoot = NULL;
    HRESULT hr = S_OK;
    HRESULT hrInit = S_OK;
    
    hrInit = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
    if(FAILED(hrInit)) throw L"Failed to initialize";
    
    // Create the host
    hr = ScriptHost::Create(&activeScriptHost);
    if(FAILED(hr)) throw L"Failed to create ScriptHost";
    
    // create the DOM Root
    hr = DomRoot::Create(&domRoot);
    if(FAILED(hr)) throw L"Failed to create DomRoot";
    
    // Inject the created DOM Root into the scripting engine
    activeScriptHost->Inject(L"DomRoot", (IUnknown*)domRoot);

    What happens inside Inject is shown below

    map<std::wstring, IUnknown*> rootList;
    typedef map<std::wstring, IUnknown*>::iterator MapIter;
    typedef pair<std::wstring, IUnknown*> InjectPair;
    
    HRESULT __stdcall ScriptHost::Inject(const WCHAR *name, IUnknown *unkn)
    {
        assert(name != NULL);
        if (name == NULL)
            return E_POINTER;
    
        _activeScript->AddNamedItem(name, SCRIPTITEM_GLOBALMEMBERS |
                                          SCRIPTITEM_ISVISIBLE);
        rootList.insert(InjectPair(std::wstring(name), unkn));
    
        return S_OK;
    }

    In Inject we store the name of the object and the corresponding IUnknown in a map (hash table). Each time the script encounters an object in its code it calls GetItemInfo with that object's name and we then look it up in the map and return the corresponding IUnknown

    HRESULT __stdcall ScriptHost::GetItemInfo(LPCOLESTR name,
                                        DWORD returnMask,
                                        IUnknown **item,
                                        ITypeInfo **typeInfo)
    {	
        MapIter iter = rootList.find(name);
        if (iter != rootList.end())
        {
            *item = (*iter).second;
            return S_OK;
        }
        else
            return E_NOTIMPL;
    }

    After that the script calls into that IDispatch to look for properties and methods and calls into them.

    The Whole Flow

    By now we have seen a whole bunch of code. Let's see how the whole thing works together. Let's assume we have an extension written in JScript and it calls DomRoot.Val = 5; this is what happens to get the whole thing to work

    1. During initialization we had created the DomRoot object (DomRoot::Create) which implements IDomRoot and injected it in the script engine via AddNamedItem and stored it at our end in a rootList map.
    2. We call activeScriptHost->Eval(L"DomRoot.Val = 5;", ...) to evaluate the script. Eval calls _activeScriptParse->ParseScriptText.
    3. When the script parse engine sees the "DomRoot" name it figures out that the name is a valid name added with AddNamedItem and hence it calls its host's ScriptHost::GetItemInfo("DomRoot");
    4. The host we have written looks up the same map filled during Inject and returns the IUnknown of it to the scripting engine. So at this point the scripting engine has a handle to our DOM root via an IUnknown to the DomRoot object
    5. The scripting engine does a QueryInterface on that IUnknown to get the IDispatch interface from it
    6. Then the engine calls the IDispatch::GetIDsOfNames with the name of the property "Val"
    7. Our DomRoots implementation of GetIDsOfNames returns the required Dispatch ID of the Val property (which is 2 in our case)
    8. The script engine calls IDispatch::Invoke with that dispatch id and a flag telling whether it wants the get or the set. In this case it's the set. Based on this DomRoot redirects the call to DomRoot::put_Val
    9. With this we have a full flow of the host to script back to the DOM

    In action

    JScript sample Host
    q! to quit
    
    >> DomRoot.Val = 5;
    5
    >> DomRoot.Val = DomRoot.Val * 10
    50
    >> DomRoot.Val
    50
    >> DomRoot.Print("The answer is 42");
    The answer is 42

     

    Source Code

    First of all the disclaimer. Let me get it off my chest by saying that the DomRoot code is a super simplified COM object. It commits nothing less than sacrilege. You shouldn't treat it as a sample code. I intentionally didn't do a full implementation so that you can step into it without the muck of IDispatchImpl or ATL coming into your way.

    However, you can treat the script hosting part (ashost, ScriptHostBase) as sample code (that is the idea of the whole post :) )

    The code organization is as follows

    ashost.cpp, ashost.h - The Script host implementation
    DomRoot.cpp, DomRoot.h - The DOM Root object injected into the scripting engine
    ScriptHostBase.cpp - Driver

    Note that in a real-life example the driver should load JScript files from a given folder and execute them.

    Download from here


    Model, View, Controller

    • 1 Comments
    Chocs

    These days the whole world is abuzz with the Model, View, Controller (MVC) architecture. This is not something new and has been known to computer scientists for close to 30 years. I guess the new-found popularity is due to the fact that it has heavy application in web development and a lot of mainstream web development platforms are putting in support for it. Ruby on Rails and ASP.NET MVC are classic examples.

    Coding Horror has a nice post on this topic. I liked the following statement it made

    "Skinnability cuts to the very heart of the MVC pattern. If your app isn't "skinnable", that means you've probably gotten your model's chocolate in your view's peanut butter, quite by accident."

    I actually use a very similar concept. The moment I see an application's architecture (be it an interview candidate or a friend showing off something) I ask the question "Can you write a console version of this easily?". If the answer is no or it needs a re-design, it means the separation of model, view and controller is not correct. You are going to have a nightmare writing and maintaining that software.


    You need to be careful about how you use count in for-loops

    • 3 Comments
    Negotiating

    Let's consider the following code

     MyCollection myCol = new MyCollection();
     myCol.AddRange(new int[] { 1, 2, 3, 4, 5, });
     for (int i = 0; i < myCol.Count; ++i)
     {
         Console.WriteLine("{0}", i);
     }

    What most people forget to consider is the condition check that happens in the for loop (the i < myCol.Count part). In this case it is actually a call to a method, get_Count. The compiler doesn't optimize this away! Since the condition is evaluated on each iteration, there's actually a method call happening each time. For most .NET code it is low-cost because ultimately it's just a small method with the count being returned from an instance variable (if inlined, the method call overhead also goes away). Something like this is typically written for Count.

    public int Count
    {
        get
        {  
            return this.count;
        }
    }

    Someone wrote something very similar in C, where in the for-loop he used strlen. This means that the following code is actually O(n²) because each time you are re-calculating the length...

    for(int i = 0; i < strlen(str); ++i) {...}

    Why the compiler cannot optimize it is another question. For one, it's a method call and it is difficult to predict whether there's going to be any side effect. So optimizing the call away can have other disastrous effects.

    So store the count and re-use it if you know that the count is not going to change...
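
    As a sketch of that advice applied to the loop above (assuming the collection isn't modified inside the loop):

    MyCollection myCol = new MyCollection();
    myCol.AddRange(new int[] { 1, 2, 3, 4, 5 });
    
    int count = myCol.Count;           // evaluate get_Count just once
    for (int i = 0; i < count; ++i)    // the condition is now a simple local compare
    {
        Console.WriteLine("{0}", i);
    }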


    Alternatives to XML

    • 0 Comments
    Halloween

    Though not as much as Jeff Atwood, I don't like the overuse of XML either. In our last project we used XML in a bunch of places where it made sense and also planned to use it in a bunch of other places where it didn't. For some strange reason some folks think it's actually readable and suggested we use XML to dump the user actions we recorded, because it's easy to parse and is human readable/editable. While I'm perfectly fine doing it in XML, it's definitely not for that reason.

    Anyway, sense prevailed and even though we do store it in XML we dump out the automation source code in the obviously more readable C#/VB.NET.

    Before I completely get sidetracked let me state that this post is not about XML or about JSON but about the fact that there exist many alternatives to both. Head on over to here (via Coding Horror)...


    What is the similarity between Windows and Textile Industry

    • 0 Comments

    ...they both use threads and fibers :)

    Pink....

    Most people are aware of processes and threads. Windows offers an even finer granularity of execution, called the fiber. To quote MSDN

    A fiber is a unit of execution that must be manually scheduled by the application. Fibers run in the context of the threads that schedule them.

    The obvious question that should come to managed code developers is whether .NET supports fibers. The answer is that from 2.0 the CLR does support fiber mode. This means that there are hosting APIs using which a host can make the CLR use fibers to run its threads. So in effect there's no requirement that a .NET thread be tied to the underlying OS's threads. A classic example is SQL Server, which hosts .NET in fiber mode because it wants to take care of scheduling directly. Head over to here (scroll down to the SQL Server section) for an excellent read about this topic.

    There's also the book Customizing the Microsoft .NET Framework Common Language Runtime written by Steven Pratschner which has a chapter on customizing the CLR to use fibers. I have already ordered the book. Once it comes in and I get a chance to read it, I'll post more about this.


    My workstation layout

    • 3 Comments

    Our team just moved to a new building on the Microsoft India campus. A lot of people were going around checking out other people's offices. I got asked a couple of times about my workstation layout and thought I'd do a quick post on that.

    Workstation

    Like most people in our team I use a dual monitor setup. Last time I estimated I spend about 12% of my life looking for things (40% of it for my car keys). So even though there are people who use 7 monitors, I'm never going to join that gang and bump that number to 30% by adding the time spent searching for my app windows. And Microsoft will definitely not fund that many monitors either :). So for me 2 is enough.

    Both the monitors I have are standard HP1965s (19" monitors) hooked on to ATI Radeon cards. One of them (the one on the left) is looped through a KVM switch and I can rotate it among the other 2 machines that I have. The other is rotated into portrait (vertical) mode and I use it primarily for coding. The image below should explain why

    Workstation

    This provides a much better code view. In the font size I use (Consolas 9pt) I can see 74 lines of code vs 54 in landscape mode. So this means 37% more!!! Since I have an ATI card I use the Catalyst Control Center to rotate the display.

    I also prefer dark background and use white/light-color text on it. My eyes feel better with it. I keep both Visual Studio and GVim in dark color mode. You can download my vssettings from here and .vimrc from here.

    That kind of rounds up the workstation layout that I use in office. I try my best not to work on the laptop directly.  I TS on to it in case I need to use it for any reason. When I took the picture it was quietly napping on the other side of my office :)


    Struggling with email overload

    • 3 Comments

    It literally rains email at Microsoft (if you've been to Seattle/Redmond you know why :) )

    I've always struggled to keep up with email at Microsoft. When I joined I was stunned by the downpour. The number of emails I got on the first day was more than what I got in a month at Adobe. The situation worsened when I went through a team transition last month because for some time I had to listen to the email threads of both teams' DLs (distribution lists).

    I have tried various techniques to cope before. These included a complex labyrinth of folders (I've met folks with 9-level-deep folder nesting), rules, search folders, you name it!!

    All of them failed until I saw this post from John Lam. It talks about reverse pimping Outlook. Even though I didn't go to the extreme he did, I basically got the following done

    1. Removed all rules, toolbars, folders
    2. Created 3 simple folders Archive, Followup, Automated
    3. Created one big fat rule to move all emails from automated DLs (e.g. checkin notices) to the automated folder
    4. Copy-pasted macros from John's blog, set up toolbar buttons to launch these macros and also associated shortcuts with them. Go to John's blog for the macros or download from here
    5. Effectively all emails from human beings land up in my inbox.
    6. When an email comes I read it. After that I have only 3 options
      1. Delete it
      2. Hit Alt+R to archive it (this launches a macro to mark the email as read and moves it to the archive folder)
      3. Hit Alt+U for follow up (this launches a macro to flag the email to be replied by EOD and moves it to the follow up folder)

    This ensures that I read all emails that come to me; I never miss an email now. I get to zero emails in the inbox a couple of times a day. A couple of times a day I also scan the followup folder to ensure that I have replied/taken action on all emails in it.

    Even though the process sounds complex, it's been working miraculously for me for the last two months. I can finally forget about email overload.

    My outlook looks as shown below. It's more cluttered than John's version because I need to see upcoming meetings in the right pane.

    image


    Moving blogs to http://geekgyan.blogspot.com/

    • 1 Comments
     

    It's been a drag maintaining two blogs for some time now. So going forward I'll shift to posting only on my personal blog. I'll be cross posting for some more time and then stop this blog.

    The new address is: http://geekgyan.blogspot.com/

    Point your aggregator to: http://geekgyan.blogspot.com/feeds/posts/default?alt=rss

    We are moving within Microsoft India as well. We will move to the spanking new Building 3. While putting my stuff in the boxes supplied, I thought it's a good time to announce my blog move as well :)


    Forcing a Garbage Collection is not a good idea

    • 7 Comments
    Our cars 

    Someone asked on a DL about when to force a GC using GC.Collect. This has been answered by many experts before, but I wanted to re-iterate. The simple answer is

    "extremely rarely from production code and if used ensure you have consulted the GC folks of your platform".

    Lets dissect the response...

    Production Code

    The "production code" bit is key here. It is always fine to call GC.Collect from test/debug code when you want to ensure your application performs fine when a sudden GC comes up or you want to verify all your objects have been disposed properly or the finalizers behave correctly. All discussion below is relevant only to shipping production code.

    Rarely

    A lot of folks jumped into the thread giving examples of where they have done/seen GC.Collect being used successfully. I tried to understand each of the scenarios and explain why in my opinion it is not required and doesn't qualify as a rare scenario. I have copy-pasted some of these scenarios with my responses below (with some modifications).

    1. For example, your process has a class which wraps a native handle and implements Dispose pattern. And the handle will used in exclusive mode. The client of this class forgets to call Dispose/Close to release the native handle (they rely on Finalizer), then other process (suppose the native handle is inter-process resource) have to wait until next GC or even full GC to run Finalizer, since when Finalizer will run is not expected – other process will suffer from waiting such exclusive sharing resource…
    This is a program bug. You shouldn't be covering up a Dispose pattern misuse with a GC call. You are essentially shipping buggy code, or in case you provide the framework, allowing users to write buggy code. This should be fixed by ensuring that the clients call Dispose and not by forcing a GC. I would suggest adding an Assert in the finalizer in your debug bits to ensure that you fail in the test scenario. In the case of the Fx, write the right code and let performance issues surface so that users also write the right code.
    2. Robotics might be another example—you might want time-certain sampling and processing of data.
      .NET is not a Real time system. If you assume or try to simulate Real Time operations on it then I have only sympathy to offer :). Is the next suggestion to call all methods in advance so that they are already jitted?
    3. Another case I can think of is the program is either ill-designed or designed specially to have a lot of Finalizers (they wrap a lot of native resources in the design?). Objects with Finalizer cannot be collected in generation 0, at least generation 1, and have great chance to go to generation 2…
      This is not correct. The dispose pattern is there exactly for this reason. Any reason why you are not using dispose pattern and using GC suppress in the dispose method?
    4. Well, one “real world” scenario that I know of is in a source control file diff creation utility.  It loops through processing each file in the pack, and loads that entire file into memory in order to do so it calls GC.Collect when it’s finished with each file, so that the GC can reclaim the large strings that are allocated.
    Why can't it just not do anything, and is there a perf measurement to indicate otherwise? GC has a per-run overhead. So in case nothing is done it may so happen that for a short diff creation the GC is never run, or is run only once every 10 files handled, leading to fewer runs and hence better perf. For a batch system where there is no user interaction happening in the middle, what is the issue if there is a system-decided GC in the middle of the next file?
    5. A rare case in my mind is you allocate a lot of large objects > 85k bytes, and such size objects will be treated as generation 2 objects. You do not want to wait for next full GC to run (normally GC clears generation 0 or generation 1), you want to compact managed heap as soon as possible.
      Is it paranoia or some real reason? If it holds native resources then you are covered by dispose patterns and if you are considering memory pressure then isn’t GC there to figure out when to do it for you?

    In effect most usage are redundant.

    Question is then what qualifies as a rare scenario where you want to do a GC.Collect. This has been explained by Rico Mariani (here) and Patrick Dussud (here).

    "In a nutshell, don't call it, unless your code is unloading large amounts of data at well-understood, non-repeating points (like at the end of a level in a game), where you need to discard large amounts of data that will no longer be used."

    It's almost always when you know for sure a GC run is coming ahead (which you completely understand and maybe have confirmed with the GC guys of your framework) and you want to control the exact point when it happens. E.g. in the case of a game-level end you have burned through all the data and you know that you can discard it; if you don't, a GC will start after 6 frames of rendering in your next level and you are better off doing it now while the system is idle, since you'd drop a frame or two if it happened in the middle of the next level.

    And obviously you call GC.Collect if you found an issue reported/discussed in the forums and you have figured out a GC bug which you want to work around.

    I would highly recommend seeing this video where Patrick Dussud, the father of the .NET GC, explains why apparent GC issues may actually be side-effects of other things (e.g. finalizers stuck trying to delete behind-the-scenes STA COM objects).

    What is the problem with calling GC.Collect

    So why are folks against calling GC.Collect? There are multiple reasons

    1. There's an inherent assumption that the user knows better when the GC should run. This cannot be true because according to the CLR spec there is no standard time. See here. Since the GC is plugged into the execution engine it knows the system state best and knows when to fire. With Silverlight and other cross-platform technologies becoming mainstream it will become harder and harder to predict where your app is run. There are already 3 separate GCs: the desktop, server and Compact Framework ones. Silverlight will bring in more and your assumptions can be totally wrong.
    2. GC has some cost (rather large):
    GC is run by first marking all the objects and then cleaning them. So whether garbage or not, the objects will be touched and it takes an awful amount of time to do that. I've seen folks measure the time taken by GC.Collect and derive the GC cost from that. This is not correct because GC.Collect fires the collection and returns immediately. Later the GC goes about freezing all the threads. So the GC time is way more than what Collect takes and you need to monitor performance counters to figure out what is going on.
    3. GC could be self tuning:
    The desktop GC, for example, tunes itself based on historical data. Let's assume that a large collection just happened which cleaned up 100mb of data. Incidentally, exactly after that a forced GC happened which resulted in no data being cleaned up. The GC learns that collection is not helping and next time when a real collection is to be fired (low memory condition) it simply backs off based on the historical data. However, if the forced GC hadn't occurred it would have remembered that 100mb got cleared and would have jumped in right away.

    Both 2 and 3 are GC implementation specific (they differ across the desktop and Compact GC), which stresses the first point: most assumptions are about implementation details of the GC and may/will change, jeopardizing any attempt to out-guess the GC on when to run.


    Ensuring only one instance of an application can be run and then breaking that approach

    • 1 Comments
    Whidbey_2005_0409_131710

    A question posted on a DL was as follows

    "I have an application that allows only one instance on the desktop, how do I force it to open multiple instance for some testing purposes".

    Disclaimer: Note that this is kinda hacky and only relevant if you are a tester trying to break something. This shouldn't be treated as a way to achieve anything productive.

    Since the user didn't explain which approach is used to ensure only one instance is allowed, I'll try listing down the methods I know of and how I can break them.

    Using named Mutex:
    This is the more resilient approach, taken by applications like Windows Media Player. Here the application tries to create a named Mutex with a sufficiently complicated name to avoid collision, e.g. namespace.app.guid, and then tries to acquire it. In case it fails to acquire the mutex then an instance is already running and it closes itself. If it acquired the mutex it means it is the first instance and it continues normally. An approach is outlined here.
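
    A minimal C# sketch of this approach (uses System.Threading.Mutex; the mutex name below is just a placeholder):

    bool createdNew;
    using (Mutex mutex = new Mutex(true, "MyCompany.SingleInstanceApp.Mutex", out createdNew))
    {
        if (!createdNew)
        {
            // Another instance already owns the mutex; bail out.
            return;
        }
    
        // First instance: run the application here.
    }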

    Even though this seems to be the most robust solution, it can be made to fail as well. If a user has sufficient permission he can do a CloseHandle on a Mutex handle created by some other application. For this you don't even need to write code. Do the following

    1. Download and install Process Explorer.
    2. Launch it (might need elevation on Vista)
    3. Have an application that uses this approach like my application named SingleInstanceApp.exe. Try launching a second instance and it will fail to open it.
    4. Click on the app finder toolbar button in Process Explorer and drag it to SingleInstanceApp window. This will get this application selected in process explorer.
    5. Select View->Lower Pane View ->Handles or simply Ctrl+H
    6. This will show all the handles created by SingleInstanceApp.
    7. Scroll down looking for the mutexes. At the end the screen looks something as follows Capture
    8. Right click on the mutex and choose Close Handle.
    9. Now try creating the second instance and it will open fine.

    Enumerate the names of applications running and see if the app is already running:
    This uses a combination of Process.GetCurrentProcess and Process.GetProcessesByName(); see an implementation here.
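
    A rough sketch of that check (using System.Diagnostics):

    // Returns true if another process with the same executable name is running.
    static bool AnotherInstanceRunning()
    {
        string myName = Process.GetCurrentProcess().ProcessName;
        return Process.GetProcessesByName(myName).Length > 1;
    }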

    This is easy to break. To do a DoS (Denial of Service) just create another application with the name that the app expects and pre-launch it. To open two instances, copy the application to another name, open that first and then open the original application.

    Enumerate Windows

    This involves iterating through all the open windows in the system using EnumWindows and then seeing if a window with a given name or class name is already present. Since the system doesn't guarantee uniqueness of either the window text or the class name, this can be broken in a similar way to the first one. However, there are some complications as it is difficult to change the window text from outside. A combination of code injection and the SetWindowText Win32 API should work, though.


    Trivia: How does CLR create an OutOfMemoryException

    • 4 Comments

     Bargaining

    When the system goes out of memory an OutOfMemoryException is thrown. Similarly for stack overflow a StackOverflowException is thrown. Now typically an exception is thrown as follows (or the corresponding native way)

    throw new System.OutOfMemoryException();

    But when the system is already out of memory there's little chance that creating an exception object will succeed. So how does the CLR create these exception objects?

    The answer is trivial and as expected: at the very start of the application the CLR creates all these exception objects and stores them in a static list. In case one of these exceptions is to be thrown, it is fetched from this list and thrown.
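
    Conceptually the idea looks like the following C# sketch (illustrative only, not the actual CLR code):

    // Allocate the exception up front, while memory is still available...
    static readonly OutOfMemoryException s_oomException = new OutOfMemoryException();
    
    // ...and when an allocation later fails, throw the pre-created instance
    // instead of trying to create a new one at the worst possible moment.
    static void FailAllocation()
    {
        throw s_oomException;
    }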

    The consequences of handling stack overflow are even bigger because once the stack has overflown, calling even a single method on that thread is equally dangerous and can cause further corruption. That is the material for another post :)


    When does the .NET Compact Framework Garbage Collector run

    • 7 Comments

    Moon

    Other than the exact when part, this post applies equally to the desktop framework.

    Disclaimer: This post is mainly indicative. When the GC runs is an implementation detail and shouldn't be relied on. This is not part of any contract or specification and may (most probably will) change.

    The ECMA specification for Garbage Collection is intentionally vague about when an object will be collected (or freed up). The memory management cycle mentioned in the spec is as follows

    1. When the object is created, memory is allocated for it, the constructor is run, and the object is considered live.
    2. If no part of the object can be accessed by any possible continuation of execution, other than the running of finalizers, the object is considered no longer in use and it becomes eligible for finalization. [Note: Implementations might choose to analyze code to determine which references to an object can be used in the future. For instance, if a local variable that is in scope is the only existing reference to an object, but that local variable is never referred to in any possible continuation of execution from the current execution point in the procedure, an implementation might (but is not required to) treat the object as no longer in use. end note]
    3. Once the object is eligible for finalization, at some unspecified later time the finalizer (§17.12) (if any) for the object is run. Unless overridden by explicit calls, the finalizer for the object is run once only.
    4. Once the finalizer for an object is run, if that object, or any part of it, cannot be accessed by any possible continuation of execution, including the running of finalizers, the object is considered inaccessible and the object becomes eligible for collection.
    5. Finally, at some time after the object becomes eligible for collection, the garbage collector frees the memory associated with that object.

    As you can see the specification doesn't even require an implementation to do code analysis to figure out garbage. It can simply use scoping rules (used anyway by the compiler to detect valid variable usage) for garbage detection. The specification also doesn't specify when the objects are eventually collected. The only requirement is that objects are finalized and freed an unspecified time after they go out of use. This conveniently open statement lets each GC implementer choose whatever they deem fit for the purpose. Since even the thread is not specified, either a concurrent GC or a non-concurrent GC can be used.

    However, everyone wants to know exactly when their platform's GC is run. Here goes the non-exhaustive list for the .NET Compact Framework's Garbage Collector

    1. Out of memory condition:
      When the system fails to allocate or re-allocate memory a full GC is run to free up as much as possible and then allocation is re-attempted once more before giving up
    2. After some significant allocation:
      If one megabyte of memory is allocated since the last garbage collection then GC is fired.
    3. Failure of allocating some native resources:
      Internally .NET uses various native resources. Some native resource allocation can fail due to memory issues and GC is run before re-attempting
    4. Profiler:
      Profiler APIs built into the framework can force a GC
    5. Forced GC:
      System.GC class is provided in the BCL to force a collection
    6. Application moves to background
      When the given application is moved to background collection is run

    Obviously there can be small differences across various platforms on which .NET CF is implemented. However, the differences are small enough to ignore for this discussion.

    Even though this list seems small it works pretty well across disparate systems like XBox and Windows Mobile. In the next post I'll try to get into why "production user code should never do a forced GC". I know that statement is a bit controversial (at least it got so in an internal thread).


    Which end of the egg do you crack. Putting it differently, what is your Endianess

    • 6 Comments
    Mr.Egg

    Few people seem to know that the word Endianness comes from Gulliver's Travels. In Gulliver's Travels there were two types of people: the Lilliputians, who cracked the small end of their soft-boiled eggs, and the Blefuscudians, who used the big end (Big-Endian). Since I'm well networked these days (over Orkut/Facebook/LinkedIn) I make a conscious decision to be Big-Endian while cracking an egg as it's the preferred network endianness.

    I do not want to delve into the holy endianness war, especially because most modern processors allow hardware/software methods to switch it (reminds me of some politicians though :))

    However, I do use bit/byte questions as the acid test for fly/no-fly interviews as suggested by this guy. One of them involves asking about the whole endianness business and a code snippet to find out the endianness of the current system. I'm usually looking for something like the code below

    short a = 1;
    if (*((char *)&a) == 1)
        printf("Little Endian\n");
    else
        printf("Big Endian\n");

    The idea is not to look for an exact code but to ensure that there's no complete ignorance of this area...
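
    For what it's worth, the managed equivalent is almost a one-liner, since the framework exposes the answer directly (and you can also probe it by hand):

    // The framework already knows the answer...
    Console.WriteLine(BitConverter.IsLittleEndian ? "Little Endian" : "Big Endian");
    
    // ...or probe it yourself, mirroring the C snippet above.
    byte[] bytes = BitConverter.GetBytes((short)1);
    Console.WriteLine(bytes[0] == 1 ? "Little Endian" : "Big Endian");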

    Multi-threaded developer

    • 1 Comments
    Building....

    Once upon a time a kick-ass developer I knew told me that a good developer needs to be multi-threaded and run the following threads

    1. The main worker thread: This is the one used to code up 3-tier applications for your employer. This pays for the rent and for that fancy big car.
    2. The core thread: This is the one that's used to read up data-structures, OS and other fundamental CS stuff which helps you to join into discussion when folks are discussing threaded BST. This also helps you to crack an interview in case you need a new job.
    3. The cool thread: This is the one you use to read up about Silverlight Mobile, the ASP.NET MVC Framework, JSON, etc. This keeps you up to date and lets you hang around with other geeks like Scott Hanselman and Don Box

    The same person also told me that real programmers do not blog; he asked me "Do you know of Dave Cutler's or Linus Torvalds' blog?". So I guess we can safely ignore him :)


    Time for a change

    • 1 Comments

    I've been working for some time in Visual Studio Team System (first on the TFS server and then on the testing infrastructure). I was looking for a change and then I happened to talk with the GPM of the .NET Compact Framework team. The following explains in brief what followed...

    NetCF

    ... and so I landed a job on the .NET CF Garbage Collector and other parts of the execution engine. I'm sure I'll be super busy as the team is heads down getting Silverlight onto mobile devices and going cross-platform (I guess most folks have heard about SL on Symbian/Nokia).

    What this means is that I can (or rather I have to) muck more with .NET (both desktop and CF) without any guilt and can claim that blogging about them is actually a part of my job. So what this means for this blog is that there'll be more hardcore stuff about .NET and an additional Tag called NETCF when I post specifically about it.

    In case you are in India and want to work in a Kick-ass team delivering one of the most key technologies for the industry then drop me a line at abhinaba.basu @ you know where. Bragging rights will come for free when you point to all those cool WinMobile/S60/XBox/Zune devices and claim how you coded .NET for it....

    Wish me luck...


    International Mother Language Day is back and being celebrated with a lunar eclipse

    • 3 Comments
    Interview - Delhi

    For starters it doesn't include C++ or C#, even though your mom codes in it :) ...

    A lot of people who speak English natively forget the importance of their mother language due to its predominance. They take their language for granted. However, each year a bunch of languages become extinct, the latest being Eyak, which became extinct exactly a month ago with the death of Marie Smith Jones, the last native Eyak speaker.

    To me this day (21st Feb) is even more special because on this day in 1952 people in Bangladesh laid down their lives while demanding the right to use their own (and mine) mother language, Bangla.

    I believe that if we don't actively try to preserve our mother languages they will slowly become extinct. One of the most important things for preserving a language is to ensure that it is well covered by technology. Up to XP, complex script handling was not enabled by default. This resulted in Bangla and other Indic languages being rendered completely wrong on XP. This was a serious deterrent to using Bangla on Windows. I used to have a Bangla signature in my email and got countless replies indicating the spelling was wrong. I always replied back explaining how to turn complex script handling on. Things are changing rapidly: Vista has this on by default and with better keyboard and font support I'm sure using Bangla on computers will become really easy.

     

    নমস্কার


    Linked on the msdn forum for all the wrong reasons

    • 2 Comments
    Flowers Rennaisance hotel - Mumbai

    Someone saw a message in their Team Foundation Server history which was "Edited by God" and asked on the MSDN forum how this is possible. Someone actually read my blog (or some other's, but I tend to believe mine 'cause I'm linked on that page) which shows exactly how to do that. So now I have a poor joke (PJ) of mine being featured on MSDN :)


    African music anyone?

    • 3 Comments
    Rain in Lahari resort - hyderabad

    For the last couple of days the vast majority of the movies that I'm seeing somehow land up in Africa. Since I didn't choose any of them (Amit did), it wasn't a conscious decision. Examples include The Interpreter, Blood Diamond and Duma.

    In all of these movies African music is used in the background, and I've begun to like it a lot. I have no idea about the tribe/nationality behind the music but the sound is just enchanting. Any idea where I can get some good original African music on the web? Buying is also an option if Indian stores/sites have it (which I heavily doubt).


    Variable Parameter passing in C#

    • 2 Comments

    Banana leaves - hyderabad

    Sometime back I posted about variable parameters in Ruby. C# also supports methods that accept a variable number of arguments (e.g. Console.WriteLine). In this post I'll try to cover what happens in the background. This is a long one, so bear with me :)

    Consider the following two methods. Both print out each argument passed to them. However, the first accepts variable arguments using the params keyword.

    static void Print1(params int[] args)
    {
        foreach (int arg in args)
        {
            Console.WriteLine(arg);
        }
    }
    
    static void Print2(int[] args)
    {
        foreach (int arg in args)
        {
            Console.WriteLine(arg);
        }
    }

    The above methods can be called as follows

    Print1(42, 84, 126); // variable argument passing
    int[] a = new int[] { 42, 84, 126 };
    Print2(a);           // called with an array

    Obviously in the case above, using variable number of parameters is easier.
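
    It's worth noting that params only changes what the caller can write; since the parameter really is just an array, Print1 can also be handed an existing array directly:

    int[] b = new int[] { 42, 84, 126 };
    Print1(b);   // no extra wrapping array is created; b is passed as-is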

    If we see the generated IL for Print1 and Print2 using ILDASM or Reflector and then do a diff, we will get the following diff

    .method private hidebysig static void Print2(object[] args) cil managed
    .method private hidebysig static void Print1(object[] args) cil managed
    {
    
        .param [1]                                                          
        .custom instance void [mscorlib]System.ParamArrayAttribute::.ctor() 
        .maxstack 2
        .locals init (
            [0] object arg,
            [1] object[] CS$6$0000,
            [2] int32 CS$7$0001,
            [3] bool CS$4$0002)
        L_0000: nop 
        L_0001: nop 
        L_0002: ldarg.0 
        L_0003: stloc.1 
        L_0004: ldc.i4.0 
        L_0005: stloc.2 
        L_0006: br.s L_0019
        L_0008: ldloc.1 
        L_0009: ldloc.2 
        L_000a: ldelem.ref 
        L_000b: stloc.0 
        L_000c: nop 
        L_000d: ldloc.0 
        L_000e: call void [mscorlib]System.Console::WriteLine(object)
        L_0013: nop 
        L_0014: nop 
        L_0015: ldloc.2 
        L_0016: ldc.i4.1 
        L_0017: add 
        L_0018: stloc.2 
        L_0019: ldloc.2 
        L_001a: ldloc.1 
        L_001b: ldlen 
        L_001c: conv.i4 
        L_001d: clt 
        L_001f: stloc.3 
        L_0020: ldloc.3 
        L_0021: brtrue.s L_0008
        L_0023: ret 
    }
    

    Only two lines are additional in Print1 (which takes variable arguments): the .param [1] directive and the ParamArrayAttribute custom attribute. Otherwise both methods look identical. In this context .param [1]* indicates that the first parameter of Print1 (args) is the variable argument. The ParamArrayAttribute is applied to the method to indicate that the method allows a variable number of arguments.

    Effectively all of the above means that the callee is not really bothered about being invoked with a variable number of arguments. It receives an array parameter just as it would without the params keyword. The only difference is that the method is decorated with the .param directive and the ParamArrayAttribute when params is used. Now it's the caller-side compiler's duty to read this attribute and generate the correct code so that the variable number of parameters are put into an array and Print1 is called with that array.

    The generated IL for the call Print1(42, 84, 126); is as follows...

    .method private hidebysig static void Main(string[] args) cil managed
    {
        .entrypoint
        .maxstack 3
        .locals init (
            [0] int32[] CS$0$0000)
        L_0000: nop 
        L_0001: ldc.i4.3        ; <= Array of size 3 is created, int32[3]
        L_0002: newarr int32    ; <=
        L_0007: stloc.0         ; <= the array is stored in the var CS$0$0000
        L_0008: ldloc.0 
        L_0009: ldc.i4.0        ; push 0
        L_000a: ldc.i4.s 0x2a   ; push 42
        L_000c: stelem.i4       ; this makes 42 to be stored at index 0 **
        L_000d: ldloc.0 
        L_000e: ldc.i4.1 
        L_000f: ldc.i4.s 0x54
        L_0011: stelem.i4       ; similarly as above stores 84 at index 1
        L_0012: ldloc.0 
        L_0013: ldc.i4.2 
        L_0014: ldc.i4.s 0x7e
        L_0016: stelem.i4       ; stores 126 at index 2
        L_0017: ldloc.0 
        L_0018: call void VariableArgs.Program::Print1(int32[]) ; call Print1 with array
        L_001d: nop 
        L_001e: ret 
    }
    

    This shows that for the call an array is created and all the parameters are placed in it. Then Print1 is called with that array.

    Footnote:
    *interestingly it starts at 1 and not 0 because 0 is used for the return value.
    **stelem takes the stack [..|array|index|value] and replaces the value in array at index with value

    Cross posted here


    The differences between int[,] and int[][]

    • 4 Comments

    Mumbai roadside drinks

    A friend asked me the differences between the two. Here goes the answer

    int[,]

    This represents a two dimensional rectangular array. Let's take the following array definition

    int[,] ar = new int[3, 3] { { 0, 1, 2}, 
                                { 3, 4, 5},
                                { 6, 7, 8}};

    The array actually looks like the following

    Capture1

    Which, as we can see, is rectangular. This kind of array is required when for every item represented in the rows there is exactly the same number of items represented in the columns, e.g. a board game like chess.
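
    A rectangular array carries its dimensions with it, so iterating over the ar defined above looks like this:

    // GetLength(0) is the number of rows, GetLength(1) the number of columns.
    for (int row = 0; row < ar.GetLength(0); ++row)
    {
        for (int col = 0; col < ar.GetLength(1); ++col)
        {
            Console.Write("{0} ", ar[row, col]);
        }
        Console.WriteLine();
    }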

    int[][]

    This is defined as an array of arrays, also known as a jagged array. They are created as follows

    int[][] ar2 = new int[3][];
    ar2[0] = new int[] { 0, 1, 2 };
    ar2[1] = new int[] { 3, 4 };
    ar2[2] = new int[] { 5, 6, 7, 8 };

    The array looks like

    Capture

    Here the number of columns is not the same for each row. A good example of this kind of usage is when we have an array of polygons where each row contains the coordinates of the vertices of one polygon. Since each polygon has a different number of vertices (square, triangle, ...) this data structure is useful for representing it.

    Since it's jagged it has to be referenced carefully.

    Console.WriteLine(ar2[0][2]); // Prints 2
    Console.WriteLine(ar2[1][2]); // Crashes as row 1 has only 2 columns
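
    The safe way is to use each row's own Length rather than assuming a fixed column count; a small sketch using ar2 from above:

    for (int row = 0; row < ar2.Length; ++row)
    {
        // ar2[row].Length differs from row to row in a jagged array.
        for (int col = 0; col < ar2[row].Length; ++col)
        {
            Console.Write("{0} ", ar2[row][col]);
        }
        Console.WriteLine();
    }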

    The jagged array can be padded on the right to convert it to a rectangular array, but this will result in a lot of space wastage. In the case of the jagged array above the space used is sizeof(int) * 9, but if we pad it we will use sizeof(int) * rows * max(columns), i.e. sizeof(int) * 12. This additional space can be significant.

    Note:

    int and 2D arrays were used only as examples. This applies equally well to other data types and higher dimensions.
