• PFE - Developer Notes for the Field

    Baton Rouge .NET User Group Getting Some Press

    After the second installment of our Speaker Idol format, I was asked by INETA to write an article for the January newsletter targeting other user group leaders.  It came out great.  The article describes the event format and some of our lessons learned...
  • PFE - Developer Notes for the Field

    SharePoint 2010 PnP Guidance Drop 2 Released

On Monday the PnP team released the 2nd drop of the SharePoint 2010 Guidance.  Included is an example of a sandboxed solution, which is a good list aggregation scenario related to SOWs (statements of work) and estimates across a number of sub-sites....
  • PFE - Developer Notes for the Field

    SharePoint 2010 PnP Guidance Drop 1 Released

On Friday the Patterns and Practices team released the first drop of the SharePoint 2010 Guidance.  It includes the upgrades for service locator, config manager, and logger.  There...
  • PFE - Developer Notes for the Field

    Some Useful TFS Links


    I’ve compiled a list of some current (& some oldies but goodies) links to good reference spots for VS and TFS. A lot of these are well known, but if you are new to VSTS/TFS these are some you should take a look at.

    Mike Learned



    Database Edition:



  • PFE - Developer Notes for the Field

    Creating non-user specific printer mappings


    I was delivering a PowerShell class, and the question of how to create a remote printer mapping came up.

It turns out that enterprise administrators may need to help users with their printer connections, setting them up on their behalf.

As I started thinking about the problem, it dawned on me that drive and printer mappings are user specific.  They are stored as part of a user’s profile, and since the user profile does not get loaded until the user is actually logged on, it looks like little can be done.

    Domain user accounts do have a couple of properties that allow administrators to establish a “Home” share, and to assign it a drive letter. Beyond that, they are on their own.

    However, there is a solution!

For printers in particular, it turns out that there is a feature that allows setting up a “global” printer mapping.  This mapping is created for the computer, and any user who logs on will be able to use it (provided (s)he has permissions).

The regular Add Printer wizard only allows you to create a printer mapping for the user running the wizard:


So that will not help.  However, the PrintUI.dll library exposes the functionality to create computer-specific printer mappings!

    Simply run the PrintUIEntry (case sensitive) entry point for that library using rundll32 and pass the appropriate parameters… and voilà, the printer mapping is created.

    What? How do you do that? Thanks for asking!

    Open a PowerShell window (if you still are in the old days, go ahead, use CMD.exe instead) and type:

    RunDLL32 PrintUI.dll PrintUIEntry /?

    to get a help dialog with information on usage:


    To create a global mapping, use the Global Add command with the /n parameter:

    RunDLL32 PrintUI.dll PrintUIEntry /ga /n\\SERVER\PRINTER_NAME

    To delete the mapping, use the Global Delete command:

    RunDLL32 PrintUI.dll PrintUIEntry /gd /n\\SERVER\PRINTER_NAME

    If you want to remotely do this on another computer, the usage indicates that you can specify the computer name with the /c parameter:

    RunDLL32 PrintUI.dll PrintUIEntry /gd /c\\Computer /n\\SERVER\PRINTER_NAME

However, this didn’t work for me, so I just used the first command but ran it remotely using PowerShell:

    $p = [WMICLASS]"//COMPUTER/root/CIMv2:WIN32_Process"

    $p.Create("RunDLL32 PrintUI.dll PrintUIEntry /ga /n\\SERVER\PRINTER_NAME")

Note: Make sure that the Print Spooler service on the box where the new mapping was added is restarted after the addition/deletion of the mapping.
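One way to do that remotely from the same PowerShell session, again via WMI, is a sketch like the following (COMPUTER is a placeholder for the target machine):

    # Win32_Service exposes StopService/StartService methods we can invoke remotely.
    $spooler = Get-WmiObject Win32_Service -ComputerName COMPUTER -Filter "Name='Spooler'"
    $spooler.StopService()  | Out-Null
    $spooler.StartService() | Out-Null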

    I found this to be pretty handy, even in my home network.

    Happy printer mapping!


  • PFE - Developer Notes for the Field

    How do I dump out a string?


    The topic came up a while back on one of our aliases – How can I dump out a string?

There are a lot of ways to do this in the debugger, and there are also debugger extensions that will help out with it.  What most people do in WinDBG is use the “du” command for Unicode strings and “da” for ASCII strings.  However, this has a problem in that the output looks like:

    00000001`40234dd8 "Server=com-appdb-01\moss1;Databa"
    00000001`40234e18 "se=SharedServices2_55;Trusted_Co"
    00000001`40234e58 "nnection=yes;App=Windows SharePo"
    00000001`40234e98 "int Services;Timeout=15"

You cannot easily cut and paste this into other places; you have the addresses and such mixed in.  Also, this only handles a few lines.  If you check out the help for the du command, there is a /c switch that lets you specify the number of characters to display per line, so the whole string ends up on one line.  You would end up with a command like “du /c 100”.  To take it one step further you can actually create an alias:

    as !ds du /c 100 c+

    Then all you have to do is - !ds <address to String object>
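With the alias in place, the connection string from the earlier output comes back on a single line that you can copy (illustrative output):

    0:000> !ds 00000001`40234dd8
    00000001`40234dd8  "Server=com-appdb-01\moss1;Database=SharedServices2_55;Trusted_Connection=yes;App=Windows SharePoint Services;Timeout=15"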

So this combines aliases and some special switches.



  • PFE - Developer Notes for the Field

    What does High Performance Mean?


Over my years in PFE and Microsoft as a whole I have worked on some REALLY cool websites and applications that are used around the world and drive a lot of business.  Getting to work on these cool projects, and getting to see what problems people are solving with Microsoft technology, is one of the main reasons that I love what I do.  One of those cases was highlighted a few months back on Channel9!

    Steve Michelotti does a great job of explaining some of the stuff that we did and what the applications look like.  So check it out!



  • PFE - Developer Notes for the Field

    Passing a Managed Function Pointer to Unmanaged Code


I was working with a customer a while back and they had a situation where they wanted to be able to register a managed callback function with a native API.  This is not a big problem, and the sample that I extended did this already.  The question that arose was – how do I pass a managed object to the native API so that it can pass that object to the callback function?  Then I need to be able to determine the type of the managed object, because it is possible that any number of objects could get passed to this callback function.  I took the code sample that is provided on MSDN and extended it a bit to handle this.  Here is the original sample - 

The first comment here is that this is some DANGEROUS territory.  It is really easy to corrupt things and get messed up.  For those that are used to managed code, this is very much C++ territory where you have all the power, and with that comes all of the rope that you need to hang yourself.

As I mentioned above, the customer wanted to be able to pass different types of objects in the lpUserData so the callback can handle different types of data.  The way to do this is with a GCHandle.  Basically this gives you a native handle to the object.  You pass that around on the native side, and when it arrives on the managed side you can “trade it in” for the managed object.

    So let’s say you have some sort of managed class.  In this case I called it “callbackData” and you want to pass it off. 

    callbackData^ cd = gcnew callbackData();

    First you need to get a “pointer” to the object that can be passed off:

    GCHandle gch2 = GCHandle::Alloc(cd);
    IntPtr ip2 = GCHandle::ToIntPtr(gch2);
    int obj = ip2.ToInt32();

    You need to keep the GCHandle around so you can free it up later.  It is very possible to leak GCHandles.  In addition GCHandles are roots to CLR objects and therefore you are rooting any memory referenced by the object you have the GCHandle for so the GC will not release any of that memory.  This however is not as bad as pinning the memory.  The GCHandle is a reference to the object that can be used later to retrieve the address of the actual object when needed.
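When the time comes, freeing the handle is a one-liner; a sketch continuing the snippet above:

    // Once the native side can no longer invoke the callback, release the
    // handle; otherwise the callbackData instance stays rooted forever.
    gch2.Free();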

From the GCHandle you can convert that to an IntPtr.  Once you have an int from the IntPtr, you can pass it around to the lpUserData parameter.  You could also just directly pass the IntPtr, but I wanted to demonstrate taking it all the way down to the int.

    Then you can pass that value as part of an LPARAM or whatever you want to the native code. 

    int answer = TakesCallback(cb, 243, 257, obj);

    That will get passed to your callback that uses your delegate to call into your managed function.  The native code might do something simple like this:

typedef int (__stdcall *ANSWERCB)(int, int, int);
static ANSWERCB cb;
int TakesCallback(ANSWERCB fp, int n, int m, int ptr) {
   cb = fp;
   if (cb) {
      printf_s("[unmanaged] got callback address (%d), calling it...\n", cb);
      return cb(n, m, ptr);
   }
   // No callback registered.
   printf_s("[unmanaged] unregistering callback");
   return 0;
}

Inside your callback function you simply get the object back:

public delegate int GetTheAnswerDelegate(int n, int m, int ptr);
int GetNumber(int n, int m, int ptr) {
   Console::WriteLine("[managed] callback!");
   static int x = 0;
   // Trade the int back in for the managed object via the GCHandle.
   IntPtr ip2(ptr);
   GCHandle val = GCHandle::FromIntPtr(ip2);
   System::Object ^obj = val.Target;
   Console::WriteLine("Type - " + obj->GetType()->ToString());
   return n + m + x;
}

Once you have the object you can cast it or do whatever you need to move forward.  This approach allows you to bridge the managed and native worlds when you have a native function that makes a callback.  I will attach a complete sample for you to play with if you are interested, and let me know if there are any questions.  There are other ways to approach this, I am sure, but this seemed to work pretty well, and as long as you manage your GCHandles and callback references you should be good to go!




  • PFE - Developer Notes for the Field

    How to set a break point at the assembly instruction that returns an error code


    By Linkai Yu 

When debugging, there are situations where you don't have access to source code.  You might have debugging symbols for a program that neither crashes nor generates an exception, but simply displays an error message such as "The program encounters an error: 12055", so you can't use a debugger to break in at the precise location of the error.  By the time the error message is displayed, all you can see on the call stack is the code related to error display; the real location of the instruction where the error was encountered is gone from the call stack.  If you can set a break point at the instruction that determines the error, you can look at the assembly instructions and guess what could cause the problem.  The question then is: how can we find the instruction that generates the error code?

Here is what I do; it's not bulletproof, but it has a good success rate.

The idea is to search the relevant DLL(s) for the error number, then verify the hit by unassembling the instructions backward from that location to see if they make sense.  If you see the instruction pattern mov <target>, <value>, then most likely it's the right instruction.

     The following is an example of an error code returned from WININET.DLL: 12055.

    step 1. get the hex of 12055 by debugger command "?"

    0:003> ?0n12055
    Evaluate expression: 12055 = 00002f17

    step 2. find the start and end address of WININET.DLL

    0:003> lm m wininet
    start    end        module name
    46a70000 46b40000   WININET    (private pdb symbols) 

step 3. search for hex 00002f17. since the search starts from the lower address towards the higher address, the search byte order is: 17 2f 00 00

    0:003> s 46a70000 46b40000 17 2f 00 00
    46ac2623  17 2f 00 00 8b 8e d8 00-00 00 57 e8 07 3e ff ff  ./........W..>..
    46ac2870  17 2f 00 00 e9 27 5a ff-ff 8b 46 34 83 e8 07 0f  ./...'Z...F4....
    46ac4273  17 2f 00 00 0f 84 62 4c-fc ff 81 f9 7d 2f 00 00  ./....bL....}/..
    46afcbbb  17 2f 00 00 0f 84 46 01-00 00 48 48 0f 84 3e 01  ./....F...HH..>.
    46b0e9b8  17 2f 00 00 7d 00 00 00-30 f0 af 46 01 00 00 00  ./..}...0..F....

this search turns up 5 results.

    step 4. unassemble backward from the location to verify if the instructions make sense.

    0:003> ub 46ac2623  +4
    WININET!ICSecureSocket::ReVerifyTrust+0x1f0 [d:\vista_gdr\inetcore\wininet\common\ssocket.cxx @ 1380]:
    46ac2600 2f              das
    46ac2601 0000            add     byte ptr [eax],al
    46ac2603 755a            jne     WININET!ICSecureSocket::ReVerifyTrust+0x292 (46ac265f)
    46ac2605 8b8d24fdffff    mov     ecx,dword ptr [ebp-2DCh]
    46ac260b 8b86d8000000    mov     eax,dword ptr [esi+0D8h]
    46ac2611 81c900000004    or      ecx,4000000h
    46ac2617 0988f8020000    or      dword ptr [eax+2F8h],ecx
    46ac261d c78528fdffff172f0000 mov dword ptr [ebp-2D8h],2F17h 

    0:003> ub 46ac2870  +4
    WININET!ICSecureSocket::VerifyTrust+0x4e9 [d:\vista_gdr\inetcore\wininet\common\ssocket.cxx @ 1674]:
    46ac2844 89bd24fdffff    mov     dword ptr [ebp-2DCh],edi
    46ac284a 899d1cfdffff    mov     dword ptr [ebp-2E4h],ebx
    46ac2850 399d1cfdffff    cmp     dword ptr [ebp-2E4h],ebx
    46ac2856 748f            je      WININET!ICSecureSocket::VerifyTrust+0x422 (46ac27e7)
    46ac2858 8b86d8000000    mov     eax,dword ptr [esi+0D8h]
    46ac285e 8b8d24fdffff    mov     ecx,dword ptr [ebp-2DCh]
    46ac2864 0988f8020000    or      dword ptr [eax+2F8h],ecx
    46ac286a c78528fdffff172f0000 mov dword ptr [ebp-2D8h],2F17h 

    of the five search results, two show valid instructions: 

       c78528fdffff172f0000 mov dword ptr [ebp-2D8h],2F17h 

     [ebp-2D8h] points to a local variable. The C++ equivalent code would look like:

        intError = 12055;

now you can set break points at addresses 46ac261d and 46ac286a and have fun debugging!

bp 46ac261d; bp 46ac286a; g


    Linkai Yu

  • PFE - Developer Notes for the Field

Problems with x86 Performance Monitor Counters on x64


    This has come up a few times recently so I thought I would highlight a post from Todd Carter about how to get the x86 ASP.NET performance counters when running on an x64 OS:


    Just thought I would call that posting out since it has been helpful.

  • PFE - Developer Notes for the Field

    IIS and Hyper-V


The other day the topic came up about best practices with IIS and Hyper-V.  One of our teammates pointed out the following links, which are an interesting read:

    These go over the migration of MSDN/TechNet infrastructure over to Hyper-V.  Cool stuff!

  • PFE - Developer Notes for the Field

    Windows Server 2003 Service Pack 2 and 24-bit Color TS Connections


Okay, this is a really weird one!  When you install Service Pack 2 on Windows Server 2003 you can no longer connect using MSTSC via Terminal Services (TS)/Remote Desktop (RDP) with a 24-bit color setting.  For the customer I was working with, this meant that the applications they are hosting on those servers did not look good at all over this downgraded connection.  For them it was a blocker to upgrading to SP2.  This has gotten more important recently because Service Pack 1 for Windows Server 2003 is about to be out of support.

So a hunt ensued for how to get this working.  It turns out that there was a bug in Service Pack 2 that disabled some of these higher bit rates.  The bug was corrected in KB 942610, although it was not apparent that it applied to my problem since the description was different.  However, it turns out that the same underlying issue was causing both problems.  A couple of other changes needed to happen and then everything was working.  Here are the steps that I came up with to take a Win2K3 SP2 box and enable 24-bit color connections:

1. Install the following hotfix -
2. Update the following registry key to enable the fix (if you prefer to script this, see the reg command after this list):

  Registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server
  Registry entry: AllowHigherColorDepth
  Type: REG_DWORD
  Value: 1

  Note that I found the hotfix does add this registry key, but it had the value set to 0, which disabled the fix.  I had to change it to 1.  Also, typical warnings about editing the registry apply.  If you go crazy you can toast your box, so back up stuff first!  For those that are sticklers, here is the official warning:

  Important: This section, method, or task contains steps that tell you how to modify the registry. However, serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, click the following article number to view the article in the Microsoft Knowledge Base:

  322756 How to back up and restore the registry in Windows

3. Ensure the TS box is accepting 24-bit connections:
  1. Go to Start->Run
  2. Type “tscc.msc” and hit enter
  3. Click “Connections”
  4. Right click on “Rdp-tcp” and select properties
  5. Click the “Client Settings” tab
  6. Ensure that “Limit Maximum Color Depth” is set to 24-bit
4. Restart the server to ensure everything is set.
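If you prefer to script the registry change from step 2, a reg.exe one-liner like this (run from an elevated prompt) sets the value:

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v AllowHigherColorDepth /t REG_DWORD /d 1 /f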

    Hope this helps!


  • PFE - Developer Notes for the Field

    Best Practice - Workflow and Anonymous Delegates


    Best Practice Recommendation

In your Workflow application (more exactly, in the host of your workflow application) never use an anonymous delegate like this:

                  AutoResetEvent waitHandle = new AutoResetEvent(false);
                  workflowRuntime.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e)
                  {
                      // typical body: signal the host thread that the workflow is done
                      waitHandle.Set();
                  };


The code above is very common and works correctly, at least most of the time...  Actually, when you do something like that, after a number of iterations (if you always keep the same instance of the workflow runtime, as should be the case) you will notice the number of AutoResetEvent objects always increases, until eventually an Out Of Memory exception is raised.

If you take a debugger like Windbg and display the chain of references for one particular instance of the AutoResetEvent class, you will have output similar to the following:

    DOMAIN(00000000002FF7E0):HANDLE(WeakSh):c1580:Root:  00000000029126e8(System.Workflow.Runtime.WorkflowRuntime)->
      000000000296e500(System.EventHandler`1[[System.Workflow.Runtime.WorkflowTerminatedEventArgs, System.Workflow.Runtime]])->
      0000000002966970(System.Object[])->  00000000029570e0(System.EventHandler`1[[System.Workflow.Runtime.WorkflowTerminatedEventArgs, System.Workflow.Runtime]])->
      0000000002957038(TestLeakWorkflow.Program+<>c__DisplayClass2)->  0000000002957050(System.Threading.AutoResetEvent)

If you now dump the instance of TestLeakWorkflow.Program+<>c__DisplayClass2 you will have the following output:

    MT                    Field            Offset        Type               VT         Attr                 Value               Name
    000007fef70142d0      4000003          8             AutoResetEvent     0          instance             0000000002957050    waitHandle

What does it tell us?  The class TestLeakWorkflow.Program+<>c__DisplayClass2 contains one field of type AutoResetEvent.  In our case this field holds a strong reference to the AutoResetEvent object; consequently the final object is still referenced and remains in memory.

Why is that?  The purpose of this entry is not to give details about anonymous delegates, but basically there is nothing magic about them.  For the delegate function to access variables that are not in its scope (look closer: the waitHandle object is not in the scope of the delegate function, but it can still access it), a mechanism is needed.  For that, the compiler creates an intermediate class with one field per outer variable used in the delegate function; the delegate function is just a member function of this class, so it can easily access those fields.

What if you use hundreds of objects in the anonymous delegate, objects which are normally not in the scope of the delegate?  Well, you will have an intermediate class with hundreds of fields, and each instance of the class will reference the real objects...

Why is that a problem?  Again, look closely: how can you get rid of this instance?  You have to remove the reference to the delegate, but you cannot (not with the syntax used above).  This means the intermediate class, and more problematically the final object, are still referenced and remain in memory.

To better understand the problem, let’s now study the fragment below.  This is a quite classical fragment of code that you will often find on various blogs (I know, because it’s from one of those blogs that the code ended up in production, and I spent hours debugging it to understand why the portal was dying after several hours):

        WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(MyASPNetSequencialWorkFlow));
        workflowRuntime.WorkflowCompleted += delegate(object o, WorkflowCompletedEventArgs e1)
        {
            if (e1.WorkflowInstance.InstanceId == instance.InstanceId)
            {
                // ...
            }
        };

Yes, you got it: in this case it is the workflow instance that will be referenced and that will stay in memory.  Considering that in a web application host (as well as in a Windows service host) the WorkflowRuntime is a long-lived object, you will for sure have an Out Of Memory exception.

What to do then?  Well, code like the one below is definitely better:

        EventHandler<WorkflowTerminatedEventArgs> terminatedHandler = null;
        EventHandler<WorkflowCompletedEventArgs> completedHandler = null;

        terminatedHandler = delegate(object sender, WorkflowTerminatedEventArgs e)
        {
            if (instance.InstanceId == e.WorkflowInstance.InstanceId)
            {
                // Unhook both handlers so the compiler-generated closure can be collected.
                workflowRuntime.WorkflowCompleted -= completedHandler;
                workflowRuntime.WorkflowTerminated -= terminatedHandler;
                waitHandle.Set();
            }
        };
        workflowRuntime.WorkflowTerminated += terminatedHandler;

        completedHandler = delegate(object sender, WorkflowCompletedEventArgs e)
        {
            if (instance.InstanceId == e.WorkflowInstance.InstanceId)
            {
                workflowRuntime.WorkflowCompleted -= completedHandler;
                workflowRuntime.WorkflowTerminated -= terminatedHandler;
                waitHandle.Set();
            }
        };
        workflowRuntime.WorkflowCompleted += completedHandler;

You again have to be prudent: you must remove all the handlers.  If in the end the WorkflowCompleted handler is executed, the WorkflowTerminated handler will never be executed (and vice versa), hence both handlers are removed in both delegates.


  • PFE - Developer Notes for the Field

    Combining NetMon CAP Files


Today I was working on a problem that we thought was network related.  That meant it was time to try out some of those NetMon skills.  NetMon is the Microsoft network packet capture and analysis tool; it is similar to WireShark/Ethereal, and the current 3.2 version can be downloaded for free from here -

For the longest time I, and a lot of my colleagues, used WireShark/Ethereal to do network captures.  However, NetMon has come a long way and I really am a big fan of it now.

Back to my problem today.  We were dealing with a web server that has a decent amount of traffic and a problem that occurred intermittently.  I first started just capturing with the UI, but quickly saw the memory grow and the box slow down a bit.  I did not need to see the traffic going by; I just needed to do a capture.  This led me to the NMCap tool that ships with NetMon.  This is a basic command line tool that allows you to do captures.

So I started capturing.  By default, if you use the *.chn extension on the file name you specify, NMCap will create a chain of 20 MB CAP files with your network data.  This was great.  I could easily filter out the ones I did not want while I awaited the problem.
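For reference, the NMCap command line to start such a chained capture looks something like this (the file name is illustrative):

    nmcap /network * /capture /file WebTrace.chn:20M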

Finally the problem reproduced and now I was ready to analyze.  I opened the trace closest to when the problem occurred and began working through it.  I quickly found some conversations that I was interested in, but – where was the beginning?  Where was the end?

    The answer to this – In some of the other traces!

Now I had 3, 4, 5, 6 different traces open and that was no good.  I wanted to be able to combine the 6 or 10 traces that I was interested in.  It turns out the same NMCap tool can easily do this.  All you have to do is use the /InputCapture param.  You end up with a command line like this:

nmcap /InputCapture Trace1.cap Trace2.cap Trace3.cap /capture /file Trace.cap:500M

    This was a life saver so I thought I would pass it along!  Enjoy and have a great weekend.



  • PFE - Developer Notes for the Field

    Dump Analysis in a Disconnected Environment


    Over the years I have come across the following restrictions in a variety of environments:

    1. The machines that we could do dump analysis did not have access to the internet.
    2. The dump could not leave those machines.

These types of restrictions are a common reason field team members are brought onsite, since the customer cannot ship data offsite.  The lack of internet access, though, raises the question - “How do I get the symbols I need?”

    The first option is to use the Symbol Packages that Microsoft provides here -

    Now these symbol packages are a great starting point but they have a few key limitations:

    1. They are only for Windows.  So if I need .NET symbols and such I will be out of luck.
2. They are only for major release milestones.  This means that if you have a system that is patched, you will find at least some symbols do not match.  While this will not be a problem in all debug scenarios, it will be a problem in certain cases.

For both of these situations the Microsoft Public Symbol Server addresses the problem, because the symbols are updated more frequently and it also allows you to get symbols for more Microsoft products than just the OS.  The rub is - “How do I know what symbols to pull down if I cannot get the dump to the internet?”

    Enter SYMCHK -

This is a utility to validate symbols.  It has a “Manifest” feature that we can utilize here.  The idea is to follow a series of steps like this:

    1. Generate a Manifest.
    2. Move that manifest to another machine.
    3. Create a Symbol Cache Using the Manifest.
    4. Move the Symbols to the Closed Machine.

    Let’s look at each of these steps in a bit more detail.

    Generate a Manifest

    To do this you can just run the following command:

    D:\debuggersx86>symchk /id D:\DumpFiles\Dump1.dmp /om symbol.txt

SYMCHK lives in the directory where you installed the Debugging Tools for Windows.  You can replace the DMP file above with the location of the dump file that you wish to debug.  The “symbol.txt” file is the manifest that we are generating.

The resultant manifest will contain a line for each module, identifying the module’s PDB file and its signature.


    Moving the Manifest

    This is probably the trickiest part depending on the place where you work.  The idea here is that the file you have to move is simply a text file of file names.  You can do several things:

    1. Scrub out any custom modules so those do not leave.
    2. Write down the contents of the file if need be.

The net of this is that you have a much more limited set of information to move off the system, as opposed to moving a dump or even a minidump, which might be harder in various environments.

    Creating a Symbol Cache Using the Manifest

    After you have the manifest on a machine with internet access you can then sync down the symbols using a command like this:

symchk /im symbol.txt /s srv*d:\tempSym*http://msdl.microsoft.com/download/symbols

You can use any directory in place of “d:\tempSym”. 

    Move the Symbols to the Closed Machine

Once the symbols have come down, just copy the cache directory that you used in the previous step (d:\tempSym in the example) over to the closed machine.  Then, in the debugger, set your symbol path to include that folder.  For instance, if you copied the folder to d:\tempSym on the closed machine, you would add “srv*d:\tempSym” to your symbol path to start loading symbols from the folder.
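In WinDbg, for example, appending the folder and reloading symbols looks like this:

    .sympath+ srv*d:\tempSym
    .reload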

  • PFE - Developer Notes for the Field

    .NET Framework 2.0 Service Pack 2


I have worked with several customers that want .NET Framework 2.0 SP2 but do not want to require their customers to install .NET Framework 3.5 SP1, or do not want to do so themselves.

Well, due to the way it was packaged, up until this point that was the only way to get it.  However, as of just the other week they finally released a standalone installer for .NET Framework 2.0 Service Pack 2:

    This is great news and will make the deployment a bit easier for people on Windows XP and Windows Server 2003.  For Vista and Windows Server 2008 you still have to get .NET Framework 3.5 SP1 installed to get .NET Framework 2.0 SP2.

    Finally, make sure that you also install the following update -  This update fixes some of the known issues that have been found with the service packs since their release.

    Have a great day and happy installing!


  • PFE - Developer Notes for the Field

    Memory Based Recycling in IIS 6.0


    Customers frequently ask questions regarding the recycling options for Application Pools in IIS.

Several of those options are self-explanatory, whereas others need a bit of analysis.

    I’m going to focus on the Memory Recycling options, which allow IIS to monitor worker processes and recycle them based on configured memory limits.

    Recycling application pools is not necessarily a sign of problems in the application being served. Memory fragmentation and other natural degradation cannot be avoided and recycling ensures that the applications are periodically cleaned up for enhanced resilience and performance.

    However, recycling can also be a way to work around issues that cannot be easily tackled. For instance, consider the scenario in which an application uses a third party component, which has a memory leak. In this case, the solution is to obtain a new version of the component with the problem resolved. If this is not an option, and the component can run for a long period of time without compromising the stability of the server, then memory based recycling can be a mitigating solution for the memory leak.

    Even when a solution to a problem is identified, but might take time to implement, memory-based recycling can provide a temporary workaround, until a permanent solution is in place.

    As mentioned before, memory fragmentation is another problem that can affect the stability of an application, causing out-of-memory errors when there seems to be enough memory to satisfy the allocation requests.

In any case, setting memory limits on application pools can be an effective way to contain any unforeseen situation in which an application that “behaves well” goes haywire.  The advantage of setting memory limits on well-behaved applications is that in the unlikely case something goes wrong, there is a trace left in the event log, providing a first place where research of the issue can start.

    When to configure Memory Recycling

    In most scenarios, recycling based on a schedule should be sufficient in order to “refresh” the worker processes at specific points in time. Note that the periodic recycle is the default, with a period of 29 hours (1740 minutes). This can be an inconvenience, since each recycle would occur at different times of a day, eventually occurring during peak times.

    If you have determined that you have to recycle your application pool based on memory threshold, it implies that you have established a baseline for your application and that you know your application’s memory usage patterns. This is a very important assumption, since in order to properly configure memory thresholds you need to understand how the application is using memory, and when it is appropriate to recycle the application based on that usage.

    Configuration Options

There are two options, which can be combined, for configuring memory-based application pool recycling:


    Figure 1 - Application Pool properties to configure Memory Recycling

    Maximum Virtual Memory

    This setting sets a threshold limit on the Virtual Memory. This is the memory (Virtual Address Space) that the application has used plus the memory it has reserved but not committed. To understand how the application uses this type of memory, you can monitor it by means of the Process – Virtual Bytes counter in Performance Monitor.

    For instance, if you receive out of memory errors, but less than 800MB are reported as consumed, it is often a sign of memory fragmentation.

    Maximum Used Memory

    This setting sets a threshold limit on the Used Memory. This is the application’s private memory, the non-shared portion of the application’s memory. You can use the Process – Private Bytes counter in Performance Monitor to understand how the application uses this memory.

In the scenarios mentioned before, this setting would be used when you have detected a memory leak which you cannot avoid (or which is not cost-effective to correct).  This setting puts a “cap” on how much memory the application is “allowed” to leak before the application is restarted.

    Recycle Event Logging

    Depending on the configuration of the web server and application pools, every time a process is recycled, an event may be logged in the System log.

    By default, only certain recycle events are logged, depending on the cause for the recycle. Timed and Memory based recycling events are logged, whereas all other events are not.

This setting is managed by the LogEventOnRecycle metabase property for application pools.  This property is a bit flag; each bit indicates a reason for a recycle, and turning on a bit instructs IIS to log that particular recycle event.  The following table indicates the available values for the flag:





Value   Recycle reason
1       The worker process is recycled after a specified elapsed time.
2       The worker process is recycled after a specified number of requests.
4       The worker process is recycled at specified times.
8       The worker process is recycled once a specified amount of used or virtual memory, expressed in megabytes, is in use.
16      The worker process is recycled if IIS finds that an ISAPI is unhealthy.
32      The worker process is recycled on demand by an administrator.
64      The worker process is recycled after configuration changes are made.
128     The worker process is recycled when private memory reaches a specified amount.


As mentioned before, only some of the events are configured to be logged by default.  The default value is 137 (1 + 8 + 128: elapsed time, memory, and private memory).

    It’s best to configure all the events to be logged. This will ensure that you understand the reasons why your application is being recycled. For information on how to configure the LogEventOnRecycle metabase property, see the support article at
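For example, to turn on all eight flags (a value of 255) for a given application pool, you can use the adsutil.vbs admin script; the pool name and script path below are the IIS 6.0 defaults and may differ on your box:

    cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/AppPools/DefaultAppPool/LogEventOnRecycle 255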

    Below are the events logged because the memory limit thresholds have been reached:


    Figure 2 - The application pool reached the Used Memory Threshold


    Figure 3 - The application reached the Virtual Memory threshold

Note that the article lists Event ID 1177 for Used (Private) Memory recycling, but in Event Viewer it shows as Event ID 1117.

    Determining a threshold

    Unlike other settings, memory recycling is probably the setting that requires the most analysis, since the memory of a system may behave differently depending on various factors, such as the processor architecture, running applications, usage patterns, /3GB switch, implementation of Web Gardens, etc.

    In order to maximize service uptime, IIS 6.0 does Overlapped recycling by default. This means that when a worker process is due for a recycle, a new process is spawned and only when this new process is ready to start processing requests does the recycle of the old process actually occur. With overlapped mode, there will be two processes running at one point in time. This is one of the reasons why it is very important to understand the memory usage patterns of the application. If the application has a large memory footprint at startup, having two processes running concurrently could starve the system’s memory. For example, in a 32 bit Windows 2003 system with 4GB memory, Non-Paged Pool should remain over 285MB, Paged Pool over 330MB~360MB and System Free PTE should remain above 10,000. Additionally, Available Memory should not be lower than 50MB (these are approximate values and may be different for each system[1]). In a case in which recycling an application based on memory would cause these thresholds to be surpassed, Non-Overlapped Recycling Mode could help mitigate the situation, although it would impact the uptime of the application. In this type of recycling, the worker process is terminated first before spawning the new worker process.

    In general, if the application uses X MB of memory, and it’s configured to recycle when it reaches 50% over the normal consumption (1.5 * X MB), you will want to ensure that the system is able to support 2.5 * X MB during the recycle without suffering from system memory starvation. Consider also the need to determine what type of memory recycling option is needed. Applications that use large amounts of memory to store application data or allocate and de-allocate memory frequently might benefit from having Maximum Virtual Memory caps, whereas applications that have heavy memory requirements (e.g. a large application level cache), or suffer from memory leaks could benefit from Maximum Memory Used caps.
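To make the arithmetic concrete with illustrative numbers: if an application normally consumes 800 MB, the 50% margin puts the recycle threshold at 1.2 GB; during an overlapped recycle the outgoing process may be at the full 1.2 GB while the new process ramps up toward its normal 800 MB, so the system needs roughly 2 GB available for this application alone.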

    The documentation suggests setting the Virtual Memory threshold as high as 70% of the system’s memory, and the Used Memory as high as 60% of the system’s memory. However, for recycling purposes, and considering that during the recycle two processes must run concurrently, these settings could prove to be a bit aggressive. As an estimate, for a 32 bit server with 4 GB of RAM, the Virtual Memory should be set to some value between 1.2 GB and 1.5 GB, whereas the Private bytes should be around 0.8 GB to 1 GB. These numbers assume that the application is the only one in the system. Of course, these numbers are quick rules of thumb and do not apply to every case. Different applications have very different memory usage patterns.

For a more accurate estimation, you should monitor your application for a long enough period to capture information about its memory usage (private and virtual bytes) during the most common scenarios and stress levels.  For example, if your application is used consistently on an everyday basis, a couple of weeks’ worth of data should be enough.  If your application has a monthly process and each week of the month has different usage (users enter information at the beginning of the month and heavy reporting activity occurs at the end of the month), then a longer period may be appropriate.  In addition, it is important to remember that data may be skewed if we monitor the application during the holidays (when traffic is only 10% of the expected), or during the release of that long-awaited product (when traffic is expected to go as high as 500% of the typical usage).

    Problems associated with recycling

    Although recycling is a mechanism that may enhance application stability and reliability, it comes with a price tag. Too little recycling can cause problems, but too much of a good thing is not good either.

    Non-overlapped recycles

    As discussed above, there are times when overlapped recycling may cause problems. Besides the memory conditions described above, if the application creates or instantiates objects for which only one instance can exist at a particular time for the whole system (like a named kernel object), or sets exclusive locks on files, then overlapped recycle may not be an option. It is very important to understand that non-overlapped recycles may cause users to see error messages during the recycling process. In situations like this, it is very important to limit recycles to a minimum.

Moreover, if the time it takes for a process to terminate is too long, it may cause noticeably long outages of the application while it is recycling.  In this case, it is very important to set an appropriate application pool shutdown timeout.  This timeout defines the length of time that IIS will wait for a worker process to terminate normally before it forces it to terminate.  This is configured using the ShutdownTimeLimit metabase property.

    To configure an application pool to use non-overlapped recycles, set the application pool’s metabase property DisallowOverlappingRotation to true. For more information on this property, see the metabase property reference at

    Session state

Stateful applications can be implemented using a variety of choices of where to store the session information.  For ASP.NET, some of these are cookie-based state, in-process state, the Session State Server, and SQL Server.  For classic ASP, the list is more limited.  These implementation decisions have an impact on whether to use Web Gardens and how to implement Web Farms.  Additionally, they may determine the effect that application pool recycling has on the application.

In the case of in-memory sessions, the session information is kept in the worker process’ memory space.  When a recycle is requested, a new worker process is started and the session information in the recycled process is lost.  This affects any active session that exists in that application.  This is yet another reason why the number of recycles should be minimized.

    Application Startup

    Recycling an application is an expensive task for the server. Process creation and termination are required to start the new worker process and bring down the old one. In addition, any application level cache and other data in memory are lost and need to be reloaded. Depending on the application, these activities can significantly reduce the performance of the application, providing a degraded user experience.

Incidentally, another post in this blog discusses another reason application startup may be slow.  In many cases this information will help speed up the startup of your application.  If you haven’t done it yet, check it out at

    Memory Usage checks

    When an application is configured to recycle based on memory, it is up to the worker process to perform the checks. These checks are done every minute. If the application has surpassed the memory caps at the time of the check, then a recycle is requested.

Bearing this behavior in mind, consider a case in which an application behaves well most of the time, but under certain conditions behaves really badly.  By misbehavior I mean very aggressive memory consumption during a very short period of time.  In this case, the application can exhaust the system’s memory before the worker process checks for memory consumption and any recycle can kick in.  This could lead to problems that are difficult to troubleshoot, and it limits the effectiveness of memory based recycling.


    The following points summarize good practices when using memory based recycling:

    • Configure IIS so it logs all recycle events in the System Log.
    • Use memory recycling only if you understand the application’s memory usage patterns.
    • Use memory based recycling as a temporary workaround until permanent fixes can be found.
    • Remember that memory based recycling may not work as expected when the application consumes very large amounts of memory in a very short period of time (less than a minute).
    • Closely monitor recycles and ensure that they are not too many (reducing the performance and/or availability of the application), or too few (allowing memory consumption of the application to create too much memory pressure on the server).
    • Use overlapped recycling when possible.



Santiago Canepa is an Argentinean relocatee working as a Development PFE.  In his beginnings, he wrote custom software in Clipper.  He worked for the Argentinean IRS (DGI) while working towards his BS in Information Systems Engineering.  He worked as a consultant both in Argentina and the US on Microsoft technologies, and led a development team before joining Microsoft.  He has a passion for languages (don’t engage in a discussion about English or Spanish syntax – you’ve been warned), loves a wide variety of music and enjoys spending time with his husband and daughter.  He can usually be found online at ungodly hours in front of his computer(s).

    [1] Vital Signs for PFE – Microsoft Services University

  • PFE - Developer Notes for the Field

    Best Practice – WCF and Exceptions


    Best Practice Recommendation

In your WCF service, never let an exception propagate outside the service boundary without managing it.  There are two alternatives:

• Either you manage the exception inside the service boundary and never propagate it outside,
• Or you convert the .NET typed exception into a FaultException in your exception manager before propagating it.
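As a minimal sketch of the second alternative (the service operation and helper below are hypothetical, not from the original post):

    public string GetData(int value)
    {
        try
        {
            return DoWork(value); // hypothetical helper that may throw
        }
        catch (InvalidOperationException ex)
        {
            // A FaultException crosses the service boundary as a SOAP fault
            // and, unlike an unhandled exception, does not fault the channel.
            throw new FaultException(ex.Message);
        }
    }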


Most developers know how to handle exceptions in .NET code, and a good strategy can consist of letting an exception bubble up if it cannot be handled locally; ultimately the exception will be managed by one of the .NET Framework default handlers, which can, depending on the situation, close the application.

WCF manages things slightly differently.  If the developer lets an exception bubble up to the WCF default handler, it will convert this .NET typed exception (which has no representation in the SOA world) into a FaultException (which is represented in the SOA world as a SOAP fault) and then propagate this exception to the client.  Up to this point you might think everything is OK, and you are right: it’s legitimate that the client application receives the exception and manages it.  However, there are two problems here:

• One minor: by default you don’t control the conversion, and you may want to send a custom FaultException to the client application, something less generic.
• The WCF default handler will also fault the channel, and this is far more problematic, because if the client reuses the proxy (which is a quite common approach after managing the exception), it will fail.  It’s quite easy in .NET to work around this issue by testing whether the proxy is faulted in the exception manager, but not all languages offer these facilities, and remember that SOA is by nature very open.

For those of you who want more information, this blog entry is very good:  From a general point of view, implementing IErrorHandler is a popular approach to handling this issue, so just search for IErrorHandler with your favorite search engine.

    Contributed by Fabrice Aubert

  • PFE - Developer Notes for the Field

    Windows Error Reporting (WER) Web Service


    I have long thought that one of the coolest features of the Windows OS for developers is the Windows Error Reporting infrastructure (with roots in Dr. Watson).  You can have your application upload dumps to the WinQual portal –

One of the problems, though, has always been getting those dumps off of the WER site.  Before, it was a manual process of going to the WinQual website and downloading CAB files.  Well, I found out today that the WER team recently released a web service that allows you to pull them down automatically.  They have also released some sample applications to get you started:

    Windows Error Reporting on CodePlex -

    This opens up all sorts of possibilities such as automatically creating bugs in your bug DB (TFS hopefully!) in response to a new dump file getting uploaded so that someone on your team can take a look.

Finally, the WER team has a blog up and running, and their first entry discusses this and has some demo videos from PDC:



  • PFE - Developer Notes for the Field

    Best Practice - <GeneratePublisherEvidence> in ASPNET.CONFIG


    Best Practice Recommendation

    Add the following line to your ASPNET.CONFIG or APP.CONFIG file:

<?xml version="1.0" encoding="utf-8"?>
    <configuration>
        <runtime>
            <generatePublisherEvidence enabled="false"/>
        </runtime>
    </configuration>

Note the ASPNET.CONFIG file is located in the Framework directory for the version of the Framework you are using.  For example, for a 64-bit ASP.NET application it would be:

%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet.config

For a 32-bit application it would be:

%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet.config

    I have seen this a bunch of times while onsite.  The problem goes something like this:

    When I restart my ASP.NET application the initial page load is very slow.  Sometimes upwards of 30+ seconds. 

Many people just blame this on “.NET startup” costs, but there is no out-of-the-box reason that an ASP.NET application should take that long to load.  Some applications do work on startup, which can cause a slow down, but there are other causes as well.  A common one that I have seen often recently is Certificate Revocation List (CRL) checking when generating Publisher Evidence for Code Access Security (CAS).

A little background – CAS is a feature in .NET that allows you to have more granular control over what code can execute in your process.  Basically there are 3 parts:

1. Evidence – Information that a module/code presents to the runtime.  This can be where the module was loaded from, the hash of the binary, the strong name, and, importantly for this case, the Authenticode signature that identifies a module’s publisher.
    2. Permissions Set – Group of Permissions to give code (Access to File System, Access to AD, Access to Registry)
    3. Code Group – The evidence is used to provide membership in a code group.  Permission Sets are granted to a code group.

    So when a module loads it presents a bunch of evidence to the CLR and the CLR validates it.  One type of evidence is the “publisher” of the module.  This evidence is validated by looking at the Authenticode signature which involves a Certificate.  When validating the Certificate the OS walks the chain of Certificates and tries to download the Certificate Revocation List from a server on the internet.  This is where the slowdown occurs.

A lot of servers do not have access to make calls out to the internet.  It is either explicitly blocked, the server might be on a secure network, or a proxy server might require credentials to gain access to the internet.  If the DNS/network returns quickly with a failure, the OS check will move on, but if the DNS/network is slow or does not respond at all to the request, we have to wait for a timeout. 

This can occur for multiple modules because we create this evidence for each module that is loaded.  If we have looked for a CRL and failed, we will not recheck that CRL; however, different certificates have different CRLs.  For instance, a VeriSign certificate may have one CRL URL but a Microsoft certificate will have a different one.

Since this probe can slow things down, it is best to just avoid it if you do not need it.  For .NET, the only reason you would need it is if you are setting Code Access Security policy based on the module publisher.  Because this can cause potential slow downs, and you do not need to incur this penalty, you can just disable the generation of the Publisher Evidence when your module is loaded.  To disable it, use the <generatePublisherEvidence> application configuration element: just set the enabled attribute to false and you will avoid all of this.

Now, for ASP.NET applications it was not immediately obvious how to do this.  It turns out that you cannot add this to an application’s Web.Config, but you can add it to the ASPNET.CONFIG file in the Framework directory.  For other applications, just add the element to the APP.CONFIG file.

    In closing there are several blog entries that do a great job of demonstrating how this will show up in the debugger and other details on CRL issues and workarounds:

    We are highlighting this as the first in a series of general best practice recommendations.

    NOTE – If you have .NET 2.0 RTM you will need this hotfix -

  • PFE - Developer Notes for the Field

    A Few TFS 2008 Pre & Post Installation things you need to know


    The PFE Dev team would like to welcome Micheal Learned to the blog.  Here is a bit about Mike:

My name is Micheal Learned, and I’ve been working for Microsoft for over a year now with some of our Premier customers across the US, helping them support a variety of .NET related systems and tools.  I’m doing a lot of work currently with VSTS/TFS 2008, helping customers get up and running with new installations, migrations, and general knowledge sharing around the product.  I also enjoy working with web applications, IIS, and just researching various new Microsoft related technologies in general.  In my free time I enjoy sports (I watch now, used to like to play), and spending time with my son Nicholas (7) and daughter Katelyn (3), and generally just letting them wear me down playing various kids stuff.

Enough about me, here is the good stuff:

    I’ve been spending a lot of time recently working with customers on TFS and VSTS 2008. Some of my customers have required some support around getting a clean install, or troubleshooting some post-install issues, so I thought I would post a few of the common issues I’ve been seeing.

    First of all, for any pre-install or post-install issues, I highly recommend you run the TFS Best Practices Analyzer, which is available as a “power tool” download here.

    The analyzer can run a system test, and generate a report indicating the health of your existing system, or a report identifying possible issues with your environment configuration ahead of a TFS installation (pre-install). The information is useful, and generates links to help files that can help you solve various issues. It helped me through a range of issues while troubleshooting an installation recently, including IIS permissions, registry information, etc. The idea is that more and more data collection and reporting points are being added to this tool, so you should keep an eye out for future releases of the tool as well.

Before doing any new installation, it is imperative you download the latest version of the TFS 2008 installation guide, available here.  I can’t overstate the importance of following the guide step by step with any new installation.  Although this version of TFS is a smoother install than the previous version (2005), it is still important you pay close attention to setting up the proper accounts and their respective network permissions.  It may seem a little cumbersome, but you’ll want the TFS services and various accounts running with the least permissions sufficient, as a security best practice, and it is well documented in the guide.

    Enough about general stuff, let’s discuss some specific points:

    “Red X of Death”

If, after installation, your client machines are seeing a red “X” on either their documents or reports folders in Team Explorer, then they are encountering a permissions issue on SharePoint (if the documents folder) or the Reporting Services layer (if the reports folder).  I’ve seen this to be quite common, and it should just be pointed out that you should configure proper permissions for your Active Directory users at each of those tiers.  Visual Studio Magazine Online has a nice article documenting this here.

    “Firewall Ports”

    If you are running TFS, and have a firewall in play, you will need specific ports for the various TFS components to communicate properly. The ports are well documented in the TFS guide under the “Security Issues for Team Foundation Server” section, and there is some automation with new installations that will configure your windows firewall ports for you during install. If you need to configure them post-install, it is just a matter of setting up the firewall exceptions manually.

“TFS Security”

TFS security is deserving of its own blog entry, but I want to just mention a few quick items, since TFS security can be a common stumbling block initially.  The TFSSetup account (as described in the TFS guide) is by default an administrator on the TFS server.  You’ll need to configure permissions on several tiers of TFS for your other users and groups.  Specifically, configure permissions at the overall TFS server level, the TFS project level, the Reporting Services level, and the SharePoint level.  It may seem cumbersome at first, but just right click your server node in Team Explorer, and it becomes a point and click exercise.  Right click the project node to configure project level permissions.  As a best practice, for manageability you will want to simply use Active Directory groups to drive this, and if you have a need, you can get very granular when setting up permissions, new roles, etc.  Also, there is a power toy available that gives you a singular GUI view for managing permissions all in one place, which you can download here.

    “Just remember TFS is multi-tier”

    Finally, keep in mind that TFS 2008 is a distributed system with multiple tiers. Open IIS or SQL Server and poke around at the databases, directories, permissions, etc., to familiarize yourself with the architecture (do not directly edit the databases – look, don’t touch!).

    Many customers are of course “viewing” their TFS server from Visual Studio, and if you see issues connecting to a TFS project or a TFS server, or issues with the various views on top of reports or SharePoint documents, keep in mind that the underlying reason probably lies in an IIS site being down, a SharePoint misconfiguration, a missing permission, a blocked firewall port, etc. In a nutshell: focus on permissions and configuration settings at each tier, per the TFS install guide, and your issues are likely to be solved!

    Mike Learned --Premier Field Engineer .NET


  • PFE - Developer Notes for the Field

    Why is my Window Grayed Out?


    Starting back in Windows 2000, if I recall correctly, the OS added a feature to try to handle unresponsive windows.  The idea is that if an application hangs, we do not want its window to block other stuff the user is trying to do.  This feature has been improved through Windows XP and into Vista, and by now it is pretty good.

    When the OS detects that the application is not “responding” to messages, it takes control of the window: it replaces it with a grayed-out “ghost” copy and marks the title bar “(Not Responding)”.

    I came across this when working with a customer that has a client-side application.  It is single threaded for the most part and has a workflow like this:

    • Start long running Operation
    • Prevent painting main UI and ignore most input to that region
    • Update Status Pane via periodic calls to pump messages for that pane
    • Update full UI after the operation is complete

    This is not an uncommon scenario, especially for applications that have been around for a while.  Sometimes it can be very hard to allow the window to paint during a long running operation, and while allowing painting is the real answer, there are times where this is MUCH easier said than done.

    The feature that grays out windows seems to have gotten more aggressive in Windows Vista, to better detect various hang scenarios.  This is why it became an issue for this customer: it was becoming more common to see the graying or “ghosting” of the application’s window.  This led to users thinking the application was dead (the status bar also gets grayed out) and terminating it.  In XP the application would stay put, and while the main part of the UI would not respond, you could still watch the status bar move.

    The question then became: what do we do about this?  The obvious answer is to make the application responsive during the long running operation.  While this is the goal, there was no short term way to do it.  We tried pumping more messages but ran into a sticky issue – how do I “appear” responsive without actually being responsive?  We were basically trying to guess what the OS was looking for, and since that can change, as we have seen, this is not reliable.

    After some digging I found an interesting API:

    DisableProcessWindowsGhosting
    Basically this API tells the OS to skip the ghosting even if it detects the application as unresponsive.  The call takes effect for the rest of the application’s run; once ghosting is disabled, it cannot be turned back on for that process.
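
    A minimal sketch of calling it from a WinForms application via P/Invoke (the form is just a placeholder to give the process a window):

    using System;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    static class Program
    {
        // DisableProcessWindowsGhosting lives in user32.dll, takes no
        // parameters, and returns nothing.
        [DllImport("user32.dll")]
        private static extern void DisableProcessWindowsGhosting();

        [STAThread]
        static void Main()
        {
            // Call it once, early; ghosting stays off for the life of the process.
            DisableProcessWindowsGhosting();
            Application.Run(new Form { Text = "Ghosting disabled" });
        }
    }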

    The real answer is to fix the application to be responsive, as mentioned, but that is not always easy, so this is a stop gap.

    A side note: when you debug an application, the graying or ghosting of the window will not occur.  So if you are trying to reproduce the behavior, make sure you are running outside of the debugger (this got me when I was trying to reproduce it!).

    Hope this helps and have a great weekend!


  • PFE - Developer Notes for the Field

    SN.EXE and Empty Temp Files in %TEMP%


    If you have a build server and are doing delay signing, this is probably of interest to you.  When delay signing, the final step is to run the following command post-build:

    sn -R myAssembly.dll sgKey.snk

    I have seen build setups that output all binaries to one folder and then run a loop across all the DLLs executing this command – a loop like the one sketched below.  That way everything is fixed up and ready for the devs to run, and you do not have to worry about adding a post-build step to each project (or you can add one common script to each project).  The release builds of course get fully signed, but for daily/rolling builds this works just fine.  Until you check out your temp directory. 
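
    As a sketch, from a Visual Studio command prompt sitting in that output folder (the key file name is hypothetical), the loop can be as simple as:

    for %f in (*.dll) do sn -R "%f" sgKey.snk

    (Inside a batch file, double the percent signs: %%f.)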

    It turns out that for each call to SN.EXE, a TMP file is generated in your %TEMP% directory.  These TMP files look like this:

    Volume in drive C has no label.
    Volume Serial Number is 1450-ABCF

    Directory of C:\temp

    12/11/2007  12:39 PM    <DIR>          .
    12/11/2007  12:39 PM    <DIR>          ..
    12/11/2007  11:17 AM                 0 Tmp917.tmp
    12/11/2007  11:17 AM                 0 Tmp91C.tmp
    12/11/2007  11:17 AM                 0 Tmp921.tmp

    You probably noticed that these are 0-byte files, so disk space is not the issue, but a directory holding a huge number of files can slow your hard drive down.  Also, the actual SN.EXE process can start slowing down. 

    If you have 100 DLLs and are doing 30 builds a day, that is 3,000 temp files.  Now, let’s say you build debug x86, debug x64, release x86, and release x64 – that brings it up to 12,000 a day.  After a couple of days, you can imagine how many files pile up. 

    I was working on this, and we found out that SN.EXE is not actually the problem here (thanks to Scot Brennecke).  The issue is in the Crypto APIs that SN.EXE uses: those APIs create the TMP files and never clean them up.  It turns out that there is a hotfix for this if you are using Windows Server 2003:

    On a Windows Server 2003-based client computer, the system does not delete a temporary file that is created when an application calls the "CryptQueryObject" function

    After applying this hotfix the temp files were no longer created, and life was happy.  I just thought I would share, because the connection between this hotfix and the problem was not immediately obvious.

    Have a great day!


  • PFE - Developer Notes for the Field

    Debugging Large ViewState


    This week I have been working with a customer whose application had a pretty large ViewState getting pushed up and down between the client and the server – about 200+ KB per request.  This is something we picked out quickly as a scale issue.

    Now Tess has a great blog on why this is a problem from a memory perspective and provides some background on the issue:

    ASP.NET Memory – Identifying pages with high ViewState

    In this customer’s case we were seeing multi-second times for downloading a page from the server and sending the response back, so this was having an impact on performance even before it became a memory problem.  They would also run into the memory issues once load ramped up.

    The customer posed the question - “Well how do I know what is in my ViewState and what is causing the bloat?”

    This is a fair question, and we referred to Tess’s blog but hit some snags that I wanted to comment on in case other people hit them. (Read her blog first for background if you have questions.)

    Where is the ViewState? 

    To get started we downloaded ViewState Decoder (2.2), but we could not point it directly at the page due to permissions issues.  So we went to pull the ViewState from the page’s “View Source” when viewing it in IE.  A problem arose: as we navigated the page, the page sizes increased in the IIS logs, but the page remained the same size in “View Source.”  If we saved the “View Source” output, the page was identical even though a number of postbacks had occurred.  Meanwhile the IIS log showed SC-BYTES (bytes sent) increasing for each request – almost double the initial page request – so something was not adding up.

    This application has one page with multiple tabs, and as we moved through the tabs the page grew, but as we said, that was not reflected in “View Source.”  It turns out, after talking with the customer, that they are using the UpdatePanel from ASP.NET AJAX, and each of the postbacks is a partial update to that panel.  These updates are not reflected in “View Source.”  To capture them easily we returned to an old favorite – Fiddler.

    From there we were able to get the ViewState out.  Because this was being sent back as an ASP.NET AJAX partial postback, the ViewState showed up not in HTML, but in the pipe-delimited response format.
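
    The response is a series of segments of the form length|type|id|content|, so the ViewState segment looked roughly like this (the length and payload here are made up and shortened):

    1234|hiddenField|__VIEWSTATE|/wEPDwUKMTIxNDIyOTM0Mg9kFgICAw9k...|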
    You will notice that these are not normal HTML tags; the ViewState is carried between the “|” delimiters.  We grabbed this bit and dropped it into the ViewState Decoder, but got an error that the ViewState was invalid:

    There was an error decoding the ViewState string: The serialized data is invalid

    Why is the ViewState string invalid if it worked?

    We puzzled over this for a bit, and then I noticed what you may already have noticed in the list of hidden fields:

    <input type="hidden" name="__VIEWSTATEENCRYPTED" id="__VIEWSTATEENCRYPTED" value="" />
    It turns out that this indicates that the ViewState is encrypted.  So it was understandable that the tool could not decode my cut and paste of the ViewState.  We decided to turn off ViewState encryption for this troubleshooting, since the application was just in test.  Basically, for this test we added ViewStateEncryptionMode=”Never” to the @ Page directive of the page.  Then BINGO!  We were able to decode the ViewState and get some rich information.
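
    For illustration, the directive ends up looking something like this (all the other attributes are placeholders):

    <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" ViewStateEncryptionMode="Never" %>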

    Here are some links on encryption:

    @ Page Directive


    For ViewStateEncryptionMode the default is Auto, which means that if any of your controls requests encryption using the following API, the ViewState will get encrypted:

    Page.RegisterRequiresViewStateEncryption()
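
    As a sketch, a control can request encryption like this (the control itself is hypothetical):

    using System;
    using System.Web.UI;

    // A control that forces ViewState encryption when the page's
    // ViewStateEncryptionMode is Auto (the default).
    public class SensitiveControl : Control
    {
        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            // Must be called before or during PreRender; with Auto this
            // causes the entire page's ViewState to be encrypted.
            Page.RegisterRequiresViewStateEncryption();
        }
    }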
    All That Data, There Must be an Easier Way!

    Now as we scrolled through all of the data in the output, we could see what is stored in the ViewState, but tying it back to a particular control is not simple.  Also, since there was so much stored in this ViewState, it was impossible to easily tell which controls were the worst offenders.  So we moved to one last place – built-in ASP.NET tracing (a minimal config sketch for enabling it follows the table below).  We enabled tracing because there is a VERY helpful column in its output – ViewState Size bytes.  This is reported for each control that is rendered, and it makes short order of finding the worst offenders.  You will get output like this:

    Control ID    Render Size bytes    ViewState Size bytes    ControlState Size bytes
    (per-control rows omitted)

    So there we had it!  Control names and their ViewState sizes.  We were able to quickly see which controls were the worst offenders and start removing the unneeded state and fixing the application.
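
    For completeness, here is a minimal web.config sketch for turning tracing on application-wide (the attribute values are just reasonable defaults):

    <configuration>
      <system.web>
        <!-- pageOutput="true" appends the trace table to every page;
             localOnly="true" keeps the trace from being served to remote clients. -->
        <trace enabled="true" pageOutput="true" requestLimit="40" localOnly="true" />
      </system.web>
    </configuration>

    Alternatively, add Trace="true" to the @ Page directive to trace a single page.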

    Reading ASP.NET Trace Information

    I hope this adds some additional information to the discussion of troubleshooting ViewState.

    Have a great day!


  • PFE - Developer Notes for the Field

    $(SolutionDir) and MSBUILD


    A while back I was working with a customer that was moving from doing all of their builds with the devenv command line to using MSBUILD.  However, we ran into a hiccup that did not make sense at first: when they tried to build their solution using MSBUILD, some of the custom build steps were failing because $(SolutionDir) was not defined. 

    After thinking about it a bit, it became clear that MSBUILD does not define the SolutionDir property the way Visual Studio does.  So where did that leave us?

    There are a couple of basic approaches:

    1. Create a custom task (this would involve custom code) that walks up the directory structure, finds the .SLN file, and uses that directory as the SolutionDir.
    2. Use the command line option /Property:SolutionDir=<sln Dir> to set the value, so it is controlled in one place (see the sketch after this list).
    3. Change all of the paths to be relative.  In most groups it is relatively infrequent to move projects up and down the directory hierarchy.
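
    For example, option 2 looks like this (the solution path is hypothetical):

    msbuild MySolution.sln /property:SolutionDir=C:\src\MySolution\

    Note the trailing backslash: Visual Studio defines $(SolutionDir) with one, so keeping it avoids breaking concatenated paths.  A related idiom is to give the property a fallback default in the project file itself for builds where nobody defines it; this sketch assumes the project sits one folder below the .SLN file:

    <PropertyGroup>
      <SolutionDir Condition="'$(SolutionDir)' == ''">$(MSBuildProjectDirectory)\..\</SolutionDir>
    </PropertyGroup>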

    Just thought I would share in case this comes up for someone else.

