• Ntdebugging Blog

    x64 Manual Stack Reconstruction and Stack Walking


    My name is Trey Nash and I am an Escalation Engineer on the Core OS team.  My background is in software development, and therefore my blog posts tend to be slanted toward helping developers during the feature development, testing, and support phases.

    In this installment I would like to expand a bit on a previous post of mine called Challenges of Debugging Optimized x64 Code.  In that post I discussed the nuances of the x64 calling convention (thankfully of which there is only one) and how it is used in optimized builds of software.  The calling convention is sometimes referred to as the Application Binary Interface (ABI).  In this post, I would like to discuss the x64 unwind metadata and how you can use it in the debugger to manually walk a stack.

    In some cases, you may have a corrupted stack that the debugger simply cannot effectively walk for you.  This often happens because the debugger walks a stack from the top down (assuming the stack grows upwards as if it were a stack of plates on a table), and if the stack is sufficiently trashed then the debugger cannot get its bearings.  In the x86 world, a large percentage of the time, you can spot the stack frames by following the chain of base pointers and then build a crafty stack backtrace command to display the stack at some point in time.  But in the x64 calling convention there is no base pointer.  In fact, once a function’s prolog code has executed, the rsp register generally never changes until the epilog code.  To read more about x64 prolog and epilog code conventions, go here.

    Moreover, the syntax for creating a crafty stack backtrace command in the x64 environment is currently undocumented, and I aim to shed some light on that near the end of this blog post.

    The Example Code

    For this blog post I have used the following example C# code, which requires the .NET 4.0 Framework and can easily be built from a Visual Studio 2010 command prompt:

    using System;
    using System.Numerics;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Collections.Concurrent;

    class EntryPoint {
        const int FactorialsToCompute = 2000;

        static void Main() {
            var numbers = new ConcurrentDictionary<BigInteger, BigInteger>(4, FactorialsToCompute);

            // Create a factorial delegate.
            Func<BigInteger, BigInteger> factorial = null;
            factorial = (n) => ( n == 0 ) ? 1 : n * factorial(n-1);

            // Now compute the factorial of the list
            // concurrently.
            Parallel.For( 0,
                          FactorialsToCompute,
                          (i) => {
                            numbers[i] = factorial(i);
                          } );
        }
    }

    The spirit of this code is to concurrently compute the first 2000 factorials and store the results in a dictionary.  This code uses the new Task Parallel Library to distribute this work evenly across the multiple cores on the system.  To compile the example (assuming the code is stored in test.cs), you can execute the following command from a Visual Studio 2010 command prompt:

    csc /r:system.numerics.dll test.cs

    Note:    If you are using a 64-bit platform, be sure to use the x64 command prompt shortcut installed by the Visual Studio 2010 installer.

    You can download a free evaluation of Visual Studio 2010.

    x64 Unwind Metadata

    So how do the debugger and functions such as RtlVirtualUnwind know how to walk the x64 stack if they cannot find a base pointer?  The secret is that they use unwind metadata that is typically baked into the Portable Executable (PE) file at link time. You can inspect this information using the /UNWINDINFO option of the command-line tool dumpbin.  For example, I went to the directory on my machine which contains clr.dll (c:\Windows\Microsoft.NET\Framework64\v4.0.30319) and dumped the unwind info looking for CLREvent::WaitEx, which I have pasted below:

      00013F20 000DFDB0 000DFE3C 007267D8  ?WaitEx@CLREvent@@QEAAKKW4WaitMode@@PEAUPendingSync@@@Z (public: unsigned long __cdecl CLREvent::WaitEx(unsigned long,enum WaitMode,struct PendingSync *))
        Unwind version: 1
        Unwind flags: UHANDLER
        Size of prologue: 0x20
        Count of codes: 10
        Unwind codes:
          20: SAVE_NONVOL, register=rbp offset=0xB0
          1C: SAVE_NONVOL, register=rbx offset=0xA8
          0F: ALLOC_SMALL, size=0x70
          0B: PUSH_NONVOL, register=r14
          09: PUSH_NONVOL, register=r13
          07: PUSH_NONVOL, register=r12
          05: PUSH_NONVOL, register=rdi
          04: PUSH_NONVOL, register=rsi
        Handler: 0020ADF0 __CxxFrameHandler3
        EH Handler Data: 007B3F54

    I’ll get into what all of this means shortly.

    Note:    The dumpbin.exe functionality is also exposed via the linker.  For example, the command “dumpbin.exe /?” is identical to “link.exe /dump /?”.


    Within the debugger, you can find this same information for a particular function using the .fnent command.  To demonstrate, I executed the example code within a windbg instance, broke in at some random point, and chose a thread with a stack that looks like the following:

      12  Id: f80.7f0 Suspend: 1 Teb: 000007ff`fffa0000 Unfrozen
     # Child-SP          RetAddr           Call Site
    00 00000000`04a51e18 000007fe`fd4e10ac ntdll!NtWaitForSingleObject+0xa
    01 00000000`04a51e20 000007fe`f48bffc7 KERNELBASE!WaitForSingleObjectEx+0x79
    02 00000000`04a51ec0 000007fe`f48bff70 clr!CLREvent::WaitEx+0x170
    03 00000000`04a51f00 000007fe`f48bfe23 clr!CLREvent::WaitEx+0xf8
    04 00000000`04a51f60 000007fe`f48d51d8 clr!CLREvent::WaitEx+0x5e
    05 00000000`04a52000 000007fe`f4995249 clr!SVR::gc_heap::wait_for_gc_done+0x98
    06 00000000`04a52030 000007fe`f48aef28 clr!SVR::GCHeap::Alloc+0xb4
    07 00000000`04a520a0 000007fe`f48aecc9 clr!FastAllocatePrimitiveArray+0xc5
    08 00000000`04a52120 000007fe`f071244c clr!JIT_NewArr1+0x389
    09 00000000`04a522f0 000007fe`f07111b5 System_Numerics_ni+0x2244c
    0a 00000000`04a52330 000007ff`00150acf System_Numerics_ni+0x211b5
    0b 00000000`04a523d0 000007ff`0015098c 0x7ff`00150acf
    0c 00000000`04a52580 000007ff`0015098c 0x7ff`0015098c
    0d 00000000`04a52730 000007ff`0015098c 0x7ff`0015098c
    0e 00000000`04a528e0 000007ff`0015098c 0x7ff`0015098c
    0f 00000000`04a52a90 000007ff`0015098c 0x7ff`0015098c
    10 00000000`04a52c40 000007ff`0015098c 0x7ff`0015098c
    11 00000000`04a52df0 000007ff`0015098c 0x7ff`0015098c
    12 00000000`04a52fa0 000007ff`0015098c 0x7ff`0015098c
    13 00000000`04a53150 000007ff`0015098c 0x7ff`0015098c

    At first glance, it may appear that this stack is already trashed since there is no symbol information for the bottom frames in the display.  Before jumping to this conclusion, recall that this is a managed application and therefore contains JIT compiled code.  To verify that the addresses without symbol information are JIT’ed code, you can do a couple of things.

    First, use the !EEHeap command from the SOS extension to determine whether these addresses reside in the JIT code heap.  Below, you can see the commands I used to load the SOS extension and then display the Execution Engine (EE) Heap information:

    0:014> .loadby sos clr

    0:014> !EEHeap -loader
    Loader Heap:
    System Domain:     000007fef50955a0
    LowFrequencyHeap:  000007ff00020000(2000:1000) Size: 0x1000 (4096) bytes.
    HighFrequencyHeap: 000007ff00022000(8000:1000) Size: 0x1000 (4096) bytes.
    StubHeap:          000007ff0002a000(2000:2000) Size: 0x2000 (8192) bytes.
    Virtual Call Stub Heap:
      IndcellHeap:     000007ff000d0000(6000:1000) Size: 0x1000 (4096) bytes.
      LookupHeap:      000007ff000dc000(4000:1000) Size: 0x1000 (4096) bytes.
      ResolveHeap:     000007ff00106000(3a000:1000) Size: 0x1000 (4096) bytes.
      DispatchHeap:    000007ff000e0000(26000:1000) Size: 0x1000 (4096) bytes.
      CacheEntryHeap:  Size: 0x0 (0) bytes.
    Total size:        Size: 0x8000 (32768) bytes.
    Shared Domain:     000007fef5095040
    LowFrequencyHeap:  000007ff00020000(2000:1000) Size: 0x1000 (4096) bytes.
    HighFrequencyHeap: 000007ff00022000(8000:1000) Size: 0x1000 (4096) bytes.
    StubHeap:          000007ff0002a000(2000:2000) Size: 0x2000 (8192) bytes.
    Virtual Call Stub Heap:
      IndcellHeap:     000007ff000d0000(6000:1000) Size: 0x1000 (4096) bytes.
      LookupHeap:      000007ff000dc000(4000:1000) Size: 0x1000 (4096) bytes.
      ResolveHeap:     000007ff00106000(3a000:1000) Size: 0x1000 (4096) bytes.
      DispatchHeap:    000007ff000e0000(26000:1000) Size: 0x1000 (4096) bytes.
      CacheEntryHeap:  Size: 0x0 (0) bytes.
    Total size:        Size: 0x8000 (32768) bytes.
    Domain 1:          00000000003e73c0
    LowFrequencyHeap:  000007ff00030000(2000:1000) 000007ff00140000(10000:5000) Size: 0x6000 (24576) bytes total, 0x1000 (4096) bytes wasted.
    HighFrequencyHeap: 000007ff00032000(8000:5000) Size: 0x5000 (20480) bytes.
    StubHeap:          Size: 0x0 (0) bytes.
    Virtual Call Stub Heap:
      IndcellHeap:     000007ff00040000(4000:1000) Size: 0x1000 (4096) bytes.
      LookupHeap:      000007ff0004b000(2000:1000) Size: 0x1000 (4096) bytes.
      ResolveHeap:     000007ff0007c000(54000:1000) Size: 0x1000 (4096) bytes.
      DispatchHeap:    000007ff0004d000(2f000:1000) Size: 0x1000 (4096) bytes.
      CacheEntryHeap:  Size: 0x0 (0) bytes.
    Total size:        Size: 0xf000 (61440) bytes total, 0x1000 (4096) bytes wasted.
    Jit code heap:
    LoaderCodeHeap:    000007ff00150000(40000:2000) Size: 0x2000 (8192) bytes.
    Total size:        Size: 0x2000 (8192) bytes.

    Module Thunk heaps:
    Module 000007fee5581000: Size: 0x0 (0) bytes.
    Module 000007ff000330d8: Size: 0x0 (0) bytes.
    Module 000007fef06f1000: Size: 0x0 (0) bytes.
    Total size:      Size: 0x0 (0) bytes.
    Module Lookup Table heaps:
    Module 000007fee5581000: Size: 0x0 (0) bytes.
    Module 000007ff000330d8: Size: 0x0 (0) bytes.
    Module 000007fef06f1000: Size: 0x0 (0) bytes.
    Total size:      Size: 0x0 (0) bytes.
    Total LoaderHeap size:   Size: 0x21000 (135168) bytes total, 0x1000 (4096) bytes wasted.


    Note the Jit code heap information near the end of the output: the LoaderCodeHeap is at 000007ff00150000, and the JIT’ed code instruction pointers in the stack (such as 0x7ff`00150acf and 0x7ff`0015098c) fall within this range.

    The second sanity check you can perform is to use a variant of the u instruction to confirm that there is a call instruction just prior to that address as shown below:

    0:012> ub 0x7ff`0015098c
    000007ff`0015095e 488b01          mov     rax,qword ptr [rcx]
    000007ff`00150961 48898424b0000000 mov     qword ptr [rsp+0B0h],rax
    000007ff`00150969 488b4108        mov     rax,qword ptr [rcx+8]
    000007ff`0015096d 48898424b8000000 mov     qword ptr [rsp+0B8h],rax
    000007ff`00150975 4c8d8424b0000000 lea     r8,[rsp+0B0h]
    000007ff`0015097d 488b5308        mov     rdx,qword ptr [rbx+8]
    000007ff`00150981 488d8c24c0000000 lea     rcx,[rsp+0C0h]
    000007ff`00150989 ff5318          call    qword ptr [rbx+18h]

    So at this point we have verified that we probably have a valid stack.  But how does the debugger so effectively walk this stack for us if there is no stack frame pointer?  The answer, of course, is that it uses the unwind information.

    To explore the answer to that question, let’s focus on a particular frame within the stack such as frame 4 in the stack above.  The code at that frame is inside the function clr!CLREvent::WaitEx, and if we pass that to .fnent, we get the following output:

    0:012> .fnent clr!CLREvent::WaitEx
    Debugger function entry 00000000`04075e40 for:
    (000007fe`f48bfdb0)   clr!CLREvent::WaitEx   |  (000007fe`f48bfe3c)   clr!CLREvent::Set
    Exact matches:
        clr!CLREvent::WaitEx = <no type information>

    BeginAddress      = 00000000`000dfdb0
    EndAddress        = 00000000`000dfe3c
    UnwindInfoAddress = 00000000`007267d8

    Unwind info at 000007fe`f4f067d8, 20 bytes
      version 1, flags 2, prolog 20, codes a
      frame reg 0, frame offs 0
      handler routine: clr!_CxxFrameHandler3 (000007fe`f49eadf0), data 7b3f54
      00: offs 20, unwind op 4, op info 5    UWOP_SAVE_NONVOL FrameOffset: b0
      02: offs 1c, unwind op 4, op info 3    UWOP_SAVE_NONVOL FrameOffset: a8
      04: offs f, unwind op 2, op info d     UWOP_ALLOC_SMALL
      05: offs b, unwind op 0, op info e     UWOP_PUSH_NONVOL
      06: offs 9, unwind op 0, op info d     UWOP_PUSH_NONVOL
      07: offs 7, unwind op 0, op info c     UWOP_PUSH_NONVOL
      08: offs 5, unwind op 0, op info 7     UWOP_PUSH_NONVOL
      09: offs 4, unwind op 0, op info 6     UWOP_PUSH_NONVOL

    Notice that this output is virtually identical to the information provided by dumpbin with the /UNWINDINFO option.

    I want to draw attention to two interesting values above.  UnwindInfoAddress (00000000`007267d8) is a relative virtual address (RVA) to the unwind info that is baked into the PE file by the linker.  The address on the “Unwind info at” line (000007fe`f4f067d8) is the actual virtual address of the unwind info, and can be computed by adding the module base address shown below to the UnwindInfoAddress RVA.

    0:012> lmnm clr

    start             end                 module name

    000007fe`f47e0000 000007fe`f5145000   clr

    By examining the PE header using !dh you can confirm that the unwind information resides in the .rdata section of the module, which I have shown below:

    0:012> !dh clr

    File Type: DLL
        8664 machine (X64)
           6 number of sections
    4BA21EEB time date stamp Thu Mar 18 07:39:07 2010

      .rdata name
      1FC8EC virtual size
      67F000 virtual address
      1FCA00 size of raw data
      67E200 file pointer to raw data
           0 file pointer to relocation table
           0 file pointer to line numbers
           0 number of relocations
           0 number of line numbers
    40000040 flags
             Initialized Data
             (no align specified)
             Read Only

    Using the Unwind Info

    Now let’s take a look at the unwind info and compare it to the prolog code of the function with which it is associated.  For convenience, I have reprinted the .fnent output for the function:

    0:012> .fnent clr!CLREvent::WaitEx
    Debugger function entry 00000000`04075e40 for:
    (000007fe`f48bfdb0)   clr!CLREvent::WaitEx   |  (000007fe`f48bfe3c)   clr!CLREvent::Set
    Exact matches:
        clr!CLREvent::WaitEx = <no type information>

    BeginAddress      = 00000000`000dfdb0
    EndAddress        = 00000000`000dfe3c
    UnwindInfoAddress = 00000000`007267d8

    Unwind info at 000007fe`f4f067d8, 20 bytes
      version 1, flags 2, prolog 20, codes a
      frame reg 0, frame offs 0
      handler routine: clr!_CxxFrameHandler3 (000007fe`f49eadf0), data 7b3f54
      00: offs 20, unwind op 4, op info 5    UWOP_SAVE_NONVOL FrameOffset: b0
      02: offs 1c, unwind op 4, op info 3    UWOP_SAVE_NONVOL FrameOffset: a8
      04: offs f, unwind op 2, op info d     UWOP_ALLOC_SMALL
      05: offs b, unwind op 0, op info e     UWOP_PUSH_NONVOL
      06: offs 9, unwind op 0, op info d     UWOP_PUSH_NONVOL
      07: offs 7, unwind op 0, op info c     UWOP_PUSH_NONVOL
      08: offs 5, unwind op 0, op info 7     UWOP_PUSH_NONVOL
      09: offs 4, unwind op 0, op info 6     UWOP_PUSH_NONVOL

    The “prolog 20” value in the unwind info tells us that the prolog code for the function is 0x20 bytes in length. Using that information, we can dump out the prolog code for the function:

    0:012> u clr!CLREvent::WaitEx clr!CLREvent::WaitEx+20
    000007fe`f48bfdb0 488bc4          mov     rax,rsp
    000007fe`f48bfdb3 56              push    rsi
    000007fe`f48bfdb4 57              push    rdi
    000007fe`f48bfdb5 4154            push    r12
    000007fe`f48bfdb7 4155            push    r13
    000007fe`f48bfdb9 4156            push    r14
    000007fe`f48bfdbb 4883ec70        sub     rsp,70h
    000007fe`f48bfdbf 48c7442440feffffff mov   qword ptr [rsp+40h],0FFFFFFFFFFFFFFFEh
    000007fe`f48bfdc8 48895810        mov     qword ptr [rax+10h],rbx
    000007fe`f48bfdcc 48896818        mov     qword ptr [rax+18h],rbp

    The list of operations in the unwind info is in the reverse order of the operations in the assembly code.  Each of the UWOP_PUSH_NONVOL operations in the unwind info maps to a nonvolatile register that is pushed onto the stack for safekeeping in the prolog code (rsi, rdi, r12, r13, and r14), the UWOP_ALLOC_SMALL operation maps to the sub rsp,70h instruction, and the two UWOP_SAVE_NONVOL operations map to the mov instructions that store rbx and rbp into the home space of the previous frame.  Now, let’s take a look at the raw stack and tie all of this information together.

    Below is the stack; the frame we are focusing on is frame 04 (Child-SP 00000000`04a51f60):

    0:012> kn
     # Child-SP          RetAddr           Call Site
    00 00000000`04a51e18 000007fe`fd4e10ac ntdll!NtWaitForSingleObject+0xa
    01 00000000`04a51e20 000007fe`f48bffc7 KERNELBASE!WaitForSingleObjectEx+0x79
    02 00000000`04a51ec0 000007fe`f48bff70 clr!CLREvent::WaitEx+0x170
    03 00000000`04a51f00 000007fe`f48bfe23 clr!CLREvent::WaitEx+0xf8
    04 00000000`04a51f60 000007fe`f48d51d8 clr!CLREvent::WaitEx+0x5e
    05 00000000`04a52000 000007fe`f4995249 clr!SVR::gc_heap::wait_for_gc_done+0x98
    06 00000000`04a52030 000007fe`f48aef28 clr!SVR::GCHeap::Alloc+0xb4
    07 00000000`04a520a0 000007fe`f48aecc9 clr!FastAllocatePrimitiveArray+0xc5
    08 00000000`04a52120 000007fe`f071244c clr!JIT_NewArr1+0x389
    09 00000000`04a522f0 000007fe`f07111b5 System_Numerics_ni+0x2244c
    0a 00000000`04a52330 000007ff`00150acf System_Numerics_ni+0x211b5
    0b 00000000`04a523d0 000007ff`0015098c 0x7ff`00150acf
    0c 00000000`04a52580 000007ff`0015098c 0x7ff`0015098c
    0d 00000000`04a52730 000007ff`0015098c 0x7ff`0015098c
    0e 00000000`04a528e0 000007ff`0015098c 0x7ff`0015098c
    0f 00000000`04a52a90 000007ff`0015098c 0x7ff`0015098c
    10 00000000`04a52c40 000007ff`0015098c 0x7ff`0015098c
    11 00000000`04a52df0 000007ff`0015098c 0x7ff`0015098c
    12 00000000`04a52fa0 000007ff`0015098c 0x7ff`0015098c
    13 00000000`04a53150 000007ff`0015098c 0x7ff`0015098c

    Note:    The symbols above look a little weird and may lead you to believe that WaitEx is calling itself recursively, but it is not.  It only appears that way because you need the private symbols for clr.dll to see the real function names.  Only public symbols are available outside of Microsoft, so the debugger resolves each of these addresses to the nearest preceding public symbol.


    And below is the raw stack relevant to this frame, with some annotations that I have added:

    0:012> dps 00000000`04a51f60-10 L20
    00000000`04a51f50  00000000`00000001
    00000000`04a51f58  000007fe`f48bfe23 clr!CLREvent::WaitEx+0x5e
    00000000`04a51f60  00000000`c0402388
    00000000`04a51f68  00000000`c0402500
    00000000`04a51f70  000007fe`f48afaf0 clr!SystemNative::ArrayCopy
    00000000`04a51f78  00000000`00000182
    00000000`04a51f80  00000000`04a521d0
    00000000`04a51f88  000007fe`00000001
    00000000`04a51f90  00000000`00000057
    00000000`04a51f98  00000000`c0402398
    00000000`04a51fa0  ffffffff`fffffffe
    00000000`04a51fa8  007f0000`04a521d0
    00000000`04a51fb0  fffff880`009ca540
    00000000`04a51fb8  000007fe`f483da5b clr!SVR::heap_select::select_heap+0x1c
    00000000`04a51fc0  fffff880`009ca540
    00000000`04a51fc8  000007fe`fd4e18aa KERNELBASE!ResetEvent+0xa
    00000000`04a51fd0  00000000`0043dc60
    00000000`04a51fd8  00000000`00000178
    00000000`04a51fe0  00000000`00493c10
    00000000`04a51fe8  00000000`0043dc60    ← saved rdi
    00000000`04a51ff0  00000000`00000001

       *** call into clr!CLREvent::WaitEx ***

    00000000`04a51ff8  000007fe`f48d51d8 clr!SVR::gc_heap::wait_for_gc_done+0x98
    00000000`04a52000  00000000`00493ba0
    00000000`04a52008  00000000`00493ba0    ← saved rbx
    00000000`04a52010  00000000`00000058    ← saved rbp
    00000000`04a52018  000007fe`f0711e0f System_Numerics_ni+0x21e0f
    00000000`04a52020  00000000`00000178
    00000000`04a52028  000007fe`f4995249 clr!SVR::GCHeap::Alloc+0xb4
    00000000`04a52030  00000000`0043a140
    00000000`04a52038  00000000`0043dc60
    00000000`04a52040  00000000`00000000
    00000000`04a52048  00000000`04a522e0

    In the stack listing I have annotated how the data on the raw stack correlates to the unwind data, and you can see how the Child-SP value from the k output (00000000`04a51f60) correlates to the stack frame.

    The five slots from 00000000`04a51fd0 through 00000000`04a51ff0 hold the nonvolatile registers that were pushed onto the stack in the prolog code.  The 0x70 bytes starting at the Child-SP are the stack space reserved for locals and for register home space allocated for calling subroutines; in the unwind data this reservation is represented by the UWOP_ALLOC_SMALL operation.  And the saved rbx and rbp values at 00000000`04a52008 and 00000000`04a52010 are nonvolatile registers stored in the home space of the previous stack frame, each represented by a UWOP_SAVE_NONVOL operation in the unwind information.

    As you can see, we have all of the information we need in the unwind data to determine which slots on the stack are used for what.  The only thing we don’t know is the partitioning of the reserved stack space for locals, which is described by the private symbol information for the clr.dll module.

    Tying it all Together

    .fnent produces its output directly by parsing the UNWIND_INFO structure, and it even gives you the address where that structure lives in memory.  The UNWIND_INFO structure also contains a variable number of UNWIND_CODE structures.  You can find details of the structure definitions for UNWIND_INFO and UNWIND_CODE here.  Each parsed line of unwind information in the .fnent output is backed by at least one of these structures.  In fact, you can see the correlation between the structure fields for UNWIND_CODE and the data in the .fnent output as shown below:



    Offset in prolog          UBYTE
    Unwind operation code     UBYTE: 4
    Operation info            UBYTE: 4

    From .fnent:

      05: offs b, unwind op 0, op info e     UWOP_PUSH_NONVOL

    The meaning of the OpInfo (operation info) field is dependent on the UnwindOp (unwind operation code) field and is spelled out in the documentation for UNWIND_CODE.  For example, for the UWOP_PUSH_NONVOL operation shown above, the OpInfo field is an index into the following table, which indicates which nonvolatile register the push is associated with.  Note that the values in the table below are in decimal, while the .fnent values are in hex:

    0          RAX
    1          RCX
    2          RDX
    3          RBX
    4          RSP
    5          RBP
    6          RSI
    7          RDI
    8 to 15    R8 to R15

    Therefore, the previous line from the .fnent output represents a push operation for the r14 register (05: offs b, unwind op 0, op info e   UWOP_PUSH_NONVOL).  Looking at the assembly above, we see that the topmost UWOP_PUSH_NONVOL operation in the .fnent output correlates to the last nonvolatile register push in the prolog code (push    r14).

    Note:    Remember, the push operations in the .fnent output are listed in the reverse order of where they are in the actual prolog code.  This helps the unwind code easily calculate offsets of where they should live in the stack.


    One thing that you will notice in the x64 calling convention is that once the prolog code has executed, the value of rsp very rarely changes.  The Child-SP value in the stack displayed by the k command is the value of rsp for that frame after the prolog code has executed.  The offsets used to access these nonvolatile registers are applied to the Child-SP value to find where they live on the stack.  So, in a way, the Child-SP value acts like the base pointer we are used to on the x86 platform.

    In the .fnent output above, you will also see the following:

      00: offs 20, unwind op 4, op info 5    UWOP_SAVE_NONVOL FrameOffset: b0

    For UWOP_SAVE_NONVOL, you see that the .fnent output shows us the offset where we can find this register, and the register in question is represented by the OpInfo value that equates to rbp.  The offset above is applied to the Child-SP value (00000000`04a51f60 in this case) to produce the address 00000000`04a52010, which indicates that’s where we can find a saved copy of rbp.  I have also annotated where it lives in the raw stack output shown previously.

    Note:    If you’re wondering why rbp is stored in the previous stack frame, check out my previous post on this topic where I describe how in optimized builds, the compiler can use the home space from the previous stack frame to save nonvolatile registers thus saving them with a MOV operation as opposed to a PUSH operation.  This is possible because in optimized builds the home space is not necessarily used to store parameters.


    So How Does All of This Work for CLR JIT Code?

    If you have asked this question, then you are definitely paying attention!  As we have shown, the compiler and linker are responsible for placing unwind info in the Portable Executable file at build time.  But what about dynamic code that is generated at runtime?  Certainly there must be unwind information for dynamically compiled code as well, otherwise there would be no way to walk the stack or unwind the stack after an exception.

    As it turns out, APIs exist for this very situation, including RtlAddFunctionTable and RtlInstallFunctionTableCallback.  In fact, the CLR uses RtlInstallFunctionTableCallback.  The generated unwind information is then rooted in a linked list whose head is at ntdll!RtlpDynamicFunctionTable.  The format of the linked-list items is undocumented since it is an implementation detail, but using dbghelp.dll you can find the unwind information for a given instruction pointer by calling SymFunctionTableAccess64.

    In fact, if you want to see the CLR adding dynamic unwind info in action you can run the test code above under the debugger, and then at the initial breakpoint, before the application starts running, set the following breakpoint:

    bu ntdll!RtlInstallFunctionTableCallback

    When you let the application run you should then end up with a call stack at the breakpoint that looks like the following, which clearly shows the JIT compiler adding the unwind info to the table dynamically:

    0:000> kn
     # Child-SP          RetAddr           Call Site
    00 00000000`0017dca8 000007fe`f4832cc6 ntdll!RtlInstallFunctionTableCallback
    01 00000000`0017dcb0 000007fe`f4831422 clr!InstallEEFunctionTable+0x77
    02 00000000`0017df60 000007fe`f4828ca8 clr!StubLinker::EmitUnwindInfo+0x492
    03 00000000`0017e050 000007fe`f4832c1a clr!StubLinker::EmitStub+0xe8
    04 00000000`0017e0b0 000007fe`f48328e5 clr!StubLinker::LinkInterceptor+0x1ea
    05 00000000`0017e160 000007fe`f4831e40 clr!CTPMethodTable::CreateStubForNonVirtualMethod+0xa35
    06 00000000`0017e300 000007fe`f4832926 clr!CRemotingServices::GetStubForNonVirtualMethod+0x50
    07 00000000`0017e3c0 000007fe`f48223f3 clr!MethodDesc::DoPrestub+0x38b
    08 00000000`0017e4d0 000007fe`f47e2d07 clr!PreStubWorker+0x1df
    09 00000000`0017e590 000007fe`f48210b4 clr!ThePreStubAMD64+0x87
    0a 00000000`0017e660 000007fe`f48211c9 clr!CallDescrWorker+0x84
    0b 00000000`0017e6d0 000007fe`f4821245 clr!CallDescrWorkerWithHandler+0xa9
    0c 00000000`0017e750 000007fe`f4823cf1 clr!MethodDesc::CallDescr+0x2a1
    0d 00000000`0017e9b0 000007fe`f49cdc3d clr!MethodDescCallSite::Call+0x35
    0e 00000000`0017e9f0 000007fe`f4999f0d clr!AppDomain::InitializeDomainContext+0x1ac
    0f 00000000`0017ebf0 000007fe`f49212a1 clr!SystemDomain::InitializeDefaultDomain+0x13d
    10 00000000`0017f0c0 000007fe`f4923dd6 clr!SystemDomain::ExecuteMainMethod+0x191
    11 00000000`0017f670 000007fe`f4923cf3 clr!ExecuteEXE+0x43
    12 00000000`0017f6d0 000007fe`f49a7365 clr!CorExeMainInternal+0xc4
    13 00000000`0017f740 000007fe`f8ad3309 clr!CorExeMain+0x15

    But there is one more wrinkle to this picture.  We now know that by using RtlInstallFunctionTableCallback the CLR, or any other JIT engine, can register a callback that provides the unwind information at runtime.  But how does the debugger access this information?  When the debugger is broken into the process or if you are debugging a dump, it cannot execute the callback function registered with RtlInstallFunctionTableCallback.

    This is where the sixth and final parameter to RtlInstallFunctionTableCallback comes into play.  By providing the OutOfProcessCallbackDll parameter, the CLR supplies a DLL that the debugger can use to effectively parse the JITer’s unwind information statically.  Inspecting the path the CLR passes for OutOfProcessCallbackDll on my machine, I see the following string:

    0:000> du /c 80 000007fe`f5916160
    000007fe`f5916160  "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\mscordacwks.dll"

    So, the debugger uses mscordacwks.dll to statically examine the unwind info while the process is broken in the debugger or while inspecting a dump.

    Note:    This is one of the many reasons why you must have a complete process dump to effectively post-mortem debug managed applications.

    Using the ‘k =’ Command to Dump the Stack

    If you look at the documentation for the k command, you’ll see that there is a way to override the base pointer when walking the stack.  However, the documentation leaves it a complete mystery as to how to apply this in the x64 world.  To demonstrate what I mean, consider the following stack from earlier:

    0:012> kn
     # Child-SP          RetAddr           Call Site
    00 00000000`04a51e18 000007fe`fd4e10ac ntdll!NtWaitForSingleObject+0xa
    01 00000000`04a51e20 000007fe`f48bffc7 KERNELBASE!WaitForSingleObjectEx+0x79
    02 00000000`04a51ec0 000007fe`f48bff70 clr!CLREvent::WaitEx+0x170
    03 00000000`04a51f00 000007fe`f48bfe23 clr!CLREvent::WaitEx+0xf8
    04 00000000`04a51f60 000007fe`f48d51d8 clr!CLREvent::WaitEx+0x5e
    05 00000000`04a52000 000007fe`f4995249 clr!SVR::gc_heap::wait_for_gc_done+0x98
    06 00000000`04a52030 000007fe`f48aef28 clr!SVR::GCHeap::Alloc+0xb4
    07 00000000`04a520a0 000007fe`f48aecc9 clr!FastAllocatePrimitiveArray+0xc5
    08 00000000`04a52120 000007fe`f071244c clr!JIT_NewArr1+0x389
    09 00000000`04a522f0 000007fe`f07111b5 System_Numerics_ni+0x2244c
    0a 00000000`04a52330 000007ff`00150acf System_Numerics_ni+0x211b5
    0b 00000000`04a523d0 000007ff`0015098c 0x7ff`00150acf
    0c 00000000`04a52580 000007ff`0015098c 0x7ff`0015098c
    0d 00000000`04a52730 000007ff`0015098c 0x7ff`0015098c
    0e 00000000`04a528e0 000007ff`0015098c 0x7ff`0015098c
    0f 00000000`04a52a90 000007ff`0015098c 0x7ff`0015098c
    10 00000000`04a52c40 000007ff`0015098c 0x7ff`0015098c
    11 00000000`04a52df0 000007ff`0015098c 0x7ff`0015098c
    12 00000000`04a52fa0 000007ff`0015098c 0x7ff`0015098c
    13 00000000`04a53150 000007ff`0015098c 0x7ff`0015098c

    Now, imagine the top of the stack is corrupted; for the sake of this exercise, pretend that frames 00 and 01 above are trashed and unwalkable.  Furthermore, let’s assume that we identified a frame where the stack starts to look sane again by looking at the raw stack below:

    0:012> dps 00000000`04a51e90 
    00000000`04a51e90  00000000`00000000
    00000000`04a51e98  00000000`04a52130
    00000000`04a51ea0  00000000`ffffffff
    00000000`04a51ea8  00000000`ffffffff
    00000000`04a51eb0  00000000`00000108
    00000000`04a51eb8  000007fe`f48bffc7 clr!CLREvent::WaitEx+0x170
    00000000`04a51ec0  00000000`00000000
    00000000`04a51ec8  00000000`00000108
    00000000`04a51ed0  000007fe`00000000
    00000000`04a51ed8  00000000`00000108
    00000000`04a51ee0  ffffffff`fffffffe
    00000000`04a51ee8  00000000`00000001
    00000000`04a51ef0  00000000`00000000
    00000000`04a51ef8  000007fe`f48bff70 clr!CLREvent::WaitEx+0xf8
    00000000`04a51f00  00000000`00000000
    00000000`04a51f08  00000000`00493ba0

    From looking at this stack, we can see the typical pattern of stack frames, because the return addresses resolve to symbols of sorts.
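That pattern can also be exploited when hunting for a sane frame in a long raw stack. Below is a quick sketch, not part of the original post, that scans saved `dps` output for stack slots whose contents resolve to a symbol, and computes the candidate <rsp> for each (the slot just after the return address). The regular expression and helper name are my own.

```python
import re

# Matches a dps line whose value column resolves to a symbol, e.g.:
#   00000000`04a51eb8  000007fe`f48bffc7 clr!CLREvent::WaitEx+0x170
DPS_LINE = re.compile(
    r"^\s*([0-9a-f`]{17})\s+([0-9a-f`]{17})\s+(\S+!\S+)", re.IGNORECASE)

def candidate_frames(dps_output):
    """Return (stack_slot, return_address, symbol, candidate_rsp) tuples.

    On x64 the candidate rsp for a frame is the stack slot just below the
    return address in the dump, i.e. slot + 8.
    """
    frames = []
    for line in dps_output.splitlines():
        m = DPS_LINE.match(line)
        if m:
            slot = int(m.group(1).replace("`", ""), 16)
            ret = int(m.group(2).replace("`", ""), 16)
            frames.append((slot, ret, m.group(3), slot + 8))
    return frames
```

Each tuple gives you a ready-made <rsp>/<rip> pair to try in the k command described next.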

    To dump out the corrupted stack, here is the undocumented syntax for the x64 platform:

    k = <rsp> <rip> <frame_count>

    <rsp> is the stack pointer to start with.  You want to use the value that would have been in rsp when that function was active.  Remember, rsp typically does not change after the function's prolog code completes.  Therefore, if you pick the stack slot just below the return address in the dump (the next higher address), you should be good.

    <rip> should be an instruction pointer from within the function that was executing at the time the <rsp> value above was in play.  In this case, the return address directly above <rsp> (000007fe`f48bffc7, which resolves to clr!CLREvent::WaitEx+0x170 in the raw stack above) comes from that function.  This piece of information is critical: it is how the debugger finds the unwind metadata for the function that was current at this point in the stack.  Without it, the debugger cannot walk the stack.

    Armed with this information, you can construct a k command to dump the stack starting from this frame as shown below:

    0:012> kn = 00000000`04a51ec0 000007fe`f48bffc7 10
     # Child-SP          RetAddr           Call Site
    00 00000000`04a51ec0 000007fe`f48bff70 clr!CLREvent::WaitEx+0x170
    01 00000000`04a51f00 000007fe`f48bfe23 clr!CLREvent::WaitEx+0xf8
    02 00000000`04a51f60 000007fe`f48d51d8 clr!CLREvent::WaitEx+0x5e
    03 00000000`04a52000 000007fe`f4995249 clr!SVR::gc_heap::wait_for_gc_done+0x98
    04 00000000`04a52030 000007fe`f48aef28 clr!SVR::GCHeap::Alloc+0xb4
    05 00000000`04a520a0 000007fe`f48aecc9 clr!FastAllocatePrimitiveArray+0xc5
    06 00000000`04a52120 000007fe`f071244c clr!JIT_NewArr1+0x389
    07 00000000`04a522f0 000007fe`f07111b5 System_Numerics_ni+0x2244c
    08 00000000`04a52330 000007ff`00150acf System_Numerics_ni+0x211b5
    09 00000000`04a523d0 000007ff`0015098c 0x7ff`00150acf
    0a 00000000`04a52580 000007ff`0015098c 0x7ff`0015098c
    0b 00000000`04a52730 000007ff`0015098c 0x7ff`0015098c
    0c 00000000`04a528e0 000007ff`0015098c 0x7ff`0015098c
    0d 00000000`04a52a90 000007ff`0015098c 0x7ff`0015098c
    0e 00000000`04a52c40 000007ff`0015098c 0x7ff`0015098c
    0f 00000000`04a52df0 000007ff`0015098c 0x7ff`0015098c

    Note:    The frame count in the above k expression is required.  That is the way the debugger engine distinguishes between this variant of the command (with an overridden rip) and the documented form of k that does not provide an overridden rip.


    Since the x64 calling convention does not utilize a base pointer (among other things), we need some extra information to effectively walk the stack.  That extra information comes in the form of unwind metadata and is generated by the compiler and linker for static code and baked into the portable executable file.  If you happen to code in assembly language, there are various macros that you must use to decorate your assembly code so that the assembler can generate the proper unwind metadata.  For dynamically compiled code, that information is instead provided at runtime by registering a callback with the system.  Knowing this information is critical if you encounter a corrupted stack and must piece it together manually.  In such situations you’ll need to know how to dig out the unwind metadata manually and use it to effectively reconstruct the call stack.

    That said, you could spare yourself some effort and use the undocumented variant of the k command described above to dump the stack starting at any frame. :)

    Happy debugging everyone!

    "The example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious.  No association with any real company, organization, product, domain name, email address, logo, person, places, or events is intended or should be inferred."

  • Ntdebugging Blog

    Missing System Writer Case Explained


    I worked on a case the other day where all I had was a procmon log and event logs to troubleshoot a problem where the System Writer did not appear in the VSSADMIN LIST WRITERS output. This might be review for the folks that know this component pretty well but I figured I would share how I did it for those that are not so familiar with the System Writer.



    The symptoms in this case were:

    1. System State Backups fail
    2. Running VSSADMIN LIST WRITERS does not list the System Writer


    Looking at the event logs I found the error shown below. This error indicates there was a failure while “Writer Exposing its Metadata Context”. Each writer is responsible for providing a list of files, volumes, and other resources it is designed to protect. This list is called metadata and is formatted as XML. In the example we are working with, the error is “Unexpected error calling routine XML document is too long”.  While helpful, this message alone does not provide us with a clear reason why the XML document is too long.


    Event Type: Error

    Event Source: VSS

    Event ID: 8193

    Description: Volume Shadow Copy Service error: Unexpected error calling routine XML document is too long. hr = 0x80070018, The program issued a command but the command length is incorrect. . Operation: Writer Exposing its Metadata Context: Execution Context: Requestor Writer Instance ID: {636923A0-89C2-4823-ADEF-023A739B2515} Writer Class Id: {E8132975-6F93-4464-A53E-1050253AE220} Writer Name: System Writer


    The second event that was logged was also not very helpful as it only indicates that the writer did have a failure. It looks like we are going to need to collect more data to figure this out.


    Event Type: Error

    Event Source: VSS

    Event ID: 8193

    Description: Volume Shadow Copy Service error: Unexpected error calling routine CreateVssExamineWriterMetadata. hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error. Check the Application event log for more information. . Operation: Writer Exposing its Metadata Context: Execution Context: Requestor Writer Instance ID: {636923A0-89C2-4823-ADEF-023A739B2515} Writer Class Id: {E8132975-6F93-4464-A53E-1050253AE220} Writer Name: System Writer


    From the error above we learned that there was an issue with the metadata file for the System Writer. These errors are among the most common issues seen with this writer. There are some not-so-well-documented limitations within the writer due to hard-set limits on path depth and the number of files in a given path. These limitations are frequently exposed by the C:\Windows\Microsoft.Net\ path. Often, this path is used by development software like Visual Studio as well as server applications like IIS. Below I have listed a few known issues that should help provide some scope when troubleshooting System Writer issues.


    Known limitations and common points of failure:

    • More than 1,000 folders in a folder causes writer to fail during OnIdentify
    • More than 10,000 files in a folder causes writer to fail during OnIdentify (frequently C:\Windows\Microsoft.Net)
    • Permissions issues (frequently in C:\Windows\WinSXS and C:\Windows\Microsoft.Net)
    • Permissions issues with COM+ Event System Service
      • This service needs to be running and needs to have Network Service with Service User Rights


    What data can I capture to help me find where the issue is?


    The best place to start is with a Process Monitor (Procmon) capture. To prepare for this capture you will need to download Process Monitor and open the Services MMC snap-in, as well as an administrative command prompt, which will be used in a later step of the process.


    You should first stop the Cryptographic Services service using the Services MMC.



    Once it is stopped, open Procmon; note that by default Procmon starts capturing as soon as it is opened. With Procmon open and capturing data, start the Cryptographic Services service again. This allows you to capture any errors during service initialization. Once the service is started, use the command prompt opened earlier to run “vssadmin.exe list writers”. This signals the writers on the system to gather their metadata, which is a common place we see failures with the System Writer. When the vssadmin command completes, stop the Procmon capture and save the data to disk.
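In command form, the service and writer-metadata steps above look roughly like this (Procmon itself is started and stopped from its GUI between the service commands):

```
net stop CryptSvc
rem ... start the Procmon capture here, then restart the service:
net start CryptSvc
vssadmin.exe list writers
rem ... when vssadmin completes, stop the Procmon capture and save it
```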


    Now that we have data how do we find the errors?


    Open your newly created Procmon file. First, add a new filter for the path field that contains “VSS\Diag”.



    We do this because this is the registry path that all writers log to when entering and leaving major functions. Now that we have our filtered view, we need to look for an entry from the System Writer: the “IDENTIFY” entry. From that entry we can ascertain the PID of the svchost.exe that the System Writer is running in. We now want to include only this svchost. To accomplish this, right click on the PID for the svchost.exe and select “Include ‘(PID)’”.



    Now that we have found our System Writer’s svchost, we will want to remove the filter for “VSS\Diag”; to do that, return to the filter tool in Procmon and uncheck the box next to the entry.



    We now have a complete view of what this service was doing at the time it started and completed the OnIdentify. Our next step is to locate the IDENTIFY (Leave) entry, as this is often a great marker for where your next clue will be. While in most cases we can’t directly see the error the writer hit, we can make some educated connections based on the common issues we spoke about above. If we look at the events that took place just before the IDENTIFY (Leave), we can see that we were working in the C:\Windows\Microsoft.NET\assembly\ directory. This is one of the paths that the System Writer is responsible for protecting. As mentioned above, there are some known limitations to the depth of paths and number of files in the “C:\Windows\Microsoft.NET” folder, and this capture is a great example of that limitation. The IDENTIFY (Leave) appears with the line before it being where the last work was taking place, meaning that is what the writer was touching when it failed.



    What does this tell us and what should we do next?


    Given the known path limitations, we need to check the number of files and folders in the C:\Windows\Microsoft.Net\ path and see where the bloat is. Some of these files can be safely removed; however, only files located in the Temp locations (Temporary ASP.NET Files) are safe to delete.


    Recently we released KB2807849 which addresses the issue shown above.


    There are other possible causes of the event log errors mentioned above, such as issues with file permissions. For those problems, follow the same steps as above and you are likely to see the IDENTIFY (Leave) just after a file-access error appears in your Procmon log. For these failures you will need to investigate the permissions on the file we failed on. Likely the file is missing permissions for the writer’s service account, Network Service or Local System. All that is needed here is to add back the missing permissions on the failed file.


    While these issues can halt your nightly backups, it is often fairly easy to find the root cause. It just takes time and a little bit of experience with Process Monitor. 


    Good luck and successful backups!

  • Ntdebugging Blog

    Basics of Debugging Windows


    Hello, this is East again. This blog post is about a topic that we always skip over when discussing debugging: what and where are the tools for debugging? I will touch on the different types of debuggers, loading symbols, and the basics of getting started with loading up a dump under your preferred debugger.

    Microsoft currently offers four debugging tools. With these tools you can remotely debug another machine over a FireWire or serial cable (USB also works, but may not work consistently), as well as debug user-mode processes and dump files.

    Command line debuggers:

    1) kd.exe: kernel debugger – used to review crash dumps created by a blue screen crash event or a stop error. (kd –z <location of dump> –y <location of symbols>)

    2) cdb.exe: user-mode debugger for reviewing applications, processes, and process dumps. (cdb –z <location of dump> –y <location of symbols>)

    3) ntsd.exe: CDB and NTSD are virtually identical, except that NTSD spawns a new text window when it is started, whereas CDB inherits the Command Prompt window from which it was invoked.  When I refer to "CDB", it applies to both CDB and NTSD.

    Graphical User Interface Debugger:

    4) Windbg.exe is a GUI-based debugger. It can debug the same things as KD & CDB using the same commands. Windbg gives you the ability to have multiple windows open simultaneously to review source code or other selectable items under the view menu.

    I like using windbg for all of my user and kernel debugging, while others I work with prefer kd for kernel debugging and cdb for user mode debugging.


    There are 32bit and 64bit debuggers available.

    NOTE: Some people use Visual Studio as well, but this blog post will not cover using Visual Studio as a debugger.


    You can review applications that already have started on your machine using CDB or Windbg. You can have the problematic application launch under the debugger as well:

    Cdb or Windbg

    -p <pid> specifies the decimal process ID to attach to (use tlist or the task manager to obtain the PID)

    -psn <name> specifies the process to attach to by service name

    <application to launch> -y <symbol path>

    NOTE: windbg allows you to use menu options as well: select “Attach to a Process” on the File menu to debug a user-mode application that is currently running.


    What are dumps?

    Memory dumps are a record of what was in memory and the registers at the time of a crash. There are 3 types of memory dumps:

    NOTE: The type of dump that will be written upon bugcheck can be configured by right clicking My Computer -> Properties -> Advanced tab -> Settings; in the “Write debugging information” section, use the first drop-down box to select what type of memory dump you want. (See KB307973)

    ·         Mini dump – is a subset of memory that is in use by the application creating the dump.

    A mini memory dump file is written to the %SystemRoot%\Minidump\ folder by default and is usually less than 1 MB in size.

    ·         Kernel only – This is used to review the machine’s kernel memory at the time of the crash.

    ·         Full/Complete – This is the largest kernel mode dump file. It contains all information from kernel and user mode address spaces that was in physical memory at the time of the dump (about the same size as the physical memory on the box).

    Kernel and Complete Memory Dumps are written to %SystemRoot%\Memory.dmp by default.

    Note: You can configure the server to crash using certain keystrokes. This is useful when troubleshooting a hung server or a timing issue; KB244139 explains how to configure your server for a manual crash.
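As a sketch of what KB244139 configures for a PS/2 keyboard (USB keyboards use a different driver key; consult the KB for specifics, and note that a reboot is required afterward):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1
```

Once enabled, holding the right CTRL key and pressing SCROLL LOCK twice triggers the manual crash.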

    You can also create dump files from an application or process; these are known as user-mode dumps.  Additional information on these types of dumps can be found in the Debugging Tools for Windows help file.


    How do I read a dump file?

    In order to make fast progress with a memory dump file, it is best to load symbol files. Symbol files contain data that the debugger uses to interpret the application or driver code. They may contain:

    -          Global variable names

    -          Function names

    Private Symbols would contain the above information and:

    -          Local variable names

    -          Source-line numbers

    -          Type information for variables, structures, etc.

     Microsoft currently has two ways you can access symbols for the Operating System:

    Service pack download site – You will need to create:

    -          Separate directories for Windows 2000 RTM, Windows 2000 SP1, Windows 2000 SP2, Windows XP RTM, etc.

    -          Separate directories for all of the above for free vs. checked build

    -          Separate directories for hotfix symbols


    Public symbol server – uses a symbol store, which is a collection of symbol files. The symbol server uses the time stamp & file size to match up symbols to the active binary.

    After getting your symbol files together, you will need a way to tell the debugger where they are located and set up some other options.
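To illustrate that matching: for executable images, the symbol store indexes each file by the PE header's TimeDateStamp and SizeOfImage concatenated as hex (PDBs are indexed differently, by GUID and age). A small sketch, with a function name of my own and error handling omitted, that computes this key from the raw bytes of a PE image:

```python
import struct

def pe_symbol_key(data):
    """Compute the symbol-store index key for a PE image:
    TimeDateStamp (8 hex digits) followed by SizeOfImage in hex."""
    # e_lfanew at offset 0x3c points to the "PE\0\0" signature
    pe = struct.unpack_from("<I", data, 0x3c)[0]
    assert data[pe:pe + 4] == b"PE\0\0"
    timestamp = struct.unpack_from("<I", data, pe + 8)[0]       # COFF TimeDateStamp
    size_of_image = struct.unpack_from("<I", data, pe + 80)[0]  # Optional header + 56
    return f"{timestamp:08X}{size_of_image:x}"
```

The store path for a binary is then roughly <filename>\<key>\<filename>, which is why a mismatched timestamp or size makes the debugger reject otherwise plausible symbols.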

    To set the symbol path do one of the following:

    -          _NT_SYMBOL_PATH environment variable

    -          -y command line option

    -          .sympath (Set Symbol Path) debugger command

    -          WinDbg: File | Symbol File Path dialog, or CTRL+S

    To set the executable image path (needed for minidumps only), do one of the following:

    -          -i command line option

    -          .exepath debugger command

    -          WinDbg: File | Image File Path dialog, or CTRL+i

    To set the source path, do one of the following:

    -          .srcpath (Set Source Path) debugger command

    -          WinDbg: File | Source File Path dialog, or CTRL+P

    If symbol errors appear when you begin, you can try the commands below to help narrow down the problem:

    !sym noisy — gives verbose symbol information


    .reload —  to reload all symbols


    Also, using srv* in your symbol path tells the debugger to download symbols as they are needed and save them to a specific downstream store:

    srv*DownstreamStore*<symbol locations>


    NOTE: You must always use .reload after you change the symbol path or fix a symbol error — the debugger doesn’t automatically reload your symbols!


    Now that we are done with the overview, let’s configure our machine as a host computer to open a memory dump.  I will be using the Microsoft public symbol servers, and I want to store current symbols locally on my host machine.

    Using windbg I will set my current workspace symbols to: srv*c:\pubsymbols*http://msdl.microsoft.com/download/symbols

    Click the menu option File ->Symbol File Path or Ctrl + S. This will bring up an empty box that will allow you to enter or browse to your symbol path.

    If using kd you want to set an environment variable (_NT_SYMBOL_PATH) under “my computer properties -> advanced tab” to always start with your symbols set to:  “srv*c:\pubsymbols*http://msdl.microsoft.com/download/symbols” or use this same path in your command line:

    Kd –z <path to dump.file> -y srv*c:\pubsymbols*http://msdl.microsoft.com/download/symbols


    NOTE: Windbg will append any workspace symbol path with the one set by the _NT_SYMBOL_PATH environment variable during loading of a memory dump.

    Ok, now we know what debugger we want to use and we know our symbol locations. Let’s open our first kernel memory dump, located at <drive letter> <path to dump file>

    Using windbg, I will load a dump file using the menu options File -> Open Crash Dump (Ctrl + D), or drag the dump file into the debugger; you can even start windbg at the command prompt.  My command would look like this:

    Windbg  –z C:\training\case 7f\MEMORY051308.22.DMP

    I did not use the –y for symbol path, as it is set already in my default workspace or in my environment variable.

    When the debugger first loads a dump file it displays several lines of information before giving you a prompt to get started with your commands (by default):

    Microsoft (R) Windows Debugger Version 6.9.0003.113 X86  <-- debugger version

    Copyright (c) Microsoft Corporation. All rights reserved. <-- copyright of the debugger creator

    Loading Dump File [C:\training\case 7f\MEMORY051308.22.DMP] <-- location of the dump file loading

    Kernel Summary Dump File: Only kernel address space is available <-- type of memory dump (mini, kernel, or full)

    Symbol search path is: srv*c:\pubsymbols*http://msdl.microsoft.com/download/symbols <-- symbol path for this debug session

    Executable search path is:  <-- points to the directory where the executable files are located. For most situations this is not needed; for other situations please check the debugger help file.


    The next four lines show the OS version, service pack, and how many processors are on the box:

    1 - Windows Server 2003 Kernel Version 3790 (Service Pack 2) MP (8 procs) Free x86 compatible

    2 - Product: Server, suite: Enterprise TerminalServer SingleUserTS

    3 - Built by: 3790.srv03_sp2_gdr.070304-2240

    4 - Kernel base = 0x80800000 PsLoadedModuleList = 0x808a6ea8


    Next we would see when the machine crashed and how long it was up prior to this crash:

    Debug session time: Wed May 14 01:27:36.768 2008 (GMT-4)

    System Uptime: 0 days 16:32:51.921


    After completing the above process, the debugger starts loading the dump file and parsing through the loaded symbols. Here you may notice some warnings for some user space processes which are not included in the kernel dump. This is ok.

    WARNING: Process directory table base BFF0A080 doesn't match CR3 007AF000

    WARNING: Process directory table base BFF0A080 doesn't match CR3 007AF000

    Loading Kernel Symbols


    Loading User Symbols

    PEB is paged out (Peb.Ldr = 7ffdf00c).  Type ".hh dbgerr001" for details

    Loading unloaded module list


    *                                                                             *

    *                        Bugcheck Analysis                                    *

    *                                                                             *



    1- Use !analyze -v to get detailed debugging information.

    2 - BugCheck 7F, {8, f773ffe0, 0, 0}

    3 - *** ERROR: Module load completed but symbols could not be loaded for ql2300.sy

    The three things I want to point out from above are:

    1 - !analyze –v: This is the debugger command used to help analyze a dump file by reviewing information passed to KeBugCheck including specific parameters of that crash. It will analyze this information and provide a definition of the bugcheck, a stack showing all current function calls, and, when possible, the name of an offending driver or process that the debugger thinks is at fault.  Please review the debugger help file for additional information in this area.

    2 – The type of bugcheck that occurred on the machine.

    3 – An error telling you about symbols missing or not available to help diagnose a particular driver or application. This can lead to a misdiagnosis if you’re not careful.

    Once loading is completed you should be at a kd> prompt. This prompt shows you the current processor you are using (if the machine has more than one).

    For this dump we are at processor 3 on an 8 proc machine:

    3: kd>


    To view the current crash stack location you can use the "k" command. There are multiple forms of this command, each one dumping the basic stack plus additional information. As functions execute and call other functions, a call stack is created in stack memory. Here are two common commands to view the stack:


    3: kd> k

    ChildEBP RetAddr

    00000000 baebf0ce nt!KiTrap08+0x75

    b3a4bffc baebf737 storport!RaCallMiniportInterrupt+0x2

    b3a4c008 8088d889 storport!RaidpAdapterInterruptRoutine+0x1d

    b3a4c008 80a59d8e nt!KiInterruptDispatch+0x49

    b3a4c09c 80a5c2fc hal!HalpGenerateInterrupt+0x1d2

    b3a4c0c0 80a5c44d hal!HalpLowerIrqlHardwareInterrupts+0x108

    b3a4c0d0 808256ed hal!KfLowerIrql+0x59



    3: kd> kb

    ChildEBP RetAddr  Args to Child

    00000000 baebf0ce 00000000 00000000 00000000 nt!KiTrap08+0x75

    b3a4bffc baebf737 97bedb88 b3a4c02c 8088d889 storport!RaCallMiniportInterrupt+0x2

    b3a4c008 8088d889 977b9e18 97bedad0 03010006 storport!RaidpAdapterInterruptRoutine+0x1d

    b3a4c008 80a59d8e 977b9e18 97bedad0 03010006 nt!KiInterruptDispatch+0x49

    b3a4c09c 80a5c2fc 97797004 97bedad0 00000102 hal!HalpGenerateInterrupt+0x1d2

    b3a4c0c0 80a5c44d 00000101 977b9e02 b3a4c0d8 hal!HalpLowerIrqlHardwareInterrupts+0x108

    b3a4c0d0 808256ed b3a4c0e8 baebf1c6 977b9bb0 hal!KfLowerIrql+0x59



    Either one can be used, depending on how much information you want to see.

    This completes the Basics of Debugging Windows, Part I. I will create a Part II using specific questions gathered from our readers.


    Miscellaneous information:

    To go further with this topic I would suggest starting with the debugger help file included with the Microsoft Debugging Tools. 

    ADPlus – An automated way to use the cdb.exe to capture/create a usermode dump when a process hangs or crashes. (more info - http://msdn.microsoft.com/en-us/library/cc265629.aspx or kb286350)

    Public Symbols for Microsoft Operating Systems:

    Microsoft public symbol server: srv*DownstreamStore*http://msdl.microsoft.com/download/symbols

    example: srv*c:\mysyms*http://msdl.microsoft.com/download/symbols

     Microsoft Symbol packages http://www.microsoft.com/whdc/devtools/debugging/symbolpkg.mspx#d

    Use !analyze -v to gather additional information about the bugcheck and a bucket-id for your dump file. The bucket-id can be submitted to Microsoft for review for similar crashes and resolutions. Try using Microsoft Online Crash Analysis to submit your crash dump bucket-id for possible follow-up from Microsoft, or for Microsoft to look for trends: http://oca.microsoft.com/en/Welcome.aspx

    For concepts, tools and information about the system architecture:


    Windows Internals, 4th Edition (by Mark E. Russinovich & David A. Solomon) – the whole book, or Chapter 14, "Crash Dump Analysis"

    Advanced Windows Debugging (by Mario Hewardt & Daniel Pravat)


  • Ntdebugging Blog

    Configuring a Hyper-V VM For Kernel Debugging


    Yesterday's blog prompted some questions about how to set up a debugger for a Windows OS running in a Hyper-V VM.  I was surprised that I wasn't able to find good, publicly available, Microsoft issued documentation for this configuration.


    The first step is to configure the Windows OS in the VM to enable a kernel debugger on COM1.  You would use these same steps if you were preparing the OS to be debugged over a null modem cable.  Hyper-V will allow us to redirect the COM port so that we don't need such a cable.

    1. Start an administrative command prompt.
    2. Turn on debugging with this command:
      bcdedit /debug on
    3. Configure the debugger to use COM1 with this command:
      bcdedit /dbgsettings SERIAL DEBUGPORT:1 BAUDRATE:115200
      Note that these are the default settings and already exist in most bcd stores.  However, setting them again won't damage anything, and it guards against a situation where the dbgsettings have been previously modified.
    4. Reboot so that the boot loader can read the new settings and configure the OS for debugging.
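Before rebooting, you can sanity-check the configuration; running bcdedit /dbgsettings with no further parameters displays the current values:

```
bcdedit /dbgsettings
rem output should list debugtype Serial, debugport 1, baudrate 115200
```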



    Next, configure Hyper-V to redirect the COM1 port to a named pipe.  We will use this pipe in place of a traditional null modem cable.

    1. Open Hyper-V Manager and browse to the settings page of the VM you configured to debug.
    2. Under the Hardware list choose COM 1.
    3. Change the Attachment to 'Named pipe:' and provide a pipe name.
      1. Note that the Hyper-V Manager provides the complete path to your named pipe.  Make a note of this path as you will need it in the next step.



    After the OS and the VM are configured for debugging, we need to connect a debugger.

    1. On the Hyper-V parent partition download and install the Debugging Tools for Windows from http://msdn.microsoft.com/en-us/windows/hardware/gg463009.
    2. After installing the debugging tools you will have a ‘Debugging Tools for Windows’ entry in your start menu.
      1. From this folder right click ‘WinDbg’ and choose ‘Run as administrator’.  Windbg needs administrative rights to connect to the pipe.
    3. In windbg open the File menu and choose ‘Kernel Debug’.
    4. Enter a Baud Rate of 115200, to match the settings made in the VM.
    5. Enter the Port that you configured in the VM settings page.
      1. To connect to the pipe remotely, substitute the '.' in the path with the Hyper-V server name.
    6. Ensure that the Pipe and Reconnect boxes are checked.
    7. Set Resets to 0.
    8. Click OK to start debugging.
    9. Windbg should display the string ' Waiting to reconnect...'
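Steps 3 through 8 can also be performed in one shot from the command line when launching windbg. A sketch, assuming the pipe was named 'debugpipe' and the debugger runs on the Hyper-V host itself (substitute the host name for '.' when connecting remotely):

```
windbg -k com:pipe,port=\\.\pipe\debugpipe,resets=0,reconnect
```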



    To test the debugger connection in windbg, from the ‘Debug’ menu choose ‘Break’.  This should cause the server to break into the debugger and display a kd> prompt.  Please note that breaking into the debugger will cause the OS running in the VM to halt until you tell the debugger to go; the OS will appear to be hung during this time.  The command 'g' followed by Enter tells the debugger to ‘go’, causing the VM to resume operation.



  • Ntdebugging Blog

    Video: Using XPERF to root cause CPU consumption


    Hi Debuggers,


    Here is a short video showing how to root cause CPU consumption issues on Vista/Server 2008 and higher operating systems with Xperf.


    For this scenario, Xperf is an easy-to-implement and powerful replacement for previous approaches such as gathering a series of dumps during the high-CPU condition, collecting kernrate data, perfmon logs, or output from other profilers.


    Xperf allows one to quickly turn on profiling and see where the CPU time is being spent by using tracing already in the box!






    Note - Feel free to post questions here, or jump into a discussion about the video on our Twitter page ( @ntdebugging ).



  • Ntdebugging Blog

    What Is In A RHS Dump File Created By Windows Error Reporting


    Hello all, East here.  I wanted to give you a hint on how to use a RHS dump to find what thread was part of the Windows Server 2008 R2 cluster RHS recovery deadlock.


    First let me start off with letting you know that Windows Server 2008 R2 will create two types of user-mode dumps:

    1 - A heap dump (.hdmp) - which is an extended mini-dump that contains additional data such as the process memory.

    2 - A mini dump (.mdmp) - which mainly contains stack information, and not enough detail to help you get much further.


    When a Windows Server 2008 R2 cluster has a resource that does not respond to one of the main cluster queries in a timely fashion, RHS will try to recover that resource by restarting it in an isolated process as well as creating a WER report and a user-mode dump of the deadlock.


    NOTE: More info about RHS and WER reports can be seen in the following blog - http://blogs.msdn.com/b/clustering/archive/2009/06/27/9806160.aspx


    When a deadlock is detected the cluster log will show:

    "00000dd4.00000aac::2011/05/13-16:12:39.559 ERR   [RHS] WER report is submitted. Result : WerReportQueued."


    You should find the WER report file and dumps located at the default location:

    <SystemDrive>\Users\All Users\Microsoft\Windows\WER\ReportQueue


    Here is what a cluster created Windows Error Report looks like:


    EventType=WSFC Resource Deadlock






    Sig[0].Name=Resource Name

    Sig[0].Value=New flexdisk

    Sig[1].Name=Resource Type


    Sig[2].Name=Call Type


    DynamicSig[1].Name=OS Version


    DynamicSig[2].Name=Locale ID



    File[0].Path=WERC1F2.tmp.hdmp <----------------------------- Name of the dump file   









    FriendlyEventName=WSFC Resource Deadlock

    ConsentKey=WSFC Resource Host Monitor

    AppName=Failover Cluster Resource Host Subsystem <--------- The application that had an issue - RHS


    ReportDescription=WSFC Resource Deadlock


    Now you have the area to look for in the cluster log, and an idea of which resource caused RHS to deadlock, and what it was doing at the time:


    The Resource Name = New flexdisk

    The Resource Type = flexdisk

    The type of call the resource was engaged in = ISALIVE


    You have all this information in the cluster log as well:

    00000dd4.00000aac::2011/05/13-16:12:39.136 ERR   [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out for resource 'New flexdisk'.  RHS deadlock during an Isalive check for the New flexdisk resource.

    00000dd4.00000aac::2011/05/13-16:12:39.136 INFO  [RHS] Enabling RHS termination watchdog with timeout 1200000 and recovery action 3.

    00000dd4.00000aac::2011/05/13-16:12:39.136 ERR   [RHS] Resource New flexdisk handling deadlock. Cleaning current operation and terminating RHS process.

    00000dd4.00000aac::2011/05/13-16:12:39.136 ERR   [RHS] About to send WER report.

    000004dc.00000fb4::2011/05/13-16:12:39.136 WARN  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'New flexdisk', gen(3) result 4.

    000004dc.00000fb4::2011/05/13-16:12:39.136 INFO  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'New flexdisk' consecutive failure count 1.

    00000dd4.00000aac::2011/05/13-16:12:39.559 ERR   [RHS] WER report is submitted. Result : WerReportQueued.  WER report and user-mode dump created

    000004dc.00000fb4::2011/05/13-16:12:39.559 ERR   [RCM] rcm::RcmMonitor::RecoverProcess: Recovering monitor process 3540 / 0xdd4

    000004dc.00000fb4::2011/05/13-16:12:39.575 INFO  [RCM] Created monitor process 3280 / 0xcd0

    00000cd0.00000200::2011/05/13-16:12:39.575 INFO  [RHS] Initializing.

    000004dc.00000fb4::2011/05/13-16:12:39.591 INFO  [RCM] rcm::RcmResource::ReattachToMonitorProcess: (New flexdisk, OfflineCallIssued)

    000004dc.00000fb4::2011/05/13-16:12:39.591 WARN  [RCM] Canceling pending control GET_RO_PRIVATE_PROPERTIES for resource 'New flexdisk' due to monitor crash.
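    Each cluster log line starts with a process id and thread id in hex, followed by a timestamp; those ids are what let us match a log entry to a debugger thread.  As a rough sketch (the helper and its field names are mine, not part of any cluster tool), the prefix can be pulled apart like this:

```python
import re
from datetime import datetime

# Prefix format observed above: "pid.tid::yyyy/mm/dd-hh:mm:ss.mmm LEVEL [COMP] msg"
# (pid and tid are zero-padded hex, matching the debugger's process/thread ids).
LOG_RE = re.compile(
    r"^(?P<pid>[0-9a-f]{8})\.(?P<tid>[0-9a-f]{8})::"
    r"(?P<ts>\d{4}/\d{2}/\d{2}-\d{2}:\d{2}:\d{2}\.\d{3}) "
    r"(?P<level>\w+) +\[(?P<comp>\w+)\] (?P<msg>.*)$"
)

def parse_cluster_log_line(line):
    """Split one cluster log line into pid, tid, timestamp, level, component, message."""
    m = LOG_RE.match(line.strip())
    if m is None:
        return None
    d = m.groupdict()
    d["pid"] = int(d["pid"], 16)   # e.g. 00000dd4 -> 0xdd4, the RHS process
    d["tid"] = int(d["tid"], 16)   # e.g. 00000aac -> 0xaac, the deadlock monitor thread
    d["ts"] = datetime.strptime(d["ts"], "%Y/%m/%d-%H:%M:%S.%f")
    return d
```

    The pid.tid pair 00000dd4.00000aac is exactly the dd4.aac you will see in the debugger's thread list.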


    Now, if you want to get adventurous and review the dump file to see the call stacks of RHS and what the resource was doing, look at the WER dump with the “.hdmp” extension.  Since we are looking at a process with multiple threads, we need to find the one showing the WER reporting calls and the one showing the RHS deadlock.


    After opening the dump in your debugger, run the “~*k” command.  In the example below, thread 3 is the one we are interested in.  You can tell because its process and thread id match the cluster log above.  If you don’t have a cluster log, it is the thread with wer.dll on the stack.

    0:000> ~*k


    .  0  Id: dd4.9d4 Suspend: 0 Teb: 000007ff`fffdd000 Unfrozen

    Child-SP          RetAddr           Call Site

    00000000`001cf128 000007fe`fd611420 ntdll!ZwWaitForMultipleObjects+0xa

    00000000`001cf130 00000000`772c1220 KERNELBASE!WaitForMultipleObjectsEx+0xe8

    00000000`001cf230 00000000`ff763d53 kernel32!WaitForMultipleObjects+0xb0

    00000000`001cf2c0 00000000`ff74802c rhs!cxl::WaitHandle::WaitAny+0x3f

    00000000`001cf3b0 00000000`ff7458a1 rhs!Rhs::WaitForTermination+0x74

    00000000`001cf450 00000000`ff769099 rhs!wmain+0x4b1

    00000000`001cfbd0 00000000`772c652d rhs!__wmainCRTStartup+0x13d

    00000000`001cfc10 00000000`774fc521 kernel32!BaseThreadInitThunk+0xd

    00000000`001cfc40 00000000`00000000 ntdll!RtlUserThreadStart+0x1d


       1  Id: dd4.d0c Suspend: 0 Teb: 000007ff`fffdb000 Unfrozen

    Child-SP          RetAddr           Call Site

    00000000`00c3fa98 000007fe`fd611420 ntdll!ZwWaitForMultipleObjects+0xa

    00000000`00c3faa0 00000000`772c1220 KERNELBASE!WaitForMultipleObjectsEx+0xe8

    00000000`00c3fba0 00000000`ff763d53 kernel32!WaitForMultipleObjects+0xb0

    00000000`00c3fc30 00000000`ff75785c rhs!cxl::WaitHandle::WaitAny+0x3f

    00000000`00c3fd20 00000000`ff766988 rhs!PingAliveCallScheduler::Run+0x184

    00000000`00c3fda0 00000000`772c652d rhs!cxl::Thread::ThreadStartRoutine+0x18

    00000000`00c3fdf0 00000000`774fc521 kernel32!BaseThreadInitThunk+0xd

    00000000`00c3fe20 00000000`00000000 ntdll!RtlUserThreadStart+0x1d


       2  Id: dd4.e78 Suspend: 0 Teb: 000007ff`fffd9000 Unfrozen

    Child-SP          RetAddr           Call Site

    00000000`00baf998 00000000`774eb007 ntdll!ZwWaitForMultipleObjects+0xa

    00000000`00baf9a0 00000000`772c652d ntdll!TppWaiterpThread+0x14d

    00000000`00bafc40 00000000`774fc521 kernel32!BaseThreadInitThunk+0xd

    00000000`00bafc70 00000000`00000000 ntdll!RtlUserThreadStart+0x1d


       3  Id: dd4.aac Suspend: 0 Teb: 000007ff`fffd7000 Unfrozen

    Child-SP          RetAddr           Call Site

    00000000`00cbe5e8 000007fe`fd611420 ntdll!ZwWaitForMultipleObjects+0xa

    00000000`00cbe5f0 00000000`772d2d53 KERNELBASE!WaitForMultipleObjectsEx+0xe8

    00000000`00cbe6f0 00000000`773e8f7d kernel32!WaitForMultipleObjectsExImplementation+0xb3

    00000000`00cbe780 00000000`773e62b2 user32!RealMsgWaitForMultipleObjectsEx+0x12a

    00000000`00cbe820 00000000`773e62e0 user32!MsgWaitForMultipleObjectsEx+0x46

    00000000`00cbe860 000007fe`f6f3c35c user32!MsgWaitForMultipleObjects+0x20

    00000000`00cbe8a0 000007fe`f6f6dbd0 wer!UtilMsgWaitForMultipleObjects+0x54

    00000000`00cbe930 000007fe`f6f54158 wer!CReportManager::ReportProblemOutOfProcess+0x5c4

    00000000`00cbecc0 000007fe`f6f3b99b wer!CReportManager::ReportProblem+0x1eb

    00000000`00cbede0 000007fe`f6f61cbe wer!CReportHandleInstance::SubmitReport+0x386

    00000000`00cbee50 00000000`ff750dcf wer!WerReportSubmit+0x16a

    00000000`00cbeef0 00000000`ff75117b rhs!RhsResource::WERReportDeadlock+0x43f

    00000000`00cbfaf0 00000000`ff74b270 rhs!RhsResource::HandleDeadlock+0x187

    00000000`00cbfb60 00000000`772c652d rhs!RhsCall::DeadlockMonitor+0x324

    00000000`00cbfc60 00000000`774fc521 kernel32!BaseThreadInitThunk+0xd

    00000000`00cbfc90 00000000`00000000 ntdll!RtlUserThreadStart+0x1d


       4  Id: dd4.e48 Suspend: 0 Teb: 000007ff`fffae000 Unfrozen

    Child-SP          RetAddr           Call Site

    00000000`00e2f0f8 000007fe`fd611203 ntdll!ZwDelayExecution+0xa

    00000000`00e2f100 000007fe`f78f1ad9 KERNELBASE!SleepEx+0xab

    00000000`00e2f1a0 00000000`00326aa0 FlexRes+0x11ad9

    00000000`00e2f1a8 00000000`00000000 0x326aa0


    <snipped to save space>


    We will switch context to thread 3 using the “~3s” command, and follow that with “kn” to show the stack with frame numbers.  Debug tip: since we already know that its thread id is “aac”, we can also switch directly to thread 3 using its thread id with the following command syntax: “~~[aac]s”

    0:003> kn

    # Child-SP          RetAddr           Call Site

    00 00000000`00cbe5e8 000007fe`fd611420 ntdll!NtWaitForMultipleObjects+0xa

    01 00000000`00cbe5f0 00000000`772d2d53 KERNELBASE!WaitForMultipleObjectsEx+0xe8

    02 00000000`00cbe6f0 00000000`773e8f7d kernel32!WaitForMultipleObjectsExImplementation+0xb3

    03 00000000`00cbe780 00000000`773e62b2 user32!RealMsgWaitForMultipleObjectsEx+0x12a

    04 00000000`00cbe820 00000000`773e62e0 user32!MsgWaitForMultipleObjectsEx+0x46

    05 00000000`00cbe860 000007fe`f6f3c35c user32!MsgWaitForMultipleObjects+0x20

    06 00000000`00cbe8a0 000007fe`f6f6dbd0 wer!UtilMsgWaitForMultipleObjects+0x54

    07 00000000`00cbe930 000007fe`f6f54158 wer!CReportManager::ReportProblemOutOfProcess+0x5c4

    08 00000000`00cbecc0 000007fe`f6f3b99b wer!CReportManager::ReportProblem+0x1eb

    09 00000000`00cbede0 000007fe`f6f61cbe wer!CReportHandleInstance::SubmitReport+0x386

    0a 00000000`00cbee50 00000000`ff750dcf wer!WerReportSubmit+0x16a

    0b 00000000`00cbeef0 00000000`ff75117b rhs!RhsResource::WERReportDeadlock+0x43f

    0c 00000000`00cbfaf0 00000000`ff74b270 rhs!RhsResource::HandleDeadlock+0x187

    0d 00000000`00cbfb60 00000000`772c652d rhs!RhsCall::DeadlockMonitor+0x324

    0e 00000000`00cbfc60 00000000`774fc521 kernel32!BaseThreadInitThunk+0xd

    0f 00000000`00cbfc90 00000000`00000000 ntdll!RtlUserThreadStart+0x1d


    The frame of interest is frame d.  You can either set the context ( /c ) or just display the registers ( /r ) for that frame:

    0:003> .frame /r d

    0d 00000000`00cbfb60 00000000`772c652d rhs!RhsCall::DeadlockMonitor+0x324

    rax=000007fffffd7000 rbx=0000000000346920 rcx=00000000007316c0

    rdx=0000000000000000 rsi=000000000035f8b0 rdi=000000000035f8b0

    rip=00000000ff74b270 rsp=0000000000cbfb60 rbp=0000000000000000

    r8=00000000ffffffff  r9=00000000000004ff r10=0000000000000000

    r11=0000000000cbe810 r12=0000000000000000 r13=0000000000000000

    r14=0000000000000000 r15=000000004dcd5877

    iopl=0         nv up ei pl zr na po nc

    cs=0033  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00000246


    00000000`ff74b270 448b442430      mov     r8d,dword ptr [rsp+30h] ss:00000000`00cbfb90=ffffffff


    Here is where we get into tricky territory about where to look.  I know from experience that the rdi register above will end up containing the information for this resource.  Another way to verify you are in the right area is that dumping rdi will show the call type.


    Dump that address to find the thread running the deadlocked resource:

    0:003> dqs 000000000035f8b0

    00000000`0035f8b0  00000000`ff704eb8 rhs!IsAliveCall::`vftable' == CallType

    00000000`0035f8b8  00000000`00000004

    00000000`0035f8c0  00000000`001e7da0

    00000000`0035f8c8  00000000`00000e48 <-- Thread id of concerning resource

    00000000`0035f8d0  00000000`00346920

    00000000`0035f8d8  0000000c`00000001

    00000000`0035f8e0  00000000`00000003

    00000000`0035f8e8  00000000`4dcd5876

    00000000`0035f8f0  00000000`048d2dd3

    00000000`0035f8f8  00000000`00000100

    00000000`0035f900  00000001`00000100

    00000000`0035f908  00000000`00000000

    00000000`0035f910  00430000`00000031

    00000000`0035f918  8a0000a4`52e48988

    00000000`0035f920  004c0041`00530049

    00000000`0035f928  00450045`00560049


    If we scan our previous “~*k” output, we find that the thread id above (e48) matches the thread running the problem resource, FlexRes:

       4  Id: dd4.e48 Suspend: 0 Teb: 000007ff`fffae000 Unfrozen

    Child-SP          RetAddr           Call Site

    00000000`00e2f0f8 000007fe`fd611203 ntdll!ZwDelayExecution+0xa

    00000000`00e2f100 000007fe`f78f1ad9 KERNELBASE!SleepEx+0xab

    00000000`00e2f1a0 00000000`00326aa0 FlexRes+0x11ad9

    00000000`00e2f1a8 00000000`00000000 0x326aa0


    In this case I was using an in-house test app that just went to sleep instead of responding to the LooksAlive() check, which caused IsAlive() to fail.
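    Conceptually, RhsCall::DeadlockMonitor just races each resource entry-point call against a deadline.  Here is a minimal sketch of that idea; the function name, return values, and the timeout constant are all illustrative, not the real RHS implementation:

```python
# Illustrative deadline check: a resource call (LooksAlive/IsAlive) that has not
# returned by its deadline is declared deadlocked, which in real RHS triggers the
# WER report, the dump, and the monitor-process restart seen in the cluster log.
DEADLOCK_TIMEOUT_MS = 5 * 60 * 1000  # assumed value, for illustration only

def check_call(call_type, started_ms, now_ms, completed):
    """Return the state of one in-flight resource entry-point call."""
    if completed:
        return "ok"
    if now_ms - started_ms >= DEADLOCK_TIMEOUT_MS:
        return "deadlock:" + call_type   # recover: dump + restart in isolated process
    return "pending"
```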


    Also, if you check the cluster log about five minutes prior to the RhsCall::DeadlockMonitor error, you will see the same resource and thread id, showing what was going on before the deadlock:

    00000dd4.00000e48::2011/05/13-16:07:37.270 INFO  [RES] flexdisk <New flexdisk>: Set new entry point  = ISALIVEERROR:1722,ISALIVEFAILTYPE:1,ISALIVEHANG:1 Error = 0.

    000004dc.00000fd4::2011/05/13-16:07:37.286 WARN  [RCM] ResourceControl(SET_PRIVATE_PROPERTIES) to New flexdisk returned 5024.

    00000dd4.00000e48::2011/05/13-16:07:38.023 INFO  [RES] flexdisk <New flexdisk>: Set new entry point  = ISALIVEERROR:1722,ISALIVEFAILTYPE:1,ISALIVEHANG:0 Error = 0.

    00000dd4.00000e48::2011/05/13-16:07:38.023 INFO  [RES] flexdisk <New flexdisk>: Going to Hang in IsAlive.


    In most cases the cluster log can provide enough information to help you narrow down possible cluster resource issues.  If not, the user-mode dump may be able to help verify additional data, especially if you are debugging a third-party resource DLL that is deadlocking RHS.

  • Ntdebugging Blog

    Part 1 - ETW Introduction and Overview




    Event Tracing for Windows (ETW) is a system and software diagnostic, troubleshooting, and performance monitoring component of Windows that has been around since Windows 2000.  However, it wasn't until Windows Vista that major components of the OS were updated to make heavy use of ETW tracing, making it much more practical and useful.

    ETW is useful in a variety of scenarios, including:

     -User & Admin: Being in control of your system and knowing what is going on.
     -User & Admin: Troubleshooting performance, hardware, and OS issues.
     -Enthusiast: Learning about the OS and its low-level internals.
     -Software Developer/ISV/OEM: Investigating issues with your software's interaction with Microsoft OS & technologies.
     -Hardware Developer/IHV/OEM: Investigating issues with hardware interaction with the OS, including the kernel and driver subsystems, up to the user stack.

    ETW is a set of technologies and tools that can absolutely complement existing tools while providing a look into the guts of the OS at a very low level.

    A great article from the April 2007 edition of MSDN Magazine (http://msdn.microsoft.com/en-us/magazine/cc163437.aspx) covers ETW in great depth and is recommended reading.

    Here is a graphical overview of the ETW infrastructure, showing how providers log to high-performance in-memory buffers, which can be kept in memory as a circular buffer or written to disk in a sequential or circular fashion.



    Some uses of ETW:


    Today, on Windows Vista and Windows 7 (and sparingly on earlier OSes), ETW is used by Microsoft development and support teams, as well as others, to help troubleshoot issues, find the root cause of bugs, analyze performance, and accomplish a large variety of other tasks.

    As it stands today, ETW has two major thrusts:

    1. Exposing time-based event data similar in practicality to plain old text-based logging (without the usual performance overhead). Troubleshooting logs have long been used across the industry by software developers and IT professionals to troubleshoot issues. Here are some examples of how ETW logging is used.

    First off, shown below is the Windows Event Viewer with viewing of analytic and debug logs enabled (which shows many more logs than the default view):


    A variety of logs can be viewed graphically using the Event Viewer or via the built-in command-line OS utility, wevtutil. There are quite a variety of logs that are enabled by default and are available via the Event Viewer or in a kernel memory dump.

    Since ETW is also a foundational component, you might find it appearing in other troubleshooting utilities. For example, Office Communications Server 2007 uses ETW for troubleshooting.

    Process Monitor, a utility from Microsoft Sysinternals, uses ETW to monitor network events from the kernel and add them to its trace. You should also see tools such as netsh and NetMon being able to capture and decode ETW traces.



    There are many other methods and utilities to view the available ETW tracing providers, which may be found in the OS, via the Driver Development Kit (DDK), shipped with Microsoft products (such as Exchange or IIS), or used by tools such as Office Communications Server Logging.

    The second major thrust of ETW is exposing performance data.

    2. Performance data is exposed from the lowest levels of the kernel, driver subsystems, and every major user-mode component of Windows (such as Windows Media Player).

    Performance data can be used in a hybrid fashion both for troubleshooting and performance reasons. Take for example a built in OS tool that uses ETW tracing to provide rich data - Resource Monitor.

    Resource Monitor is available from the Windows Task Manager's Performance tab. The cool thing about Resource Monitor is that it internally uses kernel ETW tracing to gather its data, providing rich data from the kernel itself, and thus a practical exposure of ETW technology for administrators troubleshooting their systems. This goes beyond the data that Task Manager provides, which tends to be based on performance counters that you cannot drill down into further.



    Resource Monitor is useful from both a performance standpoint, as well as troubleshooting. For example, you can view top cpu users, top disk users (why your disk is thrashing), or search for a locked file handle that is open.

    For a deeper drill-down into performance data, Xperf can be used to capture ETW traces and make them available for decoding offline (on another box).

    Here, Xperf (available with the Windows DDK and as a standalone download) is shown looking at Internet Explorer's file access time.


    There have been several posts regarding Xperf on the NTDebugging Blog and other blogs, as well as on the Windows Performance Analysis Dev Center.

    Where to go from here:


    In Windows 7 and Windows Server 2008 R2, ETW tracing has been further improved, with more than 600 inbox providers registered on the box. This number increases when add-on features or products are installed (such as Hyper-V).

    The tracing provided by ETW, whether it is used directly in the OS, or via add-on tools, provides deep insight into Windows and how it works. We will explore ETW in depth in future blog posts.




  • Ntdebugging Blog

    Understanding Pool Corruption Part 1 – Buffer Overflows


    Before we can discuss pool corruption we must understand what pool is.  Pool is kernel-mode memory used as a storage space for drivers.  Pool is organized in a similar way to how you might use a notepad when taking notes from a lecture or a book:  some notes may be one line, others may be many lines, and many different notes are on the same page.


    Memory is also organized into pages; a page of memory is typically 4KB.  The Windows memory manager breaks this 4KB page up into smaller blocks.  One block may be as small as 8 bytes or possibly much larger.  Each of these blocks exists side by side with other blocks.


    The !pool command can be used to see the pool blocks stored in a page.


    kd> !pool fffffa8003f42000

    Pool page fffffa8003f42000 region is Nonpaged pool

    *fffffa8003f42000 size:  410 previous size:    0  (Free)      *Irp

                Pooltag Irp  : Io, IRP packets

    fffffa8003f42410 size:   40 previous size:  410  (Allocated)  MmSe

    fffffa8003f42450 size:  150 previous size:   40  (Allocated)  File

    fffffa8003f425a0 size:   80 previous size:  150  (Allocated)  Even

    fffffa8003f42620 size:   c0 previous size:   80  (Allocated)  EtwR

    fffffa8003f426e0 size:   d0 previous size:   c0  (Allocated)  CcBc

    fffffa8003f427b0 size:   d0 previous size:   d0  (Allocated)  CcBc

    fffffa8003f42880 size:   20 previous size:   d0  (Free)       Free

    fffffa8003f428a0 size:   d0 previous size:   20  (Allocated)  Wait

    fffffa8003f42970 size:   80 previous size:   d0  (Allocated)  CM44

    fffffa8003f429f0 size:   80 previous size:   80  (Allocated)  Even

    fffffa8003f42a70 size:   80 previous size:   80  (Allocated)  Even

    fffffa8003f42af0 size:   d0 previous size:   80  (Allocated)  Wait

    fffffa8003f42bc0 size:   80 previous size:   d0  (Allocated)  CM44

    fffffa8003f42c40 size:   d0 previous size:   80  (Allocated)  Wait

    fffffa8003f42d10 size:  230 previous size:   d0  (Allocated)  ALPC

    fffffa8003f42f40 size:   c0 previous size:  230  (Allocated)  EtwR


    Because many pool allocations are stored in the same page, it is critical that every driver use only the space it has allocated.  If DriverA uses more space than it allocated, it will write into the next driver’s space (DriverB) and corrupt DriverB’s data.  This overwrite into the next driver’s space is called a buffer overflow.  Later, either the memory manager or DriverB will attempt to use this corrupted memory and will encounter unexpected information.  This unexpected information typically results in a blue screen.
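    A toy model of the page layout makes this failure mode concrete.  Assuming each block carries a header recording its size and the previous block's size (as in the !pool output above), a chain walker can detect when an overflow has stomped a neighbor's header.  The helper below is a simplified sketch, not the real pool implementation:

```python
PAGE_SIZE = 0x1000  # a typical 4KB page

def walk_pool_page(blocks):
    """Walk (offset, size, prev_size) headers the way !pool does and
    report the first header whose fields do not chain correctly."""
    expected_offset, expected_prev = 0, 0
    for off, size, prev in blocks:
        if off != expected_offset or prev != expected_prev:
            return "corrupt header @ %#x" % off
        expected_offset += size
        expected_prev = size
    return "ok"
```

    With the sizes from the healthy page above the walk succeeds; change one previous-size field and the walk fails at that header, which is essentially the "Bad previous allocation size" message !pool prints on a corrupted page.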


    The NotMyFault application from Sysinternals has an option to force a buffer overflow.  This can be used to demonstrate pool corruption.  Choosing the “Buffer overflow” option and clicking “Crash” will cause a buffer overflow in pool.  The system may not immediately blue screen after clicking the Crash button.  The system will remain stable until something attempts to use the corrupted memory.  Using the system will often eventually result in a blue screen.




    Often pool corruption appears as a stop 0x19 BAD_POOL_HEADER or stop 0xC2 BAD_POOL_CALLER.  These stop codes make it easy to determine that pool corruption is involved in the crash.  However, the results of accessing unexpected memory can vary widely; as a result, pool corruption can cause many different types of bugchecks.


    As with any blue screen dump analysis the best place to start is with !analyze -v.  This command will display the stop code and parameters, and do some basic interpretation of the crash.


    kd> !analyze -v


    *                                                                             *

    *                        Bugcheck Analysis                                    *

    *                                                                             *




    An exception happened while executing a system service routine.


    Arg1: 00000000c0000005, Exception code that caused the bugcheck

    Arg2: fffff8009267244a, Address of the instruction which caused the bugcheck

    Arg3: fffff88004763560, Address of the context record for the exception that caused the bugcheck

    Arg4: 0000000000000000, zero.


    In my example the bugcheck was a stop 0x3B SYSTEM_SERVICE_EXCEPTION.  The first parameter of this stop code is c0000005, which is a status code for an access violation.  An access violation is an attempt to access invalid memory (this error is not related to permissions).  Status codes can be looked up in the WDK header ntstatus.h.


    The !analyze -v command also provides a helpful shortcut to get into the context of the failure.


    CONTEXT:  fffff88004763560 -- (.cxr 0xfffff88004763560;r)


    Running this command shows us the registers at the time of the crash.


    kd> .cxr 0xfffff88004763560

    rax=4f4f4f4f4f4f4f4f rbx=fffff80092690460 rcx=fffff800926fbc60

    rdx=0000000000000000 rsi=0000000000001000 rdi=0000000000000000

    rip=fffff8009267244a rsp=fffff88004763f60 rbp=fffff8009268fb40

    r8=fffffa8001a1b820  r9=0000000000000001 r10=fffff800926fbc60

    r11=0000000000000011 r12=0000000000000000 r13=fffff8009268fb48

    r14=0000000000000012 r15=000000006374504d

    iopl=0         nv up ei pl nz na po nc

    cs=0010  ss=0018  ds=002b  es=002b  fs=0053  gs=002b             efl=00010206


    fffff800`9267244a 4c8b4808        mov     r9,qword ptr [rax+8] ds:002b:4f4f4f4f`4f4f4f57=????????????????


    From the above output we can see that the crash occurred in ExAllocatePoolWithTag, which is a good indication that the crash is due to pool corruption.  Often an engineer looking at a dump will stop at this point and conclude that the crash was caused by corruption; however, we can go further.


    The instruction that we failed on was dereferencing rax+8.  The rax register contains 4f4f4f4f4f4f4f4f, which does not fit with the canonical form required for pointers on x64 systems.  This tells us that the system crashed because the data in rax is expected to be a pointer but it is not one.
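    The canonical-form test is simple to state: on current x64 hardware, bits 48 through 63 of a virtual address must be copies of bit 47 (sign extension).  A quick sketch of the check:

```python
def is_canonical_x64(addr):
    """True if the top 17 bits of a 64-bit address are all 0 or all 1,
    i.e. bits 48-63 sign-extend bit 47."""
    top = addr >> 47
    return top == 0 or top == 0x1FFFF
```

    0x4f4f4f4f4f4f4f4f fails the check, while real kernel and user addresses from this dump pass it.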


    To determine why rax does not contain the expected data we must examine the instructions prior to where the failure occurred.


    kd> ub .

    nt!KzAcquireQueuedSpinLock [inlined in nt!ExAllocatePoolWithTag+0x421]:

    fffff800`92672429 488d542440      lea     rdx,[rsp+40h]

    fffff800`9267242e 49875500        xchg    rdx,qword ptr [r13]

    fffff800`92672432 4885d2          test    rdx,rdx

    fffff800`92672435 0f85c3030000    jne     nt!ExAllocatePoolWithTag+0x7ec (fffff800`926727fe)

    fffff800`9267243b 48391b          cmp     qword ptr [rbx],rbx

    fffff800`9267243e 0f8464060000    je      nt!ExAllocatePoolWithTag+0xa94 (fffff800`92672aa8)

    fffff800`92672444 4c8b03          mov     r8,qword ptr [rbx]

    fffff800`92672447 498b00          mov     rax,qword ptr [r8]


    The assembly shows that rax originated from the data pointed to by r8.  The .cxr command we ran earlier shows that r8 is fffffa8001a1b820.  If we examine the data at fffffa8001a1b820 we see that it matches the contents of rax, which confirms this memory is the source of the unexpected data in rax.


    kd> dq fffffa8001a1b820 l1

    fffffa80`01a1b820  4f4f4f4f`4f4f4f4f


    To determine if this unexpected data is caused by pool corruption we can use the !pool command.


    kd> !pool fffffa8001a1b820

    Pool page fffffa8001a1b820 region is Nonpaged pool

     fffffa8001a1b000 size:  810 previous size:    0  (Allocated)  None


    fffffa8001a1b810 doesn't look like a valid small pool allocation, checking to see

    if the entire page is actually part of a large page allocation...


    fffffa8001a1b810 is not a valid large pool allocation, checking large session pool...

    fffffa8001a1b810 is freed (or corrupt) pool

    Bad previous allocation size @fffffa8001a1b810, last size was 81



    *** An error (or corruption) in the pool was detected;

    *** Attempting to diagnose the problem.


    *** Use !poolval fffffa8001a1b000 for more details.



    Pool page [ fffffa8001a1b000 ] is __inVALID.


    Analyzing linked list...

    [ fffffa8001a1b000 --> fffffa8001a1b010 (size = 0x10 bytes)]: Corrupt region



    Scanning for single bit errors...


    None found


    The above output does not look like the output of the !pool command we used earlier.  It shows corruption of the pool header, which prevented the command from walking the chain of allocations.


    The above output shows that there is an allocation at fffffa8001a1b000 of size 810.  At the end of that allocation (fffffa8001a1b000 + 810) we should see the next pool header.  Instead, what we see is a pattern of 4f4f4f4f`4f4f4f4f.


    kd> dq fffffa8001a1b000 + 810

    fffffa80`01a1b810  4f4f4f4f`4f4f4f4f 4f4f4f4f`4f4f4f4f

    fffffa80`01a1b820  4f4f4f4f`4f4f4f4f 4f4f4f4f`4f4f4f4f

    fffffa80`01a1b830  4f4f4f4f`4f4f4f4f 00574f4c`46524556

    fffffa80`01a1b840  00000000`00000000 00000000`00000000

    fffffa80`01a1b850  00000000`00000000 00000000`00000000

    fffffa80`01a1b860  00000000`00000000 00000000`00000000

    fffffa80`01a1b870  00000000`00000000 00000000`00000000

    fffffa80`01a1b880  00000000`00000000 00000000`00000000


    At this point we can be confident that the system crashed because of pool corruption.


    Because the corruption occurred in the past, and a dump is a snapshot of the current state of the system, there is no concrete evidence to indicate how the memory came to be corrupted.  It is possible that the driver that allocated the pool block immediately preceding the corruption is the one that wrote to the wrong location and caused this corruption.  This pool block is marked with the tag “None”; we can search for this tag in memory to determine which drivers use it.


    kd> !for_each_module s -a @#Base @#End "None"

    fffff800`92411bc2  4e 6f 6e 65 e9 45 04 26-00 90 90 90 90 90 90 90  None.E.&........

    kd> u fffff800`92411bc2-1


    fffff800`92411bc1 b84e6f6e65      mov     eax,656E6F4Eh

    fffff800`92411bc6 e945042600      jmp     nt!ExAllocatePoolWithTag (fffff800`92672010)

    fffff800`92411bcb 90              nop
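    The search hit makes sense once you remember that a pool tag is four ASCII characters packed into a little-endian 32-bit integer; “None” encodes to 656E6F4Eh, exactly the immediate in the mov instruction above.  A small round-trip helper:

```python
def pool_tag_to_int(tag):
    """Pack a 4-character pool tag into its little-endian integer form."""
    return int.from_bytes(tag.encode("ascii"), "little")

def int_to_pool_tag(value):
    """Unpack a 32-bit pool tag integer back into its 4 characters."""
    return value.to_bytes(4, "little").decode("ascii")
```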


    The file Pooltag.txt lists the pool tags used for pool allocations by kernel-mode components and drivers supplied with Windows, the associated file or component (if known), and the name of the component. Pooltag.txt is installed with Debugging Tools for Windows (in the triage folder) and with the Windows WDK (in \tools\other\platform\poolmon).  Pooltag.txt shows the following for this tag:


    None - <unknown>    - call to ExAllocatePool


    Unfortunately, what we find is that this tag is used when a driver calls ExAllocatePool, which does not specify a tag.  This does not allow us to determine which driver allocated the block prior to the corruption.  Even if we could tie the tag back to a driver, it may not be sufficient to conclude that the driver using this tag is the one that corrupted the memory.


    The next step should be to enable special pool and hope to catch the corruptor in the act.  We will discuss special pool in our next article.

  • Ntdebugging Blog

    Debugging a Bugcheck 0xF4


    My name is Ryan Mangipano (ryanman) and I am a Sr. Support Escalation Engineer at Microsoft.  Today’s blog will be a quick walkthrough of the analysis of a bugcheck 0xF4 and how I determined that the action plan going forward should consist of enabling pool tagging on this system.


    I began my review with !analyze -v.  From the output, I can see that a process required for the system to function properly unexpectedly exited or was terminated.  The goal of this debugging session will be to determine what failed and why.


    0: kd> !analyze -v


    *                                                                             *

    *                        Bugcheck Analysis                                    *

    *                                                                             *




    A process or thread crucial to system operation has unexpectedly exited or been terminated.


    Several processes and threads are necessary for the operation of the

    system; when they are terminated (for any reason), the system can no

    longer function.


    Arg1: 00000003, Process                                A value of 0x3 in this parameter indicates that a process terminated, not a thread

    Arg2: 8a03ada0, Terminating object        This value is a pointer to the _EPROCESS object that terminated

    Arg3: 8a03af14, Process image file name              Process name

    Arg4: 805d1204, Explanatory message (ascii)         Text message about the problem


    We shall begin by dumping out all the parameters of the bugcheck.  Let’s dump out the “Terminating object” below.

    0: kd> !object 8a03ada0

    Object: 8a03ada0  Type: (8a490900) Process

        ObjectHeader: 8a03ad88 (old version)

        HandleCount: 3  PointerCount: 228


    Next, let’s dump out the process image file name from bugcheck parameter 3 above.

    0: kd> dc 8a03af14

    8a03af14  73727363 78652e73 00000065 00000000  csrss.exe



    0: kd> dt _EPROCESS 8a03ada0 imageFileName


       +0x174 ImageFileName : [16]  "csrss.exe"


    Notice that if we add the offset of the ImageFileName field (+0x174) to the base of the _EPROCESS object (8a03ada0, parameter 2), we get parameter 3, the address of the image file name.

    0: kd> ? 8a03ada0+0x174

    Evaluate expression: -1979470060 = 8a03af14


    0: kd> dc 8a03af14

    8a03af14  73727363 78652e73 00000065 00000000  csrss.exe.......


    Let’s dump out the ASCII message from parameter 4.

    0: kd> dc 805d1204

    805d1204  6d726554 74616e69 20676e69 74697263  Terminating crit

    805d1214  6c616369 6f727020 73736563 25783020  ical process


    Let’s review the debugger help file for more information.  We can see that this bugcheck occurs when a critical process or thread terminates: “Several processes and threads are necessary for the operation of the system. When they are terminated for any reason, the system can no longer function.”

    0: kd> .hh bug check 0xf4







    Next, we need to determine why this process terminated. !analyze -v also provided us with an exception record, which gives us an error code:


    PROCESS_NAME:  csrss.exe


    EXCEPTION_RECORD:  9a85e9d8 -- (.exr 0xffffffff9a85e9d8)

    ExceptionAddress: 7c92c375 (ntdll!RtlFindMessage+0x0000007c)

       ExceptionCode: c0000006 (In-page I/O error)

      ExceptionFlags: 00000000

    NumberParameters: 3

       Parameter[0]: 00000000

       Parameter[1]: 7c99c3d8

       Parameter[2]: c000009a

    Inpage operation failed at 7c99c3d8, due to I/O error c000009a


    EXCEPTION_CODE: (NTSTATUS) 0xc0000006 - The instruction at 0x%p referenced memory at 0x%p. The required data was not placed into memory because of an I/O error status of 0x%x.


    Since we have an error code, let’s investigate that error code. We can quickly perform this operation from within the debugger using the !error command.


    0: kd> !error c000009a

    Error code: (NTSTATUS) 0xc000009a (3221225626) - Insufficient system resources exist to complete the API.
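    As an aside, an NTSTATUS value packs a severity (bits 31-30), a customer flag (bit 29), a facility (bits 27-16), and a code (bits 15-0). A small Python sketch decoding the two status values seen in this dump:

    ```python
    def decode_ntstatus(status):
        """Split an NTSTATUS value into its documented bit fields."""
        return {
            "severity": (status >> 30) & 0x3,    # 3 = STATUS_SEVERITY_ERROR
            "customer": (status >> 29) & 0x1,
            "facility": (status >> 16) & 0xFFF,
            "code":     status & 0xFFFF,
        }

    # Both status values from this dump have severity 3 (error).
    assert decode_ntstatus(0xC0000006)["severity"] == 3   # STATUS_IN_PAGE_ERROR
    assert decode_ntstatus(0xC000009A)["severity"] == 3   # STATUS_INSUFFICIENT_RESOURCES
    assert decode_ntstatus(0xC000009A)["code"] == 0x9A
    ```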


    0: kd> .hh !error



    Let’s check the output of the !vm command


    0: kd> !vm 2


    *** Virtual Memory Usage ***

                    Physical Memory:      760875 (   3043500 Kb)

                    Page File: \??\C:\pagefile.sys

                      Current:   4190208 Kb  Free Space:   4156380 Kb

                      Minimum:   4190208 Kb  Maximum:      4190208 Kb

                    Available Pages:      579241 (   2316964 Kb)

                    ResAvail Pages:       673481 (   2693924 Kb)

                    Locked IO Pages:          69 (       276 Kb)

                    Free System PTEs:     115226 (    460904 Kb)

                    Free NP PTEs:              0 (         0 Kb)

                    Free Special NP:           0 (         0 Kb)

                    Modified Pages:          221 (       884 Kb)

                    Modified PF Pages:       219 (       876 Kb)

                    NonPagedPool Usage:    65534 (    262136 Kb)

                    NonPagedPool Max:      65536 (    262144 Kb)

                    ********** Excessive NonPaged Pool Usage *****

                    PagedPool 0 Usage:     24167 (     96668 Kb)

                    PagedPool 1 Usage:       967 (      3868 Kb)

                    PagedPool 2 Usage:       967 (      3868 Kb)

                    PagedPool 3 Usage:       984 (      3936 Kb)

                    PagedPool 4 Usage:       977 (      3908 Kb)

                    PagedPool Usage:       28062 (    112248 Kb)

                    PagedPool Maximum:     92160 (    368640 Kb)


                    ********** 2075 pool allocations have failed **********


                    Session Commit:         1562 (      6248 Kb)

                    Shared Commit:          2526 (     10104 Kb)

                    Special Pool:              0 (         0 Kb)

                    Shared Process:         4821 (     19284 Kb)

                    PagedPool Commit:      28062 (    112248 Kb)

                    Driver Commit:          5138 (     20552 Kb)

                    Committed pages:      153449 (    613796 Kb)

                    Commit limit:        1767229 (   7068916 Kb)
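    The NonPagedPool numbers above are the key detail: usage is 65534 pages against a maximum of 65536. A quick computation (values taken straight from the !vm output) confirms that nonpaged pool is effectively exhausted:

    ```python
    # NonPagedPool figures from the !vm output above, in 4 Kb pages.
    usage_pages = 65534
    max_pages = 65536

    # Over 99.99% of nonpaged pool is in use - only 2 pages remain.
    used_fraction = usage_pages / max_pages
    assert used_fraction > 0.9999
    assert max_pages - usage_pages == 2

    # The maximum in Kb matches the "262144 Kb" shown in the dump.
    assert max_pages * 4 == 262144
    ```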


    0: kd> !poolused

    unable to get PoolTrackTable - pool tagging is disabled, enable it to use this command

    Use gflags.exe and check the box that says "Enable pool tagging".


    The output above has informed us that pool tagging is disabled. Let’s demonstrate how you can verify that it is disabled:


    0: kd> dd nt!NtGlobalFlag L1

    805597ec  00000000


    0: kd> !gflag

    Current NtGlobalFlag contents: 0x00000000


    Let’s explore the debugging help file entry on the !poolused command


    0: kd> .hh !poolused




    Reading the text above, we are informed that “Pool tagging is permanently enabled on Windows Server 2003 and later versions of Windows. On Windows XP and earlier versions of Windows, you must enable pool tagging by using Gflags.”



    Using the vertarget command, I can see that this system was running Windows XP.

    0: kd> vertarget

    Windows XP Kernel Version 2600 (Service Pack 2) MP (2 procs) Free x86 compatible




    0: kd> .hh !gflag




    By reviewing the help file entry for the !gflag extension, I was able to determine that if pool tagging were enabled, the following bit would have been set:


    0x400 "ptg"         Enable pool tagging.


    0: kd> .formats 0x400

    Evaluate expression:

      Hex:     00000400

      Binary:  00000000 00000000 00000100 00000000
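    Testing for that flag is a simple bitwise AND. As an illustrative sketch in Python, using the NtGlobalFlag value read from this dump:

    ```python
    POOL_TAGGING = 0x400   # the "ptg" bit from the !gflag documentation

    nt_global_flag = 0x00000000   # value read from nt!NtGlobalFlag above

    # Pool tagging is enabled only when the 0x400 bit is set.
    assert (nt_global_flag & POOL_TAGGING) == 0   # disabled on this system

    # For comparison, a NtGlobalFlag value with the bit set would pass:
    assert (0x00000400 & POOL_TAGGING) != 0
    ```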


    Gflags is included in the Debugging Tools for Windows. The screenshot below is from a Windows 7 system. Notice that Pool Tagging is enabled permanently as described above.




    Summary: This system bugchecked when the critical process csrss.exe failed an I/O operation due to insufficient non-paged pool. For an action plan, we recommended the use of gflags to enable pool tagging in order to obtain more information about pool consumption.




  • Ntdebugging Blog

    For testing purposes only, and other desktop watermarks


    Hi all, Matt here again.  One of our team’s main functions is to work with our development teams to create hotfixes when customers run into issues that can only be resolved through a code change.  The developers will often prepare a private test fix that either tests the proposed change, or adds additional instrumentation to help pinpoint the issue. The private test fix is sent to the customer reporting the problem so they can confirm that it does indeed correct (or identify) the flaw.


    When testing a private test fix, customers frequently ask, why does my desktop now show a message on the lower right corner of the desktop, and what does it mean?  The message reads “For testing purposes only”, and looks like this:

    For testing purposes only


    Often, users are concerned that this message means they aren’t allowed to use the server in production, or that it is now “unsupported.”  Neither is the case!  Since this message appears as a result of installing a fix during the course of a Microsoft Support case, the servers are, by definition, being supported. 


    The purpose of this message is simply to remind users that code that Microsoft Support has asked them to test has been installed on the system, and this code may not yet have undergone the full suite of quality assurance testing that publicly released fixes receive.


    For comparison, let’s look at some of the other watermarks you may find in the lower corner of the desktop – as these can often be confused for the above message, and may explain some of the customer concerns around these messages.


    First up is the old trusty text you see when a box is booted into ‘Safe Mode’. I’m sure every IT Pro has seen this at one time or another, so I won’t go into detail, but rest assured, the testing purposes text is completely unrelated to booting in safe mode or having a subset of services running.

    Safe Mode


    Next up is our ‘Evaluation copy’ watermark.  This message is shown on the desktops of copies of Windows that have a “time bomb” (ones that will cease to function after a certain date).  This message is typically seen on beta versions of Windows, which are designed to stop functioning sometime after the desired beta testing period ends. 

    Evaluation copy


    Third, we have our Windows is not genuine message.  This is shown if, for example, a copy of Windows is not activated during the grace period after the installation process, or if a number of hardware changes have been made and Windows needs to be re-activated.  This has nothing to do with the ‘testing purposes’ message.  See http://technet.microsoft.com/en-us/library/dd979803.aspx for more information about this message.

    This copy of Windows is not genuine


    Fourth, we have the general Windows build stamp.  This is enabled via the registry using the PaintDesktopVersion DWORD (http://technet.microsoft.com/en-us/library/cc782257(WS.10).aspx).  Some administrators like to enable this option so they always know what version of Windows they are using, sort of like a mini-bginfo.  Unlike the others, this message does not indicate anything else about a server’s state.

    Windows 7


    Finally, we have ‘Test Mode’.  This is actually somewhat related to the testing purposes message.  This ‘Test Mode’ text is shown when test signing is enabled on a PC.  This is done by running “bcdedit /set testsigning on” from a UAC-elevated command prompt.  Test signing is used to allow developers to load drivers they are still working on that have not yet been code signed with an official certificate.  This is actually one of the steps we need to do when loading our test fixes.  For more information on Test Signing, see http://msdn.microsoft.com/en-us/library/ff553484%28v=VS.85%29.aspx.

    Test Mode


    So now that you know what causes these various watermarks to appear, perhaps you’re wondering how to make the “For testing purposes only” message disappear.  This is a question we are frequently asked.  While you are running a private test fix, there is no way to disable this message.  Your only option is to remove the private test fix from your system.  This is something your engineer will ask you to do before you install the final, public version of a hotfix.  You can easily identify and uninstall private test fixes by going into Control Panel, Programs and Features, View Installed Updates, then looking for hotfixes with the words FOR TESTING PURPOSES ONLY in their name, like the one shown in the image below.  You may also notice that the KB number listed for these fixes is often a placeholder, and not a real KB article ID.

    Installed Updates


    If the ‘For testing purposes only’ message is still displayed even after uninstalling the private test fix, there is one more place to check.  If a system has the Microsoft Test Root Authority certificate installed into its Trusted Root Certification Authorities store, the text will be displayed.  We use this certificate to allow a PC to run test code that has been signed by our development team, but not yet fully tested and signed off with the official Microsoft certificate.  To remove this certificate from your system, go to Start -> Run, type certmgr.msc, and press Enter.  In the Certificate Manager MMC, browse to Trusted Root Certification Authorities, then into Certificates.  You should see one labeled Microsoft Test Root Authority, as shown below.  This will need to be deleted and the system rebooted to remove the testing purposes message.  Do not do this if you still have a private test fix installed though, as it would prevent that binary from continuing to function and may mean you can no longer boot into Normal or Safe Mode.



    If you reboot and find that ‘Test Mode’ has replaced the ‘For testing purposes only’ text, you’ll need to launch a command prompt with administrative privileges, then run “bcdedit /set testsigning off” and reboot.  You can always check if test signing is enabled by running “bcdedit /enum” and looking for this line:

    bcdedit /enum


    That’s all for today.  Hopefully this post helped clear up any confusion about our different desktop watermarks.
