• Ntdebugging Blog

    Performance Monitor Averages, the Right Way and the Wrong Way


    Performance Monitor (perfmon) is the preferred tool to measure the performance of Windows systems.  The perfmon tool provides an analysis view with a chart and metrics of the Last, Average, Minimum, and Maximum values.


    There are scenarios where the line in the chart is the most valuable piece of information, such as a memory leak.  Other times we may not be looking for a trend; in those cases the Last, Average, Minimum, and Maximum metrics may be valuable.  One example where the metrics are valuable is when evaluating average disk latency over a period of time.  In this article we are going to use disk latency counters to illustrate how metrics are calculated for performance counters.  The concepts we will illustrate with disk latency apply to all performance counters.  This article will not be a deep dive into understanding disk latency; there are already many sources of information on that topic.


    Most performance counter metrics are pretty straightforward.  The minimum and maximum metrics are self-explanatory.  The last metric is the last entry in the data.  The metric that is confusing is the average.  When calculating averages it is important to consider the cardinality of the data.  This is especially important when working with data that is already an average, such as the Avg. Disk sec/Read counter which displays the average time per each read from a disk.


    Perfmon logs are gathered at a specific time interval, such as every 15 seconds.  At every interval the counters are read and an entry is written to the log.  In this interval there may have been many reads from the disk, a few reads, or there may have been none.  The number of reads performed is a critical aspect of the average calculation; this is the cardinality of the data.


    Consider the following 10 entries in a perfmon log:

    1 read took 150 ms

    0 reads took 0 ms

    0 reads took 0 ms

    0 reads took 0 ms

    0 reads took 0 ms

    0 reads took 0 ms

    0 reads took 0 ms

    0 reads took 0 ms

    0 reads took 0 ms

    0 reads took 0 ms


    Often, averages are calculated by adding a column of numbers and dividing by the number of entries.  However, this calculation does not work for the above data.  If we simply add and divide we get an average latency of 15ms (150 / 10) per read, but this is clearly incorrect.  There was 1 read performed and it took 150ms, therefore the average latency is 150ms per read.  Depending on the system configuration, an average read latency of less than 20ms may be considered fast and more than 20ms may be considered slow.  If we perform the calculation incorrectly we may believe the disk is performing adequately, while the correct calculation shows the disk is actually very slow.
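    The difference can be made concrete with a few lines of Python.  This sketch hard-codes the ten sample log entries from above and compares the naive sum-and-divide average against a cardinality-weighted average:

```python
# Each log entry: (number of reads in the interval, total read latency in ms).
# These are the ten sample entries from the text above.
entries = [(1, 150)] + [(0, 0)] * 9

# Wrong: average the per-interval values, ignoring how many reads occurred.
naive = sum(latency for _, latency in entries) / len(entries)

# Right: weight by cardinality -- total time divided by total reads.
total_reads = sum(reads for reads, _ in entries)
total_time = sum(latency for _, latency in entries)
weighted = total_time / total_reads if total_reads else 0

print(naive)     # 15.0 ms -- looks like a fast disk
print(weighted)  # 150.0 ms -- the disk is actually slow
```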


    What data is used to calculate averages?

    Let’s take a look at the data perfmon is working with.  Perfmon stores data in two different structures.  Formatted values are stored as PDH_FMT_COUNTERVALUE.  Raw values are stored as PDH_RAW_COUNTER.


    Formatted values are just plain numbers.  They contain only the result of calculating the average of one or more raw values, but not the raw data used to obtain that calculation.  Data stored in a perfmon CSV or TSV file is already formatted, which means the file contains only a column of floating point numbers.  If our previous example was stored in a CSV or TSV we would have the following data:

    0.15

    0

    0

    0

    0

    0

    0

    0

    0

    0
    The above numbers contain no information about how many reads were performed over the course of this log.  Therefore it is impossible to calculate an accurate average from these numbers.  That is not to say CSV and TSV files are worthless; there are many performance scenarios (such as memory leaks) where the average is not important.


    Raw counters contain the raw performance information, as delivered by the performance counter to pdh.dll.  In the case of Avg. Disk sec/Read the FirstValue contains the total time for all reads and the SecondValue contains the total number of reads performed.  This information can be used to calculate the average while taking into consideration the cardinality of the data.


    Again using the above example, the raw data would look like this:

    FirstValue: 0

    SecondValue: 0

    FirstValue: 2147727

    SecondValue: 1

    FirstValue: 2147727

    SecondValue: 1


    On first look the above raw data does not resemble our formatted data at all.  In order to calculate the average we need to know what the correct algorithm is.  The Avg. Disk sec/Read counter is of type PERF_AVERAGE_TIMER and the average calculation is ((Nx - N0) / F) / (Dx - D0).  N refers to FirstValue in the raw counter data, F refers to the number of ticks per second, and D refers to SecondValue.  Ticks per second can be obtained from the PerformanceFrequency parameter of KeQueryPerformanceCounter, in my example it is 14318180.


    Using the algorithm for PERF_AVERAGE_TIMER the calculation for the formatted values would be:

    ((2147727 - 0) / 14318180) / (1 - 0) = 0.15

    ((2147727 - 2147727) / 14318180) / (1 - 1) = 0*

    *If the denominator is 0 there is no new data and the result is 0.


    Because the raw counter contains both the number of reads performed during each interval and the time it took for these reads to complete, we can accurately calculate the average for many entries.
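    The PERF_AVERAGE_TIMER formatting above can be sketched in a few lines of Python.  The raw values and the 14318180 ticks-per-second frequency are the ones from this article; production code should call PdhComputeCounterStatistics rather than re-implement the formula:

```python
TICKS_PER_SECOND = 14318180  # PerformanceFrequency from this article's system

def format_average_timer(n0, d0, nx, dx, freq=TICKS_PER_SECOND):
    """PERF_AVERAGE_TIMER: ((Nx - N0) / F) / (Dx - D0).
    N is FirstValue (total ticks), D is SecondValue (operation count)."""
    if dx - d0 == 0:
        return 0.0  # no new operations in this interval
    return ((nx - n0) / freq) / (dx - d0)

# Raw samples from the article: (FirstValue, SecondValue) pairs.
samples = [(0, 0), (2147727, 1), (2147727, 1)]

formatted = [format_average_timer(n0, d0, nx, dx)
             for (n0, d0), (nx, dx) in zip(samples, samples[1:])]
print([round(v, 2) for v in formatted])  # matches the two calculations above
```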


    If you’ve read this far you may be wondering why I have taken the time to explain such a mundane topic.  It is important to explain how this works because many performance tools are not using the correct average calculation, and many users are trying to calculate averages using data that is not appropriate for such calculations (such as CSV and TSV files).  Programmers should use PdhComputeCounterStatistics to calculate averages; they should not sum and divide by the count, nor duplicate the calculations described in MSDN.


    Recently we have found that under some conditions perfmon will use the incorrect algorithm to calculate averages.  When reading from log files perfmon has been formatting the values, summing them, and dividing by the number of entries.  This issue has been corrected in perfmon for Windows 8/Server 2012 with KB2877211 and for Windows 8.1/Server 2012 R2 as part of KB2883200.  We recommend using these fixes when analyzing perfmon logs to determine the average of a performance counter.  Note that KB2877211/KB2883200 only change the behavior when analyzing logs, there is no change when the data is collected.  This means you can collect performance logs from any version of Windows and analyze them on a system with these fixes installed.

  • Ntdebugging Blog

    ResAvail Pages and Working Sets


    Hello everyone, I'm Ray and I'm here to talk a bit about a dump I recently looked at and a little-referenced memory counter called ResAvail Pages (resident available pages).


    The problem statement was:  The server hangs after a while.


    Not terribly informative, but that's where we start with many cases. First some good housekeeping:


    0: kd> vertarget

    Windows 7 Kernel Version 7601 (Service Pack 1) MP (2 procs) Free x64

    Product: Server, suite: TerminalServer SingleUserTS

    Built by: 7601.18113.amd64fre.win7sp1_gdr.130318-1533

    Machine Name: "ASDFASDF1234"

    Kernel base = 0xfffff800`01665000 PsLoadedModuleList = 0xfffff800`018a8670

    Debug session time: Thu Aug  8 09:39:26.992 2013 (UTC - 4:00)

    System Uptime: 9 days 1:08:39.307


    Of course Windows 7 Server == Server 2008 R2.


    One of the basic things I check at the beginning of these hang dumps with vague problem statements is the memory information.


    0: kd> !vm 21


    *** Virtual Memory Usage ***

    Physical Memory:     2097038 (   8388152 Kb)

    Page File: \??\C:\pagefile.sys

      Current:  12582912 Kb  Free Space:  12539700 Kb

      Minimum:  12582912 Kb  Maximum:     12582912 Kb

    Available Pages:      286693 (   1146772 Kb)

    ResAvail Pages:          135 (       540 Kb)


    ********** Running out of physical memory **********


    Locked IO Pages:           0 (         0 Kb)

    Free System PTEs:   33526408 ( 134105632 Kb)


    ******* 12 system cache map requests have failed ******


    Modified Pages:         4017 (     16068 Kb)

    Modified PF Pages:      4017 (     16068 Kb)

    NonPagedPool Usage:   113241 (    452964 Kb)

    NonPagedPool Max:    1561592 (   6246368 Kb)

    PagedPool 0 Usage:     35325 (    141300 Kb)

    PagedPool 1 Usage:     28162 (    112648 Kb)

    PagedPool 2 Usage:     24351 (     97404 Kb)

    PagedPool 3 Usage:     24350 (     97400 Kb)

    PagedPool 4 Usage:     24516 (     98064 Kb)

    PagedPool Usage:      136704 (    546816 Kb)

    PagedPool Maximum:  33554432 ( 134217728 Kb)


    ********** 222 pool allocations have failed **********


    Session Commit:         6013 (     24052 Kb)

    Shared Commit:          6150 (     24600 Kb)

    Special Pool:              0 (         0 Kb)

    Shared Process:      1214088 (   4856352 Kb)

    Pages For MDLs:           67 (       268 Kb)

    PagedPool Commit:     136768 (    547072 Kb)

    Driver Commit:         15548 (     62192 Kb)

    Committed pages:     1648790 (   6595160 Kb)

    Commit limit:        5242301 (  20969204 Kb)


    So we're failing to allocate pool, but we aren't out of virtual memory for paged pool or nonpaged pool.  Let's look at the breakdown:


    0: kd> dd nt!MmPoolFailures l?9

    fffff800`01892160  000001be 00000000 00000000 00000002

    fffff800`01892170  00000000 00000000 00000000 00000000

    fffff800`01892180  00000000



    The first three values are the nonpaged high/medium/low priority failures, the next three are the paged high/medium/low priority failures, and the last three are the session paged high/medium/low priority failures.


    So we actually failed both nonpaged AND paged pool allocations in this case.  Why?  We're "Running out of physical memory", obviously.  So where does this running out of physical memory message come from?  In the above example this is from the ResAvail Pages counter.


    ResAvail Pages is the amount of physical memory there would be if every working set was at its minimum size and only what needs to be resident in RAM was present (e.g. PFN database, system PTEs, driver images, kernel thread stacks, nonpaged pool, etc).


    Where did this memory go then?  We have plenty of Available Pages (Free + Zero + Standby) for use.  So something is claiming memory it isn't actually using.  In this type of situation one of the things I immediately suspect is process working set minimums. Working set basically means the physical memory used by a process.


    So let's check.


    0: kd> !process 0 1


    <a lot of processes in this output>.


    PROCESS fffffa8008f76060

        SessionId: 0  Cid: 0adc    Peb: 7fffffda000  ParentCid: 0678

        DirBase: 204ac9000  ObjectTable: 00000000  HandleCount:   0.

        Image: cscript.exe

        VadRoot 0000000000000000 Vads 0 Clone 0 Private 1. Modified 3. Locked 0.

        DeviceMap fffff8a000008a70

        Token                             fffff8a0046f9c50

        ElapsedTime                       9 Days 01:08:00.134

        UserTime                          00:00:00.000

        KernelTime                        00:00:00.015

        QuotaPoolUsage[PagedPool]         0

        QuotaPoolUsage[NonPagedPool]      0

        Working Set Sizes (now,min,max)  (5, 50, 345) (20KB, 200KB, 1380KB)

        PeakWorkingSetSize                1454

        VirtualSize                       65 Mb

        PeakVirtualSize                   84 Mb

        PageFaultCount                    1628

        MemoryPriority                    BACKGROUND

        BasePriority                      8

        CommitCharge                      0


    I have only shown one example process above for brevity's sake, but there were thousands returned.  241,423 to be precise.  None had abnormally high process working set minimums, but cumulatively their usage adds up.
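    A rough back-of-the-envelope check shows how the cumulative usage adds up.  The numbers below are from this dump, and the sketch assumes each leaked process charges approximately its 50-page working set minimum against ResAvail:

```python
physical_pages = 2097038    # from !vm: Physical Memory (~8 GB)
leaked_processes = 241423   # process objects returned by !process 0 1
min_ws_pages = 50           # 200KB working set minimum seen per process

reserved_pages = leaked_processes * min_ws_pages
print(reserved_pages)                   # pages reserved just for WS minimums
print(reserved_pages > physical_pages)  # the reservations dwarf physical RAM
```

    Roughly 12 million pages of working set minimums against roughly 2 million pages of RAM explains why ResAvail Pages was down to 135.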


    The “now” process working set is lower than the minimum working set.  How is that possible?  Well, the minimum and maximum are not hard limits, but suggested limits.  For example, the minimum working set is honored unless there is memory pressure, in which case it can be trimmed below this value.  There is a way to set the min and/or max as hard limits on specific processes by passing the QUOTA_LIMITS_HARDWS_MIN_ENABLE flag to SetProcessWorkingSetSizeEx.


    You can check whether the minimum and maximum working set values are configured as hard limits in the _EPROCESS->Vm->Flags structure.  Note these numbers are from another system, as this structure was already torn down for the processes we were looking at.


    0: kd> dt _EPROCESS fffffa8008f76060 Vm


       +0x398 Vm : _MMSUPPORT

    0: kd> dt _MMSUPPORT fffffa8008f76060+0x398


       +0x000 WorkingSetMutex  : _EX_PUSH_LOCK

       +0x008 ExitGate         : 0xfffff880`00961000 _KGATE

       +0x010 AccessLog        : (null)

       +0x018 WorkingSetExpansionLinks : _LIST_ENTRY [ 0x00000000`00000000 - 0xfffffa80`08f3c410 ]

       +0x028 AgeDistribution  : [7] 0

       +0x044 MinimumWorkingSetSize : 0x32

       +0x048 WorkingSetSize   : 5

       +0x04c WorkingSetPrivateSize : 5

       +0x050 MaximumWorkingSetSize : 0x159

       +0x054 ChargedWslePages : 0

       +0x058 ActualWslePages  : 0

       +0x05c WorkingSetSizeOverhead : 0

       +0x060 PeakWorkingSetSize : 0x5ae

       +0x064 HardFaultCount   : 0x41

       +0x068 VmWorkingSetList : 0xfffff700`01080000 _MMWSL

       +0x070 NextPageColor    : 0x2dac

       +0x072 LastTrimStamp    : 0

       +0x074 PageFaultCount   : 0x65c

       +0x078 RepurposeCount   : 0x1e1

       +0x07c Spare            : [2] 0

       +0x084 Flags            : _MMSUPPORT_FLAGS

    0: kd> dt _MMSUPPORT_FLAGS fffffa8008f76060+0x398+0x84


       +0x000 WorkingSetType   : 0y000

       +0x000 ModwriterAttached : 0y0

       +0x000 TrimHard         : 0y0

       +0x000 MaximumWorkingSetHard : 0y0

       +0x000 ForceTrim        : 0y0

       +0x000 MinimumWorkingSetHard : 0y0

       +0x001 SessionMaster    : 0y0

       +0x001 TrimmerState     : 0y00

       +0x001 Reserved         : 0y0

       +0x001 PageStealers     : 0y0000

       +0x002 MemoryPriority   : 0y00000000 (0)

       +0x003 WsleDeleted      : 0y1

       +0x003 VmExiting        : 0y1

       +0x003 ExpansionFailed  : 0y0

       +0x003 Available        : 0y00000 (0)


    How about some more detail?


    0: kd> !process fffffa8008f76060

    PROCESS fffffa8008f76060

        SessionId: 0  Cid: 0adc    Peb: 7fffffda000  ParentCid: 0678

        DirBase: 204ac9000  ObjectTable: 00000000  HandleCount:  0.

        Image: cscript.exe

        VadRoot 0000000000000000 Vads 0 Clone 0 Private 1. Modified 3. Locked 0.

        DeviceMap fffff8a000008a70

        Token                             fffff8a0046f9c50

        ElapsedTime                       9 Days 01:08:00.134

        UserTime                          00:00:00.000

        KernelTime                        00:00:00.015

        QuotaPoolUsage[PagedPool]         0

        QuotaPoolUsage[NonPagedPool]      0

        Working Set Sizes (now,min,max)  (5, 50, 345) (20KB, 200KB, 1380KB)

        PeakWorkingSetSize                1454

        VirtualSize                       65 Mb

        PeakVirtualSize                   84 Mb

        PageFaultCount                    1628

        MemoryPriority                    BACKGROUND

        BasePriority                      8

        CommitCharge                      0


    No active threads


    0: kd> !object fffffa8008f76060

    Object: fffffa8008f76060  Type: (fffffa8006cccc90) Process

        ObjectHeader: fffffa8008f76030 (new version)

        HandleCount: 0  PointerCount: 1


    The output above (HandleCount of 0, PointerCount of 1, and no active threads) shows us that this process has no threads left, but the process object itself (and its 20KB working set use) was still hanging around because a kernel driver had a reference to the object that it never released.  Sampling other entries shows the server had been leaking process objects since it was booted.


    Unfortunately trying to directly track down pointer leaks on process objects is difficult and requires an instrumented kernel, so we tried to check the easy stuff first before going that route.  We know it has to be a kernel driver doing this (since it is a pointer and not a handle leak) so we looked at the list of 3rd party drivers installed.  Note: The driver names have been redacted.


    0: kd> lm

    start             end                 module name


    fffff880`04112000 fffff880`04121e00   driver1    (no symbols)   <-- no symbols usually means 3rd party       

    fffff880`04158000 fffff880`041a4c00   driver2    (no symbols)          



    0: kd> lmvm driver1   

    Browse full module list

    start             end                 module name

    fffff880`04112000 fffff880`04121e00   driver1    (no symbols)          

        Loaded symbol image file: driver1.sys

        Image path: \SystemRoot\system32\DRIVERS\driver1.sys

        Image name: driver1.sys

        Browse all global symbols  functions  data

        Timestamp:        Wed Dec 13 12:09:32 2006 (458033CC)

        CheckSum:         0001669E

        ImageSize:        0000FE00

        Translations:     0000.04b0 0000.04e4 0409.04b0 0409.04e4

    0: kd> lmvm driver2

    Browse full module list

    start             end                 module name

    fffff880`04158000 fffff880`041a4c00   driver2    (no symbols)          

        Loaded symbol image file: driver2.sys

        Image path: \??\C:\Windows\system32\drivers\driver2.sys

        Image name: driver2.sys

        Browse all global symbols  functions  data

        Timestamp:        Thu Nov 30 12:12:07 2006 (456F10E7)

        CheckSum:         0004FE8E

        ImageSize:        0004CC00

        Translations:     0000.04b0 0000.04e4 0409.04b0 0409.04e4


    Fortunately for both the customer and us we turned up a pair of drivers that predated Windows Vista (meaning they were designed for XP/2003) that raised an eyebrow.  Of course we need a more solid evidence link than just "it's an old driver", so I did a quick search of our internal KB.  This turned up several other customers who had these same drivers installed, experienced the same problem, then removed them and the problem went away.  That sounds like a pretty good evidence link. We implemented the same plan for this customer successfully.

  • Ntdebugging Blog

    Missing System Writer Case Explained


    I worked on a case the other day where all I had was a procmon log and event logs to troubleshoot a problem where the System Writer did not appear in the VSSADMIN LIST WRITERS output. This might be review for the folks that know this component pretty well but I figured I would share how I did it for those that are not so familiar with the System Writer.



    The symptoms were:

    1. System State Backups fail
    2. Running VSSADMIN LIST WRITERS does not list the System Writer


    Looking at the event logs I found the error shown below. This error indicates there was a failure while “Writer Exposing its Metadata Context”. Each writer is responsible for providing a list of files, volumes, and other resources it is designed to protect. This list is called metadata and is formatted as XML. In the example we are working with, the error is “Unexpected error calling routine XML document is too long”.  While helpful, this message alone does not provide us with a clear reason why the XML document is too long.


    Event Type: Error

    Event Source: VSS

    Event ID: 8193

    Description: Volume Shadow Copy Service error: Unexpected error calling routine XML document is too long. hr = 0x80070018, The program issued a command but the command length is incorrect. . Operation: Writer Exposing its Metadata Context: Execution Context: Requestor Writer Instance ID: {636923A0-89C2-4823-ADEF-023A739B2515} Writer Class Id: {E8132975-6F93-4464-A53E-1050253AE220} Writer Name: System Writer


    The second event that was logged was also not very helpful as it only indicates that the writer did have a failure. It looks like we are going to need to collect more data to figure this out.


    Event Type: Error

    Event Source: VSS

    Event ID: 8193

    Description: Volume Shadow Copy Service error: Unexpected error calling routine CreateVssExamineWriterMetadata. hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error. Check the Application event log for more information. . Operation: Writer Exposing its Metadata Context: Execution Context: Requestor Writer Instance ID: {636923A0-89C2-4823-ADEF-023A739B2515} Writer Class Id: {E8132975-6F93-4464-A53E-1050253AE220} Writer Name: System Writer


    From the errors above we learned that there was an issue with the metadata file for the System Writer. These errors are among the most common issues seen with this writer. There are some not-so-well-documented limitations within the writer due to hard-set limits on path depth and the number of files in a given path. These limitations are frequently exposed by the C:\Windows\Microsoft.Net\ path. Often, this path is used by development software like Visual Studio as well as server applications like IIS. Below I have listed a few known issues that should help provide some scope when troubleshooting System Writer issues.


    Known limitations and common points of failure:

    • More than 1,000 folders in a folder causes writer to fail during OnIdentify
    • More than 10,000 files in a folder causes writer to fail during OnIdentify (frequently C:\Windows\Microsoft.Net)
    • Permissions issues (frequently in C:\Windows\WinSXS and C:\Windows\Microsoft.Net)
    • Permissions issues with COM+ Event System Service
      • This service needs to be running and needs to have Network Service with Service User Rights


    What data can I capture to help me find where the issue is?


    The best place to start is with a Process Monitor (Procmon) capture. To prepare for this capture you will need to download Process Monitor, open the Services MMC snap-in, and open an administrative command prompt, which will be used in a later step of the process.


    You should first stop the Cryptographic Services service using the Services MMC.



    Once stopped, open Procmon; note that by default Procmon will start capturing when opened. Now that you have Procmon open and capturing data, start the Cryptographic Services service. This will allow you to capture any errors during service initialization. Once the service is started, use the command prompt opened earlier to run “vssadmin.exe list writers”. This will signal the writers on the system to capture their metadata, which is a common place we see failures with the System Writer. When the vssadmin command completes, stop the Procmon capture and save the data to disk.


    Now that we have data how do we find the errors?


    Open your newly created Procmon file. First, add a new filter for the path field that contains “VSS\Diag”.



    We do this because this is the registry path that all writers log to when entering and leaving major functions. Now that we have our filtered view, we need to look for an entry from the System Writer: the “IDENTIFY” entry. From here we can ascertain the PID of the svchost.exe that the System Writer is running in. We now want to include only this svchost. To accomplish this, right click on the PID of the svchost.exe and select “Include ‘(PID)’”.



    Now that we have found our System Writer’s svchost, we will want to remove the filter for “VSS\Diag”; to do that, return to the filter tool in Procmon and uncheck the box next to the entry.



    We now have a complete view of what this service was doing at the time it started and completed the OnIdentify. Our next step is to locate the IDENTIFY (Leave) entry, as this is often a great marker for where your next clue will be. While in most cases we can’t directly see the error the writer hit, we can make some educated connections based on the common issues we spoke about above. If we take a look at the events that took place just before the IDENTIFY (Leave), we can see that we were working in the C:\Windows\Microsoft.NET\assembly\ directory. This is one of the paths that the System Writer is responsible for protecting. As mentioned above, there are some known limitations on the depth of paths and the number of files in the “C:\Windows\Microsoft.NET” folder, and our Procmon capture is a great example of that limitation. The entry just before the IDENTIFY (Leave) shows where the last work was taking place, meaning this is what the writer was touching when it failed.



    What does this tell us and what should we do next?


    Given the known path limitations, we need to check the number of files and folders in the C:\Windows\Microsoft.Net\ path and see where the bloat is. Some of these files can be safely removed; however, only files located in the Temp locations (Temporary ASP.NET Files) are safe to delete.
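    As a sketch, a check like that can be scripted. The thresholds below come from the limitations listed earlier; the function name and structure are my own, and on a real server you would point it at C:\Windows\Microsoft.Net:

```python
import os

# Known System Writer limits from the list above.
MAX_FOLDERS = 1000
MAX_FILES = 10000

def find_bloat(root, max_folders=MAX_FOLDERS, max_files=MAX_FILES):
    """Return (path, folder_count, file_count) for directories over the limits."""
    offenders = []
    for path, dirs, files in os.walk(root):
        if len(dirs) > max_folders or len(files) > max_files:
            offenders.append((path, len(dirs), len(files)))
    return offenders

# Example: find_bloat(r"C:\Windows\Microsoft.Net") -- review the offenders,
# and delete only files under the Temporary ASP.NET Files locations.
```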


    Recently we released KB2807849 which addresses the issue shown above.


    There are other possible causes of the event log errors mentioned above, such as issues with file permissions. For those problems follow the same steps as above, and you are likely to see the IDENTIFY (Leave) just after a file access error in your Procmon log. For these failures you will need to investigate the permissions on the file that failed. Likely the file is missing permissions for the writer’s service account, Network Service or Local System. All that is needed here is to add back the missing permissions for the failed file.


    While these issues can halt your nightly backups, it is often fairly easy to find the root cause. It just takes time and a little bit of experience with Process Monitor. 


    Good luck and successful backups!

  • Ntdebugging Blog

    Understanding Pool Corruption Part 2 – Special Pool for Buffer Overruns


    In our previous article we discussed pool corruption that occurs when a driver writes too much data in a buffer.  In this article we will discuss how special pool can help identify the driver that writes too much data.


    Pool is typically organized to allow multiple drivers to store data in the same page of memory, as shown in Figure 1.  By allowing multiple drivers to share the same page, pool provides for an efficient use of the available kernel memory space.  However, this sharing requires that each driver be careful in how it uses pool; any bugs where a driver uses pool improperly may corrupt the pool of other drivers and cause a crash.


    Figure 1 – Uncorrupted Pool


    With pool organized as shown in Figure 1, if DriverA allocates 100 bytes but writes 120 bytes it will overwrite the pool header and data stored by DriverB.  In Part 1 we demonstrated this type of buffer overflow using NotMyFault, but we were not able to identify which code had corrupted the pool.


    Figure 2 – Corrupted Pool


    To catch the driver that corrupted pool we can use special pool.  Special pool changes the organization of the pool so that each driver’s allocation is in a separate page of memory.  This helps prevent drivers from accidentally writing to another driver’s memory.  Special pool also places the driver’s allocation at the end of the page and sets the next virtual page as a guard page by marking it as invalid.  The guard page causes an attempt to write past the end of the allocation to result in an immediate bugcheck.
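    The guard page idea can be sketched in user mode.  The following is a Linux-only Python analogy using mmap/mprotect through ctypes; the 100-byte "allocation" is illustrative, and this is not how the kernel implements special pool, just the same fault-on-overrun idea:

```python
import ctypes
import os
import signal

# Map two pages: the allocation lives at the end of the first page and the
# second page is made inaccessible, mimicking special pool's guard page.
PAGE = 4096
PROT_NONE, PROT_READ, PROT_WRITE = 0, 1, 2
MAP_PRIVATE, MAP_ANONYMOUS = 0x02, 0x20  # Linux x86-64 values

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]

base = libc.mmap(None, 2 * PAGE, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
assert base not in (None, 2**64 - 1), "mmap failed"
libc.mprotect(base + PAGE, PAGE, PROT_NONE)  # second page = guard page

alloc = base + PAGE - 100  # a "100-byte allocation" ending at the guard page

ctypes.memset(alloc, 0xAA, 100)  # writing the allocated 100 bytes is fine

pid = os.fork()
if pid == 0:
    ctypes.memset(alloc, 0xAA, 120)  # 20 bytes too many: hits the guard page
    os._exit(0)                      # never reached
_, status = os.waitpid(pid, 0)
overrun_caught = (os.WIFSIGNALED(status)
                  and os.WTERMSIG(status) == signal.SIGSEGV)
print(overrun_caught)
```

    The overrun faults on the very first byte past the allocation, which is exactly why special pool can name the guilty driver at the moment of corruption instead of much later.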


    Special pool also fills the unused portion of the page with a repeating pattern, referred to as “slop bytes”.  These slop bytes are checked when the page is freed; if any errors are found in the pattern a bugcheck will be generated to indicate that the memory was corrupted.  This type of corruption is not a buffer overflow; it may be an underflow or some other form of corruption.
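    Here is a minimal user-mode sketch of the slop-byte check.  The 0x5A pattern and the page layout are illustrative assumptions, not the kernel's actual values:

```python
PAGE = 4096
SLOP = 0x5A  # illustrative fill pattern; the real pattern is the kernel's choice

def special_pool_alloc(size):
    """Simulate a special-pool page: slop bytes in front, allocation at the end."""
    page = bytearray([SLOP] * PAGE)
    return page, PAGE - size  # (page, offset where the allocation starts)

def special_pool_free(page, offset):
    """On free, verify the slop pattern is intact (catches underflows, etc.)."""
    if any(b != SLOP for b in page[:offset]):
        raise RuntimeError("bugcheck: special pool pattern corrupted")

# Normal use: only the allocation itself is written, so the free succeeds.
page, off = special_pool_alloc(100)
page[off:off + 100] = b"x" * 100
special_pool_free(page, off)

# Buffer underflow: writing before the allocation corrupts the slop bytes.
page, off = special_pool_alloc(100)
page[off - 4:off] = b"oops"
try:
    special_pool_free(page, off)
    corruption_detected = False
except RuntimeError:
    corruption_detected = True
print(corruption_detected)
```

    Unlike the guard page, this check only fires at free time, which is why slop-byte bugchecks tell you memory was corrupted without pointing directly at the corruptor.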


    Figure 3 – Special Pool


    Because special pool stores each pool allocation in its own 4KB page, it causes an increase in memory usage.  When special pool is enabled the memory manager will configure a limit on how much special pool may be allocated on the system; when this limit is reached the normal pools will be used instead.  This limitation may be especially pronounced on 32-bit systems, which have less kernel space than 64-bit systems.


    Now that we have explained how special pool works, we should use it.


    There are two methods to enable special pool.  Driver verifier allows special pool to be enabled on specific drivers.  The PoolTag registry value described in KB188831 allows special pool to be enabled for a particular pool tag.  Starting in Windows Vista and Windows Server 2008, driver verifier captures additional information for special pool allocations, so it is typically the recommended method.


    To enable special pool using driver verifier, use the following command line or choose the option from the verifier GUI.  Use the /driver flag to specify the drivers you want to verify; this is the place to list drivers you suspect as the cause of the problem.  You may want to verify drivers you have written and want to test, or drivers you have recently updated on the system.  In the command line below I am only verifying myfault.sys.  A reboot is required to enable special pool.


    verifier /flags 1 /driver myfault.sys


    After enabling verifier and rebooting the system, repeat the activity that causes the crash.  For some problems the activity may just be to wait for a period of time.  For our demonstration we are running NotMyFault (see Part 1 for details).


    The crash resulting from a buffer overflow in special pool will be a stop 0xD6, DRIVER_PAGE_FAULT_BEYOND_END_OF_ALLOCATION.


    kd> !analyze -v


    *                                                                             *

    *                        Bugcheck Analysis                                    *

    *                                                                             *




    N bytes of memory was allocated and more than N bytes are being referenced.

    This cannot be protected by try-except.

    When possible, the guilty driver's name (Unicode string) is printed on

    the bugcheck screen and saved in KiBugCheckDriver.


    Arg1: fffff9800b5ff000, memory referenced

    Arg2: 0000000000000001, value 0 = read operation, 1 = write operation

    Arg3: fffff88004f834eb, if non-zero, the address which referenced memory.

    Arg4: 0000000000000000, (reserved)


    We can debug this crash and determine that myfault.sys wrote beyond its pool buffer.


    The call stack shows that myfault.sys accessed invalid memory and this generated a page fault.


    kd> k

    Child-SP          RetAddr           Call Site

    fffff880`04822658 fffff803`721333f1 nt!KeBugCheckEx

    fffff880`04822660 fffff803`720acacb nt! ?? ::FNODOBFM::`string'+0x33c2b

    fffff880`04822700 fffff803`7206feee nt!MmAccessFault+0x55b

    fffff880`04822840 fffff880`04f834eb nt!KiPageFault+0x16e

    fffff880`048229d0 fffff880`04f83727 myfault+0x14eb

    fffff880`04822b20 fffff803`72658a4a myfault+0x1727

    fffff880`04822b80 fffff803`724476c7 nt!IovCallDriver+0xba

    fffff880`04822bd0 fffff803`7245c8a6 nt!IopXxxControlFile+0x7e5

    fffff880`04822d60 fffff803`72071453 nt!NtDeviceIoControlFile+0x56

    fffff880`04822dd0 000007fc`4fe22c5a nt!KiSystemServiceCopyEnd+0x13

    00000000`004debb8 00000000`00000000 0x000007fc`4fe22c5a


    The !pool command shows that the address being referenced by myfault.sys is special pool.


    kd> !pool fffff9800b5ff000

    Pool page fffff9800b5ff000 region is Special pool

    fffff9800b5ff000: Unable to get contents of special pool block


    The page table entry shows that the address is not valid.  This is the guard page used by special pool to catch overruns.


    kd> !pte fffff9800b5ff000

                                               VA fffff9800b5ff000

    PXE at FFFFF6FB7DBEDF98    PPE at FFFFF6FB7DBF3000    PDE at FFFFF6FB7E6002D0    PTE at FFFFF6FCC005AFF8

    contains 0000000001B8F863  contains 000000000138E863  contains 000000001A6A1863  contains 0000000000000000

    pfn 1b8f      ---DA--KWEV  pfn 138e      ---DA--KWEV  pfn 1a6a1     ---DA--KWEV  not valid


    The allocation prior to this memory is a 0x800 byte block of non-paged pool tagged as “Wrap”.  “Wrap” is the tag verifier uses when pool is allocated without a tag; it is equivalent to the “None” tag we saw in Part 1.


    kd> !pool fffff9800b5ff000-1000

    Pool page fffff9800b5fe000 region is Special pool

    *fffff9800b5fe000 size:  800 data: fffff9800b5fe800 (NonPaged) *Wrap

                Owning component : Unknown (update pooltag.txt)


    Special pool is an effective mechanism to track down buffer overflow pool corruption.  It can also be used to catch other types of pool corruption which we will discuss in future articles.

  • Ntdebugging Blog

    Understanding Pool Corruption Part 1 – Buffer Overflows


    Before we can discuss pool corruption we must understand what pool is.  Pool is kernel mode memory used as a storage space for drivers.  Pool is organized in a similar way to how you might use a notepad when taking notes from a lecture or a book.  Some notes may be 1 line, others may be many lines.  Many different notes are on the same page.


    Memory is also organized into pages, typically a page of memory is 4KB.  The Windows memory manager breaks up this 4KB page into smaller blocks.  One block may be as small as 8 bytes or possibly much larger.  Each of these blocks exists side by side with other blocks.


    The !pool command can be used to see the pool blocks stored in a page.


    kd> !pool fffffa8003f42000

    Pool page fffffa8003f42000 region is Nonpaged pool

    *fffffa8003f42000 size:  410 previous size:    0  (Free)      *Irp

                Pooltag Irp  : Io, IRP packets

    fffffa8003f42410 size:   40 previous size:  410  (Allocated)  MmSe

    fffffa8003f42450 size:  150 previous size:   40  (Allocated)  File

    fffffa8003f425a0 size:   80 previous size:  150  (Allocated)  Even

    fffffa8003f42620 size:   c0 previous size:   80  (Allocated)  EtwR

    fffffa8003f426e0 size:   d0 previous size:   c0  (Allocated)  CcBc

    fffffa8003f427b0 size:   d0 previous size:   d0  (Allocated)  CcBc

    fffffa8003f42880 size:   20 previous size:   d0  (Free)       Free

    fffffa8003f428a0 size:   d0 previous size:   20  (Allocated)  Wait

    fffffa8003f42970 size:   80 previous size:   d0  (Allocated)  CM44

    fffffa8003f429f0 size:   80 previous size:   80  (Allocated)  Even

    fffffa8003f42a70 size:   80 previous size:   80  (Allocated)  Even

    fffffa8003f42af0 size:   d0 previous size:   80  (Allocated)  Wait

    fffffa8003f42bc0 size:   80 previous size:   d0  (Allocated)  CM44

    fffffa8003f42c40 size:   d0 previous size:   80  (Allocated)  Wait

    fffffa8003f42d10 size:  230 previous size:   d0  (Allocated)  ALPC

    fffffa8003f42f40 size:   c0 previous size:  230  (Allocated)  EtwR


    Because many pool allocations are stored in the same page, it is critical that every driver use only the space it has allocated.  If DriverA uses more space than it allocated, it will write into the next driver’s space (DriverB’s) and corrupt DriverB’s data.  This overwrite into the next driver’s space is called a buffer overflow.  Later, either the memory manager or DriverB will attempt to use this corrupted memory and will encounter unexpected information.  This unexpected information typically results in a blue screen.


    The NotMyFault application from Sysinternals has an option to force a buffer overflow.  This can be used to demonstrate pool corruption.  Choosing the “Buffer overflow” option and clicking “Crash” will cause a buffer overflow in pool.  The system may not immediately blue screen after clicking the Crash button.  The system will remain stable until something attempts to use the corrupted memory.  Using the system will often eventually result in a blue screen.




    Often pool corruption appears as a stop 0x19 BAD_POOL_HEADER or a stop 0xC2 BAD_POOL_CALLER.  These stop codes make it easy to determine that pool corruption is involved in the crash.  However, the results of accessing unexpected memory can vary widely, so pool corruption can surface as many different types of bugchecks.


    As with any blue screen dump analysis the best place to start is with !analyze -v.  This command will display the stop code and parameters, and do some basic interpretation of the crash.


    kd> !analyze -v


    *                                                                             *

    *                        Bugcheck Analysis                                    *

    *                                                                             *




    An exception happened while executing a system service routine.


    Arg1: 00000000c0000005, Exception code that caused the bugcheck

    Arg2: fffff8009267244a, Address of the instruction which caused the bugcheck

    Arg3: fffff88004763560, Address of the context record for the exception that caused the bugcheck

    Arg4: 0000000000000000, zero.


    In my example the bugcheck was a stop 0x3B SYSTEM_SERVICE_EXCEPTION.  The first parameter of this stop code is c0000005, which is a status code for an access violation.  An access violation is an attempt to access invalid memory (this error is not related to permissions).  Status codes can be looked up in the WDK header ntstatus.h.
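The severity of a status code can also be read straight off its bits: an NTSTATUS packs a severity in bits 31:30 (11b = error), along with a facility and a code.  A minimal sketch of the check:

```c
#include <stdint.h>

/* NTSTATUS layout: bits 31:30 = severity, bit 29 = customer flag,
 * bits 28:16 = facility, bits 15:0 = code.  Severity 3 (11b) is error. */
static unsigned nt_severity(uint32_t status)
{
    return status >> 30;
}
```

For c0000005 the top two bits are set, so nt_severity returns 3: an error-class status, matching the access violation described above.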


    The !analyze -v command also provides a helpful shortcut to get into the context of the failure.


    CONTEXT:  fffff88004763560 -- (.cxr 0xfffff88004763560;r)


    Running this command shows us the registers at the time of the crash.


    kd> .cxr 0xfffff88004763560

    rax=4f4f4f4f4f4f4f4f rbx=fffff80092690460 rcx=fffff800926fbc60

    rdx=0000000000000000 rsi=0000000000001000 rdi=0000000000000000

    rip=fffff8009267244a rsp=fffff88004763f60 rbp=fffff8009268fb40

    r8=fffffa8001a1b820  r9=0000000000000001 r10=fffff800926fbc60

    r11=0000000000000011 r12=0000000000000000 r13=fffff8009268fb48

    r14=0000000000000012 r15=000000006374504d

    iopl=0         nv up ei pl nz na po nc

    cs=0010  ss=0018  ds=002b  es=002b  fs=0053  gs=002b             efl=00010206


    fffff800`9267244a 4c8b4808        mov     r9,qword ptr [rax+8] ds:002b:4f4f4f4f`4f4f4f57=????????????????


    From the above output we can see that the crash occurred in nt!ExAllocatePoolWithTag, which is a good indication that the crash is due to pool corruption.  Often an engineer looking at a dump will stop at this point and conclude that the crash was caused by corruption; however, we can go further.


    The instruction that we failed on was dereferencing rax+8.  The rax register contains 4f4f4f4f4f4f4f4f, which does not fit with the canonical form required for pointers on x64 systems.  This tells us that the system crashed because the data in rax is expected to be a pointer but it is not one.
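A canonical x64 address (with 48-bit virtual addressing) is one whose bits 63:47 are all copies of bit 47.  A quick way to test a value, as a sketch:

```c
#include <stdbool.h>
#include <stdint.h>

/* With 48-bit virtual addressing, bits 63:47 of a canonical address
 * are all equal (the sign extension of bit 47).  Extracting those 17
 * bits, the only legal patterns are all zeroes or all ones. */
static bool is_canonical(uint64_t va)
{
    uint64_t top17 = va >> 47;
    return top17 == 0 || top17 == 0x1FFFF;
}
```

The repeating 4f4f4f4f`4f4f4f4f in rax fails this test, while real kernel pointers such as fffffa8001a1b820 pass it.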


    To determine why rax does not contain the expected data we must examine the instructions prior to where the failure occurred.


    kd> ub .

    nt!KzAcquireQueuedSpinLock [inlined in nt!ExAllocatePoolWithTag+0x421]:

    fffff800`92672429 488d542440      lea     rdx,[rsp+40h]

    fffff800`9267242e 49875500        xchg    rdx,qword ptr [r13]

    fffff800`92672432 4885d2          test    rdx,rdx

    fffff800`92672435 0f85c3030000    jne     nt!ExAllocatePoolWithTag+0x7ec (fffff800`926727fe)

    fffff800`9267243b 48391b          cmp     qword ptr [rbx],rbx

    fffff800`9267243e 0f8464060000    je      nt!ExAllocatePoolWithTag+0xa94 (fffff800`92672aa8)

    fffff800`92672444 4c8b03          mov     r8,qword ptr [rbx]

    fffff800`92672447 498b00          mov     rax,qword ptr [r8]


    The assembly shows that rax originated from the data pointed to by r8.  The .cxr command we ran earlier shows that r8 is fffffa8001a1b820.  If we examine the data at fffffa8001a1b820 we see that it matches the contents of rax, which confirms this memory is the source of the unexpected data in rax.


    kd> dq fffffa8001a1b820 l1

    fffffa80`01a1b820  4f4f4f4f`4f4f4f4f


    To determine if this unexpected data is caused by pool corruption we can use the !pool command.


    kd> !pool fffffa8001a1b820

    Pool page fffffa8001a1b820 region is Nonpaged pool

     fffffa8001a1b000 size:  810 previous size:    0  (Allocated)  None


    fffffa8001a1b810 doesn't look like a valid small pool allocation, checking to see

    if the entire page is actually part of a large page allocation...


    fffffa8001a1b810 is not a valid large pool allocation, checking large session pool...

    fffffa8001a1b810 is freed (or corrupt) pool

    Bad previous allocation size @fffffa8001a1b810, last size was 81



    *** An error (or corruption) in the pool was detected;

    *** Attempting to diagnose the problem.


    *** Use !poolval fffffa8001a1b000 for more details.



    Pool page [ fffffa8001a1b000 ] is __inVALID.


    Analyzing linked list...

    [ fffffa8001a1b000 --> fffffa8001a1b010 (size = 0x10 bytes)]: Corrupt region



    Scanning for single bit errors...


    None found


    The above output does not look like that of the !pool command we ran earlier.  This is because corruption of the pool header prevented the command from walking the chain of allocations.


    The above output shows that there is an allocation at fffffa8001a1b000 of size 0x810.  At the end of that allocation, at fffffa8001a1b810, we should see the next pool header.  Instead what we see is a pattern of 4f4f4f4f`4f4f4f4f.


    kd> dq fffffa8001a1b000 + 810

    fffffa80`01a1b810  4f4f4f4f`4f4f4f4f 4f4f4f4f`4f4f4f4f

    fffffa80`01a1b820  4f4f4f4f`4f4f4f4f 4f4f4f4f`4f4f4f4f

    fffffa80`01a1b830  4f4f4f4f`4f4f4f4f 00574f4c`46524556

    fffffa80`01a1b840  00000000`00000000 00000000`00000000

    fffffa80`01a1b850  00000000`00000000 00000000`00000000

    fffffa80`01a1b860  00000000`00000000 00000000`00000000

    fffffa80`01a1b870  00000000`00000000 00000000`00000000

    fffffa80`01a1b880  00000000`00000000 00000000`00000000


    At this point we can be confident that the system crashed because of pool corruption.


    Because the corruption occurred in the past, and a dump is a snapshot of the current state of the system, there is no concrete evidence to indicate how the memory came to be corrupted.  It is possible that the driver that allocated the pool block immediately preceding the corruption is the one that wrote to the wrong location and caused this corruption.  This pool block is marked with the tag “None”; we can search memory for this tag to determine which drivers use it.


    kd> !for_each_module s -a @#Base @#End "None"

    fffff800`92411bc2  4e 6f 6e 65 e9 45 04 26-00 90 90 90 90 90 90 90  None.E.&........

    kd> u fffff800`92411bc2-1


    fffff800`92411bc1 b84e6f6e65      mov     eax,656E6F4Eh

    fffff800`92411bc6 e945042600      jmp     nt!ExAllocatePoolWithTag (fffff800`92672010)

    fffff800`92411bcb 90              nop
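The immediate 656E6F4Eh in that mov is simply the four ASCII characters of the tag stored little-endian ('N' = 0x4E in the low byte).  A sketch of the packing:

```c
#include <stdint.h>

/* Pool tags are four ASCII characters packed little-endian into a
 * 32-bit value, which is why "None" appears as 656E6F4Eh in code. */
static uint32_t pack_tag(const char tag[4])
{
    return  (uint32_t)(uint8_t)tag[0]
         | ((uint32_t)(uint8_t)tag[1] << 8)
         | ((uint32_t)(uint8_t)tag[2] << 16)
         | ((uint32_t)(uint8_t)tag[3] << 24);
}
```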


    The file Pooltag.txt lists the pool tags used for pool allocations by kernel-mode components and drivers supplied with Windows, the associated file or component (if known), and the name of the component. Pooltag.txt is installed with Debugging Tools for Windows (in the triage folder) and with the Windows WDK (in \tools\other\platform\poolmon).  Pooltag.txt shows the following for this tag:


    None - <unknown>    - call to ExAllocatePool


    Unfortunately, what we find is that this tag is used when a driver calls ExAllocatePool, an API that does not accept a tag parameter.  This does not allow us to determine which driver allocated the block prior to the corruption.  Even if we could tie the tag back to a driver, that may not be sufficient to conclude that the driver using this tag is the one that corrupted the memory.


    The next step should be to enable special pool and hope to catch the corruptor in the act.  We will discuss special pool in our next article.

  • Ntdebugging Blog

    Another Who Done It


    Hi, my name is Bob Golding; I am an Escalation Engineer in Global Escalation Services. I want to share an interesting problem I recently worked on.  The initial symptom was that the system bugchecked with a Stop 0xA, which means there was an invalid memory reference.  The cause of the crash was a driver making I/O requests while Asynchronous Procedure Calls (APCs) were disabled.  The bugcheck caused by the invalid memory reference was the result of the problem, not the cause.


    An APC is queued to a thread during I/O completion. This is to guarantee the last phase of the I/O completion occurs in the same context as the process that issued the request.


    The stack of the trap is presented below.  The call stack shows that APCs are being enabled allowing queued APCs to run.


    Child-SP          RetAddr           Call Site

    fffff880`07bf3598 fffff800`030b85a9 nt!KeBugCheckEx

    fffff880`07bf35a0 fffff800`030b7220 nt!KiBugCheckDispatch+0x69

    fffff880`07bf36e0 fffff800`030d8b56 nt!KiPageFault+0x260

    fffff880`07bf3870 fffff800`030959ff nt!IopCompleteRequest+0xc73

    fffff880`07bf3940 fffff800`0306c0d9 nt!KiDeliverApc+0x1d7

    fffff880`07bf39c0 fffff800`033f8a1a nt!KiCheckForKernelApcDelivery+0x25

    fffff880`07bf39f0 fffff800`033cce2f nt!MiMapViewOfSection+0x2bafa

    fffff880`07bf3ae0 fffff800`030b8293 nt!NtMapViewOfSection+0x2be

    fffff880`07bf3bb0 00000000`772df93a nt!KiSystemServiceCopyEnd+0x13

    00000000`0015dea8 00000000`00000000 0x772df93a


    The trap occurred because, when issuing requests to lower drivers, it is common practice to implement code similar to:


    KEVENT event;

    KeInitializeEvent( &event, NotificationEvent, FALSE );

    status = IoCallDriver( DeviceObject, irp );

    //  Wait for the event to be signaled if STATUS_PENDING is returned.

    if (status == STATUS_PENDING) {

       (VOID)KeWaitForSingleObject( &event,   // event is a local which is declared on the stack
                                    Executive,
                                    KernelMode,
                                    FALSE,     // not alertable
                                    NULL );    // no timeout
    }



    As you can see in the above code, if IoCallDriver does not return STATUS_PENDING the code continues and the function exits.  Part of the last phase of I/O processing, which takes place in the APC, is signaling the event.  Because the event is a local on the stack, if the call to IoCallDriver returns success it is critical that the APC execute immediately, before the stack unwinds.  In this case APCs were disabled, so execution of the APC was delayed, and during that time the event became invalid.  The APCs were delayed because the memory manager was in a critical region where APCs could not run.


    I needed to determine which driver did this so I enabled IRP logging in Driver Verifier to trace I/O requests.  With this enabled the next dump should contain a transaction log that will help identify what driver is performing I/O while APCs are disabled.  The command line to enable this is:

    verifier /flags 0x410 /all


    The new dump, with verifier enabled, also crashed after delivering an APC to the thread and completing the IRP.  From the debug output below I can find the IRPs that were issued and the thread that issued them; this is what I need in order to locate them in the log.


    1: kd> !thread

    THREAD fffffa80064c9b50 Cid 0200.0204  Teb: 000007fffffde000 Win32Thread: 0000000000000000 RUNNING on processor 1

    IRP List:

        fffff9800a33ec60: (0006,03a0) Flags: 40060070  Mdl: 00000000

        fffff9800a250c60: (0006,03a0) Flags: 40060070  Mdl: 00000000

        fffff9800a3f4ee0: (0006,0118) Flags: 40060070  Mdl: 00000000

    Not impersonating

    DeviceMap                 fffff8a000007890

    Owning Process            fffffa80064bbb30       Image:         csrss.exe

    Attached Process          N/A            Image:         N/A

    Wait Start TickCount      1656           Ticks: 0

    Context Switch Count      25             IdealProcessor: 0

    UserTime                  00:00:00.000

    KernelTime                00:00:00.000

    Win32 Start Address 0x000000004a061540

    Stack Init fffff88003b21c70 Current fffff88003b20890

    Base fffff88003b22000 Limit fffff88003b1c000 Call 0

    Priority 14 BasePriority 13 UnusualBoost 0 ForegroundBoost 0 IoPriority 2 PagePriority 5

    Child-SP          RetAddr           Call Site

    fffff880`03b21428 fffff800`0307a54c nt!KeBugCheckEx

    fffff880`03b21430 fffff800`030d02ee nt!MmAccessFault+0xffffffff`fff9c15c

    fffff880`03b21590 fffff800`030c8db9 nt!KiPageFault+0x16e

    fffff880`03b21728 fffff800`030e6ab3 nt!memcpy+0x229

    fffff880`03b21730 fffff800`030c4bd7 nt!IopCompleteRequest+0x5a3

    fffff880`03b21800 fffff800`0307ba85 nt!KiDeliverApc+0x1c7

    fffff880`03b21880 fffff800`0331d96a nt!KiCheckForKernelApcDelivery+0x25

    fffff880`03b218b0 fffff800`033e742e nt!MiMapViewOfSection+0xffffffff`fff36baa

    fffff880`03b219a0 fffff800`030d1453 nt!NtMapViewOfSection+0x2bd

    fffff880`03b21a70 00000000`7761159a nt!KiSystemServiceCopyEnd+0x13

    00000000`0025f078 00000000`00000000 0x7761159a


    The command “!verifier 100” will dump the transaction log.  Below is the relevant portion of the log containing the IRPs for our thread.


    IRP fffff9800a3f4ee0, Thread fffffa80064c9b50, IRQL = 0, KernelApcDisable = -4, SpecialApcDisable = -1

    fffff80003573a68 nt!IovAllocateIrp+0x28

    fffff800033b20e2 nt!IoBuildDeviceIoControlRequest+0x32

    fffff8000356d72e nt!IovBuildDeviceIoControlRequest+0x4e

    fffff880010f8bcc fltmgr!FltGetVolumeGuidName+0x18c

    fffff88004e4fbe1 baddriver+0x12be1

    fffff88004e73523 baddriver+0x36523

    fffff88004e7300c baddriver+0x3600c

    fffff88004e72cce baddriver+0x35cce

    fffff88004e5f715 baddriver+0x22715

    fffff88004e4c6c7 baddriver+0xf6c7

    fffff88004e48342 baddriver+0xb342

    fffff88004e5e44e baddriver+0x2144e

    fffff88004e5e638 baddriver+0x21638


    IRP fffff9800a250c60, Thread fffffa80064c9b50, IRQL = 0, KernelApcDisable = -5, SpecialApcDisable = -1

    fffff80003573a68 nt!IovAllocateIrp+0x28

    fffff800033b20e2 nt!IoBuildDeviceIoControlRequest+0x32

    fffff8000356d72e nt!IovBuildDeviceIoControlRequest+0x4e

    fffff8800101eec7 mountmgr!MountMgrSendDeviceControl+0x73

    fffff88001010a6b mountmgr!QueryDeviceInformation+0x207

    fffff8800101986b mountmgr!QueryPointsFromMemory+0x57

    fffff88001019f86 mountmgr!MountMgrQueryPoints+0x36a

    fffff8800101ea71 mountmgr!MountMgrDeviceControl+0xe9

    fffff80003574c16 nt!IovCallDriver+0x566

    fffff880010f8bec fltmgr!FltGetVolumeGuidName+0x1ac

    fffff88004e4fbe1 baddriver+0x12be1

    fffff88004e73523 baddriver+0x36523

    fffff88004e7300c baddriver+0x3600c


    IRP fffff9800a33ec60, Thread fffffa80064c9b50, IRQL = 0, KernelApcDisable = -5, SpecialApcDisable = -1

    fffff80003573a68 nt!IovAllocateIrp+0x28

    fffff800033b20e2 nt!IoBuildDeviceIoControlRequest+0x32

    fffff8000356d72e nt!IovBuildDeviceIoControlRequest+0x4e

    fffff8800101eec7 mountmgr!MountMgrSendDeviceControl+0x73

    fffff88001010afd mountmgr!QueryDeviceInformation+0x299

    fffff8800101986b mountmgr!QueryPointsFromMemory+0x57

    fffff88001019f86 mountmgr!MountMgrQueryPoints+0x36a

    fffff8800101ea71 mountmgr!MountMgrDeviceControl+0xe9

    fffff80003574c16 nt!IovCallDriver+0x566

    fffff880010f8bec fltmgr!FltGetVolumeGuidName+0x1ac

    fffff88004e4fbe1 baddriver+0x12be1

    fffff88004e73523 baddriver+0x36523

    fffff88004e7300c baddriver+0x3600c


    From the IRP log in verifier I can see that baddriver.sys is calling FltGetVolumeGuidName while APCs are disabled.  Further investigation found that baddriver.sys had registered a function for image load notification, and the memory manager has APCs disabled when it calls the image notification routine.  The image notification routine in baddriver.sys called FltGetVolumeGuidName, which issued the I/O.  The log output shows both KernelApcDisable and SpecialApcDisable; the issue is SpecialApcDisable being -1.  I/O completion APCs are special kernel APCs, so disabling only normal kernel APCs would not affect them.


    The solution was for the driver to check whether APCs were disabled before calling FltGetVolumeGuidName, and to skip the call if they were.
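In a real driver this check can be made with KeAreAllApcsDisabled().  As a user-mode model of the decision, using the two counters shown in the verifier log (the function name is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Model of the decision the driver should make.  I/O completion uses
 * special kernel APCs, so SpecialApcDisable is the counter that
 * matters; a nonzero (negative) value means special APCs cannot be
 * delivered.  KernelApcDisable alone does not block them. */
static bool safe_to_issue_io(int16_t kernelApcDisable, int16_t specialApcDisable)
{
    (void)kernelApcDisable;  /* does not gate special APC delivery */
    return specialApcDisable == 0;
}
```

With the values from the log above (KernelApcDisable = -4, SpecialApcDisable = -1), the call should not have been made.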

  • Ntdebugging Blog

    Remoting Your Debug Crash Cart With KDNET


    This is Christian Sträßner from the Global Escalation Services team based in Munich, Germany.


    Back in January, my colleague Ron Stock posted an interesting article about Kernel Debugging using a serial cable: How to Setup a Debug Crash Cart to Prevent Your Server from Flat Lining


    Today we look at a new kernel debugging transport introduced in Windows 8 and Windows Server 2012 that makes the cabling much easier: now a network cable can be used as a debug cable.  The new KDNET transport utilizes a PCI Ethernet network card in the target.  Most major NIC vendors have compatible NICs.  You can find a list of supported NICs here:



    Note that this will not work with Wireless or USB attached NICs in the Target.


    In the example below, we utilized an Acer AC 100 Server as the Target. It ships with an onboard Intel 82579LM Gigabit NIC:


    Network Adapters


    The great thing about KDNET is that the NIC can still be used for normal network activity. The “Microsoft Kernel Debug Network Adapter” driver is the magic behind this. When KDNET.DLL is active, the NIC’s driver will be “banged out” and KDNET takes control of the NIC.


    BCD Configuration

    To configure KDNET, you first need to determine the IPv4 Address of the machine with the debugger. In our example, ipconfig.exe tells us that it is




    Next go to your Target machine.


    The kernel debug settings used to configure KDNET are stored globally in the BCD Store in the {dbgsettings} area. The kernel debug settings apply to all boot entries.


    Use bcdedit.exe /dbgsettings net hostip:<addr> port:<port> to set the transport to KDNET, along with the IP address of the debugger host and the port to use.  You can connect multiple targets to the same debug host by using a different port for each target.


    BCD will generate a cryptographic key for you automatically the first time.  You can generate a new key by appending the ‘newkey’ keyword.  Copy the ‘Key’ to a secure location - you will need it in the debugger.


    You can display the debug settings using: bcdedit /dbgsettings


    Next, for safety, copy the {current} entry to a new entry (bcdedit /copy {current} /d <description>).


    Then enable kernel debugging on the copy (bcdedit.exe /debug {new-guid} on).


    If required, also use this (new) entry to enable the checked kernel (bcdedit /set {new-guid} hal <path> and bcdedit.exe /set {new-guid} kernel <path>).





    On your Debugger Machine open WinDbg->File->Kernel Debugging (Ctrl-K) and choose the NET tab:


    Copy and paste the ‘Key’ here and set the port to the value specified on the Target (the default is 50000):


     Kernel Debugging


    Next a dialog from Windows Firewall might pop up (depending on your configuration). You want to allow access at this point.


    Windows Firewall


    You need to make sure that your debug host machine allows inbound UDP traffic on the configured port (50000 in this example, and by default) for the network type in use.


    If your company has implemented IPSec Policies, make sure you have exceptions in place that allow unsecured communication on the port used (KDNET does not talk IPSec).


    The Debugger Window will now look like this:


    windbg waiting to reconnect


    The Debugger is now set up and ready to go.


    Reboot the target system now.


    When the target comes back online, it will try to connect to the IP Address and Port that was configured with the bcdedit.exe command. The Debugger Command Window will look something like the screenshot below.


    windbg connected


    You now can break in as usual. This is a good time to fix your symbol setup if you have not done it yet.



    You still can communicate normally over the NIC and IP that you use on the target. You do not need an additional NIC in the target to use KDNET. When debugging production servers with heavy traffic, we recommend using a dedicated NIC for debugging (note, 10GigE NICs are currently not supported).


    If you don’t want the NIC to be used by the OS as well, it can be disabled via: bcdedit.exe -set loadoptions NO_KDNIC


    Normal Network IO


    Although you can use KDNET to debug power state transitions (in particular Connected Standby), it is best avoided. The KDNET protocol polls on a regular basis and as such, many systems will not drop to a lower power state. Instead, use USB, 1394, or serial.


    Disconnecting the NIC from media (unplugging the NIC in the target machine) is not supported and will most likely blue screen the target machine.


    Note 1:

    If you have more than one NIC in your target, please read the following (copied from the debugger help):

    If there is more than one network adapter in the target computer, use Device Manager to determine the PCI bus, device, and function numbers for the adapter you want to use for debugging. Then in an elevated Command Prompt window, enter the following command, where b, d, and f are the bus number, device number, and function number of the adapter:

    bcdedit /set {dbgsettings} busparams b.d.f


    Note 2:

    If you use Windows NIC Teaming (LBFO) in Server 2012, note that KDNET is not compatible with NIC Teaming, as indicated by the whitepaper:



    How does it look on the network?

    This is a packet sent from the target to the debug host machine.


    Network Packet


    The TTL of the packets sent from the target to the debug host is currently set to 16 (this is not configurable).


    This means that your connection can traverse at most 16 IP hops.  That is a theoretical limitation, but it highlights some important facts.  Your host is not talking to the Windows IP stack on the target; instead it talks to a basic IPv4/UDP implementation in KDNET.  The transport is UDP/IPv4 based, so there is little tolerance for poor network conditions beyond retry operations at the debugger transport protocol level.


    A few words on performance.

    Performance is generally limited by the latency of the link between the host and the target.  Therefore, even with LAN-like latency (<=1ms), you will not get anywhere close to the wire speed of a 1GigE connection.  Expect to see speeds between 1.5 and 2.5 MB/s.
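At those speeds, pulling large amounts of memory takes real time.  A back-of-the-envelope estimate, assuming a steady 2 MB/s:

```c
#include <stdint.h>

/* Rough transfer-time estimate for pulling memory over KDNET. */
static uint64_t transfer_seconds(uint64_t bytes, uint64_t bytes_per_sec)
{
    return bytes / bytes_per_sec;
}
```

For example, a full kernel dump of a machine with 48 GB of RAM would take on the order of 24,000 seconds - nearly seven hours.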


    Keep this in mind when you plan to pull large portions of memory from the target over KDNET (like the .dump command). This screenshot was taken while executing the .dump /f command (Full Kernel Dump):


    Network Activity


    Even with the performance restrictions mentioned, KDNET is a valuable extension of the existing debugging methods.  It allows you to debug a Windows machine without the need for special hardware (1394) or legacy ports (serial) that not every machine has today (especially tablets and notebooks).  It also saves you from using USB2 debugging - which requires special cables and a good amount of hope that the machine’s vendor has attached the single debug capable USB port to an external port on the chassis.


    Also, there is no need to physically enter the datacenter where the target is located.  You can do all these steps from the comfort of your office chair. :)


    To see network kernel debugging in action, watch Episode #27 of Defrag Tools on Channel 9.


    Thanks to Andrew Richards and Joe Ballantyne for their help in writing this article.

  • Ntdebugging Blog

    Our Bangalore Team is Hiring - Windows Server Escalation Engineer


    Would you like to join the world’s best and most elite debuggers to enable the success of Microsoft solutions?


    As a trusted advisor to our top customers you will be working with the most experienced IT professionals and developers in the industry. You will influence our product teams in sustained engineering efforts to drive improvements in our products.


    This role involves deep analysis of product source code and debugging to solve problems in multi-million dollar configurations and will give you an opportunity to stretch your critical thinking skills. During the course of debugging, you will uncover opportunities to improve the customer experience while influencing the current and future design of our products.


    In addition to providing support to customers while being the primary interface to our sustained engineering teams, you will also have the opportunity to work with new technologies and unreleased software. Through our continuous investment in in-depth training and hands-on experience with tough customer challenges, you will become the world’s best in this area. Expect to partner with many different roles across Microsoft, launching a very successful career!


    This position is located at the Microsoft Global Technical Support Center in Bangalore, India.


    Learn more about what an Escalation Engineer does at:

    Profile: Ron Stock, CTS Escalation Engineer - Microsoft Customer Service & Support - What is CSS?

    Microsoft JobsBlog JobCast with Escalation Engineer Jeff Dailey

    Microsoft JobsBlog JobCast with Escalation Engineer Scott Oseychik


    Apply here:


  • Ntdebugging Blog

    Interpreting Event 153 Errors


    Hello, my name is Bob Golding, and I would like to share a new event that you may see in the system event log.  Event ID 153 is an error associated with the storage subsystem.  This event is new in Windows 8 and Windows Server 2012, and was added to Windows 7 and Windows Server 2008 R2 starting with hotfix KB2819485.


    An event 153 is similar to an event 129.  An event 129 is logged when the storport driver times out a request to the disk; I described event 129 messages in a previous article.  The difference between the two is that a 129 is logged when storport times out a request, while a 153 is logged when the storport miniport driver times out a request.  The miniport driver may also be referred to as an adapter driver or HBA driver; this driver is typically written by the hardware vendor.


    Because the miniport driver has better knowledge of the request execution environment, some miniport drivers time requests themselves instead of letting storport handle request timing.  This allows the miniport driver to abort the individual request and return an error, rather than storport resetting the drive after a timeout.  Resetting the drive is disruptive to the I/O subsystem and may not be necessary if only one request has timed out.  The error returned from the miniport driver is bubbled up to the class driver, which can log an event 153 and retry the request.


    Below is an example event 153:


    Event 153 Example


    This error means that a request failed and was retried by the class driver.  In the past no message would be logged in this situation because storport did not time out the request.  The lack of messages resulted in confusion when troubleshooting disk errors, because timeouts would occur but there would be no evidence of the error.


    The details section of the event log record shows which error caused the retry and whether the request was a read or a write. Below is the details output:


    Event 153 Details


    In the example above, byte offset 29 is the SCSI status, offset 30 is the SRB status that caused the retry, and offset 31 is the SCSI command that is being retried.  In this case the SCSI status was 00 (SCSISTAT_GOOD), the SRB status was 09 (SRB_STATUS_TIMEOUT), and the command was 28 (SCSIOP_READ).
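A sketch of pulling those three bytes out of the raw details blob (the offsets are as described above; the struct and function names are illustrative):

```c
#include <stdint.h>

/* Byte offsets within the event 153 details blob, per the article. */
enum { OFF_SCSI_STATUS = 29, OFF_SRB_STATUS = 30, OFF_SCSI_OP = 31 };

typedef struct {
    uint8_t scsi_status;  /* e.g. 0x00 = SCSISTAT_GOOD      */
    uint8_t srb_status;   /* e.g. 0x09 = SRB_STATUS_TIMEOUT */
    uint8_t scsi_op;      /* e.g. 0x28 = SCSIOP_READ        */
} EVENT153;

static EVENT153 parse_event153(const uint8_t *details)
{
    EVENT153 e = { details[OFF_SCSI_STATUS],
                   details[OFF_SRB_STATUS],
                   details[OFF_SCSI_OP] };
    return e;
}
```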


    The most common SCSI commands are:

    SCSIOP_READ - 0x28

    SCSIOP_WRITE - 0x2A

    SCSIOP_READ16 - 0x88

    SCSIOP_WRITE16 - 0x8A



    The most common SRB statuses are below:

    SRB_STATUS_TIMEOUT - 0x09

    SRB_STATUS_COMMAND_TIMEOUT - 0x0B

    SRB_STATUS_BUS_RESET - 0x0E

    A complete list of SCSI operations and statuses can be found in scsi.h in the WDK.  A list of SRB statuses can be found in srb.h.


    The timeout errors (SRB_STATUS_TIMEOUT and SRB_STATUS_COMMAND_TIMEOUT) indicate a request timed out in the adapter.  In other words, a request was sent to the drive and there was no response within the timeout period.  The bus reset error (SRB_STATUS_BUS_RESET) indicates that the device was reset and that the request is being retried due to the reset, since all outstanding requests are aborted when a drive receives a reset.


    A system administrator who encounters event 153 errors should investigate the health of the computer’s disk subsystem.  Although an occasional timeout may be part of the normal operation of a system, the frequent need to retry requests indicates a performance issue with the storage that should be corrected.


    Commitment Failures, Not Just a Failed Love Story


    I was working on a debug the other day when I ran the “!vm” command and saw that the system had some 48,000 commit requests that failed. This was strange as the system was not out of memory and the page file was not full. I was left scratching my head and thinking “I wish I knew where !vm got that information from.” So I went on a quest to find out, here is what I found.


    13: kd> !vm 1


    *** Virtual Memory Usage ***

          Physical Memory:    12580300 (  50321200 Kb)

          Page File: \??\C:\pagefile.sys

            Current:  50331648 Kb  Free Space:  50306732 Kb

            Minimum:  50331648 Kb  Maximum:     50331648 Kb

          Available Pages:     4606721 (  18426884 Kb)

          ResAvail Pages:     12189247 (  48756988 Kb)

          Locked IO Pages:           0 (         0 Kb)

          Free System PTEs:   33460257 ( 133841028 Kb)

          Modified Pages:        20299 (     81196 Kb)

          Modified PF Pages:      6154 (     24616 Kb)

          NonPagedPool 0 Used:   19544 (     78176 Kb)

          NonPagedPool 1 Used:   22308 (     89232 Kb)

          NonPagedPool Usage:    53108 (    212432 Kb)

          NonPagedPool Max:    9408956 (  37635824 Kb)

          PagedPool 0 Usage:    168921 (    675684 Kb)

          PagedPool 1 Usage:   4149241 (  16596964 Kb)

          PagedPool 2 Usage:     17908 (     71632 Kb)

          PagedPool Usage:     4336070 (  17344280 Kb)

          PagedPool Maximum:  33554432 ( 134217728 Kb)

          Session Commit:         3438 (     13752 Kb)

          Shared Commit:          6522 (     26088 Kb)

          Special Pool:              0 (         0 Kb)

          Shared Process:        53597 (    214388 Kb)

          PagedPool Commit:    4336140 (  17344560 Kb)

          Driver Commit:          5691 (     22764 Kb)

          Committed pages:     5565215 (  22260860 Kb)

          Commit limit:       25162749 ( 100650996 Kb)


          ********** 48440 commit requests have failed  **********


    It turns out that this count comes from a global ULONG array named "nt!MiChargeCommitmentFailures".  The array has 4 members, which are used to track the types of commit failures that have taken place.  This is done by first calculating the new commit size: NewCommitValue = CurrentCommitValue + SystemReservedMemory.  Based on this calculation, commit errors are tracked in a few different ways, which are listed below with the corresponding member of the array that is incremented.


    MiChargeCommitmentFailures[0] - If the system failed a commit request and an expansion of the pagefile has failed.

    MiChargeCommitmentFailures[1] - If the system failed a commit and we have already reached the maximum pagefile size.

    MiChargeCommitmentFailures[2] - If the system failed a commit while the pagefile lock is held.

    MiChargeCommitmentFailures[3] - If the system failed a commit and the NewCommitValue is less than or equal to CurrentCommitValue.


    In order to calculate the count of failures, "!vm" adds up the values stored in the array members.  Members 0 and 1 are always counted; member 2 is counted if the OS version is Windows 2003/XP, and member 3 is counted if the build is newer than Windows 2003/XP.
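The summing rule above can be sketched as a short Python function.  This is an illustration of the arithmetic "!vm" performs, not the actual debugger extension code; the version check is reduced to a simple boolean flag for clarity.

```python
def total_commit_failures(failures, newer_than_2003):
    """Sum MiChargeCommitmentFailures the way "!vm" does.

    failures: the 4-member ULONG array.
    newer_than_2003: True for builds newer than Windows 2003/XP.
    """
    total = failures[0] + failures[1]   # members 0 and 1 are always counted
    if newer_than_2003:
        total += failures[3]            # newer builds count member 3
    else:
        total += failures[2]            # Windows 2003/XP counts member 2
    return total

# The array contents from the dump in this article:
# fffff800`01e45ce0  00000000 0000bd38 00000000 00000000
mi_charge_commitment_failures = [0x00000000, 0x0000BD38, 0x00000000, 0x00000000]
total = total_commit_failures(mi_charge_commitment_failures, newer_than_2003=True)
```

With the values from the dump, `total` works out to the 48440 failures reported by "!vm".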


    Let's look at the array in the dump I was debugging:


    13: kd> dc nt!MiChargeCommitmentFailures L4

    fffff800`01e45ce0  00000000 0000bd38 00000000 00000000  ....8...........



    Converting this to decimal, we find the 48,000+ commit failures I was seeing in the output of !vm.


    13: kd> ?0000bd38

    Evaluate expression: 48440 = 00000000`0000bd38


    Now that I had my answer to "where does the number come from?", I was left wanting to know a bit more about the overall flow of why a VirtualAlloc fails to commit.


    When memory is allocated by VirtualAlloc the newly allocated memory is not committed to physical memory. Only when the memory is accessed by a read or write is it backed by physical memory.


    When this newly allocated memory is accessed for the first time it will need to be backed by commit space. Under normal conditions this is a smooth process, however when the system hits what’s called the commit limit and can’t expand this limit we see commit failures.


    So how is the commit limit calculated?  Let’s say we have a system with 4GB of physical memory and a pagefile that is 6GB in size.  To determine the commit limit we add physical memory and the pagefile size together - in this example the commit limit would be 10GB.  Since the memory manager will not let any user mode allocation consume every last morsel of commit space, it keeps a small amount of the commit space for the system to avoid hangs.  When the limit is reached, the system tries to grow the page file.  If there is no more room to grow the pagefile, or the pagefile has reached its configured maximum size, the system will try to free some committed memory to make room for more requests.  If neither expansion of the page file nor the attempt to free memory allows the allocation to complete, the allocation fails and MiChargeCommitmentFailures is incremented.
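The arithmetic in that example can be worked through directly.  The sketch below is a simplification under stated assumptions: it ignores the small reserve the memory manager keeps back and the attempt to free committed memory, and the `commit_would_fail` helper is an illustration, not a real API.

```python
GB = 1024 ** 3

# The example from the article: 4GB of RAM plus a 6GB pagefile.
physical_memory = 4 * GB
pagefile_size = 6 * GB
commit_limit = physical_memory + pagefile_size   # 10GB

def commit_would_fail(committed, request, limit, pagefile_at_max):
    """A commit request fails once it would exceed the limit and the
    pagefile can no longer grow (simplified model)."""
    return committed + request > limit and pagefile_at_max

# 9.5GB already committed, 1GB more requested, pagefile at its maximum size.
fails = commit_would_fail(int(9.5 * GB), 1 * GB, commit_limit,
                          pagefile_at_max=True)
```

If the pagefile were not yet at its maximum, the system would grow it instead, and the request would succeed with a higher commit limit.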



    To sum it all up, commit limit is RAM + pagefile, commit failures happen when we hit the commit limit and the system is unable to grow the pagefile because it is already at its max.  It’s that simple, well almost.


    For those who want to know more about how the memory manager works, please see the post from Somak: The Memory Shell Game.


    Randy Monteleone
