Recently a customer replaced the hardware in parts of the SAP application layer of their SAP ERP system. The goal was to reduce the number of servers by using newer models that could deliver double the throughput at around the same latency as before. Please also see an earlier article on per-CPU-thread performance versus throughput performance, which can be found here: . Hence there was a bit of surprise when, the next day, some extract jobs in the production SAP ERP system took significantly longer in pure run time. This had not been the case in the test system, where the same hardware exchange had happened a few weeks earlier. The SAP performance statistics afterwards told the whole story: those extract jobs spent nearly double the time on the CPU in the application server. Instead of around 5,000 seconds of CPU time to extract a few GB of data and write it into a file, the CPU time was almost exactly double (around 10,000 seconds) when running on the new servers.

After a bit of looking around, the cause turned out to be simple. The folks in the datacenter had forgotten to change the power mode of the Windows Server 2008 R2 installations from 'Balanced' to 'High Performance'. That was the whole deal. Once this setting was changed, and it was confirmed that the BIOS was set to leave control of energy consumption to the OS, performance was back to where it was expected to be: around the same as on the older servers, but able to run more of those jobs than the former hardware could.
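Checking and changing the active power plan does not require a reboot and can be done from an elevated command prompt with powercfg. A minimal sketch; the GUIDs shown are the well-known defaults of the built-in plans, so verify them on your own system with the /list option first:

```shell
rem Show which power plan is currently active
powercfg /getactivescheme

rem List all power plans available on this server
powercfg /list

rem Switch to the built-in 'High Performance' plan
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

rem Switch back to the built-in 'Balanced' plan if desired
rem powercfg /setactive 381b4222-f694-41f0-9685-ff5bb260df2e
```

Keep in mind that the BIOS must also be set to let the operating system control power management; otherwise the plan selected in Windows has no effect.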

The difference between earlier versions of Windows Server and Windows Server 2008 R2 and later releases is a change in the default power mode after installation. While certain application types are latency sensitive (SAP Netweaver applications can fall into this category), there are many cases where latencies experienced on the server backend are far less influential; think of scenarios where the largest share of an application's latency is determined by connectivity through the internet. Therefore we found it justified to move the default to a power mode that reduces energy consumption with little to no performance impact for most workloads. Based on the usage profiles of customers and the workloads running on servers, the choice of 'Balanced' as the default is well justified. For extremely latency-sensitive applications, such as SAP Netweaver application instances and SQL Server under SAP workload, the recommendation can be to change the default to the 'High Performance' mode. E.g. see:

Fine, now let's look at how a power mode of 'Balanced' could impact system performance in the way our customer experienced it. To answer this question, let's look at how those power modes in Windows work.

Windows 2008 R2 Power Modes

Like earlier Windows versions, Windows 7 and Windows Server 2008 R2 include support for ACPI processor power management (PPM) features, including processor performance states and processor idle sleep states of modern multiprocessor systems. One of the many components of PPM in Windows Server 2008 R2 is Processor Clocking Control (PCC). With PCC, the Windows operating system can request a desired level of performance from the underlying hardware through a firmware-defined interface. In essence, the hardware then adapts the frequency of the processors accordingly. Depending on the processor architecture, there can be several frequency steps for Windows and the hardware to choose from. A typical processor offers states like these of an older AMD processor:


Or the states of a more modern Intel E7 processor, which can be:


The basic effect of reducing the frequency is to lower the voltage and with that reduce the energy consumption of a processor/socket, and thereby the overall power draw of the server. Increasing the frequency has the opposite effect. Given that many servers run small or inconsistent workloads with inactive periods, reducing the frequency and with it the energy consumption is a great thing to do.
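To see why lowering the frequency saves so much energy, recall that dynamic CPU power is roughly proportional to capacitance times voltage squared times frequency, and voltage itself scales roughly with frequency in the DVFS range, so dynamic power scales roughly with the cube of the frequency. A small illustrative calculation; the cubic model and the resulting percentages are a simplification for illustration, not measured values:

```python
# Illustrative model of dynamic CPU power under frequency/voltage scaling.
# Dynamic power is roughly P = C * V^2 * f; since voltage scales roughly
# linearly with frequency in the DVFS range, P scales roughly with f^3.
# This is a simplification: static/leakage power is ignored.

def relative_dynamic_power(freq_fraction):
    """Approximate dynamic power draw relative to running at full frequency."""
    return freq_fraction ** 3

# Frequency fractions matching the lowest states mentioned in the article
for f in (1.00, 0.80, 0.57, 0.50):
    print(f"{f:.0%} frequency -> ~{relative_dynamic_power(f):.0%} dynamic power")
```

Under this simplified model, dropping to the lowest state of the AMD example (50% frequency) cuts dynamic power to roughly an eighth, which is why 'Balanced' mode is so attractive for lightly used servers.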

Some processors also offer 'turbo' frequencies above the nominal processor frequency. However, those are only chosen when there is enough power to spare; in other words, when not all logical processors are in use, leaving the processor/socket enough power headroom. With some processors, Windows Processor Clocking Control can also influence the choice of turbo frequency.

There is also a difference in the granularity at which different processor architectures can control power. Some architectures allow frequency to be regulated at the granularity of a socket only; others allow regulation on a per-core or even per-CPU-thread basis. For that reason we use the generic term 'processor' in relation to frequency and energy consumption throughout this article.

When Windows experiences little load on the system, it will ask the hardware for a lower performance level. The hardware then reduces the frequency of the processor, which certainly reduces energy consumption, but also increases the time it takes to execute a given number of instructions.

So in principle, when CPU utilization of a processor drops below a certain threshold, Windows requests lower performance, which triggers the hardware to move to a lower frequency state. Very often the processor ends up at the lowest frequency state when periods of low CPU utilization last hundreds of milliseconds; in the examples above, that means 50% of the maximum frequency for the AMD processor and 57% for the Intel E7 processor. If the load on the processor increases again and exceeds a certain threshold, Windows asks for a higher performance level and, as a result (depending on other PPM parameters, such as heat), the frequency increases again, up to the maximum possible frequency. The interval at which Windows adapts the performance level is certainly not measured in seconds; it is far more agile, in the low double-digit millisecond range.
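The threshold behavior described above can be sketched as a toy governor. The p-states, thresholds, and one-step-per-interval policy below are hypothetical stand-ins for Windows' actual PPM parameters; the point is only to show why a long idle stretch leaves the processor at its lowest frequency and why ramping back up takes several evaluation intervals:

```python
# Minimal sketch of a threshold-based performance-state governor, loosely
# modeled on the behavior described above. P-states, thresholds and the
# one-step-per-interval policy are hypothetical, not Windows' real values.

P_STATES = [0.50, 0.80, 1.00]   # fractions of maximum frequency
UP_THRESHOLD = 0.60             # raise the state when utilization exceeds this
DOWN_THRESHOLD = 0.30           # lower the state when utilization falls below

def run_governor(utilization_trace, start_state=len(P_STATES) - 1):
    """Return the frequency fraction chosen after each evaluation interval."""
    state = start_state
    history = []
    for util in utilization_trace:
        if util > UP_THRESHOLD and state < len(P_STATES) - 1:
            state += 1          # one step up per evaluation interval
        elif util < DOWN_THRESHOLD and state > 0:
            state -= 1          # one step down per evaluation interval
        history.append(P_STATES[state])
    return history

# A long idle period followed by a sudden burst of work: the governor
# needs two evaluation intervals to climb back from 50% to 100%.
trace = [0.05] * 5 + [0.90] * 3
print(run_governor(trace))  # [0.8, 0.5, 0.5, 0.5, 0.5, 0.8, 1.0, 1.0]
```

During the first two intervals of the burst, the work still executes at 80% and then 50% of the maximum frequency, which is exactly the window in which a short, bursty SAP dialog step can get hurt.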

A very detailed and very technical documentation of Windows Server 2008 R2 PPM can be found here:

Problems for SAP application servers with this behavior

With SAP we are looking at an application that handles each request on a single CPU thread. A SAP dialog step usually spends a hundred to a few hundred milliseconds on a single CPU thread of the hardware; batch requests can even spend seconds. While such a dialog step is executed on the application side, requests are issued to the database server. While that happens, processing of the SAP dialog step stops, since the SAP logic now waits for results from the DBMS; hence no CPU is spent on the application side. So if we assume something like a SAP batch job where data is retrieved from the database, processed, and then more data is fetched from the database, it is very well imaginable that the load detection in Windows will not be agile enough to adapt the frequency. This applies even more to cases where the server is lightly used, as in the case encountered by the customer, where overall CPU utilization of the server was only in the low double digits. The chance that the processor frequency is not ramped up fast enough is therefore very high. As a result, many requests could be executed at lower than maximum frequency, increasing the latency of the execution.
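A quick back-of-the-envelope model shows how this turns into the doubled CPU times from the customer case: for CPU-bound work, the time spent scales inversely with the clock frequency. The baseline number mirrors the roughly 5,000 second extract job mentioned earlier; the model ignores memory-bound effects and assumes the processor stays pinned at the reduced frequency, so it is an upper-bound sketch rather than a prediction:

```python
# Back-of-the-envelope check of the effect the customer saw: for purely
# CPU-bound work, run time scales inversely with clock frequency.
# The 5,000s baseline mirrors the extract job from the article; the model
# ignores memory-bound effects and assumes a constant frequency.

def cpu_time_at_frequency(cpu_seconds_at_max, freq_fraction):
    """CPU seconds needed for a fixed amount of work at a reduced frequency."""
    return cpu_seconds_at_max / freq_fraction

baseline = 5000  # seconds of CPU time at 100% frequency
for frac in (1.00, 0.57, 0.50):
    print(f"at {frac:.0%} frequency: ~{cpu_time_at_frequency(baseline, frac):,.0f}s")
```

At the 50% bottom state of the AMD example this yields roughly 10,000 seconds, which matches the doubling the customer observed on their lightly loaded new servers.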

How to detect what processor frequency you are running?

In Windows Server 2008 R2 a new collection of performance counters got introduced, as explained in this article:

In that collection there is a new counter called '% of Maximum Frequency' (under the 'Processor Information' object). Please also check the explanations of this counter in the article mentioned above.
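You can watch this counter from the command line with typeperf as well as in Performance Monitor. A sketch, assuming the English counter names of Windows Server 2008 R2:

```shell
rem Sample '% of Maximum Frequency' across all logical processors,
rem five samples at the default one-second interval
typeperf "\Processor Information(_Total)\% of Maximum Frequency" -sc 5
```

With the 'Balanced' plan on a lightly loaded server you will typically see values well below 100; after switching to 'High Performance' the counter should stay at or near 100.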

How to detect the possible frequency of a given processor?

In order to find out what the possible processor frequencies are, you need the tool pwrtest.exe, which ships with the Windows 8 Driver Development Kit. You can download the kit here: .

In order to get the information on the possible frequency you need to execute:

pwrtest.exe /info:ppm

in a Windows command window. This will give you data on the possible processor frequencies as shown above, plus more data.

For more information on pwrtest.exe please also check this document:

What does this all mean for our SAP environments?

Should SAP landscapes run in a configuration that is as environmentally friendly as possible, with lower energy consumption, less heat emission, and good performance? Or is the highest priority when running the SAP landscape to provide the highest performance and predictable service in terms of response and run times to the user community? Let's keep in mind that we measure SAP dialog step response times in milliseconds, not seconds. If deterministic and reliable response time is the highest priority, then we need to think about which of the DBMS and SAP application layers to move from 'Balanced' to 'High Performance' mode. We could confirm a negative impact similar to the one our customer ran into by running a SAP Standard Benchmark with the two different power settings. It pretty much confirmed that the time of a SAP dialog step nearly doubled in 'Balanced' power mode compared to 'High Performance' power mode when the server was lightly loaded overall and the load was distributed very evenly over all CPU threads.

This means that, at least for your productive SAP systems (SAP Netweaver application servers and the related servers running SQL Server), it makes sense to have the 'High Performance' power mode enabled instead of 'Balanced'. On the other side, we want to be conscious of the energy consumption of the servers and all the heat they emit. Therefore you should think about which systems would be fine with more variable response times. Does a SAP development, sandbox, or training system need to be absolutely deterministic in its job run times or response times to interactive users? Probably not at all, or only during special time periods. Since the power mode can be switched between the different modes without a reboot, it may be advisable to leave the power mode of those systems at 'Balanced'.

A lot of IT organizations stick with the default power mode as installed with Windows Server 2008 R2 and later, don't have an issue with the majority of applications, and hence increase power efficiency within the datacenter. This means that teams owning applications that are more sensitive to latencies need to work with their datacenter folks to change the default of 'Balanced' to 'High Performance' for the specific servers where the 'High Performance' mode is required.


Thanks to Bruce Worthington for reviewing this article and for his guidance on improving it.