Is it possible to force a behavior change in ReadTrace so that it calculates Duration as End minus Start?
I have problems with the microsecond timing in my traces and I can't disable the "clock variation feature" on my servers.
I ran a stress test with a poorly performing query. My 16-way SQL Server was at 100% all the time.
I then stress tested a tuned version of the query. The same server stayed at 25% to 30% CPU usage the whole time.
The comparison results show that the tuned query uses less CPU and does less IO, but its duration is "higher".
Also, the original query scans two tables, sorts the results and merge-joins them, while the tuned query performs just a non-clustered index seek.
I dug into the problem a lot and realized that the only explanation for this phenomenon is the RDTSC problem.
Are there any trace flags that can avoid the microsecond timing?
Thanks in advance!