Abstract:
A method for measuring performance with the RDTSC instruction.

Created by Peter Kankowski
Last changed
Contributors: Ace and Marcus Aurelius
Filed under Low-level code optimization


Performance measurements with RDTSC

This article explains a method for comparing optimized functions and choosing the fastest one.

The scope of the method

  • This method is useful for measuring tasks that are mostly computation-bound, e.g. various calculations, cryptographic functions, fractal generation, some text-processing tasks, etc.
  • Your data should fit entirely in the L1 or L2 cache. If you are measuring a memcpy function on several megabytes of data, you should use a different method. Ditto for most image-processing functions and other memory-intensive tasks.
  • If your task involves reading or writing files, or calling "heavy" API functions, you should use a different method.
  • This method is ideal if you have a simple function that is called very often, doesn't call any API functions, and operates on a small data set.

Using RDTSC instruction

The RDTSC instruction returns a 64-bit time stamp counter (TSC), which is incremented on every clock cycle. It's the most precise counter available on the x86 architecture.

The MSVC++ 2005 compiler supports a handy __rdtsc intrinsic that returns the result in a 64-bit variable. However, you should flush the instruction pipeline before using RDTSC, so you usually have to use the inline assembly function shown below.

Serializing the instruction stream

RDTSC can be executed out-of-order, so you should flush the instruction pipeline first; otherwise, the counter may be read before the preceding code has actually finished executing, and the measurement will be wrong.

unsigned __int64 inline GetRDTSC() {
   __asm {
      ; Flush the pipeline
      XOR eax, eax
      CPUID
      ; Get RDTSC counter in edx:eax
      RDTSC
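      ; The 64-bit result is left in EDX:EAX, which is how
      ; MSVC returns an unsigned __int64 from a function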
   }
}
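
For 64-bit builds, where MSVC does not support inline assembly, a roughly equivalent sketch (an illustration, not the article's original code) can combine the __cpuid and __rdtsc intrinsics from <intrin.h>:

#include <intrin.h>

unsigned __int64 GetRDTSC() {
   int cpu_info[4];
   __cpuid(cpu_info, 0);  // serializing instruction: flush the pipeline
   return __rdtsc();      // read the time-stamp counter
}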

Dealing with context switches

Many people worry about context switches that may occur during the measurement. A context switch on Windows NT takes several thousand clock cycles, so it might bias your results. The best way to avoid this problem is to arrange a small test case, so that your thread will rarely be interrupted.

The scheduling quantum on Windows NT is 20 msec, and the probability of a context switch during the measurement is roughly execution_time / quantum. If your function takes 60 000 clock cycles on a 1 GHz processor (60 microseconds), there is a 0.3% probability that a context switch will happen. On the other hand, if your function takes 100 times longer, it will be interrupted with 30% probability.

Some people use SetPriorityClass with REALTIME_PRIORITY_CLASS, SetThreadPriority with THREAD_PRIORITY_TIME_CRITICAL, or SetProcessPriorityBoost to prevent their threads from being preempted. This can help sometimes, but it's not a panacea. Don't try to measure functions that take several seconds with this method.

Handling multi-core processors

Time-stamp counters on different cores or different processors are not synchronized with each other. Use the SetThreadAffinityMask function to prevent your thread from migrating between cores, as in the sketch below.
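
A minimal sketch combining the affinity and priority calls from the last two sections (PrepareForMeasurement is a hypothetical helper name; the mask 1 pins the thread to the first core):

#include <windows.h>

void PrepareForMeasurement() {
   // Keep the thread on one core, so all readings come from the same TSC
   SetThreadAffinityMask(GetCurrentThread(), 1);
   // Reduce the chance of preemption during the measurement
   SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
   SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
}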

Repeating the measurements

To detect context switches and eliminate cache warm-up effects, you should repeat the measurement at least 5 times. The last measurements should give consistent results:

     strlen:       1489 ticks  <== cache warm-up
     strlen:       1041 ticks
     strlen:       1041 ticks
     strlen:       1034 ticks
     strlen:       1013 ticks
     strlen:       1019 ticks  <== constant performance is reached
     strlen:       1019 ticks
     strlen:       1019 ticks
     strlen:       1019 ticks
     strlen:       1019 ticks
     strlen:       1019 ticks

The result for this function will be 1019 clock cycles.
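
A minimal harness along these lines (a sketch: it reuses the GetRDTSC function from above, and the 1000-character test string is an arbitrary example):

#include <stdio.h>
#include <string.h>

unsigned __int64 GetRDTSC();  // as defined above

#define REPEAT 11

void MeasureStrlen() {
   char buf[1001];
   memset(buf, 'a', sizeof(buf) - 1);
   buf[sizeof(buf) - 1] = '\0';

   for (int i = 0; i < REPEAT; i++) {
      unsigned __int64 start = GetRDTSC();
      size_t len = strlen(buf);                 // the function under test
      unsigned __int64 ticks = GetRDTSC() - start;
      // Print the length, so the compiler cannot optimize strlen away
      printf("strlen: %10I64u ticks (len = %Iu)\n", ticks, len);
   }
}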

Subtracting overhead

Intel and Agner Fog recommend measuring the overhead of the RDTSC code itself and subtracting it from your results. The overhead is relatively low (150-200 clock cycles) and is the same for all tested functions, so you can neglect it when measuring long functions (e.g., 100 000 clock cycles).

If you are measuring a short function, you should subtract the overhead using the method described in Intel's paper.
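
One simple way to estimate the overhead (a sketch, not the exact procedure from Intel's paper) is to time an empty measurement and keep the smallest value observed:

unsigned __int64 GetRDTSC();  // as defined above

unsigned __int64 MeasureOverhead() {
   unsigned __int64 overhead = (unsigned __int64)-1;
   for (int i = 0; i < 10; i++) {
      unsigned __int64 start = GetRDTSC();
      // Nothing is measured here: the difference is pure overhead
      unsigned __int64 ticks = GetRDTSC() - start;
      if (ticks < overhead)
         overhead = ticks;
   }
   return overhead;  // subtract this from your measurements
}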

Preventing frequency changes

If your processor supports Intel SpeedStep (usually supported on laptop computers), you should set the power management scheme in Windows to "Always on" before starting long measurements. Otherwise, the processor will change its frequency, and the process of switching to another frequency ("power state transition", in Intel terminology) may bias your results.

Reporting the result in clock cycles, not in seconds

You should not convert your results to seconds. Report them in clock cycles.

From the user's point of view, execution time in seconds makes more sense than clock cycles. But remember that:

time_in_seconds = number_of_clock_cycles / frequency

The frequency is constant (we are comparing the functions on the same processor), so both units will give the same comparison.

Clock cycles are more useful, because you can calculate the theoretical number of clock cycles using Agner Fog's instruction tables and compare it with the measured number. If you also monitor performance events (see below), you will know not only which function is faster, but also why it is faster and how to improve it.

Also note that your processor counts in clock cycles, not in seconds. For example, suppose your function takes 5000 clock cycles on a 1.5 GHz Pentium M processor. If you switch the processor to 600 MHz (using Intel SpeedStep), it will take the same 5000 clock cycles. Moreover, on another Pentium M with a different frequency, the function will take 5000 clock cycles again.

The time in clock cycles is a consistent, predictable measure, which is independent of frequency changes.

Getting not only timings, but also performance event counters

Reporting only the execution time is not enough. If you want to know the reasons for low performance, you should use a performance event monitor (the RDPMC instruction), which reports the precise causes of the slowdown (for example: is it slow because of cache misses? Partial register stalls? Instruction fetch stalls?). You can get performance event data with these programs:

  • AMD CodeAnalyst (freeware, for AMD processors only);
  • Intel VTune (very expensive and buggy bloatware);
  • Agner Fog's TestP.Zip package (open source, for all kinds of processors).

Common misunderstandings of the concept

Responses collected from private communication and articles by other authors:

> You should repeat the test 1000 times, so it will take several seconds, and then average the time.

You will get a lot of context switches. The possible implications:

  • Because the real code does not call the function 1000 times on the same data, your cache usage pattern will be different from the real code's. You must be careful to reproduce the real cache usage pattern in your tests, which is hard.
  • The processes executing in the background may do crazy things; for example, the Vista indexing service may decide it's a good time to index your folders.
  • You cannot monitor performance events, because you don't know which performance events come from your program and which do not. So you will have to optimize blindly, using a trial-and-error method, without understanding the real reasons for bad performance.

Instead of such tests, you should run the whole program on real data. Your cache usage pattern will then be absolutely realistic. Also, the execution time on real data is more valuable to the end user than the time some synthetic test takes.

So, you should have two tests: one with the small function that is known to be a bottleneck, and one with the large program and the real data for estimating the overall effect of your optimization. In the first case, the time is measured with RDTSC in clock cycles. In the second case, it is measured in seconds (you can even use GetTickCount here, as in the sketch below).
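
For the second test, a GetTickCount-based sketch is enough (ProcessRealData is a placeholder name for running the whole program on real data):

#include <windows.h>
#include <stdio.h>

void ProcessRealData();  // placeholder: the whole program on real data

void MeasureWholeProgram() {
   DWORD start = GetTickCount();
   ProcessRealData();
   printf("Elapsed: %lu ms\n", GetTickCount() - start);
}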

> Microsoft recommends using QueryPerformanceCounter instead of RDTSC.

  • Note that Microsoft advises against RDTSC for game timing, not for performance measurements. Timing in games is a completely different topic.
  • QueryPerformanceCounter can report data from various counters, and one of them is the time stamp counter. On single-core processors without Intel SpeedStep, QueryPerformanceCounter is implemented using RDTSC; the function just adds a lot of overhead and some additional problems. As the developer of VirtualDub says: "So, realistically, using QueryPerformanceCounter actually exposes you to all of the existing problems of the time stamp counter AND some other bugs".
  • When used with care and understanding, RDTSC is more precise than QueryPerformanceCounter. See the thread where a person tried both methods and finally chose RDTSC.
  • Microsoft itself uses RDTSC in the Visual Studio profiler.

Error in Intel's paper

On page 5:

rdtsc
sub             eax, time_low  
sub             edx, time_high

should be

rdtsc
sub             eax, time_low  
sbb             edx, time_high  ; Subtract with borrow from low 32 bits

SUB on the high doubleword ignores the borrow produced by the low doubleword; SBB subtracts it together with the carry flag, giving the correct 64-bit difference.

About the author

Peter is the developer of Aba Search and Replace, a tool for replacing text in multiple files. He likes to program in C with a bit of C++, also in x86 assembly language, Python, and PHP.

27 comments

Ten recent comments are shown below.

Peter Kankowski,
Please use a search engine.
ace,

How to Benchmark Code Execution Times on Intel IA-32 and IA-64 Instruction Set Architectures

http://download.intel.com/embedded/software/IA/324264.pdf

Peter Kankowski,
Thank you! The paper basically says RDTSCP is more precise than RDTSC. I will compare them.
ace,

What I understood is that they use both -- one to start and one to end the measurement: one combination (CPUID, then RDTSC) prevents instructions from being executed before the measured code, and the other (RDTSCP, then CPUID) prevents instructions from being executed after it, thereby reducing the deviation in the kind of measurements where only a few instructions are measured. For longer time periods, it should matter less.

On another side, I don't even know if the CPU I use here (an i5) has RDTSCP, and if it's one of the newest where the RDTSC counter keeps incrementing even when power management brings the CPU to sleep. Does anybody have the code to check such properties?
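
A minimal sketch of such a check, assuming MSVC's __cpuid intrinsic (RDTSCP support is reported in CPUID.80000001H:EDX bit 27, and the invariant TSC in CPUID.80000007H:EDX bit 8):

#include <intrin.h>
#include <stdio.h>

int main() {
   int info[4];  // EAX, EBX, ECX, EDX
   __cpuid(info, 0x80000001);
   printf("RDTSCP supported: %d\n", (info[3] >> 27) & 1);
   __cpuid(info, 0x80000007);
   printf("Invariant TSC:    %d\n", (info[3] >> 8) & 1);
   return 0;
}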

ace,

BTW, I typically don't measure things that last only a few instructions. Then even the resolution of GetTickCount is good enough for me. If I measure something that takes around 1 second, and GetTickCount changes once every 1/60th of a second, my accuracy is better than 2%. This allows me to measure systems that provide only such time resolution, i.e. I don't need RDTSC to calculate that IE8 on my computer can do on the order of 6M FPU calculations per second, Firefox on the order of 240M, Opera on the order of 600M, and C with the classic FPU on the order of 700M (everything on one core, of course). What I'm interested in are typically "order of" numbers.

TS,

Is it possible to use RDTSC alone, without calling CPUID? RDTSC takes around 20-25 cycles, but CPUID takes 100+ cycles.

The path that I want to measure takes roughly 2000 cycles.

RDTSC with CPUID is around 200 cycles, so the overhead is approximately 10%. If RDTSC is used alone, without CPUID, the overhead will be around 1%. The question is: how accurate is RDTSC when used alone?

Peter Kankowski,

RDTSC can be executed out-of-order (before or after the nearby instructions), so the measurement will be inaccurate. CPUID flushes the instruction pipeline; all preceding instructions are completed before RDTSC starts.

You can subtract the bias that RDTSC adds to the results. Agner Fog does such a subtraction in his test programs.

Morris,

How can I measure the MHz speed of the RDTSC counter? How can I compare HPET to RDTSC to verify which is faster?

Peter Kankowski,

Morris,

Please save the time-stamp counter value (using RDTSC) and wait for one second. Find the difference between the current value and the saved value; since one second has passed, the difference is the counter frequency in ticks per second.

Note that the frequency (i.e., CPU speed) may change because of power saving on older processors. Newer processors have a constant-rate time-stamp counter. See https://en.wikipedia.org/wiki/Time_Stamp_Counter
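
A sketch of that measurement, assuming the __rdtsc intrinsic and a constant-rate counter (Sleep waits only approximately one second, so treat the result as an estimate):

#include <windows.h>
#include <intrin.h>
#include <stdio.h>

int main() {
   unsigned __int64 start = __rdtsc();
   Sleep(1000);  // wait roughly one second
   unsigned __int64 ticks = __rdtsc() - start;
   printf("TSC frequency: about %I64u MHz\n", ticks / 1000000);
   return 0;
}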

zzzzzz,

At a minimum, on old (Pentium II) PCs you could maybe replace CPUID with CLD (it does not flush the pipeline, but it serves as an anti-parallelization barrier for instruction micro-tests).
