I am writing C code for the Raspberry Pi 4 (ARM Cortex-A72) that relies on precise timing with periods of less than 1 µs. To get precise timing, I use the following busy-wait loop:
clock_gettime(CLOCK_MONOTONIC, &ttime);
ntime = ttime.tv_sec * (uint64_t)1000000000L + ttime.tv_nsec + time_period_in_ns;
while (1)
{
    clock_gettime(CLOCK_MONOTONIC, &ctime);
    if (ctime.tv_sec * (uint64_t)1000000000L + ctime.tv_nsec >= ntime) break;
}
The precision of the time measurement depends on how long one iteration of this loop takes. To measure that time, I wrote the following program:
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int loops(int micsec)
{
    struct timespec ctime, ttime;
    uint64_t ntime;
    int i = 0;
    clock_gettime(CLOCK_MONOTONIC, &ttime);
    ntime = ttime.tv_sec * (uint64_t)1000000000L + ttime.tv_nsec
            + 1000 * (uint64_t)micsec;
    while (1)
    {
        clock_gettime(CLOCK_MONOTONIC, &ctime);
        if (ctime.tv_sec * (uint64_t)1000000000L + ctime.tv_nsec >= ntime) break;
        i = i + 1;
    }
    return i;
}

int main(void)
{
    int j;
    int lismicsec[13] = {1, 10, 100, 1000, 10000, 100000, 1000000,
                         100000, 10000, 1000, 100, 10, 1};
    int lisloops[13];
    for (j = 0; j < 13; j++)
    {
        lisloops[j] = loops(lismicsec[j]);
    }
    for (j = 0; j < 13; j++)
    {
        fprintf(stderr, "Time in us: %d, Loops: %d, One loop time in ns: %d\n",
                lismicsec[j], lisloops[j], 1000 * lismicsec[j] / lisloops[j]);
    }
    return 0;
}
To my great surprise, the per-loop time varied considerably, by more than a factor of four:
Time in us: 1, Loops: 5, One loop time in ns: 200
Time in us: 10, Loops: 71, One loop time in ns: 140
Time in us: 100, Loops: 721, One loop time in ns: 138
Time in us: 1000, Loops: 6426, One loop time in ns: 155
Time in us: 10000, Loops: 71385, One loop time in ns: 140
Time in us: 100000, Loops: 1378552, One loop time in ns: 72
Time in us: 1000000, Loops: 21626183, One loop time in ns: 46
Time in us: 100000, Loops: 2161626, One loop time in ns: 46
Time in us: 10000, Loops: 216592, One loop time in ns: 46
Time in us: 1000, Loops: 21687, One loop time in ns: 46
Time in us: 100, Loops: 2168, One loop time in ns: 46
Time in us: 10, Loops: 216, One loop time in ns: 46
Time in us: 1, Loops: 21, One loop time in ns: 47
If I understand it correctly, the CPU lowers its clock frequency when it is mostly idle, so the time measurement is less precise; once it is more heavily utilised, the CPU raises the frequency and the measurement becomes more precise. Is that correct?
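One way to check this hypothesis is to sample the current core frequency from the cpufreq sysfs interface while the loop runs. This is a sketch assuming a Linux kernel with cpufreq enabled, as on standard Raspberry Pi OS; the sysfs path may differ on other systems:

```c
#include <stdio.h>

/* Read the current frequency of the given CPU core in kHz from the
 * cpufreq sysfs interface; returns -1 if the file is unavailable. */
static long read_cpu_freq_khz(int cpu)
{
    char path[128];
    snprintf(path, sizeof path,
             "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq", cpu);
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    long khz = -1;
    if (fscanf(f, "%ld", &khz) != 1) khz = -1;
    fclose(f);
    return khz;
}
```

Sampling this before and during the long runs should show whether the governor has ramped the clock up (on a Pi 4 typically from 600 MHz at idle to 1.5 GHz under load).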
Is there a way to instruct the CPU from C to increase the frequency and make the timing more precise?