[plug] Error in Time() command
tenzero at iinet.net.au
Mon May 24 11:17:19 WST 2010
On 24/05/2010, at 8:56 AM, Daniel Pittman wrote:
> "tenzero at iinet.net.au" <tenzero at iinet.net.au> writes:
>> I'm seeking a preferably citable reference to the amount of error in the
>> returned result from a time(1) command. I want to be able to quote the level
>> of error in timing the execution speed of my project.
> You mean time(1), the command line utility, as you did when you mentioned this
> elsewhere, right?
I mentioned this on SLUG last night, when I found out I was not getting through to
Plug nor receiving anything from Plug. This was resolved some time later.
> I doubt anyone has bothered with a scholarly citation, in part because the
> answer is necessarily going to depend on the software and hardware
> configuration of the system.
> Given that time uses the wait4 rusage results, and those use the system clock,
> that will reference a hardware clock based on the availability and reliability
> of those clocks...
> So, the short answer is: it varies. Generally, you should be able to find out
> through inspection of the kernel code and all, though.
> I very much suspect, however, that your question is meaningless: you are
> asking us this because you want to use it to prove something, and you thought
> about that, then looked at time(1), then asked questions about that.
> If you go back and tell us what your actual goal is, we should be able to give
> you a better answer that will help you get useful information.
Cool. I was testing my Java app to produce a baseline performance value
against which to compare the relative performance of a hardware accelerator I have built.
As part of measuring or testing anything, we are supposed to discuss sources
of error in the measurement.
I accept that the system is variable, caches need to be filled, and so on. So I measured
the execution time repeatedly, a number of times, to get a sense of the variability of
the execution time.
What I was wondering, however, was: if the test returns a time of, say, 0.258 seconds,
and repeated samples vary by 1 ms (e.g. 0.259, 0.257), can I truly trust the accuracy of
that claimed 1 ms variability?
Which is why I was wondering if there was anything on the level of error in the time(1)
command. But as you say, and as my night's reading supports, the answer depends on
simply too many things to ever be conclusive.
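One concrete lower bound on that trust is the resolution the system's clocks actually claim: variability smaller than the clock resolution cannot be meaningfully measured. A minimal sketch (assuming a Python runtime on the measurement machine; the clock names are Python's standard ones, not anything from time(1) itself):

```python
import time

# Ask the runtime what resolution each of its clocks claims.
# Any reported variability smaller than this is below the noise floor.
for name in ("monotonic", "perf_counter", "process_time"):
    info = time.get_clock_info(name)
    print(f"{name}: resolution = {info.resolution} s, "
          f"monotonic = {info.monotonic}")
```

As the thread notes, the actual resolution varies with the kernel and hardware clock source, so this only reports what one particular system offers.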
> Starting with "run the process multiple times, because cache effects and
> memory pressure from other applications *will* change the results."
Thanks for your insights.