I'm timing how long various parts of a computer program take to run, with the intent of including the results in a research paper.
There are fluctuations in the timings from the usual sources of noise inside a computer (other processes running, scheduling, the timer not being perfectly accurate, etc.).
I'd like to report a value for how long each section takes, but since each section can vary by some amount, I'm not really sure what I should report.
I could, for instance, time it over 10 runs and report the average, but maybe that isn't enough samples, or maybe a strong outlier would move the average too much.
I was thinking I could instead use the median (or even the minimum), which is robust to large outliers, and which seems appropriate for my use case, since the random noise can only slow a run down, never speed it up.
Still, even with a robust statistic like the median, I'm not sure how many samples I ought to take or how I ought to report the values. Should it be a single value, or a range with some sort of confidence interval?
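For concreteness, here is a sketch of roughly what I'm doing now (the workload is just a placeholder; the bootstrap interval is one option I've seen suggested for reporting uncertainty, not something I'm committed to):

```python
import random
import statistics
import time


def time_section(fn, runs=30):
    """Time fn over several runs and return the raw samples in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return samples


def bootstrap_ci(samples, stat=statistics.median, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a summary statistic."""
    rng = random.Random(seed)
    boot = sorted(
        stat(rng.choices(samples, k=len(samples))) for _ in range(n_boot)
    )
    lo = boot[int((alpha / 2) * n_boot)]
    hi = boot[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi


# Placeholder workload standing in for one section of my program.
samples = time_section(lambda: sum(i * i for i in range(100_000)))
print(f"min    = {min(samples):.6f} s")
print(f"median = {statistics.median(samples):.6f} s")
lo, hi = bootstrap_ci(samples)
print(f"95% bootstrap CI for the median: [{lo:.6f}, {hi:.6f}] s")
```

So would reporting, say, the median together with a bootstrap interval like this be considered acceptable, or is there a more standard convention?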
Thanks!