
My question is: knowing everything there is to know about several systems (CPU and GPU specifications, OS), is it possible to approximate when each system will finish a specific processing operation? And if it is, could you please share some resources on how one could do that?

For example, is it possible to know approximately when a system will finish a rendering operation, if we know the rendering algorithm and software, along with the complexity of the project?

2 Answers


A quote from Motorola's 1986 manual for the 68020 processor: “Estimating precise execution times is very difficult, even if you understand the processor completely”.

That was a relatively primitive processor. Nowadays it is practically impossible.

What you can do: Measure.
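For example, here is a minimal measurement sketch in C, assuming a POSIX system; `render_frame()` is a hypothetical stand-in for whatever operation you actually care about:

    /* Minimal measurement sketch: time the real workload on the real
     * machine instead of trying to predict it. Assumes POSIX
     * clock_gettime(); render_frame() is a hypothetical placeholder. */
    #include <stdio.h>
    #include <time.h>

    static void render_frame(void)
    {
        /* stand-in workload; replace with the real operation */
        for (volatile long i = 0; i < 50000000L; i++) {}
    }

    int main(void)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 10; i++)    /* repeat to average out noise */
            render_frame();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("average: %.3f s per run\n", secs / 10);
        return 0;
    }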


If you can break your operation down to the assembly level, it should be possible to estimate its execution time.

Since each machine's assembly code maps directly to its machine instructions, you can simply "count" the number of clock cycles a piece of software would take to run. This approach is sometimes used on simple processors running simple programs (e.g. microcontrollers), where precise timing is essential.
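As a minimal sketch of that technique, assuming an AVR-class in-order core and avr-gcc (the per-instruction cycle counts below are the ones documented in the AVR datasheet):

    /* Datasheet-based cycle counting on a simple in-order core (AVR).
       Every instruction has a fixed, documented cost, so total runtime
       is plain arithmetic rather than measurement. */
    #include <stdint.h>

    void delay_400_cycles(void)
    {
        uint8_t i = 100;              /* ldi: 1 cycle                  */
        __asm__ volatile(
            "1: nop        \n\t"      /* nop:  1 cycle                 */
            "   dec %0     \n\t"      /* dec:  1 cycle                 */
            "   brne 1b    \n\t"      /* brne: 2 cycles taken, 1 not   */
            : "+r"(i));
        /* 1 + 99*4 + 3 = 400 cycles total, i.e. exactly 25 us at
           16 MHz. On an out-of-order x86 core or a GPU, no such
           per-instruction table exists. */
    }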

However, keep in mind:

  1. Every processor architecture has its own unique way of handling high-level operations. This means the amount of work required to "estimate" the runtime of your high-level code on different machines would be enormous.

  2. Modern computing systems usually run your application on top of an OS. This additional layer of indirection makes it impossible to "guess" when your code will be scheduled and thus how quickly it will finish.

TL;DR: No, you probably won't be able to know when your software will finish rendering something without testing it.

  • You actually can't count clock cycles any more. – user253751, Feb 8, 2021 at 15:31
  • @user253751 Not exactly "counting", but primitive architectures like 8-bit AVRs will tell you how many clock cycles each instruction takes in their datasheet. Of course that isn't possible on a modern x86 CPU or GPU, which use more sophisticated techniques. – Feb 9, 2021 at 16:21
  • There's a question on SO with 10k+ upvotes on why changing the data changes execution time. It ended up being due to the hit rate of the branch predictor. – jaskij, Feb 10, 2021 at 10:17
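The SO question that last comment describes is presumably the well-known "sorted vs. unsorted array" one. A minimal C sketch that reproduces the effect (exact timings will vary by machine):

    /* Same data, same loop, noticeably different runtime: on random
       data the v[i] >= 128 branch is hard to predict, on sorted data
       the branch predictor is nearly perfect. A sketch, not a
       rigorous benchmark. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 20)

    static void time_sum(const int *v, const char *label)
    {
        clock_t t0 = clock();
        long long sum = 0;
        for (int r = 0; r < 100; r++)
            for (int i = 0; i < N; i++)
                if (v[i] >= 128)        /* the data-dependent branch */
                    sum += v[i];
        printf("%s: sum=%lld, %.2f s\n", label, sum,
               (double)(clock() - t0) / CLOCKS_PER_SEC);
    }

    static int cmp(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    int main(void)
    {
        int *v = malloc(N * sizeof *v);
        if (!v)
            return 1;
        for (int i = 0; i < N; i++)
            v[i] = rand() % 256;

        time_sum(v, "random");          /* many mispredictions       */
        qsort(v, N, sizeof *v, cmp);
        time_sum(v, "sorted");          /* predictor nearly perfect  */
        free(v);
        return 0;
    }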
