So, this comes from a programming problem, but it's math that I feel should be easy, yet I can't figure out a quick way to do it short of writing a Python script. It's an optimization problem, so maybe calculus could help, but I need integer solutions, and I don't know how to apply calculus to those.
I'm trying to write an assembly-language program which delays the CPU for exactly $n$ ms. The CPU runs at 16 MHz, and I have written an inner loop which delays for 160 cycles. It is called repeatedly by an outer loop which adds 2 cycles of overhead, so each pass of the outer loop delays the CPU by 162 cycles.
The routine also has a fixed overhead of 12 cycles for additional setup instructions, so the looping portion needs to consume 15,988 cycles. I can add instructions to the outer loop which delay the CPU by 1 cycle each, but to keep the routine short, I want to minimize the number of such instructions.
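Presumably the 15,988 figure comes from targeting a 1 ms delay (the 1 ms target is my reading, not stated above): 16 MHz gives 16,000 cycles per millisecond, minus the 12-cycle overhead. As a quick sanity check:

```python
# Cycle budget for one millisecond on a 16 MHz CPU.
# Assumes the target is n = 1 ms; for other n, scale accordingly.
CLOCK_HZ = 16_000_000
CYCLES_PER_MS = CLOCK_HZ // 1000   # 16,000 cycles per ms
OVERHEAD = 12                      # fixed cycles outside the loops

budget = CYCLES_PER_MS - OVERHEAD
print(budget)  # 15988
```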
In other words: let $x = 162$ be the number of cycles one pass of the loop takes, let $\Delta{x}$ be the number of cycles I add to it, and let $r = 15{,}988 \bmod (x + \Delta{x})$ be the leftover remainder (calling it $r$ to avoid clashing with the $n$ ms above). I'd like to minimize $\Delta{x} + r$.
How can I solve this efficiently, without just writing a program that iterates over each candidate $x + \Delta{x}$ and checks the remainder?
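For reference, the brute-force search I'm trying to avoid is only a few lines (variable names here are mine); it at least pins down the objective:

```python
N = 15_988   # cycles the loop must consume
x = 162      # cycles per outer-loop pass as written

def brute_force(N, x, max_dx=200):
    """Try every padding dx and keep the one minimizing dx + remainder."""
    best = None
    for dx in range(max_dx + 1):
        r = N % (x + dx)
        cost = dx + r
        if best is None or cost < best[0]:
            best = (cost, dx, r)
    return best

print(brute_force(N, x))  # (15, 1, 14): pad 1 cycle, 14 cycles left over
```

Note the search space is inherently small: since the cost is $\Delta{x} + r \ge \Delta{x}$, any padding at or beyond the best cost found so far can't win, which is part of why it feels like a closed form should exist.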
(We're just going to pretend we live in a perfect world where all CPUs run at exactly their advertised clock speed and hardware interrupts never happen.)
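One direction that avoids scanning every padding value, sketched only (this is my own idea, not a known-good algorithm): a remainder of $r$ means the loop length $x + \Delta{x}$ divides $15{,}988 - r$ exactly, so you can iterate over small remainders and factor $15{,}988 - r$ instead, stopping once $r$ reaches the best total cost found so far:

```python
def divisor_search(N, x):
    """Search by remainder instead of by padding.

    r = N mod m implies that m divides N - r and m > r. So for each
    candidate remainder r (ascending), enumerate divisors m of N - r
    with m >= x; each gives cost (m - x) + r. Once r itself reaches
    the best cost found, no cheaper solution can exist.
    """
    best = None  # (cost, dx, r)
    r = 0
    while best is None or r < best[0]:
        target = N - r
        d = 1
        while d * d <= target:          # trial division up to sqrt
            if target % d == 0:
                for m in (d, target // d):
                    if m >= x and m > r:
                        cost = (m - x) + r
                        if best is None or cost < best[0]:
                            best = (cost, m - x, r)
            d += 1
        r += 1
    return best

print(divisor_search(15_988, 162))  # (15, 1, 14): pad 1 cycle, remainder 14
```

The `m > r` check enforces that a remainder must be smaller than its modulus; the `m >= x` check discards loop lengths shorter than the one already written.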