
I am trying to measure the absolute time between two 1 Hz clocks (GPS PPS events) to within 50 nanoseconds. I do not necessarily care which clock occurs first; I am only looking for the delta T between the two pulses. The pulse width is on the order of hundreds of milliseconds, so it is not a constraint for me here.

My plan is to use a microcontroller to read in these two 1 Hz clocks as two different interrupt events. The rising edge of the leading PPS signal will trigger a counter, which will count cycles until the arrival of the second PPS signal. I assume the clock/crystal for the counter will need to be 20 MHz or greater due to the 50 ns resolution I need.

My question is: how do I go about determining how fast a dedicated microcontroller needs to be to perform such a task?

I highly doubt an MCU with a core speed of 20 MHz will suffice, since the code to answer interrupts and hold a count will consist of more than one instruction. I understand that anything else I want to do with this microcontroller, such as sending this time delay via UART, will consume clock cycles as well. For simplicity, let's assume the controller is only being used to count rising edges of a 20 MHz clock between two interrupt events.

I am looking for a guideline for calculating/ballparking the bare minimum MCU core speed needed to handle a 20 MHz internal/external timer for the measurement portion of this project. I am asking to get a better idea of how to source appropriate hardware, instead of simply throwing a >200 MHz ARM at the problem and hoping I overshoot my requirements. Thank you.

  • You can probably use the built-in capture registers that are on many microcontrollers. They capture the value of a free-running counter when an external edge is detected. You can then subtract the two captured values to get the delta T. You just need counters that are long enough; some chips can cascade two 16-bit counters into a 32-bit counter, which would be enough.
    – crj11
    Commented Mar 2, 2019 at 1:34
  • Capture modules are probably the way to do it, but you will probably need a higher clock frequency than 20 MHz, depending on how you define your 50 ns accuracy: there is an uncertainty in synchronization with the MCU's internal peripheral clock on each edge, so the difference will suffer from double that uncertainty.
    Commented Mar 2, 2019 at 2:46
  • The logic hardware needed to capture a timer window is trivial (flip-flops, crystal clock, AND gate). Some microcontrollers have internal ripple counters independent of their master clock. For example, a $2 PIC's TMR0 can count at 100 MHz with an external gate, and its prescaler can be dumped out later under software control. That is ideal when you have lots of time from one count window to the next. The PIC itself can be clocked from its crappy internal RC master.
    – glen_geek
    Commented Mar 2, 2019 at 18:34

1 Answer


If I were designing this circuit myself, I'd aim for 40 MHz, or thereabouts. (Well, actually, if I were designing this circuit myself I'd find a CPLD or some small discrete logic to handle the high-speed timer part, but this question is about how best to apply a microcontroller to the problem.) The reasoning goes as follows:

I highly doubt an MCU with a core speed of 20 MHz will suffice, since the code to answer interrupts and hold a count will consist of more than one instruction.

That's actually not a concern. As long as the interrupt latency is constant, it falls right out of the equation: your first timer capture would be \$T_{C1} = T_1 + T_{lat}\$, your second capture would be \$T_{C2} = T_2 + T_{lat}\$, so the time difference you calculate would be \$T_{C2} - T_{C1} = (T_2 + T_{lat}) - (T_1 + T_{lat}) = T_2 - T_1\$, exactly what you were trying to measure. Or, alternatively, it takes you \$T_{lat}\$ to kick off the clock and \$T_{lat}\$ to stop it, so again it cancels out.
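To make the cancellation concrete, here is a minimal C sketch of the ISR-based version (the function and variable names are hypothetical, not any particular vendor's API):

```c
#include <stdint.h>

/* Shared with the main loop; names are illustrative. */
volatile uint32_t t_capture1, t_capture2;
volatile uint8_t  both_edges_seen;

/* Hypothetical read of a free-running 32-bit timer; on a real part
 * this would be a read of the timer's count register. */
extern uint32_t timer_count(void);

/* ISR for the first PPS input; it runs T_lat after the edge. */
void pps1_isr(void)
{
    t_capture1 = timer_count();   /* = T1 + T_lat, in timer ticks */
}

/* ISR for the second PPS input, with the same constant latency. */
void pps2_isr(void)
{
    t_capture2 = timer_count();   /* = T2 + T_lat */
    both_edges_seen = 1;
}

/* t_capture2 - t_capture1 = (T2 + T_lat) - (T1 + T_lat) = T2 - T1,
 * so the latency drops out as long as it is the same in both ISRs. */
```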

Of course, the above only holds if \$T_{lat}\$ is nice and constant, so the key is to figure out how much jitter you will have in your interrupt latency. For the time being, I'm assuming the jitter is low (the latency is effectively constant). As @crj11 pointed out, if you have hardware capture registers then you can sidestep this whole issue entirely.

As you point out, a 20 MHz clock has a 50 ns period, but each edge capture has an uncertainty of about one clock cycle, since the 1PPS GPS signals are free-running with respect to your microcontroller clock. With two edges being captured, that is up to two cycles of error on the difference, and the extra factor of 2 in the clock rate accounts for it.
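Putting numbers on that (my arithmetic, following the reasoning above): each edge can land anywhere within one period of the capture clock, so the worst-case error on the difference is two periods:

\$\$ \Delta t_{err} \le 2\,T_{clk} = \frac{2}{40\ \text{MHz}} = 50\ \text{ns} \$\$

At 20 MHz the same bound would be 100 ns, which misses the 50 ns target; 40 MHz just meets it, and any margin above that would not hurt.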

I understand that anything else I want to do with this microcontroller, such as sending this time delay via UART, will consume clock cycles as well.

Again, not a problem, presuming you are using the microcontroller's built-in counter. Whether you use a hardware capture like @crj11 mentioned, or an interrupt service routine starting and stopping the timer, you have two very small, fast interrupt events happening every second, and you can spend the rest of the time (in a main loop or a lower-priority interrupt routine) shuffling the data out the UART, as sketched below.
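As a sketch of that structure (building on the hypothetical names from the ISR sketch above; `uart_puts` is a placeholder for whatever transmit routine your part provides):

```c
#include <stdint.h>
#include <stdio.h>

/* Filled in by the two PPS ISRs from the sketch above. */
extern volatile uint32_t t_capture1, t_capture2;
extern volatile uint8_t  both_edges_seen;

#define TIMER_HZ 40000000UL            /* 40 MHz timer clock, 25 ns/tick */

extern void uart_puts(const char *s);  /* placeholder UART transmit */

int main(void)
{
    char msg[32];

    for (;;) {
        if (both_edges_seen) {
            /* Unsigned subtraction handles timer wrap-around for free.
             * (A real build would also guard against an ISR updating
             * these variables mid-read, e.g. by briefly masking the
             * PPS interrupts.) */
            uint32_t ticks = t_capture2 - t_capture1;
            uint64_t ns    = (uint64_t)ticks * 1000000000ULL / TIMER_HZ;

            snprintf(msg, sizeof msg, "dT = %lu ns\r\n", (unsigned long)ns);
            uart_puts(msg);   /* the slow work happens here, not in an ISR */
            both_edges_seen = 0;
        }
        /* The next PPS pair is a whole second away, so nothing in this
         * loop is timing-critical. */
    }
}
```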

