13
\$\begingroup\$

Satellites in low Earth orbit move at close to 8 km/s. Most consumer-grade GPS chips still invoke the CoCom limits of 1,000 knots, about 514 m/s. CoCom limits are voluntary export restrictions that you can read more about in this question and answer, in this other question and answer, and elsewhere.

For this question, let's assume they are implemented as numerical limits in the output section of the firmware. The chip must actually calculate the speed (and altitude) before it can decide whether the limit is exceeded, and then either present the solution at the output or block it.

At 8,000 m/s the Doppler shift at 2 GHz is about 0.05 MHz, a small fraction of the natural width of the signal due to its modulation.

There are several companies that sell GPS units for cubesats. They are expensive (hundreds to thousands of dollars) and probably worth every penny, because at least some of them are designed for satellite applications and have been tested in space.

Ignoring the implementation of the CoCom limits, and all other issues of operation in space besides velocity: are there any reasons why a modern GPS chip specified for a 500 m/s maximum velocity would not be able to work at 8,000 m/s? If so, what are they?

Note: 8,000 m/s divided by c (3×10^8 m/s) gives about 27 ppm of expansion/compression of the received sequences. This might affect some implementations of correlation (both in hardware and software).
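
As a quick sanity check of those numbers, here is a short Python sketch (the L1 carrier and C/A chipping rate are standard GPS values; the 2 GHz figure above is a round-up of the L1 carrier):

```python
C = 299_792_458.0        # speed of light, m/s
V = 8_000.0              # worst-case line-of-sight velocity of a LEO receiver, m/s

F_CARRIER = 1_575.42e6   # GPS L1 carrier, Hz
F_CHIP = 1.023e6         # C/A code chipping rate, chips/s

doppler_carrier = V / C * F_CARRIER   # carrier Doppler, Hz
doppler_code = V / C * F_CHIP         # code Doppler, chips/s
scale_ppm = V / C * 1e6               # compression/expansion of the received sequence

print(f"carrier Doppler  ~ {doppler_carrier / 1e3:.1f} kHz")   # ~42 kHz at L1
print(f"code Doppler     ~ {doppler_code:.1f} chips/s")        # ~27 chips/s
print(f"sequence scaling ~ {scale_ppm:.0f} ppm")               # ~27 ppm
```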

\$\endgroup\$
8
  • 4
    \$\begingroup\$ The first reason that comes to my mind is that it makes no sense to even test, let alone design for these speeds, thus working there is mere luck or coincidence. \$\endgroup\$
    – PlasmaHH
    Commented Jun 6, 2016 at 12:44
  • 3
    \$\begingroup\$ I'm with PlasmaHH on this one. If I'm releasing a product that 99.9% of my customers will be using at typical automotive speeds or less, it's not worth the money to test it at 8 km/s even if I expect it to work. Needless to say, it's foolish to put something in a spec that you didn't test for. \$\endgroup\$ Commented Jun 6, 2016 at 12:50
  • 6
    \$\begingroup\$ @DmitryGrigoryev GPS testing is usually done with a signal simulator; velocity is just a number you enter. It costs nothing extra to check, and good engineers will always want to know the performance limit of a design. But please note: my question is asking what part of the GPS function will likely be the first to fail at high velocity, not "what would you do if you were a product engineer". \$\endgroup\$
    – uhoh
    Commented Jun 6, 2016 at 12:52
  • 2
    \$\begingroup\$ @uhoh Perhaps they are tested at 8 km/s using a simulator. Still, I wouldn't put that number in the spec without testing the real thing. I've seen plenty of stuff work on a simulator or test bench, then fail spectacularly in practice. \$\endgroup\$ Commented Jun 6, 2016 at 12:57
  • 2
    \$\begingroup\$ @PlasmaHH "velocity may be just a number entered, doppler shift is not" It is not just a number precisely because it varies with the relative velocity between the receiver and each satellite, and a simulator generates each satellite's Doppler shift from the relative motion between that satellite and the simulated receiver. So, in fact, the simulator does much more than you expect, and making that simulation accurate is critical. \$\endgroup\$ Commented Nov 23, 2020 at 0:13

5 Answers

9
\$\begingroup\$

I would not advise using an integrated GPS solution (containing an MCU and closed-source firmware) for a satellite application. There are several reasons why this might fail to work:

  1. The front-end frequency plan might be optimized for a limited Doppler range. Typically, the RF front end will mix the signal down to an IF lower than 10 MHz (a higher IF would require a higher sampling rate and consume more energy). This IF is not chosen arbitrarily! The quotient IF/sample-rate should be non-harmonic over the whole Doppler range to avoid spurious tones from A/D truncation errors in the sampled signal. You may observe beating effects that make the signal unusable at some Doppler offsets.
  2. The digital-domain correlator needs to reproduce a replica of the carrier and the C/A code at the correct rate, including Doppler effects. It uses DCOs (digitally controlled oscillators) to pace carrier and code generation, and these are tuned via configuration registers from the MCU. The bit width of these registers may be constrained to the Doppler range expected for a ground-based receiver, making it impossible to tune the channel to the signal if you are travelling too fast (see the sketch after this list).
  3. The firmware will have to do a cold acquisition if no position/time estimate is available. It searches Doppler frequency bins and code phases to find a signal, and the search range will be restricted to what is expected for a ground-based user.
  4. The firmware will typically use Kalman filtering for position solutions. This involves a model of receiver position/velocity/acceleration. While acceleration will not be a concern for a satellite, the model will fail on velocity if the firmware is not adapted for in-orbit use.
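
To put rough numbers on points 2 and 3, here is a small Python sketch (the satellite line-of-sight contribution, the 500 Hz bin width, and the ground-user speed are typical textbook assumptions, not any particular chip's parameters); it compares the Doppler range a ground receiver plans for with what an orbital receiver sees:

```python
C = 299_792_458.0     # speed of light, m/s
F_L1 = 1_575.42e6     # GPS L1 carrier, Hz

def carrier_doppler(v_los):
    """Carrier Doppler (Hz) for a given line-of-sight velocity (m/s)."""
    return v_los / C * F_L1

SAT_LOS = 800.0       # rough worst-case line-of-sight speed of a GPS satellite, m/s
BIN_HZ = 500.0        # typical acquisition bin width for ~1 ms coherent integration

for label, v_user in (("ground user,  500 m/s", 500.0), ("LEO user,    8000 m/s", 8000.0)):
    f_max = carrier_doppler(SAT_LOS + v_user)
    bins = 2 * f_max / BIN_HZ
    print(f"{label}: search +/-{f_max / 1e3:.1f} kHz  ->  ~{bins:.0f} Doppler bins")
```

The carrier and code DCO registers and the cold-acquisition search logic both have to cover that much larger range; if they were sized for the ground case, the orbital signal simply falls outside what the channel can be tuned or searched to.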

All of these issues can be addressed if you use a freely programmable front end and correlator with custom firmware. You may, for example, look at Piksi.

\$\endgroup\$
4
  • \$\begingroup\$ For point 1 (front-end bandwidth): the original bandwidth of the signal is much wider than the Doppler shift; a worst-case relative velocity of about 10 km/s against the 3×10^5 km/s speed of light gives a shift of around 50 kHz. But 2, 3 and 4 all sound like potential deal-breakers for consumer-optimized chips and firmware. \$\endgroup\$
    – uhoh
    Commented Sep 22, 2016 at 2:05
  • 3
    \$\begingroup\$ @uhoh: I agree with your bandwidth argument, but point 1 is not about bandwidth; I should have explained better. If your sample rate is 16,368,000 samples/s, the signal at IF is centered at 4,092,000 Hz, and you have an A/D with 4 bits of resolution, then you have a problem with beating: every sample's truncation error will go the same direction (see the sketch after these comments). There is a whole set of such bad spots (zero IF is another really bad one, but any harmonic is bad). You will want to keep some distance (how much depends on the integration period) from these spots for any expected Doppler. \$\endgroup\$
    – Andreas
    Commented Sep 22, 2016 at 8:12
    \$\begingroup\$ Great, thank you very much for this answer! It gives me a lot of insight into what is going on. I still don't understand the beating/truncation error, but I can go do some reading and maybe ask a question afterward. I have a different ADC question that is related to high-frequency three-bit ADCs (the Piksi has a 3-bit ADC). \$\endgroup\$
    – uhoh
    Commented Sep 22, 2016 at 9:34
  • 1
    \$\begingroup\$ It has to do with the S/N of the individual samples, which is really bad. Investing in more precision at the ADC will not improve the overall system performance that much; it is a complicated tradeoff. I will try to give a useful answer to your ALMA question. \$\endgroup\$
    – Andreas
    Commented Sep 22, 2016 at 10:26
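
A rough numerical illustration of that bad-spot effect (just a sketch; the amplitude, phase, and integration length are arbitrary choices, and only the 16.368 Msps, 4.092 MHz, and 4-bit numbers come from the comment above): quantize a sinusoid to 4 bits exactly at fs/4 and slightly off it, and compare how much of the truncation error is coherent with the signal itself.

```python
import numpy as np

FS = 16_368_000.0     # sample rate from the comment, samples/s
BITS = 4              # ADC resolution from the comment
N = 200_000           # samples to integrate over (~12 ms)
AMP = 0.7             # signal amplitude relative to ADC full scale (arbitrary)
PHASE = 0.3           # arbitrary carrier phase, radians

def quantize(x, bits):
    """Mid-tread quantizer with a full scale of +/-1."""
    levels = 2 ** (bits - 1)
    return np.clip(np.round(x * levels), -levels, levels - 1) / levels

def error_coherence(f_signal):
    """Normalized correlation between the quantization error and the signal."""
    t = np.arange(N) / FS
    s = AMP * np.cos(2 * np.pi * f_signal * t + PHASE)
    err = quantize(s, BITS) - s
    return np.dot(err, s) / np.sqrt(np.dot(err, err) * np.dot(s, s))

print("at IF = fs/4          :", error_coherence(FS / 4))        # large in magnitude: error stays correlated
print("at IF = fs/4 + 10 kHz :", error_coherence(FS / 4 + 10e3)) # much closer to zero: error averages out
```

Exactly at fs/4 the carrier is sampled at the same four phase points over and over, so the truncation error repeats every four samples and stays correlated with the signal instead of averaging away over the integration period; a Doppler offset that parks the signal on such a spot degrades the correlator output.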
5
\$\begingroup\$

Some folks implement COCOM as an OR, others as an AND. Either way, for qualified customers under EAR or ITAR, vendors will happily sell you a firmware option for $$$ that disables that functionality; the hardware is identical.
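
The AND/OR distinction is easy to state concretely. A minimal sketch of the two readings (the threshold values are the usual 1,000-knot / 18 km figures; the function names are just for illustration):

```python
COCOM_SPEED_MS = 514.4   # about 1,000 knots, in m/s
COCOM_ALT_M = 18_000.0   # about 60,000 ft, in m

def blocked_and(speed_ms, alt_m):
    """Block the fix only when BOTH limits are exceeded."""
    return speed_ms > COCOM_SPEED_MS and alt_m > COCOM_ALT_M

def blocked_or(speed_ms, alt_m):
    """Block the fix when EITHER limit is exceeded."""
    return speed_ms > COCOM_SPEED_MS or alt_m > COCOM_ALT_M

# A 30 km balloon payload drifting at 50 m/s:
print(blocked_and(50.0, 30_000.0))   # False: still gets a fix
print(blocked_or(50.0, 30_000.0))    # True:  locked out
```

A LEO spacecraft exceeds both limits, so it is blocked under either reading; the AND/OR difference mainly matters for high-altitude balloons and sounding rockets.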

Outside of that hard limitation, it becomes an RF communications problem, along with designing your hardware to tolerate radiation effects. Your Eb/N0 will probably be somewhat better, since you are (literally) closer to the SVs and avoid the atmospheric path loss, but your receive circuitry is also going to need to tolerate a considerable amount of Doppler.

It's not just position that CubeSats are interested in, by the way: GPS time is a valuable data commodity that helps a satellite figure out where it is, given a TLE. Even if the receiver refuses to give you a position due to COCOM, getting the time out of it can still be worth it.

\$\endgroup\$
5
    \$\begingroup\$ What do "Eb/N0" and "SVs" mean? Do you know for sure whether the actual time is reported when the spatial coordinates are blocked, or do you just mean the 1PPS signal? Please note, I specified: "Ignoring the implementation of the CoCom limits, and all other issues of operation in space besides velocity..." \$\endgroup\$
    – uhoh
    Commented Jun 7, 2016 at 1:14
    \$\begingroup\$ Two years ago satellites were reclassified as "non-munitions", so ITAR no longer applies; but now EAR applies, as you mention. There are still the MTCR and the Wassenaar Arrangement as well, and possibly more! \$\endgroup\$
    – uhoh
    Commented Jun 7, 2016 at 1:15
  • 3
    \$\begingroup\$ @uhoh I assume that the term are Eb/N0 => signal to noise ratio and SVs => space vehicles (the actual GPS satellites) \$\endgroup\$ Commented Jun 7, 2016 at 2:23
  • \$\begingroup\$ @user2943160 Thanks, makes sense. I'm always trying to learn new things - if Eb is a specific term, I'd like to learn it. \$\endgroup\$
    – uhoh
    Commented Jun 7, 2016 at 3:59
  • \$\begingroup\$ I've just been doing a lot of comm stuff lately, Eb/No is just the "normalized" SNR, or SNR per bit. Really it probably would have been more accurate to just use SNR or RSSI in that answer. Anecdotally, I have heard that some chipsets (SiRF I think) will still report time but freeze you out of position, but I have not personally confirmed that. \$\endgroup\$ Commented Jun 7, 2016 at 17:14
4
\$\begingroup\$

It really depends on the implementation. As an example, one receiver I've worked on has a fixed-point carrier NCO frequency register in each correlator channel, with a width of 17 bits. The maximum value that can be stored in this register corresponds to around 6 km/s, and also has to include a contribution from receiver clock frequency error. So it wouldn't be able to track any satellites whose range rate exceeds that limit, which would be quite a lot of them if the receiver is moving at orbital speeds.
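
As a back-of-the-envelope check on that kind of limit (the 0.5 Hz-per-count resolution here is an assumed value for illustration, not the actual scaling of that receiver's register):

```python
C = 299_792_458.0      # speed of light, m/s
F_L1 = 1_575.42e6      # GPS L1 carrier, Hz

BITS = 17              # width of the carrier NCO frequency register
LSB_HZ = 0.5           # assumed frequency resolution per register count

max_offset_hz = (2 ** (BITS - 1)) * LSB_HZ     # signed register -> about +/-32.8 kHz
max_range_rate = max_offset_hz / F_L1 * C      # about +/-6.2 km/s at L1

print(f"max carrier offset         ~ +/-{max_offset_hz / 1e3:.1f} kHz")
print(f"max line-of-sight velocity ~ +/-{max_range_rate:.0f} m/s")
```

A receiver moving at roughly 8 km/s can project almost all of that velocity onto the line of sight to some satellites, so a register sized this way overflows for a large share of the visible constellation.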

\$\endgroup\$
2
\$\begingroup\$

If this paper on an example GPS architecture is representative, then such chips consist of an RF front end and hardware correlators in the digital domain, with all of the actual decoding of the signal performed in software.

In that case the only likely problem is Doppler. The software may discard "exceptional" values, but you'll need to replace or modify the firmware anyway if you want to bypass the CoCom limits.

A more interesting question is whether you can borrow a GPS simulator that can be programmed to simulate the high-speed case. I would have thought it would be possible; after all, how else would a manufacturer test that their device is applying the CoCom limits?

\$\endgroup\$
7
  • 7
    \$\begingroup\$ Note that even at 0 kph you have to deal with Doppler, since the satellites are already moving at 8000 m/s. \$\endgroup\$ Commented Jun 6, 2016 at 13:07
    \$\begingroup\$ I like your logic! It's really an (up to) +/- 60 kHz shift applied differently to each satellite's signal, and there's a good chance most simulators can do it. Just for the record, I am not actually doing this; I'm just asking! \$\endgroup\$
    – uhoh
    Commented Jun 6, 2016 at 13:09
  • 3
    \$\begingroup\$ No, @DmitryGrigoryev, you are wrong about the 8000: GPS satellites move much more slowly because they are in much higher orbits. But you are right that there is plenty of Doppler besides the motion of the GPS unit. It's a good point! \$\endgroup\$
    – uhoh
    Commented Jun 6, 2016 at 13:10
  • 6
    \$\begingroup\$ That's much less relevant on the ground, though: speed tangential to the observer doesn't cause Doppler. It does, however, cause a small relativistic effect: physics.stackexchange.com/questions/1061/… \$\endgroup\$
    – pjc50
    Commented Jun 6, 2016 at 13:25
  • 1
    \$\begingroup\$ A Doppler shift of 27 ppm may not affect the correlator for the initial C/A (Coarse Acquisition) code with a length of only 1023 chips, but for longer sequences it might cause trouble, depending on how correlation of long sequences is actually implemented. \$\endgroup\$
    – uhoh
    Commented Jun 6, 2016 at 14:02
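
To put a rough number on that 27 ppm figure (a sketch; the integration lengths are just example values, not any particular receiver's settings):

```python
PPM = 27e-6            # code compression/expansion from ~8 km/s line-of-sight velocity
CHIP_RATE = 1.023e6    # C/A chipping rate, chips/s

for t_int_ms in (1, 20, 100):
    slip_chips = PPM * CHIP_RATE * t_int_ms * 1e-3
    print(f"{t_int_ms:4d} ms integration -> {slip_chips:.3f} chips of replica slip")
```

Over a single 1 ms C/A period the replica slips by only a few hundredths of a chip, but over tens of milliseconds of coherent integration the slip approaches half a chip or more, so longer correlations need the code rate, not just the carrier, to be Doppler-corrected.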
2
\$\begingroup\$

Cubesats can fly off-the-shelf commercial GPS units that cost less than $1000. The manufacturer removes the limits, so one would hope they are able to test with the limits removed; they have GPS emulators, or access to them.

The CoCom limits have to be removed by the manufacturer, and the manufacturer will only do that if you can get an exception through your government. I'm not sure of the process, but I know it's possible, at least in the US. Outside of the US this may be close to impossible.

I don't know the accuracy of the GPS unit, but there are still ionospheric effects that have to be accounted for if you're flying in LEO. You'll also need a decent ADCS system to help estimate your spacecraft's position.

\$\endgroup\$
1
  • \$\begingroup\$ Wouldn't the ionospheric effects still only induce errors on the scale of meters, or worst case tens of meters? Unless one's cubesat is doing things that require millisecond timing, or GPS-based formation flying, this wouldn't end up mattering for most cubesats. It's good to remember though, thanks! \$\endgroup\$
    – uhoh
    Commented Sep 22, 2016 at 7:34
