I had my first very exciting and successful go at using an oscilloscope today. The question that arose at the end of this first session concerns the timing accuracy I was expecting. My microcontroller was programmed to output signals such as HIGH for 80us, LOW for 50us, HIGH for 25ms, and so on. I used a 16MHz crystal with my ATMega8.
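For context, the pulses are generated with simple busy-wait delays, roughly like this. This is only a simplified sketch, not my actual code; the pin choice and the avr-libc `_delay_us()`/`_delay_ms()` calls are just how I'd illustrate it here:

```c
#define F_CPU 16000000UL        // 16 MHz crystal
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= (1 << PB0);         // placeholder pin, configured as output

    while (1) {
        PORTB |=  (1 << PB0);   // drive HIGH
        _delay_us(80);          // hold ~80 us
        PORTB &= ~(1 << PB0);   // drive LOW
        _delay_us(50);          // hold ~50 us
        PORTB |=  (1 << PB0);   // drive HIGH again
        _delay_ms(25);          // longer pulse, ~25 ms
    }
}
```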
When I recorded the output with the scope, I noticed that the measurements were not exact. For example, where I expected an 80us HIGH pulse, the scope would show something like 77us. The longer signals (e.g. 25ms) seemed more accurate.
My question is simply: is this what I should expect? The device hooked up to my microcontroller requires specific signals for specific amounts of time, and I programmed the microcontroller to deliver those signals for exactly the durations it requires. Everything works great; I just can't understand why this discrepancy exists. For reference, the device is the DHT-11 temperature/humidity sensor. If the microcontroller is actually not delivering the precise signals, why is my sensor responding correctly?
Here are some of my guesses:
- The sensor can tolerate some amount of timing error, which is why it still works. However, I can't find that tolerance in the datasheet or put the error I'm seeing into perspective.
- The crystal is not perfectly accurate. We are dealing with microsecond resolution here, so this seems possible (see the rough calculation after this list).
- Other components like resistors and capacitors introduce error.
- The clock on the oscilloscope is not perfect either, albeit presumably much better than the crystal driving my microcontroller.
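To put the crystal guess in rough perspective, here is my back-of-the-envelope estimate, assuming a typical crystal tolerance of around ±50 ppm (a figure I'm guessing at, not one taken from a datasheet):

$$\Delta t \approx 80\,\mu\text{s} \times 50 \times 10^{-6} = 0.004\,\mu\text{s} = 4\,\text{ns}$$

If that arithmetic is right, crystal tolerance alone seems far too small to explain a ~3us discrepancy on an 80us pulse, which is part of why I'm confused.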
Perhaps there is something else going on which I am not aware of.
Note that the timings I used above are just examples, and may differ from the data sheet and what I actually used in my code.