
Here's an answer:

TL;DR Answer:

It depends on your application. I'll give you two answers below, which I split as follows:

  • The general answer: it doesn't matter for most users.
  • The specific answer: it does matter for a few, usually critical, applications. In that answer, I describe a potential method to gather your own confidence interval data.

First, we'll start with a prelude. Then, with that as context, you can read the general answer (for general technicians and non-critical applications) and then the specific answer (for more serious applications).

Prelude

First, let me interpret the question as: "How confident is UNI-T, statistically speaking, that the variation in the UT181's measurements across multiple samples is due to noise?"

If I've interpreted the question correctly, it's a very interesting one, and it can be answered in various ways depending on the application and purpose.

General Answer

Generally speaking, if the product quality is high (from a reputable brand), most electronics technicians don't care. Non-engineer electricians generally don't get into such deep statistical considerations either; for them it's almost irrelevantly philosophical, since their goal is simply to get the job done in as few steps as possible without worrying about such abstractions.

So, generally speaking, I don't think it matters here as much as it does in other scientific fields. Due to the nature of electricity, measurements are taken so frequently that, if the measured value is constant over, say, a few seconds, the sample mean approaches the population mean (μ), so the confidence interval can be quite tight for a series of very low-variance readings.
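To make that concrete, here is a minimal sketch (plain Python, simulated data, not UT181-specific) of how the 95% confidence interval on the mean tightens as the sample count grows for low-variance readings. The noise level is an assumption chosen purely for illustration:

```python
import math
import random
import statistics

# Simulate a stream of very low-variance readings around a hypothetical
# "population" value (e.g., 50 Hz mains) and watch the 95% confidence
# interval on the mean tighten as the sample count grows.
TRUE_VALUE = 50.00   # hypothetical true value (assumption)
NOISE_SD = 0.005     # hypothetical last-digit reading noise (assumption)

random.seed(0)
readings = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(1000)]

for n in (10, 100, 1000):
    sample = readings[:n]
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean
    half_width = 1.96 * sem                        # z ≈ 1.96 for 95%, large n
    print(f"n={n:5d}  mean={mean:.5f}  95% CI = +/- {half_width:.5f}")
```

The interval half-width shrinks roughly as 1/√n, which is why a few seconds of rapid, stable sampling is usually enough in practice.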

Specific Answer

So why does this question matter, and is it a good question? My answer follows.

Yes, I do think the question is valid to ask. Notwithstanding the general answer above, when high precision and high accuracy DO matter (depending on your application, say NASA work or next-gen military tech), this topic becomes incredibly relevant. Furthermore, what you are asking matters when you are trying to assess the quality of a product (a point company employees may disagree with and respond to negatively here).

That said, in this specific situation, you would focus on the sample rate of your device, and possibly use a benchtop oscilloscope/signal generator to feed various known, very precise signals into the device. Then measure how long it takes, on average (in seconds), to see a deviation from the set signal on the measurement device under test. From the sampling rate you can compute the number of samples taken, and from that develop a confidence figure for the measurement. If, while sampling, you see no variation (your meter reads 50.00 Hz and matches your signal generator), start increasing the fineness of the changes to your test signal (voltage, current, or, in your case, the 50 Hz test frequency). One way to turn such an observation into a confidence bound is sketched below.
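As a sketch of that last step: one standard way to turn "no deviation observed over some interval" into a confidence statement is the binomial "rule of three" (zero events in n trials gives a ~95% upper bound of roughly 3/n on the per-sample event probability). The sample rate and observation time below are hypothetical, not UT181 specifications:

```python
def deviation_upper_bound(sample_rate_hz: float, seconds_observed: float) -> float:
    """~95% upper bound on the per-sample deviation probability after
    observing zero deviations over the given interval (rule of three)."""
    n = sample_rate_hz * seconds_observed  # number of samples taken
    return 3.0 / n

# Example: a meter sampling 10 readings/s, watched for 60 s with no deviation
# from the generator's set signal (all numbers illustrative):
print(f"p_deviation <= {deviation_upper_bound(10.0, 60.0):.4%} at ~95% confidence")
```

The longer you watch without seeing a deviation, the tighter this bound gets, which matches the intuition that "time until first deviation" is the quantity worth recording.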

Caveat

  • You need to know statistics and basic electronics to pull this off, and you also need the equipment. You probably do, since you're asking about confidence intervals.
  • One issue is that the signal your generator produces may itself deviate, at some level of precision, from its stated value (e.g., it says 50.000000000 Hz but is really 49.99999995 Hz). So your measurement is only as good as your signal generator/oscilloscope.

Hope this helps.
