
I don't quite understand what the **sinter.plot_error_rate** function is actually doing. From looking at the code, it seems to perform some kind of binomial fit. I'm not very familiar with data fitting techniques—could you explain this briefly?
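For context, my rough picture of a binomial fit is: take the maximum-likelihood error rate errors/shots, then report the range of error rates whose binomial likelihood stays within some factor of the maximum. The sketch below is only my own illustration of that idea, not sinter's actual code, and the `likelihood_factor` value is an arbitrary placeholder:

```python
import numpy as np

def binomial_fit(num_shots, num_errors, likelihood_factor=1000.0):
    """Illustration only: the maximum-likelihood error rate, plus the range of
    error rates whose binomial likelihood stays within `likelihood_factor`
    of the maximum."""
    best = num_errors / num_shots

    def log_likelihood(p):
        # Binomial log-likelihood (up to a constant), clamped away from log(0).
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return num_errors * np.log(p) + (num_shots - num_errors) * np.log1p(-p)

    cutoff = log_likelihood(best) - np.log(likelihood_factor)
    grid = np.linspace(0.0, 1.0, 100_001)
    kept = grid[log_likelihood(grid) >= cutoff]
    return kept.min(), best, kept.max()

print(binomial_fit(num_shots=100_000, num_errors=230))
```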

Additionally, when I calculate the logical failure rate by running a for loop and counting the number of times the decoder's prediction doesn't match the logical observable outcome, I get different results compared to when I use the sinter.plot_error_rate function. Why might this be happening?
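Concretely, my loop follows the count_logical_errors helper from the Stim getting-started notebook; the sketch below is a from-memory reconstruction of it (the details of my circuit and shot counts are omitted). I count the shots where the PyMatching prediction disagrees with the sampled logical observable, then divide by the number of shots:

```python
import numpy as np
import pymatching
import stim

def count_logical_errors(circuit: stim.Circuit, num_shots: int) -> int:
    # Sample detection events together with the true logical observable flips.
    sampler = circuit.compile_detector_sampler()
    detection_events, observable_flips = sampler.sample(
        num_shots, separate_observables=True)

    # Decode with PyMatching, using the circuit's detector error model.
    dem = circuit.detector_error_model(decompose_errors=True)
    matcher = pymatching.Matching.from_detector_error_model(dem)
    predictions = matcher.decode_batch(detection_events)

    # Count shots where the prediction disagrees with the actual observable.
    num_errors = 0
    for shot in range(num_shots):
        if not np.array_equal(observable_flips[shot], predictions[shot]):
            num_errors += 1
    return num_errors
```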

  • You need to give more information about what you're doing in order to debug something like this. What's your for loop? What are the stats showing, and what are they expected to be instead? Commented Jun 16 at 17:34
  • I have a Steane error-correction circuit and was using the count_logical_errors function from Stim's getting-started notebook to count logical errors. I ran a for loop to count the total logical errors and then normalized it per round (see the per-round conversion sketch after these comments). However, when I plotted the results, they were poor, even though the circuit is fault-tolerant. On the other hand, when I used the sinter.plot_error_rate function, the results were as expected. Commented Jun 17 at 2:17
  • What are the raw numbers? What does sinter.read_stats_from_csv_files report, and what does your for loop report? Commented Jun 17 at 4:51
  • sinter.read_stats_from_csv_files reports the shots, the errors, and which decoder was used, while the for loop reports the number of logical errors. Commented Jun 17 at 6:27
  • This answer about the behaviour of sinter.plot_error_rate might be valuable to you: quantumcomputing.stackexchange.com/a/37268/22557 – AG47 Commented Jun 17 at 8:24
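One guess at the source of the discrepancy raised above (this is an assumption on my part, not something confirmed in the comments): normalizing per round by simply dividing the per-shot error rate by the number of rounds is not the same as the usual conversion, which accounts for an even number of per-round flips cancelling out. sinter ships a helper along these lines (shot_error_rate_to_piece_error_rate, if I remember the name correctly); the version below is written out by hand as a sketch:

```python
def per_round_error_rate(shot_error_rate: float, rounds: int) -> float:
    """Convert a per-shot logical error rate into a per-round rate, assuming
    each round independently flips the observable with the same probability.
    Solves shot_error_rate = (1 - (1 - 2 * p_round) ** rounds) / 2 for p_round."""
    return 0.5 * (1 - (1 - 2 * shot_error_rate) ** (1 / rounds))

# A 10% per-shot failure rate over 25 rounds:
print(per_round_error_rate(0.10, 25))  # ~0.00444 per round
print(0.10 / 25)                       # naive division gives 0.004
```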
