
While running some simulations on IBM hardware, I've noticed that the number of shots recorded in a job's metadata differs from the number of shots I specified. For example, I ran three circuits (with no gates, equivalent to Hartree–Fock states in the fermionic space) to calculate the expectation value of a Hamiltonian containing about 5000 Pauli words. I specify the number of shots with:

from qiskit_ibm_runtime import EstimatorV2 as Estimator
from qiskit_ibm_runtime import QiskitRuntimeService

shots = 4096
service = QiskitRuntimeService()
backend = service.backend("ibm_cusco")

estimator = Estimator(backend=backend)
estimator.options.default_shots = shots

Looking at the metadata of the completed job, I see that the number of shots stored in job.result()[0].metadata['shots'] differs from what I specified. For instance, on ibm_cusco, the three circuits were executed with 8192, 4096, and 4096 shots respectively for a job where I had specified 4096 shots.

I observe the following trends, though admittedly I've only performed a handful of calculations on each device.

  1. This seems to occur only on Eagle processors.
  2. The number of shots executed has only increased (doubled) or stayed the same; it has never decreased.
  3. Whenever the number of shots doubled, the number of randomisations doubled as well. The default number of randomisations is 32, but it was raised to 64 in every calculation where the shots doubled.
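To put trend 3 in numbers: the shot/randomisation pairs I listed above all work out to the same ratio, even when both values double. A quick check (using the values from my ibm_cusco job):

```python
# (shots executed, randomisations) for the three circuits in the job above;
# I specified 4096 shots and the default 32 randomisations in every case.
runs = [(8192, 64), (4096, 32), (4096, 32)]

ratios = [shots / randomizations for shots, randomizations in runs]
print(ratios)  # every run comes out to 128 shots per randomisation
```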

Since I would like to compare the results of the three circuits, I'd like to fix the number of shots executed. These results seem to indicate, however, that the quantity I should use to compare the circuits is the number of shots divided by the number of randomisations. Is that a fair assessment? I'd also appreciate some insight into what IBM Quantum is doing under the hood.
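For completeness, here is how I would try to pin both quantities explicitly rather than relying on `default_shots` alone. I'm assuming the twirling options `num_randomizations` and `shots_per_randomization` behave as their names suggest; I haven't yet verified that setting them prevents the doubling:

```python
from qiskit_ibm_runtime import EstimatorV2 as Estimator
from qiskit_ibm_runtime import QiskitRuntimeService

service = QiskitRuntimeService()
backend = service.backend("ibm_cusco")

estimator = Estimator(backend=backend)
# Instead of only setting default_shots, pin the twirling options so the
# total shot count (num_randomizations * shots_per_randomization) is fixed.
estimator.options.twirling.num_randomizations = 32        # assumed option name
estimator.options.twirling.shots_per_randomization = 128  # 32 * 128 = 4096 shots
```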
