According to the circuit model, the output of a quantum computation on $n$ qubits is an $n$-bit string. But what if we instead received full two-qubit tomography for all $n(n-1)/2$ pairs of qubits?
These tomographic estimates would need to be built up over many shots. If we simply used one shot, we could just read out the resulting bit string as usual. If we had infinitely many shots, and hence arbitrarily accurate tomography, perhaps we'd get some undesirable superpowers. Let's rule out both extremes by requiring that the number of shots be both upper and lower bounded by $\textrm{poly}(n)$.
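To see why tomography needs many shots, here is a minimal sketch (plain numpy, all details illustrative) of two-qubit state tomography by linear inversion: each of the 16 Pauli expectation values is estimated from a finite number of simulated single-shot measurements, and the reconstruction error shrinks as the shot count grows.

```python
import numpy as np

# Illustrative sketch: two-qubit tomography by linear inversion, where each
# Pauli expectation value must be estimated from finitely many shots.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I, X, Y, Z]

rng = np.random.default_rng(0)

def estimate_expectation(rho, obs, shots):
    """Simulate `shots` measurements of a +/-1-eigenvalue observable."""
    exact = np.real(np.trace(rho @ obs))
    p_plus = float(np.clip((1 + exact) / 2, 0.0, 1.0))  # P(outcome = +1)
    outcomes = rng.choice([1, -1], size=shots, p=[p_plus, 1 - p_plus])
    return outcomes.mean()

def tomography(rho, shots):
    """Reconstruct rho from shot-based estimates of all 16 Pauli expectations."""
    est = np.zeros((4, 4), dtype=complex)
    for P in PAULIS:
        for Q in PAULIS:
            PQ = np.kron(P, Q)
            if np.allclose(PQ, np.eye(4)):
                e = 1.0  # <I x I> is always 1; no shots needed
            else:
                e = estimate_expectation(rho, PQ, shots)
            est += e * PQ / 4  # rho = (1/4) sum_PQ <PQ> PQ for two qubits
    return est

# Bell state (|00> + |11>)/sqrt(2) as the true state.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_true = np.outer(bell, bell.conj())

for shots in (100, 10000):
    err = np.linalg.norm(tomography(rho_true, shots) - rho_true)
    print(shots, round(err, 3))
```

With $\textrm{poly}(n)$ shots split across all pairs, each expectation value is only known to roughly $1/\sqrt{\textrm{shots}}$ accuracy, which is what the upper bound on shots enforces.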
For algorithms with deterministic outputs, every shot gives the same output, so we can still just read out the standard result.
For algorithms with random outputs (like Shor's, which returns random factors), things are trickier: tomography averages over shots, so the pairwise statistics describe a mixture of the possible outputs rather than any single run.
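This blurring can be sketched directly (the output distribution below is purely illustrative): averaging the pair projectors over a distribution of bitstrings yields a mixed diagonal state, from which individual outputs can no longer be read off.

```python
import numpy as np

# Sketch (illustrative distribution): with random outputs, many-shot tomography
# sees the *average* state, so pair marginals become mixed.
def pair_marginal(output_dist, i, j):
    """Average two-qubit state of qubits i, j over a distribution of bitstrings."""
    rho = np.zeros((4, 4))
    for bits, p in output_dist:
        idx = 2 * bits[i] + bits[j]
        rho[idx, idx] += p  # each run contributes the projector |b_i b_j><b_i b_j|
    return rho

# Two equally likely outputs, e.g. two different factors from a Shor-like run.
dist = [([0, 1, 1], 0.5), ([1, 1, 0], 0.5)]
print(np.diag(pair_marginal(dist, 0, 2)))  # mixed: 0.5 on |01>, 0.5 on |10>
```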
So what kinds of algorithms could be run with this restriction? And what kind of speed-ups could be obtained?