Consider the following two statistical principles: 1) an exact test's $p$-value gives the exact probability, under a true null hypothesis, of obtaining a sample at least as extreme as the one observed; and 2) the Fisher information in a statistic is inversely related to the estimator's standard error, that is, to the extent to which the estimator's observed value varies around its true value when computed on a sample of size $n$.
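To fix notation (one-sided case, and only as I understand the principles): the exact $p$-value is the null probability of a test statistic $T$ at least as extreme as the observed $t_{\text{obs}}$, and the Fisher information bounds the variance of an unbiased estimator $\hat\theta$ from an i.i.d. sample of size $n$ through the Cramér–Rao inequality:
$$
p_{\text{exact}} = \Pr\!\left(T \ge t_{\text{obs}} \mid H_0\right),
\qquad
\mathcal{I}(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{2}\right],
\qquad
\operatorname{Var}\bigl(\hat\theta\bigr) \ge \frac{1}{n\,\mathcal{I}(\theta)}.
$$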
My interpretation of these two principles is that an exact $p$-value can contain no Fisher information, and in fact the same must be true of all permutation statistics and the samples on which they are computed. Is this correct? If so, is the exact $p$-value considered to contain some other kind of information defined under estimation theory?
Edit: My question implicitly assumes that an exact $p$-value is a unique, sample-specific quantity, and that "inexact" $p$-values estimate the "true value" of a probability parameter: not a parameter of the population from which the sample has been drawn, but a parameter of the experiment, conditional on the sample and the null hypothesis. An acceptable answer would be to show that this assumption is false, and why. But, even if $p$ is not estimating a probability parameter, it is inarguably conveying information in some sense. I'd still like an explanation of what interpretation, if any, estimation theory gives that information.
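For concreteness only, here is a minimal sketch of the distinction I have in mind (the data and the one-sided difference-in-means statistic are hypothetical): the fully enumerated permutation $p$-value is a fixed function of the observed sample, while a Monte Carlo permutation $p$-value estimates that fixed value and carries an ordinary binomial standard error.

```python
# Minimal sketch: exact (fully enumerated) permutation p-value vs. a Monte
# Carlo estimate of it. Data and test statistic are hypothetical.
from itertools import combinations
import random

x = [1.2, 2.3, 3.1, 4.0]                # hypothetical group A observations
y = [0.4, 1.1, 1.9]                     # hypothetical group B observations
pooled = x + y
n_a = len(x)
t_obs = sum(x) / n_a - sum(y) / len(y)  # observed one-sided difference in means

def diff_means(idx_a):
    """Difference in means when the indices in idx_a are relabelled as group A."""
    a = [pooled[i] for i in idx_a]
    b = [pooled[i] for i in range(len(pooled)) if i not in idx_a]
    return sum(a) / len(a) - sum(b) / len(b)

# Exact p-value: enumerate every relabelling of the pooled sample.
splits = list(combinations(range(len(pooled)), n_a))
p_exact = sum(diff_means(s) >= t_obs for s in splits) / len(splits)

# "Inexact" p-value: draw B relabellings at random; this estimates p_exact.
B = 10_000
hits = sum(diff_means(random.sample(range(len(pooled)), n_a)) >= t_obs
           for _ in range(B))
p_mc = hits / B
se_mc = (p_mc * (1 - p_mc) / B) ** 0.5  # binomial standard error of the estimate

print(f"exact p = {p_exact:.4f}; Monte Carlo p = {p_mc:.4f} (SE {se_mc:.4f})")
```

The standard error in the last line quantifies only Monte Carlo uncertainty about $p_{\text{exact}}$, conditional on the sample and the null, not sampling uncertainty about any population parameter.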
To be clear, this is a conceptual question about estimation theory, not a computational question. I understand that one could easily compute the expected information in both the $p$-value and the sample as their combinatorial entropy. But I'm asking whether there is a conception of information under estimation theory that applies here, either as an alternative to Fisher information or as a broader definition than the one I give above.