In Aldrich (2005), specifically in sections 10 and 11, the author describes the sufficient statistic for the slope parameter $\beta$ in the simple regression of random $Y$ on fixed $X$, drawn from a bivariate normal population with known error variance $\sigma^2$, where $X$ is normal with known variance $\alpha$ and the least-squares slope $b$ is normal with variance $\sigma^2/A$, $A$ being an ancillary statistic computed from the sample: the sum of squared deviations in $X$. The joint statistic $(b, A)$ is sufficient for $\beta$, and he shows one would lose information in estimating $\beta$ "by ignoring the value of $A$ and using the value of $\alpha$ and the sample size $N$."
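To make the setup concrete, here is a minimal simulation sketch (my own notation and parameter values, not from Aldrich 2005): the spread of $b$ across samples tracks $\sigma^2/A$, which varies from sample to sample, whereas $\alpha$ and $N$ alone give only a single marginal figure.

```python
import numpy as np

# Illustrative sketch: Y = beta*X + eps, eps ~ N(0, sigma2), X ~ N(0, alpha).
# A = sum((x - xbar)^2) is ancillary for beta, and Var(b | A) = sigma2 / A.
rng = np.random.default_rng(0)
beta, sigma2, alpha, N, reps = 2.0, 1.0, 1.0, 10, 20000

bs, As = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, np.sqrt(alpha), N)
    y = beta * x + rng.normal(0.0, np.sqrt(sigma2), N)
    A = np.sum((x - x.mean()) ** 2)
    b = np.sum((x - x.mean()) * (y - y.mean())) / A
    As[r], bs[r] = A, b

# Split samples at the median of A: the variance of b differs sharply
# by group, tracking sigma2 / A, even though alpha and N never change.
lo, hi = As < np.median(As), As >= np.median(As)
print(bs[lo].var(), sigma2 / As[lo].mean())  # low-A samples: larger spread
print(bs[hi].var(), sigma2 / As[hi].mean())  # high-A samples: smaller spread
print(sigma2 / ((N - 1) * alpha))            # single figure from alpha and N
```

The point of the split is that reporting the precision of $b$ from $\alpha$ and $N$ alone understates the variance in low-$A$ samples and overstates it in high-$A$ samples, which is (as I read it) the information loss Aldrich describes.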
Although I understand regression and correlation are not the same thing, I naturally wonder about the implications of Aldrich (2005) for defining the most informative estimators of the bivariate normal parameters and its correlation, when $X$ is fixed with known population variance. To wit, the likelihood function for the bivariate normal depends on the sample only through two statistics, the sample means and the sample covariance matrix (see here); the latter obviously includes the sample variance of $X$. Likewise, the MLE of the correlation of a bivariate normal is the Pearson sample correlation (see here), which is obviously a function of the sample standard deviation of $X$.
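For concreteness, here is a quick sketch of the two candidate estimators of $\rho$ I have in mind (the names and parameter values are mine, purely for illustration): the usual Pearson $r$, which uses the sample SD of $X$, versus a plug-in version that substitutes the known population SD $\sqrt{\alpha}$.

```python
import numpy as np

# Compare Pearson r (sample SD of X) with a plug-in estimator that
# uses the known population SD sqrt(alpha) of X in the denominator.
rng = np.random.default_rng(1)
rho, alpha, N, reps = 0.5, 1.0, 20, 20000

r_pearson, r_plugin = np.empty(reps), np.empty(reps)
for i in range(reps):
    xy = rng.multivariate_normal([0.0, 0.0], [[alpha, rho], [rho, 1.0]], N)
    x, y = xy[:, 0], xy[:, 1]
    sxy = np.mean((x - x.mean()) * (y - y.mean()))  # sample covariance
    r_pearson[i] = sxy / (x.std() * y.std())
    r_plugin[i] = sxy / (np.sqrt(alpha) * y.std())

print(r_pearson.mean(), r_plugin.mean())  # both close to rho
```

Both estimators land near $\rho$ on average; the question is which one makes better use of the information in the sample, given that $\alpha$ is known.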
My questions are: 1) Does Aldrich (2005) imply one would obtain a better estimate of the bivariate normal parameters by using the sample covariance matrix while ignoring the known, true variance of $X$? 2) Does it imply the Pearson correlation is a better estimator of the population correlation $\rho$ when computed from the sample variance of $X$, ignoring the known population variance of $X$? In short, is it useless to know an actual parameter value in these cases?