I have a collection of bond distances from a series of $Fm\bar{3}m$ crystal structures that I would like to compare against each metal's ionic radius using a linear regression.
The data quality for some of the structures is worse than others, so the associated ESDs/error bars for those bond distances are larger.
Weighting the linear regression by the associated ESDs from the crystal structures causes the abnormal data points to fall off the fit, whereas an unweighted regression captures those data points well.
My understanding is that an unweighted analysis assumes the errors estimated from the residuals are not only normally distributed but also homoscedastic (the same variance at every point). The heteroscedasticity "appears" to be how the data are (in the sense that there is one particular data point with much larger ESDs than the rest), but my intuition is that nothing intrinsic or systematic affects these data: the large ESDs are simply a result of poor crystal quality, and if a better crystal could be supplied then that point would have "normal" ESDs.
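For concreteness, here is a minimal sketch of the comparison I am describing, on synthetic data rather than my actual measurements. The radii, distances, and ESDs below are made up for illustration; one point is given a much larger ESD than the rest, and NumPy's `polyfit` expects weights of `1/sigma` (not `1/sigma**2`) for Gaussian uncertainties.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: ionic radii (x, in Angstroms) and bond distances (y),
# with one poorly determined point (index 3) carrying a much larger ESD.
x = np.array([0.72, 0.75, 0.78, 0.83, 0.87, 0.92])
sigma = np.array([0.002, 0.002, 0.002, 0.02, 0.002, 0.002])
y = 2.0 + 1.5 * x + rng.normal(0.0, sigma)  # true slope 1.5, intercept 2.0

# Unweighted (OLS) fit: every point counts equally.
ols = np.polyfit(x, y, 1)

# Weighted (WLS) fit: for Gaussian ESDs, numpy.polyfit takes w = 1/sigma.
wls = np.polyfit(x, y, 1, w=1.0 / sigma)

print("OLS slope, intercept:", ols)
print("WLS slope, intercept:", wls)
```

The weighted fit downweights the high-ESD point by a factor of ~100 in variance, which is exactly why that point can sit visibly off the weighted line while the unweighted line passes closer to it.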
Is there a statistical test to determine when one should or should not perform a weighted analysis?
Should one reflexively always perform a weighted least-squares analysis if the errors at each point are known?