
I have a model that is log-log and I would like to make raw predictions of $Y$ with it:

$\ln(Y) = B_0 + B_1\ln(X)$

All the answers and articles I have found about back-transforming predictions deal only with semi-log models:

either $Y \sim B_0 + B_1 \ln(X)$ or $\ln(Y) \sim B_0 + B_1X $

I have seen Duan's smearing estimator, but does it apply to log-log models as well? I'm looking to move from percentage (elasticity) interpretations to actual predictions of $Y$.

  • If your model says $\ln(\hat Y) = B_0 + B_1\ln(X)$, then this is equivalent to $\hat Y = e^{B_0} X^{B_1}$. If your model is unbiased for $\ln(Y)$, then it will be biased for $Y$, but that is not necessarily a bad thing: it is a consequence of your model.
    – Henry
    Commented May 30 at 8:20
  • @dimitriy When you say "wrong" you seem to be worried about "biased", which I mentioned in my earlier comment. Simple linear regression passes through the arithmetic mean of the observations; if instead a log-log model is a sensible approach, it seems reasonable to me that you instead get something going through the geometric mean of the original data. If that is, in your view, "wrong" or a bad thing, then perhaps you should not be using a log-log model.
    – Henry
    Commented May 30 at 17:26
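Henry's point about the geometric mean can be checked with a quick simulation (a Python/numpy sketch, not part of the thread; the coefficients and error scale are made up). The naive back-transform $e^{\hat B_0} X^{\hat B_1}$ targets the conditional geometric mean of $Y$, which undershoots the arithmetic mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(1.0, 10.0, n)

# True model: ln(Y) = 1.0 + 0.5*ln(X) + e, with e ~ N(0, 0.5^2)
sigma = 0.5
ln_y = 1.0 + 0.5 * np.log(x) + rng.normal(0.0, sigma, n)
y = np.exp(ln_y)

# OLS on the log-log scale
X = np.column_stack([np.ones(n), np.log(x)])
b0, b1 = np.linalg.lstsq(X, ln_y, rcond=None)[0]

# Naive back-transform exp(b0) * x**b1 estimates the conditional
# geometric mean (the median under lognormal errors), so it
# systematically undershoots the arithmetic mean E[Y | X].
naive = np.exp(b0) * x ** b1
ratio_naive = naive.mean() / y.mean()                  # well below 1
ratio_corrected = ratio_naive * np.exp(sigma**2 / 2)   # ~1 under normal errors
print(ratio_naive, ratio_corrected)
```

With normal errors on the log scale the shortfall is the lognormal factor $e^{\sigma^2/2}$; with $\sigma = 0.5$ the naive prediction is about 12% too low on average.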

1 Answer


Duan's smearing approach applies to any model with a logged outcome, as does the normal approximation described here.

It does not matter whether the covariates are logged, except insofar as you will need to calculate $\ln X$ before multiplying by $\hat B_1$.

  • @OberonQuinn If this answers your question, you can select it as the answer!
    – dimitriy
    Commented May 30 at 0:24
