I have a log-log model and would like to make predictions of $Y$ on the raw (untransformed) scale:
$\ln(Y) = B_0 + B_1\ln(X)$
All the answers and articles I have found about back-transforming for prediction deal only with semi-log models:
either $Y \sim B_0 + B_1 \ln(X)$ or $\ln(Y) \sim B_0 + B_1X$.
I have seen Duan's smearing estimator, but does it apply to log-log models as well? I'm looking to move from the percentage (elasticity) interpretation of the coefficients to actual predictions of $Y$.
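For concreteness, here is a minimal sketch of what I have in mind, on simulated data (the data-generating values are made up for illustration). Since the smearing factor is computed from the residuals on the log scale, my understanding is that it should not matter whether the regressor is $X$ or $\ln(X)$, but I would like confirmation:

```python
import numpy as np

# Simulated data (hypothetical): true model ln(Y) = 1 + 0.5 ln(X) + error
rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200)
y = np.exp(1.0 + 0.5 * np.log(x) + rng.normal(0, 0.3, 200))

# OLS of ln(Y) on ln(X)
X = np.column_stack([np.ones_like(x), np.log(x)])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)

# Duan's smearing factor: mean of exp(residuals) on the log scale
resid = np.log(y) - X @ beta
smear = np.exp(resid).mean()

# Raw-scale prediction at a new X: naive exp() vs. smeared
x_new = 5.0
log_pred = beta[0] + beta[1] * np.log(x_new)
naive = np.exp(log_pred)       # biased low under log-normal errors
corrected = naive * smear      # smearing-corrected prediction
print(naive, corrected, smear)
```

Is multiplying $\exp(\hat{B}_0 + \hat{B}_1\ln(X))$ by the smearing factor like this the correct way to get raw $Y$ predictions from a log-log fit?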