
I need to run a one-sided test on one parameter of a logistic regression model:

$H_0$: $\beta = 0$

$H_1$: $\beta > 0$

I want to avoid Wald-equivalent methods, as these are known to have problems with logistic regression. Piegorsch (1990) suggests that one-sided likelihood-ratio tests are possible for GLMs. Have any been implemented in R?

If not, is the following a legitimate way to implement the test (for $\alpha = 0.05$)?

  1. Use confint to compute a two-sided 90% confidence interval; for a glm this uses the profile-likelihood method (implemented in stats since R 4.4.0, previously provided by MASS).
  2. Replace the upper end of the CI by $+\infty$ to get a one-sided confidence interval $[\text{lower}, \infty)$.
  3. Reject $H_0$ if the one-sided interval excludes 0, i.e. if its lower limit is positive (sketched in R after this list).
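
For concreteness, here is a rough sketch of these steps in R, assuming a hypothetical data frame dat with a binary response y and x as the predictor of interest (placeholder names, not from any particular dataset):

fit <- glm(y ~ x, family = binomial, data = dat)

# Step 1: two-sided 90% profile-likelihood confidence interval for beta
ci <- confint(fit, parm = "x", level = 0.90)

# Step 2: the one-sided 95% interval keeps the lower limit and extends to +Inf
lower <- ci[1]

# Step 3: reject H0 at alpha = 0.05 if the one-sided interval excludes 0
reject <- lower > 0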

Piegorsch W. W. (1990). One-sided significance tests for generalized linear models under dichotomous response. Biometrics, 46(2), 309–316.

  • Yes, the logistic regression estimator is an M-estimator. Under weak conditions M-estimators converge to a normal distribution. Because the estimator is normally distributed, you can compute 95% one-sided tests by using the critical values of a 90% two-sided CI. Commented May 21 at 1:15
  • You're asking for a likelihood-ratio test, but then you're sketching a confidence-interval-based pseudo-test. So what gives?
    – Durden
    Commented May 21 at 1:18

1 Answer


A one-sided LRT is straightforward in R using the signed LRT statistic. Fit the logistic regression models with and without the $\beta$ term. Compute the ordinary LRT statistic from the deviance difference

Q <- fit.null$deviance - fit.full$deviance

where fit.full is the output from glm with the full model and fit.null is the output from the model without the $\beta$ term. It is well known from Wilks' theorem that $Q$ follows an asymptotic chi-square distribution on 1 df under the null hypothesis. It then follows from basic probability theory that the signed LRT statistic $$Z = {\rm sign}(\hat\beta) \sqrt{Q}$$ follows a standard normal distribution under the null hypothesis (Fraser 1991). The one-sided p-value is the right-tail probability, computed in R as pnorm(Z, lower.tail=FALSE).
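
Putting the pieces together, here is a minimal sketch in R, using placeholder names (a data frame dat, binary response y, another covariate z, and the predictor of interest x) rather than anything from a specific dataset:

fit.full <- glm(y ~ z + x, family = binomial, data = dat)  # model with the beta term
fit.null <- glm(y ~ z, family = binomial, data = dat)      # model without it

# ordinary LRT statistic from the deviance difference
Q <- fit.null$deviance - fit.full$deviance

# signed square root: standard normal under the null hypothesis
Z <- sign(coef(fit.full)["x"]) * sqrt(Q)

# one-sided p-value for H1: beta > 0
pnorm(Z, lower.tail = FALSE)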

Another possibility is to compute a signed score test statistic using the glm.scoretest function of the statmod package. The score test also yields a standard normal deviate equal to the log-likelihood derivative divided by the square root of the conditional Fisher information for $\beta$. The null distribution follows from the Central Limit Theorem applied to the derivative and the Law of Large Numbers applied to the Fisher information.
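
For example, with the same placeholder names as above, glm.scoretest takes the fitted null model and the candidate covariate and returns a z-statistic that is standard normal under the null:

library(statmod)

fit.null <- glm(y ~ z, family = binomial, data = dat)

# score z-statistic for adding x to the null model
z <- glm.scoretest(fit.null, dat$x)

# one-sided p-value for H1: beta > 0
pnorm(z, lower.tail = FALSE)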

References

Pierce DA, Bellio R (2017). Modern likelihood-frequentist inference. International Statistical Review 85(3), 519–541. https://doi.org/10.1111/insr.12232

Fraser DAS (1991). Inference: likelihood to significance. Journal of the American Statistical Association 86(414), 258–265. https://doi.org/10.2307/2290557

Barndorff-Nielsen OE (1986). Inference on full or partial parameters based on the standardized signed log likelihood ratio. Biometrika 73(2), 307–322. https://doi.org/10.1093/biomet/73.2.307

  • It's intuitively plausible that that's normally distributed, but could you point me at a proof?
    – Mohan
    Commented May 20 at 9:30
  • Note that with Bayesian modeling you can trivially compute $\Pr(\beta \geq 0)$. Commented May 20 at 11:45
  • @FrankHarrell sorry, I'm being slow. I do indeed want a proof that $\Pr(\beta \geq 0) = 0.5$, but how does it come out of Bayesian modelling?
    – Mohan
    Commented May 20 at 18:31
  • @Mohan The null distribution for $Z$ follows from Wilks' theorem and basic probability theory; I've added more detail to my answer. The Central Limit Theorem applied to the log-likelihood derivative ensures that ${\rm sign}(\hat\beta)$ is equally likely to be positive or negative under the null, and the fact that the square root of a chi-square random variable on 1 df with a random sign is standard normal is basic probability theory. Commented May 21 at 0:35
  • @Mohan I've added a couple of references if you want to delve further. The Fraser paper is relatively accessible; the Barndorff-Nielsen paper is a very technical discussion of higher-order asymptotics. Commented May 21 at 4:12
