A one-sided LRT is straightforward in R using the signed LRT statistic. Fit the logistic regression models with and without the $\beta$ term. Compute the ordinary LRT statistic from the deviance difference
Q <- fit.null$deviance - fit.full$deviance
where fit.full is the output from glm with the full model and fit.null is the output from the model without the $\beta$ term.
It is well known from Wilks' theorem that $Q$ asymptotically follows a chi-square distribution on 1 df under the null hypothesis.
It follows then from basic probability theory that the signed LRT statistic
$$Z = {\rm sign}(\hat\beta) \sqrt{Q}$$
follows a standard normal distribution under the null hypothesis (Fraser 1991).
The one-sided p-value is the right tail probability, computed in R as pnorm(Z, lower.tail=FALSE).
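For concreteness, the full workflow might look like the sketch below. The data frame dat, the response y, the covariate of interest x, and the adjustment covariate z are all hypothetical placeholders; substitute your own model formulae.

```r
# Hypothetical data: binary response y, covariate of interest x, adjuster z
set.seed(1)
dat <- data.frame(x = rnorm(50), z = rnorm(50))
dat$y <- rbinom(50, 1, plogis(0.5 * dat$x))

fit.full <- glm(y ~ x + z, family = binomial, data = dat)  # with the beta term
fit.null <- glm(y ~ z, family = binomial, data = dat)      # without the beta term

Q <- fit.null$deviance - fit.full$deviance    # ordinary LRT statistic
Z <- sign(coef(fit.full)["x"]) * sqrt(Q)      # signed LRT statistic
p <- pnorm(Z, lower.tail = FALSE)             # one-sided p-value (H1: beta > 0)
```

Note that this tests the alternative $\beta > 0$; for $\beta < 0$, use the left tail instead.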
Another possibility is to compute a signed score test statistic using the glm.scoretest function of the statmod package.
The score test also yields a standard normal deviate equal to the log-likelihood derivative divided by the square root of the conditional Fisher information for $\beta$.
The null distribution follows from the Central Limit Theorem applied to the derivative and the Law of Large Numbers applied to the Fisher information.
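The score-test route can be sketched as follows, again using hypothetical data (dat, y, x, z as before). Only the null model needs to be fitted; the covariate being tested is passed separately to glm.scoretest.

```r
library(statmod)

# Hypothetical data: binary response y, covariate of interest x, adjuster z
set.seed(1)
dat <- data.frame(x = rnorm(50), z = rnorm(50))
dat$y <- rbinom(50, 1, plogis(0.5 * dat$x))

fit.null <- glm(y ~ z, family = binomial, data = dat)  # model without x

Z <- glm.scoretest(fit.null, dat$x)   # signed score statistic, N(0,1) under H0
p <- pnorm(Z, lower.tail = FALSE)     # one-sided p-value (H1: beta > 0)
```

The score test is convenient when testing many candidate covariates against the same null model, since the null fit is computed only once.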
References
Pierce DA, Bellio R (2017).
Modern likelihood-frequentist inference.
International Statistical Review 85(3), 519–541.
https://doi.org/10.1111/insr.12232
Fraser DAS (1991).
Inference: likelihood to significance.
Journal of the American Statistical Association 86(414), 258–265.
https://doi.org/10.2307/2290557
Barndorff-Nielsen OE (1986).
Inference on full or partial parameters based on the standardized signed log likelihood ratio.
Biometrika 73(2), 307–322.
https://doi.org/10.1093/biomet/73.2.307