
All Questions

2 votes
0 answers
26 views

Model has higher (and closer to 1) $\beta$, but similar $R^2$ and correlation

I have model one, which produces prediction $\hat{y}_1$; later I came up with a new model, which produces prediction $\hat{y}_2$. I have ground truth $y$. The models are not regression based but they ...
zvi
  • 21
3 votes
2 answers
76 views

Maximum likelihood estimators for simple linear regression with $\sigma^2$ unknown

Suppose that we have the simple linear regression model of the form: $$Y_i = \beta X_i + \varepsilon_i$$ with the following set of 'classical assumptions' holding: $E(\varepsilon_i)=0$, $Var(\...
hmmmm
  • 539
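For reference, a sketch under the assumption (mine, since the excerpt cuts off) that the errors are Gaussian with unknown variance $\sigma^2$: maximizing the log-likelihood of this no-intercept model gives the closed forms
$$\hat{\beta} = \frac{\sum_{i=1}^{n} X_i Y_i}{\sum_{i=1}^{n} X_i^2}, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \hat{\beta} X_i\right)^2,$$
obtained by setting the derivatives with respect to $\beta$ and $\sigma^2$ to zero; note the $1/n$ divisor (rather than $1/(n-1)$), which is what makes the variance MLE biased.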
1 vote
0 answers
23 views

Degrees of freedom for estimation

In the context of estimators, why is it that, in general, dividing by the degrees of freedom (instead of the sample size) leads to unbiasedness? I see the value in substituting degrees of freedom for ...
secretrevaler
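To make the usual case concrete (a worked sketch, not quoted from the thread): for an i.i.d. sample with mean $\mu$ and variance $\sigma^2$,
$$E\left[\sum_{i=1}^{n}(X_i - \bar{X})^2\right] = n(\sigma^2+\mu^2) - n\left(\tfrac{\sigma^2}{n}+\mu^2\right) = (n-1)\sigma^2,$$
so one degree of freedom is lost to estimating $\mu$ by $\bar{X}$, and dividing by $n-1$ rather than $n$ is exactly what removes the bias.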
2 votes
1 answer
77 views

Is an estimator that always has a value of zero a linear estimator?

Consider a simple linear regression model: $$Y=\beta_0+\beta_1 X +u$$ Here, we can consider an estimator that does not use any data: $$\hat{\beta}_1=0$$ That is, regardless of the observed data, the ...
MinChul Park
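For orientation, the standard definition being invoked (a sketch, not the thread's answer): an estimator is linear if it can be written as
$$\hat{\beta}_1 = \sum_{i=1}^{n} w_i Y_i$$
with weights $w_i$ that may depend on the $X_i$ but not on $Y$; the degenerate choice $w_i = 0$ for all $i$ yields $\hat{\beta}_1 = 0$, so it satisfies this form trivially.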
0 votes
0 answers
10 views

Variations of Correlation Coefficient of Simple Linear Regression with Estimators [duplicate]

Suppose we are using an Ordinary Least Squares (OLS) estimator of $\alpha_{0}$ and $\alpha_{1}$ for the simple linear regression below: $$ H_{i} = \alpha_{0} + \alpha_{1}X_{i} + \epsilon_{i} $$ How ...
Plesiozaurus
0 votes
0 answers
52 views

Does the rank transform preserve signum of Spearman correlation between parameter estimates across samples?

Suppose we have real-valued random variables $X$, $Y$, with noise $\epsilon$ that is independent of $X$ and $Y$ and $\mathbb{E}[\epsilon] = 0$, and measurable function $f$. I am thinking about ...
Galen
  • 9,401
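A quick numerical check of the identity underlying the rank transform, as a Python sketch (the data-generating function, noise level, and sample size are my own illustrative choices, not from the post): Spearman correlation is the Pearson correlation of the rank-transformed data, so its sign is determined entirely by the ranks.

import numpy as np
from scipy.stats import rankdata, spearmanr

rng = np.random.default_rng(0)
x = rng.normal(size=200)
eps = rng.normal(size=200)            # noise independent of x
y = np.exp(x) + eps                   # illustrative monotone f(x) plus noise

# Pearson correlation of the ranks ...
r_ranks = np.corrcoef(rankdata(x), rankdata(y))[0, 1]
# ... matches scipy's Spearman coefficient (up to floating-point error),
# so the rank transform preserves the sign by construction.
r_spearman, _ = spearmanr(x, y)
print(r_ranks, r_spearman)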
1 vote
0 answers
56 views

Kernelization vs pre-defined basis functions: which one is better and why?

I am learning about kernels and how linear models can use them to model nonlinear data. Consider, for example, linear regression for nonlinear function $y(\textbf{x})$. The idea is to project the ...
Botond
  • 217
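To ground the comparison, here is a minimal numpy sketch (data, basis degree, kernel bandwidth, and ridge penalty are all assumed for illustration) that fits the same nonlinear data once with a pre-defined polynomial basis and once with kernel ridge regression, where the RBF kernel acts as an implicit feature map.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 60)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)   # nonlinear target (illustrative)
lam = 1e-2                                       # ridge penalty (assumed value)

# (a) Pre-defined basis functions: explicit polynomial features, then ridge regression.
Phi = np.vander(x, N=6, increasing=True)         # basis 1, x, ..., x^5
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
f_basis = Phi @ w

# (b) Kernelization: never build the features explicitly, work with the Gram matrix.
def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

K = rbf(x, x)
alpha = np.linalg.solve(K + lam * np.eye(x.size), y)   # kernel ridge dual weights
f_kernel = K @ alpha

print(np.mean((f_basis - y) ** 2), np.mean((f_kernel - y) ** 2))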
15 votes
3 answers
2k views

In some sense, is linear regression an estimate of an estimate of an estimate?

Consider the problem of estimating a random variable $Y$ using another random variable $X$. The best estimator of $Y$ by a function of $X$ is the conditional expectation $E[Y|X]$. It minimizes the ...
user141240
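The chain the question has in mind can be written out explicitly (a standard sketch): $E[Y\mid X]$ is the best estimator of $Y$, its best linear approximation (equivalently, the best linear predictor of $Y$) is
$$\alpha + \beta X, \qquad \beta = \frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(X)}, \quad \alpha = E[Y] - \beta\,E[X],$$
and OLS estimates that population-level approximation from a sample via
$$\hat{\beta} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad \hat{\alpha} = \bar{y} - \hat{\beta}\,\bar{x}.$$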
1 vote
0 answers
128 views

Gasser Müller estimator for estimating the derivative $m'(x)$ of a nonparametric regression function

I would like to compare the performance of the Gasser Müller estimator with other estimators for estimating the derivative $m'(x)$ of the regression function $m(x)$. Let's say we have the ...
Mathieu Rousseau
1 vote
0 answers
46 views

Rescaling logistic regression coefficients such that variance remains constant

I'm reading "A Modern Maximum-Likelihood Theory for High-dimensional Logistic Regression" by Pragya Sur, and trying to recreate Figure 2, for my own edification. The covariates, $X$, are i.i....
kevkev9957
4 votes
1 answer
641 views

Parameter estimation of state-space models with hidden variables

I have a time-series analysis problem that I am having trouble finding a suitable regression technique for. I have a coupled linear three-dimensional system \begin{align*} X_{t} & =\left(1+J\...
011
  • 41
1 vote
0 answers
34 views

Does a linear regression assume that the (unconditional) predictor data are i.i.d.?

Say I have a linear, cross-sectional relationship, $y_{i}=x_{i}b+e_{i}$, where $E(e_{i}|X_{j})=0$ for all relevant $i,j$. Given this, one can prove that the OLS estimator is unbiased. However, ...
user121416
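For context, the unbiasedness proof the excerpt refers to only uses the stated exogeneity condition, conditioning on the observed design (a standard sketch in matrix form):
$$\hat{b} = (X'X)^{-1}X'y = b + (X'X)^{-1}X'e, \qquad E[\hat{b}\mid X] = b + (X'X)^{-1}X'\,E[e\mid X] = b,$$
so no i.i.d. assumption on the predictors enters this particular argument.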
0 votes
1 answer
575 views

Closed form equations for simple linear regression estimators

I'm learning specifically about different forms of simple linear regression including ordinary least squares, median absolute deviation, and Theil-Sen. I have no background whatsoever in linear ...
hachiko
  • 89
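Of the estimators mentioned, OLS and Theil-Sen admit simple explicit formulas (the absolute-deviation criterion generally has no closed form and needs an iterative solver); here is a small numpy sketch of both closed forms on made-up data.

import numpy as np

def ols_simple(x, y):
    # Closed-form OLS: slope = sample covariance of (x, y) / sample variance of x.
    xbar, ybar = x.mean(), y.mean()
    slope = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
    return ybar - slope * xbar, slope          # (intercept, slope)

def theil_sen(x, y):
    # Theil-Sen: slope is the median of all pairwise slopes; a common intercept
    # choice is the median of the residuals y - slope * x.
    i, j = np.triu_indices(len(x), k=1)
    slope = np.median((y[j] - y[i]) / (x[j] - x[i]))
    return np.median(y - slope * x), slope     # (intercept, slope)

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=50)
print(ols_simple(x, y))
print(theil_sen(x, y))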
3 votes
1 answer
319 views

Why are least squares estimators for multiple linear regression unaffected by shifting each variable by its mean?

Suppose we have $Y = \beta_{0} + \beta_{1}X_1 + \beta_{2}X_2 + \epsilon$ and an estimator $\hat{\beta}$ for this model. Now we substitute $\tilde{Y} = Y - \bar{Y}$ ($Y$ minus the mean of $Y$) and $\tilde{X}_1 = X_1 - \...
Song Calderone Zhang
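One way to see the invariance (a sketch, not the accepted answer): for any fixed slopes, substituting the centered variables gives
$$\sum_i\big(\tilde{Y}_i - \beta_1\tilde{X}_{1i} - \beta_2\tilde{X}_{2i}\big)^2 = \sum_i\big(Y_i - \beta_0^{*} - \beta_1 X_{1i} - \beta_2 X_{2i}\big)^2 \quad\text{with}\quad \beta_0^{*} = \bar{Y} - \beta_1\bar{X}_1 - \beta_2\bar{X}_2,$$
which is the original criterion evaluated at the intercept that minimizes it for those slopes; hence the minimizing slopes $\hat{\beta}_1, \hat{\beta}_2$ are identical in both parameterizations and only the intercept changes.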
0 votes
0 answers
308 views

Can you explain LINEAR in BLUE?

I have a hard time understanding the LINEAR part. I found something like this: the linear property of the OLS estimator means that OLS belongs to the class of estimators which are linear in $Y$, the dependent ...
Retko
  • 131
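As a concrete instance in the simple regression case (a sketch of the definition rather than the full BLUE argument), the OLS slope is itself a linear combination of the responses:
$$\hat{\beta}_1 = \sum_{i=1}^{n} k_i Y_i, \qquad k_i = \frac{x_i - \bar{x}}{\sum_{j=1}^{n}(x_j - \bar{x})^2},$$
where the weights $k_i$ depend only on the $x$'s; "linear" in BLUE refers to this linearity in the dependent variable, not to linearity of the regression function in $X$.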
