
All Questions

0 votes
0 answers
26 views

Local linear kernel regression

It is known that the prediction for a given point $x$ is given by: $$\hat{f}_h(x) = \hat{\beta}_0(x)$$ where $$\hat{\beta}(x) = \arg\min_{\beta_0, \beta_1}\sum_{i=1}^nK\left(\frac{x - x_i}{h}\right)(...
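As a minimal numerical illustration of this objective (a sketch, not the asker's code: it assumes a Gaussian kernel, and the names `gaussian_kernel` and `local_linear_fit` are mine), the prediction $\hat f_h(x)=\hat\beta_0(x)$ is the intercept of a weighted least-squares fit on the local design $[1,\; x_i-x]$:

```python
import numpy as np

def gaussian_kernel(u):
    """Standard normal density, used here as the kernel K."""
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def local_linear_fit(x, x_obs, y_obs, h):
    """Local linear estimate f_hat(x) = beta0_hat(x): weighted least squares
    with weights K((x - x_i)/h) and local design [1, x_i - x]."""
    w = gaussian_kernel((x - x_obs) / h)                   # kernel weights
    X = np.column_stack([np.ones_like(x_obs), x_obs - x])  # local design matrix
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_obs))
    return beta[0]                                         # intercept = prediction at x
```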
1 vote
0 answers
40 views

How to show $\sup_{x\in [a,b]}|f_n(x)-f(x)|=O_p\left(\sqrt{\frac{\log n}{nh}}+h^2\right)$ when the kernel $K(\cdot)$ is of bounded variation?

Consider the kernel estimate $f_n$ of a real univariate density defined by $$f_n(x)=\sum_{i=1}^{n}(nh)^{-1}K\left\{h^{-1}(x-X_i)\right\}$$ where $X_1,...,X_n$ are independent and identically ...
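For reference, a direct, literal implementation of the estimator $f_n$ above; the Gaussian kernel and the name `kde` are my own placeholders:

```python
import numpy as np

def kde(x, sample, h, K=lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)):
    """Kernel density estimate f_n(x) = (nh)^{-1} * sum_i K((x - X_i)/h)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    u = (x[:, None] - np.asarray(sample)[None, :]) / h  # scaled differences, shape (len(x), n)
    return K(u).sum(axis=1) / (len(sample) * h)
```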
11 votes
4 answers
3k views

Why is Kernel Density Estimation still nonparametric with parametrized kernel?

I am new to kernel density estimation (KDE), but I want to learn about it to help me calculate probabilities of outcomes in sequencing data. I watched this https://www.youtube.com/watch?v=QSNN0no4dSI ...
0 votes
0 answers
36 views

Implementing Convolution Function for Gaussian Kernel in Python for PDF Estimation

I am currently working on estimating a probability density function (PDF) nonparametrically using a Gaussian kernel. My goal is to determine the optimal bandwidth $h$ that minimizes the cross-...
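The excerpt is truncated, but if the target is the usual least-squares cross-validation (LSCV) score for a Gaussian-kernel KDE, a brute-force sketch looks like this (the closed form for $\int \hat f^2$ uses the fact that the convolution of two standard normal kernels is a $N(0,2)$ density; all names are mine):

```python
import numpy as np

def lscv_score(h, X):
    """LSCV(h) = int f_hat^2 dx - (2/n) * sum_i f_hat_{-i}(X_i) for a Gaussian-kernel KDE."""
    n = len(X)
    d = (X[:, None] - X[None, :]) / h                   # pairwise scaled differences
    # int f_hat^2 dx = (1/(n^2 h)) * sum_{i,j} N(0,2)-density at d_ij
    int_f2 = (np.exp(-0.25 * d**2) / (2 * np.sqrt(np.pi))).sum() / (n**2 * h)
    # leave-one-out term: (2/(n(n-1)h)) * sum_{i != j} phi(d_ij)
    phi = np.exp(-0.5 * d**2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(phi, 0.0)
    loo = 2.0 * phi.sum() / (n * (n - 1) * h)
    return int_f2 - loo

def best_bandwidth(X, grid):
    """Pick the h on the grid minimizing the LSCV score."""
    return grid[int(np.argmin([lscv_score(h, X) for h in grid]))]
```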
1 vote
0 answers
32 views

Strong consistency of kernel density estimator

I am studying the book Nonparametric and Semiparametric Models written by Wolfgang Härdle and have difficulty with the following exercise: $\textbf{Exercise 3.13}$ Show that $\hat{f_h}^{(n)}(x) \...
5 votes
1 answer
698 views

Expected value and variance of KDE

I need to find the expected value and variance of the KDE given that $$(i)\; E[u] = 0 \;\Rightarrow\; \int u\,\phi(u)\,du=0, \qquad (ii)\; V[u] = \sigma^2 \;\Rightarrow\; \int u^2\,\phi(u)\,du=\sigma^2,$$ where $\phi$ is the kernel function. I've ...
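A sketch of the standard route, assuming the estimator has the usual form $\hat f_h(x)=\frac{1}{nh}\sum_{i=1}^n\phi\!\left(\frac{X_i-x}{h}\right)$ and $f$ is smooth enough for a second-order Taylor expansion: substitute $u=(t-x)/h$ and use (i) and (ii),

$$E[\hat f_h(x)] = \frac{1}{h}\int \phi\!\left(\frac{t-x}{h}\right) f(t)\,dt = \int \phi(u)\,f(x+hu)\,du \approx f(x) + \frac{h^2\sigma^2}{2}\,f''(x),$$

since the first-order term vanishes by (i), while independence of the $X_i$ gives

$$\operatorname{Var}[\hat f_h(x)] = \frac{1}{nh^2}\operatorname{Var}\!\left[\phi\!\left(\frac{X_1-x}{h}\right)\right] \approx \frac{f(x)}{nh}\int \phi(u)^2\,du.$$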
2 votes
1 answer
56 views

Gronwall's inequality

I am reading the article. I am stuck on the first part of the proof of Proposition 4 on page 32. To be more specific, I do not understand how they obtained $F(x) \le \frac{2K}{1-\frac{2R\epsilon}{\...
4 votes
1 answer
188 views

Proving that the bias of the derivative of the Parzen-Rosenblatt (kernel density) estimator is of order $O(h^2)$ and $O(h)$ as $h$ approaches $0$

I came across this property that I don't understand, and I couldn't find the proof anywhere: suppose $K$ is the density of the standard normal distribution and $K'$ its derivative. Suppose that the density ...
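A sketch of the usual argument, under the assumption that the derivative estimator is $\hat f_h'(x)=\frac{1}{nh^2}\sum_{i=1}^n K'\!\left(\frac{x-X_i}{h}\right)$, i.e. the derivative of the Parzen-Rosenblatt estimator, with $K$ the standard normal density as stated: substituting $u=(x-t)/h$,

$$E[\hat f_h'(x)] = \frac{d}{dx}\,E[\hat f_h(x)] = \frac{d}{dx}\int K(u)\,f(x-hu)\,du = \int K(u)\,f'(x-hu)\,du.$$

If $f'''$ exists and is bounded, Taylor-expanding $f'$ and using the symmetry of $K$ (so $\int uK(u)\,du=0$) gives $\bigl|E[\hat f_h'(x)]-f'(x)\bigr| \le \frac{h^2}{2}\sup|f'''|\int u^2K(u)\,du = O(h^2)$; if only $f''$ is bounded, the cruder bound $|f'(x-hu)-f'(x)|\le h|u|\sup|f''|$ gives $O(h)$.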
6 votes
1 answer
1k views

Is there a non-bootstrap way to estimate confidence intervals for Kernel regression predictions?

A simple estimation problem: $$ y = f(x) + \epsilon $$ where I use standard Nadaraya-Watson regression to estimate $f(x)$. This is relatively fast and works well even in an online setting. Now I ...
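One common non-bootstrap route is a plug-in normal interval that uses the fact that Nadaraya-Watson is a linear smoother, $\hat f(x)=\sum_i w_i(x)y_i$, so $\operatorname{Var}(\hat f(x)\mid X)=\sigma^2\sum_i w_i(x)^2$ under homoscedastic errors. A rough sketch (Gaussian kernel, crude residual-based $\hat\sigma^2$, smoothing bias ignored; all names are mine):

```python
import numpy as np

def nw_weights(x, x_obs, h):
    """Nadaraya-Watson weights w_i(x) with a Gaussian kernel."""
    k = np.exp(-0.5 * ((x - x_obs) / h) ** 2)
    return k / k.sum()

def nw_pointwise_ci(x, x_obs, y_obs, h, z=1.96):
    """Approximate 95% pointwise interval for the smoothed value,
    using Var(f_hat(x) | X) = sigma^2 * sum_i w_i(x)^2 (bias not corrected)."""
    w = nw_weights(x, x_obs, h)
    fhat = w @ y_obs
    fitted = np.array([nw_weights(xi, x_obs, h) @ y_obs for xi in x_obs])
    sigma2 = np.mean((y_obs - fitted) ** 2)   # crude homoscedastic variance estimate
    se = np.sqrt(sigma2 * np.sum(w ** 2))
    return fhat - z * se, fhat + z * se
```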
2 votes
0 answers
134 views

Propensity score non parametric estimation

In several papers in the 'double machine learning' literature, the propensity score (a nuisance parameter) is estimated nonparametrically. It is a bit unclear how this estimation is performed, as ...
9 votes
2 answers
3k views

Estimating the gradient of log density given samples

I am interested in estimating the gradient of the log probability distribution $\nabla\log p(x)$ when $p(x)$ is not analytically available but is only accessed via samples $x_i \sim p(x)$. There ...
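One simple baseline among the options the question alludes to is the plug-in score of a kernel density estimate, $\nabla\log\hat p(x)=\nabla\hat p(x)/\hat p(x)$; a one-dimensional sketch with a Gaussian kernel (names are mine; other estimators, e.g. score matching, exist for the same task):

```python
import numpy as np

def kde_log_density_grad(x, sample, h):
    """Plug-in estimate of d/dx log p(x) via a Gaussian-kernel KDE:
    the normalizing constants cancel in the ratio p_hat'(x) / p_hat(x)."""
    u = (x - sample) / h
    k = np.exp(-0.5 * u**2)      # unnormalized kernel values
    dp = -(u * k).sum() / h      # derivative (up to the same constant) of the KDE at x
    return dp / k.sum()
```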
1 vote
0 answers
128 views

Gasser-Müller estimator for estimating the derivative $m'(x)$ of a nonparametric regression function

I would like to compare the performance of the Gasser-Müller estimator with other estimators for estimating the derivative $m'(x)$ of the regression function $m(x)$. Let's say we have the ...
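A rough sketch of the derivative version of the Gasser-Müller estimator, obtained by differentiating the Gasser-Müller weights $\int_{s_{i-1}}^{s_i}K_h(x-u)\,du$ in $x$, which with a Gaussian kernel reduces to differences of normal densities at the interval endpoints (the endpoint convention $s_0=x_{(1)}$, $s_n=x_{(n)}$ and all names are my own simplification):

```python
import numpy as np

def phi(u):
    """Standard normal density."""
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def gasser_muller_deriv(x, x_obs, y_obs, h):
    """Estimate m'(x) as sum_i Y_(i) * d/dx int_{s_{i-1}}^{s_i} K_h(x-u) du
    = sum_i Y_(i) * [phi((x - s_{i-1})/h) - phi((x - s_i)/h)] / h."""
    order = np.argsort(x_obs)
    xs, ys = x_obs[order], y_obs[order]
    # interval endpoints s_0 <= s_1 <= ... <= s_n (midpoints of consecutive ordered x's)
    s = np.concatenate(([xs[0]], (xs[:-1] + xs[1:]) / 2, [xs[-1]]))
    w = (phi((x - s[:-1]) / h) - phi((x - s[1:]) / h)) / h
    return float(np.sum(ys * w))
```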
1 vote
0 answers
38 views

Maximum bias of the NW estimator when $r(x)$ is Lipschitz (Question 17, Chapter 5 of All of Nonparametric Statistics)

The general setup is that $Y_i = r(X_i) + \epsilon_i$, and we want to estimate $r$ using Nadaraya–Watson kernel regression. We additionally assume $r\colon [0,1] \to \mathbb{R}$ is Lipschitz, so $|...
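One standard way to bound the conditional bias, sketched under the extra assumption that $K$ is nonnegative and supported on $[-1,1]$, and writing $L$ for the Lipschitz constant: the Nadaraya-Watson estimate is a convex combination $\hat r(x)=\sum_i w_i(x)Y_i$ with $w_i(x)=K\!\left(\frac{x-X_i}{h}\right)/\sum_j K\!\left(\frac{x-X_j}{h}\right)$, and $w_i(x)=0$ unless $|X_i-x|\le h$, so

$$\bigl|E[\hat r(x)\mid X_1,\dots,X_n]-r(x)\bigr| = \Bigl|\sum_i w_i(x)\bigl(r(X_i)-r(x)\bigr)\Bigr| \le \sum_i w_i(x)\,L\,|X_i-x| \le Lh.$$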
1 vote
0 answers
251 views

Bias of kernel density estimator of pdf $f$, where $f$ has bounded first derivative $f'$

Let's say the kernel density estimator is given by $$\hat f(x) = \frac{1}{nh_n} \sum_{i=1}^n K\left(\frac{X_i-x}{h_n}\right),$$ where $h_n \to 0$, $nh_n \to \infty$, $K$ a symmetric probability ...
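The usual argument is short (a sketch, assuming $\sup_t|f'(t)|\le M$ and $\int|u|K(u)\,du<\infty$): substituting $u=(t-x)/h_n$ and applying the mean value theorem,

$$\bigl|E[\hat f(x)]-f(x)\bigr| = \left|\int K(u)\bigl[f(x+h_nu)-f(x)\bigr]\,du\right| \le M\,h_n\int |u|\,K(u)\,du = O(h_n);$$

with a symmetric kernel and a bounded second derivative, the same expansion sharpens to $O(h_n^2)$.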
0 votes
0 answers
40 views

Kernel Density Estimator: Misunderstanding in Taylor Series and the bias of KDE [duplicate]

Let's say the kernel density estimator is given by $\hat f(x) = \frac{1}{nh_n} \sum_{i=1}^n K\left(\frac{X_i-x}{h_n}\right)$, where $h_n \to 0$, $nh_n \to \infty$, $K$ a symmetric probability distribution ...
