
All Questions

1 vote
0 answers
40 views

How to show $\sup_{x\in [a,b]}|f_n(x)-f(x)|=O_p(\sqrt{\frac{\log n}{nh}}+h^2)$ when the kernel $K(\cdot)$ is of bounded variation?

Consider the kernel estimate $f_n$ of a real univariate density defined by $$f_n(x)=\sum_{i=1}^{n}(nh)^{-1}K\left\{h^{-1}(x-X_i)\right\}$$ where $X_1,...,X_n$ are independent and identically ...
Kevin • 31
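
A minimal numerical sketch of the quantity in that question (not a proof): it assumes a standard normal sample, a Gaussian kernel, and the bandwidth $h = n^{-1/5}$, none of which are fixed by the excerpt above.

```python
# Simulate sup_{x in [a,b]} |f_n(x) - f(x)| for a Gaussian kernel K and a
# standard normal sample, and compare it with sqrt(log n / (n h)) + h^2.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
h = n ** (-1 / 5)                      # assumed bandwidth choice
X = rng.standard_normal(n)             # assumed true density f: standard normal

grid = np.linspace(-3.0, 3.0, 601)     # the interval [a, b], here [-3, 3]
# f_n(x) = (n h)^{-1} sum_i K{ h^{-1} (x - X_i) }
f_n = norm.pdf((grid[:, None] - X[None, :]) / h).mean(axis=1) / h
sup_dev = np.max(np.abs(f_n - norm.pdf(grid)))
print(sup_dev, np.sqrt(np.log(n) / (n * h)) + h ** 2)
```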
1 vote
0 answers
43 views

Why is histogram density estimation nonparametric?

My understanding of histogram density estimation: For $k$ predefined equal-width bins $(b_0, b_1], (b_1, b_2], ..., (b_{k-1}, b_k]$ and $n$ observations $x_1,...,x_n \in (b_0,b_k]$, we estimate ...
fin • 11
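
A small sketch of the estimator described in that excerpt, for a hypothetical sample and $k = 10$ equal-width bins: within bin $(b_{j-1}, b_j]$ the estimate is the bin count divided by $n$ times the bin width.

```python
# Histogram density estimate: f_hat(x) = (# observations in x's bin) / (n * bin width).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(500)                  # hypothetical sample
edges = np.linspace(x.min(), x.max(), 11)     # b_0 < b_1 < ... < b_10, equal widths
counts, _ = np.histogram(x, bins=edges)
width = edges[1] - edges[0]
f_hat = counts / (len(x) * width)             # piecewise-constant density estimate

# np.histogram with density=True applies the same normalisation
assert np.allclose(f_hat, np.histogram(x, bins=edges, density=True)[0])
```

One common way to read the "nonparametric" label is that nothing here restricts $f$ to a fixed finite-dimensional family; the bin counts act as parameters whose number can be grown with the sample.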
0 votes
0 answers
85 views

Expected value (and variance) of a Dirichlet Process

Suppose I have a measure $G$ that follows a Dirichlet Process, $$G \sim DP(H_0,\alpha)$$ where $H_0$ is some base measure. Is there a closed form solution for the expected value of $G$?
dogs4ever
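
For reference, a standard property of the Dirichlet process, assuming $H_0$ is normalised to a probability measure: for any measurable set $A$, the marginal $G(A)$ is Beta-distributed, which gives a closed form for the mean and variance.

$$
G(A) \sim \mathrm{Beta}\bigl(\alpha H_0(A),\, \alpha\{1 - H_0(A)\}\bigr)
\;\Longrightarrow\;
\mathbb{E}[G(A)] = H_0(A), \qquad
\operatorname{Var}[G(A)] = \frac{H_0(A)\{1 - H_0(A)\}}{\alpha + 1}.
$$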
5 votes
2 answers
544 views

Is density estimation the same as parameter estimation?

I was studying parameter estimation from Sheldon Ross' probability and statistics book. Here the task of parameter estimation is described as follows: Is this task the same as density estimation in ...
tail • 151
1 vote
0 answers
250 views

Bias of kernel density estimator of pdf $f$, where $f$ has bounded first derivative $f'$

Let's say the kernel density estimator is given by $$\hat f(x) = \frac{1}{nh_n} \sum_{i=1}^n K\left(\frac{X_i-x}{h_n}\right),$$ where $h_n \to 0$, $nh_n \to \infty$, $K$ a symmetric probability ...
Phil • 636
0 votes
0 answers
40 views

Kernel Density Estimator: Misunderstanding in Taylor Series and the bias of KDE [duplicate]

Let's say the kernel density estimator is given by $\hat f(x) = \frac{1}{nh_n} \sum_{i=1}^n K(\frac{X_i-x}{h_n})$, where $h_n \to 0$, $nh_n \to \infty$, $K$ a symmetric probability distribution ...
Phil • 636
0 votes
0 answers
50 views

How to prove symmetry of a Uniform kernel?

I am trying to prove this kernel is valid, $$ K(x) = \frac{1}{2}I(-1 < x < 1) $$ So far I can show it integrates to 1, but how do I prove $$K(x) = K(-x)?$$ Also, how do we show that $K(x) \ge 0$ for ...
user359211
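
A short worked equation for the symmetry and non-negativity claims above (writing the kernel consistently as $K$):

$$
K(-x) = \tfrac{1}{2}\, I(-1 < -x < 1) = \tfrac{1}{2}\, I(-1 < x < 1) = K(x),
$$

since $-1 < -x < 1$ holds if and only if $-1 < x < 1$; and $K(x)$ takes only the values $0$ and $\tfrac12$, so $K(x) \ge 0$ everywhere.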
1 vote
0 answers
100 views

Optimal rate of convergence of nonparametric density estimators

Suppose that $X_1, X_2, \dots, X_n$ forms an independent and identically distributed sample from some $d$-dimensional probability distribution with unknown probability density function $f$. Let $x$ be ...
lmaosome • 140
1 vote
0 answers
273 views

histogram vs. kernel in density estimation

Assume we have a problem of estimation of a density $f(x)$ over an interval $[0, 1]$. Can a regular histogram (i.e. with equal-sized bins) be viewed as some kind of a kernel?
ABK • 666
1 vote
0 answers
135 views

Extraction of modes from a multi-modal density function

I am trying to extract modes from a multi-modal density function and not just peaks. For example, in the two density functions below (images), I would like to extract the curves contained in the black ...
curiosus • 303
1 vote
0 answers
107 views

Convex hull version of density estimation (or lines of constant density)

Background: So I had a thought, tried it out, and liked what it did. I'm sure someone else has done this. It feels very convenient. It also gives an interesting take on robust nonparametric density ...
EngrStudent • 9,570
0 votes
0 answers
288 views

Building a classifier using Parzen window

Consider the application of the Parzen window method to model a probability density function in a binary classification problem, and assume a training set where the 4 points {−5, −1, 1, 5} belong ...
AfonsoSalgadoSousa
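
A minimal sketch of a Parzen-window classifier in this spirit; since the excerpt is truncated, the class assignments, the second class, and the window width below are all hypothetical placeholders.

```python
# Parzen-window (kernel) classifier: estimate each class-conditional density with
# a Gaussian window and pick the class with the largest prior * density.
import numpy as np
from scipy.stats import norm

def parzen_density(x, train, h):
    """(1/(n h)) * sum_i K((x - x_i)/h) with a Gaussian window K."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return norm.pdf((x[:, None] - train[None, :]) / h).mean(axis=1) / h

class_a = np.array([-5.0, -1.0, 1.0, 5.0])   # the four points from the question (class label assumed)
class_b = np.array([-2.0, 0.0, 2.0])         # hypothetical second class
h = 1.0                                      # hypothetical window width

def classify(x):
    prior_a = len(class_a) / (len(class_a) + len(class_b))
    prior_b = 1.0 - prior_a
    score_a = prior_a * parzen_density(x, class_a, h)
    score_b = prior_b * parzen_density(x, class_b, h)
    return np.where(score_a >= score_b, "a", "b")

print(classify([-4.0, 0.5, 3.0]))
```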
2 votes
1 answer
39 views

Why might the functional form of a distribution be "inappropriate" for a particular application?

Working through Bishop's Pattern Recognition and Machine Learning (a great read so far!), and on page 67 he says: "One limitation of the parametric approach is that it assumes a specific ...
stochasticmrfox
2 votes
0 answers
41 views

Unexpected zero on posterior density of Dirichlet process mixture

I was reading this notebook from the PyMC3 documentation about Dirichlet Process Mixtures and, on the last figure, the estimated density reaches almost zero for a particular value, despite the ...
PedroSebe • 2,680
4 votes
0 answers
441 views

Derivation of k nearest neighbor classification rule

One way to derive the k-NN decision rule based on the k-NN density estimation goes as follows: given $k$ the number of neighbors, $k_i$ the number of neighbors of class $i$ in the bucket, $N$ the ...
diegobatt • 426
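
A sketch of how this derivation is usually completed, assuming the standard notation that $N_i$ is the number of training points in class $C_i$, $N = \sum_i N_i$, and $V$ is the volume of the ball around $x$ containing the $k$ nearest neighbours:

$$
p(x \mid C_i) \approx \frac{k_i}{N_i V}, \qquad
P(C_i) = \frac{N_i}{N}, \qquad
p(x) \approx \frac{k}{N V}
\;\Longrightarrow\;
P(C_i \mid x) = \frac{p(x \mid C_i)\,P(C_i)}{p(x)} \approx \frac{k_i}{k},
$$

so the posterior is maximised by the class with the most representatives among the $k$ neighbours, i.e. the usual k-NN majority vote.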
