
I am interested in parametric and non-parametric machine learning algorithms: their advantages and disadvantages, and especially their main differences in computational complexity. In particular, I am interested in the parametric Gaussian Mixture Model (GMM) and the non-parametric kernel density estimation (KDE). My understanding is that when only a "small" number of data points is available, parametric methods (like a GMM fitted by EM) are the better choice, but as the number of data points grows much larger, non-parametric methods become preferable. Could someone explain this comparison in a bit more detail?

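To make the comparison concrete, here is a minimal sketch I put together (my own illustration, not taken from any particular paper) that fits both estimators to the same synthetic sample with scikit-learn. The component count, bandwidth, and sample sizes are assumptions chosen for the example, not recommended settings:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Synthetic 1-D data from a two-component Gaussian mixture
n = 500
X = np.concatenate([
    rng.normal(-2.0, 0.5, size=n // 2),
    rng.normal(1.0, 1.0, size=n // 2),
]).reshape(-1, 1)

# Parametric: GMM fitted by EM. Each EM iteration costs roughly
# O(n * k) for k components in 1-D; once fitted, evaluating the
# density at a new point costs O(k), independent of n.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Non-parametric: Gaussian KDE. "Fitting" just stores the sample;
# a naive density evaluation at a new point costs O(n).
kde = KernelDensity(kernel="gaussian", bandwidth=0.3).fit(X)

# Compare mean log-density on a held-out sample from the same mixture
X_test = np.concatenate([
    rng.normal(-2.0, 0.5, size=100),
    rng.normal(1.0, 1.0, size=100),
]).reshape(-1, 1)
print("GMM mean log-density:", gmm.score(X_test))
print("KDE mean log-density:", kde.score_samples(X_test).mean())
```

Varying n in a sketch like this is roughly what I am asking about: which estimator does better at small versus large sample sizes, and at what computational cost for fitting and evaluation?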
  • Why did you include the computer-vision tag? Is your question specific to that area of application? Commented Dec 16, 2020 at 8:35
  • What source said GMM specifically is better than KDE for small sample sizes? – develarist, Dec 16, 2020 at 9:34
  • @RichardHardy This question relates, among other things, to the topic of Computer Vision/Image Processing. – john price, Dec 16, 2020 at 9:37
  • @develarist That is what I was mainly able to take from some papers. Do you have a different point of view? – john price, Dec 16, 2020 at 9:38
  • @develarist It is not the case that parametric models should be used for small sample sizes. For example, in the realm of statistical methods, with small samples we don't have enough information to assess model assumptions and so tend to use nonparametric and semiparametric methods. Commented Dec 16, 2020 at 12:52
