3
votes
Accepted
Estimate Gaussian Mixture Model (GMM) Parameters Embedded in Linear System
I can see two approaches to tackle this:
Solve in 2 Steps
Basically, $\boldsymbol{x}$ is data drawn from a 1D Gaussian Mixture Model (GMM).
So you can find $\hat{\boldsymbol{x}}$ using a linear system solver and in ...
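The two-step idea above can be sketched in Python: recover $\hat{\boldsymbol{x}}$ with a least-squares solve, then fit a 1D GMM to the recovered samples. This is a minimal sketch on synthetic data with a hand-rolled two-component EM loop; the matrix $A$, the mixture parameters, and the EM details are illustrative assumptions, not from the original answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: x is 1D GMM data observed through y = A x + noise.
n = 400
comp = rng.random(n) < 0.5
x_true = np.where(comp, rng.normal(-2.0, 0.5, n), rng.normal(3.0, 0.7, n))
A = rng.normal(size=(n, n))
y = A @ x_true + 1e-8 * rng.normal(size=n)

# Step 1: recover x_hat with a linear least-squares solve.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

# Step 2: fit a 2-component 1D GMM to x_hat with a few EM iterations.
mu = np.array([x_hat.min(), x_hat.max()])   # crude initialisation
sig = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibilities of each component for each sample
    d = x_hat[:, None] - mu[None, :]
    logp = -0.5 * (d / sig) ** 2 - np.log(sig) + np.log(pi)
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations
    nk = r.sum(axis=0)
    pi = nk / n
    mu = (r * x_hat[:, None]).sum(axis=0) / nk
    d = x_hat[:, None] - mu[None, :]
    sig = np.sqrt((r * d ** 2).sum(axis=0) / nk)
```

With well-separated components, the recovered means land close to the true $(-2, 3)$.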
1
vote
What is the collection of functions that a given finite neural network can approximate with ease?
I think the number of effective parameters needed to approximate these functions varies widely. With a fixed sample size $m = 3000$, there could be insufficient information to estimate too many ...
1
vote
Accepted
Sorting functions by the number of conditions needed for a random dataset to be described using them?
If you want to characterize functions, you probably need something like Kolmogorov complexity: the length of the shortest program that outputs the sequence. Unfortunately, it's defined only up to a constant (...
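Since Kolmogorov complexity itself is uncomputable, a common practical proxy is the compressed length of the sequence. A minimal sketch, using zlib's compressed size as the stand-in (this proxy is my addition, not part of the original answer):

```python
import random
import zlib

def compressed_size(s: bytes) -> int:
    # Length of the zlib-compressed representation: a crude,
    # computable stand-in for Kolmogorov complexity.
    return len(zlib.compress(s, 9))

# A highly regular sequence compresses to almost nothing...
structured = b"01" * 500

# ...while a pseudo-random byte string of the same length barely shrinks.
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(1000))
```

Comparing `compressed_size(structured)` with `compressed_size(noisy)` gives an (uncalibrated) ordering of the two sequences by apparent complexity.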
1
vote
Accepted
Solve the Soft SVM Dual Problem with L1 Regularization
The problem is formulated as:
$$
\begin{align*}
\arg \min_{\boldsymbol{x}} \quad & \frac{1}{2} \boldsymbol{x}^{T} \boldsymbol{K} \boldsymbol{x} - \boldsymbol{x}^{T} \boldsymbol{y} + \varepsilon {\...
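The excerpt cuts off mid-formula; assuming the truncated term is an L1 penalty $\varepsilon \left\| \boldsymbol{x} \right\|_{1}$, as the title's "L1 Regularization" suggests, one standard solver for this composite objective is proximal gradient descent (ISTA) with soft thresholding. A hedged sketch on a random PSD matrix (the data, tolerance, and step-size choice are illustrative assumptions):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(K, y, eps, n_iter=500):
    # Proximal gradient (ISTA) for
    #   min_x 0.5 x^T K x - x^T y + eps * ||x||_1,  with K symmetric PSD.
    L = np.linalg.eigvalsh(K).max()       # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros_like(y)
    for _ in range(n_iter):
        grad = K @ x - y                  # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * eps)
    return x

# Tiny demo with a random, well-conditioned PSD "kernel" matrix.
rng = np.random.default_rng(1)
B = rng.normal(size=(20, 20))
K = B @ B.T + 20 * np.eye(20)
y = rng.normal(size=20)
x_star = ista(K, y, eps=0.1)
```

At the optimum, the subgradient condition forces every coordinate of $\boldsymbol{K} \boldsymbol{x} - \boldsymbol{y}$ to lie within $[-\varepsilon, \varepsilon]$, which is an easy correctness check.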
1
vote
Sigmoid vs heaviside step function
$σ(0)=0.5$ and $σ'(x)>0$ for every real $x$, so $σ(x)>0.5$ implies $x>0$; and if $x>0$ then $H(x)=1$. Note that if $x=0$ then $H(x)=σ(x)=0.5$
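The equivalence of thresholding the sigmoid at $0.5$ and applying the step function can be checked numerically; a small sketch, using the convention $H(0)=0.5$ as in the answer:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def heaviside(x):
    # Convention H(0) = 0.5, matching sigmoid(0) = 0.5
    return 0.5 if x == 0 else (1.0 if x > 0 else 0.0)

# sigmoid(x) > 0.5 exactly when x > 0, so thresholding the sigmoid
# at 0.5 reproduces the step function on every test point.
xs = [-3.0, -0.1, 0.0, 0.1, 3.0]
agree = all((sigmoid(x) > 0.5) == (heaviside(x) == 1.0) for x in xs)
```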
Related Tags
machine-learning × 3358
statistics × 659
linear-algebra × 457
probability × 450
optimization × 402
neural-networks × 286
regression × 193
convex-optimization × 159
probability-theory × 153
gradient-descent × 150
calculus × 149
matrices × 137
bayesian × 110
probability-distributions × 107
linear-regression × 107
derivatives × 105
statistical-inference × 100
multivariable-calculus × 83
algorithms × 83
functional-analysis × 69
reference-request × 69
logistic-regression × 69
real-analysis × 66
normal-distribution × 66
data-analysis × 62