Assuming a normal likelihood function is used, how can I prove that the maximum likelihood estimate of $\mu$ is actually a maximum (and not some other stationary point)?
The following is what I have done to derive the maximum likelihood estimate of $\mu$ for a multivariate normal distribution:
$$\tag{1}\text{Likelihood function}$$ $$f(X_1, X_2, \dots, X_n\mid\mu, \Sigma) = \prod_{j=1}^n \left\{\dfrac 1 {(2\pi)^{p/2} |\Sigma|^{1/2}} e ^{-(x_j-\mu)^{\text{T}}\Sigma^{-1}(x_j-\mu)/2} \right\}$$
$$\tag{2} \text{Log-likelihood}$$ $$\ln f(X_1, X_2, \dots, X_n\mid \mu, \Sigma) = \ln \prod_{j=1}^n \left\{\frac 1 {(2\pi)^{p/2}|\Sigma|^{1/2}} e ^{-(x_j-\mu)^\text{T}\Sigma^{-1}(x_j-\mu)/2} \right\}$$
$$= -\frac{np}{2} \ln 2\pi - \frac n 2 \ln |\Sigma| - \frac 1 2 \sum_{i=1}^n (x_i-\mu)^\text{T} \Sigma^{-1}(x_i-\mu) $$
$$\tag{3} \text{Differentiated the log-likelihood with respect to } \mu \text{ and set it to zero}$$ $$\frac{\partial \ln \ell(\mu, \Sigma)}{\partial \mu}=-\frac 1 2 \sum_{i=1}^n 2\Sigma^{-1}(\mu-x_i) = -\Sigma^{-1} \sum_{i=1}^n(\mu-x_i)=0 $$
Then I did some algebra to solve for $\mu$:
$$\Sigma^{-1}\sum_{i=1}^n(\mu-x_i) = 0$$
Premultiplying both sides by $\Sigma$ (which is nonsingular):
$$\sum_{i=1}^n(\mu-x_i) = n\mu-\sum_{i=1}^nx_i = 0$$
$$n\mu=\sum_{i=1}^nx_i$$
$$\mu=\frac 1 n \sum_{i=1}^nx_i$$
$$\mu^*_\text{MLE}=\frac 1 n \sum_{i=1}^n x_i$$
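As a numerical sanity check (not a proof), one can compare the log-likelihood at the sample mean against randomly perturbed values of $\mu$ and confirm the sample mean always wins. This is only a sketch: the data here are synthetic, the covariance matrix and perturbation scale are arbitrary choices, and it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
p, n = 3, 500
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])  # arbitrary positive-definite covariance
X = rng.multivariate_normal(mean=[1.0, -2.0, 0.5], cov=Sigma, size=n)

def log_lik(mu):
    # sum of log densities of the observations under N(mu, Sigma)
    return multivariate_normal.logpdf(X, mean=mu, cov=Sigma).sum()

mu_hat = X.mean(axis=0)  # the candidate MLE: the sample mean
ll_hat = log_lik(mu_hat)

# the log-likelihood at mu_hat should exceed it at any perturbed mu
for _ in range(100):
    mu_pert = mu_hat + rng.normal(scale=0.1, size=p)
    assert log_lik(mu_pert) < ll_hat
```

Of course, passing this check only shows $\bar x$ is a local winner against random alternatives; an actual proof still needs an argument such as a second-order condition.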