[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,] 1.0 -0.5 -0.5 -0.5 -0.5 -0.5 -0.5 -0.5
[2,] -0.5 1.0 -0.5 -0.5 -0.5 -0.5 -0.5 -0.5
[3,] -0.5 -0.5 1.0 -0.5 -0.5 -0.5 -0.5 -0.5
[4,] -0.5 -0.5 -0.5 1.0 -0.5 -0.5 -0.5 -0.5
[5,] -0.5 -0.5 -0.5 -0.5 1.0 -0.5 -0.5 -0.5
[6,] -0.5 -0.5 -0.5 -0.5 -0.5 1.0 -0.5 -0.5
[7,] -0.5 -0.5 -0.5 -0.5 -0.5 -0.5 1.0 -0.5
[8,] -0.5 -0.5 -0.5 -0.5 -0.5 -0.5 -0.5 1.0
I know this because the matrix has a negative eigenvalue, and a valid variance-covariance matrix must be positive semi-definite.
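For reference, here is how I verified the negative eigenvalue (a NumPy sketch; the equivalent R call would be eigen()):

```python
import numpy as np

n = 8
rho = -0.5
# equicorrelation matrix: 1 on the diagonal, rho everywhere else
S = np.full((n, n), rho)
np.fill_diagonal(S, 1.0)

# eigvalsh returns eigenvalues of a symmetric matrix in ascending order
eigvals = np.linalg.eigvalsh(S)
print(eigvals.min())  # -2.5, so the matrix is not positive semi-definite
```

The eigenvalues come out as 1 - rho = 1.5 with multiplicity 7 and 1 + 7*rho = -2.5 with multiplicity 1, so the single negative eigenvalue is exactly 1 + (n-1)rho.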
My thinking was to set the variances to one, so that the correlations equal the covariances and each off-diagonal entry is -0.5.
Is there something in the theory I am missing? I understand that the matrix is not positive semi-definite and how to show this, but I am more curious which probability/statistics assumptions it violates.
I tried to generate MVN data with this variance-covariance structure, realized it wasn't positive semi-definite, and then became curious what was inherently wrong with the matrix.
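For completeness, this is roughly how the generation attempt surfaces the problem (a NumPy sketch; passing check_valid="raise" makes the sampler reject the matrix explicitly rather than just warn):

```python
import numpy as np

n = 8
# the proposed variance-covariance matrix: 1 on the diagonal, -0.5 elsewhere
S = np.full((n, n), -0.5)
np.fill_diagonal(S, 1.0)

try:
    # ask NumPy to validate the covariance before sampling
    np.random.multivariate_normal(np.zeros(n), S, size=10, check_valid="raise")
except ValueError as e:
    print("rejected:", e)  # covariance is not positive semi-definite
```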