
All Questions

1 vote
0 answers
33 views

From a constraint satisfaction problem (CSP) to a Sudoku grid [closed]

One of the existing methods of solving a Sudoku grid is via constraint satisfaction (CSP), but can we do the inverse, i.e. convert a CSP problem into a Sudoku grid and then solve it?
youssef Lmourabite
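The forward direction mentioned in the excerpt is easy to make concrete. Below is a minimal, illustrative sketch (plain Python, names are my own) of encoding a Sudoku grid as a CSP: the variables are the 81 cells, the domains are the digits 1–9, and the constraints are all-different groups over rows, columns, and 3×3 boxes. Whether an arbitrary CSP can be mapped back onto such a grid is the open part of the question.

```python
# Sketch: Sudoku as a CSP (variables, domains, all-different constraint groups).
from itertools import product

def sudoku_csp():
    variables = list(product(range(9), range(9)))          # (row, col) cells
    domains = {v: set(range(1, 10)) for v in variables}    # candidate digits 1..9
    constraints = []                                        # groups that must be all-different
    for i in range(9):
        constraints.append([(i, j) for j in range(9)])      # row i
        constraints.append([(j, i) for j in range(9)])      # column i
    for bi, bj in product(range(3), repeat=2):               # 3x3 boxes
        constraints.append([(3 * bi + di, 3 * bj + dj)
                            for di, dj in product(range(3), repeat=2)])
    return variables, domains, constraints
```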
1 vote
0 answers
36 views

Interpolation in convex hull

I'm reading a paper, Learning in High Dimension Always Amounts to Extrapolation, that contains a result I don't understand. It also gives this theorem, which I do understand: Theorem 1: (Bárány and ...
Christopher D'Arcy
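In that paper, interpolation is (roughly) membership of a new sample in the convex hull of the training set, and hull membership can be checked as a small linear feasibility problem. A minimal sketch, assuming scipy is available; the function and variable names are mine, not the paper's.

```python
# Sketch: is x a convex combination of the rows of X?
# Find w >= 0 with sum(w) = 1 and X^T w = x.
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(X, x):
    """X: (m, d) array of samples; x: (d,) query point."""
    m = X.shape[0]
    A_eq = np.vstack([X.T, np.ones((1, m))])      # enforce X^T w = x and sum(w) = 1
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return res.success

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
print(in_convex_hull(X, X.mean(axis=0)))      # the mean is always inside: True
print(in_convex_hull(X, X.max(axis=0) + 1))   # a point beyond every sample: False
```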
1 vote
0 answers
127 views

Matrix valued word embeddings for natural language processing

In natural language processing, an area of machine learning, one would like to represent words as objects that can easily be understood and manipulated using machine learning. A word embedding is a ...
Joseph Van Name
1 vote
0 answers
98 views

Problem corrections for "Algebra, Topology, Differential Calculus, and Optimization Theory For Computer Science and Machine Learning" [closed]

Where can I find corrections to the problems in the book "Algebra, Topology, Differential Calculus, and Optimization Theory For Computer Science and Machine Learning"?
zdo0x0
2 votes
0 answers
86 views

Nuclear norm minimization of convolution matrix (circulant matrix) with fast Fourier transform

I am reading the paper Recovery of Future Data via Convolution Nuclear Norm Minimization, which defines a convolution matrix. Given any vector $\boldsymbol{x}=(x_1,x_2,\ldots,x_n)^...
Xinyu Chen
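For the circulant case, the FFT diagonalization makes both the matrix-vector product and the nuclear norm cheap to evaluate. A minimal numpy sketch under my own convention (first column of the convolution matrix is $\boldsymbol{x}$, which may differ from the paper's indexing):

```python
# Sketch: circular convolution matrix C(x), its FFT diagonalization,
# and its nuclear norm from the DFT of x.
import numpy as np

def conv_matrix(x):
    n = len(x)
    # column k is x cyclically shifted down by k, so C[i, j] = x[(i - j) mod n]
    return np.column_stack([np.roll(x, k) for k in range(n)])

rng = np.random.default_rng(0)
x, y = rng.normal(size=7), rng.normal(size=7)

direct = conv_matrix(x) @ y
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real
print(np.allclose(direct, via_fft))   # True: C(x) y is the circular convolution of x and y

# The eigenvalues of C(x) are the DFT of x, so the nuclear norm (sum of
# singular values) is sum(|fft(x)|), computable in O(n log n).
print(np.isclose(np.abs(np.fft.fft(x)).sum(),
                 np.linalg.norm(conv_matrix(x), 'nuc')))   # True
```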
2 votes
0 answers
44 views

Combining SVD subspaces for low dimensional representations

Suppose we have matrix $A$ of size $N_t \times N_m$, containing $N_m$ measurements corrupted by some (e.g. Gaussian) noise. An SVD of this data $A = U_AS_A{V_A}^T$ can reveal the singular vectors $U_A$...
user2600239
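One common heuristic (not necessarily what the question ultimately needs) is to truncate each dataset's SVD and then orthonormalize the concatenated left singular bases, which gives a combined low-dimensional subspace spanning both. A minimal numpy sketch with made-up sizes:

```python
# Sketch: merge truncated SVD subspaces of two noisy data matrices.
import numpy as np

def truncated_basis(A, r):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r]                        # dominant left singular vectors

def merge_subspaces(UA, UB):
    # orthonormal basis for span(UA) + span(UB)
    Q, _ = np.linalg.qr(np.hstack([UA, UB]))
    return Q

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 30))
B = rng.normal(size=(200, 40))
U = merge_subspaces(truncated_basis(A, 5), truncated_basis(B, 5))
coords = U.T @ A                            # low-dimensional representation of A
print(U.shape, coords.shape)                # (200, 10) (10, 30)
```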
1 vote
0 answers
96 views

Converting an indexed equation to a matrix one

I am helping a friend with a project involving neural networks and he wants to convert this equation into matrix notation: $$w_{ij} = \sum_{n=1}^N\left[\sum_{i=1}^I(r_{in}-y_{in})v_{ih}\right](1-z_{hn}...
user3308874
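The question's own equation is cut off above, so the sketch below only illustrates the general recipe: a sum over a shared index becomes a matrix product, and np.einsum gives a direct way to check the index form against the matrix form. The array names echo the excerpt's symbols, but the shapes are my assumptions.

```python
# Sketch: an indexed sum over i versus its matrix form.
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(4, 6))   # r_{in}, shape I x N
Y = rng.normal(size=(4, 6))   # y_{in}
V = rng.normal(size=(4, 3))   # v_{ih}, shape I x H

# Index form: g_{hn} = sum_i (r_{in} - y_{in}) v_{ih}
index_form = np.einsum('in,ih->hn', R - Y, V)
# Matrix form: G = V^T (R - Y)
matrix_form = V.T @ (R - Y)
print(np.allclose(index_form, matrix_form))   # True
```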
8 votes
0 answers
319 views

Monotonicity of log determinant of Gaussian kernel matrix

Let \begin{equation} k({x},{y}) = \sigma \exp\left(-\frac{(x-y)^2}{2\theta^2}\right)\end{equation} be a squared-exponential (Gaussian) kernel, with $\sigma,\theta>0$. Consider, for a set of $N$ ...
Heinrich A
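A minimal numpy sketch of the object in question: build the squared-exponential kernel matrix on $N$ points and evaluate $\log\det K$ as the length-scale $\theta$ varies. The specific points and values of $\theta$ are just for illustration.

```python
# Sketch: log-determinant of a squared-exponential kernel matrix.
import numpy as np

def log_det_gaussian_kernel(x, sigma, theta):
    K = sigma * np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * theta ** 2))
    sign, logabsdet = np.linalg.slogdet(K)
    return logabsdet          # K is positive definite for distinct points, so sign == 1

x = np.linspace(0.0, 1.0, 8)
for theta in (0.05, 0.1, 0.5, 1.0):
    print(theta, log_det_gaussian_kernel(x, sigma=1.0, theta=theta))
```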
5 votes
1 answer
207 views

Hermite polynomial after rotation

When we consider the $n$-dimensional standard normal distribution, the orthogonal basis is $\{H_S(x)\}_{S}$ where $H_S(x) = \prod_{k=1}^n H_{s_k}(x_k)$. Here $H_*(x)$ is the normalized probabilist's ...
Pascalprimer
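A minimal sketch of evaluating the tensor-product basis from the excerpt, using numpy's probabilist's Hermite polynomials $He_n$ normalized by $\sqrt{n!}$; the multi-index and evaluation point are arbitrary.

```python
# Sketch: H_S(x) = prod_k h_{s_k}(x_k), with h_n(t) = He_n(t)/sqrt(n!).
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def h(n, t):
    """Normalized probabilist's Hermite polynomial h_n(t)."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return He.hermeval(t, coeffs) / math.sqrt(math.factorial(n))

def H_S(S, x):
    """Tensor-product basis function for multi-index S at point x."""
    return np.prod([h(s_k, x_k) for s_k, x_k in zip(S, x)])

print(H_S((2, 0, 1), np.array([0.3, -1.2, 0.7])))
```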
16 votes
1 answer
832 views

Are primes linearly separable?

Let $X_1,\cdots,X_n$ be finite subsets of some set $Z$. Then the symmetric difference metric space: $$d(X_i,X_j) = \sqrt{ |X_i|+|X_j|-2|X_i\cap X_j|}$$ can be embedded in Euclidean space. The value $|...
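A minimal sketch of the metric in the excerpt, evaluated on two small illustrative sets (the cut-off text presumably specifies which sets are attached to the primes):

```python
# Sketch: square-root symmetric-difference distance between finite sets.
import math

def d(Xi, Xj):
    return math.sqrt(len(Xi) + len(Xj) - 2 * len(Xi & Xj))

X1 = {2, 3, 5}
X2 = {3, 5, 7, 11}
print(d(X1, X2))        # sqrt(3 + 4 - 2*2) = sqrt(3)
```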
-2 votes
1 answer
137 views

Find the columns of matrix $A$ which form a basis of the column space of $A$ [closed]

We have a matrix $A$ whose rows are data records and whose columns are features. We would like to omit useless features such as zero or constant columns, duplicate columns, columns that are equal to ...
a4lBob
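One standard way to do this (a sketch, not necessarily robust to heavy noise) is to greedily keep each column that increases the numerical rank; the kept indices then form a basis of the column space.

```python
# Sketch: pick a subset of columns of A spanning its column space.
import numpy as np

def basis_columns(A, tol=1e-10):
    kept = []
    for j in range(A.shape[1]):
        candidate = kept + [j]
        # keep column j only if it is independent of the columns kept so far
        if np.linalg.matrix_rank(A[:, candidate], tol=tol) == len(candidate):
            kept.append(j)
    return kept

A = np.array([[1., 2., 3., 0.],
              [0., 1., 1., 0.],
              [1., 3., 4., 0.]])
print(basis_columns(A))   # [0, 1]  (column 2 = col 0 + col 1, column 3 is zero)
```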
0 votes
1 answer
108 views

General results regarding linear separability?

I'm reading up on the theory behind support vector machines and would like a good reference with some general results about linear separability. Specifically, questions like the following: Given two ...
Fred Byrd