
A full $n$-dimensional lattice $\Lambda$ is a discrete subgroup of $\mathbb{R}^n$ (equipped with some norm $\lVert \cdot \rVert$) containing $n$ linearly independent points. If $\Lambda = \{ A z : z\in \mathbb{Z}^n\}$ for some $A \in GL(n,\mathbb{R})$, we call the $n$ columns of $A$ a basis of $\Lambda$ (every full lattice has a basis). We call $n$ points $l_1,\dots,l_n \in \Lambda$ a minimal system if for $k\in\{1,\dots,n\},$ $$\lVert l_k \rVert = \min \{ \lVert l \rVert: l \in \Lambda \setminus \left< l_1,\dots,l_{k-1}\right>_\mathbb{R}\}.$$ Here we consider only the standard $2$-norm $\lVert x \rVert = \left< x,x\right>^\frac{1}{2}$. In two dimensions, every minimal system is also a basis. In five dimensions this is no longer true, as stated in the answer to this post. The lattice generated by $$ A = \begin{pmatrix} 1 & 0 & 0 & 0 & \frac{1}{2} \\ 0 & 1 & 0 & 0 & \frac{1}{2} \\ 0 & 0 & 1 & 0 & \frac{1}{2} \\ 0 & 0 & 0 & 1 & \frac{1}{2} \\ 0 & 0 & 0 & 0 & \frac{1}{2} \\ \end{pmatrix} $$ has the five standard unit vectors as a minimal system, but they do not form a basis of the lattice.
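As a quick sanity check (my own sketch, not part of the original post), the following Python snippet verifies both claims with exact rational arithmetic: every standard unit vector is an integer combination of the columns of $A$, yet $|\det A| = \frac12$, so $\mathbb{Z}^5$ sits inside the lattice as a sublattice of index $2$ and the unit vectors cannot be a basis.

```python
from fractions import Fraction

# Exact rationals: the columns of A are e_1..e_4 and v = (1/2, ..., 1/2).
n = 5
cols = [[Fraction(int(i == j)) for i in range(n)] for j in range(n - 1)]
cols.append([Fraction(1, 2)] * n)

def combine(coeffs):
    """Integer combination sum_j coeffs[j] * cols[j], hence a lattice point."""
    return [sum(c * col[i] for c, col in zip(coeffs, cols)) for i in range(n)]

# e_1..e_4 are columns of A; e_5 = 2v - e_1 - e_2 - e_3 - e_4 also lies in Lambda.
e5 = combine([-1, -1, -1, -1, 2])
assert e5 == [0, 0, 0, 0, 1]

def det(m):
    # Exact determinant by Laplace expansion (fine for a 5x5 matrix).
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

A = [[cols[j][i] for j in range(n)] for i in range(n)]  # columns -> matrix
index = Fraction(1) / abs(det(A))  # covol(Z^5) / covol(Lambda)
assert index == 2  # Z^5 is a proper sublattice of index 2, so not a basis
print("index of Z^5 in Lambda:", index)
```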

In four dimensions, the lattice generated by the basis $$ A = \begin{pmatrix} 1 & 0 & 0 & \frac{1}{2} \\ 0 & 1 & 0 & \frac{1}{2} \\ 0 & 0 & 1 & \frac{1}{2} \\ 0 & 0 & 0 & \frac{1}{2} \\ \end{pmatrix} $$ has both $A$ and the standard basis as minimal systems. This is not a stable property: if the last vector is perturbed slightly, the minimal system is unique (up to reflections) and again a basis. This leads me to suspect that this is an extremal case, and that the situation is as follows:
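A small check of the norm computation behind this example (my own sketch; it verifies only the lengths and the lattice membership, not minimality itself): all four columns of $A$ have squared $2$-norm $1$, the same as the standard unit vectors, and the unit vectors again lie in the lattice.

```python
from fractions import Fraction

n = 4
# Columns of A: e_1, e_2, e_3 and v = (1/2, 1/2, 1/2, 1/2).
cols = [[Fraction(int(i == j)) for i in range(n)] for j in range(n - 1)]
cols.append([Fraction(1, 2)] * n)

# All four generators have squared 2-norm 1 (note 4 * (1/2)^2 = 1),
# exactly the length of the standard unit vectors.
sq_norms = [sum(c * c for c in col) for col in cols]
assert sq_norms == [1, 1, 1, 1]

# e_4 = 2v - e_1 - e_2 - e_3 lies in the lattice, so Z^4 is a sublattice.
e4 = [sum(c * col[i] for c, col in zip([-1, -1, -1, 2], cols))
      for i in range(n)]
assert e4 == [0, 0, 0, 1]
print("squared norms:", sq_norms, "| e_4 =", e4)
```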

Every four-dimensional lattice has a minimal system that is also a lattice basis. Furthermore, the matrices generating lattices that admit a minimal system which is not a basis form only a set of measure zero.

Is that true?

  • Your definition of the minimal set needs some repair. We need to exclude $l_i, -l_i$ and of course $0$. (Seems like a pretty good question btw.) – user24142, Nov 4, 2019 at 9:33
  • @user24142 I think it's good. In the span $\left< l_1,\dots, l_{k-1}\right>$, the elements $0$, $l_1$ and $-l_1$ are all included, as well as any other linear combination. The empty span is canonically defined to be $\{0\}$. Or is there something else wrong with it? – Nov 4, 2019 at 16:45
  • Good call, just unfamiliar with that notation here. – user24142, Nov 5, 2019 at 5:18

1 Answer


Josef E. Greilhuber's conjecture is correct, and the "extremal case" he found is the only one up to scaling.

This follows from the following estimate:

Proposition. Let $l_1,\ldots,l_n$ be a $\bf Z$-basis for a lattice $\Lambda_0 \subset {\bf R}^n$; and let $x = \sum_{i=1}^n c_i l_i$ for some real $c_i$ with $0 \leq c_i \leq 1$ for each $i$. Then there exists $x' \in x + \Lambda_0$ such that $$ \| x' \|^2 \leq M := \sum_{i=1}^n c_i (1-c_i) \|l_i\|^2. $$ Moreover, if $\min_{x' \in x + \Lambda_0} \| x' \|^2 = M$ then each $c_i \in \{0, 1/2, 1\}$, and the $l_i$ with $c_i = 1/2$ are pairwise orthogonal.
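The Proposition can be spot-checked numerically. The sketch below (my own, on a hypothetical random instance) searches the coset $x + \Lambda_0$ over a small box of integer shifts and confirms that the minimum squared norm does not exceed $M$.

```python
import itertools
import random

random.seed(1)
n = 3
# Hypothetical random basis l_1..l_n (rows of L) and coefficients c_i in [0, 1].
L = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
c = [random.random() for _ in range(n)]

def sqnorm(v):
    return sum(t * t for t in v)

M = sum(ci * (1 - ci) * sqnorm(li) for ci, li in zip(c, L))

# Minimize ||x'||^2 over x' = sum_i (c_i + z_i) l_i for small integer shifts z;
# the Proposition promises a representative with squared norm <= M
# (it is found among z_i in {0, -1}, well inside this search box).
best = min(
    sqnorm([sum((c[i] + z[i]) * L[i][k] for i in range(n)) for k in range(n)])
    for z in itertools.product(range(-3, 4), repeat=n)
)
assert best <= M + 1e-12
print(f"min ||x'||^2 = {best:.6f} <= M = {M:.6f}")
```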

Assume this for now. If $l_1,\ldots,l_n$ form a minimal system in some lattice $\Lambda$ but generate only a proper sublattice $\Lambda_0$, then we may find some $x \in \Lambda$ that is not in $\Lambda_0$. Write $x = \sum_{i=1}^n c_i l_i$ for some $c_i \in \bf R$, not all integers; translating $x$ by a $\Lambda_0$ vector, we may assume that $0 \leq c_i \leq 1$ for each $i$. Then $\min_{x' \in x + \Lambda_0} \| x' \|^2 \geq \max_i \|l_i\|^2$. But by the Proposition, if $n \leq 4$ then $$ \min_{x' \in x + \Lambda_0} \| x' \|^2 \leq \frac14 \sum_{i=1}^n \|l_i\|^2 \leq \max_i \|l_i\|^2, $$ with equality if and only if $n=4$, each $c_i = 1/2$, and the $\|l_i\|$ are all equal. Therefore equality holds throughout, and $\Lambda_0$ is the hypercubical lattice ${\bf Z}^4$ scaled by the common value of $\|l_i\|$, while $\Lambda$ is generated by the columns of $$ \begin{pmatrix} 1 & 0 & 0 & \frac{1}{2} \\ 0 & 1 & 0 & \frac{1}{2} \\ 0 & 0 & 1 & \frac{1}{2} \\ 0 & 0 & 0 & \frac{1}{2} \end{pmatrix} $$ scaled by the same factor.

It remains to prove the Proposition. We show that $M$ is a weighted average of $\|x'\|^2$ over $2^n$ of the candidate vectors $x'$, whence at least one of them must have $\|x'\|^2 \leq M$.

We illustrate the technique with the critical special case that $c_i = 1/2$ for each $i$. Then we average over the $2^n$ vectors $x' = \sum_{i=1}^n \frac12 \epsilon_i l_i$ with each $\epsilon_i = 1$ or $-1$. Then $$ \|x'\|^2 = \frac14 \sum_{i=1}^n \sum_{j=1}^n \epsilon_i \epsilon_j (l_i,l_j). $$ Each of the $n$ terms with $i=j$ contributes $\frac14 \|l_i\|^2$, while each of the cross-terms with $i \neq j$ averages to zero. Hence the average of $\|x'\|^2$ is $\frac14 \sum_{i=1}^n \|l_i\|^2$, which is the value of $M$ in this case.

In general we average over the $2^n$ vectors $x' = \sum_{i=1}^n c'_i l_i$ where each $c'_i$ is either $c_i$ or $c_i-1$, taken with weights $w(x') = \prod_{i=1}^n (1-|c'_i|)$. Each factor $1-|c'_i|$ is $1-c_i$ or $c_i$ respectively, so $\sum_{x'} w(x') = 1$ and each $w(x') \geq 0$. Moreover, for each $i$ the weighted average of the $l_i$ coefficients vanishes, and the weighted average of their squares is $$ (1-c_i) c_i^2 + c_i (1-c_i)^2 = c_i (1-c_i). $$ Therefore in the expansion of the weighted sum $$ \sum_{x'} w(x') \|x'\|^2 = \sum_{x'} w(x') \sum_{i=1}^n \sum_{j=1}^n c'_i c'_j (l_i,l_j) $$ each $i=j$ term contributes $c_i (1-c_i) \|l_i\|^2$ and each $i \neq j$ cross-term contributes zero. Hence the weighted sum is $\sum_{i=1}^n c_i (1-c_i) \|l_i\|^2 = M$, and the inequality is proved.
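The weighted-average identity at the heart of this step is easy to confirm numerically. Here is my own sketch on a hypothetical random instance: the sum of $w(x') \|x'\|^2$ over the $2^n$ choices $c'_i \in \{c_i, c_i - 1\}$ reproduces $M$ up to floating-point error.

```python
import itertools
import random

random.seed(0)
n = 4
# Hypothetical random basis l_1..l_n (rows of L) and coefficients c_i in [0, 1].
L = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
c = [random.random() for _ in range(n)]

def sqnorm(v):
    return sum(t * t for t in v)

M = sum(ci * (1 - ci) * sqnorm(li) for ci, li in zip(c, L))

# Weighted average of ||x'||^2 over the 2^n vectors with c'_i in {c_i, c_i - 1},
# each taken with weight w(x') = prod_i (1 - |c'_i|).
total = 0.0
for bits in itertools.product([0, 1], repeat=n):
    cp = [ci - b for ci, b in zip(c, bits)]
    w = 1.0
    for t in cp:
        w *= 1.0 - abs(t)
    xp = [sum(cp[i] * L[i][k] for i in range(n)) for k in range(n)]
    total += w * sqnorm(xp)

assert abs(total - M) < 1e-9  # the weighted average equals M
print(f"weighted average = {total:.9f}, M = {M:.9f}")
```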

In the case of equality, $c_i (1-c_i) = \min(c_i^2, (1-c_i)^2)$ for each $i$, and all $x'$ of nonzero weight have the same norm. The first condition implies that each $c_i \in \{0, 1/2, 1\}$, and the second quickly forces the orthogonality of any two $l_i$ for which $c_i = 1/2$. $\Box$

