LatMRG Guide  1.0
A software package to test and search for new linear congruential random number generators
Theory on Lattices

This is the theory page available in LatticeTester.

This page covers the concept of lattices as well as a few algorithms that are central to LatMRG.

Lattices on the Integers

Definitions

A \(t\)-dimensional lattice in the real space \(\mathbb{R}^t\) is a discrete additive subgroup of \(\mathbb{R}^t\) that can be expressed as

\begin{equation} L_t = \left\{\mathbf{v} = \sum_{j=1}^t z_j\mathbf{v}_j\mid \mbox{ each } z_j\in\mathbb{Z}\right\} = \oplus_{j=1}^t \mathbb{Z} \mathbf{v}_j, \label {eq:lattice} \end{equation}

where \(t\) is a positive integer, and \(\mathbf{v}_1,\dots,\mathbf{v}_t\) are linearly independent vectors in \(\mathbb{R}^t\) which form a basis of the lattice. The matrix \(\mathbf{V}\), whose \(i\)th row is \(\mathbf{v}_i\), is the corresponding generator matrix of \(L_t\). A comprehensive treatment of such lattices can be found in [4]. The algorithms implemented in LatticeTester need to be exact most of the time. To meet this requirement, the library only considers lattices with \(\mathbf{v}_1,\dots,\mathbf{v}_t \in \mathbb{Z}^t\). Note that if the basis vectors are in \(\mathbb{Q}^t\) instead of \(\mathbb{Z}^t\), it is possible to rescale the lattice by multiplying all the vectors it contains by a suitable integer so that the resulting lattice is in \(\mathbb{Z}^t\). The details of this conversion are left to the user, as it is not implemented.
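As a small illustrative example (not taken from the library), consider the 2-dimensional integer lattice with basis \(\mathbf{v}_1 = (1, 3)\) and \(\mathbf{v}_2 = (0, 5)\), so that

\[ \mathbf{V} = \begin{pmatrix} 1 & 3 \\ 0 & 5 \end{pmatrix}, \qquad \det(\mathbf{V}) = 5. \]

Every lattice point is an integer combination of the rows of \(\mathbf{V}\); for instance, \(\mathbf{z} = (2, -1)\) gives \(2\mathbf{v}_1 - \mathbf{v}_2 = (2, 1)\).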

The dual lattice \(L_t^*\) of \(L_t\) is defined as

\[ L_t^* = \{\mathbf{h} \in \mathbb{R}^t \mid \forall \mathbf{v} \in L_t, \mathbf{h}\cdot\mathbf{v} \in \mathbb{Z} \}. \]

The dual of a given basis \(\mathbf{v}_1,\dots,\mathbf{v}_t\) is the set of vectors \(\mathbf{w}_1,\dots,\mathbf{w}_t\) in \(\mathbb{R}^t\) such that \(\mathbf{v}_i\cdot\mathbf{w}_j = \delta_{ij}\), where \(\delta_{ij}=1\) if \(i=j\), and \(\delta_{ij}=0\) otherwise. It forms a basis of the dual lattice. These \(\mathbf{w}_j\)'s are the columns of the dual basis generator matrix \(\mathbf{W} = \mathbf{V}^{-1}\), the inverse of the generator matrix \(\mathbf{V}\).

LatticeTester stores and manipulates lattices only through a basis and, possibly, a dual basis, both in integer coordinates. If a lattice has an integer basis, its dual basis is usually fractional. This problem is solved by storing a rescaled version of the dual, the m-dual \(mL_t^*\), where \(m\) is chosen so that \(m\mathbf{W}\) is an integer matrix. Such an \(m\) is usually determined by the application in which the lattices arise, and it is the responsibility of the user to choose it. In this software and its documentation, we assume that the appropriate rescaling has already been done and we always work directly with a lattice \(\Lambda_t \subset \mathbb{Z}^t\) and its m-dual \(\Lambda_t^*\subset\mathbb{Z}^t\). The advantage of this framework is that all vector coordinates are integers and can easily be represented exactly on the computer.
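Continuing the small example above with \(m = 5\), the dual basis generator matrix

\[ \mathbf{W} = \mathbf{V}^{-1} = \frac{1}{5}\begin{pmatrix} 5 & -3 \\ 0 & 1 \end{pmatrix} \]

is fractional, but \(m\mathbf{W}\) is an integer matrix. Its columns \((5, 0)\) and \((-3, 1)\) form a basis of the m-dual \(\Lambda_2^*\) and satisfy \(\mathbf{v}_i \cdot (m\mathbf{w}_j) = m\,\delta_{ij}\); for instance, \(\mathbf{v}_2 \cdot (m\mathbf{w}_2) = 0\cdot(-3) + 5\cdot 1 = 5 = m\).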

There are a few more useful definitions needed to understand this software. The determinant of the matrix \(\mathbf{V}\) is equal to the volume of the fundamental parallelepiped \(\{\mathbf{v} = \lambda_1\mathbf{v}_1 + \cdots + \lambda_t\mathbf{v}_t \mid 0\le \lambda_i\le 1\) for \(1\le i\le t\}\). It is independent of the choice of basis and is called the determinant or volume of \(L_t\). The quantity \(n = 1/\det(L_t) = 1/\det(\mathbf{V}) = \det(\mathbf{V}^{-1})\) is called the density of \(L_t\); it represents the average number of points per unit of volume in the lattice. When \(L_t\) contains \(\mathbb{Z}^t\), the density \(n\) is an integer equal to the cardinality of the point set \(L_t \cap [0,1)^t\).

For a given lattice \(L_t\) and a subset of coordinates \(I = \{i_1,\dots,i_d\} \subseteq \{1,\dots,t\}\), denote by \(L_t(I)\) the projection of \(L_t\) over the \(d\)-dimensional subspace determined by the coordinates in \(I\). This projection is also a lattice, whose density divides that of \(L_t\). There are exactly \(\det(L_t(I))/\det(L_t)\) points of \(L_t\) that are projected onto each point of \(L_t(I)\). In group theory language, \(L_t(I)\) corresponds to a coset of \(L_t\).

A shifted lattice is a lattice \(L_t\) shifted by a constant vector \(\mathbf{v}_0\not\in L_t\), i.e., a point set of the form \(L'_t = \{\mathbf{v}+\mathbf{v}_0 : \mathbf{v} \in L_t\}\). The uniformity of a shifted lattice \(L'_t\) can be analyzed by subtracting the shift and analyzing the (unshifted) lattice \(L_t\).

Measures of Uniformity

Shortest Nonzero Lattice Vector and Spectral Test

The Euclidean length \(\ell\) of the shortest non-zero vector in the lattice \(\Lambda_t\) corresponds to the distance between the nearest lattice points. For a given lattice density, a shorter \(\ell\) means that the points of the lattice are packed more tightly on fewer parallel hyperplanes, which is undesirable. As counter-intuitive as it may seem, the uniformity of the lattice points is maximized when the length of the shortest vector is maximized. This motivates taking \(\ell\) as a measure of uniformity, which we want to maximize.

To turn \(\ell\) into a measure of uniformity, we need to rescale it so that it can be compared between lattices with different densities. To do this, we draw a parallel between lattices and sphere packings. A lattice can be seen as a packing of non-overlapping spheres of radius \(\ell/2\) centered on every lattice point. [4] provides the theoretical foundation needed to study this packing. It turns out that the proportion of space covered by these spheres is a good indicator of the uniformity of the points: for a fixed density, a larger covered proportion corresponds to a longer \(\ell\) and a more evenly spread point set.

Using what is available in [4], it is also possible to provide bounds on the length \(\ell\). Let \(V_t\) be the volume of a sphere in dimension \(t\). Then the proportion of space taken by spheres of radius \(\ell/2\) centered on each point of the lattice is

\[\Delta_t = \frac{V_t \ell^t n}{2^t}.\]

With this equation we can rewrite \(\ell\) as

\[\ell = 2 \left(\frac{\Delta_t}{V_t n}\right)^{1/t} \leq 2\left(\frac{\Delta_t^*}{V_t n}\right)^{1/t}.\]

where \(\Delta_t^*\) is an upper bound on \(\Delta_t\). [4] provides various values of the Hermite constants ([4], [30]) \(\gamma_t = 4(\Delta_t^*/V_t)^{2/t}\) that LatticeTester uses in subclasses of the Normalizer class to bound \(\ell\) as \(\ell \leq \gamma_t^{1/2} \det(L_t)^{1/t}\).
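As a concrete illustration of this bound, the following minimal sketch (which bypasses the library's Normalizer classes) evaluates \(\gamma_t^{1/2} \det(L_t)^{1/t}\) with the exactly known Hermite constants for \(t \leq 8\); the function names and the determinant value are ours, purely for illustration.

    #include <cmath>
    #include <cstdio>

    // Exactly known Hermite constants gamma_t for t = 1, ..., 8.
    // For larger t only bounds are known; LatticeTester's Normalizer
    // subclasses provide several choices of such bounds.
    static const double gammaT[9] = {0.0, 1.0,
                                     1.1547005384,   // 2/sqrt(3)
                                     1.2599210499,   // 2^(1/3)
                                     1.4142135624,   // sqrt(2)
                                     1.5157165665,   // 8^(1/5)
                                     1.6653663554,   // (64/3)^(1/6)
                                     1.8114473285,   // 64^(1/7)
                                     2.0};

    // Upper bound on the shortest-vector length: ell <= gamma_t^(1/2) * det(L_t)^(1/t).
    double shortestVectorBound(int t, double detLt) {
        return std::sqrt(gammaT[t]) * std::pow(detLt, 1.0 / t);
    }

    int main() {
        // Hypothetical lattice of determinant 1009 in dimension 4.
        std::printf("ell <= %.6f\n", shortestVectorBound(4, 1009.0));
        return 0;
    }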

Using these bounds, it is possible to build figures of merit: values scaled between 0 and 1 that make it easy to compare lattices with different densities \(n\) under a criterion specified beforehand. LatticeTester does not, however, provide figures of merit by itself, so this topic is not covered in more depth here.

The Spectral Test

The work previously done with \(\Lambda_t\) can also be applied to the shortest non-zero vector of \(\Lambda_t^*\), the m-dual of \(\Lambda_t\). The normalization \(\ell(\Lambda_t^*)/\ell^*\), with the upper bound \(\ell^*\), is called the spectral test and is a widely used computation to assess the uniformity of a lattice. Geometrically, \(1/\ell(\Lambda_t^*)\) is the distance between the successive hyperplanes of the family of parallel hyperplanes that covers all the lattice points and whose hyperplanes are farthest apart. A large value of \(1/\ell(\Lambda_t^*)\) means large empty slices of space without points, which is undesirable when building a uniform lattice. The spectral test is a widely used figure of merit on lattices (see [14], [16], [24]), especially in the study of random number generators.

Finding the shortest non-zero vector is equivalent to the resolution of the following quadratic integer programming problem:

\begin{align} \text{Minimize } & \quad \Vert \mathbf{v} \Vert^2 = \mathbf{v}^\mathbf{t} \mathbf{v} \\ \text{Subject to } & \quad \mathbf{v} = \sum_{i=1}^t z_i \mathbf{v}_i, \\ & \quad 0 < \sum_{i=1}^t |z_i|, \qquad z_i \in \mathbb{Z}. \end{align}

Solving this problem requires integer programming techniques and an algorithm that ultimately amounts to an enumeration of candidate points. There exists a Branch-and-Bound (B&B) procedure targeting this specific problem, first introduced in [6], then improved in [7] and presented in [14]. It is implemented in LatticeTester. Like every B&B algorithm, it can take time exponential in \(t\). To speed up the computation, LatticeTester also provides other reduction algorithms that can be applied beforehand, so that the B&B starts from a basis of already short vectors and explores a much smaller search space.

We first present the three reduction algorithms available in LatticeTester before describing the B&B procedure in detail.

Lattice Reduction Algorithms

Pairwise reduction

Pairwise reduction (sometimes called Dieter reduction) is a long-known heuristic based on the Gram-Schmidt orthogonalization. It was first used on this exact problem by Dieter in [6].

Let us first recall and introduce our notation for the Gram-Schmidt orthogonalization. From the basis \(\{\mathbf{v}_1, \ldots, \mathbf{v}_t\}\), we can get an orthogonal basis \(\{\hat{\mathbf{v}}_1, \ldots, \hat{\mathbf{v}}_t\}\) by taking

\begin{align*} \hat{\mathbf{v}}_1 & = \mathbf{v}_1, \\ \hat{\mathbf{v}}_i & = \mathbf{v}_i - \sum_{j=1}^{i-1} \mu_{ij} \hat{\mathbf{v}}_j, & 1 < i \leq t, \\ \mu_{ij} & = \frac{\mathbf{v}_i \cdot \hat{\mathbf{v}}_j}{\hat{\mathbf{v}}_j \cdot \hat{\mathbf{v}}_j}, & 1 \leq j < i \leq t. \end{align*}
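The following is a minimal floating-point sketch of this orthogonalization, for illustration only (the library itself works with exact and multiprecision types); it simply computes the \(\mu_{ij}\) and the \(\hat{\mathbf{v}}_i\) as above.

    #include <cstddef>
    #include <vector>

    using Vec = std::vector<double>;

    static double dot(const Vec& a, const Vec& b) {
        double s = 0.0;
        for (std::size_t k = 0; k < a.size(); ++k) s += a[k] * b[k];
        return s;
    }

    // Given the basis v[0..t-1], fills vhat with the orthogonalized vectors and
    // mu[i][j] with the Gram-Schmidt coefficients mu_{ij} for j < i.
    void gramSchmidt(const std::vector<Vec>& v, std::vector<Vec>& vhat,
                     std::vector<std::vector<double>>& mu) {
        std::size_t t = v.size();
        vhat.assign(t, Vec());
        mu.assign(t, std::vector<double>(t, 0.0));
        for (std::size_t i = 0; i < t; ++i) {
            vhat[i] = v[i];
            for (std::size_t j = 0; j < i; ++j) {
                mu[i][j] = dot(v[i], vhat[j]) / dot(vhat[j], vhat[j]);
                for (std::size_t k = 0; k < vhat[i].size(); ++k)
                    vhat[i][k] -= mu[i][j] * vhat[j][k];
            }
        }
    }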

The heuristic consists in trying to reduce the length of a basis vector \(\mathbf{v}_i\) by subtracting from it \(q\) times another basis vector \(\mathbf{v}_j\), for some integer \(q\) and given indices \(i\not=j\). We want to choose \(q\) as a local minimum of \(\Vert \mathbf{v}_i - q \mathbf{v}_j\Vert\). This means that we want

\begin{align} \Vert \mathbf{v}_i - (q-1) \mathbf{v}_j \Vert^2 \geq \Vert \mathbf{v}_i - q \mathbf{v}_j \Vert^2 & \leq \Vert \mathbf{v}_i - (q+1) \mathbf{v}_j \Vert^2 \\ \mathbf{v}_j \cdot \mathbf{v}_j + 2(\mathbf{v}_i - q \mathbf{v}_j) \cdot \mathbf{v}_j \geq 0 & \leq \mathbf{v}_j \cdot \mathbf{v}_j - 2(\mathbf{v}_i - q \mathbf{v}_j) \cdot \mathbf{v}_j \\ -\mathbf{v}_j \cdot \mathbf{v}_j \leq 2(\mathbf{v}_i - q \mathbf{v}_j) \cdot \mathbf{v}_j & \leq \mathbf{v}_j \cdot \mathbf{v}_j \\ -\frac{1}{2} + \frac{\mathbf{v}_i \cdot \mathbf{v}_j}{\mathbf{v}_j \cdot \mathbf{v}_j} \leq q & \leq \frac{1}{2} + \frac{\mathbf{v}_i \cdot \mathbf{v}_j}{\mathbf{v}_j \cdot \mathbf{v}_j} \end{align}

The last line means that \(q = \lfloor 1/2 + \frac{\mathbf{v}_i \cdot \mathbf{v}_j}{\mathbf{v}_j\cdot \mathbf{v}_j}\rfloor\) is a suitable integer solution to the problem. By taking this \(q\), we also get \(\Vert \mathbf{v}_i - q \mathbf{v}_j \Vert \leq \Vert \mathbf{v}_i \Vert\).
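For illustration, a single pairwise reduction step can be sketched as follows (plain floating point rather than the library's exact types); vi and vj stand for \(\mathbf{v}_i\) and \(\mathbf{v}_j\).

    #include <cmath>
    #include <cstddef>
    #include <vector>

    using Vec = std::vector<double>;

    static double dot(const Vec& a, const Vec& b) {
        double s = 0.0;
        for (std::size_t k = 0; k < a.size(); ++k) s += a[k] * b[k];
        return s;
    }

    // One pairwise (Dieter) reduction step: replace v_i by v_i - q * v_j with
    // q = floor(1/2 + (v_i . v_j)/(v_j . v_j)), which never increases ||v_i||.
    void pairwiseReduce(Vec& vi, const Vec& vj) {
        double q = std::floor(0.5 + dot(vi, vj) / dot(vj, vj));
        if (q != 0.0)
            for (std::size_t k = 0; k < vi.size(); ++k) vi[k] -= q * vj[k];
    }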

As noted by Dieter, pairwise reductions can also be applied jointly on the primal and dual bases. The idea is simply that, every time a vector of one basis is modified, the other basis is updated accordingly so that \(\mathbf{V}\mathbf{W} = mI\) holds at all times. More details are available in [6].

Lenstra-Lenstra-Lovász reduction

Lenstra, Lenstra, and Lovász [27] have proposed a popular form of lattice basis reduction known as LLL reduction. Our software implements, or uses via a library implementation, a slightly modified version of this algorithm presented in [32].

For \(1/4 \leq \delta < 1\) (usually close to 1), we say that a basis is \(\delta\)-LLL reduced if

  1. \( |\mu_{ij}| \leq 1/2 \) for \(1 \leq j < i \leq t\) ;
  2. \(\delta \leq \frac{\Vert \hat{\mathbf{v}}_{i+1} + \mu_{i+1\,i} \hat{\mathbf{v}}_i \Vert^2}{\Vert \hat{\mathbf{v}}_i\Vert^2}\), for \(0 < i \leq t-1 \).

As stated by Victor Shoup in the NTL documentation, the conditions satisfied by an LLL-reduced basis do not have a clear geometric meaning, but a \(\delta\)-LLL reduced basis with its vectors \(\{\mathbf{v}_1, \ldots, \mathbf{v}_t\}\) ordered by length satisfies the following:

\[ (\delta-1/4)^{i-1} \leq \Vert\mathbf{v}_i\Vert^2 \lambda_i^{-2} \leq (\delta-1/4)^{1-t},\ 1 \leq i \leq t \]

with \(\lambda_i\) the \(i\)-th successive minimum of the lattice, that is, the length of the \(i\)-th shortest linearly independent non-zero lattice vector. This condition gives a relatively tight bound on the shortest vector in the basis after the \(\delta\)-LLL reduction. This vector is often much shorter than the one obtained with pairwise reduction and yields a faster B&B. It is also notable that, in large dimensions for which the B&B is impractical, this short vector can be used as an approximation of the shortest non-zero vector in the lattice.
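As an aside, readers who want to experiment outside the library can call NTL's floating-point LLL routine directly on an integer basis, as in the following minimal sketch; the basis below is hypothetical and the call goes through NTL rather than LatticeTester's own wrappers.

    #include <NTL/LLL.h>
    #include <NTL/mat_ZZ.h>
    #include <iostream>

    int main() {
        // Hypothetical 3-dimensional integer basis, one row per basis vector.
        NTL::mat_ZZ B;
        B.SetDims(3, 3);
        B[0][0] = 1; B[0][1] = 33;  B[0][2] = 91;
        B[1][0] = 0; B[1][1] = 101; B[1][2] = 0;
        B[2][0] = 0; B[2][1] = 0;   B[2][2] = 101;

        // Floating-point delta-LLL reduction with delta = 0.99; B is replaced
        // in place by a reduced basis of the same lattice, and the rank is returned.
        long rank = NTL::LLL_FP(B, 0.99);
        std::cout << "rank = " << rank << "\n" << B << "\n";
        return 0;
    }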

Block Korkine-Zolotarev Reduction

The final reduction algorithm available in LatticeTester is the Block Korkine-Zolotarev (BKZ) reduction. Its objective is to generalize the LLL algorithm while giving stronger bounds on the result. As it is, the LLL algorithm only considers a condition on basis vectors two at a time. BKZ, however, generalizes LLL by changing condition 2 to:

\[ \delta \Vert \hat{\mathbf{v}}_i \Vert^2 \leq \lambda_1(L_i(\mathbf{v}_1, \ldots, \mathbf{v}_{\min(i+\beta-1, t)}))^2,\ 1 \leq i \leq t-1, \]

where \(2 \leq \beta < t\) is called the block size, \(\lambda_1(L)\) is the length of the shortest non-zero vector in the lattice \(L\), and \(L_i(\mathbf{v}_1, \ldots, \mathbf{v}_n)\) is the lattice generated by the projections of \(\mathbf{v}_i, \ldots, \mathbf{v}_n\) orthogonally to \(\{\mathbf{v}_1, \ldots, \mathbf{v}_{i-1}\}\), which is a lattice of rank \(\min(\beta, t-i+1)\). A basis respecting this condition and condition 1 of \(\delta\)-LLL reduction is called \(\delta\)-BKZ reduced with block size \(\beta\).

BKZ reduction was introduced in [33] and refined to use variable precision in [32]. It turns out that BKZ reduction gives tighter bounds on the shortest vector obtained in the reduced basis: provided that \(\beta-1\) divides \(t-1\), Theorem 2.3 in [33] gives

\[ \lambda_1(L_t)^2 \leq \Vert \mathbf{v}_1 \Vert^2 \leq \alpha_\beta^{(t-1)/(\beta-1)} \lambda_1(L_t)^2 \]

as a bound on the first vector in the basis. Note that \(\alpha_\beta\) is a constant with \(\alpha_2 = \frac{4}{3}\) and generally \(\alpha_\beta \leq \beta^{1+\ln\beta}\). This implies that \(1 < \alpha_\beta^{1/(\beta-1)} \rightarrow 1\) as \(\beta\) increases. This is better than what can be achieved with LLL, which is

\[ \lambda_1(L_t)^2 \leq \Vert \mathbf{v}_1 \Vert^2 \leq \left(\frac{4}{3}\right)^{t-1} \lambda_1(L_t)^2. \]

Note that, in practice, these bounds differ slightly from the theory since the implemented algorithms use floating-point versions of \(\delta\)-LLL and \(\delta\)-BKZ for efficiency.

LatticeTester does not provide an implementation for the BKZ reduction algorithm and relies on the NTL library to perform it. We therefore do not provide pseudo-code of this algorithm.
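For completeness, here is a minimal sketch of the corresponding NTL call on an integer basis (one row per basis vector); the \(\delta\) and \(\beta\) values below are arbitrary illustrative choices, not defaults of the library.

    #include <NTL/LLL.h>
    #include <NTL/mat_ZZ.h>

    // BKZ-reduce the basis B in place with NTL's floating-point BKZ,
    // using delta = 0.99 and block size beta = 10; returns the rank.
    long bkzReduce(NTL::mat_ZZ& B) {
        return NTL::BKZ_FP(B, 0.99, 10);
    }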

A Quick Note on Precision

The previous sections may have been confusing because there are many variants of the presented algorithms. This is because the two algorithms cannot generally be performed in polynomial time, but weakening their conditions with the \(\delta\) factor gives much better bounds on execution times. LatticeTester implements and uses these weakened versions. Some theory that does not concern those versions is presented for comparison purposes only.

Shortest Vector Computation

We now address the problem of computing a shortest nonzero vector, of length \(\ell\), in the lattice \(\Lambda_t\). This problem amounts to finding integers \(z_1,\dots,z_t\), not all zero, such that the vector \(\mathbf{v} = z_1 \mathbf{v}_1 + \cdots + z_t \mathbf{v}_t\) is as short as possible. Trying all combinations of those \(z_j\)'s is definitely not an efficient option. For the Euclidean norm, we formulate this problem as a quadratic integer programming (optimization) problem with decision variables \(z_1,\dots, z_t\), and show how to solve it by a branch-and-bound (B&B) procedure. Following the ideas of [6] [7], we get the formulation we alluded to before:

\begin{align} \text{Minimize } & \quad \Vert \mathbf{v} \Vert^2 = \mathbf{v}^\mathbf{t} \mathbf{v} \\ \text{Subject to } & \quad \mathbf{v} = \sum_{i=1}^t z_i \mathbf{v}_i, \\ & \quad 0 < \sum_{i=1}^t |z_i|, \qquad z_i \in \mathbb{Z}. \end{align}

Suppose that the basis vectors \(\mathbf{v}_1,\dots,\mathbf{v}_t\) are ordered by increasing length and that our best solution so far is the vector \(\mathbf{v}_1\), with squared length \(\hat{\ell}^2 = \mathbf{v}_1^\mathbf{t} \mathbf{v}_1\). This \(\hat{\ell}\) is an upper bound on the length \(\ell\) of a shortest vector.

In the B&B algorithm, we fix successively \(z_t\), then \(z_{t-1}\), then \(z_{t-2}\), down to \(z_1\). Without any further constraints, this yields a (possibly infinite) exponentially growing tree. To avoid this, at each step, for any fixed values of \(z_t,\dots,z_{j+1}\), we compute a finite range (interval) of values of \(z_j\) such that all values outside that range cannot lead to a better solution than \(\hat{\ell}\). All values in the interval are tested to verify that there is no vector shorter than the one we currently know. When a shorter vector is found, \(\mathbf{v}_1\) is replaced by that new vector and the search continues. The search ends when all combinations of the \(z_j\) inside the intervals have been tested.

Computing an interval for \(z_j\) is the main challenge of this problem. We present the approach of [2] to get this interval. This is what is implemented in LatticeTester.

At the beginning of the B&B procedure, as in [1] [7] [12] [31], we compute a Cholesky decomposition [11] of the matrix of scalar products of the basis vectors, \(\mathbf{V}\mathbf{V}^t\), which is always positive definite:

\[ \mathbf{V} \mathbf{V}^t = \mathbf{U}^t \mathbf{U} \]

where \(\mathbf{U}\) is an upper triangular matrix:

\[ \mathbf{U} = \begin{pmatrix} u_{11} & \ldots & u_{1t} \\ \vdots & \ddots &\vdots \\ \mathbf{0} & \ldots & u_{tt} \end{pmatrix} \]
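For illustration, a plain floating-point Cholesky factorization of the Gram matrix \(\mathbf{V}\mathbf{V}^t\) into \(\mathbf{U}^t\mathbf{U}\) can be sketched as follows (the actual implementation may use other precisions and checks).

    #include <cmath>
    #include <cstddef>
    #include <vector>

    using Mat = std::vector<std::vector<double>>;

    // Cholesky factorization A = U^t U of a symmetric positive-definite matrix A
    // (here A is the Gram matrix V V^t), with U upper triangular.
    Mat choleskyUpper(const Mat& A) {
        std::size_t t = A.size();
        Mat U(t, std::vector<double>(t, 0.0));
        for (std::size_t i = 0; i < t; ++i) {
            double d = A[i][i];
            for (std::size_t k = 0; k < i; ++k) d -= U[k][i] * U[k][i];
            U[i][i] = std::sqrt(d);
            for (std::size_t j = i + 1; j < t; ++j) {
                double s = A[i][j];
                for (std::size_t k = 0; k < i; ++k) s -= U[k][i] * U[k][j];
                U[i][j] = s / U[i][i];
            }
        }
        return U;
    }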

Now, if \(\mathbf{z} = (z_1,\dots,z_t)\), we have a shortest-vector candidate \(\mathbf{v} = \mathbf{V}^t\mathbf{z}^t\). Writing \(r_j = \sum_{k=j+1}^t u_{jk} z_k\) and \(s_j = \sum_{k=j+1}^t \left(\sum_{l=k}^t u_{kl} z_l\right)^2\), we have [1] [7] [31]:

\begin{eqnarray*} \mathbf{v}^t\mathbf{v} &=& \mathbf{z}\mathbf{V} \mathbf{V}^t \mathbf{z}^t = \mathbf{z}\mathbf{U}^t \mathbf{U} \mathbf{z}^t = \sum_{k=1}^t \left(\sum_{l=k}^t u_{kl} z_l\right)^2 \\ &\ge& \left( u_{jj} z_j + \sum_{l=j+1}^t u_{jl} z_l\right)^2 + \sum_{k=j+1}^t \left(\sum_{l=k}^t u_{kl} z_l\right)^2 \\ &=& (u_{jj} z_j + r_j)^2 + s_{j} \;=\; s_{j-1}. \end{eqnarray*}

A better solution must satisfy \(\mathbf{v}^t\mathbf{v} < \mathbf{v}_1^t \mathbf{v}_1\), which implies \((u_{jj} z_j + r_j)^2 + s_j < \mathbf{v}_1^\mathbf{t} \mathbf{v}_1\). Isolating \(z_j\) in this inequality gives the following bounds:

\[ \left\lceil {-(\mathbf{v}_1^\mathbf{t} \mathbf{v}_1-s_j)^{1/2} - r_j\over u_{jj}}\right\rceil \;\le z_j\le\; \left\lfloor {(\mathbf{v}_1^\mathbf{t} \mathbf{v}_1-s_j)^{1/2} - r_j\over u_{jj}}\right\rfloor. \tag{bounds} \]

Note that to obtain these bounds, the knowledge of \(z_{j+1},\dots,z_t\) is needed, and that this strategy uses the continuous relaxation of the original problem: finding and applying those bounds is a cutting-plane method. These bounds mean that only the values of \(z_j\) inside this interval can lead to a shorter lattice vector, so we can restrict ourselves to the integers that lie in that interval when branching from the node \(\{z_t,\dots, z_{j+1}\}\). Note that \(s_t = r_t = 0\), so at the beginning the bounds on \(z_t\) are given by \(u_{tt}^2 z_t^2 \le \mathbf{v}_1^\mathbf{t} \mathbf{v}_1\).
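For illustration, the interval (bounds) can be computed as in the following sketch, where U is the Cholesky factor above, z holds the already fixed values \(z_{j+1},\dots,z_t\) (0-based indices), and bestSq is \(\mathbf{v}_1^\mathbf{t} \mathbf{v}_1\); the function name and layout are ours, not the library's.

    #include <cmath>
    #include <utility>
    #include <vector>

    // Candidate interval for z_j (see the (bounds) equation). Returns an empty
    // interval (low > high) when no value of z_j can improve on bestSq.
    // Purely illustrative; the library's B&B also handles exact arithmetic
    // and numerical safety margins.
    std::pair<long, long> boundsForZj(const std::vector<std::vector<double>>& U,
                                      const std::vector<long>& z, int j,
                                      double bestSq) {
        int t = static_cast<int>(U.size());
        double r = 0.0;                    // r_j = sum_{k > j} u_{jk} z_k
        for (int k = j + 1; k < t; ++k) r += U[j][k] * z[k];
        double s = 0.0;                    // s_j = sum_{k > j} (sum_{l >= k} u_{kl} z_l)^2
        for (int k = j + 1; k < t; ++k) {
            double partial = 0.0;
            for (int l = k; l < t; ++l) partial += U[k][l] * z[l];
            s += partial * partial;
        }
        if (s >= bestSq) return {1L, 0L};  // empty interval: cannot improve
        double root = std::sqrt(bestSq - s);
        long low  = static_cast<long>(std::ceil((-root - r) / U[j][j]));
        long high = static_cast<long>(std::floor((root - r) / U[j][j]));
        return {low, high};
    }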

Observe that if \(\mathbf{v} = \mathbf{z}\mathbf{V}\) is a lattice vector, then \(-\mathbf{v} = (-\mathbf{z})\mathbf{V}\) is also a lattice vector with the same norm. We can exploit this symmetry to cut the B&B tree in half, simply by considering only non-negative values of \(z_t\): any vector \(\mathbf{z}\) with \(z_t < 0\) has an equivalent vector \(-\mathbf{z}\) with \(z_t > 0\).
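Putting the pieces together, a bare-bones enumeration built on boundsForZj() from the previous sketch could look as follows; it only tracks the best squared length, does not record the minimizing vector, and omits the many refinements of the real implementation.

    #include <algorithm>
    #include <utility>
    #include <vector>

    // Fixes z_t, then z_{t-1}, ..., down to z_1 (0-based indices t-1 .. 0) and
    // keeps the best squared length found in bestSq. Only non-negative z_t are
    // explored, by the symmetry z -> -z. Initial call: z = (0,...,0),
    // enumerate(U, z, t - 1, bestSq) with bestSq = v_1^t v_1.
    void enumerate(const std::vector<std::vector<double>>& U, std::vector<long>& z,
                   int j, double& bestSq) {
        int t = static_cast<int>(U.size());
        std::pair<long, long> b = boundsForZj(U, z, j, bestSq);
        long low = b.first, high = b.second;
        if (j == t - 1) low = std::max(low, 0L);   // symmetry cut on z_t
        for (long val = low; val <= high; ++val) {
            z[j] = val;
            if (j > 0) {
                enumerate(U, z, j - 1, bestSq);
            } else {
                // Leaf node: squared length of v = z V equals z U^t U z^t.
                double sq = 0.0;
                bool nonzero = false;
                for (int k = 0; k < t; ++k) {
                    if (z[k] != 0) nonzero = true;
                    double partial = 0.0;
                    for (int l = k; l < t; ++l) partial += U[k][l] * z[l];
                    sq += partial * partial;
                }
                if (nonzero && sq < bestSq) bestSq = sq;  // shorter vector found
            }
        }
        z[j] = 0;  // undo this level's assignment before returning
    }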