Yufei Zhao

December 2012

Sphere packing bounds via spherical codes

Henry Cohn and I just uploaded to the arXiv our paper “Sphere packing bounds via spherical codes.” [Update: Henry gave a wonderful talk about our paper at the IAS. A video of the talk is available here.]

What's the most space-efficient way to arrange a collection of identical balls? This is known as the sphere packing problem. It is a very difficult problem with a long and interesting history. In three dimensions it is known as the Kepler conjecture, which says that the face-centered cubic formation does best. This is basically how grocers stack oranges. The seemingly innocent Kepler conjecture turned out to be extremely difficult, and it was only solved in the late 1990s by Thomas Hales, who gave a very complex proof involving massive computer-aided calculations. I've included a few article links at the end of this blog post on the history and background of the sphere packing problem.

Now, what about sphere packing in higher dimensions? Unfortunately, we know very little about what happens beyond three dimensions. No proof of optimality is known in any higher dimension, and there are only a few dozen dimensions in which there are even plausible conjectures for the densest packing. In dimensions 8 and 24 there are upper bounds that are extremely close to the conjectured optima, thanks to the works of Cohn, Elkies, and Kumar [1,2,3] (dimensions 8 and 24 are somehow special because of the existence of highly symmetric lattices known as the {E_8} lattice in dimension 8 and the Leech lattice in dimension 24). However, in most dimensions we must be content with much cruder bounds.

The current state of the art for sphere packing density upper bounds is more or less as follows:

  • In dimensions 1, 2, and 3, the optimal density is known exactly. The result in dimension 3 is due to Hales.
  • In low dimensions, namely 4 to 42, Cohn and Elkies improved the previous record of Rogers, although there have been some recent improvements in dimensions 4, 5, 6, 7, and 9 by de Laat, de Oliveira Filho, and Vallentin using semidefinite programming techniques.
  • In all high dimensions, namely 43 and above, the best bounds are due to Kabatiansky and Levenshtein from 1978 and have not been improved since.

The purpose of our paper is fourfold:

  1. We give a small improvement over the 1978 bounds of Kabatiansky and Levenshtein in high dimensions by giving a simple modification of their geometric argument relating spherical codes to sphere packings.
  2. Kabatiansky and Levenshtein derived their bounds by first formulating a linear program for proving upper bounds on spherical codes. Cohn and Elkies found a more direct approach to bounding sphere packing densities, with no need to consider spherical codes (their bound is recalled after this list). However, despite its excellent performance in low dimensions, the asymptotic behavior of the Cohn-Elkies bound is far from obvious, and it has been unclear whether it improves on, or even matches, the Kabatiansky-Levenshtein bound asymptotically. In our paper, we show that in every dimension {n}, the Cohn-Elkies linear program can always match the Kabatiansky-Levenshtein approach. This further demonstrates the power of the linear programming bound for sphere packing.
  3. We prove an analogue of the Kabatiansky-Levenshtein bound in hyperbolic space. The resulting bound is exponentially better than the best bound previously known in hyperbolic space.
  4. We develop the theory of hyperbolic linear programming bounds and prove that they too subsume the Kabatiansky-Levenshtein approach. Packing in hyperbolic space is much more difficult to handle than in Euclidean space, primarily because the volume of an expanding ball grows exponentially with its radius (instead of polynomially, as in Euclidean space), so we cannot neglect boundary fluctuations. In fact, it is a non-trivial matter to even define the density of a packing (there are examples of packings for which any reasonable definition of density should yield two different answers).
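
For context, the Cohn-Elkies bound mentioned in item 2 can be stated as follows (we give its usual form, modulo mild technical hypotheses on {f}): if {f : {\mathbb R}^n \to {\mathbb R}} is an admissible function satisfying {\hat{f}(t) \geq 0} for all {t}, {\hat{f}(0) > 0}, and {f(x) \leq 0} whenever {|x| \geq r}, then every sphere packing in {{\mathbb R}^n} has density at most

{\displaystyle \frac{f(0)}{\hat{f}(0)} \cdot \mathrm{vol}\left(B^n_{r/2}\right).}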

Further reading

Some articles on the history and background of the sphere packing problem:

Popular audience:

General mathematical audience:


Energy-minimizing error-correcting codes

Henry Cohn and I just uploaded to the arXiv our paper “Energy-minimizing error-correcting codes.”

Geometry has long played a key role in coding theory, starting with the work of Hamming: binary codes can be viewed as packings of Hamming balls in a discrete cube. This framework provides a powerful analogy between discrete and continuous packing problems, which has been extensively developed and remains an active research topic. In our paper, we extend the analogy to a much broader relationship between coding theory and discrete models of physics. Of course, physics is related to coding theory in many ways, ranging from connections between spin glasses and codes to the statistical physics of belief propagation and other applications of graphical models to coding theory. Applications of physics to coding theory typically focus on the limit as the block length tends to infinity. Instead, in this paper we show that certain classical codes are exact ground states of natural physics models.
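
To make the packing analogy concrete: a binary code {C \subseteq \{0,1\}^n} with minimum distance {d = 2e + 1} corresponds to a packing of disjoint Hamming balls of radius {e}, and counting points immediately yields the classical Hamming bound

{\displaystyle |C| \leq \frac{2^n}{\sum_{i=0}^{e} \binom{n}{i}}.}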

In addition to extending the analogy with continuous packing problems, our results can be thought of as addressing a philosophical problem. Many classical codes—such as Hamming, Golay, or Reed-Solomon codes—remain very popular, despite the many other good codes that have been found. Why should this be? One obvious answer is that these codes are particularly beautiful and useful, especially given the simplicity of their constructions. Another is that they were discovered early in the development of coding theory and had a chance to cement their place in the canon. We propose a third explanation: a code is most useful if it is robust, in the sense that it optimizes not just one specific measure of quality, but rather a wide range of them simultaneously. We will prove that these classical codes have a rare form of robustness that we call universal optimality, based on an analogy with continuous optimization problems.

To see this analogy, recall the following classic problem: given {n} particles on a sphere interacting via some mutually repelling force (the Thomson model for electrons is a good example), in what configurations will the particles arrange themselves? The configuration of greatest interest is the ground state, the one possessing the least potential energy.

This then leads to the following fundamental problem in extremal geometry: given a metric space (e.g., a sphere) and an arbitrary potential function depending on the distance between pairs of points, for any positive integer {N}, how should {N} points arrange themselves to minimize the total potential energy of the system?
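
In symbols, writing {d(\cdot,\cdot)} for the metric and {f} for the potential function, we seek configurations {x_1, \dots, x_N} minimizing the total pair energy

{\displaystyle E_f(x_1, \dots, x_N) = \sum_{1 \leq i < j \leq N} f(d(x_i, x_j)).}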

One might expect that different choices of potential function would lead to different ground state configurations—and this is usually the case. However, there are some highly symmetric configurations that are remarkably robust, in the sense that they minimize not just a single potential function, but a broad class of potential functions.

What kind of potential functions should we be looking at? For example, the electric potential function in {{\mathbb R}^3} is {f(r) = 1/r}. The natural generalization to {{\mathbb R}^n} is {f(r) = 1/r^{n-2}}. However, we are also interested in more general potentials. Ideally, the potential function should possess certain properties. First, it should be decreasing with distance (since we’re interested in repelling forces; the ground state with an attractive force isn’t so interesting, as all particles would collapse into one point). The potential function should also be convex, since the effect of the force should diminish with distance. We can express these two conditions as {f' \leq 0} and {f'' \geq 0} respectively. Let us extrapolate from these two conditions and impose similar conditions on higher order derivatives, namely, {f''' \leq 0}, {f^{(4)} \geq 0}, etc. Note that all power laws {f(r) = r^{\alpha}} for {\alpha < 0} satisfy these conditions.
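
These alternating sign conditions can be packaged into a single statement: {(-1)^k f^{(k)} \geq 0} for every {k \geq 0}, which is precisely the complete monotonicity condition appearing below. To verify the claim about power laws, note that for {f(r) = r^{\alpha}} with {\alpha < 0},

{\displaystyle f^{(k)}(r) = \alpha (\alpha - 1) \cdots (\alpha - k + 1)\, r^{\alpha - k},}

a product of {k} negative factors times a positive quantity, so {f^{(k)}} indeed has sign {(-1)^k}.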

The work of Cohn and Kumar studied precisely this class of potential functions, which they called completely monotonic, for points on a sphere in {{\mathbb R}^n} (actually, for technical reasons, they consider functions {f} of the square of the distance between pairs of points). They studied configurations of points on a sphere that minimize every completely monotonic potential function. Examples of such configurations include the regular simplex, the cross polytope, the icosahedron, the 600-cell, the roots of the {E_8} lattice, and the minimal vectors of the Leech lattice. These beautiful and highly symmetric configurations all have very robust energy minimization properties.

In our current work, we analyze error-correcting codes from this perspective of energy minimization. Binary error-correcting codes can be thought of as sets of particles on a high-dimensional cube, and we ask which error-correcting codes minimize a broad class of potential energies. The class of potential functions we consider is the discrete analogue of the completely monotonic functions described above, with the conditions on derivatives replaced by conditions on successive finite differences.
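
As a concrete illustration (this is our own illustrative sketch, not code from the paper), here is a short Python check of the discrete condition: given the values of a potential at successive integer distances, it verifies that the {k}-th finite differences have sign {(-1)^k}:

```python
def finite_differences(values):
    """Forward differences of a sequence: (Delta f)(d) = f(d+1) - f(d)."""
    return [b - a for a, b in zip(values, values[1:])]

def is_completely_monotonic(values, tol=1e-12):
    """Discrete analogue of complete monotonicity: the k-th finite
    difference should have sign (-1)^k, i.e. f >= 0, Delta f <= 0,
    Delta^2 f >= 0, and so on."""
    sign = 1
    seq = list(values)
    while seq:
        if any(sign * v < -tol for v in seq):
            return False
        seq = finite_differences(seq)
        sign = -sign
    return True

# Example: an inverse power law sampled at distances 1..10
# (a hypothetical potential, just for illustration).
potential = [1.0 / d for d in range(1, 11)]
print(is_completely_monotonic(potential))  # True
```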

As mentioned at the beginning, we show that many classical codes have robust energy minimization properties, which translate into good performance according to a broad range of measures. For example, such codes minimize the probability of an undetected error on the binary symmetric channel, and this also has interesting consequences for other decoding error probabilities.

Our main technical tool for bounding energy is the linear program developed by Delsarte, which was originally used to bound the size of codes given their minimum distance. We will call a code LP universally optimal if its universal optimality follows from these bounds. One of our key results is that LP universal optimality behaves well under duality, thereby allowing us to apply our criteria to many classes of codes.
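
To give a flavor of Delsarte's linear program, here is a sketch of its classical form (bounding code size given minimum distance, as mentioned above, rather than the energy version developed in our paper). The variables are the entries {A_i} of the distance distribution of a hypothetical code, and the key constraints say that the Krawtchouk transform of {(A_i)} is nonnegative. A minimal Python sketch, assuming scipy is available (the function names are ours):

```python
import numpy as np
from math import comb
from scipy.optimize import linprog

def krawtchouk(n, k, x):
    """Krawtchouk polynomial K_k(x) for the binary Hamming scheme."""
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j)
               for j in range(k + 1))

def delsarte_lp_bound(n, d):
    """Delsarte's LP upper bound on the size of a binary code of
    length n with minimum distance d."""
    # Objective: maximize sum_i A_i (linprog minimizes, so negate).
    c = -np.ones(n + 1)
    # Krawtchouk positivity: sum_i K_k(i) A_i >= 0 for all k,
    # written as -K A <= 0 in linprog's A_ub convention.
    K = np.array([[krawtchouk(n, k, i) for i in range(n + 1)]
                  for k in range(n + 1)])
    # A_0 = 1; A_i = 0 for 0 < i < d; A_i >= 0 otherwise.
    bounds = [(1, 1)] + [(0, 0) if i < d else (0, None)
                         for i in range(1, n + 1)]
    res = linprog(c, A_ub=-K, b_ub=np.zeros(n + 1), bounds=bounds)
    return -res.fun

# Length 7, minimum distance 3: the LP bound should come out to 16,
# which the Hamming code of length 7 attains.
print(delsarte_lp_bound(7, 3))
```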

One result we found particularly surprising is that LP universally optimal codes continue to minimize energy even after we remove a single codeword. We know of no analogue of this property in the continuous setting. This property also has structural consequences: every LP universally optimal code is distance regular, i.e., for each distance, every codeword has the same number of codewords at that distance.
