The March 2026 issue of the Annals of Mathematics contains a paper by Dinur, Evra, Livne, Lubotzky, and Mozes (Dinur et al. 2026) presenting an explicit construction of error-correcting codes with constant rate, constant relative distance, and local testability with constant locality and testability parameters. This result resolves a long-standing central open problem in the theory of error-correcting codes.
An error-correcting code is a method for protecting information transmitted through a noisy communication channel. As a simple example, suppose we want to transmit a message of length $n$ consisting of the symbols $0$ and $1$. One strategy is to replace each symbol with a block of $2t+1$ identical copies of that symbol. If at most $t$ symbols in each block are corrupted during transmission, the receiver can recover the original symbol by taking the majority value within the block. In this example, the length of the encoded message becomes $(2t+1)n$.
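The repetition strategy above can be sketched in a few lines of Python; the function names `encode` and `decode` are illustrative, not taken from the literature.

```python
# Repetition code sketch: each bit is replaced by a block of 2t+1 copies,
# and the receiver decodes each block by majority vote.

def encode(message: str, t: int) -> str:
    """Repeat each symbol 2t+1 times."""
    return "".join(bit * (2 * t + 1) for bit in message)

def decode(received: str, t: int) -> str:
    """Recover each symbol by taking the majority value within its block."""
    b = 2 * t + 1
    blocks = [received[i:i + b] for i in range(0, len(received), b)]
    return "".join("1" if block.count("1") > t else "0" for block in blocks)

codeword = encode("101", t=2)           # length becomes (2t+1)*n = 15
corrupted = "00111" + codeword[5:]      # flip 2 symbols in the first block
assert decode(corrupted, t=2) == "101"  # at most t=2 errors per block: recovered
```

Note that the rate of this code is $1/(2t+1)$, which tends to $0$ as the error tolerance grows; the point of the theory is to do much better.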
In general, an error-correcting code is an injective function $C \colon \{0,1\}^k \to \{0,1\}^n$ for some $k < n$, where $\{0,1\}^m$ denotes the set of binary strings of length $m$. Let $\mathcal{C} \subseteq \{0,1\}^n$ denote the image of $C$. The integer $n$ is called the length of the code, while the ratio $k/n$ is called the rate of the code.
For two strings $x, y \in \{0,1\}^n$, define their relative distance by $\delta(x, y) = \frac{1}{n} \left| \{\, i : x_i \neq y_i \,\} \right|$. The relative distance of the code is $\delta(\mathcal{C}) = \min_{x \neq y \in \mathcal{C}} \delta(x, y)$, that is, the minimal relative distance between two distinct codewords.
The receiver knows in advance that only strings from $\mathcal{C}$ may be transmitted. If the relative distance between any two codewords is at least $\delta$, and fewer than $\delta n/2$ symbols are corrupted during transmission, then the receiver can uniquely determine which codeword was sent by choosing the closest codeword to the received string. This discussion shows that desirable codes should simultaneously have high rate (allowing many messages to be encoded) and large relative distance (allowing many errors to be corrected).
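The definitions above translate directly into code. The sketch below computes relative distances, the relative distance of a code, and performs nearest-codeword decoding; the four-word code $\mathcal{C}$ used here is a toy example chosen for illustration.

```python
# Relative distance between strings, relative distance of a code, and
# decoding by choosing the closest codeword to the received string.
from itertools import combinations

def rel_dist(x: str, y: str) -> float:
    """delta(x, y): fraction of coordinates where x and y differ."""
    return sum(a != b for a, b in zip(x, y)) / len(x)

def code_distance(codewords) -> float:
    """delta(C): minimum relative distance over distinct codeword pairs."""
    return min(rel_dist(x, y) for x, y in combinations(codewords, 2))

def nearest_codeword(received: str, codewords) -> str:
    """Decode by choosing the closest codeword to the received string."""
    return min(codewords, key=lambda c: rel_dist(received, c))

C = ["000000", "111000", "000111", "111111"]      # a toy code of length n = 6
assert code_distance(C) == 0.5                    # any two codewords differ in >= 3 places
assert nearest_codeword("101000", C) == "111000"  # one corrupted symbol, decoded correctly
```

Here $\delta(\mathcal{C}) = 1/2$ and $n = 6$, so any single corrupted symbol (fewer than $\delta n/2 = 3/2$ errors) is corrected uniquely.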
A family of error-correcting codes $C_i$ with lengths $n_i \to \infty$ is called good if there exist universal constants $\rho > 0$ and $\delta > 0$ such that $\operatorname{rate}(C_i) \geq \rho$ and $\delta(\mathcal{C}_i) \geq \delta$ for all $i$. It is known (Blokh and Zyablov 1973) that such codes exist whenever $\rho$ and $\delta$ satisfy the Gilbert–Varshamov bound $\rho < 1 - H(\delta)$, where $H(x) = -x \log_2 x - (1 - x) \log_2 (1 - x)$ is the binary entropy function.
Another highly desirable property of an error-correcting code is local testability. Given an integer $q$ and a real number $\kappa > 0$, we say that a code with image $\mathcal{C} \subseteq \{0,1\}^n$ is $(q, \kappa)$-locally testable if there exists a probabilistic algorithm that reads at most $q$ bits of a string $x \in \{0,1\}^n$ and outputs accept or reject such that
If $x \in \mathcal{C}$, the algorithm outputs accept with probability $1$.
If $x \notin \mathcal{C}$, then the algorithm outputs reject with probability at least $\kappa \cdot \delta(x, \mathcal{C})$, where $\delta(x, \mathcal{C})$ is the relative distance from $x$ to the nearest codeword.
The rejection probability can be amplified arbitrarily by repeating the test multiple times. Therefore, if $\mathcal{C}$ is $(q, \kappa)$-locally testable with parameters $q$ and $\kappa$ independent of $n$, we can determine whether $x \in \mathcal{C}$ extremely quickly and with high confidence. The parameter $q$ is called the locality of the code, while $\kappa$ is called the testability parameter.
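A toy illustration of local testing: consider the code whose codewords consist of constant blocks of length $b$ (the repetition code from the opening example). A standard $2$-query test picks a random block and compares two random positions inside it. This classroom example is not the tester from the paper; it merely shows the shape of the definition.

```python
# Toy 2-query local test for the block-repetition code: pick a random block,
# read two random positions in it, and accept iff they agree.
import random

def local_test(x: str, b: int) -> bool:
    """Read only 2 bits of x; accept iff two random positions in a random block agree."""
    block = random.randrange(len(x) // b)
    i, j = random.sample(range(block * b, (block + 1) * b), 2)
    return x[i] == x[j]

random.seed(0)
codeword = "1" * 5 + "0" * 5   # a codeword for b = 5: accepted with probability 1
assert all(local_test(codeword, 5) for _ in range(1000))

corrupted = "11011" + "00100"  # not a codeword
rejections = sum(not local_test(corrupted, 5) for _ in range(10_000))
assert rejections > 1000       # a constant fraction of independent tests reject
```

Repeating the test amplifies the rejection probability: after $r$ independent runs, a string rejected with probability $p$ survives all runs with probability only $(1-p)^r$.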
A central open problem in the area has been whether there exist error-correcting codes that simultaneously have constant rate, constant relative distance, and local testability with constant locality and testability parameters. The recent paper (Dinur et al. 2026) answers this question affirmatively; the same result was obtained independently by (Panteleev and Kalachev 2022).
The theorem not only establishes the existence of codes with constant rate, relative distance, and locality, but also shows that the rate can be made arbitrarily close to $1$, while the codes remain explicitly constructible in polynomial time. Furthermore, combining the theorem with the main result of (Gopi et al. 2018) implies that the existence statement remains valid for any $\rho$ and $\delta$ satisfying the Gilbert–Varshamov condition $\rho < 1 - H(\delta)$, although a deterministic polynomial-time construction is not known throughout this entire range. Finally, one can choose parameters satisfying $q \leq p_1(1/\varepsilon)$ and $1/\kappa \leq p_2(1/\varepsilon)$ for some polynomials $p_1, p_2$, where $\varepsilon = 1 - H(\delta) - \rho$.