Positive noncommutative polynomials are sums of squares

This post continues my series on favorite theorems of the 21st century. For an overview of the categories and earlier selections, see this post.

My choice for 2001 in Algebra is Helton’s resolution of the matrix analogue of Hilbert’s 17th problem.

Matrices occupy a central place in modern mathematics. They arise naturally in linear algebra, functional analysis, optimization, quantum mechanics, and many other areas. Because matrices generalize numbers in a natural way, mathematicians have long sought to extend classical results about real numbers and polynomials to settings where the variables are matrices rather than scalars. However, this seemingly modest generalization often leads to profound conceptual differences, largely because matrices do not commute: in general, $XY \neq YX$. As a result, problems that were solved long ago in the commutative setting may become significantly more subtle in the non-commutative one.

One classical problem where these issues appear is the problem of certifying that a polynomial is non-negative. Suppose $p(x_1, \dots, x_n)$ is a polynomial in several real variables. A natural strategy for proving that $p$ is non-negative for all real inputs is to attempt to express it as a sum of squares of other polynomials. Indeed, any expression of the form $q_1^2 + q_2^2 + \cdots + q_k^2$ is automatically non-negative. Such decompositions therefore provide explicit certificates of non-negativity and play an important role in real algebraic geometry and optimization.
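As a quick illustration (a minimal sketch of my own, not from the original post), the snippet below takes a toy polynomial written as a sum of two squares, multiplies it out by hand, and checks at random points that the expanded form agrees with the sum-of-squares form and is non-negative:

```python
import random

def p_sos(x, y):
    # p written as an explicit sum of squares -- manifestly non-negative
    return (x**2 - y)**2 + (x*y - 1)**2

def p_expanded(x, y):
    # the same polynomial multiplied out
    return x**4 - 2*x**2*y + y**2 + x**2*y**2 - 2*x*y + 1

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    # the two forms agree up to floating-point rounding
    assert abs(p_sos(x, y) - p_expanded(x, y)) <= 1e-6 * (1 + abs(p_sos(x, y)))
    assert p_sos(x, y) >= 0.0   # each summand is a square
print("sum-of-squares certificate checked on 1000 random points")
```

The point of the certificate is that non-negativity of the expanded form is not obvious at a glance, while the sum-of-squares form makes it immediate.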

However, not every non-negative polynomial admits a representation as a sum of squares of polynomials. A famous counterexample was discovered by Motzkin in 1967: $$M(x, y) = x^4 y^2 + x^2 y^4 - 3 x^2 y^2 + 1.$$ This polynomial is non-negative for all real values of $x$ and $y$, yet it cannot be written as a sum of squares of polynomials. The existence of such examples shows that the naive approach to certifying non-negativity is insufficient.
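Non-negativity of Motzkin's polynomial follows from the AM-GM inequality applied to the three terms $x^4 y^2$, $x^2 y^4$, and $1$, whose geometric mean is $x^2 y^2$. A small sanity check in Python (my own illustration, not from the post):

```python
import random

def motzkin(x, y):
    # Motzkin (1967): non-negative everywhere, yet not a sum of
    # squares of polynomials
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

random.seed(1)
for _ in range(10_000):
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    # AM-GM: x^4 y^2 + x^2 y^4 + 1 >= 3 x^2 y^2, so M(x, y) >= 0
    assert motzkin(x, y) >= -1e-9   # small tolerance for rounding
assert motzkin(1.0, 1.0) == 0.0     # the minimum 0 is attained at |x| = |y| = 1
```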

This phenomenon had already been anticipated by Hilbert at the turn of the twentieth century. In his famous list of problems presented in 1900, Hilbert asked whether every non-negative polynomial can at least be written as a sum of squares of rational functions. In other words, does every non-negative polynomial $p$ admit a representation of the form $$p = \sum_{i=1}^{k} \left( \frac{f_i}{g_i} \right)^2$$ for some integer $k$ and polynomials $f_i, g_i$? This question became known as Hilbert's 17th problem.
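For Motzkin's polynomial, an explicit rational certificate is classical: multiplying $M(x, y)$ by $(x^2 + y^2)^2$ yields a sum of four polynomial squares, so $M$ itself is a sum of squares of rational functions with denominator $x^2 + y^2$. The sketch below (my own illustration) verifies this identity in exact rational arithmetic at random points:

```python
import random
from fractions import Fraction

def motzkin(x, y):
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

random.seed(2)
for _ in range(200):
    x = Fraction(random.randint(-50, 50), random.randint(1, 50))
    y = Fraction(random.randint(-50, 50), random.randint(1, 50))
    s = x**2 + y**2
    lhs = s**2 * motzkin(x, y)
    # sum of four polynomial squares; dividing both sides by (x^2 + y^2)^2
    # exhibits M as a sum of squares of rational functions
    rhs = ((x**2 * y * (s - 2))**2 + (x * y**2 * (s - 2))**2
           + (x * y * (s - 2))**2 + (x**2 - y**2)**2)
    assert lhs == rhs   # exact rational arithmetic, no rounding
print("identity verified at 200 random rational points")
```

Since both sides are polynomials, agreement at this many points exactly is a strong check; the identity itself can be confirmed by expanding both sides.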

The problem was solved in 1927 by Artin, who proved that such a representation always exists. Artin's theorem established a deep connection between positivity and algebraic structure, showing that even though polynomial sums of squares are insufficient, rational sums of squares are powerful enough to certify non-negativity in full generality. Later work by Delzell (Delzell 1984) provided a continuous, constructive solution, showing how such decompositions can be produced explicitly. These developments form a cornerstone of real algebraic geometry and have had lasting influence on optimization and theoretical computer science.

A natural question then arises: what happens if we replace real variables by matrices? In many areas of mathematics and physics one encounters expressions in which the variables are matrices that do not commute. This leads to the study of non-commutative polynomials. Understanding positivity in this setting is particularly important in operator theory, systems theory, and quantum information.

In 2002, Helton (Helton 2002) obtained a striking analogue of Hilbert’s 17th problem in the non-commutative (matrix) setting. Remarkably, the non-commutative world turns out to be simpler than the commutative one: rational functions are no longer needed.

To formulate the result, we introduce some terminology. Consider a set of formal variables $x_1, \dots, x_n$ together with formal transposes $x_1^T, \dots, x_n^T$. A non-commutative monomial in these variables is an expression of the form $c \, w_1 w_2 \cdots w_m$, where $c$ is a real coefficient and each $w_j$ is either $x_i$ or $x_i^T$ for some $i$. The order of the factors matters, since the variables are not assumed to commute. A non-commutative polynomial $p$ is a finite sum of such monomials.

Given real matrices $X_1, \dots, X_n$ of size $d \times d$, we evaluate $p(X_1, \dots, X_n)$ by substituting $X_i$ for $x_i$ and the transpose $X_i^T$ for $x_i^T$. The result is again a $d \times d$ matrix. Let $\mathcal{M}(p)$ denote the set of all matrices that can arise in this way, ranging over matrices of all dimensions and all possible substitutions.

We say that $p$ is symmetric if every matrix in $\mathcal{M}(p)$ is symmetric. We say that $p$ is matrix-positive if every matrix in $\mathcal{M}(p)$ is positive semidefinite. Finally, we say that $p$ is a sum of squares if it can be written in the form $$p = q_1^T q_1 + q_2^T q_2 + \cdots + q_k^T q_k,$$ where $q_1, q_2, \dots, q_k$ are non-commutative polynomials in the same variables. (Here $q^T$ denotes the formal transpose of $q$, obtained by reversing the factors of each monomial and exchanging $x_i$ with $x_i^T$; evaluating $q^T$ then yields the transpose of the matrix obtained by evaluating $q$.)
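To make the definitions concrete, here is a small self-contained Python sketch (my own toy example, not from the post). It evaluates the non-commutative sum of squares $p = (x_1 x_2)^T (x_1 x_2) + x_1^T x_1$ at random matrices and checks that the resulting matrix is symmetric and positive semidefinite, as the definitions predict:

```python
import random

def matmul(A, B):
    # product of two square matrices given as nested lists
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def evaluate_p(X1, X2):
    # toy example: p(x1, x2) = (x1 x2)^T (x1 x2) + x1^T x1,
    # a non-commutative sum of squares by construction
    Q = matmul(X1, X2)
    return add(matmul(transpose(Q), Q), matmul(transpose(X1), X1))

random.seed(3)
d = 4
X1 = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(d)]
X2 = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(d)]
P = evaluate_p(X1, X2)

# symmetric: P equals its transpose up to rounding
assert all(abs(P[i][j] - P[j][i]) < 1e-9 for i in range(d) for j in range(d))

# positive semidefinite: the quadratic form v^T P v is never negative
for _ in range(100):
    v = [random.uniform(-1, 1) for _ in range(d)]
    quad = sum(v[i] * P[i][j] * v[j] for i in range(d) for j in range(d))
    assert quad >= -1e-9
print("p evaluated at random 4x4 matrices is symmetric and PSD")
```

Each term $q^T q$ evaluates to $Q^T Q$ for a real matrix $Q$, which is automatically symmetric positive semidefinite; the check above merely illustrates this for one choice of $p$.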

We are now ready to state Helton's theorem (Helton 2002).

Theorem (Helton). Let $p$ be a symmetric non-commutative polynomial. Then $p$ is matrix-positive if and only if $p$ is a sum of squares.

The theorem is remarkable for several reasons. In the classical commutative setting, non-negative polynomials are not necessarily sums of squares, and one must pass to rational functions to obtain a general representation. In the non-commutative setting, however, positivity automatically forces a sum-of-squares structure at the polynomial level itself. In other words, the obstruction discovered by Motzkin disappears once non-commutativity is introduced.

Helton’s result has had significant impact in several areas, including operator algebras, free probability, and semidefinite optimization. It provides a powerful structural description of matrix-positive polynomials and has led to a flourishing theory of non-commutative real algebraic geometry. From the perspective of Hilbert’s 17th problem, it is particularly striking: the matrix world, which initially appears more complicated, turns out to exhibit a cleaner and more rigid positivity theory than the classical scalar case.

References

Delzell, C. N. 1984. “A Continuous, Constructive Solution to Hilbert’s 17th Problem.” Invent. Math. 76 (3): 365–84.
Helton, J. William. 2002. “‘Positive’ Noncommutative Polynomials Are Sums of Squares.” Ann. of Math. 156 (2): 675–94.
