Making determinants my friends

The determinant is unique

A fact I’ve known for a very long time, but never bothered to prove for myself, is the uniqueness of the determinant for square matrices. More precisely, let $R$ be any commutative ring with identity; then the determinant is the only $R$-valued function on $n \times n$ matrices over $R$ that: (i) is linear in each column, (ii) changes sign when two columns are switched, and (iii) sends the identity matrix to $1$.

A nice way to prove this is to derive an explicit formula for the determinant of a square matrix using only these three properties. Let’s do the $2 \times 2$ matrices for a start. The linearity in the columns means we can decompose the problem into smaller ones:

$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix}
= ad\,\det\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
+ ab\,\det\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}
+ cd\,\det\begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}
+ cb\,\det\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$

From the fact that switching two columns changes the sign, a matrix with two identical columns has determinant equal to its own negative, so the determinants in the middle give out zero, and the one on the right evaluates to $-1$ (it is the identity matrix with its columns switched). Hence the determinant comes out to be $ad - bc$, as expected.

The idea of the general proof is contained in the previous $2 \times 2$ case. The argument is as follows. From linearity in the columns, we can split the general determinant problem into computing the determinants of all matrices which are made up of standard basis vectors (i.e. in each column there is exactly one $1$ and the rest are zeroes). As soon as two columns are identical, the determinant vanishes, hence the only determinants left to calculate are those for matrices made up of standard basis vectors that are all different, that is to say, matrices where there’s exactly one $1$ in each row and each column, the rest being zeroes. In other words, the only things left are the column (or row) permutations of the identity matrix, and each of those has determinant $\pm 1$, the sign of the corresponding permutation. From that description we can simply read off the known determinant formula:

$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{\sigma(1),1}\, a_{\sigma(2),2} \cdots a_{\sigma(n),n}.$$
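As a sanity check, the permutation formula can be translated directly into code. Here is a minimal Python sketch (the function names `sign` and `det` are mine, not from the post); it is hopelessly slow at $O(n!)$, but it follows the formula term by term:

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple of 0-based indices:
    (-1) raised to the number of inversions."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det(A):
    """Determinant via the permutation (Leibniz) formula:
    sum over sigma of sgn(sigma) * prod_j A[sigma(j)][j]."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for j in range(n):
            term *= A[sigma[j]][j]  # entry in row sigma(j), column j
        total += term
    return total
```

For example, `det([[1, 2], [3, 4]])` reproduces $1 \cdot 4 - 3 \cdot 2 = -2$, i.e. the $ad - bc$ formula from the $2 \times 2$ case.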

Computing the determinant is just multiplying the columns

We can get a little more abstract and obtain another pretty proof/remark. Let $V$ be an $n$-dimensional vector space over some field $k$. We could use free $R$-modules instead, where $R$ is a commutative ring with unity, but let’s stick with the vector space. The $n$-th exterior power $\Lambda^n V$ is a one-dimensional vector space over $k$; the proof of this fact is nice and easy, and uses precisely the same idea as the proof outline I’ve written in the previous section. Writing $e_1, \dots, e_n$ for a basis for $V$, the $n$-blade $e_1 \wedge \dots \wedge e_n$ is a basis for $\Lambda^n V$.

It’s known (the proof isn’t hard) that multilinear alternating maps $f \colon V^n \to k$ are in a natural bijective correspondence with linear maps $\varphi \colon \Lambda^n V \to k$ such that $f(v_1, \dots, v_n) = \varphi(v_1 \wedge \dots \wedge v_n)$. Hence there is a unique multilinear alternating map $(k^n)^n \to k$ that maps the identity matrix (viewed as the tuple of its columns) to $1$; of course that map is called the determinant. This gives a construction of the determinant: construct the unique linear map $\Lambda^n k^n \to k$ sending $e_1 \wedge \dots \wedge e_n$ to $1$, and call the multilinear alternating map corresponding to it under the natural bijection the determinant.

Basically as a consequence of the universal property I mentioned in the previous paragraph, defining the determinant in this way gives us the following equation for a matrix $A$ with columns $v_1, \dots, v_n \in k^n$:

$$v_1 \wedge v_2 \wedge \dots \wedge v_n = \det(A)\; e_1 \wedge e_2 \wedge \dots \wedge e_n.$$

Hence to compute the determinant you just multiply the columns using the wedge product, and reduce in $\Lambda^n k^n$ until you find the coefficient of $e_1 \wedge \dots \wedge e_n$.

For instance, in the $2 \times 2$ case,

$$(a e_1 + c e_2) \wedge (b e_1 + d e_2) = ad\; e_1 \wedge e_2 + cb\; e_2 \wedge e_1 = (ad - cb)\; e_1 \wedge e_2,$$

recovering $ad - bc$ once more.

If you imagine that the $n$-blade $e_1 \wedge \dots \wedge e_n$ represents the volume of the $n$-dimensional unit cube, then this formula says the determinant is the (signed) volume of the $n$-parallelotope spanned by the columns of the matrix.
