Saturday, May 11, 2013

Matrix representation for three-dimensional space geometric algebra

In my previous post I wrote about Geometric Algebra generalities. We saw that three-dimensional space generates a geometric algebra of dimension \(2^3 = 8 = 1 + 3 + 3 + 1\), composed of four linear spaces: scalars, vectors, bivectors and pseudo-scalars.

The elements of these subspaces can be used to describe the geometry of Euclidean space. Vectors are associated with directions in space, bivectors with rotations, and the pseudo-scalar with volumes.

The product of two vectors is composed of a scalar part, their scalar product, and a bivector part. The bivector part corresponds to the oriented area of the parallelogram constructed from the two vectors, as shown in the figure below:

An illustration of a vector, a bivector and a volume, the latter equivalent to a pseudo-scalar (image from Wikipedia)

GA computations in Euclidean space



If you want to use the GA space to make computations you will need to "represent" the elements of each subspace. To my surprise, I discovered that a generic multivector of three-dimensional space can be represented by a 2x2 complex matrix.

A quick calculation shows that the dimensions agree: 2x2 complex matrices form a real linear space of dimension 8, just like the three-dimensional GA space.

What makes this representation interesting is that the GA product corresponds to the ordinary matrix product.

What is the actual matrix representation?


Well, the easy one is the unit scalar. We can easily guess that it corresponds to the identity matrix\[\begin{pmatrix}1 & 0 \\ 0 & 1 \end{pmatrix}\] but things get slightly more interesting for vectors.

If we call \(\hat{\mathbf{x}}\), \(\hat{\mathbf{y}}\), \(\hat{\mathbf{z}}\) the basis vectors, their matrix counterparts should square to one,\[\hat{\mathbf{x}}^2 = \hat{\mathbf{y}}^2 = \hat{\mathbf{z}}^2 = 1,\] and anticommute with each other. It can be verified that these properties are satisfied by the Hermitian matrices \[\hat{\mathbf{x}} = \begin{pmatrix}0 & i \\ -i & 0 \end{pmatrix} , \qquad \hat{\mathbf{y}} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} , \qquad \hat{\mathbf{z}} = \begin{pmatrix}1 & 0 \\ 0 & -1 \end{pmatrix} \] Once we have the vectors we can derive the matrices for the bivectors by taking their products to obtain \[\hat{\mathbf{x}} \hat{\mathbf{y}} = \mathbf{i} \hat{\mathbf{z}} = \begin{pmatrix}i & 0 \\ 0 & -i \end{pmatrix} , \qquad \hat{\mathbf{y}} \hat{\mathbf{z}} = \mathbf{i} \hat{\mathbf{x}} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} , \qquad \hat{\mathbf{z}} \hat{\mathbf{x}} = \mathbf{i} \hat{\mathbf{y}} = \begin{pmatrix}0 & i \\ i & 0 \end{pmatrix} \]
Finally, the pseudo-scalar is obtained as the product of the three basis vectors\[ \mathbf{i} = \hat{\mathbf{x}} \hat{\mathbf{y}} \hat{\mathbf{z}} = \begin{pmatrix}i & 0 \\ 0 & i \end{pmatrix} \]
Since this latter matrix is equal to \(i I\), we can simply identify the imaginary unit with the unit pseudo-scalar \(\mathbf{i}\).
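These identities are easy to check numerically. Below is a minimal sketch (the use of numpy and the variable names are my choices, not part of the representation itself):

```python
import numpy as np

I = np.eye(2, dtype=complex)
x = np.array([[0, 1j], [-1j, 0]])   # x-hat
y = np.array([[0, 1], [1, 0]])      # y-hat
z = np.array([[1, 0], [0, -1]])     # z-hat

# The basis vectors square to the identity...
for e in (x, y, z):
    assert np.allclose(e @ e, I)

# ...and anticommute with each other.
assert np.allclose(x @ y, -(y @ x))
assert np.allclose(y @ z, -(z @ y))
assert np.allclose(z @ x, -(x @ z))

# The product of the three basis vectors is i times the
# identity: the unit pseudo-scalar.
assert np.allclose(x @ y @ z, 1j * I)
```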

And so, what?


From the practical point of view it means that you can represent a vector with components \((v_x, v_y, v_z)\) by the matrix \[\mathbf{v} = \begin{pmatrix} v_z & v_y + i v_x \\ v_y - i v_x & -v_z \end{pmatrix}\]Vector addition can be performed simply by taking the matrix sum.

The product of two vectors yields a scalar plus a bivector; a generic bivector has the matrix representation \[\mathbf{w} = \begin{pmatrix} i w_z & i w_y - w_x \\ i w_y + w_x & -i w_z \end{pmatrix}\]
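As a sketch of how the two representations work together (numpy again; the helper names vector_matrix and bivector_matrix are my own), one can verify that the matrix product of two vectors is their scalar product plus the bivector \(\mathbf{i} \, (\mathbf{a} \times \mathbf{b})\):

```python
import numpy as np

def vector_matrix(v):
    """Matrix of the vector with components (v_x, v_y, v_z)."""
    vx, vy, vz = v
    return np.array([[vz, vy + 1j * vx],
                     [vy - 1j * vx, -vz]])

def bivector_matrix(w):
    """Matrix of the bivector with components (w_x, w_y, w_z)."""
    wx, wy, wz = w
    return np.array([[1j * wz, 1j * wy - wx],
                     [1j * wy + wx, -1j * wz]])

# The geometric product of two vectors is their scalar product
# plus the bivector i (a x b):
a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.0, 2.0])
product = vector_matrix(a) @ vector_matrix(b)
expected = np.dot(a, b) * np.eye(2) + bivector_matrix(np.cross(a, b))
assert np.allclose(product, expected)
```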

These relations let us compute the matrix representation of any vector or bivector, but we also need to perform the opposite operation: given a complex matrix, extract its scalar, vector, bivector and pseudo-scalar parts.


This is actually quite easy to work out. For any complex matrix\[\begin{pmatrix} s & u \\ v & t \end{pmatrix}\] the combination \(\frac{s + t}{2}\) gives the scalar part as its real part and the pseudo-scalar part as its imaginary part. The vector and bivector parts are given by\[\begin{pmatrix} v_x + i w_x \\ v_y + i w_y \\ v_z + i w_z \end{pmatrix} = \begin{pmatrix} \frac{u - v}{2i} \\ \frac{u + v}{2} \\ \frac{s - t}{2} \end{pmatrix}\]

With the relations above you can extract the scalar, vector, bivector and pseudo-scalar parts of any given complex matrix.
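Here is a small sketch of the extraction (numpy assumed, function name mine), with the test multivector written out entry by entry:

```python
import numpy as np

def decompose(m):
    """Split a 2x2 complex matrix into scalar, vector, bivector
    and pseudo-scalar parts, using the formulas above."""
    s, u = m[0, 0], m[0, 1]
    v, t = m[1, 0], m[1, 1]
    st = (s + t) / 2                # scalar + i * pseudo-scalar
    c = np.array([(u - v) / 2j,     # v_x + i w_x
                  (u + v) / 2,      # v_y + i w_y
                  (s - t) / 2])     # v_z + i w_z
    return st.real, c.real, c.imag, st.imag

# Multivector with scalar 2, vector (1, 2, 3), bivector (-1, 0, 5)
# and pseudo-scalar 0.5, written out entry by entry:
m = np.array([[5 + 5.5j, 3 + 1j],
              [1 - 1j, -1 - 4.5j]])
scalar, vec, biv, pseudo = decompose(m)
assert np.isclose(scalar, 2) and np.isclose(pseudo, 0.5)
assert np.allclose(vec, [1, 2, 3]) and np.allclose(biv, [-1, 0, 5])
```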

Rotations

Rotations of a vector \(\mathbf{v}\) can be expressed in GA using the relation\[e^{\mathbf{i} \, \hat{\mathbf{u}} \, \theta / 2} \, \mathbf{v} \, e^{-\mathbf{i} \, \hat{\mathbf{u}} \, \theta / 2}\] where \(\hat{\mathbf{u}}\) is the unit vector oriented along the rotation axis.

So if you want to compute rotations you can do it by taking the exponential of a bivector. This can be obtained very easily using the relation\[e^{\mathbf{i} \, \hat{\mathbf{u}} \, \theta} = \cos(\theta) + \mathbf{i} \, \sin(\theta) \, \hat{\mathbf{u}}\]

The relation above does not let you compute the exponential of an arbitrary 2x2 complex matrix: it only works for matrices that represent bivectors. You can easily extend the formula to take into account a scalar component, but things get complicated for the vector part.

Actually, the exponential of a vector involves the hyperbolic sine and cosine, as given by the relation\[e^{\hat{\mathbf{u}} \, \theta} = \cosh(\theta) + \sinh(\theta) \, \hat{\mathbf{u}}\]but the exponential of a combination of vectors and bivectors is more complicated (I confess I do not know the explicit formula).
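Both closed forms can be cross-checked against a general matrix exponential. A minimal sketch, assuming scipy is available for the comparison (the test vector and names are mine):

```python
import numpy as np
from scipy.linalg import expm  # general matrix exponential, used only as a cross-check

theta = 0.7
U = np.array([[0.8, 0.6],      # matrix of the unit vector u = (0, 0.6, 0.8),
              [0.6, -0.8]])    # built with the representation given earlier
iU = 1j * U                    # matrix of the unit bivector i u-hat

# Exponential of a bivector: cos(theta) + i sin(theta) u-hat
assert np.allclose(expm(theta * iU),
                   np.cos(theta) * np.eye(2) + np.sin(theta) * iU)

# Exponential of a vector: cosh(theta) + sinh(theta) u-hat
assert np.allclose(expm(theta * U),
                   np.cosh(theta) * np.eye(2) + np.sinh(theta) * U)
```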

Anyway this is not really a problem since for rotations you only need to take the exponential of bivectors plus real numbers.

A silly example...


Let us make an example: a rotation of 30 degrees of the vector \(\mathbf{b} = 1/2 \, \hat{\mathbf{x}}+\hat{\mathbf{z}}\) around the z axis. In matrix representation we have\[\mathbf{b} = \begin{pmatrix} 1 & i/2 \\ -i/2 & -1 \end{pmatrix}\]The rotation of 30 degrees is given by\[U \mathbf{b} = e^{\mathbf{i} \pi/12 \, \hat{\mathbf{z}}} \, \mathbf{b} \, e^{-\mathbf{i} \pi/12 \, \hat{\mathbf{z}}}\] which, in matrix representation, becomes\[\begin{multline}U \mathbf{b} = \begin{pmatrix}\cos(\pi/12) + i \sin(\pi/12) & 0 \\ 0 & \cos(\pi/12) - i \sin(\pi/12) \end{pmatrix} \begin{pmatrix} 1 & i/2 \\ -i/2 & -1 \end{pmatrix} \\ \begin{pmatrix}\cos(\pi/12) - i \sin(\pi/12) & 0 \\ 0 & \cos(\pi/12) + i \sin(\pi/12) \end{pmatrix} \end{multline}\]and by carrying out the products we obtain\[U \mathbf{b} = \begin{pmatrix}1 & -1/2 \sin(\pi/6)+i/2 \cos(\pi/6) \\ -1/2 \sin(\pi/6)-i/2 \cos(\pi/6) & -1 \end{pmatrix} \]which is the expected result, since a rotation around the z axis transforms the x component into a mix of x and y with coefficients \(\cos(\pi/6)\) and \(\sin(\pi/6)\).
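The same example can be reproduced numerically; a minimal sketch with numpy (variable names mine):

```python
import numpy as np

b = np.array([[1, 0.5j],
              [-0.5j, -1]])               # b = x/2 + z in matrix form
alpha = np.pi / 12                        # half of the 30-degree angle
R = np.array([[np.exp(1j * alpha), 0],
              [0, np.exp(-1j * alpha)]])  # rotor exp(i z-hat pi/12)

rotated = R @ b @ R.conj().T              # R b R^{-1}, since R is unitary
expected = np.array(
    [[1, -np.sin(np.pi/6) / 2 + 1j * np.cos(np.pi/6) / 2],
     [-np.sin(np.pi/6) / 2 - 1j * np.cos(np.pi/6) / 2, -1]])
assert np.allclose(rotated, expected)
```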

The plot twist

We said above that to generate rotations we only need bivectors and real numbers. It turns out that these constitute a subalgebra of the Euclidean GA space. It is called the even subalgebra and it is isomorphic to the quaternions. This shows that quaternions and bivectors are essentially equivalent representations, and both are inherently related to rotations.

To close this post: readers who already know quantum physics have probably noticed that the basis vectors in the matrix representation are, up to relabeling, the Pauli matrices. What is interesting is that in Geometric Algebra they appear naturally as a representation of the basis vectors and are not inherently related to quantum phenomena.

From the practical point of view, we have seen that 2x2 complex matrices can be used to compute geometric operations in an elegant and logical way. The matrix representation is also reasonable for practical computations, even if it is somewhat redundant, since it always represents the most general multivector with 8 components.

Saturday, May 4, 2013

Geometric Algebra, the wonderful revelation

Some time ago I discovered on Hacker News the Oersted Medal lecture of David Hestenes, Reforming the mathematical language of physics. The text describes, among other things, the field of geometric algebra in a very didactic and easy-to-read style.

For me it was a revelation: I was fascinated by the elegance of this theory and by how naturally it explained a lot of things that were otherwise unconnected.

Hestenes explains that you can sum scalars and vectors, and that this makes sense, and that you can multiply vectors in a natural and meaningful way. The algebra you obtain, the Geometric Algebra, is incredibly rich and suggests a lot of ideas that would otherwise remain unseen.

The central idea of geometric algebra is the product of two vectors, written \(\mathbf{a} \mathbf{b}\). It is not commutative and it embodies the concepts of both the scalar and the exterior product:\[\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}\]

One of the most fascinating things to me was the fact that each space of dimension \(n\) is associated with a multi-vector space of dimension \(2^n\). In turn, the multi-vector space is composed of \(n + 1\) subspaces, each of dimension \[\binom{n}{k} \qquad \textrm{for} \, \, k = 0, \ldots, n \]
For example, ordinary space has dimension 3, so its multi-vector space breaks down into 4 subspaces of dimensions 1, 3, 3 and 1. The first one is the space of scalars, which are just real numbers. Then we have the familiar vector space of dimension 3. What is interesting is the other two subspaces, of dimensions 3 and 1: they are the bivectors and the pseudo-scalars, the latter proportional to the imaginary unit \(i\).

Actually, one of the first things you discover when you learn geometric algebra is that the imaginary unit arises naturally for each space of dimension greater than one. It is quite fascinating to me that a purely geometric theory naturally requires the imaginary unit, whereas in ordinary geometry it is an extraneous concept.

The imaginary unit is actually related to the vectors by the relation\[\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 = \mathbf{i}\]

Now you may wonder what bivectors are and what their meaning is. Their properties are probably best described by the geometric algebra itself, but you can get an idea of what they are quite easily: you can think of them as products of two orthogonal vectors or, equivalently, as purely imaginary vectors.

Actually, the following equality holds in three-dimensional space \[\mathbf{e}_1 \mathbf{e}_2 = \mathbf{i} \, \mathbf{e}_3\]
where the vectors \(\mathbf{e}_i\) are the basis vectors. The expression above holds for any cyclic permutation of the three indices.
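These relations are easy to verify numerically; here is a minimal sketch using the standard Pauli matrices as one concrete matrix realization of the \(\mathbf{e}_i\) (the representation choice and the use of numpy are my assumptions):

```python
import numpy as np

e1 = np.array([[0, 1], [1, 0]])
e2 = np.array([[0, -1j], [1j, 0]])
e3 = np.array([[1, 0], [0, -1]])
ps = e1 @ e2 @ e3                     # the pseudo-scalar, equal to i * identity

assert np.allclose(ps, 1j * np.eye(2))
for a, b, c in [(e1, e2, e3), (e2, e3, e1), (e3, e1, e2)]:
    assert np.allclose(a @ b, ps @ c)  # e_a e_b = i e_c for cyclic indices
```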

The image above, taken from Wikipedia, illustrates the basis vectors and their products, including the pseudo-scalar depicted as a volume.

From the relation above, it is quite clear that there is a one-to-one correspondence between vectors and bivectors. The transformation is done simply by multiplying the vector by the imaginary unit, as in the right-hand side of the equation above.

Bivectors explain the mysterious axial vectors that you probably already know. Axial vectors are nothing else than bivectors, represented as ordinary vectors through the one-to-one correspondence between them.

One of the most wonderful simplifications stemming from GA is that the electromagnetic field can be thought of as composed of a vector and a bivector part, as described by the relation\[F=\mathbf{E}+\mathbf{i}\mathbf{B}\]
so that all the Maxwell equations are expressed by a single equation:
\[\left(\frac{1}{c}\partial_t + \nabla \right) F = \rho - \frac{1}{c} \mathbf{J}\]
For example, in the electrostatic case the equation above becomes:
\[\nabla F = \rho \] which corresponds to the two equations \[\nabla \cdot \mathbf{E} = \rho \qquad \nabla \wedge \mathbf{E} = 0\]

Another fascinating thing about Geometric Algebra is that the anticommutation rules of the basis vectors in three-dimensional space are the same as the relations satisfied by the Pauli matrices. The difference is that in geometric algebra these relations are simple properties of the geometry and do not need to be introduced ad hoc, as happens in quantum mechanics.

Last but not least, one of the clearest advantages of GA is that it describes spatial rotations directly in terms of vectors. For example, if \(\hat{\mathbf{u}}\) is a unit vector, a rotation of a vector \(\mathbf{v}\) by an angle \(\theta\) around \(\hat{\mathbf{u}}\) is described by \[e^{\mathbf{i} \, \hat{\mathbf{u}} \, \theta / 2} \, \mathbf{v} \, e^{-\mathbf{i} \, \hat{\mathbf{u}} \, \theta / 2}\]

For interested readers, I strongly recommend Hestenes' lecture as an excellent and complete introduction to this fascinating subject.