Matrix Composition

New Section : A convenient 2x2 matrix basis

Introduction, Summary, and Results

Matrix Composition is a useful tool when you know something about matrices (or, in general, tensors) in N dimensions and you want matrices with similar relations in higher dimensions. The composition produces an N^2-dimensional version, from which you can then choose a few. This is useful in group theory, RQM, and perhaps string theory and topology.

A,B,C,... are matrices in N dimensions

A#B is the composition of A and B, an N^2 dimensional matrix.

The rules of the composition operation are:

	(A#B)(C#D) = (AC)#(BD)
	A#(B+D) = A#B + A#D	(and likewise in the first slot)
	(kA)#B = A#(kB) = k (A#B)	for any scalar k

(the first two are derived in the Formalism section below). This is all you need to know to use the composition.

What is the composition? It's matrices inside matrices. Let's do A#B :

A#B=	
	| A00 B		A01 B		A02 B ... 
	| A10 B		A11 B		...
	| A20 B ...

(each B block is expanded in place) :

A#B=	
	| A00 B00	A00 B01 ...	A01 B00		A01 B01 ...
	| A00 B10	A00 B11 ...	A01 B10		...
	| ...
	| A10 B00 ...

An example of use: You have the matrices Ai , i = 1 to N, and they have some commutation rules: [Ai,Aj] = Bij , and {Ai,Aj} = Cij
You want higher-dimensional matrices with the same commutations. This is done easily. You can double your number of matrices: Di = A1 # Ai , and Ei = I # Ai. Now you have:
[Di,Dj] = [A1#Ai,A1#Aj] = (A1A1)#[Ai,Aj] = (A1A1)#Bij
[Ei,Ej] = II#Bij = I#Bij
[Di,Ej] = [A1#Ai,I#Aj] = A1#[Ai,Aj] = A1#Bij
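A concrete sketch of this doubling, with the Pauli matrices standing in for the Ai (my choice of example set, not from the text):

```python
import numpy as np

# Pauli matrices as an example set Ai with known commutators Bij.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I = np.eye(2, dtype=complex)

def comm(X, Y):
    return X @ Y - Y @ X

A1 = s[0]
D = [np.kron(A1, Ai) for Ai in s]   # Di = A1 # Ai
E = [np.kron(I, Ai) for Ai in s]    # Ei = I  # Ai

for i in range(3):
    for j in range(3):
        Bij = comm(s[i], s[j])
        # [Di,Dj] = (A1 A1) # Bij   (here A1 A1 = I)
        assert np.allclose(comm(D[i], D[j]), np.kron(A1 @ A1, Bij))
        # [Ei,Ej] = I # Bij
        assert np.allclose(comm(E[i], E[j]), np.kron(I, Bij))
        # [Di,Ej] = A1 # Bij
        assert np.allclose(comm(D[i], E[j]), np.kron(A1, Bij))
```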

The same works for the anti-commutator. The original commutation relations now reappear in the second part of the composition, but for an extended set of matrices. This may be done repeatedly, and more front-ends will appear in the composition, but the back end will remain the same.

The parts of the composition reduce when Bij = delta(i,j) and the matrices Ai square to the identity (AiAi = I).

For example, the Lorentz group is the compose of two 3-D rotation groups (in their 2x2 spin representations): the 3-D rotation in real space, and the 3-D "rotation" in (2-D + time) space. Compose these two spin groups and you get the Lorentz group.

If the matrices you are composing (Ai and Bj) form groups (A and B), then the new set A#B will form a "ring" (which will sometimes also be a group).

Formalism

Looking at the form of the composition, it is easy to derive the basic rules for operations. (A#B) is a matrix, so it obeys matrix multiplication.

(A#B)(C#D) = (E#F) = R

R is an N^2-d matrix.

R00 = Sum[ij=0 to N-1] { A0i B0j Ci0 Dj0 }

This also shows why we "compose" matrices instead of just multiply them. We get a sum on two different indices, crossing. Separating these sums, we get:

R00 = Sum[i=0 to N-1] { A0i Ci0 } Sum[j=0 to N-1] { B0j Dj0 }

Now writing the more general case:

R00 = E00 F00
R = (E#F)
Ejk = Sum[i=0 to N-1] { Aji Cik }
Fjk = Sum[i=0 to N-1] { Bji Dik }

Ok?
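These slot-by-slot rules are easy to check numerically; a sketch of the product rule (A#B)(C#D) = (AC)#(BD) using NumPy's `kron`, with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
A, B, C, D = (rng.standard_normal((N, N)) for _ in range(4))

# (A#B)(C#D) = (AC)#(BD): the two compose slots multiply independently.
left = np.kron(A, B) @ np.kron(C, D)
right = np.kron(A @ C, B @ D)
assert np.allclose(left, right)
```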

Now, addition.

R = A#B + C#D

In general R is not itself a single compose. To see how addition behaves, write A0 for A acting in the first slot and B1 for B acting in the second slot (so A#B = A0B1); the two slots multiply independently:

(A0B1)(C0D1) = (A0C0)(B1D1) = (AC)0(BD)1
(A0 + C0)(B1 + D1) = A0B1 + A0D1 + C0B1 + C0D1

so the composition distributes over addition:

A0(B1+D1) = A0B1 + A0D1 , i.e. A#(B+D) = A#B + A#D
(A+C)#(B+D) = A#(B+D) + C#(B+D)
			= A#B + A#D + C#B + C#D

Rearranging,

A#B + C#D = (A+C)#(B+D) - A#D - C#B
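A quick numeric check of this addition identity (a sketch with random matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2
A, B, C, D = (rng.standard_normal((N, N)) for _ in range(4))

# A sum of composes is not itself a compose in general, but it can be
# rewritten:  A#B + C#D = (A+C)#(B+D) - A#D - C#B
lhs = np.kron(A, B) + np.kron(C, D)
rhs = np.kron(A + C, B + D) - np.kron(A, D) - np.kron(C, B)
assert np.allclose(lhs, rhs)
```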

[A,C]#{B,D} = (AC-CA)#(BD+DB) = (AC-CA)#BD + (AC-CA)#DB
[A#B,C#D] = (A#B)(C#D) - (C#D)(A#B) = AC#BD - CA#DB

[A,C]#{B,D} = AC#BD - CA#DB + AC#DB - CA#BD
		    = [A#B,C#D] + [A#D,C#B]

(note the anti-commutator in the second slot; with [B,D] there the signs do not match up)

if A == C :

[A#B,A#D] = (A#B)(A#D) - (A#D)(A#B) = AA#BD - AA#DB = AA#[B,D]

and, in the same way, for the anti-commutator:

{A#B,A#D} = (A#B)(A#D) + (A#D)(A#B) = AA#BD + AA#DB = AA#{B,D}


A Convenient 2x2 basis

From a 2x2 basis, we can use composition to make matrices of any even dimensionality, so it is useful to consider these with some care.

We simply transform the ordinary "basis set" for a 2x2, which is:

(Any 2x2) = a[1 0, 0 0] + b[0 1, 0 0] + c [0 0, 1 0] + d [0 0, 0 1]

(commas indicate rows).

Instead, put:


M1	= [1 0, 0 1]
M1-	= [1 0, 0 -1]
MT	= [0 1, 1 0]
MT-	= [0 1,-1 0]

(M1 is "one", M1- is "one minus", MT is "transpose" (it's the transpose operator), etc.).

so (Any 2x2) = a M1 + b M1- + c MT + d MT-

(not the same abcd as above).

This basis is convenient because it has a particularly simple multiplication table. For example:

(M*)(M*) = M1

for all * except MT- , for which

(MT-)(MT-) = -M1

Also, M1(M*) = (M*) for all *. In fact, for every product, (Mx)(My) = (+-)(Mz) : any of the four times any other gives back one of the four, up to a sign.
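The multiplication table is quick to verify by machine (a sketch; the variable names follow the text, with `m` standing in for the minus sign):

```python
import numpy as np

# The four basis matrices (names follow the text).
M1  = np.array([[1, 0], [0, 1]])
M1m = np.array([[1, 0], [0, -1]])   # "M1-"
MT  = np.array([[0, 1], [1, 0]])
MTm = np.array([[0, 1], [-1, 0]])   # "MT-"

basis = {"M1": M1, "M1-": M1m, "MT": MT, "MT-": MTm}

# Every product of two basis elements is +- another basis element.
for xn, X in basis.items():
    for yn, Y in basis.items():
        P = X @ Y
        hits = [zn for zn, Z in basis.items()
                if np.array_equal(P, Z) or np.array_equal(P, -Z)]
        assert hits, (xn, yn)   # closure up to sign

# The squares: everything squares to M1 except MT-, which squares to -M1.
assert np.array_equal(MTm @ MTm, -M1)
for X in (M1, M1m, MT):
    assert np.array_equal(X @ X, M1)
```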

Furthermore, M1- is very convenient as an eigenvalue generator:

(M1-)f = k f

so f = [x 0] has value k = 1

and f = [0 x] has value k = -1

so the 2-vectors are a basis pointing "with" and "against" (M1-)

This immediately suggests the use of these matrices for the Pauli spin.

Spin (sigma) = { MT , -i MT- , M1- }

makes spin-z the basis generator for the 2-vectors.

(I noted earlier that MT is the transpose operator, MT[x,y] = [y,x]. MT- is the transpose with a sign: MT-[x,y] = [y,-x], so a + or - is picked up depending on the component.)
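The identification with the Pauli matrices can be checked directly (a sketch):

```python
import numpy as np

M1m = np.array([[1, 0], [0, -1]], dtype=complex)    # "M1-"
MT  = np.array([[0, 1], [1, 0]], dtype=complex)
MTm = np.array([[0, 1], [-1, 0]], dtype=complex)    # "MT-"

# Spin (sigma) = { MT, -i MT-, M1- } : exactly the Pauli matrices.
sigma = [MT, -1j * MTm, M1m]

pauli = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
for s, p in zip(sigma, pauli):
    assert np.allclose(s, p)

# And sigma-z (= M1-) is the eigenvalue generator: [x,0] -> +1, [0,x] -> -1.
assert np.allclose(M1m @ np.array([1, 0]), +1 * np.array([1, 0]))
assert np.allclose(M1m @ np.array([0, 1]), -1 * np.array([0, 1]))
```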

An interesting application is in the Lorentz group generators. I won't present the full proof here, but you may already know that

J(uv) = (i/4) [Yu,Yv]

(where the Y are the gamma matrices, related to Dirac's alpha matrices by Y(j) = Y(0) a(j) , up to a constant; sign conventions for J vary, and this one matches the results below)

The Y are just composes of spin matrices:

Y(j) = MT- # Spin(j)

Y(0) = MT # M1

(u,v are 0-3 indices, i,j are 1-3 indices) and the result is:

J(0j) = -J(j0) = (-i/2) M1- # Spin(j)

J(ij) = (1/2) M1 # ( Eijk Spin(k) )

(Eijk is "Epsilon ijk" the completely antisymmetric tensor)

so all of J is diagonal in the first compose position.
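These forms are easy to confirm numerically (a sketch, using the J(uv) = (i/4)[Yu,Yv] convention above):

```python
import numpy as np

M1  = np.eye(2, dtype=complex)
M1m = np.array([[1, 0], [0, -1]], dtype=complex)    # "M1-"
MT  = np.array([[0, 1], [1, 0]], dtype=complex)
MTm = np.array([[0, 1], [-1, 0]], dtype=complex)    # "MT-"
sigma = [MT, -1j * MTm, M1m]                        # Pauli spin

# The gammas as composes of spin matrices:
#   Y(0) = MT # M1 ,  Y(j) = MT- # Spin(j)
Y = [np.kron(MT, M1)] + [np.kron(MTm, s) for s in sigma]

def J(u, v):
    return (1j / 4) * (Y[u] @ Y[v] - Y[v] @ Y[u])

# J(0j) = (-i/2) M1- # Spin(j) : diagonal in the first compose position.
for j in range(3):
    assert np.allclose(J(0, j + 1), (-1j / 2) * np.kron(M1m, sigma[j]))

# J(12) = (1/2) M1 # (E12k Spin(k)) = (1/2) M1 # Spin(3)
assert np.allclose(J(1, 2), 0.5 * np.kron(M1, sigma[2]))
```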

Furthermore, parity is P = (MT # M1) , so we call the first compose part the "parity" part. J(ij) doesn't affect the first part and acts like an ordinary spatial rotation on the second part, so J is diagonal except under parity! Also, J(0j) pulls out a (+-) eigenvalue based on the parity state. So the 2nd part is a spinor basis, with +- spin(z) eigenvalue, and the 1st part is a simple parity basis, whose +- eigenvalue indicates whether the vector lives entirely in the "real" world or in the mirrored world.

We can see SO(3) < SO(3,1) (that is, spatial 3-D rotations form a subgroup of the full Lorentz group): the J(0j) are boosts, and the second part of J(ij) is just the ordinary SO(3) :

SO(3,1)[ij] = 1 # SO(3)[ij]

SO(3,1)[0j] = boost[j] = M1- # { SO(3) generator vector }

This view leads to an easy way to solve for the basis vectors of the Dirac spinor. I won't go into this here, since the extension is obvious (and the background required is great).

For example, Weinberg (like many others) labels the Dirac-field bases by a pair of 2-valued indices instead of a single 4-valued index, that is, f(+-,+-) instead of f(u). Matrix composition makes this rigorous: the +-,+- bases are just the bases of the pre-composed parts.


Charles Bloom / cb at my domain