

I wanted to keep the "Differential Geometry" notes strictly to differential geometry, and within that, strictly to $GL(n)$ transforms. I was trying to avoid any physics notions like Lorentz invariance or the physical applications of Minkowski space. But here I am, considering where to fit in the topic of spinors before approaching the topics of a metric, of covariant derivatives, or of a connection. Maybe I'll leave this here, maybe I'll add it all in one place after connections, maybe I'll put it in its own section of "physicists trying to reproduce mathematicians' previously established work but doing it poorly", I mean, particle physics...

Motivation:

Schrodinger Equation

I should put this in a QM folder separate of the "Differential Geometry" folder.
And this 'motivation' section should be a previous worksheet.

Schrodinger equation in Cartesian coordinates:
$i \hbar \frac{\partial}{\partial t} \psi = \left( - \frac{\hbar^2}{2 m} \frac{\partial^2}{(\partial x^i)^2} + V \right) \psi$
... where $x^i$ spans spatial indexes.
Let's distribute that abuse-of-notation:
$i \hbar \frac{\partial}{\partial t} \psi = -\frac{\hbar^2}{2 m} \frac{\partial^2}{(\partial x^i)^2} \psi + V \psi$

This looks like an ugly parabolic PDE, and not a beautiful hyperbolic PDE. But hold on, it's a hyperbolic PDE in disguise. It's also an advection equation / fluid motion in disguise.

The momentum-operator in position-space is $\hat{p}_i = -i \hbar \frac{\partial}{\partial x^i}$
We can substitute only one partial to get:
$i \hbar \frac{\partial}{\partial t} \psi = -\frac{i \hbar}{2 m} \hat{p}_i \frac{\partial}{\partial x^i} \psi + V \psi$
$\frac{\partial}{\partial t} \psi + \frac{1}{2 m} \hat{p}_i \frac{\partial}{\partial x^i} \psi = - \frac{i}{\hbar} V \psi$
Welp, now that looks a lot more like an advection equation.
Alright, it's not a real advection equation: for it to be one, the momentum operator would have to be evaluated to a number first, so we'd need the square of an applied derivative rather than a squared derivative waiting to be applied.
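To convince myself the momentum-operator bookkeeping above is right, here's a throwaway numeric check (my own sketch; units and sample values are arbitrary): apply $\hat{p} = -i \hbar \frac{d}{dx}$ to a plane wave $e^{ikx}$ via a central finite difference and confirm the eigenvalue relation $\hat{p} \psi = \hbar k \psi$.

```python
# Numeric sanity check of p_hat = -i*hbar*d/dx on a plane wave.
import cmath

hbar = 1.0    # natural units, arbitrary for this check
k = 2.5       # wave number
x0 = 0.7      # sample point
h = 1e-6      # finite-difference step

def psi(x):
    return cmath.exp(1j * k * x)

dpsi = (psi(x0 + h) - psi(x0 - h)) / (2 * h)   # central-difference d/dx psi
p_psi = -1j * hbar * dpsi                      # momentum operator applied

# eigenvalue relation: p_hat psi = hbar * k * psi
assert abs(p_psi - hbar * k * psi(x0)) < 1e-6
```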

Pauli Equation

$i \hbar \frac{\partial}{\partial t} \psi = \left( \frac{1}{2 m} (\sigma_i (\hat{p}_i - q A_i))^2 + q \phi \right) \psi$

Not sure what I want to do with this...

Spinors:


$O(p,q,V)$ is the "orthogonal group": the group of linear transforms of the vector space $V$ that preserve a symmetric bilinear form of signature $(p,q)$ -- i.e. transforms that carry orthonormal bases to orthonormal bases.
$SO(p,q,V)$ is the "special orthogonal" group: the subgroup of orthogonal transforms with determinant 1, for metric signature $(p,q)$. For definite signature this turns out to be the group of rotation matrices.
A spinor represents a normalized basis in $SO(p,q,\mathbb{R})$ where $p+q=4$, but even though the underlying space is 4D, the spinor is represented as a 2D complex vector (acted on by 2x2 complex transforms), hmm...

For now I'll just consider SO(3,1).

Spinors:
$\upsilon = \left[ \begin{matrix} \upsilon_1 \\ \upsilon_2 \end{matrix} \right]$
exists within the 2D complex space, probably with some constraints to its values.

If we think of the imaginary number i as a 2x2 matrix: $i = \left[ \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right]$
then it has all the same properties as the imaginary number does.
Keep this in mind, it'll be used a lot later...
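Quick throwaway check that this 2x2 real stand-in really does behave like the imaginary unit:

```python
# Check that the 2x2 real matrix form of i behaves like i.
def mmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1, 0], [0, 1]]
i2 = [[0, -1], [1, 0]]  # the matrix form of i from above

assert mmul(i2, i2) == [[-1, 0], [0, -1]]       # i^2 = -1
assert mmul(mmul(i2, i2), mmul(i2, i2)) == I2   # i^4 = 1
```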

I don't like the idea of using capital letters for spinor indexes. Ricci convention uses capitals for groups of indexes. I guess you could consider a spinor index as a group of two indexes of a complex space but ...
Now whereas with our tensors, the tensor could always be represented as a contraction of its components with its basis elements (outer-product'd together), I'm not sure what the convention is for the spinor basis indexes. Do we just take it at face value that they are matrices? Meh.

Spinor Minkowski(Cartesian?) special-relativistic metric:
$\epsilon_{ij} = \epsilon^{ij} = \left[ \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right]$
This turns out to be the negative of the real 2x2 representation of the imaginary unit from above.

Define lowering as $\upsilon_a = \upsilon^b \epsilon_{ba}$ and raising as $\upsilon^a = \epsilon^{ab} \upsilon_b$

Notice that:
$\epsilon_{ij} = \epsilon^{ij}$ $\epsilon_{ij} = -\epsilon_{ji}$ $\epsilon^{ij} = -\epsilon^{ji}$ $\epsilon_{ik} \epsilon^{kj} = -\delta^j_i$
$\epsilon^{ij} \epsilon_{ik} = \delta^j_k$
${\epsilon_i}^j = -{\epsilon^j}_i = \delta^j_i$
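These identities are easy to get sign-confused on, so here's a brute-force check (my own throwaway script, using the numeric values of $\epsilon$ from above):

```python
# Brute-force check of the epsilon identities.
def mmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

eps = [[0, 1], [-1, 0]]   # numeric values of both eps_{ij} and eps^{ij}
epsT = [[eps[j][i] for j in range(2)] for i in range(2)]

# antisymmetry
assert all(eps[i][j] == -eps[j][i] for i in range(2) for j in range(2))
# eps_{ik} eps^{kj}: adjacent contraction is a plain matrix product -> -delta
assert mmul(eps, eps) == [[-1, 0], [0, -1]]
# eps^{ij} eps_{ik} = delta^j_k: contraction over the first slot -> eps^T eps
assert mmul(epsT, eps) == [[1, 0], [0, 1]]
```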

Spinor scalar product:
$a_i b^i = -a^i b_i$

Spinor transform basis in 3,1 is the Pauli matrices:
I'm going to throw in some one-form basis elements to abuse notation. If the physicists are allowed to do it ...
$\sigma = \sigma_\mu dx^\mu = \left\{ \sigma_0 dx^0, \sigma_1 dx^1, \sigma_2 dx^2, \sigma_3 dx^3 \right\} = \left\{ \left[ \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \right] dx^0, \left[ \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right] dx^1, \left[ \begin{matrix} 0 & -i \\ i & 0 \end{matrix} \right] dx^2, \left[ \begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix} \right] dx^3 \right\} $
Let's introduce two more indexes for indexing into the spinors themselves:
$[\sigma_\mu] = {{\sigma_\mu}^i}_j$

Some properties?
$\sigma_a = (\sigma_a)^H$ i.e. ${{\sigma_a}^i}_j = {{\bar{\sigma}_a}^j}_i$ : They are each Hermitian.
$\sigma_a \cdot \sigma_a = I$ (no sum): each is its own inverse, therefore each is a square root of the 2x2 identity matrix.
$\sigma_i \cdot \sigma_j = \delta_{ij} \sigma_0 + i \, \epsilon_{ijk} \sigma_k$ for $i,j,k$ in $1,2,3$ (here $\epsilon_{ijk}$ is the 3D Levi-Civita symbol, not the spinor metric).
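Another brute-force check of these properties, with plain Python complex arithmetic (my own throwaway script; only a couple of the $\sigma_i \sigma_j$ cases are spot-checked):

```python
# Check the listed Pauli-matrix properties.
s0 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

def mmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(a):  # conjugate transpose
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

for s in (s0, s1, s2, s3):
    assert dagger(s) == s        # Hermitian
    assert mmul(s, s) == s0      # self-inverse (square root of I)

# sigma_i sigma_j = delta_ij sigma_0 + i eps_ijk sigma_k, spot-checked:
assert mmul(s1, s2) == [[1j * v for v in row] for row in s3]   # s1 s2 = i s3
assert mmul(s2, s3) == [[1j * v for v in row] for row in s1]   # s2 s3 = i s1
```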

A vector in M(3,1) space is defined as:
$x = x^\mu \partial_\mu = x^0 \partial_0 + x^1 \partial_1 + x^2 \partial_2 + x^3 \partial_3$

Transforming this to a spinor is done by mapping its basis elements to the spinor basis elements:
$\sigma(x) = \sigma_\mu dx^\mu(x^\nu \partial_\nu) = \sigma_\mu x^\nu \delta^\mu_\nu = x^0 \sigma_0 + x^1 \sigma_1 + x^2 \sigma_2 + x^3 \sigma_3$
$= x^0 \left[ \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \right] + x^1 \left[ \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right] + x^2 \left[ \begin{matrix} 0 & -i \\ i & 0 \end{matrix} \right] + x^3 \left[ \begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix} \right] $
$= \left[ \begin{matrix} x^0 + x^3 & x^1 - i x^2 \\ x^1 + i x^2 & x^0 - x^3 \end{matrix} \right]$

Mind you, I could've chosen $\sigma$ to have a vector basis and x to have a form basis, and instead defined the conversion of x to a spinor as $x(\sigma)$...
But then my spinor components would look like ${\sigma^{\mu i}}_j$, and who does that?

Now how about converting back from a spinor to a vector?
$\sigma(x) = \left[ \begin{matrix} x^0 + x^3 & x^1 - i x^2 \\ x^1 + i x^2 & x^0 - x^3 \end{matrix} \right]$

$\frac{1}{2} tr(\sigma_\mu \cdot \sigma(x)) = \frac{1}{2} {{\sigma_\mu}^i}_j x^\nu {{\sigma_\nu}^j}_i = \frac{1}{2} \cdot 2 \delta_{\mu\nu} x^\nu = \delta_{\mu\nu} x^\nu$
$\frac{1}{2} tr(\sigma(x) \cdot \sigma_\mu) = \delta_{\mu\nu} x^\nu$ as well, so the order of the $\sigma_\mu$ multiplication doesn't matter, because the trace is cyclic.
Done.
Sort of.
We've managed to abuse our spacetime index valence. This identity depends on multiplying a raised index by a lowered $\delta_{\mu\nu}$. Wouldn't it be nice if that $\delta_{\mu\nu}$ was replaced with an $\eta_{\mu\nu}$ spacetime Minkowski diagonal metric? We're about to see how we can fix Pauli's blunder.
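Here's a numeric check of the round trip, vector to spinor and back via the half-trace (my own throwaway script; the sample components are arbitrary):

```python
# Build sigma(x) for a sample 4-vector and recover components via the half-trace.
s0 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
sig = [s0, s1, s2, s3]

def mmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def htr(a):  # half-trace
    return (a[0][0] + a[1][1]) / 2

x = [1.0, 2.0, -3.0, 0.5]   # arbitrary sample components

# sigma(x) = x^mu sigma_mu
S = [[sum(x[m] * sig[m][i][j] for m in range(4)) for j in range(2)]
     for i in range(2)]
# matches the explicit matrix form above
assert S[0][0] == x[0] + x[3] and S[1][0] == x[1] + 1j * x[2]

# half-trace recovers delta_{mu nu} x^nu
for m in range(4):
    assert abs(htr(mmul(sig[m], S)) - x[m]) < 1e-12
```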


Quaternions

Quaternions come up later, here's a quick quaternion to spinor conversion fact:

Now realify our spinor basis, replacing each complex entry $a + b i$ with $a I_{2x2} + b \, i_{2x2}$:
$\sigma = \left[ \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \right], \left[ \begin{matrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{matrix} \right], \left[ \begin{matrix} 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{matrix} \right], \left[ \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{matrix} \right]$
But this doesn't give us quaternion-like units: the realified $\sigma_0$ and $\sigma_3$ are both real diagonal matrices squaring to $+I$, so no three of these can play the roles of $i, j, k$ (which must square to $-I$). So first let's multiply the spatial matrices by i:
$i \sigma_i = \left\{ \left[ \begin{matrix} 0 & i \\ i & 0 \end{matrix} \right], \left[ \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right], \left[ \begin{matrix} i & 0 \\ 0 & -i \end{matrix} \right] \right\}$
Also, we are multiplying our spatial $\sigma_i$'s by i, but we want $\sigma_0 = I$ regardless.
So what we're really doing is multiplying our $\sigma_\mu$'s by ${diag(1, i, i, i)^\mu}_\nu$, which is a square root of $\eta_{\mu\nu} = diag(1,-1,-1,-1)$. What a coincidence. Hmmmmmmmmmm.
$\sigma_\mu \cdot {diag(1, i, i, i)^\mu}_\nu = \left\{ \left[ \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \right], \left[ \begin{matrix} 0 & i \\ i & 0 \end{matrix} \right], \left[ \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right], \left[ \begin{matrix} i & 0 \\ 0 & -i \end{matrix} \right] \right\}$
Then let's swap the x and z. Why are we doing this? We'll call that "foreshadowing".
$\sigma_\mu \cdot {permute(1,4,3,2)^\mu}_\nu \cdot {diag(1, i, i, i)^\nu}_\alpha = \left\{ \left[ \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \right], \left[ \begin{matrix} i & 0 \\ 0 & -i \end{matrix} \right], \left[ \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right], \left[ \begin{matrix} 0 & i \\ i & 0 \end{matrix} \right] \right\}$

I'll just call this $(\sigma')_\mu$.
Now we have $\sigma'(x) = \left[ \begin{matrix} x^0 + i x^1 & x^2 + i x^3 \\ -x^2 + i x^3 & x^0 - i x^1 \end{matrix} \right]$
Looks much better. 0 is next to 1 (not 3). 2 is next to 3 (not 1). Much more modulo-2.
Also, what do you know, this is the $\mathbb{C}^{2 \times 2}$ representation of the quaternion basis.
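Quick throwaway check that $\sigma'_1, \sigma'_2, \sigma'_3$ really do multiply like the quaternion units $i, j, k$:

```python
# Check the quaternion relations for the sigma' basis.
def mmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

qi = [[1j, 0], [0, -1j]]   # sigma'_1
qj = [[0, 1], [-1, 0]]     # sigma'_2
qk = [[0, 1j], [1j, 0]]    # sigma'_3
neg_one = [[-1, 0], [0, -1]]

for q in (qi, qj, qk):
    assert mmul(q, q) == neg_one      # i^2 = j^2 = k^2 = -1
assert mmul(qi, qj) == qk             # ij = k
assert mmul(qj, qk) == qi             # jk = i
assert mmul(qk, qi) == qj             # ki = j
```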

Now let's realify this, outering the real part with $I_{2x2}$ and the imaginary part with $i_{2x2}$, and see what we get...

$\sigma'(x) = x^0 \left[ \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \right] + x^1 \left[ \begin{matrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{matrix} \right] + x^2 \left[ \begin{matrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{matrix} \right] + x^3 \left[ \begin{matrix} 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{matrix} \right] $
...and end up with something resembling the quaternion matrix representation:
$\left[ \begin{matrix} x_0 &-x_1 & x_2 &-x_3 \\ x_1 & x_0 & x_3 & x_2 \\ -x_2 &-x_3 & x_0 & x_1 \\ x_3 &-x_2 &-x_1 & x_0 \end{matrix} \right]$
There you go, your $\mathbb{R}^{4 \times 4}$ representation of a quaternion.

Now for either representations, if we call our elements $\mathbf{1}, \mathbf{i}, \mathbf{j}, \mathbf{k}$ and then make a multiplication table out of them, we will end up with this:
$\left[\begin{array}{cccc} \mathbf{1} & \mathbf{i} & \mathbf{j} & \mathbf{k}\\ \mathbf{i} & -{\mathbf{1}} & \mathbf{k} & -{\mathbf{j}}\\ \mathbf{j} & -{\mathbf{k}} & -{\mathbf{1}} & \mathbf{i}\\ \mathbf{k} & \mathbf{j} & -{\mathbf{i}} & -{\mathbf{1}}\end{array}\right]$
There you go, the quaternion multiplication table.
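And a brute-force check that the 4x4 real matrices above reproduce exactly this table (my own throwaway script):

```python
# Check the quaternion multiplication table for the 4x4 real representation.
def mmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def neg(a):
    return [[-v for v in row] for row in a]

one = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
qi = [[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]]
qj = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]
qk = [[0, 0, 0, -1], [0, 0, 1, 0], [0, -1, 0, 0], [1, 0, 0, 0]]

for q in (qi, qj, qk):
    assert mmul(q, q) == neg(one)                           # squares are -1
assert mmul(qi, qj) == qk and mmul(qj, qi) == neg(qk)       # ij = k, ji = -k
assert mmul(qj, qk) == qi and mmul(qk, qj) == neg(qi)       # jk = i, kj = -i
assert mmul(qk, qi) == qj and mmul(qi, qk) == neg(qj)       # ki = j, ik = -j
```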

So the lesson? Pauli reinvented quaternions, but with x and z flipped and with the spatial ones multiplied by i.
And in doing so he ...
- ended up with the spacetime spinor basis producing $\delta_{\mu\nu}$ instead of a correct $\eta_{\mu\nu}$.
- needed an extra i in the Pauli equation which could've been merged into the matrix basis.
- had an extra i in the exponential map to produce rotations.



Spin Connection

Alright so there's our vector in spinor form. What's going to be incredibly useful in the future is converting orthonormal vectors to spinors. Or so I've been told...

Ok now for connections.

$\nabla_a v = \nabla_a (v^b \partial_b) = (\partial_a v^b + {\Gamma^b}_{ac} v^c) \partial_b$
Now let's apply our spinor basis:
$\sigma(\nabla_a v) = (\partial_a v^b + {\Gamma^b}_{ac} v^c) \sigma_b$
$= (\partial_a v^b + {\Gamma^b}_{ac} v^c) {{\sigma_b}^i}_j$
... ok, so what? Why make a whole other thing about "spin connection" ? It's just a connection, whose basis has been transformed to another space.

How about the basis?
$\nabla_a e_b = {\Gamma^c}_{ab} e_c$
$\sigma(\nabla_a e_b) = {\Gamma^c}_{ab} {{\sigma_c}^i}_j$

ok but how does the covariant derivative "act" on "spin indexes", which are the 1 <-> 2 indexes produced from that contraction?
$\sigma(v) = v^a {{\sigma_a}^i}_j$
and $\sigma(\nabla_a v) = (\partial_a v^b + {\Gamma^b}_{ac} v^c) {{\sigma_b}^i}_j$
$ = (\partial_a v^b) {{\sigma_b}^i}_j + {\Gamma^b}_{ac} v^c {{\sigma_b}^i}_j$
... because the $\sigma_b$ matrices are constant
$ = \partial_a (v^b {{\sigma_b}^i}_j) + {\Gamma^b}_{ac} v^c {{\sigma_b}^i}_j$

Now we use our trick of getting components back from the spinor:
$\frac{1}{2} tr(\sigma_b \cdot \sigma(\nabla_a v)) = \delta_{bc} (\partial_a v^c + {\Gamma^c}_{ad} v^d)$

But what if we want to pretend a covariant derivative is "acting" on the two indexes of the spinor basis?
$v' = \sigma(v) = v^u \sigma_u = v^u {{\sigma_u}^i}_j$ is a matrix with components ${(v')^i}_j = v^u {{\sigma_u}^i}_j$
$\sigma(D_a v)$
$= D_a(\sigma(v))$
$= D_a {(v')^i}_j$
$= \partial_a {(v')^i}_j + {{\omega_a}^i}_k {(v')^k}_j - {{\omega_a}^k}_j {(v')^i}_k$
...substitute...
$= \partial_a v^u {{\sigma_u}^i}_j + {{\omega_a}^i}_k {{\sigma_u}^k}_j v^u - {{\omega_a}^k}_j {{\sigma_u}^i}_k v^u$
$= \partial_a v^u {{\sigma_u}^i}_j + {{\omega_a}^i}_k {{\sigma_u}^k}_j v^u - {{\omega_a}^k}_j {{\sigma_u}^i}_k v^u$
$= \partial_a v^u {{\sigma_u}^i}_j + {\omega_a}^{ik} \sigma_{ukj} v^u - {\omega_a}^{kj} \sigma_{uik} v^u$
Now if we assume ${{\omega_a}^i}_j = -{{\omega_a}^j}_i$ (still gotta show that one, but honestly this gets into the covariant definition with regards to basis elements, and what even are the basis elements of the "Hilbert half-space" or whatever Rovelli & Vidotto's book calls it...)
...or do we need to assume $\omega_{aij} = -\omega_{aji}$? Is that the same thing, subject to the spinor metric? TODO...
$= \partial_a v^u {{\sigma_u}^i}_j + {\omega_a}^{ik} \sigma_{ukj} v^u - {\omega_a}^{jk} \sigma_{uki} v^u$
then we reconstruct the basis with either a left or right spinor basis mult...
$(D_a \sigma(v)) \cdot \sigma_u = (\partial_a v^v {{\sigma_v}^i}_j + {\omega_a}^{ik} \sigma_{vkj} v^v - {\omega_a}^{jk} \sigma_{vki} v^v) \cdot \sigma_u$
$= \partial_a v^u + {\omega_a}^{ik} \sigma_{ukj} \cdot \sigma v^u - {\omega_a}^{jk} \sigma_{uki} \cdot \sigma v^u$
...idk...
but
$D_a v = (\partial_a v^u + {\Gamma^u}_{ab} v^b) \partial_u$
$\sigma(D_a v) = (\partial_a v^u + {\Gamma^u}_{ab} v^b) \sigma_u$
so
$(\partial_a v^u + {\Gamma^u}_{ab} v^b) \sigma_u = \partial_a v^u {{\sigma_u}^i}_j + {{\omega_a}^i}_k {{\sigma_u}^k}_j v^u - {{\omega_a}^k}_j {{\sigma_u}^i}_k v^u$
${\Gamma^u}_{ab} v^b \sigma_u = {{\omega_a}^i}_k {{\sigma_b}^k}_j v^b - {{\sigma_b}^i}_k {{\omega_a}^k}_j v^b$
${\Gamma^u}_{ab} \sigma_u = \omega_a \cdot \sigma_b - \sigma_b \cdot \omega_a$
so if $\omega_a = (\omega_a)^H$...
${\Gamma^u}_{ab} \sigma_u = 2 \omega_a \cdot \sigma_b$
$\omega_a = \frac{1}{2} {\Gamma^u}_{ab} \sigma_u \cdot \sigma_b$ (no sum over b, using $\sigma_b \cdot \sigma_b = I$)



Dirac Gamma Matrices

$\gamma^0 = \left[ \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{matrix} \right]; \gamma^1 = \left[ \begin{matrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{matrix} \right]; \gamma^2 = \left[ \begin{matrix} 0 & 0 & 0 &-i \\ 0 & 0 & i & 0 \\ 0 & i & 0 & 0 \\ -i & 0 & 0 & 0 \end{matrix} \right]; \gamma^3 = \left[ \begin{matrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 &-1 \\ -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{matrix} \right]; $

Looks like the spatial Dirac gamma matrices are just $-i_{2x2} \otimes \sigma_i$ (with $\gamma^0 = \sigma_3 \otimes \sigma_0$). Kind of fitting, since the quaternions are $i \sigma_i$.

And now we get ${\gamma^{(\mu|i}}_k {\gamma^{|\nu)k}}_j = \eta^{\mu\nu} {\delta^i}_j$, i.e. $\frac{1}{2}(\gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu) = \eta^{\mu\nu} I$.

Why symmetric? Why couldn't Dirac just pick a matrix basis that evaluated to $\eta^{\mu\nu}$ without symmetry? Because he needed the antisymmetric commutation as well.

Ok so one improvement over the Pauli matrices is now I've got a bit more certainty of the valence of $\gamma^\mu$ -- it's contravariant. Idk how this makes any difference, covariant vs contravariant, but at least it tips the reader off to the fact that we're going to get a spacetime metric out of this and not just $\delta_{\mu\nu}$ like the Pauli matrix basis gives us.
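Here's a brute-force check that the standard Dirac-basis gamma matrices satisfy the Clifford relation with $\eta = diag(1,-1,-1,-1)$ (my own throwaway script; the matrices are hardcoded in the standard Dirac representation):

```python
# Check {gamma^mu, gamma^nu} = 2 eta^{mu nu} I for the Dirac basis.
def mmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

g0 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
g1 = [[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]]
g2 = [[0, 0, 0, -1j], [0, 0, 1j, 0], [0, 1j, 0, 0], [-1j, 0, 0, 0]]
g3 = [[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]]
gam = [g0, g1, g2, g3]
eta = [1, -1, -1, -1]   # diagonal of eta^{mu nu}, signature (+,-,-,-)

for m in range(4):
    for n in range(4):
        anti = madd(mmul(gam[m], gam[n]), mmul(gam[n], gam[m]))
        want = 2 * eta[m] if m == n else 0
        assert all(anti[i][j] == (want if i == j else 0)
                   for i in range(4) for j in range(4))
```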

Dirac Equation

$(i \hbar \gamma^\mu \partial_\mu - m c) \phi = 0$
This looks like a relativistic wave equation, but it's weird seeing $\phi$ show up on the source term.
