Riemann curvature:

Notice that the order of application of the exp-integrals-of-connections matters. If you were to swap orders then you would have to deal with the Riemann curvature tensor: $[\nabla_\mu, \nabla_\nu] e_\alpha = e_\beta {R^\beta}_{\alpha\mu\nu}$. The same goes for exp-integrals and parallel propagators.

If we were to apply our covariant derivative twice then we would end up with this:

$\nabla_\mu \nabla_\nu e_\rho = \nabla_\mu (e_\alpha \cdot {\Gamma^\alpha}_{\nu\rho})$
$\nabla_\mu \nabla_\nu e_\rho = (\nabla_\mu e_\alpha) \cdot {\Gamma^\alpha}_{\nu\rho} + e_\alpha \cdot \nabla_\mu {\Gamma^\alpha}_{\nu\rho}$
$\nabla_\mu \nabla_\nu e_\rho = e_\alpha \cdot ({\Gamma^\alpha}_{\mu\beta} \cdot {\Gamma^\beta}_{\nu\rho} + \nabla_\mu {\Gamma^\alpha}_{\nu\rho})$
Notice that because we're considering the covariant derivative of tensor objects rather than of components, the covariant derivative applied to a component is just the covariant derivative of a scalar function, which equals a partial derivative. This always holds, yet is often omitted from tensor notation when the tensor bases are omitted as well.
$\nabla_\mu \nabla_\nu e_\rho = e_\alpha \cdot \left( {\Gamma^\alpha}_{\mu\beta} \cdot {\Gamma^\beta}_{\nu\rho} + e_\mu\left( {\Gamma^\alpha}_{\nu\rho} \right) \right)$
Notice here I'm using $e_\mu( \cdot )$ as a function. This makes use of the typical $e_\mu = \partial_\mu$ definition, but stays open to non-coordinate (anholonomic) definitions.
$[\nabla_\mu, \nabla_\nu] e_\rho = 2 e_\alpha \cdot \left( e_{[\mu}\left( {\Gamma^\alpha}_{\nu]\rho} \right) + {\Gamma^\alpha}_{[\mu|\beta} \cdot {\Gamma^\beta}_{|\nu]\rho} \right)$

For a non-coordinate basis we can add one last term to equate our expression to the Riemann curvature tensor:

$\nabla_{({c_{\mu\nu}}^\beta e_\beta)} e_\rho = \nabla_{[e_\mu, e_\nu]} e_\rho = \nabla_{[\mu, \nu]} e_\rho = e_\alpha \cdot {\Gamma^\alpha}_{\beta\rho} {c_{\mu\nu}}^\beta$

...and then we get this:

$\left( [\nabla_\mu, \nabla_\nu] - \nabla_{[\mu, \nu]} \right) e_\rho = e_\alpha \cdot \left( 2 e_{[\mu}\left( {\Gamma^\alpha}_{\nu]\rho} \right) + 2 {\Gamma^\alpha}_{[\mu|\beta} \cdot {\Gamma^\beta}_{|\nu]\rho} - {\Gamma^\alpha}_{\beta\rho} {c_{[\mu\nu]}}^\beta \right)$
$\left( [\nabla_\mu, \nabla_\nu] - \nabla_{[\mu, \nu]} \right) e_\rho = e_\alpha {R^\alpha}_{\rho\mu\nu}$

Let's go back to a coordinate basis (where the commutation term vanishes) and ask what this means in terms of parallel propagation:

$[\nabla_\mu, \nabla_\nu] {e_\rho}^I(\mathbf{x}) = {e_\alpha}^I(\mathbf{x}) \cdot {R^\alpha}_{\rho\mu\nu}(\mathbf{x})$
$\frac{\partial}{\partial x^\mu} \left( \nabla_\nu {e_\rho}^I(\mathbf{x}) \right) - \frac{\partial}{\partial x^\nu} \left( \nabla_\mu {e_\rho}^I(\mathbf{x}) \right) = {e_\alpha}^I(\mathbf{x}) \cdot {R^\alpha}_{\rho\mu\nu}(\mathbf{x})$
$\int_{x^\mu_L}^{x^\mu_R} {e^\alpha}_I(\mathbf{x}) \cdot d \left( \nabla_\nu {e_\rho}^I(\mathbf{x}) \right) = \int_{x^\mu_L}^{x^\mu_R} {R^\alpha}_{\rho\mu\nu}(\mathbf{x}) \mathbf{dx}^\mu$
$\nabla_\nu {e_\rho}^I(\mathbf{x}^\mu_R) = {e_\alpha}^I(\mathbf{x}^\mu_L) \cdot exp(\int_{x^\mu_L}^{x^\mu_R} {R^\alpha}_{\rho\mu\nu}(\mathbf{x}) \mathbf{dx}^\mu)$
$\frac{\partial}{\partial x^\nu} {e_\rho}^I(\mathbf{x}^\mu_R) = {e_\alpha}^I(\mathbf{x}^\mu_L) \cdot exp(\int_{x^\mu_L}^{x^\mu_R} {R^\alpha}_{\rho\mu\nu}(\mathbf{x}) \mathbf{dx}^\mu)$
$\int_{x^\nu_L}^{x^\nu_R} {e^\alpha}_I(\mathbf{x}^\mu_L, \mathbf{x}^\nu) \cdot d {e_\rho}^I(\mathbf{x}^\mu_R, \mathbf{x}^\nu) = \int_{x^\nu_L}^{x^\nu_R} exp(\int_{x^\mu_L}^{x^\mu_R} {R^\alpha}_{\rho\mu\nu}(\mathbf{x}^\mu, \mathbf{x}^\nu) \mathbf{dx}^\mu) \mathbf{dx}^\nu$
${e_\rho}^I(\mathbf{x} \rightarrow \mathbf{x}^\mu_R \rightarrow \mathbf{x}^\nu_R) - {e_\rho}^I(\mathbf{x} \rightarrow \mathbf{x}^\nu_R \rightarrow \mathbf{x}^\mu_R) = {e_\alpha}^I(\mathbf{x}^\mu_L, \mathbf{x}^\nu_L) \cdot exp\left( \int_{x^\nu_L}^{x^\nu_R} exp\left( \int_{x^\mu_L}^{x^\mu_R} {R^\alpha}_{\rho\mu\nu}(\mathbf{x}^\mu, \mathbf{x}^\nu) \mathbf{dx}^\mu \right) \mathbf{dx}^\nu \right)$
TODO FIXME this is not correct.
Somehow you should end up with something similar to ...
$e_\rho(\mathbf{x}^\mu_R, \mathbf{x}^\nu_R) \cdot {(P^{-1})^\sigma}_\rho(\mathbf{x}^\mu_L, \mathbf{x}^\mu_R) \cdot {(P^{-1})^\tau}_\sigma(\mathbf{x}^\nu_L, \mathbf{x}^\nu_R) - e_\rho(\mathbf{x}^\mu_R, \mathbf{x}^\nu_R) \cdot {(P^{-1})^\sigma}_\rho(\mathbf{x}^\nu_L, \mathbf{x}^\nu_R) \cdot {(P^{-1})^\tau}_\sigma(\mathbf{x}^\mu_L, \mathbf{x}^\mu_R) = e_\alpha(\mathbf{x}^\mu_L, \mathbf{x}^\nu_L) \cdot exp \int exp \int {R^\alpha}_{\sigma\mu\nu}(\mathbf{x}^\mu, \mathbf{x}^\nu) \mathbf{dx}^\nu \mathbf{dx}^\mu$
...or similar. I don't know.


Think about exactly how later. I saw in the referenced S.E. that the commutation of the infinitesimal propagator in the $e_\mu$ and then $e_\nu$ direction produces the curvature, so that answers that.
Actually it doesn't...

It turns out all these sources mention the same equation, but none of them reference it. Some S.E. posts even ask for references, and no one provides any.
I'm betting they all came from somewhere like the Wiki references, since they all seem to cite $\mathbf{P}_\alpha^{-1} \mathbf{P}_\beta^{-1} \mathbf{P}_\alpha \mathbf{P}_\beta \mathbf{v} = \mathbf{R} \mathbf{v}$. One source defines it as the zero limit of the propagator; the other defines it as the derivative with respect to the parameter of the propagator integrals. They cite this instead of the same thing minus identity, which looks like $\mathbf{P}_{[\alpha} \mathbf{P}_{\beta]} \mathbf{v} = \mathbf{R}(\alpha, \beta) \mathbf{v}$, which is what I suspect is true.
Actually wait, they are talking about a similar thing, but instead of commutation by subtraction they are looking at commutation in terms of products ... $(\mathbf{P}_\alpha \mathbf{P}_\beta) (\mathbf{P}_\beta \mathbf{P}_\alpha)^{-1} \mathbf{v} = \mathbf{R}(\alpha, \beta) \mathbf{v}$.

Does this mean, by the Bianchi identity of the Riemann curvature tensor, that exchanging an index may produce a result whose difference from the original is proportional to the Riemann curvature tensor, while cycling all indexes will result in the same integral? I'll think about that later. Look into the path-ordering stuff in Carroll for more on this.




Ok, take two on this.
Instead of deriving Riemann from the connection directly, we can derive the parallel propagator in terms of the connection and insert it into the Riemann curvature's definition.
However this will depend on a matrix logarithm, which by definition has multiple solutions.
TODO.





So in the "parallel propagators" worksheet I went over how to reconstruct the basis provided that you know the connection at each chart location. And in my surface-from-connection project I show how important the initial conditions of the basis and the connection are. Assuming our initial conditions are good, and we know how to propagate our basis, the next question is: how do we propagate our connection?

Let's take the simplest case study, the sphere surface:

coordinates: $x = \{ \theta, \phi \}$.

chart:
$u^I = I \downarrow \left[ \begin{matrix} r sin(\theta) cos(\phi) \\ r sin(\theta) sin(\phi) \\ r cos(\theta) \end{matrix} \right]$

basis:
${e_\mu}^I = I \downarrow \overset{\mu \rightarrow}{ \left[ \begin{matrix} r cos(\theta) cos(\phi) & -r sin(\theta) sin(\phi) \\ r cos(\theta) sin(\phi) & r sin(\theta) cos(\phi) \\ -r sin(\theta) & 0 \end{matrix} \right] }$

metric:

$\textbf{g} = \left[ \begin{matrix} g_{\theta\theta} & g_{\theta\phi} \\ g_{\phi\theta} & g_{\phi\phi} \end{matrix} \right] = \left[ \begin{matrix} r^2 & 0 \\ 0 & r^2 sin(\theta)^2 \end{matrix} \right] $

metric inverse:

$\textbf{g}^{-1} = \left[ \begin{matrix} g^{\theta\theta} & g^{\theta\phi} \\ g^{\phi\theta} & g^{\phi\phi} \end{matrix} \right] = \left[ \begin{matrix} \frac{1}{r^2} & 0 \\ 0 & \frac{1}{r^2 sin(\theta)^2} \end{matrix} \right] $

connections:

$\mathbf{\Gamma}_\theta = \left[\begin{matrix} {\Gamma^\theta}_{\theta \theta} & {\Gamma^\theta}_{\theta\phi} \\ {\Gamma^\phi}_{\theta \theta} & {\Gamma^\phi}_{\theta\phi} \end{matrix}\right] = \left[\begin{matrix} 0 & 0 \\ 0 & \frac{cos(\theta)}{sin(\theta)} \end{matrix}\right]$

$\mathbf{\Gamma}_\phi = \left[\begin{matrix} {\Gamma^\theta}_{\phi \theta} & {\Gamma^\theta}_{\phi\phi} \\ {\Gamma^\phi}_{\phi \theta} & {\Gamma^\phi}_{\phi\phi} \end{matrix}\right] = \left[\begin{matrix} 0 & -cos(\theta) sin(\theta) \\ \frac{cos(\theta)}{sin(\theta)} & 0 \end{matrix}\right]$
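As a sanity check (my addition; a sympy sketch, not part of the derivation), these connection matrices follow from the metric via the Levi-Civita formula:

```python
import sympy as sp

# sphere-surface metric in (theta, phi) coordinates
theta, phi, r = sp.symbols('theta phi r', positive=True)
x = [theta, phi]
g = sp.Matrix([[r**2, 0], [0, r**2 * sp.sin(theta)**2]])
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    return sp.simplify(sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, c], x[b])
                      + sp.diff(g[d, b], x[c])
                      - sp.diff(g[b, c], x[d]))
        for d in range(2)))

# rows are the upper index, columns the last lower index,
# with the middle (derivative) index fixed -- matching the matrices above
Gamma_theta = sp.Matrix(2, 2, lambda a, c: christoffel(a, 0, c))
Gamma_phi = sp.Matrix(2, 2, lambda a, c: christoffel(a, 1, c))
print(Gamma_theta)  # lower-right entry should equal cos(theta)/sin(theta)
print(Gamma_phi)    # off-diagonal entries should match Gamma_phi above
```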

Parallel Propagators:

$\mathbf{P}_\theta(\theta_L, \theta_R) = exp \left( -\int_{\theta_L}^{\theta_R} \mathbf{\Gamma}_\theta d\theta \right)$
$\mathbf{P}_\theta(\theta_L, \theta_R) = exp \left( \int_{\theta_L}^{\theta_R} \left[\begin{matrix} 0 & 0 \\ 0 & -\frac{cos(\theta)}{sin(\theta)} \end{matrix}\right] d\theta \right)$
$\mathbf{P}_\theta(\theta_L, \theta_R) = exp \left( \left[\begin{matrix} 0 & 0 \\ 0 & -log(sin(\theta)) \end{matrix}\right]_{\theta_L}^{\theta_R} \right)$
$\mathbf{P}_\theta(\theta_L, \theta_R) = exp \left[\begin{matrix} 0 & 0 \\ 0 & log(sin(\theta_L)) - log(sin(\theta_R)) \end{matrix}\right] $
$\mathbf{P}_\theta(\theta_L, \theta_R) = exp \left[\begin{matrix} 0 & 0 \\ 0 & log \left( \frac{ sin(\theta_L) }{ sin(\theta_R) } \right) \end{matrix}\right] $
$\mathbf{P}_\theta(\theta_L, \theta_R) = \left[\begin{matrix} 1 & 0 \\ 0 & \frac{ sin(\theta_L) }{ sin(\theta_R) } \end{matrix}\right] $
$\mathbf{P}_\theta(\theta_L, \theta_R) = \mathbf{S}(1, \frac{sin(\theta_L)}{sin(\theta_R)})$

$\mathbf{P}_\phi(\phi_L, \phi_R) = exp \left( -\int_{\phi_L}^{\phi_R} \mathbf{\Gamma}_\phi d\phi \right)$
$\mathbf{P}_\phi(\phi_L, \phi_R) = exp \left( -\int_{\phi_L}^{\phi_R} \left[\begin{matrix} 0 & -cos(\theta) sin(\theta) \\ \frac{cos(\theta)}{sin(\theta)} & 0 \end{matrix}\right] d\phi \right)$
$\mathbf{P}_\phi(\phi_L, \phi_R) = exp \left( cos(\theta) (\phi_L - \phi_R) \left[\begin{matrix} 1 & 0 \\ 0 & \frac{1}{sin(\theta)} \end{matrix}\right] \left[\begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix}\right] \left[\begin{matrix} 1 & 0 \\ 0 & sin(\theta) \end{matrix}\right] \right)$
$\mathbf{P}_\phi(\phi_L, \phi_R) = \left[\begin{matrix} 1 & 0 \\ 0 & \frac{1}{sin(\theta)} \end{matrix}\right] \mathbf{R}_z \left( cos(\theta) (\phi_L - \phi_R) \right) \left[\begin{matrix} 1 & 0 \\ 0 & sin(\theta) \end{matrix}\right] $
$\mathbf{P}_\phi(\phi_L, \phi_R) = \mathbf{S}(1, \frac{1}{sin(\theta)}) \cdot \mathbf{R}_z \left( cos(\theta) (\phi_L - \phi_R) \right) \cdot \mathbf{S}(1, sin(\theta)) $
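Quick numeric check of this closed form (my addition; the Taylor-series `expm` helper is just to avoid a scipy dependency). Since $\mathbf{\Gamma}_\phi$ doesn't depend on $\phi$, the propagator is a plain matrix exponential, and it should agree with the scale-rotate-scale decomposition:

```python
import numpy as np

def expm(A, terms=40):
    # Taylor-series matrix exponential; fine for small, well-scaled matrices
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

theta, dphi = 0.7, 0.2  # arbitrary test point; phi_R - phi_L = dphi
c, s = np.cos(theta), np.sin(theta)
Gamma_phi = np.array([[0.0, -c * s], [c / s, 0.0]])

# closed form: S(1, 1/sin) . R_z(cos(theta) (phi_L - phi_R)) . S(1, sin)
a = -c * dphi
Rz = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
P_closed = np.diag([1.0, 1.0 / s]) @ Rz @ np.diag([1.0, s])

P_exp = expm(-dphi * Gamma_phi)
print(np.max(np.abs(P_closed - P_exp)))  # should be near machine epsilon
```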

Riemann curvature, $\sharp\flat\flat\flat$:
${R^\theta}_{\phi\theta\phi} = -{R^\theta}_{\phi\phi\theta} = sin(\theta)^2$
${R^\phi}_{\theta\phi\theta} = -{R^\phi}_{\theta\theta\phi} = 1$
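These components can be checked symbolically too (my addition; a sympy sketch using the coordinate-basis formula $ {R^\alpha}_{\rho\mu\nu} = \partial_\mu {\Gamma^\alpha}_{\nu\rho} - \partial_\nu {\Gamma^\alpha}_{\mu\rho} + {\Gamma^\alpha}_{\mu\beta} {\Gamma^\beta}_{\nu\rho} - {\Gamma^\alpha}_{\nu\beta} {\Gamma^\beta}_{\mu\rho}$):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
x = [theta, phi]
cot = sp.cos(theta) / sp.sin(theta)
# Gamma[a][b][c] = Gamma^a_{bc}, with index 0 = theta, 1 = phi
Gamma = [[[0, 0], [0, -sp.sin(theta) * sp.cos(theta)]],
         [[0, cot], [cot, 0]]]

def R(a, b, c, d):
    # R^a_{bcd} = d_c Gamma^a_{db} - d_d Gamma^a_{cb}
    #           + Gamma^a_{ce} Gamma^e_{db} - Gamma^a_{de} Gamma^e_{cb}
    expr = sp.diff(Gamma[a][d][b], x[c]) - sp.diff(Gamma[a][c][b], x[d])
    expr += sum(Gamma[a][c][e] * Gamma[e][d][b]
                - Gamma[a][d][e] * Gamma[e][c][b] for e in range(2))
    return sp.simplify(expr)

print(R(0, 1, 0, 1))  # R^theta_{phi theta phi}, should equal sin(theta)**2
print(R(1, 0, 1, 0))  # R^phi_{theta phi theta}, should equal 1
```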

Riemann curvature, as a matrix:
$\mathbf{R}_{\theta\phi} = [{R^\mu}_{\nu\theta\phi}] = \mu \downarrow \overset{\nu \rightarrow}{ \left[ \begin{matrix} {R^\theta}_{\theta\theta\phi} & {R^\theta}_{\phi\theta\phi} \\ {R^\phi}_{\theta\theta\phi} & {R^\phi}_{\phi\theta\phi} \end{matrix} \right] } = \left[ \begin{matrix} 0 & sin(\theta)^2 \\ -1 & 0 \end{matrix} \right] $
$\mathbf{R}_{\phi\theta} = [{R^\mu}_{\nu\phi\theta}] = \mu \downarrow \overset{\nu \rightarrow}{ \left[ \begin{matrix} {R^\theta}_{\theta\phi\theta} & {R^\theta}_{\phi\phi\theta} \\ {R^\phi}_{\theta\phi\theta} & {R^\phi}_{\phi\phi\theta} \end{matrix} \right] } = \left[ \begin{matrix} 0 & -sin(\theta)^2 \\ 1 & 0 \end{matrix} \right]$

Riemann curvature, $\sharp\sharp\flat\flat$ (wait, isn't this metric-weight invariant, since the $\sharp$s cancel the $\flat$s?):
${R^{\theta\phi}}_{\theta\phi} = {R^{\phi\theta}}_{\phi\theta} = -{R^{\theta\phi}}_{\phi\theta} = -{R^{\phi\theta}}_{\theta\phi} = \frac{1}{r^2}$

Riemann curvature, $\flat\flat\flat\flat$ (2D invariant):
$R_{\theta\phi\theta\phi} = R_{\phi\theta\phi\theta} = -R_{\theta\phi\phi\theta} = -R_{\phi\theta\theta\phi} = r^2 sin(\theta)^2$

So now that we have the Riemann curvature, how does it relate to parallel propagator commutation of +, or commutation of *?

(Also, let $\theta_L \rightarrow \theta, \theta_R \rightarrow \theta + \Delta \theta, \phi_L \rightarrow \phi, \phi_R \rightarrow \phi + \Delta \phi$)

With + ...

$[\mathbf{P}_\theta, \mathbf{P}_\phi]_{+}$
$= \mathbf{P}_\theta( (\theta_L, \phi_R), (\theta_R, \phi_R) ) \mathbf{P}_\phi( (\theta_L, \phi_L), (\theta_L, \phi_R) ) - \mathbf{P}_\phi( (\theta_R, \phi_L), (\theta_R, \phi_R) ) \mathbf{P}_\theta( (\theta_L, \phi_L), (\theta_R, \phi_L) ) $
$= \mathbf{S}(1, \frac{sin(\theta_L)}{sin(\theta_R)}) \cdot \mathbf{S}(1, \frac{1}{sin(\theta_L)}) \cdot \mathbf{R}_z \left( cos(\theta_L) (\phi_L - \phi_R) \right) \cdot \mathbf{S}(1, sin(\theta_L)) - \mathbf{S}(1, \frac{1}{sin(\theta_R)}) \cdot \mathbf{R}_z \left( cos(\theta_R) (\phi_L - \phi_R) \right) \cdot \mathbf{S}(1, sin(\theta_R)) \cdot \mathbf{S}(1, \frac{sin(\theta_L)}{sin(\theta_R)}) $
$= \mathbf{S}(1, \frac{1}{sin(\theta_R)}) \cdot \mathbf{R}_z \left( cos(\theta_L) (\phi_L - \phi_R) \right) \cdot \mathbf{S}(1, sin(\theta_L)) - \mathbf{S}(1, \frac{1}{sin(\theta_R)}) \cdot \mathbf{R}_z \left( cos(\theta_R) (\phi_L - \phi_R) \right) \cdot \mathbf{S}(1, sin(\theta_L)) $
$= \mathbf{S}(1, \frac{1}{sin(\theta_R)}) \cdot \left( \mathbf{R}_z \left( cos(\theta_L) (\phi_L - \phi_R) \right) - \mathbf{R}_z \left( cos(\theta_R) (\phi_L - \phi_R) \right) \right) \cdot \mathbf{S}(1, sin(\theta_L)) $
$= \mathbf{S}(1, \frac{1}{sin(\theta + \Delta \theta)}) \cdot \left( \mathbf{R}_z \left( \Delta \phi \cdot cos(\theta + \Delta \theta) \right) - \mathbf{R}_z \left( \Delta \phi \cdot cos(\theta) \right) \right) \cdot \mathbf{S}(1, sin(\theta)) $
$= \left[ \begin{matrix} cos(\Delta \phi \cdot cos(\theta + \Delta \theta)) - cos(\Delta \phi \cdot cos(\theta)) & -sin(\theta) \left( sin(\Delta \phi \cdot cos(\theta + \Delta \theta)) - sin(\Delta \phi \cdot cos(\theta)) \right) \\ \frac{1}{sin(\theta + \Delta \theta)} \left( sin(\Delta \phi \cdot cos(\theta + \Delta \theta)) - sin(\Delta \phi \cdot cos(\theta)) \right) & \frac{sin(\theta)}{sin(\theta + \Delta \theta)} \left( cos(\Delta \phi \cdot cos(\theta + \Delta \theta)) - cos(\Delta \phi \cdot cos(\theta)) \right) \end{matrix} \right]$

The limit is:

$\underset{\Delta \theta \rightarrow 0, \Delta \phi \rightarrow 0}{lim} \frac{1}{\Delta \theta \Delta \phi} [\mathbf{P}_\theta, \mathbf{P}_\phi]_{+}$
$= \underset{\Delta \theta \rightarrow 0, \Delta \phi \rightarrow 0}{lim} \frac{1}{\Delta \theta \Delta \phi} \left[ \begin{matrix} cos(\Delta \phi \cdot cos(\theta + \Delta \theta)) - cos(\Delta \phi \cdot cos(\theta)) & -sin(\theta) \left( sin(\Delta \phi \cdot cos(\theta + \Delta \theta)) - sin(\Delta \phi \cdot cos(\theta)) \right) \\ \frac{1}{sin(\theta + \Delta \theta)} \left( sin(\Delta \phi \cdot cos(\theta + \Delta \theta)) - sin(\Delta \phi \cdot cos(\theta)) \right) & \frac{sin(\theta)}{sin(\theta + \Delta \theta)} \left( cos(\Delta \phi \cdot cos(\theta + \Delta \theta)) - cos(\Delta \phi \cdot cos(\theta)) \right) \end{matrix} \right] $
Using the small-angle limits $cos(\epsilon) \rightarrow 1$ and $sin(\epsilon) \rightarrow \epsilon$ for $\epsilon = \Delta \phi \cdot cos( \cdot )$:
$= \underset{\Delta \theta \rightarrow 0, \Delta \phi \rightarrow 0}{lim} \frac{1}{\Delta \theta \Delta \phi} \left[ \begin{matrix} 1 - 1 & -sin(\theta) \left( \Delta \phi \cdot cos(\theta + \Delta \theta) - \Delta \phi \cdot cos(\theta)) \right) \\ \frac{1}{sin(\theta + \Delta \theta)} \left( \Delta \phi \cdot cos(\theta + \Delta \theta) - \Delta \phi \cdot cos(\theta) \right) & \frac{sin(\theta)}{sin(\theta + \Delta \theta)} \left( 1 - 1 \right) \end{matrix} \right] $
$= \underset{\Delta \theta \rightarrow 0}{lim} \frac{1}{\Delta \theta} \left[ \begin{matrix} 0 & -sin(\theta) \left( cos(\theta + \Delta \theta) - cos(\theta)) \right) \\ \frac{1}{sin(\theta + \Delta \theta)} \left( cos(\theta + \Delta \theta) - cos(\theta) \right) & 0 \end{matrix} \right] $
Using $\underset{\Delta \theta \rightarrow 0}{lim} \left( \frac{f(\theta + \Delta \theta) - f(\theta)}{\Delta \theta} \right) = \frac{\partial}{\partial \theta} f(\theta)$
$= \left[ \begin{matrix} 0 & -sin(\theta) \frac{\partial}{\partial \theta} cos(\theta) \\ \frac{1}{sin(\theta)} \frac{\partial}{\partial \theta} cos(\theta) & 0 \end{matrix} \right] $
$= \left[ \begin{matrix} 0 & sin(\theta)^2 \\ -1 & 0 \end{matrix} \right] $
$= \mathbf{R}_{\theta\phi}(\theta, \phi)$
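A numeric spot-check of this limit (my addition), using the closed-form propagators with small but finite $\Delta\theta$, $\Delta\phi$:

```python
import numpy as np

def S(a, b):
    return np.diag([float(a), float(b)])

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def P_theta(tL, tR):
    return S(1, np.sin(tL) / np.sin(tR))

def P_phi(t, pL, pR):
    # P_phi depends on theta through the scale/rotation factors
    return S(1, 1 / np.sin(t)) @ Rz(np.cos(t) * (pL - pR)) @ S(1, np.sin(t))

theta, dt, dp = 0.7, 1e-5, 1e-5
# additive commutator around the small coordinate rectangle
C = P_theta(theta, theta + dt) @ P_phi(theta, 0.0, dp) \
    - P_phi(theta + dt, 0.0, dp) @ P_theta(theta, theta + dt)
approx = C / (dt * dp)
exact = np.array([[0.0, np.sin(theta)**2], [-1.0, 0.0]])  # R_{theta phi}
print(approx)
```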

Nice, this works, even though the commutator definition of the Riemann curvature listed on Wikipedia and StackExchange says to use the $[\cdot, \cdot]_*$ method. What is its result? ...

With * ...

$[\mathbf{P}_\theta, \mathbf{P}_\phi]_{*}$
$= \mathbf{P}_\theta^{-1}( (\theta_L, \phi_L), (\theta_R, \phi_L) ) \cdot \mathbf{P}_\phi^{-1}( (\theta_R, \phi_L), (\theta_R, \phi_R) ) \cdot \mathbf{P}_\theta( (\theta_L, \phi_R), (\theta_R, \phi_R) ) \cdot \mathbf{P}_\phi( (\theta_L, \phi_L), (\theta_L, \phi_R) ) $
Alright, semantics $P^{-1}(a,b) = P(b,a)$...
$= \mathbf{P}_\theta( (\theta_R, \phi_L), (\theta_L, \phi_L) ) \cdot \mathbf{P}_\phi( (\theta_R, \phi_R), (\theta_R, \phi_L) ) \cdot \mathbf{P}_\theta( (\theta_L, \phi_R), (\theta_R, \phi_R) ) \cdot \mathbf{P}_\phi( (\theta_L, \phi_L), (\theta_L, \phi_R) ) $
$= \mathbf{S}(1, \frac{sin(\theta_R)}{sin(\theta_L)}) \cdot \mathbf{S}(1, \frac{1}{sin(\theta_R)}) \cdot \mathbf{R}_z \left( cos(\theta_R) (\phi_R - \phi_L) \right) \cdot \mathbf{S}(1, sin(\theta_R)) \cdot \mathbf{S}(1, \frac{sin(\theta_L)}{sin(\theta_R)}) \cdot \mathbf{S}(1, \frac{1}{sin(\theta_L)}) \cdot \mathbf{R}_z \left( cos(\theta_L) (\phi_L - \phi_R) \right) \cdot \mathbf{S}(1, sin(\theta_L)) $
$= \mathbf{S}(1, \frac{1}{sin(\theta_L)}) \cdot \mathbf{R}_z \left( (cos(\theta_R) - cos(\theta_L)) (\phi_R - \phi_L) \right) \cdot \mathbf{S}(1, sin(\theta_L)) $
$= \mathbf{S}(1, \frac{1}{sin(\theta)}) \cdot \mathbf{R}_z \left( \Delta \phi (cos(\theta + \Delta \theta) - cos(\theta)) \right) \cdot \mathbf{S}(1, sin(\theta)) $
$= \left[ \begin{matrix} 1 & 0 \\ 0 & \frac{1}{sin(\theta)} \end{matrix} \right] \cdot \left[ \begin{matrix} cos \left( \Delta \phi (cos(\theta + \Delta \theta) - cos(\theta)) \right) & -sin \left( \Delta \phi (cos(\theta + \Delta \theta) - cos(\theta)) \right) \\ sin \left( \Delta \phi (cos(\theta + \Delta \theta) - cos(\theta)) \right) & cos \left( \Delta \phi (cos(\theta + \Delta \theta) - cos(\theta)) \right) \end{matrix} \right] \cdot \left[ \begin{matrix} 1 & 0 \\ 0 & sin(\theta) \end{matrix} \right] $
$= \left[ \begin{matrix} cos \left( \Delta \phi (cos(\theta + \Delta \theta) - cos(\theta)) \right) & -sin(\theta) sin \left( \Delta \phi (cos(\theta + \Delta \theta) - cos(\theta)) \right) \\ \frac{1}{sin(\theta)} sin \left( \Delta \phi (cos(\theta + \Delta \theta) - cos(\theta)) \right) & cos \left( \Delta \phi (cos(\theta + \Delta \theta) - cos(\theta)) \right) \end{matrix} \right] $

The limit is:
$\underset{\Delta \theta \rightarrow 0, \Delta \phi \rightarrow 0}{lim} \frac{1}{\Delta \theta \Delta \phi} \left( [\mathbf{P}_\theta, \mathbf{P}_\phi]_{*} - \mathbf{I} \right)$
$= \underset{\Delta \theta \rightarrow 0, \Delta \phi \rightarrow 0}{lim} \frac{1}{\Delta \theta \Delta \phi} \left( \mathbf{S}(1, \frac{1}{sin(\theta)}) \cdot \mathbf{R}_z \left( \Delta \phi (cos(\theta + \Delta \theta) - cos(\theta)) \right) \cdot \mathbf{S}(1, sin(\theta)) - \mathbf{I} \right)$
Notice we have to subtract the identity before taking the limit: the rotation limit alone leaves 1's on the diagonal, which would diverge under the $\frac{1}{\Delta \theta \Delta \phi}$ factor. This is the "minus identity" form I suspected above. Since $\mathbf{S}(1, \frac{1}{sin(\theta)}) \cdot \mathbf{S}(1, sin(\theta)) = \mathbf{I}$, the identity passes through the scalings, and the small-angle limit $\mathbf{R}_z(\epsilon) - \mathbf{I} \rightarrow \epsilon \left[ \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right]$ gives:
$= \underset{\Delta \theta \rightarrow 0, \Delta \phi \rightarrow 0}{lim} \frac{1}{\Delta \theta \Delta \phi} \left[ \begin{matrix} 1 & 0 \\ 0 & \frac{1}{sin(\theta)} \end{matrix} \right] \cdot \Delta \phi \left( cos(\theta + \Delta \theta) - cos(\theta) \right) \left[ \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right] \cdot \left[ \begin{matrix} 1 & 0 \\ 0 & sin(\theta) \end{matrix} \right] $
$= \underset{\Delta \phi \rightarrow 0}{lim} \frac{1}{\Delta \phi} \left[ \begin{matrix} 1 & 0 \\ 0 & \frac{1}{sin(\theta)} \end{matrix} \right] \cdot \Delta \phi \left( \underset{\Delta \theta \rightarrow 0}{lim} \frac{cos(\theta + \Delta \theta) - cos(\theta)}{\Delta \theta} \right) \left[ \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right] \cdot \left[ \begin{matrix} 1 & 0 \\ 0 & sin(\theta) \end{matrix} \right] $
$= \underset{\Delta \phi \rightarrow 0}{lim} \frac{1}{\Delta \phi} \left[ \begin{matrix} 1 & 0 \\ 0 & \frac{1}{sin(\theta)} \end{matrix} \right] \cdot \Delta \phi \cdot \frac{\partial}{\partial \theta}cos(\theta) \left[ \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right] \cdot \left[ \begin{matrix} 1 & 0 \\ 0 & sin(\theta) \end{matrix} \right] $
$= \underset{\Delta \phi \rightarrow 0}{lim} \frac{1}{\Delta \phi} \left[ \begin{matrix} 1 & 0 \\ 0 & \frac{1}{sin(\theta)} \end{matrix} \right] \cdot \Delta \phi \cdot -sin(\theta) \left[ \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right] \cdot \left[ \begin{matrix} 1 & 0 \\ 0 & sin(\theta) \end{matrix} \right] $
$= \underset{\Delta \phi \rightarrow 0}{lim} sin(\theta) \left[ \begin{matrix} 0 & sin(\theta) \\ -\frac{1}{sin(\theta)} & 0 \end{matrix} \right] $
$= \left[ \begin{matrix} 0 & sin(\theta)^2 \\ -1 & 0 \end{matrix} \right] $
$= \mathbf{R}_{\theta\phi}(\theta, \phi)$
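Same check numerically for the $*$ version (my addition), subtracting the identity from the loop product before dividing:

```python
import numpy as np

def S(b):
    return np.diag([1.0, float(b)])

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def P_theta(tL, tR):
    return S(np.sin(tL) / np.sin(tR))

def P_phi(t, pL, pR):
    return S(1 / np.sin(t)) @ Rz(np.cos(t) * (pL - pR)) @ S(np.sin(t))

theta, dt, dp = 0.7, 1e-3, 1e-3
# loop applied right-to-left: +phi at theta, +theta at phi_R,
# -phi at theta_R, then -theta back to the start
loop = P_theta(theta + dt, theta) @ P_phi(theta + dt, dp, 0.0) \
    @ P_theta(theta, theta + dt) @ P_phi(theta, 0.0, dp)
holonomy = (loop - np.eye(2)) / (dt * dp)
exact = np.array([[0.0, np.sin(theta)**2], [-1.0, 0.0]])  # R_{theta phi}
print(holonomy)
```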

Ricci curvature, $\flat\flat$
$R_{\theta\theta} = 1$
$R_{\phi\phi} = sin(\theta)^2$

Ricci curvature, $\sharp\flat$
${R^\theta}_\theta = {R^\phi}_\phi = \frac{1}{r^2}$

Can we use Ricci curvature to find propagator commutation?

$\mathbf{R} = \left[ \begin{matrix} R_{\theta\theta} & R_{\theta\phi} \\ R_{\phi\theta} & R_{\phi\phi} \end{matrix} \right] = \left[ \begin{matrix} {R^\theta}_{\theta\theta\theta} + {R^\phi}_{\theta\phi\theta} & {R^\theta}_{\theta\theta\phi} + {R^\phi}_{\theta\phi\phi} \\ {R^\theta}_{\phi\theta\theta} + {R^\phi}_{\phi\phi\theta} & {R^\theta}_{\phi\theta\phi} + {R^\phi}_{\phi\phi\phi} \end{matrix} \right] = \left[ \begin{matrix} {R^\phi}_{\theta\phi\theta} & {R^\theta}_{\theta\theta\phi} \\ {R^\phi}_{\phi\phi\theta} & {R^\theta}_{\phi\theta\phi} \end{matrix} \right] = \left[ \begin{matrix} 1 & 0 \\ 0 & sin(\theta)^2 \end{matrix} \right] $

So for two dimensions it is easy to reverse infer the Riemann curvature from the Ricci curvature.

Another interesting fact about this manifold:
$\mathbf{R} = \frac{1}{r^2} \mathbf{g}$

Ricci scalar curvature (twice the Gaussian curvature $K = \frac{1}{r^2}$):
$R = \frac{2}{{r}^{2}}$
Which just happens to mean ...
$\mathbf{R} = \frac{1}{2} R \mathbf{g}$
$\mathbf{R} - \frac{1}{2} R \mathbf{g} = 0$
...which means, in 2D where n=2, the trace-free version of the Ricci-tensor, $R_{\mu\nu} - \frac{1}{n} R g_{\mu\nu}$, is zero.
...which makes sense, that the trace-free version would be zero, since the Ricci tensor is purely trace.



Sidebar on the separate propagators, and on evaluating the linear combination of them:

Define our path vector as linear in coordinate space:
$\mathbf{v} = \left[ \begin{matrix} v^\theta \\ v^\phi \end{matrix} \right] = \left[ \begin{matrix} \Delta \theta \\ \Delta \phi \end{matrix} \right] $

Define our linear combination of connection matrices:
$\mathbf{\Gamma}_\mathbf{v} = \mathbf{\Gamma}_\theta v^\theta + \mathbf{\Gamma}_\phi v^\phi = \left[ \begin{matrix} 0 & -\frac{1}{2}sin(2\theta) \Delta \phi \\ cot(\theta) \Delta \phi & cot(\theta) \Delta \theta \end{matrix} \right]$

Calculate our parallel propagator:
$\mathbf{P}_\mathbf{v} = exp \left( -\int_{\lambda=0}^{\lambda=1} \mathbf{\Gamma}_\mathbf{v} (\mathbf{x} + \lambda \mathbf{v}) d\lambda\right) = exp \left( -\int_{\lambda=0}^{\lambda=1} \left[ \begin{matrix} 0 & -\frac{1}{2}sin(2(\theta + \lambda \Delta \theta)) \Delta \phi \\ cot(\theta + \lambda \Delta \theta) \Delta \phi & cot(\theta + \lambda \Delta \theta) \Delta \theta \end{matrix} \right] d\lambda \right)$

Now to integrate. We have to use substitution...
$u = \theta + \lambda \Delta \theta$, $du = \Delta \theta d\lambda$, $d\lambda = \frac{1}{\Delta \theta} du$
...and we end up with this:
$\mathbf{P}_\mathbf{v} = exp \left( -\int_{u=\theta}^{u=\theta+\Delta\theta} \left[ \begin{matrix} 0 & -\frac{1}{2} \frac{\Delta \phi}{\Delta \theta} sin(2u) \\ \frac{\Delta \phi}{\Delta \theta} cot(u) & cot(u) \end{matrix} \right] du \right) $

And here's where things get interesting. We can't evaluate this substituted integral when $\Delta \theta = 0$, because its entries are singular there. So for the $\Delta \theta = 0$ case we have to consider the integral over $\lambda$ separately, without the $u$ substitution. Of course this is the $\mathbf{P}_\phi$ propagator. So already we have to calculate each coordinate propagator separately.
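One way to dodge the case split (my addition, a numeric sketch): build $\mathbf{P}_\mathbf{v}$ directly as a discretized path-ordered product of small steps, which needs no substitution and handles $\Delta \theta = 0$ fine. Checking the $\Delta \theta = 0$ case against the closed-form $\mathbf{P}_\phi$:

```python
import numpy as np

def Gamma_theta(t):
    return np.array([[0.0, 0.0], [0.0, np.cos(t) / np.sin(t)]])

def Gamma_phi(t):
    return np.array([[0.0, -np.cos(t) * np.sin(t)],
                     [np.cos(t) / np.sin(t), 0.0]])

def P_v(theta0, v, n=4000):
    # path-ordered product of first-order steps exp(-Gamma_v dlam) ~ I - Gamma_v dlam;
    # later steps multiply on the left
    P = np.eye(2)
    dlam = 1.0 / n
    for k in range(n):
        t = theta0 + (k + 0.5) * dlam * v[0]  # theta varies along the path
        G = Gamma_theta(t) * v[0] + Gamma_phi(t) * v[1]
        P = (np.eye(2) - G * dlam) @ P
    return P

theta0, dphi = 0.7, 0.3
P_num = P_v(theta0, (0.0, dphi))  # the Delta-theta = 0 case

a = -np.cos(theta0) * dphi  # cos(theta) (phi_L - phi_R)
Rz = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
P_exact = np.diag([1.0, 1.0 / np.sin(theta0)]) @ Rz @ np.diag([1.0, np.sin(theta0)])
print(np.max(np.abs(P_num - P_exact)))
```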



So back to parallel-propagating the propagators.

So everyone knows that connections are pseudo-tensors, not tensors, and do not transform tensorially. The change-of-basis equation for connections has two parts: 1) the typical tensor change-of-basis, and 2) a second-derivative component.
I am suspicious, simply by the definition of the connection ${\Gamma^\alpha}_{\beta\gamma} = e^\alpha( \nabla_\beta e_\gamma )$, that this second-derivative component of the transformation only pertains to the derivative index, not the other two. I suspect the other two indexes can transform tensorially, seeing as these are the two indexes of the parallel propagator. This would mean that, when propagating the connection, you would use the parallel-propagator transform on the $\alpha$ and $\gamma$ indexes, and then any extra piece would only pertain to the derivative index $\beta$.
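For reference (my addition; this is the standard result, quoted for comparison with the suspicion above), the coordinate change-of-basis law for the connection is:

$ {\Gamma'^\alpha}_{\beta\gamma} = \frac{\partial x'^\alpha}{\partial x^\mu} \frac{\partial x^\nu}{\partial x'^\beta} \frac{\partial x^\rho}{\partial x'^\gamma} {\Gamma^\mu}_{\nu\rho} + \frac{\partial x'^\alpha}{\partial x^\mu} \frac{\partial^2 x^\mu}{\partial x'^\beta \partial x'^\gamma} $

Note that the inhomogeneous second-derivative term is symmetric in both lower indexes $\beta$ and $\gamma$, not confined to the derivative index alone.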

Ok back to the sphere surface.

The chart is:

$u^I = \left[ \begin{matrix} u^x \\ u^y \\ u^z \end{matrix} \right] = \left[ \begin{matrix} r sin \left( \theta \right) cos \left( \phi \right) \\ r sin \left( \theta \right) sin \left( \phi \right) \\ r cos \left( \theta \right) \end{matrix} \right]$

The basis operators $e_\mu = \partial_\mu$ applied to the chart give us, using the lower-as-row, upper-as-column convention:
${e_\mu}^I = I \downarrow \overset{\mu \rightarrow}{\left[ \begin{matrix} {e_\theta}^x & {e_\phi}^x \\ {e_\theta}^y & {e_\phi}^y \\ {e_\theta}^z & {e_\phi}^z \end{matrix} \right]} = I \downarrow \overset{\mu \rightarrow}{\left[ \begin{matrix} r cos(\phi) cos(\theta) & -r sin(\phi) sin(\theta) \\ r sin(\phi) cos(\theta) & r cos(\phi) sin(\theta) \\ -r sin(\theta) & 0 \end{matrix} \right]} $

Already things are going to be interesting, because our embedding manifold has a higher dimension than our embedded manifold.
Is this a prerequisite for nonzero Riemann curvature?
Will I need to form a complete basis in the embedding space using the vertical tangent space, in order to correctly perform parallel propagation?
Otherwise, you only ever have 2D worth of information stored in the basis, right? Is this enough to traverse the 2D sphere surface?

Let's start with $\theta = \frac{\pi}{2}$, $\phi = 0$, and operate on the manifold with $r = 1$ fixed,
such that the initial basis $e_\mu$ is aligned to the flat embedding cartesian basis $e_I$:
${e_\mu}^I = I \downarrow \overset{\mu \rightarrow}{\left[ \begin{matrix} 0 & 0 \\ 0 & 1 \\ -1 & 0 \end{matrix} \right]}$

So, initially, $e_\theta = -e_z$ and $e_\phi = e_y$

At this point, for some small $\Delta \theta$ and $\Delta \phi$, our parallel propagators for transforming the basis $e_\theta$ and $e_\phi$ are..
$\mathbf{P}_\theta(\theta, \theta + \Delta \theta) = \mathbf{S}(1, \frac{sin(\theta)}{sin(\theta + \Delta \theta)}) $
$\mathbf{P}_\phi(\phi, \phi + \Delta \phi) = \mathbf{S}(1, \frac{1}{sin(\theta)}) \cdot \mathbf{R}_z \left( -\Delta \phi cos(\theta) \right) \cdot \mathbf{S}(1, sin(\theta)) $
So, as already established, our basis which starts at identity is going to change by:

$\mathbf{e}(\frac{\pi}{2} + \Delta \theta, 0)$
$= \mathbf{e}(\frac{\pi}{2}, 0) \cdot \mathbf{P}_\theta^{-1}(\frac{\pi}{2}, \frac{\pi}{2} + \Delta \theta)$
$= \left[ \begin{matrix} 0 & 0 \\ 0 & 1 \\ -1 & 0 \end{matrix} \right] \cdot \left[ \begin{matrix} 1 & 0 \\ 0 & sin(\frac{\pi}{2} + \Delta \theta) \end{matrix} \right]$
$= \left[ \begin{matrix} 0 & 0 \\ 0 & cos(\Delta \theta) \\ -1 & 0 \end{matrix} \right]$

$\mathbf{e}(\frac{\pi}{2}, \Delta \phi)$
$= \mathbf{e}(\frac{\pi}{2}, 0) \cdot \mathbf{P}_\phi^{-1}(0, \Delta \phi)$
$= \left[ \begin{matrix} 0 & 0 \\ 0 & 1 \\ -1 & 0 \end{matrix} \right] \cdot \left[ \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \right] $
$= \left[ \begin{matrix} 0 & 0 \\ 0 & 1 \\ -1 & 0 \end{matrix} \right]$

So we see, with propagation from our point of origin along $\theta$, we only end up varying the $e_{\phi}^y$ component, and nothing else. Equivalently, with propagation along $\phi$, we vary nothing.

So in order to correctly propagate our basis around the entire surface, we definitely need more information. The connection should provide that.
Maybe now I need to consider this all as a first-order system, like what Palatini does (right?).
What would Cartan do? He would use an orthonormal anholonomic basis, and he would integrate the basis as well as the commutation coefficients / structure constants rather than the connection. Though, to be fair, in an orthonormal anholonomic basis the connection coefficients are rotation matrices and are linear combinations of the structure constants.

At the point $(\theta, \phi) = (\frac{\pi}{2}, 0)$ on the sphere surface manifold, the connections vanish: ${\Gamma^\alpha}_{\beta\gamma} = 0$.

So combining everything into one system ...
$U = \left[ \begin{matrix} {e_\theta}^x & {e_\phi}^x \\ {e_\theta}^y & {e_\phi}^y \\ {e_\theta}^z & {e_\phi}^z \\ {\Gamma^\theta}_{\theta\theta} & {\Gamma^\theta}_{\theta\phi} \\ {\Gamma^\theta}_{\phi\theta} & {\Gamma^\theta}_{\phi\phi} \\ {\Gamma^\phi}_{\theta\theta} & {\Gamma^\phi}_{\theta\phi} \\ {\Gamma^\phi}_{\phi\theta} & {\Gamma^\phi}_{\phi\phi} \end{matrix} \right]$

In fact, maybe I shouldn't be representing my $e_\mu$ with respect to the embedding manifold, which has higher dimension than the manifold. But if we use the chart coordinates, then always $\mathbf{e} = \mathbf{I}$. Then all we are concerned with is the propagation of the connections...

So lets ignore the basis propagation and just look at the connection propagation, and assume we are in a world where the basis is always correct.
Actually in this case, the connection shouldn't need any information from the basis, whether its horizontal or vertical information.

$\mathbf{\Gamma}_\theta(\theta, \phi) = \left[\begin{matrix} {\Gamma^\theta}_{\theta \theta} & {\Gamma^\theta}_{\theta\phi} \\ {\Gamma^\phi}_{\theta \theta} & {\Gamma^\phi}_{\theta\phi} \end{matrix}\right] = \left[\begin{matrix} 0 & 0 \\ 0 & \frac{cos(\theta)}{sin(\theta)} \end{matrix}\right]$

That means:
$\mathbf{\Gamma}_\theta(\theta_R, \phi) = \left[\begin{matrix} 0 & 0 \\ 0 & cot(\theta_R) \end{matrix}\right] = Q \cdot \left[\begin{matrix} 0 & 0 \\ 0 & cot(\theta_L) \end{matrix}\right] \cdot Q^{-1} $
Which would work for ... nothing, since we are conjugating by $Q$ and $Q^{-1}$, and a similarity transform preserves the eigenvalues: it can't turn $cot(\theta_L)$ into $cot(\theta_R)$.
This is probably where the derivative coordinate needs to come into play.
Of course we can get this to work right if we apply just one index, or if we apply all three ... which is it?
Probably using all three rather than using one, since propagating any other (1,1) tensor would require two transforms - one of $P$ and one of $P^{-1}$...
Or better yet, if $Q = \left[ \begin{matrix} 1 & 0 \\ 0 & \frac{cot(\theta_L)}{cot(\theta_R)} \end{matrix} \right]$.
Then the $\alpha$ and $\gamma$ cancel each other, and the $\beta$ transformation is by $Q^{-1}$, then we might have something.
So ${\Gamma^\alpha}_{\beta\gamma}(y) = {Q(x,y)^\alpha}_\mu {Q^{-1}(x,y)^\nu}_\beta {Q^{-1}(x,y)^\rho}_\gamma {\Gamma^\mu}_{\nu\rho}(x)$
But that's just Q for propagating the connection in the $\theta$ direction.

What about propagating the connection in the $\phi$ direction?

$\mathbf{\Gamma}_\phi(\theta, \phi) = \left[\begin{matrix} {\Gamma^\theta}_{\phi \theta} & {\Gamma^\theta}_{\phi\phi} \\ {\Gamma^\phi}_{\phi \theta} & {\Gamma^\phi}_{\phi\phi} \end{matrix}\right] = \left[\begin{matrix} 0 & -cos(\theta) sin(\theta) \\ \frac{cos(\theta)}{sin(\theta)} & 0 \end{matrix}\right] $

$\mathbf{\Gamma}_\phi(\theta, \phi_R) = \left[\begin{matrix} 0 & -cos(\theta) sin(\theta) \\ \frac{cos(\theta)}{sin(\theta)} & 0 \end{matrix}\right] = Q \cdot \left[\begin{matrix} 0 & -cos(\theta) sin(\theta) \\ \frac{cos(\theta)}{sin(\theta)} & 0 \end{matrix}\right] \cdot Q^{-1} \cdot Q^{-1} $