Start with a manifold $\mathcal{M}$ with dimension n.
Define a point on the manifold: $\mathbf{p} \in \mathcal{M}$
Define the tangent space at a point on the manifold: $\mathbf{T}_\mathbf{p}(\mathcal{M})$
Let $\mathbf{u} : \mathbb{R}^n \rightarrow \mathcal{M}$ be a coordinate chart of the manifold, and let $\mathbf{x} = \{x^\mu\}$ be a tuple of coordinates in the chart's domain, with $\mu$ spanning 1 to n (or 0 to n-1 if you would like), such that $\mathbf{u}(\mathbf{x}) = \mathbf{p}$.
Define a basis $\{e_\mu(\mathbf{x})\}$ that spans $\mathbf{T}_\mathbf{p}(\mathcal{M})$.

Define the basis represented in global Cartesian components to be: $e_\mu(\mathbf{x}) = {e_\mu}^I(\mathbf{x}) e_I$, for Cartesian basis $e_I$.
$e_I$ has to be constant wrt all $\mathbf{x}$ because we're going to be integrating them across a region. Don't worry, the transform ${e_\mu}^I$ will be used in the formalities but not in the results.

Let's write that out per-component. I'll use hats to denote the Cartesian basis indexes:
${e_\mu}^I = \downarrow I \overset{\rightarrow \mu}{\left[\begin{matrix} {e_1}^\hat{1} & ... & {e_n}^\hat{1} \\ \vdots & & \vdots \\ {e_1}^\hat{n} & ... & {e_n}^\hat{n} \end{matrix}\right]} = \overset{\rightarrow \mu}{\left[\begin{matrix} e_1 & | & ... & | & e_n \end{matrix}\right]} = \downarrow I \left[\begin{matrix} e^\hat{1} \\ --- \\ \vdots \\ --- \\ e^\hat{n} \end{matrix}\right] $

Notice I picked my matrix representation of my ${e_\mu}^I$ tensor components such that contravariant components are distinct per each row (so contravariant indexes are represented as column vectors) and covariant components are distinct per each column (so covariant indexes are represented as row vectors).

While we're here, let's define the dual basis / inverse transform $e^\mu$, such that $e_\mu e^\nu = \delta^\nu_\mu$.
And let's define its Cartesian background components $e^\mu = {e^\mu}_I e^I$ where $e^I = e_I$ since the Cartesian metric is identity.

${e^\mu}_I = \downarrow \mu \overset{\rightarrow I}{\left[\begin{matrix} {e^1}_\hat{1} & ... & {e^1}_\hat{n} \\ \vdots & & \vdots \\ {e^n}_\hat{1} & ... & {e^n}_\hat{n} \end{matrix}\right]}$

Notice that, in matrix form, the matrix $({e^\mu}_I)$ is the inverse of the matrix $({e_\mu}^I)$.

The way to discern the basis from its dual when written in index notation - by convention - is whether the first index, stated in curvilinear coordinates, is covariant or contravariant. Covariant $e_\mu$ represents the vector basis while contravariant $e^\mu$ represents the dual basis. I should probably use some other symbol to denote the dual basis in index notation (some common examples are $\theta$ or $\omega$), maybe even $(e^{-1})^\mu$.

Matrices will be represented as bold symbols. Ex: $\Gamma_\mu$ would be a scalar, such as the trace ${\Gamma^\alpha}_{\mu\alpha}$, while $\mathbf{\Gamma}_\mu$ would be a matrix whose values are $\mathbf{\Gamma}_\mu = ({\Gamma^\alpha}_{\mu\beta})$.

Sometimes I'll keep the summation indexes as a reminder of the contra-/co-variant representation of the tensor component quantities; other times I will remove the summation indexes if the contra-/co-variance of the tensor should be deducible. If the brackets do remain around a tensor, even if it is in index notation, that is just to serve as a reminder (to me) that this is a matrix value.

If I ever drop all indexes from $e_\mu$, I will shorthand denote $e = ({e_\mu}^I)$ and $e^{-1} = ({e^\mu}_I)$.

Arbitrary (p, q) tensor:
$\mathbf{A} = {A^{\mu_1 ... \mu_p}}_{\nu_1 ... \nu_q} \underset{k=1}{\overset{p}{\otimes}} e_{\mu_k} \underset{k=1}{\overset{q}{\otimes}} e^{\nu_k} $
In multi-index notation this is written as: $\mathbf{A} = {A^\vec{M}}_\vec{N} {e_\vec{M}}^\vec{N}$.
For now I will use vector arrows to distinguish multi-index capital letters from locally-Cartesian(/Minkowski) indexes associated with the basis $e_I$. Maybe later I'll change one or the other?

Vector field:
$\mathbf{v} = v^\mu e_\mu$

Connection and covariant derivative:
$\nabla_\mu e_\nu = {\Gamma^\alpha}_{\mu\nu} e_\alpha$

Notice that, for the rest of this worksheet, I am going to require that the connection is a metric-cancelling Levi-Civita connection. This means it has no torsion. I am not assuming it is a coordinate basis though, so the connection may include partial metric terms and may include commutation terms.

As matrices:

$\nabla_\mu \mathbf{e} = \mathbf{e} \mathbf{\Gamma}_\mu$

Written out per-component:
$\nabla_\mu \left[\begin{matrix} {e_1}^\hat{1} & ... & {e_n}^\hat{1} \\ \vdots & & \vdots \\ {e_1}^\hat{n} & ... & {e_n}^\hat{n} \end{matrix}\right] = \left[\begin{matrix} {e_1}^\hat{1} & ... & {e_n}^\hat{1} \\ \vdots & & \vdots \\ {e_1}^\hat{n} & ... & {e_n}^\hat{n} \end{matrix}\right] \left[\begin{matrix} {\Gamma^1}_{\mu 1} & ... & {\Gamma^1}_{\mu n} \\ \vdots & & \vdots \\ {\Gamma^n}_{\mu 1} & ... & {\Gamma^n}_{\mu n} \end{matrix}\right]$

Notice that, because I picked my contravariant components to be distinct per each row (contravariant indexes are represented as column vectors) and covariant components to be distinct per each column (covariant indexes are represented as row vectors), the matrix-multiplication of the connection must be on the right hand side. If you want to multiply the connection matrix on the left hand side of the basis matrix then you must use distinct contravariant components per each column (contravariant indexes represented as row vectors) and distinct covariant components per each row (covariant indexes represented as column vectors).

Since we are differentiating the Cartesian components, and the Cartesian basis $e_I$ is constant at all points, the covariant derivative acting on the scalar functions ${e_\nu}^I$ reduces to the plain directional derivative $e_\mu$:

$e_\mu \left( {e_\nu}^I(\mathbf{x}) \right) = {e_\alpha}^I(\mathbf{x}) \cdot {\Gamma^\alpha}_{\mu\nu}(\mathbf{x}) $

Notice that in my formulation I am considering a non-coordinate basis. In a coordinate basis, $e_\mu = \partial_\mu$. If we are dealing with a non-coordinate basis, then we have to consider it as a linear combination of the coordinate basis: $e_\mu = {e_\mu}^{\bar\mu} \partial_{\bar\mu}$.

${e_\mu}^{\bar\mu}(\mathbf{x}) \partial_{\bar\mu} \left( {e_\nu}^I(\mathbf{x}) \right) = {e_\alpha}^I(\mathbf{x}) \cdot {\Gamma^\alpha}_{\mu\nu}(\mathbf{x}) $

$\partial_{\bar\mu} \left( {e_\nu}^I(\mathbf{x}) \right) = {e_\alpha}^I(\mathbf{x}) \cdot {e^\mu}_{\bar\mu}(\mathbf{x}) \cdot {\Gamma^\alpha}_{\mu\nu}(\mathbf{x}) $

Notice that the partial derivative is a linear combination of the non-coordinate basis: $\partial_{\bar\mu} = {e^\mu}_{\bar\mu} e_\mu$, which is the inverse of $e_\mu = {e_\mu}^{\bar\mu} \partial_{\bar\mu}$. I'm going to hide the combination of these two inside of notation.

Let ${\Gamma^\alpha}_{\bar{\mu}\nu} = {e^\mu}_{\bar\mu}(\mathbf{x}) {\Gamma^\alpha}_{\mu\nu}$

Then we find:

$\frac{\partial}{\partial x^{\bar\mu}} \left( {e_\nu}^I(\mathbf{x}) \right) = {e_\alpha}^I(\mathbf{x}) \cdot {\Gamma^\alpha}_{\bar{\mu}\nu}(\mathbf{x}) $

Next we can attempt to solve the linear dynamic system. Abuse notation a bit:

$\int {e^\alpha}_I(\mathbf{x}) \cdot d {e_\nu}^I(\mathbf{x}) = \int_{x'^{\bar{\mu}} = x_L^{\bar{\mu}}}^{x'^{\bar{\mu}} = x_R^{\bar{\mu}}} {\Gamma^\alpha}_{\bar{\mu}\nu}(\mathbf{x}') \mathbf{dx}'$

Integrate the linear dynamic system and hide the Cartesian basis index to find:
$e_\nu(\mathbf{x}^{\bar{\mu}}_R) = e_\alpha(\mathbf{x}^{\bar{\mu}}_L) \cdot exp\left( \int_{x'^{\bar{\mu}} = x_L^{\bar{\mu}}}^{x'^{\bar{\mu}} = x_R^{\bar{\mu}}} {\Gamma^\alpha}_{\bar{\mu}\nu}(\mathbf{x}'^{\bar{\mu}}) \mathbf{dx}'^{\bar{\mu}} \right)$

Notice that we're conducting a matrix-exponent, so a better description of this equation would be in matrix form:
$\mathbf{e}(\mathbf{x}^{\bar{\mu}}_R) = \mathbf{e}(\mathbf{x}^{\bar{\mu}}_L) \cdot exp \left( \int_{x'^{\bar{\mu}} = x_L^{\bar{\mu}}}^{x'^{\bar{\mu}} = x_R^{\bar{\mu}}} \mathbf{\Gamma}_{\bar{\mu}}(\mathbf{x}'^{\bar{\mu}}) \mathbf{dx}'^{\bar{\mu}} \right)$

...where $x^{\bar{\mu}}_L$ indicates the scalar value of the left bound of the $\bar{\mu}$ component of the $\mathbf{x}$ coordinate, and $\mathbf{x}^{\bar{\mu}}_L$ is the $\mathbf{x}$ coordinate with the $\bar{\mu}$ component exchanged for the scalar value $x^{\bar{\mu}}_L$.
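Here is a minimal numeric sketch of this linear system (assuming Python with numpy/scipy; it borrows the flat-space polar connection and basis worked out later in this worksheet, and the helper names are just for the sketch): integrating the basis transport equation step-by-step along $\phi$ agrees with the single matrix exponential.

```python
# Sketch: step-wise integration of  d(e)/dphi = e . Gamma_phi  versus the single
# matrix exponential, using the flat-space polar connection/basis from later on.
import numpy as np
from scipy.linalg import expm

def Gamma_phi(r):
    # (Gamma^alpha_{phi beta}): rows = alpha (contravariant), cols = beta (covariant)
    return np.array([[0.0, -r], [1.0 / r, 0.0]])

def basis(r, phi):
    # e_mu^I: columns are e_r and e_phi expressed in Cartesian components
    return np.array([[np.cos(phi), -r * np.sin(phi)],
                     [np.sin(phi),  r * np.cos(phi)]])

r, phi_L, phi_R, steps = 2.0, 0.3, 1.1, 1000
e = basis(r, phi_L)
dphi = (phi_R - phi_L) / steps
for _ in range(steps):
    e = e @ expm(Gamma_phi(r) * dphi)   # connection multiplies on the right

print(np.allclose(e, basis(r, phi_R)))                                          # True
print(np.allclose(e, basis(r, phi_L) @ expm(Gamma_phi(r) * (phi_R - phi_L))))   # True
```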

TODO update the rest of this worksheet, and the 'integrating vector fields' worksheets, to take the non-coordinate definition into account.
Specifically, change the mu's of Gamma into bar-mu's. A better TODO would be to not use bars (since those can be confused with conjugate indexes on complex manifolds), so think of a better denotation for coordinate indexes. Most texts that discern between coordinate and non-coordinate indexes use plain symbols for coordinate and hats for non-coordinate (especially for orthonormal non-coordinate indexes), but here I am using an arbitrary non-coordinate basis more often, and I don't want to put hats everywhere.

So that's how you transport a basis from one point $\mathbf{x}^\mu_L$ to another point $\mathbf{x}^\mu_R$ along a single changing coordinate $x^\mu$.
How about if you want to transport it along an arbitrary coordinate path on the manifold?
For that, for now, I will only consider separate movement along individual coordinate lines:
Let's assume $\mathbf{x}$ and $\mathbf{y}$ are points on our manifold.

$e_\nu(\mathbf{y}) = e_{\alpha_1}(\mathbf{x}) \cdot exp\left( \int_{z^1 = x^1}^{z^1 = y^1} {\Gamma^{\alpha_1}}_{1 {\alpha_2}}(z^1, x^2, ..., x^n) dz^1 \right) \cdot ... \cdot exp\left( \int_{z^\mu = x^\mu}^{z^\mu = y^\mu} {\Gamma^{\alpha_\mu}}_{\mu {\alpha_{\mu+1}}}(y^1, ..., y^{\mu-1}, z^\mu, x^{\mu+1}, ..., x^n) dz^\mu \right) \cdot ... \cdot exp\left( \int_{z^n = x^n}^{z^n = y^n} {\Gamma^{\alpha_n}}_{n \nu}(y^1, ..., y^{n-1}, z^n) dz^n \right) $

Notice that I am transforming the original basis across each connection's coordinate dimension individually, and after I perform each transform I exchange the source coordinate in the integral with the destination component.
Think of it like traversing the edges of a n-hypercube to get from (0,0,...,0) to (1,1,...,1).

Maybe with some more notation abuse I could write that as:
$e_\nu(\mathbf{y}) = e_\alpha(\mathbf{x}) \cdot \underset{\mu=1}{\overset{n}{\Pi}} exp\left( \int_{z^\mu = x^\mu}^{z^\mu = y^\mu} {\Gamma^\alpha}_{\mu \nu}(y^{1..\mu-1} | z^\mu | x^{\mu+1..n}) dz^\mu \right)$

Let's spare ourselves from writing this term any more than one time. Let's define the 'parallel propagator', a linear transform for propagating vector components from the tangent space at one point on a manifold to the tangent space at another point:

${P_\mu(\mathbf{x}_L^\mu, \mathbf{x}_R^\mu)^\alpha}_\nu = exp\left( -\int_{x'^\mu = x_L^\mu}^{x'^\mu = x_R^\mu} {\Gamma^\alpha}_{\mu\nu}(\mathbf{x}'^\mu) \mathbf{dx}'^\mu \right)$

The subscript $\mu$ here is denoting that we are travelling along a curve along the coordinate line of $x^\mu$. Notice the minus sign in the exponent. This means we will perform the inverse of this when we apply it to our ${e_\mu}^I(\mathbf{x})$ basis. Keep this in mind, it'll make sense later when we get to vector components.

Mind you the result of $\mathbf{P}(\mathbf{a},\mathbf{b})$ is a matrix, and the $\alpha$ and $\nu$ in this case are the matrix indexes.
Since it is the exponent of the integral of a matrix of the connection coefficients, I'm willing to bet that its transpose is what we use to propagate one-forms in the same way that the connection matrix transpose is used in the covariant derivative of one-forms.
For more on the parallel propagator, check out chapter 3 of Carroll. Carroll's notes: https://preposterousuniverse.com/wp-content/uploads/grnotes-three.pdf.


Notice that this depends on the identity: $\mathbf{P}^{-1}(\mathbf{x},\mathbf{y}) = \mathbf{P}(\mathbf{y},\mathbf{x})$
That can be proven quickly with:
$\mathbf{P}(\mathbf{y},\mathbf{x})$
$= exp(-\int_\mathbf{y}^\mathbf{x} \mathbf{\Gamma}_\mathbf{v} d\lambda)$
$= exp(\int_\mathbf{x}^\mathbf{y} \mathbf{\Gamma}_\mathbf{v} d\lambda)$
$= exp(-\int_\mathbf{x}^\mathbf{y} \mathbf{\Gamma}_\mathbf{v} d\lambda)^{-1}$
$= \mathbf{P}^{-1}(\mathbf{x},\mathbf{y})$

Now the above transformation from $e_\nu(\mathbf{x}_L^\mu)$ to $e_\nu(\mathbf{x}^\mu_R)$ would look like:
$\mathbf{e}(\mathbf{x}_R^\mu) = \mathbf{e}(\mathbf{x}_L^\mu) \cdot (\mathbf{P}^{-1} (\mathbf{x}_L^\mu, \mathbf{x}_R^\mu)) = \mathbf{e}(\mathbf{x}_L^\mu) \cdot (\mathbf{P}(\mathbf{x}_R^\mu, \mathbf{x}_L^\mu))$

Now the above transformation from $e_\nu(\mathbf{x})$ to $e_\nu(\mathbf{y})$, going component-by-component along the corners of a hypercube, would look like:
$\mathbf{e}(\mathbf{y}) = \mathbf{e}(\mathbf{x}) \cdot \mathbf{P}_1^{-1}((x^1, x^2, ..., x^n), (y^1, x^2, ..., x^n)) \cdot ... \cdot \mathbf{P}_k^{-1}((y^1, ..., y^{k-1}, x^k, x^{k+1}, ..., x^n), (y^1, ..., y^{k-1}, y^k, x^{k+1}, ..., x^n)) \cdot ... \cdot \mathbf{P}_n^{-1}((y^1, ..., y^{n-1}, x^n), (y^1, ..., y^{n-1}, y^n))$
Where we have slowly replaced one component of our coordinate at a time until we arrived at the destination of the curve.

That can just be written as $\mathbf{e}(\mathbf{x}) \underset{k=1}{\overset{n}{\Pi}} \mathbf{P}_k^{-1}(\mathbf{x}^k, \mathbf{y}^k)$ if we don't mind hiding the chart coordinates, which seem to be an important detail.

This path happens to be one possible path that I have chosen. This matches with Carroll's notes, which then go into more detail on the choice of which order to evaluate the parallel propagation along each coordinate. This is known as "path ordering", and Carroll talks more about it. A bit further on I will go into generalizing this to arbitrary curves on the manifold surface.


Alright, what if you do want to just cut across the diagonal of the coordinate space hypercube instead of following the edges around the outside? I'm guessing that would look like:

$e_\nu(\mathbf{y}) = e_\alpha(\mathbf{x}) \cdot exp\left( -\int_{\lambda = 0}^{\lambda = 1} {\Gamma^\alpha}_{\mu\nu}(\mathbf{x} + \lambda \mathbf{v}) v^\mu d\lambda \right)^{-1}$

...where $\mathbf{v} = \mathbf{y} - \mathbf{x}$. I'll call this the generalized linear case. If you want to propagate along one certain coordinate $\mu$ a distance of $l$, just set $v^\nu = l \cdot \delta^\nu_\mu$.

This definition of the generalized linear parallel propagator would be especially useful in the 3D rotation group, where the parallel propagator along a line corresponds to a rotation about the axis of the vector between line coordinates.
Propagating along the x coordinate then the y coordinate vs y-then-x will give you the commutation of rotating along the x-axis then y-axis vs y-axis then x-axis -- both represent different rotations, and both are different to rotating directly along the (1,1,0) axis.

Mind you, changing the chart coordinates in a linear fashion like this does not produce a geodesic transport. In order to do that, you would (once again) need to make use of the connection. A double integral of some function of the connection? After all, the geodesic equation is a 2nd derivative of the position. Think on this one more later. The shorthand representation of our generalized linear parallel transport could be something like: $\mathbf{e}(\mathbf{y}) = \mathbf{e}(\mathbf{x}) \mathbf{P}_\mathbf{v}^{-1}(\mathbf{x},\mathbf{y})$

The notation for the generalized linear parallel transport looks just like the notation for the transport around the edges of a hypercube above, even though these represent two different transformations. It looks like I need to specify in the notation a way to discern the path of the transport as well. Just write $\mathbf{P}_C(s,t)$ and specify that C is a curve, and whether the curve is a geodesic, or a line in coordinate chart space, or a piecewise edge traversal from one corner of a hypercube to the other (as I first stated). I'm not sure whether s and t should be parameters of that curve (which wouldn't convey much information unless the curve was along a coordinate line), or end-points of the curve (which would translate to some degree with our definition of coordinate-line parallel propagators, however we would be interchanging points with scalars). So there is your solution to the geodesic case: just calculate the curve, and then substitute that curve into the coordinates at which the connection is evaluated.

Generalizing the curve definition of parallel propagators further, we can write them as:

$\mathbf{e}(\mathbf{y}) = \mathbf{e}(\mathbf{x}) \mathbf{P}_C^{-1} (\mathbf{x}, \mathbf{y})$

for, our parallel propagation along a curve defined as:

${P_C(\mathbf{x}, \mathbf{y})^\alpha}_\nu = exp\left( -\int_{\lambda = 0}^{\lambda = 1} {\Gamma^\alpha}_{\mu\nu}(\vec{C}(\lambda)) \dot{C}^\mu(\lambda) d\lambda \right)$
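As a numeric sanity check of this curve definition, here is a sketch (Python with numpy/scipy, using the flat-space polar connection/basis from the examples further down and a straight line in chart space as the curve $C$; `propagator_inv` and the other helper names are just for the sketch). The path-ordered exponential is approximated by a product of small matrix exponentials, earliest step leftmost so it right-multiplies the basis at $\mathbf{x}$.

```python
# Sketch: P_C^{-1}(x,y) as a path-ordered product of small matrix exponentials along
# the chart-space line C(lam) = x + lam (y - x), using flat-space polar coordinates.
import numpy as np
from scipy.linalg import expm

def Gamma(x):
    # Gamma[mu, alpha, beta] = Gamma^alpha_{mu beta} for polar coordinates, mu in (r, phi)
    r, _ = x
    return np.array([[[0.0, 0.0], [0.0, 1.0 / r]],
                     [[0.0,  -r], [1.0 / r, 0.0]]])

def basis(x):
    r, phi = x
    return np.array([[np.cos(phi), -r * np.sin(phi)],
                     [np.sin(phi),  r * np.cos(phi)]])

def propagator_inv(x, y, steps=4000):
    x, y = np.asarray(x, float), np.asarray(y, float)
    dv = (y - x) / steps
    P_inv = np.eye(2)
    for k in range(steps):
        c = x + (k + 0.5) * dv                                      # midpoint of this segment
        P_inv = P_inv @ expm(np.einsum('m,mab->ab', dv, Gamma(c)))  # earliest step leftmost
    return P_inv

x, y = (1.0, 0.2), (2.5, 1.3)
print(np.abs(basis(x) @ propagator_inv(x, y) - basis(y)).max())     # small; -> 0 as steps grows
```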

From here we can do a trick with the definition of connection coefficients. Let ${{\underrightarrow{\omega}}^\alpha}_\nu = {\Gamma^\alpha}_{\mu\nu} e^\mu$ define our connection one-forms. And since one-forms are functions that map vectors to reals, we can rewrite our definition to look like:

${\mathbf{P}_C(\mathbf{x},\mathbf{y})^\alpha}_\nu = exp\left( -\int_{\lambda = 0}^{\lambda = 1} {{\underrightarrow{\omega}}^\alpha}_{\nu} (\vec{C}(\lambda)) (\dot{\vec{C}}(\lambda)) d\lambda \right)$

Now to dispel ambiguities of this representation: The indexes $\alpha, \nu$ do not factor out of the integral -- instead this is a matrix integral. The first set of parentheses after the one-form ${{\underrightarrow{\omega}}^\alpha}_\nu$ denotes where the one-form is evaluated. I guess I could use ${\omega^\alpha}_\nu|_{\vec{C}(\lambda)}$ or, even more ambiguously, just $\mathbf{\omega}|_\lambda$ or $\mathbf{\omega}|_\vec{C}$. The second set of parentheses is the vector argument of the one-form function ${{\underrightarrow{\omega}}^\alpha}_\nu$.

I could make this look worse (or better, depending on your opinion) by writing it as:

$\mathbf{P}_\vec{C}(\mathbf{x},\mathbf{y}) = exp\left( -\int_{\lambda = 0}^{\lambda = 1} {\underrightarrow{\omega}}|_{\vec{C}(\lambda)} (\dot{\vec{C}}(\lambda)) d\lambda \right)$
$\mathbf{P}_\vec{C}(\mathbf{x},\mathbf{y}) = exp\left( -\int_{\lambda = 0}^{\lambda = 1} \omega|_\vec{C} (d\vec{C}) \right)$
$\mathbf{P} = exp\left( -\int \omega (dC) \right)$
etc...



So now on to vector/tensor components. I'll start with vector, but you can extrapolate if you want.
$\mathbf{v} = v^\mu e_\mu$

In matrix form:
$\mathbf{v} = \left[\begin{matrix} e_1 & | & ... & | & e_n \end{matrix}\right] \left[\begin{matrix} v^1 \\ --- \\ \vdots \\ --- \\ v^n \end{matrix}\right]$

As a matrix, in our fixed background Cartesian components (since they are independent of our choice of $\mathbf{p}$):
$\mathbf{v} = v^\mu {e_\mu}^I e_I = v^I e_I$

In matrix form:
$v = \left[\begin{matrix} e_\hat{1} & | & ... & | & e_\hat{n} \end{matrix}\right] \left[\begin{matrix} v^\hat{1} \\ --- \\ \vdots \\ --- \\ v^\hat{n} \end{matrix}\right] = \left[\begin{matrix} e_\hat{1} & | & ... & | & e_\hat{n} \end{matrix}\right] \left[\begin{matrix} {e_1}^\hat{1} & ... & {e_n}^\hat{1} \\ \vdots & & \vdots \\ {e_1}^\hat{n} & ... & {e_n}^\hat{n} \end{matrix}\right] \left[\begin{matrix} v^1 \\ --- \\ \vdots \\ --- \\ v^n \end{matrix}\right]$

So our background basis components of our vector field are:

$v^I = {e_\mu}^I v^\mu$

$\left[\begin{matrix} v^\hat{1} \\ --- \\ \vdots \\ --- \\ v^\hat{n} \end{matrix}\right] = \left[\begin{matrix} {e_1}^\hat{1} & ... & {e_n}^\hat{1} \\ \vdots & & \vdots \\ {e_1}^\hat{n} & ... & {e_n}^\hat{n} \end{matrix}\right] \left[\begin{matrix} v^1 \\ --- \\ \vdots \\ --- \\ v^n \end{matrix}\right]$

So if we have a vector with components relative to the basis at one point $\mathbf{x}$, how do we find the same vector components relative to a basis at $\mathbf{y}$?
$\mathbf{v}(\mathbf{x}) = \mathbf{v}'(\mathbf{y})$

I'll take advantage of the fact that our Cartesian components are fixed in order to calculate this:
$v^I(\mathbf{x}) = v'^I(\mathbf{y})$

In matrix form:

$\left[\begin{matrix} v^\hat{1} \\ --- \\ \vdots \\ --- \\ v^\hat{n} \end{matrix}\right]_{(\mathbf{x})} = \left[\begin{matrix} v'^\hat{1} \\ --- \\ \vdots \\ --- \\ v'^\hat{n} \end{matrix}\right]_{(\mathbf{y})}$

Notice I've equated my hatted quantities, because these are with respect to our fixed background basis $e_I$.
Now to expand it in terms of the coordinate basis components:

${e_\mu}^I(\mathbf{x}) v^\mu(\mathbf{x}) = {e_\mu}^I(\mathbf{y}) v'^\mu(\mathbf{y})$

$\left[\begin{matrix} {e_1}^\hat{1} & ... & {e_n}^\hat{1} \\ \vdots & & \vdots \\ {e_1}^\hat{n} & ... & {e_n}^\hat{n} \end{matrix}\right]_{(\mathbf{x})} \left[\begin{matrix} v^1 \\ --- \\ \vdots \\ --- \\ v^n \end{matrix}\right] = \left[\begin{matrix} {e_1}^\hat{1} & ... & {e_n}^\hat{1} \\ \vdots & & \vdots \\ {e_1}^\hat{n} & ... & {e_n}^\hat{n} \end{matrix}\right]_{(\mathbf{y})} \left[\begin{matrix} v'^1 \\ --- \\ \vdots \\ --- \\ v'^n \end{matrix}\right]$
Solve for $v'^\mu$:
${e^\nu}_I(\mathbf{y}) {e_\mu}^I(\mathbf{x}) v^\mu(\mathbf{x}) = {e^\nu}_I(\mathbf{y}) {e_\mu}^I(\mathbf{y}) v'^\mu(\mathbf{y})$
${e^\nu}_I(\mathbf{y}) {e_\mu}^I(\mathbf{x}) v^\mu(\mathbf{x}) = \delta^\nu_\mu v'^\mu(\mathbf{y})$
${e^\nu}_I(\mathbf{y}) {e_\mu}^I(\mathbf{x}) v^\mu(\mathbf{x}) = v'^\nu(\mathbf{y})$

i.e.:
$\mathbf{e}^{-1}(\mathbf{y}) \cdot \mathbf{e}(\mathbf{x}) \cdot \mathbf{v}(\mathbf{x}) = \mathbf{v}'(\mathbf{y})$
$(\mathbf{e}(\mathbf{y}))^{-1} \cdot \mathbf{e}(\mathbf{x}) \cdot \mathbf{v}(\mathbf{x}) = \mathbf{v}'(\mathbf{y})$

i.e.:
$ \left[\begin{matrix} v'^1 \\ --- \\ \vdots \\ --- \\ v'^n \end{matrix}\right] = \left( \left[\begin{matrix} {e_1}^\hat{1} & ... & {e_n}^\hat{1} \\ \vdots & & \vdots \\ {e_1}^\hat{n} & ... & {e_n}^\hat{n} \end{matrix}\right]_{(\mathbf{y})} \right)^{-1} \left[\begin{matrix} {e_1}^\hat{1} & ... & {e_n}^\hat{1} \\ \vdots & & \vdots \\ {e_1}^\hat{n} & ... & {e_n}^\hat{n} \end{matrix}\right]_{(\mathbf{x})} \left[\begin{matrix} v^1 \\ --- \\ \vdots \\ --- \\ v^n \end{matrix}\right]$

That looks fine, except we are still using ${e_\mu}^I$, which has components in our Cartesian basis.
How do we represent everything only in our curvilinear chart coordinates?

Let's represent our $\mathbf{e}(\mathbf{y})$ as a parallel transport of the basis from $\mathbf{e}(\mathbf{x})$ to $\mathbf{e}(\mathbf{y})$.
$\mathbf{e}(\mathbf{y}) = \mathbf{e}(\mathbf{x}) \cdot \mathbf{P}^{-1}(\mathbf{x},\mathbf{y})$
Substitute to find:
$(\mathbf{e}(\mathbf{x}) \cdot \mathbf{P}^{-1}(\mathbf{x},\mathbf{y}))^{-1} \cdot \mathbf{e}(\mathbf{x}) \cdot \mathbf{v}(\mathbf{x}) = \mathbf{v}'(\mathbf{y})$
$\mathbf{P}(\mathbf{x},\mathbf{y}) \cdot \mathbf{e}^{-1}(\mathbf{x}) \cdot \mathbf{e}(\mathbf{x}) \cdot \mathbf{v}(\mathbf{x}) = \mathbf{v}'(\mathbf{y})$
$\mathbf{P}(\mathbf{x},\mathbf{y}) \cdot \mathbf{v}(\mathbf{x}) = \mathbf{v}'(\mathbf{y})$

Tada! Now we have a representation of the parallel transport that depends only on connections, not on the Cartesian basis. I told you that putting that minus sign would make sense. So our 'forward' definition of the parallel propagator will propagate vector components, while its 'inverse' definition propagates basis vectors. Just like how the 'forward' definition acts on contravariant indexes and the 'inverse' definition acts on covariant indexes ... and the basis vectors are denoted with covariant indexes.
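A tiny numeric illustration of this bookkeeping (a sketch assuming only a flat background: two arbitrary invertible matrices stand in for $\mathbf{e}(\mathbf{x})$ and $\mathbf{e}(\mathbf{y})$, so that $\mathbf{P}(\mathbf{x},\mathbf{y}) = \mathbf{e}^{-1}(\mathbf{y}) \cdot \mathbf{e}(\mathbf{x})$ per the derivation above):

```python
# Sketch (flat background): random invertible matrices stand in for e(x) and e(y);
# P(x,y) = e^{-1}(y) e(x) carries components at x to components at y while the
# underlying Cartesian vector stays fixed.
import numpy as np

rng = np.random.default_rng(0)
E_x = rng.normal(size=(3, 3))     # e_mu^I at x (columns = basis vectors)
E_y = rng.normal(size=(3, 3))     # e_mu^I at y
v_x = rng.normal(size=3)          # components v^mu at x

P_xy = np.linalg.inv(E_y) @ E_x   # forward propagator: acts on contravariant components
v_y = P_xy @ v_x                  # components of the same vector, relative to the basis at y

print(np.allclose(E_y @ v_y, E_x @ v_x))   # True: same Cartesian components v^I
```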

The interesting thing is that the components after parallel transport from $\mathbf{x}$ to $\mathbf{y}$ are $\mathbf{v}(\mathbf{x})$ left-multiplied by the transport from $\mathbf{y}$ to $\mathbf{x}$, which is the same as the inverse of the parallel-transport transform from $\mathbf{x}$ to $\mathbf{y}$. Inserting that inverse against the forward transport, and left-multiplying with $\mathbf{e}(\mathbf{x})$, lets us deduce that $\mathbf{v} = \mathbf{v}'$:
$\mathbf{v} = \mathbf{e}(\mathbf{x}) \cdot \mathbf{v}(\mathbf{x})$
$\mathbf{v} = \mathbf{e}(\mathbf{x}) \cdot \mathbf{P}^{-1}(\mathbf{x},\mathbf{y}) \cdot \mathbf{P}(\mathbf{x},\mathbf{y}) \cdot \mathbf{v}(\mathbf{x})$
$\mathbf{v} = \mathbf{e}(\mathbf{x}) \cdot \mathbf{P}^{-1}(\mathbf{x},\mathbf{y}) \cdot \mathbf{v}'(\mathbf{y})$
$\mathbf{v} = \mathbf{e}(\mathbf{y}) \cdot \mathbf{v}'(\mathbf{y})$
$\mathbf{v} = \mathbf{v}'$




How about one-forms?

Well usually one-form components are represented as row vectors, and the accompanying one-form dual basis can be seen as a right-multiply with a column-'vector':

$\mathbf{w} = w_\mu e^\mu$

In matrix form:
$\mathbf{w} = \left[\begin{matrix} w_1 & | & ... & | & w_n \end{matrix}\right] \left[\begin{matrix} e^1 \\ --- \\ \vdots \\ --- \\ e^n \end{matrix}\right]$

So what is the propagator of our $e^\mu$ basis? Well what is the PDE?

$\nabla_\mu e^\alpha = -{\Gamma^\alpha}_{\mu\nu} e^\nu$

I could maintain the contravariant-index-is-column-vector, covariant-index-is-row-vector standard, rewrite our linear dynamic system as a left multiplication on the basis (instead of a right), and come up with a parallel propagator matrix which is a left-multiply on the basis and a right-multiply on the one-form components... Maybe I will later. For now I will just say 'transpose it all': use the previous linear dynamic system, except with the $e^\mu$'s taking the place of the $e_\mu$'s. You'll see that, to represent the covariant derivative of the one-form dual basis, you need to replace the matrix of connections with its negative transpose. Then we integrate and exponentiate. What does the negative inside the integral do? It becomes an inverse on the outside of the exponent. What does the transpose do? In the orthonormal basis case, where the connection matrix is antisymmetric and its exponent is orthogonal, the transpose gives us an inverse. In the orthonormal case those two modifications cancel, and we see that the propagator of a one-form is the same as the propagator of a vector - which makes sense, since the orthonormal metric is the identity matrix, so the one-form component values equal the vector component values. But take note that, in the non-orthonormal case, you can't use the contravariant propagator to propagate the covariant indexes. You must use its transpose inverse.

Let's walk through this, step-by-step.

Our covariant derivative of our one-form dual-basis:

$\nabla_\mu e^\nu(\mathbf{x}) = -{\Gamma^\nu}_{\mu\alpha}(\mathbf{x}) e^\alpha(\mathbf{x})$

Represented with respect to a fixed global Cartesian basis:

$\nabla_\mu {e^\nu}_I(\mathbf{x}) = -{\Gamma^\nu}_{\mu\alpha}(\mathbf{x}) {e^\alpha}_I(\mathbf{x})$

Written out per-component using our contravariant-is-column, covariant-is-row convention:

$\nabla_\mu \left[\begin{matrix} {e^1}_\hat{1} & ... & {e^1}_\hat{n} \\ \vdots & & \vdots \\ {e^n}_\hat{1} & ... & {e^n}_\hat{n} \end{matrix}\right] = \left[\begin{matrix} -{\Gamma^1}_{\mu 1} & ... & -{\Gamma^1}_{\mu n} \\ \vdots & & \vdots \\ -{\Gamma^n}_{\mu 1} & ... & -{\Gamma^n}_{\mu n} \end{matrix}\right] \left[\begin{matrix} {e^1}_\hat{1} & ... & {e^1}_\hat{n} \\ \vdots & & \vdots \\ {e^n}_\hat{1} & ... & {e^n}_\hat{n} \end{matrix}\right]$

Solve with an exponent:

${e^\nu}_I(\mathbf{x}^\mu_R) = exp( -\int_{x'^\mu=x^\mu_L}^{x'^\mu=x^\mu_R} {\Gamma^\nu}_{\mu\alpha}(\mathbf{x}') \mathbf{dx}' ) {e^\alpha}_I(\mathbf{x}^\mu_L)$

So our matrix is the same matrix as before, except it is negative'd. Also instead of a right multiply it is a left multiply (so now everyone can stop biting their nails over this). In fact, overall, this looks a lot more like the definition of a parallel propagator, and the traditional left-multiply definition of a linear dynamic system.

Notice that if we transpose everything then we match the previous equation for our vector basis, except that now the connection coefficients are negative'd - and therefore the exponent is inverted.

So the parallel propagator of a one-form is the transpose-inverse of the parallel propagator of a vector.

Then of course if you apply this to the one-form, like with the vectors, you must apply its inverse.

$\mathbf{e}^{-1}(\mathbf{y}) = \mathbf{P}(\mathbf{x},\mathbf{y}) \cdot \mathbf{e}^{-1}(\mathbf{x})$

This is just the inverse of our definition for the propagation of the vector basis.

Then we can assert our one-forms between different points $\mathbf{w} = \mathbf{w}'$ match:
$\mathbf{w} = \mathbf{w}'$
$\mathbf{w} = \mathbf{w}(\mathbf{x}) \cdot \mathbf{e}^{-1}(\mathbf{x})$
$\mathbf{w} = \mathbf{w}(\mathbf{x}) \cdot \mathbf{P}^{-1}(\mathbf{x},\mathbf{y}) \cdot \mathbf{P}(\mathbf{x},\mathbf{y}) \cdot \mathbf{e}^{-1}(\mathbf{x})$
$\mathbf{w} = \mathbf{w}(\mathbf{x}) \cdot \mathbf{P}^{-1}(\mathbf{x},\mathbf{y}) \cdot \mathbf{e}^{-1}(\mathbf{y})$
$\mathbf{w} = \mathbf{w}'(\mathbf{y}) \cdot \mathbf{e}^{-1}(\mathbf{y})$
$\mathbf{w} = \mathbf{w}'$

So here we see:

$\mathbf{w}(\mathbf{y}) = \mathbf{w}(\mathbf{x}) \cdot \mathbf{P}^{-1}(\mathbf{x},\mathbf{y})$

which is the same as

$\mathbf{w}(\mathbf{y}) = \mathbf{w}(\mathbf{x}) \cdot \mathbf{P}(\mathbf{y},\mathbf{x})$

So to parallel-propagate one forms, we right-multiply with the inverse of the propagator - just as when we propagated vectors we left-multiplied the forward parallel propagator. Of course if we want to view this as a left-multiply - so its form matches the parallel propagator of the vector - then we, once again, transpose our equation.

$\mathbf{w}^T(\mathbf{y}) = \mathbf{P}^{-T}(\mathbf{x},\mathbf{y}) \cdot \mathbf{w}^T(\mathbf{x})$
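The covariant side can be checked the same way (sketch, same flat-background toy setup as before): one-form components propagated with $\mathbf{P}^{-1}$ on the right keep their Cartesian components, and their contraction with any propagated vector is unchanged.

```python
# Sketch (flat background): covariant components ride along with P^{-1} on the right.
import numpy as np

rng = np.random.default_rng(1)
E_x, E_y = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))   # e_mu^I at x and at y
P_xy = np.linalg.inv(E_y) @ E_x

v_x = rng.normal(size=3)          # v^mu at x
w_x = rng.normal(size=3)          # w_mu at x (a row in this worksheet's convention)

v_y = P_xy @ v_x                  # contravariant: left-multiply by P(x,y)
w_y = w_x @ np.linalg.inv(P_xy)   # covariant: right-multiply by P^{-1}(x,y)

print(np.allclose(w_y @ v_y, w_x @ v_x))                                  # True: w_mu v^mu preserved
print(np.allclose(w_y @ np.linalg.inv(E_y), w_x @ np.linalg.inv(E_x)))    # True: same Cartesian w_I
```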



How about parallel-propagating a (p, q) tensor?

$\mathbf{A}(\mathbf{y})$
$= {A^{\mu_1 ... \mu_p}}_{\nu_1 ... \nu_q}(\mathbf{y}) \underset{k=1}{\overset{p}{\otimes}} e_{\mu_k}(\mathbf{y}) \underset{k=1}{\overset{q}{\otimes}} e^{\nu_k}(\mathbf{y}) $
$= {P(\mathbf{x},\mathbf{y})^{\mu_1}}_{\alpha_1} \cdot ... \cdot {P(\mathbf{x},\mathbf{y})^{\mu_p}}_{\alpha_p} \cdot {P(\mathbf{y},\mathbf{x})^{\beta_1}}_{\nu_1} \cdot ... \cdot {P(\mathbf{y},\mathbf{x})^{\beta_q}}_{\nu_q} \cdot {A^{\alpha_1 ... \alpha_p}}_{\beta_1 ... \beta_q}(\mathbf{x}) \underset{k=1}{\overset{p}{\otimes}} e_{\mu_k}(\mathbf{y}) \underset{k=1}{\overset{q}{\otimes}} e^{\nu_k}(\mathbf{y}) $

Take note that, true to the definition of one-form / covariant-index propagation, the x and y must be reversed since the parallel propagation matrix must be transposed.

Let's make our notation a bit more concise. I am going to generalize the propagator indexes:

${P(\mathbf{x}, \mathbf{y})^{\mu_1 ... \mu_m}}_{\alpha_1 ... \alpha_m} = P(\mathbf{x}, \mathbf{y}) {{}^{\mu_1}}_{\alpha_1} ... P(\mathbf{x}, \mathbf{y}) {{}^{\mu_m}}_{\alpha_m} $

...where each k and k+m of the 2m indexes match up, so you can write out the destination indexes and then the source indexes next to the parallel propagator of the tensor you are propagating, just as on a single parallel propagator the destination index is first and then the source index is second.

Maybe I could do one better and simply define it as a giant tensor operator, like the projection operator or the covariant derivative operator:

$\mathbf{A}(\mathbf{y})$
$= \mathbf{P}(\mathbf{x}, \mathbf{y}, \mathbf{A}(\mathbf{x}))$
$= \mathbf{P}(\mathbf{x}, \mathbf{y}, {A^{\alpha_1 ... \alpha_p}}_{\beta_1 ... \beta_q}(\mathbf{x}) \underset{k=1}{\overset{p}{\otimes}} e_{\alpha_k}(\mathbf{x}) \underset{k=1}{\overset{q}{\otimes}} e^{\beta_k}(\mathbf{x}) )$
$= {P(\mathbf{x}, \mathbf{y}, \mathbf{A})^{\mu_1 ... \mu_p}}_{\nu_1 ... \nu_q} \underset{k=1}{\overset{p}{\otimes}} e_{\mu_k}(\mathbf{y}) \underset{k=1}{\overset{q}{\otimes}} e^{\nu_k}(\mathbf{y}) $
$= {P(\mathbf{x}, \mathbf{y})^{\mu_1}}_{\alpha_1} \cdot ... \cdot {P(\mathbf{x}, \mathbf{y})^{\mu_p}}_{\alpha_p} \cdot {P(\mathbf{y}, \mathbf{x})^{\beta_1}}_{\nu_1} \cdot ... \cdot {P(\mathbf{y}, \mathbf{x})^{\beta_q}}_{\nu_q} \cdot {A^{\alpha_1 ... \alpha_p}}_{\beta_1 ... \beta_q}(\mathbf{x}) \underset{k=1}{\overset{p}{\otimes}} e_{\mu_k}(\mathbf{y}) \underset{k=1}{\overset{q}{\otimes}} e^{\nu_k}(\mathbf{y}) $

Once again notice that the parallel propagated contravariant indexes are replaced with a multiply of $\mathbf{P}(\mathbf{x},\mathbf{y})$, while the parallel propagated covariant indexes are propagated with $\mathbf{P}(\mathbf{y},\mathbf{x})$




Now for a specific example: Polar coordinates:

chart:
$u^I = \left[\begin{matrix} r cos\phi \\ r sin\phi \end{matrix}\right]$

${e_r}^I = {u^I}_{,r} = \left[\begin{matrix} {e_r}^x \\ {e_r}^y \end{matrix}\right] = \left[\begin{matrix} cos\phi \\ sin\phi \end{matrix}\right]$

${e_\phi}^I = {u^I}_{,\phi} = \left[\begin{matrix} {e_\phi}^x \\ {e_\phi}^y \end{matrix}\right] = \left[\begin{matrix} -r sin\phi \\ r cos\phi \end{matrix}\right]$

${e_\mu}^I = {u^I}_{,\mu} = \left[\begin{matrix} {e_r}^x & {e_\phi}^x \\ {e_r}^y & {e_\phi}^y \end{matrix}\right] = \left[\begin{matrix} cos\phi & -r sin\phi \\ sin\phi & r cos\phi \end{matrix}\right]$

It might be later convenient to think of this as a product of two linear operations:

${e_\mu}^I = \left[\begin{matrix} cos\phi & -sin\phi \\ sin\phi & cos\phi \end{matrix}\right] \left[\begin{matrix} 1 & 0 \\ 0 & r \end{matrix}\right] = \mathbf{R}(\phi) \cdot \mathbf{S}(1,r)$

...where $\mathbf{S}(a,b) = \left[\begin{matrix} a & 0 \\ 0 & b \end{matrix}\right]$ is a scale matrix, and $\mathbf{R}(\phi)$ is defined as whatever above is left, which happens to look like a rotation matrix.


Now for the connections:

${\Gamma^r}_{\phi\phi} = -r$
${\Gamma^\phi}_{r\phi} = {\Gamma^\phi}_{\phi r} = \frac{1}{r}$

$\mathbf{\Gamma}_\mu = \left[\begin{matrix} {\Gamma^r}_{\mu r} & {\Gamma^r}_{\mu\phi} \\ {\Gamma^\phi}_{\mu r} & {\Gamma^\phi}_{\mu\phi} \end{matrix}\right]$

$\mathbf{\Gamma}_r = \left[\begin{matrix} {\Gamma^r}_{r r} & {\Gamma^r}_{r\phi} \\ {\Gamma^\phi}_{r r} & {\Gamma^\phi}_{r\phi} \end{matrix}\right] = \left[\begin{matrix} 0 & 0 \\ 0 & \frac{1}{r} \end{matrix}\right]$

$\mathbf{\Gamma}_\phi = \left[\begin{matrix} {\Gamma^r}_{\phi r} & {\Gamma^r}_{\phi\phi} \\ {\Gamma^\phi}_{\phi r} & {\Gamma^\phi}_{\phi\phi} \end{matrix}\right] = \left[\begin{matrix} 0 & -r \\ \frac{1}{r} & 0 \end{matrix}\right]$

And now for the exponentials of the integrals of the connections:

$\mathbf{P}^{-1}_r(r_L, r_R) = exp(\int_{r_L}^{r_R} \mathbf{\Gamma}_r dr)$
$\mathbf{P}^{-1}_r(r_L, r_R) = exp\left( \int_{r_L}^{r_R} \left[\begin{matrix} 0 & 0 \\ 0 & \frac{1}{r} \end{matrix}\right] dr \right)$
$\mathbf{P}^{-1}_r(r_L, r_R) = exp\left( \left[\begin{matrix} 0 & 0 \\ 0 & log(r) \end{matrix}\right]|_{r_L}^{r_R} \right)$
$\mathbf{P}^{-1}_r(r_L, r_R) = exp\left( \left[\begin{matrix} 0 & 0 \\ 0 & log(r_R) - log(r_L) \end{matrix}\right] \right)$
$\mathbf{P}^{-1}_r(r_L, r_R) = exp\left( \left[\begin{matrix} 0 & 0 \\ 0 & log(\frac{r_R}{r_L}) \end{matrix}\right] \right)$
Since we now have a diagonal matrix, we can assert that the exponent of the matrix is a matrix of the exponent of the diagonals:
$\mathbf{P}^{-1}_r(r_L, r_R) = \left[\begin{matrix} exp(0) & 0 \\ 0 & exp(log(\frac{r_R}{r_L})) \end{matrix}\right]$
$\mathbf{P}^{-1}_r(r_L, r_R) = \left[\begin{matrix} 1 & 0 \\ 0 & \frac{r_R}{r_L} \end{matrix}\right]$
$\mathbf{P}^{-1}_r(r_L, r_R) = \mathbf{S}(1, \frac{r_R}{r_L})$

So inverting this to find the forward propagator gives us:
$\mathbf{P}_r(r_L, r_R) = \mathbf{S}(1, \frac{r_L}{r_R})$
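Quick numeric confirmation of the radial propagator (sketch, numpy/scipy):

```python
# Sketch: exp of the integrated radial connection is S(1, r_R/r_L), and it carries the
# polar basis from radius r_L to radius r_R (phi held fixed).
import numpy as np
from scipy.linalg import expm

def basis(r, phi):
    return np.array([[np.cos(phi), -r * np.sin(phi)],
                     [np.sin(phi),  r * np.cos(phi)]])

r_L, r_R, phi = 1.5, 4.0, 0.7
P_r_inv = expm(np.array([[0.0, 0.0], [0.0, np.log(r_R / r_L)]]))   # exp(integral of Gamma_r dr)
print(np.allclose(P_r_inv, np.diag([1.0, r_R / r_L])))             # True: it is S(1, r_R/r_L)
print(np.allclose(basis(r_L, phi) @ P_r_inv, basis(r_R, phi)))     # True: basis lands at r_R
```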

$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = exp(\int_{\phi_L}^{\phi_R} \mathbf{\Gamma}_\phi d\phi)$
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = exp\left(\int_{\phi_L}^{\phi_R} \left[\begin{matrix} 0 & -r \\ \frac{1}{r} & 0 \end{matrix}\right] d\phi \right)$
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = exp\left( \left[\begin{matrix} 0 & -r (\phi_R - \phi_L) \\ \frac{1}{r} (\phi_R - \phi_L) & 0 \end{matrix}\right] \right)$
Now we eigen-decompose the matrix before applying the exponential to the eigenvalues:
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = exp\left( \left[\begin{matrix} i r & -i r \\ 1 & 1 \end{matrix}\right] \left[\begin{matrix} i (\phi_R - \phi_L) & 0 \\ 0 & -i (\phi_R - \phi_L) \end{matrix}\right] \left[\begin{matrix} -\frac{i}{2 r} & \frac{1}{2} \\ \frac{i}{2 r} & \frac{1}{2} \end{matrix}\right] \right)$
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = \left[\begin{matrix} i r & -i r \\ 1 & 1 \end{matrix}\right] exp\left( \left[\begin{matrix} i (\phi_R - \phi_L) & 0 \\ 0 & -i (\phi_R - \phi_L) \end{matrix}\right] \right) \left[\begin{matrix} -\frac{i}{2 r} & \frac{1}{2} \\ \frac{i}{2 r} & \frac{1}{2} \end{matrix}\right] $
Now we use the fact that the exponent of a diagonal matrix is the matrix of the exponent of the individual diagonal elements:
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = \left[\begin{matrix} i r & -i r \\ 1 & 1 \end{matrix}\right] \left[\begin{matrix} exp(i (\phi_R - \phi_L)) & 0 \\ 0 & exp(-i (\phi_R - \phi_L)) \end{matrix}\right] \left[\begin{matrix} -\frac{i}{2 r} & \frac{1}{2} \\ \frac{i}{2 r} & \frac{1}{2} \end{matrix}\right] $
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = \left[\begin{matrix} cos(\phi_R - \phi_L) & -r sin(\phi_R - \phi_L) \\ \frac{1}{r} sin(\phi_R - \phi_L) & cos(\phi_R - \phi_L) \end{matrix}\right]$
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = \left[\begin{matrix} 1 & 0 \\ 0 & \frac{1}{r} \end{matrix}\right] \left[\begin{matrix} cos(\phi_R - \phi_L) & -sin(\phi_R - \phi_L) \\ sin(\phi_R - \phi_L) & cos(\phi_R - \phi_L) \end{matrix}\right] \left[\begin{matrix} 1 & 0 \\ 0 & r \end{matrix}\right]$
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = \mathbf{S}(1,\frac{1}{r}) \mathbf{R}(\phi_R - \phi_L) \mathbf{S}(1, r)$

So inverting this to find the forward propagator gives us:
$\mathbf{P}_\phi(\phi_L, \phi_R) = \mathbf{S}(1,\frac{1}{r}) \mathbf{R}(\phi_L - \phi_R) \mathbf{S}(1, r)$
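And the same kind of numeric check for the angular propagator (sketch):

```python
# Sketch: exp of the integrated angular connection matches S(1,1/r) R(dphi) S(1,r).
import numpy as np
from scipy.linalg import expm

def R(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def S(a, b):
    return np.diag([a, b])

r, phi_L, phi_R = 2.0, 0.3, 1.4
dphi = phi_R - phi_L
Gamma_phi = np.array([[0.0, -r], [1.0 / r, 0.0]])

P_phi_inv = expm(Gamma_phi * dphi)
print(np.allclose(P_phi_inv, S(1, 1 / r) @ R(dphi) @ S(1, r)))                   # True
print(np.allclose(np.linalg.inv(P_phi_inv), S(1, 1 / r) @ R(-dphi) @ S(1, r)))   # True: P_phi
```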


It just so happens that $\mathbf{P}^{-1}(r_L, r_R) \cdot \mathbf{P}^{-1}(\phi_L, \phi_R) = \mathbf{P}^{-1}(\phi_L, \phi_R) \cdot \mathbf{P}^{-1}(r_L, r_R)$. Maybe because ${R^\alpha}_{\beta\mu\nu} = 0$, but that is still just speculation.
And applying the parallel transport in different orderings gives us:

$\mathbf{P}^{-1}(r_L, r_R) \cdot \mathbf{P}^{-1}(\phi_L, \phi_R)$
$= \mathbf{S}(1, \frac{r_R}{r_L}) \mathbf{S}(1,\frac{1}{r_R}) \mathbf{R}(\phi_R - \phi_L) \mathbf{S}(1, r_R)$
$= \mathbf{S}(1,\frac{1}{r_L}) \mathbf{R}(\phi_R - \phi_L) \mathbf{S}(1, r_R)$

$\mathbf{P}^{-1}(\phi_L, \phi_R) \cdot \mathbf{P}^{-1}(r_L, r_R)$
$= \mathbf{S}(1,\frac{1}{r_L}) \mathbf{R}(\phi_R - \phi_L) \mathbf{S}(1, r_L) \mathbf{S}(1, \frac{r_R}{r_L})$
$= \mathbf{S}(1,\frac{1}{r_L}) \mathbf{R}(\phi_R - \phi_L) \mathbf{S}(1, r_R)$

So in both cases we end up with the general linear parallel transport from coordinate chart domain point $\mathbf{x}_L$ to $\mathbf{x}_R$ as:

$\mathbf{P}^{-1}(\mathbf{x}_L, \mathbf{x}_R) = \mathbf{S}(1, \frac{1}{r_L}) \cdot \mathbf{R}(\phi_R - \phi_L) \cdot \mathbf{S}(1, r_R) = \left[\begin{matrix} cos(\phi_R - \phi_L) & -r_R sin(\phi_R - \phi_L) \\ \frac{1}{r_L} sin(\phi_R - \phi_L) & \frac{r_R}{r_L} cos(\phi_R - \phi_L) \end{matrix}\right]$


What happens when we parallel transport our basis from $x_L$ to $x_R$?

$e(\mathbf{x}_L) \cdot \mathbf{P}^{-1}(\mathbf{x}_L, x_R) = \left( \mathbf{R}(\phi_L) \cdot \mathbf{S}(1, r_L) \right) \cdot \left( \mathbf{S}(1, \frac{1}{r_L}) \cdot \mathbf{R}(\phi_R - \phi_L) \cdot \mathbf{S}(1, r_R) \right)$
$= \mathbf{R}(\phi_L) \cdot \mathbf{R}(\phi_R - \phi_L) \cdot \mathbf{S}(1, r_R)$
And since our rotation axes are aligned, we can add our rotation angles together:
$= \mathbf{R}(\phi_R) \cdot \mathbf{S}(1, r_R)$
$= e(\mathbf{x}_R)$

Tada! We started with $e(\mathbf{x}_L)$, we right-applied our parallel-transport transformation, and we ended up at $e(\mathbf{x}_R)$.
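The whole chain is easy to confirm numerically (sketch):

```python
# Sketch: e(x_L) right-multiplied by the composed inverse propagator lands on e(x_R).
import numpy as np

def R(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def S(a, b):
    return np.diag([a, b])

def basis(r, phi):
    return R(phi) @ S(1, r)                                 # e_mu^I = R(phi) S(1, r)

r_L, phi_L, r_R, phi_R = 1.5, 0.3, 4.0, 1.4
P_inv = S(1, 1 / r_L) @ R(phi_R - phi_L) @ S(1, r_R)        # composed inverse propagator
print(np.allclose(basis(r_L, phi_L) @ P_inv, basis(r_R, phi_R)))   # True
```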

TODO try it with the geodesic case ... but don't forget, unless the propagator has commutation then the geodesic path will be no different than any other path.

That was in the perspective of propagating the basis to the new point, but what about parallel-propagating the vector components to align with the new point? Now do the same thing, except give it some vector components. Let's get a better description of what the curvilinear vector components are in terms of the Cartesian vector components:
$e_I v^I = e_\mu v^\mu$
$\left[\begin{matrix} e_x & e_y \end{matrix}\right] \left[\begin{matrix} v^x \\ v^y \end{matrix}\right] = \left[\begin{matrix} e_r & e_\phi \end{matrix}\right] \left[\begin{matrix} v^r \\ v^\phi \end{matrix}\right]$
$\left[\begin{matrix} e_x & e_y \end{matrix}\right] \left[\begin{matrix} v^x \\ v^y \end{matrix}\right] = \left[\begin{matrix} e_x & e_y \end{matrix}\right] \left[\begin{matrix} cos(\phi) & -r sin(\phi) \\ sin(\phi) & r cos(\phi) \end{matrix}\right] \left[\begin{matrix} v^r \\ v^\phi \end{matrix}\right]$
$\left[\begin{matrix} v^x \\ v^y \end{matrix}\right] = \left[\begin{matrix} cos(\phi) & -r sin(\phi) \\ sin(\phi) & r cos(\phi) \end{matrix}\right] \left[\begin{matrix} v^r \\ v^\phi \end{matrix}\right]$
$\left[\begin{matrix} cos(\phi) & sin(\phi) \\ -\frac{1}{r}sin(\phi) & \frac{1}{r} cos(\phi) \end{matrix}\right] \left[\begin{matrix} v^x \\ v^y \end{matrix}\right] = \left[\begin{matrix} v^r \\ v^\phi \end{matrix}\right]$

So there we see that $|v^\phi| \propto \frac{1}{r}$, probably due to the fact that it is paired with $e_\phi$, and $|e_\phi| \propto r$.

Now to look at the parallel propagation of vector components:

$e_\mu(\mathbf{x}_L) v^\mu(\mathbf{x}_L)$
$= e_\alpha (\mathbf{x}_R) \cdot {P(\mathbf{x}_L, x_R)^\alpha}_\mu v^\mu(\mathbf{x}_L)$
$= e (\mathbf{x}_R) \cdot ( \mathbf{S}(1,\frac{1}{r_R}) \mathbf{R}(\phi_L - \phi_R) \mathbf{S}(1, r_L) ) v(\mathbf{x}_L)$
$= e (\mathbf{x}_R) v'(\mathbf{x}_R)$
So $v'(\mathbf{x}_R) = \mathbf{S}(1,\frac{1}{r_R}) \mathbf{R}(\phi_L - \phi_R) \mathbf{S}(1, r_L) v(\mathbf{x}_L)$

So what is going on here? The $|v^\phi(r_L)| \propto \frac{1}{r_L}$ factor is being removed, and a new $\frac{1}{r_R}$ factor is being introduced. Also an inverse rotation is being applied to the vector components, because a forward-rotation of the vector basis is going to coincide with an inverse-rotation of the components.

What does the vector component parallel propagator look like?

$\mathbf{P}(\mathbf{x}_L, x_R) = \mathbf{S}(1, \frac{1}{r_R}) \cdot \mathbf{R}(\phi_L - \phi_R) \cdot \mathbf{S}(1, r_L) = \left[\begin{matrix} cos(\phi_L - \phi_R) & -r_L sin(\phi_L - \phi_R) \\ \frac{1}{r_R} sin(\phi_L - \phi_R) & \frac{r_L}{r_R} cos(\phi_L - \phi_R) \end{matrix}\right] = \left[\begin{matrix} cos(\phi_R - \phi_L) & r_L sin(\phi_R - \phi_L) \\ -\frac{1}{r_R} sin(\phi_R - \phi_L) & \frac{r_L}{r_R} cos(\phi_R - \phi_L) \end{matrix}\right] $

What does the propagated vector look like?

$v(\mathbf{x}_R) = \mathbf{P}(\mathbf{x}_L, x_R) v(\mathbf{x}_L)$

$\left[\begin{matrix} v^r (\mathbf{x}_R) \\ v^\phi (\mathbf{x}_R) \end{matrix}\right] = \left[\begin{matrix} cos(\phi_R - \phi_L) & r_L sin(\phi_R - \phi_L) \\ -\frac{1}{r_R} sin(\phi_R - \phi_L) & \frac{r_L}{r_R} cos(\phi_R - \phi_L) \end{matrix}\right] \left[\begin{matrix} v^r (\mathbf{x}_L) \\ v^\phi (\mathbf{x}_L) \end{matrix}\right]$
$\left[\begin{matrix} v^r (\mathbf{x}_R) \\ v^\phi (\mathbf{x}_R) \end{matrix}\right] = \left[\begin{matrix} cos(\phi_R - \phi_L) v^r(\mathbf{x}_L) + r_L sin(\phi_R - \phi_L) v^\phi(\mathbf{x}_L) \\ -\frac{1}{r_R} sin(\phi_R - \phi_L) v^r(\mathbf{x}_L) + \frac{r_L}{r_R} cos(\phi_R - \phi_L) v^\phi(\mathbf{x}_L) \end{matrix}\right]$
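A numeric check that the propagated components really do describe the same Cartesian vector (sketch):

```python
# Sketch: P(x_L, x_R) applied to v's components at x_L yields components at x_R that
# assemble (with the basis at x_R) into the same Cartesian vector.
import numpy as np

def R(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def S(a, b):
    return np.diag([a, b])

def basis(r, phi):
    return R(phi) @ S(1, r)

r_L, phi_L, r_R, phi_R = 1.5, 0.3, 4.0, 1.4
v_L = np.array([0.7, -0.2])                               # (v^r, v^phi) at x_L
P = S(1, 1 / r_R) @ R(phi_L - phi_R) @ S(1, r_L)          # forward propagator
v_R = P @ v_L
print(np.allclose(basis(r_R, phi_R) @ v_R, basis(r_L, phi_L) @ v_L))   # True: same v^I
```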

How about one-forms? Here we use the inverse propagator. If you represent your one-form components as a column vector with the propagator as a left-multiply then you must transpose it as well - but I'll ignore that and stick with my contravariant-column, covariant-row standard.

$w(\mathbf{x}_R) = w(\mathbf{x}_L) \mathbf{P}^{-1}(\mathbf{x}_L, x_R)$

$\left[\begin{matrix} w_r (\mathbf{x}_R) & w_\phi (\mathbf{x}_R) \end{matrix}\right] = \left[\begin{matrix} w_r (\mathbf{x}_L) & w_\phi (\mathbf{x}_L) \end{matrix}\right] \left[\begin{matrix} cos(\phi_R - \phi_L) & -r_R sin(\phi_R - \phi_L) \\ \frac{1}{r_L} sin(\phi_R - \phi_L) & \frac{r_R}{r_L} cos(\phi_R - \phi_L) \end{matrix}\right] $
$\left[\begin{matrix} w_r (\mathbf{x}_R) & w_\phi (\mathbf{x}_R) \end{matrix}\right] = \left[\begin{matrix} \left( cos(\phi_R - \phi_L) w_r (\mathbf{x}_L) + \frac{1}{r_L} sin(\phi_R - \phi_L) w_\phi (\mathbf{x}_L) \right) & \left( -r_R sin(\phi_R - \phi_L) w_r (\mathbf{x}_L) + \frac{r_R}{r_L} cos(\phi_R - \phi_L) w_\phi (\mathbf{x}_L) \right) \end{matrix}\right] $

Looks similar to the vector transform. We see that $|w_\phi| \propto r$ just as $|d\phi| \propto \frac{1}{r}$, so we are scaling out the old $r_L$ value and scaling in the new $r_R$ value. Otherwise the rotation of the components $\{w_r, w_\phi\}$ is equivalent to the rotation of the components $\{v^r, v^\phi\}$, courtesy of the one-form parallel propagator being the inverse-transpose of the vector parallel propagator.
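And the analogous numeric check for one-form components (sketch): the row of components times $\mathbf{P}^{-1}$ keeps both the contraction $w_\mu v^\mu$ and the Cartesian components $w_I$ fixed.

```python
# Sketch: w_mu propagated with P^{-1} on the right stays the same Cartesian covector,
# and its contraction with any propagated vector is unchanged.
import numpy as np

def R(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def S(a, b):
    return np.diag([a, b])

def basis(r, phi):
    return R(phi) @ S(1, r)

r_L, phi_L, r_R, phi_R = 1.5, 0.3, 4.0, 1.4
P     = S(1, 1 / r_R) @ R(phi_L - phi_R) @ S(1, r_L)      # forward propagator
P_inv = S(1, 1 / r_L) @ R(phi_R - phi_L) @ S(1, r_R)      # its inverse

w_L = np.array([0.4, 1.3])        # (w_r, w_phi) at x_L, as a row
v_L = np.array([0.7, -0.2])       # (v^r, v^phi) at x_L

w_R, v_R = w_L @ P_inv, P @ v_L
print(np.allclose(w_R @ v_R, w_L @ v_L))                                        # True
print(np.allclose(w_R @ np.linalg.inv(basis(r_R, phi_R)),
                  w_L @ np.linalg.inv(basis(r_L, phi_L))))                      # True: same w_I
```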

Alright, what about propagator commutation?
Is it even useful to look at commutation, as subtraction, or should I be looking at commutations of multiplication, i.e. $P_2^{-1}(d,a) \cdot P_1^{-1}(c,d) \cdot P_2(b,c) \cdot P_1(a,b)$?

Either way, for a flat manifold (vanishing Riemann curvature), both should be zero.

$[\mathbf{P}_r(r_L, r_R), \mathbf{P}_\phi(\phi_L, \phi_R)]$
$ = \mathbf{P}_r((r_L, \phi_R), (r_R, \phi_R)) \mathbf{P}_\phi((r_L, \phi_L), (r_L, \phi_R)) - \mathbf{P}_\phi((r_R, \phi_L), (r_R, \phi_R)) \mathbf{P}_r((r_L, \phi_L), (r_R, \phi_L)) $
$= \mathbf{S}(1, \frac{r_L}{r_R}) \cdot \mathbf{S}(1, \frac{1}{r_L}) \cdot \mathbf{R}(\phi_L - \phi_R) \cdot \mathbf{S}(1, r_L) - \mathbf{S}(1, \frac{1}{r_R}) \cdot \mathbf{R}(\phi_L - \phi_R) \cdot \mathbf{S}(1, r_R) \cdot \mathbf{S}(1, \frac{r_L}{r_R}) $
$ = \mathbf{S}(1, \frac{1}{r_R}) \cdot \mathbf{R}(\phi_L - \phi_R) \cdot \mathbf{S}(1, r_L) - \mathbf{S}(1, \frac{1}{r_R}) \cdot \mathbf{R}(\phi_L - \phi_R) \cdot \mathbf{S}(1, r_L) $
$= 0$



Polar, anholonomic orthonormal:

Now if we were going to do the same thing but with a normalized basis then all that really changes is the scale goes away. Let's see how.

$\mathbf{\Gamma}_r = \left[\begin{matrix} {\Gamma^r}_{r r} & {\Gamma^r}_{r\hat{\phi}} \\ {\Gamma^\hat{\phi}}_{r r} & {\Gamma^\hat{\phi}}_{r\hat{\phi}} \end{matrix}\right] = \left[\begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix}\right]$

$\mathbf{\Gamma}_\hat{\phi} = \left[\begin{matrix} {\Gamma^r}_{\hat{\phi} r} & {\Gamma^r}_{\hat{\phi}\hat{\phi}} \\ {\Gamma^\hat{\phi}}_{\hat{\phi} r} & {\Gamma^\hat{\phi}}_{\hat{\phi}\hat{\phi}} \end{matrix}\right] = \left[\begin{matrix} 0 & -\frac{1}{r} \\ \frac{1}{r} & 0 \end{matrix}\right]$

So our propagators become:

$\mathbf{P}_r(\mathbf{x},\mathbf{y}) = exp(-\int_{r_L}^{r_R} 0 dr) = exp(-\int_{r_L}^{r_R} \left[\begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix}\right] dr) = \left[\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right]$

$\mathbf{P}_\phi(\mathbf{x},\mathbf{y}) = exp(-\int_{\phi_L}^{\phi_R} \mathbf{\Gamma}_\hat{\phi} d\hat{\phi}) = exp(-\int_{\phi_L}^{\phi_R} \left[\begin{matrix} 0 & -\frac{1}{r} \\ \frac{1}{r} & 0 \end{matrix}\right] d\hat{\phi})$

Alright, now one interesting thing: the propagator acts as an integral along a coordinate, so it is defined in terms of the coordinate basis. So we need to convert the connection's directional index from the orthonormal index $\hat\phi$ back to the coordinate index $\phi$, using $e^\hat{\phi} = r e^\phi$ (equivalently ${e_\phi}^\hat{\phi} = r$); likewise the covariant components transform as $w_\hat{\phi} = \frac{1}{r} w_\phi$.

$\mathbf{P}_\phi(\mathbf{x},\mathbf{y}) = exp(-\int_{\phi_L}^{\phi_R} [{e_\phi}^\hat{\phi}] \mathbf{\Gamma}_\hat{\phi} d\phi) = exp(-\int_{\phi_L}^{\phi_R} \left[\begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix}\right] d\phi) = \left[\begin{matrix} cos(\phi_L - \phi_R) & -sin(\phi_L - \phi_R) \\ sin(\phi_L - \phi_R) & cos(\phi_L - \phi_R) \end{matrix}\right] = \mathbf{R}(\phi_L - \phi_R)$

Next of course is the vector basis propagator:

$\mathbf{P}^{-1}_\phi(\mathbf{x},\mathbf{y}) = exp(\int_{\phi_L}^{\phi_R} \mathbf{\Gamma}_\phi d\phi) = \mathbf{R}(\phi_R - \phi_L)$

which is exactly what we would expect for a matrix that rotates a vector basis by an angle.
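Quick numeric check of that (sketch): the orthonormal polar basis really is carried by a plain rotation.

```python
# Sketch: the orthonormal polar basis (columns e_rhat, e_phihat) transported from
# phi_L to phi_R is just a rotation by (phi_R - phi_L).
import numpy as np

def R(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def ortho_basis(phi):
    return R(phi)            # e_rhat = (cos, sin), e_phihat = (-sin, cos); the r-scaling is gone

phi_L, phi_R = 0.3, 1.4
print(np.allclose(ortho_basis(phi_L) @ R(phi_R - phi_L), ortho_basis(phi_R)))   # True
```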



Example: Spherical coordinates:

${\Gamma^\theta}_{r\theta} = \frac{1}{r}$
${\Gamma^\phi}_{r\phi} = \frac{1}{r}$
${\Gamma^r}_{\theta\theta} = -r$
${\Gamma^\theta}_{\theta r} = \frac{1}{r}$
${\Gamma^\phi}_{\theta\phi} = \frac{cos(\theta)}{sin(\theta)}$
${\Gamma^r}_{\phi\phi} = -r sin(\theta)^2$
${\Gamma^\theta}_{\phi\phi} = -sin(\theta) cos(\theta)$
${\Gamma^\phi}_{\phi r} = \frac{1}{r}$
${\Gamma^\phi}_{\phi\theta} = \frac{cos(\theta)}{sin(\theta)}$

As matrices:

$\mathbf{\Gamma}_r = \left[\begin{matrix} {\Gamma^r}_{r r} & {\Gamma^r}_{r \theta} & {\Gamma^r}_{r\phi} \\ {\Gamma^\theta}_{r r} & {\Gamma^\theta}_{r \theta} & {\Gamma^\theta}_{r\phi} \\ {\Gamma^\phi}_{r r} & {\Gamma^\phi}_{r \theta} & {\Gamma^\phi}_{r\phi} \end{matrix}\right] = \left[\begin{matrix} 0 & 0 & 0 \\ 0 & \frac{1}{r} & 0 \\ 0 & 0 & \frac{1}{r} \end{matrix}\right]$

$\mathbf{\Gamma}_\theta = \left[\begin{matrix} {\Gamma^r}_{\theta r} & {\Gamma^r}_{\theta \theta} & {\Gamma^r}_{\theta\phi} \\ {\Gamma^\theta}_{\theta r} & {\Gamma^\theta}_{\theta \theta} & {\Gamma^\theta}_{\theta\phi} \\ {\Gamma^\phi}_{\theta r} & {\Gamma^\phi}_{\theta \theta} & {\Gamma^\phi}_{\theta\phi} \end{matrix}\right] = \left[\begin{matrix} 0 & -r & 0 \\ \frac{1}{r} & 0 & 0 \\ 0 & 0 & \frac{cos(\theta)}{sin(\theta)} \end{matrix}\right]$

$\mathbf{\Gamma}_\phi = \left[\begin{matrix} {\Gamma^r}_{\phi r} & {\Gamma^r}_{\phi \theta} & {\Gamma^r}_{\phi\phi} \\ {\Gamma^\theta}_{\phi r} & {\Gamma^\theta}_{\phi \theta} & {\Gamma^\theta}_{\phi\phi} \\ {\Gamma^\phi}_{\phi r} & {\Gamma^\phi}_{\phi \theta} & {\Gamma^\phi}_{\phi\phi} \end{matrix}\right] = \left[\begin{matrix} 0 & 0 & -r sin(\theta)^2 \\ 0 & 0 & -sin(\theta) cos(\theta) \\ \frac{1}{r} & \frac{cos(\theta)}{sin(\theta)} & 0 \end{matrix}\right]$

$\mathbf{P}^{-1}_r(r_L, r_R) = \left[\begin{matrix} 1 & 0 & 0 \\ 0 & \frac{r_R}{r_L} & 0 \\ 0 & 0 & \frac{r_R}{r_L} \end{matrix}\right]$
$\mathbf{P}^{-1}_r(r_L, r_R) = \mathbf{S}(1, \frac{r_R}{r_L}, \frac{r_R}{r_L})$
$\mathbf{P}_r(r_L, r_R) = \left[\begin{matrix} 1 & 0 & 0 \\ 0 & \frac{r_L}{r_R} & 0 \\ 0 & 0 & \frac{r_L}{r_R} \end{matrix}\right]$
$\mathbf{P}_r(r_L, r_R) = \mathbf{S}(1, \frac{r_L}{r_R}, \frac{r_L}{r_R})$


$\mathbf{P}^{-1}_\theta(\theta_L, \theta_R) = exp\left( \int_{\theta_L}^{\theta_R} \left[\begin{matrix} 0 & -r & 0 \\ \frac{1}{r} & 0 & 0 \\ 0 & 0 & \frac{cos(\theta)}{sin(\theta)} \end{matrix}\right] d\theta \right)$
$\mathbf{P}^{-1}_\theta(\theta_L, \theta_R) = exp\left( \left[\begin{matrix} 0 & -r (\theta_R - \theta_L) & 0 \\ \frac{1}{r} (\theta_R - \theta_L) & 0 & 0 \\ 0 & 0 & log\left(\frac{|sin(\theta_R)|}{|sin(\theta_L)|}\right) \end{matrix}\right] \right) $
$\mathbf{P}^{-1}_\theta(\theta_L, \theta_R) = \left[\begin{matrix} i r & -i r & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right] exp\left( \left[\begin{matrix} i (\theta_R - \theta_L) & 0 & 0 \\ 0 & -i (\theta_R - \theta_L) & 0 \\ 0 & 0 & log\left(\frac{sin(\theta_R)}{sin(\theta_L)}\right) \end{matrix}\right] \right) \left[\begin{matrix} -\frac{i}{2 r} & \frac{1}{2} & 0 \\ \frac{i}{2 r} & \frac{1}{2} & 0 \\ 0 & 0 & 1 \end{matrix}\right] $
$\mathbf{P}^{-1}_\theta(\theta_L, \theta_R) = \left[\begin{matrix} i r & -i r & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right] \left[\begin{matrix} exp(i (\theta_R - \theta_L)) & 0 & 0 \\ 0 & exp(-i (\theta_R - \theta_L)) & 0 \\ 0 & 0 & \frac{sin(\theta_R)}{sin(\theta_L)} \end{matrix}\right] \left[\begin{matrix} -\frac{i}{2 r} & \frac{1}{2} & 0 \\ \frac{i}{2 r} & \frac{1}{2} & 0 \\ 0 & 0 & 1 \end{matrix}\right] $
$\mathbf{P}^{-1}_\theta(\theta_L, \theta_R) = \left[\begin{matrix} cos(\theta_R - \theta_L) & -r sin(\theta_R - \theta_L) & 0 \\ \frac{1}{r} sin(\theta_R - \theta_L) & cos(\theta_R - \theta_L) & 0 \\ 0 & 0 & \frac{sin(\theta_R)}{sin(\theta_L)} \end{matrix}\right] $
$\mathbf{P}^{-1}_\theta(\theta_L, \theta_R) = \mathbf{S}(1, \frac{1}{r}, \frac{1}{sin(\theta_L)}) \cdot \mathbf{R}_z(\theta_R - \theta_L) \cdot \mathbf{S}(1, r, sin(\theta_R)) $
$\mathbf{P}_\theta(\theta_L, \theta_R) = \left[\begin{matrix} cos(\theta_R - \theta_L) & r sin(\theta_R - \theta_L) & 0 \\ -\frac{1}{r} sin(\theta_R - \theta_L) & cos(\theta_R - \theta_L) & 0 \\ 0 & 0 & \frac{sin(\theta_L)}{sin(\theta_R)} \end{matrix}\right] $
$\mathbf{P}_\theta(\theta_L, \theta_R) = \mathbf{S}(1, \frac{1}{r}, \frac{1}{sin(\theta_R)}) \cdot \mathbf{R}_z(\theta_L - \theta_R) \cdot \mathbf{S}(1, r, sin(\theta_L)) $


$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = exp\left(\int_{\phi_L}^{\phi_R} \left[\begin{matrix} 0 & 0 & -r sin(\theta)^2 \\ 0 & 0 & -sin(\theta) cos(\theta) \\ \frac{1}{r} & \frac{cos(\theta)}{sin(\theta)} & 0 \end{matrix}\right] d\phi \right)$
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = exp\left( \left[\begin{matrix} 0 & 0 & -(\phi_R - \phi_L) r sin(\theta)^2 \\ 0 & 0 & -(\phi_R - \phi_L) sin(\theta) cos(\theta) \\ \frac{1}{r} (\phi_R - \phi_L) & (\phi_R - \phi_L) \frac{cos(\theta)}{sin(\theta)} & 0 \end{matrix}\right] \right)$
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = \left[\begin{matrix} -i r sin(\theta)^2 & i r sin(\theta)^2 & -\frac{r cos(\theta)}{sin(\theta)} \\ -i sin(\theta) cos(\theta) & i sin(\theta) cos(\theta) & 1 \\ 1 & 1 & 0 \end{matrix}\right] exp\left( \left[\begin{matrix} -i (\phi_R - \phi_L) & 0 & 0 \\ 0 & i (\phi_R - \phi_L) & 0 \\ 0 & 0 & 0 \end{matrix}\right] \right) \left[\begin{matrix} \frac{i}{2 r} & \frac{i cos(\theta)}{2 sin(\theta)} & \frac{1}{2} \\ \frac{-i}{2 r} & -\frac{i cos(\theta)}{2 sin(\theta)} & \frac{1}{2} \\ -\frac{1}{r} cos(\theta) sin(\theta) & sin(\theta)^2 & 0 \end{matrix}\right] $
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = \left[\begin{matrix} -i r sin(\theta)^2 & i r sin(\theta)^2 & -\frac{r cos(\theta)}{sin(\theta)} \\ -i sin(\theta) cos(\theta) & i sin(\theta) cos(\theta) & 1 \\ 1 & 1 & 0 \end{matrix}\right] \left[\begin{matrix} exp(-i (\phi_R - \phi_L)) & 0 & 0 \\ 0 & exp(i (\phi_R - \phi_L)) & 0 \\ 0 & 0 & exp(0) \end{matrix}\right] \left[\begin{matrix} \frac{i}{2 r} & \frac{i cos(\theta)}{2 sin(\theta)} & \frac{1}{2} \\ \frac{-i}{2 r} & -\frac{i cos(\theta)}{2 sin(\theta)} & \frac{1}{2} \\ -\frac{1}{r} cos(\theta) sin(\theta) & sin(\theta)^2 & 0 \end{matrix}\right] $
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = \left[\begin{matrix} cos(\theta)^2 + cos(\phi_R - \phi_L) sin(\theta)^2 & -r cos(\theta) sin(\theta) (1 - cos(\phi_R - \phi_L)) & -r sin(\theta)^2 sin(\phi_R - \phi_L) \\ -\frac{1}{r} cos(\theta) sin(\theta) (1 - cos(\phi_R - \phi_L)) & cos(\phi_R - \phi_L) cos(\theta)^2 + sin(\theta)^2 & -sin(\theta) cos(\theta) sin(\phi_R - \phi_L) \\ \frac{1}{r} sin(\phi_R - \phi_L) & sin(\phi_R - \phi_L) \frac{cos(\theta)}{sin(\theta)} & cos(\phi_R - \phi_L) \end{matrix}\right]$
$\mathbf{P}_\phi^{-1}(\phi_L, \phi_R) = \mathbf{S}(1, \frac{1}{r}, \frac{1}{r sin(\theta)}) \cdot \mathbf{R}_z(-\theta) \cdot \mathbf{R}_x(\phi_R - \phi_L) \cdot \mathbf{R}_z(\theta) \cdot \mathbf{S}(1, r, r sin(\theta)) $
$\mathbf{P}_\phi(\phi_L, \phi_R) = \mathbf{S}(1, \frac{1}{r}, \frac{1}{r sin(\theta)}) \cdot \mathbf{R}_z(-\theta) \cdot \mathbf{R}_x(\phi_L - \phi_R) \cdot \mathbf{R}_z(\theta) \cdot \mathbf{S}(1, r, r sin(\theta)) $
$\mathbf{P}_\phi(\phi_L, \phi_R) = \left[\begin{matrix} cos(\theta)^2 + cos(\phi_L - \phi_R) sin(\theta)^2 & -r cos(\theta) sin(\theta) (1 - cos(\phi_L - \phi_R)) & -r sin(\theta)^2 sin(\phi_L - \phi_R) \\ -\frac{1}{r} cos(\theta) sin(\theta) (1 - cos(\phi_L - \phi_R)) & cos(\phi_L - \phi_R) cos(\theta)^2 + sin(\theta)^2 & -sin(\theta) cos(\theta) sin(\phi_L - \phi_R) \\ \frac{1}{r} sin(\phi_L - \phi_R) & sin(\phi_L - \phi_R) \frac{cos(\theta)}{sin(\theta)} & cos(\phi_L - \phi_R) \end{matrix}\right]$
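These spherical closed forms can be sanity-checked numerically too (sketch, numpy/scipy). Along the $\theta$ line only the $cot(\theta)$ entry varies, and it sits in its own diagonal block, so the exp-of-integral is still exact; along the $\phi$ line the connection matrix is constant.

```python
# Sketch: check the spherical theta- and phi-propagator closed forms against expm of
# the integrated connection matrices (r fixed; theta fixed along the phi line).
import numpy as np
from scipy.linalg import expm

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1.0]])

def Rx(a):
    return np.array([[1.0, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])

def S(a, b, c):
    return np.diag([a, b, c])

r, th = 2.0, 0.8
th_L, th_R = 0.5, 1.1
ph_L, ph_R = 0.3, 1.4

# theta line: the cot(theta) entry integrates to log(sin th_R / sin th_L)
I_theta = (th_R - th_L) * np.array([[0, -r, 0], [1 / r, 0, 0], [0, 0, 0.0]])
I_theta[2, 2] = np.log(np.sin(th_R) / np.sin(th_L))
P_theta_inv = S(1, 1 / r, 1 / np.sin(th_L)) @ Rz(th_R - th_L) @ S(1, r, np.sin(th_R))
print(np.allclose(expm(I_theta), P_theta_inv))                      # True

# phi line: connection constant in phi, so the integral is just dphi * Gamma_phi
G_phi = np.array([[0, 0, -r * np.sin(th) ** 2],
                  [0, 0, -np.sin(th) * np.cos(th)],
                  [1 / r, np.cos(th) / np.sin(th), 0]])
P_phi_inv = (S(1, 1 / r, 1 / (r * np.sin(th))) @ Rz(-th) @ Rx(ph_R - ph_L)
             @ Rz(th) @ S(1, r, r * np.sin(th)))
print(np.allclose(expm((ph_R - ph_L) * G_phi), P_phi_inv))           # True
```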

And you should notice that $[\mathbf{P}_\mu, \mathbf{P}_\nu] = 0$. This is because the Riemann curvature is zero.




Example: Spherical anholonomic orthonormal basis:

Once again, like polar, we are orthonormalizing our coordinates. Also like polar, we are starting with an orthogonal basis, so the only transformation that needs to be applied to our basis vectors (and subsequently our vector and one-form components) is rescaling.

${\Gamma^\theta}_{\theta r} = -{\Gamma^r}_{\theta\theta} = \frac{1}{r}$
${\Gamma^\phi}_{\phi r} = -{\Gamma^r}_{\phi\phi} = \frac{1}{r}$
${\Gamma^\phi}_{\phi\theta} = -{\Gamma^\theta}_{\phi\phi} = \frac{cos(\theta)}{r sin(\theta)}$

Converting the connection's directional index to the coordinate basis.
(TODO either use hats for non-coord here or bars for coord above).

$\mathbf{\Gamma}_r = \left[\begin{matrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right]$

$\mathbf{\Gamma}_{\bar\theta} = \left[\begin{matrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right]$

$\mathbf{\Gamma}_{\bar\phi} = \left[\begin{matrix} 0 & 0 & -sin(\theta) \\ 0 & 0 & -cos(\theta) \\ sin(\theta) & cos(\theta) & 0 \end{matrix}\right]$


$\mathbf{P}^{-1}_r(r_L, r_R) = \mathbf{I}$

$\mathbf{P}^{-1}_\theta(\theta_L, \theta_R) = \mathbf{R}_z(\theta_R - \theta_L)$

$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = \mathbf{R}_z(-\theta) \cdot \mathbf{R}_x(\phi_R - \phi_L) \cdot \mathbf{R}_z(\theta) $


$\mathbf{P}_r(r_L, r_R) = \mathbf{I}$

$\mathbf{P}_\theta(\theta_L, \theta_R) = \mathbf{R}_z(\theta_L - \theta_R)$

$\mathbf{P}_\phi(\phi_L, \phi_R) = \mathbf{R}_z(-\theta) \cdot \mathbf{R}_x(\phi_L - \phi_R) \cdot \mathbf{R}_z(\theta) $
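As a sanity check (my addition, numeric Python/scipy rather than symmath): exponentiating the coordinate-derivative-index connection $\mathbf{\Gamma}_{\bar\phi}$ above really does give the conjugated rotation, since $\mathbf{\Gamma}_{\bar\phi}$ is antisymmetric and is the generator of rotations about the axis $(cos(\theta), -sin(\theta), 0)$:

```python
import numpy as np
from scipy.linalg import expm

theta, dphi = 0.7, 0.4          # sample values; dphi stands for phi_R - phi_L

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Γ_φ̄ from above: a rotation generator about (cos θ, -sin θ, 0)
Gamma_phi = np.array([[0, 0, -np.sin(theta)],
                      [0, 0, -np.cos(theta)],
                      [np.sin(theta), np.cos(theta), 0]])

lhs = expm(dphi * Gamma_phi)                 # P^{-1}_φ = exp(+∫ Γ_φ̄ dφ)
rhs = Rz(-theta) @ Rx(dphi) @ Rz(theta)
print(np.max(np.abs(lhs - rhs)))             # ≈ 0
```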

And you should notice that $[\mathbf{P}_\mu, \mathbf{P}_\nu] = 0$. This is because the Riemann curvature is zero.




Example: Sphere Surface, Coordinate Basis

Here's where things detour a bit. TODO change the above math to mention the need for extrinsic curvature as well. Long story short: don't restrict yourself to a submanifold that carries extrinsic curvature; instead work in the whole embedding manifold, so the extrinsic curvature is included.

Our coordinates are only $x = \{ \theta, \phi \}$.

Our connections are:

${\Gamma^\theta}_{\phi\phi} = -cos(\theta) sin(\theta)$
${\Gamma^\phi}_{\theta\phi} = {\Gamma^\phi}_{\phi\theta} = \frac{cos(\theta)}{sin(\theta)}$
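These can be rederived from the induced metric $g_{ab} = diag(r^2, r^2 sin(\theta)^2)$ (the constant $r^2$ drops out of the Christoffels). A quick Python/sympy sketch of that, my addition, not from the symmath pages:

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
x = [theta, phi]
g = sp.diag(r**2, r**2*sp.sin(theta)**2)      # induced metric on the sphere surface
ginv = g.inv()

# Γ^a_bc = ½ g^ad (∂_c g_db + ∂_b g_dc - ∂_d g_bc)
def Gamma(a, b, c):
    return sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
                                       + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d]))/2
                           for d in range(2)))

print(Gamma(0, 1, 1))   # Γ^θ_φφ = -sin(θ)cos(θ)
print(Gamma(1, 0, 1))   # Γ^φ_θφ = cos(θ)/sin(θ)
```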

As matrices:

$\mathbf{\Gamma}_\theta = \left[\begin{matrix} {\Gamma^\theta}_{\theta \theta} & {\Gamma^\theta}_{\theta\phi} \\ {\Gamma^\phi}_{\theta \theta} & {\Gamma^\phi}_{\theta\phi} \end{matrix}\right] = \left[\begin{matrix} 0 & 0 \\ 0 & \frac{cos(\theta)}{sin(\theta)} \end{matrix}\right]$

$\mathbf{\Gamma}_\phi = \left[\begin{matrix} {\Gamma^\theta}_{\phi \theta} & {\Gamma^\theta}_{\phi\phi} \\ {\Gamma^\phi}_{\phi \theta} & {\Gamma^\phi}_{\phi\phi} \end{matrix}\right] = \left[\begin{matrix} 0 & -cos(\theta) sin(\theta) \\ \frac{cos(\theta)}{sin(\theta)} & 0 \end{matrix}\right]$

Propagators:

$\mathbf{P}_\theta(\theta_L, \theta_R) = \left[\begin{matrix} 1 & 0 \\ 0 & \frac{|sin(\theta_L)|}{|sin(\theta_R)|} \end{matrix}\right]$

$\mathbf{P}^{-1}_\theta(\theta_L, \theta_R) = \left[\begin{matrix} 1 & 0 \\ 0 & \frac{|sin(\theta_R)|}{|sin(\theta_L)|} \end{matrix}\right]$

(The $\theta$-$\theta$ entry is $exp(0) = 1$, since ${\Gamma^\theta}_{\theta\theta} = 0$.)

$\mathbf{P}_\phi(\phi_L, \phi_R) = \left[\begin{matrix} cos(cos(\theta) (\phi_L - \phi_R)) & -sin(\theta) sin(cos(\theta) (\phi_L - \phi_R)) \\ \frac{sin(cos(\theta) (\phi_L - \phi_R))}{sin(\theta)} & cos(cos(\theta) (\phi_L - \phi_R)) \end{matrix}\right]$
$\mathbf{P}^{-1}_\phi(\phi_L, \phi_R) = \left[\begin{matrix} cos(cos(\theta) (\phi_R - \phi_L)) & sin(\theta) sin(cos(\theta) (\phi_R - \phi_L)) \\ -\frac{sin(cos(\theta) (\phi_R - \phi_L))}{sin(\theta)} & cos(cos(\theta) (\phi_R - \phi_L)) \end{matrix}\right]$
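A quick numerical check of both (my addition; plain Python/scipy at sample values, not symmath): $\mathbf{\Gamma}_\theta$ is diagonal, so its path-ordered exponential is just the elementwise exponential of $-\int cot(\theta) d\theta$, and $\mathbf{\Gamma}_\phi$ is constant along a $\phi$ line, so its propagator is a plain matrix exponential:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

theta, thL, thR, phL, phR = 0.7, 0.5, 1.1, 0.2, 0.9    # sample values away from the poles

# P_theta = exp(-∫ Γ_θ dθ): Γ_θ = diag(0, cot θ) is diagonal, so path ordering is trivial
integral, _ = quad(lambda th: np.cos(th)/np.sin(th), thL, thR)
P_theta = expm(-np.diag([0.0, integral]))
print(P_theta[0, 0], P_theta[1, 1], np.sin(thL)/np.sin(thR))   # 1, then two matching ratios

# P_phi = exp(-(φ_R - φ_L) Γ_φ): Γ_φ is constant along the φ line
Gamma_phi = np.array([[0.0, -np.cos(theta)*np.sin(theta)],
                      [np.cos(theta)/np.sin(theta), 0.0]])
P_phi = expm(-(phR - phL)*Gamma_phi)
a = np.cos(theta)*(phL - phR)
P_phi_closed = np.array([[np.cos(a), -np.sin(theta)*np.sin(a)],
                         [np.sin(a)/np.sin(theta), np.cos(a)]])
print(np.max(np.abs(P_phi - P_phi_closed)))                    # ≈ 0
```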

Alright, intuitively: $\mathbf{P}_\phi$ factors as $\mathbf{S}(1, \frac{1}{sin(\theta)}) \cdot \mathbf{R}(cos(\theta) (\phi_L - \phi_R)) \cdot \mathbf{S}(1, sin(\theta))$, i.e. scale, rotate by $cos(\theta) (\phi_L - \phi_R)$, un-scale. When $\theta = 0$ or $\theta = \pi$ the scale matrices become singular / divergent, as they should at the poles. Check.
How about at the equator? At $\theta = \frac{\pi}{2}$ the scale matrices become the identity and the rotation angle becomes zero, so $\mathbf{P}_\phi = \mathbf{I}$. Check.
And everywhere between, the $e_\theta$ and $e_\phi$ components interchange with one another proportional to $cos(\theta)$. Check.
Makes sense.

Propagator commutation?

$[\mathbf{P}_\theta, \mathbf{P}_\phi] = ... $

...

Sidebar:
If I divide $\mathbf{P}_\phi$ by $cos(\theta)$ then I arrive at the same parallel propagation as the full spherical coordinates have along the $\phi$ coordinate.
So what happens when I try to reconstruct the tangent connections from the extrinsic curvature?

In fact, if you change $\Gamma \rightarrow \frac{1}{cos(\theta)} \Gamma$ then you get something that looks more similar to the spherical propagator. Why does the $cos(\theta)$ get absorbed into the angle of rotation for this one? Maybe because it isn't present in all terms that are used for the eigenvectors in the full sphere, but in this one it is?

Let's look at how things play out if we include extrinsic curvature.
Let's define our matrix notation basis to be $\{\partial_r, \partial_\theta, \partial_\phi\}$.
First let's derive the normal.

$n_a = \nabla_a r = \left[ \begin{matrix} 1 & 0 & 0 \end{matrix} \right]$

$n^a = g^{ab} n_b = \left[ \begin{matrix} 1 \\ 0 \\ 0 \end{matrix} \right]$

$K_{ab} = -\perp \nabla_a n_b$
$K_{ab} = -\perp (e_a (n_b) - {\Gamma^c}_{ab} n_c)$
Remember $n_a$ is constant, so we get:
$K_{ab} = \perp {\Gamma^c}_{ab} n_c$
And remember $n_r = 1, n_a = 0$ otherwise.
$K_{ab} = \perp {\Gamma^r}_{ab}$
$K_{ab} = \left[ \begin{matrix} 0 & 0 & 0 \\ 0 & -r & 0 \\ 0 & 0 & -r sin(\theta)^2 \end{matrix} \right]$

Now, since $K_{ab}$ is a projected tensor and therefore lives in the tangent space of the submanifold, the $K_{r \mu}$ and $K_{\mu r}$ components are zero for all $\mu \in \{ r, \theta, \phi\}$, so we can just omit them:
$K_{ab} = \left[ \begin{matrix} -r & 0 \\ 0 & -r sin(\theta)^2 \end{matrix} \right]$
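As a sanity check (my addition): with the induced metric $h_{ab} = \left[\begin{matrix} r^2 & 0 \\ 0 & r^2 sin(\theta)^2 \end{matrix}\right]$ this is just $K_{ab} = -\frac{1}{r} h_{ab}$, the usual constant result for a sphere of radius $r$ (the overall sign depends on the sign convention chosen for $K_{ab}$ above).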

This also fits well with the identity of normal-space connection coefficients:
${(\Gamma^\perp)^i}_{jk} = n^i K_{jk}$

$\mathbf{\Gamma}_\theta = \left[\begin{matrix} {\Gamma^r}_{\theta \theta} & {\Gamma^r}_{\theta\phi} \\ {\Gamma^\theta}_{\theta \theta} & {\Gamma^\theta}_{\theta\phi} \\ {\Gamma^\phi}_{\theta \theta} & {\Gamma^\phi}_{\theta\phi} \end{matrix}\right] = \left[\begin{matrix} -r & 0 \\ 0 & 0 \\ 0 & \frac{cos(\theta)}{sin(\theta)} \end{matrix}\right]$

$\mathbf{\Gamma}_\phi = \left[\begin{matrix} {\Gamma^r}_{\phi \theta} & {\Gamma^r}_{\phi\phi} \\ {\Gamma^\theta}_{\phi \theta} & {\Gamma^\theta}_{\phi\phi} \\ {\Gamma^\phi}_{\phi \theta} & {\Gamma^\phi}_{\phi\phi} \end{matrix}\right] = \left[\begin{matrix} 0 & -r sin(\theta)^2 \\ 0 & -sin(\theta) cos(\theta) \\ \frac{cos(\theta)}{sin(\theta)} & 0 \end{matrix}\right]$

This accounts for ${\Gamma^r}_{ab} = n^r K_{ab} = K_{ab}$, but what about ${\Gamma^a}_{rb}$ or ${\Gamma^a}_{br}$?

TODO just that.

The connections of the manifold with its perpendicular space were already given above, and it was already shown that they work in the full spherical coordinates.

$\mathbf{\Gamma}_\theta = \left[\begin{matrix} 0 & -r & 0 \\ \frac{1}{r} & 0 & 0 \\ 0 & 0 & \frac{cos(\theta)}{sin(\theta)} \end{matrix}\right]$

$\mathbf{\Gamma}_\phi = \left[\begin{matrix} 0 & 0 & -r sin(\theta)^2 \\ 0 & 0 & -sin(\theta) cos(\theta) \\ \frac{1}{r} & \frac{cos(\theta)}{sin(\theta)} & 0 \end{matrix}\right]$





Example: 3D Rotation Group

Look at https://thenumbernine.github.io/lua/symmath/tests/output/rotation%20group.html for some derivations using symmath.

Start with your generators $\mathbf{K}_j$, to be defined such that ${(K_j)^i}_k = \epsilon_{ijk}$

Define your rotation matrix as the exponential map of the generators:
$\mathbf{R}_j(t) = exp(t \mathbf{K}_j)$, i.e. ${(R_j(t))^i}_k = {\left( exp(t \mathbf{K}_j) \right)^i}_k$ (a matrix exponential, not a per-component one).
$\frac{\partial}{\partial t} \mathbf{R}_j(t) = \mathbf{K}_j \cdot \mathbf{R}_j(t)$
Solve for the generator now:
$\mathbf{K}_j = \frac{\partial}{\partial t} \mathbf{R}_j(t) \cdot \mathbf{R}_j^{-1}(t)$
Evaluating at $t = 0$, where $\mathbf{R}_j(0) = \mathbf{I}$, this becomes $\mathbf{K}_j = \frac{\partial}{\partial t} \mathbf{R}_j(t) \big|_{t=0}$.
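A tiny numerical check of that last statement (my addition, a finite difference in Python rather than symmath):

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# d/dt R_z(t) at t = 0 should recover the generator (K_z)^i_k = ε_{izk}
h = 1e-6
K_z = (Rz(h) - Rz(-h)) / (2*h)      # central difference at t = 0
print(np.round(K_z, 6))             # ≈ [[0, -1, 0], [1, 0, 0], [0, 0, 0]]
```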

Now define our Euler angles rotation matrix:
$\mathbf{P} = \mathbf{R}_z(\psi) \mathbf{R}_x(\theta) \mathbf{R}_z(\phi)$
Define our basis:
$e_j(\mathbf{P}) = \frac{\partial}{\partial x^j} \mathbf{P} = \frac{\partial x^\hat{k}}{\partial x^j} \frac{\partial}{\partial x^\hat{k}} \mathbf{P} = \mathbf{K}_j \cdot \mathbf{P}$
where the $x^j$ correspond to the Cartesian basis while the $x^{\hat k}$ correspond to the Euler angles.

Commutation coefficients: ${c_{ij}}^k e_k = [e_i, e_j] = -\epsilon_{ijk} e_k$.

Connection coefficients: ${\Gamma^\alpha}_{\beta\gamma} = \frac{1}{2} \epsilon_{\alpha\beta\gamma}$ comes purely from the commutation coefficients, since the metric is constant.

But what about transforming our non-coordinate basis indexes into a coordinate basis? The metric is identity, so transforming it from non-coordinate to coordinate means no change.

${\Gamma^i}_{xj} = [\frac{1}{2} \epsilon_{i x j}] = \downarrow i \overset{\rightarrow j}{\left[\begin{matrix} 0 & 0 & 0 \\ 0 & 0 & -\frac{1}{2} \\ 0 & \frac{1}{2} & 0 \end{matrix}\right]}$

${\Gamma^i}_{yj} = [\frac{1}{2} \epsilon_{i y j}] = \left[\begin{matrix} 0 & 0 & \frac{1}{2} \\ 0 & 0 & 0 \\ -\frac{1}{2} & 0 & 0 \end{matrix}\right]$

${\Gamma^i}_{z j} = [\frac{1}{2} \epsilon_{i z j}] = \left[\begin{matrix} 0 & -\frac{1}{2} & 0 \\ \frac{1}{2} & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right]$

Linear combination of the connection:

$v^i \mathbf{\Gamma}_i = \frac{1}{2} \left[\begin{matrix} 0 & -v^z & v^y \\ v^z & 0 & -v^x \\ -v^y & v^x & 0 \end{matrix}\right]$

Integral of connection:

$\int_0^1 v^i \mathbf{\Gamma}_i d\lambda = v^i \mathbf{\Gamma}_i = \frac{1}{2} \left[\begin{matrix} 0 & -v^z & v^y \\ v^z & 0 & -v^x \\ -v^y & v^x & 0 \end{matrix}\right]$

Parallel propagator, based on the 'general linear' form / linear interpolation in chart coordinates:

$\mathbf{P}(\mathbf{x},\mathbf{y}) = exp(-\int_{\lambda = 0}^{\lambda = 1} \mathbf{\Gamma}_i v^i d\lambda)$
$\mathbf{P}(\mathbf{x},\mathbf{y}) = exp\left( \frac{1}{2} \left[\begin{matrix} 0 & v^z & -v^y \\ -v^z & 0 & v^x \\ v^y & -v^x & 0 \end{matrix}\right] \right)$

Let $|\mathbf{v}| = \sqrt{(v^x)^2 + (v^y)^2 + (v^z)^2}$ and $\hat{v}^i = \frac{1}{|\mathbf{v}|} v^i$

$\mathbf{P}(\mathbf{x},\mathbf{y}) = exp\left( \frac{|\mathbf{v}|}{2} \left[\begin{matrix} 0 & \hat{v}^z & -\hat{v}^y \\ -\hat{v}^z & 0 & \hat{v}^x \\ \hat{v}^y & -\hat{v}^x & 0 \end{matrix}\right] \right)$

$\mathbf{P}(\mathbf{x},\mathbf{y}) = \mathbf{R}_\hat{\mathbf{v}}(-\frac{|\mathbf{v}|}{2})$
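And a last numerical check of that (my addition, plain Python/scipy): the exponential of $-v^i \mathbf{\Gamma}_i$ matches a Rodrigues-formula rotation about $\hat{\mathbf{v}}$ by $-\frac{|\mathbf{v}|}{2}$:

```python
import numpy as np
from scipy.linalg import expm

v = np.array([0.3, -0.5, 0.7])
vlen = np.linalg.norm(v)
vhat = v / vlen

def skew(w):    # generator of rotations about the axis w
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

# -v^i Γ_i = ½ [[0, v^z, -v^y], [-v^z, 0, v^x], [v^y, -v^x, 0]] = -½ skew(v)
P = expm(-0.5 * skew(v))

# Rodrigues rotation by angle -|v|/2 about v̂
angle = -vlen / 2
R = np.eye(3) + np.sin(angle)*skew(vhat) + (1 - np.cos(angle))*skew(vhat) @ skew(vhat)

print(np.max(np.abs(P - R)))   # ≈ 0
```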

Applied:
$\mathbf{v} = \mathbf{e}(\mathbf{x}) \mathbf{v}(\mathbf{x})$
$= \mathbf{e}(\mathbf{x}) \mathbf{P}(\mathbf{y}, \mathbf{x}) \mathbf{P}(\mathbf{x},\mathbf{y}) \mathbf{v}(\mathbf{x})$
$= \mathbf{e}(\mathbf{x}) \mathbf{R}_\hat{\mathbf{v}}(\frac{|\mathbf{v}|}{2}) \mathbf{R}_\hat{\mathbf{v}}(-\frac{|\mathbf{v}|}{2}) \mathbf{v}(\mathbf{x})$
$= \mathbf{e}(\mathbf{y}) \mathbf{v}'(\mathbf{y})$

So $\mathbf{e}(\mathbf{y}) = \mathbf{e}(\mathbf{x}) \mathbf{R}_\hat{\mathbf{v}}(\frac{|\mathbf{v}|}{2})$
And $\mathbf{v}'(\mathbf{y}) = \mathbf{R}_\hat{\mathbf{v}}(-\frac{|\mathbf{v}|}{2}) \mathbf{v}(\mathbf{x})$

Conceptually, left-multiplying a transformation applies it relative to the global basis, while right-multiplying applies it relative to the local basis.

Riemann curvature = ${R^i}_{jkl} = \frac{1}{2} \delta^{ij}_{kl}$

Notice that the dual of the matrix is a vector whose magnitude serves as the rotation angle and whose direction serves as the rotation axis.
Parallel propagator = exponent of integral of connection = rotation matrix along axis by angle as stated above.
Commutation of parallel propagator = Levi-Civita permutation tensor again.