Let $\mathcal{M}$ be our manifold of dimension $n$.
Let's separate out a hypersurface/submanifold $\Sigma$ of dimension $m$.
Let $e_i, i \in \{1, ..., m\}$ be our basis elements of the submanifold.
Let $e_i, i \in \{m+1, ..., n\}$ be our basis elements of the orthogonal space.
Let's use a coordinate basis, so $e_i = \partial_i$ and $[e_i, e_j] = 0$.
Let $\eta(x) = diag(\sigma_1 ... \sigma_n)$, with components $\eta_{ij} = \sigma_i \delta_{ij}$ (no sum on $i$), be the diagonalization of the metric of the manifold at some coordinate $x = \{ x^1 ... x^n \}$, for $\sigma_i = \pm1$.
So $\eta = \eta_{ij} e^i \otimes e^j = \sigma_i \cdot \delta_{ij} e^i \otimes e^j$
$\eta = i \downarrow \overset{j \rightarrow}{
\left[ \begin{matrix}
\sigma_1 & & 0 \\
& \ddots & \\
0 & & \sigma_n
\end{matrix} \right]
}$
Let $\eta^\perp = diag(\sigma_1 ... \sigma_m, \underset{\times (n-m)}{\underbrace{0 ... 0}})$ be the diagonalization of the metric of the submanifold at some point $x$.
So $\eta^\perp = (\eta^\perp)_{ij} e^i \otimes e^j$, where $(\eta^\perp)_{ij} = 0$ for $i$ or $j \in \{ m+1 ... n \}$, and $(\eta^\perp)_{ij} = \eta_{ij}$ otherwise.
$\eta^\perp = i \downarrow \overset{j \rightarrow}{
\left[ \begin{array}{ccc|ccc}
\sigma_1 & & 0 & 0 & & 0 \\
& \ddots & & & \ddots & \\
0 & & \sigma_m & 0 & & 0 \\
\hline
0 & & 0 & 0 & & 0 \\
& \ddots & & & \ddots & \\
0 & & 0 & 0 & & 0
\end{array} \right]
}$
Let $\eta^\top = diag(\underset{\times m}{\underbrace{0 ... 0}}, \sigma_{m+1} ... \sigma_n)$ be the diagonalization of the metric of the space perpendicular to the submanifold at the same point $x$.
So $\eta^\top = (\eta^\top)_{ij} e^i \otimes e^j$, where $(\eta^\top)_{ij} = 0$ for $i$ or $j \in \{ 1 ... m \}$, and $(\eta^\top)_{ij} = \eta_{ij}$ otherwise.
$\eta^\top = i \downarrow \overset{j \rightarrow}{
\left[ \begin{array}{ccc|ccc}
0 & & 0 & 0 & & 0 \\
& \ddots & & & \ddots & \\
0 & & 0 & 0 & & 0 \\
\hline
0 & & 0 & \sigma_{m+1} & & 0 \\
& \ddots & & & \ddots & \\
0 & & 0 & 0 & & \sigma_n
\end{array} \right]
}$
So $\eta = \eta^\perp + \eta^\top$
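As a concrete example, assume $n = 4$, $m = 3$, with the one perpendicular direction timelike:
$\eta = diag(1, 1, 1, -1)$, $\eta^\perp = diag(1, 1, 1, 0)$, $\eta^\top = diag(0, 0, 0, -1)$,
and indeed $\eta = \eta^\perp + \eta^\top$.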
Let $\Sigma$ (reusing the submanifold's name) be the volume form associated with the submanifold.
$\Sigma = (-1)^{m(n-m)} \eta_{m+1...n}$
$= (-1)^{m(n-m)} \epsilon_{m+1...n \ 1...m} e^1 \wedge ... \wedge e^m$
$= \epsilon_{1 ... m \ m+1 ... n} e^1 \wedge ... \wedge e^m$
$= \sqrt{|g|} e^1 \wedge ... \wedge e^m$
$= \sqrt{|g|} m! e^{[1} \otimes ... \otimes e^{m]}$
$= \sqrt{|g|} \delta^{1 ... m}_{i_1 ... i_m} e^{i_1} \otimes ... \otimes e^{i_m}$
$= \frac{1}{m!} \epsilon_{i_1 ... i_m \ m+1 ... n} e^{i_1} \wedge ... \wedge e^{i_m}$
$= \epsilon_{i_1 ... i_m \ m+1 ... n} e^{[i_1} \otimes ... \otimes e^{i_m]}$
$= \epsilon_{i_1 ... i_m \ m+1 ... n} e^{i_1} \otimes ... \otimes e^{i_m}$
So that in component form:
$\Sigma = \Sigma_{i_1 ... i_m} e^{i_1} \otimes ... \otimes e^{i_m}$
for $\Sigma_{i_1 ... i_m} = \sqrt{|g|} \delta^{1 ... m}_{i_1 ... i_m}$
$\Sigma_{i_1 ... i_m} = \sqrt{|g|} m! \delta^1_{[i_1} \cdot ... \cdot \delta^m_{i_m]}$
$\Sigma_{i_1 ... i_m} = \epsilon_{i_1 ... i_m \ m+1 ... n}$
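For example, with $n = 3$, $m = 2$:
$\Sigma = \sqrt{|g|} e^1 \wedge e^2$, so $\Sigma_{12} = -\Sigma_{21} = \epsilon_{123} = \sqrt{|g|}$ and all other components vanish.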
So $e_j \lrcorner \Sigma$
$= \epsilon_{i_1 ... i_m \ m+1 ... n} (e^{i_1} \otimes ... \otimes e^{i_m}) (e_j)$
$= \epsilon_{i_1 ... i_m \ m+1 ... n} e^{i_1} (e_j) e^{i_2} \otimes ... \otimes e^{i_m}$
$= \epsilon_{i_1 ... i_m \ m+1 ... n} \delta^{i_1}_j e^{i_2} \otimes ... \otimes e^{i_m}$
$= \epsilon_{j \ i_2 ... i_m \ m+1 ... n} e^{i_2} \otimes ... \otimes e^{i_m}$
So $e_j \lrcorner \Sigma = 0$ for $j \in \{ m+1 ... n \}$, since $j$ then repeats one of the trailing indices,
and $e_j \lrcorner \Sigma$ has components $\pm \sqrt{|g|}$ for $j \in \{ 1 ... m \}$.
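Continuing the $n = 3$, $m = 2$ example:
$e_1 \lrcorner \Sigma = \epsilon_{1 i_2 3} e^{i_2} = \sqrt{|g|} e^2$, while $e_3 \lrcorner \Sigma = \epsilon_{3 i_2 3} e^{i_2} = 0$ by the repeated index.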
Let $N = \star \Sigma$ be the volume form of the perpendicular space.
$N = \sqrt{|g|} \star (e^1 \wedge ... \wedge e^m)$
$= \sqrt{|g|} e^{m+1} \wedge ... \wedge e^n$
$= \eta_{1 ... m}$
$= \epsilon_{1 ... m \ m+1 ... n} e^{m+1} \wedge ... \wedge e^n$
$= \sqrt{|g|} (n-m)! e^{[m+1} \otimes ... \otimes e^{n]}$
$= \sqrt{|g|} \delta^{m+1 ... n}_{i_{m+1} ... i_n} e^{i_{m+1}} \otimes ... \otimes e^{i_n}$
$= \frac{1}{(n-m)!} \epsilon_{1 ... m \ i_{m+1} ... i_n} e^{i_{m+1}} \wedge ... \wedge e^{i_n}$
$= \epsilon_{1 ... m \ i_{m+1} ... i_n} e^{[i_{m+1}} \otimes ... \otimes e^{i_n]}$
$= \epsilon_{1 ... m \ i_{m+1} ... i_n} e^{i_{m+1}} \otimes ... \otimes e^{i_n}$
So that in component form:
$N = N_{i_{m+1} ... i_n} e^{i_{m+1}} \otimes ... \otimes e^{i_n}$
for $N_{i_{m+1} ... i_n} = \sqrt{|g|} \delta^{m+1 ... n}_{i_{m+1} ... i_n}$
$N_{i_{m+1} ... i_n} = \sqrt{|g|} (n-m)! \delta^{m+1}_{[i_{m+1}} \cdot ... \cdot \delta^n_{i_n]}$
$N_{i_{m+1} ... i_n} = \epsilon_{1 ... m \ i_{m+1} ... i_n}$
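In the same $n = 3$, $m = 2$ example:
$N = \sqrt{|g|} e^3$, with single component $N_3 = \epsilon_{123} = \sqrt{|g|}$.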
So $e_j \lrcorner N$
$= \epsilon_{1 ... m \ i_{m+1} ... i_n} (e^{i_{m+1}} \otimes ... \otimes e^{i_n}) (e_j)$
$= \epsilon_{1 ... m \ i_{m+1} ... i_n} \delta^{i_{m+1}}_j e^{i_{m+2}} \otimes ... \otimes e^{i_n}$
$= \epsilon_{1 ... m \ j \ i_{m+2} ... i_n} e^{i_{m+2}} \otimes ... \otimes e^{i_n}$
$= 0$ if $j \in \{ 1 ... m \}$, since the permutation tensor $\epsilon$ then has a repeated index.
$= \pm \sqrt{|g|}$ if $j \in \{ m+1 ... n \}$, since no index repeats.
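Here is a quick numeric sanity check of these two contraction patterns, as a sketch (the metric values are made up, and it only looks at which components survive):

import numpy as np
from itertools import permutations

# Sanity check of e_j ⌟ Σ and e_j ⌟ N for n = 3, m = 2.
# The metric is a made-up diagonal example; sqrtg is its volume factor √|g|.
n, m = 3, 2
g = np.diag([2.0, 3.0, 5.0])
sqrtg = np.sqrt(abs(np.linalg.det(g)))

# Levi-Civita tensor (not symbol): eps[i,j,k] = √|g| * sgn(i,j,k)
eps = np.zeros((n, n, n))
for p in permutations(range(n)):
    eps[p] = sqrtg * np.linalg.det(np.eye(n)[list(p)])

Sigma = eps[:, :, 2]   # Σ_{i1 i2} = ε_{i1 i2 3}
N = eps[0, 1, :]       # N_{i3} = ε_{1 2 i3}

for j in range(n):
    # tangential e_1, e_2 contract into Σ but annihilate N; e_3 does the opposite
    print(j + 1, Sigma[j].any(), bool(N[j]))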
Let $n^i$ be the $i$'th normal form.
Define the $i$'th normal $n^i$ as the inverse of the non-coordinate transform that diagonalizes the metric, restricted to the indices $i \in \{m+1 ... n\}$:
$n^i = (n^i)_u e^u = {e^i}_u e^u$,
Notice that, for diagonalization:
$g_{uv}$
$= e_u \cdot e_v$
$= {e^i}_u {e^j}_v \eta_{ij}$
$= {e^i}_u {e^j}_v ((\eta^\perp)_{ij} + (\eta^\top)_{ij})$
$= {e^i}_u {e^j}_v (\eta^\perp)_{ij} + \frac{1}{\alpha_i} (n^i)_u \frac{1}{\alpha_j} (n^j)_v (\eta^\top)_{ij}$
$= \underset{i,j \in \{m+1 ... n\}}{\sum} \frac{1}{\alpha_i} (n^i)_u \frac{1}{\alpha_j} (n^j)_v (\eta^\top)_{ij}
+ \underset{i,j \in \{1 ... m\}}{\sum} {e^i}_u {e^j}_v (\eta^\perp)_{ij}$
So reconstructing $g_{uv}$ still needs the submanifold part of the basis, not just the normals.
In contrast:
$(\eta^\top)^{ij}$ (for $i, j \in \{ m+1 ... n \}$)
$= {e^i}_u {e^j}_v g^{uv}$
$= \frac{1}{\alpha_i} \frac{1}{\alpha_j} (n^i)_u (n^j)_v g^{uv}$
TODO is this true?:
for linear diagonalization of the metric ${e^i}_u$ such that ${e^i}_u {e^j}_v (\eta^\top)_{ij} = g_{uv}$
so $n \eta^\top n = {n^i}_u (\eta^\top)_{ij} {n^j}_v e^u \otimes e^v = g_{uv} e^u \otimes e^v$
There are multiple ways to define an inverse diagonalization ${e^i}_u = (n^i)_u$.
Choose the specific diagonalization:
$ n^i
= -(\alpha_i) \nabla (x^i)
= -(\alpha_i) e^u \nabla_u (x^i)
= -(\alpha_i) e^u \delta_u^i
= -(\alpha_i) e^i
$
Normalize it so $\sigma_i = n^i \cdot n^i = (\alpha_i)^2 g^{ii}$ (no sum on $i$).
Notice that $\alpha_i$ is a scalar, not tensorial, and the $i$ index isn't paired with a basis element.
The vector form of normal $n^i$ is:
$
n^i
= (n^i)_u e^u
= (n^i)^u e_u
= g^{uv} (n^i)_v e_u
= -g^{uv} (\alpha_i) \delta^i_v e_u
= -g^{ui} (\alpha_i) e_u
$
So $(n^i)^u = -g^{ui} (\alpha_i)$
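As a sanity check against the familiar ADM case, assume $n - m = 1$, the perpendicular coordinate is $x^n = t$, $\sigma_n = -1$, and $\alpha_n$ is the usual lapse $\alpha$:
$n^t = -\alpha \nabla t$, so $(n^t)_u = -\alpha \delta^t_u$ and $(n^t)^u = -\alpha g^{ut}$,
and $n^t \cdot n^t = \alpha^2 g^{tt} = -1 = \sigma_n$ exactly when $g^{tt} = -\frac{1}{\alpha^2}$, matching the ADM inverse metric.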
Let $(\beta_i)^u = g^{ui} (\alpha_i)$ be the i'th shift vector.
TODO the $i$'th $\beta_i$ will have to be raised/lowered by the submatrix of $\eta - \overset{i-1}{\underset{j=m+1}{\sum}} n_j \eta n_j$
I think this definition of $n^i = (n^i)_u e^u = {e^i}_u e^u$ should work with the definition of $N = \star \Sigma$ as well.
Note though that $\gamma \ne g + N \otimes N$: the construction below uses the individual normals $n^i$ rather than the perpendicular volume form $N$.
Let $
\gamma
= \gamma_{uv} e^u \otimes e^v
= g - n \eta^\top n
= g - (\eta^\top)_{ij} n^i \otimes n^j
= (g_{uv} - (\eta^\top)_{ij} (n^i)_u (n^j)_v)(e^u \otimes e^v)
$ be the projection operator onto the submanifold.
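In that same ADM case the only perpendicular entry is $(\eta^\top)_{tt} = \sigma_n = -1$, so
$\gamma = g - (-1) n^t \otimes n^t = g + n^t \otimes n^t$,
which is the usual ADM spatial projection metric.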
So for $k \in \{1 ... m\}$:
$\gamma(e_k)$
$= (g_{uv} - (\eta^\top)_{ij} {n^i}_u {n^j}_v)(e^u \otimes e^v) (e_k)$
$= (g_{uv} - (\eta^\top)_{ij} {e^i}_u {e^j}_v) \delta^u_k e^v$
$= (g_{uv} - (\eta^\top)_{ij} (\alpha_i) (\alpha_j) \delta^i_u \delta^j_v) \delta^u_k e^v$
$= (g_{kv} - (\eta^\top)_{kv} (\alpha_k) (\alpha_v)) e^v$
Since $k \in \{1 ... m\}, (\eta^\top)_{kv} = 0$
$= g_{kv} e^v$
$= e_k$
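Here is a numeric sketch of this tangential case for the ADM example above (lapse, shift, and spatial metric values are all made up):

import numpy as np

# Check γ(e_k) = g(e_k, ·) for tangential k in the n = 4, m = 3 ADM case,
# with the perpendicular coordinate last and σ_4 = -1, so γ = g + n⊗n.
a = 2.0                          # lapse α
b = np.array([0.3, -0.1, 0.2])   # shift vector (made up)
h = np.diag([1.0, 2.0, 3.0])     # spatial metric (made up)

g = np.zeros((4, 4))
g[:3, :3] = h
g[:3, 3] = g[3, :3] = h @ b      # mixed components g_{ut}
g[3, 3] = -a**2 + b @ h @ b      # g_{tt}

n_lo = np.array([0.0, 0.0, 0.0, -a])   # n_u = -α δ^t_u
gamma = g + np.outer(n_lo, n_lo)       # γ_{uv} = g_{uv} + n_u n_v

print(np.allclose(gamma[:3], g[:3]))   # True: the first m rows of γ reproduce g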
And for $k \in \{m+1 ... n \}$:
$\gamma(e_k)$
$= (g - n \eta^\top n) (e_k)$
$= (g_{kv} - (\eta^\top)_{kv} (\alpha_k) (\alpha_v)) e^v$
Since $k \in \{m+1 ... n\}, (\eta^\top)_{kv} = \eta_{kv} = \delta_{kv} \sigma_k$
$= (g_{kv} - \delta_{kv} \sigma_k (\alpha_k) (\alpha_v)) e^v$
TODO prove this is equal ... seems like I'll have to use raised $(n^i)^u = -g^{iu} (\alpha_i)$ somehow.
$= 0$
$\gamma_{ab} = a \downarrow \overset{b \rightarrow}{
\left[ \begin{array}{ccc|c|c|c}
\gamma_{11} & \cdots & \gamma_{1m} & (\beta_{m+1})_1 & \cdots & (\beta_n)_1 \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\gamma_{m1} & \cdots & \gamma_{mm} & (\beta_{m+1})_m & \cdots & (\beta_n)_m \\
\hline
(\beta_{m+1})_1 & \cdots & (\beta_{m+1})_m & (\beta_{m+1})_{m+1} = -\frac{1}{(\alpha_{m+1})^2} & \cdots & (\beta_{m+1})_n = (\beta_n)_{m+1} \\
\hline
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\hline
(\beta_n)_1 & \cdots & (\beta_n)_m & (\beta_n)_{m+1} = (\beta_{m+1})_n & \cdots & (\beta_n)_n = -\frac{1}{(\alpha_n)^2}
\end{array} \right]
}$