When working with General Relativity, we naturally need to work with tensors. Fortunately, we can easily represent objects like the metric tensor, \(g_{\mu\nu}\), and the stress-energy tensor using the Tensors package along with SymPy and LinearAlgebra.
Recall that a metric is essentially a map that tells us how distances are measured in a given coordinate system, that is, how the coordinate system is laid out.
The convention in GR, and more specifically when using Einstein summation notation, is to write the indices of contravariant vectors as superscripts and the indices of covariant vectors as subscripts. Read my posts on contravariant vs. covariant vectors and Einstein summation notation for a more in-depth treatment of each topic.
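As a quick reminder of how the summation convention works: a repeated upper and lower index implies a sum over that index, so in two dimensions \(a_\mu b^\mu\) is shorthand for \(a_1 b^1 + a_2 b^2\).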
Starting from flat space, we'll use the Pythagorean theorem to construct a metric tensor of rank 2.
\[ds^2 = (dx^1)^2 + (dx^2)^2\]For a tensor transformation from the \(X\) frame of reference to the \(Y\) frame of reference, we can use the following equation as a guide,
\[ \boxed{dX^m = \frac{\partial X^m}{\partial Y^r} dY^r}\,.\]Writing \((1)\) compactly as \(ds^2 = \delta_{mn}\, dX^m dX^n\), where \(\delta_{mn}\) is the Kronecker delta, and then applying \((2)\) yields,
\[\begin{aligned} ds^2 &= \delta_{mn} \frac{\partial X^m}{\partial Y^r} \frac{\partial X^n}{\partial Y^s} dY^r dY^s \\ &= g_{rs}dY^r dY^s \,.\end{aligned}\]Before jumping to the metric tensor, \(g_{rs}\), itself, I think the result is a little easier to understand from the perspective of the Kronecker delta.
Looking at \((1)\), we can see that we want to square the \(dx^1\) and \(dx^2\) terms. Furthermore, there are no terms with the product of \(dx^1\) and \(dx^2\), only \(dx^1\cdot dx^1\) and \(dx^2\cdot dx^2\).
Therefore, when \(m = n = 1\) we square the \(dx^1\) component, and when \(m = n = 2\) we square the \(dx^2\) component. Since the Kronecker delta returns \(0\) whenever \(m \neq n\), the cross terms vanish. Therefore, \(g_{11} = 1\), \(g_{22} = 1\), and \(g_{12} = g_{21} = 0\).
In matrix form this is written as,
\[\begin{bmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\,.\]To reiterate the result with the coefficients emphasized, the metric is given by,
\[ ds^2 = (1)(dx^1)^2 + (0)dx^1 dx^2 + (1)(dx^2)^2\,.\]Using the Tensors package, we can quickly construct a metric tensor for computation. We can use either the Tensor or the SymmetricTensor constructor. In either case we need to specify the tensor rank, the dimensions, and the data type.
using Tensors

# Flat-space metric in Cartesian coordinates: rank 2, dimension 2, integer entries
Tensor{2, 2, Int}([1 0 ; 0 1])
We want a tensor of rank \(2\) with \(2\) dimensions, and an integer (or other numeric) data type since we're only working with \(0\)s and \(1\)s. In the example above, I specified the rank, dimension, and data type and then passed in a matrix of the form shown in \((4)\). Another option would be to use the SymmetricTensor constructor. The difference here is that we are explicitly stating that our \(g_{12}\) and \(g_{21}\) terms are identical, thereby forming a symmetric matrix, so we can use the following syntax.
using Tensors, LinearAlgebra
g_cart = SymmetricTensor{2, 2, Int}((1, 0, 1))
> 2×2 SymmetricTensor{2, 2, Int64, 3}:
> 1 0
> 0 1
It's also worth noting that because of the symmetry here, we're only storing \(3\) values instead of \(4\). While that's unlikely to make any difference for a rank-\(2\), \(2\)-dimensional tensor, it can offer real savings for tensors of higher rank and dimension.
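As a rough illustration of the savings (a minimal sketch; the byte counts in the comments assume the default Float64 components and Tensors' flat tuple storage), we can compare the in-memory size of a full versus a symmetric rank-\(2\), \(3\)-dimensional identity tensor.
using Tensors

# Identity tensor in 3 dimensions, stored as a full tensor (9 components)
# and as a symmetric tensor (6 unique components); both default to Float64
g_full = one(Tensor{2, 3})
g_sym  = one(SymmetricTensor{2, 3})

sizeof(g_full)   # expected: 72 bytes (9 × 8)
sizeof(g_sym)    # expected: 48 bytes (6 × 8)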
We'll take it a step further. Let's perform a transformation from rectilinear coordinates to polar coordinates.
We'll need to leverage the following mappings,
\[\begin{aligned} x &= r\cos\theta \\ y &= r\sin\theta \,. \end{aligned}\]When we differentiate \(x\) and \(y\), we need to remember to apply the product rule and take total differentials, since each is a function of both \(r\) and \(\theta\).
\[ dx = \cos{\theta}\ dr - r\sin{\theta}\ d\theta\] \[ dy = \sin{\theta}\ dr + r\cos{\theta}\ d\theta\]Plugging \((7)\) and \((8)\) back into \((1)\), expanding out, and collecting like terms yields the algebraically intensive but conceptually straightforward,
\[\begin{aligned} ds^2 &= (\cos{\theta}\ dr - r\sin{\theta}\ d\theta)(\cos{\theta}\ dr - r\sin{\theta}\ d\theta) + (\sin{\theta}\ dr + r\cos{\theta}\ d\theta)(\sin{\theta}\ dr + r\cos{\theta}\ d\theta) \\ &= \cos^2\theta dr^2 - 2r\sin\theta\cos\theta dr\ d\theta + r^2\sin^2\theta d\theta^2 + \sin^2\theta dr^2 + 2r\sin\theta\cos\theta dr\ d\theta + r^2\cos^2\theta d\theta^2 \\ &= \cos^2\theta dr^2 + \sin^2\theta dr^2 + r^2\sin^2\theta d\theta^2 + r^2\cos^2\theta d\theta^2 \\ &= (\cos^2\theta + \sin^2\theta) dr^2 + r^2(\sin^2\theta + \cos^2\theta) d\theta^2 \\ &= (1)dr^2 + (0)drd\theta + (r^2)d\theta^2\,. \end{aligned}\]Note that above we used the trigonometric identity \(\cos^2(\theta) + \sin^2(\theta) = 1\).
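If you'd rather not trust the hand expansion, we can sanity-check it symbolically with SymPy. In the sketch below, dr and dθ are ordinary symbols standing in for the differentials (an assumption made purely for this check).
using SymPy

@vars r θ dr dθ

# Differentials of x = r cosθ and y = r sinθ, as in (7) and (8)
dx = cos(θ)*dr - r*sin(θ)*dθ
dy = sin(θ)*dr + r*cos(θ)*dθ

# Expand dx² + dy² and simplify; this should reduce to dr^2 + r^2*dθ^2
simplify(expand(dx^2 + dy^2))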
Now that we have simplified \(ds^2\) to \(dr^2 + r^2 d\theta^2\), we can construct our metric tensor of flat space in polar coordinates.
\[\begin{aligned} g_{rr} &= 1 \\ g_{r\theta} &= g_{\theta r} = 0 \\ g_{\theta\theta} &= r^2 \end{aligned}\]We could use functions here or represent the metric in a few other ways, but we'll leverage the excellent SymPy package and construct a symbolic tensor using the values from \((10)\).
using SymPy, LinearAlgebra, Tensors
@vars r θ
g_polar = SymmetricTensor{2, 2, Sym}((1, 0, r^2))
The assignment to g_polar yields a symbolic tensor of the form,
\[\begin{bmatrix} 1 & 0 \\ 0 & r^2 \end{bmatrix}\,.\]
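We could also have arrived at this metric directly from the transformation law in \((3)\): build the Jacobian of the map \(x = r\cos\theta\), \(y = r\sin\theta\) and contract it with itself. Here is a minimal sketch with SymPy, where the product \(J^\mathsf{T} J\) is just the component form of \((3)\).
using SymPy, LinearAlgebra

@vars r θ

# Jacobian ∂X^m/∂Y^r of the map x = r cosθ, y = r sinθ
J = [diff(r*cos(θ), r)  diff(r*cos(θ), θ);
     diff(r*sin(θ), r)  diff(r*sin(θ), θ)]

# g_rs = δ_mn (∂X^m/∂Y^r)(∂X^n/∂Y^s), i.e. g = Jᵀ J
simplify.(transpose(J) * J)   # should give [1 0; 0 r^2]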
While I won't go through the full derivation here, it's worth offering a concrete example of how to create a tensor of rank \(2\) and dimension \(3\) using the diagm function from the LinearAlgebra package.
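For reference, the derivation starts from the standard spherical mapping, taking \(\theta\) as the polar angle and \(\phi\) as the azimuthal angle so that it matches the \(\sin\theta\) factor in the code below,
\[\begin{aligned} x &= r\sin\theta\cos\phi \\ y &= r\sin\theta\sin\phi \\ z &= r\cos\theta \,. \end{aligned}\]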
After working through some extensive algebra, we will find ourselves with \(ds^2\) in spherical coordinates,
\[ ds^2 = (1)dr^2 + (r^2)d\theta^2 + (r^2 \sin^2\theta)d\phi^2 \,.\]
using SymPy, LinearAlgebra, Tensors
@vars r θ ϕ
g_spherical = Tensor{2, 3, Sym}(diagm([1, r^2, r^2*sin(θ)^2]))
Notice that since we are now working in dimension \(3\), the type parameters are \(2\) for rank \(2\) and \(3\) for dimension \(3\), and once again we're using Sym so that we can store SymPy objects. Passed in as an argument is the output of the diagm function, which constructs a diagonal matrix. For a \(3 \times 3\) matrix, diagm will expect a vector with \(3\) elements, as shown above. More generally, for any \(n \times n\) matrix, it will expect a vector with \(n\) elements.
The assignment of g_spherical displays as,
\[\begin{bmatrix} 1 & 0 & 0 \\ 0 & r^2 & 0 \\ 0 & 0 & r^2\sin^2\theta \end{bmatrix}\,.\]
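As a quick follow-up check (a sketch that assumes Tensors' generic det falls through cleanly to SymPy's Sym arithmetic, which it should since it only requires multiplication and subtraction), the determinant of this metric recovers the familiar factor whose square root, \(r^2\sin\theta\), appears in the spherical volume element.
using SymPy, LinearAlgebra, Tensors

@vars r θ ϕ
g_spherical = Tensor{2, 3, Sym}(diagm([1, r^2, r^2*sin(θ)^2]))

# Determinant of the spherical metric; expected result: r^4*sin(θ)^2
simplify(det(g_spherical))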