
When Lie Groups Became Physics


Abstract

We explain by simple examples (one-parameter Lie groups), partly in the original language and along the historical papers of Sophus Lie, Abraham Cohen, and Emmy Noether, how Lie groups became a central topic in physics. Physics, in contrast to mathematics, did not experience the Bourbakian transition, so the language of, for example, differential geometry has not changed as much during the last hundred years as it did in mathematics. This also means that the mathematics of that time was written in a way far closer to the language of physics, and those papers are not as old-fashioned as you might expect.

Introduction

$$
\operatorname{SU(3)}\times \operatorname{SU(2)} \times \operatorname{U(1)}
$$
is possibly the most prominent example of a Lie group in modern physics. Stressing the fact that the Standard Model is not the final word sometimes leads to
$$
\operatorname{SU(3)}\times \operatorname{SU(2)} \times \operatorname{U(1)} \subset \operatorname{SU(5)}
$$
depending on one's favorite candidate for a GUT, here ##\operatorname{SU(5)}##. However, Lie groups entered physics on a much less sophisticated, more rudimentary level that does not require citing quantum physics. Lie spoke of the theory of invariants of tangent transformations; well, he actually said touching transformations (Berührungs-Transformationen) [2]. Nowadays we often read generators, and the language does not always properly distinguish between the groups of analytical coordinate transformations and their linear representations. Lie developed his theory between 1870 and 1880 [3], referencing in [2] Jacobi's work on partial differential equations. Cohen already spoke of Lie groups in his 1911 book about one-parameter groups [4], whereas Noether in her famous 1918 papers just called them the group of all analytical transformations of the variables (Lie group), which corresponds to the group of all linear transformations of the differentials (linear representation of the Lie group on its Lie algebra) [5], or a continuous group in Lie's sense in her groundbreaking paper [6] that led to the connection between conservation laws and the theory of tangent transformations, Lie theory.

Peter John Olver – Minneapolis 1986

The intuition is simple. A physical phenomenon in nature is a process described by location, time, and the change of location in time; in short, by a differential equation system ##\mathcal{L}(x,\dot x,t)##. In order to become a physical law, we have to demonstrate by experiments that the solutions of the differential equation system are in accordance with the outcomes of the experiments. This means we have to measure certain quantities. A measurement is a comparison with a reference frame, and a reference frame is a set of coordinates in the laboratory which we used to describe the differential equation system. The outcome of the experiments, however, should not depend on the coordinate system we used, since nature cannot know which one we chose. Hence a change of coordinates may not alter the experiment. Mathematically, this means that a transformation of coordinates results in the same solutions of the differential equation system, i.e. ##\mathcal{L}(x,\dot x,t)## is robust, aka invariant, under coordinate transformations within our reference frame. It turned out that these transformations are even connected to physical conservation laws, i.e. invariant quantities. How does this read in a modern textbook?

Theorem: A generalized vector field determines a variational symmetry group of the functional ##\mathcal{L}[u]=\int L\,dx## if and only if its characteristic is the characteristic of a conservation law ##\operatorname{Div}P=0## for the corresponding Euler-Lagrange equations ##E(L)=0.## In particular, if ##\mathcal{L}## is a nondegenerate variational problem, there is a one-to-one correspondence between equivalence classes of nontrivial conservation laws of the Euler-Lagrange equations and equivalence classes of variational symmetries of the functional [1].

That’s quite a discrepancy between intuition and technical details. The point is, if we now start to enter the jungle of those technical details and prove the theorem, chances are high that we will lose intuition. Instead, let the time warp begin and see how the subject has been introduced to physics a century ago.

Abraham Cohen – Baltimore 1911

Group of Transformations

The set of parameterized coordinate transformations [and our example in brackets of a rotation with the angle as the parameter]
\begin{align*}
T_a\, : \,x_1&=\phi(x,y,a)\, , \,y_1=\psi(x,y,a)\\
[x_1&= x\cos a - y \sin a\, , \, y_1=x \sin a +y\cos a]\\
T_b\, : \,x_2&=\phi(x_1,y_1,b)\, , \,y_2=\psi(x_1,y_1,b)\\
[x_2&= x_1\cos b - y_1 \sin b\, , \, y_2=x_1\sin b +y_1\cos b]
\end{align*}
carries a group structure if the result of performing one and then the other transformation is again of the form
\begin{align*}
T_aT_b=T_c\, : \,x_2&=\phi(\phi(x,y,a),\psi(x,y,a),b)=\phi(x,y,c)\\
y_2&=\psi(\phi(x,y,a),\psi(x,y,a),b)=\psi(x,y,c)\\
[x_2&= x\cos (a+b) - y \sin (a+b)\, , \, y_2=x\sin (a+b) +y\cos (a+b)]
\end{align*}
where the parameter ##c## depends only on the parameters ##a,b.## Since ##\phi,\psi## are continuous functions of the parameter ##a,## if we start with the value ##a_0,## and allow ##a## to vary continuously, the effect of the corresponding transformations on ##x,y## will be to transform them continuously, too; i.e. for a sufficiently small change of ##a,## the changes in ##x,y## are as small as we want. A variation of the parameter ##a## generates a transformation of the point ##(x,y)## to various points on some curve, which we call the orbit of the group. If ##(x,y)## is considered as a constant point while ##(x_1,y_1)## is variable, then ##T_a## is the parameterized orbit through ##(x,y).## The orbit corresponding to any point ##(x,y)## may be obtained by eliminating ##a## from the two equations of ##T_a.## Cohen called the orbit path-curve of the group.

Infinitesimal Transformation

Let ##a_0## be the value of ##a## that corresponds to the identical transformation, and ##\delta a## an infinitesimal. [##a_0=0## in our example.] As ##\phi,\psi## are analytical, the transformation
\begin{align*}
x_1=\phi(x,y,a_0+\delta a)\, &, \,y_1=\psi(x,y,a_0+\delta a)\\
[x_1=x\cos(\delta a)-y\sin(\delta a)\, &, \,y_1=x\sin(\delta a)+y\cos(\delta a)]
\end{align*}
changes ##x,y## by an infinitesimal amount. The Taylor series becomes
\begin{align*}
\underbrace{x_1-\underbrace{\phi(x,y,a_0)}_{=x}}_{=:\delta x}&=\underbrace{\left(\left. \dfrac{\partial \phi}{\partial a}\right|_{a_0}\right)}_{=:\xi(x,y)} \delta a +O(\delta^2 a)=\delta x=\xi(x,y)\delta a+\ldots \\
\underbrace{y_1-\underbrace{\psi(x,y,a_0)}_{=y}}_{=:\delta y}&=\underbrace{\left(\left. \dfrac{\partial \psi}{\partial a}\right|_{a_0}\right)}_{=:\eta(x,y)} \delta a +O(\delta^2 a)=\delta y=\eta(x,y)\delta a+\ldots
\end{align*}
\begin{align*}
[\delta x&=(-x\sin(0)-y\cos(0))\delta a+O(\delta^2 a)=-y\,\delta a+O(\delta^2 a)]\\
[\delta y&=(x\cos(0)-y\sin(0))\delta a+O(\delta^2 a)=x\,\delta a+O(\delta^2 a)]
\end{align*}
[Note that ##(x,y)\perp (\delta x,\delta y)## in our example as expected for a rotation.]

Neglecting the higher powers of ##\delta a,## we get the infinitesimal transformation (generator, vector field ##U##)
\begin{align*}
U\, : \,\delta x=\xi(x,y)\delta a\, &, \,\delta y=\eta(x,y)\delta a\\
U\, : \,\delta x=\xi \delta a=\left(\left. \dfrac{\partial x_1}{\partial a}\right|_{a=a_0}\right)\delta a\, &, \,\delta y=\eta \delta a=\left(\left. \dfrac{\partial y_1}{\partial a}\right|_{a=a_0}\right)\delta a
\end{align*}

$$[U\, : \,\delta x=\xi\delta a=-y\delta a\, , \,\delta y=\eta\delta a=x\delta a]$$
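As a quick numeric sanity check of the generator (a sketch with our own function names), a forward difference quotient at ##a_0=0## recovers ##\xi=-y## and ##\eta=x## for the rotation example:

```python
import math

# For a small parameter increment da, delta_x ≈ -y*da and delta_y ≈ x*da.
def rotate(a, x, y):
    return x*math.cos(a) - y*math.sin(a), x*math.sin(a) + y*math.cos(a)

x, y = 1.5, 0.7
da = 1e-6
x1, y1 = rotate(da, x, y)
xi  = (x1 - x) / da   # ≈ ξ(x, y) = -y
eta = (y1 - y) / da   # ≈ η(x, y) =  x
print(round(xi, 4), round(eta, 4))  # -0.7 1.5
```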

Symbol of Infinitesimal Transformation U.f

##\delta ## is the symbol of differentiation with respect to the parameter ##a## in the restricted sense that it designates the value which the differential of the new variable ##x_1## or ##y_1## assumes when ##a=a_0.## If ##f(x,y)## is a general analytical function, the effect of the infinitesimal transformation on it, ##U.f,## is to replace it by ##f(x+\xi\delta a,y+\eta \delta a).## The Taylor series at ##a=a_0## is thus
\begin{align*}
\underbrace{f(x+\xi\delta a,y+\eta \delta a)-f(x,y)}_{=:\delta f}=\underbrace{\left(\xi\dfrac{\partial f}{\partial x}+\eta\dfrac{\partial f}{\partial y}\right)}_{=:U.f}\delta a+O(\delta^2 a)
\end{align*}
and with ##f_1=f(x_1,y_1)##
\begin{align*}
\left.\dfrac{\partial f_1}{\partial a}\right|_{a_0}&=\dfrac{\partial f(x_1,y_1)}{\partial x_1}\cdot \left. \dfrac{\partial x_1}{\partial a}\right|_{a_0}+\dfrac{\partial f(x_1,y_1)}{\partial y_1}\cdot \left. \dfrac{\partial y_1}{\partial a}\right|_{a_0}\\&=\xi\dfrac{\partial f(x_1,y_1)}{\partial x_1}+\eta\dfrac{\partial f(x_1,y_1)}{\partial y_1}=\xi\dfrac{\partial f(x,y)}{\partial x}+\eta\dfrac{\partial f(x,y)}{\partial y}=U.f
\end{align*}
In particular ##U.x = \xi\, , \,U.y=\eta.##

##U.f## can be written if the infinitesimal transformation ##\delta x=\xi\delta a, \delta y=\eta\delta a ## is known, and conversely, the infinitesimal transformation is known if ##U.f## is given. We say that ##U.f## represents the infinitesimal transformation.

[Say ##f(x,y)=x^2+y^2## for our example. Then
\begin{align*}
\delta f&=(x+\xi\delta a)^2+(y+\eta \delta a)^2-(x^2+y^2)=\underbrace{2\left(x\xi+y\eta\right)}_{=U.f}\delta a+O(\delta^2 a )\\
U.f&=2\left(x\xi+y\eta\right)=2(-xy+yx)\equiv 0 \quad
\end{align*}
The effect of an infinitesimal rotation on a circle is zero.]
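The same computation can be done with finite differences (a sketch; `Uf` is our own helper): the directional derivative of ##f=x^2+y^2## along ##(\xi,\eta)=(-y,x)## vanishes at any point:

```python
# f = x² + y² is invariant under the rotation generator ξ = -y, η = x.
def f(x, y):
    return x*x + y*y

def Uf(x, y, h=1e-6):
    # U.f = ξ ∂f/∂x + η ∂f/∂y via central differences
    fx = (f(x + h, y) - f(x - h, y)) / (2*h)
    fy = (f(x, y + h) - f(x, y - h)) / (2*h)
    return (-y)*fx + x*fy

print(abs(Uf(3.0, -2.0)) < 1e-6)  # True
```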

Group Generated by an Infinitesimal Transformation

The infinitesimal transformation
$$
U.f=\xi\dfrac{\partial f}{\partial x}+\eta \dfrac{\partial f}{\partial y} \;\text{ or }\; \delta x=\xi(x,y)\delta t\, , \,\delta y=\eta(x,y) \delta t
$$
carries the point ##(x,y)## to the neighboring position ##(x+\xi\delta t,y+\eta\delta t).## The repetition of this transformation an indefinite number of times has the effect of carrying the point along an orbit which is precisely that integral curve (flow) of the system of differential equations
$$
\dfrac{d x_1}{d t}=\xi(x_1,y_1)\, , \,\dfrac{d y_1}{d t}=\eta(x_1,y_1)
$$
which passes through the point ##(x,y).## Now ##\dfrac{dx_1}{\xi(x_1,y_1)}=\dfrac{dy_1}{\eta(x_1,y_1)},## being free from ##t,## forms a differential equation whose solution may be written $$u(x_1,y_1)=constant=u(x,y)$$ since ##x_1=x,y_1=y## when ##t=0.## This is the equation of the orbit corresponding to ##(x,y).## Say we solve this equation for ##x_1=\omega(y_1,c);## then
$$
dt=\dfrac{d y_1}{\eta(\omega (y_1,c),y_1)}\Longrightarrow t=\int dt= \int \dfrac{1}{\eta(\omega (y_1,c),y_1)}dy_1 +c'
$$
and a solution takes the form (##c## replaced by its expression in ##x_1,y_1## again)
$$
v(x_1,y_1)-t=constant =v(x,y)
$$
Considering
$$
\begin{cases}
u(x_1,y_1)=u(x,y)\\
v(x_1,y_1)=v(x,y)+t
\end{cases}
$$
as a transformation from ##(x,y)## at ##t=0## to ##(x_1,y_1)##, we see that these define a one-parameter Lie group, translation by ##(0,t).##

[The integral curve of the differential equations in our example is given by
\begin{align*}
\dfrac{dx_1}{dt}=-y_1\, &, \,\dfrac{dy_1}{dt}=x_1\\
x_1\,dx_1&=-y_1\,dy_1 \Longrightarrow x_1\,dx_1+y_1\,dy_1=0\\[6pt]
u(x_1,y_1)=x^2_1+y^2_1&=c
\end{align*}
Let ##x_1=\sqrt{c-y_1^2}=\omega (y_1,c)## so
\begin{align*}
dt&=\dfrac{dy_1}{\eta(\omega (y_1,c),y_1)}=\dfrac{dy_1}{\omega (y_1,c)}=\dfrac{dy_1}{\sqrt{c-y_1^2}}\\
t&=\int \dfrac{dy_1}{\sqrt{c-y_1^2}} =\arcsin\left(\dfrac{y_1}{\sqrt{c}}\right)+c'=\arcsin\left(\dfrac{y_1}{\sqrt{x_1^2+y_1^2}}\right)+c'
\end{align*}
$$
\begin{cases}
x_1^2+y_1^2=x^2+y^2\\
\arcsin\left(\dfrac{y_1}{\sqrt{x_1^2+y_1^2}}\right)=\arcsin\left(\dfrac{y}{\sqrt{x^2+y^2}}\right)+t
\end{cases}
$$
There is another solution to the differential equation. We get from
\begin{align*}
\dfrac{dx}{P}&=\dfrac{dy}{Q}=\dfrac{dt}{R}=\dfrac{\lambda dx+\mu dy+\nu dt}{\lambda P+\mu Q+\nu R}\\[6pt]
\dfrac{dx_1}{-y_1}&=\dfrac{dy_1}{x_1}=\dfrac{dt}{1}=\dfrac{-y_1\lambda dx_1+x_1\mu dy_1}{\lambda y_1^2+\mu x_1^2}\\[6pt]
t&=-\dfrac{\sqrt{\lambda}}{\sqrt{\mu}}\arctan\dfrac{\sqrt{\mu}x_1}{\sqrt{\lambda}y_1}+\dfrac{\sqrt{\mu}}{\sqrt{\lambda}} \arctan\dfrac{\sqrt{\lambda}y_1}{\sqrt{\mu}x_1}\\[6pt]
v(x_1,y_1)&=\arctan \dfrac{y_1}{x_1}-\arctan\dfrac{x_1}{y_1}=\arctan\dfrac{y}{x}-\arctan\dfrac{x}{y}+t=v(x,y)+t \quad ]\\
\end{align*}
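The flow picture can be checked numerically (a sketch using a classical Runge-Kutta step of our own): integrating ##\dot x_1=-y_1,\ \dot y_1=x_1## keeps ##u=x_1^2+y_1^2## constant while the angle ##\arctan(y_1/x_1)## advances by ##t##:

```python
import math

# One RK4 step for the vector field (ξ, η) = (-y, x).
def rk4_step(x, y, h):
    F = lambda x, y: (-y, x)
    k1 = F(x, y)
    k2 = F(x + h/2*k1[0], y + h/2*k1[1])
    k3 = F(x + h/2*k2[0], y + h/2*k2[1])
    k4 = F(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x0, y0, t_final, n = 1.0, 0.0, 0.5, 1000
x, y = x0, y0
for _ in range(n):
    x, y = rk4_step(x, y, t_final/n)

# u is invariant along the orbit; the angle v is translated by t.
print(round(x*x + y*y, 6), round(math.atan2(y, x) - math.atan2(y0, x0), 6))
# 1.0 0.5
```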

Another Method of Finding the Group from its Infinitesimal Transformation

We get from the MacLaurin series for ##f_1##

\begin{align*}
f_1&=f+\left. \dfrac{\partial f_1}{\partial t}\right|_{t=0}t+\left. \dfrac{\partial^2 f_1}{\partial t^2}\right|_{t=0}\dfrac{t^2}{2!}+\left. \dfrac{\partial^3 f_1}{\partial t^3}\right|_{t=0}\dfrac{t^3}{3!}+\ldots\\
f_1&=f+U.f\,t+U^2.f\,\dfrac{t^2}{2!}+U^3.f\,\dfrac{t^3}{3!}+\ldots=\exp(tU).f
\end{align*}
\begin{align*}
[\,\text{Example: }U.f&=-y\dfrac{\partial f}{\partial x} +x\dfrac{\partial f}{\partial y} \\
U.x&=-y,\,U^2.x=U.(-y)=-x,\,U^3.x=U.(-x)=y\text{ etc.}\\[6pt]
x_1&=x\left(1-\dfrac{t^2}{2!}+\dfrac{t^4}{4!}+\ldots\right)-y\left(t-\dfrac{t^3}{3!}+\dfrac{t^5}{5!}-\ldots\right)\\
&=x\cos t-y\sin t\\
y_1&=x\left(t-\dfrac{t^3}{3!}+\dfrac{t^5}{5!}-\ldots\right)+y\left(1-\dfrac{t^2}{2!}+\dfrac{t^4}{4!}+\ldots\right)\\
&=x\sin t+y\cos t \quad]
\end{align*}
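The MacLaurin argument can be tested term by term (a sketch; `exp_tU_x` is our own name): summing ##U^n.x\,t^n/n!## reproduces ##x\cos t-y\sin t##:

```python
import math

def exp_tU_x(x, y, t, terms=30):
    # Track U^n.x = α·x + β·y as a linear function;
    # U.(αx+βy) = α(-y) + β(x), i.e. (α, β) -> (β, -α).
    alpha, beta = 1.0, 0.0
    total = 0.0
    for n in range(terms):
        total += (alpha*x + beta*y) * t**n / math.factorial(n)
        alpha, beta = beta, -alpha
    return total

x, y, t = 2.0, 1.0, 0.8
print(abs(exp_tU_x(x, y, t) - (x*math.cos(t) - y*math.sin(t))) < 1e-10)  # True
```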

Invariants

A function of the variables is said to be an invariant of a group (or invariant under the group) if it is left unaltered by every transformation of the group, i.e. ##f(x_1,y_1)=f(x,y).##

Theorem: The necessary and sufficient condition that ##f(x,y)## be invariant under the group ##U.f## is
$$
U.f =\xi \dfrac{\partial f}{\partial x}+\eta\dfrac{\partial f}{\partial y}\equiv 0.
$$
[We have already seen that ##u(x,y)=x^2+y^2=c## is an integral curve of the differential equations in our example. A general solution of ##U.f=0## is then given by ##f=F(u)=F(x^2+y^2),## see [8, §79].]
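A finite-difference check (our own sketch) that any function of ##u=x^2+y^2## is annihilated by ##U##, while a non-invariant function such as ##f=x## is not; the choice ##F=\sin## is arbitrary:

```python
import math

# U.f = ξ ∂f/∂x + η ∂f/∂y for the rotation generator ξ = -y, η = x.
def Uf(f, x, y, h=1e-5):
    fx = (f(x + h, y) - f(x - h, y)) / (2*h)
    fy = (f(x, y + h) - f(x, y - h)) / (2*h)
    return (-y)*fx + x*fy

print(abs(Uf(lambda x, y: math.sin(x*x + y*y), 1.2, 0.4)) < 1e-6)  # True: F(u) is invariant
print(abs(Uf(lambda x, y: x, 1.2, 0.4)) > 0.1)                     # True: f = x is not
```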

Orbits. Invariant Points and Curves

The differential equations for orbits have been obtained by solutions to
$$
\dfrac{dx}{\xi}=dt =\dfrac{dy}{\eta} \Longleftrightarrow \dfrac{dy}{dx}=\dfrac{\eta}{\xi}.
$$
The general solution ##u(x,y)=c## represents a family of orbits. [##x^2+y^2=c## in our example; circles are the invariants of rotations.] If ##f(x,y)=0## is an invariant equation, then ##f(x_1,y_1)=0## for all points ##(x_1,y_1)## and ##U.f=0 .## This means ##U.f## must contain ##f(x,y)## as a factor (assuming ##f## contains no repeated factors)
$$
U.f=\omega (x,y)\cdot f(x,y)
$$
and ##U^2## contains ##f## as a factor, too, by
$$
U^2.f=U.(U.f)=(U.\omega ) f + \omega (U.f)=(U.\omega +\omega^2).f
$$
This process can be inductively repeated
$$
U^n.f=\theta(x,y)\cdot f(x,y)\, , \,U^{n+1}.f=(U.\theta+\theta \omega )f,
$$
hence the vanishing of ##U.f## whenever ##f(x,y)## does is both the necessary and sufficient condition that ##f(x,y)=0## be an invariant equation.

Theorem: The necessary and sufficient condition that ##f(x,y)=0## be invariant under the group ##U.f## is that ##U.f=0## for all values ##x,y## for which ##f(x,y)=0,## it being presupposed that ##f(x,y)## has no repeated factors. Points whose coordinates satisfy the two equations ##\xi(x,y)=0,\,\eta(x,y)=0## are invariant under the group. If ##\xi(x,y)=0,\,\eta(x,y)=0## whenever ##f(x,y)=0,## then this curve is composed of invariant points. Curves of this type are not included among the orbits of the group. In all other cases, ##f(x,y)=0## is an orbit.

If ##U.f=0## for all values ##x,y##, ##f(x,y)## is an invariant, and ##f(x,y)=c## constant (including zero) is an orbit.

[##\xi=-y,\,\eta=x,\,u(x,y)=x^2+y^2=c## are the equations of all orbits of rotations. There are no other invariant curves. The point ##(x,y)=(0,0)## is invariant.]

Invariant Family of Curves

A family of curves is said to be invariant under a group if every transformation of the group transforms each curve ##f(x,y)=c## into some curve of the family
\begin{align*}
f(x_1,y_1)=f(\phi(x,y,t),\psi(x,y,t))=\omega(x,y,t)=c'
\end{align*}
These equations must be solutions of the same differential equation
$$
\dfrac{\partial f}{\partial x}dx+\dfrac{\partial f}{\partial y}dy=0 \text{ and }\dfrac{\partial \omega }{\partial x}dx+\dfrac{\partial \omega }{\partial y}dy=0.
$$
which is the case if
$$
\det\begin{pmatrix}f_x&f_y\\ \omega_x&\omega_y\end{pmatrix}=0,
$$
i.e. if ##\omega## is a function of ##f.## Differentiating ##\omega(x,y,t)## with respect to ##t## at the identity then yields ##U.f=F(f)## for some function ##F.##
The family of curves ##f(x,y)=c## may equally well be written ##\Phi[f(x,y)]=c,## where ##\Phi(f)## is any holomorphic function of ##f.## From ##U.f=F(f)## and the chain rule we get
$$
U.\Phi(f) = \dfrac{d\Phi}{df}U.f=\dfrac{d\Phi}{df}F(f).
$$
This will be any desired function of ##f,## say ##\Omega(f), ## if the family of orbits is excluded, i.e. if ##F(f)\neq 0## then
$$
\dfrac{d\Phi}{df}F(f)=\Omega(f) \Longleftrightarrow \Phi(f)=\int\dfrac{\Omega(f)}{F(f)}df.
$$

##\left[-y\dfrac{\partial f}{\partial x}+x\dfrac{\partial f}{\partial y}=F(f)\right.## leads to ##\dfrac{dx}{-y}=\dfrac{dy}{x}=\dfrac{df}{F(f)},## so the general solution is of the form
$$
\arctan\left(\dfrac{y}{x}\right)-\phi(f)=\psi(x^2+y^2) \text{ or }
f=\Phi\left(\arctan\left(\dfrac{y}{x}\right)-\psi(x^2+y^2)\right).
$$
The equation ##\dfrac{y}{x}=c## representing the family of straight lines through the origin is a simple example.]
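A minimal numeric illustration (our own sketch): a rotation maps two points of the line ##y=cx## to two points with a common new slope ##c',## so the family of lines through the origin is indeed invariant:

```python
import math

def rotate(a, x, y):
    return x*math.cos(a) - y*math.sin(a), x*math.sin(a) + y*math.cos(a)

a, c = 0.7, 2.0
# two points on the line y = c x
p1 = rotate(a, 1.0, c*1.0)
p2 = rotate(a, 3.0, c*3.0)
# both images have the same slope, i.e. they lie on one line y = c' x
print(abs(p1[1]/p1[0] - p2[1]/p2[0]) < 1e-12)  # True
```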

Alternant (Commutator)

Let ##U_1,U_2## be any two homogeneous linear partial differential operators
$$
U_1=\xi_1(x,y)\dfrac{\partial }{\partial x}+\eta_1(x,y)\dfrac{\partial }{\partial y}\; , \;U_2=\xi_2(x,y)\dfrac{\partial }{\partial x}+\eta_2(x,y)\dfrac{\partial }{\partial y}
$$
Then
\begin{align*}
U_1U_2.f&=(U_1.\xi_2)\left(\dfrac{\partial f}{\partial x}\right)+(U_1.\eta_2)\left(\dfrac{\partial f}{\partial y}\right)+
\xi_1\xi_2\dfrac{\partial^2 f}{\partial x^2}+ (\xi_1\eta_2+\xi_2\eta_1)\dfrac{\partial^2f }{\partial x \partial y}+\eta_1\eta_2\dfrac{\partial^2 f}{\partial y^2}\\
U_2U_1.f&=(U_2.\xi_1)\left(\dfrac{\partial f}{\partial x}\right)+(U_2.\eta_1)\left(\dfrac{\partial f}{\partial y}\right)+
\xi_2\xi_1\dfrac{\partial^2 f}{\partial x^2}+ (\xi_2\eta_1+\xi_1\eta_2)\dfrac{\partial^2f }{\partial x \partial y}+\eta_2\eta_1\dfrac{\partial^2 f}{\partial y^2}\\
\end{align*}
and
$$
[U_1,U_2]=U_1U_2-U_2U_1=(U_1.\xi_2-U_2.\xi_1)\dfrac{\partial }{\partial x}+(U_1.\eta_2-U_2.\eta_1)\dfrac{\partial }{\partial y}
$$
is again a homogeneous linear partial differential operator.
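The alternant can be evaluated with finite differences (a sketch with our own helper `U`): for the rotation generator ##\xi_1=-y,\ \eta_1=x## and the scaling generator ##\xi_2=x,\ \eta_2=y## (our own choice of second group) both coefficients of ##[U_1,U_2]## vanish, so the two one-parameter groups commute:

```python
# Apply the operator ξ ∂/∂x + η ∂/∂y to a scalar function g.
def U(xi, eta, g, x, y, h=1e-5):
    gx = (g(x + h, y) - g(x - h, y)) / (2*h)
    gy = (g(x, y + h) - g(x, y - h)) / (2*h)
    return xi(x, y)*gx + eta(x, y)*gy

xi1, eta1 = lambda x, y: -y, lambda x, y: x   # rotation generator
xi2, eta2 = lambda x, y:  x, lambda x, y: y   # scaling generator

x0, y0 = 1.3, -0.6
# coefficients of [U1,U2] = (U1.ξ2 - U2.ξ1) ∂/∂x + (U1.η2 - U2.η1) ∂/∂y
cx = U(xi1, eta1, xi2, x0, y0) - U(xi2, eta2, xi1, x0, y0)
cy = U(xi1, eta1, eta2, x0, y0) - U(xi2, eta2, eta1, x0, y0)
print(abs(cx) < 1e-9, abs(cy) < 1e-9)  # True True
```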

Integrating Factor

If ##\phi(x,y)=c## is a family of curves invariant under the group
$$
U.f=\xi\dfrac{\partial f}{\partial x}+\eta\dfrac{\partial f}{\partial y}
$$
then we have learned that ##U.\phi=F(\phi).## Moreover, if the curves of the family are not orbits of the group, the equation of the family can be chosen in such form that ##F(\phi)## shall become any desired function of ##\phi##. In particular, there is no loss
in assuming the equation so chosen that ##F(\phi)=1.## For if a given choice ##\phi(x,y)=c## leads to ##F(\phi),## the selection ##\Phi(\phi)=\displaystyle{\int\dfrac{1}{F(\phi)}d\phi}=c## will give ##U.\Phi(\phi)=1.## Suppose now that
$$
M\,dx+N\, dy=0 \quad (*)
$$
is a differential equation whose family of integral curves ##\phi(x,y)=c## is invariant under the group ##U.f,## the integral curves not being orbits of the latter. Let further ##\phi## be chosen such that
$$
U.\phi=\xi\dfrac{\partial \phi}{\partial x}+\eta\dfrac{\partial \phi}{\partial y}=1
$$
Since ##\phi## is a solution of ##(*),##
$$
d\phi = \dfrac{\partial \phi}{\partial x}\,dx+\dfrac{\partial \phi}{\partial y}\,dy =0
$$
is the same equation as ##(*)## and thus
$$
\dfrac{\partial \phi /\partial x}{M}=\dfrac{\partial \phi /\partial y}{N} \text{ or } N\dfrac{\partial \phi}{\partial x}-M\dfrac{\partial \phi}{\partial y}=0.
$$
Solving the equation system results in
$$
\dfrac{\partial \phi}{\partial x}=\dfrac{M}{\xi M+\eta N}\, , \,\dfrac{\partial \phi}{\partial y}=\dfrac{N}{\xi M+\eta N}\, , \,
d\phi =\dfrac{M\,dx+N\,dy}{\xi M+\eta N}
$$
Hence we have proven

Marius Sophus Lie – Christiania 1874

Theorem: If the family of integral curves of the differential equation ##M\,dx + N\,dy = 0## is left unaltered by the group ##U.f \equiv \xi \dfrac{\partial f}{\partial x}+\eta \dfrac{\partial f}{\partial y},## then ##\dfrac{1}{\xi M+\eta N}## is an integrating factor of the differential equation.
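A numeric sketch of the theorem (the differential equation and group are our own choice): for ##y\,dx-x\,dy=0,## whose integral curves are the lines through the origin and hence invariant under rotations without being orbits, the factor ##\dfrac{1}{\xi M+\eta N}=-\dfrac{1}{x^2+y^2}## renders the equation exact:

```python
# M dx + N dy = y dx - x dy = 0; rotation group ξ = -y, η = x.
M = lambda x, y:  y
N = lambda x, y: -x
mu = lambda x, y: 1.0 / ((-y)*M(x, y) + x*N(x, y))   # Lie's factor = -1/(x²+y²)

def exactness_defect(x, y, h=1e-6):
    # exact  ⟺  ∂(μM)/∂y == ∂(μN)/∂x
    dMy = (mu(x, y + h)*M(x, y + h) - mu(x, y - h)*M(x, y - h)) / (2*h)
    dNx = (mu(x + h, y)*N(x + h, y) - mu(x - h, y)*N(x - h, y)) / (2*h)
    return dMy - dNx

print(abs(exactness_defect(0.8, 1.7)) < 1e-6)  # True
```

Indeed ##\mu(M\,dx+N\,dy)=-\dfrac{y\,dx-x\,dy}{x^2+y^2}=d\left(\arctan\dfrac{y}{x}\right).##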

This was where, when, and by whom it all got started.

Amalie Emmy Noether – Göttingen 1918

Noether spoke in [5] about differential expressions and meant functions
$$
f(x,dx)=f(x_1,\ldots,x_n; dx_1,\ldots,dx_n)
$$
that are analytical in all arguments and investigated the analytical transformations of the variables and the corresponding linear transformations of their differentials simultaneously
\begin{align*}
f(x,dx) &\longrightarrow g(y,dy)\\
x_i=x_i(y_1,\ldots,y_n)\, &, \,dx_i=\sum_{k=1}^n \dfrac{\partial x_i}{\partial y_k}dy_k
\end{align*}
and an invariant of ##f## as an analytical function
\begin{align*}
J\left(f,\dfrac{\partial f}{\partial dx}\cdots \dfrac{\partial^{\rho+\sigma}f }{\partial x^\rho \partial dx^\sigma}\cdots dx,\delta x,d^2x,\ldots\right)=J\left(g,\dfrac{\partial g}{\partial dy}\cdots \dfrac{\partial^{\rho+\sigma}g }{\partial y^\rho \partial dy^\sigma}\cdots dy,\delta y,d^2y,\ldots\right)
\end{align*}
which already looks like our modern expression ##\mathcal{L}(x,\dot x,t).## The questions about the group of all invariants and their equivalence classes have been reduced to questions of the linear theory of invariants by Christoffel and Ricci in the case of specific differential equations. Noether called it a reduction theorem and was able to prove it for arbitrary differential expressions by a different method [5]. The essence of Lie’s theory can best be described by the following diagram
\begin{equation*} \begin{aligned} G &\longrightarrow GL(\mathfrak{g}) \\ \dfrac{d}{dx}\downarrow & \quad \quad \quad \uparrow\exp \\ \mathfrak{g} &\longrightarrow \mathfrak{gl(g)} \end{aligned} \end{equation*}
Noether’s main theorems say in their original wording [6]

1. If the integral ##I## is invariant with respect to a [Lie group] ##G_\rho,## then ##\rho## linearly independent combinations of the Lagrangian expressions become divergences. Conversely, it follows from the latter that ##I## is invariant with respect to a [Lie group] ##G_\rho.## The theorem also holds in the limit of infinitely many parameters.

2. If the integral ##I## is invariant with respect to a [Lie group] ##G_{\infty\rho }##, in which the arbitrary functions appear up to the ##\sigma##-th derivative, then there are ##\rho## identical relations between the Lagrangian expressions and their derivatives up to the ##\sigma##-th order; the converse also applies here.

Epilogue – Noether Charge

Let us finish with an example of modern language.

The action on a classical particle is the integral of an orbit ##\gamma\, : \,t \rightarrow \gamma(t)##
$$
S(\gamma)=S(x(t))= \int \mathcal{L}(t,x,\dot{x})\,dt
$$
over the Lagrange function ##\mathcal{L}##, which describes the system considered. Now we consider smooth coordinate transformations
\begin{align*}
x &\longmapsto x^* := x +\varepsilon \psi(t,x,\dot{x})+O(\varepsilon^2)\\
t &\longmapsto t^* := t +\varepsilon \varphi(t,x,\dot{x})+O(\varepsilon^2)
\end{align*}
and we compare
$$
S=S(x(t))=\int \mathcal{L}(t,x,\dot{x})\,dt\text{ and }S^*=S(x^*(t^*))=\int \mathcal{L}(t^*,x^*,\dot{x}^*)\,dt^*
$$
Since the functional ##S## determines the law of motion of the particle, $$S=S^*$$ means that the action on this particle is unchanged, i.e. invariant under these transformations, and especially
\begin{equation*}
\dfrac{\partial S}{\partial \varepsilon}=0 \quad\text{ resp. }\quad \left. \dfrac{d}{d\varepsilon}\right|_{\varepsilon =0}\left(\mathcal{L}\left(t^*,x^*,\dot{x}^*\right)\cdot \dfrac{dt^*}{dt} \right) = 0
\end{equation*}
Emmy Noether showed, exactly a hundred years ago, that under these circumstances (invariance) there is a conserved quantity ##Q##, called the Noether charge.
$$
S=S^* \Longrightarrow \left. \dfrac{d}{d\varepsilon}\right|_{\varepsilon =0}\left(\mathcal{L}\left(t^*,x^*,\dot{x}^*\right)\cdot \dfrac{dt^*}{dt} \right) = 0 \Longrightarrow \dfrac{d}{dt}Q(t,x,\dot{x})=0
$$
with
$$
Q=Q(t,x,\dot{x}):= \sum_{i=1}^N \dfrac{\partial \mathcal{L}}{\partial \dot{x}_i}\,\psi_i + \left(\mathcal{L}-\sum_{i=1}^N \dfrac{\partial \mathcal{L}}{\partial \dot{x}_i}\,\dot{x}_i\right)\varphi = \text{ constant}
$$
The general way to proceed is:
(a) Determine the functions ##\psi,\varphi##, i.e. the transformations, which are considered.
(b) Check the symmetry condition ##\left. \dfrac{d}{d\varepsilon}\right|_{\varepsilon =0}\left(\mathcal{L}\left(t^*,x^*,\dot{x}^*\right)\cdot \dfrac{dt^*}{dt} \right) = 0.##
(c) If the symmetry condition holds, then compute the conservation quantity ##Q## with ##\mathcal{L},\psi,\varphi\,.##

Example: Given a particle of mass ##m## in the potential ##U(\vec{r})=\dfrac{U_0}{\vec{r\,}^{2}}## with a constant ##U_0##. At time ##t=0## the particle is at ##\vec{r}_0## with velocity ##\dot{\vec{r}}_0\,.##

The Lagrange function with ##\vec{r}=(x,y,z)=(x_1,x_2,x_3)## of this problem is
$$
\mathcal{L}=T-U=\dfrac{m}{2}\,\dot{\vec{r}}\,^2-\dfrac{U_0}{\vec{r\,}^{2}}\,.
$$
1. Give a reason why the energy of the particle is conserved, and what is its energy?

(a) The system is homogeneous in time (##\mathcal{L}## does not depend explicitly on ##t##), so we choose ##\psi_i=0\, , \,\varphi=1##
(b) and check
\begin{equation*}
\left. \dfrac{d}{d\varepsilon}\right|_{\varepsilon = 0} \left(\mathcal{L}^*\,\cdot\,\dfrac{d}{dt}\,(t+\varepsilon )\right)=\left. \dfrac{d}{d\varepsilon}\right|_{\varepsilon = 0} \left(\mathcal{L}^*\,\cdot\,1\right) = 0
\end{equation*}
since ##\mathcal{L}^*## doesn’t depend on ##t^*## and thus not on ##\varepsilon##, and calculate
(c) the Noether charge as
\begin{align*}
Q(t,x,\dot{x})&=\mathcal{L}- \sum_{i=1}^N\dfrac{\partial \mathcal{L}}{\partial \dot{x}_i} \,\dot{x}_i=T-U-\dfrac{m}{2}\sum_{i=1}^3\left( \dfrac{\partial}{\partial \dot{x}_i}\left( \sum_{j=1}^3 \dot{x}^2_j \right)\,\dot{x}_i \right)\\
&=\dfrac{m}{2}\, \dot{\vec{r\,}}^2 - U -m\,\dot{\vec{r\,}}^2=-T-U=-E\\&=-\dfrac{m}{2}\, \dot{\vec{r\,}}^2- \dfrac{U_0}{\vec{r\,}^2}=-\dfrac{m}{2}\, \dot{\vec{r\,}}_0^2- \dfrac{U_0}{\vec{r\,}_0^2}
by time invariance.
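This can be confirmed numerically (a sketch with our own initial data and a standard fourth-order Runge-Kutta integrator, restricted to planar motion): along a trajectory of ##m\ddot{\vec r}=-\nabla U## with ##U=U_0/\vec r^{\,2},## the energy stays constant to integration accuracy:

```python
m, U0 = 1.0, 0.5   # our own parameter choices

def accel(x, y):
    r2 = x*x + y*y
    # -∇(U0/r²) = 2 U0 (x, y)/r⁴, divided by m for the acceleration
    return 2*U0*x/(m*r2*r2), 2*U0*y/(m*r2*r2)

def energy(x, y, vx, vy):
    return 0.5*m*(vx*vx + vy*vy) + U0/(x*x + y*y)

def rk4(s, h):
    def F(s):
        x, y, vx, vy = s
        ax, ay = accel(x, y)
        return (vx, vy, ax, ay)
    k1 = F(s)
    k2 = F(tuple(a + h/2*b for a, b in zip(s, k1)))
    k3 = F(tuple(a + h/2*b for a, b in zip(s, k2)))
    k4 = F(tuple(a + h*b for a, b in zip(s, k3)))
    return tuple(a + h/6*(p + 2*q + 2*r + w)
                 for a, p, q, r, w in zip(s, k1, k2, k3, k4))

s = (1.0, 0.0, 0.0, 1.2)          # arbitrary r0 and ṙ0
E0 = energy(*s)
for _ in range(2000):
    s = rk4(s, 0.001)
print(abs(energy(*s) - E0) < 1e-8)  # True
```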

2. Consider the following transformations with infinitesimal ##\varepsilon##
$$\vec{r} \longmapsto \vec{r}\,^*=(1+\varepsilon)\,\vec{r}\,\, , \,\,t\longmapsto t^*=(1+\varepsilon)^2\,t$$
and verify the condition of E. Noether’s theorem.

##\dot{\vec{r}}\,^*=\dfrac{d\vec{r}\,^*}{dt^*}=\dfrac{(1+\varepsilon)\,d\vec{r}}{(1+\varepsilon)^2\, dt }=\dfrac{1}{1+\varepsilon}\,\dot{\vec{r}}\,## and thus ##\,\mathcal{L}^*=\dfrac{1}{(1+\varepsilon)^2}\,\mathcal{L}\, ##, i.e.
\begin{align*}
\left. \dfrac{d}{d\varepsilon}\right|_{\varepsilon =0}&\left(\mathcal{L}\left(t^*,x^*,\dot{x}^*\right)\cdot \dfrac{dt^*}{dt} \right) = \left. \dfrac{d}{d\varepsilon}\right|_{\varepsilon =0} \mathcal{L}^*\,\dfrac{dt^*}{dt}\\ &=\left. \dfrac{d}{d\varepsilon}\right|_{\varepsilon =0} \dfrac{\mathcal{L}}{(1+\varepsilon)^2}\cdot (1+\varepsilon)^2=\left. \dfrac{d}{d\varepsilon} \right|_{\varepsilon =0}\mathcal{L} = 0
\end{align*}
and the condition of Noether’s theorem holds.

3. Compute the corresponding Noether charge ##Q## and evaluate ##Q## for ##t=0##.

The transformations we have are
\begin{align*}
x &\longmapsto x^* = (1+\varepsilon)x & \Longrightarrow \quad& \psi_x=x\\
y &\longmapsto y^* = (1+\varepsilon)y & \Longrightarrow \quad& \psi_y=y\\
z &\longmapsto z^* = (1+\varepsilon)z & \Longrightarrow \quad& \psi_z=z\\
t &\longmapsto t^* = (1+2\varepsilon)t & \Longrightarrow \quad& \varphi=2t
\end{align*}
and the Noether charge is thus given by
\begin{align*}
Q(t,x,\dot{x})&= \sum_{i=1}^N \dfrac{\partial \mathcal{L}}{\partial \dot{x}_i}\,\psi_i + \left(\mathcal{L}-\sum_{i=1}^N \dfrac{\partial \mathcal{L}}{\partial \dot{x}_i}\,\dot{x}_i\right)\varphi\\
&=\sum_{i=1}^3 \dfrac{\partial}{\partial \dot{x}_i}\left(\dfrac{m}{2}\,\dot{\vec{r}\,}^2-\dfrac{U_0}{\vec{r\,}^{2}}\right)\,\psi_i \,+\\
&+ \left(\dfrac{m}{2}\,\dot{\vec{r}}\,^2-\dfrac{U_0}{\vec{r\,}^{2}}-\sum_{i=1}^3 \dfrac{\partial }{\partial \dot{x}_i}\,\left(\dfrac{m}{2}\,\dot{\vec{r}}\,^2-\dfrac{U_0}{\vec{r\,}^{2}}\right)\dot{x}_i\right)\varphi\\
&=m(\dot{x}x+\dot{y}y+\dot{z}z) \,+ \\
&+\left( \dfrac{m}{2}\dot{\vec{r\,}}^2-\dfrac{U_0}{\vec{r\,}^{2}}-m(\dot{x}^2+\dot{y}^2+\dot{z}^2)\right)2t\\&=m\, \dot{\vec{r}}\,\vec{r}\,+\left( -\dfrac{m}{2}\dot{\vec{r\,}}^2-\dfrac{U_0}{\vec{r\,}^{2}} \right)2t=m\, \dot{\vec{r}}\,\vec{r}\, -(T+U)2t\\
&=m\, \dot{\vec{r}}\,\vec{r}\, -2Et\;\stackrel{t=0}{=}\; m\, \dot{\vec{r}}_0\,\vec{r}_0
\end{align*}
which shows that invariance under different transformations results in different conserved quantities.
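The same kind of numeric check (our own initial data, planar motion, fourth-order Runge-Kutta) confirms that the scaling charge ##Q=m\,\dot{\vec r}\cdot\vec r-2Et## is conserved along the trajectory:

```python
m, U0 = 1.0, 0.5   # our own parameter choices

def deriv(s):
    x, y, vx, vy = s
    r2 = x*x + y*y
    # acceleration from -∇(U0/r²) = 2 U0 (x, y)/r⁴
    return (vx, vy, 2*U0*x/(m*r2*r2), 2*U0*y/(m*r2*r2))

def rk4(s, h):
    k1 = deriv(s)
    k2 = deriv(tuple(a + h/2*b for a, b in zip(s, k1)))
    k3 = deriv(tuple(a + h/2*b for a, b in zip(s, k2)))
    k4 = deriv(tuple(a + h*b for a, b in zip(s, k3)))
    return tuple(a + h/6*(p + 2*q + 2*r + w)
                 for a, p, q, r, w in zip(s, k1, k2, k3, k4))

def charge(t, s):
    x, y, vx, vy = s
    E = 0.5*m*(vx*vx + vy*vy) + U0/(x*x + y*y)
    return m*(vx*x + vy*y) - 2*E*t   # Q = m ṙ·r - 2Et

s, t, h = (1.0, 0.0, 0.0, 1.2), 0.0, 0.001
Q0 = charge(t, s)                    # = m ṙ0·r0 at t = 0
for _ in range(2000):
    s, t = rk4(s, h), t + h
print(abs(charge(t, s) - Q0) < 1e-7)  # True
```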

Sources

[1] P.J. Olver, Applications of Lie Groups to Differential Equations, New York 1986, Springer, GTM 107

[2] M.S. Lie, Begründung einer Invarianten-Theorie der Berührungs-Transformationen, Mathematische Annalen 1874, Vol. 8, pages 215-303

[3] M.S. Lie, Classification und Integration von gewöhnlichen Differentialgleichungen zwischen xy, die eine Gruppe von Transformationen gestatten, Leipzig 1883

[4] A. Cohen, An Introduction to Lie Theory of One-Parameter Groups, Baltimore 1911

[5] A.E. Noether, Invarianten beliebiger Differentialausdrücke, Nachrichten der Königlichen Gesellschaft der Wissenschaften zu Göttingen 1918, pages 37-44

[6] A.E. Noether, Invariante Variationsprobleme, Nachrichten der Königlichen Gesellschaft der Wissenschaften zu Göttingen 1918, pages 235-257

[7] Example for Noether’s theorem, Complete Solution Manual, July 2018, I-3, pages 507ff.
https://www.physicsforums.com/threads/solution-manuals-for-the-math-challenges.977057/

[8] A. Cohen, Elementary Treatise on Differential Equations, Baltimore 1906
