About the semidirect product of a Lie algebra

In summary, the conversation discusses the semidirect product of a Lie algebra, specifically the special linear Lie algebra ##\mathfrak{s l}_2## and its 2-dimensional simple module ##V_2##. The conversation also introduces the basis and multiplication table for the Lie algebra and discusses finding the matrices for the elements of the Lie algebra and the simple module. The concept of linear representation is also mentioned, with the adjoint representation and another possible representation being discussed.
  • #1
HDB1
Homework Statement: About the semidirect product of a Lie algebra
Relevant Equations: ##\mathfrak{s l}_2=\mathbb{K} F \oplus \mathbb{K} H \oplus \mathbb{K} E##

Hi,

Please, I have a question about a module of the special linear Lie algebra:

Let ##\mathbb{K}## be a field. Let ##\mathfrak{s l}_2=\mathbb{K} F \oplus \mathbb{K} H \oplus \mathbb{K} E##
be the simple Lie algebra whose Lie bracket is given by the rule ##[H, E]=2 E,\ [H, F]=-2 F## and ##[E, F]=H##. Let ##V_2=\mathbb{K} X \oplus \mathbb{K} Y## be the 2-dimensional simple ##\mathfrak{s l}_2##-module with basis ##X## and ##Y##.

Let ##\mathfrak{a}:=\mathfrak{s l}_2 \ltimes V_2## be the semidirect product of Lie algebras.
The Lie algebra ##\mathfrak{a}## admits the basis ##\{H, E, F, X, Y\}## and the Lie bracket is defined as follows
$$
\begin{array}{lllll}
{[H, E]=2 E,} & {[H, F]=-2 F,} & {[E, F]=H,} & {[E, X]=0,} & {[E, Y]=X,} \\
{[F, X]=Y,} & {[F, Y]=0,} & {[H, X]=X,} & {[H, Y]=-Y,} & {[X, Y]=0 .}
\end{array}
$$
Let ##A=U(\mathfrak{a})## be the enveloping algebra of the Lie algebra ##\mathfrak{a}##.

Please, I know the basis of ##\mathfrak{s l}_2## (as above), which is given by the matrices:

$$
E=\left[\begin{array}{ll}
0 & 1 \\
0 & 0
\end{array}\right], \quad F=\left[\begin{array}{ll}
0 & 0 \\
1 & 0
\end{array}\right], \quad H=\left[\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}\right].
$$

A computation in ##M_2(\mathbb{K})## yields the following set of relations:
$$
[H, E]=2 E, \quad[H, F]=-2 F, \quad[E, F]=H.
$$
I need to know, please:

1. What are the matrices of ##X## and ##Y##?

2. How do we compute the bracket between elements of ##\mathfrak{s l}_2## and ##V_2##, where ##[x, y]=x y-y x##?

Thank you so much in advance, 🌷
 
  • #2
Dear @fresh_42, I am so sorry for bothering you; if you could help me with this question, I would appreciate it.

Thanks in advance,
 
  • #3
No problem.
HDB1 said:
Let ##V_2=\mathbb{K} X \oplus \mathbb{K} Y## be the 2-dimensional simple ##\mathfrak{s l}_2##-module with basis ##X## and ##Y##.
Let ##\mathfrak{a}:=\mathfrak{s l}_2 \ltimes V_2## be the semidirect product of Lie algebras.
The Lie algebra ##\mathfrak{a}## admits the basis ##\{H, E, F, X, Y\}## and the Lie bracket is defined as follows
$$
\begin{array}{lllll}
{[H, E]=2 E,} & {[H, F]=-2 F,} & {[E, F]=H,} & {[E, X]=0,} & {[E, Y]=X,} \\
{[F, X]=Y,} & {[F, Y]=0,} & {[H, X]=X,} & {[H, Y]=-Y,} & {[X, Y]=0 .}
\end{array}$$
So far so good. That is how a Lie algebra is defined. In this case five basis vectors and the multiplication table.
HDB1 said:
$$
E=\left[\begin{array}{ll}
0 & 1 \\
0 & 0
\end{array}\right], \quad F=\left[\begin{array}{ll}
0 & 0 \\
1 & 0
\end{array}\right], \quad H=\left[\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}\right].
$$
1. What are the matrices of ##X## and ##Y##?

Now it's getting complicated. There are none. If we want to have such matrices, then we need a linear representation of this Lie algebra. We have represented ##E,H,F## by ##2\times 2## matrices. The vector space of all ##2\times 2## matrices is ##4-##dimensional. It is the Lie algebra ##\mathfrak{gl}(2)## and is spanned by ##E,H,F## plus the identity matrix. There is no room left for ##X## and ##Y.##

This means, we need a larger representation space than ##\mathbb{K}^2.##

A method that always works is the adjoint representation. We could write
\begin{align*}
\operatorname{ad}\, : \,\mathfrak{a} &\longrightarrow \mathfrak{gl}(\mathfrak{a}) \cong \mathfrak{gl}(5)\\
A&\longmapsto (\operatorname{ad}(A)\, : \,\mathfrak{a} \longrightarrow \mathfrak{a})
\end{align*}

This would result in ##5\times 5## matrices. Let's order the basis of ##\mathfrak{a}## by ##(E,H,F,X,Y)## and consider for example ##\operatorname{ad}F.## Then we get
\begin{align*}
\operatorname{ad}F(E)&=[F,E]=-[E,F]=-H \triangleq (0,-1,0,0,0)\\
\operatorname{ad}F(H)&=[F,H]=-[H,F]=2F \triangleq (0,0,2,0,0)\\
\operatorname{ad}F(F)&=[F,F]=0 \triangleq (0,0,0,0,0)\\
\operatorname{ad}F(X)&=[F,X]=Y \triangleq (0,0,0,0,1)\\
\operatorname{ad}F(Y)&=[F,Y]=0 \triangleq (0,0,0,0,0)
\end{align*}
where the coordinate vectors are the columns of ##\operatorname{ad}F.## Hence we get
$$
\operatorname{ad}F =\begin{pmatrix}0&0&0&0&0\\-1&0&0&0&0\\0&2&0&0&0\\0&0&0&0&0\\0&0&0&1&0\end{pmatrix}
$$
... if I made no typo.

The other four matrices are obtained accordingly. The kernel of ##\operatorname{ad}##, i.e. the set of all vectors ##Z:=eE+hH+fF+xX+yY## such that ##\operatorname{ad}Z=e\operatorname{ad}E+h\operatorname{ad}H+f\operatorname{ad}F+x\operatorname{ad}X+y\operatorname{ad}Y## is the zero matrix, is the center of ##\mathfrak{a}.## I expect that the center is zero (but I haven't checked, just my experience). That would mean ##e=h=f=x=y=0## is the only solution to ##\operatorname{ad}Z=0.##
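For concreteness, here is a minimal sketch (Python with NumPy; my own illustration, not part of the exercise) that builds all five ##5\times 5## matrices from the bracket table above and checks the center:

[CODE=Python]
import numpy as np

# ordered basis (E, H, F, X, Y) of a = sl(2) ⋉ V_2
basis = ["E", "H", "F", "X", "Y"]
# non-trivial brackets from the table in post #1
table = {
    ("H", "E"): {"E": 2},  ("H", "F"): {"F": -2}, ("E", "F"): {"H": 1},
    ("E", "Y"): {"X": 1},  ("F", "X"): {"Y": 1},
    ("H", "X"): {"X": 1},  ("H", "Y"): {"Y": -1},
}

def lie(a, b):
    """Coordinates of [a, b] for basis vectors a, b, using antisymmetry."""
    v = np.zeros(5)
    for name, coeff in table.get((a, b), {}).items():
        v[basis.index(name)] += coeff
    for name, coeff in table.get((b, a), {}).items():
        v[basis.index(name)] -= coeff
    return v

def ad(a):
    """Matrix of ad(a); the j-th column holds the coordinates of [a, basis_j]."""
    return np.column_stack([lie(a, b) for b in basis])

print(ad("F"))   # should reproduce the 5x5 matrix displayed above

# the center is the kernel of the linear map Z -> ad(Z); stack the five
# flattened ad matrices as columns of a 25 x 5 matrix and look at its rank
M = np.column_stack([ad(b).reshape(-1) for b in basis])
print("dim of center:", 5 - np.linalg.matrix_rank(M))   # prints 0, as expected above
[/CODE]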
HDB1 said:
2. How do we compute the bracket between elements of ##\mathfrak{s l}_2## and ##V_2##, where ##[x, y]=x y-y x##?
Once we have matrices, e.g. via the adjoint representation above, then we have with arbitrary vectors ##U=u_1E+u_2H+u_3F+u_4X+u_5Y \triangleq (u_1,\ldots ,u_5)## and ##V=v_1E+v_2H+v_3F+v_4X+v_5Y \triangleq (v_1,\ldots , v_5)##
$$
\operatorname{ad}([U,V])=[\operatorname{ad}U,\operatorname{ad}V]=\operatorname{ad}U \cdot \operatorname{ad}V -\operatorname{ad}V \cdot \operatorname{ad}U
$$
Note that the adjoint representation is only one possible linear representation, even though a convenient one, since it is nothing else than left-multiplication in the Lie algebra. But you can have other representations. As we only need five dimensions, it is theoretically possible to find one with ##3\times 3## or ##4\times 4## matrices. However, it's troublesome to find those, if - and it's a big if - they exist at all.

Here is another possible linear representation that also provides matrices for ##\mathfrak{a}:##

If you like, you could determine all linear transformations ##\alpha : \mathfrak{a}\rightarrow \mathfrak{a}## such that ##0=[U,\alpha(V)]+[\alpha(U),V]## for all ##U,V\in\mathfrak{a}.## This is again a Lie algebra ##\mathfrak{A(a)}## with multiplication ##[\alpha_1,\alpha_2]=\alpha_1 \cdot \alpha_2 - \alpha_2\cdot \alpha_1.## Finally, ##\mathfrak{A(a)}## becomes a linear representation of ##\mathfrak{a}## by ##f(U)(\alpha)=U.\alpha :=\operatorname{ad}U \cdot \alpha -\alpha \cdot \operatorname{ad}U.## This would be a different linear representation of ##\mathfrak{a}## on the vector space ##\mathfrak{A(a)}.##

I don't know the dimension of ##\mathfrak{A(a)}## which is the number of rows and columns for the matrices ##f(U),## so I don't know whether they are smaller or bigger than those of ##\operatorname{ad}U.##
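In case it helps, that dimension can also be found by brute force: impose ##[U,\alpha(V)]+[\alpha(U),V]=0## on all pairs of basis vectors and compute the null space of the resulting linear system in the ##25## entries of ##\alpha##. A rough continuation of the Python sketch above (it reuses ad and basis from there):

[CODE=Python]
ads = [ad(b) for b in basis]     # ad E, ad H, ad F, ad X, ad Y from the snippet above

rows = []
for i in range(5):
    for j in range(5):
        # condition [e_i, alpha(e_j)] + [alpha(e_i), e_j] = 0 as linear equations
        # in the unknowns alpha[k, l]; alpha(e_j) is column j of alpha
        C = np.zeros((5, 5, 5))
        C[:, :, j] += ads[i]     # coordinates of [e_i, alpha(e_j)]
        C[:, :, i] -= ads[j]     # coordinates of [alpha(e_i), e_j] = -[e_j, alpha(e_i)]
        rows.append(C.reshape(5, 25))
A_cond = np.vstack(rows)
print("dim A(a):", 25 - np.linalg.matrix_rank(A_cond))   # post #4 below reports this to be 0
[/CODE]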
 
  • #4
A) I've meanwhile calculated ##\mathfrak{A(a)}## and, if I made no mistake, then it is ##\mathfrak{A(a)}=\{0\}.## Sorry, that's the trivial representation.

B) Thank you so much, @HDB1! 🌷

This is the first example of a Lie algebra that is not semisimple AND has trivial anti-symmetric transformations. I will have to double-check, but if so, it rebuts one of many hypotheses I have about ##\mathfrak{A(g)}'s.##

C) If ##\mathfrak{A(a)}=\{0\}## then ##\mathfrak{Z(a)}=\ker \operatorname{ad}=\{0\}## and
$$
\mathfrak{a}\cong \operatorname{ad}\mathfrak{a}
$$
and we can identify every basis vector ##E\, , \,F\, , \,H\, , \,X\, , \,Y## of ##\mathfrak{a}## with the matrices ##\operatorname{ad}E\, , \,\operatorname{ad}F\, , \,\operatorname{ad}H\, , \,\operatorname{ad}X\, , \,\operatorname{ad}Y.##
 
  • #5
fresh_42 said:
No problem.

So far so good. That is how a Lie algebra is defined. In this case five basis vectors and the multiplication table.

Please, what did you mean by multiplication table?
I thought ##V_2## is the standard vector space with the two basis vectors ##e_1=(1,0), e_2=(0,1)##, and according to this I calculated the module structure. If this is wrong, please, how can we find ##E.X, H.X, \ldots##?
But I could not find the commutator between any elements of the Lie algebras.

Also, please: in the definition of the semidirect product we use derivations, from ##a## to ##\operatorname{Der}(V_2)##; is it the same?

fresh_42 said:
Now it's getting complicated. There are none. If we want to have such matrices, then we need a linear representation of this Lie algebra. We have represented ##E,H,F## by ##2\times 2## matrices. The vector space of all ##2\times 2## matrices is ##4-##dimensional. It is the Lie algebra ##\mathfrak{gl}(2)## and is spanned by ##E,H,F## plus the identity matrix. There is no room left for ##X## and ##Y.##
Please, here: why did you put in the identity matrix?
fresh_42 said:
This means, we need a larger representation space than ##\mathbb{K}^2.##

A method that always works is the adjoint representation. We could write
\begin{align*}
\operatorname{ad}\, : \,\mathfrak{a} &\longrightarrow \mathfrak{gl}(\mathfrak{a}) \cong \mathfrak{gl}(5)\\
A&\longmapsto (\operatorname{ad}(A)\, : \,\mathfrak{a} \longrightarrow \mathfrak{a})
\end{align*}

This would result in ##5\times 5## matrices. Let's order the basis of ##\mathfrak{a}## by ##(E,H,F,X,Y)## and consider for example ##\operatorname{ad}F.## Then we get
\begin{align*}
\operatorname{ad}F(E)&=[F,E]=-[E,F]=-H \triangleq (0,-1,0,0,0)\\
\operatorname{ad}F(H)&=[F,H]=-[H,F]=2F \triangleq (0,0,2,0,0)\\
\operatorname{ad}F(F)&=[F,F]=0 \triangleq (0,0,0,0,0)\\
\operatorname{ad}F(X)&=[F,X]=Y \triangleq (0,0,0,0,1)\\
\operatorname{ad}F(Y)&=[F,Y]=0 \triangleq (0,0,0,0,0)
\end{align*}
where the coordinate vectors are the columns of ##\operatorname{ad}F.## Hence we get
$$
\operatorname{ad}F =\begin{pmatrix}0&0&0&0&0\\-1&0&0&0&0\\0&2&0&0&0\\0&0&0&0&0\\0&0&0&1&0\end{pmatrix}
$$
... if I made no typo.

The other four matrices are obtained accordingly.
Once we have matrices, e.g. via the adjoint representation above, then we have with arbitrary vectors ##U=u_1E+u_2H+u_3F+u_4X+u_5Y \triangleq (u_1,\ldots ,u_5)## and ##V=v_1E+v_2H+v_3F+v_4X+v_5Y \triangleq (v_1,\ldots , v_5)##
$$
\operatorname{ad}([U,V])=[\operatorname{ad}U,\operatorname{ad}V]=\operatorname{ad}U \cdot \operatorname{ad}V -\operatorname{ad}V \cdot \operatorname{ad}U
$$
Here, please: I calculated ##\operatorname{ad}F## and ##\operatorname{ad}X##, and I get ##\operatorname{ad}Y##; but please, can we compute ##[F,X]=Y## in general?

Thank you so so much,
 
  • #6
Where did you see the identity matrix?

The procedure goes as follows:
$$
\mathfrak{sl}(2)=\left\{\begin{bmatrix}a&b\\c&d\end{bmatrix}\,:\,\operatorname{trace}\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right)=a+d=0\right\}
$$
This is a three-dimensional subspace of a four-dimensional vector space: ##4## free parameters ##a,b,c,d## and ##1## linear condition ##a+d=0,## i.e. ##\dim \mathfrak{sl}(2)=4-1=3.## Now consider the matrices
$$
E=\begin{bmatrix}0&1\\0&0\end{bmatrix}\, , \,H=\begin{bmatrix}1&0\\0&-1\end{bmatrix}\, , \,F=\begin{bmatrix}0&0\\1&0\end{bmatrix}
$$
The matrices ##\{E,H,F\}## are linearly independent and lie within ##\mathfrak{sl}(2),## so they form a basis of ##\mathfrak{sl}(2).## We now have (if I made no sign error)
\begin{align*}
(\operatorname{ad}(H))(E)&=[H,E]=H\cdot E - E\cdot H =2E\\
(\operatorname{ad}(H))(F)&=[H,F]=H\cdot F - F\cdot H =-2F\\
(\operatorname{ad}(E))(F)&=[E,F]=E\cdot F - F\cdot E =H
\end{align*}
The remaining products are either ##0## or follow from
$$
(\operatorname{ad}(X))(Y)=[X,Y]=-[Y,X]=-(\operatorname{ad}(Y))(X)
$$

If we enumerate the basis vectors, then I set ##X_1=E\, , \,X_2=H\, , \,X_3=F.##
Their coordinates in that basis are the standard vectors ##(1,0,0)\, , \,(0,1,0)\, , \,(0,0,1).## But ##(x,y,z)## still means
$$
(x,y,z)=x\cdot (1,0,0)+y\cdot (0,1,0)+z\cdot (0,0,1)=xE+yH+zF=\begin{bmatrix}y&x\\z&-y\end{bmatrix}
$$
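If you want to double-check such bracket computations mechanically, a few lines of Python (NumPy assumed; purely a sanity check, nothing deep) will do:

[CODE=Python]
import numpy as np

E = np.array([[0, 1], [0, 0]])
H = np.array([[1, 0], [0, -1]])
F = np.array([[0, 0], [1, 0]])

def br(A, B):
    # matrix Lie bracket [A, B] = AB - BA
    return A @ B - B @ A

print(np.array_equal(br(H, E), 2 * E))    # [H, E] = 2E   -> True
print(np.array_equal(br(H, F), -2 * F))   # [H, F] = -2F  -> True
print(np.array_equal(br(E, F), H))        # [E, F] = H    -> True
[/CODE]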
 
  • #7
fresh_42 said:
Where did you see the identity matrix?

The procedure goes as follows:
$$
\mathfrak{sl}(2)=\left\{\begin{bmatrix}a&b\\c&d\end{bmatrix}\,:\,\operatorname{trace}\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right)=a+d=0\right\}
$$
This is a three-dimensional subspace of a four-dimensional vector space: ##4## free parameters ##a,b,c,d## and ##1## linear condition ##a+d=0,## i.e. ##\dim \mathfrak{sl}(2)=4-1=3.## Now consider the matrices
$$
E=\begin{bmatrix}0&1\\0&0\end{bmatrix}\, , \,H=\begin{bmatrix}1&0\\0&-1\end{bmatrix}\, , \,F=\begin{bmatrix}0&0\\1&0\end{bmatrix}
$$
The matrices ##\{E,H,F\}## are linearly independent and lie within ##\mathfrak{sl}(2),## so they form a basis of ##\mathfrak{sl}(2).## We now have (if I made no sign error)
\begin{align*}
(\operatorname{ad}(H))(E)&=[H,E]=H\cdot E - E\cdot H =2E\\
(\operatorname{ad}(H))(F)&=[H,F]=H\cdot F - F\cdot H =-2F\\
(\operatorname{ad}(E))(F)&=[E,F]=E\cdot F - F\cdot E =H
\end{align*}
The remaining products are either ##0## or follow from
$$
(\operatorname{ad}(X))(Y)=[X,Y]=-[Y,X]=-(\operatorname{ad}(Y))(X)
$$

If we enumerate the basis vectors, then I set ##X_1=E\, , \,X_2=H\, , \,X_3=F.##
Their coordinates in that basis are the standard vectors ##(1,0,0)\, , \,(0,1,0)\, , \,(0,0,1).## But ##(x,y,z)## still means
$$
(x,y,z)=x\cdot (1,0,0)+y\cdot (0,1,0)+z\cdot (0,0,1)=xE+yH+zF=\begin{bmatrix}y&x\\z&-y\end{bmatrix}
$$
Thank you so much, but please, I have the same problem: how can I compute ##[F, Y]##? And what, please, are the matrices of ##X## and ##Y##?

I am so sorry for these questions,

Thanks in advance,
 
  • #8
HDB1 said:
Thank you so much, but please, I have the same problem: how can I compute ##[F, Y]##? And what, please, are the matrices of ##X## and ##Y##?

If ##X=x_1X_1+x_2X_2+x_3X_3=x_1E+x_2H+x_3F## then ##X=\begin{pmatrix}x_2&x_1\\x_3&-x_2\end{pmatrix}## and similar for ##Y##.

Do not confuse this with the matrices of ##\operatorname{ad}X## or ##\operatorname{ad}Y.## Since ##\operatorname{ad} X## maps matrices in ##\mathfrak{sl}(2)## to other matrices in ##\mathfrak{sl}(2),## it is represented by a ##3\times 3## matrix. The matrices in ##\mathfrak{sl}(2)## become the vectors that are mapped by the linear transformation ##\operatorname{ad}X.## ##E## stands for ##(1,0,0)##, ##H## stands for ##(0,1,0)## and ##F## for ##(0,0,1).##

This is confusing when you start with the subject. The Lie algebras are vector spaces formed by matrices, and their representations like the adjoint representation are matrices that map matrices to other matrices. But actually, they map vectors to other vectors, only that those vectors happen to be matrices in a different context.

The matrix of ##\operatorname{ad}X=x_1\operatorname{ad}E+x_2\operatorname{ad}H+x_3\operatorname{ad}F## should be
$$
\operatorname{ad}X=\begin{pmatrix}2x_2&-2x_1&0\\-x_3&0&x_1\\0&2x_3&-2x_2 \end{pmatrix}
$$
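As a quick cross-check of that formula, one can let a computer do the bookkeeping; a small SymPy sketch (my own illustration, with ##x_1,x_2,x_3## kept symbolic):

[CODE=Python]
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")

# 3x3 matrices of ad E, ad H, ad F in the ordered basis (E, H, F),
# read off column by column from [E,H] = -2E, [E,F] = H, [H,E] = 2E, etc.
adE = sp.Matrix([[0, -2, 0], [0, 0, 1], [0, 0, 0]])
adH = sp.Matrix([[2, 0, 0], [0, 0, 0], [0, 0, -2]])
adF = sp.Matrix([[0, 0, 0], [-1, 0, 0], [0, 2, 0]])

adX = x1 * adE + x2 * adH + x3 * adF
sp.pprint(adX)   # should agree with the matrix displayed above
[/CODE]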
 
  • #9
Please, @fresh_42 , bear with me,

If the Lie algebra ##\mathfrak{s l}_2=\mathbb{K} F \oplus \mathbb{K} H \oplus \mathbb{K} E## is the simple Lie algebra whose Lie bracket is given by the rule ##[H, E]=2 E, [H, F]=-2 F## and ##[E, F]=H,## and ##V_2=\mathbb{K} X \oplus \mathbb{K} Y## is the 2-dimensional simple ##\mathfrak{s l}_2##-module with basis ##X## and ##Y##, then the module structure is given by this:

$$
H \cdot X=X, \quad H \cdot Y=-Y, \quad E \cdot X=0, \quad E \cdot Y=X, \quad F \cdot X=Y, \quad F \cdot Y=0
$$

How do we get this structure? This is my biggest problem: I do not know how we find ##X, Y## and then get these relations. My second question, if you do not mind: if we want to write any element of ##\mathfrak{a}:=\mathfrak{s l}_2 \ltimes V_2##, then we have to write it in terms of ##E,H,F,X,Y##; we have matrices for ##H,F,E##, but what are ##X,Y##?

thank you my dear,
 
  • #10
Please, @fresh_42, as above: ##V_2## is a module; why is it also an ideal?

I could not find the definition of the semidirect product in the book; could you please tell me what its conditions are for Lie algebras?

Thanks
 
  • #11
HDB1 said:
Please, @fresh_42 , bear with me,

If the Lie algebra ##\mathfrak{s l}_2=\mathbb{K} F \oplus \mathbb{K} H \oplus \mathbb{K} E## is the simple Lie algebra whose Lie bracket is given by the rule ##[H, E]=2 E, [H, F]=-2 F## and ##[E, F]=H,## and ##V_2=\mathbb{K} X \oplus \mathbb{K} Y## is the 2-dimensional simple ##\mathfrak{s l}_2##-module with basis ##X## and ##Y##, then the module structure is given by this:

$$
H \cdot X=X, \quad H \cdot Y=-Y, \quad E \cdot X=0, \quad E \cdot Y=X, \quad F \cdot X=Y, \quad F \cdot Y=0
$$

How do we get this structure? This is my biggest problem: I do not know how we find ##X, Y## and then get these relations.
This is again a good idea. I haven't thought about that construction before. It actually gave me a counterexample to one of my conjectures. Did I already say thank you for that? It means I will no longer have to search for a certain proof.

It is actually quite easy: ##V_2## is ##\mathbb{K}^2,## an abelian two-dimensional Lie algebra. So we can build ##\mathfrak{sl}(2)\oplus \mathbb{K}^2## as vector space. Now we need the multiplications. But the ##2\times 2 ## matrices of ##\mathfrak{sl}(2)## act naturally on the vector space ##\mathbb{K}^2=\operatorname{lin \,span}\{X,Y\}## by simple matrix multiplication:
$$
\begin{pmatrix}a&b\\c&-a\end{pmatrix}\cdot\begin{pmatrix}x\\y\end{pmatrix}=
\begin{pmatrix}ax+by \\ cx-ay\end{pmatrix}=(ax+by)X+(cx-ay)Y
$$
All that remains to show is that the Lie algebra definition for this multiplication holds. But it is matrix multiplication and ##\mathbb{K}^2## is abelian, so there are no problems to be expected.
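Spelled out on the basis vectors (a tiny Python/NumPy illustration, nothing more), this matrix action recovers exactly the table from post #9:

[CODE=Python]
import numpy as np

E = np.array([[0, 1], [0, 0]])
H = np.array([[1, 0], [0, -1]])
F = np.array([[0, 0], [1, 0]])
X = np.array([1, 0])   # basis of V_2 = K^2
Y = np.array([0, 1])

# the module action S.v is plain matrix-vector multiplication S @ v
print(H @ X, H @ Y)    # [1 0] [ 0 -1]   i.e. H.X = X,  H.Y = -Y
print(E @ X, E @ Y)    # [0 0] [1 0]     i.e. E.X = 0,  E.Y = X
print(F @ X, F @ Y)    # [0 1] [0 0]     i.e. F.X = Y,  F.Y = 0
[/CODE]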
HDB1 said:
My second question, if you do not mind: if we want to write any element of ##\mathfrak{a}:=\mathfrak{s l}_2 \ltimes V_2##, then we have to write it in terms of ##E,H,F,X,Y##; we have matrices for ##H,F,E##, but what are ##X,Y##?

You have basically two possibilities.

a) Forget it. We have five vectors and the Lie multiplication among them. So we have all we need for a Lie algebra and do not have to bother what they might be.

b) "What are they?" is equivalent to the question "How can we represent them?"
We have five dimensions, so we cannot put it into the three-dimensional ##\mathfrak{sl}(2).## But we are lucky because ##[A,B]=0## for all ##B\in L:=_{def} \mathfrak{sl}(2)\oplus \mathbb{K}^2## implies ##A=0.## (Check it as an exercise to make sure I didn't make a mistake.)

This means that ##\operatorname{ad}L,## the linear span of all ##5\times 5## matrices
$$
\operatorname{ad}E\; , \;\operatorname{ad}H\; , \;\operatorname{ad}F\; , \;\operatorname{ad}X\; , \;\operatorname{ad}Y
$$
forms a one-to-one, i.e. faithful, representation of ##L.##

So "what are they?" can be answered by this list of ##5## ##5\times 5## matrices.
 
  • #12
HDB1 said:
Please, @fresh_42, as above: ##V_2## is a module; why is it also an ideal?
It is an ideal because ##[A,X]\in V_2## and ##[A,Y]\in V_2## for all ##A\in \mathfrak{sl}(2)\oplus V_2.## Multiplication by any element from the new Lie algebra maps ##V_2## into ##V_2.##
$$
[\mathfrak{sl}(2)\oplus V_2\, , \,V_2]=\underbrace{[\mathfrak{sl}(2),V_2]}_{\subseteq V_2}+\underbrace{[V_2,V_2]}_{=\{0\}}\subseteq V_2
$$

HDB1 said:
I could not find the definition of the semidirect product in the book; could you please tell me what its conditions are for Lie algebras?

Yeah, I had trouble finding it, too. It's on page 11.

A Lie algebra is semisimple if its maximal solvable ideal, called its radical, is zero.

This means that a semisimple Lie algebra has no nonzero solvable ideals. There are equivalent definitions possible. Remember that a simple Lie algebra is a Lie algebra that has ##\{0\}## and itself as the only possible ideals. Then we could say:

A Lie algebra is semisimple if it is a sum of simple Lie algebras. (theorem 5.2)

or

A Lie algebra is semisimple if its Killing form is non-degenerate. (theorem 5.1)
 
  • #13
fresh_42 said:
It is an ideal because ##[A,X]\in V_2## and ##[A,Y]\in V_2## for all ##A\in \mathfrak{sl}(2)\oplus V_2.## Multiplication by any element from the new Lie algebra maps ##V_2## into ##V_2.##
$$
[\mathfrak{sl}(2)\oplus V_2\, , \,V_2]=\underbrace{[\mathfrak{sl}(2),V_2]}_{\subseteq V_2}+\underbrace{[V_2,V_2]}_{=\{0\}}\subseteq V_2
$$
Yeah, I had trouble finding it, too. It's on page 11.

A Lie algebra is semisimple if its maximal solvable ideal, called its radical, is zero.

This means that a semisimple Lie algebra has no nonzero solvable ideals. There are equivalent definitions possible. Remember that a simple Lie algebra is a Lie algebra that has ##\{0\}## and itself as the only possible ideals. Then we could say:

A Lie algebra is semisimple if it is a sum of simple Lie algebras. (theorem 5.2)

or

A Lie algebra is semisimple if its Killing form is non-degenerate. (theorem 5.1)
Thank you, genius,

Please, about theorem 5.2: if we want to prove ##\mathfrak{sl}_2## is semisimple by using this theorem, then we have to write it in this way:
$$
\mathfrak{s l}_2=\mathbb{K} F \oplus \mathbb{K} H \oplus \mathbb{K} E
$$

Here: can we say ##\mathbb{K} F##, ##\mathbb{K} H##, ##\mathbb{K} E## are simple ideals? If yes, please, how can we prove that? Is it the same as in the simple Lie algebra ##\mathfrak{sl}_2##? As if we assume any ideal in them, then we will get zero.

My second question, please: could you give a simple example of a semisimple Lie algebra where we use theorem 5.1 to prove it?

My last question, and I am so so sorry for bothering you: I need the definition of the semidirect product of Lie algebras; it is not in the book.
 
  • #14
fresh_42 said:
This is again a good idea. I haven't thought about that construction before. It actually gave me a counterexample to one of my conjectures. Did I already say thank you for that? It means I will no longer have to search for a certain proof.

It is actually quite easy: ##V_2## is ##\mathbb{K}^2,## an abelian two-dimensional Lie algebra. So we can build ##\mathfrak{sl}(2)\oplus \mathbb{K}^2## as vector space. Now we need the multiplications. But the ##2\times 2 ## matrices of ##\mathfrak{sl}(2)## act naturally on the vector space ##\mathbb{K}^2=\operatorname{lin \,span}\{X,Y\}## by simple matrix multiplication:
$$
\begin{pmatrix}a&b\\c&-a\end{pmatrix}\cdot\begin{pmatrix}x\\y\end{pmatrix}=
\begin{pmatrix}ax+by \\ cx-ay\end{pmatrix}=(ax+by)X+(cx-ay)Y
$$
All that remains to show is that the Lie algebra definition for this multiplication holds. But it is matrix multiplication and ##\mathbb{K}^2## is abelian, so there are no problems to be expected.

You have basically two possibilities.

a) Forget it. We have five vectors and the Lie multiplication among them. So we have all we need for a Lie algebra and do not have to bother what they might be.

b) "What are they?" is equivalent to the question "How can we represent them?"
We have five dimensions, so we cannot put it into the three-dimensional ##\mathfrak{sl}(2).## But we are lucky because ##[A,B]=0## for all ##B\in L:=_{def} \mathfrak{sl}(2)\oplus \mathbb{K}^2## implies ##A=0.## (Check it as an exercise to make sure I didn't make a mistake.)

This means that ##\operatorname{ad}L,## the linear span of all ##5\times 5## matrices
$$
\operatorname{ad}E\; , \;\operatorname{ad}H\; , \;\operatorname{ad}F\; , \;\operatorname{ad}X\; , \;\operatorname{ad}Y
$$
forms a one-to-one, i.e. faithful, representation of ##L.##

So "what are they?" can be answered by this list of ##5## ##5\times 5## matrices.
Thank you, thank you. My question here, please: is the matrix of ##X## equal to ##\left(\begin{array}{l}1 \\ 0\end{array}\right)## and that of ##Y## equal to ##\left(\begin{array}{l}0 \\ 1\end{array}\right)##? If yes, please, how can we compute the Lie bracket, e.g. ##[H,X]##? I did not get the answer, which is ##[H,X]=X##. Also please, how do you know ##\mathbb{K}^2## is abelian? I tried to compute ##[X,Y]=0## but it did not work.
Thank you from the bottom of my heart,
 
  • #15
HDB1 said:
Thank you, thank you. My question here, please: is the matrix of ##X## equal to ##\left(\begin{array}{l}1 \\ 0\end{array}\right)## and that of ##Y## equal to ##\left(\begin{array}{l}0 \\ 1\end{array}\right)##?
Yes.
HDB1 said:
If yes, please, how can we compute the Lie bracket, e.g. ##[H,X]##? I did not get the answer, which is ##[H,X]=X##.
We don't. We set
$$
[H,X]:=_{def} H\cdot X=\begin{pmatrix}1&0\\0&-1\end{pmatrix}\cdot \begin{pmatrix}1\\0\end{pmatrix} = \begin{pmatrix}1\\0\end{pmatrix} = X
$$
as matrix application to the vector (and similarly for ##E, F## and ##Y##) and it works. (At least according to my little program I wrote to check Jacobi identities.)
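Such a check is easy to automate. Here is a minimal sketch of what a Jacobi-identity checker for ##\mathfrak{a}## could look like (Python/NumPy, structure constants in the basis ##(E,H,F,X,Y)##; this is only an illustration of the idea, not the actual program I used):

[CODE=Python]
import numpy as np
from itertools import product

names = ["E", "H", "F", "X", "Y"]
idx = {n: i for i, n in enumerate(names)}

# structure constants: c[i, j, k] = coefficient of basis_k in [basis_i, basis_j]
c = np.zeros((5, 5, 5))
def setbr(a, b, **coeffs):
    for n, v in coeffs.items():
        c[idx[a], idx[b], idx[n]] = v
        c[idx[b], idx[a], idx[n]] = -v   # antisymmetry

setbr("H", "E", E=2); setbr("H", "F", F=-2); setbr("E", "F", H=1)
setbr("E", "Y", X=1); setbr("F", "X", Y=1)
setbr("H", "X", X=1); setbr("H", "Y", Y=-1)
# all remaining brackets, in particular [X, Y], are zero

def br(u, v):
    """Bilinear extension of the bracket to coordinate vectors u, v."""
    return np.einsum("i,j,ijk->k", u, v, c)

ok = True
for i, j, k in product(range(5), repeat=3):
    e = np.eye(5)
    jac = br(e[i], br(e[j], e[k])) + br(e[j], br(e[k], e[i])) + br(e[k], br(e[i], e[j]))
    ok = ok and np.allclose(jac, 0)
print("Jacobi identity holds on all basis triples:", ok)   # prints True
[/CODE]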

HDB1 said:
Also please, how do you know ##\mathbb{K}^2## is abelian? I tried to compute ##[X,Y]=0## but it did not work.
Again by simple definition. ##\mathbb{K}^m## is always meant as an abelian Lie algebra if no explicit multiplication rules are given. We simply define ##[X,Y]=0.##

You must let go of the question of where the elements of a Lie algebra come from. In general, we only know that a Lie algebra is a vector space, so its elements are vectors, and that those vectors can be multiplied so that anti-commutativity and the Jacobi identity hold.

Where we get examples, or how we construct examples is a different issue. As long as anti-commutativity and the Jacobi identity hold, we are free to define whatever we want.

For example, if the non-trivial multiplications of a six-dimensional vector space ##V=\operatorname{lin \,span}\{A_1,\ldots,A_6\}## are
\begin{align*}
[A_1,A_2]&= -2 (A_2 + A_5) \, , \,[A_1,A_3]= -2 (A_3 + A_6) \\
[A_2,A_4]&= -2 A_5\, , \,[A_3,A_4]= -2 A_6\, , \,[A_4,A_5]= -2 A_5\, , \,
[A_4,A_6]= -2 A_6
\end{align*}
and the trivial ones are
\begin{align*}
[A_1,A_4]&= [A_1,A_5]=[A_1,A_6]=[A_2,A_3]=[A_2,A_5]= [A_2,A_6]=0\\ [A_3,A_5]&= [A_3,A_6]= [A_5,A_6]= 0
\end{align*}
then ##\left(V\, , \,[.\, , \,.]\right)## becomes a solvable, non-nilpotent Lie algebra.

It is one, and since I know how I constructed it, I even have a matrix representation. However, it is not obvious if we only consider the multiplications. But we have all we need for a Lie algebra.
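If you want to verify "solvable, non-nilpotent" directly from the multiplication table, you can iterate the derived series and the lower central series on the structure constants. A rough Python/NumPy sketch (my own illustration; the dimensions in the comments are what I get from the brackets listed above):

[CODE=Python]
import numpy as np

n = 6
c = np.zeros((n, n, n))
def setbr(i, j, coeffs):                       # 1-based indices for A_1, ..., A_6
    for k, v in coeffs.items():
        c[i - 1, j - 1, k - 1] = v
        c[j - 1, i - 1, k - 1] = -v            # antisymmetry

setbr(1, 2, {2: -2, 5: -2}); setbr(1, 3, {3: -2, 6: -2})
setbr(2, 4, {5: -2});        setbr(3, 4, {6: -2})
setbr(4, 5, {5: -2});        setbr(4, 6, {6: -2})

def br(u, v):
    return np.einsum("i,j,ijk->k", u, v, c)

def column_basis(M):
    """Orthonormal basis of the column space of M (possibly with 0 columns)."""
    if M.shape[1] == 0 or np.linalg.matrix_rank(M) == 0:
        return np.zeros((n, 0))
    U, s, _ = np.linalg.svd(M)
    return U[:, : int(np.sum(s > 1e-10))]

def bracket_span(U, V):
    """Subspace spanned by all [u, v], u a column of U, v a column of V."""
    cols = [br(u, v) for u in U.T for v in V.T]
    return column_basis(np.column_stack(cols)) if cols else np.zeros((n, 0))

D = np.eye(n)                                  # derived series D^{k+1} = [D^k, D^k]
for k in range(1, 4):
    D = bracket_span(D, D)
    print("dim D^%d =" % k, D.shape[1])        # 4, 0, 0  -> solvable

C = np.eye(n)                                  # lower central series C^{k+1} = [V, C^k]
for k in range(1, 4):
    C = bracket_span(np.eye(n), C)
    print("dim C^%d =" % k, C.shape[1])        # 4, 4, 4  -> not nilpotent
[/CODE]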
 
  • #16
HDB1 said:
Thank you, thank you. My question here, please: is the matrix of ##X## equal to ##\left(\begin{array}{l}1 \\ 0\end{array}\right)## and that of ##Y## equal to ##\left(\begin{array}{l}0 \\ 1\end{array}\right)##? If yes, please, how can we compute the Lie bracket, e.g. ##[H,X]##? I did not get the answer, which is ##[H,X]=X##. Also please, how do you know ##\mathbb{K}^2## is abelian? I tried to compute ##[X,Y]=0## but it did not work.
Let's see whether the Jacobi identity holds if we set ##S.(aX+bY)=a(S\cdot X)+b (S\cdot Y)## for ##S\in \mathfrak{sl}(2).## Note that we define ##V_2## to be abelian.

If we have three elements from ##V_2## then the Jacobi identities hold because we defined ##[V_2,V_2]=0.##
If we have two elements ##A,B\in V_2## and ##S\in \mathfrak{sl}(2)## then ##[S,[A,B]]=0\, , \,[A,\underbrace{[B,S]}_{\in V_2}]=0\, , \,[B,\underbrace{[S,A]}_{\in V_2}]=0.##
Thus all that remains are the combinations
\begin{align*}
[E,[H,aX+bY]]&+[H,[aX+bY,E]]+[aX+bY,[E,H]]\\
&=[E,aX-bY]+[H,-bX]-2[aX+bY,E]=-bX -bX-2(-bX)=0\\
[E,[F,aX+bY]]&+[F,[aX+bY,E]]+[aX+bY,[E,F]]\\
&=[E,aY]+[F,-bX]+[aX+bY,H]=aX-bY+(-aX+bY)=0\\
[H,[F,aX+bY]]&+[F,[aX+bY,H]]+[aX+bY,[H,F]]\\
&=[H,aY]+[F,-aX+bY]-2[aX+bY,F]=-aY-aY-2(-aY)=0
\end{align*}

So ##\mathfrak{sl}(2) \ltimes V_2## is a Lie algebra, regardless of whether ##E,H,F## are thought of as ##2\times 2## matrices and ##X,Y## as two-dimensional vectors. Our settings make it a Lie algebra.
 
  • #17
The notation ##\mathfrak{sl}(2)\ltimes V_2## reads as follows.

Say the Lie algebra is ##L:=_{def} \;\mathfrak{sl}(2) \oplus V_2## as a direct vector space sum. The Lie algebra structure is the one you defined in post #1, which we just saw in the previous post.

We have ##[\mathfrak{sl}(2),\mathfrak{sl}(2)]\subseteq \mathfrak{sl}(2)## and
$$
[L,V_2]=[\mathfrak{sl}(2)+V_2,V_2]=[\mathfrak{sl}(2),V_2]\subseteq V_2.
$$
Hence ##V_2\subseteq L## is an ideal; we write ##V_2 \triangleleft L## or ##L \triangleright V_2## for ideals.
However, ##\mathfrak{sl}(2)\subseteq L## is only a subalgebra and not an ideal; we write ##\mathfrak{sl}(2)<L## for subalgebras.
If we combine those, we get
$$
\mathfrak{sl}(2) < \underbrace{\mathfrak{sl}(2) \ltimes V_2}_{=L} \triangleright V_2
$$
 
  • #19
Dear @fresh_42, please, I have a question related to this thread:

In the definitions of a homomorphism of ##L##-modules and of simple modules, could we use the same module here, ##V_2=\mathbb{K}^2##, as an example of these two definitions?

1. A homomorphism of ##L##-modules is a linear map ##\phi: V \rightarrow W## such that ##\phi(x \cdot v)=x \cdot \phi(v)##, for all ##x \in L, v \in V##.

2- An ##L##-module ##V## is called simple if it has two ##L##-submodules, itself and 0.

Or do you have other examples of these definitions with the Lie algebra ##\mathfrak{s l}(2)##?

Thank you so much in advance, :heart: :heart: :heart:
 
  • #20
HDB1 said:
Dear @fresh_42, please, I have a question related to this thread:

In the definitions of a homomorphism of ##L##-modules and of simple modules, could we use the same module here, ##V_2=\mathbb{K}^2##, as an example of these two definitions?

1. A homomorphism of ##L##-modules is a linear map ##\phi: V \rightarrow W## such that ##\phi(x \cdot v)=x \cdot \phi(v)##, for all ##x \in L, v \in V##.
Yes. ##V_2## is an ##\mathfrak{sl}(2)##-module by ##x.v=[x,v],## the multiplication in ##\mathfrak{sl}(2)\ltimes V_2.## You can also define ##x.v=0## and get a second module structure on ##V_2.## This trivial module is always possible. If we define ##\phi(v)=0## for all ##v\in V_2## then we have a module homomorphism
\begin{align*}
\phi \, : \,(V_2\, , \,[.\, , \,.]) &\longrightarrow (V_2\, , \,0)\\
\phi(x.v)&=x.\phi(v)=0\, , \,x\in \mathfrak{sl}(2)\, , \,v\in V_2\, , \,\phi(v)\in V_2
\end{align*}
E.g. ##\phi(E.Y)=\phi(X)=0## and ##E.\phi(Y)=E.0=0##

You could of course use a more interesting example for ##W##. E.g. ##\mathfrak{sl}(2)## as a vector space is also an ##\mathfrak{sl}(2)## module by its Lie multiplication. Then we could ask for all possible module homomorphisms
\begin{align*}
\phi \, : \,(\mathfrak{sl}(2)\, , \,[.\, , \,.]) &\longrightarrow (V_2\, , \,[.\, , \,.])\\
\phi(x.v)&=\phi([x,v])=x.\phi(v)=[x,\phi(v)]\, , \,x\in \mathfrak{sl}(2)\, , \,v\in \mathfrak{sl}(2)\, , \,\phi(v)\in V_2
\end{align*}
or the other way around, for all possible module homomorphisms
\begin{align*}
\phi\, : \,(V_2\, , \,[.\, , \,.])&\longrightarrow (\mathfrak{sl}(2)\, , \,[.\, , \,.])\\
\phi(x.v)&=\phi([x,v])=x.\phi(v)=[x,\phi(v)]\, , \,x\in \mathfrak{sl}(2)\, , \,v\in V_2\, , \,\phi(v)\in \mathfrak{sl}(2)
\end{align*}

HDB1 said:
2- An ##L##-module ##V## is called simple if it has ...
... only the ...
HDB1 said:
two ##L##-submodules, itself and 0.

Yes. But we have to prove that ##V_2## is a simple ##\mathfrak{sl}(2)## module. You should do it as an exercise.

Let ##v=\alpha X+\beta Y\in V_2## and suppose ##\mathbb{K}\cdot v \subseteq V_2## is a one-dimensional ##\mathfrak{sl}(2)##-submodule of ##V_2.## Then
\begin{align*}
E.v&=\alpha E.X+\beta E.Y =\alpha [E,X] +\beta [E,Y]=\alpha \cdot 0 +\beta \cdot X=\beta X \in \mathbb{K}\cdot v =\mathbb{K} \cdot (\alpha X+\beta Y)\\
\end{align*}
Therefore, ##\beta X=\lambda \alpha X+\lambda \beta Y## for some ##\lambda \in \mathbb{K},## i.e. ##\beta=\lambda\alpha## and ##\lambda \beta =0.## If ##\lambda =0## then ##\beta=\lambda\alpha=0##; if ##\lambda \neq 0## then ##\lambda\beta=0## forces ##\beta=0.## So ##\beta=0## in any case and ##v=\alpha X.## Now,
\begin{align*}
F.v&=\alpha F.X =\alpha [F,X]=\alpha Y \in \mathbb{K}\cdot v= \mathbb{K}\cdot X
\end{align*}
But ##\alpha Y \in \mathbb{K}X## implies that ##\alpha =0## and thus ##v=0.##

This proves that there is no one-dimensional ##\mathfrak{sl}(2)##-submodule in ##V_2##, so ##V_2## with the Lie multiplication of ##\mathfrak{sl}(2)\ltimes V_2## as module operation is simple. Note that it is not simple if we consider the trivial module structure ##x.v=0## instead: then every one-dimensional subspace is a submodule, e.g. ##\mathbb{K}X## and ##\mathbb{K}Y.##
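Not a proof, but as a quick numerical sanity check one can verify on random vectors that any nonzero ##v\in V_2## already generates all of ##V_2## under the action of ##E,H,F## (Python/NumPy, my own illustration):

[CODE=Python]
import numpy as np

E = np.array([[0, 1], [0, 0]])
H = np.array([[1, 0], [0, -1]])
F = np.array([[0, 0], [1, 0]])

rng = np.random.default_rng(0)
for _ in range(1000):
    v = rng.standard_normal(2)                        # a random nonzero vector in V_2
    span = np.column_stack([v, E @ v, H @ v, F @ v])  # v and its images under E, H, F
    assert np.linalg.matrix_rank(span) == 2           # the submodule generated by v is all of V_2
print("no sampled nonzero v spans a proper submodule")
[/CODE]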


HDB1 said:
Or do you have other examples of these definitions with the Lie algebra ##\mathfrak{s l}(2)##?

Thank you so much in advance, :heart: :heart: :heart:

Yes, I do. Let's consider the non-abelian, two-dimensional Lie algebra ##L:=_{def} \operatorname{lin\,span}\{E,H\, : \,[H,E]=2E\}.##

a) Compute ##\mathfrak{A}(L):=_{def}\{f\, : \,L\stackrel{linear}{\longrightarrow} L\, : \,[f(E),H]+[E,f(H)]=0\}.##
b) Show that ##\mathfrak{A}(L)## is a Lie algebra. Which one?
c) Show that ##\mathfrak{A}(L)## is an ##L##-module by the operation
\begin{align*}
\quad X.f &= [\operatorname{ad} X,f]=\operatorname{ad}X \circ f -f\circ \operatorname{ad}X\; , \;X\in \{E,H\}\, , \,f\in \mathfrak{A}(L)\\
\quad (X.f)(Y)&=(\operatorname{ad}X \circ f -f\circ \operatorname{ad}X)(Y)=[X,f(Y)]-f([X,Y])\; , \;X,Y\in L\, , \,f\in \mathfrak{A}(L)
\end{align*}
 
