Why Use Gram-Schmidt to Make a Unitary Matrix?

  • #1
nomadreid
Gold Member
TL;DR Summary
Why does Gram-Schmidt turn a matrix into a unitary one?
I understand the rationale for using the Gram-Schmidt process to find an orthogonal (or orthonormal) basis from a given set of linearly independent vectors (e.g., eigenvectors of a Hermitian matrix). However, the rationale for using it on the columns of a matrix in order to get a unitary matrix (for example, if one diagonalizes a matrix and gets a matrix ##P## in ##PMP^{-1}## which is not unitary) is not clear. (Simply normalizing the columns doesn't work for all matrices.)

An intuitive explanation would be super. Thanks.
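To make the parenthetical remark concrete, here is a minimal numpy sketch (the matrix is just a made-up example): normalizing the columns alone does not produce a unitary matrix when the columns are not already orthogonal.

```python
import numpy as np

# A made-up matrix whose columns are linearly independent but not orthogonal
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Normalize each column to unit length
Pn = P / np.linalg.norm(P, axis=0)

# If Pn were unitary, Pn^H @ Pn would be the identity.
# It isn't, because the columns are still not orthogonal to each other.
print(Pn.conj().T @ Pn)
```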
 
  • #2
What is the relationship between the columns of ##P## and the eigenvectors of ##M##?
 
  • #3
Pasmith, as far as I know, they are the same. So after applying G-S, I get them to be orthogonal. How does this make the matrix unitary?
 
  • #4
Traditionally, the only difference between an orthogonal and a unitary matrix is whether you are working over the real or complex numbers, so I'm a bit confused about what exactly you're stuck on.

Gram-Schmidt over the complex numbers has some complex conjugates when computing the inner product, which you might be forgetting to include if you're getting your fields mixed up.
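For instance, here is a bare-bones Gram-Schmidt sketch in numpy (just an illustration, with a made-up input matrix); the conjugation lives in np.vdot, which conjugates its first argument:

```python
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A (assumed linearly independent)."""
    Q = np.array(A, dtype=complex)
    for j in range(Q.shape[1]):
        for k in range(j):
            # Project out the component along the already-built column k.
            # np.vdot conjugates its first argument -- the step that is
            # easy to forget over the complex numbers.
            Q[:, j] -= np.vdot(Q[:, k], Q[:, j]) * Q[:, k]
        Q[:, j] /= np.linalg.norm(Q[:, j])
    return Q

A = np.array([[1, 1], [1j, 0]])   # made-up complex matrix
Q = gram_schmidt(A)
print(np.allclose(Q.conj().T @ Q, np.eye(2)))  # True: Q is unitary
```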
 
  • #5
Sorry about the delay in the reply.

A side remark: the difference does not seem to be merely whether one is working over the real or complex numbers, since even over the complex numbers, although an orthogonal matrix is unitary, not all unitary matrices are orthogonal. But anyway, I get your point that they are related, with similar definitions. My question was a bit different.

But I found a partial answer to my own question for the case I was working on: there was a typo that made me think a particular matrix was normal when it wasn't, and hence it was not unitarily diagonalizable. It was diagonalizable, but the diagonalizing matrix (is there a term for the ##P## in ##PMP^{-1}##?) was not unitary, and I was looking for a way to turn it into a unitary matrix. Somewhere (oops, didn't keep the source) I saw a suggestion to do so via the Gram-Schmidt process. However (given the incorrect matrix), that would have been a dead end, so the question up to that point became invalid.

However, this gives rise to another question. Suppose I have a normal matrix, and I diagonalize it. There are many ways to diagonalize it, and I presume that not all of them will yield a unitary matrix. Am I correct that, in order to get a unitary matrix from a non-unitary diagonalizing matrix, I just have to normalize it? Or is that too simple?

Thanks for your patience and your help.
 
  • #6
nomadreid said:
A side remark: the difference does not seem to be merely whether one is working over the real or complex numbers, since even over the complex numbers, although an orthogonal matrix is unitary, not all unitary matrices are orthogonal. But anyway, I get your point that they are related, with similar definitions. My question was a bit different.

The unitary matrices that contain only real numbers are orthogonal :)

nomadreid said:
However, this gives rise to another question. Suppose I have a normal matrix, and I diagonalize it. There are many ways to diagonalize it, and I presume that not all of them will yield a unitary matrix. Am I correct that, in order to get a unitary matrix from a non-unitary diagonalizing matrix, I just have to normalize it? Or is that too simple?

Just to check, your specific question here is, if ##AA^*=A^*A##, and you find some ##P## such that ##PAP^{-1}## is diagonal, is it guaranteed that ##P## is unitary?

I guess I agree it doesn't have to be. ##P## can have as columns any eigenvectors, and it will only be unitary if you pick eigenvectors of unit length.
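A quick numerical sketch of that point (the Hermitian matrix is made up, and its eigenvalues are distinct, which is why the eigenvectors below come out orthogonal):

```python
import numpy as np

# A made-up Hermitian (hence normal) matrix with distinct eigenvalues
M = np.array([[2, 1j],
              [-1j, 2]])

w, P = np.linalg.eig(M)        # columns of P are unit eigenvectors
P = P @ np.diag([3.0, 0.5])    # rescale the columns: P still diagonalizes M...
print(np.allclose(P.conj().T @ P, np.eye(2)))  # ...but False: P is not unitary

P = P / np.linalg.norm(P, axis=0)              # renormalize each column
print(np.allclose(P.conj().T @ P, np.eye(2)))  # True: unitary again
```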
 
  • #7
Thanks, Office_Shredder
Office_Shredder said:
The unitary matrices that contain only real numbers are orthogonal :)
Indeed, but I am a little confused by the diversity of the definitions of an orthogonal unitary matrix: in some places I see the orthogonal matrix defined as real from the start, whereas elsewhere I just see the condition that the eigenvectors have a dot product of 1. The matrix
##\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}##
satisfies the latter, but not the former. Or, if the inner product must be zero, then
##\begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}##
would work. So does the definition of orthogonal require that it be real?
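For what it's worth, checking the first matrix numerically (a quick numpy sanity check, nothing more):

```python
import numpy as np

U = np.array([[0, -1j],
              [1j, 0]])

print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary
print(np.isrealobj(U))                         # False: its entries are not all real
```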

Office_Shredder said:
I guess I agree it doesn't have to be. ##P## can have as columns any eigenvectors, and it will only be unitary if you pick eigenvectors of unit length
So if I just normalize the columns of P, resulting in P', is P' guaranteed to be unitary?
 
  • #8
nomadreid said:
Thanks, Office_Shredder

Indeed, but I am a little confused by the diversity of the definitions of an orthogonal unitary matrix: in some places I see the orthogonal matrix defined as real from the start, whereas elsewhere I just see the condition that the eigenvectors have a dot product of 1. The matrix
##\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}##
satisfies the latter, but not the former. Or, if the inner product must be zero, then
##\begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}##
would work. So does the definition of orthogonal require that it be real?

I think the thing you're forgetting is the dot product is the inner product for ##\mathbb{R}^n##, but not for ##\mathbb{C}^n##. Both orthogonal and unitary matrices satisfy that the columns form an orthonormal basis under the standard inner product of that vector space.

Typically a matrix which is referred to as orthogonal is *assumed* to have the field of the vector space be the real numbers, not the complex numbers.
nomadreid said:
So if I just normalize the columns of P, resulting in P', is P' guaranteed to be unitary?

I'm not sure what exactly it means to normalize the columns of P, but I suspect the answer is yes.
 
  • #9
Thanks for your help and your patience, Office_Shredder

Office_Shredder said:
I'm not sure what exactly it means to normalize the columns of P, but I suspect the answer is yes.
I mean to divide each column by its length so that the length of the resulting column vector is 1. For this to work, all of ##P##'s columns and rows (where ##P## is the matrix in the diagonalization ##PMP^{-1}##) would have to have the same length. My question here is whether the assumption of normality is necessary and sufficient to ensure this uniformity.

Office_Shredder said:
I think the thing you're forgetting is the dot product is the inner product for ##\mathbb{R}^n##, but not for ##\mathbb{C}^n##. Both orthogonal and unitary matrices satisfy that the columns form an orthonormal basis under the standard inner product of that vector space.
Ah, yes, I should not have mentioned the dot product. My example
##\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}##
satisfies the basic definition that multiplying it by its Hermitian conjugate results in the identity; its columns and rows have length one; the two columns ##A## and ##B## (or the two rows) have inner product ##\langle A|B\rangle = A^*B## (or ##AB^*## if you like, where ##C^*## is the Hermitian conjugate of ##C##) equal to zero; and the absolute value of its determinant is one. In other words, it checks all the boxes except that it is not real.
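Checking those boxes numerically (same matrix as before, just a sanity check):

```python
import numpy as np

U = np.array([[0, -1j],
              [1j, 0]])

print(np.allclose(np.linalg.norm(U, axis=0), 1))  # columns have length one
print(np.allclose(np.linalg.norm(U, axis=1), 1))  # rows have length one
print(np.isclose(np.vdot(U[:, 0], U[:, 1]), 0))   # columns are orthogonal
print(np.isclose(abs(np.linalg.det(U)), 1))       # |det| = 1
```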

Office_Shredder said:
Typically a matrix which is referred to as orthogonal is *assumed* to have the field of the vector space be the real numbers, not the complex numbers.
If this is not part of the definition of orthogonality, is there a reason (convenience?) for this assumption?
 
  • #10
nomadreid said:
If this is not part of the definition of orthogonality, is there a reason (convenience?) for this assumption?

I think it is part of the definition.
 
  • #11
Office_Shredder said:
I think it is part of the definition.
Ah. Interesting, since orthogonality of vectors does not have this restriction in its definition. I suppose this part of the definition for matrices serves some purpose, or maybe no one ever needed a matrix as in my example to be orthogonal.
 
  • #12
A square matrix over ##\mathbb C## is unitary if and only if its column vectors are pairwise orthonormal. The dot product over ##\mathbb C## is defined as ##\langle x,y \rangle = \sum x_i\overline{y}_i##.
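Spelling the equivalence out: ##(U^*U)_{jk} = \sum_i \overline{u_{ij}}\,u_{ik} = \langle u_k, u_j\rangle##, so ##U^*U = I## says precisely that the columns ##u_1,\dots,u_n## are pairwise orthonormal.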
 
  • #13
nuuskur said:
A square matrix over ##\mathbb C## is unitary if and only if its column vectors are pairwise orthonormal. The dot product over ##\mathbb C## is defined as ##\langle x,y \rangle = \sum x_i\overline{y}_i##.

Thanks for the note, nuuskur. Did you perhaps mean "inner product" instead of "dot product"?

The inner product definition you cited is, I believe, the "mathematician's version", whereas the "physicist's version", which is essentially the same, takes the complex conjugate of the first argument rather than the second. Or do I have those reversed?
 
  • #14
Dot product and inner product are synonyms in this context. Inner product is a more general term for a map ## V\times V\to F ## with some extra properties, where ##V## is a vector space over ##F##. In the case of ##\mathbb R^n## and ##\mathbb C^n## one usually says dot product. As far as I know the complex dot product is always regarded the way I pointed out earlier. I haven't checked what happens with ##\langle x,y \rangle = \sum \overline{x}_iy_i ##.
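(For what it's worth, the two conventions are complex conjugates of each other, ##\sum \overline{x}_iy_i = \overline{\sum x_i\overline{y}_i}##, so they agree on which vectors are orthogonal and on the length of every vector.)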
 
  • #15
nuuskur said:
I haven't checked what happens with ##\langle x,y \rangle = \sum \overline{x}_iy_i##.
Small typo: the line goes over the ##x_i##.
 
