Eigenvectors

What Are Eigenvectors and Eigenvalues in Math?


Two important concepts in Linear Algebra are eigenvectors and eigenvalues for a linear transformation that is represented by a square matrix. Besides being useful in mathematics for solving systems of linear differential equations, diagonalizing matrices, and other applications, eigenvectors and eigenvalues are used in quantum mechanics and molecular physics, chemistry, geology, and many other scientific disciplines.

Some definitions:

An eigenvector for an n x n matrix A is a nonzero vector ##\vec{x}## such that ##A\vec{x} = \lambda \vec{x}##, for some scalar ##\lambda##.

An eigenvalue for a given eigenvector is a scalar ##\lambda## (usually a real or complex number) for which ##A\vec{x} = \lambda \vec{x}##.  The Greek lower-case letter ##\lambda## (“lambda”) is traditionally used to represent the scalar in this definition.

The first definition above is deceptively simple. A point that might be helpful is that an eigenvector ##\vec{x}## for a matrix A represents a favored direction, in the sense that the product ##A\vec{x}## produces a result vector that is a scalar multiple of ##\vec{x}##. In other words, when a matrix A multiplies an eigenvector, the result is a vector in the same (or possibly opposite) direction, something that is not generally true of vectors that aren’t eigenvectors.
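
To see this "favored direction" idea numerically, here is a quick sketch in Python with the NumPy library, using the matrix from Example 1 below. The vector ##\begin{bmatrix} 1 \\ 1\end{bmatrix}## turns out to be an eigenvector (see Example 2), while ##\begin{bmatrix} 1 \\ 0\end{bmatrix}## is not:

```python
import numpy as np

A = np.array([[1, 3],
              [-1, 5]])

# An eigenvector: the product is just 4 times the original vector.
print(A @ np.array([1, 1]))   # [4 4]

# Not an eigenvector: the product points in a genuinely new direction.
print(A @ np.array([1, 0]))   # [ 1 -1]
```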

Finding eigenvalues

Starting from the definition, we have

##A\vec{x} = \lambda \vec{x}##

##\Rightarrow A\vec{x} - \lambda \vec{x} = \vec{0}##

##\Rightarrow (A - \lambda I)\vec{x} = \vec{0}##

In the step above, I can’t subtract a scalar (##\lambda##) from a matrix, so I’m subtracting ##\lambda## times the identity matrix of appropriate size.

In the last equation above, one solution would be ##\vec{x} = \vec{0}##, but we don’t allow this possibility, because an eigenvector has to be nonzero. Another apparent solution would be ##A - \lambda I = 0## (the zero matrix), but we can’t conclude that, either: because of the way matrix multiplication is defined, a matrix times a vector can result in the zero vector even if neither the matrix nor the vector is zero. All we can be sure of is that the determinant of ##A - \lambda I## must be zero. (If the determinant were nonzero, ##A - \lambda I## would be invertible, and multiplying both sides of the equation by the inverse would give ##\vec{x} = \vec{0}##, which we have ruled out.)

In other words, ##|A - \lambda I| = 0.##

To find the eigenvalues of a square matrix A, find the values of ##\lambda## for which ##|A - \lambda I| = 0.##

Example 1: Find the eigenvalues for the matrix ##A = \begin{bmatrix} 1 & 3 \\ -1 & 5\end{bmatrix}.##

In the work that follows, I’m assuming that you know how to evaluate a determinant.

Solution: ##|A - \lambda I|= \begin{vmatrix} 1 - \lambda & 3 \\ -1 & 5 - \lambda \end{vmatrix} = 0##

##\Rightarrow (1 - \lambda)(5 - \lambda) - (-3) = 0##

##\Rightarrow 5 - 6\lambda + \lambda^2 + 3 = 0##

##\Rightarrow \lambda^2 - 6\lambda + 8 = 0##

##\Rightarrow (\lambda - 4)(\lambda - 2) = 0##

##\Rightarrow \lambda = 4 \text{ or } \lambda = 2##

The eigenvalues are 4 and 2.
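
If you have Python and NumPy handy, you can cross-check these values numerically:

```python
import numpy as np

A = np.array([[1, 3],
              [-1, 5]])

# The roots of |A - lambda*I| = lambda^2 - 6*lambda + 8:
print(np.linalg.eigvals(A))   # [2. 4.], possibly in a different order
```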


Finding eigenvectors

After you have found the eigenvalues, you are now ready to find the eigenvector (or eigenvectors) for each eigenvalue.

To find the eigenvector (or eigenvectors) associated with a given eigenvalue, solve for ##\vec{x}## in the matrix equation ##(A - \lambda I)\vec{x} = \vec{0}##. This must be done separately for each eigenvalue.
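
Example 2 below carries this procedure out by hand. If you would like a symbolic cross-check, here is a minimal sketch using the SymPy library; the eigenspace for each eigenvalue is the null space of ##A - \lambda I##:

```python
from sympy import Matrix, eye

A = Matrix([[1, 3],
            [-1, 5]])

# eigenvals() returns a dict {eigenvalue: algebraic multiplicity}.
for lam in A.eigenvals():
    # The eigenspace for lam is the null space of A - lam*I.
    print(lam, (A - lam * eye(2)).nullspace())

# Output (in some order):
# 4 [Matrix([[1], [1]])]
# 2 [Matrix([[3], [1]])]
```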


Example 2: Find the eigenvectors for the matrix ##A = \begin{bmatrix} 1 & 3 \\ -1 & 5\end{bmatrix}.##

(This is the same matrix as in Example 1.)

Work for ##\lambda = 4##

##A - 4I= \begin{bmatrix} 1 - 4 & 3 \\ -1 & 5 - 4\end{bmatrix} = \begin{bmatrix} -3 & 3 \\ -1 & 1\end{bmatrix}##

To find an eigenvector associated with ##\lambda = 4##, we are going to solve the matrix equation ##(A - 4I)\vec{x} = \vec{0}## for ##\vec{x}##. Rather than write the matrix equation out as a system of equations, I’m going to take a shortcut, and use row reduction on the matrix ##A - 4I.## After row reduction, I’ll write the system of equations that is represented by the reduced matrix.

In the work shown here, I’m assuming that you are able to solve a system of equations in matrix form, using row operations to get an equivalent matrix in reduced row-echelon form. Using row operations, we find that the matrix above is equivalent to ##\begin{bmatrix} 1 & -1 \\ 0 & 0\end{bmatrix}.##
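
If you would like to verify the row reduction itself, SymPy’s rref method returns the reduced row-echelon form together with the pivot columns:

```python
from sympy import Matrix

M = Matrix([[-3, 3],
            [-1, 1]])      # this is A - 4I

# rref() returns the reduced row-echelon form and the pivot columns.
reduced, pivots = M.rref()
print(reduced)             # Matrix([[1, -1], [0, 0]])
print(pivots)              # (0,)
```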

The reduced matrix represents this system of equations:

##x_1 = x_2##

##x_2 = x_2##

We can write this as ##\vec{x} = \begin{bmatrix} x_1 \\ x_2\end{bmatrix} = x_2\begin{bmatrix} 1 \\ 1\end{bmatrix}##, where ##x_2## is a parameter.

An eigenvector for ##\lambda = 4## is ##\begin{bmatrix} 1 \\ 1\end{bmatrix}.##

This is not the only possible eigenvector for ##\lambda = 4##; any scalar multiple (except the zero multiple) will also be an eigenvector.

As a check, satisfy yourself that ##\begin{bmatrix} 1 & 3 \\ -1 & 5\end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = 4\begin{bmatrix} 1 \\ 1 \end{bmatrix}##, thus showing that ##A\vec{x} = \lambda \vec{x}## for our eigenvalue/eigenvector pair.

Work for ##\lambda = 2##

##A - 2I= \begin{bmatrix} 1 - 2 & 3 \\ -1 & 5 - 2\end{bmatrix} = \begin{bmatrix} -1 & 3 \\ -1 & 3\end{bmatrix}##

Using row operations to get this matrix in reduced row-echelon form, we find that it is equivalent to ##\begin{bmatrix} 1 & -3 \\ 0 & 0\end{bmatrix}.##

This matrix represents the following system of equations:

##x_1 = 3x_2##

##x_2 = x_2##

We can write this as ##\vec{x} = \begin{bmatrix} x_1 \\ x_2\end{bmatrix} = x_2\begin{bmatrix} 3 \\ 1\end{bmatrix}##, where ##x_2## is a parameter.

An eigenvector for ##\lambda = 2## is ##\begin{bmatrix} 3 \\ 1\end{bmatrix}.##

As a check, satisfy yourself that ##\begin{bmatrix} 1 & 3 \\ -1 & 5\end{bmatrix} \begin{bmatrix} 3 \\ 1 \end{bmatrix} = 2\begin{bmatrix} 3 \\ 1 \end{bmatrix}##.
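
Both eigenvalue/eigenvector pairs can also be checked in a few lines of NumPy:

```python
import numpy as np

A = np.array([[1, 3],
              [-1, 5]])

for lam, x in [(4, np.array([1, 1])), (2, np.array([3, 1]))]:
    # For an eigenvalue/eigenvector pair, A @ x equals lam * x.
    assert np.allclose(A @ x, lam * x)
print("both pairs check out")
```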


For the final example, we’ll look at a 3 x 3 matrix.

Example 3: Find the eigenvalues and eigenvectors for the matrix ##A = \begin{bmatrix} 1 & 0 & -4 \\ 0 & 5 & 4 \\ -4 & 4 & 3\end{bmatrix}.##

Because this example deals with a 3 x 3 matrix instead of the 2 x 2 matrix of the previous examples, the work is considerably longer. The solution I provide won’t show the level of detail of the previous examples. I leave it to readers of this article to flesh out the details I have omitted.

Solution:

(Part A – Finding the eigenvalues)

Set ##|A - \lambda I|## to 0 and solve for ##\lambda##.

##|A - \lambda I| = 0##

##\Rightarrow  \begin{vmatrix} 1 - \lambda & 0 & -4 \\ 0 & 5 - \lambda & 4 \\ -4 & 4 & 3 - \lambda \end{vmatrix} = 0##

##\Rightarrow -\lambda^3 + 9\lambda^2 + 9\lambda - 81 = 0##

##\Rightarrow (\lambda - 9)(\lambda^2 - 9) = 0##

∴ The eigenvalues are ##\lambda = 9##, ##\lambda = 3##, and ##\lambda = -3.##

I’ve skipped a lot of steps above, so you should convince yourself, by expanding the determinant and factoring the resulting third-degree polynomial, that the values shown are the correct ones.
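
If you would rather not expand the determinant by hand, a short SymPy sketch will do it for you:

```python
from sympy import Matrix, eye, factor, symbols

lam = symbols('lambda')
A = Matrix([[1, 0, -4],
            [0, 5, 4],
            [-4, 4, 3]])

# The characteristic polynomial |A - lambda*I|:
p = (A - lam * eye(3)).det()
print(p.expand())   # -lambda**3 + 9*lambda**2 + 9*lambda - 81
print(factor(p))    # -(lambda - 9)*(lambda - 3)*(lambda + 3), in some order
```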

(Part B – Finding the eigenvectors)

I’ll show an outline of the work for ##\lambda = 9##, but will just show the results for the other two eigenvalues, ##\lambda = 3## and ##\lambda = -3##.

Work for ##\lambda = 9##

If ##\lambda = 9##,

##\begin{bmatrix} 1 - \lambda & 0 & -4 \\ 0 & 5 - \lambda & 4 \\ -4 & 4 & 3 - \lambda \end{bmatrix} = \begin{bmatrix} 1 - 9 & 0 & -4 \\ 0 & 5 - 9 & 4 \\ -4 & 4 & 3 - 9\end{bmatrix} = \begin{bmatrix} -8  & 0 & -4 \\ 0 & -4 & 4 \\ -4 & 4 & -6\end{bmatrix}##

The last matrix on the right is equivalent to ##\begin{bmatrix} 2  & 0 & 1 \\ 0 & 1 & -1 \\ 2 & -2 & 3\end{bmatrix}.##

Using row operations to put this matrix in reduced row-echelon form, we arrive at this fully reduced matrix:

##\begin{bmatrix} 1  & 0 & \frac 1 2\\ 0 & 1 & -1 \\ 0 & 0 & 0\end{bmatrix}##

This matrix represents the following system of equations:

##x_1 = -\frac 1 2 x_3##

##x_2 = x_3##

##x_3 = x_3##

We can write this system in vector form, as

##\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x_3\begin{bmatrix} -\frac 1 2 \\ 1 \\ 1\end{bmatrix}##, where ##x_3## is a parameter.

An eigenvector for ##\lambda = 9## is ##\begin{bmatrix} -\frac 1 2 \\ 1 \\ 1\end{bmatrix}.##

Any nonzero multiple of this eigenvector is also an eigenvector, so we could just as well have chosen ##\begin{bmatrix} -1 \\ 2\\ 2\end{bmatrix}## for the eigenvector.

As before, you should always check your work, by verifying that ##\begin{bmatrix} 1 & 0 & -4 \\ 0 & 5 & 4 \\ -4 & 4 & 3\end{bmatrix} \begin{bmatrix} -1 \\ 2\\ 2\end{bmatrix} = 9 \begin{bmatrix} -1 \\ 2\\ 2\end{bmatrix}.##
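
The row reduction and the eigenvector for ##\lambda = 9## can also be reproduced with the same SymPy approach:

```python
from sympy import Matrix, eye

A = Matrix([[1, 0, -4],
            [0, 5, 4],
            [-4, 4, 3]])
M = A - 9 * eye(3)

print(M.rref()[0])    # Matrix([[1, 0, 1/2], [0, 1, -1], [0, 0, 0]])
print(M.nullspace())  # [Matrix([[-1/2], [1], [1]])]
```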

Results for ##\lambda = 3## and ##\lambda = -3##

Using the same procedure as above, I find that an eigenvector for ##\lambda = 3## is ##\begin{bmatrix} -2 \\ -2\\ 1\end{bmatrix}##, and that an eigenvector for ##\lambda = -3## is ##\begin{bmatrix} 1 \\ -\frac 1 2\\ 1\end{bmatrix}.## If you wish to avoid fractions, it’s convenient to choose ##\begin{bmatrix} 2 \\ -1\\ 2\end{bmatrix}## for an eigenvector for ##\lambda = -3.##

Summary for Example 3

For the matrix of this example, the eigenvalues are ##\lambda = 9##, ##\lambda = 3##, and ##\lambda = -3.## In the same order, a set of eigenvectors for these eigenvalues is ##\left\{\begin{bmatrix} -1 \\  2\\ 2\end{bmatrix}, \begin{bmatrix} -2 \\ -2\\ 1\end{bmatrix}, \begin{bmatrix} 2 \\ -1\\ 2\end{bmatrix}\right\}.##
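
As a final numerical check, NumPy’s eig function returns the eigenvalues along with unit-length eigenvectors; each column it returns is a scalar multiple of one of the eigenvectors above:

```python
import numpy as np

A = np.array([[1, 0, -4],
              [0, 5, 4],
              [-4, 4, 3]])

w, V = np.linalg.eig(A)
print(w)  # the eigenvalues 9, 3, and -3, in some order

# Each column of V is a unit-length eigenvector for the matching entry of w,
# so A @ v should equal lam * v up to floating-point error.
for lam, v in zip(w, V.T):
    assert np.allclose(A @ v, lam * v)
```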


Replies
  1. XGWManque says:

    This is not *at all* a rigorous, well-thought definition, but…

    I personally like to think of it this way: whenever a linear operator acts on some vector space, it transforms the vector subspaces inside, right? There might be some subspaces that aren't rotated or manipulated in any other direction, only scaled by some factor. Those invariant subspaces contain the eigenvectors, and the scaling factor is the corresponding eigenvalue. It's not hard to extend this general idea of "invariance" to the idea of, say, finding the allowed states of a system in quantum mechanics, especially when you remember that the Hamiltonian is a linear operator. Linear algebra in general is the art of taking algebraic, abstract concepts, and putting them into concrete matrices and systems of equations.

  2. Stephen Tashi says:
    ibkev said: "but the engineer in me cries out… 'But what is it for?' :)"

    An unsophisticated indication:

    Suppose a problem involves a given matrix ##M## and many different column vectors ##v_1, v_2, v_3, \ldots##, and that you must compute the products ##Mv_1, Mv_2, Mv_3, \ldots##.

    Further suppose you have your choice about what basis to use in representing the vectors and that ##M## has two eigenvectors ##e_1, e_2## with respective eigenvalues ##\lambda_1, \lambda_2##.

    In the happy event that each vector ##v_i## can be represented as a linear combination of ##e_1, e_2##, you could do all the multiplications without actually multiplying a matrix times a vector in detail. For example, if ##v_1 = 3e_1 + 4e_2## then
    ##Mv_1 = 3Me_1 + 4Me_2 = 3\lambda_1 e_1 + 4 \lambda_2 e_2##.

    The coordinate representation of that would be ##M \begin{pmatrix} 3 \\ 4 \end{pmatrix} = \begin{pmatrix} 3 \lambda_1 \\ 4 \lambda_2 \end{pmatrix}## provided the coordinates represent the vectors ##v_i## in the basis ##\{e_1, e_2\}##.

    Of course, you might say "But I’d have to change all the ##v_i## to be in the ##\{e_1, e_2\}## basis". However, in practical data collection, raw data is reduced to some final form, so at least you know that the "eigenbasis" would be a good format for the reduced data. Also, in theoretical reasoning, it is often simpler to imagine that a set of vectors is represented in the eigenbasis of a particular matrix.

    A randomly chosen set of vectors can't necessarily be represented using the eigenvectors of a given matrix as a basis. However, there are many situations where the vectors involved in a physical situation can be represented using the eigenvectors of a matrix involved in that situation. Why Nature causes this to happen varies from case to case. When it does happen in Nature, we don't want to overlook it because it offers a great simplification.

  3. ibkev says:

    I recently stumbled across a great "intuition oriented" supplement to Mark44's Insight article. It has some nice animations that help visualize it from a geometric perspective.

  4. Mark44 says:

    kaushikquanta said: "excellent, but please explain 'when a matrix A multiplies an eigenvector, the result is a vector in the same (or possibly opposite) direction'"

    This is from the basic definition of an eigenvector.

    An eigenvector ##\vec{x}## for a matrix is a nonzero vector for which ##A\vec{x} = \lambda\vec{x}## for some scalar ##\lambda##. From this definition, it can be seen that ##A\vec{x}## results in a vector that is a multiple of ##\vec{x}##, hence it has the same direction or the opposite direction. Any time you multiply a vector by a scalar, the new vector is in the same direction or the opposite direction.

  5. kaushikquanta says:

    excellent, but please explain:
    "when a matrix A multiplies an eigenvector, the result is a vector in the same (or possibly opposite) direction"

  6. kaushikquanta says:

    a good explanation, but please explain the statement: "In other words, when a matrix A multiplies an eigenvector, the result is a vector in the same (or possibly opposite) direction."

  7. Mark44 says:

    smodak said: "Sorry my LaTex was all messed up. Here is what I meant to say…

    'All we can be sure of is that det(A–λ) must be zero'. How do you arrive at this?
    Are you saying that ##\vec{x}## cannot be ##\vec{0}## and ##A - \lambda## may not be ##\vec{0}##.
    => ##A - \lambda## does not have an inverse.
    => ##\det(A - \lambda)## must be 0. Is that the reasoning?"

    The full quote near the beginning of the article is this:

    "In the last equation above, one solution would be ##\vec{x} = \vec{0}##, but we don’t allow this possibility, because an eigenvector has to be nonzero. Another apparent solution would be ##A - \lambda I = 0## (the zero matrix), but we can’t conclude that, either: because of the way matrix multiplication is defined, a matrix times a vector can result in the zero vector even if neither the matrix nor the vector is zero. All we can be sure of is that the determinant of ##A - \lambda I## must be zero."

    The "last equation" in the quoted text above is ##(A - \lambda I)\vec{x} = \vec{0}##. My statement about ##|A - \lambda I|## being zero doesn’t follow from ##A - \lambda I## not being invertible; it’s really the other way around (i.e., since ##|A - \lambda I| = 0##, ##A - \lambda I## doesn’t have an inverse).

    Note that what you wrote, ##A - \lambda##, isn’t correct. Subtracting a scalar (##\lambda##) from a matrix is not defined.

  8. smodak says:

    Quoting my earlier post: "'All we can be sure of is that the determinant of IA–λI must be zero'. How do you arrive at this?
    Are you saying that vec{x} cannot be vec{0}. And A – lambda may not be vec{0}.

    =>A – lambda does not have an inverse.
    =>det|A – lambda| must be 0.
    Is that the reasoning?"

    Sorry my LaTex was all messed up. Here is what I meant to say…

    "All we can be sure of is that det(A–λ) must be zero". How do you arrive at this?
    Are you saying that ##\vec{x}## cannot be ##\vec{0}## and ##A - \lambda## may not be ##\vec{0}##.
    => ##A - \lambda## does not have an inverse.
    => ##\det(A - \lambda)## must be 0. Is that the reasoning?

  9. smodak says:

    "All we can be sure of is that det(A–λ) must be zero". How do you arrive at this?
    Are you saying that ##\vec{x}## cannot be ##\vec{0}## and ##A - \lambda## may not be ##\vec{0}##.
    => ##A - \lambda## does not have an inverse.
    => ##\det(A - \lambda)## must be 0. Is that the reasoning?

  10. smodak says:

    “All we can be sure of is that the determinant of IA–λI must be zero”. How do you arrive at this?
    Are you saying that vec{x} cannot be vec{0}. And A – lambda may not be vec{0}.

    =>A – lambda does not have an inverse.
    =>det|A – lambda| must be 0.
    Is that the reasoning?

  11. WWGD says:

    chakradhar said: "Adding some points to ibkev’s question: eigenvalues for a matrix are those which basically represent the entire matrix. For example, if we talk about a human face, there are some points which are enough to differentiate between two faces; these unique points can be considered as eigenvalues for the 'human face' matrix. So, by eigenvectors and eigenvalues we can transform a bigger matrix into a smaller one, and with less calculation we can have most of the information related to the matrix."

    How do you assign a matrix to a face (sounds like a setup for a joke)?

  12. chakradhar says:

    Adding some points to ibkev’s question: eigenvalues for a matrix are those which basically represent the entire matrix. For example, if we talk about a human face, there are some points which are enough to differentiate between two faces; these unique points can be considered as eigenvalues for the "human face" matrix.
    So, by eigenvectors and eigenvalues we can transform a bigger matrix into a smaller one, and with less calculation we can have most of the information related to the matrix.

  13. Mark44 says:

    ibkev said: "Eigenvalues/vectors is something I’ve often wanted to learn more about, so I really appreciate the effort that went into writing this article Mark. The problem is that I feel like I’ve been shown a beautiful piece of abstract art with lots of carefully thought out splatters but the engineer in me cries out… 'But what is it for?' :)"

    My background isn’t in engineering, so I’m not aware of how eigenvalues and eigenvectors are applicable to engineering disciplines, if at all. An important application of these ideas is in diagonalizing square matrices to solve a system of differential equations. A few other applications, as listed in this Wikipedia article (https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors), are:

    - Schrödinger equation in quantum mechanics
    - Molecular orbitals
    - Vibration analysis
    - Geology and glaciology (to study the orientation of components of glacial till)

  14. ibkev says:

    Eigenvalues/vectors is something I’ve often wanted to learn more about, so I really appreciate the effort that went into writing this article Mark. The problem is that I feel like I’ve been shown a beautiful piece of abstract art with lots of carefully thought out splatters but the engineer in me cries out… “But what is it for?” :)

    “Here is an awesome tool that is very useful to a long list of disciplines. It’s called a screwdriver. To make use of it you grasp it with your hand and turn it. The end.” Nooooo! Don’t stop there – I don’t have any insight yet into why this tool is so useful, nor intuition into the types of problems I might encounter where I would be glad I had brought my trusty screwdriver with me.

    I would truly love to know these things, so I hope you will consider adding some additional exposition that offers insight into why eigenstuff is so handy.

  15. vin300 says:

    For how this helps the physics people: the eigenvalues reduce the components of a large tensor into only as many components as the order of the matrix. These reduced ones have the same effect as all the tensorial components combined. About eigenvectors, I’m not sure how it is applied.

  16. Mark44 says:

    geoffrey159 said: "Nice insight!!!
    If you like it, I have an example of an application for your post to Euclidean geometry. You could explain how eigenvalues and eigenvectors are helpful in order to carry out a full description of the isometries in dimension 3, and conclude that they are rotations, reflections, and the composition of a rotation and a reflection about the plane orthogonal to the axis of rotation."

    Thank you for the offer, but I think that I will decline. All I wanted to say in the article was a bit about what they (eigenvectors and eigenvalues) are, and a brief bit on how to find them. Adding what you suggested would go well beyond the main thrust of the article.

  17. geoffrey159 says:

    Nice insight!!!
    If you like it, I have an example of an application for your post to Euclidean geometry. You could explain how eigenvalues and eigenvectors are helpful in order to carry out a full description of the isometries in dimension 3, and conclude that they are rotations, reflections, and the composition of a rotation and a reflection about the plane orthogonal to the axis of rotation.

  18. Samy_A says:

    2nafish117 said: "but howw???
    ah i got it just as i was writing this.
    let 'x' be a nonzero vector and let det(A) ≠ 0.
    then premultiplying with A(inverse) we get
    (A(inverse)*A)*x = A(inverse)*0
    which then leads to the contradiction x = 0
    am i right???
    i’m sorry that i don’t know how to use latex."

    Yes, that’s basically it.

  19. 2nafish117 says:

    Samy_A said: "Correct, you can’t write that. Note that Mark44 doesn’t write ##|A - \lambda I||\vec x| = 0##. He correctly writes ##|A - \lambda I| = 0##.

    In general, if for a square matrix ##B## there exists a nonzero vector ##\vec x## satisfying ##B\vec x = \vec 0##, then the determinant of ##B## must be 0.
    That’s how ##(A - \lambda I)\vec{x} = \vec{0}## implies ##|A - \lambda I| = 0##."

    but howw???
    ah i got it just as i was writing this.
    let 'x' be a nonzero vector and let det(A) ≠ 0.
    then premultiplying with A(inverse) we get
    (A(inverse)*A)*x = A(inverse)*0
    which then leads to the contradiction x = 0
    am i right???
    i’m sorry that i don’t know how to use latex.

  20. Samy_A says:

    2nafish117 said: "i do not understand how det(A-lambda(I))=0
    since x is not a square matrix we cannot write det((A-lambda(I))*x) = det(A-lambda(I))*det(x)"

    Correct, you can’t write that. Note that Mark44 doesn’t write ##|A - \lambda I||\vec x| = 0##. He correctly writes ##|A - \lambda I| = 0##.

    In general, if for a square matrix ##B## there exists a nonzero vector ##\vec x## satisfying ##B\vec x = \vec 0##, then the determinant of ##B## must be 0.
    That’s how ##(A - \lambda I)\vec{x} = \vec{0}## implies ##|A - \lambda I| = 0##.

  21. WWGD says:

    Samy_A said: "It’s a mystery challenging the basic foundations of Physics: he seems to refer to the image mentioned in the post *following* his own post. :oldsmile:
    If that is the case, the blue and violet arrows are the eigenvectors, not the red."

    How about asking Haruki to write an insight on that one — time travel??

  22. Krylov says:

    Mark44 said: "What image are you talking about? The article doesn’t have any images in it."

    Could it be a reference to the Mona Lisa at the top of the Insight?

  23. Samy_A says:

    Mark44 said: "What image are you talking about? The article doesn’t have any images in it."

    It’s a mystery challenging the basic foundations of Physics: he seems to refer to the image mentioned in the post *following* his own post. :oldsmile:
    If that is the case, the blue and violet arrows are the eigenvectors, not the red.

  24. Mark44 says:

    Haruki Chou said: "So the red/blue arrows on the image are eigenvectors?"

    What image are you talking about? The article doesn’t have any images in it.

  25. Mark44 says:

    QuantumQuest said: "Great job Mark!"

    RJLiberator said: "Excellent information, Mark44!"

    Thanks! My Insights article isn’t anything groundbreaking — just about every linear algebra text will cover this topic. My intent was to write something for this site that was short and sweet, on why we care about eigenvectors and eigenvalues, and how we find them in a couple of examples.

  26. Dr. Courtney says:

    I learned about eigenvalues and eigenvectors in quantum mechanics first, so I can't help but think of wavefunctions and energy levels of some Hamiltonian.  Other cases are (to me) just different analogs to wave functions and energy levels. Nice article.  I eventually took linear algebra and learned it the way you are presenting it, but I was a senior in college taking the course by correspondence.
