Linear Algebra
Linear Algebra is a branch of mathematics. It is studied at the university level by maths, physics and engineering students. An introductory linear algebra class is usually taken by third-year students, and higher-level linear algebra is typically studied at a graduate level.
This entry describes some of the topics that are covered in an introductory linear algebra course. It does not attempt to teach any computational methods, or to give any proofs. The purpose of the entry is really to let a student know what they might expect to learn in the class, not to teach it to them.
No mathematical expertise is assumed on the part of the reader, although it will probably be much more interesting to those with a mathematical bent. In order to understand the material in a linear algebra course, nothing more than pre-university level algebra is really required, although most linear algebra students will have already taken at least two terms of calculus.
What's It All For?
For some people, studying linear algebra, or any branch of mathematics, is its own reward, and there is no need to think of applications. Outside the asylum, people use linear algebra for all sorts of things. It is an essential tool for engineers, physicists and all kinds of analysts, who use it for everything from calculating drag on an airplane wing, to predicting the behaviour of subatomic particles, to understanding the interactions of different sectors of an economy.
Within mathematics, linear algebra is one of the pillars of the gateway to Higher Maths. (The other pillar would have to be Calculus, the foundation on which the pillars rest is probably Set Theory, and on top of the pillars.... the metaphor wears thin.) Knowledge of linear algebra is assumed in the study of Real and Complex Analysis, Abstract Algebra, Topology, Differential Geometry, and so on, and so on.
What To Expect from a Linear Algebra Course
The topics discussed in this entry should be covered at some length in any introductory linear algebra course, with some variation, and not necessarily in this order.
The first thing would be to make some sense of the name.
Why 'Linear' Algebra?
What does the word 'linear' mean to a mathematician? A real world example can be helpful. The clearest way to understand something that's linear is to compare it with something that's non-linear.
Think about the fuel gauge in your car. (Unless you have a computerized one, in which case, think of the fuel gauge in your grandfather's car.) When you fill up the tank, the needle on the gauge points at the 'full' mark. As you burn up the fuel, the needle drops towards the 'empty' mark. The idea behind an accurate gauge is that each litre of fuel used causes the needle to move by the same amount. That's linear behaviour. If you graphed the gauge's motion, with fuel use on one axis, and needle movement on the other, you would get a straight line.
If your car is like most others, however, the gauge actually behaves non-linearly. As you drive, it moves from 'full' down towards 'empty', but things aren't so simple in between. Some gauges seem to linger near the top for a long time, and then drop quickly to the 1/4 mark, after which they move steadily towards empty. Others drop within the first few litres to the 1/2 mark, where they wait for a while before gradually descending to the big 'E'. These are examples of non-linear behaviour. One litre near the top of the tank doesn't produce the same needle-movement as one near the bottom of the tank. A graph of a non-linear needle's motion would not be a straight line, but some kind of curve.
Linear Algebra is the study of mathematical objects that act like a well-behaved fuel gauge.
[ Geek-speak: a function f(x) is linear if and only if the following two conditions hold for all x, y and c: f(x + y) = f(x) + f(y), f(cx) = c*f(x)]
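If you'd like to see those two conditions at work, here's a little sketch in Python (the language and the example functions are just for illustration; nothing in the course depends on them):

# A linear function satisfies f(x + y) == f(x) + f(y) and f(c*x) == c*f(x).
def f(x):          # linear: behaves like the well-behaved fuel gauge
    return 5 * x

def g(x):          # non-linear: fails both conditions
    return x ** 2 + 1

x, y, c = 3, 4, 10
print(f(x + y) == f(x) + f(y))   # True
print(f(c * x) == c * f(x))      # True
print(g(x + y) == g(x) + g(y))   # False
print(g(c * x) == c * g(x))      # False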
Systems of Linear Equations
The jumping-off point for the study of linear algebra is something most people learn in secondary school - solving systems of linear equations. A linear equation is one like this: 5x = 45, or like: 2x - y = 7. What makes these linear is that they don't have any funny stuff like x², or cosine x, or any logarithms or square roots, or anything but multiplication by ordinary numbers, and addition and subtraction. A system of linear equations is just a group of them to be solved together. The system could contain just one equation, or any number of them.
Readers may remember that the first example above, 5x = 45, can be solved simply with division. Divide both sides by 5, and it becomes clear that x = 9. The second equation cannot be solved all by itself, because it has two unknowns: x and y. To solve for two unknowns, you need two equations, which you can then solve simultaneously. (Strictly, you need two genuinely different equations - 'independent' equations, a mathematician would say. In general, if you have n unknowns, then you need at least n independent equations to pin them all down.)
If you don't have enough equations to solve for all the unknowns, then there will be multiple solutions. For example, the equation from above, 2x - y = 7 will be true if x = 4 and y = 1, but it can also be solved with x = 6, y = 5. You can actually assign any value at all to x, as long as y is given the compatible value: y = 2x - 7. (For some values of x, y will be negative.) This equation therefore has an infinite number of solutions.
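You can watch a few of those infinitely many solutions go by with a couple of lines of Python (purely illustrative):

# Every value of x gives a solution of 2x - y = 7, by setting y = 2x - 7.
for x in range(-2, 4):
    y = 2 * x - 7
    print(f"x = {x}, y = {y}, check: 2*{x} - ({y}) = {2 * x - y}")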
When an equation like the above example has two variables, we sometimes graph its solutions by drawing a line on an xy-plane. The familiar way to do this is to begin by solving the equation for y, thus putting it in slope-intercept form. As above: y = 2x - 7. This tells us that our line will intercept the y-axis at the point (0, -7), and will slope upward from that point, climbing two units in the y direction for every one unit in the x direction. This method is terrific for equations with only two variables, but once there are three or more, graphical representations become more difficult, if not impossible, to deal with. In Linear Algebra, it is much more common to leave the equation in the form 2x - y = 7 than to think about slopes and intercepts.
Sometimes, a system of more than one linear equation will have no solutions at all. For example:
2x + y = 10
4x + 2y = 12
There are no values of x and y that will make both of these true. If you'd like to convince yourself of that, it's a good exercise!
Any system of linear equations, with any number of unknowns, has either 1 solution, an infinite number of solutions, or no solution at all.
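If you'd rather let a computer do the convincing, here's a sketch using Python's NumPy library (an assumption on our part - the course itself would have you do this by hand):

import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 2.0]])   # coefficients of 2x + y = 10 and 4x + 2y = 12
b = np.array([10.0, 12.0])

try:
    print(np.linalg.solve(A, b))
except np.linalg.LinAlgError:
    # The second row is twice the first, so the matrix is singular:
    # the system has either no solutions or infinitely many. Here, none.
    print("No unique solution - the equations conflict.")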
Solving Equations in a Matrix
The first thing you learn in Linear Algebra is how to solve systems of linear equations by using a matrix (plural: matrices). The last example from above, in a matrix, looks like this:
[2 1 10]
[4 2 12]
All we've done is remove the extra symbols, leaving only numbers behind, and put the whole thing in square brackets. The reason you put equations into a matrix is because it makes it very easy to work with systems of any size, and to keep track of exactly what you're doing. It also saves writing lots of '+' and '=' signs.
Just by analyzing the matrix that represents a system, you can determine whether it has any solutions, and if so, what they are. You can use matrices to analyze any system of linear equations - any number of equations with any number of unknowns.
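For a system that does have a solution, here's what that looks like with NumPy (again, just a sketch; the system here is made up for illustration):

import numpy as np

# x + 2y = 5 and 3x - y = 1, as a coefficient matrix and a right-hand side.
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])

solution = np.linalg.solve(A, b)
print(solution)   # [1. 2.]  i.e. x = 1, y = 2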
Fun With Matrices
Once you start playing with matrices, you find that they don't have to represent systems of equations. They can represent lots of things, or they can just be played with on their own, and made to dance and whirl according to their own weird choreography. Matrices are a little bit like numbers. You can add them to each other (if they're the same size), and you can multiply them by numbers. If the sizes match up in the right way, you can even multiply them by each other. Matrix multiplication is not like anything you will have encountered in maths before linear algebra. The most strikingly weird thing about it is that it's non-commutative.
What does that mean? Well, addition is commutative, because x + y is always the same thing as y + x. The order of the terms doesn't matter. If A and B are matrices, on the other hand, then AB is not usually the same as BA. Multiplying them in one order is different from multiplying them in a different order. Here's a fun example illustrating how something can be non-commutative. (This example will make a lot more sense if the reader actually gets a book or a CD or something and tries it):
Take a book (or a CD, or something), and hold it in both hands, with one hand on each side. Now let the letter 'A' represent the motion of rotating the book one quarter turn clockwise, while keeping the same side facing up. If you start from the standard position, with the spine on the left and the title facing you, right-side up, then A should change the book to a position with the spine facing away from you, but the front cover still visible.
Now, B will represent a different motion of the book, namely flipping it over. B always means flipping it in the same way, so that the part that was furthest from you ends up towards you, and whichever cover was on bottom ends up on top. If you start with the book in standard position, then you'll end up with the spine still on the left, but now you're looking at the back cover, and it's upside-down.
Now for the non-commutative part. Start in standard position, then do A, then do B (rotate, then flip). You should end up with the back cover up, spine near you. Now, starting again from standard position, do B, and then do A (flip, then rotate). Now the book is back cover up, spine away from you. Motions of a book are non-commutative!
Linear Transformations
The motions of the book described above are examples of
linear transformations. Other examples of linear transformations are certain things you can do to an image with a simple computer graphics program. You can rotate it (like the book), you can make it bigger or smaller, you can stretch it in one direction, leaving the other dimension the same, or you can shear it to one side. Shearing something means leaning it over, like what happens to a word when it's put into ITALICS, or what happens to a box when it's opened at both ends and folded up. These are all linear transformations. Fancy transformations, like swirls and curls, are non-linear.
Any of these transformations can be represented by a matrix. For example, the matrix for rotating a book clockwise (the motion A from above) looks like this:
[ 0 1]
[-1 0]
If that doesn't make sense, don't worry. The important thing is that a matrix represents a transformation, and that doing one transformation, and then another, is like multiplying the matrices together - in the right order!
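To see the order-dependence in matrix form, here's a short NumPy sketch (the flip matrix below is our two-dimensional stand-in for the motion B):

import numpy as np

A = np.array([[0, 1],
              [-1, 0]])   # quarter turn clockwise
B = np.array([[1, 0],
              [0, -1]])   # flip over the horizontal axis

print(A @ B)   # [[ 0 -1] [-1  0]]
print(B @ A)   # [[ 0  1] [ 1  0]]  - a different matrix: AB != BA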
Square Matrices, Determinants, and Eigenvalues
There are an awful lot of matrices out there, and it can be difficult to keep track of everything about them. For this reason, you spend a lot of time in a linear algebra class looking at special types of matrices, which you can begin to understand more deeply. One way of reducing the confusion is to restrict the discussion to square matrices - matrices that have as many rows as they have columns. The matrix above, representing the transformation of rotating the book, is 2x2, so it's square.
There are some nice things you can do with square matrices to learn more about them. One thing is to calculate a number called the determinant of a matrix. Determinants are nice, because however big a matrix is, and however many numbers it contains, its determinant is always a single number that gives you some information about it.
What does a determinant tell you? Well, think about linear transformations, like in a photo-editing program. If you start out with a picture that has some area, in square centimetres, and then you apply some transformation to it, then the area of the new picture will equal the old area, multiplied by the size of the determinant of the matrix you used. The determinant of A, the rotation matrix from the book example, is 1, because rotating something doesn't change its area. The determinant of B, the flipping matrix, is -1, because although the area hasn't changed, the minus sign records that the book is now face-down.
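A quick NumPy check of those claims (the stretch matrix S is made up for illustration):

import numpy as np

A = np.array([[0, 1], [-1, 0]])    # rotation: preserves area
B = np.array([[1, 0], [0, -1]])    # flip: same area, reversed orientation
S = np.array([[2, 0], [0, 3]])     # stretch x by 2 and y by 3: scales area by 6

print(np.linalg.det(A))   # 1.0
print(np.linalg.det(B))   # -1.0
print(np.linalg.det(S))   # 6.0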
You can get even more information about a square matrix by calculating its eigenvalues. While the determinant is only one number, a square matrix has as many eigenvalues as it has rows (or columns), although they don't all have to be different numbers. The two eigenvalues of a 2x2 matrix could both be 7, for example, or they could be 7 and 5, or any other pair of numbers. Discussing what the eigenvalues actually mean is beyond the scope of this entry, but we can say a thing or two. If two matrices are similar - kind of like the same matrix, viewed from two different perspectives - then they have all the same eigenvalues, along with some other properties in common. (The reverse isn't quite true: two matrices can have all the same eigenvalues without being similar.) Another neat trick is that if you multiply all the eigenvalues of a matrix together, you get its determinant!
The eigenvalues of the rotation matrix we've been talking about are, oddly enough, the imaginary numbers i and -i. Note that i * -i = 1, our matrix's determinant.
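NumPy will happily confirm this (a sketch, not something from the course):

import numpy as np

A = np.array([[0, 1], [-1, 0]])
values = np.linalg.eigvals(A)
print(values)            # [0.+1.j 0.-1.j]  i.e. i and -i (in some order)
print(np.prod(values))   # (1+0j): the determinant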
Eigenvalues are one of the stranger and harder-to-understand bits of an introductory course in Linear Algebra - a situation made worse by the fact that they're often taught at the end of the term, when everybody's stopped paying attention.
[ Geek-speak: There is no succinct formula for the determinant function, det(A), where A is an n×n matrix. det(A) is, however, uniquely specified by the following three properties: det(I) = 1; det(A) is linear in each row of A, holding the other rows fixed; and any matrix A with two adjacent rows equal has det(A) = 0. In the first of those conditions, and in the next paragraph, I is understood to be the n×n identity matrix, with every entry = 0, except for entries on the main diagonal, which all = 1.]
[ Geek-speak: Eigenvalues are defined as any solutions λ to the equation det(A - λI) = 0. The left-hand side is a polynomial of degree n in λ when A is an n×n matrix.]
Vectors, Dimension and Abstract Spaces
Linear algebra is the first math course in which many students are introduced to the notion of abstract spaces. Abstract spaces are, to many students, very weird and non-intuitive. To understand what an abstract space is, let's start by talking about vectors.
A vector is simpler than a matrix. One way to think of a vector is as one column or row of a matrix. It can be any length, and you can do basic arithmetic with it. For example, if we have two 3-vectors (vectors with three entries) - (2, 3, 7) and (1, 4, -1) - then we can add them together, and come up with another 3-vector: (3, 7, 6). Each entry in the new vector is just the sum of the corresponding entries in the original vectors. Also, a vector can be multiplied by a single number, so 3 * (4, 5, 6) = (12, 15, 18).
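The same arithmetic, sketched in Python (plain lists would do, but NumPy lets the notation match the maths):

import numpy as np

u = np.array([2, 3, 7])
v = np.array([1, 4, -1])
print(u + v)                     # [3 7 6]
print(3 * np.array([4, 5, 6]))   # [12 15 18]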
Vectors are thought of geometrically as little arrows, having some length, and pointing in some direction indicated by the numbers. 2-vectors (or ordered pairs) can all be thought of as living on an xy-plane, which has 2 dimensions. 3-vectors (ordered triples) point in some direction in 3-dimensional space, which is characterised by x, y and z axes. Larger vectors (ordered n-tuples) are thought of as existing in spaces with more than 3 dimensions. Nobody can visualise these spaces, but the maths still works, so it's OK.
There are other things that behave just like vectors. One example is polynomials. Polynomials, the reader may remember from algebra, are expressions like:
4 + 7x - 2x² + x³
In a polynomial, you can go on raising x to as many powers as you like, but you still can't have any really funny stuff, like cosines. Anyway, polynomials are like vectors, because you can add them together just like vectors, and you can multiply them by ordinary numbers, just like vectors. The above polynomial is a third-degree polynomial (because of the x³), and it has four numbers that characterise it - the four coefficients. It therefore corresponds to the 4-vector (4, 7, -2, 1). Generally, a polynomial of degree n corresponds to a vector with n+1 elements.
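Here is that correspondence as a Python sketch (coefficient vectors standing in for the polynomials):

import numpy as np

# 4 + 7x - 2x^2 + x^3, stored as its coefficient vector
p = np.array([4, 7, -2, 1])
# 1 + x + x^2 + x^3
q = np.array([1, 1, 1, 1])

# Adding the polynomials is just adding the vectors:
print(p + q)   # [ 5  8 -1  2]  i.e. 5 + 8x - x^2 + 2x^3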
This is where it gets abstract. Since, for example, second-degree polynomials correspond to 3-vectors, and since 3-vectors live in the space determined by x, y and z axes, you can think of second-degree polynomials as inhabiting a similar space, also 3-dimensional. Each point in space corresponds to a different second-degree polynomial. This is an abstract space. Any space like this, whether its denizens be vectors, polynomials, or some other objects that act like vectors, is called a vector space. Vector spaces can have any number of dimensions - even an infinite number!
A lot of things that are true about vector spaces are true regardless of what kind of objects the vector space is made of. Whatever objects they are, if they behave like vectors, then they live in a vector space, which is just like any other vector space. The structure of the space does not depend at all on the nature of the objects that make it up. This abstractness is characteristic of lots of higher maths, and it takes a bit of getting used to.
The last thing we'll say about vector spaces is that you can have one vector space living inside of another one. For example, since a plane can be a 2-dimensional vector space, a plane located in xyz-space (provided it passes through the origin) can be a subspace of the 3-dimensional vector space in which it's located. One thing you learn about vector spaces is how to distinguish one subspace from another. (Hint: it involves a lot of working with matrices!)
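As a taste of that matrix work, here's an illustrative sketch that checks whether a vector lies in the plane spanned by two others (the vectors themselves are made up):

import numpy as np

u = np.array([1, 0, 2])
v = np.array([0, 1, 1])
w = np.array([2, 3, 7])    # is w in the plane spanned by u and v?

rank_uv = np.linalg.matrix_rank(np.column_stack([u, v]))
rank_uvw = np.linalg.matrix_rank(np.column_stack([u, v, w]))
print(rank_uvw == rank_uv)   # True: w = 2u + 3v, so it lies in the plane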
If reading through these topics makes you hungry to know more, then you would probably enjoy taking a Linear Algebra course. Even if the subject doesn't appeal to you, perhaps this entry has given you a clearer idea of what it's all about, and what sort of ideas are dealt with in higher maths.