Math 232

Linear Algebra

Spring 2005

  • Course information
  • Daily notes: recent and old
  • Homework assignments
  • Fun stuff
  • Check your scores (Data last updated on May 12, 2005)
  • Daily notes

    Friday, May 6

    Our final exam is scheduled for 4:00-6:00 pm on Monday, May 9. The exam will cover material from the entire semester. There will be computational problems and proof problems. One proof problem will be evaluated on both content and writing. This will be made explicit in the statement of that problem.

    You can bring one page (standard notebook size) of notes to use on the final. You can use both sides of the page.

    Here are my office hours for reading period and exam week:

    I will be available on a drop-in basis during office hours. I will be available many other times for appointments. E-mail (martinj@ups.edu) to set a time to meet.

    Return to top.

    Homework assignments

    Here, FCLA refers to A First Course in Linear Algebra and JRA refers to Johnson/Riess/Arnold. I'll give problem numbers from the 4th edition of JRA. Note: The 5th edition (which has an orange cover) has a new Chapter 2, so Chapter N in the 5th edition corresponds to Chapter N-1 in the 4th edition for N greater than or equal to 3. Most of the problems in the 5th edition have the same problem number as those in the 4th edition. For those problems that differ, the 5th edition number is given in orange following a slash.
    Each entry below lists the FCLA section (with the text version in parentheses where relevant), the problems to do, the problem(s) to submit with a due date, and any comments.

    WILA
      Do: M70; JRA 1.1: 1-6
      Submit: none

    SSSLE
      Do: C10, M40; JRA 1.1: #7,11,14,15,39,40/41
      Submit: 38/39, due Monday, Jan 24
      Comment: Problems 39 and 40 are "baby" versions of Theorem EOPSS.

    RREF
      Do: C10,C11,C12,C30; JRA 1.1: #25,37; 1.2: #1,9,11,19,23,27,29,31,37,49
      Submit: 44, due Wednesday, Jan 26
      Comment: For 1.2 #1 and 9, ignore the given instructions. Instead, transform the given matrix to RREF.

    TSS
      Do: C10,M51,M52,M60; JRA 1.3: #1,3
      Submit: none

    HSE
      Do: C10,C20,M45,M50-52; JRA 1.3: #7-19 odd
      Submit: due Monday, Jan 31, prove the following statement: A system of linear equations is homogeneous if and only if the system has the zero solution.

    NSM
      Do: C30,C40; JRA 1.7: #16,17,18,21,22,24
      Submit: none
      Comment: In the directions for Section 1.7, replace Mx=θ by LS(M,0).

    VO
      Do: C10,T13,T17; JRA 1.5: #7,13,15
      Submit: T18, due Wednesday, Feb 9
      Comment: T13: prove commutativity of vector addition (3); T17: prove associativity of scalar multiplication (7); T18: prove distributivity across vector addition (8).

    LC
      Do: C21,C22,C40,M10
      Submit: none

    SS
      Do: C40,C41,C60,M20,M21; JRA 2.3: #15,17,19,21
      Submit: none

    LI
      Do: C20-22,C50-52,C60,M50,T20
      Submit: due Wednesday, Feb 16, prove the following statement: If a set of vectors contains the zero vector, then the set is linearly dependent.

    O
      Do: C20-22; JRA 2.6: #3,5,9,11,13,15
      Submit: 28, due Monday, Feb 21
      Comment: For 2.6 #28, replace the wording by the following: Let B={...} be an orthonormal set. Let v be any vector in Sp(B), where...

    MO (0.33)
      Do: JRA 1.5: #1,3; 1.6: #44
      Submit: 47, due Wednesday, Feb 23
      Comments: For 1.6 #44, use Theorem VSPM in place of Theorem 7. For 1.6 #47, use Theorem TASM (ver. 0.30) or Theorem TMA (ver. 0.33) in place of Theorem 10.

    RM (0.34)
      Do: C30,C32,C40-43; JRA 2.3: #29-35 odd, 41,50
      Submit: none
      Comment: For 2.3 #41, replace instruction (b) by "If b is in R(A), then exhibit a solution of LS(A,b)."

    RS (0.34)
      Do: C30,C40,C41
      Submit: none

    MM (0.35)
      Do: C20,T22,T40; JRA 1.5: #9,11,31-41 odd, 55,57
      Submit: due Thursday, Mar 3, prove the following: Let A be an (m×n) matrix. Let B and C be (n×p) matrices. If A is nonsingular and AB=AC, then B=C.

    MISLE (0.36)
      Do: C21-24,C26,C27,T10; JRA 1.9: #7,19,29/31,49/51,52/54,54/56,65/67,66/68
      Submit: 56/58, due Monday, Mar 7
      Comment: In C22, the reference should be to Exercise MISLE.C21, not to Example MISLE.C21.

    MINSM (0.36)
      Do: T10,T11; JRA 3.7: #15,17,18,29
      Submit: none
      Comment: In 3.7 #29, ignore the last question.

    Chapter VS warmup
      Do: #1-7 from handout
      Submit: none
      Comment: We'll refer to the ideas in these problems as we work through Chapter VS.

    VS (0.36)
      Do: T10,M10; JRA 4.2: #1-5,9,11,13,15,19
      Submit: 36, due Friday, Mar 11

    S (0.36)
      Do: C20,C25,C26,M20; JRA 2.2: #5,7,9,13,15,18,19,21,32; 4.3: #1-11 odd,17,19,23
      Submit: 31, due Wednesday, Mar 23

    B (0.37)
      Do: C20,C30; JRA 4.4: #1,3,5,7,9,32,36
      Submit: none

    D (0.37)
      Do: C20,M20; JRA 2.5: #17,23,25,27,29,32,33; 4.5: #1-3,7-13
      Submit: 40, due Thursday, Mar 31

    PD (0.37)
      Do: C10,T15,T25,T60; JRA 2.5: #7,8,9; 4.5: #16
      Submit: 17, due Monday, April 4

    DM (0.38)
      Do: C25,C30,M20,M30; JRA 3.2: #1,3,9,11,17,24,26,27
      Submit: none
      Comment: Note that JRA denotes a cofactor by Aij, whereas in class I am using cij and in FCLA the notation is CA,ij.

    EE (0.38)
      Do: C20,C21,T10; JRA 3.4: #3,7,11; 3.5: #5,7,9,13,17; 3.6: #21,23,33
      Submit: none

    PEE (0.38)
      Do: JRA 3.5: #21,22,23; 3.6: #41,42
      Submit: 28, due Monday, April 11
      Comment: See 3.6 #37 for notation used in 3.6 #42.

    Induction
      Do: #1-3 from handout
      Submit: #3, due Monday, April 11

    SD (0.38)
      Do: C20,T15,T16; JRA 3.7: #3,5,7,26
      Submit: none

    LT (0.39)
      Do: C15,C20,C25,C30,M10; JRA 2.7: #1,2,5,7,11,13,15,17,19,23,33; 4.7: #1-9 odd; 4.8: #1-4
      Submit: none

    ILT (0.40)
      Do: C10,C25,T10,T15; JRA 2.7: #3; 4.7: #26
      Submit: none
      Comment: See also the problems below for ILT/SLT. These problems concern both the null space and the range of a linear transformation. You can do the parts that concern null space now and leave the parts that concern range until we've covered that idea in Section SLT.

    SLT (0.40)
      Do: C10,C25; JRA 4.7: #22
      Submit: none

    ILT/SLT (0.40)
      Do: JRA 2.7: #29; 4.7: #13,16,17,18,19,20,21,27
      Submit: none

    IVLT (0.40)
      Do: C50,T15,T16; JRA 4.8: #7,9,18,19,23,24,26,27
      Submit: none
      Comment: Recall that C[0,1] is the vector space of all continuous functions with domain [0,1].

    VR (0.41)
      Do: C10,C20; JRA 4.4: #12,13,15,19,27
      Submit: none
      Comment: The notation [v]B in JRA is equivalent to ρB(v) in FCLA.

    MR (0.41)
      Do: C20,C30,C40; JRA 4.9: #1,2,3,5,7,11,13,14,19,28
      Submit: none
      Comment: In JRA, a matrix representation is often denoted Q, while in FCLA the notation is MTB,C. The FCLA notation is more detailed because it explicitly includes the name of the transformation, T, and the names of the bases, B and C.

    Return to top.

    Fun Stuff

    The Mathematical Atlas describes the many fields and subfields of mathematics. The site has numerous links to other interesting and useful sites about mathematics.

    If you are interested in the history of mathematics, a good place to start is the History of Mathematics page at the University of St. Andrews, Scotland.

    Check out the Astronomy Picture of the Day.

    Return to top.

    Old daily notes

    Wednesday, January 19

    You should read Section SLE.WILA and work on the assigned problem. The main goal of the trail mix example is to give you some idea of how systems of linear equations arise in applications. This example involves some ideas we'll see more of in this course and some ideas from an area of mathematics called optimization.

    If you want a printed copy of A First Course in Linear Algebra, bring $22 (cash or check made out to Rob Beezer) to me by Friday at class time. Rob Beezer will place the order on Friday and pick it up in time to distribute in class next Monday.

    Friday, January 21

    There is an error in Example SAE of Section SLE.RREF. At the row operation -2R1+R2, Row 1 is not correct. This error propagates through the rest of the example. This error does not affect the conclusion in the example since the reasoning only uses the bottom row of the final system.

    Rob Beezer is offering a $1 reward for each mathematically significant error or typo that is found. (The above error is an example of something mathematically significant; a misspelled word is not mathematically significant.) If you find an error, send me an e-mail and I'll forward it to Professor Beezer, or send an e-mail directly to Professor Beezer at beezer@ups.edu.

    Monday, January 24

    Much of the material in Sections RREF and TSS is quite easy when you apply it to a specific example. The more difficult task is to form definitions and arguments that work in general (with m equations in n unknowns without giving specific values for m and n). As you read these sections, you should keep in mind that the goal is developing results that can be applied generally. This is more difficult, and more powerful, than any specific example.

    In each of the last two class meetings, I forgot to distribute a handout with some comments on writing in mathematics. I'll bring copies to the next class. Please, someone remind me to hand them out!

    You should read Section SLE.TSS before class on Wednesday. The sets defined as D and F are hard to think about in the general definition but easy to figure out in any specific example. In the end, the set D gives the index values for the dependent variables and the set F gives the index values for the free variables, hence the labels D and F.
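    The sets D and F described above are easy to extract from any specific example. As a hedged illustration (my own, not part of the course materials; the course itself uses no software), here is how SymPy's rref() exposes them for one invented augmented matrix:

```python
# Hypothetical example: find the index sets D (dependent variables, the
# pivot columns) and F (free variables) for a specific augmented matrix.
from sympy import Matrix

# Augmented matrix of a 2-equation, 3-unknown system (invented for illustration).
A = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 4]])

R, pivots = A.rref()          # R is the RREF; pivots are 0-based pivot columns
D = [j + 1 for j in pivots]   # dependent-variable indices, 1-based as in FCLA
F = [j + 1 for j in range(A.cols - 1) if j not in pivots]  # free indices
                              # (the last column is the augmented column)
print(D, F)                   # here D = [1, 3] and F = [2]
```

    In any concrete case like this, reading off D and F is mechanical; the hard part, as noted above, is reasoning about them for general m and n.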

    Wednesday, January 26

    Someone in the other section of linear algebra has spotted another mathematically significant typo. The last line on p. 57 (of version 0.30) should read "Archetype E has r=3" rather than "has r=2".

    You should continue reading Section SLE.TSS and work on the assigned problems. You should read the definitions carefully so you can use the language correctly. You should read the theorems carefully so you can apply the results in problems and in proving other theorems. You should read the proofs carefully so you can gather ideas and techniques for constructing your own proofs. Bring any questions you have on the reading and problems to class tomorrow.

    Thursday, January 27

    Rob Beezer has updated the Exercises for Sections SLE.RREF, SLE.TSS, SLE.HSE, and SLE.NSM. I brought printed copies of these to class today. The assignments below for TSS and HSE are from the new version. If you are using an electronic version, you should download Version 0.31 from the text web page.

    In class, we started the transition from a focus on systems of equations to a focus on matrices and vectors. In a sense, the first steps in this transition are simply rewriting old ideas in a new language. So, for example, the new idea of the null space of a matrix is defined as the old idea of the solution set of a homogeneous system of equations.
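    To make the rewriting concrete, here is a small sketch (my own example, assuming SymPy; not from the text) of the new idea defined via the old one, the null space of A as the solution set of the homogeneous system LS(A, 0):

```python
# The null space of A, computed as the solution set of Ax = 0.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])        # rank 1, so the null space has dimension 2

basis = A.nullspace()          # a linearly independent spanning set for N(A)
for v in basis:
    # each vector returned is a solution of the homogeneous system
    assert A * v == Matrix([0, 0])

print(len(basis))              # prints 2
```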

    Thursday, February 3

    I've put the previous Daily notes into a section at the bottom of this page. I'll do this after each exam and start fresh here at the top of the page.

    Friday, February 11

    Yesterday, I completely messed up in posting new homework. I wiped out the assignment for Section SS and mislabeled the assignment for Section LI. It's all correct now. On Monday, we'll address any questions you have from the assignments for Sections SS and LI.

    The proof you are to submit for Section LI requires only a little spark of insight. Once you have the right insight, you can write down a proof in a few sentences. So, don't be alarmed if you have a short proof and be a little suspicious if your proof is running more than a few sentences.

    Friday, February 18

    Exam #2 will be on Thursday, February 24 from 8:00 to 9:20. It will cover all of Chapter V (Vectors) and the first three sections of Chapter M (Matrices).

    Monday, February 21

    I've decided to include Section M.RS on Thursday's exam. The new idea in this section is the row space of a matrix. The definition involves two old ideas, namely matrix transpose and range. Specifically, the row space of a matrix is defined as the range of the transpose of the matrix. The utility of row space comes through the theorem: If A and B are row-equivalent, then the row space of A is equal to the row space of B. I've assigned a few problems from this section.
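    A quick numerical check of that theorem (my own invented example, using SymPy; not from the text):

```python
# Row-equivalent matrices have the same row space. Here B is obtained
# from A by the single row operation -2R1 + R2, and the nonzero rows of
# their common RREF give a spanning set for the shared row space.
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 3]])
B = Matrix([[1, 2, 1],
            [0, 0, 1]])        # A after the row operation -2R1 + R2

RA, _ = A.rref()
RB, _ = B.rref()
assert RA == RB                # same RREF, hence the same row space
```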

    Note that in the printed version of the text (0.30), something funny happened typographically in the notation for row space. What should read RS(A) appears as ∇s(A).

    Wednesday, February 23

    Tomorrow's exam will cover all of Chapter V and the first three sections of Chapter M. The format will be similar to the first exam: definitions to state, computational problems, proofs. In this case, I will ask you to prove one of the vector space properties for column vectors or matrices. The exam will also include one take-home problem (on the Gram-Schmidt process) that will be due in class on Friday.
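    For reference, a minimal sketch of the Gram-Schmidt process for real column vectors (my own code, written for this page; the take-home problem itself is not reproduced here):

```python
# Classical Gram-Schmidt: from each vector, subtract its projections onto
# the orthogonal vectors already produced.
import numpy as np

def gram_schmidt(vectors):
    """Return an orthogonal set spanning the same space as `vectors`."""
    ortho = []
    for v in vectors:
        w = v.astype(float)
        for u in ortho:
            w = w - (np.dot(u, v) / np.dot(u, u)) * u  # remove component along u
        ortho.append(w)
    return ortho

u1, u2 = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
assert abs(np.dot(u1, u2)) < 1e-12   # the outputs are orthogonal
```

    This sketch skips the normalization step and the zero-vector check that a careful treatment would include.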

    At first glance, there seems to be a lot of material to cover for this exam. The material comes together in many ways. One central theme is that of finding a linearly independent spanning set for either the null space of a matrix or the range of a matrix. Many of the important theorems focus on producing a set of vectors S that is linearly independent and has Sp(S) equal to either the null space of a matrix (Theorem BNS) or the range of a matrix (Theorems BROC, RNS, and BRS). In general, the three theorems about a linearly independent spanning set for the range of a matrix produce different spanning sets. Each will be useful in its own way.
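    One concrete instance of this theme (an invented example of mine, using SymPy): a linearly independent spanning set for the range of a matrix, built from its pivot columns.

```python
# The pivot columns of A form a linearly independent spanning set for the
# range (column space) of A.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])        # rank 2

basis = A.columnspace()        # the pivot columns of A
assert len(basis) == 2         # two linearly independent columns span R(A)
```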

    Monday, February 28

    For the exercise to be submitted from Section MM, you should use only the ideas through Section MM. There is a nice proof that uses matrix inverse but I want you to use the ideas through Section MM only.

    Monday, February 28

    Make note of Theorem MMIP in Section MM. This gives an alternate way of expressing an inner product. This alternate expression for the inner product is used in JRA. (Note that the vectors in JRA are assumed to have real entries so the complex conjugation does not appear explicitly.)
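    For real vectors (the JRA setting, where the conjugation drops out), the content of the theorem is that the inner product of u and v equals the matrix product of u transpose with v. A quick numerical check of my own:

```python
# For real vectors, the inner product u . v equals the 1x1 matrix u^T v.
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

lhs = np.dot(u, v)                                   # inner product
rhs = (u.reshape(1, -1) @ v.reshape(-1, 1)).item()   # u^T v as a 1x1 matrix
assert lhs == rhs == 8.0
```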

    Monday, March 7

    In class, we worked on the problems from the handout Linear combinations of matrices. The basic idea is to extend the notions of linear combination, span, relation of linear dependence, and linearly independent/dependent to apply to a set of matrices. We'll use this as a second detailed example (column vectors being a first example) as we develop the general idea of a vector space in Chapter VS.

    Friday, March 11

    We have now hit a big idea, namely vector space. Much of what we have done up to this point has been to set up for this idea (and others to follow). We have the general idea of a vector space and a number of concrete examples: column vectors of size n, (m×n) matrices, polynomials of degree n or less, sequences, and functions with the complex numbers as domain. JRA also gives another example with which you should become familiar. This is the set of all continuous functions with the interval [a,b] as domain and operations of addition and scalar multiplication defined in the "obvious" or "natural" way. You should read Example 5 in Section 4.2 of JRA.

    There are two kinds of problems in the current and upcoming sections. Some problems involve abstract vector spaces without specifying any example. Other problems are about a specific flavor of vector space. The problem sets have quite a few problems so that you become familiar with abstract vector spaces and the different flavors of vector space in concrete examples.

    Monday, March 21

    Exam #3 will be on Thursday, March 24 from 8:00-9:20. It will cover Chapter M Sections MISLE and MINSM and Chapter VS Sections VS, S, and B (except the last three subsections of B).

    In Section VS.B, a basis for a vector space is defined as a linearly independent spanning set. We have been working with this idea for some time without naming it. In Chapter V, we found linearly independent spanning sets for the null space of a matrix. In Chapter M, we found linearly independent spanning sets for the range of a matrix. For the handout Linear combinations of matrices, you found linearly independent spanning sets for various subspaces of matrices.

    Friday, April 1

    In the last few weeks, we have studied an abstract idea, namely vector space. Our next topic, the eigenvalue problem, is a bit more concrete. Eigenvalues and eigenvectors are special numbers and vectors associated with a given square matrix. The standard way to compute the eigenvalues of a matrix involves using a determinant. So, in the last two days, we have looked at the determinant of a square matrix. We will develop only the bare necessities of determinants, just enough to use them as a tool in the eigenvalue problem. This is the one place we will accept some results without proof. The proofs of these results are not difficult but take time to develop and would require dropping other, more important, topics. If you are interested in the proofs, you can read Section 3.3 of JRA.
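    The standard computation mentioned above, eigenvalues as roots of det(A - tI), can be sketched as follows (my own example, using SymPy):

```python
# Eigenvalues of A as the roots of the characteristic polynomial det(A - tI).
from sympy import Matrix, symbols, solve, eye

t = symbols('t')
A = Matrix([[2, 1],
            [1, 2]])

char_poly = (A - t * eye(2)).det()     # here (2-t)^2 - 1 = t^2 - 4t + 3
eigenvalues = solve(char_poly, t)
assert sorted(eigenvalues) == [1, 3]
```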

    Friday, April 8

    Exam #4 will be Thursday, April 14 from 8:00-9:20. It will cover Sections D and PD from Chapter VS, Chapter D, and Chapter E.

    Wednesday, April 20

    We are now studying linear transformations. Linear transformations map one vector space, the domain, to another vector space, the codomain, in a way that respects the vector addition and scalar multiplication that is defined for each of the two vector spaces. In Chapter R, we will see that by choosing a basis for the domain and a basis for the codomain, we can represent abstract vectors by column vectors and represent any linear transformation as multiplication of a column vector by a matrix.
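    As a small preview of that representation idea (my own example, not from the text): the derivative map on polynomials of degree at most 2 is a linear transformation, and choosing the bases B = {1, x, x^2} for the domain and C = {1, x} for the codomain turns it into matrix multiplication.

```python
# Matrix representation of the derivative map, relative to B = {1, x, x^2}
# and C = {1, x}. Columns are the C-coordinates of T applied to each vector
# of B: T(1) = 0, T(x) = 1, T(x^2) = 2x.
import numpy as np

M = np.array([[0, 1, 0],
              [0, 0, 2]])

# Coordinate vector of p(x) = 3 + 5x + 4x^2 relative to B:
p = np.array([3, 5, 4])
assert list(M @ p) == [5, 8]   # coordinates of p'(x) = 5 + 8x relative to C
```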

    Exam #5 will be on Wednesday, May 4 from 7:30 (gasp) to 8:50 am. It will cover the material in Chapters LT and R.

    Friday, April 29

    In class, we talked about the big ideas in Section CB, the last section of FCLA. I'm not assigning problems on this material and will not hold you accountable for these ideas. On Monday, we'll do a quick review and then focus on the questions you bring on reading and problems. We'll also talk briefly about the final exam. On Wednesday, we'll have Exam #5 covering Chapters LT and R with the exception of Section R.CB.