Math 310

Numerical Analysis

Spring 2007

    Homework assignments

    For assignments with no problems to be submitted, a target date is given. For problems to be submitted, a due date is given. I'll ask for questions on the relevant section in the class period that precedes the due date.
    Section | Problems to do | Submit | Target or due date | Comments
    2.1 | 1, 5, 7, 11, 18 | None | Friday, January 19
    2.1 | Programming problem | None | Friday, January 19
    1.2 | 1, 3, 5, 7, 13, 15, 16, 21 | None | Monday, January 22
    2.3 | 1, 2 | None | Tuesday, January 23
    Mathematica Assignment 1 | All | | Wednesday, January 24
    2.2 | 1, 3, 7, 11(a,b,c), 17, 24 | 16 | Tuesday, January 30 | Problem 11 asks you to find a suitable interval. You need not find the biggest possible suitable interval.
    2.3 | 11, 17, 19, 23 | 16 | Friday, February 2
    1.3 | 6, 7, 8, 9, 13, 15 | None | Friday, February 2 | Problems 8 and 9 relate to things we talked about earlier in the semester.
    2.4 | 1(a,b), 3(a,b), 8, 9, 11 | None | Friday, February 2
    2.5 | 7, 11(a,c), 14 | None | Monday, February 5
    3.1 | 1, 5(a,c) | None | Friday, February 9 | For at least one of these problems, construct the interpolating polynomial by hand rather than using something like the Mathematica code from class.
    3.2 | 1, 3, 5, 7, 11 | None | Monday, February 12
    3.1 | 3, 9(a), 21 | None | Wednesday, February 14
    3.3 | 1(a,c), 3(a), 5 | None | Friday, February 16
    3.4 | 3, 15, 17, 19, 26, 31 | 30 | Wednesday, February 21
    3.5 | 1-4 | None | Friday, February 23 | For Problem 4, you'll need to figure out how to interpret the table. I found the column labels to be unhelpful (misleading, in fact).
    4.1 | 5(a,b), 7(a,b), 9(a), 13, 19 | 22 | Wednesday, February 28
    4.2 | 2, 5, 8, 9 | 15 | Friday, March 2
    4.3 | 21(a,b,d,e), 22 | None | Monday, March 5
    4.4 | 7, 11, 13, 15, 20 | None | Tuesday, March 6
    4.9 | 1(a,c), 3(a,c), 4(a) | None | Friday, March 23
    5.1 | 3(a,d), 6 | None | Wednesday, March 28
    5.2 | 1(a,c), 9 | None | Wednesday, March 28
    5.3 | 9(a,b) | None | Friday, March 30
    5.4 | 3(a,b), 7(a,b), 17(a,b), 31 | 15(a,b) | Monday, April 2 | See comment in the March 28 daily note.
    5.5 | 3(a,b) | None | Wednesday, April 4
    5.6 | 3(a,b), 7(a,b), 12 | None | Friday, April 6
    6.1 | 3, 12, 13, 15, 20 | None | Monday, April 16
    6.2 | 1(a,b), 2(a,b), 3(a,b), 9(a), 13(a), 17(a) | None | Monday, April 16
    6.5 | 1(a), 3(c), 7(a,d), 9(c) | 11 | Friday, April 20
    7.1 | 1, 2, 5(c), 7, 9 | None | Tuesday, April 24
    7.2 | 7(b,e), 14, 15, 16, 17 | None | Wednesday, April 25
    7.3 | 5(a,c), 7(a,c), 9(a,c), 11(a,c), 13(a,c), 15(a,c), 18, 23 | None | Friday, April 27

    Daily notes

    Wednesday, May 2

    Topics: review
    Text: everything

    Tuesday, May 1

    Topics: Mathematica programming tools
    Text:
    Code: 05_01_2007.nb

    Monday, April 30

    Topics: iterative refinement
    Text: Section 7.4

    Friday, April 27

    Topics: condition number as measure of "distance" to singular matrix
    Text: Section 7.4

    Wednesday, April 25

    Topics: relaxation methods; condition number and perturbing b
    Text: Sections 7.3, 7.4
    Code: 04_25_2007.nb

    Tuesday, April 24

    Topics: Gauss-Seidel iteration; residuals; condition number
    Text: Sections 7.3, 7.4

    Monday, April 23

    Topics: spectral radius and natural matrix norm; examples of Jacobi iteration
    Text: Sections 7.2, 7.3

    Friday, April 20

    Topics: eigenvalues and spectral radius; convergence of Jacobi iteration
    Text: Sections 7.2, 7.3

    Wednesday, April 18

    Topics: matrix norms; natural matrix norms; computing certain natural matrix norms
    Text: Section 7.2

    Tuesday, April 17

    Topics: preview of Jacobi iteration; vector norms
    Text: Sections 7.1, 7.3

    Monday, April 16

    Topics: counting flops; overview of iterative methods
    Text: Sections 6.1,6.2,6.5

    Counting flops (floating point operations) provides a crude measure of computational efficiency. I've intentionally skipped over this idea until now. In class, we counted flops for Gaussian elimination as an example.
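
    As a sanity check on the counts from class, here's a small Mathematica sketch (my own tally, assuming the usual bookkeeping: step k of forward elimination takes n-k divisions plus (n-k)(n-k+1) multiplications and as many subtractions, and back substitution adds (n^2+n)/2 multiplications/divisions and (n^2-n)/2 additions/subtractions):

        (* total multiplications/divisions and additions/subtractions for
           Gaussian elimination with back substitution on an n x n system *)
        multDiv[n_] := Sum[(n - k) (n - k + 2), {k, 1, n - 1}] + (n^2 + n)/2;
        addSub[n_] := Sum[(n - k) (n - k + 1), {k, 1, n - 1}] + (n^2 - n)/2;
        Simplify[{multDiv[n], addSub[n]}]
        (* both grow like n^3/3: n^3/3 + n^2 - n/3 and n^3/3 + n^2/2 - 5 n/6 *)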

    Friday, April 13

    Topics: no class
    Text:

    Wednesday, April 11

    Topics: LU decomposition algorithm; permutations
    Text: Section 6.5

    Tuesday, April 10

    Topics: pivoting strategies; LU decomposition
    Text: Sections 6.2, 6.5
    Code: 04_10_2007.nb

    Monday, April 9

    Topics: Gaussian elimination and back-solving
    Text: Section 6.1
    Code: 04_09_2007.nb

    Friday, April 6

    Topics: stability
    Text: Section 5.10

    I was completely incoherent at the end of class (and possibly much earlier, but I'm quite sure about the end part). I'll write something later this weekend to articulate what I was trying to say.

    I've sent an e-mail with the take-home exam as an attachment. It's due next Friday, April 13.

    Wednesday, April 4

    Topics: Ricci flow and the Poincaré Conjecture
    Text: none
    Code: Talk slides (2.5 MB PDF file)

    Thanks for indulging me on this.

    Tuesday, April 3

    Topics: Adams-Moulton four-step method; a predictor-corrector method
    Text: Section 5.6
    Code: 04_03_2007.nb

    Monday, April 2

    Topics: termination of RKF in the Slope Field Calculator; Adams-Bashforth four-step method
    Text: Sections 5.5, 5.6
    Code: 04_02_2007.nb

    Exam #3 will be take-home. I will distribute it this Friday and it will be due the following Friday.

    Friday, March 30

    Topics: an adaptive method: RKF
    Text: Section 5.5
    Code: 03_30_2007.nb

    Wednesday, March 28

    Topics: Runge-Kutta methods
    Text: Section 5.4
    Code: 03_28_2007.nb

    For Problem 15(a,b) that you are to submit, you should write Mathematica code to implement the Runge-Kutta method of order 4 given in the text and then use your code to do the requested calculations.
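
    If you want a feel for the shape such code can take, here's a minimal sketch of one classical RK4 step (my own arrangement, with a made-up test equation; you should still write and test your own code):

        (* one step of the classical Runge-Kutta method of order 4 for y' = f[t, y] *)
        rk4Step[f_, {t_, w_}, h_] := Module[{k1, k2, k3, k4},
          k1 = h f[t, w];
          k2 = h f[t + h/2, w + k1/2];
          k3 = h f[t + h/2, w + k2/2];
          k4 = h f[t + h, w + k3];
          {t + h, w + (k1 + 2 k2 + 2 k3 + k4)/6}]
        f[t_, y_] := y - t^2 + 1;   (* a hypothetical test equation, not Problem 15's *)
        NestList[rk4Step[f, #, 0.2] &, {0., 0.5}, 10]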

    Tuesday, March 27

    Topics: error analysis for Euler's method; 2nd order Taylor method
    Text: Sections 5.2, 5.3

    In class, we derived and implemented the 2nd order Taylor method. This involves computing partial derivatives of f(t,y). This is the n=2 case of Equation (5.17) in the text. The text's version looks different from what we wrote down in class because it is expressed using the notation T^(n), and the derivative of f with respect to t is not expanded (using the chain rule) in the general expression. Instead, the text does this calculation for each example.
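
    Here's a minimal sketch of the n=2 case, letting Mathematica do the chain-rule expansion (the equation f below is a made-up example):

        (* 2nd order Taylor method: w_{i+1} = w_i + h [f + (h/2)(f_t + f_y f)] *)
        f[t_, y_] := y - t^2 + 1;   (* hypothetical example equation *)
        df[t_, y_] := Module[{s, u},
          (D[f[s, u], s] + D[f[s, u], u] f[s, u]) /. {s -> t, u -> y}]
        taylor2[{t_, w_}, h_] := {t + h, w + h (f[t, w] + h/2 df[t, w])};
        NestList[taylor2[#, 0.2] &, {0., 0.5}, 10]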

    We won't dwell on the ideas in Section 5.3 and will quickly move on to Runge-Kutta methods (which we started in on at the end of class).

    Monday, March 26

    Topics: slope fields; error analysis for Euler's method
    Text: Sections 5.1, 5.2
    Code: 03_26_2007.nb

    The slope field/numerical approximation applet for first-order differential equations I demonstrated in class is called JOde (for Java ODE, I think). It is available through the web page of Marek Rychlik (University of Arizona) who is the author. It's also available at Eduardo Sontag's site at Rutgers.

    Friday, March 23

    Topics: numerical solutions of first-order ordinary differential equations
    Text: Sections 5.1, 5.2
    Code: 03_23_2007.nb

    Section 5.1 of the text sets up some theory of differential equations. For those of you who have had MATH 301, this will have a familiar look, although the approach here is slightly more general than what we typically do in MATH 301. In particular, the hypotheses of the Existence-Uniqueness Theorem are given in terms of a Lipschitz condition. This is a weaker hypothesis than what we typically use in MATH 301, so the resulting theorem is slightly stronger.

    Euler's method is introduced in Section 5.2. We looked at Euler's method in class but didn't get into enough technical detail to analyze the error carefully. We'll do this next week.
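
    For reference, Euler's method itself takes only a couple of lines of Mathematica; this is a bare sketch with a made-up example equation:

        (* Euler's method: w_{i+1} = w_i + h f(t_i, w_i) *)
        eulerStep[f_, {t_, w_}, h_] := {t + h, w + h f[t, w]};
        f[t_, y_] := y - t^2 + 1;   (* hypothetical example *)
        NestList[eulerStep[f, #, 0.2] &, {0., 0.5}, 10]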

    Wednesday, March 21

    Topics: more on integration in the presence of singularities
    Text: Section 4.9

    Tuesday, March 20

    Topics: integration in the presence of singularities
    Text: Section 4.9
    Code: 03_20_2007.nb

    Monday, March 19

    Topics: Comparing recursive implementations of adaptive integration; brief overview of Gaussian quadrature
    Text: Section 4.6
    Code: 03_19_2007.nb

    We will not cover Gaussian quadrature from Section 4.7 in any detail. Likewise, we will skip Section 4.8 on numerical approximations of multiple integrals. You should be able to master the material in these sections on your own if the need arises.

    Friday, March 9

    Topics: Recursive implementation of adaptive integration
    Text: Section 4.6
    Code: 03_09_2007.nb

    Wednesday, March 7

    Topics: Adaptive quadrature
    Text: Section 4.6

    In class, Ben suggested that implementing the adaptive method would be easy in a programming language that supports recursion and asked if Mathematica allows recursive programming. The answer is yes. On Friday, we'll use this idea to build a very simple implementation of the adaptive method. Here's a very simple example of recursion, a factorial function with an explicit base case:
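
        fact[0] = 1;               (* base case: stops the recursion *)
        fact[n_] := n fact[n - 1]  (* recursive case *)
        fact[5]                    (* 120 *)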

    Tuesday, March 6

    Topics: Romberg integration
    Text: Section 4.5
    Code: 03_06_2007.nb

    Romberg integration is the result of doing Richardson extrapolation on the Composite Trapezoid approximation (doubling the number of subintervals to generate the initial list of approximations). We need to know details about the expansion in powers of h of the error in the Composite Trapezoid rule so that we can determine the most efficient extrapolation. We already know that the lowest power of the expansion is h^2, resulting in a denominator of 4 - 1 = 3 for the first step of the extrapolation. We didn't know that the h^3 term has a coefficient of 0, so the next lowest power to eliminate is really the h^4 term. Knowing this, we can use 4^2 - 1 = 15 in the denominator of the second step. In general, the error term can be expressed as an expansion in powers of h^(2j) for j=1,2,3,.... Knowing this leads us to use the denominator 4^j - 1 at the jth step of the extrapolation.
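
    Here's a compact sketch of the resulting algorithm (my own arrangement; the inputs are f, the interval endpoints, and the number of rows n):

        (* row i, column 1: Composite Trapezoid with 2^(i-1) subintervals; the step
           into column j divides by 4^(j-1) - 1, i.e. 3, 15, 63, ... *)
        romberg[f_, a_, b_, n_] := Module[{r = Table[0., {n}, {n}], h = N[b - a]},
          r[[1, 1]] = h/2 (f[a] + f[b]);
          Do[
           h = h/2;
           r[[i, 1]] = r[[i - 1, 1]]/2 + h Sum[f[a + (2 k - 1) h], {k, 1, 2^(i - 2)}];
           Do[r[[i, j]] = r[[i, j - 1]] +
              (r[[i, j - 1]] - r[[i - 1, j - 1]])/(4^(j - 1) - 1), {j, 2, i}],
           {i, 2, n}];
          r]
        romberg[Sin, 0, Pi, 5][[5, 5]]   (* very close to the exact value 2 *)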

    The standard way to show that the error has an expansion of this form uses some tools (the Euler-Maclaurin formula and properties of Bernoulli polynomials) that are straightforward but take a bit to develop. A paper by Edward Rozema in the American Mathematical Monthly gives a proof based on Taylor series. T. von Petersdorff has a paper in the American Mathematical Monthly that gives another proof using a different approach. (Note: These links are through the JSTOR archive. The links should work from campus but may not work if you are off-campus.) Looking at these papers is optional.

    Monday, March 5

    Topics: piecewise/composite Simpson's
    Text: Section 4.4
    Code: 03_05_2007.nb

    Friday, March 2

    Topics: Simpson's rule
    Text: Section 4.3

    In class today, I got myself into trouble by not working out the details of the error term for integration based on the interpolating polynomial on three points (i.e., for Simpson's rule). Without the details, we had the possibility of an inconsistency when considering cubic functions. The details on this handout show that there is no inconsistency.

    Wednesday, February 28

    Topics: using the interpolating polynomial to construct approximations of definite integrals
    Text: Section 4.3

    I failed to be clever at the end of class when it came to choosing an antiderivative for (x-x0)(x-x1). Rather than the antiderivative you get by expanding the product and then integrating term-by-term, we should have thought about how to write down an antiderivative in terms of x-x0 and x-x1. If we put a bit of effort into finding a nice antiderivative, we'll save considerable effort when it comes to evaluating that antiderivative at x0 and x1. Here's what we should have thought:

    We can start by guessing there is an antiderivative of the form A(x-x0)^3 + B(x-x0)^2(x-x1) + C(x-x0)(x-x1)^2 + D(x-x1)^3. Take the derivative and then compare with (x-x0)(x-x1). We can set up a system of equations for A, B, C, and D to get equality. The system will be underdetermined, but we can add the condition B=C by symmetry. When all is said and done, we get (-1/12)[(x-x0)^3 - 3(x-x0)^2(x-x1) - 3(x-x0)(x-x1)^2 + (x-x1)^3]. Notice how much this simplifies evaluating at x0 and x1.
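
    Mathematica confirms the guess in one line:

        (* the derivative of the proposed antiderivative reduces to (x - x0)(x - x1) *)
        F = -1/12 ((x - x0)^3 - 3 (x - x0)^2 (x - x1) - 3 (x - x0) (x - x1)^2 + (x - x1)^3);
        Simplify[D[F, x] - (x - x0) (x - x1)]   (* 0 *)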

    Tuesday, February 27

    Topics: more on Richardson extrapolation; numerical integration
    Text: Sections 4.2, 4.3
    Code: 02_27_2007.nb

    Monday, February 26

    Topics: Richardson extrapolation
    Text: Section 4.2

    Friday, February 23

    Topics: numerical differentiation
    Text: Section 4.1
    Code: 02_23_2007.nb

    In class, we used Taylor series to arrive at the center-difference approximation for f'(x0) along with an error expression for this approximation. We also started in on the text's approach using the interpolating polynomial. I recommend that you finish off the details of what we started in class with the second-degree interpolating polynomial and then compare with the text's results on pages 170-171. I'm leaving it to you to understand the other three-point and five-point approximations for f'(x0) as well as the approximations for higher-order derivatives given in the text.
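
    If you want to see the h^2 behavior of the error numerically, here's a quick sketch (the choice of f and x0 is arbitrary):

        (* center difference: f'(x0) ~ (f(x0 + h) - f(x0 - h))/(2 h), error O(h^2) *)
        centerDiff[f_, x0_, h_] := (f[x0 + h] - f[x0 - h])/(2 h)
        Table[centerDiff[Sin, 1., 10.^-k] - Cos[1.], {k, 1, 4}]
        (* each error is roughly 100 times smaller than the previous one *)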

    Wednesday, February 21

    Topics: Bezier curves as weighted combinations
    Text: Section 3.5
    Code: 02_21_2007.nb

    I've included a few extra things in the Mathematica notebook from today's class. First, I implemented the weighted combination on four points that was initially suggested to achieve the goals (start at the first point with tangent in the direction of the second point, end at the fourth point with tangent in the direction opposite to the third point). Second, I derive Equations 3.24 and 3.25 on page 162 as a weighted combination of four points so that you can see there is a connection between the text's approach (which we followed on Tuesday) and the weighted combination approach we used on Wednesday.
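
    For concreteness, here's the weighted combination written out with some made-up control points (the weights are the cubic Bernstein polynomials, which give exactly the start/end tangent behavior described above):

        (* cubic Bezier curve: the four weights sum to 1 for every t in [0, 1] *)
        bezier[{p0_, p1_, p2_, p3_}, t_] :=
         (1 - t)^3 p0 + 3 t (1 - t)^2 p1 + 3 t^2 (1 - t) p2 + t^3 p3
        ParametricPlot[bezier[{{0, 0}, {1, 2}, {3, 2}, {4, 0}}, t], {t, 0, 1}]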

    The first part of Section 3.6 in the text is a nice summary of the chapter's contents. You should read and understand the material on page 164 of the text.

    Tuesday, February 20

    Topics: review of parametric curves; cubic Bezier curves
    Text: Section 3.5

    Monday, February 19

    Topics: more on cubic splines
    Text: Section 3.4
    Code: 02_19_2007.nb

    Friday, February 16

    Topics: piecewise Hermite interpolation; cubic splines
    Text: Section 3.4
    Code: 02_16_2007.nb

    Thursday, February 15

    I've assigned some problems from Section 3.3.

    I've also modified the Mathematica code for computing the Lagrange form of the interpolating polynomial that I distributed yesterday. In the modified form, I compute L_{n,k}(x) as a list (indexed by k) rather than as a function of k and x. Extending this modified code to compute Hermite interpolating polynomials is a bit easier. The code and additional comments are in this Mathematica notebook.
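
    The idea, in sketch form (my own names, not the notebook's):

        (* the Lagrange basis polynomials L_{n,k} as a list indexed by k *)
        lagrangeBasis[xs_List, x_] := Table[
          Times @@ ((x - #)/(xs[[k]] - #) & /@ Delete[xs, k]),
          {k, Length[xs]}]
        xs = {0, 1, 3}; ys = {1, 2, 0};     (* hypothetical data *)
        p[x_] = lagrangeBasis[xs, x] . ys;  (* the interpolating polynomial *)
        p /@ xs                             (* {1, 2, 0}, so it interpolates *)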

    Wednesday, February 14

    Topics: Section 3.1 homework question; constructing Hermite interpolating polynomials in Mathematica
    Text: Section 3.3

    In the lab, several people finished Mathematica code to generate the Hermite interpolating polynomial for a given function and given set of interpolating points. If you didn't finish, consider that your homework assignment. Come find me tomorrow if you have questions on how to finish up. If you have troublesome code, e-mail a Mathematica notebook to me and I'll try to help you debug it. If you do get your code working, make a plot showing graphs of the function and the interpolating polynomial on the same set of axes.

    I'll assign a few problems from Section 3.3 tomorrow morning.

    Tuesday, February 13

    Topics: remainder term and error bound for the interpolating polynomial; Hermite interpolation
    Text: Sections 3.1, 3.2
    Code: 02_13_2007.nb

    As was pointed out in class, the expression we have for the remainder (or error) term that uses the (n+1)st derivative of the function being approximated is of limited utility. To be useful, we need to be able to compute and bound the appropriate derivative of the function. In some cases, we can prove a bound, perhaps using our knowledge of elementary functions. (For example, we know sin x is bounded by -1 and 1.) In other cases, we can conjecture a bound (without careful proof) perhaps using evidence from a plot.

    As an alternative to using an error bound, we can use the following rule of thumb (as discussed in Example 3 on p. 110 of our text): start computing a sequence of better and better approximations, and stop when two successive approximations agree with each other to within the required tolerance. This stopping criterion is not foolproof. There will be cases in which it produces an approximation whose actual error is bigger than the required tolerance.

    Monday, February 12

    Topics: more on divided differences
    Text: Section 3.2
    Code: 02_12_2007.nb

    Friday, February 9

    Topics: divided differences computation of the interpolating polynomial
    Text: Section 3.2

    So, back in my office this afternoon, all of the algebra worked out perfectly: the two expressions for the quadratic coefficient a_2 (or my recreations of them on paper) are equivalent. If I have time over the weekend, I'll post a handout with my office calculations so you can compare against our (yes, our) classroom calculations.

    I've assigned a few homework problems from Section 3.2 on using divided differences to compute the interpolating polynomial for a set of points. Some of these ask you to use divided differences. Others ask you to use variations on divided differences called Newton forward-differences and Newton backward-differences. These latter two variations are relevant for equally spaced x values. The problems I've assigned focus on the mechanics of computing. I'll later assign problems that address issues such as the error in using the interpolating polynomial as an approximation for the function used to generate the polynomial.

    Wednesday, February 7

    Topics: Lagrange interpolating polynomials
    Text: Section 3.1
    Code: 02_07_2007.nb

    Monday, February 6

    Exam #1

    Monday, February 5

    Topics: review
    Text: Chapters 1 and 2

    Friday, February 2

    Topics: Section 1.3 homework question; Steffensen's method
    Text: Section 2.5
    Code: 02_02_2007.nb

    Note that Problem 14 of Section 2.5 is relevant to thinking about order of convergence for the sequence 1/n!.

    Also note that Theorem 2.14 in Section 2.5 answers the question that someone asked in class. This theorem gives conditions (quite mild) under which Steffensen's method has quadratic convergence (i.e., convergence of order 2). Note that the hypothesis of the theorem requires only g'(p)≠1 rather than |g'(p)|<1. Recall that for a function having g'(p)>1, we know that function iteration will not converge to p. However, we can use Steffensen's method to get a sequence that converges quadratically. That's a big improvement. Consider an example we looked at earlier: to approximate the positive solution of x^2-2=0, we can convert to a fixed point problem for g(x)=x^2+x-2. Now, g'(p)=2p+1>1, so function iteration won't work. However, using Steffensen's method with this function gives convergence, and fast! Try it.
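
    Here's one way to try it (a minimal sketch):

        (* Steffensen's method applied to the fixed-point function g(x) = x^2 + x - 2 *)
        g[x_] := x^2 + x - 2;
        steff[p_] := p - (g[p] - p)^2/(g[g[p]] - 2 g[p] + p);
        NestList[steff, 1.5, 4]   (* converges rapidly to Sqrt[2] = 1.41421... *)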

    Exam #1 will be Tuesday, February 6 from 11:00 am-12:20 pm in WSC 101. It will cover material from Chapters 1 and 2 except Section 2.6.

    Wednesday, January 31

    Topics: more on order of convergence; order of convergence for function iteration
    Text: Section 2.4

    Bonus problem from class: Determine the order of convergence for the sequence 1/n!.

    Section 2.4 covers two ideas: (1) order of convergence and (2) dealing with roots having multiplicity greater than 1. Problem 3 from the assignment for Section 2.4 relates to the second idea. Work on the mechanics of Problem 3. We'll talk about the ideas behind this on Friday.

    Tuesday, January 30

    Topics: "big oh", rate of convergence, order of convergence
    Text: Sections 1.3, 2.4

    Monday, January 29

    Topics: questions on Section 2.2 homework; secant method
    Text: Sections 2.2, 2.3

    Friday, January 26

    Topics: convergence of function iteration to a fixed point; analysis of Newton's method as function iteration
    Text: Sections 2.2, 2.3

    When we view Newton's method as function iteration, there are several functions floating around. We start with the problem of approximating a solution for the equation f(x)=0. Let p be the solution we are interested in, so f(p)=0. Newton's method is equivalent to iterating the function g(x)=x-f(x)/f'(x) starting with an initial value x0. The fact that p is a solution to f(x)=0 is equivalent to the fact that p is a fixed point of the function g(x)=x-f(x)/f'(x). That is, f(p)=0 is equivalent to g(p)=p.
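
    A small illustration (using f(x) = x^2 - 2 as an example of my own choosing):

        (* Newton's method as iteration of g(x) = x - f(x)/f'(x) *)
        f[x_] := x^2 - 2;
        g[x_] := x - f[x]/f'[x];
        NestList[g, 1., 5]   (* approaches Sqrt[2], the fixed point of g *)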

    Wednesday, January 24

    Topics: existence and uniqueness of fixed points; convergence of function iteration to a fixed point
    Text: Section 2.2

    I've assigned problems from Section 2.2. We'll take time at the beginning of class on Friday and Monday to address questions from these. In class, we have focused on proving theorems about fixed points and convergence of function iteration to a fixed point. Many of the Section 2.2 problems involve working with specific examples of function iteration. You can do the required computations on a calculator or by writing simple code in Mathematica or some other environment.

    Mathematica provides a number of built-in commands related to function iteration. These include Nest, NestList, NestWhile, and NestWhileList. For example, Nest[Sin,1.,10] will iterate the sine function 10 times starting with 1 as the first input. Note that the first slot in these functions requires the name of a function. Using Nest[2x(1-x),0.1,10] won't do what you might initially hope. What's required is to first make an assignment such as g[x_]=2x(1-x) and then use Nest[g,0.1,10].
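
    To spell that out (the last line shows a pure-function alternative that skips the named g):

        g[x_] := 2 x (1 - x);
        Nest[g, 0.1, 10]              (* iterate g ten times, return the result *)
        NestList[g, 0.1, 10]          (* same, but keep every intermediate value *)
        Nest[2 # (1 - #) &, 0.1, 10]  (* pure-function form; no named g needed *)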

    Tuesday, January 23

    Topics: implementing Newton's method in Mathematica; stopping criteria for Newton's method; function iteration and fixed points
    Text: Sections 2.3, 2.2
    Code: 01_23_2007.nb

    I've added some commentary to the Mathematica code from today's class along with some coding that implements the Newton's method algorithm as it is given on page 65 of the text.

    Monday, January 22

    Topics: Section 1.2 homework question; a few more comments on IEEE-754; intro to Newton's method
    Text: Sections 1.2, 2.3

    We just introduced the basic idea of Newton's method so I've only assigned a few problems from Section 2.3.

    I've been jumping around in the text a bit so that we have a few basic numerical algorithms to play with as we talk about some of the general ideas (algorithms, truncation and round-off error, speed of convergence) that are common to much of what we will do. Soon, we will be covering material in the order presented by the text.

    Remember that we will meet in TH 212 starting tomorrow.

    Friday, January 19

    Topics: sources of error in numerical approximation: stopping at a finite point in an approximating sequence, round-off error in finite-precision arithmetic
    Text: Section 1.2

    I was a bit slow in class today, so we didn't get to some of the ideas you'll need for the problems I've assigned from Section 1.2. Specifically, we didn't talk about rounding errors in doing arithmetic with floating point numbers. The text authors' strategy here is to not look at the arithmetic defined in the IEEE-754 standard because that is too cumbersome. Instead, the authors look at rounding errors using base 10 with a finite number of digits and one of two options for mapping real numbers to finite-digit representatives: rounding or chopping.

    The text introduces the general idea of a function fl (for floating point) that maps a real number to a finite-digit representative. The meaning of fl(x) has to be given in each context. If you are using three-digit rounding, then fl(1.235)=1.24. If you are using three-digit chopping, then fl(1.234)=1.23.
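
    If you want to experiment, here's a rough sketch of k-digit chopping and rounding (my own helper names, assuming x > 0; exact rational inputs keep binary round-off from muddying the picture):

        (* k-digit chopping and rounding versions of fl for x > 0 *)
        flChop[x_, k_] := With[{e = Floor[Log10[x]]},
          Floor[x 10^(k - 1 - e)] 10^(e - k + 1)]
        flRound[x_, k_] := With[{e = Floor[Log10[x]]},
          Floor[x 10^(k - 1 - e) + 1/2] 10^(e - k + 1)]
        {flRound[1235/1000, 3], flChop[1234/1000, 3]} // N   (* {1.24, 1.23} *)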

    This material is fairly straightforward, so I will leave it to you to read Section 1.2, work on the assigned problems, and then ask questions in class on Monday. Some problems in Section 1.2 involve lots of simple calculations. It's worthwhile doing these and then stepping back to ask what can be learned from the results taken as a whole.

    Wednesday, January 17

    Topics: implementing the bisection method in Mathematica
    Text: Section 2.1
    Code: 01_17_2007.nb

    I've assigned problems from Section 2.1 of the text. Since you might not have your copy of the text, I've put these on a handout. The handout also has an additional programming problem.

    Note that I've put a link to a Mathematica notebook above. This is a text file but you'll need to open it in Mathematica (or the freely available MathReader that you can use to read but not edit or execute Mathematica files) to get readable formatting.

    Tuesday, January 16

    Topics: course overview; example of the bisection method
    Text: Section 2.1

    The bookstore will have copies of the textbook available on Thursday.

    Your assignment for today is to use the bisection method to compute an approximation for the square root of 2 to within 10^-4.
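
    A bare-bones sketch of the kind of loop involved (one of many reasonable ways to set it up):

        (* bisection for f(x) = x^2 - 2 on [1, 2]; stop when the bracket is within 10^-4 *)
        bisect[f_, {a0_, b0_}, tol_] := Module[{a = a0, b = b0, m},
          While[b - a > tol,
           m = (a + b)/2;
           If[Sign[f[m]] == Sign[f[a]], a = m, b = m]];
          (a + b)/2]
        bisect[#^2 - 2 &, {1., 2.}, 10.^-4]   (* approximately 1.41421 *)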

    Fun Stuff

    The Mathematical Atlas describes the many fields and subfields of mathematics. The site has numerous links to other interesting and useful sites about mathematics.

    If you are interested in the history of mathematics, a good place to start is the History of Mathematics page at the University of St. Andrews, Scotland.

    Check out the Astronomy Picture of the Day.