Lecture 22 Taylor’s Remainder Theorem

Text References: Course notes pp. 103-106 & Rogawski 10.7

22.1 Recap

Last time, we learned some results that help us calculate Taylor polynomials more easily.

Exercise 22.1 Find the 10th-order Maclaurin polynomial for \(h(x)=\sin(3x^2)\) given that the fifth-order Maclaurin polynomial for \(f(x)=\sin(x)\) is \(P_{5,0}(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}\).

Solution. We notice that \(h(x)=f(3x^2)\). Since \(P_{5,0}(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}\) is the fifth degree Maclaurin polynomial for \(f(x)\), the tenth degree Maclaurin polynomial for \(h(x)=f(3x^2)\) is \[P_{10,0}(x)=3x^2-\frac{(3x^2)^3}{3!}+\frac{(3x^2)^5}{5!}= 3x^2-\frac{9}{2}x^6+\frac{81}{40}x^{10}\]
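As a quick sanity check (our own sketch, not part of the course notes), the SymPy snippet below computes the tenth-order Maclaurin polynomial of \(h(x)=\sin(3x^2)\) directly from its series and confirms it matches the answer above; the variable names are our own.

```python
import sympy as sp

x = sp.symbols('x')
h = sp.sin(3 * x**2)

# Maclaurin polynomial of order 10: the series of h about x = 0, truncated after x^10.
P10 = sp.series(h, x, 0, 11).removeO()

# The answer obtained above by substituting 3x^2 into the degree-5 polynomial for sin.
r = 3*x**2 - sp.Rational(9, 2)*x**6 + sp.Rational(81, 40)*x**10

print(sp.simplify(P10 - r))  # expected output: 0
```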

22.2 Learning Objectives

  • N/A

22.3 Taylor’s Remainder Theorem

Now that we’ve got a good handle on computing Taylor polynomials, we might start to wonder: how accurate is this approximation? In other words, how closely does \(P_{n, x_0}(x)=\sum_{k=0}^n \frac{f^{(k)}(x_0)(x-x_0)^k}{k!}\) approximate \(f(x)\)? Yet another way to formulate this is: how big is the error \(|f(x)-P_{n, x_0}(x)|\)?
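To make the question concrete, here is a small sketch (our own illustration, not from the notes) that builds \(P_{n,x_0}\) for \(f(x)=\sin(x)\) about \(x_0=0\) straight from the definition and prints the error at a sample point; the choices \(n=5\) and \(x=1/2\) are arbitrary.

```python
import sympy as sp

x = sp.symbols('x')

def taylor_poly(f, x0, n):
    """n-th order Taylor polynomial of f about x0, built from the definition."""
    return sum(f.diff(x, k).subs(x, x0) * (x - x0)**k / sp.factorial(k)
               for k in range(n + 1))

f = sp.sin(x)
P5 = taylor_poly(f, 0, 5)                         # x - x**3/6 + x**5/120
error = sp.Abs(f - P5).subs(x, sp.Rational(1, 2))
print(P5, sp.N(error))                            # the error at x = 1/2 is about 1.5e-6
```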

We are going to work towards answering this question in the next few lessons. Today, we will focus on an intermediate result called Taylor’s Remainder Theorem.

Our more immediate goal is to try to understand the error term \(|f(x)-P_{n, x_0}(x)|\). Let’s start by considering the simplest case where \(P_{n, x_0}\) is a constant, that is, the zeroth order Taylor polynomial. We have \[f(x)-P_{0,x_0}(x)=f(x)-f(x_0)\] By the Fundamental Theorem of Calculus, we get \[f(x)-P_{0,x_0}(x)=\int_{x_0}^x f'(t)dt\] This expression might seem more complicated than what we started with, but it will pay off in the long run, so let’s stick with it.
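For a concrete instance of this identity, here is a minimal check (our own, with \(f(x)=e^x\) and a symbolic \(x_0\) as an arbitrary test case):

```python
import sympy as sp

x, t, x0 = sp.symbols('x t x0')
f = sp.exp(x)   # any smooth test function works; exp is just convenient

lhs = f - f.subs(x, x0)                                    # f(x) - P_{0,x0}(x)
rhs = sp.integrate(sp.diff(f, x).subs(x, t), (t, x0, x))   # the FTC form of the error
print(sp.simplify(lhs - rhs))  # expected output: 0
```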

Now, let’s consider the case of the first order Taylor polynomial. That is, let’s consider the error in a linear approximation of \(f(x)\). We have \[f(x)-P_{1,x_0}(x)=f(x)- [f(x_0)+f'(x_0)(x-x_0)]=[f(x)-f(x_0)]-f'(x_0)(x-x_0)\] We see that the term \(f(x)-f(x_0)\) is making an appearance, so we can use the same integration trick as before: \[f(x)-P_{1,x_0}(x)=\int_{x_0}^x f'(t)dt-f'(x_0)(x-x_0)\]

We can further simplify this expression using integration by parts with \(u=f'(t)\) and \(v=t\) to get \[ f(x)-P_{1,x_0}(x)=\left[ t f'(t)\right]_{x_0}^x-\int_{x_0}^x tf''(t)dt - f'(x_0)(x-x_0)\]

With a bit more algebra, we end up with \[f(x)-P_{1,x_0}(x)=\int_{x_0}^x (x-t)f''(t)dt\] Specifically, evaluating the bracket and cancelling the \(f'(x_0)\) terms leaves \(x\,[f'(x)-f'(x_0)]-\int_{x_0}^x tf''(t)dt\), and writing \(f'(x)-f'(x_0)=\int_{x_0}^x f''(t)dt\) lets us combine everything into a single integral. The key point here is that we’ve arrived at a tidier expression for the error term.

Repeating this process for the second order Taylor polynomial, we find \[f(x)-P_{2,x_0}(x)=\int_{x_0}^x \frac{(x-t)^2f'''(t)}{2}dt\]
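Both integral formulas are easy to sanity-check symbolically. The sketch below is our own, using \(f(x)=\cos(x)\) and \(x_0=0\) as an arbitrary test case; the names P1, R1, P2, R2 are not from the notes.

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.cos(x)   # arbitrary smooth test function
x0 = 0

# First-order polynomial and its integral remainder.
P1 = f.subs(x, x0) + sp.diff(f, x).subs(x, x0) * (x - x0)
R1 = sp.integrate((x - t) * sp.diff(f, x, 2).subs(x, t), (t, x0, x))
print(sp.simplify(f - P1 - R1))   # expected output: 0

# Second-order polynomial and its integral remainder.
P2 = P1 + sp.diff(f, x, 2).subs(x, x0) * (x - x0)**2 / 2
R2 = sp.integrate((x - t)**2 * sp.diff(f, x, 3).subs(x, t) / 2, (t, x0, x))
print(sp.simplify(f - P2 - R2))   # expected output: 0
```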

At this point, we have a general idea of how the pattern works, so let’s write it out formally.

Theorem 22.1 If \(f(x)\) has \(n+1\) continuous derivatives on an interval containing \(x_0\) and \(x\), then \[f(x)=\sum_{k=0}^n \frac{f^{(k)}(x_0)(x-x_0)^k}{k!}+R_n(x)\] where \[R_n(x)=\int_{x_0}^x\frac{(x-t)^n}{n!}f^{(n+1)}(t)dt\]
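As with the low-order cases, the theorem can be checked symbolically for any particular \(f\) and small \(n\). The sketch below is our own, using \(f(x)=\ln(1+x)\), \(x_0=0\), and \(n=3\) as an arbitrary test case; the helper names are not from the notes.

```python
import sympy as sp

x, t = sp.symbols('x t')

def taylor_poly(f, x0, n):
    """Sum_{k=0}^{n} f^(k)(x0) (x - x0)^k / k!"""
    return sum(f.diff(x, k).subs(x, x0) * (x - x0)**k / sp.factorial(k)
               for k in range(n + 1))

def integral_remainder(f, x0, n):
    """R_n(x) = integral from x0 to x of (x - t)^n / n! * f^(n+1)(t) dt."""
    integrand = (x - t)**n / sp.factorial(n) * f.diff(x, n + 1).subs(x, t)
    return sp.integrate(integrand, (t, x0, x))

f, x0, n = sp.log(1 + x), 0, 3
print(sp.simplify(f - taylor_poly(f, x0, n) - integral_remainder(f, x0, n)))
# expected output: 0
```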

We call \(R_n(x)\) the remainder, or the error in the approximation. Calculating this remainder exactly usually isn’t possible; if we could evaluate it, we would already know \(f(x)\) and wouldn’t need an approximation in the first place. What we’re going to aim for instead is finding an upper bound on the error, so we know how far off our approximation is in the worst-case scenario.