Lecture 25 Infinite Series

Text References: Course notes pp. 114-129 & Rogawski 10.2-10.3

25.1 Recap

Last time, we wrapped up our study of Taylor polynomials and used them to compute integrals.

25.2 Learning Objectives

  • State the Taylor series of a few well-known functions.
  • Identify sequences, sequences of partial sums, and series.

25.3 Taylor Series

We’re going to extend the Taylor polynomial to the Taylor series of a function \(f(x)\). Where the Taylor polynomial stops at a fixed order, the Taylor series includes terms of every order:

Definition 25.1 If \(f\) is infinitely differentiable at \(x_0\), then the Taylor series for \(f(x)\) centred at \(x_0\) is \[T(x)=f(x_0)+f'(x_0)(x-x_0)+\frac{f''(x_0)}{2!}(x-x_0)^2+\cdots = \sum_{n=0}^{\infty}\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n\]

Let’s take a look at two functions, \(f(x)=\sin(x)\) and \(g(x)=\frac{1}{1+x}\). The interactive applet displays \(f(x)\) and \(g(x)\) along with their respective Taylor polynomials. Use the checkboxes to display either \(f(x)\) or \(g(x)\). Using the slider to increase the order of the Taylor polynomials, observe how well the Taylor polynomial approximates the value at \(x=2\).

It seems like the Taylor polynomial gives a better and better approximation of \(f(2)\), but the same can’t be said for \(g(2)\). A natural question to ask is whether the Taylor series of a function is actually equal to the function. We’ve seen a few examples (such as \(f(x)=\sin(x)\)) where taking higher and higher order Taylor polynomials of a function results in better and better approximations. In these cases, we might expect that as \(n\to\infty\) the Taylor polynomial converges to the function.
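If the interactive applet isn’t available, the same behaviour can be checked numerically. The sketch below (plain Python, with both series centred at \(x_0=0\)) evaluates the Taylor polynomials of \(\sin(x)\) and \(\frac{1}{1+x}\) at \(x=2\) for increasing order:

```python
from math import factorial

def sin_taylor(x, order):
    """Taylor polynomial of sin(x) about 0, keeping terms up to x**order."""
    return sum((-1)**n * x**(2*n + 1) / factorial(2*n + 1)
               for n in range(order // 2 + 1) if 2*n + 1 <= order)

def geom_taylor(x, order):
    """Taylor polynomial of 1/(1+x) about 0: 1 - x + x**2 - ... up to x**order."""
    return sum((-x)**k for k in range(order + 1))

for order in (3, 7, 11, 15):
    print(order, sin_taylor(2, order), geom_taylor(2, order))
```

The values for \(\sin\) settle down near \(\sin(2)\approx 0.909\), while the values for \(g\) grow without bound, matching what the applet shows.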

In order to answer this question, we’re going to revisit Taylor’s Inequality. We have that \[f(x)=\sum_{k=0}^n\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k+R_n(x)\] where \(|R_n(x)|\leq K \dfrac{|x-x_0|^{n+1}}{(n+1)!}\) with \(|f^{(n+1)}(z)|\leq K\) for all values of \(z\) between \(x\) and \(x_0\). Whether or not the Taylor series is equal to the function depends on what happens as we take the limit \(n\to\infty\). We have \[f(x)\stackrel{?}{=}\lim_{n\to\infty}\left [\sum_{k=0}^n\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k+R_n(x) \right] \stackrel{?}{=} \lim_{n\to\infty}\sum_{k=0}^n\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k + \lim_{n\to \infty}R_n(x)\] We will have equality of \(f(x)\) and its Taylor series if \(\displaystyle \lim_{n\to \infty}R_n(x)=0\); otherwise, we can still calculate the Taylor series of the function, but it won’t be equal to \(f(x)\).
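For \(f(x)=\sin(x)\) this limit is easy to bound: every derivative of \(\sin\) is \(\pm\sin\) or \(\pm\cos\), so \(K=1\) works for every \(x\). A quick sketch of the remainder bound at \(x=2\), \(x_0=0\):

```python
from math import factorial

# Taylor remainder bound for sin(x) at x = 2, x0 = 0, with K = 1:
# |R_n(2)| <= |2|**(n+1) / (n+1)!
x = 2
for n in (5, 10, 15, 20):
    bound = abs(x)**(n + 1) / factorial(n + 1)
    print(n, bound)
```

The factorial in the denominator eventually dominates any fixed power, so the bound shrinks to \(0\) and the Taylor series of \(\sin\) really does equal \(\sin(x)\).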

A warning about notation: when we write \(f(x)=\sum_{k=0}^\infty\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k\), we really mean that \(f(x)=\lim_{n\to\infty}\sum_{k=0}^n\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k\) for all \(x\).

Here are a few Taylor series you should know:

  • \(\sin(x)=\sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!}=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots\)
  • \(\cos(x)=\sum_{n=0}^\infty (-1)^n \frac{x^{2n}}{(2n)!}=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\cdots\)
  • \(e^x = 1+ x+ \frac{x^2}{2!}+\frac{x^3}{3!}+\cdots\)
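These three series can be sanity-checked numerically; a short Python sketch summing the first 20 terms of each at an arbitrarily chosen point \(x = 1.3\):

```python
from math import factorial, sin, cos, exp

x = 1.3
sin_series = sum((-1)**n * x**(2*n + 1) / factorial(2*n + 1) for n in range(20))
cos_series = sum((-1)**n * x**(2*n) / factorial(2*n) for n in range(20))
exp_series = sum(x**n / factorial(n) for n in range(20))

# Each truncated series agrees with the built-in function to machine precision.
print(sin_series - sin(x), cos_series - cos(x), exp_series - exp(x))
```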

The question of whether or not the remainder of a Taylor series goes to zero requires a few more tools that we’re going to work on developing.

25.4 Infinite Series

We’re going to leave Taylor series aside for the moment and focus on sequences and series.

Let’s start with sequences, denoted by \(\{a_k\}\), which are ordered lists of numbers with elements indexed by the natural numbers.

From a sequence, we can build the sequence of partial sums, denoted by \(\{s_n\}\) and defined as

\[\begin{align*} s_0 &= a_0 \\ s_1 &= a_0 + a_1 \\ s_2 &= a_0 + a_1 + a_2 \\ & \vdots \\ s_n &= a_0 + a_1 + a_2 + \cdots + a_n \end{align*}\]
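In code, a sequence of partial sums is just a running total. A minimal sketch, using the hypothetical example sequence \(a_k=\frac{1}{k+1}\) (chosen only for illustration):

```python
from itertools import accumulate

# Example sequence a_0, ..., a_9.
a = [1 / (k + 1) for k in range(10)]

# accumulate yields the running sums s_0 = a_0, s_1 = a_0 + a_1, ...
s = list(accumulate(a))
print(s[:4])  # s_0 = 1.0, s_1 = 1.5, ...
```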

Exercise 25.1 Consider the sequence \(a_k=\dfrac{1}{2^k}\).

  1. List the terms \(a_0, \ldots, a_3\) of the sequence.
  2. Give the partial sums \(s_0, \ldots, s_3\) of the sequence.

Solution. We have \(a_k=\dfrac{1}{2^k}\).

  1. \(a_0= \dfrac{1}{2^0}=1, \quad a_1=\dfrac{1}{2^1}=\dfrac{1}{2}, \quad a_2=\dfrac{1}{2^2}=\dfrac{1}{4}, \quad a_3=\dfrac{1}{2^3}=\dfrac{1}{8}\)
  2. We have \[\begin{align*} s_0 &= a_0=1 \\ s_1 &= a_0+a_1=\dfrac{3}{2}=1.5\\ s_2 &= a_0 + a_1 + a_2 = 1+\dfrac{1}{2}+\dfrac{1}{4} =\dfrac{7}{4} =1.75\\ s_3 &= a_0 + a_1 + a_2 +a_3 = 1+\dfrac{1}{2}+\dfrac{1}{4} +\dfrac{1}{8} =\dfrac{15}{8} = 1.875 \end{align*}\]

Now, let’s define infinite series:

Definition 25.2 An infinite series (or a series) of constants \(a_k\) is defined as a limit of finite series: \[\sum_{k=0}^\infty a_k =\lim_{n\to \infty}\sum_{k=0}^na_k\]

We say that the series \(\sum_{k=0}^\infty a_k\) converges if the sequence \(\{s_n\}\) converges to some value \(s\), i.e. if \(\displaystyle \lim_{n\to\infty}s_n=s\). Otherwise, we say that the series diverges.

Although it is common to start the indexing for sequences and series at \(k=0\), we can start the index at any integer. If we happen to encounter a series which doesn’t start at \(k=0\), we can always re-index it using the following trick: \(\displaystyle \sum_{k=q}^\infty a_k = \sum_{k=0}^\infty a_{k+q}\). For questions of convergence of a series, it’s common to omit the index value entirely and just write \(\displaystyle \sum a_k\).
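The re-indexing trick works term by term: the \(k\)-th term of the shifted sum is exactly the \((k+q)\)-th term of the original. A quick numerical check with \(a_k=\frac{1}{2^k}\), \(q=3\), truncated at \(N\) terms:

```python
# Both truncated sums contain the identical terms 1/2**3, 1/2**4, ...
# in the same order, so they agree exactly.
q, N = 3, 50
lhs = sum(1 / 2**k for k in range(q, N + q))   # sum_{k=q} a_k
rhs = sum(1 / 2**(k + q) for k in range(N))    # sum_{k=0} a_{k+q}
print(lhs == rhs)
```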

Let’s take a look at our exercise again and write out a few more of the partial sums:

\[\begin{align*} s_0 & = 1\\ s_1 & = 1.5\\ s_2 & = 1.75\\ s_3 & = 1.875\\ s_4 & = 1.9375\\ s_5 & = 1.96875\\ s_6 & = 1.984375\\ \end{align*}\]

The series certainly appears to be converging to \(2\), but we don’t have the tools to prove this yet. Soon, we will come up with several tests to determine whether a given series converges or diverges.
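We can at least extend the table numerically; a short sketch printing further partial sums alongside their gap to \(2\):

```python
# More partial sums s_n of sum 1/2**k, with the gap 2 - s_n alongside;
# numerically, the gap appears to halve with each extra term.
for n in range(7, 15):
    s_n = sum(1 / 2**k for k in range(n + 1))
    print(n, s_n, 2 - s_n)
```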