Taylor Series
Definition

Let $f(x)$ be a real-valued function and let $a$ be a point at which the function is infinitely differentiable. Then the Taylor Series of $f$ around the point $x=a$ is given by \[T_{ \infty }(x):=\sum _{ n=0 }^{ \infty }{ \frac { f^{ (n) }(a) }{ n! } (x-a)^{ n } }\] where ${ f }^{ (n) }(a)$ is the $n$th derivative of $f(x)$ at $x=a$. If we compute the series around the point $x=0$, the series is known as the Maclaurin Series. The partial sum of this Taylor Series is referred to as the $n$th degree Taylor polynomial at $a$: \[T_{ n }(x):=f(a)+\frac { f'(a) }{ 1! } (x-a)+\frac { f''(a) }{ 2! } (x-a)^{ 2 }+\cdots+\frac { f^{ (n) }(a) }{ n! } (x-a)^{ n }\]
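As a minimal sketch (not part of the original article), the definition can be evaluated directly for $f(x)=e^{x}$, whose every derivative at $0$ equals $1$, so the Maclaurin polynomial is just $\sum_{k=0}^{n} x^{k}/k!$:

```python
import math

def taylor_exp(x, n):
    """n-th degree Maclaurin polynomial of e^x: sum of x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# The partial sums approach e^x as the degree grows.
print(taylor_exp(1.0, 10))  # close to e = 2.71828...
```

The degree-10 partial sum already agrees with `math.exp(1)` to about seven decimal places.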
Motivation

Consider some continuous function (or at least a function continuous on some interval) which is not a polynomial (say a trigonometric function or a function of logarithms). Can we write the function in terms of powers of $x$? That is, can it be represented in polynomial form (of finite or infinite degree)? Why would we want to represent a function as a polynomial? Is there any practical use for it? The simple answer is YES! In many cases, it is easier to handle polynomial functions (for simplification, integration, etc.) than non-polynomial ones. Consider the integral $\int _{ -1 }^{ 1 }{ { e }^{ -{ x }^{ 2 } }dx }$ or $\int _{ -1 }^{ 1 }{ \frac { \sin(x) }{ x } dx }$. Since there is no known elementary way to compute these integrals, one would aim to convert the integrands to polynomial form (if possible). Once that is achieved, integration is no longer a challenge, as it is just polynomial integration!

For this, we use multiple derivatives of the function. For a curve, we can get the tangent equation at a given point, which is in fact a linear polynomial; that is, we can approximate any function at a point (where it is defined) by a polynomial. From the tangent equation (say at $p$), we can get $f(p)$ by just substituting $p$ into the tangent equation. But if we substitute any point other than $p$, we won't get the function value! So what if we want a polynomial form of the same function (or rather, a better polynomial approximation of it) on an interval instead of only at a single point? We can do it by considering multiple derivatives! But then comes the question: how many derivatives? Could infinitely many be needed? If yes, how could a function equal the sum of infinitely many terms? (Wouldn't the sum of infinitely many terms be an infinite value?)
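To make the motivating integral concrete, here is a small sketch (my own illustration, not from the original text): integrating the Maclaurin series $e^{-x^{2}}=\sum (-1)^{n}x^{2n}/n!$ term by term over $[-1,1]$ turns the "impossible" integral into a rapidly converging numeric sum.

```python
import math

def int_exp_neg_sq(terms=20):
    """Approximate the integral of e^(-x^2) over [-1, 1] by integrating
    its Maclaurin series term by term: each term (-1)^n x^(2n)/n!
    integrates to (-1)^n * 2 / ((2n + 1) * n!)."""
    return sum((-1)**n * 2 / ((2 * n + 1) * math.factorial(n))
               for n in range(terms))

# Agrees with the closed form sqrt(pi) * erf(1) to machine precision.
print(int_exp_neg_sq())  # approximately 1.49365
```

Twenty terms suffice because the factorial in the denominator makes the tail of the sum negligible.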
Let us try to find answers to all these questions!
Bird's eye view

Taylor series are a form of power series used as polynomial approximations to a given function. For a function $f(x)$, the Taylor polynomial around the point $a$ is given by: \[T_{ n }(x):=f(a)+\frac { f'(a) }{ 1! } (x-a)+\frac { f''(a) }{ 2! } (x-a)^{ 2 }+\cdots+\frac { f^{ (n) }(a) }{ n! } (x-a)^{ n }\] Here $f$ should be differentiable $n$ times around the point $a$. These Taylor polynomials have the property that the polynomial and its derivatives up to order $n$ agree with the function itself at $a$. Here we can see the degree is $n$. The Taylor Series is just a Taylor polynomial of infinite degree.
Context of the Definition

We know we can use multiple derivatives to approximate the function. But how? We pick a point about which we want to approximate the function. Let $f(x)$ be the function and let $a$ be the point. Let $T(x)$ be the polynomial form. Now to approximate the entire function, $T$ must match $f$ and its derivatives at $a$:

\[T(a)=f(a),\quad T'(a)=f'(a),\quad T''(a)=f''(a)\]

and so on. Here $T(x)$ can be a polynomial with infinitely many terms! (Some functions' derivatives follow a cyclic order!) Now consider $T(x)$ to be of the form $a_{ 0 }+{ a }_{ 1 }x+a_{ 2 }x^{ 2 }+\cdots$ and take $a=0$. If we apply the above conditions, we get \[f(x)=T(x)=f(0)+f'(0)x+\frac { f''(0) }{ 2! } x^{ 2 }+\cdots\] i.e. here we are trying to make the graph of $T(x)$ coincide with the graph of our function as closely as possible, over an interval as big as possible. Consider the Maclaurin Series of the following example for a small interval of $x$:
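The derivative-matching conditions above can be sketched in code (my own illustration, not from the original text). For $f(x)=\sin(x)$ the derivatives at $0$ cycle through $0, 1, 0, -1$, and matching $T^{(k)}(0)=f^{(k)}(0)$ forces each coefficient to be $a_{k}=f^{(k)}(0)/k!$:

```python
import math

# Derivatives of sin at 0 cycle through 0, 1, 0, -1, ...
def sin_deriv_at_0(k):
    return [0.0, 1.0, 0.0, -1.0][k % 4]

# Matching T^(k)(0) = f^(k)(0) forces a_k = f^(k)(0) / k!.
coeffs = [sin_deriv_at_0(k) / math.factorial(k) for k in range(10)]

def T(x):
    """Degree-9 Maclaurin polynomial of sin built from the matched coefficients."""
    return sum(a * x**k for k, a in enumerate(coeffs))

print(T(0.5), math.sin(0.5))  # the two values agree closely near 0
```

Near the reference point the polynomial is essentially indistinguishable from the function.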
Video 1: Getting an approximate polynomial from the function
Here we took $0$ as the reference point about which we are trying to make our graph coincide with the function's graph. We selected such a point because the non-influencing terms vanish while solving the above set of conditions (solving for the $a_{i}$'s). Is it possible to take any other point as our point of reference? A big YES! In most cases, we can take another point as the reference and compute the required polynomial. But in those cases, we need to shift the origin accordingly to remove the useless terms while calculating the $a_{i}$'s. So let us head over to the same example again. This time, it is referred to as a Taylor Series rather than a Maclaurin Series, as we are not approximating the function about the origin!
Video 2: Getting an approximate polynomial from the function about x=1
Here $T_{n}(x)$ refers to a Taylor polynomial of degree $n$. As we add more and more terms to our $T(x)$ by matching higher-order derivatives, we get a more precise approximation to our original function $f(x)$. If we add infinitely many terms, the result is referred to as the Taylor Series. NOTE that adding infinitely many terms does not (always) result in an infinite value. We say the series (sum) "converges" to a value! Now, is that always true? Also, can we always take any point of reference and compute higher-order terms to get better and better approximations?
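The claim that more terms give a more precise approximation can be checked numerically; here is a small sketch (my own illustration) using the Maclaurin series of $\cos(x)$ at $x=2$, where the error shrinks rapidly as terms are added:

```python
import math

def maclaurin_cos(x, n):
    """Partial sum of the Maclaurin series of cos(x) up to degree 2n."""
    return sum((-1)**k * x**(2 * k) / math.factorial(2 * k)
               for k in range(n + 1))

# The error at x = 2 drops sharply with each extra term: the sum converges.
for n in range(1, 6):
    print(n, abs(maclaurin_cos(2.0, n) - math.cos(2.0)))
```

This is the finite counterpart of "the infinite sum converges": the sequence of partial sums settles on $\cos(2)$.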
The remainder of a polynomial

Now let us consider $f(x)=\ln(x)$ and try to get its polynomial approximation around $a=2$.
Video 3: Example of a series where the usual Taylor form fails
Here we can observe that the function can be approximated only within the interval of convergence! Hence the Taylor series expansion for the logarithm is: \[\ln(a)+\sum _{ n=1 }^{ \infty }{ \frac { (-1)^{ n+1 }(x-a)^{ n } }{ na^{ n } } } \] for $x\in(0,2a]$, where $a$ is the radius of convergence! (Refer to Power Series.) Hence we generally introduce a remainder term $R_{n}$ for our function. Therefore, \[f(x)=T_{n}(x)+R_{n}(x)\] Here $R_{n}$ is also known as the error term. NOTE: We can never find an exact value for $R_{n}$ in general! If we could find the exact value from $f(x)$ and $T_{n}(x)$, then we could recover the exact function itself by just adding the exact remainder value every time!
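The interval of convergence $(0, 2a]$ is easy to see numerically; the following sketch (my own illustration) sums the series above for $\ln(x)$ about $a=2$ at a point inside the interval and at a point outside it:

```python
import math

def ln_taylor(x, a=2.0, terms=60):
    """Taylor series of ln(x) about a:
    ln(a) + sum over n >= 1 of (-1)^(n+1) (x - a)^n / (n a^n)."""
    return math.log(a) + sum((-1)**(n + 1) * (x - a)**n / (n * a**n)
                             for n in range(1, terms + 1))

print(abs(ln_taylor(3.0) - math.log(3.0)))  # inside (0, 4]: tiny error
print(abs(ln_taylor(5.0) - math.log(5.0)))  # outside: the partial sums blow up
```

At $x=3$ the ratio $|x-a|/a = 1/2$ and the terms shrink geometrically; at $x=5$ the ratio is $3/2$ and the partial sums diverge, no matter how many terms are added.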
Taylor's theorem

Let $f(x)$ be a continuous function which is differentiable $k+1$ times on the interval $[a, x]$ (or on $[x, a]$, if $x < a$). Then, \[f(x)=\sum _{ n=0 }^{ k }{ \frac { (x-a)^{ n } }{ n! } f^{ (n) }\left( a \right) } +\int _{ a }^{ x }{ \frac { (x-t)^{ k } }{ k! } f^{ (k+1) }\left( t \right)dt }\] Alternatively, the remainder can also be viewed as the rate of change of the area bounded between $f(x)$ and $T_{n}(x)$.
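The integral form of the remainder can be verified numerically; this sketch (my own illustration, using a simple midpoint quadrature) takes $f=\exp$, whose derivatives are all $e^{t}$, and checks that the integral equals $f(x)-T_{k}(x)$:

```python
import math

def taylor_poly_exp(x, k, a=0.0):
    """Degree-k Taylor polynomial of e^x about a (every derivative is e^a)."""
    return sum(math.exp(a) * (x - a)**n / math.factorial(n)
               for n in range(k + 1))

def integral_remainder_exp(x, k, a=0.0, steps=50000):
    """Midpoint-rule estimate of the integral remainder for f = exp:
    the integral from a to x of (x - t)^k / k! * e^t dt."""
    h = (x - a) / steps
    total = 0.0
    for i in range(steps):
        t = a + (i + 0.5) * h
        total += (x - t)**k / math.factorial(k) * math.exp(t)
    return total * h

x, k = 1.0, 3
lhs = math.exp(x) - taylor_poly_exp(x, k)  # f(x) - T_k(x)
rhs = integral_remainder_exp(x, k)
print(lhs, rhs)  # the two agree to many decimal places
```

The agreement is limited only by the quadrature step, confirming the identity term by term.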
Video 4: How the rate of change of area increases outside the interval of convergence
Lagrange's formula

Since $R_{n}(x)=f(x)-T_{n}(x)$, we get $R_{n}^{(n+1)}(x)=f^{(n+1)}(x)-T_{n}^{(n+1)}(x)$ (where $P^{(n+1)}$ is the $(n+1)$th derivative of some function $P$). Since $T_{n}(x)$ is an $n$th degree polynomial, $T_{n}^{(n+1)}(x)=0$, and therefore $R_{n}^{(n+1)}(x)=f^{(n+1)}(x)$. Now if $\left| { f }^{ (n+1) }(x) \right| \le M$, we get $\left| R_{ n }^{ (n+1) }(x) \right| \le M$. Therefore, after integrating $n+1$ times, we get \[\left| R_{ n }(x) \right| \le \frac { M\left| x-a \right| ^{ n+1 } }{ (n+1)! }\] This is referred to as Lagrange's form of the remainder. NOTE: We can approximate a function with a polynomial using the Taylor expansion only in its interval of convergence! There are many other series which can be used to approximate functions on bigger intervals. (Padé approximation is one such method.) Hence $\lim _{ n\to \infty }{ R_{ n }(x)}=0$ for $|x-a|<R$, where $R$ is the radius of convergence of the series. And hence the function becomes just the sum of its Taylor Series.
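Lagrange's bound is easy to test in practice; the following sketch (my own illustration) uses $f=\sin$, where every derivative is bounded by $M=1$, so the bound reads $|R_{n}(x)|\le |x|^{n+1}/(n+1)!$:

```python
import math

def sin_taylor(x, n):
    """Degree-n Maclaurin polynomial of sin(x) (only odd powers appear)."""
    return sum((-1)**k * x**(2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n // 2 + 1))

# Every derivative of sin is bounded by M = 1, so Lagrange's form gives
# |R_n(x)| <= |x|^(n+1) / (n+1)!.
x, n = 1.5, 7
actual = abs(math.sin(x) - sin_taylor(x, n))
bound = x**(n + 1) / math.factorial(n + 1)
print(actual, bound)  # the actual error sits below the bound
```

The true error is comfortably inside the Lagrange estimate, and both shrink factorially as $n$ grows, which is exactly why $R_{n}(x)\to 0$ inside the interval of convergence.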
Applications
History

It all started around the 14th century, when work on infinite series was begun by Madhava of Sangamagrama; his series are now known to be special cases of Taylor series. In the 17th century, things picked up at a faster pace. During this time, James Gregory used the "Maclaurin Series" of several functions in his works; he used the expansion of $\tan^{ -1 }(x)$ in his "Exercitationes Geometricae". Several other mathematicians used series expansions for various other functions, such as $\ln(1+x)$ (by N. Mercator), $(1+x)^{ a }$ (by I. Newton) and many more. The series expansions of $\tan^{ -1 }(x)$, $\sin(x)$ and $\cos(x)$ were known before 1540 and were used by many people. But there wasn't any general method for obtaining the series of an arbitrary function until 1715, when Brook Taylor formalized things. Soon after, Colin Maclaurin took a special case of these results and gave the Maclaurin Series.
References
Further Readings
The work under this website is licensed under a Creative Commons Attribution-Share Alike 4.0 International License CC BY-SA