
Taylor Series (to be completed)

"Taylor series is an extremely powerful function approximation tool in mathematics." (3Blue1Brown)

I have been relearning Taylor series these days and decided to write an article as a learning record.

The general idea of Taylor's theorem is that if a function is smooth enough and the derivatives of the function at a certain point are known, we can use these derivative values as coefficients to construct a polynomial that approximates the value of the function in the neighborhood of that point.

The formula is expressed as

$$g(x) = f(a) + \frac{f^{\prime}(a)}{1!} (x - a) + \frac{f^{\prime\prime}(a)}{2!} (x - a)^2 + \dots + \frac{f^{(n)}(a)}{n!} (x - a)^n + R_n(x)$$

Let's put the above equation aside for now and explain what kind of function counts as smooth. A smooth function is one that can be differentiated infinitely many times at every point in its domain, with all of its derivatives continuous. For example, take $y = x$: its derivative is $y^{\prime} = 1$, and every derivative after that is the constant $0$, so all orders of derivatives exist and $y = x$ is smooth. Another example is $y = \sin x$, which is likewise smooth of infinite order, because its derivatives simply cycle through $\cos x$, $-\sin x$, $-\cos x$, $\sin x$.
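To see this cycle concretely, here is a minimal sketch using SymPy (assuming it is available) that differentiates $\sin x$ repeatedly:

```python
import sympy as sp

x = sp.symbols('x')
# Repeated derivatives of sin(x) cycle with period 4
for k in range(5):
    print(k, sp.diff(sp.sin(x), x, k))
# 0 sin(x)
# 1 cos(x)
# 2 -sin(x)
# 3 -cos(x)
# 4 sin(x)
```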

What is approximation?

Approximating a function means using another function that stays within a small error of the original over some range. Seen this way, approximation is a practical way to compute complex functions: we replace a function that is hard to evaluate with one that is easy to evaluate. A concrete example makes this more intuitive. Suppose we need to compute $\sin 2$ by hand. It is usually approximated by the Taylor expansion at zero, $\sum_{n=0}^{+\infty} \frac{(-1)^n}{(2n+1)!} 2^{2n+1}$ (this is just an example; how to expand is introduced later), and only a few terms are needed to obtain a good approximate value. But no matter how we approximate, there will inevitably be an error (the term $R_n(x)$ in the expression), which we will also discuss later.
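As a rough sketch of how few terms are needed, the following snippet sums the series above for $x = 2$ (the function name is mine, just for illustration):

```python
import math

def sin_maclaurin(x, terms):
    # Partial sum of sum_{n=0}^{terms-1} (-1)^n / (2n+1)! * x^(2n+1)
    return sum((-1)**n / math.factorial(2*n + 1) * x**(2*n + 1)
               for n in range(terms))

for terms in (1, 2, 3, 5):
    print(terms, sin_maclaurin(2, terms))
print("math.sin(2) =", math.sin(2))
# The 5-term partial sum already differs from math.sin(2)
# by only about 5e-5.
```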

How to approximate a function

The derivative of a function at a point is the rate of change of the function near that point. Differentiating the derivative again gives the rate of change of that rate of change, and this relationship can be pushed down indefinitely. In general, to approximate a function it is enough to make the two functions' derivatives of every order agree at a point: if the rate of change, the rate of change of the rate of change, and so on all match, the two graphs will be extremely close near that point. So, to approximate a function $f(x)$, we can posit a polynomial $g(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n$, where $n$ is any positive integer. The reason for choosing this form for $g(x)$ is simple: it makes the $n$th-order derivatives easy to compute, as the sketch below shows.
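A minimal sketch of why the monomial form is convenient: the $k$th derivative of $x^j$ at zero is $0$ unless $j = k$, where it is $k!$, so each derivative at zero picks out exactly one coefficient (SymPy assumed):

```python
import sympy as sp

x = sp.symbols('x')

# k-th derivative of x**j evaluated at 0: nonzero only on the diagonal j == k
for j in range(4):
    row = [sp.diff(x**j, x, k).subs(x, 0) for k in range(4)]
    print(f"x**{j}:", row)
# x**0: [1, 0, 0, 0]
# x**1: [0, 1, 0, 0]
# x**2: [0, 0, 2, 0]
# x**3: [0, 0, 0, 6]
```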

Expanding at zero means requiring f and g to agree in value and in every derivative:

$$\begin{cases} f(0) = g(0) \\ f^{\prime}(0) = g^{\prime}(0) \\ f^{\prime\prime}(0) = g^{\prime\prime}(0) \\ \vdots \\ f^{(n)}(0) = g^{(n)}(0) \end{cases}$$

Here is the process of solving for the coefficient $a_n$ from the $n$th-order derivatives:

$$\because f^{(n)}(0) = g^{(n)}(0), \quad g^{(n)}(0) = a_n \, n! \\ \therefore a_n = \frac{f^{(n)}(0)}{n!}$$

(Differentiating $g$ exactly $n$ times eliminates every term of degree below $n$, and the $x^n$ term becomes the constant $a_n \, n!$, which is why $g^{(n)}(0) = a_n \, n!$.)

Applying the same argument at every order, we get

$$\begin{cases} a_0 = f(0) \\ a_1 = \frac{f^{\prime}(0)}{1!} \\ a_2 = \frac{f^{\prime\prime}(0)}{2!} \\ \vdots \\ a_n = \frac{f^{(n)}(0)}{n!} \end{cases}$$
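As a sanity check, here is a small sketch that computes $a_n = f^{(n)}(0)/n!$ for $f = \sin$ and assembles the resulting polynomial (SymPy assumed):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)

# a_n = f^(n)(0) / n!, the coefficients just derived
coeffs = [sp.diff(f, x, n).subs(x, 0) / sp.factorial(n) for n in range(8)]
print(coeffs)  # [0, 1, 0, -1/6, 0, 1/120, 0, -1/5040]

# Assemble the degree-7 Maclaurin polynomial for sin
g = sum(c * x**n for n, c in enumerate(coeffs))
print(g)  # -x**7/5040 + x**5/120 - x**3/6 + x
```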

So we get the Taylor expansion at zero (the Maclaurin series):

$$g(x) = f(0) + \frac{f^{\prime}(0)}{1!} x + \frac{f^{\prime\prime}(0)}{2!} x^2 + \dots + \frac{f^{(n)}(0)}{n!} x^n + R_n(x)$$

To generalize this expansion to an arbitrary point $a$, we just shift the polynomial to the right: replace $x$ with $x - a$ and evaluate the derivatives at $a$ instead of $0$.

$$g(x) = f(a) + \frac{f^{\prime}(a)}{1!} (x - a) + \frac{f^{\prime\prime}(a)}{2!} (x - a)^2 + \dots + \frac{f^{(n)}(a)}{n!} (x - a)^n + R_n(x)$$
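A quick sketch of the shifted formula in action, expanding $\sin x$ around $a = \pi/2$ (the expansion point, degree, and target point are arbitrary choices for illustration):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)
a = sp.pi / 2  # expansion point, chosen arbitrarily

# Degree-4 Taylor polynomial around a, built term by term from the formula
g = sum(sp.diff(f, x, n).subs(x, a) / sp.factorial(n) * (x - a)**n
        for n in range(5))

x0 = a + sp.Rational(1, 10)  # a target point near the expansion point
print(sp.N(g.subs(x, x0)))   # 0.995004166666667
print(sp.N(sp.sin(x0)))      # 0.995004165278026
```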

We call $R_n(x)$ in the above equation the remainder term, which will be discussed separately later. If this term is dropped, the equation only holds with an approximately-equals sign: after all, no matter how large $n$ is, there is always a degree-$(n+1)$ term left over.

Processing the remainder term $R_n(x)$

It's too complicated to derive, so I'll write about it later.

Both the derivation and the result express the same thing: the farther the target point is from the expansion point, the larger the error. For example, when we expand $\sin x$ at zero and then evaluate it by substituting $2$, that $2$ is the target point. But why are we allowed to substitute $2$ into an expansion at zero to compute $\sin 2$? This is related to the interval of convergence: the series for $\sin$ converges on $(-\infty, +\infty)$, so any value can be substituted directly. I'll write about this later as well.
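To illustrate the distance effect, here is a sketch that evaluates a fixed degree-7 Maclaurin polynomial for $\sin$ at points increasingly far from the expansion point $0$ (the helper name is mine):

```python
import math

def sin_taylor7(x):
    # Degree-7 Maclaurin polynomial: x - x^3/3! + x^5/5! - x^7/7!
    return sum((-1)**n / math.factorial(2*n + 1) * x**(2*n + 1)
               for n in range(4))

for x in (0.5, 1.0, 2.0, 4.0):
    err = abs(sin_taylor7(x) - math.sin(x))
    print(f"x = {x}: error = {err:.2e}")
# The error grows rapidly as x moves away from the expansion point 0.
```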
