Taylor Series?

<p>Does anyone else understand this? I'm having loads of trouble with it... it doesn't even make sense. And it doesn't help that the book gives the easiest example ever, and the professor only uses those same examples when lecturing.</p>

<p>The point of the Taylor series is to estimate a function at a point by using its value (and its derivatives) at a nearby point at which it's presumably easier to evaluate. It's a bit like using sin(pi/2) and the slope of sine at pi/2 to estimate sin(1.5), except that because you also use higher derivatives, it's more accurate.</p>
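<p>Here's a quick Python sketch of that exact example, just as an illustration (it only needs the standard math module). The slope of sine at pi/2 happens to be zero, so the real improvement shows up when you add the second-order term:</p>

<pre><code>import math

a = math.pi / 2   # expansion point: sin and its derivatives are easy here
x = 1.5           # the point we actually want

# First-order (tangent-line) estimate: sin(a) + cos(a)*(x - a)
linear = math.sin(a) + math.cos(a) * (x - a)

# Second-order term: -sin(a)/2 * (x - a)^2
quadratic = linear - math.sin(a) / 2 * (x - a) ** 2

print(linear)       # 1.0
print(quadratic)    # ~0.9974939
print(math.sin(x))  # ~0.9974950
</code></pre>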

<p>Ask in the engineering thread. Or on physicsforums.com</p>

<p>Polynomials are your friends. They are easy to integrate and differentiate, and they are a lot easier to handle inside formulas than functions like sin(), cos(), exp(). I'm sure that on a math test you would rather differentiate x^2 + x + 5 than exp(sqrt(cos(theta))).</p>
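<p>If you want to see that difference concretely, here's a little sketch with sympy (assuming you have it installed; this is just for illustration):</p>

<pre><code>import sympy as sp

theta = sp.symbols('theta')

# The polynomial's derivative is mechanical...
poly = theta**2 + theta + 5
print(sp.diff(poly, theta))    # 2*theta + 1

# ...while the nested transcendental's derivative sprawls out
nested = sp.exp(sp.sqrt(sp.cos(theta)))
print(sp.diff(nested, theta))  # -exp(sqrt(cos(theta)))*sin(theta)/(2*sqrt(cos(theta)))
</code></pre>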

<p>A guy about 300 years ago (Brook Taylor) found that for a large class of functions, you can write down a power series (roughly, an infinite-degree polynomial) that is exactly equal to your original function on some interval. The coefficients of this polynomial come from evaluating the original function and its derivatives at a point a in the interval: the coefficient of (x - a)^n is the n-th derivative at a, divided by n!.</p>
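<p>You can build those coefficients yourself. Here's a short sympy sketch (again, just an illustration) that assembles the degree-5 Taylor polynomial of exp(x) about 0 straight from that recipe:</p>

<pre><code>import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)   # any sufficiently smooth function works here

a, n = 0, 5     # expand about a, keep terms up through degree n

# k-th coefficient is (k-th derivative of f at a) / k!
taylor = sum(sp.diff(f, x, k).subs(x, a) / sp.factorial(k) * (x - a) ** k
             for k in range(n + 1))

print(sp.expand(taylor))
# x**5/120 + x**4/24 + x**3/6 + x**2/2 + x + 1
</code></pre>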

<p>The good news is that if your interval is small enough, you only need a few terms of this polynomial to get a good approximation to the original function. These approximations are used all over the place, and you can see why they're useful: in formulas and such, you would rather have polynomials than crazy transcendental functions, and if you know that your function's input doesn't vary much (your interval is short), you lose very little by making the approximation.</p>
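<p>To see how the size of the interval matters, here's one more Python sketch (illustrative only) that approximates exp(x) with just the first four terms of its series and watches the error grow as x moves away from the expansion point:</p>

<pre><code>import math

def taylor_exp(x, n):
    """Partial sum of the exp series: x^k / k! for k = 0..n."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# Four terms (n = 3) is excellent near 0 but drifts as x grows:
# error is ~4.3e-06 at x = 0.1, ~2.9e-03 at x = 0.5, ~1.1 at x = 2.0
for x in (0.1, 0.5, 2.0):
    approx = taylor_exp(x, 3)
    print(x, approx, math.exp(x), abs(approx - math.exp(x)))
</code></pre>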

<p>Is this helpful, or not really?</p>