
Theory of Equations

Lesson 4

by

Barry H. Dayton

Northeastern Illinois University

Chicago, IL 60625, USA

www.neiu.edu/˜bhdayton/theq/

These notes are copyrighted by Barry Dayton, 2002. The PDF files are freely available on the web and may be copied into your hard drive or other suitable electronic storage devices. These files may be shared or distributed. Single copies may be printed for personal use but must list the website www.neiu.edu/˜bhdayton/theq/ on the title page along with this paragraph.

“Maple” and “MAPLE” represent registered trademarks of Waterloo Maple Inc.


evident whether an exact number is real or complex; for example, √(−5 + ...) is a real number, can you prove that? Given two exact numbers the key question is: are they equal? It is not always possible to tell, but there are ways that often work. For example, Maple has a built-in procedure named radnormal which can usually tell whether an exact number made up of radicals of rational numbers is zero, and x = y if and only if x − y = 0. Recall that the equality relation is reflexive (always x = x), symmetric (if x = y then y = x), and transitive (if x = y and y = z then x = z). On the other hand, it is very hard to decide which of two real exact numbers is larger. And exact numbers need not be real, but we saw in Chapter 1 that there is no ordering of complex numbers. So order is not a useful concept with exact numbers. We will be using exact numbers in Chapters 4, 5, and 6.
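For instance, since √2 + √3 = √(5 + 2√6) (square both sides to check), a call such as the following should return 0; this small illustration is not from the original notes:

radnormal(sqrt(5 + 2*sqrt(6)) - sqrt(2) - sqrt(3));   # 0, so the two exact numbers are equal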

DECIMAL APPROXIMATIONS

The real numbers can be represented as infinite decimals, that is, numbers such as 3.14159265358979323847.... While this is useful for some theoretical considerations, it is not possible to represent real numbers this way for computational purposes, as it would take an infinitely long amount of time to write down most numbers. Fractions can be represented as infinite repeating decimals, for example 1/3 = 0.3333333... or 24/7 = 3.428571428571.... The sequence of repeating digits is called the repetend; here the repetends are 3 and 428571 respectively. Sometimes the repetend is 0, as in 3/16 = 0.18750. Since there is only a finite amount of information that must be given, a few digits and the repetend, in principle we could represent fractions exactly using decimals. The following Example/Exercise shows why this is not practical:

Exercise 2.1.1 [20 points] Do the indicated operations on repeating decimals giving the answers as repeating decimals. In particular, find the repetends. You may use Maple.

1. Add 3.91 + 2.911 + 1.91111
2. Multiply 2.07 ∗ 3.13

For these reasons we will never, in this text, use decimals as exact numbers, only as approximations to exact real numbers. Thus we can approximate π = 3.1415927 or 24/7 = 3.4285714285 or 3/16 = 0.1875.


We will use the usual convention in rounding: for positive numbers, if the (n+1)st digit to the right of the decimal point is 0, 1, 2, 3, or 4, we round to n digits by dropping all digits to the right of the nth digit; however, if the (n+1)st digit is 5 or higher, we drop the digits to the right of the nth digit but raise the nth digit by 1. In this case, if the nth digit is 9 we have to adjust digits further to the left. For negative numbers we apply this rounding convention to the absolute value and then put the negative sign back in. Thus, rounding to 5 digits we see

2.53422468 rounds to 2.53422
−3.43414682 rounds to −3.43415
1.9999999 rounds to 2.00000
0.000004 rounds to 0.00000

In this text we will assume that a decimal number represents any number that rounds to it. Thus the decimal number 2.53422 represents not a specific real number but rather the half-open interval [2.534215, 2.534225) = {x ∈ ℝ | 2.534215 ≤ x < 2.534225}. In our usage the numbers 1/2, 0.5, 0.50, 0.500 are all different: 1/2 is an exact number, 0.5 represents all numbers in the interval [45/100, 55/100) = [.45, .55), 0.50 represents all numbers in the interval [0.495, 0.505), and 0.500 represents all numbers in [0.4995, 0.5005). Figure 2.1 shows this pictorially.

Figure 2.1: 1/2, .5, .50

Note also that −0.5 represents (−.55, −.45] and 0. represents the open interval (−.5, .5), while 0.0 represents (−.05, .05) and 0.00 represents (−.005, .005). With our usage, equality of decimal numbers is not a useful concept; for example, we do not have the symmetry property, since 0.5 represents 0.467 but 0.467 does not represent 0.5.


2.2 Numerical Algorithms

Numerical algorithms use floating point approximations to real numbers to do calculations. One then gets approximate, rather than exact, solutions to the problem. Traditionally, when working with numerical algorithms, there are two main concerns: accuracy and efficiency (speed). Using modern calculation equipment these will not be important considerations in our work with polynomials, but we discuss these factors briefly in this section; for more details consult a numerical analysis textbook. Other concerns regarding numerical methods are stability/reliability and simplicity of the method. We will address these issues when discussing particular algorithms.

ACCURACY

As mentioned above, it is the mantissa of a floating point number that determines its accuracy. More precisely, we use the term significant digits. To define the concept of significant digits we make a very brief excursion into error analysis. Let p be a real number and p∗ an approximation of p, e.g. p∗ is a floating point number. The absolute error is |p∗ − p| whereas the relative error is |p∗ − p|/|p|. The relative error is considered the more important measurement by numerical analysts. Thus, p∗ is said to approximate p to t significant digits if t is the largest non-negative integer so that the relative error is less than 5 ∗ 10^−t. Note that if p = 2.43333333333 and p∗ = 2.433 then the relative error is 1.37 ∗ 10^−4, so p∗ approximates p to 4 significant digits. On the other hand, if p = 9.23333333333 and p∗ = 9.233 then, while the absolute error is the same, the relative error is 3.61 ∗ 10^−5, so p∗ approximates p to 5 significant digits. At first this might seem a bit strange, but if one considers that the number 9.233 is about as accurate as the number 10.233, then 5 significant digits seems more reasonable.

For practical purposes, the number of significant digits of accuracy one can get from a computer or calculator is at best 1 less than the number of digits of the mantissa in the floating point representation of the number. One way to find out how many digits a given machine has is the following: enter 1/3 as a division problem. Multiply by 10 and subtract 3. Repeat this last step until the leading digit is no longer 3. Your machine precision is 1/10 raised to the power of the number of times you can do this. A quicker way that might work is the calculation 4 − 4/3 − 4/3 − 4/3 (in Maple let x := evalf(4/3) and do 4 - x - x - x). Some calculators or programs make assumptions about rational numbers, so you may need to calculate something like 3 ∗ tan(π/6)^2 − 1 using an internal value for π. Calculators may have a machine precision of between 7 and 12 digits; the machine precision of a computer program may depend on the program. Maple's machine precision is determined by the variable Digits, which has a default value of 10 but can be arbitrarily set by the user.
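Following the text's own suggestion, a quick check of Maple's working precision might look like the following sketch; the exact residue you see depends on the Digits setting and on how Maple displays results.

x := evalf(4/3);
4 - x - x - x;        # about 1.0e-9 at the default Digits = 10; the size of this residue reveals the precision
Digits := 20;         # the working precision can be raised; repeating the test then gives a smaller residue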


For the purposes of the exercises in this Chapter you should try for 5 significant digit accuracy, which should be attainable on any calculator or computer with sufficient care.

One very important consideration for the computer or calculator user is the representation of the number 0. In principle the number 0 is not representable as a floating point number. Some computers have special ways of denoting exact 0; however, due to round-off error, numbers which should be 0 are often expressed as floating point numbers with large negative exponents. If you are dealing with, say, 5 significant digit accuracy, any number given in exponential form mE−t where t > 6 should be considered as, and treated as, 0. For example, −0.382342E−8 and .789543E−10 are different ways of representing the number zero. In any case, any number less than your machine precision in absolute value must be considered to be zero.

Floating point multiplications, or additions of numbers with the same sign, give results with about the same relative error as the numbers started with. Exponentiation or use of math functions such as sine or cosine uses many floating point additions and multiplications, and so results will have more relative error. One place where floating point arithmetic can kill you is when adding numbers of about the same magnitude but of opposite signs. For example, let p = 243.2222222222222 and q = 243.2211111111111. Then p∗ = 243.222 and q∗ = 243.221 are approximations to p and q good to 6 significant digits. Now p − q = .0011111111111 but p∗ − q∗ = .001; the absolute error is .0001111111111 but the relative error is 0.1, and thus the approximation of p − q by p∗ − q∗ is good to only one significant digit. Thus whenever using a calculator or computer, great care must be taken to avoid this situation, i.e. make sure you never get small numbers as a result of addition or subtraction of large ones. The most serious disaster with floating point is that the commutative law of addition may not hold. For example, consider A = 10^20, B = −10^20 and C = 1. Then A + B + C = 1 but A + C + B = 0.
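Both effects can be tried directly in Maple; this is a sketch, and, as noted later in this section, Maple's own display of significant digits may make the output look slightly different.

ps := 243.222: qs := 243.221:   # the 6-significant-digit approximations of p and q
ps - qs;                        # .001, versus the true difference .0011111111111
A := 1.0e20: B := -1.0e20: C := 1.0:
A + B + C;                      # 1.0
A + C + B;                      # 0., since adding C to A first loses C entirely at 10 digits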

EFFICIENCY

With computers and calculators arithmetic can be done with great speed and little human effort. Yet the study of efficiency of algorithms is more important today than at any time in the past. The reason for this is that we now tackle harder problems than were previously deemed possible, and we often repeat calculations hundreds or thousands of times. For example, instead of going through many cumbersome steps in a process called "separation of roots", which was formerly used to get the approximate location of real roots of a polynomial, it is now easy to simply use the computer to graph the polynomial by plotting points. But a plotting program may evaluate the polynomial between 280 and 640 times, depending on the machine. Such programs can be


Programming in Maple is structured and based on procedures. A procedure is essentially a set of Maple commands gathered together under one name. The syntax for defining a procedure will be something like

F := proc(a,b,c) . . . end;

where the dots indicate Maple commands. The variables a, b, c are the parameters of the procedure. A Maple procedure may have zero or many arguments. To use the procedure defined above simply use the syntax (for example)

F(2,7,sqrt(3));

One technicality that Maple forces us to deal with is the idea of local and global variables and parameters. In the above example, a, b, c are parameters. These variables retain the values given to them in the calling statement; thus above a = 2, b = 7 and c = √3.

Parameters cannot be assigned new values within a procedure. Local variables are variables which exist only within the procedure. Thus if the procedure F above uses the local variable X, then when the procedure is done running Maple no longer recognizes X as an assigned variable, unless X existed beforehand; in this latter case the value of X was not changed by the procedure. If X is declared as a global variable, then the value X had before the procedure was run is available to the procedure, and the value that X has at the end of the procedure is retained by X when the procedure is done. The latest versions of Maple require the user to declare all variables used as local or global, or Maple will give a warning and declare all undeclared variables as local. Note that parameters are automatically local and do not get declared. Two important Maple statements for programming are the repetition statement and the selection statement. The repetition statement is of the form


for i from 2 to 10 do . . . od;

and the selection statement is of the form

if . . . then . . . elif . . . then . . . else . . . fi;

In both cases the dots indicate one or more Maple statements. Note the use of od; and fi; to end for and if statements respectively. The else and elif clauses are optional in the if statement. For more options, including a while statement, see the help pages for for and if. Note that the for statement can also be used alone outside a procedure; for example, if you wish to make a table of values of the polynomial f you could simply enter on the command line:

for t from 0 to 5 by 0.1 do print(t, subs(x=t, f)); od;

Be very careful not to forget the od, because Maple will not give any output until it receives the od. When defining a procedure or a stand-alone for loop, use SHIFT-ENTER rather than just ENTER until the last line of the procedure. This tells Maple to wait for the next line of the procedure and not to skip lines. You will not be able to leave the procedure-defining mode until you put in a last line consisting of end; and hit ENTER. You can later scroll back and edit a procedure, then hit ENTER anywhere in the procedure when you are done. The easiest way to save procedures is to save the worksheet on which they are defined. For examples of Maple procedures see the various programs in the remainder of this section.
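As a small illustration of these pieces (a hypothetical procedure, not one used later in the text), the following returns the largest absolute value among the coefficients stored in a global array P indexed 0..n, as used later in this section:

MaxCoeff := proc() local j, m; global n, P;
m := abs(P[0]);
for j from 1 to n do
  if abs(P[j]) > m then m := abs(P[j]); fi;   # keep the larger of the two
od;
RETURN(m);
end;

With n := 4; P := array(0..n,[-4,-4,-2,-11,1]); as in the next example, MaxCoeff(); returns 11.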


One interesting feature of Maple is that c could be an indeterminate. Thus executing EVP('x') gives the algebraic version of the polynomial!
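The procedure EVP itself is defined on a page not included in this excerpt. A hypothetical sketch, consistent with the remark above and with the operation count described below (one addition, one multiplication and one exponentiation per step, using the same global n and P as EVH further on), might be:

EVP := proc(c) local j, y; global n, P;
y := 0;
for j from 0 to n do
  y := y + P[j]*c^j;    # one addition, one multiplication, one exponentiation
od;
end;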

On the whole, this program will work quite well, but from the point of view of numerical analysis it has two flaws. First there is speed. We note that in each step of our for loop we have one addition, one multiplication and one exponentiation. If we think of exponentiation as equivalent to 5 additions or multiplications, we have the equivalent of seven operations each step. For a polynomial of degree N we have N + 1 steps, so the complexity is 7 ∗ (N + 1) operations. We will see that this is over 3 times more than necessary. Secondly there is accuracy. Suppose we wish to evaluate the polynomial p(x) = −4 − 4x − 2x^2 − 11x^3 + x^4

at c = 11.213. The following chart shows the beginning value of y and the value of a_j ∗ c^j each time the calculation y ← y + a_j ∗ c^j is executed by a calculator with machine accuracy 10^−7.

j      y             a_j ∗ c^j       y + a_j ∗ c^j
0      0.0           −4.0            −4.0
1      −4.0          −44.852         −48.852
2      −48.852       −251.4628       −300.3148
3      −300.3148     −15508.09       −15808.40
4      −15808.40     15808.38        −.02050781   (Answer)

We note that all the digits of the final value of y except the first two cannot be justified from the sum −15808.40 + 15808.38, and in fact are put there quite arbitrarily by the calculator just to bring the number up to the correct length. Thus all we can really conclude is that the answer is somewhere about −.02, i.e. at best one significant digit. Experts in numerical analysis prefer a different algorithm, the nested algorithm. Here we write our polynomial as

x^4 − 11x^3 − 2x^2 − 4x − 4 = (((1 ∗ x − 11) ∗ x − 2) ∗ x − 4) ∗ x − 4

A Maple Procedure to evaluate the polynomial in this form might be

Maple Implementation

EVH := proc(c) local j,y; global n,P;
y := 0;
for j from n by -1 to 0 do
  y := y*c + P[j];
od;
end;

This procedure is executed by the statements

n := 4; P := array(0..n,[-4,-4,-2,-11,1]); EVH(11.213);

First we note that the calculation y = y ∗ c + a_j contains only two operations, neither of them exponentiations. Thus the complexity of this algorithm to evaluate a polynomial of degree N, in terms of additions and multiplications, is 2 ∗ (N + 1), 3.5 times faster than the exponential algorithm. As for accuracy, here is the chart of additions performed by this algorithm:

j      y ∗ c         a_j      y ∗ c + a_j
4      0.0           1.       1.0
3      11.213        −11.     .213
2      2.388369      −2.      .3883692
1      4.354782      −4.      .3547820
0      3.978166      −4.      −.02183445   (Answer)

Note that, unlike the chart for the exponential form, all the numbers tend to lie in a much smaller range of orders of magnitude. It is true that we are constantly adding numbers of like magnitude but opposite sign, but the difference is only smaller by a factor of 10; thus in the last column only the last digit is unwarranted. Thus we may have much more confidence in this answer, probably to 4 significant digits (the last actually computed digit is always somewhat suspect).


for j from 1 to k do F(2.00 + .01*j): od;
time()-st;
end;
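Only the tail of this timing procedure appears in this excerpt; its opening lines fall on a page not shown. A hypothetical completion, consistent with the surviving lines and with the description of F and k below (the parameter order is an assumption), might be:

Timeproc := proc(F, k) local j, st;
st := time();                                # record the starting CPU time
for j from 1 to k do F(2.00 + .01*j): od;    # evaluate F at k points
time() - st;                                 # the elapsed time is the value returned
end;

Timeproc(EVH, 1000);                         # e.g., time 1000 evaluations of EVH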

To use this we replace the parameter F with the name of our evaluation procedure. The parameter k is the number of times the polynomial is to be evaluated; k = 1000 is a good choice, once you have the procedure debugged. Before trying F as EVP or EVH you should figure out the overhead by trying a simple procedure such as the arrow function

F := c -> c;

then subtract the time Timeproc takes to evaluate this from your subsequent times. You can also compare Maple's internal evaluation procedure with EVP and EVH. Create the arrow function

F := c -> subs(x=c,p);

Here p will be a polynomial in standard algebraic Maple syntax. One final comment on Maple is in order. Unlike calculators or standard programming languages, when Maple does a numerical calculation it estimates the accuracy and only prints out what it considers to be significant digits. Thus if you run the examples above in Maple, your results will be different from those reported above.

Exercise 2.3.1 [20 points] Explore the comparative evaluation time of EVP and EVH on your Maple system using k = 1000. Also compare these procedures to subs(x=c,p); as indicated above where p is a polynomial in exponential form and again when p is in nested form. Try this for several polynomials of various coefficients and degrees. Write up and try to explain your findings.

Exercise 2.3.2 [20 points] Let p(x) = −13 − 7x − 6x^2 − 3x^3 − 16x^4 + 2x^5. How accurately can you calculate p(8.2341)? Try different calculator and computer environments, and try both the nested and exponential methods in each environment. Try the different methods (EVP, EVH, subs(x=c,p)) in Maple and try different settings of Digits. Write up your findings and try to explain them based on the discussion in this section.


2.4 Taylor’s Series and Horner’s Process

The exponential method for evaluation lacks good precision in the example of the last section primarily because, for x of large absolute value, x^j gets very large, often much larger in magnitude than the value of the polynomial. One way to try to avoid this problem is to use the nested method (or synthetic division), but this doesn't always work out as well as in the example above. A more certain method is to use a change of variable to replace x by a number of absolute value less than 1, so that the successive terms in the exponential evaluation get smaller rather than larger. To accomplish this, pick a convenient number c near the number at which you want to evaluate the polynomial. If your original polynomial has integer coefficients, choosing c to be an integer will allow exact arithmetic. Let p(x) be your polynomial; by the Remainder Theorem

p(x) = (x − c) g_1(x) + p(c)

Applying the Remainder Theorem to g_1(x) gives

g_1(x) = (x − c) g_2(x) + g_1(c)

and substituting this into the first equation gives

p(x) = (x − c)^2 g_2(x) + (x − c) g_1(c) + p(c)

Now applying the Remainder Theorem to g_2(x) and substituting,

p(x) = (x − c)^3 g_3(x) + (x − c)^2 g_2(c) + (x − c) g_1(c) + p(c)

We can continue in this manner, noting that the degree of the quotient g_k(x) goes down by 1 each step. Thus if deg p(x) = n, then g_n(x) is a non-zero constant (in fact it is the leading coefficient of p(x)) and thus

p(x) = b_0 + b_1(x − c) + b_2(x − c)^2 + · · · + b_n(x − c)^n     (2.1)

where b_0 = p(c) and b_k = g_k(c), with g_k(x) = (x − c) g_{k+1}(x) + g_k(c) the kth quotient and g_0(x) = p(x). This polynomial in (x − c) is the Taylor Series of p(x) about c. This argument shows, as one also learns in calculus, that for a polynomial the Taylor series is itself a polynomial in (x − c), and so convergence does not become a problem. As an historical aside, it should be noted that Taylor did not invent Taylor's series; in fact, Taylor's series were discovered by the Scottish mathematician Gregory.


od;
RETURN(Q);
end;
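These are the closing lines of a procedure Horners that carries out this repeated division; its opening lines fall on a page not included in this excerpt. A hypothetical sketch of the whole procedure, consistent with the surviving tail and with the example call below (the details of the loop are an assumption), is:

Horners := proc(c, P) local j, k, Q; global n;
Q := array(0..n, [seq(P[j], j=0..n)]);   # start from the coefficients of p(x)
for k from 0 to n-1 do                   # repeated synthetic division by (x - c)
  for j from n-1 by -1 to k do
    Q[j] := Q[j] + c*Q[j+1];
  od;
od;
RETURN(Q);
end;

After it runs, Q[k] holds the coefficient b_k of (x − c)^k in Equation 2.1.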

An example of running this procedure is

Q := Horners(11,P); entries(Q); Evp(x-11,Q);

where P is the array giving the polynomial p(x) = −4 − 4x − 2x^2 − 11x^3 + x^4 of the previous section. Note that entries(Q) gives the coefficients of Q (warning: not guaranteed to be in the correct order!) whereas Evp(x-11,Q) almost writes p(x) as a polynomial in x − 11. I say "almost" since Maple cannot resist the temptation to simplify the linear term. While the above procedure is given to explain how Horner's process is implemented, Maple has a built-in procedure which does this, called taylor. The syntax would be

p := -4 - 4*x - 2*x^2 - 11*x^3 + x^4; taylor(p, x=11, 5);

The last argument, 5, gives the number of terms of the Taylor polynomial that you want displayed. For a polynomial of degree n this should be n + 1. You are cautioned that the result of the taylor procedure is of the SERIES data type and cannot be manipulated like a polynomial, even though it prints out on the screen like one. A series named f can be converted back to a polynomial by

convert(f,polynom);
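Putting these commands together (assuming p is defined as above), one might, for example, recover the shifted polynomial and evaluate it:

f := taylor(p, x=11, 5);
q := convert(f, polynom);       # now an ordinary polynomial in (x - 11)
evalf(subs(x = 11.213, q));     # should agree, up to rounding, with the hand computation below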

The example above has the data for the polynomial p(x) used as an example in the previous section. Running the procedure we obtain for c = 11 (the integer closest to 11.213)

p(x) = −290 + 1283(x − 11) + 361(x − 11)^2 + 33(x − 11)^3 + (x − 11)^4

Thus evaluating the polynomial p(x) at x = 11.213 is equivalent to evaluating

f(z) = −290 + 1283z + 361z^2 + 33z^3 + z^4 at z = 0.213


This can be done quite accurately by the exponential method, especially by calculator, noting that the first 3 terms, although relatively large, can be calculated exactly. Thus the successive terms are

  −290.000000
   273.279000
    16.378209
      .318899
      .002058
Answer  −.021833

where all the digits indicated are accurate, except in the sum, where the last digit is probably inaccurate. This sum can be calculated by calculator, doing the integer parts and fractional parts separately. Here we are sure that we have 4 significant digit accuracy. Differentiating the Taylor series (Equation 2.1 above) k times, we see that the Taylor coefficients are connected to the higher derivatives of p(x) by the formula

b_k = p^(k)(c) / k!

Here p^(k) denotes the kth derivative of p(x), the zeroth derivative being the function itself. This formula is known from calculus, but it should be noted that we have only used algebraic methods. Thus Horner's process can be used to calculate successive derivatives, a fact which we will use several times later in the chapter.
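For instance, from the expansion of p(x) about c = 11 computed above, the formula gives p(11) = b_0 = −290, p'(11) = 1! · b_1 = 1283, p''(11) = 2! · b_2 = 722, and p'''(11) = 3! · b_3 = 198; these values are easy to confirm by differentiating p(x) directly.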

2.5 Synthetic Division

In precomputer days the nested algorithm and Horner's process were done by synthetic division. While I don't generally teach synthetic division to freshman algebra students, it is informative to see how this used to work and to have the technique available for hand calculations. In Section 2.3 the long division example was used to show that the nested algorithm was equivalent to using the remainder theorem. One can see that there is a lot of redundant writing and poor use of space. In particular, the "trial remainder" and the next coefficient of the quotient are the same. These numbers actually end up being written 3 times. Thus, to save time and space, we write these only once, and we also do not repeat the coefficients of the dividend. Finally we turn our work upside down in anticipation of Horner's process. Thus, to divide f(x) = 3x^3 + 8x^2 + 5x − 7 by x + 2 we set up our work as follows:
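The author's tableau falls on a page not included in this excerpt. As a check on the arithmetic it would carry out, here is the standard synthetic-division layout (a sketch, not necessarily the author's exact arrangement); dividing by x + 2 means using c = −2:

−2 |  3    8    5   −7
   |      −6   −4   −2
   -------------------
      3    2    1   −9

so the quotient is 3x^2 + 2x + 1 and the remainder is −9, in agreement with f(−2) = −24 + 32 − 10 − 7 = −9.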