









A lab exercise for a mathematics course focusing on polynomial and piecewise linear interpolation. Students will learn how to write recursive functions in Matlab, test polynomial interpolation with different strategies for distributing the data points, and explore Chebyshev points for interpolation. The lab includes exercises and hints for generating test points, computing interpolation errors, and comparing the performance of different interpolation methods.
Contents:
Introduction
Exercise 1
Matlab Hints
Exercise 2
Recursive Functions
Exercise 3
An Experimental Framework
Exercise 4
Chebyshev Points
Exercise 5
Bracketing
Exercise 6
Piecewise Linear Interpolation
Exercise 7
Exercise 8
Approximating the derivative (Extra)
Exercise 9
Exercise 10
Exercise 11
Exercise 12
Exercise 13
Exercise 14
Extra Credit
We saw in the last lab that the interpolating polynomial could get worse (in the sense that values at intermediate points are far from the function) as its degree increased. This means that our strategy of using equally spaced data for high degree polynomial interpolation is a bad idea. It turns out that equidistant spacing must always result in poor asymptotic convergence rates! See the excellent article by Platte, Trefethen, and Kuijlaars, Impossibility of Fast Stable Approximation of Analytic Functions from Equispaced Samples, SIAM Review, 53(2), pp. 308-318, www.siam.org/journals/sirev/53-2/77470.html. In this lab, we will test other possible strategies for spacing the data and then we will look at an alternative way to do interpolation with a guarantee that the error will get smaller when we take more points. This lab will take three sessions. If you print this lab, you may prefer to use the pdf version.
When you encounter an error while using an m-file, look at the first error message you receive, not the last. This message will identify the file from which it came, and the line number. Be careful: the file that caused the error may not be the file you think you are using; it could be called from the one you think you are using.
The error message "Not enough input arguments." often means that you forgot the @ sign in front of a function name when you used it as an argument in an m-file.
In this section, we discuss programming recursive functions. As you will see, recursive functions can be very short and elegant when explicit formulas or loops are awkward and difficult to understand.
Consider the factorial function. Its mathematical definition can be written as

0! = 1
n! = n * (n-1)!   for n >= 1
A function defined this way is termed "recursive" because it is defined in terms of itself. You have seen such definitions often in your theoretical work, but they can be confusing when you try to program them. In this section, you will see how to write recursive functions in Matlab. It can be shown that any function written in a recursive manner can be rewritten using loops; however, the recursive form of the function is generally shorter and easier to understand.
Exercise 1: The following code implements the factorial function. We will call it rfactorial because Matlab already has a built-in factorial function, called factorial.
(a) Copy the following code to a file named rfactorial.m.

function f=rfactorial(n)
% f=rfactorial(n) computes n! recursively

% your name and the date

if n<0
  error('rfactorial: cannot compute factorial of negative integer.');
elseif n==0   % double equals sign for logical testing
  f=1;
else
  f=n*rfactorial(n-1);
end

(b) Confirm that rfactorial gets the same result for 5! as Matlab's factorial function.
(c) Suppose you are computing rfactorial(3). When rfactorial starts up, the first thing it does is check if n<0. Since n=3, it goes on and discovers that before it can continue, it must start up a new copy of rfactorial and compute rfactorial(2). In turn, this new copy checks that 2>=0 and starts up another new copy of rfactorial, so that there are now two copies of rfactorial waiting and one copy active. In your own words, explain how the computation continues to get a result of 6.
(d) If you attempt to use rfactorial for a negative integer, it will print an error message and stop because of the error function. What would have happened if the code did not test if n<0?
The factorial function is really very simple, and recursion is, perhaps, too powerful a method to employ. After all, it can be written as a simple loop or, even more simply, as prod(1:n). Consider the Fibonacci series
f_0 = 1,  f_1 = 1,  f_n = f_{n-1} + f_{n-2}  for n > 1.   (1)

Values of f_n can also be computed using Binet's formula

f_n = ( (1+sqrt(5))^(n+1) - (1-sqrt(5))^(n+1) ) / ( 2^(n+1) sqrt(5) ).
You can find more information about the Fibonacci series from the Wolfram (Mathematica) web site http://mathworld.wolfram.com/FibonacciNumber.html.
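As a concrete illustration of the recursive style, the definition (1) translates directly into Matlab. (This is a minimal sketch, not one of the lab's required files; the name rfib is illustrative. Save it in a file rfib.m.)

```matlab
function f=rfib(n)
% f=rfib(n) computes the Fibonacci number f_n recursively,
% using the convention f_0 = f_1 = 1 of equation (1)
if n<0
  error('rfib: index must be nonnegative.');
elseif n<=1
  f=1;               % the two base cases f_0 = f_1 = 1
else
  f=rfib(n-1)+rfib(n-2);   % two recursive calls per invocation
end
```

Checking rfib(10) against Binet's formula, ((1+sqrt(5))^11-(1-sqrt(5))^11)/(2^11*sqrt(5)), should give agreement to rounding error (both equal 89). Note that this recursion recomputes the same values many times, so unlike rfactorial it is far slower than an equivalent loop for large n.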
% they cover the interpolation interval in a standard way, i.e.,
% xval(1)=xdata(1) and xval(NTEST)=xdata(end)
xval= ???

% we need the results of func at the points xdata to do the
% interpolation  WARNING: this is a vector statement
% In a real problem, ydata would be "given" somehow, and
% a function would not be available
ydata=func(xdata);

% use Lagrange interpolation from lab6 to do the interpolation
% WARNING: these use componentwise (vector) statements.
% Generate yval as interpolated values corresponding to xval
yval=eval_lag( ???

% we will be comparing yval with the exact results of func at xval
% In a real problem, the exact results would not be available.
yexact= ???

% plot the exact and interpolated results on the same plot
% this gives assurance that everything is reasonable
plot( ???

% compute the error in a standard way.
max_error=max(abs(yexact-yval))/max(abs(yexact));
Use cut-and-paste to copy this code from the web page.
(a) Look at the code in test_poly_interpolate.m and complete the statements that have question marks in them.
(b) Test your test_poly_interpolate.m function by getting the same approximation values for the Runge example function f(x) = 1/(1+x^2) on the interval [-pi,pi] with uniformly spaced points as you should have seen in the last lab and are reproduced here. Recover your runge.m file from the last lab or rewrite it. Do not forget to use componentwise (vector) syntax.
Runge function, evenly spaced points
ndata =  5  Max Error = 0.
ndata = 11  Max Error = 0.
ndata = 21  Max Error = 3.
(c) Construct xdata containing ndata=5 evenly spaced points between -5 and +5 (not the same interval) and use the utility test_poly_interpolate to plot and evaluate the error between the interpolated polynomial and the exact values of the Runge example function, and fill in the following table.
Runge function, evenly spaced points
ndata =  5  Max Error = ________
ndata = 11  Max Error = ________
ndata = 21  Max Error = ________
After looking at the plotted comparisons between the Runge example function and its polynomial fit, you might guess that the poor fit is because the points are evenly spaced. Maybe they should be concentrated near the endpoints (so they don't oscillate wildly there) or near the center (maybe the oscillations are caused by too many points near the endpoints). Let's examine these two hypotheses. The strategy used in the next exercise is to use a function to change the distribution of points. That is, pick a nonlinear function f that maps the interval [-1,1] into itself (we will use x^2 and x^(1/2)) and also pick an affine function g that maps the given interval, [-5,5], into [-1,1]. Then use the points x_k = g^(-1)(f(g(x~_k))) = g^(-1) o f o g (x~_k), where the x~_k are uniformly distributed.
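The composition g^(-1) o f o g above can be sketched directly with anonymous functions (a minimal sketch; the variable names are illustrative):

```matlab
xtilde = linspace(-5,5,11);        % uniformly distributed points
g  = @(x) x/5;                     % affine map from [-5,5] onto [-1,1]
gi = @(x) 5*x;                     % its inverse, from [-1,1] back to [-5,5]
f  = @(x) sign(x).*abs(x).^2;      % nonlinear map of [-1,1] into itself
xdata = gi(f(g(xtilde)));          % points concentrated near the center
```

Because g(-5)=-1, g(5)=1, and f leaves -1 and +1 fixed, the endpoints -5 and +5 are unchanged by the composition; written out, it is exactly the expression used in the next exercise.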
Exercise 4: We have been looking at evenly-spaced points between -5 and 5. One way to concentrate these points near zero would be to recall that, for numbers less than one, |x|^2 < |x|. Hence, if xdata represents a sequence of numbers uniformly distributed between -5 and 5, then the expression 5*( sign(xdata).*abs(xdata./5).^2 ) yields a similar sequence of numbers concentrated near 0. (The sign and abs functions are used to get the signs correct for negative values.)
(a) Construct 11 points uniformly distributed between [-5,5] in a vector called xdata. Use the transformation
xdata=5*( sign(xdata).*abs(xdata./5).^2 )
Look at the concentrated distribution of points using the command
plot(xdata,zeros(size(xdata)),'*')
to plot the redistributed points along the x-axis. Your points should be concentrated near the center of the interval [-5,5] but with endpoints +5 and -5. Be sure that the endpoints are correct. You do not need to send me this plot.
(b) Use this transformation and the test_poly_interpolate utility from Exercise 3 to fill in the following table for the Runge example function with ndata points concentrated near 0.
Runge function, points concentrated near 0
ndata =  5  Max Error = ________
ndata = 11  Max Error = ________
ndata = 21  Max Error = ________
You should observe that the error is rising dramatically faster than in the uniformly distributed case.

Exercise 5: Points can be concentrated near the endpoints in a similar manner, by forcing them away from zero using the |x|^(1/2) = sqrt(|x|) function.
(a) Use a transformation similar to the one in the previous exercise but with |x|^(1/2), with 11 points. Take care to get the signs of negative xdata correct.
(b) Plot your points using the following command.
plot(xdata,zeros(size(xdata)),'*')
Please send me this plot.
(c) Use this transformation and the test_poly_interpolate utility from Exercise 3 to fill in the following table for the Runge example function with ndata points concentrated near the endpoints.
Runge function, points concentrated near endpoints
ndata =  5  Max Error = ________
ndata = 11  Max Error = ________
ndata = 21  Max Error = ________
You should observe that the error is shrinking, in contrast with the uniformly distributed case.
You could ask why I chose x^2 and x^(1/2) out of the many possibilities. Basically, it was an educated guess. It turns out, though, that there is a systematic way to pick optimally-distributed (Chebyshev) points. It is also true that the Chebyshev points are closely related to trigonometric interpolation (discussed last time).
function tval=cheby_trig(xval,degree)
% tval=cheby_trig(xval,degree)

% your name and the date

if nargin==1
  degree=7;
end

to evaluate T_n(x) as defined using trigonometric functions in (3) above. If the argument degree is omitted, it defaults to 7, a fact that will be used below.
(b) Write a recursive Matlab function with the signature and first few lines

function tval=cheby_recurs(xval,degree)
% tval=cheby_recurs(xval,degree)

% your name and the date

if nargin==1
  degree=7;
end

to evaluate T_n(x) as defined recursively in (4) above. If the argument degree is omitted, it defaults to 7, a fact that will be used below.
(c) Show that cheby_trig and cheby_recurs agree for degree=4 (T_4) at the points x=[0,1,2,3,4] by computing the largest absolute value of the differences at those points. What is this value? Remark: If two polynomials of degree 4 agree at 5 points, they agree everywhere. Hence if (3) defines a polynomial, then (3) and (4) produce identical values for T_4. This is why only five test points are required.
(d) Plot values of cheby_trig and cheby_recurs on the interval [-1.1, 1.1] for degree=7 (T_7). Use at least 100 points for this plot in order to see the details. The two lines should lie on top of one another. You should also observe visually that the peaks and valleys of T_7 are of equal absolute value. Please include this plot with your summary. Suggestion: You can plot the first curve with a wider line than the second, as you did in the previous lab, to see both lines.
(e) Visually choose change-of-sign intervals around the largest and second-largest roots in your plot of T_7. Using your bisect.m file from Lab 3 or my version of bisect.m, find the largest and second-largest roots of T_7(x) = 0 on the interval [-1.1, 1.1].
The Chebyshev points are the zeros of the Chebyshev polynomials. They can be found from (3):

cos(n cos^(-1) x) = 0
n cos^(-1) x = (2k-1)pi/2
cos^(-1) x = (2k-1)pi/(2n)
x = cos( (2k-1)pi/(2n) )

For a given number n of data points, then, the Chebyshev points on the interval [a,b] can be constructed in the following way. Set

theta_k = (2k-1)pi/(2n)
x_k = 0.5( a + b + (a-b) cos(theta_k) )

for k = 1, 2, ..., n.
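As a quick numerical check of the formula above (only an illustration with explicit numbers, not the general function asked for in the next exercise), the seven Chebyshev points on [-1,1] can be computed in a few vector statements:

```matlab
a = -1; b = 1; n = 7;
k = (1:n);                          % vector, not a loop
theta = (2*k-1)*pi/(2*n);
xdata = 0.5*( a + b + (a-b)*cos(theta) )
% with a=-1, b=1 this reduces to -cos(theta), so the points
% come out in ascending order
```

The largest entry is cos(pi/14), approximately 0.9749; it should agree with the largest root of T_7 found by bisection above.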
Exercise 7:
(a) Write a Matlab function m-file called cheby_points.m with the signature:

function xdata = cheby_points ( a, b, ndata )
% xdata = cheby_points ( a, b, ndata )
% more comments

% your name and the date

that returns in the vector xdata the values of the ndata Chebyshev points in the interval [a,b]. You can do this in 3 lines using vector notation:
k = (1:ndata); %vector, NOT A LOOP
theta = (vector expression involving k)
xdata = (vector expression involving theta)
or you can use a for loop. (The vector notation is more compact and runs faster, but is a little harder to understand.)
(b) To check your work, use cheby_points to find the Chebyshev points on the interval [-1,1] for ndata=7. Do the largest and second largest roots agree with your roots of T_7 computed above?
(c) What are the five Chebyshev points for ndata=5 and [a,b]=[-5,5]?
(d) You should observe that the Chebyshev points are not uniformly distributed on the interval but they are symmetrical about its center. It is easy to see from (3) that this fact is true in general.
Exercise 8: Repeat Exercise 4 but with Chebyshev points on [-5,5]. Fill in the table. (Note the extra row!) You should see smaller errors than before, especially for larger ndata.
Runge function, Chebyshev points
ndata =  5  Max Error = ________
ndata = 11  Max Error = ________
ndata = 21  Max Error = ________
ndata = 41  Max Error = ________
In the following exercise, you will numerically examine the hypothesis that the Chebyshev points are the best possible set of interpolation points. You will do this by running many tests, each one using a randomly-generated set of interpolation points. If the theory is correct, none of these tests should result in an error smaller than the error obtained using the Chebyshev points.
Exercise 9:
(a) First, remove or comment out the plotting statements in test_poly_interpolate because we will be running it many times.
(b) The Matlab rand function generates a matrix filled with randomly-generated numbers uniformly distributed in the interval [0,1]. What does the following statement do?
xdata=[-5,sort(10*(rand(1,19)-.5)),5];
[Figure: a piecewise linear interpolant drawn through the data points (x_1,y_1), (x_2,y_2), (x_3,y_3), (x_4,y_4), plotted against x.]
In Matlab notation, this amounts to the following discussion. Denote N by ndata and the x_n by the Matlab vector xdata. We will assume that the components of xdata are strictly increasing. Now suppose that we are given a value xval and we are asked to find an integer named left_index so that the subinterval [ xdata(left_index), xdata(left_index+1) ] contains xval. The index left_index is that of the left end point of the subinterval, and the index (left_index+1) is that of its right end point. (The term index in a computer program generally means the same thing as subscript in a mathematical expression.) The following clarifications are needed. We seek an integer value left_index so that one of the following cases holds.
Case 1: If xval is less than xdata(1), then regard xval as in the first interval and left_index=1; or,
Case 2: If xval is greater than or equal to xdata(ndata), then regard xval as in the final interval and left_index=ndata-1 (read that value carefully!); or,
Case 3: If xval satisfies xdata(k)<=xval and xval<xdata(k+1), then left_index=k.
We would like to be able to write a Matlab function to perform this task even when xval is a vector of values, but it is not easy to do it efficiently. Instead, in the following exercise, you will write a function scalar_bracket to do it for the case where xval is a scalar, not a vector. In Exercise 11, you will see how it can be written for vectors.
Exercise 10: In this exercise, you will write a Matlab function m-file called scalar_bracket.m and test it thoroughly. Later, you will see how to write it for vectors.
(a) Write a function m-file called scalar_bracket.m that completes the following skeleton. The "cases" mentioned in the skeleton refer to the list of three cases above.

function left_index=scalar_bracket(xdata,xval)
% left_index=scalar_bracket(xdata,xval)
% more comments

% your name and the date

ndata = ??? "number of x data points" ???

% first check Case 1
if ??? "condition on xval" ???
  left_index = ???
  return
% then Case 2
elseif ??? "condition on xval" ???
  left_index = ???
  return
% finally Case 3
else
  for k = 1:ndata-1
    if ??? "condition on xval" ???
      left_index = ???
      return
    end
  end
  error('Scalar_bracket: this cannot happen!')
end

(b) Now, make up a set of at least five values for xdata (don't forget to make them ascending!), and a set of at least seven test values xval, and test scalar_bracket with the values you have selected, one at a time. Make the tests as comprehensive as you can. Your tests should include:
One point to the left of all xdata.
One point to the right of all xdata.
One point equal to one of the xdata points in the interior.
One point not equal to any xdata but in the interior.
Include your tests in your summary.
Note: The call to the Matlab error function may seem superfluous to you because, indeed, it can never be executed. It is a powerful debugging tool, however. If you make an error in the cases, it might turn out that the error message does get executed. In that case, you are alerted to the error immediately instead of having an incorrect value of left_index cause an obscure error hundreds of statements later. Anything that saves debugging time is worthwhile.
Exercise 11: In this exercise, you will see how the same task can be accomplished using vector statements. The code presented below is written to be as efficient as possible for the case of very large vectors xval.
(a) Copy the following code to a file named bracket.m.
function left_indices=bracket(xdata,xval)
% left_indices=bracket(xdata,xval)
% more comments
% your name and the date
ndata=numel(xdata);
left_indices=zeros(size(xval));
% Case 1
left_indices( find( xval<xdata(2) ) )=1;
% Case 2
If C is a bound for the maximum absolute value of the second derivative of the function over the interval, then the pointwise interpolation error and the L-infinity norm are bounded by C h_n^2, while the L^1 norm is bounded by (b-a) C h_n^2, and the L^2 norm is bounded by sqrt(b-a) C h_n^2. Uniform convergence is convergence in the L-infinity norm, and is a much stronger result than pointwise convergence. Thus, if the only thing you know is that your function has a bounded second derivative on [a,b], then you are guaranteed that the maximum interpolation error you make decreases to zero as you make your intervals smaller.

Stop and think: The convergence result for piecewise linear interpolation is so easy to get. Is it really better than polynomial interpolation? Why did we have such problems with polynomial interpolation before? One reason is that the error result for polynomial interpolation couldn't be turned into a convergence result. Using 10 data points, we had an error estimate in terms of the 10-th derivative. Then using 20 points, we had another error estimate in terms of the 20-th derivative. These two quantities can't be compared easily, and for nonpolynomial functions, successively higher derivatives can easily get successively and endlessly bigger in norm. The linear interpolation error bound involved a particular fixed derivative, and holding that fixed made it easy to drive the error to zero, because the other factors in the error term depended only on interval size.

To evaluate a piecewise linear interpolant function at a point xval, we need to:
1. Find the subinterval [xdata(k), xdata(k+1)] that contains xval;
2. Find the equation of the straight line through the two data points at the ends of that subinterval; and
3. Evaluate that straight line at xval.
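The quadratic convergence claimed above can be checked numerically before writing any interpolation code of your own, using Matlab's built-in piecewise linear interpolant interp1. (This is a minimal sketch; the choice of test function f(x)=sin(x), whose second derivative is bounded by C=1 on [0,pi], is illustrative.)

```matlab
f = @(x) sin(x);
xval = linspace(0,pi,1001);            % fine grid for measuring the error
errs = [];
for ndata = [5 9 17 33]                % each step halves the mesh size h
  xdata = linspace(0,pi,ndata);
  yval = interp1(xdata,f(xdata),xval); % piecewise linear interpolation
  errs(end+1) = max(abs(f(xval)-yval));
end
errs(1:end-1)./errs(2:end)             % ratios should approach 4 = 2^2
```

Halving h should cut the maximum error by a factor of about 4, consistent with the C h_n^2 bound.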
The bracket.m function from the previous exercise does the first of these tasks, but what of the second and third?
Exercise 12: This exercise involves writing a Matlab function m-file called eval_plin.m (evaluate piecewise linear interpolation) to do steps 2 and 3 above.
(a) Write down a general formula for the linear function through an arbitrary pair of points.
(b) Write a file called eval_plin.m starting with the signature

function yval = eval_plin ( xdata, ydata, xval )
% yval = eval_plin ( xdata, ydata, xval )
% more comments

% your name and the date

Add appropriate comments.
(c) Add lines that use bracket.m to find the vector of indices in xdata identifying the intervals in which each of the values in xval lies. (It is possible to do this in one line of code.)
(d) Add lines evaluating the formula you found in part (a) above on the intervals you found in part (c). (It is possible to do this in one line of code.)
(e) Test eval_plin.m on the piecewise linear function f(x) = 3|x| + 1 on the interval [-2,2] using the following sequence of commands.

xdata=linspace(-1,1,7);
ydata=3*abs(xdata)+1;   % the function y=3|x|+1
plot(xdata,ydata,'*');
hold on
xval=linspace(-2,2,4001);
plot(xval,eval_plin(xdata,ydata,xval));
hold off

(Note: this test procedure is very similar to the test_poly_interpolate function we wrote earlier.) Your plot should pass through the data points and consist of two straight lines forming a "V". Please send me a copy of your plot.
A piecewise linear function was chosen for testing in the last part of this exercise for both theoretical and practical reasons. In the first place, you are fitting a piecewise linear function to a function that is already piecewise linear, and both the original and the fitted functions have breaks at x = 0, so the two functions will agree. This makes it easy to see that the interpolation results are correct. In the second place, it is unlikely you will get it right for this piecewise linear data but have it wrong for other data.
Exercise 13: In this exercise, we will examine convergence of piecewise linear interpolants to the Runge example function we have considered before.
(a) Copy the file test_poly_interpolate.m to a file called test_plin_interpolate.m. Change the signature to be

function max_error=test_plin_interpolate(func,xdata)
% max_error=test_plin_interpolate(func,xdata)
% more comments

% your name and the date

and change from using polynomial interpolation to piecewise linear interpolation using eval_plin. Do not forget to change the comments to reflect your coding changes.
(b) Restore the plot statement, if it is not active.
(c) Consider the same Runge example function we used above. Over the interval [-5,5], use ndata= equally spaced data points, use test_plin_interpolate to plot the interpolant and evaluate the interpolation error. Please send me this plot.
(d) Repeat the evaluation for progressively finer meshes and fill in the following table. You do not need to send me the plots that are generated. Warning: The final few of these tests should take a few seconds, but if you have written code that happens to be inefficient they can take much longer. You will not be graded on the efficiency of your code.
Runge function, Piecewise linear, uniformly-spaced points
ndata =   5  Max Error(  5) = ________
ndata =  11  Max Error( 11) = ________
ndata =  21  Max Error( 21) = ________
ndata =  41  Max Error( 41) = ________
ndata =  81  Max Error( 81) = ________
ndata = 161  Max Error(161) = ________
ndata = 321  Max Error(321) = ________
ndata = 641  Max Error(641) = ________
(e) Repeat the above convergence study using Chebyshev points instead of uniformly distributed points. You should observe no particular advantage to using Chebyshev points.
Runge function, Piecewise linear, Chebyshev points
ndata =  5  Max Error(  5) = ________
ndata = 11  Max Error( 11) = ________
It may seem likely that if you have a good approximation of a differentiable function then you could differentiate it to get a good approximation of its derivative. On second thought, you have seen how the Runge example results in wiggles when it is approximated by polynomials, so polynomials probably cannot be used to approximate both a function and its derivative. Standard theorems indicate that a twice-differentiable function can be approximated to O(h^2) by piecewise linear functions (as you saw above in Exercises 13-14), and the derivative of the approximation approximates the derivative of the function to O(h). In the following exercise, you will confirm this for the case of the Runge example function.
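The O(h) claim for the derivative can be previewed in the same spirit as the earlier convergence check (a minimal sketch on the illustrative test function f(x)=sin(x); on each subinterval the interpolant's derivative is the constant slope of the chord, and since cos is monotone on [0,pi], the worst derivative error on a subinterval occurs at one of its endpoints):

```matlab
f = @(x) sin(x);  fprime = @(x) cos(x);
errs = [];
for ndata = [11 21 41 81]              % each step halves the mesh size h
  xdata = linspace(0,pi,ndata);
  s = diff(f(xdata))./diff(xdata);     % slopes = derivative of the interpolant
  % compare each constant slope with the true derivative at
  % both endpoints of its subinterval
  e = max( abs(fprime(xdata(1:end-1))-s), abs(fprime(xdata(2:end))-s) );
  errs(end+1) = max(e);
end
errs(1:end-1)./errs(2:end)             % ratios should approach 2 = 2^1
```

Halving h should cut the maximum derivative error by a factor of about 2, one order lower than the O(h^2) rate for the function values.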
Exercise 15:
(a) Modify your eval_plin.m from Exercise 12 so that it returns both yval and y1val, where y1val approximates the derivative of yval. (yval comes from a piecewise linear expression. y1val should come from the derivative of that piecewise linear expression.)
(b) Write a Matlab function test_plin1_interpolate.m, similar to test_plin_interpolate.m, that interpolates the given function (func), but plots and measures accuracy of the derivative of the function.
(c) Test test_plin1_interpolate.m on the function f(x) = 3|x| + 1 on the interval [-2,2]. (Recall this is a test function used in Exercise 12.) Send me the plot. Explain why you believe your result is correct.
(d) Using either the plot method (first two parts of Exercise 14) or the ratio method (third part of Exercise 14), estimate the rate of convergence of the derivative of the interpolant to the derivative of Runge's example function on the interval [-5,5] as the number of points becomes large. You should observe a slower rate of convergence for the derivative of Runge's example function than for Runge's example function itself (in Exercise 13). Explain your methodology.
Last change $Date: 2016/10/03 00:35:00 $