Lab 1
Summary of Lab 1
Roots, extrema, integrals and derivatives.
Linearization
We wish to linearize a differentiable function f around a point a. That is, we take the tangent at a; equivalently, the first two terms of the Taylor series:
L(x) = f(a) + f'(a)(x - a).
With this, we get a line that approximates f(x) near a.
f = @(x) 0.5*(x-2).^2-2*cos(2*x)-1.5;
Df = @(x) x-2+4*sin(2*x);
L = @(x,a) f(a)+Df(a)*(x-a); % Tangent line of curve at a.
plot(x, f(x), 'b', x, L(x, a), 'r')
title(['Linearization of f(x) around a=' num2str(a)])
Newton's Method
Newton's method follows directly from the linearization of a function, where we set the tangent line to zero and solve for x:
x_{k+1} = x_k - f(x_k) / f'(x_k).
It is an iterative algorithm where we initially linearize about a = x0, an estimate, then use the result as the new point to linearize about.
We stop when we have converged (to within a predefined tolerance).
There are some obvious limitations: the initial estimate must be close to the actual root or the algorithm may diverge, and stationary points (where f'(x) = 0) always cause the method to fail, since the tangent line never crosses the x-axis.
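The same iteration can be sketched in a few lines of Python (an illustration only, not the lab's MATLAB implementation at the bottom of this file; the root of x^2 - 2 is a hypothetical test case):

```python
import math

def newton(f, df, x0, tol=1e-12, kmax=50):
    """Newton's method: linearize at the current estimate, solve for zero, repeat."""
    x = x0
    for _ in range(kmax):
        step = f(x) / df(x)  # fails if df(x) ~ 0, i.e. near a stationary point
        x = x - step
        if abs(step) < tol:  # converged to within tolerance
            break
    return x

# Hypothetical test: the positive root of x^2 - 2, starting from x0 = 1.5
root = newton(lambda x: x*x - 2, lambda x: 2*x, 1.5)
```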
We implement this in a function (at the bottom of this file) as follows:
Uppgift 1 - Find roots using Newton's method
Find the roots of f(x) = x^3 - cos(4x) using Newton's method.
f = @(x) x.^3 - cos(4*x); % Set up the function and its derivative
Df = @(x) 3*x.^2 + 4*sin(4*x);
x = linspace(-2, 2, 100); % Plot f(x) and estimate roots.
kmax = 10; % Set termination parameters for Newton's method.
x0_vec = [-1.2, -0.5, 0.5]; % Vector containing estimated roots.
roots = zeros(size(x0_vec)); % Preallocate vector for results.
roots(k) = Newton(f, Df, x0_vec(k), tol, kmax); % Newton() is described above, and implemented at the bottom of the file.
plot(roots, zeros(size(roots)), 'ko') % Plot roots.
Just for fun, here are the iterations visualized (use the slider to adjust the initial estimate x0):
plot([x0 x0], [0 f(x0)], 'r--')
plot([xold x0], [f(xold) 0], 'r-')
if abs(d)<tol, break, end
title('Newton''s Method (with visualization)')
fzero()
fzero(fun, x0) is a built-in function that is similar to Newton's method. x0 can be a scalar, or a two-element vector spanning a point where the sign of f(x) changes.
f = @(x)0.5*(x-2).^2-2*cos(2*x)-1.5;
root = fzero(f, [-1 0]); % Use an interval where f changes sign at the endpoints (i.e. the interval brackets a zero).
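The bracketing mode can be mimicked with plain bisection: keep halving an interval over which f changes sign (a Python sketch of the idea only; MATLAB's fzero actually uses a faster hybrid method):

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Shrink [a, b] while keeping the sign change inside it."""
    fa = f(a)
    assert fa * f(b) < 0, "endpoints must bracket a root"
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# The same f as in the example above, on the sign-changing interval [-1, 0]
f = lambda x: 0.5*(x - 2)**2 - 2*math.cos(2*x) - 1.5
root = bisect(f, -1.0, 0.0)
```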
Uppgift 2 - Find roots using fzero()
Find where the polar function rho(phi) = (2 + sin(3*phi)) / sqrt(1 + e^cos(phi)) intersects the unit circle (i.e. where rho(phi) = 1).
rho = @(t) (2+sin(3*t))./(sqrt(1+exp(cos(t))));
f = @(t) rho(t) - 1; % Shift the function so that the roots of f are the rho(phi) = 1 intersections.
phi = linspace(0, 2*pi, 1000);
polarplot(phi, rho(phi));
pax = gca; % Get Current Axis. This lets us modify graph properties.
pax.ThetaAxisUnits = 'radians'; % Radians are way nicer.
x0 = deg2rad([0 69 100 200 225 309]); % Visual inspection gives the following approximate roots.
roots = zeros(size(x0)); % Allocate vector for solved roots.
roots(k) = fzero(f, x0(k)); % Find the roots using fzero.
polarplot(roots, ones(size(roots)), 'ro')
polarplot(phi, ones(size(phi)), 'r--')
Extrema (Stationary Points/Maxima and Minima)
Previously, we linearized a function in order to describe its slope. To discriminate between minima and maxima, we need a higher-order approximation that also describes the curvature. This information is, again, contained in the Taylor series approximation
f(x) ≈ f(a) + f'(a)(x - a) + (f''(a)/2)(x - a)^2,
where f''(a) is the relevant discriminant at a stationary point (f'(a) = 0):
f''(a) > 0: minimum
f''(a) < 0: maximum
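Numerically, the discriminant test amounts to checking the sign of a second-difference approximation of f'' at the stationary point (a Python sketch with hypothetical example functions):

```python
def classify(f, x, h=1e-4):
    """Second-derivative test using a central second difference at x."""
    d2 = (f(x + h) - 2*f(x) + f(x - h)) / h**2
    if d2 > 0:
        return "minimum"
    if d2 < 0:
        return "maximum"
    return "inconclusive"

kind_min = classify(lambda t: t**2, 0.0)    # f''(0) = 2 > 0
kind_max = classify(lambda t: -t**2, 0.0)   # f''(0) = -2 < 0
```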
Uppgift 3 - find stationary points using fminbnd()
Now we'll disregard all that and use the built-in function fminbnd(fun, xa, xb), where xa and xb are the interval bounds.
f = @(x) (1 + x.^2 - 1.5.*x.^3 + 0.5.*x.^4)./(1 + x.^4);
[x, fval] = fminbnd(f, -0.1, 0.1); % Find minimum.
[x, fval] = fminbnd(f, 1, 3);
h = @(x) -f(x); % We negate function f in order to use fminbnd to find maxima instead.
% We cannot modify a function handle in place; we must create a new function handle.
[x, fval] = fminbnd(h,-1.2,-0.8); % Return coordinate of extrema.
plot(x, -fval, 'm*') % We found a local minimum of the NEGATED function, so flip the value back (-fval).
[x, fval] = fminbnd(h, 0.3, 0.4);
Numerical Integration
Here we examine how integrals can be approximated numerically using Riemann sums.
All the following methods divide the domain into n fixed-size subintervals [x_i, x_{i+1}] of width Δx. We are then interested in how best to approximate the integral over each subinterval. The basic methods are derived geometrically:
- Left rectangle: f(x_i)·Δx
- Right rectangle: f(x_{i+1})·Δx
- Midpoint: f((x_i + x_{i+1})/2)·Δx
- Trapezoid: ((f(x_i) + f(x_{i+1}))/2)·Δx
We implement these as functions directly:
L = @(f,x,dx) dx.*sum(f(x(1:end-1)));
R = @(f,x,dx) sum(f(x(2:end)).*dx);
M = @(f,x,dx) sum(f((x(1:end-1) + x(2:end))./2).*dx);
T = @(f,x,dx) sum(((f(x(1:end-1)) + f(x(2:end)))./2).*dx);
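For comparison, the same four rules in Python (a sketch mirroring the one-liners above, assuming a uniform grid; x^2 on [0, 1] is a hypothetical test integrand):

```python
def riemann(f, a, b, n):
    """Left, right, midpoint and trapezoid sums on n equal subintervals."""
    dx = (b - a) / n
    xs = [a + i*dx for i in range(n + 1)]
    left  = sum(f(x) for x in xs[:-1]) * dx
    right = sum(f(x) for x in xs[1:]) * dx
    mid   = sum(f((xs[i] + xs[i+1]) / 2) for i in range(n)) * dx
    trap  = sum((f(xs[i]) + f(xs[i+1])) / 2 for i in range(n)) * dx
    return left, right, mid, trap

# Hypothetical test integrand: x^2 on [0, 1], exact value 1/3
L_, R_, M_, T_ = riemann(lambda x: x*x, 0.0, 1.0, 1000)
```

For this increasing, convex integrand the left sum undershoots and the right sum overshoots, while midpoint and trapezoid straddle the exact value much more tightly.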
Uppgift 4 - approximate integral using Riemann sums
Approximate the integral of f over [a, b] using the above Riemann sums.
["L", L(f,x,dx); "R", R(f,x,dx); "M", M(f,x,dx); "T", T(f,x,dx); "Control", integral(f,a,b)]
"L" "0.26025"
"R" "0.34439"
"M" "0.30059"
"T" "0.30232"
"Control" "0.30117"
Illustration of different Riemann Sums
xfine = linspace(a,b,1000);
plot(xfine, f(xfine), 'r-'), hold on, grid on
plot(x(i), f(x(i)), 'g*')
plot([x(i) x(i+1)], [f(x(i)) f(x(i))], 'g-')
plot([x(i+1) x(i+1)], [f(x(i)) f(x(i+1))], 'g-')
title('Left rectangle rule');
plot(xfine, f(xfine), 'r-'), hold on, grid on
plot(x(i+1), f(x(i+1)), 'g*')
plot([x(i) x(i+1)], [f(x(i+1)) f(x(i+1))], 'g-')
plot([x(i) x(i)], [f(x(i)) f(x(i+1))], 'g-')
title('Right rectangle rule');
plot((x(i)+x(i+1))/2, f((x(i)+x(i+1))/2), 'g*')
plot([x(i) x(i)], [f(x(i)), f((x(i)+x(i+1))/2)], 'g-')
plot([x(i) x(i+1)], [f((x(i)+x(i+1))/2), f((x(i)+x(i+1))/2)], 'g-')
plot([x(i+1) x(i+1)], [f((x(i)+x(i+1))/2), f(x(i+1))], 'g-')
plot(xfine, f(xfine), 'r-'), grid on
title('Midpoint rule');
plot(xfine, f(xfine), 'r-'), hold on, grid on
plot([x(i) x(i+1)], [f(x(i)) f(x(i+1))], 'g-')
Convergence of Riemann sums
x = linspace(a,b,n(k)+1);
kontroll = integral(f,a,b);
plot(n, qh-kontroll, n,qv-kontroll, n,qm-kontroll, n,qt-kontroll)
legend('L(n)', 'R(n)', 'M(n)', 'T(n)')
title('Error vs number of subintervals')
xlabel('n'); ylabel('Err(n)')
Conclusion: L and R converge at the same rate, O(1/n); for an increasing f, R overestimates and L underestimates.
Likewise, M and T converge at the faster rate O(1/n^2), with errors of opposite sign (see Stewart p. 509: the rectangle formed by the Midpoint rule has the same area as a tangent-line trapezoid that, ironically, is more accurate than the Trapezoidal rule's trapezoid; about twice as accurate).
Error Bounds
L and R use constant-height rectangles to approximate f, so their error is proportional to the slope f'. M and T can both be interpreted as trapezoidal, and thus compensate for the slope f', but not for the curvature f''. Their error is thus proportional to f''. Maximum error bounds can therefore be given in terms of the maximum slope and maximum curvature over the interval.
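These rates are easy to confirm empirically: doubling n should roughly halve the left-rectangle error but quarter the midpoint error. A Python check (the test integral of sin over [0, 1] is an assumed example, and the sums are restated rather than taken from the lab code):

```python
import math

def sums(f, a, b, n):
    """Left-rectangle and midpoint Riemann sums on n equal subintervals."""
    dx = (b - a) / n
    xs = [a + i*dx for i in range(n + 1)]
    left = sum(f(x) for x in xs[:-1]) * dx
    mid  = sum(f((xs[i] + xs[i+1]) / 2) for i in range(n)) * dx
    return left, mid

exact = 1 - math.cos(1)  # integral of sin(x) over [0, 1]
eL1, eM1 = (abs(s - exact) for s in sums(math.sin, 0, 1, 100))
eL2, eM2 = (abs(s - exact) for s in sums(math.sin, 0, 1, 200))
ratio_left, ratio_mid = eL1 / eL2, eM1 / eM2  # expect roughly 2 and 4
```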
Uppgift 5 - Undamped Pendulum (solve parameterized function)
Undamped pendulum: calculate the period resulting from different initial angles phi0. Here, we are asked to calculate a function of the form
T(phi0) = 4*sqrt(L/g) * integral from 0 to pi/2 of 1/sqrt(1 - sin^2(phi0/2)*sin^2(phi)) dphi.
h(t, x) is a parameterized function, i.e. it has a controllable parameter x that is distinct from the variable of integration t.
In Matlab, integral() expects a function with only one parameter: the variable of integration. Thus, we must create a new (anonymous) function that restricts it to only a single parameter (that is, we assign fixed values to the other parameters).
h = @(t,x) ...
integral(@(t) h(t,0), c, d)
The above will now integrate over h(t), with x=0.
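The same restriction trick in Python, where a closure or functools.partial plays the role of the anonymous handle (h here and the midpoint quadrature are hypothetical stand-ins for the lab's h and for integral()):

```python
from functools import partial

def midpoint_integral(g, c, d, n=10000):
    """Crude stand-in for MATLAB's integral(): midpoint rule on [c, d]."""
    dx = (d - c) / n
    return sum(g(c + (i + 0.5) * dx) for i in range(n)) * dx

h = lambda t, x: t + x        # hypothetical two-parameter integrand
g = partial(h, x=0.0)         # restrict to one variable: g(t) == h(t, 0)
val = midpoint_integral(g, 0.0, 1.0)  # integral of t over [0, 1] is 1/2
```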
So, in our example:
h = @(phi, phi0) 1./sqrt(1 - sin(phi0/2).^2.*sin(phi).^2); % The integrand with two parameters, the former of which is the variable of integration.
T(i) = 4*sqrt(L/g).*integral(@(phi) h(phi, phi0(i)), c, d); % phi is symbolic, whereas phi0 is a fixed constant that we have defined. We must tell integral() which variable to integrate over. In other words, we create a new anonymous function that restricts it to only one parameter, and give the other parameters fixed values.
xlabel('\phi_0');ylabel('T(\phi_0)');
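A quick sanity check of the formula: for phi0 = 0 the integrand is identically 1, so T(0) = 4*sqrt(L/g)*(pi/2) = 2*pi*sqrt(L/g), the familiar small-angle period. A Python sketch (L = g = 1 and the midpoint quadrature are assumptions made for the test):

```python
import math

def period(phi0, L=1.0, g=9.82, n=10000):
    """T(phi0) = 4*sqrt(L/g) * integral of the pendulum integrand over [0, pi/2],
    approximated with a midpoint sum."""
    k2 = math.sin(phi0 / 2)**2
    h = lambda phi: 1.0 / math.sqrt(1 - k2 * math.sin(phi)**2)
    dx = (math.pi / 2) / n
    integral = sum(h((i + 0.5) * dx) for i in range(n)) * dx
    return 4 * math.sqrt(L / g) * integral

T0 = period(0.0, L=1.0, g=1.0)  # small-angle limit: expect about 2*pi
```

The period grows with amplitude, so period(1.0) should exceed T0.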
Numerical Differentiation
We compute derivatives numerically using difference quotients. Remember, the definition of the derivative is
f'(x) = lim_{h -> 0} (f(x+h) - f(x)) / h.
We are forced to choose a finite value for h. Many different difference quotients, with differing convergence rates as h becomes smaller, can be derived directly from the Taylor series.
Recap: the Taylor series is the expansion of a function f into an infinite polynomial around a specific point a. It is a weighted sum of the function's value and derivatives at that single point, which reproduces the function itself near the point:
f(x) = sum_{n=0}^inf f^(n)(a)/n! * (x - a)^n.
The most trivial difference quotients are the forward and backward ones:
D+f(x) = (f(x+h) - f(x)) / h    (1)
D-f(x) = (f(x) - f(x-h)) / h    (2)
The Taylor series of f(x+h) around the point x is f(x+h) = f(x) + h*f'(x) + (h^2/2)*f''(x) + ... We substitute this into (1) to obtain
D+f(x) = f'(x) + (h/2)*f''(x) + ... = f'(x) + O(h).
The same result is obtained for (2).
Here we approximate the derivative Du(x) at a fixed point x, for different step sizes h. In the final plot we see the effect that rounding error has for small h.
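The trade-off is easy to reproduce: the truncation error of the forward quotient shrinks with h, but once h approaches machine epsilon, cancellation in u(x+h) - u(x) makes rounding error dominate (Python sketch; u = sin and x = 1 are assumed test values):

```python
import math

u, Du, x = math.sin, math.cos, 1.0

def fwd_err(h):
    """Absolute error of the forward difference quotient at x."""
    return abs((u(x + h) - u(x)) / h - Du(x))

err_moderate = fwd_err(1e-8)   # near the sweet spot for forward differences
err_tiny     = fwd_err(1e-15)  # h close to machine epsilon: rounding dominates
```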
h = logspace(-15,-1,400);
loglog(h, abs((u(x+h)-u(x))./h - Du(x)), 'r'), hold on
loglog(h,abs((u(x+h)-u(x-h))./(2*h) - Du(x)),'g')
loglog(h,abs((-u(x+2*h)+8*u(x+h)-8*u(x-h)+u(x-2*h))./(12*h) - Du(x)),'b')
DD = @(x, h) (u(x+h)-2.*u(x)+u(x-h))./(h.^2);
loglog(h, abs(DD(x,h)-DDu(x)))
legend('Du', 'D_0', 'D_*', 'DD')
xlabel('h'); ylabel('Err(h)')
h = vpa(logspace(-15,-1,400)); % vpa() performs variable-precision arithmetic, letting us specify how many significant digits to use (the default is 32, versus roughly 16 for double precision).
loglog(h, abs((u(x+h)-u(x))./h - Du(x)), 'r'), hold on
loglog(h,abs((u(x+h)-u(x-h))./(2*h) - Du(x)),'g')
loglog(h,abs((-u(x+2*h)+8*u(x+h)-8*u(x-h)+u(x-2*h))./(12*h) - Du(x)),'b')
loglog(h, abs(DD(x,h)-DDu(x)))
legend('Du', 'D_0', 'D_*', 'DD')
xlabel('h'); ylabel('Err(h)')
title('Error vs h (32-digit precision)')
Uppgift 6
Load the table of (x,y) values, find the first and second derivatives, and plot the curvature:
Here we approximate the second derivative with the central second difference quotient
DDu(x) = (u(x+h) - 2u(x) + u(x-h)) / h^2.
This works since, if we substitute the Taylor expansions of u(x+h) and u(x-h), the odd-order terms cancel and we are left with u''(x) + O(h^2).
tab = load('labtabell.mat');
Dforward = (u(2:end) - u(1:end-1))./h;
% Backward difference quotient is no different in this situation.
% Central difference quotient
D0 = (u(3:end) - u(1:end-2))./(2*h);
% The derivative is essentially evaluated at the midpoint of each pair of
% points. Thus we create a new x-axis with only the midpoints (one fewer
% point than x).
xm = (x(1:end-1) + x(2:end))./2;
plot(x, u), hold on, grid on
%plot(xm, Dforward, 'b--')
plot(x(2:end-1), D0, '--')
DD = (u(3:end) - 2.*u(2:end-1) + u(1:end-2))./h.^2;
plot(x(2:end-1), DD, '-.')
k = DD./( (1+D0.^2).^(3/2));
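The same pipeline can be sketched in Python on synthetic data: central first and second difference quotients, then k = DD / (1 + D0^2)^(3/2). For the upper half of a circle of radius R the signed curvature should come out near -1/R (the circle samples are a hypothetical stand-in for labtabell.mat):

```python
import math

def curvature(x, u):
    """Signed curvature from tabulated (x, u) with uniform spacing."""
    h = x[1] - x[0]
    D0 = [(u[i+1] - u[i-1]) / (2*h) for i in range(1, len(u) - 1)]
    DD = [(u[i+1] - 2*u[i] + u[i-1]) / h**2 for i in range(1, len(u) - 1)]
    return [dd / (1 + d0*d0)**1.5 for d0, dd in zip(D0, DD)]

# Upper half of a circle of radius R: signed curvature is -1/R everywhere
R = 2.0
xs = [i * 0.01 - 0.5 for i in range(101)]   # uniform grid on [-0.5, 0.5]
us = [math.sqrt(R*R - xx*xx) for xx in xs]
k = curvature(xs, us)
```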
function x = Newton(f, fprim, x0, tol, kmax)
% tol: convergence tolerance (break condition)
% kmax: maximum number of iterations
x = x0;
for k = 1:kmax
    d = f(x)/fprim(x); x = x - d;   % Newton step
    if abs(d) < tol, break, end     % converged
end
end