Lab 1

Summary of Lab 1

Roots, extrema, integrals and derivatives.

Linearization

We wish to linearize a differentiable function around a point a. That is, we take the tangent at a. More generally, the first two terms of the Taylor series.
With this, we get a line that approximates near a.
close all
f = @(x) 0.5*(x-2).^2-2*cos(2*x)-1.5;
Df = @(x) x-2+4*sin(2*x);
L = @(x,a) f(a)+Df(a)*(x-a); % Tangent line of curve at a.
a=4;
x = linspace(-3,7);
plot(x, f(x), 'b', x, L(x, a), 'r')
ylim([-5 10]); grid on;
title(['Linearization of f(x) around a=' num2str(a)])

Newton's Method

Newton's method follows directly from the linearization of a function: we set L(x) = 0 and solve for x, which gives the iteration x_(k+1) = x_k - f(x_k)/f'(x_k).
It is an iterative algorithm where we initially linearize about a = x0, an estimate, then use the resulting root of the tangent line as the new point to linearize about.
We stop when we have converged (to within a predefined tolerance).
There are some obvious limitations: the initial estimate must be close to the actual root or the algorithm may diverge, and stationary points (where f'(x) = 0) make the step f(x)/f'(x) blow up, so starting at or near one results in divergence.
We implement this in a function (at the bottom of this file) as follows:
function x = Newton(f, fprim, x0, tol, kmax)
% Newton's method
% f: f(x)
% fprim: f'(x)
% x0: initial value
% tol: convergence tolerance (break condition)
% kmax: maximum number of iterations
xk = x0;
for k = 1:kmax
x = xk-f(xk)/fprim(xk);
if abs(x-xk) <= tol
break
else
xk = x;
end
end
end
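The divergence near stationary points is easy to see with a bare-bones iteration. A minimal sketch (not part of the lab tasks), reusing the f from Exercise 1 below: f'(x) = 3x^2 + 4*sin(4x) vanishes at x = 0 while f(0) = -1, so the Newton step there is enormous.

```matlab
% Sketch: Newton's method misbehaving near a stationary point of f.
f  = @(x) x.^3 - cos(4*x);
Df = @(x) 3*x.^2 + 4*sin(4*x);
xk = 0.01;                   % start right next to the stationary point x = 0
for k = 1:5
    xk = xk - f(xk)/Df(xk);  % the first step throws the iterate far away
    fprintf('k = %d: xk = %g\n', k, xk)
end
```

The first step jumps from 0.01 to roughly x = 6, far from the roots near the origin; whatever the iteration does afterwards, the nearby roots are lost.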

Exercise 1 - Find roots using Newton's method

Find the roots of f(x) = x^3 - cos(4x) using Newton's method.
close all
f = @(x) x.^3 - cos(4*x); % Set up the functions
Df = @(x) 3*x.^2 + 4*sin(4*x);
x = linspace(-2, 2, 100); % Plot f(x) and estimate roots.
plot(x, f(x), 'b')
grid on, hold on
kmax = 10; % Set termination parameters for Newton's method.
tol = 0.5e-8;
x0_vec = [-1.2, -0.5, 0.5]; % Vector containing estimated roots.
roots = zeros(size(x0_vec)); % Preallocate vector for results (note: this shadows the built-in roots() function).
for k = 1:length(x0_vec)
roots(k) = Newton(f, Df, x0_vec(k), tol, kmax); % Newton() is described above, and implemented at the bottom of the file.
end
plot(roots, zeros(size(roots)), 'ko') % Plot roots.
Just for fun, here are the iterations visualized (adjust the initial estimate x0 below):
x0=-1.8;
for k = 1:kmax
plot(x0, 0, 'r*')
plot([x0 x0], [0 f(x0)], 'r--')
d=-f(x0)/Df(x0);
xold = x0;
x0=x0+d;
plot([xold x0], [f(xold) 0], 'r-')
if abs(d)<tol, break, end
end
title('Newton''s Method (with visualization)')

fzero()

fzero(fun, x0) is a built-in root-finder similar in spirit to Newton's method. x0 can be a scalar starting point, or a two-element vector containing two points between which the sign of f(x) changes.
close all
f = @(x)0.5*(x-2).^2-2*cos(2*x)-1.5;
x = linspace(-5,5,100);
plot(x, f(x))
grid on
root = fzero(f, [-1 0]); % Use an interval whose endpoints have opposite signs (i.e. f crosses zero inside it).
hold on
plot(root, 0, 'ro')
title('fzero demo')

Exercise 2 - find roots using fzero()

Find where the polar function rho(phi) = (2 + sin(3*phi))/sqrt(1 + exp(cos(phi))) intersects the unit circle (i.e. where rho(phi) = 1).
close all
rho = @(t) (2+sin(3*t))./(sqrt(1+exp(cos(t))));
f = @(t) rho(t) - 1; % Modify function to give r(phi) = 1 intersection.
phi = linspace(0, 2*pi, 1000);
polarplot(phi, rho(phi));
pax = gca; % Get Current Axis. This lets us modify graph properties.
pax.ThetaAxisUnits = 'radians'; % Radians are way nicer.
hold on
x0 = deg2rad([0 69 100 200 225 309]); % Visual inspection gives the following approximate roots.
roots = zeros(size(x0)); % Allocate vector for solved roots.
for k = 1:length(roots)
roots(k) = fzero(f, x0(k)); % Find the roots using fzero.
end
polarplot(roots, ones(size(roots)), 'ro')
polarplot(phi, ones(size(phi)), 'r--')

Extrema (Stationary Points/Maxima and Minima)

Previously, we linearized a function in order to describe the slope. To discriminate between minima and maxima, we need a higher-order approximation that also describes the curvature. This information is, again, contained in the Taylor series approximation:
f(x) ≈ f(a) + f'(a)(x - a) + (f''(a)/2)(x - a)^2,
where f''(a) is the relevant discriminant: at a stationary point, f''(a) > 0 indicates a minimum and f''(a) < 0 a maximum.
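As a minimal sketch of this discriminant in action (not part of the lab; the function here is made up for illustration), we can classify stationary points by the sign of a central-difference estimate of f'':

```matlab
% Classify stationary points of f(x) = x^3 - 3x, which has
% stationary points at x = -1 and x = 1 (since f'(x) = 3x^2 - 3).
f   = @(x) x.^3 - 3*x;
h   = 1e-4;
DDf = @(x) (f(x+h) - 2*f(x) + f(x-h))./h.^2;  % approximates f''(x)
for a = [-1 1]
    if DDf(a) > 0
        fprintf('x = %g is a local minimum\n', a);
    else
        fprintf('x = %g is a local maximum\n', a);
    end
end
```

Here f''(x) = 6x, so x = -1 is reported as a maximum and x = 1 as a minimum.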

Exercise 3 - find stationary points using fminbnd()

Now we'll disregard all that and use the built-in function fminbnd(fun, xa, xb), which finds a local minimum of fun on the interval bounded by xa and xb.
close all
f = @(x) (1 + x.^2 - 1.5.*x.^3 + 0.5.*x.^4)./(1 + x.^4);
x = linspace(-4,4);
plot(x, f(x));
grid on;
hold on;
[x, fval] = fminbnd(f, -0.1, 0.1); % Find minimum.
plot(x, fval, 'r*')
[x, fval] = fminbnd(f, 1, 3);
plot(x, fval, 'r*')
h = @(x) -f(x); % We negate f in order to use fminbnd to find maxima instead.
% We cannot modify a function handle in place; we must create a new function handle.
[x, fval] = fminbnd(h,-1.2,-0.8); % Return coordinate of extrema.
plot(x, -fval, 'm*') % We found local minima of INVERTED function, thus flip it again (-fval).
[x, fval] = fminbnd(h, 0.3, 0.4);
plot(x, -fval, 'm*')

Numerical Integration

Here we examine how integrals can be approximated numerically using Riemann sums.
All the following methods divide the problem domain [a, b] into n fixed-size intervals of width dx = (b - a)/n. The basic idea is to approximate the integral by a Riemann sum, sum_i f(x_i*)*dx, where x_i* is a sample point in the i-th interval.
We are now interested in how best to approximate the integral over each segment. The basic methods are derived geometrically: the left (L) and right (R) rectangle rules sample f at the left and right endpoints, the midpoint rule (M) samples at the interval's center, and the trapezoidal rule (T) averages the two endpoint values.
We implement these as function handles directly:
L = @(f,x,dx) dx.*sum(f(x(1:end-1)));
R = @(f,x,dx) sum(f(x(2:end)).*dx);
M = @(f,x,dx) sum(f((x(1:end-1) + x(2:end))./2).*dx);
T = @(f,x,dx) sum(((f(x(1:end-1)) + f(x(2:end)))./2).*dx);

Exercise 4 - approximate integral using Riemann sums

Approximate the integral of f(x) = x*sin(x) over [0, 1] using the above Riemann sums.
f = @(x) x.*sin(x);
a=0;
b=1;
n=10;
dx = (b-a)/n;
x = linspace(a, b, n+1);
format long
["L", L(f,x,dx); "R",R(f,x,dx); "M",M(f,x,dx); "T",T(f,x,dx); "Control", integral(f,a,b) ]
ans = 5×2 string
"L" "0.26025"
"R" "0.34439"
"M" "0.30059"
"T" "0.30232"
"Control" "0.30117"

Illustration of different Riemann Sums

a=0;
b=10;
n=10;
x = linspace(a,b,n+1);
xfine = linspace(a,b,1000);
subplot(4,2,1:2)
plot(xfine, f(xfine), 'r-'), hold on, grid on
for i=1:n
plot(x(i), f(x(i)), 'g*')
plot([x(i) x(i+1)], [f(x(i)) f(x(i))], 'g-')
plot([x(i+1) x(i+1)], [f(x(i)) f(x(i+1))], 'g-')
end
title('Left rectangle rule');
subplot(4,2,3:4)
plot(xfine, f(xfine), 'r-'), hold on, grid on
for i=1:n
plot(x(i+1), f(x(i+1)), 'g*')
plot([x(i) x(i+1)], [f(x(i+1)) f(x(i+1))], 'g-')
plot([x(i) x(i)], [f(x(i)) f(x(i+1))], 'g-')
end
title('Right rectangle rule');
subplot(4,2,5:6)
hold on
for i=1:n
plot((x(i)+x(i+1))/2, f((x(i)+x(i+1))/2), 'g*')
plot([x(i) x(i)], [f(x(i)), f((x(i)+x(i+1))/2)], 'g-')
plot([x(i) x(i+1)], [f((x(i)+x(i+1))/2), f((x(i)+x(i+1))/2)], 'g-')
plot([x(i+1) x(i+1)], [f((x(i)+x(i+1))/2), f(x(i+1))], 'g-')
end
plot(xfine, f(xfine), 'r-'), grid on
title('Midpoint rule');
subplot(4,2,7:8)
plot(xfine, f(xfine), 'r-'), hold on, grid on
for i=1:n
plot([x(i) x(i+1)], [f(x(i)) f(x(i+1))], 'g-')
end
title('Trapezoidal rule');

Convergence of Riemann sums

close all
f = @(x) x.*sin(x);
n = 10:10:100;
qv = zeros(size(n));
qh = zeros(size(n));
qm = zeros(size(n));
qt = zeros(size(n));
for k = 1:length(n)
x = linspace(a,b,n(k)+1);
dx = (b-a)/n(k);
qv(k) = L(f,x,dx);
qh(k) = R(f,x,dx);
qm(k) = M(f,x,dx);
qt(k) = T(f,x,dx);
end
kontroll = integral(f,a,b);
plot(n, qv-kontroll, n, qh-kontroll, n, qm-kontroll, n, qt-kontroll)
legend('L(n)', 'R(n)', 'M(n)', 'T(n)')
title('Error vs number of intervals')
xlabel('n'); ylabel('Err(n)')
grid on
Conclusion: L and R converge at the same rate, O(1/n), though with opposite-signed errors: where one overestimates, the other underestimates.
Likewise, M and T converge at O(1/n^2), again with opposite-signed errors. M is roughly twice as accurate as T (see Stewart p. 509: the rectangle formed by the Midpoint rule has the same area as a tangent trapezoid that, ironically, hugs the curve more closely than the Trapezoidal rule's trapezoid does).

Error Bounds

L and R use constant-height rectangles to approximate f, so their error is proportional to the slope f'(x).
M and T can both be interpreted as trapezoidal, and thus compensate for the slope f'(x), but not for the curvature f''(x). Their error is thus proportional to f''(x).
Maximum error bounds can be given in terms of the maximum slope and maximum curvature: with |f'(x)| <= K1 and |f''(x)| <= K2 on [a, b] (cf. Stewart),
|E_L|, |E_R| <= K1*(b - a)^2/(2n),  |E_T| <= K2*(b - a)^3/(12n^2),  |E_M| <= K2*(b - a)^3/(24n^2).
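As a quick sanity check (a sketch, not part of the lab), we can compare T's actual error with the standard bound |E_T| <= K2*(b - a)^3/(12*n^2) for the integrand from Exercise 4; for f(x) = x*sin(x) on [0, 1], |f''(x)| = |2*cos(x) - x*sin(x)| <= 2, so K2 = 2 works.

```matlab
% Compare the trapezoidal rule's actual error with its error bound.
f  = @(x) x.*sin(x);
a = 0; b = 1; n = 10;
x  = linspace(a, b, n+1);
dx = (b - a)/n;
T  = sum((f(x(1:end-1)) + f(x(2:end)))./2).*dx;   % trapezoidal sum
err   = abs(T - integral(f, a, b));                % actual error
bound = 2*(b - a)^3/(12*n^2);                      % K2 = 2 on [0, 1]
fprintf('actual error: %.2e, bound: %.2e\n', err, bound)
```

The actual error should come out comfortably below the bound, as the bound is a worst case over the interval.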

Exercise 5 - Undamped Pendulum (integrate a parameterized function)

Undamped pendulum. Calculate the period T resulting from different initial angles phi0.
Here, we are asked to calculate a function of the form:
T(phi0) = 4*sqrt(L/g) * integral from 0 to pi/2 of dphi / sqrt(1 - sin(phi0/2)^2 * sin(phi)^2)
h(t, x) is a parameterized function, i.e. it has a controllable parameter x that is distinct from the variable of integration.
In Matlab, integral() expects a function with only one argument: the variable of integration. Thus, we must create a new (anonymous) function that restricts it to a single argument (that is, we assign fixed values to the other parameters).
h = @(t,x) ...
integral(@(t) h(t,0), c, d)
The above will now integrate over h(t), with x=0.
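As a concrete mini-example (the integrand here is made up for illustration, not part of the exercise), fixing the parameter x = 2 in h(t, x) = exp(-x*t) and integrating over t in [0, 1]:

```matlab
% Fix the parameter x = 2 via an anonymous wrapper, then integrate over t.
% Exact value for comparison: (1 - exp(-2))/2.
h = @(t, x) exp(-x.*t);
I = integral(@(t) h(t, 2), 0, 1);  % the wrapper leaves only t free
fprintf('I = %.6f (exact: %.6f)\n', I, (1 - exp(-2))/2)
```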
So, in our example:
close all
h = @(phi, phi0) 1./sqrt(1 - sin(phi0/2).^2.*sin(phi).^2); % The integrand with two parameters, the former of which is the variable of integration.
L = 0.1;
g = 9.80665;
c = 0;
d = pi/2;
phi0 = 10:1:170;
phi0 = deg2rad(phi0);
T = zeros(size(phi0));
for i = 1:length(phi0)
T(i) = 4*sqrt(L/g).*integral(@(phi) h(phi, phi0(i)), c, d); % The anonymous wrapper fixes phi0(i), leaving phi as the variable of integration.
end
plot(rad2deg(phi0), T)
xlabel('\phi_0');ylabel('T(\phi_0)');
grid on

Numerical Derivation

We compute the derivatives numerically using difference quotients. Remember, the definition of the derivative is:
u'(x) = lim_(h -> 0) (u(x + h) - u(x))/h
Numerically, we are forced to choose a finite value for h. Many different difference quotients, with differing convergence rates (as h becomes smaller), can be derived directly from the Taylor series.
Recap: the Taylor series is the expansion of a function into an infinite polynomial around a specific point a: a weighted sum of the function's derivatives evaluated at that single point, which reproduces the function itself around that point. The Taylor series of u around the point a is:
u(x) = sum_(n >= 0) u^(n)(a)/n! * (x - a)^n
The most trivial are the forward and backward difference quotients:
(1) D+u(x) = (u(x + h) - u(x))/h
(2) D-u(x) = (u(x) - u(x - h))/h
The Taylor series of u(x + h) at the point x is
u(x + h) = u(x) + h*u'(x) + (h^2/2)*u''(x) + ...
We substitute this result into (1) to obtain:
D+u(x) = u'(x) + (h/2)*u''(x) + ... = u'(x) + O(h)
The same result is obtained for (2).
Here we approximate the derivative of u(x) = exp(x) at x = 0.4, for different step-sizes h.
In the final plot we see the effect that rounding error has on small h.
close all
u = @(x) exp(x);
Du = @(x) exp(x);
DDu = @(x) exp(x);
h = logspace(-15,-1,400);
x=0.4;
subplot(1,4,1:2)
loglog(h, abs((u(x+h)-u(x))./h - Du(x)), 'r'), hold on
loglog(h,abs((u(x+h)-u(x-h))./(2*h) - Du(x)),'g')
loglog(h,abs((-u(x+2*h)+8*u(x+h)-8*u(x-h)+u(x-2*h))./(12*h) - Du(x)),'b')
DD = @(x, h) (u(x+h)-2.*u(x)+u(x-h))./(h.^2);
loglog(h, abs(DD(x,h)-DDu(x)))
legend('Du', 'D_0', 'D_*', 'DD')
xlabel('h'); ylabel('Err(h)')
title('Error vs h')
ylim([10^-20, 10^-1])
digits(32)
h = vpa(logspace(-15,-1,400)); % vpa() is Variable-Precision Arithmetic, which lets us specify how many digits of precision we want (the default is 16).
subplot(1,4,3:4)
loglog(h, abs((u(x+h)-u(x))./h - Du(x)), 'r'), hold on
loglog(h,abs((u(x+h)-u(x-h))./(2*h) - Du(x)),'g')
loglog(h,abs((-u(x+2*h)+8*u(x+h)-8*u(x-h)+u(x-2*h))./(12*h) - Du(x)),'b')
loglog(h, abs(DD(x,h)-DDu(x)))
legend('Du', 'D_0', 'D_*', 'DD')
xlabel('h'); ylabel('Err(h)')
title('Error vs h (32-digit precision)')
ylim([10^-20, 10^-1])

Exercise 6

Load the table of (x,y) values, find the first and second derivatives, and plot the curvature:
Here we approximate the second derivative with the central difference quotient
DDu(x) = (u(x + h) - 2*u(x) + u(x - h))/h^2
This works since, if we substitute the Taylor expansions of u(x + h) and u(x - h), the odd-order terms cancel and we are left with u''(x) + O(h^2).
close all
tab = load('labtabell.mat');
x = tab.x;
u = tab.y;
h = x(2) - x(1);
% Forwards difference
Dforward = (u(2:end) - u(1:end-1))./h;
% Backward difference quotient is no different in this situation.
% Central difference quotient
D0 = (u(3:end) - u(1:end-2))./(2*h);
% The derivative is essentially evaluated at the midpoint of each pair of
% points. Thus we create a new x-axis with only the midpoints (and one less
% point overall).
xm = (x(1:end-1) + x(2:end))./2;
plot(x, u), hold on, grid on
%plot(xm, Dforward, 'b--')
plot(x(2:end-1), D0, '--')
DD = (u(3:end) - 2.*u(2:end-1) + u(1:end-2))./h.^2;
plot(x(2:end-1), DD, '-.')
k = DD./( (1+D0.^2).^(3/2));
plot(x(2:end-1), k)
function x = Newton(f, fprim, x0, tol, kmax)
% Newton's method
% f: f(x)
% fprim: f'(x)
% x0: initial value
% tol: convergence tolerance (break condition)
% kmax: maximum number of iterations
xk = x0;
for k = 1:kmax
x = xk-f(xk)/fprim(xk);
if abs(x-xk) <= tol
break
else
xk = x;
end
end
end