We already have a solver that we can use to experiment with. Let's run it for a range of different time steps.
```python
import numpy as np

def solver(I, a, T, dt, theta):
    """Solve u'=-a*u, u(0)=I, for t in (0, T] with steps of dt."""
    Nt = int(T/dt)               # no of time intervals
    T = Nt*dt                    # adjust T to fit time step dt
    u = np.zeros(Nt+1)           # array of u[n] values
    t = np.linspace(0, T, Nt+1)  # time mesh
    u[0] = I                     # assign initial condition
    # u[n] = A*u[n-1] with amplification factor A
    u[1:] = (1 - (1-theta)*a*dt)/(1 + theta*dt*a)
    u[:] = np.cumprod(u)         # u[n] = I*A**n
    return u, t
```
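A minimal sketch of such an experiment (the parameter values \(I=1\), \(a=2\), \(T=8\) are assumptions, chosen to match the convergence test at the end of this section; matplotlib is assumed for plotting):

```python
import matplotlib.pyplot as plt

I, a, T = 1, 2, 8
for theta, name in [(0, 'Forward Euler'), (1, 'Backward Euler'),
                    (0.5, 'Crank-Nicolson')]:
    plt.figure()
    for dt in 1.25, 0.75, 0.5, 0.1:
        u, t = solver(I, a, T, dt, theta)
        plt.plot(t, u, label='dt=%g' % dt)
    t_e = np.linspace(0, T, 1001)
    plt.plot(t_e, I*np.exp(-a*t_e), 'k--', label='exact')
    plt.title(name); plt.xlabel('t'); plt.ylabel('u')
    plt.legend()
plt.show()
```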
The characteristics of the displayed curves can be summarized as follows:
The Backward Euler scheme always gives a monotone solution, lying above the exact solution.
The Crank-Nicolson scheme gives the most accurate results, but for \(\Delta t=1.25\) the solution oscillates.
The Forward Euler scheme gives a growing, oscillating solution for \(\Delta t=1.25\); a decaying, oscillating solution for \(\Delta t=0.75\); a strange solution \(u^n=0\) for \(n\geq 1\) when \(\Delta t=0.5\); and a solution seemingly as accurate as the one by the Backward Euler scheme for \(\Delta t = 0.1\), but the curve lies below the exact solution.
Small enough \(\Delta t\) gives a stable and accurate solution for all methods!
Problem setting
We ask the question
Under what circumstances, i.e., values of the input data \(I\), \(a\), and \(\Delta t\) will the Forward Euler and Crank-Nicolson schemes result in undesired oscillatory solutions?
Techniques of investigation:
Numerical experiments
Mathematical analysis
Another question to be raised is
How does \(\Delta t\) impact the error in the numerical solution?
Exact numerical solution
For the simple exponential decay problem we are lucky enough to have an exact numerical solution,
\[
u^n = I A^n,\quad A = \frac{1 - (1-\theta)a\Delta t}{1 + \theta a\Delta t},
\]
where \(A\) is the amplification factor of the \(\theta\)-scheme (cf. the expression used in `solver` above).
Such a formula for the exact discrete solution is unusual to obtain in practice, but very handy for our analysis here.
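As a quick sanity check, the formula can be compared with the output of `solver` (the parameter values below are illustrative):

```python
I, a, T, dt, theta = 1., 2., 8., 0.1, 0.5
u, t = solver(I, a, T, dt, theta)
A = (1 - (1-theta)*a*dt)/(1 + theta*a*dt)
n = np.arange(len(t))
# The difference should be at the level of round-off errors
print(np.abs(u - I*A**n).max())
```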
Note
An exact discrete solution fulfills a discrete equation (without round-off errors), whereas an exact solution fulfills the original mathematical equation.
Stability
Since \(u^n=I A^n\),
\(A < 0\) gives a factor \((-1)^n\) and oscillatory solutions
\(|A|>1\) gives growing solutions
Recall: the exact solution is monotone and decaying
If these qualitative properties are not met, we say that the numerical solution is unstable
\(-1\) is the critical limit (because \(A\le 1\) is always satisfied).
\(-1 < A\) is always fulfilled for Backward Euler (\(\theta=1\)) and Crank-Nicolson (\(\theta=0.5\)).
For Forward Euler, or any \(\theta < 0.5\), requiring \(-1 \leq A = (1-(1-\theta)a\Delta t)/(1+\theta a\Delta t)\) leads to the stability condition
\[
\Delta t \leq \frac{2}{(1-2\theta)a},
\]
and thus \(\Delta t \leq 2/a\) for stability of the Forward Euler (\(\theta=0\)) method.
Explanation of problems with Forward Euler
\(a\Delta t= 2\cdot 1.25=2.5\) and \(A=-1.5\): oscillations and growth
\(a\Delta t = 2\cdot 0.75=1.5\) and \(A=-0.5\): oscillations and decay
\(a\Delta t = 2\cdot 0.5 = 1\) and \(A=0\): \(u^n=0\) for \(n>0\) (these values are verified in the snippet below)
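A minimal check of these numbers (the helper `amplification_factor` is our own name, not part of the solver above):

```python
def amplification_factor(theta, p):
    """Amplification factor A of the theta-scheme, with p = a*dt."""
    return (1 - (1-theta)*p)/(1 + theta*p)

a = 2
for dt in 1.25, 0.75, 0.5:
    p = a*dt
    print('dt=%g: p=%g, A=%g' % (dt, p, amplification_factor(0, p)))
```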
\(p=a\Delta t\) is the important parameter for numerical performance
\(p=a\Delta t\) is a dimensionless parameter
all expressions for stability and accuracy involve \(p\)
Note that \(\Delta t\) alone is not so important, it is the combination with \(a\) through \(p=a\Delta t\) that matters
More evidence why \(p=a\Delta t\) is key
If we scale the model by \(\bar t=at\), \(\bar u=u/I\), we get \(d\bar u/d\bar t = -\bar u\), \(\bar u(0)=1\) (no physical parameters!). The analysis shows that \(\Delta \bar t\) is the key parameter, corresponding to \(a\Delta t\) in the unscaled model.
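This can be illustrated numerically: solving the scaled model with `solver` (setting \(I=1\), \(a=1\)) and scaling back reproduces the unscaled solution, provided \(\Delta \bar t = a\Delta t\). A small sketch (parameter values are illustrative):

```python
I, a, T, dt, theta = 3., 2., 4., 0.25, 0
u, t = solver(I, a, T, dt, theta)              # unscaled model
u_bar, t_bar = solver(1, 1, a*T, a*dt, theta)  # scaled model
# Same numbers after scaling back: u = I*u_bar, t = t_bar/a
print(np.allclose(u, I*u_bar), np.allclose(t, t_bar/a))
```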
Series expansion of amplification factors
To investigate \(A_e - A\) mathematically, we can Taylor expand the expression, using \(p=a\Delta t\) as the variable.
```python
from sympy import *

# Create p as a mathematical symbol with name 'p'
p = Symbol('p', positive=True)
# Create a mathematical expression with p
A_e = exp(-p)
# Find the first 6 terms of the Taylor series of A_e
A_e.series(p, 0, 6)
```
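As a sketch of the next step (reusing `p` and `A_e` from the block above; the loop is our own addition), we can expand \(A_e - A\) for all three schemes:

```python
from sympy import Rational, simplify

# Theta-scheme amplification factor: A = (1-(1-theta)*p)/(1+theta*p)
for theta, name in [(0, 'Forward Euler'), (1, 'Backward Euler'),
                    (Rational(1, 2), 'Crank-Nicolson')]:
    A = (1 - (1-theta)*p)/(1 + theta*p)
    err = (A_e - A).series(p, 0, 4).removeO()
    print(name, ':', simplify(err))
```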
These per-step errors in the amplification factor accumulate over \(n\) steps. Substituting \(n\) by \(t/\Delta t\) and \(p\) by \(a\Delta t\) in \(n(A_e - A)\) gives the leading-order error at time \(t\) (a symbolic check follows after the list):
Forward and Backward Euler: leading order term \(\frac{1}{2} ta^2\Delta t\)
Crank-Nicolson: leading order term \(\frac{1}{12}ta^3\Delta t^2\)
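This substitution can be done symbolically too; a sketch for the Crank-Nicolson term, reusing `p` and `A_e` from above (the symbols `t`, `dt`, `a` are introduced here just for this check):

```python
from sympy import symbols

t, dt, a = symbols('t dt a', positive=True)
A = (1 - p/2)/(1 + p/2)                    # Crank-Nicolson
err = (A_e - A).series(p, 0, 4).removeO()  # p**3/12
# Multiply by n = t/dt and substitute p = a*dt:
print((err*t/dt).subs(p, a*dt).expand())   # a**3*dt**2*t/12
```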
Convergence
The numerical scheme is convergent if the global error \(e^n\rightarrow 0\) as \(\Delta t\rightarrow 0\). If the error has a leading order term \((\Delta t)^r\), the convergence rate is of order \(r\).
Integrated errors
The \(\ell^2\) norm of the numerical error is computed as
\[
||e^n||_{\ell^2} = \sqrt{\Delta t\sum_{n=0}^{N_t} (e^n)^2}
\]
Consistency, stability, and convergence
Truncation error measures the residual in the difference equations. The scheme is consistent if the truncation error goes to 0 as \(\Delta t\rightarrow 0\). Importance: the difference equations approach the differential equation as \(\Delta t\rightarrow 0\).
Stability means that the numerical solution exhibits the same qualitative properties as the exact solution. Here: monotone, decaying function.
Convergence implies that the true (global) error \(e^n =u_{e}(t_n)-u^n\rightarrow 0\) as \(\Delta t\rightarrow 0\). This is really what we want!
The Lax equivalence theorem for linear differential equations: consistency + stability is equivalent to convergence.
(Consistency and stability are in most problems much easier to establish than convergence.)
Numerical computation of convergence rate
We assume that the \(\ell^2\) error norm on mesh level \(i\) can be written as
\[
E_i = C (\Delta t_i)^r
\]
where \(C\) is a constant. Then, given the errors on two consecutive levels, we can compute
\[
r = \frac{\log {\frac{E_{i-1}}{E_i}}}{\log {\frac{\Delta t_{i-1}}{\Delta t_i}}}
\]
Function for convergence rate
```python
u_exact = lambda t, I, a: I*np.exp(-a*t)

def l2_error(I, a, T, theta, dt):
    """Compute the l2 norm of the error for given parameters."""
    u, t = solver(I, a, T, dt, theta)
    en = u_exact(t, I, a) - u
    return np.sqrt(dt*np.sum(en**2))

def convergence_rates(m, I=1, a=2, T=8, theta=1, dt=1.):
    """Halve dt m times and estimate the convergence rate r."""
    dt_values, E_values = [], []
    for i in range(m):
        E = l2_error(I, a, T, theta, dt)
        dt_values.append(dt)
        E_values.append(E)
        dt = dt/2
    # Compute m-1 rates that should all be approximately the same
    r = [np.log(E_values[i-1]/E_values[i])/
         np.log(dt_values[i-1]/dt_values[i])
         for i in range(1, m, 1)]
    return r
```
Test convergence rates
Backward Euler:
```python
I, a, T, dt, theta = 1., 2., 8., 0.1, 1.
convergence_rates(4, I, a, T, theta, dt)
```
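For Crank-Nicolson (\(\theta=0.5\)) the computed rates should approach 2 instead of 1, since the scheme is second order; a quick check:

```python
convergence_rates(4, I, a, T, theta=0.5, dt=dt)
```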