Lecture 13#
Time-dependent variational methods#
In lecture 11 and lecture 12 we have focused on the spatial discretization of steady (time-independent) differential equations. We will now add time-dependence and thus consider equations like
The solution $u(x, t)$ is now approximated as

$$
u(x, t) \approx u_N(x, t) = \sum_{j=0}^{N} \hat{u}_j(t) \psi_j(x),
$$

where the time dependence is placed in the expansion coefficients $\hat{u}_j(t)$ and the spatial dependence in the basis functions $\psi_j(x)$. As such, we still use the same (time-independent) function space as before.
The most common approach for handling time-dependent problems with variational methods is to use a finite difference method for the time-derivative, and then subsequently apply the method of weighted residuals. For example, in order to solve
with the forward Euler method in time, we discretize first as
where
Note
For simplicity we normally drop the spatial dependence and write only
The solution
A residual can now be defined as
and as always we want this residual to be zero.
Note
The notation
We now introduce the approximations to
Inserted into the residual we get
The method of weighted residuals is to find
and for the Galerkin method
In order to solve the time-dependent problem we need to solve this Galerkin problem many times. The procedure is

1. Initialize $\hat{u}^0_j$, for example by projecting the initial condition $u(x, 0)$ onto the function space.
2. For $n = 0, 1, \ldots$, find the new solution $u^{n+1}_N$ such that the Galerkin equations above are satisfied (a minimal code sketch of this loop is given below).
Here
Note
By initializing
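To make this procedure concrete, here is a minimal sketch of the time-stepping loop, assuming a diffusion-type model problem whose Galerkin discretization reads $A\,\mathrm{d}\hat{\boldsymbol{u}}/\mathrm{d}t = -S\hat{\boldsymbol{u}}$. The matrices $A$ and $S$ and the function `forward_euler` are illustrative placeholders; the actual assembly is shown later in this lecture.

```python
import numpy as np
from scipy import sparse
from scipy.sparse import linalg as sla

def forward_euler(A, S, uh0, Nt, dt):
    """March Nt forward Euler steps of the semi-discrete system A du/dt = -S u.

    A is the mass matrix, S the matrix from the spatial terms, and uh0 holds the
    initial expansion coefficients (e.g., obtained by projection of u(x, 0)).
    """
    lu = sla.splu(sparse.csc_matrix(A))        # factor the mass matrix once, reuse every step
    uh = np.asarray(uh0, dtype=float).copy()
    for n in range(Nt):
        uh = lu.solve(A @ uh - dt * (S @ uh))  # solve A u^{n+1} = (A - dt S) u^n
    return uh
```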
Stability#
Just like for the finite difference method we need to be very careful with the time step
Stability of the forward Euler method for finite difference spatial discretizations#
We start by considering (67) with finite difference discretizations for both temporal and spatial derivatives:
where, for example,
With the finite difference method we assume that the solution can be written as an exponential decay in time and periodic in space (the Fourier wave
where
where
For example, if
Now apply the ansatz
Divide by
Stability requires that
In order to say something more specific about stability, we need to define the finite difference matrix
where
and inserting for
Divide by
and use
We still want
and since
which is satisfied if
We have now found the absolute stability limit for the diffusion equation using the forward Euler method. However, it was quite complicated to compute since we had to discretize in both space and time. You might ask if there is an easier way, and the answer for once is yes. There is actually a generic approach we can take for any differentiation matrix (any value of
We can insert this into (68) to obtain
which indicates that
which is a generic result for the forward Euler method with any difference matrix
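To summarize the generic argument (a brief restatement for a single eigenmode, assuming a semi-discrete system whose right hand side matrix has eigenvalues $\lambda$): the forward Euler update multiplies each eigenmode by the amplification factor

$$
g = 1 + \Delta t\, \lambda,
$$

and absolute stability requires $|g| \le 1$, which for real, negative $\lambda$ gives $\Delta t \le 2/|\lambda|$.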
We can now compute the eigenvalues of
import numpy as np
from scipy import sparse
N = 11
# Second difference matrix, with periodic (wrap-around) entries in the two corners
D2 = sparse.diags((1, 1, -2, 1, 1), (-N, -1, 0, 1, N), shape=(N+1, N+1))
Lambda = np.linalg.eig(D2.toarray())[0]  # eigenvalues of D2
print(f"Minimum eigenvalue = {min(Lambda)}")
Minimum eigenvalue = -4.0
We find that the minimum eigenvalue is -4! And we get the same result using any odd $N$,
which is the same result as (70) if we set
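As a quick numerical sanity check (a sketch, assuming the model problem $u_t = u_{xx}$ on a uniform mesh with spacing $h$, so that the semi-discrete right hand side is the second difference matrix above divided by $h^2$; the value chosen for $h$ is illustrative and does not affect the comparison), we can compute the forward Euler limit $\Delta t \le 2/\max_j|\lambda_j|$ directly from the eigenvalues and compare with $h^2/2$:

```python
import numpy as np
from scipy import sparse

N = 11
h = 1/N                     # mesh size (illustrative choice; any value works for this check)
D2 = sparse.diags((1, 1, -2, 1, 1), (-N, -1, 0, 1, N), shape=(N+1, N+1))
lam = np.linalg.eig(D2.toarray() / h**2)[0]   # eigenvalues of D2/h^2
dt_max = 2 / max(abs(lam))                    # forward Euler: dt <= 2/max|lambda|
print(f"dt_max = {dt_max:.6f}, h^2/2 = {h**2/2:.6f}")
```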
Stability of the forward Euler method for variational forms#
As described in the start of Stability, the variational methods do not include a mesh size
First of all we make the same ansatz and assume that the solution
where
For the forward Euler method we now consider the discretized Galerkin equation
and insert for Eq. (72)
Divide by
We now insert for
In matrix form this is
where
We can compute the eigenvalues
which can be inserted into (74)
Since the vector
and for stability
So with the Galerkin method we do not need a mesh size in order to determine the stability limit; it follows directly from the eigenvalues above.
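A compact restatement of the result (a sketch, using the mass matrix $A$, the stiffness matrix $S$ and the eigenvalues $\lambda_j$ of $A^{-1}S$, consistent with the implementation later in this lecture): the forward Euler/Galerkin scheme for the diffusion equation reads

$$
A\frac{\hat{\boldsymbol{u}}^{n+1} - \hat{\boldsymbol{u}}^{n}}{\Delta t} = -S\hat{\boldsymbol{u}}^{n}
\quad\Longrightarrow\quad
\hat{\boldsymbol{u}}^{n+1} = \left(I - \Delta t\, A^{-1}S\right)\hat{\boldsymbol{u}}^{n},
$$

so stability requires $|1 - \Delta t\,\lambda_j| \le 1$ for all $j$, that is $\Delta t \le 2/\max_j \lambda_j$, since the $\lambda_j$ are real and positive.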
The diffusion equation#
Consider the diffusion equation with Dirichlet boundary conditions and a prescribed initial condition
In order to solve (75) we choose a global Galerkin method with mapped Legendre polynomials and basis functions
Since we are using basis functions that are all zero on the boundaries, the last term disappears. In order to incorporate the Dirichlet boundary condition we add a boundary term
We use Legendre basis functions with a mapping from
The stiffness matrix
The mass matrix
The mass matrix is symmetric and we know that
and with
The mass matrix is as such
In matrix form the equation to solve is

$$
A \hat{\boldsymbol{u}}^{n+1} = \left(A - \Delta t\, S\right) \hat{\boldsymbol{u}}^{n}.
$$

This equation is solved for $\hat{\boldsymbol{u}}^{n+1}$ in the class FE_Diffusion below, where we reuse the code from mandatory assignment 2. In the code we assemble the mass and stiffness matrices $A$ and $S$ as sparse matrices. This means that for stability we need to choose a time step $\Delta t \le 2/\max_j \lambda_j$, where $\lambda_j$ are the eigenvalues of $A^{-1}S$.
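Before turning to the implementation, here is a small verification (a sketch, assuming the Dirichlet Legendre basis functions are the composite polynomials $\psi_j = P_j - P_{j+2}$ from lecture 11) of the diagonal stiffness matrix $(\psi'_i, \psi'_j) = (4i+6)\delta_{ij}$ on the reference domain $(-1, 1)$, which is the formula used (scaled by $2/L$) in `get_mats` below:

```python
import sympy as sp

x = sp.Symbol('x')
psi = lambda j: sp.legendre(j, x) - sp.legendre(j+2, x)   # assumed Dirichlet Legendre basis
for i in range(4):
    for j in range(4):
        sij = sp.integrate(psi(i).diff(x)*psi(j).diff(x), (x, -1, 1))
        assert sij == (4*i + 6 if i == j else 0)
```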
import numpy as np
import sympy as sp
from scipy import sparse
from galerkin import TrialFunction, TestFunction, DirichletLegendre, DirichletChebyshev, \
project, inner
x, t, c, L = sp.symbols('x,t,c,L')
class FE_Diffusion:
    """Class for solving the diffusion equation with the forward Euler method

    Parameters
    ----------
    N : int
        Number of basis functions
    L0 : number
        The extent of the domain, which is [0, L0]
    bc : 2-tuple of numbers
        The Dirichlet boundary conditions at the two edges
    u0 : Sympy function of x, t, c and L
        Used for specifying initial condition
    """
    def __init__(self, N, L0=1, u0=sp.cos(sp.pi*x/L)+sp.cos(10*sp.pi*x/L)):
        self.N = N
        self.L = L0
        self.u0 = u0.subs({L: L0})
        self.bc = (self.u0.subs({x: 0, t: 0}).n(), self.u0.subs({x: L0, t: 0}).n())
        self.uh_np1 = np.zeros(N+1)  # expansion coefficients \hat{u}^{n+1}
        self.uh_n = np.zeros(N+1)    # expansion coefficients \hat{u}^{n}
        self.V = self.get_function_space()
        self.A, self.S = self.get_mats()
        self.lu = sparse.linalg.splu(self.A.tocsc())

    def get_function_space(self):
        return DirichletLegendre(self.N, domain=(0, self.L), bc=self.bc)

    def get_mats(self):
        """Return mass and stiffness matrices
        """
        N = self.N
        u = TrialFunction(self.V)
        v = TestFunction(self.V)
        a = inner(u, v)  # mass matrix
        # Diagonal stiffness matrix 2/L*(4j+6) for the Dirichlet Legendre basis
        s = 2/self.L*sparse.diags([4*np.arange(N+1)+6], [0], shape=(N+1, N+1), format='csr')
        return a, s

    def get_eigenvalues(self):
        """Return eigenvalues of H=A^{-1}S"""
        return np.linalg.eig(np.linalg.inv(self.A.toarray()) @ self.S.toarray())[0]

    def __call__(self, Nt, dt=0.1, save_step=100):
        """Solve diffusion equation

        Parameters
        ----------
        Nt : int
            Number of time steps
        dt : number
            timestep
        save_step : int, optional
            Save solution every save_step time step

        Returns
        -------
        Dictionary with key, values as timestep, array of solution
            The number of items in the dictionary is Nt/save_step, and
            each value is an array of length N+1
        """
        # Initialize
        self.uh_n[:] = project(self.u0, self.V)  # uh_n = u(x, 0)
        xj = self.V.mesh()
        plotdata = {0: self.V.eval(self.uh_n, xj)}

        # Solve
        f = np.zeros(self.N+1)
        for n in range(1, Nt+1):
            f[:] = self.A @ self.uh_n - dt*(self.S @ self.uh_n)
            self.uh_np1[:] = self.lu.solve(f)
            self.uh_n[:] = self.uh_np1
            if n % save_step == 0:  # save every save_step timestep
                plotdata[n] = self.V.eval(self.uh_np1, xj)
        return plotdata
Let's solve the problem using the default initial condition $u(x, 0) = \cos(\pi x/L) + \cos(10\pi x/L)$ and a time step right at the stability limit
from utilities import plot_with_offset
sol = FE_Diffusion(40, L0=2)
dt = 2/max(sol.get_eigenvalues())
data = sol(1000, dt=dt, save_step=100)
plot_with_offset(data, sol.V.mesh(), figsize=(4, 3))

The solution is nice and smooth and the time step is
print(f'Maximum time step = {dt}')
Maximum time step = 2.1980578790345177e-05
Let's solve it using a slightly larger time step
data = sol(1000, dt=1.01*dt, save_step=100)
plot_with_offset(data, sol.V.mesh(), figsize=(4, 3))

Obviously, the solver is now unstable and the solution diverges.
Note
The diffusion equation is “stiff” and requires a very short time step with the forward Euler method in order to remain stable.
Changing from Legendre to Chebyshev makes the required time step even shorter.
class FE_Diffusion_Cheb(FE_Diffusion):
    def get_function_space(self):
        return DirichletChebyshev(self.N, domain=(0, self.L), bc=self.bc)

    def get_mats(self):
        """Return mass and stiffness matrices
        """
        N = self.N
        u = TrialFunction(self.V)
        v = TestFunction(self.V)
        a = inner(u, v)
        #s2 = -inner(u.diff(2), v) # slow
        k = np.arange(N+1, dtype=float)
        def diag(i):
            if i == 0:
                return 2*np.pi*(k+1)*(k+2)
            return 4*np.pi*(k[:-i]+1)
        diags = np.arange(0, N+1, 2)
        vals = [diag(i) for i in diags]
        s = 2/self.L * sparse.diags(vals, diags, shape=(N+1, N+1), format='csr')
        return a, s
sol = FE_Diffusion_Cheb(40, L0=2)
dt = 2/max(sol.get_eigenvalues())
data = sol(1000, dt=dt, save_step=100)
plot_with_offset(data, sol.V.mesh(), figsize=(4, 3))

print(f'Maximum time step = {dt}')
Maximum time step = 1.2332249161314778e-05
We can compare these absolute stability limits with the finite difference method, where the forward Euler limit for the diffusion equation is $\Delta t \le h^2/2$.
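For instance (an illustrative comparison, assuming a uniform mesh with $N = 40$ intervals on the same domain $[0, 2]$):

```python
# Forward Euler / finite difference stability limit dt <= h^2/2 for u_t = u_xx
h = 2/40
print(f"FD maximum time step = {h**2/2}")
```

This gives $\Delta t = 1.25\cdot 10^{-3}$, which is roughly two orders of magnitude larger than the limits found for the global Galerkin methods above.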
Stiffness matrix for Chebyshev basis
Note that in the implementation we have used an explicit formula for the entries of the Chebyshev stiffness matrix, because it is quite expensive to compute the matrix using the inner(u.diff(2), v) function, which loops over all entries. Deriving such explicit formulas is possible for most matrices of interest, but it is a lot of work and only necessary in order to save some time on computing the stiffness matrix. Also note that, because of the non-constant weight function, the Chebyshev stiffness matrix is not symmetric; only the upper diagonals are nonzero.
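As an optional sanity check (a sketch, assuming that `inner` returns a scipy sparse matrix, as it does for the mass matrix above; this can be slow for large $N$), we can compare the explicit formula in `FE_Diffusion_Cheb.get_mats` with the generic assembly:

```python
# Compare the explicit Chebyshev stiffness formula with the slow generic assembly
solc = FE_Diffusion_Cheb(12, L0=2)
u = TrialFunction(solc.V)
v = TestFunction(solc.V)
S2 = -inner(u.diff(2), v)                           # slow: loops over all entries
print(abs(solc.S.toarray() - S2.toarray()).max())   # expected to be ~0 if the formulas agree
```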
Use a stencil matrix for the implementation#
It is rather messy to work with composite basis functions. A very smart trick is thus to use a stencil matrix
This defines
With
But why is the stencil matrix so smart? Consider the mass matrix
You need to sort out the four (diagonal) matrices on the right. This is not very difficult, since they are all diagonal, but you need to watch out for the indices in order for the mass matrix to turn out correctly. Now use the stencil matrix instead (with summation over repeated indices)
Here you only need the one diagonal matrix
And this works for any stencil matrix, not just the Dirichlet one. That is the main advantage of the stencil matrix approach. For the Neumann basis (see lecture 11)
the stencil matrix is
where
Again you need only the single matrix using orthogonal polynomials
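A minimal sketch of this idea (assuming the Dirichlet stencil $\psi_j = P_j - P_{j+2}$, so that the stencil matrix $K$ has $1$ on the main diagonal and $-1$ two places to the right): with $T$ denoting the diagonal mass matrix of the orthogonal Legendre polynomials, the composite mass matrix is simply $K T K^{\mathsf{T}}$.

```python
import numpy as np
import sympy as sp
from scipy import sparse

N = 8
K = sparse.diags((1, -1), (0, 2), shape=(N+1, N+3))  # stencil matrix for psi_j = P_j - P_{j+2}
j = np.arange(N+3)
T = sparse.diags([2/(2*j+1)], [0])                   # (P_j, P_j) = 2/(2j+1) on (-1, 1)
A = (K @ T @ K.T).toarray()                          # composite mass matrix

# Spot check the (0, 0) entry against direct integration of (psi_0, psi_0)
x = sp.Symbol('x')
psi0 = sp.legendre(0, x) - sp.legendre(2, x)
assert abs(float(sp.integrate(psi0*psi0, (x, -1, 1))) - A[0, 0]) < 1e-12
```

The same construction works for the stiffness matrix and for other stencils (e.g., the Neumann basis), which is exactly the point made above: only the diagonal matrices of the orthogonal polynomials are needed.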
Backward Euler#
The forward Euler method requires a very short time step for the diffusion equation. How about the backward Euler method
where the only difference from the forward Euler method is that the right hand side is evaluated at the new time level $t_{n+1}$.
If we continue with the ansatz
which in matrix form reads
If we multiply from left to right by
Here we can once again use
Since all time steps and eigenvalues are real numbers larger than 0, we find that the amplification factor is always less than one in magnitude.
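In other words (a compact restatement, with $A$, $S$ and the eigenvalues $\lambda_j$ of $A^{-1}S$ as before):

$$
\left(A + \Delta t\, S\right)\hat{\boldsymbol{u}}^{n+1} = A\hat{\boldsymbol{u}}^{n}
\quad\Longrightarrow\quad
g_j = \frac{1}{1 + \Delta t\,\lambda_j} \in (0, 1] \quad \text{for all } \Delta t > 0,
$$

so the backward Euler method is unconditionally stable for the diffusion equation, at the cost of solving a linear system with the matrix $A + \Delta t\, S$ at every time step (or factorizing it once if $\Delta t$ is constant).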
The wave equation#
Let's consider the wave equation from lecture 5 in one spatial dimension
As you may recall, we solved the wave equation with Dirichlet, Neumann and open boundary conditions. The two time derivatives also required that the solution was initialized at two time steps, or, more mathematically precisely, that both $u(x, 0)$ and $\partial u/\partial t(x, 0)$ are specified.
We now use finite differences in time such that
A numerical residual
where we use
for
In order to solve the Galerkin problem we need to formulate the linear algebra problem. Hence we insert for
We next rearrange such that the unknown
With matrix notation,
where
where the vector
The boundary conditions determine the path forward. Here there are subtle differences between global Galerkin methods and the local finite element method, but the procedures are exactly as detailed in lecture 11 and lecture 12.
For Dirichlet boundary conditions

- Global methods make use of basis functions that are all zero at the boundaries. Hence the boundary vector disappears. Nonzero boundary values are incorporated using a boundary function, see lecture 11.
- The finite element method makes use of Lagrange polynomials, where only the first basis function is nonzero at the left boundary and only the last is nonzero at the right boundary. Hence it is only the first and last entries of the boundary vector that potentially are different from zero. However, since the boundary values are known, we identify the first and last rows of the equation system with the boundary conditions and set the first and last expansion coefficients to the prescribed boundary values (see lecture 12). As such, the boundary terms will never enter the equation system.
For Neumann boundary conditions

- Global methods can make use of basis functions that all have zero derivatives at the two boundaries, and add a boundary function that satisfies the prescribed Neumann conditions. However, global methods can also make use of any basis functions (like pure Legendre polynomials) and incorporate the boundary conditions through the boundary term in the weak form, which then supplies the prescribed derivatives directly. Note that this approach is very easy to use if the Neumann conditions are homogeneous, because in that case you only need to neglect the boundary vector.
- The finite element method makes use of the weak form and specifies the prescribed derivatives in the boundary term, and nothing more needs to be done since the basis functions are Lagrange polynomials. In the event that the Neumann conditions are homogeneous, the boundary vector can simply be neglected.
How about stability? We can make the same ansatz as for the diffusion equation and assume
which can be transformed by dividing by
Using
where
and in order for
Note
For all
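For reference, a compact restatement consistent with the implementation below (with $A$, $S$ and the eigenvalues $\lambda_j$ of $A^{-1}S$ as before, and wave speed $c$): the fully discrete scheme reads

$$
A\,\hat{\boldsymbol{u}}^{n+1} = A\left(2\hat{\boldsymbol{u}}^{n} - \hat{\boldsymbol{u}}^{n-1}\right) - (c\Delta t)^2 S\,\hat{\boldsymbol{u}}^{n},
$$

and stability requires $(c\Delta t)^2 \lambda_j \le 4$ for all $j$, that is $\Delta t \le 2/(c\sqrt{\max_j \lambda_j})$, which is exactly the time step used below.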
Let's implement the wave equation using a global Galerkin method, homogeneous Dirichlet boundary conditions and an initial Gaussian pulse. We create a subclass of the FE_Diffusion class and thus reuse much code
class Wave(FE_Diffusion):
    def __init__(self, N, L0=1, c0=1, u0=sp.cos(sp.pi*x/L)+sp.cos(10*sp.pi*x/L)):
        u0 = u0.subs({c: c0})
        FE_Diffusion.__init__(self, N, L0=L0, u0=u0)
        self.c = c0
        self.uh_nm1 = np.zeros_like(self.uh_n)  # expansion coefficients \hat{u}^{n-1}

    def __call__(self, Nt, dt=0.1, save_step=100):
        # Initialize
        self.uh_nm1[:] = project(self.u0.subs(t, 0), self.V)  # unm1 = u(x, 0)
        xj = self.V.mesh()
        plotdata = {0: self.V.eval(self.uh_nm1, xj)}
        self.uh_n[:] = project(self.u0.subs(t, dt), self.V)   # un = u(x, dt)
        if save_step == 1:
            plotdata[1] = self.V.eval(self.uh_n, xj)

        # Solve
        f = np.zeros(self.N+1)
        alfa = self.c*dt
        for n in range(2, Nt+1):
            f[:] = self.A @ (2*self.uh_n-self.uh_nm1) - alfa**2*(self.S @ self.uh_n)
            self.uh_np1[:] = self.lu.solve(f)
            self.uh_nm1[:] = self.uh_n
            self.uh_n[:] = self.uh_np1
            if n % save_step == 0:  # save every save_step timestep
                plotdata[n] = self.V.eval(self.uh_np1, xj)
        return plotdata
sol = Wave(40, L0=2, c0=1, u0=sp.exp(-50*(x-L/2+c*t)**2))
dt = 1/sol.c*np.sqrt(4/max(sol.get_eigenvalues()))
data = sol(400, dt=dt, save_step=40)
plot_with_offset(data, sol.V.mesh(), figsize=(4, 3))

print(f"Maximum time step = {dt}")
Maximum time step = 0.006630321076742087
This maximum time step should be compared with the maximum time step of a finite difference method. Remember the Courant number $C = c\Delta t/h$, and that the finite difference scheme for the wave equation is stable for $C \le 1$.
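As a rough comparison (a sketch, assuming a uniform finite difference mesh with $N = 40$ intervals on the same domain $[0, 2]$ and $c = 1$):

```python
# CFL-limited time step for a uniform finite difference mesh (illustrative)
h = 2/40
c0 = 1
print(f"FD (C=1) maximum time step = {h/c0}")
```

This gives $\Delta t = 0.05$, almost an order of magnitude larger than the $\approx 0.0066$ found for the global Galerkin method above.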
Weekly assignments#
1. Implement the diffusion equation with the backward Euler scheme.
2. Find the absolute stability limit of the diffusion equation discretized with a Crank-Nicolson scheme. Use both a finite difference method and a comparable Galerkin method.
3. Implement the diffusion equation with a Crank-Nicolson scheme and verify the absolute stability limits found in 2.
4. Consider the leap-frog finite difference scheme

   $$
   \frac{u^{n+1}_j - u^{n-1}_j}{2\Delta t} = \frac{u^{n}_{j+1} - 2u^{n}_j + u^{n}_{j-1}}{h^2},
   $$

   suggested by Richardson in 1910. Explain why this scheme is unconditionally unstable. To this end you can use that all the eigenvalues of the second difference matrix are real.
Note
Richardson was one of the great pioneers for using mathematical techniques in weather forecasting, but at the time (1910) he did not know about stability limits for finite difference schemes. The von Neumann stability analysis that made this possible was not discovered until 1947.