where \(\hat{u}^n_j = \hat{u}_j(t_n)\) are the time-dependent expansion coefficients (the unknowns at time \(t_n\)). Insert into the residual to get the numerical residual \(\mathcal{R}_N\).
The method of weighted residuals is to find \(u^{n+1}_N \in V_N\) such that
\[
(\mathcal{R}_N, v) = 0, \quad \forall \, v \in W
\]
and for the Galerkin method \(W = V_N\).
In order to solve the time-dependent problem we need to solve this Galerkin problem many times. The procedure is:
1. Initialize \(u_N^0(\boldsymbol{x})\) by projecting the given initial condition \(u(\boldsymbol{x}, 0)\) onto \(V_N\) (function approximation, see lectures 8 and 9; a code sketch follows after this list). That is, find \(u^0_N \in V_N\) such that \[
(u^0_N-u(\boldsymbol{x}, 0), v)=0, \quad \forall \, v \in V_N
\]
2. For \(n = 0, 1, \ldots, N_t-1\) find \(u^{n+1}_N \in V_N\) such that
\[
(\mathcal{R}_N, v) = 0, \quad \forall \, v \in V_N
\]
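The projection in step 1 is a small linear algebra problem: assemble the mass matrix \(a_{ij} = (\psi_j, \psi_i)\) and the right-hand side \(b_i = (u(\boldsymbol{x}, 0), \psi_i)\), then solve for the coefficients. A minimal sketch, assuming a global Legendre basis \(\psi_j = P_j\) on \([-1, 1]\) (diagonal mass matrix) and a stand-in initial condition:

import numpy as np
from numpy.polynomial.legendre import leggauss, legval

N = 8
xq, wq = leggauss(N + 4)   # Gauss-Legendre quadrature nodes and weights

def P(j, x):               # evaluate the Legendre polynomial P_j at x
    return legval(x, [0]*j + [1])

def u0(x):                 # stand-in initial condition u(x, 0)
    return np.exp(-4*x**2)

# Mass matrix (P_j, P_i) = 2/(2i+1) delta_ij and right-hand side (u0, P_i)
A = np.diag([2/(2*i + 1) for i in range(N + 1)])
b = np.array([np.sum(wq*u0(xq)*P(i, xq)) for i in range(N + 1)])
uhat0 = np.linalg.solve(A, b)   # expansion coefficients of u^0_N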
Recall the finite difference discretization of the diffusion equation, where we inserted the ansatz \(u^n_j = g^n e^{\hat{\imath} k x_j}\) into the discretized equations. For stability we required \(|g| \le 1\).
We previously called the amplification factor \(A\), but we have since started using \(A\) for matrices. Hence the amplification factor is now called \(g\).
Since \(0 \le \sin^2\left(k \Delta x/2\right) \le 1\), the amplification factor \(g = 1- \frac{4 \Delta t}{\Delta x^2}\sin^2\left(k \Delta x/2\right)\) is always less than 1. However, in the worst case \(\sin^2\left(k \Delta x/2\right) = 1\), where \(g = 1- \frac{4 \Delta t}{\Delta x^2}\), \(g\) can become smaller than \(-1\), and that gives us the stability limit \(\Delta t \le \Delta x^2/2\).
The complex exponentials \(e^{\hat{\imath} k x}\) are eigenfunctions of the Laplace operator in 1D, so we actually used eigenvalue theory also for the hard approach, we just did not mention it!
Stability of the forward Euler method for the Galerkin method
For the forward Euler method we now consider the Galerkin method for the discretization of space, instead of finite differences.
We get the variational form used on the previous slide, with diffusion for \(\mathcal{L}(\psi_j)\) and integration by parts (the boundary terms cancel since \(\psi_j(0)=\psi_j(L)=0\)): \[
\sum_{j=0}^N(\psi_j, \psi_i) \hat{u}^{n+1}_j = \sum_{j=0}^N(\psi_j, \psi_i) \hat{u}^{n}_j - \Delta t\left( \sum_{j=0}^N (\psi'_j, \psi'_i) \hat{u}^{n}_j - \cancel{[u^n_N \psi_i]_{x=0}^{x=L}} \right)
\]
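In matrix form this update reads \(A \boldsymbol{\hat{u}}^{n+1} = (A - \Delta t S)\boldsymbol{\hat{u}}^n\). With \(H = A^{-1}S\) and the eigenvalue problem \(H \boldsymbol{\hat{u}}^0 = \lambda \boldsymbol{\hat{u}}^0\) the amplification factor becomes \(g = 1 - \Delta t \lambda\), so \(|g| \le 1\) requires \(\Delta t \le 2/\max(\lambda)\). A minimal numerical check, assuming P1 finite elements with homogeneous Dirichlet conditions (the matrices in the lecture's own solver may differ):

import numpy as np
from scipy.sparse import diags

N, L = 50, 1.0
h = L / N
# P1 mass and stiffness matrices on the N-1 interior nodes
A = diags([h/6, 4*h/6, h/6], [-1, 0, 1], shape=(N-1, N-1)).toarray()
S = diags([-1/h, 2/h, -1/h], [-1, 0, 1], shape=(N-1, N-1)).toarray()

lam = np.linalg.eigvals(np.linalg.solve(A, S)).real
dt_max = 2 / lam.max()   # forward Euler stability limit
print(dt_max)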
It is rather messy to work with composite basis functions. A very smart trick is thus to use a stencil matrix \(S\), such that (with summation over the repeated \(j\) index)
\[
\psi_i = P_i-P_{i+2} = s_{ij} P_j
\]
This defines \(S \in \mathbb{R}^{(N+1) \times (N+3)}\) for the Dirichlet problem as
\[
s_{ij} = \begin{cases} 1, & j = i \\ -1, & j = i+2 \\ 0, & \text{otherwise} \end{cases}
\]
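With this, matrices for the composite basis follow from the orthogonal Legendre basis by matrix products, e.g. \((\psi_j, \psi_i) = (S M S^{T})_{ij}\) with \(m_{ij} = (P_j, P_i) = \frac{2}{2i+1}\delta_{ij}\). A minimal sketch, assuming the basis \(\psi_i = P_i - P_{i+2}\) on \([-1, 1]\):

import numpy as np
from scipy.sparse import diags

N = 6
# Stencil matrix: s_ij = 1 for j = i, s_ij = -1 for j = i+2
S_st = diags([1, -1], [0, 2], shape=(N+1, N+3)).toarray()

# Diagonal Legendre mass matrix (P_j, P_i) = 2/(2i+1) delta_ij
M = np.diag([2/(2*j + 1) for j in range(N + 3)])

# Composite mass matrix (psi_j, psi_i); nonzero only for j = i and j = i +/- 2
A = S_st @ M @ S_st.T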
Backward Euler on matrix form
Inserting the ansatz \(\boldsymbol{\hat{u}}^{n} = g^{n} \boldsymbol{\hat{u}}^0\) into the backward Euler scheme and dividing by \(g^n\), the equation
\[
\sum_{j=0}^N g \left( (\psi_j, \psi_i) + \Delta t (\psi'_j, \psi'_i) \right) \hat{u}^{0}_j = \sum_{j=0}^N(\psi_j, \psi_i) \hat{u}^{0}_j
\]
becomes
\[
g (A + \Delta t S) \boldsymbol{\hat{u}}^0 = A \boldsymbol{\hat{u}}^0
\]
If we multiply by \(A^{-1}\) from the left we get
\[
g (I + \Delta t A^{-1}S) \boldsymbol{\hat{u}}^0 = \boldsymbol{\hat{u}}^0
\]
Here we can once again use \(H=A^{-1}S\) with the eigenvalue problem \(H \boldsymbol{\hat{u}}^0 = \lambda \boldsymbol{\hat{u}}^0\), and thus we find
\[
g = \frac{1}{1+\Delta t \lambda}
\]
Since the time step and all eigenvalues are real and positive, we always have \(0 < g < 1\), and the backward Euler method is thus unconditionally stable.
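Unconditional stability lets us take long time steps at the cost of solving the linear system \((A + \Delta t S)\boldsymbol{\hat{u}}^{n+1} = A \boldsymbol{\hat{u}}^{n}\) each step. A minimal sketch, again assuming P1 finite elements with homogeneous Dirichlet conditions, factorizing the matrix once:

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

N, L, dt = 50, 1.0, 0.1
h = L / N
A = diags([h/6, 4*h/6, h/6], [-1, 0, 1], shape=(N-1, N-1), format='csc')
S = diags([-1/h, 2/h, -1/h], [-1, 0, 1], shape=(N-1, N-1), format='csc')

lu = splu((A + dt*S).tocsc())   # factorize once, reuse every step
xj = np.linspace(h, L - h, N-1)
u = np.sin(np.pi*xj/L)          # interior nodal values at t = 0
for n in range(100):
    u = lu.solve(A @ u)         # backward Euler: (A + dt S) u^{n+1} = A u^n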
We now discretize the wave equation \(u_{tt} = c^2 u_{xx}\) with central differences in time and Galerkin in space. With matrix notation, \(a_{ij} = (\psi_j, \psi_i)\), \(s_{ij} = (\psi'_j, \psi'_i)\) and \(\alpha = c \Delta t\), we get
\[
A \boldsymbol{\hat{u}}^{n+1} = A (2 \boldsymbol{\hat{u}}^n - \boldsymbol{\hat{u}}^{n-1}) - \alpha^2 \left( S \boldsymbol{\hat{u}}^n - \boldsymbol{b}^n \right)
\]
where \(\boldsymbol{b}^n = ((u^n_N)'(L) \psi_i(L) - (u^n_N)'(0) \psi_i(0))_{i=0}^N\).
We can write this as
\[
A \boldsymbol{\hat{u}}^{n+1} = \boldsymbol{f}^n
\]
where the vector \(\boldsymbol{f}^n = A (2 \boldsymbol{\hat{u}}^n - \boldsymbol{\hat{u}}^{n-1}) - \alpha^2 \left( S \boldsymbol{\hat{u}}^n - \boldsymbol{b}^n\right)\).
Use a boundary function (with reference coordinate \(X\in [-1, 1]\)) \[
B(x)=\tilde{B}(X) = \frac{b}{2}(1+X) + \frac{a}{2}(1-X)
\]
and solve for \(\tilde{u}^n_N \in V_N = \text{span}\{\psi_j\}_{j=0}^N\). Final solution is then
\[
u^n_N = \tilde{u}^n_N + B(x)
\]
Dirichlet with the finite element method
The finite element method makes use of Lagrange polynomials, where \(\psi_i(0)=1\) for \(i=0\) and zero otherwise, and similarly \(\psi_i(L)=1\) for \(i=N\) and zero otherwise. Hence only \(b^n_0\) and \(b^n_N\) can be different from zero. However, since \(\hat{u}_0\) and \(\hat{u}_N\) are known, we do not solve the equations for \(i=0\) or \(i=N\) at all. Simply "ident" the coefficient matrix \(A\), i.e., replace rows \(0\) and \(N\) with the corresponding rows of the identity matrix, and set \(f^n_0=a\) and \(f^n_N=b\).
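A minimal sketch of this row manipulation, with a hypothetical \(6 \times 6\) system standing in for the assembled matrices:

import numpy as np

N = 5
a_bc, b_bc = 1.0, 2.0             # Dirichlet values u(0) = a and u(L) = b
A = np.eye(N+1) + np.diag(np.full(N, 0.25), 1) + np.diag(np.full(N, 0.25), -1)
f = np.ones(N+1)                  # stands in for f^n

A[0, :] = 0; A[0, 0] = 1          # "ident" row 0
A[N, :] = 0; A[N, N] = 1          # "ident" row N
f[0], f[N] = a_bc, b_bc           # enforce the boundary values
uhat = np.linalg.solve(A, f)      # now uhat[0] == a_bc and uhat[N] == b_bc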
Neumann boundary conditions
Strong approach: use basis functions that eliminate \(b_i^n\) and ensure \((\tilde{u}^n_N)'(0)=(\tilde{u}^n_N)'(L)=0\): \[
\psi'_j(0) = \psi'_j(L) = 0, \quad j=0, 1, \ldots, N
\]
Add a boundary function \(u^n_N=\tilde{u}^n_N+B(x)\) that satisfies \(B'(0)=g_0\) and \(B'(L)=g_L\).
Weak approach: insert \((u^n_N)'(0) = g_0\) and \((u^n_N)'(L)=g_L\) to get
\[
{b}_i^n = g_L \psi_i(L) - g_0 \psi_i(0)
\]
Use any basis functions, except a basis where \(\psi_i(0)=0\) and \(\psi_i(L)=0\) for all \(i=0,1,\ldots, N\).
This works for the finite element method, where \(\psi_0(0)=1\) and \(\psi_N(L)=1\). Nothing else is required.
Also works for the global Galerkin method, e.g., Legendre polynomials \(\psi_i(x)=P_i(X)\).
Does not work for Chebyshev polynomials, because the method relies on integration by parts, which does not combine well with the Chebyshev weight \(\omega(x)=1/\sqrt{1-X^2}\).
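For the finite element method the weak boundary vector \(b^n_i = g_L \psi_i(L) - g_0 \psi_i(0)\) is particularly simple to build, since only \(\psi_0\) is nonzero at \(x=0\) and only \(\psi_N\) at \(x=L\). A minimal sketch:

import numpy as np

N = 10
g0, gL = 1.0, 0.5        # prescribed Neumann values u'(0) = g0 and u'(L) = gL
b = np.zeros(N + 1)
b[0] = -g0               # b_i = gL*psi_i(L) - g0*psi_i(0); psi_0(0) = 1
b[N] = gL                # psi_N(L) = 1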
Stability of the wave equation in variational form
We have derived the linear algebra problem
\[
A (\boldsymbol{\hat{u}}^{n+1} -2 \boldsymbol{\hat{u}}^n + \boldsymbol{\hat{u}}^{n-1}) = - \alpha^2 \left( S \boldsymbol{\hat{u}}^n - \boldsymbol{b}^n \right)
\]
The ansatz \(u_N^{n+1}=gu^{n}_N=g^{n+1}u_N^0\) also implies that
\[
\boldsymbol{\hat{u}}^{n+1} = g \boldsymbol{\hat{u}}^n = g^{n+1} \boldsymbol{\hat{u}}^0 \tag{1}
\]
since
\[
u^{n+1}_N = g u^{n}_N \Rightarrow \sum_{j=0}^N \hat{u}^{n+1}_j \psi_j = g \sum_{j=0}^N \hat{u}^{n}_j \psi_j = \sum_{j=0}^N (g\hat{u}^{n}_j) \psi_j
\]
Hence \(\hat{u}^{n+1}_j = g\hat{u}^n_j\). Recursively, we get (1).
Find amplification factor
\[
A (\boldsymbol{\hat{u}}^{n+1} -2 \boldsymbol{\hat{u}}^n + \boldsymbol{\hat{u}}^{n-1}) = - \alpha^2 \left( S \boldsymbol{\hat{u}}^n - \boldsymbol{b}^n \right)
\]
Insert \(\boldsymbol{\hat{u}}^{n} = g^{n} \boldsymbol{\hat{u}}^0\) and assume that \(\boldsymbol{b}^n=\boldsymbol{0}\)
\[
(g^{n+1}-2g^n+g^{n-1}) A \boldsymbol{\hat{u}}^0 = -\alpha^2 g^n S \boldsymbol{\hat{u}}^0
\]
Divide by \(g^n\) and multiply by \(A^{-1}\) from the left
Use \(H = A^{-1} S\) and the eigenvalue problem \(H \boldsymbol{\hat{u}}^0 = \lambda \boldsymbol{\hat{u}}^0\) to get
\[
(g - 2 + g^{-1}) \boldsymbol{\hat{u}}^0 = -\alpha^2 \lambda \boldsymbol{\hat{u}} ^{0}
\]
Dividing out \(\boldsymbol{\hat{u}}^0\) and moving the constant \(2\) to the right hand side gives \[
g + g^{-1} = \beta \tag{2}
\]
where \(\beta = -\alpha^2 \lambda +2\).
For stability we need \(|g| \le 1\).
Note
If \(|g|<1\) is a root of (2), then \(g^{-1}\) with \(|g^{-1}|>1\) is also a root, which is unstable. Since all roots must satisfy \(|g| \le 1\), \(|g|=1\) is the only possibility.
Find the longest stable time step
\[
g + g^{-1} = \beta
\]
Solve the quadratic equation \(g^2 - \beta g + 1 = 0\) for \(g\)
\[
g = \frac{\beta \pm \sqrt{\beta^2-4}}{2}
\]
Assume \(-2 \le \beta \le 2\) such that \(\beta^2-4 \le 0\) and \(\sqrt{\beta^2-4} = a \hat{\imath}\), with \(a=\sqrt{4-\beta^2}\) real
For \(-2 \le \beta \le 2\) we then get \(|g|^2 = \frac{\beta^2}{4} + \frac{4-\beta^2}{4} = 1\), so \(|g|=1\) and the numerical solution is stable!
Setting \(\beta=-\alpha^2 \lambda +2 = -(c \Delta t)^2 \lambda +2 = -2\) for the largest eigenvalue \(\max(\lambda)\), the longest possible time step is
\[
\Delta t \le \frac{1}{c}\sqrt{\frac{4}{\max(\lambda)}}
\]
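The limit is easy to evaluate numerically. A sketch, once more assuming P1 finite elements with homogeneous Dirichlet conditions; since \(\max(\lambda)\) grows like \(1/h^2\), the stable \(\Delta t\) shrinks linearly with \(h\):

import numpy as np
from scipy.sparse import diags

N, L, c = 50, 1.0, 1.0
h = L / N
A = diags([h/6, 4*h/6, h/6], [-1, 0, 1], shape=(N-1, N-1)).toarray()
S = diags([-1/h, 2/h, -1/h], [-1, 0, 1], shape=(N-1, N-1)).toarray()

lam_max = np.linalg.eigvals(np.linalg.solve(A, S)).real.max()
dt_max = np.sqrt(4 / lam_max) / c    # longest stable time step
print(dt_max)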
Implementation - Overload diffusion solver
We use the same function space, and thus the same matrices \(A\) and \(S\), as for the diffusion equation. We only need to reimplement the time stepper in __call__.
class Wave(FE_Diffusion):

    def __init__(self, N, L0=1, c0=1, u0=sp.sin(sp.pi*x/L)):
        u0 = u0.subs({c: c0})
        FE_Diffusion.__init__(self, N, L0=L0, u0=u0)
        self.c = c0
        self.uh_nm1 = np.zeros_like(self.uh_n)

    def __call__(self, Nt, dt=0.1, save_step=100):
        self.uh_nm1[:] = project(self.u0.subs(t, 0), self.V)
        xj = self.V.mesh()
        plotdata = {0: self.uh_nm1(xj)}
        self.uh_n[:] = project(self.u0.subs(t, dt), self.V)
        if save_step == 1:
            plotdata[1] = self.uh_n(xj)
        f = np.zeros(self.N+1)
        alfa = self.c*dt
        for n in range(2, Nt+1):
            f[:-2] = self.A @ (2*self.uh_n[:-2] - self.uh_nm1[:-2]) \
                - alfa**2*(self.S @ self.uh_n[:-2])
            self.uh_np1[:] = self.sol(f)
            self.uh_nm1[:] = self.uh_n
            self.uh_n[:] = self.uh_np1
            if n % save_step == 0:  # save every save_step timestep
                plotdata[n] = self.uh_np1(xj)
        return plotdata
Running the solver with a time step just above the stability limit (note the factor 1.01) demonstrates the onset of instability:

data = sol(400, dt=dt*1.01, save_step=40)
plot_with_offset(data, sol.V.mesh(), figsize=(8, 2.5))
Summary of lecture
We have solved time-dependent PDEs using finite differences in time and the Galerkin method in space.
In order to compute stability limits we have made use of the common marching ansatz \[
u_N^{n+1}=gu_N^n
\] and used the eigenvalues \(\lambda\) and eigenvectors \(\boldsymbol{\hat{u}}^0\) of \(H = A^{-1}S\) to find the amplification factor \(g\), e.g., for the forward Euler method \[
\begin{align}
g \boldsymbol{\hat{u}}^0 &= \boldsymbol{\hat{u}}^0 - \Delta t A^{-1}S \boldsymbol{\hat{u}}^0\\
g \boldsymbol{\hat{u}}^0 &= \boldsymbol{\hat{u}}^0 - \lambda \Delta t \boldsymbol{\hat{u}}^0 \\
\Rightarrow g &= 1 - \lambda \Delta t
\end{align}
\]
We have described how to apply Neumann boundary conditions both strongly (through the choice of basis functions) and weakly (through integration by parts in the variational form).
We have described how to utilize a stencil matrix for the implementation of composite basis functions.