Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
Note: The presentation below largely follows part II in “Finite Difference Methods for Ordinary and Partial Differential Equations” by LeVeque (SIAM, 2007).
from __future__ import print_function
%matplotlib inline
import matplotlib.pyplot as plt
import numpy
Numerical Solution to ODE Initial Value Problems - Part 1¶
Many physical, biological, and societal systems can be written as a system of ordinary differential equations (ODEs). In the case where the initial state (value) is known, the problem can be written as

$$ \frac{\text{d}\mathbf{u}}{\text{d}t} = \mathbf{f}(t, \mathbf{u}), \quad \mathbf{u}(t_0) = \mathbf{u}_0 $$

where

$\mathbf{u}(t)$ is the state vector

$\mathbf{f}(t, \mathbf{u})$ is a vector-valued function that describes the evolution of $\mathbf{u}$ with time

$\mathbf{u}_0$ is the initial condition at time $t = t_0$
The Example for our time: A non-linear model of Epidemics¶
Classical Kermack and McKendrick (1927) SIR model of epidemics (with reinfection)

$$ \begin{aligned} \frac{\text{d}s}{\text{d}t} &= -s i + k r \\ \frac{\text{d}i}{\text{d}t} &= (s - \sigma) i \\ \frac{\text{d}r}{\text{d}t} &= \sigma i - k r \end{aligned} $$

where the variable $s$ represents the fraction of a population that is Susceptible to infection, $i$ is the proportion Infected, and $r$ the fraction Recovered. For this model $s + i + r = 1$. The parameters $\sigma$ and $k$ control the relative rates of infection and recovery.

For this problem the state vector is $\mathbf{u} = [s, i, r]^T$.
Numerical Solutions¶
# Solve using SciPy's ODE integrator solve_ivp
from scipy.integrate import solve_ivp
# define the RHS of our system of ODE's
def f_sir(t, u, sigma, k):
    s, i, r = u
    return numpy.array([-s * i + k * r, (s - sigma) * i, sigma * i - k * r])
sigma = 0.5
k = 0.025
t_max = 100
u_0 = [0.999, 0.001, 0.0]
sol = solve_ivp(
f_sir, [0, t_max], u_0, args=(sigma, k), rtol=1.0e-6, atol=1.0e-9, dense_output=True
)
t = numpy.linspace(0, t_max, 300)
z = sol.sol(t)
fig = plt.figure(figsize=(20, 7))
axes = fig.add_subplot(1, 2, 1)
axes.plot(t, z[0], "r", label="s", linewidth=2)
axes.plot(t, z[1], "b", label="i", linewidth=2)
axes.plot(t, z[2], "g", label="r", linewidth=2)
axes.plot(t, sigma * numpy.ones(t.shape), "k--", label=r"$\sigma$")
axes.legend(loc="best", shadow=True, fontsize=14)
axes.set_xlabel("Time", fontsize=16)
axes.set_ylabel("Population", fontsize=16)
axes.grid()
axes.set_title(r"SIR system: $\sigma={}$, $k={}$".format(sigma, k), fontsize=18)
plt.show()
Questions an epidemiologist might ask:¶
What are the dynamics of this system? Does it predict steady or oscillatory solutions?
Can we estimate the critical parameters ($\sigma$, $k$) from data?
Can we reliably use this model to predict the future?
How do we evaluate whether this is a useful model?
How might we modify or improve this model?
Questions a Computational Mathematician might ask:¶
Does a solution to the model even exist and is it unique?
Is our approximate numerical solution accurate?
What are the dynamics of this system? Does it predict steady or oscillatory solutions?
How do we understand the sensitivity to parameters?
Existence and Uniqueness of solutions (n-D autonomous systems)¶
For proof see Hirsch, Smale, Devaney, Dynamical Systems
Theorem: (Picard-Lindelöf)¶
Given an autonomous dynamical system

$$ \frac{\text{d}\mathbf{u}}{\text{d}t} = \mathbf{f}(\mathbf{u}), \quad \mathbf{u}(0) = \mathbf{u}_0 $$

with $\mathbf{u} \in \mathbb{R}^N$ and $\mathbf{f}: \mathbb{R}^N \rightarrow \mathbb{R}^N$.

Consider a “spherical” domain of radius $\rho$ around the initial condition $\mathbf{u}_0$,

$$ B_\rho = \{ \mathbf{u} : ||\mathbf{u} - \mathbf{u}_0|| \leq \rho \} $$

If, within this domain, $\mathbf{f}$ is

Bounded: $||\mathbf{f}(\mathbf{u})|| \leq M$

Lipschitz continuous: $||\mathbf{f}(\mathbf{u}) - \mathbf{f}(\mathbf{v})|| \leq L ||\mathbf{u} - \mathbf{v}||$

Then a unique solution exists to the ODE IVP for some interval of time $t \in [0, T]$ where $T \leq \rho / M$
Geometric Picture¶
$$ \frac{\text{d}\mathbf{u}}{\text{d}t} = \mathbf{f}(\mathbf{u}), \quad \mathbf{u}(0) = \mathbf{u}_0 $$
Short Version: If $\mathbf{f}$ is sufficiently smooth, then a local solution to the ODE exists and is unique.
Caveat: The theorem itself gives NO constructive way to find that solution.
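The "local" part of the guarantee matters in practice. As a small illustration (not from the notes), consider $u' = u^2$, $u(0) = 1$, with exact solution $u(t) = 1/(1-t)$: $\mathbf{f}$ is Lipschitz on any bounded ball, but the bound $M$ grows with the radius $\rho$, so the guaranteed existence interval shrinks, and indeed the solution blows up at $t = 1$.

```python
import numpy
from scipy.integrate import solve_ivp

# u' = u**2, u(0) = 1 has exact solution u(t) = 1 / (1 - t),
# which blows up at t = 1: a solution exists, but only locally in time.
sol = solve_ivp(lambda t, u: u**2, [0.0, 0.9], [1.0], rtol=1.0e-10, atol=1.0e-12)
u_exact = 1.0 / (1.0 - sol.t)
print(numpy.max(numpy.abs(sol.y[0] - u_exact)))
```

Integrating up to $t = 0.9$ works fine (here $u(0.9) = 10$); pushing `t_max` toward 1 forces the solver to take ever smaller steps as the solution leaves every bounded ball.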
decay_constant = -numpy.log(2.0) / 1600.0
t = numpy.linspace(0.0, 4800, 11)
y = numpy.linspace(0.0, 1.2, 11)
T, Y = numpy.meshgrid(t, y)
dt = numpy.ones(T.shape)
dy = -dt * Y
tp = numpy.linspace(0.0, 4800, 100)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.quiver(T, Y, dt, dy, linewidth=0.1, color="gray")
axes.plot(tp, numpy.exp(decay_constant * tp), linewidth=3)
axes.plot(0.0, 1.0, "ro", markersize=10)
axes.plot(1600.0, 0.5, "rx", markersize=10)
axes.grid()
axes.set_title(
    r"Radioactive Decay, $u' = - \lambda u$, $u(0)=1$, $t_{1/2}=1600$ yr", fontsize=18
)
axes.set_xlabel("t (years)", fontsize=16)
axes.set_ylabel("u", fontsize=16)
axes.set_ylim((-0.1, 1.2))
plt.show()
Examples: Complex radioactive decay (or chemical system)¶
Chain of decays from one species to another,

$$ \frac{\text{d}\mathbf{u}}{\text{d}t} = A \mathbf{u} $$

For linear systems of equations like this the general solution to the ODE is the matrix exponential

$$ \mathbf{u}(t) = e^{A t} \mathbf{u}_0 $$

which can be computed given the eigenvalues and eigenvectors of $A$.
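As a minimal sketch (assuming a hypothetical two-species chain A → B with illustrative rates $\lambda_1$, $\lambda_2$; not an example from the notes), SciPy can evaluate the matrix exponential directly:

```python
import numpy
from scipy.linalg import expm

# Hypothetical two-species chain A -> B: a' = -lam1*a, b' = lam1*a - lam2*b,
# i.e. u' = M u with the matrix below, so u(t) = expm(M t) u0.
lam1, lam2 = 0.5, 0.1            # assumed decay rates (illustrative only)
M = numpy.array([[-lam1, 0.0],
                 [lam1, -lam2]])
u0 = numpy.array([1.0, 0.0])     # start with all mass in species A
u = expm(M * 2.0) @ u0           # state at t = 2
```

For this triangular $M$ the eigenvalues are just $-\lambda_1$ and $-\lambda_2$, and the `expm` result matches the hand-derived solution $b(t) = \frac{\lambda_1}{\lambda_1 - \lambda_2}(e^{-\lambda_2 t} - e^{-\lambda_1 t})$.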
Examples: Particle tracking in a fluid¶
In fact all ODE IVP systems can be thought of as tracking particles through a flow field (dynamical system). In 1-dimension the flow “manifold” we are on is fixed by the initial condition.
x = numpy.linspace(0.0, 1.0, 11)
y = numpy.linspace(0.0, 1.0, 11)
x_fine = numpy.linspace(0.0, 1.0)
y_fine = numpy.linspace(0.0, 1.0)
X, Y = numpy.meshgrid(x, y)
X_fine, Y_fine = numpy.meshgrid(x_fine, y_fine)
pi = numpy.pi
psi = numpy.sin(pi * X_fine) * numpy.sin(pi * Y_fine)
U = pi * numpy.sin(pi * X) * numpy.cos(pi * Y)
V = -pi * numpy.cos(pi * X) * numpy.sin(pi * Y)
x0 = 0.75
y0 = 0.75
psi0 = numpy.sin(pi * x0) * numpy.sin(pi * y0)
fig = plt.figure(figsize=(8, 8))
axes = fig.add_subplot(1, 1, 1)
axes.quiver(X, Y, U, V)
axes.plot(0.75, 0.75, "ro")
axes.contour(X_fine, Y_fine, psi, [psi0])
axes.grid()
axes.set_title("Particle tracking", fontsize=18)
axes.set_xlabel("x", fontsize=16)
axes.set_ylabel("y", fontsize=16)
plt.show()
Examples: the Van der Pol Oscillator¶
from scipy.integrate import solve_ivp
def f_vanderpol(t, u, mu=5.0):
return numpy.array([u[1], mu * (1.0 - u[0] ** 2) * u[1] - u[0]])
N = 100
t_span = (0.0, 50.0)
u0 = [1.0, 1.0]
f = lambda t, u: f_vanderpol(t, u, mu=5)
sol = solve_ivp(f, t_span, u0, method="BDF", rtol=1.0e-8)
fig = plt.figure(figsize=(16, 6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(sol.t, sol.y[0])
axes.set_title("Solution to Van der Pol Oscillator", fontsize=18)
axes.set_xlabel("t", fontsize=16)
axes.set_ylabel("y(t)", fontsize=16)
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(sol.y[0], sol.y[1], "r")
axes.set_title("Phase Diagram for Van der Pol Oscillator", fontsize=18)
axes.set_xlabel("y(t)", fontsize=16)
axes.set_ylabel("y'(t)", fontsize=16)
axes.axis("equal")
axes.grid()
plt.show()
Examples: the Lorenz Equations¶
The Lorenz Equations are a simplified model of atmospheric convection described by a non-linear system of three equations

$$ \begin{aligned} x' &= \sigma (y - x) \\ y' &= x (\rho - z) - y \\ z' &= x y - \beta z \end{aligned} $$

and was one of the first systems shown to display “deterministic chaos”, or sensitivity to initial conditions.
from scipy.integrate import solve_ivp
def f_lorenz(t, u, sigma, beta, rho):
"""Compute the time-derivative of a Lorenz system."""
x, y, z = u
return numpy.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
t_span = (0.0, 20)
sigma = 10.0
beta = 8.0 / 3.0
rho = 28.0
u0 = numpy.array([5.0, 5.0, 10.0])
sol = solve_ivp(
f_lorenz, t_span, u0, args=(sigma, beta, rho), method="RK45", rtol=1.0e-8
)
u1 = u0 + numpy.array([0.0, 0.0, 0.00001])
sol1 = solve_ivp(
f_lorenz, t_span, u1, args=(sigma, beta, rho), method="RK45", rtol=1.0e-8
)
fig = plt.figure(figsize=(16, 6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(sol.t, sol.y.T)
axes.set_title("Solution to Lorenz Equations", fontsize=18)
axes.set_xlabel("t", fontsize=16)
axes.set_ylabel("y(t)", fontsize=16)
axes.legend(["x", "y", "z"], loc="best")
axes.grid()
ax = fig.add_subplot(1, 2, 2, projection="3d")
ax.axis("on")
# prepare the axes limits
ax.set_xlim((-25, 25))
ax.set_ylim((-35, 35))
ax.set_zlim((5, 55))
ax.plot(sol.y[0], sol.y[1], sol.y[2], "b")
ax.plot(sol1.y[0], sol1.y[1], sol1.y[2], "r")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
# ax.view_init(30,104)
plt.show()
The Big Questions¶
Given a RHS $\mathbf{f}(t, \mathbf{u})$ and initial condition $\mathbf{u}_0$:
How do you find a discrete numerical solution $U^n$ that approximates the trajectory $\mathbf{u}(t)$?
How do you control the accuracy of the approximation?
How do you improve the efficiency of the approximation?
How do you understand stability and convergence?
Some Notation: Basic Stepping schemes¶
Introducing some notation to simplify things: let $t_n = t_0 + n \Delta t$ and write

$$ u_n = u(t_n), \quad U^n \approx u(t_n) $$

where lower-case letters are “exact” values of the solution and upper-case letters are our numerical approximations.

Looking back at our work on numerical differentiation, why not approximate the derivative as a finite difference:

$$ \frac{U^{n+1} - U^n}{\Delta t} = f(t, u) $$

We still need to decide how to evaluate the term $f(t, u)$, however.

One obvious way to do this is to just use $f(t_n, U^n)$ and write the update scheme as

$$ U^{n+1} = U^n + \Delta t f(t_n, U^n) $$

which is our first integration scheme (which goes by the name Euler's method). As usual, the first scheme is often the worst scheme, but with a bit of understanding we can do much better with not a lot more work.
Integral form of ODE IVP’s: the Relationship to quadrature¶
Euler’s method is an example of a “single-step, multi-stage” scheme, of which there are many. However, to derive them it is actually more instructive to work with the integral form of an ODE, which will put the equations in a form where we can use our ideas from quadrature to make progress.

Given a system of ODEs

$$ \frac{\text{d}\mathbf{u}}{\text{d}t} = \mathbf{f}(t, \mathbf{u}) $$

we can integrate both sides from $t_n$ to $t_{n+1}$. The fundamental theorem of calculus tells us that the LHS is $\mathbf{u}(t_{n+1}) - \mathbf{u}(t_n)$, so we can write the ODE as

$$ \mathbf{u}(t_{n+1}) = \mathbf{u}(t_n) + \int_{t_n}^{t_{n+1}} \mathbf{f}(t, \mathbf{u}) \, dt $$
Single-Step Multi-Stage Schemes¶
The integral form of an ODE initial value problem can be written

$$ \mathbf{u}(t_{n+1}) = \mathbf{u}(t_n) + \mathbf{K} $$

which says that our solution $\mathbf{u}(t_{n+1})$, if it exists at some time in the future, is $\mathbf{u}(t_n)$ plus a number

$$ \mathbf{K} = \int_{t_n}^{t_{n+1}} \mathbf{f}(t, \mathbf{u}) \, dt $$

which is a definite line integral (along an unknown solution).

An important class of ODE solvers are called single-step, multi-stage schemes, which can be most easily understood as extensions of the Newton-Cotes quadrature schemes for approximating $\mathbf{K}$ (plus an error term that will scale as $\Delta t^p$).
The Geometric picture¶
t = numpy.linspace(0.0, 4800, 11)
y = numpy.linspace(0.0, 1.2, 11)
T, Y = numpy.meshgrid(t, y)
dt = numpy.ones(T.shape)
dy = -dt * Y
tK = 2000.0
uK = numpy.exp(decay_constant * tK)
K = uK - 1.0
tp = numpy.linspace(0.0, 4800, 100)
tk = numpy.linspace(0.0, tK, 100)
fig = plt.figure(figsize=(10, 8))
axes = fig.add_subplot(1, 1, 1)
axes.quiver(T, Y, dt, dy, color="gray")
axes.plot(tp, numpy.exp(decay_constant * tp))
axes.plot(0.0, 1.0, "ro")
axes.plot(tk, numpy.exp(decay_constant * tk), "r--")
axes.plot(tK, uK, "ro")
axes.plot([0.0, 0.0], [1.0, uK], "r--")
axes.text(10.0, 0.72, "$K$", fontsize=24, color="red")
axes.plot([0.0, tK], [uK, uK], "r--")
axes.text(900.0, uK - 0.1, r"$\Delta t$", fontsize=24, color="red")
axes.text(-10, 1.0, "$U_0$", fontsize=24, color="blue")
axes.text(tK + 10, uK, "$U_1$", fontsize=24, color="blue")
axes.grid()
axes.set_title(r"Direction Field, $u' = - \lambda u$, $u(0)=1$", fontsize=18)
axes.set_xlabel("t (years)", fontsize=16)
axes.set_ylabel("u", fontsize=16)
axes.set_ylim((-0.1, 1.2))
plt.show()
Forward Euler scheme¶
For example, if we approximate $K$ with a left-sided quadrature rule (the left-rectangle rule)

$$ K \approx \Delta t f(t_n, u(t_n)) $$

then our first ODE algorithm can be written

$$ u(t_{n+1}) \approx u(t_n) + \Delta t f(t_n, u(t_n)) $$

which is exactly Euler's method that we derived previously. In terms of our discrete approximation,

$$ U^{n+1} = U^n + \Delta t f(t_n, U^n) $$

known as the forward Euler method. In essence we are approximating the derivative with the value of the function at the point we are at, $f(t_n, U^n)$.
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
# Euler step
dt = 1e3
u_np = c_0 + dt * (decay_constant * c_0)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(decay_constant * t), label="True Solution")
axes.plot(0.0, 1.0, "ro")
axes.text(0.0, 1.01, "$U_0$", fontsize=16)
axes.plot((0.0, dt), (c_0, u_np), "k")
axes.plot((dt, dt), (u_np, c_0 * numpy.exp(decay_constant * dt)), "k--")
axes.plot((0.0, 0.0), (c_0, u_np), "k--")
axes.text(10.0, 0.75, "$K_1$", fontsize=16)
axes.plot((0.0, dt), (u_np, u_np), "k--")
axes.text(400, u_np - 0.05, r"$\Delta t$", fontsize=16)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c$")
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5, 1.05))
axes.grid()
plt.show()
# Implement Forward Euler
def euler(f, t_span, u0, N):
    """Constant step-size forward Euler method.

    f : callable, RHS of the ODE, called as f(t, u)
    t_span : (t0, tf), the interval of integration
    u0 : initial condition at t0
    N : number of points in time (N - 1 steps)

    Returns arrays t and u containing the times and approximate solution.
    """
    t = numpy.linspace(t_span[0], t_span[1], N)
    u = numpy.empty(t.shape)
    u[0] = u0
    delta_t = t[1] - t[0]
    for n, t_n in enumerate(t[:-1]):
        K1 = delta_t * f(t_n, u[n])
        u[n + 1] = u[n] + K1
    return t, u
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
t_span = [0.0, 1.6e3]
u0 = 1.0
N = 11
t_euler, u_euler = euler(f, t_span, u0, N)
t_exact = numpy.linspace(0.0, 1.6e3, 100)
u_exact = lambda t: c_0 * numpy.exp(decay_constant * t)
fig = plt.figure(figsize=(16, 6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(t_euler, u_euler, "or", label="Euler")
axes.plot(t_exact, u_exact(t_exact), "k--", label="True Solution")
axes.set_title("Forward Euler")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.set_ylim((0.4, 1.1))
axes.grid()
axes.legend()
abs_err = numpy.abs(u_euler - u_exact(t_euler))
rel_err = abs_err / u_exact(t_euler)
axes = fig.add_subplot(1, 2, 2)
axes.plot(t_euler, abs_err, "ro", label="absolute error")
axes.plot(t_euler, rel_err, "bo", label="relative error")
axes.set_xlabel("t (years)")
axes.set_ylabel("error")
axes.set_title("Error")
axes.legend(loc="best")
axes.grid()
plt.show()
Backward Euler¶
Similar to forward Euler is the backward Euler method, which uses a right-rectangle rule to estimate $K$ given $f$ at a future time, i.e.

$$ K \approx \Delta t f(t_{n+1}, U^{n+1}) $$

However, the update scheme now becomes

$$ U^{n+1} = U^n + \Delta t f(t_{n+1}, U^{n+1}) $$

which requires a (usually non-linear) solve for $U^{n+1}$. Schemes where the function is evaluated at the unknown time are called implicit methods.

For some cases we can solve the equation by hand. For instance, in the case of our example problem, $f(t, u) = \lambda u$, we have

$$ U^{n+1} = U^n + \Delta t \lambda U^{n+1} $$

which can be solved for $U^{n+1}$ to find

$$ U^{n+1} = \frac{U^n}{1 - \Delta t \lambda} $$
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(decay_constant * t), label="True Solution")
# Plot Backwards Euler step
dt = 1e3
u_np = c_0 + dt * (decay_constant * c_0 * numpy.exp(decay_constant * dt))
axes.plot((0.0, dt), (c_0, u_np), "k")
axes.plot(dt, u_np, "ro")
axes.text(dt + 10.0, u_np, "$U_1$", fontsize=16)
axes.plot((0.0, 0.0), (c_0, u_np), "k--")
axes.plot((0.0, dt), (u_np, u_np), "k--")
axes.text(400, u_np - 0.05, r"$\Delta t$", fontsize=16)
axes.text(10.0, 0.85, "$K_1$", fontsize=16)
axes.grid()
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c$")
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5, 1.05))
plt.show()
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
n_steps = 11
t_exact = numpy.linspace(0.0, 1.6e3, 100)
# Implement backwards Euler
t_backwards = numpy.linspace(0.0, 1.6e3, n_steps)
delta_t = t_backwards[1] - t_backwards[0]
u_backwards = numpy.empty(t_backwards.shape)
u_backwards[0] = c_0
for n in range(0, t_backwards.shape[0] - 1):
u_backwards[n + 1] = u_backwards[n] / (1.0 - decay_constant * delta_t)
fig = plt.figure(figsize=(16, 6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(t_backwards, u_backwards, "or", label="Backwards Euler")
axes.plot(t_exact, u_exact(t_exact), "k--", label="True Solution")
axes.grid()
axes.set_title("Backwards Euler")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.set_ylim((0.4, 1.1))
axes.legend()
abs_err = numpy.abs(u_backwards - u_exact(t_backwards))
rel_err = abs_err / u_exact(t_backwards)
axes = fig.add_subplot(1, 2, 2)
axes.plot(t_backwards, abs_err, "ro", label="absolute error")
axes.plot(t_backwards, rel_err, "bo", label="relative error")
axes.set_xlabel("t (years)")
axes.set_ylabel("error")
axes.set_title("Error")
axes.legend(loc="best")
axes.grid()
plt.show()
It’s also useful to be able to do this in the case of systems of ODEs. Let $\mathbf{f}(t, \mathbf{u}) = A \mathbf{u}$, then

$$ (I - \Delta t A) \mathbf{U}^{n+1} = \mathbf{U}^n $$

so each step requires a linear solve.
In general, however, we are often not able to do this with arbitrary $\mathbf{f}$.
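When no closed-form update exists, each backward Euler step requires a numerical root solve. Here is a minimal sketch (not the notes' implementation), assuming SciPy's `fsolve` as the nonlinear solver and a scalar ODE:

```python
import numpy
from scipy.optimize import fsolve

def backward_euler(f, t_span, u0, N):
    """Constant step-size backward Euler for a scalar ODE.

    Each step solves g(v) = v - U^n - dt * f(t_{n+1}, v) = 0
    for v = U^{n+1} with a nonlinear root solver.
    """
    t = numpy.linspace(t_span[0], t_span[1], N)
    dt = t[1] - t[0]
    u = numpy.empty(t.shape)
    u[0] = u0
    for n in range(N - 1):
        g = lambda v: v - u[n] - dt * f(t[n + 1], v)
        # the previous value makes a good initial guess for the solve
        u[n + 1] = fsolve(g, u[n])[0]
    return t, u

# Sanity check against the closed-form update U^{n+1} = U^n / (1 - lam * dt)
lam = -numpy.log(2.0) / 1600.0
t, u = backward_euler(lambda t, u: lam * u, [0.0, 1600.0], 1.0, 11)
```

For the linear test problem each root solve reproduces the hand-derived update; for a genuinely nonlinear $f$ (or a stiff system) one would typically use Newton's method with an analytic Jacobian instead.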
Another simple implicit method is based on quadrature using the trapezoidal rule. The scheme is

$$ U^{n+1} = U^n + \frac{\Delta t}{2} \left( f(t_n, U^n) + f(t_{n+1}, U^{n+1}) \right) $$

In this case what is the update scheme for $f(t, u) = \lambda u$?
n_steps = 11
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
t_exact = numpy.linspace(0.0, 1.6e3, 100)
# Implement trapezoidal method
t = numpy.linspace(0.0, 1.6e3, n_steps)
delta_t = t[1] - t[0]
u = numpy.empty(t.shape)
u[0] = c_0
integration_constant = (1.0 + decay_constant * delta_t / 2.0) / (
1.0 - decay_constant * delta_t / 2.0
)
for n in range(t.shape[0] - 1):
u[n + 1] = u[n] * integration_constant
fig = plt.figure(figsize=(16, 6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(t, u, "or", label="Trapezoidal")
axes.plot(t_exact, u_exact(t_exact), "k--", label="True Solution")
axes.grid()
axes.set_title("Trapezoidal")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.set_ylim((0.4, 1.1))
axes.legend()
abs_err = numpy.abs(u - u_exact(t))
rel_err = abs_err / u_exact(t)
axes = fig.add_subplot(1, 2, 2)
axes.plot(t, abs_err, "ro", label="absolute error")
axes.plot(t, rel_err, "bo", label="relative error")
axes.set_xlabel("t (years)")
axes.set_ylabel("error")
axes.set_title("Error")
axes.legend(loc="best")
axes.grid()
plt.show()
Error Analysis of ODE Methods¶
At this point it is also helpful to introduce more notation to distinguish between the true solution to the ODE, $u(t_n)$, and the approximated value, which we will denote $U^n$.

Definition: We define the truncation error of a scheme by replacing the $U^n$ with the true solution $u(t_n)$ in the finite difference formula and looking at the difference from the exact solution. For example, for the difference form of forward Euler

$$ \frac{U^{n+1} - U^n}{\Delta t} = f(t_n, U^n) $$

we define the truncation error as

$$ T(t, u; \Delta t) = \frac{u(t_{n+1}) - u(t_n)}{\Delta t} - f(t_n, u(t_n)) $$

Definition: A method is called consistent if

$$ \lim_{\Delta t \rightarrow 0} T(t, u; \Delta t) = 0 $$

Definition: We say that a method is order $p$ accurate if

$$ |T(t, u; \Delta t)| \leq C \Delta t^p $$

uniformly on the interval of integration. This can also be written as $T(t, u; \Delta t) = O(\Delta t^p)$. Note that a method is consistent if $p > 0$.
Error Analysis of Forward Euler¶
We can analyze the error and convergence order of forward Euler by considering the Taylor series centered at $t_n$:

$$ u(t) = u(t_n) + (t - t_n) u'(t_n) + \frac{(t - t_n)^2}{2} u''(t_n) + O((t - t_n)^3) $$

Evaluating this series at $t = t_{n+1} = t_n + \Delta t$ gives

$$ u(t_{n+1}) = u(t_n) + \Delta t \, u'(t_n) + \frac{\Delta t^2}{2} u''(t_n) + O(\Delta t^3) $$

From the definition of truncation error, take the finite difference form of forward Euler

$$ T = \frac{u(t_{n+1}) - u(t_n)}{\Delta t} - f(t_n, u(t_n)) $$

and substitute the Taylor series, using $u'(t_n) = f(t_n, u(t_n))$, to find

$$ T = \frac{\Delta t}{2} u''(t_n) + O(\Delta t^2) = O(\Delta t) $$

This implies that forward Euler is first order accurate and therefore consistent.
Truncation Error vs Step Error¶
Sometimes we will also consider the “step error”, which is the error introduced over a single step,

$$ \text{step error} = u(t_{n+1}) - \left( u(t_n) + \Delta t f(t_n, u(t_n)) \right) $$

This leads to an alternate definition of the truncation error as the step error divided by $\Delta t$, so if the truncation error is $O(\Delta t^p)$ then the step error will be of order $O(\Delta t^{p+1})$.

So for forward (or backward) Euler the step error is $O(\Delta t^2)$.

The step error can be very useful in adaptive stepping schemes.
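As a sketch of that idea (under assumptions not in the notes: a crude "step-doubling" error estimate and a simple rescaling rule), we can estimate the one-step error by comparing one Euler step of size $\Delta t$ against two steps of size $\Delta t / 2$, then grow or shrink $\Delta t$ to meet a tolerance:

```python
import numpy

def euler_adaptive(f, t_span, u0, dt0=0.1, tol=1.0e-4):
    """Forward Euler with crude step-doubling error control (a sketch).

    The difference between one step of size dt and two steps of size
    dt/2 estimates the one-step (local) error, which is O(dt^2) for Euler.
    """
    t, u, dt = t_span[0], u0, dt0
    ts, us = [t], [u]
    while t < t_span[1]:
        dt = min(dt, t_span[1] - t)            # don't step past the end
        big = u + dt * f(t, u)                 # one step of size dt
        half = u + 0.5 * dt * f(t, u)          # two steps of size dt/2
        small = half + 0.5 * dt * f(t + 0.5 * dt, half)
        err = abs(big - small)
        if err < tol:                          # accept the step
            t, u = t + dt, small
            ts.append(t)
            us.append(u)
        # local error ~ dt^2, so rescale dt by sqrt(tol/err) with a safety factor
        dt *= 0.9 * numpy.sqrt(tol / (err + 1.0e-16))
    return numpy.array(ts), numpy.array(us)

ts, us = euler_adaptive(lambda t, u: -u, (0.0, 2.0), 1.0)
```

Production solvers (e.g. the embedded Runge-Kutta pairs behind `solve_ivp`) use the same control loop but obtain the error estimate from two schemes of different order that share function evaluations, which is much cheaper than step doubling.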
We can integrate both sides of the ODE

$$ \frac{\text{d}\mathbf{u}}{\text{d}t} = \mathbf{f}(t, \mathbf{u}) $$

from $t_n$ to $t_{n+1}$ to generate its integral form

$$ \mathbf{u}(t_{n+1}) = \mathbf{u}(t_n) + \int_{t_n}^{t_{n+1}} \mathbf{f}(t, \mathbf{u}) \, dt $$

which says that our solution $\mathbf{u}(t_{n+1})$, if it exists at some time in the future, is $\mathbf{u}(t_n)$ plus a vector

$$ \mathbf{K} = \int_{t_n}^{t_{n+1}} \mathbf{f}(t, \mathbf{u}) \, dt $$

which is a definite line integral (along an unknown solution).

An important class of ODE solvers are called single-step, multi-stage schemes, which can be most easily understood as extensions of the Newton-Cotes quadrature schemes for approximating $\mathbf{K}$ (plus an error term that will scale as $\Delta t^p$).
Runge-Kutta Methods¶
One way to derive higher-order ODE solvers is to use higher-order quadrature schemes that sample the function at a number of intermediate stages to provide a more accurate estimate of the integral $K$. These are not multi-step methods, as they still only require information from the current time step, but they raise the order of accuracy by adding stages. These types of methods are called Runge-Kutta methods.
Example: One-stage Runge-Kutta Scheme (Forward Euler)¶
For example, if we approximate $K$ with a left-sided quadrature rule

$$ K \approx \Delta t f(t_n, u(t_n)) $$

then our first ODE algorithm can be written, in terms of our discrete approximation,

$$ U^{n+1} = U^n + \Delta t f(t_n, U^n) $$

which is the forward Euler method. In essence we are approximating the derivative with the value of the function at the point we are at, $f(t_n, U^n)$.
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
# Euler step
dt = 1e3
u_np = c_0 + dt * (decay_constant * c_0)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(decay_constant * t), label="True Solution")
axes.plot(0.0, 1.0, "ro")
axes.text(0.0, 1.01, "$U_0$", fontsize=16)
axes.plot((0.0, dt), (c_0, u_np), "k")
axes.plot((dt, dt), (u_np, c_0 * numpy.exp(decay_constant * dt)), "k--")
axes.plot((0.0, 0.0), (c_0, u_np), "k--")
axes.text(10.0, 0.75, "$K_1$", fontsize=16)
axes.plot((0.0, dt), (u_np, u_np), "k--")
axes.text(400, u_np - 0.05, r"$\Delta t$", fontsize=16)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c$")
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5, 1.05))
axes.grid()
plt.show()
For any solution with curvature, there is clearly a step error, which for Euler is $O(\Delta t^2)$.
Example: Two-stage Runge-Kutta Methods¶
Similarly, the two-stage Runge-Kutta method RK2 approximates $K$ using a mid-point scheme (which should be 2nd order accurate). Unfortunately, we don't know the value of the solution at the mid-point. However, we can use an Euler step of size $\Delta t / 2$ to estimate it.

We can write the algorithm as

$$ \begin{aligned} K_1 &= \Delta t f(t_n, U^n) \\ K_2 &= \Delta t f(t_n + \Delta t / 2, U^n + K_1 / 2) \\ U^{n+1} &= U^n + K_2 \end{aligned} $$

where we now evaluate the function in two stages, $K_1$ and $K_2$.
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
# RK2 step
dt = 1e3
U0 = 1.0
K1 = dt * f(0.0, U0)
K2 = dt * f(dt / 2.0, U0 + K1 / 2)
U1 = U0 + K2
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, U0 * numpy.exp(decay_constant * t), label="True Solution")
axes.plot(0.0, U0, "ro")
axes.text(0.0, U0 + 0.01, "$U_0$", fontsize=16)
axes.plot((0.0, dt), (U0, U0 + K1), "k--")
axes.plot((0.0, dt / 2.0), (U0, U0 + K1 / 2.0), "k")
Y1 = U0 + K1 / 2
axes.plot(dt / 2.0, U0 + K1 / 2, "ro")
axes.plot((0.0, 0.0), (U0, Y1), "k--")
axes.text(10.0, 0.85, "$\\frac{K_1}{2}$", fontsize=18)
axes.plot((0.0, dt / 2), (Y1, Y1), "k--")
axes.text(250, Y1 - 0.05, r"$\frac{\Delta t}{2}$", fontsize=18)
axes.plot(dt, U1, "go")
axes.plot((0.0, dt), (U0, U1), "k")
axes.text(dt + 20, U1, "$U_1$", fontsize=18)
# axes.plot((0.0, 0.0), (U0, U1), 'g--')
# axes.plot((0.0, dt), (U1, U1), 'g--')
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c$")
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5, 1.05))
axes.grid()
plt.show()
Error analysis of RK2¶
The truncation error can be computed similarly to how we did so before, but we do need to figure out how to differentiate $f$ along the solution. Note that, due to

$$ u'(t) = f(t, u) $$

differentiating with respect to $t$ leads to

$$ u''(t) = f_t + f_u u' = f_t + f_u f $$

Taylor expanding the second stage about $(t_n, u(t_n))$ then gives

$$ f\left(t_n + \frac{\Delta t}{2}, u + \frac{\Delta t}{2} f \right) = f + \frac{\Delta t}{2} (f_t + f_u f) + O(\Delta t^2) = u'(t_n) + \frac{\Delta t}{2} u''(t_n) + O(\Delta t^2) $$

leading to

$$ T = \frac{u(t_{n+1}) - u(t_n)}{\Delta t} - \left( u'(t_n) + \frac{\Delta t}{2} u''(t_n) \right) = O(\Delta t^2) $$

so this method is second order accurate.
Example: Improved Euler’s method¶
The improved Euler method is another RK2 scheme, but instead of approximating a mid-point quadrature rule, it approximates a trapezoidal rule.

We can write the algorithm as

$$ \begin{aligned} K_1 &= \Delta t f(t_n, U^n) \\ K_2 &= \Delta t f(t_n + \Delta t, U^n + K_1) \\ U^{n+1} &= U^n + \frac{1}{2} (K_1 + K_2) \end{aligned} $$

where we now use function evaluations at both the initial value and at the Euler point, but take the average of those slopes.

Again, error analysis shows that this scheme also has a truncation error $T = O(\Delta t^2)$.
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
# Improved Euler step
dt = 1e3
U0 = 1.0
K1 = dt * f(0.0, U0)
K2 = dt * f(dt, U0 + K1)
U1 = U0 + 0.5 * (K1 + K2)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, U0 * numpy.exp(decay_constant * t), label="True Solution")
axes.plot(0.0, U0, "ro")
axes.text(0.0, U0 + 0.01, "$U_0$", fontsize=16)
axes.plot((0.0, dt), (U0, U0 + K1), "k--")
axes.plot((0.0, dt), (U0, U0 + K1), "k")
Y1 = U0 + K1
axes.plot(dt, Y1, "ro")
axes.text(dt + 10, Y1, "$Y_1$", fontsize=18)
axes.plot((0.0, 0.0), (U0, Y1), "k--")
axes.plot((0.0, dt), (Y1, Y1), "k--")
axes.text(350, Y1 - 0.05, r"$\frac{\Delta t}{2}$", fontsize=18)
axes.plot(dt, U1, "go")
axes.plot((0.0, 0.0), (U0, Y1), "k--")
axes.plot(0.0, Y1, "ko")
axes.text(10.0, Y1, "$K_1$", fontsize=18)
axes.plot((0.0, 0.0), (U0, U0 + K2), "b--")
axes.plot(0.0, U0 + K2, "bo--")
axes.text(10.0, U0 + K2, "$K_2$", fontsize=18)
axes.plot(0.0, U1, "gx", markersize=15)
axes.text(10.0, U1, "$0.5*(K_1 +K_2)$", fontsize=18)
axes.plot(dt, U1, "go")
axes.plot((0.0, dt), (U0, U1), "k")
axes.text(dt + 20, U1, "$U_1$", fontsize=18)
# axes.plot((0.0, 0.0), (U0, U1), 'g--')
# axes.plot((0.0, dt), (U1, U1), 'g--')
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c$")
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5, 1.05))
axes.grid()
plt.show()
Example: 4-stage Runge-Kutta Method¶
If RK2 is related to a mid-point quadrature scheme, then the classic 4-stage, 4th order Runge-Kutta scheme should be reminiscent of Simpson's quadrature rule. It requires 4 samples of $f$: one at the beginning of the step, two in the middle, and one at the end, then takes a linear combination of those samples:

$$ \begin{aligned} K_1 &= \Delta t f(t_n, U^n) \\ K_2 &= \Delta t f(t_n + \Delta t / 2, U^n + K_1 / 2) \\ K_3 &= \Delta t f(t_n + \Delta t / 2, U^n + K_2 / 2) \\ K_4 &= \Delta t f(t_n + \Delta t, U^n + K_3) \\ U^{n+1} &= U^n + \frac{1}{6} (K_1 + 2 K_2 + 2 K_3 + K_4) \end{aligned} $$

with truncation error $T = O(\Delta t^4)$.
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
# RK4 step
dt = 1e3
U0 = 1.0
K1 = dt * f(0.0, U0)
K2 = dt * f(dt / 2.0, U0 + K1 / 2)
K3 = dt * f(dt / 2.0, U0 + K2 / 2)
K4 = dt * f(dt, U0 + K3)
U1 = U0 + 1.0 / 6.0 * (K1 + 2 * (K2 + K3) + K4)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, U0 * numpy.exp(decay_constant * t), label="True Solution")
axes.plot(0.0, U0, "ro")
axes.text(0.0 - 20, U0 - 0.04, "$K_1$", color="red", fontsize=16)
axes.text(0.0, U0 + 0.01, "$U_0$", fontsize=16)
axes.plot((0.0, dt / 2.0), (U0, U0 + K1 / 2.0), "k--")
axes.plot(dt / 2.0, U0 + K1 / 2, "ro")
axes.text(dt / 2 - 20, U0 + K1 / 2 - 0.04, "$K_2$", color="red", fontsize=16)
axes.plot((0.0, dt / 2.0), (U0, U0 + K2 / 2.0), "k--")
axes.plot(dt / 2.0, U0 + K2 / 2, "ro")
axes.text(dt / 2 - 20, U0 + K2 / 2 + 0.02, "$K_3$", color="red", fontsize=16)
axes.plot((0.0, dt), (U0, U0 + K3), "k--")
axes.plot(dt, U0 + K3, "ro")
axes.text(dt - 20, U0 + K3 - 0.04, "$K_4$", color="red", fontsize=16)
axes.plot(dt, U1, "go")
# axes.plot((0., dt), (U0, U1), 'k')
axes.text(dt + 20, U1, "$U_1$", fontsize=18)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c$")
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5, 1.05))
axes.grid()
plt.show()
def RK2(f, t_span, u0, N):
    """Constant step-size two-stage Runge-Kutta (mid-point) method RK2."""
    t = numpy.linspace(t_span[0], t_span[1], N)
    delta_t = t[1] - t[0]
    u = numpy.empty(t.shape)
    u[0] = u0
    for n, t_n in enumerate(t[:-1]):
        K_1 = delta_t * f(t_n, u[n])
        K_2 = delta_t * f(t_n + delta_t / 2.0, u[n] + K_1 / 2.0)
        u[n + 1] = u[n] + K_2
    return t, u
def improved_euler(f, t_span, u0, N):
    """Constant step-size two-stage improved Euler (trapezoidal rule) method."""
    t = numpy.linspace(t_span[0], t_span[1], N)
    delta_t = t[1] - t[0]
    u = numpy.empty(t.shape)
    u[0] = u0
    for n, t_n in enumerate(t[:-1]):
        K_1 = delta_t * f(t_n, u[n])
        K_2 = delta_t * f(t_n + delta_t, u[n] + K_1)
        u[n + 1] = u[n] + 0.5 * (K_1 + K_2)
    return t, u
def RK4(f, t_span, u0, N):
    """Constant step-size four-stage Runge-Kutta method RK4."""
    t = numpy.linspace(t_span[0], t_span[1], N)
    delta_t = t[1] - t[0]
    u = numpy.empty(t.shape)
    u[0] = u0
    for n, t_n in enumerate(t[:-1]):
        K_1 = delta_t * f(t_n, u[n])
        K_2 = delta_t * f(t_n + delta_t / 2.0, u[n] + K_1 / 2.0)
        K_3 = delta_t * f(t_n + delta_t / 2.0, u[n] + K_2 / 2.0)
        K_4 = delta_t * f(t_n + delta_t, u[n] + K_3)
        u[n + 1] = u[n] + 1.0 / 6.0 * (K_1 + 2.0 * (K_2 + K_3) + K_4)
    return t, u
# Implement and compare the two-stage and 4-stage Runge-Kutta methods
f = lambda t, u: -u
N = 21
t_span = [0.0, 5.0]
u0 = 1.0
u_exact = lambda t: u0 * numpy.exp(-t)
t_exact = numpy.linspace(t_span[0], t_span[1], 100)
t_euler, u_euler = euler(f, t_span, u0, N)
t_ieuler, u_ieuler = improved_euler(f, t_span, u0, N)
t_RK2, u_RK2 = RK2(f, t_span, u0, N)
t_RK4, u_RK4 = RK4(f, t_span, u0, N)
fig = plt.figure(figsize=(16, 6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(t_exact, u_exact(t_exact), "k", label="exact")
axes.plot(t_euler, u_euler, "ro", label="euler")
axes.plot(t_ieuler, u_ieuler, "co", label="improved euler")
axes.plot(t_RK2, u_RK2, "go", label="RK2")
axes.plot(t_RK4, u_RK4, "bo", label="RK4")
axes.grid()
axes.set_xlabel("t", fontsize=16)
axes.set_ylabel("u", fontsize=16)
axes.legend(loc="best")
err = lambda u, t: numpy.abs(u - u_exact(t)) / u_exact(t)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(t_euler, err(u_euler, t_euler), "ro", label="euler")
axes.semilogy(t_ieuler, err(u_ieuler, t_ieuler), "co", label="improved euler")
axes.semilogy(t_RK2, err(u_RK2, t_RK2), "go", label="RK2")
axes.semilogy(t_RK4, err(u_RK4, t_RK4), "bo", label="RK4")
axes.set_xlabel("t (years)")
axes.set_ylabel("Rel. error")
axes.set_title("Error")
axes.legend(loc="best")
axes.grid()
plt.show()
Convergence of Single Step Multi-Stage schemes¶
All of the above schemes are consistent and have truncation errors $T = O(\Delta t^p)$ with $p \geq 1$.
N = numpy.array([2**n for n in range(4, 10)])
err_euler = numpy.zeros(len(N))
err_ieuler = numpy.zeros(len(N))
err_RK2 = numpy.zeros(len(N))
err_RK4 = numpy.zeros(len(N))
t_span = [0.0, 4.0]
dt = t_span[1] / N
u0 = 1.0
u_exact = u0 * numpy.exp(-t_span[1])
for i, n in enumerate(N):
t, u_euler = euler(f, t_span, u0, n)
err_euler[i] = numpy.abs(u_euler[-1] - u_exact)
t, u_ieuler = improved_euler(f, t_span, u0, n)
err_ieuler[i] = numpy.abs(u_ieuler[-1] - u_exact)
t, u_RK2 = RK2(f, t_span, u0, n)
err_RK2[i] = numpy.abs(u_RK2[-1] - u_exact)
t, u_RK4 = RK4(f, t_span, u0, n)
err_RK4[i] = numpy.abs(u_RK4[-1] - u_exact)
err_fit = lambda dt, p: numpy.exp(p[1]) * dt ** p[0]
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
# Euler
p = numpy.polyfit(numpy.log(dt[2:]), numpy.log(err_euler[2:]), 1)
line = axes.loglog(dt, err_euler, "o", label="euler, p={:3.2f}".format(p[0]))
axes.loglog(dt, err_fit(dt, p), "--", color=line[0].get_color())
# Improved Euler
p = numpy.polyfit(numpy.log(dt[2:]), numpy.log(err_ieuler[2:]), 1)
line = axes.loglog(dt, err_ieuler, "o", label="improved euler, p={:3.2f}".format(p[0]))
axes.loglog(dt, err_fit(dt, p), "--", color=line[0].get_color())
# RK2
p = numpy.polyfit(numpy.log(dt[2:]), numpy.log(err_RK2[2:]), 1)
line = axes.loglog(dt, err_RK2, "o", label="rk2, p={:3.2f}".format(p[0]))
axes.loglog(dt, err_fit(dt, p), "--", color=line[0].get_color())
# RK4
p = numpy.polyfit(numpy.log(dt[2:]), numpy.log(err_RK4[2:]), 1)
line = axes.loglog(dt, err_RK4, "o", label="rk4, p={:3.2f}".format(p[0]))
axes.loglog(dt, err_fit(dt, p), "--", color=line[0].get_color())
axes.grid()
axes.set_xlabel(r"$\Delta t$", fontsize=16)
axes.set_ylabel("Error", fontsize=16)
axes.set_title("Convergence: Single Step Schemes", fontsize=18)
axes.legend(loc="best", fontsize=14)
plt.show()
Summary: Single-Step Multi-Stage Schemes¶
The integral form of an ODE initial value problem can be written

$$ \mathbf{u}(t_{n+1}) = \mathbf{u}(t_n) + \mathbf{K}, \quad \mathbf{K} = \int_{t_n}^{t_{n+1}} \mathbf{f}(t, \mathbf{u}) \, dt $$

which says that our solution $\mathbf{u}(t_{n+1})$, if it exists at some time in the future, is $\mathbf{u}(t_n)$ plus a vector $\mathbf{K}$, a definite line integral (along an unknown solution).

Single Step, Multi-stage schemes¶
are most easily understood as extensions of the Newton-Cotes quadrature schemes for approximating $\mathbf{K}$ (plus an error term that will scale as $\Delta t^p$)
Explicit Schemes
| Name | Stages | "Quadrature" | $$T$$ |
|---|---|---|---|
| Euler | 1 | Left-Rectangle | $$O(\Delta t)$$ |
| Improved Euler | 2 | Trapezoidal | $$O(\Delta t^2)$$ |
| RK2 | 2 | Mid-Point | $$O(\Delta t^2)$$ |
| RK4 | 4 | Simpson | $$O(\Delta t^4)$$ |
Implicit Schemes
| Name | Stages | "Quadrature" | $$T$$ |
|---|---|---|---|
| Backwards-Euler | 1 | Right-Rectangle | $$O(\Delta t)$$ |
| Trapezoidal | 2 | Trapezoidal | $$O(\Delta t^2)$$ |
Convergence of Single Step Multi-Stage schemes¶
All of the above schemes are consistent and have truncation errors
N = numpy.array([2**n for n in range(4, 10)])
err_euler = numpy.zeros(len(N))
err_ieuler = numpy.zeros(len(N))
err_RK2 = numpy.zeros(len(N))
err_RK4 = numpy.zeros(len(N))
t_span = [0.0, 4.0]
dt = t_span[1] / N
u0 = 1.0
u_exact = u0 * numpy.exp(-t_span[1])
for i, n in enumerate(N):
t, u_euler = euler(f, t_span, u0, n)
err_euler[i] = numpy.abs(u_euler[-1] - u_exact)
t, u_ieuler = improved_euler(f, t_span, u0, n)
err_ieuler[i] = numpy.abs(u_ieuler[-1] - u_exact)
t, u_RK2 = RK2(f, t_span, u0, n)
err_RK2[i] = numpy.abs(u_RK2[-1] - u_exact)
t, u_RK4 = RK4(f, t_span, u0, n)
err_RK4[i] = numpy.abs(u_RK4[-1] - u_exact)
err_fit = lambda dt, p: numpy.exp(p[1]) * dt ** p[0]
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
# Euler
p = numpy.polyfit(numpy.log(dt[2:]), numpy.log(err_euler[2:]), 1)
line = axes.loglog(dt, err_euler, "o", label="euler, p={:3.2f}".format(p[0]))
axes.loglog(dt, err_fit(dt, p), "--", color=line[0].get_color())
# Improved Euler
p = numpy.polyfit(numpy.log(dt[2:]), numpy.log(err_ieuler[2:]), 1)
line = axes.loglog(dt, err_ieuler, "o", label="improved euler, p={:3.2f}".format(p[0]))
axes.loglog(dt, err_fit(dt, p), "--", color=line[0].get_color())
# RK2
p = numpy.polyfit(numpy.log(dt[2:]), numpy.log(err_RK2[2:]), 1)
line = axes.loglog(dt, err_RK2, "o", label="rk2, p={:3.2f}".format(p[0]))
axes.loglog(dt, err_fit(dt, p), "--", color=line[0].get_color())
# RK4
p = numpy.polyfit(numpy.log(dt[2:]), numpy.log(err_RK4[2:]), 1)
line = axes.loglog(dt, err_RK4, "o", label="rk4, p={:3.2f}".format(p[0]))
axes.loglog(dt, err_fit(dt, p), "--", color=line[0].get_color())
axes.grid()
axes.set_xlabel("$\Delta t$", fontsize=16)
axes.set_ylabel("$Error$", fontsize=16)
axes.set_title("Convergence: Single Step Schemes", fontsize=18)
axes.legend(loc="best", fontsize=14)
plt.show()
General Explicit Runge-Kutta Schemes and "Butcher Tableaus"¶
The most general form of an explicit RK scheme with $s$ stages is

$$U_{n+1} = U_n + \Delta t \sum_{j=1}^{s} b_j K_j$$

where

$$K_j = f\left(t_n + c_j \Delta t,\; U_n + \Delta t \sum_{i=1}^{j-1} a_{ji} K_i\right)$$

are the function evaluations at stage $j$, which are determined by the coefficients $a_{ji}$, $b_j$, and $c_j$.
All of the critical coefficients for any explicit RK scheme can be arranged in a "Butcher tableau"

$$\begin{array}{c|cccc} c_1 & & & & \\ c_2 & a_{21} & & & \\ \vdots & \vdots & \ddots & & \\ c_s & a_{s1} & \cdots & a_{s,s-1} & \end{array} \quad \begin{array}{c} \\ \hline b_1 \; b_2 \; \cdots \; b_s \end{array}$$

where $c_j$ gives the fraction of the time step for each stage, $b_j$ gives the weighting for each function evaluation in the general quadrature, and the $a_{ji}$ determine how to use previous stages to advance the value of $U$ at stage $j$.
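To make the tableau concrete, here is a sketch (mine, not the notebook's) of a generic explicit RK step for a scalar ODE, driven entirely by the arrays $A$, $b$, $c$; fed the classical RK4 tableau it reproduces the RK4 update:

```python
import numpy

def explicit_rk_step(f, t, u, dt, A, b, c):
    """One step of a generic explicit RK scheme from its Butcher tableau.

    A is strictly lower triangular (explicit scheme), b holds the
    quadrature weights, and c the stage fractions of the time step.
    """
    s = len(b)
    K = numpy.zeros(s)
    for j in range(s):
        # Each stage only uses the stages already computed (i < j)
        K[j] = f(t + c[j] * dt, u + dt * numpy.dot(A[j, :j], K[:j]))
    return u + dt * numpy.dot(b, K)

# Classical RK4 tableau
A = numpy.array([[0.0, 0.0, 0.0, 0.0],
                 [0.5, 0.0, 0.0, 0.0],
                 [0.0, 0.5, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])
b = numpy.array([1.0, 2.0, 2.0, 1.0]) / 6.0
c = numpy.array([0.0, 0.5, 0.5, 1.0])
```

One RK4 step for $u' = -u$ from $u = 1$ with $\Delta t = 0.1$ matches $e^{-0.1}$ to within the expected $O(\Delta t^5)$ step error.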
For more information see here.
All of the schemes we have discussed can be written as Butcher tableaus (although they can be a bit hard to parse). For example:
Classical RK4¶

$$\begin{array}{c|cccc} 0 & & & & \\ 1/2 & 1/2 & & & \\ 1/2 & 0 & 1/2 & & \\ 1 & 0 & 0 & 1 & \\ \hline & 1/6 & 1/3 & 1/3 & 1/6 \end{array}$$
Wikipedia provides a convenient list of a large range of explicit and implicit RK schemes
Adaptive Time Stepping¶
Why should we care about all of these schemes and their errors?¶
Even though we know the formal order of the error, it is defined with respect to a true solution we don't know.
By themselves, the error estimates don't tell us how to choose a time step to keep the solution within a given tolerance.
However, we can combine multiple methods to control the error and provide Adaptive time stepping.
Example: Compare 1 step of Euler to one step of RK2¶
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
# RK2 step
dt = 1e3
U0 = 1.0
K1 = dt * f(0.0, U0)
Y1 = U0 + K1 / 2
K2 = dt * f(dt / 2.0, Y1)
U1 = U0 + K2
t = numpy.linspace(0.0, 1600.0)
u_true = U0 * numpy.exp(decay_constant * t)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, u_true, label="True Solution")
axes.plot(0.0, U0, "ro")
axes.text(0.0, U0 + 0.01, "$U_0$", fontsize=16)
axes.plot((0.0, dt), (U0, U0 + K1), "k")
# axes.plot((0.0, dt/2.), (U0, U0 + K1/2.), 'k')
# Euler step
axes.plot(dt, U0 + K1, "ro")
# axes.plot((0.0, 0.0), (U0, Y1), 'k--')
axes.text(dt + 10.0, U0 + K1, "$U_{euler}$", fontsize=18)
axes.plot(dt, U1, "go")
# axes.plot((0.0, 0.0), (U0, Y1), 'k--')
# axes.text(10., 0.85, '$\\frac{K_1}{2}$', fontsize=18)
# axes.plot((0.0, dt/2), (Y1, Y1), 'k--')
# axes.text(250, Y1 - 0.05, '$\\frac{\Delta t}{2}$', fontsize=18)
# RK2 Step
axes.plot(dt, U1, "go")
axes.plot((0.0, dt), (U0, U1), "k")
axes.text(dt + 20, U1, "$U_{RK2}$", fontsize=18)
# axes.plot((0.0, 0.0), (U0, U1), 'g--')
# axes.plot((0.0, dt), (U1, U1), 'g--')
# difference
axes.plot((dt, dt), (U1, U0 + K1), "k--")
axes.text(dt + 40, 0.5 * (U1 + U0 + K1), "$\Delta\propto\Delta t^2$", fontsize=18)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c$")
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5, 1.05))
axes.legend(loc="best")
axes.grid()
plt.show()
Therefore we can compute the relative truncation error as

$$\Delta = |U_{RK2} - U_{Euler}| \propto \Delta t^2$$

which

- is computable!
- has a known dependence on step size
Adaptive Time Stepping¶
Given the relative truncation error and its scaling with $\Delta t$, we can now use this to choose a good time step.
Example:¶
Suppose we want the relative truncation error to be small relative to the solution (or to an absolute floor when the solution is near zero); we could set the tolerance

$$\epsilon = \epsilon_{rel} |U_n| + \epsilon_{abs}$$

where $\epsilon_{rel}$ and $\epsilon_{abs}$ are relative and absolute tolerances (and we assume that $U_n$ is a reasonably good estimate of the true solution).
Moreover, we know how the relative truncation error should scale with the time step, i.e.

$$\Delta \propto \Delta t^2$$

But our measured relative truncation error depends on the step size we just took, i.e.

$$\Delta_{measured} = C \Delta t_{old}^2$$

or, rearranging, we can solve for the target step size

$$\Delta t_{new} = \Delta t_{old} \left(\frac{\epsilon}{\Delta_{measured}}\right)^{1/2}$$

which tells us how to grow or shrink our time step to maintain accuracy.
In general, if we have two methods whose one-step errors satisfy

$$U^{(1)}_{n+1} - u(t_{n+1}) = O(\Delta t^{p+1}), \qquad U^{(2)}_{n+1} - u(t_{n+1}) = O(\Delta t^{p+2})$$

then $\Delta = |U^{(2)}_{n+1} - U^{(1)}_{n+1}|$ estimates the lower-order method's step error, and our adaptive stepper will look like

$$\Delta t_{new} = \Delta t_{old} \left(\frac{\epsilon}{\Delta}\right)^{1/(p+1)}$$
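A minimal sketch (my own, not from the notebook) of such an adaptive stepper using the Euler/RK2 pair from the example above: estimate the error from the difference of the two one-step results, accept the step only if the estimate is below tolerance, and rescale the step size in either case. The safety factor is a common practical addition and an assumption here:

```python
import numpy

def adaptive_euler_rk2(f, t_span, u0, tol=1.0e-6, dt0=0.1, safety=0.9):
    """Adaptive stepping with an Euler (order 1) / midpoint RK2 (order 2) pair.

    The difference between the two one-step results estimates the Euler
    local error, which scales like dt**2, so steps are resized with the
    exponent 1/2.
    """
    t, u, dt = t_span[0], u0, dt0
    ts, us = [t], [u]
    while t < t_span[1]:
        dt = min(dt, t_span[1] - t)  # don't step past the final time
        K1 = dt * f(t, u)
        u_euler = u + K1
        u_rk2 = u + dt * f(t + dt / 2.0, u + K1 / 2.0)
        delta = abs(u_rk2 - u_euler)  # computable error estimate
        if delta <= tol:
            t, u = t + dt, u_rk2      # accept the higher-order result
            ts.append(t)
            us.append(u)
        # grow or shrink: delta ~ C dt^2, so target dt_new = dt*(tol/delta)^(1/2)
        dt *= safety * (tol / max(delta, 1e-16)) ** 0.5
    return numpy.array(ts), numpy.array(us)
```

Rejected steps are simply retried with the smaller step, which is essentially what solve_ivp does internally (with higher-order pairs).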
This leads to all sorts of adaptive schemes, most of which are included in standard libraries.
Embedded Runge-Kutta Schemes¶
There are in fact a whole family of Embedded RK schemes: $s$-stage schemes that combine the same function evaluations in two different ways to produce methods of different orders, and hence an error estimate.
A popular one is RK45 (available in SciPy) which is based on the Dormand-Prince 5(4) pair which uses 6 function evaluations per step to produce a 4th order and 5th order scheme. The 4th order scheme controls the time step, and the 5th order scheme actually is the solution.
Dormand-Prince 5(4) Butcher Tableau¶
The first row of coefficients gives the fifth-order accurate solution and the second row gives the fourth-order accurate solution.
from scipy.integrate import solve_ivp
def f_vanderpol(t, u, mu=5):
return numpy.array([u[1], mu * (1.0 - u[0] ** 2) * u[1] - u[0]])
t_span = (0.0, 50.0)
u0 = [1.0, 0.0]
f = lambda t, u: f_vanderpol(t, u, mu=20)
sol = solve_ivp(f, t_span, u0, method="RK45", rtol=1.0e-3, atol=1.0e-8)

fig = plt.figure(figsize=(16, 6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(sol.t, sol.y[0], "o-")
axes.set_title("Solution to Van der Pol Oscillator", fontsize=18)
axes.set_xlabel("t", fontsize=16)
axes.set_ylabel("y(t)", fontsize=16)
axes.grid()
axes = fig.add_subplot(1, 2, 2)
delta_t = sol.t[1:] - sol.t[:-1]
axes.plot(sol.t[:-1], delta_t)
axes.grid()
axes.set_xlabel("$t$", fontsize=16)
axes.set_ylabel("$\Delta t$", fontsize=16)
axes.set_title("Timesteps, N = {}".format(len(sol.t)), fontsize=18)
plt.show()
Taylor Series Methods¶
A Taylor series method can be derived by direct substitution of the right-hand-side function $f$ and its appropriate derivatives into the Taylor series expansion for $u(t_{n+1})$:

$$u(t_{n+1}) = u(t_n) + \Delta t\, u'(t_n) + \frac{\Delta t^2}{2} u''(t_n) + \frac{\Delta t^3}{6} u'''(t_n) + \cdots$$

For a $p$th order method we would look at the Taylor series up to that order and replace all the derivatives of $u$ with derivatives of $f$ instead.
We then replace these derivatives with the appropriate derivative of $f$, which will always be one order lower than the derivative of $u$ (due to the original ODE $u' = f(t, u)$),
leading to the method

$$U_{n+1} = U_n + \Delta t\, f + \frac{\Delta t^2}{2} \frac{df}{dt} + \cdots + \frac{\Delta t^p}{p!} \frac{d^{p-1} f}{dt^{p-1}}$$

2nd Order Taylor Series Method¶
We want terms up to second order, so we need to take the total derivative of $f$ once to find

$$u'' = \frac{df}{dt} = f_t + f_u\, f$$

and therefore

$$U_{n+1} = U_n + \Delta t\, f(t_n, U_n) + \frac{\Delta t^2}{2}\left(f_t + f_u\, f\right)\Big|_{(t_n, U_n)}$$

with step error $O(\Delta t^3)$ and truncation error $O(\Delta t^2)$.
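For the linear test problem $u' = \lambda u$ the second-order Taylor method takes a particularly simple form, since $f_t = 0$ and $f_u f = \lambda^2 u$: each step multiplies by the first three terms of $e^{\lambda \Delta t}$. A minimal sketch (mine, mirroring the 3rd order implementation below):

```python
import numpy

def taylor_2_linear(lam, t_span, u0, N):
    """2nd order Taylor series method for u' = lam * u.

    Here f_t = 0 and f_u * f = lam**2 * u, so the update is
    U_{n+1} = U_n * (1 + lam*dt + (lam*dt)**2 / 2).
    """
    t = numpy.linspace(t_span[0], t_span[1], N)
    lam_dt = lam * (t[1] - t[0])
    u = numpy.empty(t.shape)
    u[0] = u0
    for n in range(N - 1):
        u[n + 1] = u[n] * (1.0 + lam_dt + lam_dt**2 / 2.0)
    return t, u
```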
def Taylor_3_flambda_u(lamda, t_span, u0, N):
    r"""Implement constant step size 3rd order Taylor series method for f(t,u) = \lambda u"""
t = numpy.linspace(t_span[0], t_span[1], N)
lambda_dt = lamda * (t[1] - t[0])
u = numpy.empty(t.shape)
u[0] = u0
for n, t_n in enumerate(t[:-1]):
u[n + 1] = u[n] * (
1.0 + lambda_dt + (lambda_dt**2) / 2.0 + (lambda_dt**3) / 6.0
)
    return t, u

lam = -1.0
t_span = [0.0, 5.0]
u0 = 1.0
f = lambda t, u: -u
t_exact = numpy.linspace(t_span[0], t_span[1], 100)
u_exact = u0 * numpy.exp(-t_exact)
N = 20
t_taylor, u_taylor = Taylor_3_flambda_u(lam, t_span, u0, N)
t_euler, u_euler = euler(f, t_span, u0, N)

fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, "k", label="exact")
axes.plot(t_euler, u_euler, "ro", label="euler")
axes.plot(t_taylor, u_taylor, "bo", label="Taylor3")
axes.grid()
axes.set_xlabel("t", fontsize=16)
axes.set_ylabel("u", fontsize=16)
axes.legend(loc="best")
plt.show()
Some Drawbacks¶
Taylor Series methods
require differentiating the given equation which can be cumbersome and difficult to implement
require a new routine for every right-hand side $f(t, u)$
General one-step/multi-stage methods
higher order methods often require a large number of evaluations of per time step
Overview -- so far¶
So far we have discussed 3 basic techniques for the integration of ODE IVPs:
Single-Step Multi-Stage schemes (explicit and implicit)
Taylor’s Series Methods
Linear Multi-step schemes (just started this)
as well as
truncation error of each method (and its relation to step-error)
adaptive stepping for Single-Step Schemes
In general, Single-Step Multi-Stage methods (e.g. Embedded RK schemes) plus adaptive time stepping make for a very robust family of solvers. However, there are some other classical schemes worth mentioning that have their own advantages.
Linear Multi-Step Methods¶
Multi-step methods are ODE methods that
require only one new function evaluation per time step to work.
reuse values and function evaluations at some number of previous time steps
Disadvantages over single step methods
Methods are not self-starting, i.e. they require other methods to find the initial values
Difficult to adapt: the time step $\Delta t$ in one-step methods can be changed at any time, while for multi-step methods this is much more complex
Simplest example: The leap-frog method¶
The leap-frog method is similar to Euler's method except that it uses information from the two previous time steps to advance the solution. We can write it by approximating the derivative with a centered first difference at time $t_n$, i.e.

$$\frac{U_{n+1} - U_{n-1}}{2 \Delta t} = f(t_n, U_n)$$

or

$$U_{n+1} = U_{n-1} + 2 \Delta t\, f(t_n, U_n)$$

This method is known as the leap-frog method.
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(decay_constant * t), label="True Solution")
# Plot Leap-Frog step
dt = 1e3
u1 = c_0 * numpy.exp(decay_constant * dt / 2.0)
u_np = c_0 + dt * (decay_constant * u1)
axes.plot((0.0, dt), (c_0, u_np), "k")
axes.plot((dt, dt), (u_np, c_0 * numpy.exp(decay_constant * dt)), "k--")
axes.plot((0.0, 0.0), (c_0, u_np), "k--")
axes.plot((0.0, dt), (u_np, u_np), "k--")
axes.text(400, u_np - 0.05, "$\Delta t$", fontsize=16)
axes.plot([0.0, dt / 2, dt], [c_0, u1, u_np], "ro")
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c$")
axes.grid()
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5, 1.0))
plt.show()

def leap_frog(f, t_span, u0, N, start=RK2):
"""calculate fixed step with leap-frog iterator with a single step starter"""
t = numpy.linspace(t_span[0], t_span[1], N)
delta_t = t[1] - t[0]
u = numpy.zeros(t.shape)
u[0] = u0
# use a single-step multi-stage method to start
t_start, u_start = start(f, (t[0], t[1]), u0, 2)
u[1] = u_start[-1]
for n, t_np in enumerate(t[1:-1]):
u[n + 2] = u[n] + 2 * delta_t * f(t_np, u[n + 1])
    return t, u

u0 = 1.0
t_span = (0.0, 1600.0)
N = 21
# Stable example
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
t_exact = numpy.linspace(t_span[0], t_span[1], N)
u_exact = u0 * numpy.exp(decay_constant * t_exact)
t_leapfrog, u_leapfrog = leap_frog(f, t_span, u0, N, start=RK4)

fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_leapfrog, u_leapfrog, "or-", label="Leap-Frog")
axes.plot(t_exact, u_exact, "k--", label="True Solution")
axes.grid()
axes.set_title("Leap-Frog", fontsize=18)
axes.set_xlabel("t (years)", fontsize=16)
axes.set_ylabel("$c(t)$", fontsize=16)
axes.legend(loc="best", fontsize=14)
plt.show()
Error Analysis of Leap-Frog Method¶
To analyze this method we expand $u(t_{n+1})$ in a Taylor series around time $t_n$ to yield

$$u(t_{n+1}) = u(t_n) + \Delta t\, u'(t_n) + \frac{\Delta t^2}{2} u''(t_n) + \frac{\Delta t^3}{6} u'''(t_n) + O(\Delta t^4)$$

We need one more expansion however due to leap-frog. Recall that leap-frog has the form

$$U_{n+1} = U_{n-1} + 2 \Delta t\, f(t_n, U_n)$$

To handle the $U_{n-1}$ term we need to write it in relation to $t_n$. Again we use the Taylor series

$$u(t_{n-1}) = u(t_n) - \Delta t\, u'(t_n) + \frac{\Delta t^2}{2} u''(t_n) - \frac{\Delta t^3}{6} u'''(t_n) + O(\Delta t^4)$$

Plugging these into our definition of the truncation error along with the leap-frog method definition leads to

$$T = \frac{u(t_{n+1}) - u(t_{n-1})}{2 \Delta t} - f(t_n, u(t_n)) = \frac{\Delta t^2}{6} u'''(t_n) + O(\Delta t^4)$$

Therefore the method is second order accurate and is consistent, theoretically. In practice it's a bit more complicated than that.
# Compare accuracy between Euler, RK2 and Leap-Frog
f = lambda t, u: -u
u_exact = lambda t: numpy.exp(-t)
u_0 = 1.0
t_span = (0.0, 10.0)
num_steps = [2**n for n in range(4, 11)]

delta_t = numpy.empty(len(num_steps))
error_euler = numpy.empty(len(num_steps))
error_RK2 = numpy.empty(len(num_steps))
error_leapfrog = numpy.empty(len(num_steps))
for i, N in enumerate(num_steps):
t = numpy.linspace(t_span[0], t_span[1], N)
tt, u_euler = euler(f, t_span, u_0, N)
tt, u_rk2 = RK2(f, t_span, u_0, N)
tt, u_leapfrog = leap_frog(f, t_span, u_0, N, start=euler)
delta_t[i] = t[1] - t[0]
# Compute error for each
error_euler[i] = numpy.linalg.norm(delta_t[i] * (u_euler - u_exact(t)), ord=1)
error_RK2[i] = numpy.linalg.norm(delta_t[i] * (u_rk2 - u_exact(t)), ord=1)
error_leapfrog[i] = numpy.linalg.norm(delta_t[i] * (u_leapfrog - u_exact(t)), ord=1)
# Plot error vs. delta_t
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
order_C = lambda delta_x, error, order: numpy.exp(
numpy.log(error) - order * numpy.log(delta_x)
)
axes.loglog(delta_t, error_euler, "bo", label="Forward Euler, $n=1$")
axes.loglog(delta_t, error_RK2, "ro", label="RK2, $n=2$")
axes.loglog(delta_t, error_leapfrog, "go", label="Leap-Frog, $n=2$")
axes.loglog(delta_t, order_C(delta_t[2], error_euler[2], 1.0) * delta_t**1.0, "--b")
axes.loglog(delta_t, order_C(delta_t[2], error_RK2[2], 2.0) * delta_t**2.0, "--r")
axes.loglog(delta_t, order_C(delta_t[2], error_leapfrog[2], 2.0) * delta_t**2.0, "--g")
axes.grid()
axes.legend(loc=2, fontsize=14)
axes.set_title("Comparison of Errors", fontsize=18)
axes.set_xlabel("$\Delta t$", fontsize=16)
axes.set_ylabel("$|U(t_f) - u(t_f)|$", fontsize=16)
plt.show()
Look at the errors for Leap-Frog¶
They're actually quite large... If you make a quick plot of u_leapfrog vs $t$ you'll see what's happening (this is a good example of an issue we will need to address in future lectures).
N = 50
t_leapfrog, u_leapfrog = leap_frog(f, t_span, u_0, N, start=euler)
## Your plotting code here
plt.figure()
plt.plot(t_leapfrog, u_leapfrog)
plt.grid()
plt.show()
General Linear Multi-Step Methods¶
Leap-frog is perhaps the simplest of multi-step methods, but all linear multi-step methods can be written as a linear combination of past, present, and future solutions:

$$\sum_{j=0}^{r} \alpha_j U_{n+j} = \Delta t \sum_{j=0}^{r} \beta_j f(t_{n+j}, U_{n+j})$$

If $\beta_r = 0$ then the method is explicit (it only requires previous time steps). Note that the coefficients are not unique, as we can multiply both sides by a constant; in practice the normalization $\alpha_r = 1$ is used.
For example, our leap-frog method can be written using $r = 2$, $\alpha_0 = -1$, $\alpha_1 = 0$, $\alpha_2 = 1$, $\beta_0 = \beta_2 = 0$, $\beta_1 = 2$.
Adams-Bashforth Methods¶
The Adams-Bashforth methods are explicit solvers that maximize the order of accuracy for a given number of steps $r$. This is accomplished by looking at the Taylor series and picking the coefficients $\beta_j$ to eliminate as many terms in the error expansion as possible.
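The Adams-Bashforth weights can be recovered numerically by requiring the quadrature rule to be exact for polynomials up to degree $r - 1$. A sketch of this idea (my own; it assumes the normalization $\Delta t = 1$ with function values at nodes $j = 0, \dots, r-1$ and integration over $[r-1, r]$):

```python
import numpy

def adams_bashforth_weights(r):
    """Weights beta_j for the r-step Adams-Bashforth method.

    Requires sum_j beta_j * j**k to equal the exact integral of t**k
    over [r-1, r] for k = 0, ..., r-1 (exactness on degree r-1).
    """
    A = numpy.array([[float(j) ** k for j in range(r)] for k in range(r)])
    rhs = numpy.array(
        [(r ** (k + 1) - (r - 1) ** (k + 1)) / (k + 1) for k in range(r)]
    )
    return numpy.linalg.solve(A, rhs)
```

With r = 1 this gives forward Euler, and with r = 2 it reproduces the $(-f_n + 3 f_{n+1})/2$ combination used in the AB2 code below.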
def AB2(f, t_span, u0, N, start=RK2):
"""calculate fixed step Adams-Bashforth 2-step method with a single step starter
reuses previous function evaluations
"""
t = numpy.linspace(t_span[0], t_span[1], N)
delta_t = t[1] - t[0]
u = numpy.zeros(t.shape)
u[0] = u0
# use a single-step multi-stage method to start
t_start, u_start = start(f, (t[0], t[1]), u0, 2)
u[1] = u_start[-1]
# set initial function evaluations
fn = f(t[0], u[0])
fnp = f(t[1], u[1])
for n, t_np in enumerate(t[2:]):
u[n + 2] = u[n + 1] + delta_t / 2.0 * (-fn + 3.0 * fnp)
fn = fnp
fnp = f(t_np, u[n + 2])
    return t, u

# Use 2-step Adams-Bashforth to compute solution
f = lambda t, u: -u
t_span = (0.0, 10.0)
u0 = 1.0
N = 20
t, u_ab2 = AB2(f, t_span, u0, N, start=RK2)

t_exact = numpy.linspace(t_span[0], t_span[1], 100)
u_exact = numpy.exp(-t_exact)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, "k", label="True")
axes.plot(t, u_ab2, "ro", label="2-step A-B")
axes.set_title("Adams-Bashforth Method", fontsize=18)
axes.set_xlabel("t", fontsize=16)
axes.set_ylabel("u(t)", fontsize=16)
axes.legend(loc=1, fontsize=14)
axes.grid()
plt.show()
Adams-Moulton Methods¶
The Adams-Moulton methods are the implicit versions of the Adams-Bashforth methods. Since implicitness gives one additional parameter to use ($\beta_r \neq 0$), these methods are generally one order of accuracy greater than their explicit counterparts.
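For reference (reconstructed to match the update implemented in the cell below), the 2-step Adams-Moulton formula and its rearrangement for linear decay $f = -\lambda u$, where the implicit evaluation $f_{n+2}$ can be solved for explicitly, are:

```latex
U_{n+2} = U_{n+1} + \frac{\Delta t}{12}\left(-f_n + 8 f_{n+1} + 5 f_{n+2}\right)
\qquad \Longrightarrow \qquad
U_{n+2} = \frac{U_{n+1} + \frac{\lambda \Delta t}{12}\left(U_n - 8 U_{n+1}\right)}
               {1 + \frac{5 \lambda \Delta t}{12}}
```

The denominator on the right is the `integration_constant` that appears in the code.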
# Use 2-step Adams-Moulton to compute solution
# u' = - decay u
decay_constant = 1.0
f = lambda t, u: -decay_constant * u
t_exact = numpy.linspace(0.0, 10.0, 100)
u_exact = numpy.exp(-t_exact)
N = 20
# N = 10
# N = 5
t = numpy.linspace(0, 10.0, N)
delta_t = t[1] - t[0]
U = numpy.empty(t.shape)
U[0] = 1.0
# Start U[1] with a midpoint (RK2) step; f is autonomous here so the t argument is unused
U[1] = U[0] + 0.5 * delta_t * f(t[0], U[0])
U[1] = U[0] + delta_t * f(t[0], U[1])
integration_constant = 1.0 / (1.0 + 5.0 * decay_constant * delta_t / 12.0)
for n in range(t.shape[0] - 2):
U[n + 2] = (
U[n + 1] + decay_constant * delta_t / 12.0 * (U[n] - 8.0 * U[n + 1])
) * integration_constant
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, "k", label="True")
axes.plot(t, U, "ro", label="2-step A-M")
axes.set_title("Adams-Moulton Method ($f=-u$)", fontsize=18)
axes.set_xlabel("t", fontsize=16)
axes.set_ylabel("u(t)", fontsize=16)
axes.legend(loc=1, fontsize=14)
axes.grid()
plt.show()
Truncation Error for Multi-Step Methods¶
We can again find the truncation error in general for linear multi-step methods:

$$T = \frac{1}{\Delta t}\left[\sum_{j=0}^{r} \alpha_j u(t_{n+j}) - \Delta t \sum_{j=0}^{r} \beta_j u'(t_{n+j})\right]$$

Using the general expansion and evaluation of the Taylor series about $t_n$ we have

$$u(t_{n+j}) = u(t_n) + j \Delta t\, u'(t_n) + \frac{(j \Delta t)^2}{2} u''(t_n) + \cdots$$

and collecting terms of like order gives

$$T = \frac{1}{\Delta t}\left(\sum_{j=0}^{r} \alpha_j\right) u(t_n) + \left(\sum_{j=0}^{r} \left(j \alpha_j - \beta_j\right)\right) u'(t_n) + O(\Delta t)$$

The method is consistent if the first two terms of the expansion vanish, i.e. $\sum_{j=0}^{r} \alpha_j = 0$ and $\sum_{j=0}^{r} j \alpha_j = \sum_{j=0}^{r} \beta_j$.
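These two conditions are easy to check numerically. A quick sketch (my own) verifying them for leap-frog and for the 2-step Adams-Bashforth method:

```python
import numpy

def is_consistent(alpha, beta):
    """Check the LMM consistency conditions:
    sum_j alpha_j = 0 and sum_j j * alpha_j = sum_j beta_j."""
    j = numpy.arange(len(alpha))
    return numpy.isclose(numpy.sum(alpha), 0.0) and numpy.isclose(
        numpy.dot(j, alpha), numpy.sum(beta)
    )

# Leap-frog: U_{n+2} - U_n = 2 dt f_{n+1}
print(is_consistent([-1.0, 0.0, 1.0], [0.0, 2.0, 0.0]))  # True
# 2-step Adams-Bashforth: U_{n+2} - U_{n+1} = dt (3 f_{n+1} - f_n) / 2
print(is_consistent([0.0, -1.0, 1.0], [-0.5, 1.5, 0.0]))  # True
```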
# Compare accuracy between RK-2, AB-2 and AM-2, RK-4
f = lambda t, u: -u
u_exact = lambda t: numpy.exp(-t)
t_f = 10.0
t_span = (0.0, t_f)
num_steps = [2**n for n in range(4, 10)]
delta_t = numpy.empty(len(num_steps))
error_rk = numpy.empty(len(num_steps))
error_rk4 = numpy.empty(len(num_steps))
error_ab = numpy.empty(len(num_steps))
error_am = numpy.empty(len(num_steps))
for i, N in enumerate(num_steps):
t = numpy.linspace(0, t_f, N)
delta_t[i] = t[1] - t[0]
# Compute RK2
tt, U_rk = RK2(f, t_span, u0, N)
# Compute RK4
tt, U_rk4 = RK4(f, t_span, u0, N)
# Compute Adams-Bashforth 2-stage
tt, U_ab = AB2(f, t_span, u0, N)
    # Compute Adams-Moulton 2-step
U_am = numpy.empty(t.shape)
U_am[:2] = U_rk[:2]
decay_constant = 1.0
integration_constant = 1.0 / (1.0 + 5.0 * decay_constant * delta_t[i] / 12.0)
for n in range(t.shape[0] - 2):
U_am[n + 2] = (
U_am[n + 1]
+ decay_constant * delta_t[i] / 12.0 * (U_am[n] - 8.0 * U_am[n + 1])
) * integration_constant
# Compute error for each
error_rk[i] = numpy.linalg.norm(delta_t[i] * (U_rk - u_exact(t)), ord=1)
error_rk4[i] = numpy.linalg.norm(delta_t[i] * (U_rk4 - u_exact(t)), ord=1)
error_ab[i] = numpy.linalg.norm(delta_t[i] * (U_ab - u_exact(t)), ord=1)
error_am[i] = numpy.linalg.norm(delta_t[i] * (U_am - u_exact(t)), ord=1)
# Plot error vs. delta_t
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_t, error_rk, "ko", label="RK-2")
axes.loglog(delta_t, error_ab, "bo", label="AB-2")
axes.loglog(delta_t, error_am, "go", label="AM-2")
axes.loglog(delta_t, error_rk4, "co", label="RK-4")
order_C = lambda delta_x, error, order: numpy.exp(
numpy.log(error) - order * numpy.log(delta_x)
)
colors = ["r", "b", "g", "c"]
start = [error_ab[0], error_ab[0], error_am[0], error_rk4[0]]
for i, c in enumerate(colors):
order = i + 1.0
axes.loglog(
delta_t,
order_C(delta_t[0], start[i], order) * delta_t**order,
"--{}".format(c),
label="$p={}$".format(order),
)
axes.legend(loc=4, fontsize=14)
axes.set_title("Comparison of Errors", fontsize=18)
axes.set_xlabel("$\Delta t$", fontsize=16)
axes.set_ylabel("$|U(t) - u(t)|$", fontsize=16)
axes.grid()
plt.show()
Predictor-Corrector Methods¶
One way to simplify the Adams-Moulton methods so that implicit evaluations are not needed is by estimating the required implicit function evaluations with an explicit method. These are often called predictor-corrector methods as the explicit method provides a prediction of what the solution might be and the now explicit corrector step works to make that estimate more accurate.
Example: One-Step Adams-Bashforth-Moulton¶
Use the one-step Adams-Bashforth (forward Euler) method to predict the value of $U_{n+1}$ and then use the one-step Adams-Moulton (trapezoidal) method to correct that value:

$$\hat{U}_{n+1} = U_n + \Delta t\, f(t_n, U_n)$$

$$U_{n+1} = U_n + \frac{\Delta t}{2}\left(f(t_n, U_n) + f(t_{n+1}, \hat{U}_{n+1})\right)$$

leading to a second order accurate method. Note this algorithm is identical to _____________________?
# One-step Adams-Bashforth-Moulton
f = lambda t, u: -u
t_exact = numpy.linspace(0.0, 10.0, 100)
u_exact = numpy.exp(-t_exact)
N = 50
t = numpy.linspace(0, 10.0, N)
delta_t = t[1] - t[0]
U = numpy.empty(t.shape)
U[0] = 1.0
for n in range(t.shape[0] - 1):
U[n + 1] = U[n] + delta_t * f(t[n], U[n])
U[n + 1] = U[n] + 0.5 * delta_t * (f(t[n], U[n]) + f(t[n + 1], U[n + 1]))
t_ie, u_ieuler = improved_euler(f, (0.0, 10.0), 1.0, N)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, "k", label="True")
axes.plot(t, U, "ro", label="A-B-M P/C")
axes.plot(t_ie, u_ieuler, "bx", label="Improved Euler")
axes.set_title("Adams-Bashforth-Moulton P/C Method", fontsize=18)
axes.set_xlabel("t", fontsize=18)
axes.set_ylabel("u(t)", fontsize=18)
axes.legend(loc="best", fontsize=14)
axes.grid()
plt.show()
- Kermack, W. O., & McKendrick, A. G. (1927). A contribution to the mathematical theory of epidemics. Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character, 115(772), 700–721. 10.1098/rspa.1927.0118
- Dormand, J. R., & Prince, P. J. (1980). A family of embedded Runge-Kutta formulae. Journal of Computational and Applied Mathematics, 6(1), 19–26. 10.1016/0771-050x(80)90013-3