![]() | Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli |
Note: The presentation below largely follows part II in “Finite Difference Methods for Ordinary and Partial Differential Equations” by LeVeque (SIAM, 2007).
from __future__ import print_function
%matplotlib inline
import matplotlib.pyplot as plt
import numpy
Numerical Solution to ODE Initial Value Problems - Part 2¶
Overview -- so far¶
So far we have discussed three basic techniques for the integration of ODE IVPs:
Single-Step Multi-Stage schemes (explicit and implicit)
Taylor’s Series Methods
Linear Multi-Step Schemes (explicit and implicit)
as well as
truncation error of each method (and its relation to step-error)
adaptive stepping
Overview -- to do¶
This notebook will continue the analysis of these methods to discuss the other important issues of
Convergence (Global Error) -- does the discrete solution converge to the continuous analytic solution?
Stability -- Are there limits to how large a time-step we can take?
Stiff Equations: consequences of stability
Implicit Methods: Solution of non-linear systems of equations using Newton’s Method
Convergence¶
We can think of an ODE method as providing a sequence of approximations $U^N$ where $N$ is the number of time steps needed to reach the final time of interest $T$; in effect we are increasing the resolution of the method. We then say that a method is convergent if this sequence converges to the true solution at the same time:
$$\lim_{N \rightarrow \infty} U^N = u(T).$$
We can also define this in a more familiar way in terms of $\Delta t = T / N$ such that $N \Delta t = T$,
so that our definition of convergence becomes
$$\lim_{\Delta t \rightarrow 0} U^{T / \Delta t} = u(T).$$
In general, for a method to be convergent it must be
consistent, which as before means that the local truncation error $\tau \rightarrow 0$ as $\Delta t \rightarrow 0$, and
zero-stable, which implies that the sum total of the errors as $\Delta t \rightarrow 0$ is bounded and has the same order as $\tau$ (the truncation error), which we know goes to zero as $\Delta t \rightarrow 0$.
Put another way, a method is convergent if the global error $E^N = U^N - u(T) \rightarrow 0$ as $\Delta t \rightarrow 0$.
Example: Forward Euler on a Linear Problem¶
Consider the simple linear problem
$$u' = \lambda u, \quad u(0) = u_0,$$
which we know has the solution $u(t) = u_0 e^{\lambda t}$. Applying Euler’s method to this problem leads to
$$U_{n+1} = U_n + \Delta t \lambda U_n = (1 + \Delta t \lambda) U_n.$$
We also know the local truncation error is defined by
$$\tau^n = \frac{u(t_{n+1}) - u(t_n)}{\Delta t} - \lambda u(t_n),$$
which can be rearranged to find
$$u(t_{n+1}) = (1 + \Delta t \lambda) u(t_n) + \Delta t \tau^n.$$
Note that all values here are in terms of the exact solution, whereas the application of Euler’s method is in terms of the approximate solution.
Defining the global error as
$$E^n = U_n - u(t_n),$$
we can subtract the last two expressions to find
$$E^{n+1} = (1 + \Delta t \lambda) E^n - \Delta t \tau^n,$$
a recursive definition for the global error that connects it to the local truncation error.
Working backwards from the global error at the maximum time step $N$, the first few iterates of this look like
$$E^N = (1 + \Delta t \lambda) E^{N-1} - \Delta t \tau^{N-1},$$
$$E^{N-1} = (1 + \Delta t \lambda) E^{N-2} - \Delta t \tau^{N-2},$$
which at this point looks like
$$E^N = (1 + \Delta t \lambda)^2 E^{N-2} - (1 + \Delta t \lambda) \Delta t \tau^{N-2} - \Delta t \tau^{N-1}.$$
Expanding this expression out backwards in time to $E^0$ leads to
$$E^N = (1 + \Delta t \lambda)^N E^0 - \Delta t \sum_{n=1}^{N} (1 + \Delta t \lambda)^{N - n} \tau^{n-1}.$$
Noting that $1 + \Delta t \lambda$ are the first two terms in the Taylor series for $e^{\Delta t \lambda}$, we will bound this term by
$$|1 + \Delta t \lambda| \leq e^{|\lambda| \Delta t},$$
which then implies the term in the summation can be bounded by
$$|1 + \Delta t \lambda|^{N - n} \leq e^{|\lambda| (N - n) \Delta t} \leq e^{|\lambda| N \Delta t} = e^{|\lambda| T}.$$
In other words, the global error is bounded by the original global error and the maximum one-step truncation error made multiplied by the number of time steps taken:
$$|E^N| \leq e^{|\lambda| T} \left( |E^0| + N \Delta t \max_{1 \leq n \leq N} |\tau^{n-1}| \right).$$
If as before $N \Delta t = T$, and taking into account the local truncation error, we can simplify this expression further to
$$|E^N| \leq e^{|\lambda| T} \left( |E^0| + T \max_n |\tau^n| \right).$$
If we assume that we have used the correct initial condition, then $E^0 = 0$, and we see that the global error is bounded by $\mathcal{O}(\Delta t)$, the same as the local truncation error:
$$|E^N| \leq e^{|\lambda| T} \, T \max_n |\tau^n| = \mathcal{O}(\Delta t).$$
More generally, it can be shown that for all single-step multi-stage schemes (of which Euler is the simplest), the global error has the same order as the truncation error, all of which scale as
$$E = \mathcal{O}(\Delta t^p),$$
and therefore all of these schemes converge.
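As a quick sanity check of this result (a sketch, using the model problem $u' = -u$ rather than anything new), we can estimate the observed order of convergence of forward Euler by successively halving the step size:

```python
import numpy

# Model problem u' = -u, u(0) = 1, exact solution u(T) = exp(-T)
f = lambda t, u: -u
T = 1.0

def euler_final(N):
    """Forward Euler to t = T with N steps; returns the final value U_N."""
    delta_t = T / N
    U = 1.0
    for n in range(N):
        U = U + delta_t * f(n * delta_t, U)
    return U

errors = [abs(euler_final(N) - numpy.exp(-T)) for N in (50, 100, 200)]
# Observed order from successive halvings: p = log2(E(dt) / E(dt/2))
orders = [numpy.log2(errors[i] / errors[i + 1]) for i in range(2)]
print(orders)  # both values close to 1 for this first-order scheme
```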
Absolute Stability¶
Although zero-stability guarantees convergence, it is much more difficult to work with in general as the limit $\Delta t \rightarrow 0$ can be difficult to compute (and in practice is not ideal, as it requires an ever-increasing amount of computation). Instead we often consider a finite $\Delta t$ and examine whether the method is stable for this particular choice of $\Delta t$. This has the practical upside that it will also tell us which $\Delta t$ will ensure that our method is indeed stable.
We can compute an estimate for the $\Delta t$ we need to use by examining the truncation error for Euler’s method,
$$\tau^n = \frac{\Delta t}{2} u''(t_n) + \mathcal{O}(\Delta t^2),$$
and therefore
$$|\tau| \approx \frac{\Delta t}{2} \max_t |u''(t)|.$$
If we want a solution whose truncation error is below some tolerance $\epsilon$, then we need $\Delta t \lesssim 2 \epsilon / \max |u''|$. Turning to the application of Euler’s method, let’s apply this to the case where
$$u'(t) = \lambda (u - \cos t) - \sin t, \quad u(0) = 1,$$
with $\lambda = -10$ and $\lambda = -2100$.
# Implement vectorized Forward Euler
def euler(f, t_span, u0, N):
    """simple implementation of constant step-size forward euler method for vector valued f

    This doc string should have so much more in it
    """
    t = numpy.linspace(t_span[0], t_span[1], N)
    if numpy.isscalar(u0):
        u = numpy.empty(t.shape)
    else:
        u = numpy.empty((len(t), len(u0)))
    u[0] = u0
    delta_t = t[1] - t[0]
    for n, t_n in enumerate(t[:-1]):
        K1 = delta_t * f(t_n, u[n])
        u[n + 1] = u[n] + K1
    return t, u.transpose()
# Implement vectorized RK4
def RK4(f, t_span, u0, N):
    """simple implementation of constant step-size RK4 scheme for vector valued f

    This doc string should have so much more in it
    """
    t = numpy.linspace(t_span[0], t_span[1], N)
    if numpy.isscalar(u0):
        u = numpy.empty(t.shape)
    else:
        u = numpy.empty((len(t), len(u0)))
    u[0] = u0
    delta_t = t[1] - t[0]
    for n, t_n in enumerate(t[:-1]):
        K1 = delta_t * f(t_n, u[n])
        K2 = delta_t * f(t_n + delta_t / 2.0, u[n] + K1 / 2.0)
        K3 = delta_t * f(t_n + delta_t / 2.0, u[n] + K2 / 2.0)
        K4 = delta_t * f(t_n + delta_t, u[n] + K3)
        u[n + 1] = u[n] + (K1 + 2.0 * (K2 + K3) + K4) / 6.0
    return t, u.transpose()
# Compare accuracy between Euler for different values of lambda
f = lambda t, u, lam: lam * (u - numpy.cos(t)) - numpy.sin(t)
u_exact = lambda t: numpy.cos(t)
u_0 = u_exact(0.0)
t_span = [0.0, 2.0]
num_steps = [2**n for n in range(4, 9)]
# num_steps = [2**n for n in range(15, 20)]
delta_t = numpy.empty(len(num_steps))
error_10 = numpy.empty(len(num_steps))
error_2100 = numpy.empty(len(num_steps))
t_f = t_span[1]
u_f = u_exact(t_f)
for i, N in enumerate(num_steps):
    # Compute Euler solution
    ff = lambda t, u: f(t, u, -10.0)
    t, U_10 = euler(ff, t_span, u_0, N)
    delta_t[i] = t[1] - t[0]
    error_10[i] = numpy.abs(U_10[-1] - u_f) / numpy.abs(u_f)
    ff = lambda t, u: f(t, u, -2100.0)
    t, U_2100 = euler(ff, t_span, u_0, N)
    error_2100[i] = numpy.abs(U_2100[-1] - u_f) / numpy.abs(u_f)
print("Error: lambda = -10: {}".format(error_10))
print("Error: lambda = -2100: {}".format(error_2100))
# Plot error vs. delta_t
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1)
axes.loglog(delta_t, error_10, "bo", label="Forward Euler")
order_C = lambda delta_x, error, order: numpy.exp(
numpy.log(error) - order * numpy.log(delta_x)
)
axes.loglog(
delta_t,
order_C(delta_t[1], error_10[1], 1.0) * delta_t**1.0,
"r--",
label="1st Order",
)
axes.legend(loc=4)
axes.set_title("Error - $\lambda = -10$")
axes.set_xlabel("$\Delta t$")
axes.set_ylabel("$|U(t_f) - u(t_f)|$")
axes = fig.add_subplot(1, 2, 2)
axes.loglog(delta_t, error_2100, "bo", label="Forward Euler")
axes.loglog(
delta_t,
order_C(delta_t[1], error_2100[1], 1.0) * delta_t**1.0,
"r--",
label="1st Order",
)
axes.set_title("Error - $\lambda = -2100$")
axes.set_xlabel("$\Delta t$")
axes.set_ylabel("$|U(t_f) - u(t_f)|$")
plt.show()
So what went wrong with $\lambda = -2100$? The global error should go as $\mathcal{O}(\Delta t)$.
Recall the error recursion $E^{n+1} = (1 + \Delta t \lambda) E^n - \Delta t \tau^n$. If $\lambda = -10$, then for every $\Delta t$ used above the previous global error is multiplied by
$$|1 + \Delta t \lambda| < 1,$$
which means the contribution from $E^n$ will slowly decrease as we take more time steps. For the other case, $\lambda = -2100$, we have
$$|1 + \Delta t \lambda| > 1,$$
which means that for this $\Delta t$ the error made in previous time steps will grow! For this not to happen we would have to have $\Delta t < 2 / 2100 \approx 10^{-3}$, which would lead to convergence again.
A simple example¶
Consider our simplest test problem
$$u' = \lambda u, \quad u(0) = u_0,$$
whose exact solution is
$$u(t) = u_0 e^{\lambda t},$$
which for real $\lambda < 0$ should decay to zero as $t \rightarrow \infty$.
Question?¶
What is the largest step-size $\Delta t$ for an Euler method such that the discrete solution $U_n \rightarrow 0$ as $n \rightarrow \infty$?
Euler’s method on this simplest problem can be written
$$U_{n+1} = (1 + \Delta t \lambda) U_n,$$
i.e. at each step the last solution is multiplied by the factor $(1 + \Delta t \lambda)$.
Hopefully it’s clear that for this scheme to be stable requires
$$|1 + \Delta t \lambda| \leq 1.$$
The key parameter is actually the combination of $\lambda$ and the time step $\Delta t$, $z = \Delta t \lambda$, so stability requires that
$$|1 + z| \leq 1,$$
or $z \in [?, ?]$
plt.figure(figsize=(8, 6))
z = numpy.linspace(-3, 2, 200)
plt.plot(z, numpy.abs(1 + z))
plt.plot(z, numpy.ones(z.shape), "k--")
plt.plot([0.0, 0], [0.0, 3.0])
plt.xlabel("$z$", fontsize=16)
plt.ylabel("$|1 + z|$", fontsize=16)
plt.grid()
plt.show()
# Compare accuracy between Euler for different values of time steps and $\lambda = -1$
lam = -1.0
f = lambda t, u: lam * u
u_exact = lambda t: numpy.exp(lam * t)
u_0 = u_exact(0.0)
t_span = [0.0, 10.0]
# Try various numbers of steps (uncomment one)
# N = 21
# N = 11
# N = 9
# N = 6
N = 5
abs_z = numpy.abs(lam * t_span[-1] / (N - 1))
print("Abs(z)={}".format(abs_z))
t, u = euler(f, t_span, u_0, N)
tt = numpy.linspace(0.0, t_span[1], 100)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(tt, u_exact(tt), label="$u_{{exact}}$")
axes.plot(t, u, "ro-", label="$u_{{euler}}$")
axes.set_xlabel("t")
axes.set_title("$\Delta t = {}$, $|z|={}$".format(t[1] - t[0], abs_z), fontsize=18)
axes.grid()
axes.legend(loc="best", fontsize=14)
plt.show()
Absolute Stability of the Forward Euler Method for complex $\lambda$¶
The region $|1 + z| \leq 1$ is the region of absolute stability. For more general problems $\lambda$ may be complex (stemming from the eigenvalues of real problems), so we generally consider the region of stability in the complex plane:
$$|1 + z| \leq 1 \quad \text{for} \quad z \in \mathbb{C}.$$
# Plot the region of absolute stability for Forward Euler
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
axes.fill(
numpy.cos(t) - 1.0,
numpy.sin(t),
color=(255.0 / 255.0, 145.0 / 255.0, 0 / 255.0, 1.0),
)
axes.plot([-3, 3], [0.0, 0.0], "k--")
axes.plot([0.0, 0.0], [-3, 3], "k--")
axes.set_xlim((-3, 3.0))
axes.set_ylim((-3, 3))
axes.set_aspect("equal")
axes.set_xlabel("Re(z)")
axes.set_ylabel("Im(z)")
axes.set_title("Absolute Stability Region for Forward Euler")
plt.show()
Using the stability plots to choose a stable $\Delta t$¶
If you know the spectrum of eigenvalues for a given problem, you can plot them on the stability diagram and then find the value of $\Delta t$ such that all $z_j = \Delta t \lambda_j$ are within the stability region.
E.g. consider Euler’s method on a single eigenvalue
$$\lambda = -2 + 2i$$
with modulus $|\lambda| = 2\sqrt{2}$, which plots outside of the stability region.
# Plot the region of absolute stability for Forward Euler
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
lam = (-2, 2)
t = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
axes.fill(
numpy.cos(t) - 1.0,
numpy.sin(t),
color=(255.0 / 255.0, 145.0 / 255.0, 0 / 255.0, 1.0),
)
axes.plot(lam[0], lam[1], "rx", markersize=8)
axes.plot((lam[0], 0.0), (lam[1], 0.0), "r--")
axes.plot([-3, 3], [0.0, 0.0], "k--")
axes.plot([0.0, 0.0], [-3, 3], "k--")
axes.set_xlim((-3, 3.0))
axes.set_ylim((-3, 3))
axes.set_aspect("equal")
axes.set_xlabel("Re(z)")
axes.set_ylabel("Im(z)")
axes.set_title("Absolute Stability Region for Forward Euler")
plt.show()
However $z = \Delta t \lambda$, so we can scale the eigenvalue back into the stability region by shrinking the time step.
Thus absolute stability requires that we find a time-step such that $|1 + \Delta t \lambda| \leq 1$. If we just look for the intersection with the boundary of the stability region,
$$|1 + \Delta t (-2 + 2i)|^2 = (1 - 2 \Delta t)^2 + (2 \Delta t)^2 = 1$$
implies (after a little work) that
$$8 \Delta t^2 = 4 \Delta t$$
or
$$\Delta t \leq \frac{1}{2}.$$
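We can check this boundary value numerically: with $\lambda = -2 + 2i$, the forward Euler amplification factor $|1 + \Delta t \lambda|$ sits exactly on the unit circle at $\Delta t = 1/2$, and any larger step leaves the stability region.

```python
import numpy

lam = -2.0 + 2.0j

# Forward Euler amplification factor |1 + z| with z = delta_t * lambda
amp = lambda delta_t: numpy.abs(1.0 + delta_t * lam)

print(amp(0.25))  # < 1: stable
print(amp(0.5))   # exactly 1: on the stability boundary
print(amp(0.75))  # > 1: unstable
```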
# Plot the region of absolute stability for Forward Euler
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
lam = (-2, 2)
t = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
axes.fill(
numpy.cos(t) - 1.0,
numpy.sin(t),
color=(255.0 / 255.0, 145.0 / 255.0, 0 / 255.0, 1.0),
)
axes.plot(lam[0], lam[1], "rx", markersize=8)
axes.text(lam[0], 1.1 * lam[1], "$\lambda$", fontsize=18)
axes.plot(0.5 * lam[0], 0.5 * lam[1], "kx", markersize=8)
axes.plot(0.5 * lam[0], 0.5 * lam[1], "kx", markersize=8)
axes.text(0.5 * lam[0], 0.6 * lam[1], "$\Delta t = 1/2$", fontsize=18)
axes.plot((lam[0], 0.0), (lam[1], 0.0), "r--")
axes.plot([-3, 3], [0.0, 0.0], "k--")
axes.plot([0.0, 0.0], [-3, 3], "k--")
axes.set_xlim((-3, 3.0))
axes.set_ylim((-3, 3))
axes.set_aspect("equal")
axes.set_xlabel("Re(z)")
axes.set_ylabel("Im(z)")
axes.set_title("Absolute Stability Region for Forward Euler")
plt.show()
Why complex eigenvalues? And why should we care?¶
A simple example: The simple harmonic oscillator¶
The linear harmonic oscillator can be written as a 2nd-order ODE for the position $x(t)$ of a mass on a spring (with unit mass and spring constant):
$$x'' = -x.$$
Which has solutions
$$x(t) = a \cos t + b \sin t.$$
Alternatively, we can write this as a 2 dimensional system of first order equations by defining the velocity $v = x'$:
$$\frac{d}{dt} \begin{bmatrix} x \\ v \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} x \\ v \end{bmatrix}$$
All diagonalizable linear dynamical systems $u' = A u$ have solutions of the form
$$u(t) = \sum_j c_j e^{\lambda_j t} v_j,$$
where $\lambda_j$, $v_j$ are corresponding eigenvalues and eigenvectors of $A$, and the coefficient of each eigenvector satisfies
$$w' = \lambda_j w,$$
which is exactly our model problem.
In the case where
$$A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$$
The eigenvalues are _ _ _ _ _ _ _ _ ?
Even with complex eigenvalues and eigenvectors, the solutions are real and can be shown to be perfect circles on the phase plane.
However, plotting the eigenvalues on the stability diagram shows that Euler’s method will be unstable for this problem for all time steps: since the eigenvalues are purely imaginary, $z = \Delta t \lambda$ lies on the imaginary axis, which (except for the origin) is outside the region $|1 + z| \leq 1$.
# Plot the region of absolute stability for Forward Euler
x = numpy.linspace(-2, 2, 11)
v = numpy.linspace(-2, 2, 11)
X, V = numpy.meshgrid(x, v)
t = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
fig = plt.figure(figsize=(16, 6))
axes = fig.add_subplot(1, 2, 1)
axes.quiver(X, V, V, -X)
axes.plot(1.5 * numpy.cos(t), -1.5 * numpy.sin(t))
axes.plot([-2, 2], [0.0, 0.0], "k--")
axes.plot([0.0, 0.0], [-2, 2], "k--")
axes.set_xlabel("x", fontsize=16)
axes.set_ylabel("v", fontsize=16)
axes.set_title("Phase Plane: simple Harmonic oscillator", fontsize=16)
axes.set_aspect("equal")
axes = fig.add_subplot(1, 2, 2)
axes.fill(
numpy.cos(t) - 1.0,
numpy.sin(t),
color=(255.0 / 255.0, 145.0 / 255.0, 0 / 255.0, 1.0),
)
axes.plot([-3, 3], [0.0, 0.0], "k--")
axes.plot([0.0, 0.0], [-3, 3], "k--")
axes.plot([0.0, 0.0], [-1.0, 1.0], "rx", markersize=10)
axes.set_xlim((-3, 3.0))
axes.set_ylim((-3, 3))
axes.set_aspect("equal")
axes.set_title("Absolute Stability Region for Forward Euler", fontsize=16)
plt.show()
f = lambda t, u: numpy.array([u[1], -u[0]])
t_span = [0.0, 4 * numpy.pi]
u_0 = numpy.array([1.0, 0.0])
N = 100
t, u = euler(f, t_span, u_0, N)
# t, u = RK4(f, t_span, u_0, N)  # try RK4 for comparison
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.quiver(X, V, V, -X)
axes.plot(1.0 * numpy.cos(t), -numpy.sin(t))
axes.plot(u[0], u[1], "r")
axes.plot([-2, 2], [0.0, 0.0], "k--")
axes.plot([0.0, 0.0], [-2, 2], "k--")
axes.set_xlabel("x", fontsize=16)
axes.set_ylabel("v", fontsize=16)
axes.set_title("Euler's method: simple Harmonic oscillator", fontsize=16)
axes.set_aspect("equal")
Absolute Stability of Single-Step Multi-Stage schemes -- The $R(z)$ method¶
This approach can be applied to all single-step multi-stage schemes, including all the Runge-Kutta and Taylor series methods. The recipe is straightforward:
Apply the stepping scheme for one step of the model problem $u' = \lambda u$, assuming complex $\lambda$.
This will result in a discrete approximation
$$U_{n+1} = R(z) U_n$$
where $z = \Delta t \lambda$, and $R(z)$ will be a discrete approximation to the true solution factor $e^z$.
Absolute stability will require that
$$|R(z)| \leq 1.$$
For example, applying backward Euler, $U_{n+1} = U_n + \Delta t \lambda U_{n+1}$, to the model problem, rearranging and solving for $U_{n+1}$ gives
$$U_{n+1} = \frac{1}{1 - z} U_n,$$
so
$$R(z) = \frac{1}{1 - z},$$
and absolute stability requires
$$\left| \frac{1}{1 - z} \right| \leq 1, \quad \text{i.e.} \quad |1 - z| \geq 1,$$
so in fact the stability region encompasses the entire complex plane except for the open disk of radius 1 centered at $z = 1$, implying that the backward Euler method is in fact stable for any choice of $\Delta t$ when $\text{Re}(\lambda) \leq 0$ (and even for some $\lambda$ with positive real part).
# Plot the region of absolute stability for Backward Euler
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
axes.set_facecolor((255.0 / 255.0, 145.0 / 255.0, 0 / 255.0, 1.0))
axes.fill(numpy.cos(t) + 1.0, numpy.sin(t), "w")
axes.plot([-3, 3], [0.0, 0.0], "k--")
axes.plot([0.0, 0.0], [-3, 3], "k--")
# axes.set_xlim((-3, 1))
axes.set_ylim((-2, 2))
axes.set_aspect("equal")
axes.set_title("Absolute Stability Region for Backward Euler")
plt.show()
However, stability doesn’t imply accuracy either. In this case, backward Euler will still fail after long times for the simple harmonic oscillator, as it damps the solution towards zero. For the general linear autonomous dynamical system
$$u' = A u,$$
the backward Euler method can be written as
$$U_{n+1} = U_n + \Delta t A U_{n+1},$$
or, rearranging, requires solving the linear algebraic problem at each step:
$$(I - \Delta t A) U_{n+1} = U_n.$$
# Implement Backwards Euler for a linear system $u' = Au$
def beuler(A, t_span, u0, N):
    """simple implementation of constant step-size backward euler method for a linear system u' = Au

    This doc string should have so much more in it
    """
    t = numpy.linspace(t_span[0], t_span[1], N)
    delta_t = t[1] - t[0]
    u = numpy.empty((len(t), len(u0)))
    B = numpy.eye(len(u0)) - delta_t * A
    u[0] = u0
    for n, t_n in enumerate(t[:-1]):
        u[n + 1] = numpy.linalg.solve(B, u[n])
    return t, u.transpose()
A = numpy.array([[0, 1], [-1, 0]])
t_span = [0.0, 4 * numpy.pi]
u_0 = numpy.array([1.0, 0.0])
N = 100
t, u = beuler(A, t_span, u_0, N)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.quiver(X, V, V, -X)
axes.plot(1.0 * numpy.cos(t), -numpy.sin(t))
axes.plot(u[0], u[1], "r")
axes.plot([-2, 2], [0.0, 0.0], "k--")
axes.plot([0.0, 0.0], [-2, 2], "k--")
axes.set_xlabel("x", fontsize=16)
axes.set_ylabel("v", fontsize=16)
axes.set_title("Backward Euler: simple Harmonic oscillator", fontsize=16)
axes.set_aspect("equal")
General Stability Regions for Linear Methods¶
We can also map out stability regions for linear multi-step methods (although the analysis is slightly more complicated).
All of the linear multi-step methods from the previous lecture can be written
$$\sum_{j=0}^{r} \alpha_j U^{n+j} = \Delta t \sum_{j=0}^{r} \beta_j f(t_{n+j}, U^{n+j}),$$
which for our model problem $u' = \lambda u$ becomes
$$\sum_{j=0}^{r} \left( \alpha_j - z \beta_j \right) U^{n+j} = 0, \quad z = \Delta t \lambda.$$
Characteristic Polynomials and Linear Difference Equations¶
To help us analyze stability regions we need to take a small aside and consider linear difference equations and their solutions. Say we wanted to solve
$$a_r U^{n+r} + a_{r-1} U^{n+r-1} + \cdots + a_0 U^{n} = 0,$$
given initial conditions $U^0, U^1, \ldots, U^{r-1}$. This expression has solutions of the general form $U^n = \xi^n$.
Plugging this into the equation we have
$$a_r \xi^{n+r} + a_{r-1} \xi^{n+r-1} + \cdots + a_0 \xi^{n} = 0,$$
which simplifies to
$$a_r \xi^{r} + a_{r-1} \xi^{r-1} + \cdots + a_0 = 0$$
by dividing by $\xi^n$. If $\xi$ is a root of this characteristic polynomial,
then $U^n = \xi^n$ solves the original difference equation.
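To illustrate (with an arbitrarily chosen difference equation, not one tied to a particular scheme): if $\xi$ is a root of the characteristic polynomial, then $U^n = \xi^n$ satisfies the recurrence exactly.

```python
import numpy

# Arbitrary example: U^{n+2} - 5 U^{n+1} + 6 U^n = 0, so p(xi) = xi^2 - 5 xi + 6
coefficients = [1.0, -5.0, 6.0]
roots = numpy.roots(coefficients)  # approximately xi = 3 and xi = 2

# Each root generates a solution U^n = xi^n of the difference equation
for xi in roots:
    for n in range(5):
        residual = xi**(n + 2) - 5.0 * xi**(n + 1) + 6.0 * xi**n
        assert abs(residual) < 1e-8
print(sorted(numpy.real(roots)))  # approximately [2, 3]
```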
Example: the Fibonacci Numbers¶
A classic example of a linear difference equation is the Fibonacci sequence, $F_n = 0, 1, 1, 2, 3, 5, 8, 13, \ldots$
These are solutions to the two-term difference equation
$$F_{n+2} = F_{n+1} + F_{n},$$
or
$$F_{n+2} - F_{n+1} - F_{n} = 0.$$
Substituting $F_n = \xi^n$ into the difference equation
yields
$$\xi^2 - \xi - 1 = 0,$$
which has solutions $\xi = \frac{1 \pm \sqrt{5}}{2}$. The positive root is our favorite number $\phi = \frac{1 + \sqrt{5}}{2} \approx 1.618$, which is the golden ratio. The negative root is $\frac{1 - \sqrt{5}}{2} \approx -0.618$. Note, these roots are related to the eigenvalues of a related problem.
Thus all Fibonacci numbers can be written as a linear combination of powers of the two roots,
$$F_n = c_1 \xi_1^n + c_2 \xi_2^n,$$
where $c_1$ and $c_2$ are determined by the first two terms in the sequence, $[0, 1]$.
Note, in this case $|\xi_2| < 1$ (while $\xi_1 > 1$), thus $F_n \rightarrow c_1 \xi_1^n$ as $n \rightarrow \infty$.
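A minimal check of this closed form (Binet’s formula, with $c_1 = -c_2 = 1/\sqrt{5}$ determined from the initial terms $F_0 = 0$, $F_1 = 1$):

```python
import numpy

sqrt5 = numpy.sqrt(5.0)
xi_1 = (1.0 + sqrt5) / 2.0  # golden ratio, approximately 1.618
xi_2 = (1.0 - sqrt5) / 2.0  # approximately -0.618

# Binet's formula: F_n = (xi_1^n - xi_2^n) / sqrt(5)
binet = lambda n: (xi_1**n - xi_2**n) / sqrt5

# Compare against the recurrence F_{n+2} = F_{n+1} + F_n
F = [0, 1]
for n in range(10):
    F.append(F[-1] + F[-2])
print([round(binet(n)) for n in range(12)])  # matches F
```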
Stability for LMM’s¶
Anyway, back to the linear difference equations arising from applying any linear multi-step method to the model problem $u' = \lambda u$:
$$\sum_{j=0}^{r} \left( \alpha_j - z \beta_j \right) U^{n+j} = 0.$$
Letting
$$U^n = \xi^n$$
and substituting, the difference equation reduces to
$$\pi(\xi; z) = \sum_{j=0}^{r} \left( \alpha_j - z \beta_j \right) \xi^{j} = 0,$$
where $\pi(\xi; z)$ is the stability polynomial of the method, whose roots are solutions of the difference equation.
Absolute stability of the linear multi-step methods¶
For $U^n$ to stay bounded as $n \rightarrow \infty$, it requires that all roots $\xi_i$ of $\pi(\xi; z)$ satisfy
$$|\xi_i(z)| \leq 1.$$
If this is the case at $z = 0$, then the multi-step method is zero-stable. Roughly you can think of these roots as multiplying the past truncation errors, so that if they are less than or equal to 1 in magnitude those errors do not grow. We then define the region of absolute stability as the values of $z$ for which this root condition holds. This approach reduces to the $R(z)$ method for one-step schemes as a special case.
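As a quick numerical illustration of the root condition (using the Adams-Bashforth 2 stability polynomial $2\xi^2 - (2 + 3z)\xi + z$, which also appears in the plotting code later): a point $z$ is in the region of absolute stability exactly when the largest root modulus is at most one.

```python
import numpy

# Adams-Bashforth 2 stability polynomial coefficients: 2 xi^2 - (2 + 3z) xi + z
pi_AB2 = lambda z: numpy.array([2.0, -(2.0 + 3.0 * z), z])

# Largest root modulus of pi(xi; z)
max_root = lambda z: max(numpy.abs(numpy.roots(pi_AB2(z))))

print(max_root(-0.5))  # < 1: inside the AB2 stability region
print(max_root(-2.0))  # > 1: outside the region
```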
Example: Forward Euler’s Method¶
Examining forward Euler’s method we have
$$U^{n+1} = U^n + \Delta t \lambda U^n = (1 + z) U^n;$$
setting $U^n = \xi^n$ implies
$$\xi = 1 + z,$$
or $\pi(\xi; z) = \xi - (1 + z)$, whose root is $\xi_1 = 1 + z$, and we have re-derived the stability region $|1 + z| \leq 1$ we had found before.
Plotting Stability Regions¶
Given a stability polynomial $\pi(\xi; z)$, absolute stability requires that
$$\max_i |\xi_i(z)| \leq 1,$$
where $\xi_i$ are the (generally complex) roots of $\pi$ and therefore functions of $z$. In the case of single-step schemes, the problem reduces to finding the region of the complex plane where $|R(z)| \leq 1$.
Either way, the simplest way to visualize the regions of the complex plane where the scheme is stable is to calculate either $|R(z)|$ or $\max_i |\xi_i(z)|$ on a grid, plot the 1-contour, and determine all regions that are $\leq 1$.
Some useful codes¶
Study the following cells for some useful routines for plotting stability regions, for both single-step multi-stage schemes and linear multi-step schemes.
def stability_plot(X, Y, C, axes, title=None, continuous=True):
    """
    Utility function to make a stability diagram given the stability scalar field C

    parameters:
    -----------
    X, Y: numpy meshgrids for the complex plane
    C: numpy array
        Field to plot, either |R(z)| for a single step scheme, or max(|xi_i(z)|) for a LMM scheme
    axes: matplotlib axes object
        subplot or plot to draw in
    title: string
        subplot title if not None
    continuous: boolean
        if True, plot a continuous coloring of C
        if False, plot Heaviside(C - 1)
    """
    if continuous:
        Ch = C
    else:
        Ch = numpy.heaviside(C - 1, 0.0)
    pcolor_plot = axes.pcolor(
        X, Y, Ch, vmin=0, vmax=1, cmap=plt.get_cmap("Greens_r"), shading="auto"
    )
    axes.contour(X, Y, C, levels=[1.0], colors="k")
    fig = plt.gcf()
    fig.colorbar(pcolor_plot)
    axes.plot(X[0, :], numpy.zeros(X.shape[1]), "k--")
    axes.plot(numpy.zeros(Y.shape[0]), Y[:, 0], "k--")
    axes.set_xlabel("Re", fontsize=16)
    axes.set_ylabel("Im", fontsize=16)
    if title is not None:
        axes.set_title(title, fontsize=16)
    axes.set_aspect("equal")
def plot_stability_ssms(R, x, y, axes=None, title=None, continuous=True):
    """
    plot stability regions for single-step multi-stage ODE schemes given the function R(z)
    such that U_{n+1} = R(z)U_n and z is complex

    parameters:
    -----------
    R: callable
        function of a complex variable z such that if |R| <= 1, the scheme is absolutely stable
    x: numpy array
        array of values for the real axis
    y: numpy array
        array of values for the imaginary axis
    axes: matplotlib axes object
        subplot or plot to draw in. If axes=None create a new figure
    title: string
        subplot title if not None
    continuous: boolean
        if True, plot a continuous coloring of C
        if False, plot Heaviside(C - 1)
    """
    X, Y = numpy.meshgrid(x, y)
    Z = X + 1j * Y
    if axes is None:
        fig = plt.figure(figsize=(8, 6))
        axes = fig.add_subplot(1, 1, 1)
    abs_R = numpy.abs(R(Z))
    stability_plot(X, Y, abs_R, axes, title, continuous)
def plot_stability_lmm(pi_coeff, x, y, axes=None, title=None, continuous=True):
    """
    plot stability regions for linear multi-step ODE schemes given the coefficients of the
    stability polynomial pi(xi; z)

    parameters:
    -----------
    pi_coeff: callable (function of z)
        function that returns the array of coefficients of the stability polynomial pi(xi; z)
    x: numpy array
        array of values for the real axis
    y: numpy array
        array of values for the imaginary axis
    axes: matplotlib axes object
        subplot or plot to draw in. If axes=None create a new figure
    title: string
        subplot title if not None
    continuous: boolean
        if True, plot a continuous coloring of C
        if False, plot Heaviside(C - 1)
    """
    X, Y = numpy.meshgrid(x, y)
    Z = X + 1j * Y
    if axes is None:
        fig = plt.figure(figsize=(8, 6))
        axes = fig.add_subplot(1, 1, 1)
    norm_max = numpy.empty(Z.shape)
    for i, row in enumerate(Z):
        for j, z in enumerate(row):
            norm_max[i, j] = max(numpy.abs(numpy.roots(pi_coeff(z))))
    stability_plot(X, Y, norm_max, axes, title, continuous)
x = numpy.linspace(-3, 3, 100)
y = numpy.linspace(-3, 3, 100)
R_euler = lambda z: 1 + z
R_beuler = lambda z: 1.0 / (1.0 - z)
R_trap = lambda z: (1.0 + 0.5 * z) / (1.0 - 0.5 * z)
R_RK2 = lambda z: 1 + z + z**2 / 2
R_Taylor4 = lambda z: 1 + z + z**2 / 2 + z**3 / 6.0 + z**4 / 24.0
continuous = True
fig = plt.figure(figsize=(12, 12))
axes = fig.add_subplot(2, 2, 1)
plot_stability_ssms(
R_euler, x, y, axes=axes, title="Stability of Eulers method", continuous=continuous
)
axes = fig.add_subplot(2, 2, 2)
plot_stability_ssms(
R_beuler,
x,
y,
axes=axes,
title="Stability of Backwards Eulers",
continuous=continuous,
)
axes = fig.add_subplot(2, 2, 3)
plot_stability_ssms(
R_trap,
x,
y,
axes=axes,
title="Stability of Trapezoidal method",
continuous=continuous,
)
axes = fig.add_subplot(2, 2, 4)
plot_stability_ssms(
R_Taylor4,
x,
y,
axes=axes,
title="Stability of Taylor4 method",
continuous=continuous,
)
x = numpy.linspace(-3, 3, 100)
y = numpy.linspace(-3, 3, 100)
pi_euler = lambda z: numpy.array([1.0, -(1 + z)])
pi_AB2 = lambda z: numpy.array([2, -(2.0 + 3 * z), z])
fig = plt.figure(figsize=(12, 6))
axes = fig.add_subplot(1, 2, 1)
plot_stability_lmm(pi_euler, x, y, axes=axes, title="Stability of Eulers method")
axes = fig.add_subplot(1, 2, 2)
plot_stability_lmm(
pi_AB2, x, y, axes=axes, title="Stability of Adams-Bashforth2 method"
)
Application to Stiff ODEs¶
Consider the ODE IVP
$$u'(t) = \lambda (u - \cos t) - \sin t, \quad u(t_0) = u_0.$$
The general solution of the ODE is
$$u(t) = e^{\lambda (t - t_0)} (u_0 - \cos t_0) + \cos t.$$
For $u_0 = 1$ at $t_0 = 0$, the solution is simply
$$u(t) = \cos t.$$
What happens to solutions whose initial data are slightly different from $t_0 = 0$ or $u_0 = 1$?
# Plot "hairy" solutions to the ODE
u = lambda t_0, u0, lam, t: numpy.exp(lam * (t - t_0)) * (
    u0 - numpy.cos(t_0)
) + numpy.cos(t)
fig = plt.figure(figsize=(12, 10))
for i, lam in enumerate([-1, -10, 1, 10]):
    axes = fig.add_subplot(2, 2, i + 1)
    for u0 in numpy.linspace(-1, 1, 10):
        for t_0 in numpy.linspace(0.0, 9.0, 10):
            t = numpy.linspace(t_0, 10.0, 100)
            axes.plot(t, u(t_0, u0, lam, t), "b")
    t = numpy.linspace(0.0, 10.0, 100)
    axes.plot(t, numpy.cos(t), "r", linewidth=5)
    axes.set_ylim((-2, 2))
    axes.set_title("Perturbed Solutions $\lambda = %s$" % lam)
    axes.set_xlabel("$t$")
    axes.set_ylabel("$u(t)$")
plt.show()
# Plot "inverse hairy" solutions to the ODE
u = lambda t_0, u0, lam, t: numpy.exp(lam * (t - t_0)) * (
    u0 - numpy.cos(t_0)
) + numpy.cos(t)
num_steps = 10
error = numpy.ones(num_steps) * 1.0
t_hat = numpy.linspace(0.0, 10.0, num_steps + 1)
t_whole = numpy.linspace(0.0, 10.0, 1000)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
eta = 1.0
lam = 0.1
for n in range(1, num_steps):
    t = numpy.linspace(t_hat[n - 1], t_hat[n], 100)
    U = u(t_hat[n - 1], eta, lam, t)
    axes.plot(t, U, "b")
    axes.plot(t_whole, u(t_hat[n - 1], eta, lam, t_whole), "b--")
    # Alternate the sign of the injected error at each step
    axes.plot([t[-1], t[-1]], (U[-1], U[-1] + (-1.0) ** n * error[n]), "r")
    eta = U[-1] + (-1.0) ** n * error[n]
t = numpy.linspace(0.0, 10.0, 100)
axes.plot(t, numpy.cos(t), "g")
axes.set_title("Perturbed Solutions $\lambda = %s$" % lam)
axes.set_xlabel("$t$")
axes.set_ylabel("$u(t)$")
axes.set_ylim((-10, 10))
plt.show()
Example: Chemical systems¶
Consider the transition of a chemical $A$ to a chemical $C$ through the process
$$A \overset{K_1}{\rightarrow} B \overset{K_2}{\rightarrow} C.$$
We can write this problem as a system of linear ODEs
$$\frac{d}{dt} \begin{bmatrix} [A] \\ [B] \\ [C] \end{bmatrix} = \begin{bmatrix} -K_1 & 0 & 0 \\ K_1 & -K_2 & 0 \\ 0 & K_2 & 0 \end{bmatrix} \begin{bmatrix} [A] \\ [B] \\ [C] \end{bmatrix}$$
with initial concentrations $A_0, B_0, C_0$.
If $A$ is diagonalizable, the general solution of a linear dynamical system can be written in terms of the eigenvalues $\lambda_j$ and eigenvectors $v_j$ of $A$.
And the general solution is
$$u(t) = \sum_j c_j e^{\lambda_j t} v_j,$$
and the constants $c_j$ can be found by solving $V c = u_0$, which describes the initial condition as a linear combination of the eigenvectors.
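A sketch of this eigen-decomposition solution for the chemical system, using `numpy.linalg.eig` (the rate constants and initial concentrations here are illustrative choices):

```python
import numpy

K_1, K_2 = 1.0, 10.0  # illustrative rate constants
A = numpy.array([[-K_1, 0.0, 0.0],
                 [K_1, -K_2, 0.0],
                 [0.0, K_2, 0.0]])
u_0 = numpy.array([3.0, 0.0, 0.0])  # initial concentrations A_0, B_0, C_0

lam, V = numpy.linalg.eig(A)  # eigenvalues and eigenvectors (columns of V)
c = numpy.linalg.solve(V, u_0)  # expand u_0 in the eigenvector basis: V c = u_0

# u(t) = sum_j c_j exp(lambda_j t) v_j, evaluated as a matrix-vector product
u = lambda t: V @ (c * numpy.exp(lam * t))

print(u(0.0))   # recovers u_0
print(u(50.0))  # nearly all mass has become species C: approximately [0, 0, 3]
```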
# Solve the chemical systems example using a Forward Euler scheme
# Problem parameters
K_1 = 1
K_2 = 1000
z = -2.01  # just outside the forward Euler stability interval [-2, 0]
dt = z / (-K_2)
T_max = 5.0
N = int(T_max / dt)
print("dt = {}, N = {}".format(dt, N))
A = numpy.array([[-K_1, 0, 0], [K_1, -K_2, 0], [0, K_2, 0]])
f = lambda u: numpy.dot(A, u)
u_0 = [3, 0, 0.0]
t_span = [0, T_max]
f = lambda t, u: A.dot(u)
t, U = euler(f, t_span, u_0, N)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, U.transpose())
axes.legend(["[A]", "[B]", "[C]"])
axes.set_title("Chemical System")
axes.set_xlabel("$t$")
axes.set_ylabel("$[A], [B], [C]$")
# axes.set_ylim((0.0, 10.))
# axes.set_xlim((0.0, 2.0))
axes.grid()
plt.show()
What is stiffness?¶
In general a stiff ODE is one where the time step required for stability is much smaller than the time step required for accuracy. For linear dynamical systems like the chemical decay problem, the stiffness ratio
$$\frac{\max_j |\lambda_j|}{\min_j |\lambda_j|}$$
(taken over the non-zero eigenvalues) can be used to characterize the stiffness of the system. In our last example this ratio was $K_2 / K_1 = 1000$ if $K_1 = 1$ and $K_2 = 1000$. As we increased this ratio we observed that the numerical method became unstable and only a reduction in $\Delta t$ provided a stable solution again. For explicit time stepping methods this is problematic as the reduction of the time step for only one of the species leads to very expensive evaluations. For example, forward Euler has the stability criterion
$$\Delta t \leq \frac{2}{|\lambda|}$$
(for real, negative $\lambda$), where $\lambda$ will have to be the largest-magnitude eigenvalue of the system.
# set the eigenvalues and time step
K_1 = 1.0
K_2 = 3.0
delta_t = 1.0
eigenvalues = [-K_1, -K_2]
# Plot the region of absolute stability for Forward Euler
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
axes.fill(
numpy.cos(t) - 1.0,
numpy.sin(t),
color=(255.0 / 255.0, 145.0 / 255.0, 0 / 255.0, 1.0),
)
for lam in eigenvalues:
    print("z = {}".format(lam * delta_t))
    axes.plot(lam * delta_t, 0.0, "ko")
axes.plot([-3, 3], [0.0, 0.0], "k--")
axes.plot([0.0, 0.0], [-3, 3], "k--")
# axes.set_xlim((-3, 1))
axes.set_ylim((-2, 2))
axes.set_aspect("equal")
axes.set_title("Absolute Stability Region for Forward Euler")
plt.show()
# solve using backwards Euler
K_1 = 1.0
K_2 = 300
A = numpy.array([[-K_1, 0, 0], [K_1, -K_2, 0], [0, K_2, 0]])
N = 50
t_span = [0.0, 5.0]
U0 = [3.0, 0.0, 0.0]
t, U = beuler(A, t_span, U0, N)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, U.T)
axes.legend(["[A]", "[B]", "[C]"])
axes.set_title("Chemical System")
axes.set_xlabel("$t$")
axes.set_ylabel("$[A], [B], [C]$")
axes.grid()
plt.show()
from scipy.integrate import solve_ivp
t_span = [0.0, 5.0]
U0 = [3.0, 0.0, 0.0]
K_1 = 1.0
K_2 = 10000
A = numpy.array([[-K_1, 0, 0], [K_1, -K_2, 0], [0, K_2, 0]])
f = lambda t, u: numpy.dot(A, u)
sol = solve_ivp(f, t_span, U0, method="Radau")
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(sol.t, sol.y.T, "o-")
axes.legend(["[A]", "[B]", "[C]"])
axes.set_title("Chemical System")
axes.set_xlabel("$t$")
axes.set_ylabel("$[A], [B], [C]: N={}$".format(len(sol.t)))
axes.grid()
plt.show()
A-Stability¶
What if we could expand the absolute stability region to encompass more of the left-half plane, or even better, all of it? A method that has this property is called A-stable. We have already seen one example of this with backward Euler, which has a stability region of
$$|1 - z| \geq 1,$$
which covers the full left-half plane. It turns out that for linear multi-step methods a theorem by Dahlquist proves that there are no A-stable LMMs beyond second order (the trapezoidal rule). There are, however, higher-order implicit Runge-Kutta methods that are A-stable.
Perhaps this is too restrictive though. Often the large eigenvalues of a system (for instance coming from a PDE discretization of the heat equation) lie entirely on the real line. If the stability region can encompass as much of the real line as possible, while leaving out the rest of the left-half plane, we can possibly get a more efficient method. There are a number of higher-order methods that can be constructed with this property.
# Plot the region of absolute stability for Backward Euler
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
K_1 = 3.0
K_2 = 1.0
delta_t = 1.0
eigenvalues = [-K_1, -K_2]
axes.set_facecolor((255.0 / 255.0, 145.0 / 255.0, 0 / 255.0, 1.0))
axes.fill(numpy.cos(t) + 1.0, numpy.sin(t), "w")
for lam in eigenvalues:
    print(lam * delta_t)
    axes.plot(lam * delta_t, 0.0, "ko")
axes.plot([-3, 3], [0.0, 0.0], "k--")
axes.plot([0.0, 0.0], [-3, 3], "k--")
# axes.set_xlim((-3, 1))
axes.set_ylim((-2, 2))
axes.set_aspect("equal")
axes.set_title("Absolute Stability Region for Backward Euler")
plt.show()
L-Stability¶
It turns out not all A-stable methods are alike. Consider the backward Euler method and the trapezoidal method defined by
$$U_{n+1} = U_n + \frac{\Delta t}{2} \left( f(t_n, U_n) + f(t_{n+1}, U_{n+1}) \right),$$
whose stability functions $R(z)$ are
$$R_{BE}(z) = \frac{1}{1 - z} \quad \text{and} \quad R_{trap}(z) = \frac{1 + z/2}{1 - z/2}.$$
Both are A-stable, but $|R_{BE}(z)| \rightarrow 0$ as $\text{Re}(z) \rightarrow -\infty$ while $|R_{trap}(z)| \rightarrow 1$; methods with the former property are called L-stable and damp out the stiffest modes rather than letting them persist.
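A quick numerical look at these two stability functions for strongly negative real $z$ (a sketch; both formulas match the $R(z)$ functions used in the stability-region plotting code earlier):

```python
import numpy

R_beuler = lambda z: 1.0 / (1.0 - z)
R_trap = lambda z: (1.0 + 0.5 * z) / (1.0 - 0.5 * z)

# For a very stiff mode, z = delta_t * lambda is large and negative
for z in [-10.0, -100.0, -1000.0]:
    print(z, abs(R_beuler(z)), abs(R_trap(z)))
# |R_beuler| -> 0: stiff modes are damped out (L-stable)
# |R_trap| -> 1: stiff modes persist as bounded oscillations
```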
# Plot "hairy" solutions to the ODE
u = lambda t_0, u0, lam, t: numpy.exp(lam * (t - t_0)) * (
u0 - numpy.cos(t_0)
) + numpy.cos(t)
fig = plt.figure(figsize=(12, 5))
for i, lam in enumerate([-2, -20]):
axes = fig.add_subplot(1, 2, i + 1)
for u0 in numpy.linspace(-1, 1, 10):
for t_0 in numpy.linspace(0.0, 9.0, 10):
t = numpy.linspace(t_0, 10.0, 100)
axes.plot(t, u(t_0, u0, lam, t), "b")
t = numpy.linspace(0.0, 10.0, 100)
axes.plot(t, numpy.cos(t), "r", linewidth=5)
axes.set_ylim((-1.5, 1.5))
axes.grid()
axes.set_title("Perturbed Solutions $\lambda = %s$" % lam)
axes.set_xlabel("$t$")
axes.set_ylabel("$u(t)$")
axes.set_ylim((-1.5, 1.5))
plt.show()
# Compare accuracy between Euler and Trapezoidal for the stiff ODE
f = lambda t, lam, u: lam * (u - numpy.cos(t)) - numpy.sin(t)
u_exact = lambda t_0, eta, lam, t: numpy.exp(lam * (t - t_0)) * (
    eta - numpy.cos(t_0)
) + numpy.cos(t)
t_0 = 0.0
t_f = 1.0
eta = 2.0
lam = -1e3
num_steps = [10, 20, 40, 50]
# num_steps = numpy.arange(100, 1000, 100)
delta_t = numpy.empty(len(num_steps))
error_euler = numpy.empty(len(num_steps))
error_trap = numpy.empty(len(num_steps))
for i, N in enumerate(num_steps):
    t = numpy.linspace(0, t_f, N)
    delta_t[i] = t[1] - t[0]
    u = u_exact(t_0, eta, lam, t_f)
    # Compute backward Euler solution
    U_euler = numpy.empty(t.shape)
    U_euler[0] = eta
    for n in range(len(t) - 1):
        # Solve (1 - lam dt) U[n+1] = U[n] - lam dt cos(t[n+1]) - dt sin(t[n+1])
        U_euler[n + 1] = (
            U_euler[n]
            - lam * delta_t[i] * numpy.cos(t[n + 1])
            - delta_t[i] * numpy.sin(t[n + 1])
        ) / (1.0 - lam * delta_t[i])
    error_euler[i] = numpy.abs(U_euler[-1] - u) / numpy.abs(u)
    # Compute using the trapezoidal rule
    U_trap = numpy.empty(t.shape)
    U_trap[0] = eta
    for n in range(len(t) - 1):
        # Explicit half-step uses f at t[n]; implicit half-step is solved for at t[n+1]
        U_trap[n + 1] = (
            U_trap[n]
            + delta_t[i] * 0.5 * f(t[n], lam, U_trap[n])
            - 0.5 * lam * delta_t[i] * numpy.cos(t[n + 1])
            - 0.5 * delta_t[i] * numpy.sin(t[n + 1])
        ) / (1.0 - 0.5 * lam * delta_t[i])
    error_trap[i] = numpy.abs(U_trap[-1] - u)
# Plot error vs. delta_t
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1)
axes.plot(t, U_euler, "ro-")
axes.plot(t, u_exact(t_0, eta, lam, t), "k")
axes = fig.add_subplot(1, 2, 2)
axes.loglog(delta_t, error_euler, "bo")
order_C = lambda delta_x, error, order: numpy.exp(
numpy.log(error) - order * numpy.log(delta_x)
)
axes.loglog(
delta_t,
order_C(delta_t[1], error_euler[1], 1.0) * delta_t**1.0,
"r--",
label="1st Order",
)
axes.loglog(
delta_t,
order_C(delta_t[1], error_euler[1], 2.0) * delta_t**2.0,
"b--",
label="2nd Order",
)
axes.legend(loc=4)
axes.set_title("Comparison of Errors for Backward Euler")
axes.set_xlabel(r"$\Delta t$")
axes.set_ylabel(r"$|U(t_f) - u(t_f)| / |u(t_f)|$")
# Plots for trapezoid
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1)
axes.plot(t, U_trap, "ro-")
axes.plot(t, u_exact(t_0, eta, lam, t), "k")
axes = fig.add_subplot(1, 2, 2)
axes.loglog(delta_t, error_trap, "bo", label="Trapezoidal")
order_C = lambda delta_x, error, order: numpy.exp(
numpy.log(error) - order * numpy.log(delta_x)
)
axes.loglog(
delta_t,
order_C(delta_t[1], error_trap[1], 1.0) * delta_t**1.0,
"r--",
label="1st Order",
)
axes.loglog(
delta_t,
order_C(delta_t[1], error_trap[1], 2.0) * delta_t**2.0,
"b--",
label="2nd Order",
)
axes.legend(loc=4)
axes.set_title("Comparison of Errors for Trapezoidal Rule")
axes.set_xlabel(r"$\Delta t$")
axes.set_ylabel(r"$|U(t_f) - u(t_f)|$")
plt.show()
It turns out that if we look at a one-step method applied to the test problem $u' = \lambda u$ and define the ratio
$$R(z) = \frac{U^{n+1}}{U^n}, \qquad z = \lambda \Delta t,$$
we can define another form of stability, called L-stability, where we require that the method is A-stable and that
$$\lim_{|z| \rightarrow \infty} |R(z)| = 0.$$
It turns out that backward Euler is L-stable while the trapezoidal rule is not.
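A quick numerical check of this claim: evaluate the two stability functions far out on the negative real axis (the sample point $z = -10^6$ is an arbitrary illustrative choice).

```python
# Stability functions for backward Euler and the trapezoidal method
R_beuler = lambda z: 1.0 / (1.0 - z)
R_trap = lambda z: (1.0 + 0.5 * z) / (1.0 - 0.5 * z)

# A strongly stiff mode: z = lambda * delta_t far out on the negative real axis
z = -1.0e6
print(abs(R_beuler(z)))   # ~1e-6: backward Euler annihilates stiff transients
print(abs(R_trap(z)))     # ~1.0: the trapezoidal rule barely damps them
```

This is exactly the behavior seen in the "hairy" plots above: the trapezoidal rule carries stiff transients along as slowly decaying oscillations.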
R_beuler = lambda z: 1.0 / (1.0 - z)
R_trap = lambda z: (1.0 + 0.5 * z) / (1.0 - 0.5 * z)

# NOTE: x, y and plot_stability_ssms are assumed to be defined earlier in the notebook
fig = plt.figure(figsize=(12, 6))
axes = fig.add_subplot(1, 2, 1)
plot_stability_ssms(R_beuler, x, y, axes=axes, title="Stability of Backward Euler")
axes = fig.add_subplot(1, 2, 2)
plot_stability_ssms(R_trap, x, y, axes=axes, title="Stability of Trapezoidal")
Backward Differentiation Formulas¶
A class of LMMs that is useful for stiff ODE problems is the backward differentiation formula (BDF) methods, which have the general form
$$\alpha_0 U^{n+1} + \alpha_1 U^{n} + \cdots + \alpha_r U^{n+1-r} = \Delta t \, \beta \, f(U^{n+1}).$$
These methods can be derived directly from backward finite differences using the point $t_{n+1}$ and the $r$ points before it in time. One can then derive $r$-step methods that are $r$th-order accurate this way. The first few are
$$\text{BDF-1:} \quad U^{n+1} - U^n = \Delta t \, f(U^{n+1})$$
$$\text{BDF-2:} \quad 3 U^{n+1} - 4 U^n + U^{n-1} = 2 \Delta t \, f(U^{n+1})$$
$$\text{BDF-3:} \quad 11 U^{n+1} - 18 U^n + 9 U^{n-1} - 2 U^{n-2} = 6 \Delta t \, f(U^{n+1})$$
$$\text{BDF-4:} \quad 25 U^{n+1} - 48 U^n + 36 U^{n-1} - 16 U^{n-2} + 3 U^{n-3} = 12 \Delta t \, f(U^{n+1})$$
Inspection of BDF-1 shows that it is equivalent to backward Euler.
You should also recognize the left-hand side of BDF-2 from our numerical differentiation notes.
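As a sketch of one way to compute these weights without `fdcoeffV` (the helper name `bdf_weights` is our own), we can solve a small Vandermonde system that forces the weights to differentiate the monomials $1, x, x^2, \ldots$ exactly at the newest stencil point:

```python
import numpy

def bdf_weights(r):
    """Finite-difference weights for u'(t) at the newest point of the
    stencil t_{n+1-r}, ..., t_{n+1} (unit spacing), via a Vandermonde solve."""
    x = numpy.arange(r + 1) - r           # stencil shifted so t_{n+1} sits at 0
    # Row i of V enforces that the weights reproduce d/dx (x^i) at x = 0
    V = numpy.vander(x, increasing=True).T
    d = numpy.zeros(r + 1)
    d[1] = 1.0                            # derivative of x is 1; all others vanish
    return numpy.linalg.solve(V, d)       # weights ordered oldest -> newest

print(2 * bdf_weights(2))   # [ 1. -4.  3.]  i.e. BDF-2: 3U^{n+1} - 4U^n + U^{n-1}
print(6 * bdf_weights(3))   # [-2.  9. -18.  11.]  i.e. BDF-3
```

Scaling the weights by $r$-dependent integer factors (2, 6, 12, ...) recovers the integer BDF coefficients.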
from fdcoeffV import fdcoeffV
# order of the BDF scheme (i.e. N=2 implies BDF-2)
N = 3
# order of the derivative
k = 1
x = numpy.array(range(N + 1))
w = fdcoeffV(k, x[N], x)
print("weights = {}".format(w))
print("weights x 6 = {}".format(6 * w))

## Finish the code off here for calculating BDF coefficients

pi_BDF1 = lambda z: numpy.array([1.0 - z, -1.0])
pi_BDF2 = lambda z: numpy.array([3.0 - 2 * z, -4.0, 1.0])
pi_BDF3 = lambda z: numpy.array([11 - 6.0 * z, -18.0, 9.0, -2.0])
pi_BDF4 = lambda z: numpy.array([25.0 - 12 * z, -48.0, 36.0, -16.0, 3.0])

x = numpy.linspace(-10, 10, 50)
y = numpy.linspace(-10, 10, 50)
fig = plt.figure(figsize=(12, 12))
axes = fig.add_subplot(2, 2, 1)
plot_stability_lmm(pi_BDF1, x, y, axes=axes, title="Stability of BDF1 method")
axes = fig.add_subplot(2, 2, 2)
plot_stability_lmm(pi_BDF2, x, y, axes=axes, title="Stability of BDF2 method")
axes = fig.add_subplot(2, 2, 3)
plot_stability_lmm(pi_BDF3, x, y, axes=axes, title="Stability of BDF3 method")
axes = fig.add_subplot(2, 2, 4)
plot_stability_lmm(pi_BDF4, x, y, axes=axes, title="Stability of BDF4 method")
BDF Schemes¶
Pros¶
BDFs are LMMs, so they require only one (non-linear) solve per time step
relatively easy to derive, and methods of variable order are straightforward to code
BDF-1 and BDF-2 are A-stable and L-stable; higher-order BDFs trade a wedge of the left-half plane for higher order (A($\alpha$)-stability)
Cons¶
still need to solve systems of non-linear equations
LMMs are harder to make adaptive (though it can be done using interpolation techniques)
Example: BDF-1 for a general non-linear ODE¶
Consider a general $N$-dimensional non-linear system of ODEs (e.g. the Van der Pol oscillator or the SIR problem), which can be written as
$$\mathbf{u}' = \mathbf{f}(\mathbf{u}),$$
where
$$\mathbf{u} = [u_1, u_2, \ldots, u_N]^T$$
and $\mathbf{f}(\mathbf{u})$ is a vector of scalar, possibly non-linear functions of the solution vector.
For example, in the Van der Pol oscillator $\mathbf{u} = [u_1, u_2]^T$ and
$$\mathbf{f}(\mathbf{u}) = \begin{bmatrix} u_2 \\ \mu (1 - u_1^2) u_2 - u_1 \end{bmatrix}.$$
BDF-1 discretization (Backwards-Euler)¶
So the BDF-1 (backward Euler) discretization of this system can be written
$$\mathbf{U}^{n+1} = \mathbf{U}^n + \Delta t \, \mathbf{f}(\mathbf{U}^{n+1}),$$
which is itself a more general system of non-linear equations for $\mathbf{U}^{n+1}$ of the form
$$\mathbf{G}(\mathbf{U}^{n+1}) = \mathbf{0},$$
where, in this case,
$$\mathbf{G}(\mathbf{U}) = \mathbf{U} - \mathbf{U}^n - \Delta t \, \mathbf{f}(\mathbf{U}).$$
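As a concrete sketch, here is one BDF-1 step for the Van der Pol oscillator, solving the implicit update equation $\mathbf{U} - \mathbf{U}^n - \Delta t \, \mathbf{f}(\mathbf{U}) = \mathbf{0}$ with a hand-rolled Newton iteration (developed in the next section). The value of $\mu$ and all helper names here are our own illustrative choices:

```python
import numpy

mu = 5.0   # assumed stiffness parameter, chosen for illustration
f = lambda u: numpy.array([u[1], mu * (1.0 - u[0]**2) * u[1] - u[0]])
# Jacobian of f, needed for Newton's method
Jf = lambda u: numpy.array([[0.0, 1.0],
                            [-2.0 * mu * u[0] * u[1] - 1.0, mu * (1.0 - u[0]**2)]])

def bdf1_step(u_n, delta_t, tol=1.0e-10, max_its=50):
    """Take one BDF-1 step by solving G(U) = U - u_n - delta_t * f(U) = 0."""
    U = u_n.copy()                               # initial guess: previous step
    for k in range(max_its):
        G = U - u_n - delta_t * f(U)
        if numpy.linalg.norm(G) < tol:
            break
        J_G = numpy.eye(2) - delta_t * Jf(U)     # Jacobian of G
        U = U + numpy.linalg.solve(J_G, -G)
    return U

U_1 = bdf1_step(numpy.array([1.0, 0.0]), 0.01)
```

Note that the Jacobian of $\mathbf{G}$ is $I - \Delta t \, J_{\mathbf{f}}$, so the Newton solve is well conditioned for small $\Delta t$.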
Solution of systems of Non-linear equations: Newton’s method¶
If our system had only one scalar variable $u$ and function $f$, then each step would reduce to solving the scalar non-linear problem
$$G(U) = U - U^n - \Delta t \, f(U) = 0,$$
which is a root-finding problem, and we could use any of our root-finders to solve it (in particular, robust bracketing schemes like Brent's method). Unfortunately, once we move to higher dimensions the problems become significantly harder (and solutions may not exist). However, we can still attempt to apply Newton's method to the general non-linear problem $\mathbf{G}(\mathbf{U}) = \mathbf{0}$.
Newton’s method: preliminaries¶
As a simpler non-linear system, we will consider the system of equations
$$x_1^2 + 2 x_2^2 = \gamma_1, \qquad x_1 + 5 x_2 = \gamma_2,$$
which geometrically is the intersection of an ellipse and a straight line and, depending on the values of $\gamma_1$ and $\gamma_2$, has 0, 1, or 2 solutions.
x = numpy.linspace(-4, 4, 100)
y = numpy.linspace(-4, 4, 100)
X, Y = numpy.meshgrid(x, y)
Z1 = lambda gamma: X**2 + 2 * Y**2 - gamma
Z2 = lambda gamma: X + 5 * Y - gamma
gamma_1 = 10 * numpy.ones(3)
gamma_2 = [-2, 10, -15]
fig = plt.figure(figsize=(24, 6))
for i in range(3):
    axes = fig.add_subplot(1, 3, i + 1)
    axes.contour(X, Y, Z1(gamma_1[i]), [0], colors="r")
    axes.contour(X, Y, Z2(gamma_2[i]), [0], colors="b")
    axes.axis("equal")
    axes.set_xlabel("$x_1$")
    axes.set_ylabel("$x_2$")
    axes.set_title(r"$\gamma_1={}, \gamma_2={}$".format(gamma_1[i], gamma_2[i]))
    axes.grid()
plt.show()
Now, this system is simple enough that we could solve it by substitution, reducing it to a quadratic equation for the roots (or even use a 1-D root-finder). But in general we would like a method that can start from some initial guess and rapidly converge to one of the roots. Newton's method provides a general way to solve these sorts of problems (with all the caveats that apply to Newton's method for scalar problems).
Newton’s method: Review ¶
Newton's method for a scalar function $f$ of a single variable says: given any $x$ such that $f(x) \neq 0$, we seek a correction $\delta$ such that
$$f(x + \delta) = 0.$$
Expanding in a Taylor series around $x$,
$$f(x + \delta) = f(x) + f'(x) \, \delta + O(\delta^2).$$
Neglecting the higher-order terms, this becomes
$$f(x) + f'(x) \, \delta \approx 0,$$
or, solving for the correction,
$$\delta = -\frac{f(x)}{f'(x)}.$$
If the correction were exact it would follow that $f(x + \delta) = 0$ and we would have our solution.
In general, the higher-order terms matter, so the residual is not zero, but this gives us an iterative scheme that we hope will converge:
set an initial guess $x_0$, then iterate for $k = 0, 1, 2, \ldots$
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$$
until $|f(x_{k+1})| < \text{tol}$.
If is well behaved with simple roots and the initial guess is within the basin of attraction of a given root, Newton can find the root with quadratic convergence.
It can also behave very badly and fly off to infinity...
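A minimal sketch of the scalar iteration (the function `newton_scalar` is our own illustrative name), applied to $f(x) = x^2 - 2$, whose positive root is $\sqrt{2}$:

```python
def newton_scalar(f, f_prime, x0, tol=1.0e-12, max_its=50):
    """Scalar Newton iteration: x <- x - f(x) / f'(x) until |f(x)| < tol."""
    x = x0
    for k in range(max_its):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / f_prime(x)
    return x

# Find sqrt(2) as the positive root of f(x) = x^2 - 2
root = newton_scalar(lambda x: x**2 - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)   # converges to 1.41421356... in a handful of iterations
```

Starting from `x0=1.0` the iterates are the classic Babylonian sequence 1, 1.5, 1.41666..., illustrating the quadratic convergence.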
Newton's method: $\mathbf{F}(\mathbf{x}) = \mathbf{0}$¶
Newton's method for a vector-valued function $\mathbf{F}(\mathbf{x})$ is actually very similar (without the comfort of things such as brackets).
Again we seek $\mathbf{x}$ such that $\mathbf{F}(\mathbf{x}) = \mathbf{0}$ for a general $\mathbf{F}$, but now $\mathbf{F}(\mathbf{x})$ is a vector, often called the "residual".
By the rules of vector norms, $\|\mathbf{F}(\mathbf{x})\| \geq 0$, and it is only zero at a root of $\mathbf{F}$.
If the residual is non-zero for some arbitrary $\mathbf{x}$, Newton's method assumes that there is some correction vector $\boldsymbol{\delta}$ such that
$$\mathbf{F}(\mathbf{x} + \boldsymbol{\delta}) = \mathbf{0},$$
and we simply need to expand the vector-valued function in its Taylor series and truncate at the linear terms to solve for the correction.
Preliminaries: Taylor series expansion of a scalar function of a vector¶
If $f(\mathbf{x})$ is a scalar function of a vector-valued variable (e.g. a surface like $z = f(x, y)$), then the Taylor series of $f$ around $\mathbf{x}$ is
$$f(\mathbf{x} + \boldsymbol{\delta}) = f(\mathbf{x}) + \sum_i \frac{\partial f}{\partial x_i} \delta_i + \frac{1}{2} \sum_{i,j} \delta_i \frac{\partial^2 f}{\partial x_i \partial x_j} \delta_j + \ldots$$
or, more beautifully, in vector notation
$$f(\mathbf{x} + \boldsymbol{\delta}) = f(\mathbf{x}) + \nabla f(\mathbf{x})^T \boldsymbol{\delta} + \frac{1}{2} \boldsymbol{\delta}^T H(\mathbf{x}) \boldsymbol{\delta} + \ldots$$
where $\nabla f$ is the gradient of $f$ and $H$ is the Hessian, a symmetric matrix whose components are
$$H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}.$$
The Jacobian¶
The Jacobian of $\mathbf{F}(\mathbf{x})$ is the matrix whose rows are the gradients of the individual component functions,
$$J(\mathbf{x}) = \begin{bmatrix} \nabla F_1^T \\ \vdots \\ \nabla F_N^T \end{bmatrix},$$
or whose components are
$$J_{ij} = \frac{\partial F_i}{\partial x_j},$$
i.e. all the partial derivatives of all the component functions with respect to all the variables.
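When analytic derivatives are tedious, the Jacobian can be approximated column by column with forward differences, a common trick (the function name `numerical_jacobian` is our own):

```python
import numpy

def numerical_jacobian(F, x, eps=1.0e-7):
    """Forward-difference approximation J[i, j] ~ (F_i(x + eps e_j) - F_i(x)) / eps."""
    F0 = numpy.asarray(F(x))
    J = numpy.empty((F0.shape[0], x.shape[0]))
    for j in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[j] += eps                 # perturb one coordinate at a time
        J[:, j] = (numpy.asarray(F(x_pert)) - F0) / eps
    return J
```

Each column costs one extra evaluation of $\mathbf{F}$ and the accuracy is only $O(\epsilon)$, so an analytic Jacobian is preferred when it is available.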
Newton's method for systems (finally)¶
Given all the pieces, raw Newton for $\mathbf{F}(\mathbf{x}) = \mathbf{0}$ is simply:
given an initial guess $\mathbf{x}_0$ such that $\mathbf{F}(\mathbf{x}_0) \neq \mathbf{0}$, seek a correction $\boldsymbol{\delta}_k$ such that
$$\mathbf{F}(\mathbf{x}_k + \boldsymbol{\delta}_k) = \mathbf{0},$$
or solve the linear system
$$J(\mathbf{x}_k) \, \boldsymbol{\delta}_k = -\mathbf{F}(\mathbf{x}_k)$$
and correct
$$\mathbf{x}_{k+1} = \mathbf{x}_k + \boldsymbol{\delta}_k.$$
Minor digression: Linear systems¶
We can actually use Newton to solve linear systems as well. Consider the general linear problem
$$A \mathbf{x} = \mathbf{b},$$
which we can rewrite in residual form as
$$\mathbf{F}(\mathbf{x}) = A \mathbf{x} - \mathbf{b} = \mathbf{0},$$
i.e. the solution of our original problem is also a root of $\mathbf{F}$.
For any other initial guess $\mathbf{x}_0$, $\mathbf{F}(\mathbf{x}_0) \neq \mathbf{0}$. However, we can easily find the correction, since
$$\mathbf{F}(\mathbf{x}_0 + \boldsymbol{\delta}) = A (\mathbf{x}_0 + \boldsymbol{\delta}) - \mathbf{b} = \mathbf{F}(\mathbf{x}_0) + A \boldsymbol{\delta},$$
which we can solve immediately for the correction via the linear problem
$$A \boldsymbol{\delta} = -\mathbf{F}(\mathbf{x}_0).$$
This is the same as the general Newton problem with $J = A$, since the Jacobian of a linear function is just the matrix itself; in index notation,
$$J_{ij} = \frac{\partial}{\partial x_j} \left( \sum_k A_{ik} x_k - b_i \right) = A_{ij}.$$
The solution of our original problem is then
$$\mathbf{x} = \mathbf{x}_0 + \boldsymbol{\delta} = \mathbf{x}_0 - A^{-1} (A \mathbf{x}_0 - \mathbf{b}) = A^{-1} \mathbf{b},$$
so for a linear function a single Newton correction is exact. Thus we can solve either for the solution itself or for the correction via the residual; the two are equivalent.
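A quick check of this claim with a small assumed system: starting from an arbitrary guess, a single Newton correction lands exactly on the solution of $A\mathbf{x} = \mathbf{b}$.

```python
import numpy

A = numpy.array([[4.0, 1.0], [1.0, 3.0]])
b = numpy.array([1.0, 2.0])
F = lambda x: A @ x - b                  # residual; its Jacobian is just A

x0 = numpy.zeros(2)                      # arbitrary initial guess
delta = numpy.linalg.solve(A, -F(x0))    # one Newton correction
x1 = x0 + delta

print(numpy.linalg.norm(F(x1)))          # ~0: converged in a single step
```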
# Some quick and dirty code
def newton(F, J, x0, tol=1.0e-6, MAX_ITS=100, verbose=True):
    """Solve F(x) = 0 using Newton's method until ||F(x)|| < tol or MAX_ITS is reached

    Params:
    -------
    F: callable
        function returning the residual F(x)
    J: callable
        function returning the Jacobian J(x)
    x0: ndarray
        initial guess
    tol: float
        stopping criterion, ||F|| < tol
    MAX_ITS: integer
        maximum number of iterations
    verbose: bool
        if True, print the norm of the residual at each iteration

    Returns:
    --------
    x: ndarray
        array of iterates (useful for plotting intermediate results);
        the solution is x[-1]
    k: integer
        number of iterations taken

    Raises:
    -------
    ValueError if the number of iterations exceeds MAX_ITS
    """
    x = [x0]
    for k in range(MAX_ITS + 1):
        xk = x[k]
        res = numpy.linalg.norm(F(xk))
        if verbose:
            print("k = {}, ||F|| = {}".format(k, res))
        # Check convergence before computing another (unnecessary) correction
        if res < tol:
            return numpy.array(x), k
        delta = numpy.linalg.solve(J(xk), -F(xk))
        x.append(xk + delta)
    raise ValueError("Maximum number of iterations exceeded: {}".format(MAX_ITS))
Let's test with our toy problem¶
gamma = numpy.array([10, -2])
F = lambda x: numpy.array(
[x[0] ** 2 + 2 * x[1] ** 2 - gamma[0], x[0] + 5 * x[1] - gamma[1]]
)
J = lambda x: numpy.array([[2 * x[0], 4 * x[1]], [1, 5]])

x0 = numpy.array([-0.5, 2])
x, nits = newton(F, J, x0)
print("\n", x)

xa = numpy.linspace(-4, 4, 100)
ya = numpy.linspace(-4, 4, 100)
X, Y = numpy.meshgrid(xa, ya)
Z1 = lambda gamma: X**2 + 2 * Y**2 - gamma
Z2 = lambda gamma: X + 5 * Y - gamma
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.contour(X, Y, Z1(gamma[0]), [0], colors="r")
axes.contour(X, Y, Z2(gamma[1]), [0], colors="b")
for xi in x:
    axes.plot(xi[0], xi[1], "gx", markersize=8)
# Mark the converged solution
axes.plot(x[-1][0], x[-1][1], "go", markersize=10)
axes.axis("equal")
axes.set_xlabel("$x_1$")
axes.set_ylabel("$x_2$")
axes.set_title(r"$\gamma_1={}, \gamma_2={}$".format(gamma[0], gamma[1]))
axes.grid()
plt.show()
A project idea:¶
Write an adaptive BDF-2/BDF-3 scheme that can solve a general non-linear problem and test it on the Van der Pol oscillator. This turns out to be a lot more challenging than it looks for the Van der Pol problem.
