Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
from __future__ import absolute_import, print_function
%matplotlib inline
import matplotlib.pyplot as plt
import numpy

Numerical Differentiation¶
GOAL: Given a set of points compute the derivative of a given order at a point to a specified accuracy.
Approaches:
Find the interpolating polynomial and differentiate that.
Use Taylor-series expansions and the method of undetermined coefficients to derive finite-difference weights and their error estimates
Issues: order vs. accuracy, and how to choose between them.
Example 1: how to approximate the derivative given a discrete sampling of a function ¶
Here we will consider how to estimate $f'(x)$ given a point sampling of $f(x)$ sampled uniformly over the interval $[0, 1]$.
N = 11
x = numpy.linspace(0, 1, N)
xfine = numpy.linspace(0, 1, 101)
f = lambda x: numpy.sin(numpy.pi * x) + 0.5 * numpy.sin(4 * numpy.pi * x)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(xfine, f(xfine), "b", label="$f(x)$")
axes.plot(x, f(x), "ro", markersize=12, label="$f(x_k)$")
axes.grid()
axes.set_xlabel("x")
p = numpy.polyfit(x, f(x), N - 1)
axes.plot(xfine, numpy.polyval(p, xfine), "g--", label="$P_{{{N}}}$".format(N=N - 1))
axes.legend(fontsize=15)
plt.show()

Example 2: how to approximate the derivative given a discrete sampling of a function ¶
Here we will consider how to estimate $f'(x)$ given a point sampling of Runge's function $f(x) = \frac{1}{1 + 25 x^2}$ sampled uniformly over the interval $[-1, 1]$.
N = 11
x = numpy.linspace(-1, 1, N)
xfine = numpy.linspace(-1, 1, 101)
f = lambda x: 1.0 / (1.0 + 25 * x**2)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(xfine, f(xfine), "b", label="$f(x)$")
axes.plot(x, f(x), "ro", markersize=12, label="$f(x_k)$")
axes.grid()
axes.set_xlabel("x")
p = numpy.polyfit(x, f(x), N - 1)
axes.plot(xfine, numpy.polyval(p, xfine), "g--", label="$P_{{{N}}}$".format(N=N - 1))
axes.legend(fontsize=15)
plt.show()

The interpolating polynomial: review¶
From our previous lecture, we showed that we can approximate a function $f(x)$ over some interval in terms of a unique interpolating polynomial through $N + 1$ points and a remainder term:

$$f(x) = P_N(x) + R_N(x)$$

where the Lagrange remainder term is

$$R_N(x) = \frac{f^{(N+1)}(c)}{(N+1)!} \prod_{i=0}^{N} (x - x_i), \quad c \in [x_0, x_N]$$

While there are multiple ways to represent the interpolating polynomial, both $P_N(x)$ and $R_N(x)$ are polynomials in $x$ and therefore differentiable. Thus we should be able to calculate the first derivative and its error as

$$f'(x) = P'_N(x) + R'_N(x)$$

and likewise for higher order derivatives up to degree $N$.
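As a concrete sketch of this approach (the function and sample points here are illustrative choices, not from the lecture), we can build the interpolating polynomial with `numpy.polyfit` and differentiate it with `numpy.polyder`:

```python
import numpy

# Interpolate f(x) = sin(pi x) at 11 evenly spaced points on [0, 1]
f = lambda x: numpy.sin(numpy.pi * x)
x = numpy.linspace(0, 1, 11)
p = numpy.polyfit(x, f(x), 10)   # coefficients of the interpolant P_N(x)

# Differentiate the interpolating polynomial and evaluate at a point
dp = numpy.polyder(p)            # coefficients of P_N'(x)
approx = numpy.polyval(dp, 0.3)
exact = numpy.pi * numpy.cos(numpy.pi * 0.3)
print(abs(approx - exact))       # small, since sin is smooth on [0, 1]
```

Because $\sin(\pi x)$ is smooth, the derivative of the interpolant tracks $f'(x)$ closely; for Runge's function the same experiment fares much worse near the interval ends.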
Derivatives of the Lagrange Polynomials¶
The Lagrange basis $\ell_i(x)$ is a particularly nice basis for calculating numerical differentiation formulas because of its basic interpolating property

$$\ell_i(x_j) = \delta_{ij}$$

so that the interpolating polynomial can be written

$$P_N(x) = \sum_{i=0}^{N} f(x_i) \ell_i(x)$$

where $f(x_i)$ is just the value of our function at node $x_i$ and all of the $x$ dependence is contained in the Lagrange polynomials $\ell_i(x)$ (which only depend on the node coordinates $x_j$, $j = 0, \ldots, N$). Thus, the interpolating polynomial at any $x$ is simply a linear combination of the values $f(x_i)$ at the nodes.
Examples¶
Given the potentially highly oscillatory nature of the interpolating polynomial, in practice we only use a small number of data points around a given point $x_k$ to derive a differentiation formula for the derivative $f'(x_k)$. In the context of differential equations we also often have the sampled values $f_k = f(x_k)$ at the nodes, so the same formulas apply, and we can likewise use them to approximate the derivative of a known function $f(x)$.
N = 9
f = lambda x: 1.0 / (1.0 + 25 * x**2)
# f = lambda x: numpy.cos(2. * numpy.pi * x)
x = numpy.linspace(-1, 1, N)
xfine = numpy.linspace(-1, 1, 101)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(xfine, f(xfine), "b", label="$f(x)$")
axes.plot(x, f(x), "ro", markersize=12, label="$f(x_k)$")
x3 = x[5:8]
x3fine = numpy.linspace(x3[0], x3[-1], 20)
p = numpy.polyfit(x3, f(x3), 2)
axes.plot(x3, f(x3), "m", label="Piecewise $P_1(x)$")
axes.plot(x3fine, numpy.polyval(p, x3fine), "k", label="Piecewise $P_2(x)$")
axes.grid()
axes.set_xlabel("x")
p = numpy.polyfit(x, f(x), N - 1)
axes.plot(xfine, numpy.polyval(p, xfine), "g--", label="$P_{{{N}}}$".format(N=N - 1))
axes.legend(fontsize=14, loc="best")
plt.show()

Or written out in full, the linear interpolant through $(x_n, f(x_n))$ and $(x_{n+1}, f(x_{n+1}))$ is

$$P_1(x) = f(x_n) \frac{x - x_{n+1}}{x_n - x_{n+1}} + f(x_{n+1}) \frac{x - x_n}{x_{n+1} - x_n}$$

Thus the first derivative of this polynomial for all $x \in [x_n, x_{n+1}]$ is

$$P'_1(x) = \frac{f(x_{n+1}) - f(x_n)}{\Delta x}$$

where $\Delta x = x_{n+1} - x_n$ is the width of the interval. This formula is simply the slope of the chord connecting the points $(x_n, f(x_n))$ and $(x_{n+1}, f(x_{n+1}))$. Note also that the estimate of the first derivative is constant for all $x \in [x_n, x_{n+1}]$.
“Forward” and “Backward” first derivatives¶
Even though the first derivative by this method is the same at both $x_n$ and $x_{n+1}$, we sometimes make a distinction between the "forward" finite difference

$$D_1^+ f(x_n) = \frac{f(x_{n+1}) - f(x_n)}{\Delta x} \approx f'(x_n)$$

and the "backward" finite difference

$$D_1^- f(x_{n+1}) = \frac{f(x_{n+1}) - f(x_n)}{\Delta x} \approx f'(x_{n+1})$$
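As a quick numerical check of these two formulas (the function $f(x) = \sin x$, the evaluation point, and the spacing are illustrative choices):

```python
import numpy

f = numpy.sin
x_n, dx = 1.0, 0.1

D_plus = (f(x_n + dx) - f(x_n)) / dx    # forward difference at x_n
D_minus = (f(x_n) - f(x_n - dx)) / dx   # backward difference at x_n
exact = numpy.cos(x_n)

# Both are first order accurate; the leading errors have opposite signs
print(D_plus - exact, D_minus - exact)
```

Both errors are roughly $\frac{\Delta x}{2} |f''|$ in magnitude but of opposite sign, which is exactly the leading remainder term derived in the error analysis below.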
N = 5
x = numpy.linspace(0, 1, N)
xfine = numpy.linspace(0, 1, 101)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(xfine, f(xfine), "b", label="$f(x)$")
axes.plot(x, f(x), "ro", markersize=12, label="$f(x_k)$")
x3 = x[1:4]
x3fine = numpy.linspace(x3[0], x3[-1], 20)
p = numpy.polyfit(x3, f(x3), 2)
axes.plot(x3, f(x3), "m", label="Piecewise $P_1(x)$")
axes.plot(x3fine, numpy.polyval(p, x3fine), "k", label="Piecewise $P_2(x)$")
axes.grid()
axes.set_xlabel("x")
axes.legend(fontsize=14, loc="best")
axes.text(0.6, 0.15, "$D_1^+$", size=20)
axes.text(0.38, 0.3, "$D_1^-$", size=20)
plt.show()

Note these approximations should be familiar to us: in the limit as $\Delta x \rightarrow 0$ these are no longer approximations but equivalent definitions of the derivative at $x_n$.
Or written out in full, the quadratic interpolant through $x_{n-1}$, $x_n$, and $x_{n+1}$ is

$$P_2(x) = f(x_{n-1}) \frac{(x - x_n)(x - x_{n+1})}{(x_{n-1} - x_n)(x_{n-1} - x_{n+1})} + f(x_n) \frac{(x - x_{n-1})(x - x_{n+1})}{(x_n - x_{n-1})(x_n - x_{n+1})} + f(x_{n+1}) \frac{(x - x_{n-1})(x - x_n)}{(x_{n+1} - x_{n-1})(x_{n+1} - x_n)}$$

Thus the first derivative of this polynomial for all $x \in [x_{n-1}, x_{n+1}]$ is

$$P'_2(x) = f(x_{n-1}) \frac{2x - x_n - x_{n+1}}{(x_{n-1} - x_n)(x_{n-1} - x_{n+1})} + f(x_n) \frac{2x - x_{n-1} - x_{n+1}}{(x_n - x_{n-1})(x_n - x_{n+1})} + f(x_{n+1}) \frac{2x - x_{n-1} - x_n}{(x_{n+1} - x_{n-1})(x_{n+1} - x_n)}$$

Exercise: show that the second derivative is a constant (find it!) but is also just a linear combination of the function values at the nodes.

For evenly spaced nodes with spacing $\Delta x$ this becomes

$$P'_2(x) = f(x_{n-1}) \frac{2x - x_n - x_{n+1}}{2 \Delta x^2} - f(x_n) \frac{2x - x_{n-1} - x_{n+1}}{\Delta x^2} + f(x_{n+1}) \frac{2x - x_{n-1} - x_n}{2 \Delta x^2}$$

which if we evaluate at the three nodes yields

$$P'_2(x_{n-1}) = \frac{-3 f(x_{n-1}) + 4 f(x_n) - f(x_{n+1})}{2 \Delta x}, \quad P'_2(x_n) = \frac{f(x_{n+1}) - f(x_{n-1})}{2 \Delta x}, \quad P'_2(x_{n+1}) = \frac{f(x_{n-1}) - 4 f(x_n) + 3 f(x_{n+1})}{2 \Delta x}$$

Again, these are just linear combinations of the values at the nodes.
Quick Checks¶
In general, all finite difference formulas can be written as linear combinations of the values of $f(x)$ at the nodes. The formulas can be hard to remember, but they are easy to check.

The sum of the coefficients must add to zero. Why? (Hint: consider the derivative of a constant function.)

The sign of the coefficients can be checked by inserting $f(x) = x$, for which any formula for the first derivative must return exactly 1.
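These checks are easy to carry out numerically; here is a minimal sketch for the centered second order weights (the spacing $\Delta x = 0.1$ is an arbitrary choice):

```python
import numpy

dx = 0.1
nodes = numpy.array([-dx, 0.0, dx])                  # x_{n-1}, x_n, x_{n+1}
weights = numpy.array([-1.0, 0.0, 1.0]) / (2 * dx)   # centered weights

# Check 1: weights sum to zero (a constant function has zero derivative)
print(weights.sum())

# Check 2: applied to f(x) = x the formula must return exactly 1
print(weights.dot(nodes))   # 1 up to roundoff
```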
Error Analysis¶
In addition to calculating finite difference formulas, we can also estimate the error
From Lagrange's Theorem, the remainder term looks like

$$R_N(x) = \frac{f^{(N+1)}(c)}{(N+1)!} \prod_{i=0}^{N} (x - x_i)$$

Thus the derivative of the remainder term is

$$R'_N(x) = \frac{f^{(N+1)}(c)}{(N+1)!} \frac{d}{dx} \prod_{i=0}^{N} (x - x_i) = \frac{f^{(N+1)}(c)}{(N+1)!} \sum_{i=0}^{N} \prod_{\substack{j=0 \\ j \neq i}}^{N} (x - x_j)$$

The remainder term contains a sum of $N$th order polynomials and can be awkward to evaluate; however, if we restrict ourselves to the error at any given node $x_k$, every term in the sum containing the factor $(x - x_k)$ vanishes and the remainder simplifies to

$$R'_N(x_k) = \frac{f^{(N+1)}(c)}{(N+1)!} \prod_{\substack{i=0 \\ i \neq k}}^{N} (x_k - x_i)$$

If we let $\Delta x = \max_i |x_k - x_i|$ we then know that the remainder term will be $\mathcal{O}(\Delta x^N)$ as $\Delta x \rightarrow 0$, thus showing that this approach converges and we can find arbitrarily high order approximations (ignoring floating point error).
Examples¶
First order differences ¶
For our first order finite differences, the error term is simply

$$R'_1(x_n) = -\frac{\Delta x}{2} f''(c) \quad \text{and} \quad R'_1(x_{n+1}) = \frac{\Delta x}{2} f''(c)$$

both of which are $\mathcal{O}(\Delta x)$.
Second order differences ¶
For general second order polynomial interpolation, the derivative of the remainder term is

$$R'_2(x) = \frac{f'''(c)}{3!} \left[ (x - x_n)(x - x_{n+1}) + (x - x_{n-1})(x - x_{n+1}) + (x - x_{n-1})(x - x_n) \right]$$

Again evaluating this expression at the center point $x = x_n$ and assuming evenly spaced points we have

$$R'_2(x_n) = \frac{f'''(c)}{6} (x_n - x_{n-1})(x_n - x_{n+1}) = -\frac{\Delta x^2}{6} f'''(c)$$

showing that our error is $\mathcal{O}(\Delta x^2)$.
Caution¶
High order does not necessarily imply high-accuracy!
As always, the question remains as to whether the underlying function is well approximated by a high-order polynomial.
Convergence¶
Nevertheless, we can always check to see if the error reduces as expected as $\Delta x \rightarrow 0$. Here we estimate the first derivative with the 1st and 2nd order formulas for $N$ evenly spaced points.
def D1_p(func, x_min, x_max, N):
    """Calculate the consistent 1st order forward difference of a function
    func(x) defined on the interval [x_min, x_max] and sampled at N evenly
    spaced points."""
    x = numpy.linspace(x_min, x_max, N)
    f = func(x)
    dx = x[1] - x[0]
    f_prime = numpy.zeros(N)
    f_prime[0:-1] = (f[1:] - f[0:-1]) / dx
    # and patch up the end point with a backwards difference
    f_prime[-1] = f_prime[-2]
    return f_prime
def D1_2(func, x_min, x_max, N):
    """Calculate the consistent 2nd order first derivative of a function
    func(x) defined on the interval [x_min, x_max] and sampled at N evenly
    spaced points."""
    x = numpy.linspace(x_min, x_max, N)
    f = func(x)
    dx = x[1] - x[0]
    f_prime = numpy.zeros(N)
    # consistent 2nd order one-sided 1st derivative at x_min
    f_prime[0] = f[:3].dot(numpy.array([-3, 4, -1])) / (2 * dx)
    # centered derivatives in the interior
    f_prime[1:-1] = (f[2:N] - f[0:-2]) / (2 * dx)
    # consistent 2nd order one-sided 1st derivative at x_max
    f_prime[-1] = f[-3:].dot(numpy.array([1, -4, 3])) / (2 * dx)
    return f_prime

Note:¶
This first derivative operator can also be written as a matrix $D$ such that $\mathbf{f}' = D \mathbf{f}$, where $\mathbf{f}$ is the vector of function values at the node coordinates $x$. (Exercise left for the homework.)
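A minimal dense sketch of what such a matrix looks like, using the same second order stencils as `D1_2` above (in practice a sparse storage format would be used):

```python
import numpy

N = 11
x = numpy.linspace(0, 1, N)
dx = x[1] - x[0]

# Assemble D row by row: one-sided 2nd order stencils at the ends,
# centered differences in the interior
D = numpy.zeros((N, N))
D[0, :3] = numpy.array([-3.0, 4.0, -1.0]) / (2 * dx)
for i in range(1, N - 1):
    D[i, i - 1] = -1.0 / (2 * dx)
    D[i, i + 1] = 1.0 / (2 * dx)
D[-1, -3:] = numpy.array([1.0, -4.0, 3.0]) / (2 * dx)

# D @ f now approximates f' at every node simultaneously
f = numpy.sin(numpy.pi * x)
f_prime = numpy.pi * numpy.cos(numpy.pi * x)
err = numpy.abs(D.dot(f) - f_prime).max()
print(err)   # O(dx^2)
```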
N = 21
xmin = 0.0
xmax = 1.0
func = lambda x: numpy.sin(numpy.pi * x) + 0.5 * numpy.sin(4 * numpy.pi * x)
func_prime = lambda x: numpy.pi * (
numpy.cos(numpy.pi * x) + 2.0 * numpy.cos(4 * numpy.pi * x)
)
D1f = D1_p(func, xmin, xmax, N)
D2f = D1_2(func, xmin, xmax, N)

xa = numpy.linspace(xmin, xmax, 100)
xi = numpy.linspace(xmin, xmax, N)
fig = plt.figure(figsize=(16, 6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(xa, func(xa), "b", label="$f(x)$")
axes.plot(xa, func_prime(xa), "k--", label="$f'(x)$")
axes.plot(xi, func(xi), "ro")
axes.plot(xi, D1f, "ko", label="$D^+_1(f)$")
axes.legend(loc="best")
axes.set_title("$f'(x)$")
axes.set_xlabel("x")
axes.set_ylabel("$f'(x)$ and $\hat{f}'(x)$")
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(xa, func(xa), "b", label="$f(x)$")
axes.plot(xa, func_prime(xa), "k--", label="$f'(x)$")
axes.plot(xi, func(xi), "ro")
axes.plot(xi, D2f, "go", label="$D_1^2(f)$")
axes.legend(loc="best")
axes.set_title("$f'(x)$")
axes.set_xlabel("x")
axes.set_ylabel("$f'(x)$ and $\hat{f}'(x)$")
axes.grid()
plt.show()

N = 21
xmin = -1
xmax = 1.0
func = lambda x: 1.0 / (1 + 25.0 * x**2)
func_prime = lambda x: -50.0 * x / (1.0 + 25.0 * x**2) ** 2
D1f = D1_p(func, xmin, xmax, N)
D2f = D1_2(func, xmin, xmax, N)

xa = numpy.linspace(xmin, xmax, 100)
xi = numpy.linspace(xmin, xmax, N)
fig = plt.figure(figsize=(16, 6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(xa, func(xa), "b", label="$f(x)$")
axes.plot(xa, func_prime(xa), "k--", label="$f'(x)$")
axes.plot(xi, func(xi), "ro")
axes.plot(xi, D1f, "ko", label="$D^+_1(f)$")
axes.legend(loc="best")
axes.set_title("$f'(x)$")
axes.set_xlabel("x")
axes.set_ylabel("$f'(x)$ and $\hat{f}'(x)$")
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(xa, func(xa), "b", label="$f(x)$")
axes.plot(xa, func_prime(xa), "k--", label="$f'(x)$")
axes.plot(xi, func(xi), "ro")
axes.plot(xi, D2f, "go", label="$D_1^2(f)$")
axes.legend(loc="best")
axes.set_title("$f'(x)$")
axes.set_xlabel("x")
axes.set_ylabel("$f'(x)$ and $\hat{f}'(x)$")
axes.grid()
plt.show()

Computing Order of Convergence¶
Say we had the error $E(\Delta x)$ and we wanted to make a statement about the rate of convergence (note we can relate $\Delta x$ here to the $N$ from above since $\Delta x \sim 1/N$). Then we can model the error as

$$E(\Delta x) = C \Delta x^n$$

Taking the logarithm of both sides gives

$$\log E = \log C + n \log \Delta x$$

The slope of the line in log-log space is $n$ when modeling the error like this! We can also match the first point by solving for $C$:

$$C = e^{\log E - n \log \Delta x}$$
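Given two error samples this slope can be estimated directly; a small sketch (the $\Delta x$ and error values below are manufactured from a hypothetical 2nd order method, not measured):

```python
import numpy

# Two (dx, error) samples consistent with E(dx) = C dx^n, C = 0.5, n = 2
dx1, dx2 = 0.1, 0.05
E1, E2 = 0.5 * dx1**2, 0.5 * dx2**2

# Slope in log-log space recovers the convergence rate n
n = numpy.log(E1 / E2) / numpy.log(dx1 / dx2)
print(n)   # ~2

# Matching the first point recovers C
C = numpy.exp(numpy.log(E1) - n * numpy.log(dx1))
print(C)   # ~0.5
```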
# Compute the error as a function of delta_x
N_range = numpy.logspace(1, 4, 10, dtype=int)
delta_x = numpy.empty(N_range.shape)
error = numpy.empty((N_range.shape[0], 4))
for i, N in enumerate(N_range):
    x_hat = numpy.linspace(xmin, xmax, N)
    delta_x[i] = x_hat[1] - x_hat[0]
    # Compute forward difference
    D1f = D1_p(func, xmin, xmax, N)
    # Compute 2nd order difference
    D2f = D1_2(func, xmin, xmax, N)
    # Calculate the infinity norm or maximum error
    error[i, 0] = numpy.linalg.norm(numpy.abs(func_prime(x_hat) - D1f), ord=numpy.inf)
    error[i, 1] = numpy.linalg.norm(numpy.abs(func_prime(x_hat) - D2f), ord=numpy.inf)
error = numpy.array(error)
delta_x = numpy.array(delta_x)
order_C = lambda delta_x, error, order: numpy.exp(
numpy.log(error) - order * numpy.log(delta_x)
)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_x, error[:, 0], "ro", label="$D_1^+$")
axes.loglog(delta_x, error[:, 1], "bo", label="$D_1^2$")
axes.loglog(
delta_x,
order_C(delta_x[0], error[0, 0], 1.0) * delta_x**1.0,
"r--",
label="1st Order",
)
axes.loglog(
delta_x,
order_C(delta_x[0], error[0, 1], 2.0) * delta_x**2.0,
"b--",
label="2nd Order",
)
axes.legend(loc=4)
axes.set_title("Convergence of Finite Differences", fontsize=18)
axes.set_xlabel("$\Delta x$", fontsize=16)
axes.set_ylabel("$|f'(x) - \hat{f}'(x)|$", fontsize=16)
axes.legend(loc="best", fontsize=14)
axes.grid()
plt.show()

Computing Order of Convergence¶
Alternatively, we can fit our data to the best-fit line in log-log space to estimate the rate of convergence $n$. However, as these estimates are only valid asymptotically in the limit of small $\Delta x$, it is often most instructive to just fit the small-$\Delta x$ tails.
# Index of the largest value of delta_x to include in the fit
N_max = 3

# Compute the error as a function of delta_x
N_range = numpy.logspace(1, 4, 10, dtype=int)
delta_x = numpy.empty(N_range.shape)
error = numpy.empty((N_range.shape[0], 4))
for i, N in enumerate(N_range):
    x_hat = numpy.linspace(xmin, xmax, N)
    delta_x[i] = x_hat[1] - x_hat[0]
    # Compute forward difference
    D1f = D1_p(func, xmin, xmax, N)
    # Compute 2nd order difference
    D2f = D1_2(func, xmin, xmax, N)
    # Calculate the infinity norm or maximum error
    error[i, 0] = numpy.linalg.norm(numpy.abs(func_prime(x_hat) - D1f), ord=numpy.inf)
    error[i, 1] = numpy.linalg.norm(numpy.abs(func_prime(x_hat) - D2f), ord=numpy.inf)
error = numpy.array(error)
delta_x = numpy.array(delta_x)
# calculate linear fits to log(err) v. log(\Delta x)
p1 = numpy.polyfit(numpy.log(delta_x[N_max:]), numpy.log(error[N_max:, 0]), 1)
p2 = numpy.polyfit(numpy.log(delta_x[N_max:]), numpy.log(error[N_max:, 1]), 1)
# print(p1,p2)
fit = lambda delta_x, p: numpy.exp(p[1]) * delta_x ** p[0]
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_x, error[:, 0], "ro", label="$D_1^+$")
axes.loglog(delta_x, error[:, 1], "bo", label="$D_1^2$")
axes.loglog(delta_x, fit(delta_x, p1), "r--", label="n={:.2f}".format(p1[0]))
axes.loglog(delta_x, fit(delta_x, p2), "b--", label="n={:.2f}".format(p2[0]))
axes.loglog(delta_x[N_max], error[N_max, 0], "rs", markersize=15, fillstyle="none")
axes.loglog(delta_x[N_max], error[N_max, 1], "bs", markersize=15, fillstyle="none")
axes.legend(loc=4)
axes.set_title("Convergence of Finite Differences", fontsize=18)
axes.set_xlabel("$\Delta x$", fontsize=16)
axes.set_ylabel("$|f'(x) - \hat{f}'(x)|$", fontsize=16)
axes.legend(loc="best", fontsize=14)
axes.grid()
plt.show()

Another approach: The Method of Undetermined Coefficients¶
An alternative method for finding finite-difference formulas is by using Taylor series expansions about the point where we want to approximate the derivative. The Taylor series about $x_n$ is

$$f(x) = f(x_n) + (x - x_n) f'(x_n) + \frac{(x - x_n)^2}{2!} f''(x_n) + \frac{(x - x_n)^3}{3!} f'''(x_n) + \mathcal{O}((x - x_n)^4)$$
Say we want to derive the second order accurate, first derivative approximation that we just did; this requires the values $f(x_{n+1})$ and $f(x_{n-1})$. We can express these values via our Taylor series approximation above as

$$f(x_{n+1}) = f(x_n) + (x_{n+1} - x_n) f'(x_n) + \frac{(x_{n+1} - x_n)^2}{2!} f''(x_n) + \frac{(x_{n+1} - x_n)^3}{3!} f'''(x_n) + \mathcal{O}((x_{n+1} - x_n)^4)$$

or

$$f(x_{n+1}) = f(x_n) + \Delta x f'(x_n) + \frac{\Delta x^2}{2!} f''(x_n) + \frac{\Delta x^3}{3!} f'''(x_n) + \mathcal{O}(\Delta x^4)$$

and similarly for $f(x_{n-1})$. Or all together (for regularly spaced points),

$$f(x_{n+1}) = f(x_n) + \Delta x f'(x_n) + \frac{\Delta x^2}{2!} f''(x_n) + \frac{\Delta x^3}{3!} f'''(x_n) + \mathcal{O}(\Delta x^4)$$

$$f(x_{n-1}) = f(x_n) - \Delta x f'(x_n) + \frac{\Delta x^2}{2!} f''(x_n) - \frac{\Delta x^3}{3!} f'''(x_n) + \mathcal{O}(\Delta x^4)$$
Now to find out how to combine these into an expression for the derivative we assume our approximation looks like

$$f'(x_n) + R(x_n) = A f(x_{n+1}) + B f(x_n) + C f(x_{n-1})$$

where $R(x_n)$ is our error and $A$, $B$, and $C$ are our "undetermined coefficients". Plugging in the Taylor series approximations we find

$$f'(x_n) + R(x_n) = (A + B + C) f(x_n) + (A - C) \Delta x f'(x_n) + (A + C) \frac{\Delta x^2}{2!} f''(x_n) + (A - C) \frac{\Delta x^3}{3!} f'''(x_n) + \mathcal{O}(\Delta x^4)$$

Since we want $R(x_n) = \mathcal{O}(\Delta x^2)$ we want all terms lower than this to cancel, except for those multiplying $f'(x_n)$, as those should sum to 1 to give us our approximation. Collecting the terms with common evaluations of the derivatives of $f$ at $x_n$ we get a series of expressions for the coefficients $A$, $B$, and $C$:

$$A + B + C = 0, \qquad (A - C) \Delta x = 1, \qquad (A + C) \frac{\Delta x^2}{2} = 0$$

where the terms collected on $f(x_n)$ and $f''(x_n)$ are set to 0 as we want those terms to cancel.
Or as a linear algebra problem

$$\begin{bmatrix} 1 & 1 & 1 \\ \Delta x & 0 & -\Delta x \\ \frac{\Delta x^2}{2} & 0 & \frac{\Delta x^2}{2} \end{bmatrix} \begin{bmatrix} A \\ B \\ C \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$$
This last equation implies $A = -C$; using this in the second equation gives $A = \frac{1}{2 \Delta x}$ and $C = -\frac{1}{2 \Delta x}$. The first equation then leads to $B = 0$.
Putting this all together then gives us our previous expression, now including an estimate for the error:

$$\frac{f(x_{n+1}) - f(x_{n-1})}{2 \Delta x} = f'(x_n) + \frac{\Delta x^2}{3!} f'''(x_n) + \mathcal{O}(\Delta x^3)$$

so that we find

$$f'(x_n) = \frac{f(x_{n+1}) - f(x_{n-1})}{2 \Delta x} + \mathcal{O}(\Delta x^2)$$
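The undetermined coefficients can also be recovered numerically by solving the small linear system above (a sketch with $\Delta x = 0.1$, an arbitrary choice):

```python
import numpy

dx = 0.1
# Rows: constant terms, f' terms, f'' terms of the Taylor expansions
A = numpy.array([[1.0, 1.0, 1.0],
                 [dx, 0.0, -dx],
                 [dx**2 / 2, 0.0, dx**2 / 2]])
b = numpy.array([0.0, 1.0, 0.0])   # select the f'(x_n) term

coeffs = numpy.linalg.solve(A, b)  # [A, B, C] for f_{n+1}, f_n, f_{n-1}
print(coeffs)   # ~[1/(2 dx), 0, -1/(2 dx)] = [5, 0, -5]
```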
Another way...¶
There is one more way to derive the second order accurate, first derivative finite-difference formula. Consider the two first order forward and backward finite differences averaged together:

$$\frac{D_1^+ f(x_n) + D_1^- f(x_n)}{2} = \frac{f(x_{n+1}) - f(x_n) + f(x_n) - f(x_{n-1})}{2 \Delta x} = \frac{f(x_{n+1}) - f(x_{n-1})}{2 \Delta x}$$
Example 4: Higher Order Derivatives¶
Using our Taylor series approach, let's derive the second order accurate second derivative formula. Again we will use the same points and the Taylor series centered at $x_n$, so we end up with the same expression as before:

$$f''(x_n) + R(x_n) = A f(x_{n+1}) + B f(x_n) + C f(x_{n-1})$$

except this time we want the $f''(x_n)$ term to remain on the right hand side.
Try out the same trick as before and see if you can set up the equations that need to be solved.
Doing the same trick as before we have the following expressions:

$$A + B + C = 0, \qquad (A - C) \Delta x = 0, \qquad (A + C) \frac{\Delta x^2}{2} = 1$$

The second equation implies $A = C$, which combined with the third implies

$$A = C = \frac{1}{\Delta x^2}$$

Finally the first equation gives

$$B = -\frac{2}{\Delta x^2}$$

leading to the final expression

$$\frac{f(x_{n+1}) - 2 f(x_n) + f(x_{n-1})}{\Delta x^2} = f''(x_n) + \frac{\Delta x^2}{12} f''''(x_n) + \mathcal{O}(\Delta x^3)$$

so that

$$f''(x_n) = \frac{f(x_{n+1}) - 2 f(x_n) + f(x_{n-1})}{\Delta x^2} + \mathcal{O}(\Delta x^2)$$
def D2(func, x_min, x_max, N):
    """Calculate the consistent 2nd order second derivative of a function
    func(x) defined on the interval [x_min, x_max] and sampled at N evenly
    spaced points."""
    x = numpy.linspace(x_min, x_max, N)
    f = func(x)
    dx = x[1] - x[0]
    D2f = numpy.zeros(x.shape)
    D2f[1:-1] = (f[:-2] - 2 * f[1:-1] + f[2:]) / (dx**2)
    # patch up the end points by copying the neighboring values
    D2f[0] = D2f[1]
    D2f[-1] = D2f[-2]
    return D2f

f = lambda x: numpy.sin(x)
f_dubl_prime = lambda x: -numpy.sin(x)
# Use uniform discretization
x = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, 1000)
N = 80
x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N)
delta_x = x_hat[1] - x_hat[0]
# Compute derivative
D2f = D2(f, x_hat[0], x_hat[-1], N)

fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f(x), "b", label="$f(x)$")
axes.plot(x, f_dubl_prime(x), "k--", label="$f''(x)$")
axes.plot(x_hat, D2f, "ro", label="$D_2(f)$")
axes.set_xlim((x[0], x[-1]))
axes.set_ylim((-1.1, 1.1))
axes.legend(loc="best", fontsize=14)
axes.grid()
axes.set_title("Discrete Second derivative", fontsize=18)
axes.set_xlabel("$x$", fontsize=16)
plt.show()

The general case¶
In the general case we can use any $N + 1$ points to calculate consistent finite difference coefficients for approximating any derivative of order $k \leq N$. Relaxing the requirement of equal grid spacing (and the expectation that the location $\bar{x}$ where the derivative is evaluated is one of the grid points), the Taylor series expansions become

$$f(x_i) = f(\bar{x}) + \Delta x_i f'(\bar{x}) + \frac{\Delta x_i^2}{2!} f''(\bar{x}) + \cdots + \frac{\Delta x_i^N}{N!} f^{(N)}(\bar{x}) + \mathcal{O}(\Delta x_i^{N+1})$$

where $\Delta x_i = x_i - \bar{x}$ is the distance between the point $\bar{x}$ and each grid point $x_i$.
Equating terms of equal order reduces the problem to another Vandermonde matrix problem

$$\begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ \Delta x_0 & \Delta x_1 & \Delta x_2 & \cdots & \Delta x_N \\ \frac{\Delta x_0^2}{2!} & \frac{\Delta x_1^2}{2!} & \frac{\Delta x_2^2}{2!} & \cdots & \frac{\Delta x_N^2}{2!} \\ \vdots & & & & \vdots \\ \frac{\Delta x_0^N}{N!} & \frac{\Delta x_1^N}{N!} & \frac{\Delta x_2^N}{N!} & \cdots & \frac{\Delta x_N^N}{N!} \end{bmatrix} \mathbf{c} = \mathbf{b}_k$$

where $\mathbf{c}$ is the vector of finite difference coefficients and $\mathbf{b}_k$ is a vector of zeros with just a one in the $k$th position for the $k$th derivative.
By exactly accounting for the first $N + 1$ terms of the Taylor series (with $N + 1$ equations), we can get any order derivative $k \leq N$ as well as an error estimate for the resulting finite difference approximation.
This approach of "undetermined coefficients" can be efficiently coded up as a routine to provide consistent finite difference coefficients of any order $k$ for an arbitrarily spaced grid $x$.

Here we present a Python version of a MATLAB routine fdcoeffV.m from Randy LeVeque's excellent book Finite Difference Methods for Ordinary and Partial Differential Equations.
def fdcoeffV(k, xbar, x):
    """
    fdcoeffV routine modified from the Leveque (2007) MATLAB function

    Params:
    -------
    k: int
        order of derivative
    xbar: float
        point at which derivative is to be evaluated
    x: ndarray
        numpy array of coordinates to use in calculating the weights

    Returns:
    --------
    c: ndarray
        array of floats of coefficients.

    Compute coefficients for finite difference approximation for the
    derivative of order k at xbar based on grid values at points in x.

    WARNING: This approach is numerically unstable for large values of n since
    the Vandermonde matrix is poorly conditioned. Use fdcoeffF.m instead,
    which is based on Fornberg's method.

    This function returns a row vector c of dimension 1 by n, where n=length(x),
    containing coefficients to approximate u^{(k)}(xbar),
    the k'th derivative of u evaluated at xbar, based on n values
    of u at x(1), x(2), ... x(n).

    If U is an array containing u(x) at these n points, then
    c.dot(U) will give the approximation to u^{(k)}(xbar).

    Note for k=0 this can be used to evaluate the interpolating polynomial
    itself.

    Requires len(x) > k.

    Usually the elements x(i) are monotonically increasing
    and x(1) <= xbar <= x(n), but neither condition is required.
    The x values need not be equally spaced but must be distinct.

    Modified from http://www.amath.washington.edu/~rjl/fdmbook/ (2007)
    """
    from scipy.special import factorial

    n = x.shape[0]
    assert k < n, "The order of the derivative must be less than the stencil width"

    # Generate the Vandermonde matrix from the Taylor series
    A = numpy.ones((n, n))
    xrow = x - xbar  # displacements x-xbar
    for i in range(1, n):
        A[i, :] = (xrow**i) / factorial(i)
    b = numpy.zeros(n)  # b is right hand side,
    b[k] = 1  # so k'th derivative term remains
    c = numpy.linalg.solve(A, b)  # solve n by n system for coefficients
    return c

N = 11
x = numpy.linspace(-2 * numpy.pi, 2.0 * numpy.pi, N)
k = 1
scale = (x[1] - x[0]) ** k
print(fdcoeffV(k, x[0], x[:3]) * scale)
for j in range(k, N - 1):
    print(fdcoeffV(k, x[j], x[j - 1 : j + 2]) * scale)
print(fdcoeffV(k, x[-1], x[-3:]) * scale)

Example: A variably spaced mesh¶
N = 21
y = numpy.linspace(-0.95, 0.95, N)
x = numpy.arctanh(y)

fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.zeros(x.shape), "bo-")
axes.plot(x, y, "ro-")
axes.grid()
axes.set_xlabel("$x$")
axes.set_ylabel("$y$")
plt.show()

k = 1
fd = fdcoeffV(k, x[0], x[:3])
print("{}, sum={}".format(fd, fd.sum()))
for j in range(1, N - 1):
    fd = fdcoeffV(k, x[j], x[j - 1 : j + 2])
    print("{}, sum={}".format(fd, fd.sum()))
fd = fdcoeffV(k, x[-1], x[-3:])
print("{}, sum={}".format(fd, fd.sum()))

Application to Numerical PDEs¶
Given an efficient way to generate finite difference coefficients, these coefficients can be stored in a (usually sparse) matrix $D_k$ such that, given any discrete vector of function values $\mathbf{f}$, we can calculate the approximate $k$th derivative as simply the matrix-vector product

$$\mathbf{f}^{(k)} \approx D_k \mathbf{f}$$
This technique will become extremely useful when solving basic finite difference approximations to differential equations (as we will explore in future lectures and homeworks).
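For example, the centered second-derivative operator can be assembled as a sparse matrix with `scipy.sparse` (a sketch covering interior rows only; in practice the boundary rows would be replaced by one-sided stencils):

```python
import numpy
from scipy.sparse import diags

N = 100
x = numpy.linspace(0, numpy.pi, N)
dx = x[1] - x[0]

# Tridiagonal [1, -2, 1]/dx^2 operator for the second derivative
D2 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2

u = numpy.sin(x)
u_xx = D2.dot(u)

# Interior rows approximate u'' = -sin(x) to O(dx^2)
err = numpy.abs(u_xx[1:-1] + numpy.sin(x[1:-1])).max()
print(err)
```

Because each row holds only three nonzero entries, the sparse format keeps both storage and the matrix-vector product $\mathcal{O}(N)$ rather than $\mathcal{O}(N^2)$.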
The Bigger idea¶
More generally, using finite differences we can transform a continuous differential operator on a function space

$$v = \frac{d}{dx} u$$

which maps a function to a function, into a discrete linear algebraic problem

$$\mathbf{v} = D \mathbf{u}$$

where $\mathbf{u}$ and $\mathbf{v}$ are discrete approximations to the continuous functions $u$ and $v$, and $D$ is a discrete differential operator (matrix) which maps a vector to a vector.
