Multigrid Method

Here we present some notes about the multigrid method.

The mmf.math.multigrid module provides efficient ways of solving Poisson-type equations:

\left(-\nabla^2 + \mat{D}(x)\right)\vect{f}(x) = \vect{b}(x).

The geometric version of the problem discretizes the left-hand side on a sequence of successively finer grids.

Details

Here we implement some simple multigrid solvers. These solve a series of related linear problems

\mat{A}^{(i)}x^{(i)} = b^{(i)}

where the matrices \mat{A}^{(i)} have special properties, typically representing derivative operators such as the Laplacian on a series of grids of increasing size.

Formally, the problem requires a set of three operators:

R^{(i)}: b^{(i)}\mapsto b^{(i-1)}: A restriction operator that takes a right-hand side of the problem on a finer grid to a right-hand side on a coarser grid.
S^{(i)}: x^{(i)}\mapsto x^{(i)}: A smoothing operator that removes high-frequency components of the error from the solution.
I^{(i+1)}: x^{(i)}\mapsto x^{(i+1)}: An interpolation operator that takes a solution on a coarser grid to a solution on the finer grid.

Typically these are linear operators (and we shall consider only linear operators here) and should satisfy the Galerkin condition:

\mat{A}^{(i)} =
   \mat{R}^{(i+1)}\cdot\mat{A}^{(i+1)}\cdot\mat{I}^{(i+1)}

which serves to define the coarse grid operators A^{(i)} in terms of the finer grid operator. A second condition is usually imposed that the interpolation and restriction operations be related by transposition:

\mat{R} = c \mat{I}^T.

This helps with the analysis by defining a variational property:

Note

I think this states that the solution to the coarse problem provides the minimum error solution to the fine problem when interpolated. In other words, if we only consider solutions of the form \vect{x} = \mat{I}\vect{x}_{0}, then the best solution in this subspace (in the sense of minimizing the error \norm{\vect{x}^{*} - \mat{I}\vect{x}_{0}} relative to the true solution \vect{x}^{*}) will be the interpolation of the solution \vect{x}_{0} to the coarsened problem. However, I can’t prove this right now.

As we shall see, the Galerkin condition is not always easily maintained if efficient matrix operations are desired.
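As a concrete check, the Galerkin condition holds exactly for the standard 1d Dirichlet Laplacian with linear interpolation \mat{I} and full-weighting restriction \mat{R} = \tfrac{1}{2}\mat{I}^T; here is a minimal numpy sketch (illustrative code, independent of the module):

import numpy as np

def laplacian(n, h):
    # The [1, -2, 1]/h^2 Dirichlet Laplacian on n interior points.
    return (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1))/h**2

def interpolation(n_c):
    # Linear interpolation from n_c coarse points to 2*n_c + 1 fine points.
    I = np.zeros((2*n_c + 1, n_c))
    for j in range(n_c):
        i = 2*j + 1                 # coarse point j sits at fine index 2j+1
        I[i, j] = 1.0
        I[i - 1, j] = I[i + 1, j] = 0.5
    return I

n_c, h = 31, 1.0/64                 # fine grid: 63 interior points, spacing h
I = interpolation(n_c)
R = 0.5*I.T                         # full weighting
A_coarse = np.dot(R, np.dot(laplacian(2*n_c + 1, h), I))
assert np.allclose(A_coarse, laplacian(n_c, 2*h))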

The smoothing operator \mat{S} must provide a relaxation method for the problem. The efficiency of the multigrid method relies on each of these operations being applicable in \order(N) time, where N=n^d is the number of unknowns. They are typically just weighted averages of nearest neighbours, which respects this requirement.

The idea is to use S to damp the high-frequency errors, then to apply the algorithm recursively, using R and I to move to a coarser grid. On the coarser grid, the low-frequency errors become high-frequency errors which are again damped. Typically, this is accomplished with a weighted Jacobi algorithm or red-black ordered Gauss-Seidel. One forms an update

x \mapsto S(x) = \mat{R}x + c

that is idempotent on the solution x^*. This ensures that errors propagate as x = x^{*} + \delta \rightarrow x^{*} + \mat{R}\delta. The eigenvalues of \mat{R} thus control the convergence: one chooses the iteration in such a way that the eigenvalues of the high-frequency components lie substantially between 1 and -1. The choice of \mat{R} is discussed in detail in the section Smoothing below.
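For concreteness, here is a minimal sketch of one such smoothing sweep using damped Jacobi with \mat{P} = \diag(\mat{A})^{-1} (standalone illustrative code, not the module's implementation):

import numpy as np

def jacobi_smooth(A, b, x, w=2.0/3.0, sweeps=3):
    # Damped Jacobi: x -> x - w*P*(A x - b) with P = diag(A)^{-1}.
    # The update is idempotent on the solution A x* = b, and applying
    # the diagonal P costs only O(N).
    P = 1.0/np.diag(A)
    for _ in range(sweeps):
        x = x - w*P*(np.dot(A, x) - b)
    return x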

Discretization

The first step, however, is to consider the discretization of the Poisson operator \op{T} = \nabla^2. This should have several properties:

  1. As the Laplacian is a symmetric operator (in the sense that all the eigenvalues are real), the discretization should preserve this property. This requires a fairly careful implementation of the boundary conditions.

  2. The Laplacian should preserve charge neutrality with applicable boundary conditions: the finite version of integration-by-parts:

    \int \nabla^2 f = \int_{\partial} \nabla f\cdot\uvect{n}

    Thus, with Neumann boundary conditions or without a boundary (for example, if one has periodic boundary conditions), for any vector \vect{f} one should have \sum_{i}[\mat{T}\cdot\vect{f}]_{i} = 0, which means \sum_{i} T_{ij} = 0. (For Dirichlet boundary conditions, one must include the boundary contribution and so this relationship will not hold at the boundaries.)

    In addition, the equation \nabla^2 f = q will have a solution iff \int q = 0. This means that \mat{T} must have a singular direction. This must be considered when solving such systems.

The second question is the order of the discretization. Apparently, for the multigrid method, higher order is not very helpful, so we content ourselves with the symmetric differences scheme.

We consider the following operators for the second derivatives which satisfy these properties. Each of these can be expressed as \mat{T} = (\mat{D}^{+}\mat{D}^{-} + \mat{D}^{-}\mat{D}^{+})/2 in terms of forward and backward difference operators:

\begin{aligned}
  \mat{T}_{\text{Dirichlet}} &= \begin{pmatrix}
   -2 &  1 & \\
    1 & -2 & 1\\
      & \ddots & \ddots & \ddots\\
      &    &  1 & -2 &  1\\
      &    &    &  1 & -2
  \end{pmatrix}, &
  \mat{D}^{+} &=  \begin{pmatrix}
   -1 &  1 & \\
      & -1 & 1\\
      &    & \ddots & \ddots\\
      &    &   & -1 &  1\\
      &    &    &   & -1
  \end{pmatrix}, &
  \mat{D}^{-} &=  \begin{pmatrix}
    1 & \\
   -1 &  1\\
      &  \ddots & \ddots\\
      &   & -1 &  1\\
      &   &   & -1 & 1
  \end{pmatrix}, \\
  \mat{T}_{\text{Neumann}} &= \begin{pmatrix}
   -1 &  1 & \\
    1 & -2 & 1\\
      & \ddots & \ddots & \ddots\\
      &    &  1 & -2 &  1\\
      &    &    &  1 & -1 \\
  \end{pmatrix}, &
  \mat{D}^{+} &=  \begin{pmatrix}
   -1 &  1 & \\
      & -1 & 1\\
      &    & \ddots & \ddots\\
      &    &   & -1 &  1\\
      &    &  \tfrac{1}{8}  & \tfrac{1}{4}  & \tfrac{1}{2}
  \end{pmatrix}, &
  \mat{D}^{-} &=  \begin{pmatrix}
    \tfrac{1}{2} & -\tfrac{1}{4} & -\tfrac{1}{8} \\
   -1 &  1\\
      &  \ddots & \ddots\\
      &   & -1 &  1\\
      &   &   & -1 & 1
  \end{pmatrix}, \\
  \mat{T}_{\text{periodic}} &= \begin{pmatrix}
   -2 &  1 &    &    &  1\\
    1 & -2 & 1\\
      & \ddots & \ddots & \ddots\\
      &    &  1 & -2 &  1\\
    1 &    &    &  1 & -2 \\
  \end{pmatrix}, &
  \mat{D}^{+} &=  \begin{pmatrix}
   -1 &  1 & \\
      & -1 & 1\\
      &    & \ddots & \ddots\\
      &    &   & -1 &  1\\
    1 &    &    &   & -1
  \end{pmatrix}, &
  \mat{D}^{-} &=  \begin{pmatrix}
    1 &   &   &   & -1\\
   -1 &  1\\
      &  \ddots & \ddots\\
      &   & -1 &  1\\
      &   &   & -1 & 1
  \end{pmatrix}.
\end{aligned}
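For instance, a quick numpy check confirms that \mat{T}_{\text{Neumann}} satisfies both the symmetry and neutrality criteria (a small sketch):

import numpy as np

n = 6
T = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
T[0, 0] = T[-1, -1] = -1.0            # Neumann end rows (-1, 1) and (1, -1)
assert np.allclose(T, T.T)            # symmetry
assert np.allclose(T.sum(axis=0), 0)  # neutrality: columns sum to zero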

We must, however, determine the consistent abscissa. This is done by considering the plane-wave wavefunctions e^{ikx}. For intermediate points with spacing \delta we immediately find the eigenvalues

\lambda_{k} = \frac{\mat{T}e^{ikx}}{e^{ikx}}
            = \frac{e^{ik(x-\delta)} + e^{ik(x+\delta)}-2e^{ikx}}{e^{ikx}}
            = 2(\cos k\delta - 1).

We now consider the endpoints. Let the left endpoint be at x=x_0; we then have the following conditions obtained at the left endpoint, with f = \sin for Dirichlet and f=\cos for Neumann boundary conditions respectively (the periodic conditions place no restrictions):

\begin{aligned}
  \lambda_k &= \frac{-2\sin(kx_0) + \sin(kx_0 + k\delta)}{\sin(kx_0)}
             = -2 + \cos(k\delta)+ \cot(kx_0)\sin(k\delta)\\
  \lambda_k &= \frac{-\cos(kx_0) + \cos(kx_0 + k\delta)}{\cos(kx_0)}
             = -1 + \cos(k\delta) - \tan(kx_0)\sin(k\delta).
\end{aligned}

A consistent discretization will have a solution for all k which leads to the following consistency conditions:

Dirichlet

Here we have \cot(k\delta) = \cot(kx_0), which is always satisfied if x_{0} = \delta:

o    x    x    x   ...   x    o
|----|----|----|-- ... --|----|
0   1d   2d   3d   ...  Nd    L

Hence, given the interval [0,L] with the Dirichlet conditions f(0) = f(L) = 0, the discretization is valid for the lattice points x_n = n\delta for n=1,\dots,N with \delta = L/(N+1).
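As a quick check of the Dirichlet case (in the same spirit as the Neumann check below), we can reconstruct \mat{T}_{\text{Dirichlet}} from its sine eigenvectors:

>>> import numpy as np
>>> L = 1.0
>>> N = 4
>>> d = L/(N + 1)
>>> x = np.arange(1, N + 1)*d
>>> k = np.pi*np.arange(1, N + 1)/L
>>> V = np.sin(k[None, :]*x[:, None])
>>> V = V/np.sqrt(np.diag(np.dot(V.T, V)))
>>> E = 2*(np.cos(k*d) - 1)
>>> T = np.dot(np.dot(V, np.diag(E)), V.T)
>>> T_exact = (np.diag(-2.0*np.ones(N)) + np.diag(np.ones(N - 1), 1)
...            + np.diag(np.ones(N - 1), -1))
>>> np.allclose(T, T_exact)
True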

Neumann

Here we have \tan\frac{k\delta}{2} = \tan(kx_0), which is always satisfied if x_0 = \delta/2:

o  x    x    x   ... x    x  o
|--|----|----|-- ... |----|--|
0 d/2 3d/2 5d/2  ...         L

Here is a little check of the Neumann case. We first form the eigenvectors and eigenvalues, and then show that we recover the desired form of operator:

>>> import numpy as np
>>> L = 1.0
>>> N = 4
>>> f = 0.5
>>> d = L/(N - 1 + 2*f)
>>> x0 = f*d
>>> x = np.ogrid[x0:L-x0:N*1j]
>>> n = np.arange(N)
>>> k = np.pi*n/L
>>> V = np.cos(k[None,:]*x[:,None])
>>> V = 1/np.sqrt(np.diag(np.dot(V.T, V)))*V
>>> E = 2*(np.cos(k*d) - 1).ravel()
>>> T = np.dot(np.dot(V, np.diag(E)), V.T).round(2)
>>> T
>>> T
array([[-1.,  1., -0.,  0.],
       [ 1., -2.,  1., -0.],
       [-0.,  1., -2.,  1.],
       [ 0., -0.,  1., -1.]])

One difficulty with the Neumann boundary conditions is that they don’t lend themselves to trivial restriction and interpolation: there is no way of removing lattice sites while preserving the ratio x_0/d = 1/2. One has several options:

  1. Choose a different parametrization of \mat{T} that lends itself to simple restriction and interpolation. One way to do this is to include the endpoint as part of the grid (i.e. x_0=0). The discretization is then

    \mat{T}_{\text{Neumann}} = \begin{pmatrix}
   -2 &  2 & \\
    1 & -2 & 1\\
      &  1 & -2 & 1\\
      &    & \ddots & \ddots & \ddots\\
      &    &    &  1 & -2 &  1\\
      &    &    &    &  2 & -2 \\
\end{pmatrix}

    Unfortunately this fails both the symmetry and neutrality criteria.

  2. Keep the current form of diagonalization but choose a modified restriction and interpolation to preserve the grid properties. The complication here is that the lattice points of the coarser grid will not coincide with those of finer grids.

  3. Keep the current form of diagonalization and the trivial restriction and interpolation and hope that the method will work. This may still work if we impose the Galerkin principle. Let’s try this in one dimension on a problem posed either with periodic boundary conditions on [0,2] or with Neumann boundary conditions on [0,1]. We use the following refinement with N = 2^{n-1}n_0 + 1 grid-points:

    |        x               x       |       n0     = 2
        |    x       x       x   |         1 n0 + 1 = 3
           | x   x   x   x   x |           2 n0 + 1 = 5
            |x x x x x x x x x|            4 n0 + 1 = 9
            0-----------------L

    Note that the length of the grid varies from level to level: our hope is that enforcing the Galerkin principle will suffice to ensure good convergence behaviour. The naïve interpolation and restriction operators preserve the form of \mat{-\nabla^2} and so will work with the first form of the problem.

    \mat{I} = \begin{pmatrix}
    1\\
    \tfrac{1}{2} & \tfrac{1}{2}\\
                 & 1  \\
                 & \tfrac{1}{2} & \tfrac{1}{2}\\
                 & & \ddots\\
   & & & & \tfrac{1}{2} & \tfrac{1}{2}\\
   & & & & & 1
 \end{pmatrix} \qquad
\mat{R} = \tfrac{1}{2}\mat{I}^{T} = \frac{1}{2}\begin{pmatrix}
    1 & \tfrac{1}{2}\\
    & \tfrac{1}{2} & 1 & \tfrac{1}{2}\\
    && \ddots\\
    & & & & \tfrac{1}{2} & 1 & \tfrac{1}{2}\\
    & & & & & & 1
 \end{pmatrix}

    These are probably not very good in that they do not preserve constants at the boundaries, but maybe the Galerkin principle will come to the rescue.
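To make the procedure concrete, the following schematic two-grid cycle combines damped-Jacobi smoothing with the Galerkin coarse-grid operator \mat{A}_c = \mat{R}\mat{A}\mat{I} (a hypothetical sketch, not the module's implementation; it assumes \mat{A} is non-singular, e.g. includes a mass term m^2 > 0):

import numpy as np

def two_grid(A, b, x, I, R, w=2.0/3.0, sweeps=2):
    P = 1.0/np.diag(A)                         # Jacobi pre-conditioner
    for _ in range(sweeps):                    # pre-smooth
        x = x - w*P*(np.dot(A, x) - b)
    r = b - np.dot(A, x)                       # fine-grid residual
    A_c = np.dot(R, np.dot(A, I))              # Galerkin coarse operator
    x = x + np.dot(I, np.linalg.solve(A_c, np.dot(R, r)))
    for _ in range(sweeps):                    # post-smooth
        x = x - w*P*(np.dot(A, x) - b)
    return x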

Skewed Metric

Another complication arises with a non-trivial metric, which we permit in order to support distorted lattices. We allow the user to provide a matrix \mat{L} whose columns are the lattice vectors. This gives rise to \mat{\delta}, the matrix of displacements (roughly, \mat{\delta} = \mat{L}/N where N is the number of segments; one must take care to include x_0 appropriately).

Consider a 2d grid with basis vectors \vect{a} and \vect{b}:

\mat{\delta} = \begin{pmatrix}
  \vect{a} & \vect{b}
\end{pmatrix}.

Let \mat{D} be the operator [D]_{ij} = \nabla_{i}\nabla_{j}f about the point of interest so that

f(\vect{a}) = f + \vect{a}\cdot\vect{\nabla} f
              + \tfrac{1}{2}\vect{a}^T\mat{D}\vect{a} + \order(a^3).

We have the following relationships valid to order \order(a^4) (where we have used the fact that \mat{D} = \mat{D}^T):

\begin{aligned}
   f(\vect{a}) + f(-\vect{a}) - 2f &= \vect{a}^T\mat{D}\vect{a},\\
   f(\vect{b}) + f(-\vect{b}) - 2f &= \vect{b}^T\mat{D}\vect{b},\\
   f(\vect{a}\pm\vect{b}) + f(-\vect{a}\mp\vect{b}) - 2f
     &= \vect{a}^T\mat{D}\vect{a}
     + \vect{b}^T\mat{D}\vect{b} \pm 2\vect{a}^T\mat{D}\vect{b}.
\end{aligned}

We can compute the Laplacian as follows:

\nabla^2 = \tr[\mat{D}] = \tr\left[
  [\mat{\delta}^{-1}]^T\begin{pmatrix}
    \vect{a}^T\mat{D}\vect{a} & \vect{a}^T\mat{D}\vect{b}\\
    \vect{b}^T\mat{D}\vect{a} & \vect{b}^T\mat{D}\vect{b}
  \end{pmatrix}
  \mat{\delta}^{-1}
\right].

Since the Laplacian is symmetric, \vect{b}^T\mat{D}\vect{a} = \vect{a}^T\mat{D}\vect{b}. Furthermore, the multidimensional matrices may be constructed from the lower dimensional versions using the tensor product. Thus, we really only need a one dimensional stencil for \mat{D}_{aa} = \vect{a}^T\mat{D}\vect{a} and a two dimensional stencil for \mat{D}_{ab} = \vect{a}^T\mat{D}\vect{b}.
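Both the difference relations and the trace formula are exact for a quadratic test function, which gives a simple numerical check (a sketch; the vectors below are arbitrary):

import numpy as np

rng = np.random.RandomState(0)
D = rng.rand(2, 2); D = D + D.T                  # symmetric Hessian
g = rng.rand(2)                                  # gradient at the origin
f = lambda x: 0.5*np.dot(x, np.dot(D, x)) + np.dot(g, x)
z = np.zeros(2)

a, b = np.array([1.0, 0.3]), np.array([-0.2, 0.9])   # skewed lattice vectors
aDa = np.dot(a, np.dot(D, a))
bDb = np.dot(b, np.dot(D, b))
aDb = np.dot(a, np.dot(D, b))
assert np.isclose(f(a) + f(-a) - 2*f(z), aDa)
assert np.isclose(f(a + b) + f(-a - b) - 2*f(z), aDa + bDb + 2*aDb)

delta = np.array([a, b]).T                       # columns are lattice vectors
M = np.array([[aDa, aDb], [aDb, bDb]])           # matrix of quadratic forms
d_inv = np.linalg.inv(delta)
assert np.isclose(np.trace(np.dot(d_inv.T, np.dot(M, d_inv))), np.trace(D))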

To be explicit, we compute all the two-dimensional stencils. Let us reshape the operator [T]_{x_a, y_a; x_b,y_b} over the coordinates. It is customary to consider the following 9-point stencil representing the averaging of the surrounding cells about the point (a,b):

\mat{S}^{a,b} = [T]_{a,b;[a-1:a+1],[b-1:b+1]} =
\begin{pmatrix}
NW & N & NE\\
W & C & E\\
SW & S & SE
\end{pmatrix}
=
\begin{pmatrix}
S^{a,b}_{-,-} & S^{a,b}_{-,0} & S^{a,b}_{-,+}\\
S^{a,b}_{0,-} & S^{a,b}_{0,0} & S^{a,b}_{0,+}\\
S^{a,b}_{+,-} & S^{a,b}_{+,0} & S^{a,b}_{+,+}
\end{pmatrix}.

Denote the indices s_{a}, s_{b} \in \{-, 0, +\}; the symmetry and neutrality constraints then have the form:

\begin{aligned}
  S^{a+s_a, b+s_b}_{-s_a, -s_b} &= S^{a,b}_{s_a, s_b}, &
  \sum_{s_a, s_b} S^{a+s_a, b+s_b}_{-s_a, -s_b} &= 0.
\end{aligned}

For example, in the interior, we have a single stencil \mat{S}^{a,b} = \mat{S} for all 0 < a,b < N-1, so the conditions reduce to S_{-s_a, -s_b} = S_{s_a, s_b} and \sum_{s_a, s_b} S_{s_a, s_b} = 0. The constructions are linear, so we construct a “basis” of stencils representing the various combinations \braket{a,b|\mat{D}|a,b} that satisfy these properties. Then the linear combinations required to compute \tr{\mat{D}} will by construction also satisfy the properties. In the interior we have the stencils \mat{S}:

\begin{aligned}
   \vect{a}^{T}\mat{D}\vect{a}:\quad
   \mat{S} &=
   \begin{pmatrix}
      0 & 0 & 0\\
      1 & -2 & 1\\
      0 & 0 & 0
   \end{pmatrix},&
   \vect{b}^{T}\mat{D}\vect{b}:\quad
   \mat{S} &=
   \begin{pmatrix}
      0 & 1 & 0\\
      0 & -2 & 0\\
      0 & 1 & 0
   \end{pmatrix},&
   \vect{a}^{T}\mat{D}\vect{b}:\quad
   \mat{S} &=
   \tfrac{1}{2}
   \begin{pmatrix}
      -1 & 0 & 1\\
      0 & 0 & 0\\
      1 & 0 & -1
   \end{pmatrix}
 \end{aligned}.
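A quick check that these interior stencils satisfy the symmetry and neutrality constraints (a small sketch):

import numpy as np

S_aa = np.array([[0., 0, 0], [1, -2, 1], [0, 0, 0]])
S_bb = S_aa.T
S_ab = 0.5*np.array([[-1., 0, 1], [0, 0, 0], [1, 0, -1]])
for S in (S_aa, S_bb, S_ab):
    assert np.allclose(S, S[::-1, ::-1])   # S_{-s_a,-s_b} = S_{s_a,s_b}
    assert np.isclose(S.sum(), 0)          # neutrality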

We note that, to order \order(\delta^4), there is also a “zero” stencil:

\mat{0}:\quad \mat{S} =
\begin{pmatrix}
  1 & -2 & 1\\
  -2 & 4 & -2\\
  1 & -2 & 1
\end{pmatrix}

For the Neumann boundary conditions we have the following set of stencils (plus rotations) at the edges:

\begin{aligned}
   \mat{S}_{aa} &=
   \begin{pmatrix}
      0 & 0 & 0\\
      0 & -1 & 1\\
      0 & 0 & 0
   \end{pmatrix},&
   \mat{S}_{bb} &=
   \begin{pmatrix}
      0 & 1 & 0\\
      0 & -2 & 0\\
      0 & 1 & 0
   \end{pmatrix},&
   \mat{S}_{ab} &=
   \tfrac{1}{2}
   \begin{pmatrix}
      0 & -1 & 1\\
      0 & 0 & 0\\
      0 & 1 & -1
   \end{pmatrix}, &
   \mat{S}_{0} &=
   \begin{pmatrix}
     0 & -1 & 1\\
     0 & 2 & -2\\
     0 & -1 & 1
   \end{pmatrix},
 \end{aligned}

and at the corner we have:

\begin{aligned}
   \mat{S}_{aa} &=
   \begin{pmatrix}
      0 & 0 & 0\\
      0 & -1 & 1\\
      0 & 0 & 0
   \end{pmatrix},&
   \mat{S}_{bb} &=
   \begin{pmatrix}
      0 & 1 & 0\\
      0 & -1 & 0\\
      0 & 0 & 0
   \end{pmatrix},&
   \mat{S}_{ab} &=
   \begin{pmatrix}
      0 & -1 & 1\\
      0 & 1 & -1\\
      0 & 0 & 0
   \end{pmatrix}.
 \end{aligned}

Now we consider the boundary conditions \mat{S}^{0,a}. The most difficult are the lattice Neumann conditions where we have at the point -\vect{a}/2 (see the discussion above about the 1d discretization) that \vect{a}\cdot\vect{\nabla}f = 0. This gives us the additional relationship \vect{a}^T\mat{D}\vect{a} = 2\vect{a}\cdot\vect{\nabla}f and so we have:

\begin{aligned}
   f(\vect{a}) - f &= \vect{a}^T\mat{D}\vect{a},\\
   f(\vect{b}) + f(-\vect{b}) - 2f &= \vect{b}^T\mat{D}\vect{b},\\
   f(\vect{a}\pm\vect{b}) - f(\pm\vect{b})
     &= \vect{a}^T\mat{D}\vect{a}
     \pm \vect{a}^T\mat{D}\vect{b}.
\end{aligned}

These give the following equivalent forms for the boundary stencils:

\begin{aligned}
   \vect{a}^{T}\mat{D}\vect{a} &=
   \begin{pmatrix}
      0 & 0 & 0\\
      0 & -1 & 1\\
      0 & 0 & 0
   \end{pmatrix},\\
   \vect{b}^{T}\mat{D}\vect{b} &=
   \begin{pmatrix}
      0 & 1 & 0\\
      0 & -2 & 0\\
      0 & 1 & 0
   \end{pmatrix},\\
   \vect{a}^{T}\mat{D}\vect{b} = \vect{b}^{T}\mat{D}\vect{a}
   &= \tfrac{1}{2}
   \begin{pmatrix}
      0 & -1 & 1\\
      -1 & 2 & -1\\
      1 & -1 & 0
   \end{pmatrix}
   =
   \tfrac{1}{2}
   \begin{pmatrix}
      -1 & 1 & 0\\
      1 & -2 & 1\\
      0 & 1 & -1
   \end{pmatrix}
   =
   \tfrac{1}{2}
   \begin{pmatrix}
      -1 & 0 & 1\\
      0 & 0 & 0\\
      1 & 0 & -1
   \end{pmatrix}
 \end{aligned}.
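These relations can again be checked with a quadratic test function whose gradient is chosen to satisfy the lattice Neumann condition \vect{a}\cdot\vect{\nabla}f = 0 at -\vect{a}/2 (a small sketch with arbitrary \vect{a}, \vect{b}):

import numpy as np

rng = np.random.RandomState(1)
D = rng.rand(2, 2); D = D + D.T
a, b = np.array([1.0, 0.2]), np.array([0.1, 1.0])
# Project the gradient so that a.grad(f) = a.D.a/2, which is the
# lattice Neumann condition a.grad(f)(-a/2) = 0:
g = rng.rand(2)
g = g + (0.5*np.dot(a, np.dot(D, a)) - np.dot(g, a))*a/np.dot(a, a)
f = lambda x: 0.5*np.dot(x, np.dot(D, x)) + np.dot(g, x)
aDa = np.dot(a, np.dot(D, a))
aDb = np.dot(a, np.dot(D, b))
assert np.isclose(f(a) - f(np.zeros(2)), aDa)
assert np.isclose(f(a + b) - f(b), aDa + aDb)
assert np.isclose(f(a - b) - f(-b), aDa - aDb)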

From this we can form the coordinate transformation \mat{R}:

\mat{R} = \mat{\delta}^{-1}\cdot[\mat{\delta}^{-1}]^{T}

The Laplacian is then

\mat{\nabla}^2 = \mat{R}\cdot\mat{T}\cdot\mat{R}

where the matrices are broadcast appropriately such that the summation is over the coordinate axes. This will preserve the symmetry and neutrality properties by construction.

Note

The Neumann boundary conditions will not be strictly preserved by this: the derivative will not vanish along the normal to the boundary (the usual condition) but will instead vanish along the lattice directions. This is actually what we want because we are using the Neumann cell to model an octant of the full unit cell in a periodic system.

Smoothing

We first consider the form:

\op{A} = -\nabla^2 + m^2 = -\op{T} + m^2.

Generally \op{T} has eigenvalues

\lambda_{T} \approx \frac{-4}{\delta_x^2} \sin^2(n\pi/N),\\
\delta_x^{2} = \frac{1}{\tr\mat{g}^{-1}\cdot\mat{g}^{-1}}.

Note

The arbitrary metric and dimensionality is incorporated into the factor \delta_x^2. To see this, consider that the trace sums over the dimensions of the Laplacian, with each dimension contributing a factor of 1/\delta_x^2.

import numpy as np
import matplotlib.pyplot as plt

import mmf.math.multigrid
mg = mmf.math.multigrid.MG(
    d=1, n_0=1, n=5, L=0.1, boundary_conditions='periodic')
T = -mg.get_mats()[0][-1].toarray()
#T = -mg.to_mat(mg.x()[0], mg.T)
assert np.allclose(T,T.T)
E = np.linalg.eigvalsh(T)
dx_inv2 = np.trace(np.dot(mg.dx_inv().T,mg.dx_inv()))
plt.plot(np.arange(len(E),dtype=float)/len(E),
         E/(4*dx_inv2),
         'x',
         label="d=%i" % (mg.d,))

mg.initialize(n=3, d=3, n_0=(1,1,1))
T = -mg.get_mats()[0][-1].toarray()
#T = -mg.to_mat(mg.x()[0], mg.T)
assert np.allclose(T,T.T)
E = np.linalg.eigvalsh(T)
dx_inv2 = np.trace(np.dot(mg.dx_inv().T,mg.dx_inv()))
plt.plot(np.arange(len(E),dtype=float)/len(E),
         E/(4*dx_inv2),
         '.',
         label="d=%i" % (mg.d,))

plt.legend(loc='upper left')
plt.title("Eigenvalues of T")
plt.ylabel(r"Eigenvalue/($-4/\delta_x^2$)")

[Plot: eigenvalues of \mat{T} scaled by -4/\delta_x^2 for d=1 and d=3.]

In order to ensure idempotency, we want a smoothing operation of the form

x \mapsto x - w\mat{P}(\mat{A}x - b)
  = \mat{R}_{w} x + w\mat{P}b

where \mat{R}_{w} = \mat{1} - w\mat{P}\mat{A}. This trivially satisfies the idempotency requirement. The goal is to choose the weight w and the matrix \mat{P} (pre-conditioner), which must be cheap to apply (of order N cost to compute \mat{P}x), such that the eigenvalues of \mat{R}_{w} corresponding to high-frequency components have magnitude less than one. In the case of only a constant mass term m^2 we may take \mat{P} = \mat{1} to obtain

\lambda_{R} = 1 - w\left(m^2
+ \frac{4\sin^2(n\pi/N)}{\delta_x^2}\right).

Since our “multigrid” involves coarsening the grid by a factor of two, we wish to minimize the magnitude over the upper half of the frequencies, defining a fitness:

\mu \approx \max_{\theta\in[\pi/2, \pi]} \abs{\lambda_R}
    = \max_{\theta\in[\pi/2, \pi]} \Abs{1 - w\left(m^2
+ \frac{4\sin^2(\theta/2)}{\delta_x^2}\right)}.

The extrema occur at the endpoints, and a little inspection tells us that the optimal weight must satisfy:

\begin{aligned}
  1 - w\left(m^2
    + \frac{2}{\delta_x^2}\right) &=
  -1 + w\left(m^2
    + \frac{4}{\delta_x^2}\right),\\
  w &= \frac{1}{m^2 + \frac{3}{\delta_x^2}},\\
  \mu_{\text{min}} &=
  \frac{\frac{1}{\delta_x^2}}
  {m^2 + \frac{3}{\delta_x^2}}.
\end{aligned}

For reasonable problems, in the limit of large lattices, the \delta_x^{-2} terms dominate, and this approaches \mu_{\text{min}} \rightarrow 1/3.
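A quick numerical check of the optimal weight and the limiting smoothing factor (a small standalone sketch):

import numpy as np

m2, dx2 = 1.0, 0.01**2
w = 1.0/(m2 + 3.0/dx2)                       # optimal weight from above
theta = np.linspace(np.pi/2, np.pi, 1001)    # upper half of the frequencies
lam = 1 - w*(m2 + 4*np.sin(theta/2)**2/dx2)
mu = abs(lam).max()
assert np.isclose(mu, (1.0/dx2)/(m2 + 3.0/dx2))
print(mu)                                    # ~1/3 for small dx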

This suggests that as a general pre-conditioner we might use:

\mat{P} = \left(\frac{2}{\delta_{x}^2} + \frac{2}{3}\mat{D}\right)^{-1}

with a weight of w=2/3. The more standard pre-conditioner would be

\mat{P} = \left(\frac{2}{\delta_{x}^2} + \mat{D}\right)^{-1}

with w=2/3, as this corresponds directly to the diagonal of \mat{A}, but it is not clear to me yet why this is better.

Finally, recall that we also consider the case where \mat{D} has an additional matrix structure so that it is block diagonal in position space (rather than strictly diagonal). Here one has the choice of extracting the diagonal, or spending the extra work to diagonalize each of the blocks.
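In the block-diagonal case, the pre-conditioner can be applied with a batched solve over the blocks; a minimal sketch (the shapes here are hypothetical):

import numpy as np

def apply_P(D_blocks, r_blocks, dx2):
    # D_blocks: (N, k, k) array of the blocks of D at each position;
    # r_blocks: (N, k) residual vectors.  Applies the block-diagonal
    # P = (2/dx^2 + D)^{-1} by solving each small block.
    k = D_blocks.shape[-1]
    A = 2.0/dx2*np.eye(k) + D_blocks           # broadcasts over the N blocks
    return np.linalg.solve(A, r_blocks[..., None])[..., 0]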