Multigrid in a nutshell

Figure: A fine grid (top) gets coarsened by a factor of two.

Multigrid is an iterative method that is particularly suited to solving large, sparse linear systems of the form \mathbf A \mathbf x=\mathbf b.

When we have a sparse matrix at hand, direct methods for solving the equation, such as LU decomposition, become undesirable, because we might lose the sparsity and end up with a dense matrix. The best time complexity of a direct method, e.g. an FFT-based fast solver, is \mathcal{O}(n\log n), where n is the number of unknowns.

In this situation, an iterative method is much more suitable. Here we will be using what are called smoothers; examples include the Jacobi method, the Gauss-Seidel method, and Successive Over-Relaxation (SOR).

Let’s suppose the linear equation we wish to solve comes from discretizing a Poisson equation in space on a grid with uniform spacing. (See the post on multigrid applied to the incompressible Navier-Stokes equations for an example.) Multigrid is a method that uses relaxation methods on a hierarchy of grids of successively coarser spatial resolution.

Doing so has two advantages. Suppose the source term is localized somewhere in the domain; solving the Poisson equation means this localized information needs to be spread to the entire domain. Using a 5-point stencil, the information can only propagate one grid cell per sweep, so the time needed depends on the grid size. On a coarser grid the information travels further in one step than on a finer grid. Therefore, the grid hierarchy of multigrid helps information propagate quickly through the domain.
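
To make this concrete, here is a minimal sketch (in Python/NumPy, with made-up function and variable names, not taken from any particular solver) of the 5-point stencil acting on a 2D grid with uniform spacing h. Each interior value only involves its four immediate neighbours, which is exactly why information spreads by only one cell per sweep.

```python
import numpy as np

def apply_laplacian_5pt(u, h):
    """Apply the standard 5-point Laplacian to the interior of a 2D grid.

    u : 2D array of grid values (boundary values sit on the edges)
    h : uniform grid spacing
    """
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1]      # north/south neighbours
                       + u[1:-1, :-2] + u[1:-1, 2:]    # west/east neighbours
                       - 4.0 * u[1:-1, 1:-1]) / h**2
    return lap
```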

The second advantage has to do with how smoothers work. Suppose the solution to the linear equation is made up of modes of all frequencies available on the fine grid. Iterative methods such as Jacobi and Gauss-Seidel are very good at solving for the high-frequency modes on a grid, i.e. smoothing the solution. They are much less effective at reducing low-frequency errors.

Now we coarsen the grid. Some low-frequency modes on the previous, finer grid become high-frequency modes on the coarser grid. Applying the same relaxation method, we can again cut down these errors. We can repeat the coarsening and smoothing process until we reach a very small grid, where we can solve the problem exactly.
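
As a quick illustration (with arbitrarily chosen numbers), a mode that oscillates k times across a fine grid of n intervals occupies a relative frequency of k/n; after coarsening by a factor of two the same mode occupies 2k/n of the resolvable spectrum, so it looks "rougher" to the smoother on the coarse grid:

```python
import numpy as np

n, k = 64, 8                                 # fine-grid intervals and mode number (arbitrary)
x_fine = np.linspace(0.0, 1.0, n + 1)
x_coarse = x_fine[::2]                       # coarsen by a factor of two
mode_fine = np.sin(k * np.pi * x_fine)       # looks smooth on the fine grid
mode_coarse = np.sin(k * np.pi * x_coarse)   # same mode, relatively rougher here
print(k / n, k / (n // 2))                   # relative frequency: 0.125 -> 0.25
```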

The algorithm of multigrid

Consider a two-grid system, and assume we can solve the equation exactly on the coarser grid. We denote the equation on level l as \mathbf A_l \mathbf x_l = \mathbf b_l. The restriction and interpolation operators transfer results from the fine grid to the coarse grid (by averaging) and from the coarse grid to the fine grid (by interpolation), respectively. They are denoted by \mathbf R and \mathbf T.

  • Apply smoothing to  \mathbf A_l \mathbf x_l =\mathbf b_l to get an approximate solution \mathbf {\hat x_l}
  • Calculate residual: \mathbf r_l =\mathbf b_l  - \mathbf A_l \mathbf{\hat x_l}
  • Restrict residual to coarser grid: \mathbf r_{l-1}=\mathbf R \mathbf r_l
  • Solve the coarse-grid equation exactly: \mathbf A_{l-1} \mathbf x_{l-1} =\mathbf r_{l-1}
  • Interpolate the correction back to the fine grid: \mathbf x^c_l = \mathbf T \mathbf x_{l-1}
  • Correct the fine-grid solution: \mathbf x_l = \mathbf{\hat x_l}+\mathbf x^c_l
  • Apply more smoothing to \mathbf x_l

If the coarser grid isn’t small enough to solve with a direct method, we can use a relaxation method, or we can treat it as the top level of another two-grid system and apply the same algorithm recursively.

We have just seen a common way to apply the recursion: starting from the finest grid level, traverse down the hierarchy and apply relaxation at each level, then traverse back up and apply relaxation again. This is called a V-cycle. Other iterative schemes are also possible, such as the W-cycle or the full multigrid cycle.
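
Here is a minimal sketch of such a recursive V-cycle in Python/NumPy, following the steps listed above. All the names (A_levels, smooth, restrict, interpolate, coarse_solve) are placeholders for user-supplied building blocks, not the API of any particular library.

```python
import numpy as np

def v_cycle(A_levels, b, x, level, smooth, restrict, interpolate, coarse_solve):
    """One V-cycle of the two-grid correction scheme, applied recursively.

    A_levels : list of operators, A_levels[level] acting on grid level `level`
    b, x     : right-hand side and current guess on this level
    """
    if level == 0:
        # Coarsest grid: solve exactly (direct method or many relaxation sweeps).
        return coarse_solve(A_levels[0], b)

    x = smooth(A_levels[level], b, x)              # pre-smoothing -> approximate solution
    r = b - A_levels[level] @ x                    # residual r_l = b_l - A_l x_l
    r_coarse = restrict(r)                         # r_{l-1} = R r_l
    e_coarse = v_cycle(A_levels, r_coarse,         # solve A_{l-1} x_{l-1} = r_{l-1}
                       np.zeros_like(r_coarse), level - 1,
                       smooth, restrict, interpolate, coarse_solve)
    x = x + interpolate(e_coarse)                  # coarse-grid correction x_l += T x_{l-1}
    x = smooth(A_levels[level], b, x)              # post-smoothing
    return x
```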

Smoothing methods

Let’s first consider the Jacobi method. We split the matrix as \mathbf A = \mathbf D - \mathbf L -\mathbf U. Here \mathbf D, ~-\mathbf L, ~-\mathbf U are the diagonal, strictly lower triangular, and strictly upper triangular parts of \mathbf A, respectively.

We can then rewrite our matrix equation as

\mathbf A \mathbf x = (\mathbf D - \mathbf L -\mathbf U) \mathbf x =\mathbf b

We can write this as an iterative method, i.e. the Jacobi method,

\mathbf x^{n+1} = \mathbf D^{-1} \left( \mathbf b + (\mathbf U + \mathbf L) \mathbf x^n \right)
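
As a rough illustration, assuming a dense NumPy matrix (a real smoother would exploit the sparsity, e.g. by working directly with the stencil), one or more Jacobi sweeps might look like this:

```python
import numpy as np

def jacobi_sweep(A, b, x, n_iter=1):
    """Jacobi iterations x^{n+1} = D^{-1} (b + (L + U) x^n)."""
    D = np.diag(A)             # diagonal entries of A
    R = A - np.diagflat(D)     # off-diagonal part of A, i.e. -(L + U)
    for _ in range(n_iter):
        x = (b - R @ x) / D    # elementwise division by the diagonal
    return x
```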

To take it a step further, we can use the new values as soon as they become available, i.e.

\mathbf x^{n+1} = (\mathbf D - \mathbf L)^{-1} \left(\mathbf b + \mathbf U\mathbf x^n \right)

This modification leads to the Gauss-Seidel method.

Discussions of relaxation methods are widely available, so I won’t go into further detail here. Note that Gauss-Seidel generally converges faster than Jacobi, but it is harder to parallelize. We can further improve upon Gauss-Seidel (faster convergence) by adding a relaxation weight to the update, turning it into SOR.
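
A sketch of a Gauss-Seidel/SOR sweep under the same assumptions (dense matrix, illustrative names only): with omega = 1 it reduces to Gauss-Seidel, while 1 < omega < 2 gives over-relaxation.

```python
import numpy as np

def sor_sweep(A, b, x, omega=1.0, n_iter=1):
    """Gauss-Seidel (omega = 1) or SOR sweep, updating entries in place."""
    n = len(b)
    for _ in range(n_iter):
        for i in range(n):
            # Use the newest values for rows already updated in this sweep:
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x_gs = (b[i] - sigma) / A[i, i]
            x[i] += omega * (x_gs - x[i])   # omega = 1 recovers Gauss-Seidel
    return x
```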

Summary

The multigrid algorithm applies smoothers, such as the Jacobi or Gauss-Seidel methods, on a hierarchy of grids of coarsening resolution. The coarser grids allow information to travel further within a single iteration. Since smoothers are effective at eliminating high-frequency errors, coarsening the grid lets the smoother work on errors across a wide range of frequencies. The complexity of multigrid is \mathcal{O}(n), where n is the number of unknowns.

In another post, I write about my project on using OpenMP and GPU to parallelize the smoothing step in a multigrid solver. This solver is developed by our research group and is implemented with OpenMPI. By adding more granular parallelism, I exploited the existing computational resources and improved the execution time.
