Mesh Mess

FEM, CFD, CAE, and all that.

Tag: Domain Decomposition

Common block in Domain Decomposition implementation

Fortran 77 essentially does not have global variables for subroutines; instead, common blocks are used to pass data across subroutines when arguments are not preferred. This is conceptually unsafe, though widely used in legacy F77 codes, and the code I’ve been working on adopts this “feature” extensively. The problem is that in a Dirichlet-Neumann-like domain decomposition implementation, the solver is called twice in each iteration, each time with a different stiffness matrix setup (due to different subdomains) and different boundary conditions. The common blocks, on the other hand, make those two calls indistinguishable.

To reuse the legacy solver, my first try was to construct two copies of the solver by putting it into two modules, hoping the explicit interfaces of the modules would distinguish the common blocks in the two copies. This turned out to be wrong: the subroutines from the two modules still share the same common blocks.

What I finally decided to do was to build a copy of the original solver by renaming all the common blocks. Sed & AWK make this not too difficult, but I still don’t think that’s the most efficient way to tackle the problem. Consider it a lesson that not all sequential solvers are easy to parallelize.
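The renaming pass can also be sketched in Python instead of sed/awk. This is only an illustration: the common-block names below are made up, not the ones in the actual legacy solver.

```python
import re

# Hypothetical F77 fragment; the block names /STIFF/ and /BCOND/ are
# invented for this example, not taken from the real legacy code.
src = """      SUBROUTINE SOLVE
      COMMON /STIFF/ AK(100)
      COMMON /BCOND/ BV(10)
      END
"""

def rename_commons(code, suffix="2"):
    # Append a suffix to every /NAME/ label after the COMMON keyword,
    # so a second copy of the solver gets its own storage.
    return re.sub(r"COMMON\s*/(\w+)/",
                  lambda m: f"COMMON /{m.group(1)}{suffix}/", code)

print(rename_commons(src))
```

A real pass would of course have to cover every source file of the solver, plus any BLOCK DATA or EQUIVALENCE statements that mention the same block names.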

Domain Decomposition for Poisson equation in 1D

It’s not really the Poisson equation, just a second-order ODE in that sense. But it serves as a good example to demonstrate the idea of domain decomposition.

Using the Dirichlet-Neumann method, one of the two subdomains gives out a Neumann boundary condition and receives Dirichlet boundary data at the interface, and the other one does the opposite. The Mathematica module would be something like this:

DNDecomp:   Main module;
DBCLaplace: linear FEM module for Dirichlet data ua at xa, and ub at xb;
MBCLaplace: linear FEM module for Mixed data ua at xa, and flux db at xb;
(xi,ui):    interface position and data;
theta:      relaxation coefficient to update ui in each iteration.
DNDecomp[xa_, ua_, xb_, ub_, xi_, ui0_, n1_, n2_, iter_, \[Theta]_] :=
 Module[{ui, di, x1, u1, x2, u2, h1, h2, sol},
  f[t_] := 2;
  ui = ui0;
  h1 = (xi - xa)/n1; h2 = (xb - xi)/n2;
  sol = {};
  Do[
   {x2, u2} = DBCLaplace[xi, ui, xb, ub, n2];
   di = (u2[[2]] - u2[[1]])/h2;
   {x1, u1} = MBCLaplace[xa, ua, xi, di, n1];
   ui = \[Theta] u1[[-1]] + (1 - \[Theta]) u2[[1]];
   AppendTo[sol, {{x1, u1}\[Transpose], {x2, u2}\[Transpose]}],
   {iter}];
  sol]

In the above example the force function f=2 gives a parabolic exact solution. The effect of the relaxation coefficient \theta is obvious in the plots below.

Above: theta=0.7, below: theta=0.5

Below is another example with exact solution u(x)=-x \sin(x).

Iteration = 6, theta = 0.5
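The same Dirichlet-Neumann loop can be sketched in Python, with simple finite differences standing in for the linear FEM modules. This is a hedged sketch: the function names and the discretization are mine, not from the original notebook.

```python
def thomas(a, b, c, d):
    # Tridiagonal solve: a = sub-, b = main, c = super-diagonal
    # (the last entry of c must be 0).
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def dbc_laplace(xa, ua, xb, ub, n, f):
    # -u'' = f with Dirichlet data at both ends (cf. DBCLaplace).
    h = (xb - xa) / n
    x = [xa + i * h for i in range(n + 1)]
    a = [0.0] + [-1.0] * (n - 1) + [0.0]
    b = [1.0] + [2.0] * (n - 1) + [1.0]
    c = [0.0] + [-1.0] * (n - 1) + [0.0]
    d = [ua] + [h * h * f(x[i]) for i in range(1, n)] + [ub]
    return x, thomas(a, b, c, d)

def mbc_laplace(xa, ua, xb, db, n, f):
    # -u'' = f with Dirichlet at xa and flux u'(xb) = db
    # (cf. MBCLaplace); Neumann end handled by a ghost node.
    h = (xb - xa) / n
    x = [xa + i * h for i in range(n + 1)]
    a = [0.0] + [-1.0] * (n - 1) + [-2.0]
    b = [1.0] + [2.0] * (n - 1) + [2.0]
    c = [0.0] + [-1.0] * (n - 1) + [0.0]
    d = [ua] + [h * h * f(x[i]) for i in range(1, n)] \
        + [h * h * f(x[n]) + 2 * h * db]
    return x, thomas(a, b, c, d)

def dn_decomp(xa, ua, xb, ub, xi, ui0, n1, n2, iters, theta, f):
    ui = ui0
    for _ in range(iters):
        x2, u2 = dbc_laplace(xi, ui, xb, ub, n2, f)   # Dirichlet subdomain
        di = (u2[1] - u2[0]) / ((xb - xi) / n2)       # one-sided flux at xi
        x1, u1 = mbc_laplace(xa, ua, xi, di, n1, f)   # Neumann subdomain
        ui = theta * u1[-1] + (1 - theta) * u2[0]     # relaxed interface update
    return ui

# f = 2 gives the parabolic exact solution u(x) = x(1 - x):
ui = dn_decomp(0.0, 0.0, 1.0, 0.0, 0.5, 0.0, 10, 10, 20, 0.5, lambda t: 2.0)
print(ui)  # close to u(0.5) = 0.25 (the one-sided flux costs O(h) accuracy)
```

With theta = 0.5 the interface update is insensitive to the starting guess for this symmetric two-subdomain split, which is why the relaxation coefficient matters so much in the plots above.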

Least-squares control optimization formulation

This is a general optimization approach to domain decomposition methods. The idea is to build a constrained least-squares problem and minimize a squared-norm objective functional subject to constraints. In DD applications, the squared norm is usually a measure of the difference between subdomain solutions, and the constraints require that the local solutions solve the local subdomain problems. The boundary data on the overlapping/interface part of the domain is unknown and is determined by minimizing the squared norm. So the method basically consists of three ingredients: a squared norm measuring the distance between sub-solutions, constraints ensuring the local problems are solved, and the subdomain boundary data as the to-be-determined parameters of the optimization problem. This type of method was first applied to DD by Lions and Glowinski, who came up with an overlapping formulation coupling the Laplace equation with the Navier-Stokes equations in a flow problem. For such heterogeneous problems, general well-posedness results are not known.
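As a generic sketch of such a formulation (the notation is mine, not from a specific paper): for two overlapping subdomains \Omega_1, \Omega_2 with unknown interface data g_1, g_2, one minimizes the mismatch on the overlap subject to the local problems,

```latex
\min_{g_1,\,g_2}\; J(g_1,g_2)
  = \tfrac12\,\|u_1-u_2\|^2_{L^2(\Omega_1\cap\Omega_2)}
\quad\text{s.t.}\quad
-\nabla^2 u_i = f \ \text{in } \Omega_i,\qquad
u_i = g_i \ \text{on } \partial\Omega_i\setminus\partial\Omega .
```

The squared norm measures the sub-solution distance, the PDEs are the constraints, and the g_i play the role of control variables.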

Local FE approximation in Dirichlet DD methods

After decomposing a domain, we can formulate approximations by FE or BE methods in the local subdomains. In the case of the Dirichlet domain decomposition method, the global unknown on the skeleton is primal, i.e. u(x), while the local unknowns are the nodal variables within the subdomain \Omega_i (FE) or on the local boundaries \Gamma_i (BE). Focusing on a specific subdomain, one can drop the subscripts and take the model problem as

-\nabla^2u(x)=f,\quad x\in\Omega

u(x)=g_D,\quad x\in\Gamma_D

\partial u/\partial n=g_N,\quad x\in\Gamma_N

By using Green’s identity, the weak formulation of this problem is

a(u,v)-\int_{\Gamma}v\,\nabla u\cdot n\,ds=\int_{\Omega}fv\,d\Omega=\langle f,v\rangle

in which a(u,v) is the bilinear form for the space H^1_0(\Omega). Choosing test functions satisfying v|_{\Gamma_D}=0, we have

a(u,v)=\langle f,v \rangle+\langle g_N,v \rangle_{\Gamma_N}

Now we decompose the primal u(x) into three components

u=u_0+P\tilde{u}_1+P\tilde{g}_D, \quad u_0\in H^1_0(\Omega),\tilde{u}_1\in H^{1/2}_0(\Gamma,\Gamma_D),\tilde{g}_D|_{\Gamma_D}=g_D

which means the unknown variable consists of its value within the domain, an extension/prolongation of the unknown Dirichlet boundary value on \Gamma_N, and an extension/prolongation of the Dirichlet data. Here P denotes the “extension/prolongation” operator, mapping a boundary function to a function on the domain. By this decomposition one has two unknowns, u_0 on the domain and \tilde{u}_1 on the boundary, so another weak formulation is needed. Assuming u is the weak solution of the governing equation, it is easy to show that, for test functions v\in H^1_0(\Omega),

a(u_0+P\tilde{u}_1+P\tilde{g}_D,v)=\langle f,v \rangle

Finally, the weak formulation of the model problem is to solve the coupled equations

a(u_0+P\tilde{u}_1,v)=\langle f,v \rangle-\langle P\tilde{g}_D,v \rangle,\quad \forall v \in H^1_0(\Omega)

a(u_0+P\tilde{u}_1,Pw)=\langle f,Pw\rangle+\langle g_N,w \rangle_{\Gamma_N}-\langle P\tilde{g}_D,Pw\rangle,\quad \forall w\in H^{1/2}_0(\Gamma,\Gamma_D)

for unknown u_0 and \tilde{u}_1. Assigning two finite element spaces, one on \Omega, one on \Gamma:

X=\text{span}(\phi_i)\subset H^1_0(\Omega),\quad \tilde{X}=\text{span}(\psi_j)\subset H^{1/2}_0(\Gamma,\Gamma_D)

The Galerkin FE formulation of the coupled problem is then

a(\phi_i,\phi_j)u^i_0+a(P\psi_i,\phi_j)\tilde{u}_1^i=\langle f,\phi_j \rangle-\langle P\tilde{g}_D,\phi_j \rangle, \forall \phi_j\in X

a(\phi_i,P\psi_j)u_0^i+a(P\psi_i,P\psi_j)\tilde{u}_1^i=\langle f,P\psi_j\rangle+\langle g_N,\psi_j\rangle_{\Gamma_N}-\langle P\tilde{g}_D,P\psi_j\rangle, \forall \psi_j\in \tilde{X}

Remember that this formulation is for one subdomain; the same equations can be obtained in each subdomain as long as FEM is used to approximate the interior solution. Taking the summation of these formulations across all the subdomains and putting them in matrix form:

\left[\begin{array}{cc}\mathbf{K_{II}} & \mathbf{K_{IC}}\\ \mathbf{K_{CI}} & \mathbf{K_{CC}} \end{array}\right]\left(\begin{array}{c}\mathbf{u_I}\\\mathbf{u_C}\end{array}\right)=\left(\begin{array}{c}\mathbf{f_I}\\\mathbf{f_C}\end{array}\right)

which is the well-known substructuring formulation in structural engineering. The subscripts stand for “interior” (I) and “connecting” (C). The unknown vectors are the nodal solutions in the interiors of the subdomains and the nodal solutions on the skeleton. The classical substructuring algorithm goes like this:

  1. Find \mathbf{K_{II}}^{-1}, then \mathbf{K_{II}}^{-1}\mathbf{f_I};
  2. \mathbf{g_C}=\mathbf{f_C}-\mathbf{K_{CI}}\mathbf{K_{II}}^{-1}\mathbf{f_I};
  3. \mathbf{S_C}=\mathbf{K_{CC}}-\mathbf{K_{CI}}\mathbf{K_{II}}^{-1}\mathbf{K_{IC}};
  4. \mathbf{u_C}=\mathbf{S_C}^{-1}\mathbf{g_C};
  5. \mathbf{u_I}=\mathbf{K_{II}}^{-1}\mathbf{f_I}-\mathbf{K_{II}}^{-1}\mathbf{K_{IC}}\mathbf{u_C}

Even though the inversion is easy to parallelize, since \mathbf{K_{II}} is block diagonal, this algorithm is impractical for large problems: forming the Schur complement in step 3 and solving with it in step 4 are costly. In practice, preconditioned iterative methods are used instead of solving the Schur-complement system in step 4 directly.
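The five steps above can be checked numerically on a toy problem. A minimal sketch, with a made-up 4x4 SPD stiffness matrix partitioned into interior (I) and connecting (C) blocks:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (fine for tiny systems).
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            r = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= r * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

# Made-up SPD stiffness matrix and load vector, for illustration only.
K = [[ 4.0, -1.0,  0.0, -1.0],
     [-1.0,  4.0, -1.0,  0.0],
     [ 0.0, -1.0,  4.0, -1.0],
     [-1.0,  0.0, -1.0,  4.0]]
f = [1.0, 2.0, 3.0, 4.0]

nI, nC = 2, 2  # dofs 0..1 interior, 2..3 connecting
KII = [row[:nI] for row in K[:nI]]
KIC = [row[nI:] for row in K[:nI]]
KCI = [row[:nI] for row in K[nI:]]
KCC = [row[nI:] for row in K[nI:]]
fI, fC = f[:nI], f[nI:]

# Steps 1-2: eliminate the interior unknowns.
wI = solve(KII, fI)                                      # K_II^{-1} f_I
gC = [fC[i] - v for i, v in enumerate(matvec(KCI, wI))]  # condensed rhs
# Step 3: Schur complement S_C = K_CC - K_CI K_II^{-1} K_IC.
cols = [solve(KII, [KIC[r][c] for r in range(nI)]) for c in range(nC)]
SC = [[KCC[i][j] - sum(KCI[i][k] * cols[j][k] for k in range(nI))
       for j in range(nC)] for i in range(nC)]
# Step 4: skeleton solve; step 5: interior back-substitution.
uC = solve(SC, gC)
uI = [wI[i] - sum(cols[j][i] * uC[j] for j in range(nC)) for i in range(nI)]

u = uI + uC
residual = max(abs(r - fi) for r, fi in zip(matvec(K, u), f))
print(residual)  # ~0: block elimination reproduces the direct solve
```

Since the steps are exact block Gaussian elimination, the assembled solution satisfies the full system to machine precision; the cost argument in the text is about forming and factoring S_C at scale, not about accuracy.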

Dirichlet Domain Decomposition Methods

Coupling interface data is THE most important component of domain decomposition (DD) methods. In many cases the coupled data are required only up to order 1; that is, only continuity of the variable and continuity of its normal derivative are needed for adequate accuracy. So, in a variational formulation of the transmission conditions of a problem whose domain is decomposed, one can require a priori that the Dirichlet boundary/interface conditions be built into the trial functions, while the Neumann conditions at the boundary and interfaces have to be enforced explicitly. In particular, with u_i(x) as the unknown variable in the local subdomain \Omega_i and t_i(x) as its exterior normal derivative at the subdomain boundary \Gamma_i, the Dirichlet transmission condition

u_i(x)=u_j(x),\quad x\in\Gamma_{ij}
for neighboring subdomains \Omega_i and \Omega_j at the common “border” \Gamma_{ij}, can be built into the trial functions. The Neumann condition at the same interface would be

t_i(x)+t_j(x)=0,\quad x\in\Gamma_{ij}
because the normal derivatives point in opposite directions. At this point, some definitions: call the “real” boundary of the domain the “boundary”, and the union of the local subdomain “boundaries”, together with the “real” boundary, the skeleton \Gamma_S. So the conditions above are imposed on the skeleton. We also consider a Neumann boundary condition t=\nabla u\cdot n=g_N imposed on part of the boundary, \Gamma_N, and a Dirichlet boundary condition u=g_D on the rest, called \Gamma_D. Now the weak forms of the Neumann boundary/interface conditions are

\int_{\Gamma_{ij}}(t_i+t_j)\,v\,ds=0,\qquad \int_{\Gamma_N\cap\Gamma_i}(t_i-g_N)\,v\,ds=0
For one interface/boundary of one subdomain, only one of the equations above is valid. When those equations are added up over all the subdomains, we have

\sum_i\int_{\Gamma_i\setminus\Gamma_D}t_i\,v\,ds=\int_{\Gamma_N}g_N\,v\,ds
If we view those normal derivatives as fluxes, and notice that the summation does not include the boundaries in \Gamma_D, the above equation simply says that the total flux across the skeleton and the N. boundaries adds up to the flux across the N. boundaries. Correspondingly, the test function v(x) is zero on \Gamma_D. Besides, with the notation above, the interfaces in the interior of the domain are \Gamma_S\setminus(\Gamma_N\cup\Gamma_D).

The Dirichlet DD method resolves u(x), by which the Dirichlet boundary condition and the interface matching condition are naturally satisfied. The remaining problem is then how to obtain a weak formulation in the unknown u from the last equation above. One can make an educated guess: if there exists a map between u_i(x) and t_i(x), say t_i=\mathcal{F}(u_i), we immediately have the weak formulation for u(x) as

\sum_i\int_{\Gamma_i\setminus\Gamma_D}\mathcal{F}(u_i)\,v\,ds=\int_{\Gamma_N}g_N\,v\,ds
It turns out that this map \mathcal{F} does exist, under some conditions of course. And by noticing that we did not use any information about the specific differential equation, one can guess that this information is hidden exactly in \mathcal{F}.
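For the model problem, the map \mathcal{F} is what is usually called the Steklov-Poincare or Dirichlet-to-Neumann operator (standard material, stated here in my own notation): given boundary data, solve the local Dirichlet problem and return its normal flux,

```latex
\mathcal{F}:\ u|_{\Gamma_i}\ \mapsto\ t_i=\frac{\partial u_i}{\partial n_i},
\qquad\text{where}\quad
-\nabla^2 u_i = f \ \text{in } \Omega_i,
\quad u_i = u|_{\Gamma_i} \ \text{on } \Gamma_i .
```

So the differential operator and the source term enter the interface equation only through \mathcal{F}, which is exactly where the equation-specific information hides.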