Mesh Mess

FEM, CFD, CAE, and all that.

Tag: numerical

POD snippet in R

I’ve been working on proper orthogonal decomposition (POD) for a while. Though it’s nice to have packages such as modred out there, I cannot help writing a snippet in R, since a lot of my previous data processing was done in that language. Setting aside the theory chitchat such as the Karhunen–Loève transformation, from the implementation-on-discrete-data perspective the (snapshot) POD method boils down to a singular value decomposition (SVD), which R provides out of the box, so the task is mostly straightforward. That’s why a POD function in R essentially calls svd (which in turn calls the LAPACK routines DGESDD and ZGESVD):

pod <- function(arr, nmod = dim(arr)[1], nv = nmod) {
  # arr holds one snapshot per column; nmod/nv are the numbers of
  # left/right singular vectors to compute
  svd(arr, nmod, nv)
}

Here “nmod” is the number of POD modes to be retrieved. Though it defaults to the length of each snapshot, it rarely needs to be that large: most of the time a reasonably correlated set of snapshots needs fewer than 10 modes to capture most of the system’s energy.
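
One way to pick nmod is to look at the cumulative energy of the singular values (the squared singular values are proportional to the modal energies). A quick self-contained sketch on made-up snapshot data:

```r
# Toy snapshot matrix (made-up data): one dominant spatial profile plus noise.
set.seed(42)
x <- seq(0, 1, length.out = 100)
arr <- outer(sin(pi * x), rnorm(20)) + 0.01 * matrix(rnorm(100 * 20), 100, 20)

sv <- svd(arr)$d                      # singular values, decreasing
energy <- cumsum(sv^2) / sum(sv^2)    # cumulative energy fraction
nmod <- which(energy > 0.99)[1]       # modes needed for 99% of the energy
```

Here the single sin(pi x) profile dominates, so one mode already captures essentially all the energy.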

With the pod of the data “arr” obtained, we can do several things immediately. We can obtain the modes by retrieving the left singular vectors:

pod_u <- function(pod, nnod) {
  # one spatial mode per column, each of length nnod
  matrix(pod$u, nrow = nnod)
}

or get the singular values:

pod_sv <- function(pod) {
  # singular values, in decreasing order
  as.vector(pod$d)
}

or the coefficients of the pod modes for the approximation:

pod_coef <- function(pod, nshot, nmod) {
  v <- matrix(pod$v, nrow = nshot)
  sv <- pod_sv(pod)[1:nmod]
  # diag(sv, nrow = nmod) avoids R's identity-matrix trap when nmod == 1
  diag(sv, nrow = nmod) %*% t(v[, 1:nmod, drop = FALSE])
}

and eventually the approximation itself

pod_approx <- function(pod, nshot, nmod, nnod) {
  u <- pod_u(pod, nnod)
  coef <- pod_coef(pod, nshot, nmod)
  # truncate to the first nmod modes in case pod holds more
  u[, 1:nmod, drop = FALSE] %*% coef
}
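
An end-to-end sanity check on synthetic data (the helper functions are repeated here, with their closing braces, so the block runs standalone; all data values are made up):

```r
pod <- function(arr, nmod = dim(arr)[1], nv = nmod) svd(arr, nmod, nv)
pod_u <- function(pod, nnod) matrix(pod$u, nrow = nnod)
pod_sv <- function(pod) as.vector(pod$d)
pod_coef <- function(pod, nshot, nmod) {
  v <- matrix(pod$v, nrow = nshot)
  sv <- pod_sv(pod)[1:nmod]
  diag(sv, nrow = nmod) %*% t(v[, 1:nmod, drop = FALSE])
}
pod_approx <- function(pod, nshot, nmod, nnod) {
  u <- pod_u(pod, nnod)
  u[, 1:nmod, drop = FALSE] %*% pod_coef(pod, nshot, nmod)
}

# a rank-2 snapshot matrix: two spatial profiles with random amplitudes
set.seed(1)
nnod <- 50; nshot <- 8
x <- seq(0, 1, length.out = nnod)
arr <- outer(sin(pi * x), runif(nshot)) + outer(sin(2 * pi * x), runif(nshot))

p <- pod(arr, nmod = 4)
approx2 <- pod_approx(p, nshot, nmod = 2, nnod)
err <- max(abs(arr - approx2))   # ~ machine zero: 2 modes reconstruct rank-2 data
```

Since the data are exactly rank 2, keeping two modes reproduces the snapshots to round-off.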



How to categorize different numerical methods for PDE?

All numerical methods for PDEs boil down to two things: the projection from the infinite-dimensional space to the finite-dimensional space in which the numerical solution is sought, and the discretization of the equation itself. To quote page 1 of Hesthaven and Warburton’s book “Nodal Discontinuous Galerkin Methods“:

The construction of any numerical method for solving a partial differential equation requires one to consider the two choices:

  • How does one represent the solution by an approximate solution?
  • In which sense will the approximate solution satisfy the partial differential equation?

You get what you mesh for

In finite element modeling, and indeed in all numerical modeling, the resolution of the results depends on the space onto which the problem is projected. For the numerical solution of a PDE, the projection goes from a space of infinite dimension to one of finite dimension. So whether the method is finite element, finite volume, or finite difference, the mesh size is the finest scale you can see. Beyond that, it’s either physical modeling (such as turbulence subgrid-scale modeling) or some other educated guess.

Here is an example.

Water surface as wave coming in

This plot shows a finite element solution of the Navier–Stokes equations for a water-wave problem. The free surface is tracked using the level set method. A closer look reveals the following not-so-smooth “water surface”:

Free surface, enlarged

The resolution of the free surface is restricted by the size of the elements. For any mesh, examining it closely enough reveals similar wiggles. To put it another way, in the numerical simulation the free surface is not really a surface; it’s more like a layer with finite thickness. If I plot the centroids of the elements that reside on the free surface, it looks like this:

Elements on the free surface

This is what a non-polished numerical result should look like. Smoothing, such as with cubic splines, sweeps everything under the rug, including the message: “you get what you mesh for”.

Linear shape function, mass matrix, and element volume

In an unsteady CFD problem, under certain time integration schemes, the L2 inner product of the shape functions gives the so-called mass matrix, i.e. the matrix with entries of the form M_{ij}=\int_{\Omega}{N_i N_j}\,d{\Omega}, where N_i is a shape function. When the shape functions are linear, there is a nicer way to show that M_{ij} is proportional to the local element volume (area) than plugging in and grinding through the integrals or using Gaussian quadrature. Consider a 2D triangular element ABC with area A_{ABC}, and let N_i(x,y), i=1,2,3, be the shape function that reaches its unit value at A, B, C respectively.

If we interpret the local mass entry M_{13}^{ABC}=(N_1, N_3) over the element ABC as the mass of a wood splinter in the shape of the tetrahedron A1-A-B-C, where N_1(x,y) is the thickness function on the element and N_3(x,y) is the density function, then it is not difficult to see that M_{13}=(N_1,N_3)=(N_2,N_3)=M_{23}: both are masses of tetrahedra with the same volume and the same density distribution from the bottom (A1-A-B or B1-A-B) to the top (C).

The next step is to show that the mass of the tetrahedron C1-A-B-C, which is M_{33}, is half the total mass of the prism A1-B1-C1-A-B-C carrying the same density distribution. The argument is based on linearity, which lets us reduce the integrals above to a single parameter. Set this coordinate s as the line parameter running from the edge A-B to the vertex C, i.e. s=0 on AB and s=1 at C. Then, for the mass of both the prism and the tetrahedron, we have

\mathrm{mass} =\int_0^1{L(s)\, H(s)\, D(s)}\,ds

where L(s)\times H(s) gives the area of the cross section parallel to the face A1-A-B-B1 at s: L is its width and H its thickness, while D(s) is the density there. We have

L\propto (1-s),\quad D\propto s,

with H=1 for the prism and H\propto s for the tetrahedron. Then we have

M_p\propto \int_0^1{(1-s)\, s}\,ds

M_{33}\propto \int_0^1{(1-s)\, s^2}\,ds
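
Evaluating these two integrals over s from 0 to 1 makes the factor of two explicit:

```latex
\int_0^1 (1-s)\,s\,\mathrm{d}s = \tfrac{1}{2}-\tfrac{1}{3} = \tfrac{1}{6},
\qquad
\int_0^1 (1-s)\,s^2\,\mathrm{d}s = \tfrac{1}{3}-\tfrac{1}{4} = \tfrac{1}{12},
```

so M_p \propto 1/6 is exactly twice M_{33} \propto 1/12.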

which proves 2 M_{33}=M_p. Since the prism has unit thickness and the average over the triangle of a linear density that is 1 at C and 0 on A-B is one third, M_p=\frac{1}{3} A_{ABC}, hence

2 M_{33}=M_p=\frac{1}{3} A_{ABC}

Now, M_{33}+M_{13}+M_{23} is nothing but the total mass of the three tetrahedra, which is the mass of the prism, \frac{1}{3}A_{ABC}. Using M_{13}=M_{23} and M_{33}=\frac{1}{6}A_{ABC}, we finally have

M_{13}=M_{23}=\frac{1}{12}A_{ABC},\qquad M_{33}=\frac{1}{6}A_{ABC}

Note: for a 3D tetrahedral element, linear shape functions give a similar relation; in that case the ratios of the mass matrix entries to the volume V of the tetrahedron are 1/10 and 1/20:

M_{ii}=\frac{1}{10}V,\qquad M_{ij}=\frac{1}{20}V \quad (i\neq j)

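The 2D result is easy to check numerically. The sketch below (triangle coordinates made up) builds the mass matrix with the three-point edge-midpoint rule, which integrates the quadratic products N_i N_j exactly:

```r
# an arbitrary triangle, one vertex per row
tri <- matrix(c(0, 0,  2, 0,  0.5, 1.5), ncol = 2, byrow = TRUE)
area <- 0.5 * abs(det(cbind(tri[2, ] - tri[1, ], tri[3, ] - tri[1, ])))

# barycentric coordinates (i.e. linear shape function values) at edge midpoints
mids <- matrix(c(0.5, 0.5, 0,
                 0,   0.5, 0.5,
                 0.5, 0,   0.5), ncol = 3, byrow = TRUE)

# midpoint quadrature: equal weights area/3, exact for quadratics
M <- matrix(0, 3, 3)
for (k in 1:3) M <- M + (area / 3) * (mids[k, ] %o% mids[k, ])

M / area   # diagonal entries 1/6, off-diagonal entries 1/12
```

The entries scale with the element area alone, as the splinter argument predicts.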

What’s the difference?

Today I was asked for a one-sentence comment on the difference between the finite difference method and the finite element method. What I came up with was:

In finite difference method we approximate the differential expression, or the partial derivative, while in finite element method we approximate the space in which the solution lies.

Hopefully this sheds some light on the issue.
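
To make the one-liner concrete, here is a toy 1D Poisson problem, -u'' = 1 with u(0)=u(1)=0 (all parameter values made up). Finite difference replaces the second derivative with a stencil; linear finite elements project the solution onto hat functions. On a uniform mesh the two tridiagonal systems coincide up to scaling, and both happen to be exact at the nodes for this problem:

```r
n <- 9                         # interior nodes
h <- 1 / (n + 1)
x <- (1:n) * h

# shared tridiagonal pattern [-1, 2, -1]
A <- diag(2, n)
A[abs(row(A) - col(A)) == 1] <- -1

u_fd <- solve(A / h^2, rep(1, n))   # FD: -u'' ~ (2 u_i - u_{i-1} - u_{i+1}) / h^2
u_fe <- solve(A / h, rep(h, n))     # FE: stiffness matrix A/h, hat-function load h
exact <- x * (1 - x) / 2            # exact solution of -u'' = 1

max(abs(u_fd - exact))   # both agree with the exact nodal values
```

The identical nodal values are special to 1D with linear elements; the conceptual difference is in how each method arrives at the system, which is exactly the one-liner above.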