I will share some of my thoughts and experience with reservoir simulation (based on TOUGH+RealGasBrine, TOUGH2, TOUGHREACT, and TOUGH+HYDRATE) here from time to time.
You are welcome to discuss these topics with me, or to contact me if you want to post your own thoughts here!
Nonlinear Convergence Problem

    Under challenging conditions (e.g., the extreme nonlinearity associated with phase appearance and disappearance), reducing the residuals may be difficult (i.e., convergence cannot be attained within a certain number of iterations), often forcing the timestep down to impractically small levels. Convergence problems can be categorized as either mathematical or physical. Because of the robustness of TOUGH+, most convergence problems stem from inappropriate physical settings. The importance of clearly understanding the physical processes before starting a simulation cannot be overemphasized.

  1. Phase changes are the most common cause of nonconvergence. If the system is very sensitive to a phase change while the variation in the primary variables is not large enough to support a complete phase transition, the local phase conditions may oscillate repeatedly between two thermodynamic states, resulting in very small timesteps. To alleviate this problem, TOUGH+ introduces variables (Overshoot) that specify enhanced criteria for triggering phase and state changes. Setting their values appropriately allows a clean one-way phase change instead of repeated oscillation between the two states (see the conceptual sketch after this list).
  2. Note that phase changes, if improperly described, can introduce numerical discontinuities that make it impossible for the Newton-Raphson method to converge. For example, relative permeability or capillary pressure curves with very steep gradients and no smoothing at the endpoints can produce derivatives that exceed the floating-point limits of a given Fortran compiler, causing the Jacobian matrix to become singular when a phase appears or disappears. Ensuring continuity of the derivatives in the governing equations, by applying appropriate smoothing to steep curves (while remaining faithful to the data), is therefore necessary to avoid such problems (an endpoint-smoothing sketch follows this list).
  3. Note that incorrect derivatives inevitably lead to successive timestep reductions to minuscule levels before the simulation is suspended automatically by the code. In other words, successive timestep cutbacks to extremely low levels are a clear indication of incorrect derivatives.
  4. Simulations often give the appearance of convergence difficulty when the system approaches a steady state. This is because the derivatives in the Jacobian (which describe changes in the primary variables) tend toward zero, resulting in a system whose only possible solution is zero. In essence, this is not a convergence difficulty but an indication of steady state, signaled by a rapid increase in the timestep size with minimal (if any) changes in the primary variables.
  5. Although it is easy to identify when the entire system is approaching steady state, detecting the phenomenon when it occurs locally is challenging without carefully analyzing the physical processes. For example, when a gas plume migrates upward under gravity, in some cases the grid blocks at its lateral edges constrain the timestep because the difference between their inflow and outflow fluxes is tiny, and these blocks struggle to complete phase transitions. A relaxed phase-change criterion is recommended if such a local phase transition is not critical to the global results.
  6. It is particularly important to set the global convergence criterion appropriately, especially for large-scale simulations involving elements with drastically different volumes. If it is too small, the simulation may need very small timesteps and unacceptably long execution times; if it is too large, the accuracy of the results may suffer. The default (relative) convergence criterion is 1.0e-5; for very large simulations this may be relaxed to 1.0e-4, but larger values are discouraged. Because TOUGH+ decides whether the current timestep has converged by comparing the maximum relative residual against this criterion (see the residual-check sketch after this list), a looser criterion is a better choice for cases with (i) a large number of grid blocks, e.g., millions of degrees of freedom, (ii) long simulation times, e.g., CO2 storage over 50 years, and (iii) smooth or quasi-steady-state solutions without sharp gradients, e.g., water flow through homogeneous media.
  7. [TOUGHREACT, personal communication with Dr. Eric Sonnenthal]
    Basically, the closer you can capture how the natural system behaves, the better the convergence of the numerical model.
    a. Deciding which minerals and species to remove is tricky, because it depends on the purpose of the model (mostly just looking at permeability changes, or also evaluating production chemistry). Just because a species has a very low concentration does not mean it is not important. It may have a very low solubility, so as soon as it is released through dissolution of one mineral, it is incorporated almost completely into a secondary mineral, and its concentration in the fluid remains very low.
    b. Removing a few minerals that are not so important probably won't make much difference in the simulation, since convergence issues usually depend on the more important minerals.
    c. A lot of the convergence issues have to do with putting a far-from-equilibrium fluid into a very hot rock instantaneously. For many of these high-temperature problems, I run a one-grid-block pseudo-steady-state simulation successively, starting at low temperature and then going up in ~50 °C intervals to reach the reservoir temperature. Then I use that fluid composition as input, so I know it converges before putting it in a large model. This initialization of the system is by far the most important set of steps. One must carefully go from a steady-state TH model to a THM model to a THMC model to get the right "native-state" conditions that will run with injection.
    d. Whether it is a fine grid or a coarse grid does not really affect the chemical equilibration, just the time-stepping because of the fluid velocity. Therefore, the time-stepping needs to be constrained by the Courant limit and the reaction rate, which will be much smaller than a typical fluid flow model requires.
    e. The reactive surface areas are extremely important, and they will be different depending on how you set up the model. The surface areas need to be calculated based on the conceptual model of the system, and then calibrated to the native-state conditions.
    f. If the Courant number is much greater than 1, the results will have large errors and will likely be completely erroneous. Note that reactive chemistry is not like fluid flow or stress calculations, where small errors are generally acceptable and do not accumulate. Small concentration errors (e.g., 1.e-4 or 1.e-5) caused by exceeding the Courant limit produce small changes in the amounts of minerals dissolved or precipitated at every time step, which feed directly into small changes in porosity and larger changes in permeability at every time step. These changes are additive: if the porosity decreases by an extra 0.0001 every time step and you run 10000 time steps, 100% of the porosity can be filled by mineralization purely through accumulated time-discretization error. The concentration errors can also cause minerals to form or dissolve that should not. A small time step gives more accurate concentrations, which converge much more easily; if the time step exceeds the Courant limit, the concentrations change unreasonably, convergence becomes very poor, the porosity may fill up too quickly, and the run will crash (see the Courant-timestep sketch after this list).
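
    Regarding item 1, the sketch below (plain Python, purely conceptual; the variable names and the overshoot value are illustrative assumptions, not TOUGH+ internals) shows why an enhanced, overshoot-based trigger suppresses oscillation between two phase states:

      # Conceptual sketch of an overshoot-enhanced phase-change trigger (item 1).
      # Names and numbers are illustrative, not the actual TOUGH+ implementation.
      def gas_phase_appears(p_gas, p_bubble, overshoot=0.01):
          """Trigger the phase change only when the gas pressure exceeds the
          bubble-point pressure by a finite margin (the 'overshoot').
          With overshoot = 0, a solution hovering near p_bubble can flip
          state on every Newton iteration; requiring a finite excess makes
          the transition one-way once it is triggered."""
          return p_gas > p_bubble * (1.0 + overshoot)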
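
    Regarding item 2, here is a minimal sketch of endpoint smoothing (plain Python; the van Genuchten curve, parameter values, and cutoff saturation are assumed examples) that keeps the capillary pressure derivative bounded near the residual saturation:

      # Smooth a van Genuchten capillary pressure curve near the residual
      # saturation, where Pc and dPc/dS diverge (illustrative parameters).
      P0, m, S_r = 1.0e4, 0.45, 0.10    # strength [Pa], exponent, residual saturation
      S_cut = S_r + 0.02                # saturation below which we linearize

      def pc_raw(S):
          Se = (S - S_r) / (1.0 - S_r)                   # effective saturation
          return P0 * (Se**(-1.0 / m) - 1.0)**(1.0 - m)  # diverges as Se -> 0

      def pc_smoothed(S):
          """C1-continuous extension: below S_cut, extend Pc linearly with the
          slope at S_cut, so the derivative fed to the Jacobian stays finite."""
          if S >= S_cut:
              return pc_raw(S)
          h = 1.0e-6                                     # finite-difference step
          slope = (pc_raw(S_cut + h) - pc_raw(S_cut)) / h
          return pc_raw(S_cut) + slope * (S - S_cut)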
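
    Regarding item 6, the convergence test amounts to the following check (schematic plain Python; 'residuals' and 'accumulation_terms' stand for per-equation values and are not actual TOUGH+ variables):

      def has_converged(residuals, accumulation_terms, rel_tol=1.0e-5):
          """Accept the timestep when the largest residual, normalized by the
          corresponding accumulation term, drops below rel_tol (the default
          1.0e-5 discussed in item 6)."""
          max_rel_residual = max(
              abs(r) / max(abs(a), 1.0e-30)    # guard against division by zero
              for r, a in zip(residuals, accumulation_terms)
          )
          return max_rel_residual < rel_tol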
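
    Regarding items 7d and 7f, the Courant constraint on the timestep can be estimated per grid block as below (plain Python sketch; the velocities and block sizes are assumed example numbers):

      def courant_limited_dt(velocities, dx, courant_max=1.0):
          """Largest dt that keeps the Courant number Co = v*dt/dx <= courant_max
          in every block; the fastest block governs. In reactive transport this
          dt (further reduced by reaction-rate limits) is typically much smaller
          than what a pure flow model would tolerate."""
          return min(courant_max * dx_i / abs(v_i)
                     for v_i, dx_i in zip(velocities, dx) if v_i != 0.0)

      # Example: pore velocities [m/s] and block sizes [m] (illustrative numbers)
      dt = courant_limited_dt(velocities=[1.0e-5, 5.0e-6], dx=[0.5, 0.5])
      print(dt)   # -> 50000.0 s, set by the fastest block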
Mesh Generation

    Mesh generation is a critical step in numerical simulation because it directly affects the accuracy, stability, and efficiency of the computed solution. The quality and structure of the mesh determine how well the geometry, boundary conditions, and physics are captured. A well-constructed mesh must resolve the solution accurately in critical areas, such as regions with high gradients or complex physical phenomena. Additionally, for better convergence performance, the mesh should avoid large aspect ratios and sudden transitions between coarse and fine volume elements.

  1. MESHMAKER V1.5 (Moridis 2016) can generate two-dimensional radially symmetric (r, z) meshes, as well as one-, two-, and three-dimensional rectilinear (Cartesian) grids. The grids it generates are directly usable by all members of the TOUGH+ and TOUGH2 families of codes.
  2. TOUGHIO (Luu 2020) is an open-source library that provides tools to facilitate pre- and post-processing for TOUGH using Python standards. One important feature is that it can import meshes generated by external software (e.g., Abaqus, FLAC3D, Gmsh) and convert them to a MESH file for TOUGH+. This provides significant flexibility in mesh preparation. Some of the mesh structures in this study were designed in Gmsh (Geuzaine and Remacle 2009), an open-source 3D finite element mesh generator with a built-in CAD interface, and then converted to the MESH file used by TOUGH+. Note that a good mesh must satisfy the orthogonality conditions, so carefully checking the quality of the mesh is necessary, especially for unstructured meshes. The detailed mesh-preparation workflow is as follows (see the sketch after this list): (i) design the mesh in Gmsh and export it in the *.msh format (plus the *.vtk format for visualization); (ii) check the quality of the generated mesh by means of the pyvista library through TOUGHIO; (iii) convert the *.msh file to a MESH file using the 'read' and 'write_tough' functions integrated in TOUGHIO; and (iv) define the system boundaries in the MESH file.
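
    A minimal sketch of steps (i)-(iii) of this workflow, assuming the Gmsh export is named 'model.msh' (the calls follow TOUGHIO's 'read'/'write_tough' functionality mentioned above; check the TOUGHIO documentation for the exact signatures in your version):

      import toughio

      # (i) 'model.msh' was designed and exported in Gmsh beforehand.
      mesh = toughio.read_mesh("model.msh")

      # (ii) Quality check: TOUGHIO meshes convert to pyvista objects,
      # which can be inspected and visualized before committing to the mesh.
      # pv_mesh = mesh.to_pyvista()

      # (iii) Write the TOUGH MESH file (ELEME and CONNE blocks).
      mesh.write_tough("MESH")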
Compiler

  1. [TOUGH+]: When I tried to simulate a discrete fracture model, which has very small element volumes (1.0e-8) in the fracture cells, compiling with the '-O3' option in the Intel compiler caused convergence issues. I suspect '-O3' enables more aggressive optimizations that affect floating-point precision. I recommend compiling the code without optimizations if the cell volumes are very small. I am not sure whether this issue also occurs with GNU compilers.
  2. [TOUGH+]: Release builds may not initialize variables, leading to unpredictable behavior if uninitialized variables are accessed. I recommend adding a compiler option that initializes variables to zero, to be on the safe side.
  3. [TOUGH+]: Using an initial value of '0' in the INCON file is quite dangerous, as '0' always causes numerical convergence issues, especially in simulations with a large number of elements. I suggest using a very small value instead, such as '1.0e-18', for the initial condition.
  4. When running parallel code on Linux, if you need results printed out immediately (i.e., unbuffered input/output), you can set the environment variable GFORTRAN_UNBUFFERED_ALL via 'export GFORTRAN_UNBUFFERED_ALL=1'.
  5. [TOUGHREACT]: If a 'Segmentation fault' appears immediately when you try to run the executable, increase the 'OMP_STACKSIZE' environment variable, or decrease the 'Number of elements' and 'Number of connections' parameters in 'flowpar_v4.inc' to reduce the memory requirements.