
Commit

Merge branch 'development' of github.com:AMReX-Astro/Castro into development
zhichen3 committed Sep 8, 2024
2 parents 59ca30f + 97555d4 commit 2eaad51
Showing 35 changed files with 426 additions and 209 deletions.
17 changes: 17 additions & 0 deletions CHANGES.md
@@ -1,3 +1,20 @@
# 24.09

* Code clean-ups / clang-tidy (#2942, #2949)

* update the `hse_convergence` readme to reflect current convergence
(#2946)

* update the `bubble_convergence` plotting script (#2947)

* new Frontier scaling numbers (#2948)

* more GPU error printing (#2944)

* science problem updates: `flame_wave` (#2943)

* documentation updates (#2939)

# 24.08

* lazy QueueReduction has been enabled for the timing diagnostics
49 changes: 32 additions & 17 deletions Docs/source/faq.rst
@@ -17,31 +17,46 @@ Compiling
There are 2 things you can do to check what’s happening. First, inspect
the directories in ``VPATH_LOCATIONS``. This can be done via:

::
.. prompt:: bash

make print-VPATH_LOCATIONS
make print-VPATH_LOCATIONS

Next, ask make to tell you where it is finding each of the source
files. This is done through a script ``find_files_vpath.py``
that is hooked into Castro’s build system. You can run this as:

::
.. prompt:: bash

make file_locations
make file_locations

At the end of the report, it will list any files it cannot find in
the vpath. Some of these are to be expected (like ``extern.f90``
and ``buildInfo.cpp``—these are written at compile-time. But any
other missing files need to be investigated.
the vpath. Some of these are to be expected (like
``buildInfo.cpp``—these are written at compile-time). But any other
missing files need to be investigated.
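
For example, to check which directory on the vpath actually provides a
particular header, you can combine the two ideas above. This is only a
sketch: it assumes ``make print-VPATH_LOCATIONS`` prints the directories
as a whitespace-separated list (possibly after a label), and
``problem_tagging.H`` is just an illustrative file name.

.. code:: bash

   # list every vpath directory that contains problem_tagging.H;
   # tokens that are not directories are skipped
   for d in $(make print-VPATH_LOCATIONS 2>/dev/null); do
       if [ -d "$d" ] && [ -e "$d/problem_tagging.H" ]; then
           echo "$d/problem_tagging.H"
       fi
   done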

#. *I put a copy of one of the header files (e.g. ``problem_tagging.H``)
in my problem setup but it does not seem to be recognized / used by
the build system. Why doesn't my executable use my custom version
of the header?*

This is likely due to compiler caching via ``ccache``. You need to
clear both the compiler cache and the existing build:

.. prompt:: bash

ccache -C
make clean

Then rebuild and it should be recognized.

#. *I’m still having trouble compiling. How can I find out what
all of the make variables are set to?*

Use:

::
.. prompt:: bash

make help
make help

This will tell you the value of all the compilers and their options.

@@ -104,7 +119,7 @@ Debugging

Given a MultiFab ``mf``, you can dump out the state as:

::
.. code:: c++

print_state(mf, IntVect(AMREX_D_DECL(10, 20, 30)));

@@ -119,7 +134,7 @@
You can simply output a FAB to ``std::cout``. Imagine that you
are in an MFIter loop, with a MultiFab ``mf``:

::
.. code:: c++

FArrayBox& S = mf[mfi];
std::cout << S << std::endl;
@@ -143,9 +158,9 @@ Profiling
When you run, a file named ``gmon.out`` will be produced. This can
be processed with gprof by running:

::
.. prompt:: bash

gprof exec-name
gprof exec-name

where *exec-name* is the name of the executable. More detailed
line-by-line information can be obtained by passing the -l
@@ -159,9 +174,9 @@ Managing Runs

Create a file called ``dump_and_continue``, e.g., as:

::
.. prompt:: bash

touch dump_and_continue
touch dump_and_continue

This will force the code to output a checkpoint file that can be used
to restart. Other options are ``plot_and_continue`` to output
@@ -193,9 +208,9 @@

The build information (including git hashes, modules, EoS, network, etc.) can be displayed by running the executable as

::
.. prompt:: bash

./Castro.exe --describe
./Castro.exe --describe

.. _ch:faq:vis:

31 changes: 18 additions & 13 deletions Exec/gravity_tests/hse_convergence/README.md
@@ -7,29 +7,34 @@ in the plotfiles.

To run this problem, use one of the convergence scripts:

* ``convergence_plm.sh`` :
* `convergence_plm.sh` :

this runs CTU + PLM using the default HSE BCs and default
use_pslope, then with reflect BCs, then without use_pslope, and
finally runs with reflect instead of HSE BCs.
this runs CTU + PLM using:
1. the default HSE BCs and `use_pslope`
2. the HSE BCs with reflection and `use_pslope`
3. reflect BCs instead of HSE BCs without `use_pslope`
4. reflect BCs with `use_pslope`

These tests show that the best results come from HSE BCs + reflect vel
These tests show that the best results (by far) come from
`use_pslope=1` and reflecting BCs.

* convergence_ppm.sh :

this runs CTU + PPM in a similar set of configurations as PLM above
(with one additional one: grav_source_type = 4)
1. the default HSE BCs
2. HSE BCs with reflection
3. reflecting BCs
4. reflecting BCs with `use_pslope`

These tests show that the best results come from HSE BCs + reflect vel
These tests show that the best results (by far) come from
reflecting BCs with `use_pslope=1`, just like the PLM case.

* convergence_sdc.sh :

this uses the TRUE_SDC integration, first with SDC-2 + PLM and reflecting BCs,
the SDC-2 + PPM and reflecting BCs, then the same but HSE BCs, and finally
SDC-4 + reflect
this uses the TRUE_SDC integration, first with SDC-2 + PLM and
reflecting BCs, then SDC-2 + PPM and reflecting BCs, then the same
but HSE BCs, and finally SDC-4 + reflect.

These tests show that the PLM + reflect (which uses the
well-balanced use_pslope) and the SDC-4 + reflect give the lowest
errors and expected (or better) convergence:


errors and expected (or better) convergence.
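
As a usage sketch (assuming the Castro executable these inputs expect has
already been built in this directory and that AMReX's `fextrema.gnu.ex`
plotfile tool is on your `PATH`, as the scripts require), a full PLM
convergence study and a quick look at the best-behaved case would be:

```bash
# run the four PLM configurations back to back; each writes a
# *.converge.out file with the magvel extrema at 64, 128, 256, and 512 zones
./convergence_plm.sh
cat plm-reflect-pslope.converge.out
```

Since the velocity should stay zero in exact HSE, the peak `magvel` serves
as the error measure; for a second-order method it should drop by roughly a
factor of 4 each time the resolution doubles.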
41 changes: 6 additions & 35 deletions Exec/gravity_tests/hse_convergence/convergence_plm.sh
@@ -58,43 +58,15 @@ pfile=`ls -t | grep -i hse_512_plt | head -1`
fextrema.gnu.ex -v magvel ${pfile} | grep -i magvel >> ${ofile}


## plm + hse reflect + no pslope

ofile=plm-hsereflect-nopslope.converge.out

RUNPARAMS="
castro.ppm_type=0
castro.use_pslope=0
castro.hse_interp_temp=1
castro.hse_reflect_vels=1
"""

${EXEC} inputs.ppm.64 ${RUNPARAMS} >& 64.out
pfile=`ls -t | grep -i hse_64_plt | head -1`
fextrema.gnu.ex -v magvel ${pfile} | grep -i magvel > ${ofile}

${EXEC} inputs.ppm.128 ${RUNPARAMS} >& 128.out
pfile=`ls -t | grep -i hse_128_plt | head -1`
fextrema.gnu.ex -v magvel ${pfile} | grep -i magvel >> ${ofile}

${EXEC} inputs.ppm.256 ${RUNPARAMS} >& 256.out
pfile=`ls -t | grep -i hse_256_plt | head -1`
fextrema.gnu.ex -v magvel ${pfile} | grep -i magvel >> ${ofile}

${EXEC} inputs.ppm.512 ${RUNPARAMS} >& 512.out
pfile=`ls -t | grep -i hse_512_plt | head -1`
fextrema.gnu.ex -v magvel ${pfile} | grep -i magvel >> ${ofile}


## plm + reflect
## plm + reflect + nopslope

ofile=plm-reflect.converge.out
ofile=plm-reflect-nopslope.converge.out

RUNPARAMS="
castro.ppm_type=0
castro.use_pslope=1
castro.lo_bc=3
castro.hi_bc=3
castro.use_pslope=0
"""

${EXEC} inputs.ppm.64 ${RUNPARAMS} >& 64.out
@@ -114,16 +86,15 @@ pfile=`ls -t | grep -i hse_512_plt | head -1`
fextrema.gnu.ex -v magvel ${pfile} | grep -i magvel >> ${ofile}


## plm + reflect + pslope

## plm + reflect + nopslope

ofile=plm-reflect-nopslope.converge.out
ofile=plm-reflect-pslope.converge.out

RUNPARAMS="
castro.ppm_type=0
castro.lo_bc=3
castro.hi_bc=3
castro.use_pslope=0
castro.use_pslope=1
"""

${EXEC} inputs.ppm.64 ${RUNPARAMS} >& 64.out
12 changes: 7 additions & 5 deletions Exec/gravity_tests/hse_convergence/convergence_ppm.sh
@@ -50,12 +50,13 @@ pfile=`ls -t | grep -i hse_512_plt | head -1`
fextrema.gnu.ex -v magvel ${pfile} | grep -i magvel >> ${ofile}


## ppm + grav_source_type = 4
## ppm + reflect

ofile=ppm-grav4.converge.out
ofile=ppm-reflect.converge.out

RUNPARAMS="
castro.grav_source_type=4
castro.lo_bc=3
castro.hi_bc=3
"""

${EXEC} inputs.ppm.64 ${RUNPARAMS} >& 64.out
@@ -75,13 +76,14 @@ pfile=`ls -t | grep -i hse_512_plt | head -1`
fextrema.gnu.ex -v magvel ${pfile} | grep -i magvel >> ${ofile}


## ppm + reflect
## ppm + reflect + pslope

ofile=ppm-reflect.converge.out
ofile=ppm-reflect-pslope.converge.out

RUNPARAMS="
castro.lo_bc=3
castro.hi_bc=3
castro.use_pslope=1
"""

${EXEC} inputs.ppm.64 ${RUNPARAMS} >& 64.out
@@ -73,7 +73,7 @@ void problem_initialize_state_data (int i, int j, int k,
}

u_tot += u_phi;
reint += p/(gamma_const - 1.0_rt);
reint += p/(eos_rp::eos_gamma - 1.0_rt);
}
}
}
4 changes: 2 additions & 2 deletions Exec/radiation_tests/Rad2Tshock/problem_initialize.H
@@ -14,8 +14,8 @@ void problem_initialize ()

eos_state.rho = problem::rho0;
eos_state.T = problem::T0;
for (int n = 0; n < NumSpec; n++) {
eos_state.xn[n] = 0.0_rt;
for (auto & X : eos_state.xn) {
X = 0.0_rt;
}
eos_state.xn[0] = 1.0_rt;

3 changes: 3 additions & 0 deletions Exec/radiation_tests/Rad2Tshock/problem_initialize_rad_data.H
@@ -12,6 +12,9 @@ void problem_initialize_rad_data (int i, int j, int k,
const GeometryData& geomdata)
{

amrex::ignore_unused(nugroup);
amrex::ignore_unused(dnugroup);

const Real* dx = geomdata.CellSize();
const Real* problo = geomdata.ProbLo();

@@ -16,7 +16,7 @@ void problem_initialize_state_data (int i, int j, int k,
// Provides the simulation to be run in the x,y,or z direction
// where length direction is the length side in a square prism

Real length_cell;
Real length_cell{};
if (problem::idir == 1) {
length_cell = problo[0] + dx[0] * (static_cast<Real>(i) + 0.5_rt);
} else if (problem::idir == 2) {
30 changes: 17 additions & 13 deletions Exec/reacting_tests/bubble_convergence/analysis/slice_multi.py
@@ -1,5 +1,8 @@
#!/usr/bin/env python3

import matplotlib
matplotlib.use('agg')

import os
import sys
import yt
@@ -26,35 +29,37 @@
fig = plt.figure()
fig.set_size_inches(12.0, 9.0)

grid = ImageGrid(fig, 111, nrows_ncols=(2, 2), axes_pad=0.75, cbar_pad="2%",
grid = ImageGrid(fig, 111, nrows_ncols=(2, 2),
axes_pad=0.75, cbar_pad="2%",
label_mode="L", cbar_mode="each")


fields = ["Temp", "magvel", "X(C12)", "rho_enuc"]
fields = ["Temp", "magvel", "X(C12)", "enuc"]

for i, f in enumerate(fields):

sp = yt.SlicePlot(ds, "z", f, center=[xctr, yctr, 0.0], width=[L_x, L_y, 0.0], fontsize="12")
sp = yt.SlicePlot(ds, "z", f, center=[xctr, yctr, 0.0*cm],
width=[L_x, L_y, 0.0*cm], fontsize="12")
sp.set_buff_size((2000,2000))

if f == "X(C12)":
sp.set_log(f, True)
sp.set_cmap(f, "plasma")
sp.set_zlim(f, 1.e-8, 2.e-4)
sp.set_cmap(f, "magma")
sp.set_zlim(f, 1.e-8, 1.e-4)

elif f == "magvel":
sp.set_log(f, False)
#sp.set_zlim(f, 1.e-3, 2.5e-2)
sp.set_cmap(f, "magma")
sp.set_cmap(f, "cividis")

elif f == "Temp":
sp.set_log(f, False)
#sp.set_zlim(f, 1.e-3, 2.5e-2)

elif f == "rho_enuc":
sp.set_log(f, True)
sp.set_zlim(f, 5.e7, 2.e8)

elif f == "enuc":
sp.set_log(f, True, linthresh=1.e11)
sp.set_zlim(f, 1.e11, 1.e14)
sp.set_cmap(f, "plasma")
#sp.set_zlim(f, 1.e-3, 2.5e-2)

sp.set_axes_unit("cm")

@@ -71,5 +76,4 @@

fig.set_size_inches(8.0, 8.0)
plt.tight_layout()
plt.savefig("{}_slice.pdf".format(os.path.basename(plotfile)))

plt.savefig("{}_slice.png".format(os.path.basename(plotfile)))
