
Main #274 (Open)

wants to merge 56 commits into base: main

Changes from all commits (56 commits)
- cf4f6df  turb: start out as copy from flatfoil_yz, get rid of CASEs (germasch, May 13, 2022)
- e346a83  turb: set background density to 1, don't inject initially (germasch, May 13, 2022)
- ac3a4bb  turb: get rid of heating (germasch, May 13, 2022)
- 64a0ded  turb: get rid of injection (germasch, May 13, 2022)
- 4161302  turb: get rid of ELECTRON_HE (germasch, May 13, 2022)
- aaeab80  turb: now we have a minimal case (germasch, May 13, 2022)
- 8ea58f2  turb: switch output to WriterMRC (XDMF/HDF5) (germasch, May 13, 2022)
- 5e4f6ea  turb: setup intermediate mflds_alfven first (germasch, May 13, 2022)
- ef4b5d7  turb: use 6-component field for Alfven perturbation (germasch, May 13, 2022)
- 134ed56  turb: looks like initial H and V pert works (germasch, May 13, 2022)
- 39b366c  turb: avoid hacky cast to double (germasch, May 16, 2022)
- 65d06f0  turb: add jupyter notebook (germasch, May 13, 2022)
- 2d52c8a  turb: fix compile with cuda (germasch, May 16, 2022)
- c50b1ed  adding psc_turb_harris_xz.cxx This is the file that will include the … (JefferssonAgudelo, Jun 2, 2022)
- 688e588  Including the up and bo cases doesn't really work. The density is dou… (JefferssonAgudelo, Jun 6, 2022)
- 170ff4f  The harris set up is almost done. However, there is a net drift on th… (JefferssonAgudelo, Jun 6, 2022)
- 2da3a41  the drifting is because ions and electrons have different velocities … (JefferssonAgudelo, Jun 6, 2022)
- 05f5573  The psc_harris_xz.cxx seems to work for the simple pair plasma harris… (JefferssonAgudelo, Jun 7, 2022)
- a1a3d4e  It is still not properly working. It seems there is an hy component t… (JefferssonAgudelo, Jun 7, 2022)
- 7d8d5e8  The error in the out appears to be due to paraview is keep a ghost fi… (JefferssonAgudelo, Jun 8, 2022)
- 50cce13  The turb_harris.cxx file has now a block that corresponds to the impl… (JefferssonAgudelo, Jun 8, 2022)
- bef0561  The file turb_harris_yz.cxx is now ready as a standar harris current … (JefferssonAgudelo, Jun 13, 2022)
- 3ac0274  The psc_turb_harris_yz.cxx works now for mime greater than 1. There i… (JefferssonAgudelo, Jun 13, 2022)
- fcd2e04  the file now have the correct calculation of Bext_x_r and Bext_y_r. T… (JefferssonAgudelo, Jun 14, 2022)
- bc8731c  extra comments (JefferssonAgudelo, Jun 15, 2022)
- cfc253d  auto-format only (germasch, Jun 16, 2022)
- 8624d6a  evolve bn_k over time, set initial magnetic field (germasch, Jun 16, 2022)
- 59f2fba  initialize in 3d, via vector potential, div B == 0 works (germasch, Jun 16, 2022)
- 713e16d  Merge pull request #1 from germasch/psc_harris_turb (JefferssonAgudelo, Jun 16, 2022)
- cec882e  updated notebook (germasch, Jun 17, 2022)
- 7cec5c9  the div B works but not the curl of B. There is an issue with the size (JefferssonAgudelo, Jun 22, 2022)
- 485b0a9  Merge remote-tracking branch 'jeffersson1/main' into psc_harris_turb (germasch, Jun 22, 2022)
- f348539  clang-format (germasch, Jun 22, 2022)
- 3cec8ee  calculate H including one ghost point on the higher side (germasch, Jun 22, 2022)
- 665fbad  second curl needs to be done going to -1 to get staggering right (germasch, Jun 22, 2022)
- d20c0f0  initialize random numbers on rank 0 and broadcast (germasch, Jun 22, 2022)
- bdc0556  put random ext B evolution back in (germasch, Jun 22, 2022)
- 7dddb3c  calculate Az from random modes again, rather than test func (germasch, Jun 22, 2022)
- 0c6c0a3  use MY_{ELECTRON,ION} constants rather than hardcoded numbers (germasch, Jun 22, 2022)
- ea3b0a7  run 10 steps, output every step (germasch, Jun 22, 2022)
- ca5a793  put initial condition back to Harris-only, with no pert/random B (germasch, Jun 22, 2022)
- 41c7ea6  update jupyter notebook (germasch, Jun 22, 2022)
- c0bc660  put initial perturbation back to turbulent modes, make box all periodic (germasch, Jun 23, 2022)
- a59bbcc  use separate functions for calculating curl Az, H (germasch, Jun 23, 2022)
- 2a39bf5  add ExtCurrent to psc as a hook to be able to modify fields (germasch, Jun 23, 2022)
- 2c1babc  run 100 steps and remove particles (setting plasma density to zero) (germasch, Jun 23, 2022)
- cbef35c  add external currents (germasch, Jun 23, 2022)
- 4d44dca  update notebook (germasch, Jun 23, 2022)
- 251c961  Merge pull request #2 from germasch/psc_harris_turb (JefferssonAgudelo, Jun 23, 2022)
- 7ed5092  Including the turb_alfven (JefferssonAgudelo, Jun 29, 2022)
- e77ef10  When including the perturbation for the velocity field in turb_alf.cx… (JefferssonAgudelo, Jun 30, 2022)
- fcb33de  the turb_alf now works. There is an issue with the random numbers so … (JefferssonAgudelo, Jun 30, 2022)
- 2e3ac46  The turb_alf and the turb_langevin are set on the same geometry and t… (JefferssonAgudelo, Jun 30, 2022)
- efceeb7  Including the correct background and harris populations (JefferssonAgudelo, Jul 5, 2022)
- 668682f  testing where the commit goes to (JefferssonAgudelo, Jul 5, 2022)
- 716753c  The drift velocity were too low. Also, to run the driving remember to… (JefferssonAgudelo, Jul 8, 2022)
Empty file added: dummy_test_file.txt
404 changes: 404 additions & 0 deletions python/psc_turb1.ipynb

Large diffs are not rendered by default.

8 changes: 6 additions & 2 deletions src/CMakeLists.txt
@@ -13,10 +13,14 @@ if (NOT USE_CUDA)
 endif()
 add_psc_executable(psc_bubble_yz)
 add_psc_executable(psc_flatfoil_yz)
-add_psc_executable(psc_flatfoil_yz_small)
+#add_psc_executable(psc_flatfoil_yz_small)
 add_psc_executable(psc_whistler)
-#add_psc_executable(psc_2d_shock)
 add_psc_executable(psc_harris_xz)
+add_psc_executable(psc_2d_shock)
+add_psc_executable(psc_turb)
+add_psc_executable(psc_turb_harris_yz)
+add_psc_executable(psc_turb_alf)
+add_psc_executable(psc_turb_langevin)
 
 if (NOT USE_CUDA)
   install(
68 changes: 55 additions & 13 deletions src/include/psc.hxx
@@ -131,7 +131,8 @@ inline double courant_length(const Grid_t::Domain& domain)
 // ======================================================================
 // Psc
 
-template <typename PscConfig, typename Diagnostics, typename InjectParticles>
+template <typename PscConfig, typename Diagnostics, typename InjectParticles,
+          typename ExtCurrent>
 struct Psc
 {
   using Mparticles = typename PscConfig::Mparticles;
@@ -158,7 +159,7 @@ struct Psc
   Psc(const PscParams& params, Grid_t& grid, MfieldsState& mflds,
       Mparticles& mprts, Balance& balance, Collision& collision, Checks& checks,
       Marder& marder, Diagnostics& diagnostics,
-      InjectParticles& inject_particles)
+      InjectParticles& inject_particles, ExtCurrent& ext_current)
     : p_{params},
       grid_{&grid},
      mflds_{mflds},
@@ -171,6 +172,7 @@ struct Psc
       bndp_{grid},
       diagnostics_{diagnostics},
       inject_particles_{inject_particles},
+      ext_current_{ext_current},
       checkpointing_{params.write_checkpoint_every_step}
   {
     time_start_ = MPI_Wtime();
@@ -484,7 +486,7 @@ struct Psc
 
     // === particle injection
     prof_start(pr_inject_prts);
-    inject_particles();
+    inject_particles_(grid(), mprts_);
    prof_stop(pr_inject_prts);
 
    if (checks_.continuity_every_step > 0 &&
@@ -522,6 +524,9 @@ struct Psc
     bnd_.fill_ghosts(mflds_, HX, HX + 3);
 #endif
 
+    // === external current
+    this->ext_current_(grid(), mflds_);
+
     bndf_.add_ghosts_J(mflds_);
     bnd_.add_ghosts(mflds_, JXI, JXI + 3);
     bnd_.fill_ghosts(mflds_, JXI, JXI + 3);
@@ -540,10 +545,12 @@ struct Psc
     // state is now: x^{n+3/2}, p^{n+1}, E^{n+3/2}, B^{n+1}
 
     // === field propagation B^{n+1} -> B^{n+3/2}
+    // mpi_printf(comm, "***** before Push fields jeff\n");
     mpi_printf(comm, "***** Push fields B\n");
     prof_restart(pr_push_flds);
     pushf_.push_H(mflds_, .5, Dim{});
     prof_stop(pr_push_flds);
+    // mpi_printf(comm, "***** before if fields jeff\n");
 
 #if 1
     prof_start(pr_bndf);
@@ -552,13 +559,15 @@ struct Psc
     prof_stop(pr_bndf);
     // state is now: x^{n+3/2}, p^{n+1}, E^{n+3/2}, B^{n+3/2}
 #endif
+    // mpi_printf(comm, "***** before continuity jeff\n");
 
     if (checks_.continuity_every_step > 0 &&
         timestep % checks_.continuity_every_step == 0) {
       prof_restart(pr_checks);
       checks_.continuity_after_particle_push(mprts_, mflds_);
       prof_stop(pr_checks);
     }
+    // mpi_printf(comm, "***** before marder correction jeff\n");
 
     // E at t^{n+3/2}, particles at t^{n+3/2}
     // B at t^{n+3/2} (Note: that is not its natural time,
@@ -589,11 +598,6 @@ struct Psc
 #endif
   }
 
-  // ----------------------------------------------------------------------
-  // inject_particles
-
-  void inject_particles() { return this->inject_particles_(grid(), mprts_); }
-
 private:
   // ----------------------------------------------------------------------
   // print_profiling
@@ -724,6 +728,7 @@ protected:
   Marder& marder_;
   Diagnostics& diagnostics_;
   InjectParticles& inject_particles_;
+  ExtCurrent& ext_current_;
 
   Sort sort_;
   PushParticles pushp_;
@@ -762,21 +767,42 @@ InjectParticlesNone injectParticlesNone;
 
 } // namespace
 
+// ======================================================================
+// ExtCurrentNone
+
+class ExtCurrentNone
+{
+public:
+  template <typename MfieldsState>
+  void operator()(const Grid_t& grid, MfieldsState& mflds)
+  {}
+};
+
+namespace
+{
+
+ExtCurrentNone extCurrentNone;
+
+} // namespace
+
 // ======================================================================
 // makePscIntegrator
 
 template <typename PscConfig, typename MfieldsState, typename Mparticles,
           typename Balance, typename Collision, typename Checks,
           typename Marder, typename Diagnostics,
-          typename InjectParticles = InjectParticlesNone>
-Psc<PscConfig, Diagnostics, InjectParticles> makePscIntegrator(
+          typename InjectParticles = InjectParticlesNone,
+          typename ExtCurrent = ExtCurrentNone>
+Psc<PscConfig, Diagnostics, InjectParticles, ExtCurrent> makePscIntegrator(
   const PscParams& params, Grid_t& grid, MfieldsState& mflds, Mparticles& mprts,
   Balance& balance, Collision& collision, Checks& checks, Marder& marder,
   Diagnostics& diagnostics,
-  InjectParticles& inject_particles = injectParticlesNone)
+  InjectParticles& inject_particles = injectParticlesNone,
+  ExtCurrent& ext_current = extCurrentNone)
 {
-  return {params, grid, mflds, mprts, balance,
-          collision, checks, marder, diagnostics, inject_particles};
+  return {params, grid, mflds, mprts, balance,
+          collision, checks, marder, diagnostics, inject_particles,
+          ext_current};
 }

// ======================================================================
@@ -799,6 +825,22 @@ void partitionAndSetupParticles(SetupParticles& setup_particles,
   setupParticlesGeneralInit(setup_particles, mprts, init_npt_general);
 }
 
+// ======================================================================
+// partitionAndSetupParticlesGeneral
+// convenience/backward compatibility method to allow simpler function signature
+// init_npt signature: (int kind, Double3 pos, npt) -> void
+
+template <typename SetupParticles, typename Balance, typename Mparticles,
+          typename FUNC>
+void partitionAndSetupParticlesGeneral(SetupParticles& setup_particles,
+                                       Balance& balance, Grid_t*& grid_ptr,
+                                       Mparticles& mprts, FUNC&& init_npt)
+{
+  partitionParticlesGeneralInit(setup_particles, balance, grid_ptr, mprts,
+                                init_npt);
+  setupParticlesGeneralInit(setup_particles, mprts, init_npt);
+}
+
 // ----------------------------------------------------------------------
 // partitionParticlesGeneralInit
 // init_npt signature: (int kind, Double3 pos, int p, Int3 index, npt) -> void
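The ExtCurrent hook added above is nothing more than a callable with the same shape as ExtCurrentNone: `void operator()(grid, mflds)`, invoked once per step after `push_H` and before the J ghost-point exchange. A minimal self-contained sketch of a user-defined hook follows; `DummyGrid`, `DummyFields`, and `SinusoidalAntenna` are hypothetical stand-ins for illustration, not PSC types.

```cpp
#include <cassert>
#include <cmath>

// Stand-ins so the sketch compiles on its own; in PSC these roles are
// played by Grid_t and MfieldsState (these names are hypothetical).
struct DummyGrid
{
  int timestep = 0;
  double dt = 0.1;
};

struct DummyFields
{
  double jy = 0.; // one sample of the Jy component, for illustration
};

// An external-current functor: same callable shape as ExtCurrentNone,
// i.e. void operator()(grid, mflds), called once per timestep.
struct SinusoidalAntenna
{
  double amplitude;
  double omega;

  template <typename Grid, typename Fields>
  void operator()(const Grid& grid, Fields& mflds)
  {
    double t = grid.timestep * grid.dt;
    mflds.jy += amplitude * std::sin(omega * t); // drive Jy in place
  }
};
```

Such a functor would then be passed as the trailing argument of `makePscIntegrator(...)`; when it is omitted, the `extCurrentNone` default preserves the previous behavior.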
37 changes: 37 additions & 0 deletions src/include/setup_fields.hxx
@@ -44,6 +44,37 @@ struct SetupFields
       });
     }
   }
+  // I think in here is where the Jext should be included Jeff.
+  template <typename FUNC>
+  static void run_general(Mfields& mf, FUNC&& func)
+  {
+    const auto& grid = mf.grid();
+    mpi_printf(grid.comm(), "**** Setting up fields...\n");
+
+    for (int p = 0; p < mf.n_patches(); ++p) {
+      auto& patch = grid.patches[p];
+      auto F = make_Fields3d<dim_xyz>(mf[p]);
+
+      int n_ghosts =
+        std::max({mf.ibn()[0], mf.ibn()[1], mf.ibn()[2]}); // FIXME, not pretty
+      // FIXME, do we need the ghost points?
+      grid.Foreach_3d(n_ghosts, n_ghosts, [&](int jx, int jy, int jz) {
+        Int3 index{jx, jy, jz};
+
+        for (int c = 0; c < 3; c++) {
+          F(HX + c, jx, jy, jz) +=
+            func(HX + c, index, p,
+                 Centering::getPos(patch, index, Centering::FC, c));
+          F(EX + c, jx, jy, jz) +=
+            func(EX + c, index, p,
+                 Centering::getPos(patch, index, Centering::EC, c));
+          F(JXI + c, jx, jy, jz) +=
+            func(JXI + c, index, p,
+                 Centering::getPos(patch, index, Centering::EC, c));
+        }
+      });
+    }
+  }
 
   template <typename FUNC>
   static void runScalar(Mfields& mf, FUNC&& func,
@@ -75,6 +106,12 @@ void setupFields(MF& mflds, FUNC&& func)
   detail::SetupFields<MF>::run(mflds, std::forward<FUNC>(func));
 }
 
+template <typename MF, typename FUNC>
+void setupFieldsGeneral(MF& mflds, FUNC&& func)
+{
+  detail::SetupFields<MF>::run_general(mflds, std::forward<FUNC>(func));
+}
+
 // func signature: (int component, Double3 position) -> double fieldValue
 template <typename MF, typename FUNC>
 void setupScalarField(MF& fld, const Centering::Centerer& centerer, FUNC&& func)
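setupFieldsGeneral forwards to run_general, which accumulates (note the `+=`) the functor's return value into the H, E, and J components at their proper centerings, so an initializer can layer a perturbation on top of existing fields. A sketch of what such an init function might look like, simplified here to `(component, position)` instead of the full `(m, index, p, pos)` signature; `alfven_perturbation`, `db`, and `kz` are illustrative assumptions, and the `Component` enum is a placeholder for PSC's real field indices.

```cpp
#include <cassert>
#include <cmath>

// Placeholder component indices; the real ones come from PSC's field enums.
enum Component { EX, EY, EZ, HX, HY, HZ, JXI, JYI, JZI };

// Returns the value to *add* to component m at physical position pos.
double alfven_perturbation(int m, const double pos[3])
{
  const double db = 0.01;       // assumed perturbation amplitude
  const double kz = 2.0 * M_PI; // assumed wavenumber
  switch (m) {
    case HX:
      return db * std::cos(kz * pos[2]); // delta-Bx(z)
    case JYI:
      // consistent current: with Bz = 0, Jy = (curl B)_y = dBx/dz
      return -db * kz * std::sin(kz * pos[2]);
    default:
      return 0.;
  }
}
```

With the real API, this logic would sit inside the lambda handed to `setupFieldsGeneral(mflds, ...)`, switching on the component index it receives.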
56 changes: 56 additions & 0 deletions src/libpsc/cuda/setup_fields_cuda.hxx
@@ -12,6 +12,15 @@ void detail::SetupFields<MfieldsStateCuda>::run(Mfields& mf, FUNC&& func)
                                               std::forward<FUNC>(func));
 }
 
+template <>
+template <typename FUNC>
+void detail::SetupFields<MfieldsStateCuda>::run_general(Mfields& mf,
+                                                        FUNC&& func)
+{
+  return detail::SetupFields<MfieldsCuda>::run_general(
+    mf.mflds(), std::forward<FUNC>(func));
+}
+
 template <>
 template <typename FUNC>
 void detail::SetupFields<MfieldsCuda>::run(Mfields& mf, FUNC&& func)
@@ -57,3 +66,50 @@ void detail::SetupFields<MfieldsCuda>::run(Mfields& mf, FUNC&& func)
   }
   gt::copy(h_mf, mf.storage());
 }
+
+template <>
+template <typename FUNC>
+void detail::SetupFields<MfieldsCuda>::run_general(Mfields& mf, FUNC&& func)
+{
+  auto bnd = mf.ibn();
+  auto h_mf = gt::host_mirror(mf.gt());
+  gt::copy(mf.storage(), h_mf);
+
+  for (int p = 0; p < mf.n_patches(); ++p) {
+    auto& patch = mf.grid().patches[p];
+    auto flds =
+      make_Fields3d<dim_xyz>(h_mf.view(_all, _all, _all, _all, p), -bnd);
+
+    int n_ghosts = std::max({bnd[0], bnd[1], bnd[2]});
+
+    // FIXME, do we need the ghost points?
+    mf.grid().Foreach_3d(n_ghosts, n_ghosts, [&](int jx, int jy, int jz) {
+      Int3 idx{jx, jy, jz};
+      double x_nc = patch.x_nc(jx), y_nc = patch.y_nc(jy),
+             z_nc = patch.z_nc(jz);
+      double x_cc = patch.x_cc(jx), y_cc = patch.y_cc(jy),
+             z_cc = patch.z_cc(jz);
+
+      double ncc[3] = {x_nc, y_cc, z_cc};
+      double cnc[3] = {x_cc, y_nc, z_cc};
+      double ccn[3] = {x_cc, y_cc, z_nc};
+
+      double cnn[3] = {x_cc, y_nc, z_nc};
+      double ncn[3] = {x_nc, y_cc, z_nc};
+      double nnc[3] = {x_nc, y_nc, z_cc};
+
+      flds(HX, jx, jy, jz) += func(HX, idx, p, ncc);
+      flds(HY, jx, jy, jz) += func(HY, idx, p, cnc);
+      flds(HZ, jx, jy, jz) += func(HZ, idx, p, ccn);
+
+      flds(EX, jx, jy, jz) += func(EX, idx, p, cnn);
+      flds(EY, jx, jy, jz) += func(EY, idx, p, ncn);
+      flds(EZ, jx, jy, jz) += func(EZ, idx, p, nnc);
+
+      flds(JXI, jx, jy, jz) += func(JXI, idx, p, cnn);
+      flds(JYI, jx, jy, jz) += func(JYI, idx, p, ncn);
+      flds(JZI, jx, jy, jz) += func(JZI, idx, p, nnc);
+    });
+  }
+  gt::copy(h_mf, mf.storage());
+}
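The ncc/cnn position tables in the CUDA run_general encode the usual Yee staggering: H components are face-centered (node-centered only along their own axis), while E and J share edge centering (cell-centered only along their own axis). A compact cross-check of that layout, where the enum values are placeholders rather than PSC's actual indices:

```cpp
#include <array>
#include <cassert>

enum Component { EX, EY, EZ, HX, HY, HZ };

// For each component, whether x/y/z is node-centered ('n') or
// cell-centered ('c'), mirroring the ncc/cnn tables above.
// J components (not listed) use the same centering as E.
std::array<char, 3> staggering(int m)
{
  switch (m) {
    case HX: return {'n', 'c', 'c'}; // face-centered on x-faces
    case HY: return {'c', 'n', 'c'};
    case HZ: return {'c', 'c', 'n'};
    case EX: return {'c', 'n', 'n'}; // edge-centered along x
    case EY: return {'n', 'c', 'n'};
    case EZ: return {'n', 'n', 'c'};
  }
  return {'c', 'c', 'c'};
}
```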
31 changes: 16 additions & 15 deletions src/psc_harris_xz.cxx
@@ -143,8 +143,8 @@ PscParams psc_params;
 void setupHarrisParams()
 {
   g.wpedt_max = .36;
-  g.wpe_wce = 2.;
-  g.mi_me = 25.;
+  g.wpe_wce = 2.5;
+  g.mi_me = 1.;
 
   g.Lx_di = 40.;
   g.Ly_di = 1.;
@@ -153,18 +153,18 @@ void setupHarrisParams()
   g.electron_sort_interval = 25;
   g.ion_sort_interval = 25;
 
-  g.taui = 40.;
+  g.taui = 10.;
   g.t_intervali = 1.;
   g.output_particle_interval = 10.;
 
   g.overalloc = 2.;
 
   g.gdims = {512, 1, 128};
-  g.np = {4, 1, 1};
+  g.np = {4, 1, 4};
 
-  g.L_di = .5;
+  g.L_di = 1.0;
   g.Ti_Te = 5.;
-  g.nb_n0 = .05;
+  g.nb_n0 = .25;
   g.Tbe_Te = .333;
   g.Tbi_Ti = .333;
@@ -459,6 +459,7 @@ void setup_log(const Grid_t& grid)
   mpi_printf(comm, "n_global_patches = %d\n", phys.n_global_patches);
   mpi_printf(comm, "nppc = %g\n", g.nppc);
   mpi_printf(comm, "b0 = %g\n", phys.b0);
+  mpi_printf(comm, "n0 = %g\n", phys.n0);
   mpi_printf(comm, "v_A (based on nb) = %g\n", phys.v_A);
   mpi_printf(comm, "di = %g\n", phys.di);
   mpi_printf(comm, "Ne = %g\n", phys.Ne);
@@ -745,15 +746,15 @@ void initializeFields(MfieldsState& mflds)
 
     switch (m) {
       case HX:
-        return cs * b0 * tanh(z / L) +
-               dbx * cos(2. * M_PI * (x - .5 * Lx) / Lpert) *
-                 sin(M_PI * z / Lz);
+        return cs * b0 * tanh(z / L) ; //+
+        // dbx * cos(2. * M_PI * (x - .5 * Lx) / Lpert) *
+        //   sin(M_PI * z / Lz);
 
-      case HY: return -sn * b0 * tanh(z / L) + b0 * g.bg;
+      case HY: return 0.; //-sn * b0 * tanh(z / L) ;//+ b0 * g.bg;
 
       case HZ:
-        return dbz * cos(M_PI * z / Lz) *
-               sin(2.0 * M_PI * (x - 0.5 * Lx) / Lpert);
+        return 0.; //+ dbz * cos(M_PI * z / Lz) *
+        //  sin(2.0 * M_PI * (x - 0.5 * Lx) / Lpert);
 
       case JYI: return 0.; // FIXME
 
@@ -877,19 +878,19 @@ void run()
   outf_params.fields.pfield_interval =
     int((output_field_interval / (phys.wci * grid.dt)));
   outf_params.fields.tfield_interval =
-    int((output_field_interval / (phys.wci * grid.dt)));
+    -int((output_field_interval / (phys.wci * grid.dt)));
   OutputFields<MfieldsState, Mparticles, dim_xz> outf{grid, outf_params};
 
   OutputParticlesParams outp_params{};
   outp_params.every_step =
-    int((g.output_particle_interval / (phys.wci * grid.dt)));
+    -int((g.output_particle_interval / (phys.wci * grid.dt)));
   outp_params.data_dir = ".";
   outp_params.basename = "prt";
   outp_params.lo = {192, 0, 48};
   outp_params.hi = {320, 0, 80};
   OutputParticles outp{grid, outp_params};
 
-  int oute_interval = 100;
+  int oute_interval = -100;
   DiagEnergies oute{grid.comm(), oute_interval};
 
   Diagnostics diagnostics{grid, outf, outp, oute};