Conference call notes 20200805
Kenneth Hoste edited this page Aug 5, 2020
Notes on the 153rd EasyBuild conference call, Wednesday August 5th 2020 (08:00 UTC - 10:00 CEST)
Alphabetical list of attendees (6):
- Simon Branford (Univ. of Birmingham, UK)
- Kenneth Hoste (HPC-UGent, Belgium)
- Adam Huffman (Big Data Institute, Oxford, UK)
- Lev Lafayette (Univ. of Melbourne, Australia)
- Alan O'Cais (Jülich Supercomputing Centre, Germany)
- Jörg Saßmannshausen (NIHR Biomedical Research Centre, UK)
Agenda:
- recent developments to be included in next EasyBuild release
- 2020b update for common toolchains
- compiler toolchain for AMD Rome systems
- Q&A
- recent developments to be included in next EasyBuild release
  - next release will most likely be v4.2.3
    - ETA: end of August
  - recent changes
    - framework
      - (nothing significant)
    - easyblocks
    - easyconfigs
  - to merge soon:
    - framework
      - add templates for CUDA compute capabilities (PR #3382)
      - escape backslashes in `quote_py_str()` (PR #3386)
      - use one-argument 'module swap' statements in Tcl modulefiles (required by Modules 4.2.3+) (PR #3397)
      - `gcccudacore` toolchain (PR #3385)
        - Alan: discussion around this is not settled yet, warrants larger discussion around handling of accelerators
          - maybe needs a separate call with maintainers to settle this
    - easyblocks
      - updates/fixes to Tinker easyblock (PR #2102)
      - add missing 'lib' symlink in tbb installation (PR #2103)
      - custom easyblock for PyTorch (PR #2104)
      - handle GNUInstallDirs `/opt` special case in CMakeMake easyblock (PR #2105)
      - update Python easyblock to take into account pip & setuptools that are included with Python 3.4+ (PR #2108)
    - easyconfigs
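The `quote_py_str()` backslash fix (PR #3386) is about emitting valid Python string literals: an unescaped backslash can change the meaning of the quoted value when it is read back. A minimal sketch of that kind of quoting, purely for illustration (not EasyBuild's actual implementation):

```python
def quote_py_str(value):
    """Quote a string as a Python literal (hypothetical sketch, not
    EasyBuild's real code): backslashes must be escaped first, then
    the quote character itself."""
    escaped = value.replace('\\', '\\\\').replace("'", "\\'")
    return "'%s'" % escaped

# A Windows-style path survives a quote/eval round-trip:
print(quote_py_str(r"C:\path"))  # prints 'C:\\path'
```

Without the backslash escaping, the result for `r"C:\path"` would be `'C:\path'`, which Python may interpret differently (e.g. a `\n` in the value would turn into a newline).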
- 2020b update for common toolchains
  - GCC 10.2 is available now (see PR #10935, combined with binutils 2.35)
    - base for foss/2020b and intel/2020b toolchains?
  - `foss`:
    - OpenMPI 4.1.0 (RC is available, final release soon)
    - OpenBLAS 0.3.10 (updated from 0.3.9 in 2020a)
    - FFTW 3.3.8 (no updates)
  - `intel`:
    - compilers: 2020 update 2
    - MPI: 2019 update 8
    - MKL: 2020 update 2
    - is GCC 10.x supported officially as base compiler?
- compiler toolchain for AMD Rome systems
  - good experiences with recent `foss` toolchains by several people (Adam, Miguel)
  - `intel` also works, but only `intel/2019b` gives good performance (AVX2 in MKL)
    - requires `export MKL_DEBUG_CPU_TYPE=5` to get good performance (use of AVX2)
    - Miguel: `export MKL_CBWR=COMPATIBLE` is also required with MKL >= 2019 to avoid failing tests with some software
    - defining `$MKL_DEBUG_CPU_TYPE` no longer works in `intel/2020a` (MKL uses only AVX, no AVX2)
  - Miguel has looked into AMD forks of BLIS and FFTW, little gain over standard OpenBLAS and FFTW in `foss`
  - Alan suggested using Clang-based compiler toolchains for AMD Rome
  - big JSC system coming up with AMD Rome CPUs (JURECA update), already have JUSEPH which is AMD Rome based
  - link shared in EasyBuild Slack: https://developer.amd.com/wordpress/media/2020/04/Compiler%20Options%20Quick%20Ref%20Guide%20for%20AMD%20EPYC%207xx2%20Series%20Processors.pdf
  - Alan: PR for Mesa/OpenGL which dynamically determines which hardware to use
    - internal for now, could check whether the discussion can be opened up to community
    - maybe as a tech talk?
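The MKL workarounds for AMD Rome mentioned above come down to setting two environment variables before the MKL-linked process starts. A minimal sketch (variable names as discussed in the call; shown via Python's `os.environ`, equivalent to `export` in a shell or job script):

```python
import os

# MKL_DEBUG_CPU_TYPE=5 forces MKL's AVX2 code paths on non-Intel CPUs;
# per the notes above, this no longer works with the MKL in intel/2020a.
os.environ['MKL_DEBUG_CPU_TYPE'] = '5'

# MKL_CBWR=COMPATIBLE (conditional numerical reproducibility) is also
# needed with MKL >= 2019 to avoid failing tests with some software.
os.environ['MKL_CBWR'] = 'COMPATIBLE'
```

Both variables only have an effect if they are set in (or inherited by) the environment of the process that actually loads MKL.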
- Q&A
  - Jörg: how about support for ARM?
    - comes down to picking a good toolchain?
    - EB framework already has basic support for ARM
    - see also the "Building FOSS Software On ARM64 and ppc64le" talk from Open Source Summit (https://ossna2020.sched.com/event/c3X9/building-foss-software-on-arm64-and-ppc64le-lance-albertson-osu-open-source-lab-peter-pouliot-ampere-computing)
    - software stack for ARM is challenging
      - Simon: depends a lot on how much software you need to support
    - link provided by Alan: https://gitlab.com/arm-hpc/packages/-/wikis/home#most-recently-modified-packages
  - Adam: updates on EESSI stack?
    - Kenneth: making progress towards pilot stack (OpenFOAM, TensorFlow, GROMACS), not there yet
    - Alan: problem with numpy in JSC toolchain with GCC + OpenBLAS
      - clever logic in numpy w.r.t. which OpenMP runtime gets picked up (GCC vs. Intel OpenMP runtime)
      - can be fixed by setting an environment variable
      - will open an issue on this, to have it on record in case it pops up in the future