diff --git a/episodes/02-submit-job.Rmd b/episodes/02-submit-job.Rmd
index a9c3a63..208b376 100644
--- a/episodes/02-submit-job.Rmd
+++ b/episodes/02-submit-job.Rmd
@@ -166,7 +166,7 @@ time mpirun --map-by ppr:4:node Rscript hello_balance.R
 ::::::::::::::::::::::::::::::::::::: keypoints
 
 - Parallel R code distributes work
-- There is shared memory and distributed memory parallelizm
+- There is shared memory and distributed memory parallelism
 - You can test parallel code on your own local machine
 - There are several different job schedulers, but they share many similarities
   so you can learn a new one when needed
diff --git a/episodes/03-multicore.Rmd b/episodes/03-multicore.Rmd
index 4dd2792..f2bb82f 100644
--- a/episodes/03-multicore.Rmd
+++ b/episodes/03-multicore.Rmd
@@ -6,7 +6,7 @@ exercises: 2
 
 :::::::::::::::::::::::::::::::::::::: questions
 
-- Can parallelization decrease time to solution for my program?
+- Can parallelisation decrease time to solution for my program?
 - What is machine learning?
 
 ::::::::::::::::::::::::::::::::::::::::::::::::
@@ -146,7 +146,7 @@ time Rscript rf_mc.r --args 128
 ::::::::::::::::::::::::::::::::::::: keypoints
 
-- To evaluate the fitted model, the availabe data is split into training and
+- To evaluate the fitted model, the available data is split into training and
   testing sets
-- Parallelization decreases the training time
+- Parallelisation decreases the training time
 
 ::::::::::::::::::::::::::::::::::::::::::::::::
diff --git a/episodes/04-blas.Rmd b/episodes/04-blas.Rmd
index e35400a..881aa34 100644
--- a/episodes/04-blas.Rmd
+++ b/episodes/04-blas.Rmd
@@ -14,7 +14,7 @@ exercises: 2
 
 - Introduce the Basic Linear Algebra Subroutines (BLAS)
 - Show that BLAS routines are used from R for statistical calculations
-- Demonstrate that parallelization can improve time to solution
+- Demonstrate that parallelisation can improve time to solution
 
 ::::::::::::::::::::::::::::::::::::::::::::::::
 
diff --git a/episodes/05-mpi.Rmd b/episodes/05-mpi.Rmd
index f2784ac..2bf8884 100644
--- a/episodes/05-mpi.Rmd
+++ b/episodes/05-mpi.Rmd
@@ -1,5 +1,5 @@
 ---
-title: "MPI - Distributed Memory Parallelizm"
+title: "MPI - Distributed Memory Parallelism"
 teaching: 10
 exercises: 0
 ---
@@ -13,7 +13,7 @@ exercises: 0
 ::::::::::::::::::::::::::::::::::::: objectives
 
 - Demonstrate how to submit a job on multiple nodes
-- Demonstrate that a program with distributed memory parallelizm can be run on a shared memory node
+- Demonstrate that a program with distributed memory parallelism can be run on a shared memory node
 
 ::::::::::::::::::::::::::::::::::::::::::::::::
diff --git a/episodes/06-pbdmpi.Rmd b/episodes/06-pbdmpi.Rmd
index bcec314..8f79989 100644
--- a/episodes/06-pbdmpi.Rmd
+++ b/episodes/06-pbdmpi.Rmd
@@ -408,7 +408,7 @@ finalize()
 ::::::::::::::::::::::::::::::::::::: keypoints
 
 - The message passing interface offers many operations that can be used to
-  efficiently and portably add parallelizm to your program
+  efficiently and portably add parallelism to your program
 - It is possible to use parallel libraries to minimize the amount of parallel
   programming you need to do for your data exploration and data analysis
 
diff --git a/episodes/07-random-forest-mpi.Rmd b/episodes/07-random-forest-mpi.Rmd
index a3c7823..8f765e4 100644
--- a/episodes/07-random-forest-mpi.Rmd
+++ b/episodes/07-random-forest-mpi.Rmd
@@ -1,5 +1,5 @@
 ---
-title: "MPI - Distributed Memory Parallelizm"
+title: "MPI - Distributed Memory Parallelism"
 teaching: 10
 exercises: 2
 ---
@@ -12,8 +12,8 @@ exercises: 2
 
 ::::::::::::::::::::::::::::::::::::: objectives
 
-- Demonstrate that distributed memory parallelizm is useful for working with large data
-- Demonstrate that distributed memory parallelizm can lead to improved time to solution
+- Demonstrate that distributed memory parallelism is useful for working with large data
+- Demonstrate that distributed memory parallelism can lead to improved time to solution
 
 ::::::::::::::::::::::::::::::::::::::::::::::::
 
@@ -146,7 +146,7 @@ time mpirun --map-by ppr:32:node Rscript rf_mpi.R
 ::::::::::::::::::::::::::::::::::::: keypoints
 
 - Classification can be used for data other than digits, such as diamonds
-- Distributed memory parallelizm can speed up training
+- Distributed memory parallelism can speed up training
 
 ::::::::::::::::::::::::::::::::::::::::::::::::