Installation steps #93

Open
lancelyon01 opened this issue Jan 18, 2022 · 10 comments

@lancelyon01

Hi, can anyone please tell me the installation steps for RapidCFD? The readme file is not clear and I am not sure what has to be done. I am trying to install it on a GPU (cloud). I have CUDA 9.2 and 11 installed. A step-by-step installation procedure would be really helpful.

@TonkomoLLC
Contributor

Hello,
If you already have CUDA installed and assuming you have RapidCFD-dev installed at /opt/RapidCFD-dev on Ubuntu 16-20 (I haven not tried Ubuntu 21), then installation for computing on a single GPU on your system should work as follows:

  1. Find the compute capability of your GPU, e.g., here. Make a note of the compute capability for your GPU.

  2. Edit line 10 of the wmake rules for c++ and put the value you found in step 1 into the -arch=sm_xx field. If there is a decimal in the compute capability, just drop it. For example, if a GPU's compute capability is 5.2, then replace sm_30 with sm_52 (see the shell sketch after these steps):

CC          = nvcc -Xptxas -dlcm=cg -std=c++11 -m64 -arch=sm_30

  3. Repeat step 2 by editing the -arch=sm_xx field in the wmake rules for c.

  4. To compile faster, you may optionally compile with multiple cores (in parallel). You can enable parallel compilation of RapidCFD by typing "export WM_NCOMPPROCS=N" from the command line, where N is the number of cores to use for compilation.

  5. From the command line, type nvcc --version. If nvcc is not found, you may need to set environment variables, e.g.,

export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
export PATH=$PATH:$CUDA_HOME/bin

After executing these export statements, try nvcc --version again. If it does not work, you will need to find the location of CUDA on your system and edit the paths above as required for your setup. If it works, proceed to step 6.

  6. From /opt/RapidCFD-dev, type `source etc/bashrc`, followed by `./Allwmake`.

At this point the software will compile.
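
For convenience, here is the whole sequence as a shell sketch. It is not a tested script: the linux64Nvcc rules path and the sm_70 value are assumptions that you should adapt to your own tree and GPU.

# Step 1: query the compute capability (works with newer drivers;
# otherwise look it up in NVIDIA's published table)
nvidia-smi --query-gpu=compute_cap --format=csv,noheader

# Steps 2-3: swap sm_30 for your value (here 7.0 -> sm_70) in both rules
# files; the linux64Nvcc directory name is an assumption -- check
# wmake/rules in your checkout for the exact path
sed -i 's/-arch=sm_30/-arch=sm_70/' /opt/RapidCFD-dev/wmake/rules/linux64Nvcc/c++
sed -i 's/-arch=sm_30/-arch=sm_70/' /opt/RapidCFD-dev/wmake/rules/linux64Nvcc/c

# Step 4: optionally compile on, e.g., 8 cores
export WM_NCOMPPROCS=8

# Step 5: confirm the CUDA toolchain is visible
nvcc --version

# Step 6: source the environment and build
cd /opt/RapidCFD-dev
source etc/bashrc
./Allwmake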

n.b.

  • If your GPU is too new for CUDA 9.2, you may need to install a newer version of CUDA. If you install a new version of CUDA, you need to completely reinstall RapidCFD from scratch (i.e., recompile from a clean copy of RapidCFD-dev).

  • If you have multiple GPUs, then you will need to install ThirdParty-dev. I placed a copy of the ThirdParty-dev that I use here. To the best of my recollection (because I haven't rebuilt this ThirdParty-dev directory in years), you can place this material in /opt/ThirdParty-dev and compilation of RapidCFD-dev should automatically find openmpi (see the sketch below).
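
As a rough illustration only (the solver name and subdomain count are example values, and I am sketching the usual OpenFOAM decompose-and-mpirun pattern rather than a tested command), a two-GPU run would then look like:

# sketch, assuming a case already set up for 2 subdomains in
# system/decomposeParDict, with one MPI rank per GPU
decomposePar
mpirun -np 2 simpleFoam -parallel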

I hope you find this to be helpful.

Good luck with your compilation.

Eric

@lancelyon01
Author

Thank you so much. Can you point me to an answer where you provided instructions to install RapidCFD-dev?

@TonkomoLLC
Contributor

I don't think these instructions were written in one location on this repo yet, but I think the bits and pieces summarized above were located in various replies to issues. That's not optimal, of course, and probably why you had trouble finding a concise installation procedure.

@lancelyon01
Author

Hi Eric,
When I run the command `source etc/bashrc` in the terminal, I get the error below:

-bash: /home/ec2-user/RapidCFD/RapidCFD-dev/bin/foamEtcFile: No such file or directory
-bash: /home/ec2-user/RapidCFD/RapidCFD-dev/bin/foamCleanPath: No such file or directory
-bash: /home/ec2-user/RapidCFD/RapidCFD-dev/bin/foamCleanPath: No such file or directory
-bash: /home/ec2-user/RapidCFD/RapidCFD-dev/bin/foamCleanPath: No such file or directory
-bash: /home/ec2-user/RapidCFD/RapidCFD-dev/etc/config/settings.sh: No such file or directory
-bash: /home/ec2-user/RapidCFD/RapidCFD-dev/etc/config/aliases.sh: No such file or directory
-bash: /home/ec2-user/RapidCFD/RapidCFD-dev/bin/foamCleanPath: No such file or directory
-bash: /home/ec2-user/RapidCFD/RapidCFD-dev/bin/foamCleanPath: No such file or directory
-bash: /home/ec2-user/RapidCFD/RapidCFD-dev/bin/foamCleanPath: No such file or directory

And when I run ./Allwmake, I get the error below:

Error: Current directory is not $WM_PROJECT_DIR
The environment variables are inconsistent with the installation.
Check the OpenFOAM entries in your dot-files and source them.

@TonkomoLLC
Contributor

Hi,

The default install directory for RapidCFD-dev (and OpenFOAM as well) is /opt -- i.e., /opt/RapidCFD-dev.

I haven't tried RapidCFD on AWS, and I don't install OpenFOAM or RapidCFD in my home directory, so I don't have personal experience here with solving the issue you are describing.

Nonetheless, I will try to give you some pointers.

If you don't have root or sudo privileges to install RapidCFD in /opt (probably not, since you said you are in the cloud), then maybe edit RapidCFD-dev/etc/bashrc and check out this section:

################################################################################
# USER EDITABLE PART: Changes made here may be lost with the next upgrade
#
# either set $FOAM_INST_DIR before sourcing this file or set
# 'foamInstall' below to where OpenFOAM is installed
#
# Location of the OpenFOAM installation
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
foamInstall=$HOME/$WM_PROJECT
# foamInstall=~/$WM_PROJECT
# foamInstall=/opt/$WM_PROJECT
# foamInstall=/usr/local/$WM_PROJECT
#
# END OF (NORMAL) USER EDITABLE PART
################################################################################

Perhaps you can change the setting for foamInstall in the above file to:

foamInstall=/home/ec2-user/RapidCFD/$WM_PROJECT

Alternatively, you can try the hint in the bashrc file and export FOAM_INST_DIR=<your path> before sourcing etc/bashrc.
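
For example, with the paths from your error messages (a sketch; adjust if your checkout lives elsewhere):

export FOAM_INST_DIR=/home/ec2-user/RapidCFD
cd /home/ec2-user/RapidCFD/RapidCFD-dev
source etc/bashrc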

In any case, this is not a RapidCFD specific problem, it is a general OpenFOAM issue for compiling OpenFOAM in a non-standard location. You may find hints on the internet, including here or here.

I hope you are able to resolve this problem.

Good luck.

Best regards,

Eric

@lancelyon01
Author

[Screenshot (92) and Screenshot (93): compilation output showing compiler warnings]

Hi Eric, when I do step 6 (from /opt/RapidCFD-dev, type `source etc/bashrc`, followed by `./Allwmake`),

I get the errors shown in the screenshots above. Can you please help?

@TonkomoLLC
Contributor

Hello,

I am not sure if you are showing errors in the screenshots. Unless I missed something, your pictures show warnings. Does compilation ever terminate early due to an error?

I am guessing you are using a newer version of CUDA, since some of the older Kepler GPUs are reported as deprecated by the compiler. I have some notes on compiling with newer versions of CUDA in #92. I have not yet tried the recently released CUDA 11.6.
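
(As an aside: if those Kepler deprecation notices are just noise, nvcc has a -Wno-deprecated-gpu-targets flag that could be appended to the CC line in the wmake rules to silence them, e.g.:

CC          = nvcc -Xptxas -dlcm=cg -std=c++11 -m64 -arch=sm_70 -Wno-deprecated-gpu-targets

I have not verified this in RapidCFD specifically, so treat it as a suggestion.)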

If you are indeed using a newer (i.e., > CUDA 11.1) version of CUDA and you are still having trouble compiling RapidCFD, is using an older version of CUDA an option?

Good luck and hope you solve your compilation issues.

Best regards,

Eric

jiaqiwang969 added a commit to jiaqiwang969/RapidCFD-dev that referenced this issue Jul 23, 2022
@Dcn303

Dcn303 commented Sep 5, 2022

> (quoting TonkomoLLC's installation steps from above)

Hi TonkomoLLC,
Your post has helped me a lot, thanks for that. But I have an additional question regarding the GPU architecture; if you have any idea, please shed some light on it. In

CC = nvcc -Xptxas -dlcm=cg -std=c++11 -m64 -arch=sm_30

what will be the value of -arch, given that I have an NVIDIA Tesla V100, which is powered by the NVIDIA Volta architecture?
Curiously waiting for your reply.
Thanks

@TonkomoLLC
Contributor

The compute capability of a Tesla V100 is 7.0, so the referenced flag will be -arch=sm_70 in the wmake files.

Reference.
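
That is, line 10 of the wmake rules files would then read:

CC          = nvcc -Xptxas -dlcm=cg -std=c++11 -m64 -arch=sm_70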

Thanks

@Dcn303

Dcn303 commented Sep 8, 2022

> The compute capability of a Tesla V100 is 7.0, so the referenced flag will be -arch=sm_70 in the wmake files.
>
> Reference.
>
> Thanks

Thank you so much, TonkomoLLC.
