
About Conda Environment #2

Open
Maikuraky opened this issue Mar 8, 2024 · 9 comments

@Maikuraky

Can you provide the configuration file for the conda environment?

@ajasja

ajasja commented Mar 8, 2024

@ikalvet That would be very much appreciated on our side as well, as we currently cannot run apptainers on our cluster.

@ajasja

ajasja commented Mar 8, 2024

I think I found some good starting instructions in the heme binder design repo https://github.com/ikalvet/heme_binder_diffusion:

conda create -n "diffusion" python=3.9
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
conda install -c conda-forge omegaconf hydra-core=1.3.2 scipy icecream openbabel assertpy opt_einsum pandas pydantic deepdiff e3nn prody pyparsing=3.1.1
conda install dglteam/label/cu118::dgl
conda install pytorch::torchdata
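
A quick way to check that the resulting environment is usable (a minimal sketch; "diffusion" is the environment name from the create command above):

conda activate diffusion
python -c "import torch, dgl, e3nn, hydra; print(torch.__version__, torch.cuda.is_available())"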

@ikalvet
Contributor

ikalvet commented Mar 8, 2024

Yep, that conda environment setup should work for running RFdiffusionAA (tested on a remote non-UW system).
I should note that python=3.9 is NOT a requirement; I just used it to replicate the apptainer image.
Also, the hydra-core=1.3.2 version is pinned because conda otherwise somehow decided to install the oldest version of hydra-core.
The packages prody and pyparsing=3.1.1 are for proteinMPNN.
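
To confirm which hydra-core version conda actually resolved (a minimal check, assuming the environment is activated):

conda list hydra-core
python -c "import hydra; print(hydra.__version__)"   # should print 1.3.2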

@Maikuraky
Author

Can you provide an env.yml or requirements.txt for creating a conda environment? Thanks.

@truatpasteurdotfr

You can extract the Singularity "recipe" and conda data from the sif file as listed in baker-laboratory/RoseTTAFold-All-Atom#5 (comment)
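
For reference, a rough sketch of what that extraction can look like with the apptainer CLI (the .sif filename is a placeholder, and this assumes conda is on the image's PATH; the exact steps are in the linked comment):

# print the build recipe embedded in the image
apptainer inspect --deffile rf_diffusion_all_atom.sif
# export the conda environment packaged inside the image (the env name may differ)
apptainer exec rf_diffusion_all_atom.sif conda env export -n base > extracted_env.yml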

@matteoferla

matteoferla commented Mar 11, 2024

I thought I'd share my recipe, which may be useful for others: I moved a few things around so it does not need to be run from inside the GitHub repo. Change cuda=11.6 to whichever version fits your CUDA drivers...

export NEW_CONDA_ENV="RFdiffusionAA"
conda create -y -n $NEW_CONDA_ENV python=3.9
conda activate $NEW_CONDA_ENV
conda env config vars set LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$CONDA_PREFIX:/.singularity.d/libs
conda env config vars set PYTHONUSERBASE=$CONDA_PREFIX
conda env config vars set CONDA_OVERRIDE_CUDA="11.6.2";
conda deactivate
conda activate $NEW_CONDA_ENV
# Don't use the pip route for pytorch, as it ends up broken:
#pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu116
conda install -y pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
conda install -y -c conda-forge omegaconf hydra-core=1.3.2 scipy icecream openbabel assertpy opt_einsum pandas pydantic deepdiff e3nn prody pyparsing=3.1.1
conda install -y dglteam/label/cu116::dgl
conda install -y pytorch::torchdata
conda install -y anaconda::git

git clone https://github.com/baker-laboratory/RoseTTAFold-All-Atom /tmp/RoseTTAFold-AA
pip install --no-cache-dir -r /tmp/RoseTTAFold-AA/rf2aa/SE3Transformer/requirements.txt
pip install /tmp/RoseTTAFold-AA/rf2aa/SE3Transformer
# oops: the SE3Transformer requirements downgraded e3nn to 0.3.3, so reinstall the newer version
pip install e3nn==0.5.1

# RFdiffusion full atom via poor man's pip
export CONDA_SITE_PACKAGES=$(ls -d $CONDA_PREFIX/lib/python*/ 2>/dev/null | head -n 1)site-packages
git clone --recurse-submodules https://github.com/baker-laboratory/rf_diffusion_all_atom.git /tmp/rf_diffusion_all_atom
mv /tmp/rf_diffusion_all_atom/*.py $CONDA_SITE_PACKAGES/
mv /tmp/rf_diffusion_all_atom/inference $CONDA_SITE_PACKAGES/
touch $CONDA_SITE_PACKAGES/inference/__init__.py
mv /tmp/rf_diffusion_all_atom/potentials $CONDA_SITE_PACKAGES/
touch $CONDA_SITE_PACKAGES/potentials/__init__.py
mv /tmp/rf_diffusion_all_atom/lib/rf2aa/rf2aa $CONDA_SITE_PACKAGES/rf2aa
# already has an init

#randomly missing:
pip install fire

# set weights
export RFAA_WEIGHTS=$HOME2/.cache/RFAA_weights
wget http://files.ipd.uw.edu/pub/RF-All-Atom/weights/RFDiffusionAA_paper_weights.pt -P $RFAA_WEIGHTS

mv /tmp/rf_diffusion_all_atom/config $HOME2/.cache/RFdiffusionAA_config
export RFDIFFUSIONAA_CONFIG=$HOME2/.cache/RFdiffusionAA_config/inference
# Make a nice alias
alias run_inference="$CONDA_PREFIX/bin/python $CONDA_SITE_PACKAGES/run_inference.py --config-path=$RFDIFFUSIONAA_CONFIG 'inference.ckpt_path=$RFAA_WEIGHTS/RFDiffusionAA_paper_weights.pt'"
run_inference hydra.output_subdir=...
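
With that alias, an invocation then looks roughly like the upstream run_inference.py examples (the input PDB, contig string, and ligand code below are purely illustrative):

run_inference inference.input_pdb=input/7v11.pdb contigmap.contigs=[\'150-150\'] inference.ligand=OQO inference.num_designs=1 inference.output_prefix=output/sample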

@ikalvet
Contributor

ikalvet commented Mar 11, 2024

Here is the full yaml I exported from the conda environment that runs RFdiffusionAA. Some package versions may differ slightly from the environment extracted from the Apptainer image, but it is tested to work on two different systems.

name: diffusion
channels:
  - pytorch
  - dglteam/label/cu118
  - nvidia
  - defaults
  - conda-forge
  - anaconda
dependencies:
  - _libgcc_mutex=0.1=conda_forge
  - _openmp_mutex=4.5=2_gnu
  - annotated-types=0.6.0=pyhd8ed1ab_0
  - antlr-python-runtime=4.9.3=pyhd8ed1ab_1
  - aom=3.8.1=h59595ed_0
  - assertpy=1.1=pyhd8ed1ab_1
  - asttokens=2.4.1=pyhd8ed1ab_0
  - backcall=0.2.0=pyh9f0ad1d_0
  - biopython=1.83=py39hd1e30aa_0
  - blas=1.0=mkl
  - brotli=1.1.0=hd590300_1
  - brotli-bin=1.1.0=hd590300_1
  - brotli-python=1.1.0=py39h3d6467e_1
  - bzip2=1.0.8=hd590300_5
  - ca-certificates=2024.2.2=hbcca054_0
  - cairo=1.18.0=h3faef2a_0
  - certifi=2024.2.2=py39h06a4308_0
  - charset-normalizer=3.3.2=pyhd8ed1ab_0
  - colorama=0.4.6=pyhd8ed1ab_0
  - contourpy=1.2.0=py39h7633fee_0
  - cuda-cudart=11.8.89=0
  - cuda-cupti=11.8.87=0
  - cuda-libraries=11.8.0=0
  - cuda-nvrtc=11.8.89=0
  - cuda-nvtx=11.8.86=0
  - cuda-runtime=11.8.0=0
  - cuda-version=12.4=h3060b56_3
  - cycler=0.12.1=pyhd8ed1ab_0
  - dav1d=1.2.1=hd590300_0
  - decorator=5.1.1=pyhd8ed1ab_0
  - deepdiff=6.7.1=pyhd8ed1ab_0
  - dgl=2.1.0.cu118=py39_0
  - e3nn=0.5.1=pyhd8ed1ab_0
  - exceptiongroup=1.2.0=pyhd8ed1ab_2
  - executing=2.0.1=pyhd8ed1ab_0
  - expat=2.6.1=h59595ed_0
  - ffmpeg=6.1.1=gpl_h8007c5b_104
  - filelock=3.13.1=pyhd8ed1ab_0
  - fire=0.5.0=pyhd8ed1ab_0
  - font-ttf-dejavu-sans-mono=2.37=hab24e00_0
  - font-ttf-inconsolata=3.000=h77eed37_0
  - font-ttf-source-code-pro=2.038=h77eed37_0
  - font-ttf-ubuntu=0.83=h77eed37_1
  - fontconfig=2.14.2=h14ed4e7_0
  - fonts-conda-ecosystem=1=0
  - fonts-conda-forge=1=0
  - fonttools=4.49.0=py39hd1e30aa_0
  - freetype=2.12.1=h267a509_2
  - fribidi=1.0.10=h36c2ea0_0
  - gettext=0.21.1=h27087fc_0
  - glib=2.78.4=hfc55251_0
  - glib-tools=2.78.4=hfc55251_0
  - gmp=6.3.0=h59595ed_0
  - gmpy2=2.1.2=py39h376b7d2_1
  - gnutls=3.7.9=hb077bed_0
  - graphite2=1.3.13=h58526e2_1001
  - harfbuzz=8.3.0=h3d44ed6_0
  - hydra-core=1.3.2=pyhd8ed1ab_0
  - icecream=2.1.3=pyhd8ed1ab_0
  - icu=73.2=h59595ed_0
  - idna=3.6=pyhd8ed1ab_0
  - importlib-resources=6.1.2=pyhd8ed1ab_0
  - importlib_resources=6.1.2=pyhd8ed1ab_0
  - intel-openmp=2023.1.0=hdb19cb5_46306
  - ipython=8.18.1=pyh707e725_3
  - jedi=0.19.1=pyhd8ed1ab_0
  - jinja2=3.1.3=pyhd8ed1ab_0
  - jpeg=9e=h166bdaf_2
  - kiwisolver=1.4.5=py39h7633fee_1
  - lame=3.100=h166bdaf_1003
  - lcms2=2.15=hfd0df8a_0
  - ld_impl_linux-64=2.40=h41732ed_0
  - lerc=4.0.0=h27087fc_0
  - libabseil=20240116.1=cxx17_h59595ed_2
  - libass=0.17.1=h8fe9dca_1
  - libblas=3.9.0=1_h86c2bf4_netlib
  - libbrotlicommon=1.1.0=hd590300_1
  - libbrotlidec=1.1.0=hd590300_1
  - libbrotlienc=1.1.0=hd590300_1
  - libcblas=3.9.0=5_h92ddd45_netlib
  - libcublas=11.11.3.6=0
  - libcufft=10.9.0.58=0
  - libcufile=1.9.0.20=hd3aeb46_0
  - libcurand=10.3.5.119=hd3aeb46_0
  - libcusolver=11.4.1.48=0
  - libcusparse=11.7.5.86=0
  - libdeflate=1.17=h0b41bf4_0
  - libdrm=2.4.120=hd590300_0
  - libexpat=2.6.1=h59595ed_0
  - libffi=3.4.4=h6a678d5_0
  - libgcc-ng=13.2.0=h807b86a_5
  - libgfortran-ng=13.2.0=h69a702a_5
  - libgfortran5=13.2.0=ha4646dd_5
  - libglib=2.78.4=h783c2da_0
  - libgomp=13.2.0=h807b86a_5
  - libhwloc=2.9.3=default_h554bfaf_1009
  - libiconv=1.17=hd590300_2
  - libidn2=2.3.7=hd590300_0
  - libjpeg-turbo=2.1.4=h166bdaf_0
  - liblapack=3.9.0=5_h92ddd45_netlib
  - libnpp=11.8.0.86=0
  - libnsl=2.0.1=hd590300_0
  - libnvjpeg=11.9.0.86=0
  - libopenvino=2023.3.0=h2e90f83_2
  - libopenvino-auto-batch-plugin=2023.3.0=hd5fc58b_2
  - libopenvino-auto-plugin=2023.3.0=hd5fc58b_2
  - libopenvino-hetero-plugin=2023.3.0=h3ecfda7_2
  - libopenvino-intel-cpu-plugin=2023.3.0=h2e90f83_2
  - libopenvino-intel-gpu-plugin=2023.3.0=h2e90f83_2
  - libopenvino-ir-frontend=2023.3.0=h3ecfda7_2
  - libopenvino-onnx-frontend=2023.3.0=h469e5c9_2
  - libopenvino-paddle-frontend=2023.3.0=h469e5c9_2
  - libopenvino-pytorch-frontend=2023.3.0=h59595ed_2
  - libopenvino-tensorflow-frontend=2023.3.0=he1e0747_2
  - libopenvino-tensorflow-lite-frontend=2023.3.0=h59595ed_2
  - libopus=1.3.1=h7f98852_1
  - libpciaccess=0.18=hd590300_0
  - libpng=1.6.43=h2797004_0
  - libprotobuf=4.25.2=h08a7969_1
  - libsqlite=3.45.1=h2797004_0
  - libstdcxx-ng=13.2.0=h7e041cc_5
  - libtasn1=4.19.0=h166bdaf_0
  - libtiff=4.5.0=h6adf6a1_2
  - libunistring=0.9.10=h7f98852_0
  - libuuid=2.38.1=h0b41bf4_0
  - libva=2.20.0=hd590300_0
  - libvpx=1.13.1=h59595ed_0
  - libwebp-base=1.3.2=hd590300_0
  - libxcb=1.15=h0b41bf4_0
  - libxcrypt=4.4.36=hd590300_1
  - libxml2=2.12.5=h232c23b_0
  - libzlib=1.2.13=hd590300_5
  - llvm-openmp=15.0.7=h0cdce71_0
  - lz4-c=1.9.4=hcb278e6_0
  - markupsafe=2.1.5=py39hd1e30aa_0
  - matplotlib-base=3.8.3=py39he9076e7_0
  - matplotlib-inline=0.1.6=pyhd8ed1ab_0
  - mkl=2023.1.0=h213fc3f_46344
  - mkl-service=2.4.0=py39h5eee18b_1
  - mkl_fft=1.3.8=py39h5eee18b_0
  - mkl_random=1.2.4=py39hdb19cb5_0
  - mpc=1.3.1=hfe3b2da_0
  - mpfr=4.2.1=h9458935_0
  - mpmath=1.3.0=pyhd8ed1ab_0
  - munkres=1.1.4=pyh9f0ad1d_0
  - ncurses=6.4=h59595ed_2
  - nettle=3.9.1=h7ab15ed_0
  - networkx=3.2.1=pyhd8ed1ab_0
  - numpy=1.26.4=py39h5f9d8c6_0
  - numpy-base=1.26.4=py39hb5e798b_0
  - ocl-icd=2.3.2=hd590300_0
  - ocl-icd-system=1.0.0=1
  - omegaconf=2.3.0=pyhd8ed1ab_0
  - openbabel=3.1.1=py39h2d01fe1_9
  - openh264=2.4.1=h59595ed_0
  - openjpeg=2.5.0=hfec8fc6_2
  - openssl=3.2.1=hd590300_0
  - opt-einsum=3.3.0=hd8ed1ab_2
  - opt_einsum=3.3.0=pyhc1e730c_2
  - opt_einsum_fx=0.1.4=pyhd8ed1ab_0
  - ordered-set=4.1.0=pyhd8ed1ab_0
  - orjson=3.9.15=py39h9fdd4d6_0
  - p11-kit=0.24.1=hc5aa10d_0
  - packaging=23.2=pyhd8ed1ab_0
  - pandas=2.2.1=py39hddac248_0
  - parso=0.8.3=pyhd8ed1ab_0
  - pcre2=10.42=hcad00b1_0
  - pexpect=4.9.0=pyhd8ed1ab_0
  - pickleshare=0.7.5=py_1003
  - pillow=10.2.0=py39h5eee18b_0
  - pip=24.0=pyhd8ed1ab_0
  - pixman=0.43.2=h59595ed_0
  - prody=2.4.0=py39h227be39_0
  - prompt-toolkit=3.0.43=py39h06a4308_0
  - psutil=5.9.8=py39hd1e30aa_0
  - pthread-stubs=0.4=h36c2ea0_1001
  - ptyprocess=0.7.0=pyhd3deb0d_0
  - pugixml=1.14=h59595ed_0
  - pure_eval=0.2.2=pyhd8ed1ab_0
  - pydantic=2.6.3=pyhd8ed1ab_0
  - pydantic-core=2.16.3=py39h9fdd4d6_0
  - pygments=2.17.2=pyhd8ed1ab_0
  - pyparsing=3.1.1=pyhd8ed1ab_0
  - pysocks=1.7.1=pyha2e5f31_6
  - python=3.9.18=h0755675_1_cpython
  - python-dateutil=2.9.0=pyhd8ed1ab_0
  - python-tzdata=2024.1=pyhd8ed1ab_0
  - python_abi=3.9=4_cp39
  - pytorch=2.2.1=py3.9_cuda11.8_cudnn8.7.0_0
  - pytorch-cuda=11.8=h7e8668a_5
  - pytorch-mutex=1.0=cuda
  - pytz=2024.1=pyhd8ed1ab_0
  - pyyaml=6.0.1=py39hd1e30aa_1
  - readline=8.2=h8228510_1
  - requests=2.31.0=pyhd8ed1ab_0
  - scipy=1.12.0=py39h474f0d3_2
  - setuptools=69.1.1=pyhd8ed1ab_0
  - six=1.16.0=pyh6c4a22f_0
  - snappy=1.1.10=h9fff704_0
  - sqlite=3.45.1=h2c6b66d_0
  - stack_data=0.6.2=pyhd8ed1ab_0
  - svt-av1=1.8.0=h59595ed_0
  - sympy=1.12=pypyh9d50eac_103
  - tbb=2021.11.0=h00ab1b0_1
  - termcolor=2.4.0=pyhd8ed1ab_0
  - tk=8.6.13=noxft_h4845f30_101
  - torchaudio=2.2.1=py39_cu118
  - torchdata=0.7.1=py39
  - torchtriton=2.2.0=py39
  - torchvision=0.17.1=py39_cu118
  - tqdm=4.66.2=pyhd8ed1ab_0
  - traitlets=5.14.1=pyhd8ed1ab_0
  - typing-extensions=4.10.0=hd8ed1ab_0
  - typing_extensions=4.10.0=pyha770c72_0
  - tzdata=2024a=h0c530f3_0
  - unicodedata2=15.1.0=py39hd1e30aa_0
  - urllib3=2.2.1=pyhd8ed1ab_0
  - wcwidth=0.2.13=pyhd8ed1ab_0
  - wheel=0.42.0=pyhd8ed1ab_0
  - x264=1!164.3095=h166bdaf_2
  - x265=3.5=h924138e_3
  - xorg-fixesproto=5.0=h7f98852_1002
  - xorg-kbproto=1.0.7=h7f98852_1002
  - xorg-libice=1.1.1=hd590300_0
  - xorg-libsm=1.2.4=h7391055_0
  - xorg-libx11=1.8.7=h8ee46fc_0
  - xorg-libxau=1.0.11=hd590300_0
  - xorg-libxdmcp=1.1.3=h7f98852_0
  - xorg-libxext=1.3.4=h0b41bf4_2
  - xorg-libxfixes=5.0.3=h7f98852_1004
  - xorg-libxrender=0.9.11=hd590300_0
  - xorg-renderproto=0.11.1=h7f98852_1002
  - xorg-xextproto=7.3.0=h0b41bf4_1003
  - xorg-xproto=7.0.31=h7f98852_1007
  - xz=5.4.6=h5eee18b_0
  - yaml=0.2.5=h7f98852_2
  - zipp=3.17.0=pyhd8ed1ab_0
  - zlib=1.2.13=hd590300_5
  - zstd=1.5.5=hfc55251_0
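
To recreate this environment, save the yaml (e.g. as environment.yml; the filename is only an assumption here) and run:

conda env create -f environment.yml
conda activate diffusion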

@rhsimplex

rhsimplex commented Mar 21, 2024

Thanks to all the helpful contributions in this thread, I was able to cobble together a Docker image ❤️

Please note:

  • none of the versions are pinned, so this may not work in the future, but it could be a starting point.
  • I wanted CPU inference, so you may need to tweak the dgl install for GPU inference.
FROM pytorch/pytorch:latest

RUN apt-get update
RUN apt-get install -y git wget
RUN apt-get install -y libxrender1

RUN pip install omegaconf hydra-core==1.3.2 scipy icecream assertpy opt_einsum pandas pydantic deepdiff e3nn pyparsing==3.1.1 fire
RUN conda install -y -c conda-forge prody openbabel
RUN conda install -y dglteam::dgl
RUN conda install -y pytorch::torchdata

WORKDIR /

RUN git clone https://github.com/baker-laboratory/rf_diffusion_all_atom.git

WORKDIR rf_diffusion_all_atom

RUN wget http://files.ipd.uw.edu/pub/RF-All-Atom/weights/RFDiffusionAA_paper_weights.pt
RUN git submodule init
RUN git submodule update
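
A hedged sketch of building and entering the resulting image (the tag and bind mount are illustrative; the final WORKDIR leaves you in /rf_diffusion_all_atom next to run_inference.py and the downloaded weights):

docker build -t rfdiffusion-aa .
docker run --rm -it -v "$PWD":/data rfdiffusion-aa bash
# inside the container, run run_inference.py with your own arguments,
# e.g. python run_inference.py inference.output_prefix=/data/out/sample ...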

@ArnNag

ArnNag commented Jul 27, 2024

There is no need to use the dglteam channel if you are using conda-forge. dgl is on conda-forge (see conda-forge/staged-recipes#22691).
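
In that case the install line becomes something like (a minimal sketch; pick a build matching your CUDA setup, or the CPU build):

conda install -c conda-forge dgl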
