
USING PYTHON ON THE BMRC CLUSTER

The principal method for using Python on the BMRC cluster is to load one of our pre-installed software modules. To see which versions of Python are available run (noting the capital letter):

module avail Python

Our pre-installed Python modules include a number of common packages. To see which packages are included run e.g.:

module whatis Python/3.7.4-GCCcore-8.3.0

and then check the Extensions list.

In addition to Python itself, we have a number of auxiliary Python modules which can be loaded in order to access other widely used packages. For example, scipy, numpy and pandas are available through the SciPy-bundle-... modules. To see which versions of SciPy-bundle-... are available run:


module avail SciPy-bundle

To find a SciPy-bundle module that is compatible with your chosen Python module, check the Python version noted in the name and the toolchain. For example, SciPy-bundle/2019.10-foss-2019b-Python-3.7.4 is compatible with Python/3.7.4-GCCcore-8.3.0, because

  1. they use the same version of Python, and
  2. they have compatible toolchains because the GCCcore-8.3.0 toolchain is part of the foss-2019b toolchain (to verify this, use module show foss/2019b).

If in doubt, simply try to load both modules together - if they are incompatible, an error will be reported.
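For example, on a BMRC login node, a quick compatibility check with the module versions used above (adjust these to your own choices) might look like:

```shell
# Load the Python module and a candidate SciPy-bundle together; the module
# system will report an error if their toolchains are incompatible.
module load Python/3.7.4-GCCcore-8.3.0
module load SciPy-bundle/2019.10-foss-2019b-Python-3.7.4

# Confirm the bundled packages are importable
python -c "import numpy, scipy, pandas; print(numpy.__version__)"
```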

Python Virtual Environments and Local Packages

In most cases, if you require software or Python packages which are not yet installed on the cluster, it is best to email us to request them. When sending software requests, please ensure that you send us sufficient information, including the software name, its homepage or download page, and whether you wish to use it in conjunction with any other particular software modules.

In some cases, however, you may wish to try out software packages or install them for testing purposes. In these cases, installing your own packages via a Python virtual environment may be the best way.

On the BMRC cluster, we recommend using Python virtual environments in preference to other ways of handling multiple python installations. A python virtual environment provides you with a local copy of python over which you have full control, including which packages to install.

The need for dual virtual environments

At any one time the BMRC cluster comprises computers with different generations of CPU architecture. Currently, these fall into two groups: our C and D nodes, as well as rescomp3, use Ivybridge-compatible CPUs, while our E and F nodes, as well as rescomp1-2, use Skylake CPUs. Software built for Skylake will not run on Ivybridge, while software built for Ivybridge will run on Skylake but will not take advantage of the newer CPU's capabilities. For this reason, we maintain two separate libraries of pre-installed software, one for Ivybridge and one for Skylake, although this is normally invisible to you because our system automatically chooses which version to make available when you load a module. When creating and managing your own environments, however, you will need to make this choice yourself.

Creating and managing your own python virtual environments

Here is an example of how to create and manage your own python virtual environments. Using this method, you create local package libraries on disk. Once configured, you can then install or remove packages using e.g. pip as you wish.

In order to ensure that your code will work across all cluster nodes (whether those nodes use Ivybridge or Skylake CPUs), the overall goal is to create two near-identical local package libraries, one for Skylake CPUs and one for Ivybridge CPUs, and to select the correct one automatically when needed.

  1. First login to either rescomp1 or rescomp2, which use Skylake CPUs. Use module avail Python to list the available versions of Python, choose a suitable one, e.g. Python/3.7.4-GCCcore-8.3.0, and then run module load Python/3.7.4-GCCcore-8.3.0 to load it.
  2. We will assume you wish to create a python virtual environment called projectA. First, find a suitable place on disk to store all your python virtual environments e.g. /well/<group>/users/<username>/python/ . Create this directory before continuing and then cd into it.
  3. Once inside your python directory, run python -m venv projectA-skylake . This will create a new python virtual environment in the projectA-skylake sub-folder. You must activate it before use by running source projectA-skylake/bin/activate . Notice that your shell prompt changes to reflect the active virtual environment. Once it is activated, you can install software, e.g. by running pip install XYZ (note that pip search no longer works against the main PyPI server on recent versions of pip; you can search on pypi.org instead). Repeat the process to install all the packages you need.
  4. Once you have installed all the packages you need in projectA-skylake run pip freeze > requirements.txt . This will put a list of all your installed packages and their versions into the file requirements.txt . We will use this file to recreate this environment for Ivybridge.
  5. Run deactivate to deactivate your projectA-skylake environment and then ssh to rescomp3. Note you can only reach rescomp3 by first logging into rescomp1-2 and then typing ssh rescomp3 .
  6. Once logged into rescomp3, load the same Python module you previously loaded on rescomp1-2, e.g. module load Python/3.7.4-GCCcore-8.3.0 . Note that our system automatically loads the Ivybridge build of this software now that you are on rescomp3.
  7. cd to your python folder (i.e. the parent folder in which projectA-skylake is located) and create a second virtual environment by running python -m venv projectA-ivybridge . Once this is created, activate it by running source projectA-ivybridge/bin/activate .
  8. With the projectA-ivybridge environment activated, you can copy all the same packages that were previously installed into the skylake repository by running pip install -r /path/to/requirements.txt i.e. using the requirements.txt file you created earlier. Once python has finished installing all the packages from requirements.txt, run deactivate to deactivate your current python environment.
  9. You now have two matching python virtual environments, one built for Skylake and the other for Ivybridge.
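The steps above can be condensed into the following sketch. On the cluster you would first run module load Python/3.7.4-GCCcore-8.3.0 on each machine as described; plain python3 and a /tmp path are used here only to keep the sketch self-contained.

```shell
# Create a parent directory for your virtual environments and enter it
mkdir -p /tmp/python-venvs && cd /tmp/python-venvs

# On rescomp1-2 (Skylake): create and populate the Skylake environment
python3 -m venv projectA-skylake
source projectA-skylake/bin/activate
# ... pip install your packages here ...
pip freeze > requirements.txt   # record the installed package versions
deactivate

# On rescomp3 (Ivybridge): recreate the same environment from requirements.txt
python3 -m venv projectA-ivybridge
source projectA-ivybridge/bin/activate
pip install -r requirements.txt
deactivate
```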

Now that you have two matching environments, one for Ivybridge and one for Skylake, it only remains to activate the correct one in your job submission scripts. To do that, you can copy or adapt the following sample submission script:

#!/bin/bash

# note that you must load whichever main Python module you used to create your virtual environments before activating the virtual environment
module load Python/3.7.4-GCCcore-8.3.0

# determine Ivybridge or Skylake compatibility on this node
CPU_ARCHITECTURE=$(/apps/misc/utils/bin/get-cpu-software-architecture.py)

# Error handling
if [[ $? -ne 0 ]]; then
  echo "Fatal error: Please send the following information to the BMRC team: Could not determine CPU software architecture on $(hostname)"
  exit 1
fi


# Activate the ivybridge or skylake version of your python virtual environment
source /path/to/projectA-${CPU_ARCHITECTURE}/bin/activate


# continue to use your python venv as normal
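Assuming you save the script above as, say, projectA-job.sh (the filename here is just an example), it can then be submitted in the usual way:

```shell
qsub projectA-job.sh
```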

 

Conda, Anaconda and Miniconda

As explained above, we recommend using python virtual environments in preference to conda where possible. This is because python virtual environments tend to include only the minimum of what is required, whereas conda can also install non-python software that may cause incompatibilities with other software on our systems.

Conda Configuration

By default, conda will store your environments and downloaded packages in your home directory under ~/.conda - this will quickly cause your home directory to run out of space. To prevent this from happening we recommend the following:

  1. Create a dedicated conda folder in your group home folder with subdirectories for packages and environments e.g.
    cd /well/<group>/users/<username>
    mkdir -p conda/pkgs conda/envs
  2. Create the file ~/.condarc containing the following configuration (NB indented lines are indented two spaces):
    pkgs_dirs:
      - /well/<group>/users/<username>/conda/pkgs
    envs_dirs:
      - /well/<group>/users/<username>/conda/envs
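After writing ~/.condarc, you can confirm that conda has picked up the new locations (this requires conda to be available in your session):

```shell
# The "package cache" and "envs directories" entries in the output should now
# point at your /well/<group>/users/<username>/conda folders
conda info
```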

Activating Conda Environments via qsub jobs

Before activating a conda environment, your shell must be configured to initialise conda itself. When working interactively, this happens automatically because conda places the relevant initialisation commands in your ~/.bashrc file. However, jobs submitted via qsub run in a non-interactive bash shell which does not automatically read your ~/.bashrc file, so when you try to run conda activate in your job script it will fail because conda is not yet initialised.

To overcome this problem, source your ~/.bashrc file before running conda activate, i.e. your job script should contain:

source ~/.bashrc

conda activate my-conda-environment

 

Alternatively, you can copy the relevant code from ~/.bashrc into your job submission script.
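Putting this together, a minimal conda job script might look like the following, where my-conda-environment is the placeholder name from above and my_script.py stands in for your own program:

```shell
#!/bin/bash

# initialise conda in this non-interactive shell
source ~/.bashrc

# activate the environment created earlier
conda activate my-conda-environment

# run your work as normal
python my_script.py
```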