Python on the BMRC Cluster
- Pre-installed python software
- Jupyter Notebook for remote interactive Python coding
- Creating and managing your own python virtual environments
- Using Conda
USING PYTHON ON THE BMRC CLUSTER
The principal method for using Python on the BMRC cluster is to load one of our pre-installed software modules. To see which versions of Python are available run (noting the capital letter):
module avail Python
Our pre-installed Python modules include a number of common packages. To see which packages are included run e.g.:
module whatis Python/3.7.4-GCCcore-8.3.0
and then check the Extensions list.
In addition to Python itself, we have a number of auxiliary Python modules which can be loaded in order to access other widely used packages. For example, scipy, numpy and pandas are available through the SciPy-bundle-... modules. To see which versions of SciPy-bundle-... are available run:
module avail SciPy-bundle
To find a SciPy-bundle module that is compatible with your chosen Python module, check the Python version noted in the name and the toolchain. For example, SciPy-bundle/2019.10-foss-2019b-Python-3.7.4 is compatible with Python/3.7.4-GCCcore-8.3.0, because
- they use the same version of Python, and
- they have compatible toolchains because the GCCcore-8.3.0 toolchain is part of the foss-2019b toolchain (to verify this, use module show foss/2019b).
If in doubt, simply try to load both modules together - if they are incompatible, an error will be reported.
Jupyter Notebook for remote interactive Python coding
Jupyter Notebook is a popular Python-based tool that lets you edit and run Python code remotely through a web browser. Here's how to use it:
- Login to rescomp1 or rescomp2 and create a configuration directory for jupyter using: mkdir -p ~/.jupyter
Inside this directory create the file ~/.jupyter/jupyter_notebook_config.py which should contain this single line:
c.NotebookApp.ip = '*'
- While still logged in to rescomp1 or rescomp2, start an interactive cluster session using e.g. qlogin -q short.qc . Make a note of which node is running your qlogin session by using the hostname -s command or checking your prompt.
- Load the IPython module: module load IPython/7.9.0-foss-2019b-Python-3.7.4
Start jupyter notebook: jupyter notebook --no-browser
After running this command, you will see several lines of text appear on screen. The last few lines will look as below - you need only look at the line which begins http://127.0.0.1...
To access the notebook, open this file in a browser:
    file:///...
Or copy and paste one of these URLs:
    http://<qlogin_host_name>:8888/?token=...
 or http://127.0.0.1:8888/?token=...
Note that your own port number and token may differ from those shown here and <qlogin_host_name> will be the full version of your qlogin hostname eg. compc001.hpc.in.ox.ac.uk that you discovered above. In the following instructions, make sure to use the information shown on your own screen.
At this point, you need to create a tunnelled connection from your own computer to your qlogin session. First take note of the port number which could be 8888 as shown above or another number.
Then open a new terminal window ON YOUR OWN COMPUTER (i.e. not on rescomp1 or rescomp2) and create an SSH tunnel following this template:
ssh -L 8888:qloginhostname:8888 username@rescomp1
Remember to use your own port number in place of 8888, as well as your own qloginhostname (the short version is sufficient) and your own username.
- After running the tunnel command your terminal will appear to be logged into rescomp1 and (invisibly to you) an additional connection now exists between your computer and your qlogin host. Now open a web browser on your own computer and copy the line from your own terminal corresponding to the http://127.0.0.1... line above.
- Paste the newly copied line into your web browser and Jupyter notebook will appear
- To close down, click the Quit button in the top right of Jupyter notebook and then close all your terminal windows.
In most cases, if you require software or Python packages which are not yet installed on the cluster, it is best to email us to request them. When sending software requests, please ensure that you include sufficient information: the software name, its homepage or download page, and whether you wish to use it in conjunction with any other particular software modules.
In some cases, however, you may wish to try out software packages or install them for testing purposes. In these cases, installing your own packages via a Python virtual environment may be the best way.
On the BMRC cluster, we recommend Python virtual environments in preference to other ways of handling multiple python installations. A python virtual environment provides you with a local copy of python over which you have full control, including which packages to install.
The need for dual virtual environments
At any one time the BMRC cluster comprises computers with different generations of CPU architecture. Currently, these fall into two groups: our C and D nodes, as well as rescomp3, use Ivybridge-compatible CPUs, while our E and F nodes, as well as rescomp1-2, use Skylake CPUs. Software built for Skylake will not run on Ivybridge, while software built for Ivybridge will run on Skylake but will not take advantage of the newer CPU's capabilities. For this reason, we in fact maintain two separate libraries of pre-installed software - one for Ivybridge and one for Skylake - although this is normally invisible to you because our system automatically chooses which version to make available when you load a module. When creating and managing your own environments, however, you will need to make this choice yourself.
Creating and managing your own python virtual environments
Here is an example of how to create and manage your own python virtual environments. Using this method, you create local package libraries on disk. Once configured, you can then install or remove packages using e.g. pip as you wish.
In order to ensure that your code will work across all cluster nodes (whether those nodes use Ivybridge or Skylake CPUs), the overall goal is to create two near-identical local package libraries, one for Skylake CPUs and one for Ivybridge CPUs, and to select the correct one automatically when needed.
- First login to either rescomp1 or rescomp2, which use Skylake CPUs. Use module avail Python to list the available versions of Python, choose a suitable one e.g. Python/3.7.4-GCCcore-8.3.0 and then run module load Python/3.7.4-GCCcore-8.3.0 to load it.
- We will assume you wish to create a python virtual environment called projectA. First, find a suitable place on disk to store all your python virtual environments e.g. /well/<group>/users/<username>/python/ . Create this directory before continuing and then cd into it.
- Once inside your python directory, run python -m venv projectA-skylake . This will create a new python virtual environment in the projectA-skylake sub-folder. Before using the environment you must activate it by running source projectA-skylake/bin/activate . Notice that your shell prompt changes to show the active virtual environment. Once it is activated, you can install software with e.g. pip install XYZ (to find packages, search on https://pypi.org - note that the pip search command no longer works because PyPI has disabled its search API). Repeat the process to install all the packages you need.
- Once you have installed all the packages you need in projectA-skylake run pip freeze > requirements.txt . This will put a list of all your installed packages and their versions into the file requirements.txt . We will use this file to recreate this environment for Ivybridge.
- Run deactivate to deactivate your projectA-skylake environment and then ssh to rescomp3. Note you can only reach rescomp3 by first logging into rescomp1-2 and then typing ssh rescomp3 .
- Once logged into rescomp3, load the same Python module you previously loaded on rescomp1-2 e.g. module load Python/3.7.4-GCCcore-8.3.0 . Note that our system automatically loads the Ivybridge version of this software now that you are on rescomp3.
- cd to your python folder (i.e. the parent folder in which projectA-skylake is located) and create a second virtual environment by running python -m venv projectA-ivybridge . Once this is created, activate it by running source projectA-ivybridge/bin/activate .
- With the projectA-ivybridge environment activated, you can install all the same packages that were previously installed into the Skylake environment by running pip install -r /path/to/requirements.txt i.e. using the requirements.txt file you created earlier. Once python has finished installing all the packages from requirements.txt, run deactivate to deactivate your current python environment.
- You now have two equivalent python virtual environments, one built for Skylake and the other built for Ivybridge.
Now that you have two identical environments, one for Ivybridge and one for Skylake, it only remains to activate the correct one in your job submission scripts. To do that, you can copy or adapt the following sample submission script:
#!/bin/bash
# note that you must load whichever main Python module you used to create your virtual environments before activating the virtual environment
module load Python/3.7.4-GCCcore-8.3.0

# determine Ivybridge or Skylake compatibility on this node
# (stand-in check: Skylake CPUs support AVX-512, Ivybridge CPUs do not;
# if BMRC provide a helper command for this, use that instead)
CPU_ARCHITECTURE=$(grep -q avx512f /proc/cpuinfo && echo skylake || echo ivybridge)

# Error handling
if [[ ! $? == 0 || -z "$CPU_ARCHITECTURE" ]]; then
    echo "Fatal error: Please send the following information to the BMRC team: Could not determine CPU software architecture on $(hostname)"
    exit 1
fi

# Activate the ivybridge or skylake version of your python virtual environment
source /well/<group>/users/<username>/python/projectA-${CPU_ARCHITECTURE}/bin/activate

# continue to use your python venv as normal
Using Conda
As explained above, we recommend where possible using python virtual environments in preference to conda. This is because python virtual environments tend to include only the minimum of what is required, whereas conda can also install non-python software that may cause incompatibilities with other software on our systems.
By default, conda will store your environments and downloaded packages in your home directory under ~/.conda - this will quickly cause your home directory to run out of space. To prevent this from happening we recommend the following:
Create a dedicated conda folder in your group home folder with subdirectories for packages and environments e.g.
mkdir -p conda/pkgs conda/envs
Create the file ~/.condarc containing the following configuration (NB indented lines are indented two spaces; adjust the paths to match your own conda folder):
pkgs_dirs:
  - /well/<group>/users/<username>/conda/pkgs
envs_dirs:
  - /well/<group>/users/<username>/conda/envs
Activating Conda Environments via qsub jobs
Before activating a conda environment, your shell must be configured to initialise conda itself. When working interactively, this happens automatically because conda places the relevant initialisation commands in your ~/.bashrc file. However, jobs submitted via qsub run in a non-interactive bash environment which does not automatically read your ~/.bashrc file, so when you try to run conda activate in your job script it will fail because conda has not been initialised.
To overcome this problem you can source your ~/.bashrc file before running conda activate... i.e. in your job script you should have:
source ~/.bashrc
conda activate my-conda-environment
Alternatively, you can copy the relevant code from ~/.bashrc into your job submission script.