Anaconda Environment Tutorial for PyTorch and TensorFlow

This tutorial shows you how to create an Anaconda environment, activate it, and install libraries/packages for machine and deep learning (PyTorch and TensorFlow) using an Anaconda environment on Cheaha. It also covers how to access the terminal and how to use Jupyter Notebook's Graphical User Interface (GUI) to work with these Anaconda environments. Detailed steps to guide your creation of a Jupyter Notebook job are available here.

Installing Anaconda Environments using Terminal

To access the terminal (shell), please do the following.

  1. Log in to rc.uab.edu.

  2. Create a job on Cheaha using the Interactive Apps dropdown option. !Interactive Apps Dropdown Menu

  3. Select Jupyter Notebook, fill out the options as per your project needs, then click Launch. For more information on compute needs and a guide for selecting the right options, click here. !Jupyter Launch Button

  4. Click the Connect to Jupyter button. !Connect to Jupyter Button

    You will see the interface below. !Jupyter Notebook Landing Page

  5. When the job has been created, on the My Interactive Sessions page, click the button in front of Host (usually colored blue and in the format >_c0000).

    !host image

    This should open a terminal, as shown below.

    !Cheaha Shell CLI

  6. In this interface, you can create and activate environments, as well as install packages, modules, and libraries into your activated environment, as shown in the example below.
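
    For example, a minimal sketch of this workflow is shown below. The module name Anaconda3 and the environment name myenv are placeholders; adjust them to your setup and to the modules available on Cheaha.

    # load the Anaconda module so the conda command is available
    module load Anaconda3
    # create an environment named "myenv" with a specific Python version
    conda create -n myenv python=3.11
    # activate the environment before installing anything into it
    conda activate myenv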

How to Create a Custom Environment for PyTorch and TensorFlow

The instructions below provide a recommended step-by-step guide to creating and activating an environment that has PyTorch and/or TensorFlow installed and ready to use for deep learning projects.

Installing PyTorch Using Terminal

There are two builds of PyTorch that can be installed: one requiring GPUs, and another utilizing only CPUs. GPUs generally improve project compute speeds and are preferred. For both builds of PyTorch, please follow these steps:

  1. Create and activate an environment as stated in the links.

  2. Access the terminal following the steps here.

Note

When installing packages, modules, and libraries into environments, remember to also install ipykernel using conda install ipykernel. This way, your activated environment will appear in the list of kernels in your Jupyter Notebook.
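
As an example, a minimal sketch is shown below, assuming an already created environment named myenv (a placeholder). The explicit kernel registration step is optional and only needed if the environment does not appear in Jupyter automatically.

conda activate myenv
# install ipykernel so Jupyter can discover this environment as a kernel
conda install ipykernel
# optionally register the kernel explicitly under the environment's name
python -m ipykernel install --user --name myenv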

For a correct installation of PyTorch, we have to ensure some conditions are met; see the partition docs for a guide. One such condition is to load the CUDA toolkit using the command below in your environment setup form (see image below).

module load CUDA/11.8.0

!load CUDA

Note

The cudatoolkit version may vary; at the time of this tutorial, 11.8 is the version used. Running nvidia-smi, as in the image below, will show you the status, version, and other information on the GPUs in your created job session. The CUDA version is highlighted. The GPU CUDA version available on Cheaha at the time of this tutorial is 12.3. Because the toolkit version used is lower than the GPU CUDA version on Cheaha, the installation works.

!nvidia-smi output
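
If you are unsure which CUDA toolkit versions are available, you can list the CUDA modules from the terminal. The commands below assume the Lmod module system used on Cheaha.

# list CUDA modules that can be loaded
module avail CUDA
# or search the full module tree for CUDA
module spider CUDA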

Once your job has been created and your environment has been created and activated from the terminal (see the instructions above), run the command below.

conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

This command will install a GPU-compatible PyTorch version into your environment. To verify PyTorch is installed, and to see what version you have installed in your environment, use the command below.

conda list | grep "torch"

You should get an output like the image below.

!PyTorch Env Output
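
You can also confirm from the terminal that the installed build was compiled against the expected CUDA toolkit. The one-liner below is a quick check, assuming your environment is still active.

# print the installed PyTorch version and the CUDA version it was built with
python -c "import torch; print(torch.__version__, torch.version.cuda)"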

The same process can be followed for installing another deep learning library, TensorFlow (see the instructions below), with some minor differences. You may decide to install the TensorFlow library into the same environment or create a new one. As a best practice, you may want to install these libraries in separate environments.

Using PyTorch on Jupyter Notebook

As an example, we will use a sample Jupyter Notebook with a simple torch function to test whether a GPU will be utilized by PyTorch. Run the code below in a cell; if the first output is True, then your GPU is set up to support PyTorch functions.

import torch

# check whether a CUDA-capable GPU is available to PyTorch
print(torch.cuda.is_available())

# print the name of the current GPU device
x = torch.cuda.current_device()
print(torch.cuda.get_device_name(x))

!PyTorch Jupyter Notebook Output

Installing TensorFlow GPU Using Terminal

  1. Create a new environment with a Python version that is compatible with supported TensorFlow versions, using the command below. For this tutorial we will use Python 3.11.

    conda create -n tensorflow python=3.11
    
  2. The TensorFlow CPU and GPU versions require pip to be up to date. To upgrade pip to the latest version, use the command below.

    pip install --upgrade pip
    
  3. Install TensorFlow with pip

    pip install tensorflow[and-cuda]
    

The image below shows output confirming that the TensorFlow library will utilize the available GPU.

!TensorFlow GPU output
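
You can also verify this from the terminal with the one-liner below, a quick check assuming your tensorflow environment is active; an empty list means no GPU will be used.

# list the GPUs visible to TensorFlow
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"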

Note

The information (I) and warning (W) outputs notify you about the installed TensorFlow binary and how it will function. The I output informs you that the installed TensorFlow library will utilize your CPU for additional speed when GPUs are not the most efficient way to process certain operations. The W output tells you that TensorRT is not available; please note TensorRT is not currently supported on our systems.