
Using CUDA conda packages with Pixi

University of Wisconsin-Madison

Wrangling CUDA dependencies can be exceptionally difficult, especially across multiple machines and platforms. As of 2022, however, the full CUDA stack from NVIDIA (from compilers to development libraries) is distributed as conda packages on conda-forge. Combined with Pixi, this makes fully reproducible CUDA-accelerated workflows possible.

Pixi system requirements

Pixi supports a system-requirements table that lets a workspace declare the system-level machine configuration that Pixi expects on the host. These system requirements are expressed through conda virtual packages and can be inspected with pixi info:

pixi info
System
------------
       Pixi version: 0.58.0
           Platform: linux-64
   Virtual packages: __unix=0=0
                   : __linux=6.8.0=0
                   : __glibc=2.39=0
                   : __cuda=13.0=0
                   : __archspec=1=skylake
          Cache dir: /home/<user>/.cache/rattler/cache
       Auth storage: /home/<user>/.rattler/credentials.json
   Config locations: No config files found

...

We can specify a system requirement for the existence of CUDA on the host system with

pixi workspace system-requirements add cuda 12
pixi.toml
[workspace]
channels = ["conda-forge"]
name = "cuda"
platforms = ["linux-64"]
version = "0.1.0"

[tasks]

[dependencies]

[system-requirements]
cuda = "12"
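
A workspace with this system requirement will refuse to install on a machine whose driver does not provide the __cuda virtual package. One way to keep a workspace usable on CUDA-less machines is to scope the requirement to an optional environment via Pixi's feature and environment tables. A minimal sketch (the feature name cuda and environment name gpu are our own choices):

```toml
# Sketch: scope the CUDA system requirement to an optional "gpu"
# environment so the default environment still installs on machines
# without a CUDA driver. Feature/environment names are illustrative.
[feature.cuda.system-requirements]
cuda = "12"

[environments]
gpu = ["cuda"]
```

With this layout, only pixi install -e gpu enforces the CUDA requirement; the default environment remains solvable everywhere.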

All of this now allows us to do things like

pixi workspace system-requirements add cuda 12
pixi add pytorch-gpu 'cuda-version 12.9.*'
✔ Added pytorch-gpu >=2.8.0,<3
✔ Added cuda-version 12.9.*

and arrive at a fully specified and locked CUDA-accelerated environment with PyTorch:

pixi list torch
Package      Version  Build                           Size       Kind   Source
libtorch     2.8.0    cuda129_mkl_hc64f9c6_301        836.2 MiB  conda  https://conda.anaconda.org/conda-forge/
pytorch      2.8.0    cuda129_mkl_py313_h392364f_301  24 MiB     conda  https://conda.anaconda.org/conda-forge/
pytorch-gpu  2.8.0    cuda129_mkl_h43a4b0b_301        46.2 KiB   conda  https://conda.anaconda.org/conda-forge/
pixi list cuda
Package               Version  Build       Size       Kind   Source
cuda-crt-tools        12.9.86  ha770c72_2  28.5 KiB   conda  https://conda.anaconda.org/conda-forge/
cuda-cudart           12.9.79  h5888daf_0  22.7 KiB   conda  https://conda.anaconda.org/conda-forge/
cuda-cudart_linux-64  12.9.79  h3f2d84a_0  192.6 KiB  conda  https://conda.anaconda.org/conda-forge/
cuda-cuobjdump        12.9.82  hffce074_1  239.3 KiB  conda  https://conda.anaconda.org/conda-forge/
cuda-cupti            12.9.79  h676940d_1  1.8 MiB    conda  https://conda.anaconda.org/conda-forge/
cuda-nvcc-tools       12.9.86  he02047a_2  26.1 MiB   conda  https://conda.anaconda.org/conda-forge/
cuda-nvdisasm         12.9.88  hffce074_1  5.3 MiB    conda  https://conda.anaconda.org/conda-forge/
cuda-nvrtc            12.9.86  hecca717_1  64.1 MiB   conda  https://conda.anaconda.org/conda-forge/
cuda-nvtx             12.9.79  hecca717_1  28.7 KiB   conda  https://conda.anaconda.org/conda-forge/
cuda-nvvm-tools       12.9.86  h4bc722e_2  23.1 MiB   conda  https://conda.anaconda.org/conda-forge/
cuda-version          12.9     h4f385c5_3  21.1 KiB   conda  https://conda.anaconda.org/conda-forge/
pixi run python -c 'import torch; print(torch.cuda.is_available())'
True
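
This check can also be captured as a reusable task in the so-far empty [tasks] table of the pixi.toml above. A sketch (the task name check-gpu is our own choice):

```toml
# Sketch: fill the empty [tasks] table with a GPU availability check.
# The task name "check-gpu" is illustrative.
[tasks]
check-gpu = "python -c 'import torch; print(torch.cuda.is_available())'"
```

The check then runs in the locked environment with pixi run check-gpu.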

Linux containerization

As covered in Reproducible Machine Learning Workflows for Scientists, Linux containerization makes it trivial to template such workspaces for deployment to remote workers with no shared filesystem or cache.
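
One such template is a Dockerfile that installs the locked environment at build time. A sketch, assuming the ghcr.io/prefix-dev/pixi base image and an illustrative train.py entry point (neither is specified in this article):

```dockerfile
# Sketch: containerize a locked Pixi workspace for GPU-less build hosts.
# Base image and entry point are assumptions, not from the article.
FROM ghcr.io/prefix-dev/pixi:0.58.0

WORKDIR /app

# The build machine may have no GPU, so override the __cuda virtual
# package detection to let the solver assume CUDA 12 at runtime.
ENV CONDA_OVERRIDE_CUDA=12

# Install exactly the environment pinned in pixi.lock.
COPY pixi.toml pixi.lock ./
RUN pixi install --locked

COPY . .
CMD ["pixi", "run", "python", "train.py"]
```

At runtime the container would still need the host driver exposed, e.g. docker run --gpus all with the NVIDIA container runtime.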