Wrangling CUDA dependencies can be exceptionally difficult, especially across multiple machines and platforms. As of 2022, the full CUDA stack from NVIDIA, from compilers to development libraries, is distributed as conda packages on conda-forge. Combined with newer Pixi functionality, this makes fully reproducible CUDA accelerated workflows possible.
Pixi system requirements
Pixi supports a system-requirements table, which lets a workspace define the system-level machine configuration that Pixi expects on the host system.
These system requirements are expressed through conda virtual packages and can be seen in the output of pixi info.
$ pixi info
System
------------
Pixi version: 0.58.0
Platform: linux-64
Virtual packages: __unix=0=0
: __linux=6.8.0=0
: __glibc=2.39=0
: __cuda=13.0=0
: __archspec=1=skylake
Cache dir: /home/<user>/.cache/rattler/cache
Auth storage: /home/<user>/.rattler/credentials.json
Config locations: No config files found
...

We can specify a system requirement for the existence of CUDA on a host system by
$ pixi workspace system-requirements add cuda 12

which results in a pixi.toml of

[workspace]
channels = ["conda-forge"]
name = "cuda"
platforms = ["linux-64"]
version = "0.1.0"

[tasks]

[dependencies]

[system-requirements]
cuda = "12"
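If you want to detect the host's CUDA support programmatically before adding the system requirement, the `__cuda` virtual package line in the `pixi info` output shown earlier can be parsed. A rough sketch, assuming the `name=version=build` layout from that output (the helper itself is hypothetical, not part of Pixi):

```python
def parse_virtual_packages(pixi_info: str) -> dict[str, str]:
    """Extract conda virtual packages (name -> version) from `pixi info` output."""
    packages = {}
    for line in pixi_info.splitlines():
        # Drop the "Virtual packages:" / ":" prefix, keep the name=version=build part
        entry = line.split(":", 1)[-1].strip()
        if entry.startswith("__"):
            name, version, *_ = entry.split("=")
            packages[name] = version
    return packages


# Sample taken from the `pixi info` output above
sample = """\
Virtual packages: __unix=0=0
                : __linux=6.8.0=0
                : __glibc=2.39=0
                : __cuda=13.0=0
                : __archspec=1=skylake"""

packages = parse_virtual_packages(sample)
print("__cuda" in packages)  # True
print(packages["__cuda"])    # 13.0
```

If `__cuda` is absent from the mapping, the host has no usable NVIDIA driver and a cuda system requirement would not be satisfiable locally.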
Knowing what version of CUDA to set for the system-requirement requires knowledge of the host platform's NVIDIA drivers.
The NVIDIA System Management Interface (nvidia-smi) utility can provide information on a host machine’s NVIDIA driver version and the maximum supported version of CUDA that is known to work with the driver.
$ nvidia-smi --version
NVIDIA-SMI version : 580.95.05
NVML version : 580.95
DRIVER version : 580.95.05
CUDA Version : 13.0

CUDA has excellent backwards compatibility, so while CUDA 13.0 is the known supported version for this driver, it is probable that CUDA 12 releases will also work with the driver, and possible that newer CUDA 13 releases will work as well.
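Extracting those values from the `nvidia-smi --version` output above can be sketched with a few lines of parsing; the helper below is a hypothetical illustration based on the `key : value` layout shown, not an NVIDIA-provided API:

```python
def parse_nvidia_smi_version(text: str) -> dict[str, str]:
    """Parse the 'key : value' lines of `nvidia-smi --version` output."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info


# Sample taken from the output above
sample = """\
NVIDIA-SMI version  : 580.95.05
NVML version        : 580.95
DRIVER version      : 580.95.05
CUDA Version        : 13.0"""

info = parse_nvidia_smi_version(sample)
print(info["DRIVER version"])  # 580.95.05
print(info["CUDA Version"])    # 13.0
```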
The reported CUDA version is a guarantee but not a restriction.
For our software applications, if we know our dependencies have CUDA 13 support then we should provide a system requirement of CUDA 13. If we know that dependencies only have support up to CUDA 12, or that our target host machine has a driver with support up to CUDA 12, then we should provide a system requirement of CUDA 12.
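That decision rule amounts to taking the lower of the driver-supported CUDA major version and the highest CUDA major version the dependencies support. A minimal sketch of that logic (the helper name and major-version-only comparison are illustrative assumptions, not something Pixi does for you):

```python
def cuda_system_requirement(driver_cuda: str, dependency_max_cuda: str) -> str:
    """Pick a CUDA system requirement as the lower of the driver's
    supported CUDA major version and the dependencies' supported major version."""
    driver_major = int(driver_cuda.split(".")[0])
    dep_major = int(dependency_max_cuda.split(".")[0])
    return str(min(driver_major, dep_major))


# Driver supports CUDA 13.0, but dependencies only have support up to CUDA 12:
print(cuda_system_requirement("13.0", "12.9"))  # 12
# Both the driver and the dependencies support CUDA 13:
print(cuda_system_requirement("13.0", "13.0"))  # 13
```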
All of this now allows us to do things like
$ pixi workspace system-requirements add cuda 12
$ pixi add pytorch-gpu 'cuda-version 12.9.*'
✔ Added pytorch-gpu >=2.8.0,<3
✔ Added cuda-version 12.9.*

and arrive at a fully specified and locked CUDA accelerated environment with PyTorch
$ pixi list torch
Package Version Build Size Kind Source
libtorch 2.8.0 cuda129_mkl_hc64f9c6_301 836.2 MiB conda https://conda.anaconda.org/conda-forge/
pytorch 2.8.0 cuda129_mkl_py313_h392364f_301 24 MiB conda https://conda.anaconda.org/conda-forge/
pytorch-gpu 2.8.0 cuda129_mkl_h43a4b0b_301 46.2 KiB conda https://conda.anaconda.org/conda-forge/

$ pixi list cuda
Package Version Build Size Kind Source
cuda-crt-tools 12.9.86 ha770c72_2 28.5 KiB conda https://conda.anaconda.org/conda-forge/
cuda-cudart 12.9.79 h5888daf_0 22.7 KiB conda https://conda.anaconda.org/conda-forge/
cuda-cudart_linux-64 12.9.79 h3f2d84a_0 192.6 KiB conda https://conda.anaconda.org/conda-forge/
cuda-cuobjdump 12.9.82 hffce074_1 239.3 KiB conda https://conda.anaconda.org/conda-forge/
cuda-cupti 12.9.79 h676940d_1 1.8 MiB conda https://conda.anaconda.org/conda-forge/
cuda-nvcc-tools 12.9.86 he02047a_2 26.1 MiB conda https://conda.anaconda.org/conda-forge/
cuda-nvdisasm 12.9.88 hffce074_1 5.3 MiB conda https://conda.anaconda.org/conda-forge/
cuda-nvrtc 12.9.86 hecca717_1 64.1 MiB conda https://conda.anaconda.org/conda-forge/
cuda-nvtx 12.9.79 hecca717_1 28.7 KiB conda https://conda.anaconda.org/conda-forge/
cuda-nvvm-tools 12.9.86 h4bc722e_2 23.1 MiB conda https://conda.anaconda.org/conda-forge/
cuda-version 12.9 h4f385c5_3 21.1 KiB conda https://conda.anaconda.org/conda-forge/

$ pixi run python -c 'import torch; print(torch.cuda.is_available())'
True

What's very powerful about this functionality is that it allows for solving environments with CUDA dependencies even when the platform Pixi is solving on doesn't itself have CUDA support. This makes it possible to target the remote machines that will be the execution environment from whatever laptop you have.
Example on osx-arm64:
% pixi init example && cd example
% pixi workspace system-requirements add cuda 12
% pixi add --platform linux-64 pytorch-gpu 'cuda-version 12.9.*'
% pixi list --platform linux-64 torch
Package Version Build Size Kind Source
libtorch 2.8.0 cuda129_mkl_hc64f9c6_301 836.2 MiB conda https://conda.anaconda.org/conda-forge/
pytorch 2.8.0 cuda129_mkl_py313_h392364f_301 24 MiB conda https://conda.anaconda.org/conda-forge/
pytorch-gpu 2.8.0 cuda129_mkl_h43a4b0b_301 46.2 KiB conda https://conda.anaconda.org/conda-forge/

Linux containerization
As covered in Reproducible Machine Learning Workflows for Scientists, Linux containerization can become trivially templated for deployment to remote workers with no shared filesystem or cache.