Torch shared library: undefined symbol to CBLAS

Description:

When importing torch in Python as below:

import torch

The following error is raised:

File /usr/lib/python3.12/site-packages/torch/__init__.py:237
    235     if USE_GLOBAL_DEPS:
    236         _load_global_deps()
--> 237     from torch._C import *  # noqa: F403
    239 # Appease the type checker; ordinarily this binding is inserted by the
    240 # torch._C module initialization code in C
    241 if TYPE_CHECKING:

ImportError: /usr/lib/libtorch_cpu.so: undefined symbol: cblas_gemm_f16f16f32

I have reinstalled PyTorch, CUDA, cuDNN, BLAS/OpenBLAS, MKL, oneDNN, OpenMP, and NumPy, but no luck 😑
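One way to narrow this down is to check whether the system BLAS actually exports the symbol that libtorch_cpu.so complains about. This is a minimal sketch; it assumes OpenBLAS provides the CBLAS interface here, and the candidate library names are guesses to adjust for whatever BLAS is installed:

import ctypes

SYMBOL = "cblas_gemm_f16f16f32"

# Check whether the installed BLAS exports the symbol libtorch_cpu.so needs.
# The candidate names below are assumptions; adjust them to match the BLAS
# actually installed (e.g. libmkl_rt.so for MKL).
for candidate in ("libopenblas.so", "libblas.so.3"):
    try:
        lib = ctypes.CDLL(candidate)
    except OSError:
        print(f"{candidate}: could not be loaded")
        continue
    try:
        getattr(lib, SYMBOL)
        print(f"{candidate}: exports {SYMBOL}")
    except AttributeError:
        print(f"{candidate}: does NOT export {SYMBOL}")

If none of the installed BLAS libraries export the symbol, that would suggest libtorch_cpu.so was built against a newer BLAS than the one currently installed.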

I suspect this issue is related to the recent Python update from 3.11 to 3.12 (a breaking change), after which some packages have not been rebuilt, or were never built, for 3.12.

Spec:

  • CPU: AMD Ryzen 5800H
  • GPU: NVIDIA RTX 3050 Mobile
  • Kernel: 6.8.8-arch1-1

Additional info:

  • package version(s): python-pytorch-cuda 2.3.0-2, python 3.12.3-1
  • config and/or log files: ~/.bash_profile:
# -- snip -- #
# CUDA
export CUDA_DEVICE_ORDER="PCI_BUS_ID"
export CUDA_VISIBLE_DEVICES=0
# -- snip -- #

Steps to reproduce:

  1. Install python-pytorch or python-pytorch-cuda with Python 3.12.
  2. Try importing torch in a Python REPL or script (see the sketch below).
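If the problem is at the dynamic-link level, it can likely be reproduced without importing the whole package by loading the shared library directly. A sketch, with the library path taken from the traceback above; that it fails with the same message is an assumption:

import ctypes

# ctypes binds symbols eagerly, so an unresolved symbol in libtorch_cpu.so
# should surface here with the same "undefined symbol" message as `import torch`.
try:
    ctypes.CDLL("/usr/lib/libtorch_cpu.so")
    print("libtorch_cpu.so loaded fine")
except OSError as exc:
    print(f"dlopen failed: {exc}")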