Torch shared library: undefined symbol to CBLAS
Description:
When importing torch in Python:
import torch
the following error is raised:
File /usr/lib/python3.12/site-packages/torch/__init__.py:237
235 if USE_GLOBAL_DEPS:
236 _load_global_deps()
--> 237 from torch._C import * # noqa: F403
239 # Appease the type checker; ordinarily this binding is inserted by the
240 # torch._C module initialization code in C
241 if TYPE_CHECKING:
ImportError: /usr/lib/libtorch_cpu.so: undefined symbol: cblas_gemm_f16f16f32
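The undefined-symbol error means the dynamic linker could not resolve cblas_gemm_f16f16f32 in any library loaded alongside libtorch_cpu.so. A minimal sketch of probing whether a shared library exports a given symbol, using libm as a stand-in (libtorch_cpu.so and the CBLAS symbol are the real targets, but may not be present on every system):

```python
import ctypes
import ctypes.util

def exports_symbol(libname, symbol):
    """Return True if the named shared library resolves the symbol."""
    path = ctypes.util.find_library(libname)
    if path is None:
        return False  # library not found at all
    lib = ctypes.CDLL(path)
    try:
        getattr(lib, symbol)  # triggers a dlsym lookup
        return True
    except AttributeError:
        return False

# Probe a symbol that libm does not provide
print(exports_symbol("m", "cblas_gemm_f16f16f32"))
```

The same check against the BLAS library actually on disk would show whether any installed provider exports the missing symbol.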
I have reinstalled PyTorch, CUDA, cuDNN, BLAS/OpenBLAS, MKL, oneDNN, OpenMP, and NumPy, but with no luck.
I suspect this issue is associated with the recent Python update, a breaking change from 3.11 to 3.12, after which some packages were not rebuilt (or never built) for 3.12.
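A quick sanity check on that suspicion, sketched below: print the running interpreter's version next to the site-packages directory it imports from, so a stale 3.11 install directory would stand out.

```python
import sys
import sysconfig

# Version of the interpreter actually running
interp = f"{sys.version_info.major}.{sys.version_info.minor}"
# Directory this interpreter imports pure-Python packages from
purelib = sysconfig.get_paths()["purelib"]

print("interpreter:", interp)
print("site-packages:", purelib)
```

If torch lives under a python3.11 site-packages path while the interpreter reports 3.12, the package was built for the previous Python.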
Spec:
- CPU: AMD Ryzen 5800H
- GPU: NVIDIA RTX 3050 Mobile
- Kernel: 6.8.8-arch1-1
Additional info:
- package version(s):
  - python-pytorch-cuda 2.3.0-2
  - python 3.12.3-1
- config and/or log files: ~/.bash_profile:
# -- snip -- #
# CUDA
export CUDA_DEVICE_ORDER="PCI_BUS_ID"
export CUDA_VISIBLE_DEVICES=0
# -- snip -- #
- link to upstream bug report, if any: https://github.com/pytorch/pytorch/issues/125391
Steps to reproduce:
- Install python-pytorch or python-pytorch-cuda with Python 3.12.
- Try importing torch in a Python REPL or script.
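The steps above can be sketched as a guarded import that captures the loader error instead of crashing the REPL (torch itself is assumed installed; the helper name is illustrative):

```python
def try_import(name):
    """Attempt an import; return None on success, the error message on failure."""
    try:
        __import__(name)
        return None
    except ImportError as e:
        return str(e)

err = try_import("torch")
print("imported OK" if err is None else f"import failed: {err}")
```

On the affected system this prints the same undefined-symbol message shown in the traceback above.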
Edited by Charles Dong