[rocm] No GPU support after rebuild with ROCm 6.0
Description:
After updating ROCm to 6.0 and python-pytorch-opt-rocm to 2.1.2-2, PyTorch no longer appears to be built with GPU support (which PyTorch exposes through the torch.cuda API, even for ROCm/HIP builds).
$ python -c "import torch; torch.cuda.current_device()"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python3.11/site-packages/torch/cuda/__init__.py", line 769, in current_device
    _lazy_init()
  File "/usr/lib/python3.11/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
In addition, the following command previously printed 1, but now prints 0:
python -c "import torch; print(torch.cuda.device_count())"
Additional info:
- package version(s): 2.1.2-2
- config and/or log files:
- link to upstream bug report, if any:
Steps to reproduce:
- Install python-pytorch-opt-rocm from testing.
- Run: python -c "import torch; torch.cuda.current_device()"
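The reproduction step can also be run in a guarded form (a sketch using the same torch calls as above) so it reports the missing backend instead of raising the AssertionError:

$ python - <<'EOF'
import torch

# Only query the current device when a GPU backend is actually usable;
# otherwise print the compiled backend versions for debugging.
if torch.cuda.is_available():
    print("current device:", torch.cuda.current_device())
else:
    print("no GPU support; hip =", torch.version.hip, "cuda =", torch.version.cuda)
EOF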