RuntimeError: HIP error: shared object initialization failed (RX 6800)

Description:

The current python-pytorch-rocm package fails on a simple tensor test on the GPU:

CODE:

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
if device.type == 'cuda':
    print(torch.cuda.get_device_name(0))
else: print('NO GPU!')

shape = (2,3,)
rand_tensor = torch.rand(shape).to(device)

print(f"Random Tensor: \n {rand_tensor} \n")

OUTPUT:

AMD Radeon RX 6800

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[6], line 11
      8 shape = (2,3,)
      9 rand_tensor = torch.rand(shape).to(device)
---> 11 print(f"Random Tensor: \n {rand_tensor} \n")

File /usr/lib/python3.11/site-packages/torch/_tensor.py:966, in Tensor.__format__(self, format_spec)
    964 if self.dim() == 0 and not self.is_meta and type(self) is Tensor:
    965     return self.item().__format__(format_spec)
--> 966 return object.__format__(self, format_spec)

File /usr/lib/python3.11/site-packages/torch/_tensor.py:461, in Tensor.__repr__(self, tensor_contents)
    457     return handle_torch_function(
    458         Tensor.__repr__, (self,), self, tensor_contents=tensor_contents
    459     )
    460 # All strings are unicode in Python 3.
--> 461 return torch._tensor_str._str(self, tensor_contents=tensor_contents)

File /usr/lib/python3.11/site-packages/torch/_tensor_str.py:677, in _str(self, tensor_contents)
    675 with torch.no_grad(), torch.utils._python_dispatch._disable_current_modes():
    676     guard = torch._C._DisableFuncTorch()
--> 677     return _str_intern(self, tensor_contents=tensor_contents)

File /usr/lib/python3.11/site-packages/torch/_tensor_str.py:597, in _str_intern(inp, tensor_contents)
    595                     tensor_str = _tensor_str(self.to_dense(), indent)
    596                 else:
--> 597                     tensor_str = _tensor_str(self, indent)
    599 if self.layout != torch.strided:
    600     suffixes.append("layout=" + str(self.layout))

File /usr/lib/python3.11/site-packages/torch/_tensor_str.py:349, in _tensor_str(self, indent)
    345     return _tensor_str_with_formatter(
    346         self, indent, summarize, real_formatter, imag_formatter
    347     )
    348 else:
--> 349     formatter = _Formatter(get_summarized_data(self) if summarize else self)
    350     return _tensor_str_with_formatter(self, indent, summarize, formatter)

File /usr/lib/python3.11/site-packages/torch/_tensor_str.py:138, in _Formatter.__init__(self, tensor)
    134         self.max_width = max(self.max_width, len(value_str))
    136 else:
    137     nonzero_finite_vals = torch.masked_select(
--> 138         tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0)
    139     )
    141     if nonzero_finite_vals.numel() == 0:
    142         # no valid number, do nothing
    143         return

RuntimeError: HIP error: shared object initialization failed
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing HIP_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
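
The error text above suggests rerunning with HIP_LAUNCH_BLOCKING=1 so the failing kernel launch is reported synchronously. A minimal sketch of that rerun (assuming the variable is picked up when set before torch is imported):

import os
os.environ["HIP_LAUNCH_BLOCKING"] = "1"  # synchronous kernel launches, per the hint in the error message

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
rand_tensor = torch.rand((2, 3)).to(device)
print(rand_tensor)  # the tensor repr is where the error surfaces in the traceback above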

Modifying the code to run on the CPU works fine -- see below:

import torch

device='cpu'

shape = (2,3,)
rand_tensor = torch.rand(shape).to(device)

print(f"Random Tensor: \n {rand_tensor} \n")

OUTPUT:

Random Tensor: 
 tensor([[0.8282, 0.5476, 0.7043],
        [0.8292, 0.1788, 0.5849]]) 

Additional info:

system fully updated via:

sudo pacman -Syy && sudo pacman -Syu

As already flagged, python-pytorch-rocm complains about a missing ISA in the bundle; please see:

blender#13 (comment 174466)

python-pytorch-rocm#1 (moved)
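
For triage, here is a minimal diagnostic sketch (assuming torch.cuda.get_arch_list() and the gcnArchName device property are exposed by this ROCm build) to compare the ISAs bundled in the package against the RX 6800's gfx1030:

import torch

print(torch.__version__, torch.version.hip)  # package build and HIP runtime version
print(torch.cuda.get_arch_list())            # ISAs the binary was compiled for
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, getattr(props, "gcnArchName", "n/a"))  # installed GPU and its reported ISA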
