Compiler problem

I tested an example using TensorFlow and found that I need to run "export XLA_FLAGS=--xla_gpu_cuda_data_dir=/opt/cuda" every time before using it; otherwise I get this error:

W0000 00:00:1739257193.106824 42634 gpu_backend_lib.cc:579] Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice.
This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice.
Searched for CUDA in the following directories:
./cuda_sdk_lib
ipykernel_launcher.runfiles/cuda_nvcc
ipykern/cuda_nvcc
/usr/local/cuda
/usr/lib/python3.13/site-packages/tensorflow/python/platform/../../../nvidia/cuda_nvcc
/usr/lib/python3.13/site-packages/tensorflow/python/platform/../../../../nvidia/cuda_nvcc
/usr/lib/python3.13/site-packages/tensorflow/python/platform/../../cuda
You can choose the search directory by setting xla_gpu_cuda_data_dir in HloModule's DebugOptions.
For most apps, setting the environment variable XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda will work.
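As an alternative to exporting the variable in the shell every time, a minimal sketch of setting it from Python before TensorFlow is imported (the /opt/cuda path matches the command above; adjust it for your install):

```python
import os

# Set XLA_FLAGS before importing TensorFlow, so XLA's GPU backend
# picks up the CUDA data dir when it initializes.
os.environ["XLA_FLAGS"] = "--xla_gpu_cuda_data_dir=/opt/cuda"

# import tensorflow as tf  # must come after the variable is set

print(os.environ["XLA_FLAGS"])
```

Note that this only works if it runs before the first TensorFlow import in the process; a permanent fix would be exporting the variable from your shell profile instead.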

You will find the example in the attached file.

Edited by Christian Heusel