i'm out of the loop on this one. i did a good deal of HPC -- specifically electromagnetics simulation -- about a year ago and SPIR-V worked OOTB for GPGPU on AMD GPUs. hell, i wrote my kernels in _Rust_ and they compiled and ran just fine (Embark Studios' `rust-gpu` compiler backend), which is about as edge-casey as i can imagine.
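for context, here's roughly the shape of such a kernel -- a minimal sketch, not from my actual project; the doubling logic, thread count, and buffer bindings are purely illustrative, but the attribute plumbing is `rust-gpu`'s real `spirv-std` API:

```rust
// Compiled by rust-gpu (a rustc codegen backend) straight to SPIR-V,
// so it runs on any Vulkan-capable GPU -- AMD included.
#![cfg_attr(target_arch = "spirv", no_std)]

use spirv_std::glam::UVec3;
use spirv_std::spirv;

// One workgroup = 64 invocations; each invocation doubles one element.
#[spirv(compute(threads(64)))]
pub fn main_cs(
    #[spirv(global_invocation_id)] id: UVec3,
    #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] buf: &mut [f32],
) {
    let i = id.x as usize;
    // Bounds check: the dispatch may be padded past the buffer length.
    if i < buf.len() {
        buf[i] *= 2.0;
    }
}
```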
how is AI stuff tied to nvidia? what would make it difficult to port (or to be portable)?
AFAIK the vast majority of GPU-based ML training/inference runs on CUDA, and has for many years; support for other vendors' backends is only slowly materializing.
It doesn't help that AMD's equivalent open toolchain, ROCm, officially supports only a narrow slice of their own GPU lineup.