Hacker News

i'm out of the loop on this one. i did a good deal of HPC -- specifically electromagnetics simulation -- about a year ago, and SPIR-V worked OOTB for GPGPU on AMD GPUs. hell, i wrote my kernels in _Rust_ and they compiled and ran just fine (Embark Studios' `rust-gpu` shim), which is about as edge-casey as i can imagine.

how is AI stuff tied to nvidia? what would make it difficult to port (or to be portable)?




AFAIK the vast majority of ML training/inference on the GPU runs on CUDA, and has done so for many years, while other vendor-specific GPU backends are slowly gaining support.

It doesn't help that AMD's equivalent open toolchain, ROCm, is barely even supported by their own GPU lineup.
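For context, here's roughly what a CUDA kernel and its host-side launch look like (a minimal SAXPY sketch, not taken from any particular ML framework). Everything here -- the `__global__` qualifier, the triple-chevron launch syntax, the `cuda*` runtime calls -- targets NVIDIA's toolchain. Frameworks like PyTorch have thousands of hand-tuned kernels written this way, which is the lock-in; AMD's HIP is deliberately a near-source-compatible clone of this API so such code can be mechanically ported.

```cuda
#include <cuda_runtime.h>   // NVIDIA-specific runtime API

// device kernel: computes y = a*x + y elementwise
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    // ... fill the buffers with cudaMemcpy, then launch:
    // <<<blocks, threads-per-block>>> is NVIDIA's extension to C++
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

SPIR-V sidesteps this by being a vendor-neutral IR, but that only helps code that was written against a portable API in the first place.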


FWIW, ROCm GPU support is getting better, particularly in distro-provided packages; e.g. https://salsa.debian.org/rocm-team/community/team-project/-/...





