After two years of development and some deliberation, AMD decided that there is no business case for running CUDA applications on AMD GPUs. One of the terms of my contract with AMD was that if AMD did not find it fit for further development, I could release it. Which brings us to today.
Now we’ll see if Nvidia drops CUDA immediately, or waits until next quarter.
Now let’s get this working on Nvidia hardware :P
As pointless as it sounds, it would be a great way to test the system and to call alternative implementations of each proprietary Nvidia library. It would also be great for debugging and development to provide an API for switching implementations at runtime, as sketched below.
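A minimal sketch of what runtime switching could look like, assuming hypothetical backend libraries (libfoo_cuda.so / libfoo_rocm.so) and a hypothetical entry point foo_gemm; the only real requirement is that both backends export the same ABI:

    /* Pick an implementation at runtime with dlopen().
     * libfoo_cuda.so, libfoo_rocm.so and foo_gemm are hypothetical
     * names for illustration, not any real library's API. */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef int (*gemm_fn)(int n, const float *a, const float *b, float *c);

    int main(void) {
        /* Choose the backend from an environment variable. */
        const char *backend = getenv("FOO_BACKEND");
        const char *lib = (backend && strcmp(backend, "rocm") == 0)
                              ? "libfoo_rocm.so" : "libfoo_cuda.so";

        void *h = dlopen(lib, RTLD_NOW);
        if (!h) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

        gemm_fn gemm = (gemm_fn)dlsym(h, "foo_gemm");
        if (!gemm) { fprintf(stderr, "dlsym: %s\n", dlerror()); return 1; }
        /* ... call gemm() exactly as before; the caller never changes ... */
        dlclose(h);
        return 0;
    }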
Do LLMs or that AI image stuff run on CUDA?
CUDA is needed to interface with Nvidia GPUs, and AI stuff almost always requires a GPU for the best performance.
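Concretely, "interfacing" means calling the CUDA APIs. A minimal driver-API probe looks like this (build with gcc probe.c -lcuda):

    /* Enumerate and name the visible GPUs via the CUDA driver API.
     * This call surface (cuInit, cuDeviceGet, ...) is the kind of
     * thing a translation layer has to provide on other hardware. */
    #include <cuda.h>
    #include <stdio.h>

    int main(void) {
        if (cuInit(0) != CUDA_SUCCESS) {
            fprintf(stderr, "no usable CUDA driver found\n");
            return 1;
        }
        int count = 0;
        cuDeviceGetCount(&count);
        for (int i = 0; i < count; ++i) {
            CUdevice dev;
            char name[256];
            cuDeviceGet(&dev, i);
            cuDeviceGetName(name, sizeof name, dev);
            printf("device %d: %s\n", i, name);
        }
        return 0;
    }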
Nearly all such software supports CUDA (which until now was Nvidia-only), and some also supports AMD through ROCm, DirectML, ONNX, or other means, but CUDA is the most common. This will open more of those up to users with AMD hardware.
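Roughly how a drop-in layer opens that up, as a hedged sketch and not ZLUDA's actual code: build a replacement libcuda.so that exports the CUDA driver API's symbols and backs them with another runtime. Here backend_init is a hypothetical ROCm-backed helper:

    /* Sketch of a drop-in shim for one driver-API symbol.
     * backend_init() is hypothetical; the enum values below do match
     * the real cuda.h (CUDA_SUCCESS == 0, NOT_INITIALIZED == 3). */
    typedef int CUresult;
    enum { CUDA_SUCCESS = 0, CUDA_ERROR_NOT_INITIALIZED = 3 };

    int backend_init(void); /* hypothetical: sets up the ROCm/HIP side */

    CUresult cuInit(unsigned int flags) {
        (void)flags;
        return backend_init() == 0 ? CUDA_SUCCESS
                                   : CUDA_ERROR_NOT_INITIALIZED;
    }

An application that links against (or dlopen()s) libcuda.so then calls into the shim without knowing the difference.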
Thanks, that is what I was curious about. So, good news!
ROCm DKMS modules
Huh? What are these?
Since when does ROCm require its own kernel modules? The upstream amdgpu DRI/DRM driver already exists, doesn't it?