Shatur@lemmy.ml to Linux@lemmy.ml · 1 year ago

**AMD Quietly Funded A Drop-In CUDA Implementation Built On ROCm: It's Now Open-Source** (www.phoronix.com)
LarmyOfLone@lemm.ee · 1 year ago

Do LLMs or that AI image stuff run on CUDA?
UraniumBlazer@lemm.ee · 1 year ago

CUDA is required to interface with Nvidia GPUs, and AI software almost always requires a GPU for the best performance.
brianorca@lemmy.world · 1 year ago

Nearly all such software supports CUDA (which until now was Nvidia-only), and some also supports AMD through ROCm, DirectML, ONNX, or some other means, but CUDA is the most common. This will open up more of that software to users with AMD hardware.
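For a concrete illustration of what "supports CUDA" means in practice (my own sketch, not from the article): PyTorch's ROCm builds reuse the `torch.cuda` namespace, so a script written against the CUDA API can already run unchanged on a supported AMD GPU. A drop-in CUDA layer like the one AMD funded aims to give that same portability to software that was never ported at all.

```python
# Minimal sketch: backend-agnostic PyTorch code written against the CUDA API.
# On an Nvidia box this uses the CUDA runtime; on a ROCm build of PyTorch the
# same torch.cuda call returns True for a supported AMD GPU.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")

# The rest of the script never mentions the vendor; the matrix multiply is
# dispatched to cuBLAS on Nvidia or rocBLAS on AMD under the hood.
x = torch.randn(1024, 1024, device=device)
y = x @ x
print(y.sum().item())
```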
LarmyOfLone@lemm.ee · 1 year ago

Thanks, that is what I was curious about. So good news!