• cbarrick@lemmy.world · 10 months ago

    After two years of development and some deliberation, AMD decided that there is no business case for running CUDA applications on AMD GPUs. One of the terms of my contract with AMD was that if AMD did not find it fit for further development, I could release it. Which brings us to today.

    From https://github.com/vosen/ZLUDA?tab=readme-ov-file#faq

• mvirts@lemmy.world · 10 months ago

    Now let’s get this working on Nvidia hardware :P

    As pointless as it sounds, it would be a great way to test the system and to call alternative implementations of each proprietary Nvidia library. It would also be great for debugging and development to provide an API for switching implementations at runtime.

    • UraniumBlazer@lemm.ee · 10 months ago

      CUDA is required to interface with Nvidia GPUs, and AI workloads almost always need a GPU for the best performance.

    • brianorca@lemmy.world · 10 months ago

      Nearly all such software supports CUDA (which until now was Nvidia-only); some also supports AMD through ROCm, DirectML, ONNX, or other means, but CUDA is the most common. This will open more of those applications up to users with AMD hardware.

• Atemu@lemmy.ml · 10 months ago

    ROCm DKMS modules

    Huh? What are these?

    Since when does ROCm require its own kernel modules? DRI already exists, doesn't it?