• EmergMemeHologram

    CUDA vs ROCm. Both support OpenCL, which is meh.

    I target GPUs for mathematical simulations and calculations, not really gaming.
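
    Roughly the kind of workload I mean, as a minimal sketch (PyTorch and the toy ODE here are just illustrative stand-ins, not my actual code):

    ```python
    import torch

    # Pick the GPU if one is available; fall back to CPU otherwise.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # One explicit-Euler step of dx/dt = -k*x over a large state vector,
    # evaluated as elementwise kernels on the device.
    x = torch.linspace(0.0, 1.0, 10_000_000, device=device)
    k, dt = 0.5, 1e-3
    x = x + dt * (-k * x)
    print(x.device, float(x.mean()))
    ```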

    • Dudewitbow@lemmy.zip

      Hence it's a feature-set thing. CUDA has a more in-depth feature set because Nvidia is the market leader and gets to dictate where compute goes. This in turn creates a cyclical feedback loop: devs use CUDA, which locks them further and further into the ecosystem. It's a self-perpetuating problem until one side bows out, and it won't be Nvidia.

      It forces AMD to play catch-up and write translation layers that convert CUDA code to run on its own stack (HIP/ROCm), because the devs won't do it themselves.
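
      A toy sketch of what that translation layer boils down to (AMD's real hipify tools handle vastly more cases; this just hand-picks a few of the actual CUDA-to-HIP runtime renames):

      ```python
      import re

      # Tiny subset of the CUDA runtime -> HIP runtime renames that tools
      # like hipify-perl apply; the real mapping tables are much larger.
      CUDA_TO_HIP = {
          "cudaMalloc": "hipMalloc",
          "cudaMemcpy": "hipMemcpy",
          "cudaFree": "hipFree",
          "cudaDeviceSynchronize": "hipDeviceSynchronize",
          "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
          "cudaMemcpyDeviceToHost": "hipMemcpyDeviceToHost",
      }

      def naive_hipify(cuda_source: str) -> str:
          """Rename known CUDA runtime calls to their HIP equivalents."""
          # Longest names first so e.g. cudaMemcpyHostToDevice isn't shadowed by cudaMemcpy.
          keys = sorted(CUDA_TO_HIP, key=len, reverse=True)
          pattern = re.compile("|".join(map(re.escape, keys)))
          return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], cuda_source)

      print(naive_hipify("cudaMalloc(&d_x, n * sizeof(float)); "
                         "cudaMemcpy(d_x, h_x, n, cudaMemcpyHostToDevice);"))
      # -> hipMalloc(&d_x, n * sizeof(float)); hipMemcpy(d_x, h_x, n, hipMemcpyHostToDevice);
      ```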

      AI is the interesting situation, because the major libraries (e.g. PyTorch, TensorFlow) already have non-Nvidia backends, and with Microsoft's desire to get AI compute onto every PC, it makes more sense for them to partner with AMD/Intel: every PC needs a processor, while an Nvidia GPU in the PC is not guaranteed. That created a more natural escape from requiring CUDA. When a project does require an Nvidia GPU, it usually traces back to a small dev who programmed a feature directly against CUDA, not to the major library.
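
      A rough illustration of that library-level escape hatch: the same PyTorch code runs on either vendor's build, and only the backend it reports underneath differs (version strings in the comments are just examples):

      ```python
      import torch

      # On an Nvidia build torch.version.cuda is set and torch.version.hip is None;
      # on a ROCm build it's the other way around, but the "cuda" device string and
      # torch.cuda.is_available() still work, served by HIP underneath.
      device = "cuda" if torch.cuda.is_available() else "cpu"
      print("device:", device)
      print("CUDA backend:", torch.version.cuda)  # e.g. "12.1" or None
      print("HIP backend:", torch.version.hip)    # e.g. "6.0" or None

      a = torch.randn(1024, 1024, device=device)
      b = torch.randn(1024, 1024, device=device)
      print((a @ b).shape)  # matmul dispatches to cuBLAS or rocBLAS depending on the build
      ```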

      • EmergMemeHologram

        AMD didn't even have a good/reliable implementation of OpenCL, which I would have liked to see succeed over CUDA.

        Intel and AMD dropped the ball massively for something like 15 years after Nvidia released CUDA. It wasn't quiet, either; CUDA was pushed everywhere from the moment it came out.