this post was submitted on 05 Jun 2024
38 points (93.2% liked)

[Dormant] moved to !teslamotors@lemmy.zip

1754 readers

founded 2 years ago
all 8 comments
[–] breakingcups@lemmy.world 11 points 1 year ago (1 children)

I'm sure his Tesla shareholders will understand.

[–] uebquauntbez@lemmy.world 9 points 1 year ago* (last edited 1 year ago)

Another card removed from this house of cards? We'll see. *me gets popcorn*

[–] ichbinjasokreativ@lemmy.world 1 points 1 year ago (1 children)

Tesla has a decent relationship with AMD though, right? That means Nvidia is nice-to-have for them, but not necessary.

[–] Endmaker@lemmy.world 4 points 1 year ago (1 children)

How are AMD GPUs useful though? Last I heard, CUDA (and cuDNN) is still an Nvidia-only thing.

[–] ichbinjasokreativ@lemmy.world 7 points 1 year ago (2 children)

There are compatibility layers that let CUDA code run on AMD, and everything AI can also run natively on ROCm. Using Nvidia is a choice, not mandatory.
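(For anyone curious what "natively on ROCm" means in practice: PyTorch's ROCm builds expose AMD GPUs through the same `torch.cuda` API, so most CUDA-targeting Python code runs unchanged. A minimal sketch; `pick_device` is a hypothetical helper, not anything from this thread, and it falls back to CPU if PyTorch isn't installed.)

```python
def pick_device() -> str:
    """Return 'cuda' if a GPU is visible to PyTorch, else 'cpu'.

    On a ROCm build of PyTorch this same check succeeds on AMD GPUs:
    torch.cuda.is_available() returns True and torch.version.hip is
    set (instead of torch.version.cuda on an Nvidia build).
    """
    try:
        import torch  # same import for CUDA and ROCm builds
    except ImportError:
        return "cpu"
    return "cuda" if torch.cuda.is_available() else "cpu"

print(pick_device())
```

The point is that the device string stays `"cuda"` even on AMD hardware, which is why code written against CUDA-era PyTorch usually doesn't need changes to run on ROCm.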

[–] Endmaker@lemmy.world 3 points 1 year ago

Oh wow. TIL

[–] notfromhere@lemmy.ml 2 points 1 year ago

What is the best working compatibility layer to run CUDA on AMD? ROCm seems to drop support pretty quickly after release, so it's hard for it to get a foothold. As Karpathy has shown, doing low-level C++ has some amazing results…