[–] HereIAm@lemmy.world 31 points 3 months ago (2 children)

Not strictly LLMs, but neural nets are really good at protein folding, something that very directly helps us understand cancer, among other things. I know an answer doesn't magically pop out, but it's important to recognise the use cases where NNs actually work well.

[–] BrioxorMorbide@lemmings.world 6 points 3 months ago

But giving all the resources to LLMs slows/prevents those useful applications of AI.

[–] merc@sh.itjust.works 6 points 3 months ago (1 children)

I'm trying to guess which industries might do well if the AI bubble does burst. I imagine there will be huge AI datacenters filled with so-called "GPUs" that can't even do graphics anymore. They don't even do floating point calculations, and I've heard their integer matrix calculations are lossy. So they're basically useless for almost everything other than AI.

One of the few industries that I think might benefit is pharmaceuticals. I think maybe these GPUs can still do protein folding. If so, the pharma industry might suddenly have access to AI resources at pennies on the dollar.

[–] MotoAsh@piefed.social 6 points 3 months ago (1 children)

Integer calculations are only "lossy" in the sense that they're integers; there is nothing extra there to lose. Those GPUs have plenty of uses.
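
A minimal NumPy sketch (my own illustration, not from the thread) of where the rounding actually happens in low-precision inference: the int8 matmul itself is exact once you accumulate in int32; the only loss is the quantization step that rounds floats down to int8 in the first place.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4)).astype(np.float32)
b = rng.normal(size=(4, 4)).astype(np.float32)

# Symmetric per-tensor quantization to int8: x ~ scale * q.
# This rounding step is the ONLY place precision is lost.
scale_a = np.abs(a).max() / 127.0
scale_b = np.abs(b).max() / 127.0
qa = np.round(a / scale_a).astype(np.int8)
qb = np.round(b / scale_b).astype(np.int8)

# The integer matmul itself, accumulated in int32, is exact.
acc = qa.astype(np.int32) @ qb.astype(np.int32)

# Dequantize and compare against the full-precision result.
approx = acc.astype(np.float32) * (scale_a * scale_b)
print(np.max(np.abs(a @ b - approx)))  # small, bounded quantization error
```

The "lossy" part everyone complains about is that quantization step, not the integer hardware.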

[–] merc@sh.itjust.works 0 points 3 months ago (1 children)

I don't know too much about it, but according to people who do, these things are ultra-specialized and essentially worthless for anything other than AI-type work:

> anything post-Volta is literally worse than worthless for any workload that isn't lossy low-precision matrix bullshit. H200s can't achieve the claimed 30 TF at FP64, which is a less-than-5% gain over the H100. FP32 gains are similarly abysmal. The B100 and B200? <30 TF FP64.
>
> Contrast with the AMD Instinct MI200 @ 22 TF FP64, and the MI325X at 81.72 TF for both FP32 and FP64. But 653.7 TF for FP16 lossy matrix. More usable by far, but still BAD numbers. VERY bad.

https://weird.autos/@rootwyrm/115361368946190474
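
Just doing the arithmetic on the figures quoted above (the throughput numbers come straight from the linked post; I'm not vouching for them):

```python
# Teraflop figures as quoted in the post above.
mi325x_fp64 = 81.72   # MI325X, FP64 (and FP32, per the quote)
mi325x_fp16 = 653.7   # MI325X, FP16 "lossy matrix"
mi200_fp64 = 22.0     # Instinct MI200, FP64
h200_fp64 = 30.0      # H200's claimed (reportedly unmet) FP64

print(mi325x_fp16 / mi325x_fp64)  # ~8.0x: silicon heavily tilted toward low precision
print(mi325x_fp64 / mi200_fp64)   # ~3.7x: AMD's generational FP64 gain
print(mi325x_fp64 / h200_fp64)    # ~2.7x: AMD FP64 vs even the claimed H200 figure
```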

[–] MotoAsh@piefed.social 1 points 3 months ago* (last edited 3 months ago)

AI isn't even the first or the twentieth use case for those operations.

All the "FP" quotes are about floating point precision, which matters more for training and finely detailed models, especially FP64. Integer based matrix math comes up plenty often in optimized cases, which are becoming more and more the norm, especially with China's research on shrinking models while retaining accuracy metrics.