this post was submitted on 25 Jun 2024
46 points (97.9% liked)

Running AI models without matrix math means far less power consumption—and fewer GPUs?
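For context on what "without matrix math" means here: the line of work the article covers constrains weights to ternary values, and below is a minimal sketch of that idea (a hedged illustration, not the paper's code; `ternary_matvec` and the NumPy framing are my own). When every weight is -1, 0, or +1, a matrix-vector product needs no multiplications at all, only additions and subtractions:

```python
import numpy as np

def ternary_matvec(W, x):
    """Matrix-vector product with weights restricted to {-1, 0, +1}.

    Each multiply-accumulate collapses into add/subtract/skip:
    add x[j] where W[i, j] == +1, subtract it where W[i, j] == -1,
    and skip it where W[i, j] == 0.
    """
    out = np.zeros(W.shape[0], dtype=x.dtype)
    for i, row in enumerate(W):
        out[i] = x[row == 1].sum() - x[row == -1].sum()  # no multiplies
    return out

# Sanity check: matches an ordinary matmul without any multiplications.
rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8))           # ternary weight matrix
x = rng.standard_normal(8).astype(np.float32)  # input activations
assert np.allclose(ternary_matvec(W, x), W @ x, atol=1e-5)
```

Swapping multiplies for adds is what opens the door to simpler, lower-power hardware (adders instead of multipliers), which is why the headline result discussed below involves an FPGA.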

top 8 comments
[–] Gradually_Adjusting@lemmy.world 11 points 1 year ago (1 children)
[–] transientpunk@sh.itjust.works 8 points 1 year ago (1 children)
[–] SturgiesYrFase@lemmy.ml 1 points 1 year ago

I don't really want to stop, and admit it, you don't want that either. ;)

[–] bitfucker@programming.dev 5 points 1 year ago* (last edited 1 year ago) (1 children)

Good

Edit: Oh shit, nvm. It still requires dedicated HW (an FPGA), so it's no different from, say, an NPU. To be fair though, they also said the researchers tested the model on a traditional GPU, and it reduced memory consumption there too.

[–] pennomi@lemmy.world 2 points 1 year ago

Only for maximum efficiency. LLMs already run tolerably well on normal CPUs, and this technique would make them much more efficient there as well.

[–] GammaGames@beehaw.org 4 points 1 year ago (1 children)
[–] FaceDeer@fedia.io 1 points 1 year ago

I don't think that making LLMs cheaper and easier to run is going to "pop that bubble", if it even is a bubble. If anything, this will boost AI applications tremendously.

[–] blindsight@beehaw.org 3 points 1 year ago

This could be huge, but we'll need to wait and see. The economic and ecological footprint of LLMs is problematic.

That said, will this actually help, or will they just use 3T parameter models to outcompete competitors' 1T parameter models on GPUs? Really, this is more about small-scale models competing with midsize models. Like, this could bring a model as big as GPT-3.5 down to something you could run on affordable hardware, right?

That would be really compelling in my sector (education), where there's a lot of concern about student data privacy. I could definitely pitch building a local LLM server for around $5K that could handle a dozen or so simultaneous users. That would be enough for a small school district.
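A rough back-of-envelope on that last point (all numbers assumed for illustration, not from the article: ~175B parameters for a GPT-3.5-class model, 16 bits per weight conventionally, and ~2 bits per packed ternary weight, since log2(3) ≈ 1.58 bits doesn't pack evenly):

```python
params = 175e9                      # assumed GPT-3.5-class parameter count
fp16_gb = params * 16 / 8 / 1e9     # conventional 16-bit weights
ternary_gb = params * 2 / 8 / 1e9   # packed ternary weights, ~2 bits each
print(f"fp16: ~{fp16_gb:.0f} GB, ternary: ~{ternary_gb:.0f} GB")
# fp16: ~350 GB, ternary: ~44 GB
```

Roughly 44 GB of weights is still more than a single consumer GPU holds, but it's within reach of a high-RAM CPU box or a small multi-GPU server, which is about the class of hardware a $5K build buys.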