BB84

[–] BB84@mander.xyz 3 points 6 months ago (1 children)

I think we just differ on the terminology of invention versus observation. Where you draw the line between a well-supported theory and an observation ultimately comes down to how tangible you think the data is.

[–] BB84@mander.xyz 1 points 6 months ago

I must admit I don't know much about how MOND has been tested. But yeah, from a Lambda-CDM point of view it is unsurprising that MOND would not work well for every galaxy.

[–] BB84@mander.xyz 0 points 6 months ago

It's a classic MEMRI TV meme. What MEMRI TV is would require a ... "nuanced" explanation that I don't want to get into here. Look it up on Reddit or start a thread on !nostupidquestions@lemmy.ml

[–] BB84@mander.xyz 2 points 6 months ago

The WIMP is only one model of dark matter, a favorite of particle physicists. But from a purely astrophysical point of view there is little reason to believe dark matter has any interaction beyond gravity.

[–] BB84@mander.xyz 3 points 6 months ago (3 children)

But it is a model we invented, no? To explain the astrophysical and cosmological observations.

Among all those observations, a commonality is that it looks like there is something that behaves like matter (as opposed to vacuum or radiation) and interacts mostly via gravity (as opposed to electromagnetically, etc.). That's why we invented dark matter.

The "it is unsuited" opinion in this meme is to poke at internet commentators who say that there must be an alternate explanation that does not involve new matter, because according to them all things must reflect light otherwise it would feel off.

Once you believe dark matter exists, you still need to come up with an explanation of what that matter actually is. That's a separate question.

(I'm not trying to make fun of people who study MOND or the like. Just the people who non-constructively deny dark matter based on vibes.)

[–] BB84@mander.xyz 3 points 6 months ago

Particle physicists love the Weakly Interacting Massive Particle (WIMP) dark matter model. But from a purely astrophysical point of view there is little reason to believe dark matter has any interaction beyond gravity.

[–] BB84@mander.xyz 5 points 7 months ago (3 children)

I'm still far from convinced about MOND. But I guess now I'm less confident in Lambda-CDM too -_-

I'm inclined to believe it's one or more of the potential explanations in your second link. But even then, those explanations are mostly postdictions, so they carry less weight.

[–] BB84@mander.xyz 19 points 7 months ago (7 children)

MOND is a wonderful way to explain rotation curves, but it doesn't really hold up against newer observations (the Bullet Cluster, gravitational lensing, ...).
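
For context, the textbook deep-MOND argument for why it fits rotation curves so well (standard background, not something from this thread; here $a_0$ is Milgrom's acceleration constant and $M$ the enclosed galactic mass): in the low-acceleration regime $a \ll a_0$, Milgrom's law gives

```latex
\frac{a^2}{a_0} = \frac{GM}{r^2},
\qquad a = \frac{v^2}{r}
\;\Longrightarrow\;
v^4 = G M a_0 ,
```

so the circular velocity $v$ comes out independent of the radius $r$: a flat rotation curve, with no extra matter needed.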

[–] BB84@mander.xyz 7 points 7 months ago

I've heard of something similar that is able to predict one effect attributed to dark matter (the rotation curves), but AFAIK it couldn't correctly match other observations (bullet clusters, etc.).

Do you have a link for the model you're talking about? I'm curious.

[–] BB84@mander.xyz -3 points 7 months ago* (last edited 7 months ago) (1 children)

Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

@sc_griffith@awful.systems asked how request frequency might impact cost per request. Batch inference is a reason (ask anyone in the self-hosted LLM community). I noted that this reason only applies at very small scale, probably much smaller than what ~~Open~~AI is operating at.

@dgerard@awful.systems why did you say I am demanding someone disprove the assertion? Are you misunderstanding "I would be very very surprised if they couldn't fill [the optimal batch size] for any few-seconds window" to mean "I would be very very surprised if they are not profitable"?

The tweet I linked shows that good LLMs can be much cheaper. I am saying that ~~Open~~AI is very inefficient and thus economically "cooked", as the post title would have it. How does this make me FYGM? @froztbyte@awful.systems

[–] BB84@mander.xyz -5 points 7 months ago (3 children)

What? I'm not doubting what he said. Just surprised. Look at this. I really hope Sam IPOs his company so I can short it.

[–] BB84@mander.xyz -5 points 7 months ago* (last edited 7 months ago) (7 children)

LLM inference can be batched, reducing the cost per request. If you have too few customers, you can't fill the optimal batch size.

That said, the optimal batch size on today's hardware is not big (<100). I would be very very surprised if they couldn't fill it for any few-seconds window.
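
A minimal sketch of what such a batching loop looks like, in Python. The `forward_pass` function, the queue, and all the numbers are invented for illustration, not anyone's actual serving stack:

```python
import queue
import time

MAX_BATCH = 64        # illustrative "optimal" batch size (<100 on today's hardware)
WINDOW_SECONDS = 2.0  # collect requests for a few seconds before each GPU pass

pending = queue.Queue()  # incoming prompts from customers

def forward_pass(batch):
    # Stand-in for the real model call: one GPU pass over up to MAX_BATCH
    # prompts costs roughly the same wall-clock time as a pass over one,
    # which is why batching divides the cost per request.
    return [f"completion for: {prompt}" for prompt in batch]

def serve_one_window():
    batch = []
    deadline = time.monotonic() + WINDOW_SECONDS
    while len(batch) < MAX_BATCH:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break  # window closed before the batch filled up
        try:
            batch.append(pending.get(timeout=remaining))
        except queue.Empty:
            break
    if batch:
        forward_pass(batch)
        # Per-request cost scales like 1/len(batch): with too few customers
        # the window closes half-empty and every request pays more. Once
        # traffic fills every window, extra scale stops lowering the price.
        print(f"served {len(batch)} requests in one pass")

if __name__ == "__main__":
    for i in range(5):
        pending.put(f"prompt {i}")
    serve_one_window()  # a small provider: 5 of 64 slots used this window
```

The point of the sketch is the saturation effect: once a provider has enough traffic to fill every window, adding more customers no longer reduces the per-request cost.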
