this post was submitted on 10 Apr 2026
1310 points (99.1% liked)

Science Memes

[–] ranzispa@mander.xyz -1 points 1 hour ago (3 children)

I can tell a piece of software to do the maths for me. Sometimes the results even agree with reality.

People complain about LLMs hallucinating, but they have no idea how many assumptions and plain "everybody does it this way, I guess it works" shortcuts there are in scientific research.

[–] ptu@sopuli.xyz 3 points 1 hour ago (1 children)

It's called the heuristic method, and those doing it know the limitations. LLMs, in contrast, will just confidently put out garbage and claim it's true.

[–] ranzispa@mander.xyz 1 points 39 minutes ago

Scientific calculations - and other approaches as well - put out garbage all the time; that is the main point of what I said above.

Some limitations are known, just as it is known that LLMs have the limitation of hallucinating.
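For a concrete sense of the kind of silent garbage numerical calculations can produce (not from the thread, just a minimal illustration of floating-point limits):

```python
# Double precision quietly drops information: near 1e16, neighboring
# doubles are 2 apart, so adding 1.0 changes nothing at all.
x = (1e16 + 1.0) - 1e16
print(x)  # 0.0, not 1.0

# Even simple decimal fractions aren't exactly representable in binary.
print(0.1 + 0.2)  # 0.30000000000000004
```

No warning, no error; the calculation just confidently returns a wrong answer, and it's on the user to know the limitation.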

[–] vivalapivo@lemmy.today 2 points 1 hour ago (1 children)

That's a different domain, though. LLM hallucinations may lead to catastrophe, while assuming an infinite electron mass in the absence of an electromagnetic field is harmless.

[–] ranzispa@mander.xyz 1 points 41 minutes ago

Calculations will happily tell you that an acutely toxic drug is the best way to cure cancer.

The reason that does not lead to catastrophe is that there are many checks and safety nets in place so that no result is blindly trusted.

The exact same approach can be applied to an LLM.

[–] paul@lemmy.org 1 points 1 hour ago

Which is ironic, because everyone has, at least once, been asked "but how does it work?" and answered "dunno, but it does".