this post was submitted on 27 Jul 2025
-5 points (33.3% liked)

Life Pro Tips

3091 readers

Unlocking the Secrets to Success and Fulfillment!

Rules

  1. Share valuable life pro tips.
  2. Keep it concise and clear.
  3. Stay on-topic.
  4. Respect fellow members.
  5. No self-promotion.
  6. Verify information before sharing.
  7. Avoid illegal or unethical advice.
  8. Report rule violations.

Join us and share your best life pro tips!

founded 2 years ago

When querying AI, end the query with "provide verifiable citations". It often vastly reduces the bullshit.

1 comment
[–] sga@lemmings.world 3 points 6 days ago

It does not. I get your perspective, and I won't even deny that when you added that, you got better responses. What most likely happened is that the model also added what *looked* like verifiable sources, but there is no guarantee that the sources cited are correct (or even exist). LLMs (usually) do not have a way to generate more or less factual responses; they just give the most "likely" response. So adding "provide verifiable citations" does not really affect factuality; it changes the perspective from which the answer is given.

For example: if I say a student got "x" marks, versus "assume the kid is smart; the student got 'y' marks", you would most likely guess y > x. But I never told you in which domain(s) the kid was smart, whether the kid was even tested in those domains, or whether the test was fair (I could have rigged it to give my favourite student higher marks).

With LLMs, your additional "context" slightly changes the "effective weights" (that is the best ELI5 I can do for it; in reality, your additional tokens just add separate dot products, so the resulting likeliness vector for the output tokens changes).
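To make the dot-product point concrete, here is a toy numpy sketch (my own illustration, not a real transformer: the "embeddings" are random stand-ins and the scoring is deliberately crude). Appending extra context tokens shifts the scores, and hence the output distribution, but nothing in the mechanism checks facts.

```python
import numpy as np

def next_token_probs(context_vecs, token_embeddings):
    """Toy next-token scorer: score each candidate token by its dot
    product with a crude summary (mean) of the context vectors,
    then softmax the scores into a probability distribution."""
    query = context_vecs.mean(axis=0)      # summary of the context
    logits = token_embeddings @ query      # dot products -> "likeliness" scores
    exp = np.exp(logits - logits.max())    # stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
vocab = rng.normal(size=(5, 4))            # 5 candidate tokens, dim 4

base_context = rng.normal(size=(3, 4))     # the original prompt
extra = rng.normal(size=(2, 4))            # e.g. "provide verifiable citations"
augmented = np.vstack([base_context, extra])

p_base = next_token_probs(base_context, vocab)
p_aug = next_token_probs(augmented, vocab)

# The extra tokens change which tokens look "likely" -- but the model
# has no notion of whether the newly favoured tokens are factually true.
print(p_base.round(3))
print(p_aug.round(3))
```

The only thing the appended tokens do is tilt the likeliness vector; "citation-shaped" output becomes more probable, correct citations do not.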

I added "usually" because one could design a setup (there are things like this in production somewhere) where your additional context is first parsed by a smaller (or more specialised) system, which then changes the parameters (temperature, top-k, ...) of the actual LLM. The answer may then become more "reproducible", but that still does not guarantee less bullshit.
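The temperature/top-k point can be sketched too (again a toy, assuming made-up logits; the function names are mine, not any library's API). Lowering temperature or truncating to the top k tokens makes sampling more deterministic, but it only reshapes the existing distribution: if a wrong token already scored highest, it now wins even more often.

```python
import numpy as np

def sample_dist(logits, temperature=1.0, top_k=None):
    """Reshape a next-token distribution with temperature and top-k.
    Lower temperature / smaller top_k -> sharper, more reproducible
    output, but the underlying scores are unchanged."""
    logits = np.asarray(logits, dtype=float) / temperature
    if top_k is not None:
        cutoff = np.sort(logits)[-top_k]                  # k-th largest score
        logits = np.where(logits >= cutoff, logits, -np.inf)
    exp = np.exp(logits - logits.max())                   # stable softmax
    return exp / exp.sum()

logits = [2.0, 1.5, 0.3, -1.0]                            # made-up scores
print(sample_dist(logits, temperature=1.0).round(3))      # baseline
print(sample_dist(logits, temperature=0.3).round(3))      # sharper / more deterministic
print(sample_dist(logits, top_k=2).round(3))              # only top 2 tokens survive
```

Note that the top-scoring token is the same in all three cases; tuning these knobs changes *how consistently* the model says a thing, not *whether the thing is true*.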