this post was submitted on 10 Jun 2024
11 points (100.0% liked)

Hacker News


A mirror of Hacker News' best submissions.

top 4 comments
[–] jet@hackertalks.com 8 points 1 year ago (1 children)

These LLMs are going to keep hallucinating, i.e. acting like chat bots, until everyone understands not to trust them. Like Uncle Jimmy, who makes shit up all the time

[–] Juno@beehaw.org 2 points 1 year ago (1 children)

Why so negative about large language models?

[–] jet@hackertalks.com 2 points 1 year ago

No issue with the models themselves, just that people attribute intelligence to them when they're just chat bots. And then they run into these fun situations

[–] lvxferre@mander.xyz 4 points 1 year ago

50 emails/day × 5 days × $40/month = $10,000 a month in lost sales, and that was only from people who cared enough to complain.

Multiply that by 20: roughly, for each complainer, you'll get 19 people simply thinking "you know what, screw it" and never voicing their discontent. That's $200k a month in lost sales.
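
As a quick back-of-the-envelope sketch (the base figures come from the quoted article; the 1-in-20 complainer ratio is the rough assumption above, not measured data):

```python
# Back-of-the-envelope estimate of monthly lost sales.

emails_per_day = 50        # complaint emails received per day
days = 5                   # days the problem persisted
value_per_customer = 40    # $/month per lost customer

complainers = emails_per_day * days             # 250 customers who spoke up
direct_loss = complainers * value_per_customer  # $10,000/month

silent_multiplier = 20     # assumption: 19 silent churners per complainer
total_loss = direct_loss * silent_multiplier    # $200,000/month

print(f"Direct loss: ${direct_loss:,}/month")      # Direct loss: $10,000/month
print(f"Estimated total: ${total_loss:,}/month")   # Estimated total: $200,000/month
```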

And... frankly? They deserve the losses.

Pro-tip: you should "trust" the output of a large language model less than you'd trust the village idiot, even when the latter is drunk.