this post was submitted on 09 Jan 2026
715 points (99.2% liked)

Fuck AI

[–] jj4211@lemmy.world 3 points 2 days ago

Well, that's part of it. Broadly speaking, they want to generate more content in the hope that it will latch on to something correct, which is of course hilarious when it's confidently incorrect. For example: Is it 2027 next year?

Not quite! Next year will be 2026 + 1 = 2027, but since we’re currently in 2026, the next year is 2027 only after this year ends. So yes—2027 is next year

Here it got it wrong based on its training, then generated what looked like a sensible math problem, sent it off to be calculated, then wrapped words around the mathy part; the words that followed were the probabilistic follow-up to generating a number that matched the number in the question.
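That tool-call flow can be sketched very roughly. This is a toy illustration only, with an invented `calculator` function and regex dispatch standing in for real (learned) tool use; actual systems decide when and how to call tools through training, not a regex:

```python
import re

def calculator(expression: str) -> int:
    """Toy 'tool' that evaluates a simple addition like '2026 + 1'."""
    a, b = (int(x) for x in expression.split("+"))
    return a + b

def answer_with_tool(draft: str) -> str:
    """Hypothetical pipeline: the model emits a math expression mid-draft,
    an external calculator evaluates it, and more words are generated
    around the calculated number."""
    match = re.search(r"\d+\s*\+\s*\d+", draft)
    if not match:
        return draft
    result = calculator(match.group(0))
    return f"{draft} = {result}"

print(answer_with_tool("Next year will be 2026 + 1"))
# → Next year will be 2026 + 1 = 2027
```

The point of the sketch: the arithmetic is done outside the model, and the model's job is only to produce text that plausibly surrounds the returned number.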

So it got it wrong, and in the process of generating more words to explain the wrong answer, it ended up correcting itself (without ever realizing it screwed up, because that kind of discontinuity is not really something it was trained on). This is also the basis of 'reasoning chains': generate more text and present only the last bit, because in the process of generating more text the model has a chance of rerolling its way to the correct answer.
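The "generate a long chain, show only the end" idea can be made concrete with a toy sketch. The `chain` here is a canned list of steps (borrowed from the 2026/2027 example above), purely for illustration; real reasoning models sample these steps token by token:

```python
# Toy sketch of a 'reasoning chain': the model keeps generating text,
# the earlier steps may contradict each other, and only the final
# statement is surfaced to the user.
chain = [
    "Not quite!",                            # initial (wrong) answer
    "Next year will be 2026 + 1 = 2027.",    # arithmetic step
    "Since it is currently 2026, next year is 2027.",
    "So yes, 2027 is next year.",            # later text 'rerolls' correctly
]

def present_answer(chain: list[str]) -> str:
    """Hide the intermediate steps; surface only the last statement."""
    return chain[-1]

print(present_answer(chain))
# → So yes, 2027 is next year.
```

The earlier contradiction never reaches the user, which is exactly why more generated text improves the odds of a correct-looking final answer without any actual self-awareness of the mistake.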