this post was submitted on 09 Jan 2026
715 points (99.2% liked)

Fuck AI

5157 readers
1749 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
[–] assassinatedbyCIA@lemmy.world 7 points 21 hours ago* (last edited 21 hours ago)

This is why I can’t buy dram

[–] gustofwind@lemmy.world 135 points 2 days ago (5 children)

AI has achieved the intelligence of the average voter

[–] Clent@lemmy.dbzer0.com 18 points 2 days ago (1 children)

I'm convinced this is why people are so seemingly impressed with AI. It's smarter than the average person because the average person is that ignorant. To these people these things are ungodly smart, and because of the puffery they don't feel talked down to, which increases their perception of its intelligence; it tells them how smart and clever they are in a way no sentient entity ever would.

[–] cecilkorik@piefed.ca 2 points 1 day ago

Yes. Remember that these things have been largely designed and funded by companies who make their money from advertising and marketing. The purpose of advertising and marketing is to convince people of something, whether that thing is actually true or not. They are experts at it, and now they have created software designed to convince people of things, whether it is actually true or not. Then they took all the combined might of their marketing and advertising expertise and infrastructure, including the AI software itself, and set it to the task of convincing people that AI is good and is going to change the world.

And everyone was convinced. Whether it is actually true or not.

[–] LodeMike@lemmy.today 20 points 2 days ago
[–] LemmyKnowsBest@lemmy.world 4 points 2 days ago

Hey, don't insult the average voter like that. This post demonstrates that AI has achieved the dementia level of Trump.

[–] altphoto@lemmy.today 11 points 1 day ago* (last edited 1 day ago)

Obviously it's 2012 again!

2013 never happened. We just keep repeating 2012 over and over to see if we can make the world end this time around.

[–] Mvlad88@lemmy.world 52 points 2 days ago (4 children)
[–] thesdev@feddit.org 51 points 2 days ago (1 children)

So when one uses AI on Ecosia, are they helping to plant a tree or burn one? Perhaps it's a toss-up.

[–] X@piefed.world 20 points 2 days ago

Task unsuccessfully failed successfully.

[–] kryptonianCodeMonkey@lemmy.world 17 points 2 days ago* (last edited 2 days ago)

Something, something "leap years". Well that explains it.

[–] jaredwhite@humansare.social 7 points 2 days ago

That answer was wrong. So therefore, that answer was correct. No. Yes. Maybe.

[–] WorldsDumbestMan@lemmy.today 3 points 1 day ago

This isn't just the machine being ignorant or wrong.

This is a level of artificial stupidity that is downright eldritch and incomprehensible

[–] Sam_Bass@lemmy.world 3 points 1 day ago
[–] Janx@piefed.social 17 points 2 days ago

Yeah, this "sequence guesser" is definitely something we should have do all the coding, and control the entire Internet.. 

[–] SpaceCowboy@lemmy.ca 11 points 2 days ago (1 children)

Yeah, "AI" is just statistical analysis. There's more data in it's database that indicates the 2027 is not year and only a few days worth of data that indicates that it is. Since there's more data indicating 2027 is not next year, it chooses that as the correct answer.

LLMs are a useful tool if you know what they are and what their strengths and weaknesses are. But they're not intelligent and don't understand how things work. Still, if you have some fuzzy data you want analyzed and you validate the results, they can save some time getting to a starting point. It's kind of like Wikipedia: you get to a starting point faster, but you have to take things with a grain of salt and put in some effort to make sure they're accurate.
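A minimal sketch of that "more data wins" idea, with completely made-up probabilities (nothing here comes from a real model):

```python
# Toy illustration: the model ranks candidate continuations purely by the
# probability it learned from training data. If most of that data predates
# 2026, the stale answer simply scores higher. Numbers are invented.
candidate_answers = {
    "No, next year is 2026": 0.62,   # dominated by older training text
    "Yes, next year is 2027": 0.31,  # only a little recent text says this
    "I can't tell without today's date": 0.07,
}

# Greedy decoding just takes the argmax -- there is no notion of "today".
best = max(candidate_answers, key=candidate_answers.get)
print(best)  # -> "No, next year is 2026"
```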

[–] pez@piefed.blahaj.zone 4 points 1 day ago

Google pulled the AI overview from that search, but "is it 2027 next year ai overview" was a suggested search because this screenshot is making the rounds. The AI overview now has all the discussion of this error in its data set and mentions it in its reply, but it still fails.

[–] samus12345@sh.itjust.works 23 points 2 days ago (2 children)
[–] YoSoySnekBoi@kbin.earth 18 points 2 days ago (2 children)
[–] samus12345@sh.itjust.works 13 points 2 days ago (2 children)
[–] Rai@lemmy.dbzer0.com 8 points 2 days ago (1 children)

It’s amazing how it works with like every LLM

[–] samus12345@sh.itjust.works 5 points 2 days ago (2 children)

I'm guessing they're still stuck on thinking it's 2025 somehow, which is pretty crazy since keeping accurate track of dates and numbers SHOULD be the most basic thing an AI can do.

[–] naught@sh.itjust.works 14 points 2 days ago (1 children)

AI doesn't "know" anything. It's a big statistical probability model that predicts words based on context. It is very specifically BAD at math and dates etc. because it works based on words, not numbers. Models can't do math

[–] samus12345@sh.itjust.works 5 points 2 days ago

Yeah, I'm being colloquial with saying "thinking," and that makes sense! Computers that are bad at math, what genius!

[–] jj4211@lemmy.world 7 points 2 days ago (1 children)

Actually, it's what traditional software should be very good at; an LLM is inherently kind of bad at it. Much work has gone into getting LLMs to detect mathy stuff and shuffle it off to some other technology that can cope with it.
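Roughly that pattern, as a toy sketch (the routing function and its behavior are made up for illustration, not any vendor's actual API): spot the mathy bit and hand it to ordinary code instead of letting the model guess.

```python
import re

def answer_with_calculator(prompt: str) -> str:
    """Toy router: if the prompt contains simple arithmetic, evaluate it
    with plain code rather than letting the language model guess."""
    match = re.search(r"(\d+)\s*([+\-*/])\s*(\d+)", prompt)
    if match:
        a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
        ops = {"+": a + b, "-": a - b, "*": a * b, "/": a / b if b else None}
        return f"{a} {op} {b} = {ops[op]}"
    return "no arithmetic found; fall back to the language model"

print(answer_with_calculator("What is 2026 + 1?"))  # -> "2026 + 1 = 2027"
```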

[–] jj4211@lemmy.world 6 points 2 days ago (1 children)

Copilot was interesting: Is it 2027 next year? Not quite! Next year will be 2026 + 1 = 2027, but since we’re currently in 2026, the next year is 2027 only after this year ends. So yes—2027 is next year, starting on January 1, 2027.

[–] buddascrayon@lemmy.world 13 points 2 days ago (1 children)

Talking to AI is like talking to an 8-year-old. Has just enough information to be confidently wrong about everything.

[–] bbboi@feddit.uk 0 points 12 hours ago

An eight year old that takes everything literally.

[–] Ethalis@jlai.lu 19 points 2 days ago (1 children)

I love how overexplained this wrong answer is

[–] jj4211@lemmy.world 3 points 2 days ago

Well, that's part of it. Broadly speaking, they want to generate more content in the hopes that it will latch on to something correct, which is of course hilarious when it's confidently incorrect. For example: Is it 2027 next year?

Not quite! Next year will be 2026 + 1 = 2027, but since we’re currently in 2026, the next year is 2027 only after this year ends. So yes—2027 is next year

Here it got it wrong based on training, then generated what would be a sensible math problem, sent it off to get calculated, then made words around the mathy stuff; the words that followed are the probabilistic follow-up for generating a number that matches the number in the question.

So it got it wrong, and in the process of generating more words to explain the wrong answer, it ends up correcting itself (without ever realizing it screwed up, because that discontinuity is not really something it was trained on). This is also the basis of 'reasoning chains': generate more text and then present only the last bit, because in the process of generating more text it has a chance of rerolling things correctly.
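A hand-wavy sketch of the "let it ramble, keep only the end" idea; `fake_llm` is a stand-in for a real model, and the strings are invented:

```python
import random

def fake_llm(prompt: str) -> str:
    """Stand-in for a model: returns a rambling 'reasoning' string whose
    final line may or may not contradict its opening claim."""
    opening = random.choice(["Not quite!", "Yes."])
    return (f"{opening} We are in 2026, and 2026 + 1 = 2027.\n"
            "Final answer: 2027 is next year.")

def reasoned_answer(prompt: str) -> str:
    # The chain-of-thought trick: let the model generate extra text, then
    # surface only the last line, where the additional generation has had
    # a chance to "reroll" toward the right value.
    chain = fake_llm(prompt)
    return chain.splitlines()[-1]

print(reasoned_answer("Is it 2027 next year?"))
# -> "Final answer: 2027 is next year."
```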

[–] criticon@lemmy.ca 14 points 2 days ago

I recently found this (it can be replicated with many values):

[–] Mulligrubs@lemmy.world 10 points 2 days ago* (last edited 2 days ago)

This is worth at least 500 trillion dollars!

We have a virtual parrot; it's not "intelligence" in any way. So many suckers.

[–] leftzero@lemmy.dbzer0.com 10 points 2 days ago (1 children)

Can we be done with this whole years thing?

It's very evidently not working out.

Every one of them is orders of magnitude worse than the one before.

Let's just not have a new year, just stop here, and try to go back if possible?

[–] kkj@lemmy.dbzer0.com 4 points 2 days ago* (last edited 2 days ago)

I'm still not entirely convinced that we've left 2016.

[–] CaptDust@sh.itjust.works 11 points 2 days ago

Next year (2028) will be when it finally takes over everything

[–] stupidcasey@lemmy.world 11 points 2 days ago

TBF, the way things are going, we'll give up the Gregorian calendar this year in favor of a random number generator.

[–] Shanmugha@lemmy.world 6 points 2 days ago

Definitely the killer of all jobs. The sad part is that I'm guessing it will take many lives before the hysteria is over.

[–] Chais@sh.itjust.works 2 points 1 day ago

Aneurysm posting

[–] kadu@scribe.disroot.org 7 points 2 days ago

(this is when your question is relevant)

[–] Kowowow@lemmy.ca 6 points 2 days ago

I could totally relate if the AI claimed it was still, like, 2020 or something.

"Huh, me too buddy"

PhD level, I tell ya.

[–] ytg@sopuli.xyz 4 points 2 days ago

I guess they're trained on words, not numbers

[–] Etterra@discuss.online 4 points 2 days ago (1 children)

Seven acres of rainforest were burned and 37,000 gallons of water were wasted to answer this question.

[–] lividweasel@lemmy.world 4 points 2 days ago

…but at least the line went up a bit. That’s all that matters! 📈

[–] Blaster_M@lemmy.world 4 points 2 days ago* (last edited 2 days ago)