this post was submitted on 09 Oct 2025

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago

I'd like to be able to get a rating of whether an article I'm reading is likely LLM-generated, as a measure of how much I should trust it. Ideally I'd have this in my browser as an extension, alongside uBlock Origin and Consent-O-Matic.

Does anyone know if such a thing exists? From a quick look through the extension store I found Winston, but it's a paid extension and I'd rather have something free.

top 9 comments
[–] Sylra@lemmy.cafe 3 points 1 day ago (1 children)

Stick to a small circle of trusted people and websites. Skip mainstream news. Small blogs, niche forums, and tiny YouTube channels are often more honest.

Avoid Google for discovery. It's not great anymore. Use DuckDuckGo, Qwant, or Yandex instead. For deeper but less precise results, try Mojeek or Marginalia. Google works okay only if you're searching within one site, like site:reddit.com.

Sometimes, searching in other languages helps find hidden gems with less junk. Use a translator if needed.

[–] lichtmetzger@discuss.tchncs.de 2 points 1 day ago* (last edited 1 day ago)

Avoid Google for discovery. It’s not great anymore. Use DuckDuckGo, Qwant, or Yandex instead. For deeper but less precise results, try Mojeek or Marginalia.

I went with Kagi. It costs some money, but they have a deal with Google for access to their search API. It's basically Google's results without the AI slop, ads, and BS. It works just like Google did back in the good old days.

Let's see how long this will last. But I will enjoy it as long as I can.

Kagi also has a news section. I'm not entirely sure which sites they pull from, but at first glance it looks cleaner and less sloppy than the offerings from the big players.

Edit: They explain how they aggregate those news here.

[–] lechekaflan@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

Sora-generated videos are now disturbingly close to realistic, with a framerate equivalent to a smartphone camera's, which would make automated detection difficult.

[–] LesserAbe@lemmy.world 11 points 2 days ago (2 children)

From what little I've read, many organizations claim they can detect AI-written content (I don't know about plugins), but there's little evidence they can do so accurately.

[–] Sylra@lemmy.cafe 1 points 1 day ago

Tools like Turnitin or GPTZero don't work well enough to trust. The real issue isn't just detecting AI writing; it's doing so without falsely accusing students. Even a 0.5% false positive rate is too high when someone's academic future is on the line. I'm more concerned about wrongly flagging human-written work than about missing AI use. These tools can't explain why they suspect AI, and at best they only catch obvious cases, the ones you'd likely notice yourself anyway.
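To make the base-rate point concrete, here is a small sketch of the arithmetic (the enrollment and honest-work figures are hypothetical; only the 0.5% false positive rate comes from the comment above):

```python
# Base-rate sketch: even a small false positive rate accuses many
# innocent students once a detector is applied at scale.
students = 10_000        # hypothetical: essays checked per term
honest_fraction = 0.80   # hypothetical: share who wrote their own work
fpr = 0.005              # the 0.5% false positive rate mentioned above

wrongly_flagged = students * honest_fraction * fpr
print(f"{wrongly_flagged:.0f} honest students falsely accused")  # prints "40 honest students falsely accused"
```

With those assumptions, 40 students per term get wrongly accused, which is the cost the comment is weighing against the detector's benefit.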

[–] ZDL@lazysoci.al 1 points 2 days ago

The LLM grifters have, indeed, spawned a spin-off community of grifters targeting the anti-LLM community.

It's grifters all the way down.

[–] Blackfeathr@lemmy.world 14 points 3 days ago* (last edited 3 days ago)

Here is one that I know of. No ratings, I think, just a blocklist: https://github.com/laylavish/uBlockOrigin-HUGE-AI-Blocklist
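For context, lists like that one are written in uBlock Origin's filter syntax, so entries are plain blocking or hiding rules rather than any kind of rating. A minimal sketch of what such entries can look like (the domains here are hypothetical placeholders, not taken from the actual list):

```
! Block network requests to an AI-content site outright
||ai-slop-example.com^

! Hide links to that site inside another page (cosmetic filter)
##a[href*="ai-slop-example.com"]
```

You subscribe by importing the list's URL under uBlock Origin's "Filter lists" settings, after which it updates like any other list.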

[–] TIN@feddit.uk 1 points 1 day ago

Thanks, everyone, for your replies. I guess the days of believing online reviews are at an end, then. I wonder what emerges from this. Presumably some kind of mutually assured trust score to direct our searches toward sources we believe. I think Klout tried to do something like this a few years ago.

[–] technocrit@lemmy.dbzer0.com 1 points 2 days ago* (last edited 2 days ago)

There is no such thing as "AI" detection because "AI" doesn't exist.

If you're talking about avoiding generated content, then I don't think that's realistic either. Any "tests" are bound to become less and less accurate as generated content gets better and intentionally harder to detect.