this post was submitted on 18 Feb 2026
947 points (99.1% liked)

Technology

[–] dreadbeef@lemmy.dbzer0.com 7 points 21 hours ago* (last edited 21 hours ago)

Get off of GitHub and I bet you those drop to nearly zero. Using GitHub is a choice, with all of the AI slop it enables. They aren't getting rid of it any time soon. They want agents and people making shitty code PRs; that's money sent Microsoft's way in their minds.

Now that they see what the cost of using Github is maybe Godot will (re?)consider codeberg or a self-hosted forgejo instance that they control.

[–] DylanMc6@lemmy.dbzer0.com 11 points 1 day ago

A lot of programmers with thigh-high striped socks should take one for the team and take back Godot and such. Seriously!

[–] BitsAndBites@lemmy.world 89 points 1 day ago (1 children)

It's everywhere. I was just trying to find some information on starting seeds for the garden this year and I was met with AI article after AI article just making shit up. One even had a "picture" of someone planting some seeds and their hand was merged into the ceramic flower pot.

The AI fire hose is destroying the internet.

[–] maplesaga@lemmy.world 20 points 1 day ago (2 children)

I fear the day they learn a different layout. Right now it seems they are usually obvious, but soon I won't be able to tell slop from intelligence.

[–] jj4211@lemmy.world 2 points 20 hours ago

You will be able to tell slop from intelligence.

However, you won't be able to tell AI slop from human slop. We've had human slop around, and it was already overwhelming, but nothing compared to LLM slop volume.

In fact, reading AI slop text reminds me a lot of human slop I've seen, whether it's 'high school' style paper writing or clickbait word padding of an article.

[–] badgermurphy@lemmy.world 15 points 1 day ago

One could argue that if the AI response is not distinguishable from a human one at all, then they are equivalent and it doesn't matter.

That said, the current LLM designs have no ability to do that, and so far all efforts to improve them beyond where they are today have made them worse at it. So I don't think that any tweaking or fiddling with the model will ever do anything toward what you're describing, except possibly produce a different, but equally cookie-cutter, way of responding that may look different from the old output but will be much like other new output. It will still be obvious and predictable shortly after we learn its new tells.

The reason they can't make it better anymore is that they are trying to do so by giving it ever more information to consume, in the misguided notion that once it has enough data it will be smarter overall. But that is not true, because it has no way to distinguish good data from garbage, and they have already read and consumed the whole Internet.

Now, when they try to consume more new data, a ton of it was actually already generated by an LLM, maybe even the same one, so it contains no new information but still takes more CPU to read and process. That redundant data also reinforces what it thinks it knows, counting its own repetition of a piece of information as another corroboration that the data is accurate. It thinks conjecture might be a fact because it saw a lot of "people" say the same thing. It could have been one crackpot talking nonsense that was then repeated as gospel on Reddit by 400 LLM bots. 401 people said the same thing; it MUST be true!
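The corroboration failure described above can be sketched in a few lines. Everything here is made up for illustration: the "claims" and "sources" are toy data, and the scoring is a deliberately naive stand-in for how repetition can masquerade as independent support.

```python
from collections import Counter

# Naive corroboration: every repetition of a claim counts as independent
# support, even when it is 400 bots echoing the same original source.
claims = ["the moon is cheese"] * 401 + ["the moon is rock"] * 3
naive_support = Counter(claims)

# Deduplicating by originating source collapses the echo chamber:
# 401 copies of one crackpot post become a single data point.
sources = [("crackpot_blog", "the moon is cheese")] * 401 + [
    ("nasa", "the moon is rock"),
    ("esa", "the moon is rock"),
    ("jaxa", "the moon is rock"),
]
dedup_support = Counter(claim for _, claim in set(sources))

print(naive_support["the moon is cheese"])  # 401
print(dedup_support["the moon is cheese"])  # 1
print(dedup_support["the moon is rock"])    # 3
```

The point of the sketch: without source attribution (which an LLM training pipeline largely lacks), there is no way to do the deduplication step, so the 401-vote version is what the model effectively learns from.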

[–] the_citizen@lemmy.world 32 points 1 day ago (1 children)

Why do people try to contribute even when they don't work on the code? AI slop isn't helping at all.

[–] AnyOldName3@lemmy.world 21 points 1 day ago

CV padding and main character syndrome.

[–] GreenKnight23@lemmy.world 22 points 1 day ago (1 children)

just deny PRs that are obvious bots and ban them from ever contributing.

even better if you're running your own git instance and can outright ban IP ranges of known AI shitlords.
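For a self-hosted instance, the range check such a ban list implies is straightforward. A minimal sketch using Python's stdlib `ipaddress` module; the CIDR ranges here are made-up stand-ins (from the documentation address space), not any real crawler's ranges.

```python
import ipaddress

# Hypothetical block list: CIDR ranges you might deny at the reverse
# proxy or firewall of a self-hosted forge. These ranges are examples only.
BLOCKED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(addr: str) -> bool:
    """Return True if the address falls inside any banned range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED_RANGES)

print(is_blocked("203.0.113.42"))  # True
print(is_blocked("192.0.2.1"))     # False
```

In practice you'd do this at the proxy or firewall layer rather than in application code, but the membership test is the same idea.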

[–] sin_free_for_00_days@sopuli.xyz 16 points 1 day ago (1 children)
[–] GreenKnight23@lemmy.world 12 points 1 day ago

fuck em

If my own mother can't shame me, a glorified sex bot has a snowball's chance in hell of doing it.

[–] Adderbox76@lemmy.ca 80 points 2 days ago

This was honestly my biggest fear for a lot of FOSS applications.

Not necessarily in a malicious way (although there's certainly that happening as well). I think there's a lot of users who want to contribute, but don't know how to code, and suddenly think...hey...this is great! I can help out now!

Well meaning slop is still slop.

[–] MystikIncarnate@lemmy.ca 65 points 2 days ago

Look. I have no problems if you want to use AI to make shit code for your own bullshit. Have at it.

Don't submit that shit to open source projects.

You want to use it? Use it for your own shit. The rest of us didn't ask for this. I'm really hoping the AI bubble bursts in a big way very soon. Microsoft is going to need a bailout, OpenAI is fucking doomed, and X/Twitter/grok could go either way honestly.

Who in their right fucking mind looks at the costs of running an AI datacenter, and the fact that it's more economically feasible to buy a fucking nuclear power plant to run it all, and then says, yeah, this is reasonable?

The C-whatever-O's are all taking crazy pills.

[–] Hiro8811@lemmy.world 49 points 2 days ago (2 children)

AI crowd trying hard to find uses for AI

[–] M0oP0o@mander.xyz 1 points 21 hours ago

But all that money can't be wrong!

[–] setsubyou@lemmy.world 31 points 2 days ago* (last edited 2 days ago)

I think the open slop situation is also in part people who just want a feature and genuinely think they’re helping. People who can’t do the task themselves also can’t tell that the LLM also can’t do it.

But a lot of them are probably just padding their GitHub accounts too. Any given popular project has tons of forks from people who just want lots of repositories on their profile but never make changes, because they can't. I used to maintain my employer's projects on GitHub, and we'd literally have something like 3000 forks, 2990 of them untouched, from people with lots of repositories but no actual work. Now these people are using LLMs to also make changes…

[–] raynethackery@lemmy.world 81 points 2 days ago (1 children)

This is big tech trying to kill FOSS.

[–] village604@adultswim.fan 10 points 1 day ago

Which is funny because most of them rely on it

[–] Routhinator@lemmy.ca 52 points 2 days ago (1 children)

Get that code off of slophub and move it to Codeberg.

[–] lepinkainen@lemmy.world 21 points 2 days ago (3 children)

Is codeberg magically immune to AI slop pull requests?

[–] Routhinator@lemmy.ca 35 points 2 days ago

No, but they are actively not promoting or encouraging it. GitHub and MS are. If you're going to keep staying on the pro-AI site, you're going to eat the consequences of that. GitHub is actively encouraging these submissions with profile badges and other obnoxious crap. It's not an appropriate env for development anymore. It's gamified AI crap.

[–] woelkchen@lemmy.world 28 points 2 days ago

No (just like Lemmy isn't immune to AI comments), but GitHub is actively working towards AI slop.

[–] e8d79@discuss.tchncs.de 97 points 2 days ago (1 children)

I think moving off of GitHub to their own forge would be a good first step to reduce this spam.

[–] orize@lemmy.dbzer0.com 40 points 2 days ago (4 children)
[–] e8d79@discuss.tchncs.de 42 points 2 days ago (5 children)

Codeberg is cool but I would prefer not having all FOSS project centralised on another platform. In my opinion projects of the size of Godot should consider using their own infrastructure.

[–] JackbyDev@programming.dev 25 points 2 days ago

Let's be realistic. Not everyone is going to move to Codeberg. Godot moving to Codeberg would be decentralizing.

[–] brucethemoose@lemmy.world 49 points 2 days ago

Godot is also weighing the possibility of moving the project to another platform where there might be less incentive for users to "farm" legitimacy as a software developer with AI-generated code contributions.

Aahhh, I see the issue now.

That’s the incentive to just skirt the rules of whatever their submission policy is.

[–] tabular@lemmy.world 223 points 2 days ago* (last edited 2 days ago) (11 children)

Before hitting submit I'd worry I've made a silly mistake which would make me look a fool and waste their time.

Do they think the AI-written code Just Works (TM)? Do they feel so detached from that code that they don't feel embarrassment when it's shit? It's like calling yourself a fiction writer and putting "written by (your name)" on the cover when you didn't write it, and it's nonsense.

[–] kadu@scribe.disroot.org 164 points 2 days ago (5 children)

I'd worry I've made a silly mistake which would make me look a fool and waste their time.

AI bros have zero self awareness and shame, which is why I continue to encourage that the best tool for fighting against it is making it socially shameful.

Somebody comes along saying "Oh look, the image is just genera..." and you cut them off with "looks like absolute garbage, right? Yeah, I know, AI always sucks, imagine seriously enjoying that hahah, so anyway, what were you saying?"

[–] Pamasich@kbin.earth 37 points 2 days ago

Nowadays people use OpenClaw agents which don't really involve human input beyond the initial "fix this bug" prompt. They independently write the code, submit the PR, argue in the comments, and might even write a hit piece on you for refusing to merge their code.

[–] Feyd@programming.dev 105 points 2 days ago (2 children)

LLM code generation is the ultimate Dunning-Kruger enhancer. They think they're 10x ninja wizards because they can generate unmaintainable demos.

[–] atomicbocks@sh.itjust.works 75 points 2 days ago (3 children)

From what I have seen Anthropic, OpenAI, etc. seem to be running bots that are going around and submitting updates to open source repos with little to no human input.

[–] notso@feddit.org 49 points 2 days ago (1 children)

You guys, it's almost as if AI companies are intentionally trying to kill FOSS projects by burying them in garbage code. Sounds like they took a page from Steve Bannon's playbook: flood the zone with slop.

[–] Bongles@lemmy.zip 14 points 1 day ago (2 children)

I don't contribute to open source projects (not talented enough at the moment, I can do basic stuff for myself sometimes) but I wonder if you can implement some kind of requirement to prove that your code worked to avoid this issue.

Like, you're submitting a request that fixes X thing or adds Y feature, show us it doing it before we review it in full.

[–] Magnum 13 points 1 day ago (3 children)

Tests. What you are asking for is automated tests.
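A minimal sketch of what such a gate looks like in practice: a regression test shipped with the bugfix PR, which CI runs so a reviewer doesn't have to verify the claim by hand. The function and the bug here are hypothetical, invented purely for illustration.

```python
# clamp() stands in for whatever function a PR claims to fix.
def clamp(value: float, lo: float, hi: float) -> float:
    """Patched function: keep value within [lo, hi]."""
    return max(lo, min(hi, value))

def test_clamp_regression():
    # The (hypothetical) reported bug: values below the lower bound
    # were passed through unchanged. This test pins the fix in place.
    assert clamp(-5.0, 0.0, 10.0) == 0.0
    assert clamp(15.0, 0.0, 10.0) == 10.0
    assert clamp(5.0, 0.0, 10.0) == 5.0

test_clamp_regression()
print("regression test passed")
```

A test like this fails on the pre-patch code and passes afterwards, which is exactly the "show us it works before we review" evidence being asked for, and it runs for free on every future change.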


The trouble is just volume and time. Even just reading through the description and the "proof it works" would take a few minutes, and if you're getting tens of these a day it can easily eat up the time needed to find the ones worth reviewing (and these volunteers are working in their free time after a normal work day, so wasting 15 or 30 minutes out of a volunteer's one or two hours of work is throwing away a lot of time).

Plus, when volunteering is annoying, the volunteers stop showing up, which kills projects.

[–] MITM0@lemmy.world 51 points 2 days ago* (last edited 2 days ago) (6 children)

So I guess it is time to switch to a different style of FOSS development ?

The cathedral style, which is what Fossil uses: basically, in order to contribute you have to be manually added to the group. It's a high-trust environment where devs know each other on a first-name basis.

Oh BTW, Fossil is a fully-fledged alternative to Git & GitHub. It has:

  • Version-Tracking
  • Webserver
  • Bug-tracker
  • Ticketing system
  • Wiki
  • Forum
  • Chat
  • And a Graphical User-Interface which you can theme

All in One binary

[–] SocialMediaRefugee@lemmy.world 21 points 2 days ago

A similar problem is happening in submissions to science journals.

[–] Luden@lemmings.world 34 points 2 days ago (6 children)

I am a game developer and a web developer and I use AI sometimes just to make it write template code for me so that I can make the boilerplate faster. For the rest of the code, AI is soooo dumb it's basically impossible to make something that works!
