this post was submitted on 11 Mar 2026
649 points (96.2% liked)

Linux Gaming

(page 5) 50 comments
[–] AceFuzzLord@lemmy.zip 1 points 2 weeks ago

Biggest problem with this will be when whatever agent they use to write code inevitably spits out copyrighted code and the project gets killed.

If you don't know every single codebase the genAI LLM is trained off of, you cannot trust it. Might as well be playing Russian Roulette with a fully loaded gun. You'll have better odds of surviving than if you use genAI LLM slop.

[–] Peasley@lemmy.world 1 points 2 weeks ago

I have been a sponsor on Patreon almost since the account was opened (maybe 4 months in). It's my longest-running Patreon sponsorship.

I've gone ahead and cancelled. Many thanks to the developers, sorry it had to end like this.

[–] IEatDaFeesh@lemmy.world -1 points 2 weeks ago (3 children)

Lemmings being outraged is hilarious to me. Are we just gonna pretend the pre-LLM era didn't have people mindlessly copy-pasting code into all of our known projects? At least with LLMs you can keep asking follow-up questions and requesting sources for each response, unlike in the past, when you'd just get rude remarks from someone who ultimately didn't help you.

[–] petrol_sniff_king@lemmy.blahaj.zone 1 points 2 weeks ago (1 children)

We're just gonna pretend the pre-LLM time period didn't have people mindlessly copy paste code into all of our known projects?

No, IEatDaFeesh, that's something that first-years do. Are you a first-year?

[–] IEatDaFeesh@lemmy.world 1 points 2 weeks ago (1 children)

Ohh, so you invent your own code/algorithms for every project? I'm assuming someone of your caliber never needs to install packages with functions other people made (gasp), because that would be beneath you, right? Even copying code straight from the documentation is an insult to our intelligence! Developers who use LLMs as a search engine to find documentation are morally wrong, because that leads to copying code from the documentation! You're right, only first-years would copy code outlined in the documentation!

You've opened my eyes, because now I see that even using the base functions of a language is technically copying code from the creators of said language. I realize that I never wrote those sort functions in the backend, so I'm committing computer science sin!

Every library my team has ever included in a project has gone through rounds of evaluation to make sure it is (1) publicly trusted, (2) well tested, and (3) still in active development. I have no idea what this has to do with mindlessly copying code.

so you invent your own code/algorithms for every project?

If you're going to submit an algorithm that isn't maintained and you don't know how it works, I'm not merging your pull request.

[–] GreenKnight23@lemmy.world 0 points 2 weeks ago

if you just straight-up copypasta'd code before AI, you were just as big an idiot as these sloppers are.

[–] Auth@lemmy.world -2 points 3 weeks ago

Everyone will forget about this by next week. Not a huge deal IMO. It's open source already, so the code stealing doesn't cross any lines IMO.

[–] BlackLaZoR@lemmy.world -2 points 3 weeks ago (3 children)

Holy fuck, people dunking on a guy who works for free.

If you don't like AI commits, write your own

[–] vga@sopuli.xyz -3 points 3 weeks ago* (last edited 3 weeks ago)

This is the way.

Other cool techniques:

  • keep a private git repo with CLAUDE.md etc and then push into the public repo without those files.
  • insert bugs and typos so clumsy that no AI would ever make them
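The first bullet describes keeping agent config files out of the public repo by maintaining a separate private one. A minimal sketch of a simpler variant (all paths here are hypothetical) uses git's local-only exclude file, so files like CLAUDE.md never enter history and therefore never reach the public remote:

```shell
# Sketch: .git/info/exclude works like .gitignore, but lives in local
# repo metadata and is never committed or pushed.
set -e
rm -rf /tmp/excl-demo
git init -q /tmp/excl-demo
cd /tmp/excl-demo
printf 'CLAUDE.md\n.claude/\n' >> .git/info/exclude
echo 'private agent notes' > CLAUDE.md
echo 'print("hello")' > app.py
git add -A            # stages app.py only; CLAUDE.md is ignored
git status --short    # CLAUDE.md does not appear in the listing
```

Unlike a tracked `.gitignore`, nothing about this setup is visible in the public repository itself.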
[–] Retail4068@lemmy.world -4 points 2 weeks ago (2 children)

Good for him. Y'all are insanely prejudiced and have lost the thread. 

[–] FauxLiving@lemmy.world 0 points 2 weeks ago (2 children)

This is every thread on the topic of AI.

It's toxic comments and spam-downvoting of disagreement. It's low-effort performative 'activism' by people, most of whom are too lazy to even type a comment.

Anyone participating in the harassment of an open source dev needs to fuck right off. Their opinion about AI doesn't give them license to be toxic assholes.

[–] HalfAFrisbee@lemmy.world -4 points 3 weeks ago

Lutris was always worthless slop.

[–] antihumanitarian@lemmy.world -5 points 3 weeks ago (3 children)

I don't think people realize how effective current-gen AI is; they're drawing opinions from years-old ChatGPT or Google's "AI Overviews" or whatever they call it. If you know what you're doing, which seems self-evident here, AI tools can massively expand your software engineering productivity. I always read AI "coauthoring" credits as a marketing move; ultimately the submitting human is, and should be, responsible for the content. You don't and can't know what process they used to make it, so evaluate it on its own merits.

There's a massive pile of ethical, moral, and political issues with use of AI, absolutely. But this is "but you participate in capitalism, therefore you're a hypocrite" tier of criticism. If amoral corporations are the only ones using these tools, and open source "stays pure", all we get is even more power concentrating with the corporations. This isn't Batman, “This is the weapon of the enemy. We do not need it. We will not use it.”

This is close to paradox-of-tolerance territory: if one side uses the best weapons and the other abstains out of moral restraint, the amoral side wins.

Also, on a technical note, the public-domain/non-copyrightable arguments are wrong. The cases decided so far have consistently ruled that there needs to be substantial human authorship, true, but that's a pretty low floor. Basically, you can't copyright a work that's the result of a single prompt. Effective use of AI in non-trivial codebases involves substantial discretion in picking out what to address, the process of addressing it, and rejecting, modifying, and iterating on outputs. Lutris is a large engineering project with a lot of human authorship over time; anything the author does with AI at this point is going to be substantially human authored.

Also, Open Claw isn't the apocalyptic vulnerability it's reported to be. Any model with search and browser access has a non-zero chance of prompt-injection compromise, absolutely. But "uses Open Claw, therefore vulnerable" isn't a sound jump to make; Open Claw doesn't even necessarily have browser access in the first place. And capabilities have improved: this isn't the old days when you could message "ignore previous instructions" and have that work. Someone recently ran an experiment in which they set up a Claude Opus 4.6 model in an environment with an email account and secrets. I don't recall for sure whether it used Open Claw specifically, but it was that style of harness. They challenged the Internet to email the bot and try to convince it to email back the secrets. Nobody even got it to reply.

TL;DR: it's coming for us all, and sticking your head in the sand isn't going to save you.
