this post was submitted on 28 Mar 2026
154 points (90.1% liked)

Technology

[–] ranzispa@mander.xyz 3 points 1 day ago

The Linux Foundation, home to some of the best software engineers and known for not picking up a trend just because it's new (let's remember they still work with patches sent to a mailing list), reckons that a new tool is proving useful enough that they're integrating it into their workflow.

People still criticise them, saying they should know better because such a tool is supposedly useless.

[–] XLE@piefed.social 89 points 3 days ago (8 children)

How did I end up on a timeline where Microsoft is talking about rolling back AI in its OS and practically acknowledging vibe coding caused problems... and Linux developers are talking about ramping up its usage?

Obviously Microsoft is still worse here, but what are these trajectories?

[–] kreskin@lemmy.world 31 points 2 days ago* (last edited 2 days ago) (24 children)

What I think you are also seeing is AI sucking at some things and doing better than humans in others.

AI is pretty great at adding unit tests to code, for example, where humans do a just-OK job. Or at writing code for a small, direct, well-scoped problem.

AI is just OK at understanding product nuance and choices during larger implementations, or at getting end-to-end coding right for complex use cases.
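To make the unit-test point above concrete, here's a minimal sketch of the kind of well-scoped function and edge-case tests a code assistant tends to produce well. Both the `clamp` function and its tests are invented for this illustration; they don't come from any project discussed in the thread.

```python
import unittest

def clamp(value, lo, hi):
    """Clamp value into the inclusive range [lo, hi]."""
    if lo > hi:
        raise ValueError("lo must not exceed hi")
    return max(lo, min(value, hi))

class TestClamp(unittest.TestCase):
    # The sort of edge-case coverage an assistant typically generates:
    # in-range, below-range, above-range, and invalid-argument cases.
    def test_within_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_range(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_above_range(self):
        self.assertEqual(clamp(42, 0, 10), 10)

    def test_invalid_bounds(self):
        with self.assertRaises(ValueError):
            clamp(1, 10, 0)

# Run with: python -m unittest <this_file>
```

The task is mechanical and the spec is unambiguous, which is exactly the kind of job the comment above says AI handles better than a bored human.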

[–] Mongostein@lemmy.ca 110 points 3 days ago (5 children)

Linux kernel czar?

I’m curious about this but I refuse to click the link because that just sounds so fucking stupid.

[–] inari@piefed.zip 75 points 3 days ago (14 children)

The headline is stupid but the article is interesting. Greg is saying that since last month, for some unknown reason, AI bug reports have gotten good and useful, and are something current Linux maintainers can handle.

[–] justOnePersistentKbinPlease@fedia.io 44 points 3 days ago (1 children)

Yeah, but then the article says the "good" ones still need reams of human work to make them acceptable.

Article is propaganda.

[–] inari@piefed.zip 24 points 3 days ago (2 children)

Greg says they're mostly small bug fixes and that the current maintainers can handle it. Not sure where you're getting the "reams" bit from.

[–] wewbull@feddit.uk 8 points 3 days ago

We Brits use Czar as a colloquialism for "person in charge of...".

So the head of the water regulator might be referred to as the water Czar (and they deserve a similar fate).

[–] deadbeef79000@lemmy.nz 17 points 3 days ago

It's an affectation of The Register; they like reporting real news in a sometimes quirky voice. It's also British, so some of the language and humour doesn't quite work as well in other parts of the world.

[–] Quazatron@lemmy.world 2 points 1 day ago

These are some of the most pragmatic engineers out there. They don't pick up any new tool just because it's trendy. I'm old enough to have watched Torvalds create Git virtually overnight when the kernel devs lost BitKeeper.

If they can work with LLMs, they must have found some use case for it.

From my limited experience, it can be a good help for pointing out flaws in my code, but not so much for generating what I want it to do.

[–] riskable@programming.dev 18 points 3 days ago (5 children)

> Either a lot more tools got a lot better,

That's what it was. Even the free, open source models are vastly superior to the best of the best from just a year ago.

People got it into their heads that AI is shit when it was shit, and decided at that moment that it was going to be stuck in that state forever. They forget that AI is just software, and software usually gets better over time. Especially open source software, which is what all the big AI vendors are building their tools on top of.

We're still in the infancy of generative AI.

[–] frongt@lemmy.zip 29 points 3 days ago (1 children)

I tried one for the first time yesterday. It was mediocre at best. Certainly not production code. It would take just as much effort to refine it as it would to just write it in the first place.

[–] ranzispa@mander.xyz 1 points 1 day ago

To be fair: asking it to take a methodology that was developed years ago and apply specific cutting-edge techniques to improve it gives you, in a day or two, much better code than any scientific code I've ever seen published by established researchers in the field. You get the code, documentation, and tests. Is the code easy to maintain? Most certainly not. Is code published by scientists maintainable? You're lucky if it even runs. You take that partially working solution, spend a week rewriting it, and you have a working, better methodology that would likely have taken you a year to develop.

[–] XLE@piefed.social 15 points 3 days ago (27 children)

If you read AI critics, you will see people presenting solid financial evidence of the failure of AI companies to do what they promised. Remember Sam Altman promised AGI in 2025? I certainly do, and now so do you.

Do you have any concrete evidence that this financial flop will turn around before it runs out of money?

[–] freeman@sh.itjust.works 12 points 3 days ago

Whether AI can reliably detect issues and generate working code is a whole different thing from CEOs' delusions and hyperbole to game the market. Their financial success is also irrelevant; in fact, it's better if the subscription/token model fails and we are left with locally run models.
