7heo

joined 2 years ago
[–] 7heo@lemmy.ml 28 points 1 year ago (6 children)

I personally don't think we're getting anything. I always saw the "it takes time" point as a way to duck the issue until the community would forget about it entirely.

So, all in all, if you really want to get some answers, maybe wait long enough that LMG has had more than enough time to investigate, so much so that the "this wasn't a reasonable amount of time" defence becomes impossible, and then try to get the community to care again. And ask for answers. I personally do not see this happening, ever, but I hope I'm wrong.

[–] 7heo@lemmy.ml 2 points 1 year ago

The thing is, devops is pretty complex and pretty diverse. You've got at least 6 different solutions among the popular ones.

Last time I checked the list of available provisioning software alone, I counted 22.

Sure, some, like cdist, are pretty niche. But still, when you apply to a company, even though the platform is going to be AWS (mostly), Azure, GCE, Oracle, or some run-of-the-mill VPS provider with extended cloud features (S3-like storage based on MinIO, "cloud LAN", etc.), and you are likely going to use Terraform for host provisioning, the most relevant thing to check is which configuration software they use. Packer? Or dynamic provisioning like Chef? Puppet? Ansible? Salt? Or one of the "lesser ones"?

And the thing is, even across successive versions of compatible stacks, the DSLs evolved, and the way things are supposed to be done changed. For example, before Hiera, Puppet was an entirely different beast.

And that's not even throwing Docker (or rkt, appc) into the mix. Then you have k8s, podman, Helm, etc.

The entire ecosystem has considerable overlap too.

So, on one hand, you have pretty clean and usable code snippets on Stack Overflow, GitHub Gists, etc. So much so that tools built around those snippets emerged... And then, the very second LLMs were able to produce any moderately usable output, they were trained on that data.

And on the other hand, you have devops. An ecosystem with no clear boundaries, no clear organisation, not much maturity yet (in spite of the industry being more than a decade old), and so organic that keeping up with developments is a full time job on its own. There's no chance in hell LLMs can be properly trained on that dataset before it cools down. Not a chance. Never gonna happen.

[–] 7heo@lemmy.ml 2 points 1 year ago* (last edited 1 year ago)

Do bullets kill soldiers?

Infantry soldiers in the open, possibly. Soldiers in an APC? No.

Same applies to companies. A single sufficiently bad review of a small, one-person company can take it out entirely. A single review of a big corporation? Not even one from a big shot like MKBHD.

This headline is dumb.

[–] 7heo@lemmy.ml 1 points 1 year ago

I'd argue that what is holding the Linux GUI back is the number of options, combined with the lack of proper interoperability testing (not for lack of trying, but between the number of options and the number of versions, it is absolutely unfeasible), and the lack of strong design choices on the side of distributions: everyone wants to have and support everything under the sun, even if it means having 4 or 5 different flavours or editions of a particular distribution.

Don't get me wrong, I salute the intention and the initiative, but concretely, this almost always (and I only say "almost" to be safe; I've never seen a counter-example) means a clunky, unpolished experience.

I usually describe it as:

If GUIs were doors:

  • Mac OS would be selling literally only one kind of door: super slick, brushed metal, glass and white, fancy, with a black glass and brushed metal handle; it has a great feel to it, good heft, great handling, satisfying sound and feedback, etc. But then you need to buy everything else from them (including your lights, flooring, etc.) or it just won't open. Of course they sell everything at a premium.
  • Windows would be your standard wooden office door with the standard metal handle and the standard automatic door closer; but anyone can open it even when locked, it needs to be changed every other year, if you "customise" it (i.e. adapt it in any way) it will wear out 10x faster, and any adjustment you make (handle spring tension, automatic closer strength and kickback, hinge adjustment, etc.) will be reset at night, randomly, every other week. The door will get new "features" (like microphones, a search prompt, an assistant, etc.) at random, and you can use any kind of furniture you want, but during the "night resets" (aka "upgrades"), all the furniture in the office will be reset to "Microsoft furniture", and you will need to swap it all back the next morning. And for various unpredictable reasons, once in a while, when going through the door, it will close unexpectedly and violently, slamming you in the face with full force.
  • Linux and FOSS in general is a collection of community-made, IKEA-inspired doors. You can mix and match anything. Any kind of door, any kind of hinge. Any kind of handle. Want a door that opens sideways? Go for it. Want a door that slides up? Do it. Want a butterfly door? Sure. A proximity sensor as a handle? Totally. A carbon-fibre and ceramic door? Absolutely. All at once? Why not. In the end, no door is exactly the same, even across the same building, and you often need a few minutes to figure out how new doors work in new buildings. And of course, lots of doors are ill-designed, with completely unnecessary features and conflicting options, like both a sideways and a butterfly hinge. It still works, but with caveats. But hey, if it breaks, or doesn't fit, you can change it any time, get parts anywhere, and there is an absolutely insane amount of community-made documentation on most of it (except the internals; some of it is hard to understand, some of it is absolutely obscure, and most of it is documented by people who made it, exclusively for people who made it).

IMHO what we would need is for distributions to "adopt" a given GUI (or DE), and stick to that. Do not even carry the packages for something else. If it is needed, another distribution will be made. That would simplify things a lot, and would greatly relieve the stress on maintainers.

And it would make for a much more approachable user experience.

[–] 7heo@lemmy.ml 2 points 1 year ago

I would not call that a "privacy proxy"; that label is very disingenuous. It is a normal proxy, which replaces the technical metadata of your connection so that automated tracking is harder. But it will not replace or remove any of your input, and you can easily be tracked that way too.
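To make the difference concrete, here is a rough Python sketch (all names made up, not any real proxy's API): the proxy swaps out the connection metadata, but whatever you actually send goes through verbatim.

```python
# Toy model of a "privacy" proxy: it masks connection metadata,
# but forwards your input untouched.

def proxy_forward(request: dict) -> dict:
    """Forward a request, replacing only the technical metadata."""
    forwarded = dict(request)
    forwarded["source_ip"] = "203.0.113.7"      # the proxy's address, not yours
    forwarded["user_agent"] = "generic-client"  # a generic fingerprint
    # The body is passed through as-is: writing style, account names,
    # or identifiers embedded in it can still be used to track you.
    return forwarded

original = {
    "source_ip": "198.51.100.42",
    "user_agent": "Firefox on my exact OS build",
    "body": "the text I typed, which can identify me on its own",
}
print(proxy_forward(original))
```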

[–] 7heo@lemmy.ml 2 points 1 year ago

I see a proper keyboard in that picture. Where is the keyboard in my pocket, eh Ted? There was one thing that mattered above all, and that's the thing that's missing?? WHERE IS THE KEYBOARD IN MY POCKET, TED?

[–] 7heo@lemmy.ml 6 points 1 year ago (1 children)

Yeah, I find the puzzle-sliding JavaScript captchas the best as a user. Cognitively better than "training neural networks to recognise protestors", and still fast enough that it doesn't feel like a forced ad. Reliability might, however, vary a lot between implementations.

[–] 7heo@lemmy.ml 5 points 1 year ago

Plus, that way, you have a trail of invites. If something goes wrong, you can prune entire branches and mitigate most abuse.
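Roughly, in (entirely hypothetical) Python terms, the invite trail is just a tree, and moderation becomes a subtree removal:

```python
# Toy invite trail: each account records who invited it, forming a tree.
from collections import defaultdict

invited_by = {
    "bob": "alice", "carol": "alice",
    "dave": "bob", "eve": "dave", "mallory": "dave",
}

# Forward edges: inviter -> accounts they invited.
invites = defaultdict(list)
for account, inviter in invited_by.items():
    invites[inviter].append(account)

def prune_branch(root: str) -> set[str]:
    """Ban an abusive account and everyone downstream of their invites."""
    banned, stack = set(), [root]
    while stack:
        account = stack.pop()
        banned.add(account)
        stack.extend(invites[account])
    return banned

# "dave" turns out to be a spammer: his whole invite branch goes with him.
print(prune_branch("dave"))  # {'dave', 'eve', 'mallory'}
```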

[–] 7heo@lemmy.ml 9 points 1 year ago* (last edited 1 year ago) (1 children)

I believe you're missing the actual causality chain here.

While it is actually proven that vendors will degrade your experience artificially to "motivate" you to buy new devices, in the never ending pursuit of monetary gain, there is no such potential incentive here: you aren't paying for new drivers.

And while others suggest biases, I do believe you are witnessing an effect that is at least partially real, if not totally, but not for the reasons you believe:

Most programs that leverage GPUs end up being GPU bottlenecked. Meaning that one can almost always improve the program's performance by using a better GPU.

But then, why does a new driver not improve performance, and rather, simply "bring a degraded performance back to previous levels"?

Well, that has to do with auto-updates, and the way drivers are distributed.

While, in a world where one would have to manually update everything, a new driver would almost certainly mean better performance for a given program, most programs in our world update automatically (and sometimes even silently). And the developers are usually on top of things wrt drivers, because they follow driver updates closely, get early versions, etc.

Meaning that when a driver is updated, your apps usually are, too. In a way that leverages the new driver for more processing, rather than faster processing. But unlike your automatically updated apps, your drivers are updated manually.

And the consequence of such updates, when you are too slow to update your drivers, is a degraded experience.

Not because anyone artificially throttled your device's performance, but because you lag too far behind the expected updates.
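If that sounds abstract, here is a toy model of the mismatch (all numbers and names invented, purely to illustrate the argument): the auto-updated app starts assuming the newest driver, and your lagging driver forces it onto a slower path.

```python
# Toy model: the app silently auto-updates and targets the newest driver;
# your manually updated driver lags behind, so the app takes a fallback path.

DRIVER_INSTALLED = 535    # the driver version you actually have
APP_TARGETS = 545         # the driver version the freshly auto-updated app assumes

def frame_time_ms(driver: int, target: int) -> float:
    base = 10.0               # ms per frame when the app and driver match
    if driver >= target:
        return base           # fast path: the features the app expects are there
    return base * 1.4         # fallback path: the app works around missing features

print(frame_time_ms(DRIVER_INSTALLED, APP_TARGETS))  # 14.0 -> reads as "degraded"
print(frame_time_ms(APP_TARGETS, APP_TARGETS))       # 10.0 -> "back to previous levels"
```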

[–] 7heo@lemmy.ml 2 points 1 year ago* (last edited 1 year ago)

And Docker initially used Ubuntu. They explicitly and specifically switched to Alpine in 2016 for performance, to minimise the overhead.

[–] 7heo@lemmy.ml 4 points 1 year ago* (last edited 1 year ago) (3 children)

Note: this comment is long because the subject is important: the idea that "systemd is always better, no matter the situation" is absolutely dangerous for the entire FOSS ecosystem; both diversity and rationality are essential.

"Systemd can get more efficient than running hundreds of poorly integrated scripts"

In theory, yes. In practice, systemd is a huge monolithic single-point-of-failure system, with several bottlenecks and reinventing-the-wheel galore. And OpenRC is a far cry from "hundreds of poorly integrated scripts".

I think it is crucial we stop having dogmatic "arguments" based on argumentum ad populum or arguments from authority, or we will end up recreating a Microsoft-like environment in free software.

Let's stop trying to shoehorn popular solutions into ill suited use cases, just because they are used elsewhere with different limitations.

Systemd might make sense for most people on desktop targets (CPUs with several cores, and several GB of RAM), because of the convenience and comfort (which systemd excels at, let's be honest), but as we approach "embedded" targets, simpler and smaller is always better.

And no matter how much optimisation you cram into the bigger software, it will just not perform like the simpler software, especially with limited resources.

Now, I take OpenRC as an example here because it is, AFAIR, the default in Devuan, but Devuan also supports runit, sinit, s6, and shepherd.

And with s6 in the picture, you just can't say "systemd is flat-out better in all cases"; that would simply be stupid.

[–] 7heo@lemmy.ml -5 points 1 year ago (5 children)

Devuan + Xfce.
