moonpiedumplings

submitted 16 hours ago* (last edited 16 hours ago) by moonpiedumplings@programming.dev to c/emulation@lemmy.world
 

First, I would like to address the way DuckStation was relicensed from the GPL to CC BY-NC-ND (a non-FOSS license).

I've seen a lot of people incorrectly claiming that this violates the GPL, but the way the DuckStation developer did it was not a violation of the GPL. The developer obtained prior contributors' approval, and rewrote the GPL code of anyone whose approval they did not have.

source: https://www.gamingonlinux.com/2024/09/playstation-1-emulator-duckstation-changes-license-for-no-commercial-use-and-no-derivatives/

I have the approval of prior contributors, and if I did somehow miss you, then please advise me so I can rewrite that code. I didn't spend several weekends rewriting various parts for no reason. I do not have, nor want a CLA, because I do not agree with taking away contributor's copyright.

It should be noted that the version the AUR package uses is the older, still-GPL version of the program. There is a -git version which uses the latest code, and that seems to be okay, but I should note that part of the packaging process on many distros is essentially forking the software and making a derivative, something incompatible with CC ND.

I have been following this drama for a while, specifically on the r/emulationonandroid reddit community, and there is even more context to be had.

Now, about the dropping of Linux support. The problem goes a lot deeper than "Arch users being annoying".

Firstly, I want to state that there is a running, widely believed theory that Stenzek and Talreth, the developer of the AetherSX2 Android emulator, are the same person. You see this manifest in comments/posts like this one, but it's all over the sub. (That comment states that Stenzek was never really harassed; I disagree, and I will get to that later.)

The problem is that this developer has a pattern of insisting on having a Discord community, but being unwilling/unable to moderate it properly, or to appoint other/enough moderators to act as a shield between them and the community members.

Arch users are what is being complained about, but the Android emulation community has some pretty bad members, due to the high prevalence of children. So they would go on the Discord, troll, harass, and be annoying. For example, this instance here.

It culminated with a final update that added ads and decreased performance: https://www.reddit.com/r/EmulationOnAndroid/comments/11q726j/do_not_update_aethersx2_on_google_play_i_repeat/

Now, I do not condone harassment, and I think that the members of the community who are acting in bad faith are ultimately in the wrong here. But at the same time, you are not obligated to have a discord for your software project.

In my opinion, the real problem here is the flawed idea that every piece of software needs to have a "community". I have watched around 3-4 projects die due to harassment on Discord (not all of them related to emulation), and it's clear that moderating a community takes work that not everybody is willing/able to give, especially if you are interacting with children. And the r/EmulationOnAndroid community is particularly forgetful about this, as they just repeat these patterns over and over again, and it drives me nuts.

I'm currently watching the latest Android Switch emulator use a Discord server for communications and do their releases on GitHub, after the previous iteration's Discord server owner locked the server down (a lot of blame is placed on power-tripping mods, but this is the kind of thing that happens when people get fed up with dealing with children, tbh). And before that, the Nintendo DMCA fiasco happened. But don't worry, I'm sure the latest Switch emulator's combination of Discord + GitHub will go well and nothing bad will happen at all.

In addition to that, right now I am in 100 discord servers (they don't let you join more without Nitro), because people treat discord as an issue tracker and distribution hub for their small software projects and it drives me nuts.

I would prefer small software projects to not create a community, and instead integrate into existing communities that already have established moderators, so that they are protected from harassment and from children being annoying.


[–] moonpiedumplings@programming.dev 1 points 18 hours ago* (last edited 18 hours ago)

This is not the same as the Fedora OBS situation. DuckStation is now under a CC license with NonCommercial and NoDerivatives clauses.

Packaging software can be considered making a derivative, so people can't really legally package the latest version of Duckstation anymore.

Instead, the packages use the older GPL version*.

In theory, nothing would stop Fedora flatpak from simply shipping the OBS version in their own repo instead. But here, something like that isn't legally possible.

To add further context, it is theorized that this developer**, Stenzek, is actually an alt account of Talreth, the creator of the AetherSX2 Android PS2 emulator. Both accounts have a pattern of creating a Discord, and then being unwilling/unable to moderate it (or appoint any other moderators).

So Talreth got harassed on Discord, because the audience of an Android PS2 emulator is mostly children, many of whom are ungrateful. And it ended with Talreth's final update to AetherSX2 being a borking update that broke the emulator for everyone.

And that is why I would rather not use the official version of duckstation. I'm not interested in seeing my home directory get nuked because some kid called Stenzek a slur on the discord.

And this is what distros exist for. They act as a barrier between potentially hostile developers and the users. For example, when Audacity added telemetry, the distros patched it out when they compiled it.

*Afaik, he either got permission to relicense, or rewrote the GPL contributions of others. The latest version of DuckStation does not illegally use GPL code.

** In r/emulationonandroid, I have been following this drama for a while. A long while. The less mature community seems to be drama prone and it sucks.

EDIT: Found the reddit post: https://www.reddit.com/r/EmulationOnAndroid/comments/11q726j/do_not_update_aethersx2_on_google_play_i_repeat/ . It's not truly broken, but it did get ads and performance was made much worse.

[–] moonpiedumplings@programming.dev 8 points 1 day ago (1 children)

at just a glance I have some theories:

  1. The project was named after the creator, so maybe they wanted it to seem more community-organized.

  2. Food reference. Foss developers often name stuff after food, Idk why. Maybe cuz they like mangos (I do too).

  3. Dodges copyright or trademark issues. Certain things, like town names (Wayland is a town in the US), are essentially uncopyrightable/untrademarkable, so by naming your project after one of those you eliminate a whole host of potential legal issues.

These are just theories though. No reason is actually given, at least not that I could find based on 30s of searching.

[–] moonpiedumplings@programming.dev 7 points 5 days ago* (last edited 5 days ago)

No, because they don't deviate enough from Arch to avoid issues with breakages on updates. Just recently on Lemmy, someone was wondering why all their VLC plugins were uninstalled. It's an easy fix for someone who knows how to use pacman, but that and similar incidents make CachyOS not really a "just works" system.

[–] moonpiedumplings@programming.dev 2 points 1 week ago* (last edited 1 week ago)

Many Helm charts, like Authentik's or Forgejo's, integrate Bitnami Helm charts for their databases. So that's why this is concerning to me.

But I was planning to switch to operators like CloudNativePG for my databases instead and disable the built-in Bitnami images. When using the built-in Bitnami images, automatic migration between major releases is not supported; you have to do it yourself manually, and that disappointed me.

Does that happen often? I had, apparently incorrectly, assumed those things were more or less fire and forget.

Bootloaders are also software affected by vulnerabilities (CVEs). But this comment did make me curious: would a person with threat model/usecase 1 from my comment above care about the CVEs that affect GRUB?

Many of them do indeed seem to be non-issues, going from the list here.

GRUB CVEs that require the config file to be malicious, like this one, are pretty much non-issues. The config file is encrypted, in my setup at least (but again, that's not the default; also, idk if the config file is signed/verified).

I think this one is somewhat concerning: USB devices plugged in could corrupt GRUB.

Someone could possibly do something similar with hard drives, replacing the one in the system. The big theoretical vulnerability I am worried about is someone crafting a partition in such a way that it achieves RCE through GRUB. Or maybe it's already happened; my research isn't that deep. But with such a vulnerability, someone could shrink the EFI partition, put another partition there that GRUB reads, and then the code execution exploit happens.

But honestly, if someone could replace/modify hard drives, or add/remove USB devices, what stops them from just replacing your entire motherboard with a malicious one? This is very difficult to defend against, but you could check for it happening by having your motherboard be password protected, and always logging into your motherboard firmware whenever you boot to make sure the password is the same. (Although perhaps someone could copy the hashes, assuming the passwords are hashed at all, from one motherboard to another.)

But if something like that is in your threat model, it's important to note that Ethernet and much other firmware is proprietary (meaning you cannot audit or modify the code), and those devices also have what's called DMA (direct memory access): they can read and write the Linux kernel's memory with permissions higher than root. So if I had access to your device, I could replace your WiFi card with a malicious one that modifies stuff after you boot, or does any number of other things.

What you are supposed to do is prevent tampering in the first place, or, for a much cheaper cost, have "tamper evident protection": things that inform you if the system was tampered with. Stickers over the screws are an easy and cheap example.

But DefCon has a village dedicated to breaking tamper evident protection. Lol.

I think if your adversary is a nation-state, Secure Boot usecase 1 is simply broken and doesn't work. It's too easy for them to replace any of the physical components with malicious ones, because there is no verification of those. I think Secure Boot usecase 1 is for protecting against corporate espionage in mid- to high-tier corpos. Corporations also tend to give people devices, and they can ensure that those devices have tamper evidence/tamper resistance on top of Secure Boot. Of course, I think a nation-state can get through those too, but I don't think that's included in the threat model.

Nation-states can easily break the system of Secure Boot, and probably have methods in addition to, or separate from, Secure Boot for protecting themselves.

> Wait what, that just seems like home directory encryption with extra steps 🤦 I guess I’ll go back to Veracrypt then.

Performance on LUKS might be better since LUKS is a first-class citizen. But maybe performance with VeraCrypt is better, since only the home directory is encrypted. I tried DuckDuckGo, but the top results were AI slop with no benchmarks, so I'm not gonna bother doing further research.

[–] moonpiedumplings@programming.dev 6 points 1 week ago* (last edited 1 week ago) (1 children)

I'm on my phone rn and can't write a longer post. This comment is to remind me to write an essay later. I've been using authentik heavily for my cybersecurity club and have a LOT of thoughts about it.

The tl;dr about Authentik's risk of enshittification is that Authentik follows a pattern I call "supportware": extremely (intentionally/accidentally) complex software whose docs (intentionally/accidentally) leave out the edge cases, because you are supposed to pay for support.

I think this is a sustainable business model, and I think Keycloak (and other Red Hat software) has some similar patterns.

The tl;dr about Authentik itself is that it has a lot of features, but not all of them are relevant to your usecase, or worth the complexity. I picked up Authentik for invites (which afaik are rare; also, the official docs about setting up invites were wrong, see supportware), but invites may not be something you care about.

Anyway. Longer essay/rant later. Despite my problems, I still think Authentik is the best for my usecase (cybersecurity club), and the other options I've looked at, like Zitadel (seems to be more developer focused) or LDAP + an SSO service (no invites afaik), are a worse fit.

Sidenote: Microsoft Entra offers similar features to what I want from Authentik, but I wanted to self-host everything.

OpenWrt seems to be more popular.

[–] moonpiedumplings@programming.dev 4 points 1 week ago* (last edited 1 week ago) (2 children)

> but wouldn’t only the bootloader need to be signed

So the bootloader also gets updated, and new versions of the bootloader need to get signed. So if the BIOS is responsible for signing the bootloader, then how does the operating system update the bootloader?

> To my understanding a tamper-proof system already assumes full disk-encryption anyway

Kinda. The problem here, IMO, is that Secure boot conflates two usecases/threat models into one:

  1. I am a laptop owner who wants to prevent tampering with the software on my system by someone with physical access to my device
  2. I am a server operator who wants to enforce usage of only signed drivers and kernels. This locks down modification/insertion of drivers and kernels as a method of obtaining a rootkit on my servers.

The second person does not use full disk encryption, or care about physical security at all, really (because they physically lock up the server racks).

What happens in this setup is that the bootloader checks the kernel's signature, and the kernel checks the drivers' signatures... and they enable this feature depending on whether or not the Secure Boot EFI motherboard variable is enabled. So this feature isn't actually tied to the motherboard's ability to verify the bootloader. For example, GRUB has its own signature verification that can be enabled separately.
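To make "the Secure Boot EFI motherboard variable" concrete, here is a minimal Python sketch (assuming a Linux system with efivarfs mounted at its usual path) that reads the same state tools like mokutil report:

```python
# Minimal sketch: read the SecureBoot EFI variable via efivarfs (Linux).
# Assumption: efivarfs is mounted at the standard path; the GUID below is the
# standard EFI global-variable GUID. The file is 4 attribute bytes + 1 data byte.
from pathlib import Path

SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def secure_boot_enabled():
    """Return True/False for the firmware's Secure Boot state, or None if unknown."""
    try:
        raw = SECUREBOOT_VAR.read_bytes()
    except OSError:
        return None  # non-UEFI boot, or efivarfs not available
    return raw[4] == 1  # the data byte that follows the 4 attribute bytes

if __name__ == "__main__":
    print("Secure Boot enabled:", secure_boot_enabled())
```

The bootloader and kernel consult this variable through EFI services rather than through this file, but it is the same flag; the point is that signature enforcement is keyed off a firmware variable, not off whether the firmware actually verified the bootloader.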

The first person does not have malware in their system in their threat model. So they can enable full disk encryption, and then they don't care about the kernel and drivers being signed.

EXCEPT THEY ACTUALLY DO BECAUSE NOBODY DOES THE SETUP WHERE THE KERNELS AND DRIVERS ARE ENCRYPTED BY DEFAULT.

You must explicitly ask for this setup from the Linux distro installers (at least, all the ones I've used). By default, /boot, where the kernel and drivers are stored, sits unencrypted on a separate external partition, and not in the LUKS-encrypted partition.

What I do is have /boot/efi be the external EFI partition. /boot/efi is where the bootloader is installed, and the kernels are stored in /boot, which is located on my encrypted BTRFS partition. The GRUB bootloader is the only unencrypted part of my system, like the setup you suggested. But I had to ask for this by changing the partitioning scheme on CachyOS, and on other distros I used before this one.
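If you want to sanity-check whether your install ended up with this layout, here's a tiny Python sketch (the mount points are the ones from my setup; adjust for yours). If /boot is not its own mount point, the kernels live on the root filesystem, which is the encrypted one in the setup above, while /boot/efi should be a separate mount for the EFI partition:

```python
# Minimal sketch: check whether /boot is folded into the root filesystem
# (and therefore covered by its encryption) while /boot/efi is a separate ESP.
# Assumption: the /boot and /boot/efi paths from the setup described above.
import os

if __name__ == "__main__":
    print("/boot is its own mount point (a separate, likely unencrypted partition):",
          os.path.ismount("/boot"))
    print("/boot/efi is its own mount point (the EFI system partition):",
          os.path.ismount("/boot/efi"))
```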

A very interesting thing about this setup is that GRUB cannot see the config it needs to boot. It guesses at which disk it should decrypt, and if I have a USB drive plugged in, it guesses wrong and my system won't boot.

Continuing, the problem with setups like this is that in order to verify the bootloader, you must have Secure Boot enabled. GRUB will then read this EFI variable and attempt to verify the kernels and drivers. As far as I can tell, there is no way to disable this other than changing the source code or binary-patching GRUB.

I have a blog post where I explored this: https://moonpiedumplings.github.io/playground/arch-secureboot/index.html

So this means that even in setups where everything is encrypted except GRUB, you still have to sign the kernels and drivers in order to have a bootable system (unless you patch GRUB).

I eventually decided that this wasn't worth it, and gave up on secure boot for now.

This article explains why. It's not an issue that affects all motherboards.

https://wiki.debian.org/UEFI#Force_grub-efi_installation_to_the_removable_media_path

[–] moonpiedumplings@programming.dev 2 points 1 week ago* (last edited 1 week ago) (1 children)

So Signal does not have reproducible builds, which is very concerning security-wise. I talk about it in this comment: https://programming.dev/post/33557941/18030327 . The tl;dr is that no reproducible builds = impossible to detect whether you are getting an unmodified version of the client.
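To make the reproducible-builds point concrete, here is a minimal Python sketch (the file names are hypothetical): with reproducible builds, anyone can rebuild the client from source and compare hashes against the binary the project distributes; without them, the hashes never match, so a tampered build is indistinguishable from an honest one.

```python
# Minimal sketch: how end users verify a release when builds are reproducible.
# Assumption: the two file names below are hypothetical placeholders for a
# downloaded release artifact and a local rebuild from the same source tag.
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

if __name__ == "__main__":
    official = sha256_of("client-official-release.apk")   # what the project ships
    rebuilt = sha256_of("client-built-from-source.apk")   # what you built yourself
    print("verified: builds match" if official == rebuilt
          else "MISMATCH: the shipped binary differs from the source")
```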

Centralized servers compound these security issues and make them worse. If the client is vulnerable to some form of replacement attack, then they could use a much more subtle, difficult-to-detect backdoor, like a weaker crypto implementation, which leaks meta/user data.

With decentralized/federated services, if a client is using servers other than the "main" one, you either have to compromise both the client and the server, or compromise the client in a very obvious way that causes it to send extra data to servers it shouldn't be sending data to.

A big part of the problem comes from what GitHub calls "bugdoors". These are "accidental" bugs that are backdoors. With a centralized service, it becomes much easier to introduce bugdoors, because all the data routes through one service, which could then silently take advantage of the bug on its own servers.

This is my concern with Signal being centralized. But mostly I'd say don't worry about it, threat model and all that.

I'm just gonna @ everybody who was in the conversation. I posted this top level for visibility.

@Ulrich@feddit.org @rottingleaf@lemmy.world @jet@hackertalks.com @eleitl@lemmy.world @Damage@feddit.it

EDIT: elsewhere in the thread, there is discussion of what was probably a nation-state wiretapping attempt on an XMPP service: https://www.devever.net/~hl/xmpp-incident

For a similar threat model, Signal is simply not adequate, for the reasons I mentioned above, and that's probably what poqVoq was referring to when he mentioned that it was discussed here.

> The only timestamps shared are when they signed up and when they last connected. This is well established by court documents that Signal themselves share publicly.

This, of course, assumes I trust the courts. But if I am seeking maximum privacy/security, I should not have to do that.

https://www.devever.net/~hl/xmpp-incident

This article discusses some mitigations.

You can also use a platform like SimpleX or the Tor-routed ones, but they aren't going to offer the features of XMPP. It's better to just not worry about it. This kind of attack is so difficult to defend against that it should be out of the threat model of the vast majority of users.

 

cross-posted from: https://programming.dev/post/33535348

Nixgl: https://github.com/nix-community/nixGL

Also, it seems like this requires the latest "stateVersion", since this is a new feature.

This is pretty big, because it makes it easy to use applications from nixpkgs that use the GPU on non-NixOS systems.

 


cross-posted from: https://programming.dev/post/32779890

I want to, like, block interaction with a window that I am keeping on top of other windows, so I can see it but still click through to stuff behind it.

It turns out mpv already has this implemented. https://github.com/mpv-player/mpv/pull/8949

Technically no Windows or Mac support (presumably it's possible there; dunno), but OP only asked for Linux stuff, so I'll close this.

And then I could remove the title bar if I really don't want to interact with the app.

 


Older article (2019), but it introduced me to some things I didn't know. Like I didn't know that cockpit could manage Kubernetes.

 

So this is a pretty big deal to me (it looks recent, just put up last October). One of my big frustrations with Matrix was that they didn't offer Helm charts for a Kubernetes deployment, which makes it difficult for entities like nonprofits and community clubs to use it for their own purposes. Those entities need more hardware than an individual self-hoster, and may want features like high availability, and Kubernetes makes horizontal scaling and high availability easy.

Now, according to the site, many of these features seem to be "enterprise only", but it's very strangely worded. I can't find anything that explicitly states these features aren't in the fully FOSS self-hosted version of matrix-stack; instead, they seem to only be advertised as features of the enterprise version.

My understanding of Kubernetes architecture is that it's difficult for people to not do high availability, which is why this makes me wonder.

Looking through the docs for the enterprise version, it doesn't look like anything really stops me from doing this with the community edition.

They do claim to have rewritten Synapse in Rust, though:

> Being built in Rust allows server workers to use multiple CPU cores for superior performance. It is fully Kubernetes-compatible, enabling scaling and resource allocation. By implementing shared data caches, Synapse Pro also significantly reduces RAM footprint and server costs. Compared to the community version of Synapse, it's at least 5x smaller for huge deployments.

And this part does not seem to be open source (unless it's a rebranded Conduit, but Conduit doesn't seem to support the newer Matrix Authentication Service).

So, it looks like Matrix/Element has recently become simultaneously much more open source, but also more opaque.

 

See title

 

I find this hilarious. Is this an easter egg? When shaking my mouse cursor, I can get it to take up the whole screen's height.

This is KDE Plasma 6.

 
