According to the author, that happened quite a while ago and we're now at the next step.
How different is this from GitLab's open core model?
That's a really good question that I don't immediately have a satisfying answer to.
There are some differences I can point out though:
- GitLab has demonstrated its commitment to keeping the core of its product, though limited in features, free and open source. As of now, Bitwarden's clients cannot even be compiled without the proprietary SDK anymore.
- GitLab was always permissively licensed (MIT) and never attempted to subvert its original license terms.
- GitLab EE's "closed" part is actually quite open (go read the source code) but still squarely in the proprietary camp, because it requires you to have a valid subscription to exercise your freedoms.
Is this a permanent change?
Reverting would be quite trivial for them in technical terms: either license the SDK under the GPL or stop using it in the clients.
I don't see a reason for them to roll it back though. This was decided long ago, and they explicitly chose to stray from the status quo and make it closed source.
The only thing I could see making them revert this would be public pressure. If they lose a sufficient number of subscribers over this, that might make them reconsider. Honestly though, by that time the cat's out of the bag and all the public goodwill and trust is gone.
It's honestly a bafflingly bad decision, even from just a business perspective. I predict they'll lose at least 20%, and likely 30-50%, of their subscribers over this.
Is the involvement of investors the root of this?
I find that likely. If it stinks, it's usually something stinky's fault.
Are we overreacting, given that it doesn't meet our strict definition of FOSS?
They are attempting to subvert one of the FOSS licenses held in the highest regard. You cannot really get much more anti-FOSS than this.
An "honest" switch to completely proprietary licenses with a public announcement months prior would have been easier to accept.
As with all of their services, the back-end is closed-source.
For the purposes of user freedom, it's not that critical, as the back-end merely facilitates the storage and synchronisation of encrypted data. This is different from the Bitwarden case, where they're now including freedom-disrespecting code in the most critical part of their software: the clients, which handle the unencrypted data.
The fact of the matter remains, however, that Proton Pass restricts your freedom by not allowing you to self-host it.
If you are fine with not being able to self-host, I'd say it's a good option though. Doubly so if you are already a customer of their other services.
Proton has demonstrated time and time again over the past decade that it acts for the benefit of its users, and I see no incentive for it to stop doing so. I'd estimate a low risk of enshittification for Proton, which is high praise for a company of their size.
KeePass isn't really in the same category of product as Bitwarden. The interesting part of Bitwarden is that it's run as a service.
The fact that the two programs communicate using standard protocols does not mean they are one program for the purposes of GPLv3.
The fact that they would even think about attempting to subvert the GPL (much less actually pull through with it) makes me think they stopped being an open source company a while ago.
It would break a lot, require a new API, and force devs to rework a lot of programs.
As I understand it, this would have been a perfectly backwards compatible change. You'd only get the events if you explicitly asked for them.
The Immich app.
It doesn't really function as a full gallery app yet, though, so I have Fossify Gallery installed as a backup to open images in via intent.
I only learned about Aves today and, trying it out for the same purpose, I think I like its picture viewer better.
It was there and "consumed space" before too; it was just a less noticeable icon (a dark grey, downwards-facing chevron).
In what regard?
Statistically, yes.
(This is a Joke.)
In simple terms, large language models predict the continuation of a given text word by word. To do so, they use a quite gigantic corpus of statistical data, plus a few other minor factors, to predict these words.
The statistical data is quite sophisticated but, in the end, it is merely statistics: a prediction of the most likely word given a set of words, based on previous data. There is nothing intelligent in "AI" chat bots and the like.
If you ask an LLM chatbot a question, what actually happens is that the LLM predicts the most likely continuation of the question text. In almost all of its training data, what comes after a question is a sentence that answers it, and chatbot-type LLMs use some additional tricks to make it exceedingly likely that an answer follows a question.
However, if its data predicts that the most likely words to follow "What should I put on my pizza?" are "Glue can greatly enhance the taste of pizza.", then that's what it'll output. It doesn't reason about anything, nor does it have any sort of store of facts that it systematically combines to give you a sensible answer; it merely predicts what a sensible answer could be, based on what was probable according to the statistical data. It imitates.
If you have some text and want a probable continuation of the kind that often occurred in similar texts, LLMs can be great for that. Note, though, that if there is no probable continuation, they will often fall back to an improbable one that is merely less improbable than all the others.
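To make the "word-by-word prediction" idea concrete, here's a toy sketch of my own (real LLMs use neural networks over token sequences, not word-count tables, but the principle of "output the most likely continuation" is the same):

```python
from collections import Counter, defaultdict

# Tiny toy corpus standing in for the "gigantic corpus of statistical data".
corpus = "the cat sat on the mat . the cat sat on the rug .".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely word to follow `word`."""
    candidates = following.get(word)
    if not candidates:
        return None  # no data for this word at all
    return candidates.most_common(1)[0][0]

def continue_text(start, n=4):
    """Greedily extend `start` word by word — imitation, not reasoning."""
    words = [start]
    for _ in range(n):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(continue_text("the"))  # → "the cat sat on the"
```

Nothing in there "knows" anything about cats or mats; it only knows which word most often came next in the data it saw.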
Measure resource usage during play. What is the bottleneck?
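On Linux, a rough first pass could look like this (standard utilities; exact `ps` flags assume procps-ng and vary by system):

```shell
# While the game is running, sample the system from another terminal.
uptime                                              # load average: is the CPU saturated?
ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head -n 5  # top CPU/memory consumers
# If the GPU is the suspect, use your vendor's tool instead
# (e.g. nvidia-smi on NVIDIA, or an overlay like MangoHud in-game).
```

Whichever resource sits near 100% while the others idle is your likely bottleneck.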
https://lemmy.ndlug.org/post/1268531