Considering the lead developer of GrapheneOS bans anyone from their chat for asking how an Android phone with GrapheneOS compares to a non-Android phone, such as a PinePhone or Librem 5, in terms of security, because, according to said developer, the PinePhone and Librem 5 are "scam products" and even asking questions about them is "spreading misinformation" and "promotion of fraud", I'd be quite, quite wary of the claims GrapheneOS developers make about its security.
If it's the data side that got damaged, you might be able to restore the disk, as long as the damage is not major. The actual data is written on a thin film that's sandwiched between two layers of plastic. The plastic on the outside can be ground down and polished back to a smooth, clean finish. Disk polishers used to be kinda popular back in the day.
Not once did I claim that LLMs are sapient, sentient or even have any kind of personality. I didn't even use the overused term "AI".
LLMs, for example, are something like... a calculator. But for text.
A calculator for pure numbers is a pretty simple device, all the logic of which can be designed by a human directly.
When we want to create a solver for systems that aren't as easily defined, we have to resort to other methods. E.g. "machine learning".
Basically, instead of designing all the logic entirely by hand, we create a system which can end up in a finite, yet still nearly infinite, number of states, each of which defines behavior different from the others. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for something a human mind can't even break down into building blocks, due to the sheer complexity of the given system (such as a natural language).
And just like a calculator can derive that 2 + 3 is 5, even though the number 5 appears nowhere in the input and that particular formula wasn't part of the suite of tests used to verify that the calculator works correctly, a machine learning model can figure out that "apple slices + batter = apple pie", assuming it has been tuned (a.k.a. trained) right.
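To make that "tuning" idea a bit more concrete, here's a toy sketch in Python. The example pairs, the two-weight model and the learning rate are all made up purely for illustration; the point is just that a deliberately wrong model gets nudged towards the right answers by checking its error on existing data, and afterwards handles an input it never saw.

```python
# Toy illustration of "tuning a model on existing data and checking its
# performance". The "model" is just two adjustable weights, and the task is
# addition; every number and name here is invented for the example.

examples = [((2, 3), 5), ((1, 4), 5), ((10, 7), 17)]  # ((a, b), expected a + b)

w1, w2 = 0.2, -0.3        # start with a model that's plainly wrong
learning_rate = 0.005

for epoch in range(1000):                 # "slowly tuning the model"
    for (a, b), target in examples:
        prediction = w1 * a + w2 * b
        error = prediction - target
        w1 -= learning_rate * error * a   # nudge each weight so that
        w2 -= learning_rate * error * b   # the error shrinks a little

# "Checking its performance" on a pair that never appeared in the examples:
print(round(w1 * 6 + w2 * 8))             # prints 14
```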
Learning is, essentially, "algorithmic copy-pasting". The vast majority of things you know, you've learned from other people or from other people's works. What makes you more than a copy-pasting machine is the ability to extrapolate from that acquired knowledge to create new knowledge.
And currently existing models can often do the same! Sometimes they make pretty stupid mistakes, but they often do, in fact, manage to end up with brand new information derived from old stuff.
I've tortured various LLMs with short stories, questions and riddles, which I've written specifically for the task and which I've asked the models to explain or rewrite. Surprisingly, they often get things either mostly or absolutely right, despite the fact it's novel data they've never seen before. So, there's definitely some actual learning going on. Or, at least, something incredibly close to it, to the point it's nigh impossible to differentiate it from actual learning.
It's illegal if you copy-paste someone's work verbatim. It's not illegal to, for example, summarize someone's work and write a short version of it.
As long as overfitting doesn't happen and the machine learning model actually learns general patterns instead of memorizing the training data, it should be perfectly capable of generating data that's not copied verbatim from humans. Whom, exactly, is a model plagiarizing if it generates a summarized version of some work you give it, particularly if that work is novel and was created or published after the model was trained?
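If it helps, here's a rough sketch of what that "memorizing vs. learning the general pattern" distinction looks like in practice. The data, the noise level and the polynomial degrees are invented for the example: an overly flexible model nails its training points but tends to do worse on points it hasn't seen, while a simpler one picks up the actual pattern.

```python
# Rough sketch of "memorizing the training data" vs. "learning the general
# pattern". Data, noise level and polynomial degrees are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Underlying pattern: y = 2x + 1, observed with a bit of noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + 1 + rng.normal(0, 0.2, size=x_train.size)

x_test = np.linspace(0.03, 0.97, 50)   # new inputs, never seen during "training"
y_test = 2 * x_test + 1                # what the real pattern says they should be

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)            # fit a polynomial
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")

# The degree-9 polynomial threads (nearly) exactly through the training points,
# so its train error is tiny, but it tends to swing wildly in between them and
# does worse on the unseen points -- that's overfitting. The degree-1 fit keeps
# some train error, yet tracks the actual pattern much better.
```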
I have a 120 gig SSD. The system takes up around 60 gigs, plus BTRFS snapshots and their overhead. I have around 15 gigs of wiggle room, on average. Trying to squeeze some /home stuff in there doesn't really seem that reasonable, to be honest.
As long as you don't re-format the partition. Not all installers are created equal, so it might be more complicated to re-install the OS without wiping the partition entirely. Or it might be just fine. I don't really install Linux often enough to know. ¯\_(ツ)_/¯
You can put your /home on a different BTRFS subvolume and exclude it from being snapshotted.
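For anyone who wants to try that, here's a minimal sketch of the idea; the paths and subvolume names (/mnt/btrfs-top, @, @home) are made-up placeholders for whatever your layout actually is, and it needs root. It relies on the fact that btrfs snapshots aren't recursive, so a nested subvolume simply isn't included when its parent gets snapshotted.

```python
#!/usr/bin/env python3
# Minimal sketch: give /home its own btrfs subvolume so snapshots of the root
# subvolume skip it. Paths below are illustrative placeholders; run as root.
import subprocess

def btrfs(*args):
    cmd = ["btrfs", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create a dedicated subvolume for /home next to the root subvolume.
btrfs("subvolume", "create", "/mnt/btrfs-top/@home")

# 2. (Copy the existing /home contents into it, then mount it at /home,
#    e.g. with an fstab entry using the option subvol=@home.)

# 3. A snapshot of the root subvolume now leaves /home out, because btrfs
#    snapshots are not recursive -- nested subvolumes are not followed.
btrfs("subvolume", "snapshot", "-r", "/mnt/btrfs-top/@", "/mnt/btrfs-top/@-backup")
```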
Not OP, but I have the same setup.
I have BTRFS on /, which lives on an SSD, and ext4 on an HDD, which is /home. BTRFS can do snapshots, which is very useful in case an update (or my own stupidity) bricks the system. Meanwhile, /home is filled with junk like cache files, games, etc., which doesn't really make sense to snapshot, but that's actually secondary. Spinning rust is slow and BTRFS makes it even worse (at least on my hardware), which, in itself, is reason enough to avoid using it there.
Reviewing the source code of an entire operating system is not a task doable by a single person, particularly when that person is not an expert in the field.
A proper code audit needs to be done by a team of professionals capable of spotting things like actual security vulnerabilities and logic errors that might result in more data being exposed than advertised.