this post was submitted on 18 Mar 2024
46 points (94.2% liked)


sudo's Hall of pain

[–] brokenlcd@feddit.it 1 points 1 year ago (7 children)

I'm trying to do that, but all of the newer drives I have are being used in machines, while the ones that aren't connected to anything are old 80 GB IDE drives, so they aren't really practical for backing up 1 TB of data.

For the most part I've prevented myself from making the same mistake again by adding a 1 GB swap partition at the beginning of the disk, so a slip doesn't immediately kill the data partition if I mess up again.

[–] Atemu@lemmy.ml 0 points 1 year ago (6 children)

> I'm trying to do that, but all of the newer drives I have are being used in machines, while the ones that aren't connected to anything are old 80 GB IDE drives, so they aren't really practical for backing up 1 TB of data.

It's possible to make that work, through discipline and mechanism.

You'd need about 12 of them, but if you carve your data into <80 GB chunks, you could store each chunk on a separate scrap drive and thereby back up 1 TB of data.

Individual files larger than 80 GB are a bit trickier, but they can also be handled by splitting them into parts.
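The chunking approach can be sketched like this (a small-scale demonstration; for real scrap drives you'd split with something like `split -b 75G` so each part fits on an 80 GB disk, and the file names here are made up):

```shell
# Stand-in for a file too big for one drive (10 MB instead of >80 GB).
head -c 10M /dev/urandom > bigfile.bin

# Split into fixed-size parts with numeric suffixes:
# bigfile.bin.part.00, bigfile.bin.part.01, bigfile.bin.part.02
split -b 4M -d bigfile.bin bigfile.bin.part.

# Each part would go onto a separate drive. To restore, concatenate
# the parts in order and verify the result is byte-identical.
cat bigfile.bin.part.* > bigfile.restored
cmp bigfile.bin bigfile.restored && echo "reassembled copy is identical"
```

Shell globbing sorts the numeric suffixes in order, so `cat bigfile.bin.part.*` reassembles the parts correctly.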

What such a system requires is rigorous documentation of where everything is: an index. I use git-annex for this purpose, which comes with many mechanisms to aid this sort of setup, but it's quite a beast in terms of complexity. You could do everything important it does manually, through discipline, without unreasonable effort.
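A minimal manual index could be a per-drive checksum manifest, generated before each drive is disconnected and shelved. A sketch (the drive label "DRIVE03", the directory standing in for its mountpoint, and the chunk names are all made up for illustration):

```shell
# "drive03" stands in for the scrap drive's mountpoint.
mkdir -p drive03
echo "chunk A" > drive03/chunk-00
echo "chunk B" > drive03/chunk-01

# Record every file on the drive with its checksum, keyed to the
# drive's label, then keep the manifest somewhere always accessible.
(cd drive03 && sha256sum chunk-*) > manifest-DRIVE03.txt

# Locating a chunk later is just a grep through the manifests:
grep -l 'chunk-01' manifest-*.txt   # prints which drive holds it
```

The checksums also let you verify a chunk survived intact when you eventually read it back (`sha256sum -c`).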

> For the most part I've prevented myself from making the same mistake again by adding a 1 GB swap partition at the beginning of the disk, so a slip doesn't immediately kill the data partition if I mess up again.

Another good practice is to attempt any changes on a test model first. You'd create a sparse test image (`truncate -s 1TB disk.img`), mount it via loopback, and apply the same partition and filesystem layout your actual disk has. Then you attempt any change you plan to make on that loopback device first and verify its filesystems still work.
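The sparse-image part looks like this; the image claims 1 TB of apparent size but occupies almost nothing on disk until written to. The loopback steps need root, so they're only sketched in comments:

```shell
# Create a sparse 1 TB (10^12 bytes) image file.
truncate -s 1TB disk.img

# Apparent size is 1 TB; actual allocation is (nearly) zero blocks.
stat -c 'apparent size: %s bytes' disk.img
du -h disk.img

# Root-only steps, shown for orientation rather than execution here:
#   losetup --find --show --partscan disk.img   # e.g. prints /dev/loop0
#   ...recreate the real disk's partition table and filesystems on it...
#   ...rehearse the risky operation, then fsck/mount to verify...
#   losetup -d /dev/loop0
```

Because the image is sparse, you can rehearse operations on a "1 TB disk" even on a machine with far less free space.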

[–] brokenlcd@feddit.it 0 points 1 year ago* (last edited 1 year ago) (1 children)

The problem is that I didn't mean to write to the HDD but to a USB stick; I typed the wrong letter out of habit from the old PC.

As for the hard drives, I'm already trying to do that; for bigger files I just break them up with split. I'm just waiting until I have enough disks to do that.

[–] Atemu@lemmy.ml 1 points 1 year ago

> The problem is that I didn't mean to write to the HDD but to a USB stick; I typed the wrong letter out of habit from the old PC.

For that issue, I recommend never using unstable device names like /dev/sdX and always using /dev/disk/by-id/ paths instead.
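The by-id names encode vendor, model, and serial number, so they stay stable across reboots and can't silently point at the wrong disk when letters get reshuffled. A sketch (the directory may be absent on minimal VMs, and the device name in the dd line is made up for illustration):

```shell
# List the stable names; each is a symlink to the current /dev/sdX node.
ls -l /dev/disk/by-id/ 2>/dev/null || echo "no by-id entries here (e.g. minimal VM)"

# Writing an image by id instead of by letter (hypothetical device name;
# needs root, shown as a comment only):
#   dd if=image.iso \
#      of=/dev/disk/by-id/usb-Kingston_DataTraveler_123456-0:0 \
#      bs=4M status=progress
```

A typo in a long serial-bearing name fails with "No such file or directory" instead of overwriting the wrong drive.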

> As for the hard drives, I'm already trying to do that; for bigger files I just break them up with split. I'm just waiting until I have enough disks to do that.

I'd highly recommend starting to back up the most important data ASAP rather than waiting until you can back up all of it.
