In case you missed it in the article, the transfer speeds are mentioned just two paragraphs prior to the one you cited:
Writing 360 TB at 4 MB/s will take over 1000 days, almost 3 years. Retrieving 360 TB at a rate of 30 MB/s is about 138 days. That capacity-to-bitrate ratio is going to be really hard to use in a practical way, and it'll be critical to get that speed up. Their target of 500 MB/s still means more than 8 days to read or write the data from one storage platter.
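For what it's worth, those durations check out; a quick sanity check using the figures from the quoted paragraph:

```python
# Transfer time for a 360 TB platter at the rates quoted above.
SECONDS_PER_DAY = 86_400
capacity_bytes = 360e12  # 360 TB

for label, rate_mb_s in [("write @ 4 MB/s", 4),
                         ("read @ 30 MB/s", 30),
                         ("target @ 500 MB/s", 500)]:
    seconds = capacity_bytes / (rate_mb_s * 1e6)
    print(f"{label}: {seconds / SECONDS_PER_DAY:.0f} days")

# write @ 4 MB/s: 1042 days (~2.9 years)
# read @ 30 MB/s: 139 days
# target @ 500 MB/s: 8 days
```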
One counterpoint: even with a weak speed-to-capacity ratio, it could be very useful to have that much storage for incremental backup solutions, where you keep a small index to check what needs to be backed up, only write new/modified data, and on restore only read the indexes plus the data you're actually restoring. That saves time writing the data and lets you keep access to historical versions.
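Roughly the idea behind the "small index" check; a toy sketch, where the hash-based comparison and all the names are my own illustration rather than anything from the article:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Hash file contents so unchanged files can be skipped."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def plan_incremental_backup(root: Path, index: dict[str, str]) -> list[Path]:
    """Compare the tree against the previous run's index and return
    only the files that actually need to be written to the slow medium.
    Mutates `index` in place so the updated index can be stored
    alongside the new data."""
    to_write = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        rel = str(path.relative_to(root))
        digest = file_digest(path)
        if index.get(rel) != digest:
            to_write.append(path)
        index[rel] = digest
    return to_write
```

The index stays tiny relative to the data, so the slow medium only ever sees the new/modified files.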
There are two caveats here, of course, assuming the platters are not rewritable. One, you need to be able to seek quickly to the latest index, which can't reliably be at the start of the medium. Two, you need a format that works without rewriting any data, possibly with a footer (zip does this: its central directory lives at the end of the file), which introduces extra complexity. Though I foresee a potential trick where each index leaves an unallocated block for the address of the next index, to be written later. A sketch of that trick follows below.
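That trick might look something like this; a toy sketch against an ordinary file, since on true write-once media the pointer slot would simply stay physically unwritten until the next session arrives (all names and the 8-byte layout are made up for illustration):

```python
import struct

POINTER_SIZE = 8          # reserved slot: 64-bit offset of the next index
NULL_PTR = b"\x00" * POINTER_SIZE

def append_session(archive: str, data: bytes, index: bytes) -> int:
    """Append one backup session: payload, then its index, then a
    still-blank pointer slot. Returns the offset where the index starts."""
    with open(archive, "ab") as f:
        f.write(data)
        index_offset = f.tell()
        f.write(index)
        f.write(NULL_PTR)  # stays blank until the next session exists
    return index_offset

def link_sessions(archive: str, prev_index_offset: int, prev_index_len: int,
                  next_index_offset: int) -> None:
    """Patch the previous session's pointer slot with the new index's
    offset, so a reader can hop from index to index instead of scanning."""
    with open(archive, "r+b") as f:
        f.seek(prev_index_offset + prev_index_len)
        f.write(struct.pack("<Q", next_index_offset))
```

A reader would start at the first index and follow pointers until it hits an all-zero slot, which marks the newest index, so finding the latest state is a handful of seeks rather than a scan of the whole platter.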
I was so blindsided by the fact that the tech isn't for consumers that I forgot to mention the r/w speeds