this post was submitted on 02 Aug 2025
372 points (97.2% liked)

[–] squaresinger@lemmy.world 13 points 2 days ago (1 children)

Multi-cloud is difficult, that's true. But keeping backups outside your one cloud provider is easy.

That way, if your cloud provider pulls the plug, you'll have to reconfigure everything, but at least your data stays intact.

To be able to recover from something like that you don't need multiple working cloud setups. You just need backups, so that in an event like OOP's, you spend a few weeks rebuilding the configurations instead of years rebuilding your projects.
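
As a minimal sketch of what "backups outside the single cloud" can look like (assuming boto3; the bucket name and destination path are hypothetical):

```python
import os
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")
BUCKET = "my-prod-bucket"   # hypothetical bucket name
DEST = "/mnt/offsite-disk"  # hypothetical path outside the cloud

# Walk every object in the bucket and copy it to off-cloud storage,
# so the data survives even if the cloud account itself disappears.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):  # skip folder placeholder objects
            continue
        target = os.path.join(DEST, key)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        s3.download_file(BUCKET, key, target)  # AWS bills this as egress
```

Run something like that on a schedule and the worst case is a few weeks of reconfiguring, not years of lost work.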

[–] Glitchvid@lemmy.world 8 points 2 days ago (1 children)

It really depends: pulling hundreds of GiB out of AWS to back up on, say, GCS is going to add up extremely quickly. The cloud companies make it intentionally painful to leave or interoperate.
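
Rough numbers, assuming AWS's long-standing ~$0.09/GB internet egress tier (rates vary by region and change, so treat this as ballpark):

```python
# Back-of-the-envelope egress cost; rates are approximate.
egress_rate = 0.09    # USD per GB out of AWS (first 10 TB tier, roughly)
backup_size = 500     # GB pulled out per backup run
runs_per_year = 52    # weekly off-cloud backup

per_run = backup_size * egress_rate
print(f"${per_run:.0f} per run, ${per_run * runs_per_year:.0f} per year")
# -> $45 per run, $2340 per year -- before you pay to store it elsewhere
```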

[–] squaresinger@lemmy.world 2 points 1 day ago (1 children)

Even large projects rarely have hundreds of GB of code. They might have hundreds of gigs of artifacts and history, but not all of that needs to be backed up. That's where tiered backup strategies come into play.

Code (or whatever else is most painful to recover) is backed up in e.g. git, with version history, in many different locations.

Artifacts either don't need a backup at all, or maybe one copy. If they get lost, they can be rebuilt.

Temporary stuff like build caches doesn't need backups at all.

You don't even need to back up the VMs. Backing up a setup script is enough. Sure, all of this is more complicated than just backing up your whole cloud storage space, but it also requires orders of magnitude less storage.
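
Sketched out, that tiering might look something like this (the tier names and policies are illustrative, not any particular tool's config):

```python
# Hypothetical tiered backup policy: the painful-to-recover stuff gets
# many copies with history; the rebuildable stuff gets little or nothing.
BACKUP_TIERS = {
    "code":        {"copies": 3, "history": True},   # git mirrors + offsite
    "vm_setup":    {"copies": 2, "history": True},   # setup scripts, not images
    "artifacts":   {"copies": 1, "history": False},  # rebuildable from code
    "build_cache": {"copies": 0, "history": False},  # never backed up
}

def describe(tier: str) -> str:
    p = BACKUP_TIERS[tier]
    if p["copies"] == 0:
        return f"{tier}: no backup, rebuild on demand"
    hist = "with history" if p["history"] else "latest only"
    return f"{tier}: {p['copies']} copies, {hist}"

for tier in BACKUP_TIERS:
    print(describe(tier))
```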

[–] Glitchvid@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

In this guy's specific case, it may be financially feasible to back up onto other cloud solutions, for the reasons you stated.

However, public cloud is used for a ton of different things. If you have 4 TiB of data in Glacier, you will be paying through the absolute nose to pull that data down into another cloud: highway robbery prices.
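
Ballpark math, assuming roughly $0.01/GB for a Glacier standard retrieval and ~$0.09/GB internet egress (both from memory; verify against current pricing):

```python
# Rough cost to pull 4 TiB out of Glacier to another cloud.
data_gb = 4 * 1024           # 4 TiB, treating GiB ~ GB for a rough estimate
retrieval = data_gb * 0.01   # thawing the data out of Glacier
egress = data_gb * 0.09      # moving it out of AWS entirely
print(f"~${retrieval:.0f} retrieval + ~${egress:.0f} egress = ~${retrieval + egress:.0f}")
# -> ~$41 retrieval + ~$369 egress = ~$410 for a single pull-down
```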

Further, as soon as you talk about something more than just code (say UGC, assets, databases), the amount of data needing to be "egressed" from the cloud balloons, as does the price.

[–] squaresinger@lemmy.world 1 points 1 day ago (1 children)

Retrofitting this stuff is of course difficult. Done from the beginning, it isn't that difficult or expensive.

4 TiB isn't that much. That's small enough to fit in a cold backup on a hard drive or two.

[–] Glitchvid@lemmy.world 3 points 1 day ago

Multi-cloud is far from trivial, which is why most companies... don't.

Even if you are multi-cloud, you will be egressing data from one platform to another and racking up large bills (imagine putting CloudFront in front of a GCS endpoint lmao), so you are incentivized to stick to a single platform. I don't blame anyone for being single-cloud, given the barriers the providers put up and how difficult maintaining your own infrastructure is.

Once you get large enough to afford tape libraries, then yeah, having your own offsite storage for large backups makes a lot of sense. Otherwise the convenience and reliability (when AWS isn't nuking your account) of managed storage is hard to beat: cold HDDs are not great, and M-Disc is pricey.