this post was submitted on 05 Aug 2025
102 points (97.2% liked)

A software engineer has warned against trusting cloud data storage services in a painstakingly detailed blog post describing their own “complete digital annihilation” at the hands of AWS admins. Developer Abdelkader Boudih, who writes under the pen name Seuros, says they had been a fee-paying AWS subscriber for a decade, with the cloud service becoming a firm part of their workflow. Suffice it to say, the developer’s long-standing relationship with AWS has now ended acrimoniously.

top 20 comments
[–] friend_of_satan@lemmy.world 8 points 6 days ago

Not your computer, not your data.

[–] bitcrafter@programming.dev 16 points 1 week ago (2 children)

On top of all the other horrors, am I the only one seriously bothered by the fact that every dry run is just a single fat-finger away from deleting all of a customer's data across all of AWS? Whenever I design a script to do something this dangerous, at the very least the default behavior is a dry run, so that actually making the changes requires passing an additional argument such as --confirm-deletion; for something this dangerous and apparently irreversible, I would probably also prompt the user to type "IAMSURE" before proceeding.
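
A minimal sketch of that kind of guard in Python; the find_resources/delete_resource helpers are hypothetical placeholders for the real lookup and deletion logic. The script defaults to a dry run, requires --confirm-deletion to change anything, and still asks for a typed confirmation before deleting.

```python
import argparse
import sys


def find_resources(customer_id: str) -> list[str]:
    """Hypothetical lookup; a real script would query the backing store."""
    return [f"{customer_id}/bucket-a", f"{customer_id}/bucket-b"]


def delete_resource(resource: str) -> None:
    """Hypothetical deletion; a real script would issue the destructive call."""
    print(f"deleted {resource}")


def main() -> None:
    parser = argparse.ArgumentParser(description="Delete a customer's resources (dangerous).")
    parser.add_argument("customer_id")
    parser.add_argument(
        "--confirm-deletion",
        action="store_true",
        help="actually delete; without this flag the script only prints a dry run",
    )
    args = parser.parse_args()

    targets = find_resources(args.customer_id)

    if not args.confirm_deletion:
        # Safe default: report what would happen and change nothing.
        print(f"[dry run] would delete {len(targets)} resources for {args.customer_id}")
        return

    # Final safeguard for an irreversible operation: require a typed confirmation.
    answer = input(f'Type "IAMSURE" to permanently delete {len(targets)} resources: ')
    if answer.strip() != "IAMSURE":
        sys.exit("Confirmation not given; nothing was deleted.")

    for resource in targets:
        delete_resource(resource)


if __name__ == "__main__":
    main()
```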

[–] Dultas@lemmy.world 15 points 1 week ago

Even AWS forces you to empty a bucket before deleting it, and confirms by making you type "delete" before it will remove the bucket.
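
The same constraint holds through the API; a rough boto3 sketch (bucket name hypothetical), since a delete on a non-empty bucket is rejected with BucketNotEmpty:

```python
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("example-bucket-to-remove")  # hypothetical bucket name

# S3 rejects DeleteBucket on a non-empty bucket, so everything inside
# has to be removed explicitly first.
bucket.objects.all().delete()          # current objects
bucket.object_versions.all().delete()  # old versions, if versioning is enabled
bucket.delete()                        # only now does this call succeed
```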

[–] fubarx@lemmy.world 4 points 1 week ago (1 children)

Protip: Never use the console or CLI.

Just use CDK. Test deleting and redeploying. Script the whole deployment process so it can be run multiple times a day against different accounts by different people. Never worry about fat-fingering anything.
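
As a rough illustration of that workflow (stack and bucket names hypothetical), a minimal CDK app in Python: the infrastructure is plain code, deploys and teardowns go through `cdk deploy` / `cdk destroy`, and stateful resources can be marked RETAIN so even a botched destroy leaves the data in place.

```python
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct


class StorageStack(cdk.Stack):
    """Hypothetical stack holding the customer-facing data."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        s3.Bucket(
            self,
            "CustomerData",
            versioned=True,
            # Keep the bucket (and its contents) even if the stack is destroyed.
            removal_policy=cdk.RemovalPolicy.RETAIN,
        )


app = cdk.App()
# The same code deploys to different stages/accounts by different people.
StorageStack(app, "storage-dev")
StorageStack(app, "storage-prod")
app.synth()
```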

[–] bitcrafter@programming.dev 1 points 6 days ago (1 children)

That sounds even more reasonable to me!

What does "CDK" refer to?

[–] bamboo@lemmy.blahaj.zone 3 points 6 days ago (1 children)

The AWS Cloud Development Kit: AWS's own infrastructure-as-code framework for defining and deploying resources from code.
[–] bitcrafter@programming.dev 2 points 6 days ago

Ah, so there's the added irony of AWS not even using its own tools to prevent these kinds of mishaps.

[–] bamboo@lemmy.blahaj.zone 16 points 1 week ago

AWS charges for every little thing, so for them to then delete your data with no warning is kinda insane. I will say, if the author thought AWS was bad, they're going to really feel it working on Oracle's platform for whatever tooling they're building to migrate out of AWS. In a few months there will be a similar blog post about Oracle suing them or something else insane. Sometimes the devil you know is better, I guess?

[–] tyler@programming.dev 11 points 1 week ago (1 children)

AWS very likely can recover all of their data; they probably just don't want to. We had a devops person at our company run a script that wiped out 95% of our Lambdas, 'irreversibly' according to the AWS docs. AWS spent two weeks with our devops team recovering as many of the Lambdas as possible. Most of the recovered Lambdas were just sent over to us as randomly identified zips, but we did get the majority of them back.

[–] shiftymccool@programming.dev 9 points 1 week ago (1 children)

I have to know: how do you lose Lambdas? You should still have the source code. Please tell me you didn't code them directly in the AWS console...

[–] tyler@programming.dev 2 points 5 days ago

I meant to respond to this yesterday. We didn't lose the Lambda code; we lost Lambda versions, which are immutable snapshots of a function's code and configuration. There is no way to restore these once they're deleted (hence immutable).

We had every Lambda version's code tagged in GitHub as a release, and while we could have redeployed them, it would have taken just as long if not longer, given how long our deployments for the Lambdas in question took (20 minutes to 1.5 hours depending on the Lambda).

There were a lot of suboptimal things that happened to make it a shitshow, but essentially:

  • we should have been using function aliases from the beginning (see the sketch after this list); our versions were referenced directly, so redeploying would have meant massive downstream DB changes just to point at the right Lambda versions.
  • AWS should have specified that what we were doing was not what they intended (they've updated the docs now, but at the time the docs literally just said that if you want immutable functions you can use function versions!).
  • we should have saved off the function .zips before deploying (we didn't think that was necessary because we had all the code, and the artifact was the least important part of our deploy).
  • we should have had our own AWS account rather than using the company's 'shared' account, which was how everything was done at the time.
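
A rough sketch of that alias pattern with boto3 (function and alias names hypothetical): downstream callers invoke the alias, so after a redeploy you repoint the alias instead of rewriting stored references to specific version numbers.

```python
import boto3

lam = boto3.client("lambda")

# Freeze the current $LATEST code/config as a new immutable version.
version = lam.publish_version(FunctionName="pricing-rules")["Version"]

# Point (or repoint) a stable alias at that version; callers use the alias ARN.
try:
    lam.create_alias(FunctionName="pricing-rules", Name="live", FunctionVersion=version)
except lam.exceptions.ResourceConflictException:
    # Alias already exists: just move it to the freshly published version.
    lam.update_alias(FunctionName="pricing-rules", Name="live", FunctionVersion=version)
```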

This all resulted in a dumb devops dude getting a ticket to clean up our dev account because it was running out of Lambda storage space. He cleaned up the dev account with a script that was built to be run only against dev. Then he decided that, even though the ticket said to just clean up dev, he would take a look at prod and clean that one up too.

Thus managing to take down the entire company's sales infrastructure.

The shared AWS account and the devops script to clean up Lambdas were built before I started at that company, but the rest of the code/architecture was mine and one other person's design. It worked really, really well for what it was built for (immutable rules for specific points in time), but there were a lot of shortcomings and things we missed that resulted in everything going badly that month.

[–] chonkyninja@lemmy.world 2 points 1 week ago (1 children)

Zero-depth article full of nothing. AKA, dumb developer didn't back his shit up somewhere safe.

[–] 4am@lemmy.zip 10 points 1 week ago (2 children)

Isn’t the whole idea of the cloud that you pay for it to be safe? Why bother if they’re not going to keep it safe for you?

[–] Dave@lemmy.nz 7 points 1 week ago (1 children)

Backups are about protecting your stuff from yourself as much as anything. If it's possible to delete every record of your stuff with one wrong key press, then you haven't backed it up properly.

[–] bitcrafter@programming.dev 9 points 1 week ago (1 children)

I suspect that if this person had known that using AWS was putting all of their data within one wrong key press of being completely deleted without recovery, then they would have reconsidered using AWS.

[–] Dave@lemmy.nz 2 points 1 week ago (1 children)

Maybe, but I dunno, AWS isn't advertised as a consumer cloud storage like OneDrive or Dropbox, right? It's object storage for people who understand technical things like this and who write programs that include things like a recycle bin for recovery.

[–] bitcrafter@programming.dev 6 points 1 week ago (1 children)

All of the diligence in the world on your end does not matter if, on the AWS end, the employees can and do delete all of your data via fat-fingering without involving you at all, which is what happened here.

[–] Dave@lemmy.nz 4 points 1 week ago (1 children)

I guess so. I dunno. A 3-2-1 backup (three copies, on two different media, one off-site) is pretty common around here. So even if someone deleted one copy, you'd have two left. Having all your data in a single place just seems like a bad idea (yes, I'm aware that this is the case for many users of cloud storage).

[–] bitcrafter@programming.dev 4 points 6 days ago

Sure, but the point is that he was supposedly paying AWS to keep multiple backups in multiple regions, which he very carefully set up to maximize redundancy. If, at the end of the day, there is no actual redundancy because AWS itself is effectively malicious, insofar as it will delete all of your data for no good reason at all and then blame it on you, then they are being very dishonest about their product.

[–] pulsewidth@lemmy.world 6 points 1 week ago* (last edited 1 week ago)

No. The 'whole idea' of the cloud is that it's cheaper than self-hosting on servers you manage yourself, whether in a DC or in-house. That's been its primary value proposition for the last 15+ years.

Rigorous backups and data safety are secondary concerns that you often pay for as an additional service or add-on feature with your cloud provider.