Not your computer, not your data.
Tech
A community for high quality news and discussion around technological advancements and changes
Things that fit:
- New tech releases
- Major tech changes
- Major milestones for tech
- Major tech news such as data breaches, discontinuation
Things that don't fit:
- Minor app updates
- Government legislation
- Company news
- Opinion pieces
On top of all the other horrors, am I the only one seriously bothered by the fact that every dry-run is just a single fat-finger away from deleting all of a customer's data across all of AWS? Whenever I design a script to do something this dangerous, at the very least the default behavior is a dry-run, so if you actually want it to go ahead and make the changes you have to pass an additional argument such as --confirm-deletion; for something this dangerous and apparently irreversible, I would probably also prompt the user to type "IAMSURE" before proceeding.
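A rough sketch of what I mean (argparse-based; delete_everything is a hypothetical stand-in for whatever the actual dangerous operation is):

```python
# Sketch of the "dry-run by default" pattern: destructive work only happens
# behind an explicit flag plus a typed confirmation phrase.
import argparse

def delete_everything(dry_run: bool) -> None:
    # Hypothetical stand-in for the real destructive operation.
    if dry_run:
        print("[dry-run] would delete all customer data")
        return
    print("deleting all customer data...")

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--confirm-deletion",
        action="store_true",
        help="actually perform the deletion; without this flag only a dry-run happens",
    )
    args = parser.parse_args()

    if not args.confirm_deletion:
        delete_everything(dry_run=True)
        return

    # Extra guard for an irreversible action: make the operator type a phrase.
    if input('Type "IAMSURE" to proceed: ') != "IAMSURE":
        print("Aborted.")
        return
    delete_everything(dry_run=False)

if __name__ == "__main__":
    main()
```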
Even AWS forces you to empty a bucket before deleting it, and confirms the action by making you type "delete" to delete the bucket.
Protip: Never use the console or CLI.
Just use CDK. Test deleting and redeploying. Script the whole deployment process so it can be updated multiple times a day against different accounts by different people. Never worry about fat-fingering anything.
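Something like this, to give the idea (a minimal CDK v2 sketch in Python; the stack and bucket names are made up). The RemovalPolicy.RETAIN bit is what keeps a fat-fingered destroy from taking the data with it:

```python
# Minimal CDK v2 (Python) sketch: infrastructure is declared in code and
# deployed via `cdk deploy`, instead of ad-hoc console/CLI actions.
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # RETAIN means a `cdk destroy` or stack deletion leaves the bucket
        # (and its data) in place rather than deleting it with the stack.
        s3.Bucket(
            self,
            "CustomerData",
            removal_policy=RemovalPolicy.RETAIN,
            versioned=True,
        )

app = App()
DataStack(app, "data-stack")
app.synth()
```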
That sounds even more reasonable to me!
What does "CDK" refer to?
Cloud Development Kit: https://aws.amazon.com/cdk/
Ah, so add to that the irony of AWS not even using its own tools to prevent these kinds of mishaps.
AWS charges for every little thing, so for them to then delete your data with no warning is kinda insane. I will say, if the author thought AWS was bad, they're going to really feel it working on Oracle's platform for whatever tooling they're building to migrate out of AWS. In a few months there will be a similar blog post about Oracle suing him or something else insane. Sometimes the devil you know is better I guess?
AWS very likely can recover all of their data; they probably just don't want to. We had a devops person at our company run a script that wiped out 95% of our Lambdas, 'irreversibly' according to AWS docs. AWS spent 2 weeks with our devops team to recover as many of the Lambdas as possible. Most of the recovered Lambdas were just sent over to us as randomly identified zips, but we did get the majority of them back.
I have to know, how do you lose Lambdas? You should still have the source code. Please tell me you didn't code them directly in the AWS console...
I meant to respond to this yesterday. We didn't lose the lambda code, we lost lambda versions, which are immutable versions of your Lambda. There is no way to restore these (hence immutable).
We had every Lambda version's code tagged in GitHub as a release, and while we could have redeployed them it would have taken just as long, if not longer, due to how long our deployments for the Lambdas in question were (20 minutes to 1.5 hours depending on the Lambda).
There were a lot of suboptimal things that happened to make it a shitshow, but essentially:
- we should have been using function aliases from the beginning; our versions were referenced directly, so redeploying would have required massive downstream DB changes to point everything at the right Lambda versions (see the sketch after this list).
- AWS should have specified that what we were doing was not what they intended (they've updated the docs now, but at the time their docs literally just said if you want immutable functions you can use function versions!).
- we should have saved off the function.zips before deploying (we didn't think that was necessary because we had all the code and the artifact was the least important part of our deploy)
- we should have had our own AWS account rather than using the company's 'shared' account which was how everything was done at the time.
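For anyone curious what the alias fix in the first bullet looks like, here's a rough boto3 sketch (the function and alias names are made up): downstream systems reference the alias, never a raw version number, so a redeploy only moves the alias.

```python
# Sketch: publish an immutable Lambda version, then point a stable alias at it.
import boto3

client = boto3.client("lambda")

# Publish an immutable version from the current $LATEST code.
version = client.publish_version(FunctionName="sales-rules")["Version"]

# Point (or create) the "live" alias at that version. Callers invoke
# "sales-rules:live" instead of a numeric version, so losing or replacing
# a version doesn't require rewriting every downstream reference.
try:
    client.update_alias(FunctionName="sales-rules", Name="live", FunctionVersion=version)
except client.exceptions.ResourceNotFoundException:
    client.create_alias(FunctionName="sales-rules", Name="live", FunctionVersion=version)
```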
This all resulted in a dumb devops dude getting a ticket to clean up our dev account due to running out of lambda storage space. He cleaned up the dev account with a script that was built to only be run against dev. Then he decided even though the ticket said just clean up dev, he would take a look at prod and clean that one up too.
Thus managing to take down the entire company's sales infrastructure.
The shared aws account and the devops script to clean up lambdas was built before I started at that company, but the rest of the code/architecture was mine and one other person's design. It worked really really well for what it was built for (immutable rules for specific points in time), but there were a lot of shortcomings and things we missed that resulted in everything going badly that month.
Zero depth article full of nothing. Aka, dumb developer didn’t back his shit up somewhere safe.
Isn’t the whole idea of the cloud that you pay for it to be safe? Why bother if they’re not going to keep it safe for you?
Backups are about protecting your stuff from yourself as much as anything. If it's possible to delete all record of all your stuff with one wrong key press then you haven't backed it up properly.
I suspect that if this person had known that using AWS was putting all of their data within one wrong key press of being completely deleted without recovery, then they would have reconsidered using AWS.
Maybe, but I dunno, AWS isn't advertised as a consumer cloud storage like OneDrive or Dropbox, right? It's object storage for people who understand technical things like this and who write programs that include things like a recycle bin for recovery.
All of the diligence in the world on your end does not matter if, on the AWS end, the employees can and do delete all of your data via fat-fingering without involving you at all, which is what happened here.
I guess so. I dunno. A 3-2-1 backup is pretty common around here. So even if someone deleted one copy, you'd have two left. Having a single place with all your data in the world just seems like a bad idea (yes, I'm aware that this is the case for many users of cloud storage).
Sure, but the point is that he was supposedly paying AWS to keep multiple backups in multiple regions, which he very carefully set up to maximize redundancy. If, at the end of the day, there is no actual redundancy because AWS itself is effectively malicious, insofar as it will delete all of your data for no good reason at all and then blame it on you, then they are being very dishonest about their product.
No. The 'whole idea' of the cloud is that it's cheaper than self-hosting on servers you manage yourself, whether in a DC or in-house. That's been its primary value proposition for the last 15+ years.
Rigorous backups and data safety are secondary concerns that you often pay for as an additional service or add-on feature with your cloud provider.