you’d have to be mad to willingly pipe a script to bash without checking it. holy shit
And you'd better inspect and execute a downloaded copy, because a malicious actor can serve a different file to curl/wget than to your browser
They can even serve a different file for curl vs curl|bash
Yeah, that too. I remember the demo was pretty impressive ten or fifteen years ago!
Does curl send a different useragent when it's piped?
Searching for those words just vomits 'hOW to SeT cUrL's UseRaGenT' blog spam.
It's timing-based. When piped a script, bash executes each line completely before taking the next line from the input, and curl has a limited output buffer.
- Insert an operation that takes a long time: a sleep, or if you want it less obvious, a download, an unzip operation, apt update, etc.
- Fill the buffer with more bash commands.
- Measure on the server if at some point curl stops downloading the script.
- Serve a malicious payload.
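The line-at-a-time execution this relies on is easy to see locally. In this sketch a command group stands in for the web server feeding bash through a pipe (a simulation of steps one and three, not the actual attack):

```shell
# bash executes a piped script line by line as it arrives, so the
# sender can keep deciding what to transmit while earlier lines are
# already running. The command group plays the role of the server.
{
  echo 'echo "first line executed"'
  sleep 1                            # the "server" stalls; the line above has already run
  echo 'echo "second line executed"' # ...and only now does bash receive this line
} | bash
```

A real attacker does the same thing server-side: throttle the response mid-script, watch whether the client keeps reading, and choose the remaining bytes accordingly.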
Oh that is clever.
Not that I know of, which means I can only assume it'll be a timing-based attack.
With strategic use of sleep statements in the script you should stand a pretty good chance of detecting the HTTP download blocking while the script execution is paused.
If you were already shipping the kind of script that unpacks a binary payload from the tail end of the file and executes it, it's well within the realm of possibility to swap it for a different one.
Yep! That's what the post shows.
I created a live demo file, too, so that you can actually see the difference based on how you request the file.
Is it different from running a bash script you downloaded without checking it? E.g. the installer that you get with GOG games?
Genuine question, I'm no expert.
I have no problem with running scripts from the internet, AFTER checking them. Do NOT blindly run a script you found on the internet. As others have said: download them, then check them, and only then run them if they're safe. NEVER pipe to bash, ever.
Ok but not everyone has that skill. And anyway, how is this different to running a binary where you can't check the code?
It's exactly the same. Don't run binaries you don't trust fully. But I get what you mean. miley_cyrus_nude.jpg.exe is probably gonna end badly.
Yeah I get that, but I would install docker, cloudflared, etc by piping a convenience script to bash without hesitation. I've already decided to install their binary, I don't see why the install script is any higher risk.
I know it's a controversial thing for everyone to make their own call on, I just don't think the risk for a bash script is any higher than a binary.
I won't lie, I use curl | bash as well, but I do dislike it for two reasons:
Firstly, it is much, much easier to compromise the website hosting than the binary itself, usually. Distributed binaries are usually signed by multiple keys from multiple servers, resulting in them being highly resistant to tampering. Reproducible builds (two users compiling a program get the same output) make it trivial to detect tampering as well.
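The tamper detection that checksums and signatures provide can be sketched with coreutils alone. The filenames here are made up for the demo; in real distribution the checksum (or a GPG signature) is published by the maintainer out-of-band, so an attacker who swaps the artifact on one server can't also fix the checksum everywhere:

```shell
# Stand-in release artifact plus its published checksum; "sha256sum -c"
# fails loudly if the artifact no longer matches what was published.
echo "pretend binary contents" > app.bin
sha256sum app.bin > app.bin.sha256    # the maintainer publishes this separately

sha256sum -c app.bin.sha256           # prints "app.bin: OK"

echo "attacker payload" >> app.bin    # simulate tampering
sha256sum -c app.bin.sha256 || echo "tampering detected"
```

A bare `curl | bash` script has no equivalent: there is nothing independent to check the bytes against.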
On the other hand, website hosting infrastructure is generally nowhere near as secure. It's typically one or two VPSes, and there's no signature or verification that the content is "official". So even if I can't tamper with the binary, I can still tamper with the bash script to add extra goodies to it.
Secondly (though not really relevant to what OP is talking about), just because I trust someone to give me a binary in a mature programming language they have experience writing in, doesn't mean I trust them to give me a script in a language known for footguns. A Steam bug in their bash script once deleted a user's home directory. There have also been issues with AUR packages, which are basically bash scripts, breaking people's systems. When it comes to user/community created scripts, I mostly trust them not to be malicious, and I am more fearful of a bug or mistake screwing things up. But at the same time, I have little confidence in my ability to spot these bugs.
Generally, I only make an exception for running bash installers if the program being installed is a "platform" I can use to install more software. K3s (a Kubernetes distro) and the Nix package manager are examples. If I can install something via Nix or Docker, then it gets installed that way and not via curl | bash. Not every developer under the sun should be given the privilege of running a bash script on my system.
As a sidenote, docker doesn't recommend their install script anymore. All the instructions have been removed from the website, and they recommend adding their own repos instead. Personally, I prefer to get it from the distro's repositories, as that's usually the simplest and fastest way to install docker nowadays.
> Firstly, it is much, much easier to compromise the website hosting than the binary itself, usually. Distributed binaries are usually signed by multiple keys from multiple servers, resulting in them being highly resistant to tampering. Reproducible builds (two users compiling a program get the same output) make it trivial to detect tampering as well.
Yeah this is a fair call.
> But at the same time, I have little confidence in my ability to spot these bugs.
This is the key thing for me. I am not likely to spot any issues even if they were there! I'd only be scanning for external connections or obviously malicious code, which I do when I don't have as much trust in the source.
> As a sidenote, docker doesn't recommend their install script anymore.
Yeah I used it as an example because there are very few times I ever remember piping to bash, but that's probably the most common one I have done in the past.
It's really only about trusting the source. Your operating system surely has thousands of scripts that you've never read and never checked. And wouldn't have time to. And people don't complain about that.
But it's really bad practice to run random things from random sites. So the practice of downloading a script and running it is frowned upon. Mostly as a way of maintaining good security hygiene.
And it's wild how much even that has been absolutely normalized by all these shitty lazy developers and platforms. Vibe coding is just going to make it worse. All these programs that look nice on the surface and are just slop on the inside. It's going to be a mess.
The post is specifically about how you can serve a totally different script than the one you inspect. If you use curl to fetch the script from the terminal, the webserver can send a different script than it sends to a browser, based on the User-Agent header.
And whether or not you think someone would be mad to do it, it's still a widespread practice. The article mentions that piping curl straight to bash is already standard procedure for Proxmox helper scripts. But don't take anyone's word for it, check it out:
https://community-scripts.github.io/ProxmoxVE/
It's also the recommended method for PiHole:
The reality is a lot of newcomers to Linux won't even understand the risks involved; they run it because that's what they're told or shown to do. That's what I did for pihole many years ago too, I'll admit.
I've been accused of "gate keeping" when I tell people that this is a shitty way to deploy applications and that nobody should do it.
Users are blameless, I find the fault with the developers.
Asking users to pipe curl to bash because it's easier for the developer is just the developer being lazy, IMO.
Developers wouldn't get a free pass for taking lazy, insecure shortcuts in programming, I don't know why they should get a free pass on this.
Most developers I've looked at would happily just paste the curl|bash thing into the terminal.
I often would skim the script in the browser, but (a) this post shows that's not foolproof, and (b) a sufficiently sophisticated malicious script would fool a casual read anyway.
In addition to the other examples, it's also in the default installation instructions for Node.js: they use it to install nvm.
You can't even blame someone non-technical for falling for this if they haven't been explicitly informed; it's being reinforced as completely normal by too many "reputable" projects.
I'm pretty sure brew on mac is the same too
Yes, this has risks. At the same time, any time you run any piece of software you face the same risks, especially if that software is updated from the internet. Take a look at the NIST docs on software supply chain risks.
But those are two very different things. I can very easily give you a one-liner using curl|bash that will compromise your system. To get the same level of compromise through a properly authenticated channel such as apt/pacman/etc, you would need to either compromise their private keys and attack before they notice and rotate them, or sneak malicious code into an official package; either of those is orders of magnitude more difficult than writing a simple bash script.
This is a bit like saying crossing the street blindfolded while juggling chainsaws and crossing the street on a pedestrian crossing while the light is red for cars both carry risk. Sure. One's a terrible idea though.
Oh, people will keep using it no matter how much you warn them.
Proxmox-helper-scripts is a perfect example. They'll agree with you until that site comes up, and then it's "it'll never, ever get hacked and subverted, nope, can't happen, impossible".
Wankers.
I was looking at that very thing last night.
But then I realized, "why can't immich just create usable packages like we had before?" and noped back out.
But, for a moment, I was sure a little inspection and testing would make the Internet equivalent of NYC MTA coin-sucking magically safe. It looked so eeeeasy.
Use our easy bash oneliner to install our software!
Looks inside script
if [ "$(command -v apt-get)" ]; then apt-get install app; else echo "Unsupported OS"; fi
Still less annoying than trying to build something from source where the dev claims it has like 3 dependencies but in reality it requires 500MB of random packages you've never even heard of, all while their build system doesn't do any pre-compile checking, so the build fails after a solid hour of compilation.
Curl bash is no different than manually running an sh script you don't know…
True, but this is specifically about scripts you think you know, and how curl bash might trick you into running a different script entirely.
Never have I ever piped curl to bash.
Anytime I see a project that had this in their install instructions, I don't use that project.
It shows how dumb the devs are
Yes, this is the correct approach from a security perspective.
An alternative that will avoid the user-agent trick is curl | cat, which just prints the result of the first command to the console. curl > filename.sh will write it to a script file that you can review, then mark executable and run if you deem it safe. That's safer than doing a curl | cat followed by a curl | bash, because the second curl could still return a different set of commands.
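A sketch of that save-review-run flow; the "remote" script is simulated with a local file:// URL so the example is self-contained (assumes curl is installed; against a real server you'd use an https:// URL):

```shell
# Simulate the remote script with a local file for the demo.
printf 'echo "installer ran"\n' > /tmp/fake-remote.sh

curl -fsS "file:///tmp/fake-remote.sh" -o install.sh  # 1. save, don't pipe
cat install.sh                                        # 2. review the exact bytes
bash install.sh                                       # 3. run only what you reviewed
```

The key property is that step 3 runs the same bytes you reviewed in step 2; the server never gets a second request it could answer differently.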
You can control the user agent with curl: spoof a browser's user agent for one fetch, then do a second fetch with curl's normal user agent, and compare the results to detect malicious URLs in an automated way.
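A sketch of that comparison; a file:// URL keeps the demo self-contained, and a real check would hit the https:// URL instead:

```shell
# Fetch the same URL twice, once with a browser-like User-Agent and once
# with curl's default, then diff the two responses.
printf 'echo ok\n' > /tmp/script.sh
url="file:///tmp/script.sh"
curl -s -A "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0" \
     "$url" -o ua_browser.sh
curl -s "$url" -o ua_curl.sh
if diff -q ua_browser.sh ua_curl.sh >/dev/null; then
  echo "same content for both user agents"
else
  echo "WARNING: content differs by user agent"
fi
```

Even a clean diff is only weak evidence, though: as discussed above, a malicious server can fingerprint clients on more than the User-Agent (timing, for one), so this catches the lazy version of the attack, not the clever one.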
A command line analyzer tool would be nice for people who aren't as familiar with the commands and arguments (and to defeat obfuscation), though I believe the general problem is undecidable, so it won't likely ever be completely foolproof. Maybe it could be if the script were run in a sandbox to observe what it does, instead of just being analyzed.
You mean blindly running code is bad? /s
> a more cautious user might first paste the url into the address bar of their web browser to see what the script looks like before running it.
Wow, I never thought anyone would be that dumb.
Why wouldn't they just wget it, read it, and then execute it?
Oh, the example in the article is the nice version of this attack.
Checking the script as downloaded by wget or curl and then piping curl to bash is still a terrible idea, as you have no guarantee you'll get the same script in both cases:
I never thought about opening it in a browser. I always used curl to download such a script and view it where it was supposed to be run.
I'm a bit lost with
> a more cautious user might first paste the url into the address bar of their web browser to see what the script looks like before running it.
You... You just... You just dump the curl output to a file, examine that, and then run it if it's good.
Just a weird imagined sequence to me.
@K3can@lemmy.radio love the early 2000s stylesheet/color theme of your blog 🙂