moonpiedumplings

joined 2 years ago
[–] moonpiedumplings@programming.dev 7 points 2 years ago (1 children)

https://www.fieggen.com/shoelace/ianknot.htm

Also relevant: https://www.fieggen.com/shoelace/grannyknot.htm

I used to triple knot my shoes and they would still come untied. Then I switched to the ian knot, and my shoes haven't come untied by themselves in forever.

A tip I have is to move away from manjaro.

When you use a rolling release, you lose one of the main features of stable release distros: automatic, unattended upgrades. AFAIK, every stable release distro has those, and none of the rolling releases do (except maybe openSUSE's new Slowroll and CentOS Stream, but I wouldn't recommend or use them).
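On Debian, for instance, unattended upgrades are just a package plus a two-line config file. This is a sketch from memory; the package name and config path are the standard ones, but double check against the Debian wiki:

```shell
# install the updater, then enable it via debconf
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# equivalently, /etc/apt/apt.conf.d/20auto-upgrades should contain:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
```

After that, security updates get applied on their own with no babysitting.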

Manjaro has other issues too, but that's the big one.

Although I use arch on my laptop, I run debian on my server because I don't want to have to baby it, especially since I primarily access it remotely. Automatic upgrades remove one complication, letting me focus on the server itself.

As for application deployment itself, I recommend using application containers, either via docker or podman. There are many premade containers for those platforms, for apps like jellyfin, or the various music streaming apps people use to replace spotify (I can't remember any off the top of my head, but I know you have lots of options).

However, there are two caveats to docker (not podman) people should know:

  • Docker containers don't auto-update, though you can use something like Watchtower to update them automatically. Podman has a `podman auto-update` command that you can configure to run on a schedule.
  • Docker bypasses your firewall. If you forward port 80, Docker publishes it straight past the firewall. Most Linux firewalls work by managing iptables or nftables rules under the hood, but Docker edits those rules directly. This has security implications: I've seen many container services end up on the public internet that people never intended to expose.
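A rough sketch of both auto-update approaches (image names and labels are the upstream defaults as I remember them; verify against the Watchtower and Podman docs):

```shell
# Docker: Watchtower watches the socket and updates running containers
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower

# Podman: opt containers in with a label...
podman run -d --label io.containers.autoupdate=registry \
  --name jellyfin docker.io/jellyfin/jellyfin
# ...then run updates on a schedule, e.g. via the bundled systemd timer
systemctl enable --now podman-auto-update.timer
```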

Podman, however, respects your firewall rules. Podman isn't perfect though: some apps won't run in podman containers, although my use case is a little niche (the Greenbone vulnerability scanner).
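If you do stick with Docker, one way to keep its iptables rules from exposing a service is to publish the port on localhost only. The service and port here are just an example:

```yaml
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin
    ports:
      - "127.0.0.1:8096:8096"  # reachable from this machine only, not the internet
```

Then you put a reverse proxy in front of it, or access it over a VPN.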

As for where to start, projects like linuxserver provide podman/docker containers, which you can use to deploy many apps fairly easily once you learn how to launch them with a compose file. Check out the dockerized Nextcloud they provide. Nextcloud is a google drive alternative, although people sometimes complain about it being slow. I don't know about the quality of linuxserver's Nextcloud image, so you'd have to do some research on that and find a good docker container.
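To give an idea of what deploying one of these looks like, here's a minimal compose sketch based on linuxserver's Nextcloud image. The env vars and volume paths follow linuxserver's usual conventions, but check their docs for the current ones:

```yaml
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud
    environment:
      - PUID=1000   # run as your user's uid/gid so files stay owned by you
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config:/config
      - ./data:/data
    ports:
      - "443:443"
    restart: unless-stopped
```

Then `docker compose up -d` (or `podman-compose up -d`) brings it up.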

[–] moonpiedumplings@programming.dev 2 points 2 years ago (1 children)

Python in Excel requires Internet access because calculations run on remote servers in the Microsoft Cloud. The calculations are not run by your local Excel application. 

From: https://support.microsoft.com/en-us/office/troubleshoot-python-in-excel-errors-7736520d-47ef-43a8-b640-d826afb63249

[–] moonpiedumplings@programming.dev 0 points 2 years ago (1 children)

Because much of mozilla's funding is from a deal with google, that's why.

US$300 million annually. Approximately 90% of Mozilla's royalties revenue for 2014 was derived from this contract

From https://en.wikipedia.org/wiki/Mozilla_Foundation

A lot of money, but not enough to actually do a lot. They keep cutting features their "customers" like. Why?

Because development is expensive.

Google props mozilla up to pretend they don't have a monopoly on the internet. Just enough money to barely keep up, not enough to truly stay competitive.

Mozilla wants to not rely on google money, so they are trying to expand their products. AI is overhyped, but still useful, and something worth investing in.

[–] moonpiedumplings@programming.dev 4 points 2 years ago (4 children)

Mozilla: ignores years of customer complaints and requests

Are these customers donating, or purchasing mozilla products or services so that mozilla doesn't have to rely on google's donations?

Mozilla: creates new product nobody asked for

https://github.com/Mozilla-Ocho

Nearly 10k and 400 stars on those respective repos.

A way to run a large language model on any operating system, in a simple, local, and privacy-respecting manner?

For linux we have docker, but Windows users were starving for a good way to do this, and even on linux, removing the step of configuring docker (or other container runtimes) to work with nvidia GPUs is nice.

And it's still FOSS stuff they aren't being paid for, currently. But there are plenty of ways to monetize this.

Here's an easy one: tie in the VPN service they have to let you access the web UI of the computer running the llamafile remotely. Configure something like end-to-end encryption or NAT traversal (so not even mozilla can sniff the traffic), and you end up with a private LLM you can access remotely.

With this, maybe they can afford some actual development on firefox, without having to rely on google money.

Do you have any other book recommendations? Although I dislike the trope of applying actual scientific knowledge, since characters get very OP very quickly, I love seeing characters use the scientific method to figure out what they can or can't do.

Quantum League

I looked up the book description, and a strong sense of deja vu hit me at the word "actuator"... I think I've read this book before.

Currently reading Industrial Strength magic by Macronomicon, and it scratches this itch for me, but waiting for chapter updates, even when daily, is so painful.

[–] moonpiedumplings@programming.dev 2 points 2 years ago* (last edited 2 years ago) (1 children)

The guide won't work. Grub attempts to verify everything in /boot, even if it is encrypted, which is pointless for a desktop use case.

https://moonpiedumplings.github.io/playground/arch-secureboot/

Original guide I followed: https://wejn.org/2021/09/fixing-grub-verification-requested-nobody-cares/

[–] moonpiedumplings@programming.dev 1 points 2 years ago* (last edited 2 years ago) (1 children)

They could. But in countries where internet access is restricted by the authorities, running more than an insignificant amount of traffic over a VPN can be noticeable, even with protocols stealthy enough to be indistinguishable from ordinary website (http/s) traffic... and being noticed can get you killed.

Snowflake, on the other hand, routes traffic through proxies run by users of the Snowflake browser extension, who act as entry points. It's named that because connections are ephemeral and last only a short time, like snowflakes. This makes the traffic much harder to distinguish.

It's not only about what the internet traffic is, it's also about where it's going.

And of course, the how is relevant too. Not many people want to spend the time to set up an ssl vpn (and multiple people using it makes it easier to spot).

You need to understand what you're asking when you suggest people set up their own proxy. You're asking them to learn a skill, most likely in their free time (free time and energy they may not even have), and without many resources to learn (censored internet), and then rest their lives and livelihoods on that skill. Depending on the regime, maybe the lives of their friends and family, as well.

Comparatively, it's like two clicks to select snowflake as an entrypoint in the tor browser configuration options.

[–] moonpiedumplings@programming.dev 2 points 2 years ago (1 children)

Yeah, unintentional bugs are much easier to deal with than maliciousness, like replacing the "file upload" button with a "buy Nitro" prompt, or Discord-in-the-browser's audio being finicky (a dark pattern: you don't get this problem on Element or in the Discord app).

Of course, there are unintentional bugs as well, on top of maliciousness.

[screenshot]

Lmao. I'm guessing this is because they've begun to use LLMs for moderation (maybe trying to replace real humans?), but LLMs can't really count.

[–] moonpiedumplings@programming.dev 3 points 2 years ago* (last edited 2 years ago) (2 children)

The tldr as I understand it is that Mac M1/M2 devices have unified memory: the GPU shares the same physical RAM as the CPU, rather than having separate vram. This lets LLM models run on the gpu of those chips with access to all of that memory, allowing you to run bigger models on smaller devices.

Llama.cpp was the software users originally did this with. I can't find the original guide/article I looked at, but here is a github gist where the commenters have done benchmarks:

https://gist.github.com/cedrickchee/e8d4cb0c4b1df6cc47ce8b18457ebde0
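From what I remember, getting started looked roughly like this. Flags and model filenames change between llama.cpp versions (newer releases renamed `main` to `llama-cli`), so treat this as a sketch:

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make   # Metal support is built in on Apple Silicon

# run a quantized model; -ngl offloads layers to the GPU
./main -m models/llama-2-7b.Q4_K_M.gguf -ngl 99 -p "Hello"
```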

[–] moonpiedumplings@programming.dev 2 points 2 years ago (1 children)

Did you test with different kernels? Them using a custom scheduler that prioritizes desktop applications might cause background things to run slower.

Plus, the use of ananicy (which automatically adjusts process priorities) limits stuff like that as well.

I use cachyos because they set up zram and uksmd by default. That's ram compression and deduplication, and it's pretty powerful in my experience. If you're using cachyos, then `uksmdstats` and `zramctl` can give you an idea of how much you are saving.
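If you want to check for yourself, something like this works (`zramctl` and `swapon` are standard util-linux tools; `uksmdstats` ships with cachyos, from memory):

```shell
zramctl         # per-device compressed vs. uncompressed size
swapon --show   # confirms the zram device is actually used as swap
uksmdstats      # cachyos tool: pages deduplicated by uksmd
```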
