terribleplan

joined 2 years ago
[–] terribleplan@lemmy.nrd.li 8 points 2 years ago

So, hear me out... What if we put a scheme in place where anyone who wanted to use the API had to pay for access? And then we charge like 20x what we should to put them out of business. I am sure that would work out well.

[–] terribleplan@lemmy.nrd.li 2 points 2 years ago (3 children)

If you're taking that approach, make sure you shut down the stack before you copy the data over so everything gets copied consistently (e.g. the DB isn't in the middle of a write), and yes, it should pretty much be that easy.
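As a minimal sketch of that sequence, assuming a compose-based stack whose volumes are bind mounts under the project directory (the paths and hostname here are placeholders, adjust to your setup):

```python
import subprocess

# Placeholder paths/hostname -- adjust to wherever your compose project and data live.
OLD_STACK = "/opt/lemmy"             # compose project directory on the current host
NEW_HOST = "newbox.example.com"      # destination host

def run(*cmd, **kwargs):
    """Run a command and fail loudly if it errors."""
    subprocess.run(cmd, check=True, **kwargs)

# 1. Stop the whole stack so nothing (especially the DB) is mid-write.
run("docker", "compose", "down", cwd=OLD_STACK)

# 2. Copy the project directory (compose file + bind-mounted volumes) to the new host.
run("rsync", "-aHAX", "--numeric-ids", f"{OLD_STACK}/", f"{NEW_HOST}:{OLD_STACK}/")

# 3. Bring it back up over there.
run("ssh", NEW_HOST, f"cd {OLD_STACK} && docker compose up -d")
```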

[–] terribleplan@lemmy.nrd.li 3 points 2 years ago

Nope, not at the DNS level. All the posts you are reading are cached by whatever instance your account is on. Basically the only thing served from the remote instance is full-size media uploaded to that instance; even thumbnails are served from whatever instance you use. Mastodon/Akkoma/etc. can be set up to proxy even full-size media for users, which is a feature I imagine will eventually make its way into Lemmy. Your best bet at the moment is to find an instance that defederates the ones you don't want to see (or run your own and do so yourself). I know "blocking" an instance is an often-requested feature, so it may end up in Lemmy itself at some point.
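If you want to see what a given instance already defederates before making an account there, that list is public. A quick sketch (the instance name is a placeholder, and the exact response shape varies a bit between Lemmy versions, so treat the details as assumptions):

```python
import requests

INSTANCE = "https://lemmy.example"  # candidate instance you're considering (placeholder)

resp = requests.get(f"{INSTANCE}/api/v3/federated_instances", timeout=30)
resp.raise_for_status()
federated = resp.json().get("federated_instances") or {}

# "blocked" is the defederation list; entries may be plain domains or objects
# with a "domain" field depending on the Lemmy version.
blocked = federated.get("blocked", [])
print([b["domain"] if isinstance(b, dict) else b for b in blocked])
```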

[–] terribleplan@lemmy.nrd.li 5 points 2 years ago* (last edited 2 years ago) (1 children)

Whoo, can't wait for this season of "Wait, I thought we made progress last episode/chapter!?"

I am a bit behind on the manga, but it has been really hard to stay motivated to read it. It feels like any minuscule piece of progress is followed by immediate regression. I was very much in the mindset of "Fuck you, I'll see you next week" for a while, haha.

I'll comment my thoughts after I get around to watching the episode a bit later today.

[–] terribleplan@lemmy.nrd.li 5 points 2 years ago

Lemmy and Akkoma, both in docker with Traefik in front.

[–] terribleplan@lemmy.nrd.li 4 points 2 years ago

Ext4 because it is rock solid and a reasonable foundation for Gluster. Moving off of ZFS to scale beyond what a single server can handle. I would still run ZFS for single-server many-drive situations, though mdadm is honestly pretty decent too.
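For anyone curious what that looks like in practice, here's a rough sketch of ext4 bricks backing a replicated Gluster volume. Hostnames, devices, and the volume name are all made up, and I'm just driving the usual CLI from Python for illustration:

```python
import subprocess

def sh(cmd: str) -> None:
    """Run a shell command, raising if it fails."""
    subprocess.run(cmd, shell=True, check=True)

# On each storage host: format a data drive with ext4 and mount it as a brick.
# Device, mount points, and hostnames below are all placeholders.
sh("mkfs.ext4 /dev/sdb1")
sh("mkdir -p /data/brick1 && mount /dev/sdb1 /data/brick1")

# On one host: peer the others, then build a replicated volume across the bricks.
sh("gluster peer probe storage2 && gluster peer probe storage3")
sh("gluster volume create gv0 replica 3 "
   "storage1:/data/brick1/gv0 storage2:/data/brick1/gv0 storage3:/data/brick1/gv0")
sh("gluster volume start gv0")

# On clients (e.g. the mini-PCs): mount the volume over the network.
sh("mount -t glusterfs storage1:/gv0 /mnt/gluster")
```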

[–] terribleplan@lemmy.nrd.li 9 points 2 years ago* (last edited 2 years ago)

Nope. PETG is maybe the easiest "safer" option, but AFAIK there isn't a truly food-safe filament. Also, 3D-printed things are basically impossible to clean without extensive post-processing (probably including coating them in something), so they're "safer" for single use, pretty much.

[–] terribleplan@lemmy.nrd.li 2 points 2 years ago* (last edited 2 years ago) (1 children)

A few of these servers were stacked on top of each other (and a monitor box to get the stack off the ground) in a basement for several years; it's been a journey.

[–] terribleplan@lemmy.nrd.li 3 points 2 years ago

No. - sent from my iNstance

[–] terribleplan@lemmy.nrd.li 3 points 2 years ago* (last edited 2 years ago)

Things don’t get backfilled, so until a new action happens on an old post/comment/etc they won’t show up on your instance. New things should make their way in eventually though.

Taking the link of a specific post/comment from the community instance and searching for it from your instance should populate it on your instance, just like you probably had to do to get this community to show up so you could subscribe/post at all.
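If you'd rather script that than paste links into the search box, something along these lines should work. It's a sketch against Lemmy's HTTP API; the instance, token, and URL are placeholders, and the auth handling differs a bit between versions, so treat the details as assumptions:

```python
import requests

INSTANCE = "https://lemmy.nrd.li"                  # your home instance
JWT = "your-login-token"                           # placeholder; obtained via /api/v3/user/login
POST_URL = "https://lemmy.world/post/123456"       # example link copied from the community's instance

# Asking your own instance to resolve the remote URL makes it fetch and cache the object,
# which is the same thing that happens when you paste the link into the search box.
resp = requests.get(
    f"{INSTANCE}/api/v3/resolve_object",
    params={"q": POST_URL},
    headers={"Authorization": f"Bearer {JWT}"},    # newer versions; older ones took an `auth` query param
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```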

There are backfill tools/scripts, but unless you really want old posts I wouldn't use them. It unnecessarily increases the load on already struggling popular/overloaded instances like lemmy.world.

[–] terribleplan@lemmy.nrd.li 5 points 2 years ago (3 children)

Business in the front:

  • Mikrotik CCR2004-1G-12S+2XS, acting as the router. The 10g core switch plugs into it, as does the connection to upstairs.
  • 2u cable management thing
  • Mikrotik CRS326-24S+2Q+, most 10g-capable things hook into this; it uses its QSFP+ ports to uplink to the router and downlink to the (rear) 1g switch.
  • 4u with a shelf holding 4x mini-PCs; most of them have a super janky 10g connection via an M.2-to-PCIe riser.
  • "echo", Dell R710. I am working on migrating off of/decommissioning this host.
  • "alpha", Dell R720. Recently brought back from the dead. I put a new (to me) external SAS card into it, and it acts as the "head" unit for the disk shelf I recently bought.
  • "foxtrot", Dell R720xd. I love modern-ish servers with >= 12 disks per 2u. I would consider running a rack full of these if I could... forgive the lack of a label, my label maker broke at some point before I acquired this machine.
  • "delta", a "Quantum" something or other, which is really just a white-labeled Supermicro 3u server.
  • Unnamed disk shelf, "NFS04-JBOD1" to its previous owner. Some Supermicro JBOD that does 45 drives in 4u, hooked up to alpha.

Party in the back:

  • You can see the cheap monitor I use for console access.
  • TP-Link EAP650, sitting on top of the rack. Downstairs WAP.
  • Mikrotik CRS328-24P-4S+, rear-facing 1g PoE/access switch. The downstairs WAP hooks into it, as does the one mini-PC I didn't put a 10g card on. It also provides power (but not connectivity) to the upstairs switch. It used to get a lot more use before I went 10g basically everywhere. Bonds 4x SFP+ to uplink to the 10g switch in front.
  • You can see my cable management, which I would describe as "adequate".
  • You can see my (lack of) power distribution and power backup strategy, which I would describe as "I seriously need to buy some PDUs and UPSs".

I opted for a smaller rack as my basement is pretty short.

As far as workloads:

  • alpha and foxtrot (and eventually delta) are the storage hosts running Ubuntu and using gluster. All spinning disks. ~160TiB raw
  • delta currently runs TrueNAS; I'm working on moving all of its storage into gluster as well. ~78TiB raw, with some bays used for SSDs (L2ARC/ZIL) and 3 used in a mirror for "important" data.
  • echo, currently running 1 (Ubuntu) VM in Proxmox. This is where the "important" (frp, Traefik, DNS, etc) workloads run right now.
  • mini-PCs, running Ubuntu and all sorts of random stuff (dockerized), including this Lemmy instance. They mount the gluster storage where necessary, and also have a gluster volume amongst themselves for highly redundant SSD-backed storage.

The gaps in the naming scheme:

  • I don't remember what happened to bravo. It was another R710; pretty sure it died, or I may have given it away, or it may be sitting in a disused corner of my basement.
  • We don't talk about charlie, charlie died long ago. It was a C2100. Terrible hardware. Delta was bought because charlie died.

Networking:

  • The servers are all connected over bonded 2x10g SFP+ DACs to the 10g switch.
  • The 1g switch is connected to the 10g switch with a QSFP+ breakout to a bonded 4x SFP+ DAC.
  • The 10g switch is connected to the router with a QSFP+ breakout to a bonded 4x SFP+ DAC.
  • The router connects to my ISP router (which I sadly can't bypass...) using a 10GBASE-T SFP+ module.
  • The router connects to an upstairs 10g switch (Mikrotik CRS305-1G-4S+) via an SFP28 AOC (for future upgrade possibilities).
  • I used to do a lot of fancy stuff with VLANs and L3 routing and stuff... now it's just a flat L2 network. Sue me.