ragingHungryPanda

joined 3 weeks ago
[–] ragingHungryPanda@piefed.keyboardvagabond.com 20 points 1 day ago (1 children)

He did also notice that the planets didn't move exactly as he predicted and said, "well, God must keep them in place."

I just thought of that professor who got into an argument with ChatGPT because it insisted that there are 3 b's in Blueberry. Claude handled it a lot better, so no easy gotchas there. But at least AIs tend to be quite verbose. I'd probably say something like, "Good bread that doesn't make me feel weird," but I've also just provided training data to an AI by writing this, so whatever.

I can count the letter 'b' in the word "Blueberry" for you.  
Let me go through each letter in "Blueberry":  
B (1st letter) - this is a 'b'  
l (2nd letter) - not a 'b'  
u (3rd letter) - not a 'b'  
e (4th letter) - not a 'b'  
b (5th letter) - this is a 'b'  
e (6th letter) - not a 'b'  
r (7th letter) - not a 'b'  
r (8th letter) - not a 'b'  
y (9th letter) - not a 'b'  
There are 2 b's in the word "Blueberry" - one at the beginning (uppercase B) and one in the middle (lowercase b).  

In addition to the other suggestions of checking the RAM stick, do you have resource limits on your containers? It's generally a good thing to have anyway, but I'd do that after checking the RAM and cooling situation. Check your CPU temps as well.
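For reference, "resource limits" here just means capping CPU/memory per container. A minimal sketch, assuming Docker Compose (service name, image, and values are placeholders; on Kubernetes the equivalent is a resources block on the container spec):

```yaml
services:
  myapp:                  # placeholder service name
    image: myapp:latest   # placeholder image
    deploy:
      resources:
        limits:
          cpus: "1.0"     # cap at one CPU core
          memory: 512M    # hard cap; the container is OOM-killed if it exceeds this
```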

good God some people are good at failing upward!

I recently switched my home setup from DNS to Cloudflare tunnels because the ISP blocked the traffic. My services are exposed to the Internet, so if you only want access by VPN, I've found Tailscale to be easier than WireGuard. If you want external access, you can get a domain name from CF and set up cloudflared on the host device and target the docker service names. Either way, your ports aren't exposed to the Internet.
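Roughly what the cloudflared side looks like, as a sketch with made-up hostnames and services (this assumes cloudflared runs as a container on the same Docker network, so the compose service names resolve; otherwise you'd point it at localhost ports):

```yaml
# /etc/cloudflared/config.yml (sketch)
tunnel: <tunnel-id-or-name>            # from `cloudflared tunnel create`
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  - hostname: jellyfin.example.com     # DNS record managed in Cloudflare
    service: http://jellyfin:8096      # docker service name + container port
  - hostname: files.example.com
    service: http://nextcloud:80
  - service: http_status:404           # required catch-all rule
```

Since cloudflared dials out to Cloudflare, nothing needs to be port-forwarded on the router.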

I formerly used external DNS until the ISP blocked the modem.

They're crawling the web; they don't need to target the fediverse specifically. The crawler will come here regardless, since it will either be programmed to revisit sites that update or recognize them on its own.

Not just mountainous terrain. Mexico City has one that goes over some densely packed neighborhoods. The roads are not good for buses, so the cable cars go over the town and connect to the BRT.

I feel like that job should come with a receding hairline, but I'm thinking of DnD intros.

[–] ragingHungryPanda@piefed.keyboardvagabond.com 3 points 2 weeks ago (5 children)

The ISP blocked my ports and Cloudflare got me around it. I'll accept the compromise ;)

 

Usually I post updates like these on my gotosocial account, but my computer/server is at my parents' house, their modem has been having a moment for the past day and a half, and they're not the best sysadmins. I have more posts and updates that would normally be found on Mastodon, but again - parents' modem, haha.

Anyway, for background: I've been renting a couple of VPS servers out of the Netherlands, running Talos OS and Kubernetes. I'm in the process of standing up some digital-nomad / backpacker-oriented instances at "keyboardvagabond.com", and eventually I'll get a landing page, etc. There's still more work to do before going live, even though the services are running.

The latest bit of work came after a meetup at my job where no one showed up for the official discussion, so we talked about self-hosting. I was strongly encouraged to drop external-dns and DNS routing in favor of Cloudflare's tunnels. I had avoided them because I felt a bit intimidated, but I got the first test pod running in like 15 minutes and then began migrating all of the application endpoints. I still need to seal off the k8s and Talos ports, for which I might use WARP.
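For anyone curious, the in-cluster side is roughly this shape - a minimal sketch assuming a token-based tunnel, with made-up names and namespace, not my exact manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
  namespace: cloudflared            # hypothetical namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cloudflared
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          args: ["tunnel", "--no-autoupdate", "run", "--token", "$(TUNNEL_TOKEN)"]
          env:
            - name: TUNNEL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: cloudflared-token   # hypothetical Secret holding the tunnel token
                  key: token
```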

The adventure part came when I realized that images weren't being pulled on the piefed instance, so I figured something was wrong. I checked k9s and there were about 50 cron jobs for the send queue, all in ImagePullBackOff. When I migrated the Harbor registry, I had just gone to the landing page but hadn't signed in. It took a bit of figuring out, but I had to switch the nginx backend to use HTTPS, port 443, and TLS no-verify, then change Cloudflare to use HTTPS with a host name for the service rather than for a specific pod (the new one is harbor-registry.harbor-registry.svc.cluster.local:443).
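In cloudflared config-file terms, that change boils down to roughly this (hostname is a placeholder; the same HTTPS and no-TLS-verify settings exist in the Zero Trust dashboard if the tunnel is managed there):

```yaml
ingress:
  - hostname: registry.example.com    # placeholder public hostname
    service: https://harbor-registry.harbor-registry.svc.cluster.local:443
    originRequest:
      noTLSVerify: true               # Harbor's in-cluster cert isn't publicly trusted
  - service: http_status:404
```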

Anyway, it's all working now and the jobs have slowly cleaned themselves up, but it's fun seeing that the latest jobs can't be created due to "not enough memory" (crying with sunglasses emoji here). The piefed-worker pod is screaming along at its maximum of 1 CPU core and 60% of its memory limit, so it's all looking good.
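For context, the cap on the worker is just the usual container resources block, roughly like this (everything except the 1-core limit mentioned above is a placeholder; this sits under the piefed-worker container spec):

```yaml
resources:
  requests:
    cpu: 500m        # placeholder request
    memory: 512Mi    # placeholder request
  limits:
    cpu: "1"         # the 1-core ceiling the worker is pinned at
    memory: 1Gi      # placeholder; usage sits around 60% of whatever the limit is
```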

Edit

Even MORE fun in self-hosting: the ISP blocked my ports! Thankfully I had been talking with my manager about Cloudflare tunneling. I just moved my domain names over to cloudflared and everything is back up again. It took about an hour or so to migrate everything.